Algorithms, Volume 15, Issue 11 (November 2022) – 50 articles

Cover Story: Recent advances in the field of artificial neural networks, coupled with the increase in the performance of modern computers, have made it possible to apply artificial intelligence technologies in the field of medical image processing. In our opinion, one of the most important types of medical images are whole slide images (WSIs), as they are used to diagnose oncological diseases. In this paper, we collected a database of 1785 colon WSIs and trained convolutional neural networks (CNNs) to classify several types of colorectal lesions. This development can be used to speed up the work of histopathologists and improve the quality of diagnostics.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view papers in PDF format, click the "PDF Full-text" link and open them with the free Adobe Reader.
29 pages, 3561 KiB  
Review
Taxonomy of Scheduling Problems with Learning and Deterioration Effects
by Yenny Alexandra Paredes-Astudillo, Jairo R. Montoya-Torres and Valérie Botta-Genoulaz
Algorithms 2022, 15(11), 439; https://doi.org/10.3390/a15110439 - 21 Nov 2022
Cited by 4 | Viewed by 1772
Abstract
In traditional scheduling problems, job processing times are considered constant and known in advance. This assumption is, however, a simplification when it comes to hand-intensive real-life production contexts because workers usually induce variability in the job processing times due to several factors such as learning, monotony, fatigue, psychological factors, etc. These effects can decrease or increase the actual processing time when workers execute a job. The academic literature has reported several modeling and resolution approaches to deal with the phenomenon in a variety of configurations. However, there is no comprehensive review of these research outputs to the best of our knowledge. In this paper, we follow a systematic approach to review relevant contributions addressing the scheduling problem with learning and deterioration effects. Modeling approaches for learning and deterioration effects, objective functions, and solution methods employed in the literature are the main topics for the taxonomy proposed in this review. A total of 455 papers from 1999 to 2021 are included and analyzed. Different areas of interest are presented, and some opportunities for future research are identified. Full article
(This article belongs to the Special Issue Optimization Methods in Operations and Supply Chain Management)
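Among the modeling approaches such a taxonomy covers, the position-based learning effect is the most common; a minimal sketch using Biskup's classical model p·r^a (a standard formulation in this literature, not necessarily the one adopted by any particular surveyed paper):

```python
def actual_processing_time(p, r, a):
    """Biskup position-based learning effect: a job with normal
    processing time p scheduled in position r takes p * r**a time,
    where a <= 0 is the learning index (a = 0 means no learning)."""
    return p * r ** a

def makespan(jobs, a):
    """Single-machine makespan when jobs are processed in the given order."""
    return sum(actual_processing_time(p, r, a)
               for r, p in enumerate(jobs, start=1))

# With learning (a < 0), later positions are processed faster, so the
# makespan is smaller than the plain sum of processing times.
times = [4.0, 3.0, 5.0]
print(makespan(times, a=0.0))     # no learning: 12.0
print(makespan(times, a=-0.322))  # ~80% learning curve: < 12.0
```

With a > 0, the same formula models a deterioration effect instead, which is how the two phenomena end up sharing one modeling framework.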
30 pages, 2252 KiB  
Systematic Review
Machine Learning Approaches for Skin Cancer Classification from Dermoscopic Images: A Systematic Review
by Flavia Grignaffini, Francesco Barbuto, Lorenzo Piazzo, Maurizio Troiano, Patrizio Simeoni, Fabio Mangini, Giovanni Pellacani, Carmen Cantisani and Fabrizio Frezza
Algorithms 2022, 15(11), 438; https://doi.org/10.3390/a15110438 - 21 Nov 2022
Cited by 16 | Viewed by 4504
Abstract
Skin cancer (SC) is one of the most prevalent cancers worldwide. Clinical evaluation of skin lesions is necessary to assess the characteristics of the disease; however, it is limited by long timelines and variability in interpretation. As early and accurate diagnosis of SC is crucial to increase patient survival rates, machine-learning (ML) and deep-learning (DL) approaches have been developed to overcome these issues and support dermatologists. We present a systematic literature review of recent research on the use of machine learning to classify skin lesions, with the aim of providing a solid starting point for researchers beginning to work in this area. A search was conducted in several electronic databases by applying inclusion/exclusion filters; only those documents that clearly and completely described the procedures performed and reported the results obtained were selected for this review. Sixty-eight articles were selected, the majority of which use DL approaches, in particular convolutional neural networks (CNNs), while a smaller portion rely on ML techniques or hybrid ML/DL approaches for skin cancer detection and classification. Many ML and DL methods show high performance as classifiers of skin lesions. The promising results obtained to date bode well for the not-too-distant inclusion of these techniques in clinical practice. Full article
11 pages, 304 KiB  
Article
Dynamic SAFFRON: Disease Control Over Time via Group Testing
by Batuhan Arasli and Sennur Ulukus
Algorithms 2022, 15(11), 437; https://doi.org/10.3390/a15110437 - 21 Nov 2022
Viewed by 1422
Abstract
Group testing is an efficient algorithmic approach to the infection identification problem, based on mixing the test samples and testing the mixed samples instead of individually testing each sample. In this paper, we consider the dynamic infection spread model that is based on the discrete SIR model, which assumes the disease to be spread over time via infected and non-isolated individuals. In our system, the main objective is not to minimize the number of required tests to identify every infection, but instead, to utilize the available, given testing capacity T at each time instance to efficiently control the infection spread. We introduce and study a novel performance metric, which we coin as ϵ-disease control time. This metric can be used to measure how fast a given algorithm can control the spread of a disease. We characterize the performance of the dynamic individual testing algorithm and introduce a novel dynamic SAFFRON-based group testing algorithm. We present theoretical results and implement the proposed algorithms to compare their performances. Full article
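As a simple illustration of why pooling saves tests, here is a sketch of classical two-stage (Dorfman) group testing; this is the baseline idea behind such schemes, not the SAFFRON-based algorithm of the paper:

```python
def dorfman_tests(statuses, pool_size):
    """Two-stage Dorfman group testing on a list of booleans
    (True = infected). Stage 1 tests each pool of `pool_size`
    samples; stage 2 individually retests members of positive
    pools. Returns (identified_infected, number_of_tests)."""
    infected, tests = [], 0
    for start in range(0, len(statuses), pool_size):
        pool = statuses[start:start + pool_size]
        tests += 1                      # one pooled test for the group
        if any(pool):                   # positive pool: retest individually
            tests += len(pool)
            infected += [start + i for i, s in enumerate(pool) if s]
    return infected, tests

# 100 samples, 2 infected: far fewer than 100 individual tests.
statuses = [False] * 100
statuses[7] = statuses[42] = True
print(dorfman_tests(statuses, pool_size=10))  # → ([7, 42], 30)
```

Dorfman pooling spends 30 tests here instead of 100; schemes such as SAFFRON refine this idea with structured pool designs, which is what the paper adapts to a time-evolving infection model under a fixed per-step testing budget T.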
8 pages, 254 KiB  
Article
Higher-Order Curvatures of Plane and Space Parametrized Curves
by Mircea Crasmareanu
Algorithms 2022, 15(11), 436; https://doi.org/10.3390/a15110436 - 18 Nov 2022
Cited by 1 | Viewed by 1583
Abstract
We start by introducing and studying two sequences of curvatures provided by the higher-order derivatives of the usual Frenet equation of a given plane curve C. These curvatures are expressed by a recurrence starting with the pair (0,k) where k is the classical curvature function of C. Moreover, for the space curves, we succeed in introducing three recurrent sequences of curvatures starting with the triple (k,0,τ). Some kinds of helices of a higher order are defined. Full article
(This article belongs to the Special Issue Machine Learning in Computational Geometry)
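For background, the classical Frenet equations whose higher-order derivatives generate these curvature sequences are, in arc-length parametrization (standard formulas, not the paper's new recurrences):

```latex
% Plane curve with curvature k(s):
T'(s) = k(s)\,N(s), \qquad N'(s) = -k(s)\,T(s).
% Space curve (Frenet--Serret) with curvature k and torsion \tau:
T' = k\,N, \qquad N' = -k\,T + \tau\,B, \qquad B' = -\tau\,N.
```

Differentiating these systems repeatedly is what produces the recurrences starting from (0, k) in the plane and (k, 0, τ) in space.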
25 pages, 642 KiB  
Article
Generalizing the Alpha-Divergences and the Oriented Kullback–Leibler Divergences with Quasi-Arithmetic Means
by Frank Nielsen
Algorithms 2022, 15(11), 435; https://doi.org/10.3390/a15110435 - 17 Nov 2022
Viewed by 2041
Abstract
The family of α-divergences including the oriented forward and reverse Kullback–Leibler divergences is often used in signal processing, pattern recognition, and machine learning, among others. Choosing a suitable α-divergence can either be done beforehand according to some prior knowledge of the application domains or directly learned from data sets. In this work, we generalize the α-divergences using a pair of strictly comparable weighted means. Our generalization allows us to obtain in the limit case α→1 the 1-divergence, which provides a generalization of the forward Kullback–Leibler divergence, and in the limit case α→0, the 0-divergence, which corresponds to a generalization of the reverse Kullback–Leibler divergence. We then analyze the condition for a pair of weighted quasi-arithmetic means to be strictly comparable and describe the family of quasi-arithmetic α-divergences including its subfamily of power homogeneous α-divergences. In particular, we study the generalized quasi-arithmetic 1-divergences and 0-divergences and show that these counterpart generalizations of the oriented Kullback–Leibler divergences can be rewritten as equivalent conformal Bregman divergences using strictly monotone embeddings. Finally, we discuss the applications of these novel divergences to k-means clustering by studying the robustness property of the centroids. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition)
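For context, the standard α-divergence family that these quasi-arithmetic constructions generalize is (in Amari's parametrization, a textbook definition rather than the paper's generalized form):

```latex
D_\alpha(p:q) \;=\; \frac{1}{\alpha(1-\alpha)}
  \left( 1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha} \,\mathrm{d}\mu(x) \right),
  \qquad \alpha \in \mathbb{R} \setminus \{0, 1\},
```

with the oriented Kullback–Leibler divergences recovered in the limits:

```latex
\lim_{\alpha \to 1} D_\alpha(p:q) = \mathrm{KL}(p:q)
  = \int p \log\frac{p}{q}\,\mathrm{d}\mu,
\qquad
\lim_{\alpha \to 0} D_\alpha(p:q) = \mathrm{KL}(q:p).
```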
20 pages, 1229 KiB  
Article
Insights into Multi-Model Federated Learning: An Advanced Approach for Air Quality Index Forecasting
by Duy-Dong Le, Anh-Khoa Tran, Minh-Son Dao, Kieu-Chinh Nguyen-Ly, Hoang-Son Le, Xuan-Dao Nguyen-Thi, Thanh-Qui Pham, Van-Luong Nguyen and Bach-Yen Nguyen-Thi
Algorithms 2022, 15(11), 434; https://doi.org/10.3390/a15110434 - 17 Nov 2022
Cited by 6 | Viewed by 2788
Abstract
The air quality index (AQI) forecast in big cities is an exciting study area in smart cities and healthcare on the Internet of Things. In recent years, a large number of empirical, academic, and review papers using machine learning (ML) for air quality analysis have been published. However, most of those studies focused on traditional centralized processing on a single machine, and there have been few surveys of federated learning (FL) in this field. This overview aims to fill this gap and provide newcomers with a broader perspective to inform future research on this topic, especially for the multi-model approach. In this survey, we review the work that previous scholars have conducted on AQI forecasting, covering both traditional ML approaches and FL mechanisms. Our objective is to comprehend previous research on AQI prediction, including the methods, models, data sources, achievements, challenges, and solutions applied in the past. We also convey a new path of using multi-model FL, which has piqued the computer science community’s interest recently. Full article
(This article belongs to the Special Issue Machine Learning Algorithms in Prediction Model)
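The aggregation step at the heart of most FL schemes surveyed here is FedAvg-style weighted averaging; a minimal sketch of that single step, with models represented as flat parameter lists (a generic illustration, not the survey's multi-model architecture):

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine clients' model parameters into a
    global model, weighting each client by its local sample count, so
    that raw sensor data never leaves the client."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(n_params)]

# Two monitoring stations: one with 100 local samples, one with 300.
global_model = fed_avg([[1.0, 2.0], [5.0, 6.0]], [100, 300])
print(global_model)  # → [4.0, 5.0]
```

Each round, the server broadcasts the global model back to the stations, which continue training locally; only parameters, never AQI measurements, are exchanged.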
17 pages, 326 KiB  
Article
Unrelated Parallel Machine Scheduling with Job and Machine Acceptance and Renewable Resource Allocation
by Alexandru-Liviu Olteanu, Marc Sevaux and Mohsen Ziaee
Algorithms 2022, 15(11), 433; https://doi.org/10.3390/a15110433 - 17 Nov 2022
Cited by 1 | Viewed by 1337
Abstract
In this paper, an unrelated parallel machine scheduling problem with job (product) and machine acceptance and renewable resource constraints was considered. The main idea of this research was to establish a production facility without (or with minimum) investment in machinery, equipment, and location. This model applies to many real-world problems. The objective was to maximize the net profit, that is, the total revenue minus the total cost, including fixed costs of jobs, job transportation costs, renting costs of machines, renting costs of resources, and transportation costs of resources. A mixed-integer linear programming (MILP) model and several heuristics (greedy, GRASP, and simulated annealing) are presented to solve the problem. Full article
(This article belongs to the Special Issue Algorithms for Real-World Complex Engineering Optimization Problems)
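A toy sketch of the greedy acceptance idea: accept a job, and rent the machine it needs, only when its marginal net profit is positive. This is a heavily simplified illustration with made-up data, ignoring resources and sequencing; it is not the paper's MILP or metaheuristics:

```python
def greedy_accept(jobs, machine_rent):
    """jobs: list of (revenue, {machine: processing_cost}).
    A machine's rent is paid once, the first time it is used; each job
    is accepted on its most profitable machine only if the marginal
    profit (revenue - cost - rent, if any) is positive."""
    rented, profit, accepted = set(), 0.0, []
    for j, (revenue, costs) in enumerate(jobs):
        best = None
        for m, c in costs.items():
            marginal = revenue - c - (0.0 if m in rented else machine_rent[m])
            if best is None or marginal > best[0]:
                best = (marginal, m)
        if best is not None and best[0] > 0:
            profit += best[0]
            rented.add(best[1])
            accepted.append(j)
    return accepted, profit

jobs = [(10.0, {"m1": 2.0}), (4.0, {"m1": 1.0}), (3.0, {"m2": 1.0})]
rent = {"m1": 5.0, "m2": 4.0}
print(greedy_accept(jobs, rent))  # → ([0, 1], 6.0)
```

Job 2 is rejected because its revenue cannot cover renting a second machine; this acceptance/rejection trade-off is exactly what makes the problem harder than plain unrelated-machine scheduling.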
40 pages, 6231 KiB  
Article
Automatic Fault Detection and Diagnosis in Cellular Networks and Beyond 5G: Intelligent Network Management
by Arun Kumar Sangaiah, Samira Rezaei, Amir Javadpour, Farimasadat Miri, Weizhe Zhang and Desheng Wang
Algorithms 2022, 15(11), 432; https://doi.org/10.3390/a15110432 - 17 Nov 2022
Cited by 8 | Viewed by 2719
Abstract
Handling faults in a running cellular network can impair the performance and dissatisfy the end users. It is important to design an automatic self-healing procedure to not only detect the active faults, but also to diagnose them automatically. Although fault detection has been well studied in the literature, fewer studies have targeted the more complicated task of diagnosis. Our presented method aims to tackle fault detection and diagnosis using two sets of data collected by the network: performance support system data and drive test data. Although performance support system data are collected automatically by the network, drive test data are collected manually in three call modes: short, long, and idle. The short call can identify faults in a call setup, the long call is designed to identify handover failures and call interruption, and, finally, the idle mode is designed to understand the characteristics of the standard signal in the network. We have applied unsupervised learning, along with various classification algorithms, on the performance support system data. Congestion and failures in TCH assignments are a few examples of the faults detected and diagnosed with our method. In addition, we present a framework to identify the need for handovers. The Silhouette coefficient is used to evaluate the quality of the unsupervised learning approach. We achieved an accuracy of 96.86% with the dynamic neural network method. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
27 pages, 4121 KiB  
Article
Ensembles of Random SHAPs
by Lev Utkin and Andrei Konstantinov
Algorithms 2022, 15(11), 431; https://doi.org/10.3390/a15110431 - 17 Nov 2022
Cited by 7 | Viewed by 2084
Abstract
The ensemble-based modifications of the well-known SHapley Additive exPlanations (SHAP) method for the local explanation of a black-box model are proposed. The modifications aim to simplify the SHAP which is computationally expensive when there is a large number of features. The main idea behind the proposed modifications is to approximate the SHAP by an ensemble of SHAPs with a smaller number of features. According to the first modification, called the ER-SHAP, several features are randomly selected many times from the feature set, and the Shapley values for the features are computed by means of “small” SHAPs. The explanation results are averaged to obtain the final Shapley values. According to the second modification, called the ERW-SHAP, several points are generated around the explained instance for diversity purposes, and the results of their explanation are combined with weights depending on the distances between the points and the explained instance. The third modification, called the ER-SHAP-RF, uses the random forest for a preliminary explanation of the instances and determines a feature probability distribution which is applied to the selection of the features in the ensemble-based procedure of the ER-SHAP. Many numerical experiments illustrating the proposed modifications demonstrate their efficiency and properties for a local explanation. Full article
(This article belongs to the Special Issue Ensemble Algorithms and/or Explainability)
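The idea of approximating Shapley values by sampling can be sketched with plain Monte Carlo permutation sampling, a simplified cousin of the ensemble schemes above (not the ER-SHAP algorithm itself, which additionally restricts each evaluation to small random feature subsets):

```python
import random

def shapley_mc(f, x, baseline, n_perm=200, seed=0):
    """Monte Carlo estimate of Shapley values: average each feature's
    marginal contribution to f over random feature orderings, walking
    from the baseline point to x one feature at a time."""
    rng = random.Random(seed)
    d = len(x)
    phi = [0.0] * d
    for _ in range(n_perm):
        order = list(range(d))
        rng.shuffle(order)
        z = list(baseline)
        prev = f(z)
        for i in order:
            z[i] = x[i]        # switch feature i from baseline to x
            cur = f(z)
            phi[i] += cur - prev
            prev = cur
    return [v / n_perm for v in phi]

# For a linear model the estimate matches the exact Shapley values,
# w_i * (x_i - baseline_i), for every permutation.
w = [2.0, -1.0, 0.5]
f = lambda z: sum(wi * zi for wi, zi in zip(w, z))
phi = shapley_mc(f, [1.0, 2.0, 3.0], [0.0, 0.0, 0.0])
print(phi)  # → [2.0, -2.0, 1.5]
```

The cost here is n_perm × d model evaluations; ER-SHAP attacks the same bottleneck by running many "small" SHAPs on random feature subsets and averaging their attributions.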
16 pages, 1471 KiB  
Article
Topic Scaling: A Joint Document Scaling–Topic Model Approach to Learn Time-Specific Topics
by Sami Diaf and Ulrich Fritsche
Algorithms 2022, 15(11), 430; https://doi.org/10.3390/a15110430 - 16 Nov 2022
Cited by 1 | Viewed by 1602
Abstract
This paper proposes a new methodology to study sequential corpora by implementing a two-stage algorithm that learns time-based topics with respect to a scale of document positions and introduces the concept of Topic Scaling, which ranks learned topics within the same document scale. The first stage ranks documents using Wordfish, a Poisson-based document-scaling method, to estimate document positions that serve, in the second stage, as a dependent variable to learn relevant topics via a supervised Latent Dirichlet Allocation. This novelty brings two innovations in text mining as it explains document positions, whose scale is a latent variable, and ranks the inferred topics on the document scale to match their occurrences within the corpus and track their evolution. Tested on the U.S. State of the Union two-party addresses, this inductive approach reveals that each party dominates one end of the learned scale with interchangeable transitions that follow the parties’ term of office, while it shows for the corpus of German economic forecasting reports a shift in the narrative style adopted by economic institutions following the 2008 financial crisis. Besides a demonstrated high accuracy in predicting in-sample document positions from topic scores, this method unfolds further hidden topics that differentiate similar documents by increasing the number of learned topics to expand potential nested hierarchical topic structures. Compared to other popular topic models, Topic Scaling learns topics with respect to document similarities without specifying a time frequency to learn topic evolution, thus capturing broader topic patterns than dynamic topic models and yielding more interpretable outputs than a plain Latent Dirichlet Allocation. Full article
(This article belongs to the Special Issue Algorithms for Non-negative Matrix Factorisation)
22 pages, 4021 KiB  
Article
An Auto-Encoder with Genetic Algorithm for High Dimensional Data: Towards Accurate and Interpretable Outlier Detection
by Jiamu Li, Ji Zhang, Mohamed Jaward Bah, Jian Wang, Youwen Zhu, Gaoming Yang, Lingling Li and Kexin Zhang
Algorithms 2022, 15(11), 429; https://doi.org/10.3390/a15110429 - 15 Nov 2022
Cited by 3 | Viewed by 2860
Abstract
When dealing with high-dimensional data, such as in biometric, e-commerce, or industrial applications, it is extremely hard to capture the abnormalities in full space due to the curse of dimensionality. Furthermore, it is becoming increasingly complicated but essential to provide interpretations for outlier detection results in high-dimensional space as a consequence of the large number of features. To alleviate these issues, we propose a new model based on a Variational AutoEncoder and Genetic Algorithm (VAEGA) for detecting outliers in subspaces of high-dimensional data. The proposed model employs a neural network to create a probabilistic dimensionality reduction variational autoencoder (VAE) that applies its low-dimensional hidden space to characterize the high-dimensional inputs. Then, the hidden vector is sampled randomly from the hidden space to reconstruct the data so that it closely matches the input data. The reconstruction error is then computed to determine an outlier score, and samples exceeding the threshold are tentatively identified as outliers. In the second step, a genetic algorithm (GA) is used as a basis for examining and analyzing the abnormal subspace of the outlier set obtained by the VAE layer. After encoding the outlier dataset’s subspaces, the degree of anomaly for the detected subspaces is calculated using the redefined fitness function. Finally, the abnormal subspace is calculated for the detected point by selecting the subspace with the highest degree of anomaly. The clustering of abnormal subspaces helps filter outliers that are mislabeled (false positives), and the VAE layer adjusts the network weights based on the false positives. When compared with other methods on five public datasets, the VAEGA outlier detection model yields highly interpretable results and outperforms, or is competitive with, contemporary methods. Full article
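The reconstruction-error scoring of the first stage can be illustrated with a linear stand-in: project each point onto the data's leading principal direction and score it by its reconstruction error. This uses PCA in place of the VAE encoder/decoder and made-up 2-D data; it is a sketch of the scoring idea only, not the VAEGA model:

```python
def reconstruction_scores(X):
    """Score each 2-D point by its distance to the best-fit line
    through the data: a linear analogue of a VAE's encode-decode
    round trip (project to a 1-D latent code, reconstruct, measure
    the error)."""
    n = len(X)
    mx = sum(p[0] for p in X) / n
    my = sum(p[1] for p in X) / n
    # covariance entries of the centered data
    sxx = sum((p[0] - mx) ** 2 for p in X)
    syy = sum((p[1] - my) ** 2 for p in X)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in X)
    # leading eigenvector of the 2x2 covariance via power iteration
    vx, vy = 1.0, 0.0
    for _ in range(100):
        vx, vy = sxx * vx + sxy * vy, sxy * vx + syy * vy
        norm = (vx * vx + vy * vy) ** 0.5
        vx, vy = vx / norm, vy / norm
    scores = []
    for px, py in X:
        cx, cy = px - mx, py - my
        t = cx * vx + cy * vy            # 1-D latent code
        rx, ry = t * vx, t * vy          # reconstruction
        scores.append(((cx - rx) ** 2 + (cy - ry) ** 2) ** 0.5)
    return scores

# Nine points near the line y = x, plus one clear outlier at (0, 9).
X = [(float(i), i + 0.01 * (-1) ** i) for i in range(9)] + [(0.0, 9.0)]
scores = reconstruction_scores(X)
print(scores.index(max(scores)))  # → 9 (the outlier has the largest error)
```

Points that the low-dimensional code cannot reconstruct well get high scores; the VAEGA then hands these candidates to the GA stage to find the subspace responsible for each anomaly.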
26 pages, 855 KiB  
Article
Applying Artificial Intelligence in Cryptocurrency Markets: A Survey
by Rasoul Amirzadeh, Asef Nazari and Dhananjay Thiruvady
Algorithms 2022, 15(11), 428; https://doi.org/10.3390/a15110428 - 14 Nov 2022
Cited by 9 | Viewed by 10140
Abstract
The total capital in cryptocurrency markets is around two trillion dollars in 2022, which is almost the same as Apple’s market capitalisation at the same time. Increasingly, cryptocurrencies have become established in financial markets with an enormous number of transactions and trades happening every day. Similar to other financial systems, price prediction is one of the main challenges in cryptocurrency trading. Therefore, the application of artificial intelligence, as one of the tools of prediction, has emerged as a recently popular subject of investigation in the cryptocurrency domain. Since machine learning models, as opposed to traditional financial models, demonstrate satisfactory performance in quantitative finance, they seem ideal for coping with the price prediction problem in the complex and volatile cryptocurrency market. There have been several studies that have focused on applying machine learning for price and movement prediction and portfolio management in cryptocurrency markets, though these methods and models are in their early stages. This survey paper aims to review the current research trends in applications of supervised and reinforcement learning models in cryptocurrency price prediction. This study also highlights potential research gaps and possible areas for improvement. In addition, it emphasises potential challenges and research directions that will be of interest in the artificial intelligence and machine learning communities focusing on cryptocurrencies. Full article
(This article belongs to the Special Issue Machine Learning Algorithms in Prediction Model)
16 pages, 1710 KiB  
Article
Overlapping Grid-Based Optimized Single-Step Hybrid Block Method for Solving First-Order Initial Value Problems
by Sandile Motsa
Algorithms 2022, 15(11), 427; https://doi.org/10.3390/a15110427 - 14 Nov 2022
Cited by 3 | Viewed by 1539
Abstract
This study presents a new variant of the hybrid block methods (HBMs) for solving initial value problems (IVPs). The overlapping hybrid block technique is developed by changing each integrating block of the HBM to incorporate the penultimate intra-step point of the previous block. In this paper, we present preliminary results obtained by applying the overlapping HBM to IVPs of the first order, utilizing equally spaced grid points and optimal points that minimize the local truncation errors of the main formulas at the intersection of each integration block. It is proven that the novel method reduces the local truncation error by at least one order of the integration step size, O(h). In order to demonstrate the superiority of the suggested method, numerical experimentation results were compared to the corresponding HBM based on the standard non-overlapping grid. It is established that the proposed method is more accurate than HBM versions of the same order that have been published in the literature. Full article
2 pages, 163 KiB  
Editorial
Special Issue “Selected Algorithmic Papers From CSR 2020”
by Henning Fernau
Algorithms 2022, 15(11), 426; https://doi.org/10.3390/a15110426 - 14 Nov 2022
Cited by 1 | Viewed by 1051
Abstract
The 15th International Computer Science Symposium in Russia (CSR 2020) was organized by the Ural Federal University located in Ekaterinburg, Russian Federation [...] Full article
(This article belongs to the Special Issue Selected Algorithmic Papers From CSR 2020)
26 pages, 9148 KiB  
Article
Consistency and Convergence Properties of 20 Recent and Old Numerical Schemes for the Diffusion Equation
by Ádám Nagy, János Majár and Endre Kovács
Algorithms 2022, 15(11), 425; https://doi.org/10.3390/a15110425 - 10 Nov 2022
Cited by 5 | Viewed by 1706
Abstract
We collected 20 explicit and stable numerical algorithms for the one-dimensional transient diffusion equation and analytically examined their consistency and convergence properties. Most of the methods used have been constructed recently and their truncation errors are given in this paper for the first time. The truncation errors contain the ratio of the time and space steps; thus, the algorithms are conditionally consistent. We performed six numerical tests to compare their performance and try to explain the observed accuracies based on the truncation errors. In one of the experiments, the diffusion coefficient is assumed to vary strongly in time, and a nontrivial analytical solution containing the Kummer function was successfully reproduced. Full article
(This article belongs to the Special Issue Computational Methods and Optimization for Numerical Analysis)
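The classical explicit forward-time central-space (FTCS) scheme is the usual reference point for such comparisons; a minimal sketch, shown for textbook context rather than as one of the paper's 20 methods:

```python
def ftcs_step(u, r):
    """One explicit Euler (FTCS) step for u_t = D * u_xx with zero
    Dirichlet boundaries; r = D*dt/dx**2 is the mesh ratio that also
    appears in the truncation errors (conditional consistency)."""
    return [0.0] + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                    for i in range(1, len(u) - 1)] + [0.0]

# A unit spike spreads out and decays for a stable ratio r <= 1/2,
# since each new value is then a convex combination of neighbors.
u = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(10):
    u = ftcs_step(u, r=0.25)
print(max(u) < 1.0 and all(v >= 0.0 for v in u))  # → True
```

For r > 1/2 this scheme becomes unstable, which is precisely the limitation the unconditionally stable explicit algorithms studied in the paper are designed to avoid.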
16 pages, 485 KiB  
Article
Hybrid Harmony Search for Stochastic Scheduling of Chemotherapy Outpatient Appointments
by Roberto Rosario Corsini, Antonio Costa, Sergio Fichera and Vincenzo Parrinello
Algorithms 2022, 15(11), 424; https://doi.org/10.3390/a15110424 - 10 Nov 2022
Cited by 4 | Viewed by 1574
Abstract
This research deals with the same-day chemotherapy outpatient scheduling problem that is recognized as a leading strategy to pursue the objective of reducing patient waiting time. Inspired by a real-world context and different from the other studies, we modeled a multi-stage chemotherapy ward in which the pharmacy is located away from the treatment area and drugs are delivered in batches. Processes in oncology wards are characterized by several sources of uncertainty that increase the complexity of the problem; thus, a stochastic approach was preferred to study the outpatient scheduling problem. To generate effective appointment schedules, we moved in two directions. First, we adopted a late-start scheduling strategy to reduce the idle times within and among the different stages, namely medical consultation, drug preparation and infusion. Then, since the problem is NP-hard in the strong sense, we developed a hybrid harmony search metaheuristic whose effectiveness was proved through an extended numerical analysis involving another optimization technique from the relevant literature. The outcomes from the numerical experiments confirmed the efficacy of the proposed scheduling model and the hybrid metaheuristic algorithm as well. Full article
(This article belongs to the Collection Feature Paper in Metaheuristic Algorithms and Applications)
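The canonical harmony search loop that the paper hybridizes can be sketched as follows, here minimizing a toy continuous function rather than the stochastic scheduling objective (parameter names hms, hmcr, and par follow the standard metaheuristic, not the paper's tuned values):

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    """Canonical harmony search: keep `hms` candidate solutions in
    memory; compose each new harmony per dimension from memory (with
    probability hmcr), pitch-adjust it (probability par), or draw it
    uniformly at random; replace the worst stored solution whenever
    the new harmony improves on it."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                v = rng.choice(memory)[d]
                if rng.random() < par:              # pitch adjustment
                    v += rng.uniform(-1.0, 1.0) * 0.05 * (hi - lo)
                v = min(max(v, lo), hi)
            else:
                v = rng.uniform(lo, hi)
            new.append(v)
        worst = max(range(hms), key=lambda k: f(memory[k]))
        if f(new) < f(memory[worst]):
            memory[worst] = new
    return min(memory, key=f)

# Minimize the sphere function over [-5, 5]^2.
sphere = lambda x: sum(v * v for v in x)
best = harmony_search(sphere, [(-5.0, 5.0)] * 2)
print(sphere(best))  # a small value near the optimum at the origin
```

For the scheduling problem, f would instead map an appointment sequence to its expected waiting time under the stochastic service and drug-delivery durations, typically estimated by simulation.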
15 pages, 5714 KiB  
Article
Leverage Boosting and Transformer on Text-Image Matching for Cheap Fakes Detection
by Tuan-Vinh La, Minh-Son Dao, Duy-Dong Le, Kim-Phung Thai, Quoc-Hung Nguyen and Thuy-Kieu Phan-Thi
Algorithms 2022, 15(11), 423; https://doi.org/10.3390/a15110423 - 10 Nov 2022
Cited by 5 | Viewed by 2230
Abstract
The explosive growth of the social media community has increased many kinds of misinformation and is attracting tremendous attention from the research community. One of the most prevalent forms of misleading news is the cheapfake. Cheapfakes use non-AI techniques, such as pairing unaltered images with false contextual news, which makes them easy and “cheap” to create and therefore abundant on social media. Moreover, the development of deep learning has opened up many research domains relevant to news, such as fake news detection, rumour detection, fact-checking, and verification of claimed images. Nevertheless, despite the impact and harmfulness of cheapfakes for the social community and the real world, there is little research on detecting cheapfakes in the computer science domain. It is challenging to detect misused/false/out-of-context pairs of images and captions, even with human effort, because of the complex correlation between the attached image and the veracity of the caption content. Existing research focuses mostly on training and evaluating on a given dataset, which limits proposals in terms of categories, semantics, and situations to the characteristics of that dataset. In this paper, to address these issues, we leverage textual semantic understanding from a large corpus and integrate it with different combinations of text-image matching and image captioning methods via an ANN/Transformer boosting schema to classify a triple of (image, caption1, caption2) into OOC (out-of-context) and NOOC (not out-of-context) labels. We customized these combinations according to various exceptional cases that we observed during data analysis. We evaluate our approach using the dataset and evaluation metrics provided by the COSMOS baseline. Compared to other methods, including the baseline, our method achieves the highest Accuracy, Recall, and F1 scores. Full article
(This article belongs to the Special Issue Deep Learning Architecture and Applications)

31 pages, 2897 KiB  
Article
A Systematic Approach to the Management of Military Human Resources through the ELECTRE-MOr Multicriteria Method
by Igor Pinheiro de Araújo Costa, Adilson Vilarinho Terra, Miguel Ângelo Lellis Moreira, Maria Teresa Pereira, Luiz Paulo Lopes Fávero, Marcos dos Santos and Carlos Francisco Simões Gomes
Algorithms 2022, 15(11), 422; https://doi.org/10.3390/a15110422 - 09 Nov 2022
Cited by 11 | Viewed by 1942
Abstract
Personnel selection is increasingly proving to be an essential factor for the success of organizations. These problems almost universally involve multiple conflicting objectives, uncertainties, costs, and benefits in decision-making. In this context, personnel assessment problems, which include several candidates as alternatives along with several complex evaluation criteria, can be solved by applying Multicriteria Decision Making (MCDM) methods. Uncertainty and subjectivity characterize the choice of personnel for missions or promotions at the military level. In this paper, we evaluated 30 Brazilian Navy officers in light of four criteria and 34 subcriteria. To support the decision-making process regarding the promotion of officers, we applied the ELECTRE-MOr MCDM method. In the proposed modeling, we categorized the alternatives into three classes: Class A (promotion by merit), Class B (promotion by seniority), and Class C (not promoted). As a result, the method assigned 20% of the evaluated officers to Class A, 53.3% to Class B, and 26.7% to Class C. In addition, we presented a sensitivity analysis procedure based on variation of the cut-off level λ, allowing decision-making under more flexible or more rigorous scenarios at the discretion of the Naval High Administration. This work brings a valuable contribution to academia and society, since it applies a state-of-the-art MCDM method to help solve a real problem. Full article

25 pages, 620 KiB  
Article
Personalized Federated Multi-Task Learning over Wireless Fading Channels
by Matin Mortaheb, Cemil Vahapoglu and Sennur Ulukus
Algorithms 2022, 15(11), 421; https://doi.org/10.3390/a15110421 - 09 Nov 2022
Cited by 4 | Viewed by 2267
Abstract
Multi-task learning (MTL) is a paradigm for learning multiple tasks simultaneously through a shared network, on top of which a distinct header network is tailored and fine-tuned for each task. Personalized federated learning (PFL) can be achieved through MTL in the context of federated learning (FL), where tasks are distributed across clients; this is referred to as personalized federated MTL (PF-MTL). Statistical heterogeneity, caused by differences in task complexity across clients and the non-independent and identically distributed (non-i.i.d.) characteristics of local datasets, degrades system performance. To overcome this degradation, we propose FedGradNorm, a distributed dynamic weighting algorithm that balances learning speeds across tasks by normalizing the corresponding gradient norms in PF-MTL. We prove an exponential convergence rate for FedGradNorm. Further, we propose HOTA-FedGradNorm, which combines over-the-air (OTA) aggregation with FedGradNorm in a hierarchical FL (HFL) setting. HOTA-FedGradNorm is designed for efficient communication between the parameter server (PS) and clients in the power- and bandwidth-limited regime. We conduct experiments with both FedGradNorm and HOTA-FedGradNorm using the multi-task facial landmark (MTFL) and wireless communication system (RadComDynamic) datasets. The results indicate that both frameworks achieve faster training than equal-weighting strategies. In addition, FedGradNorm and HOTA-FedGradNorm compensate for imbalanced datasets across clients and adverse channel effects. Full article
(This article belongs to the Special Issue Gradient Methods for Optimization)
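The weighting idea behind FedGradNorm can be sketched in the spirit of GradNorm: each task weight is nudged so that its gradient norm tracks a common target scaled by the task's relative inverse training rate. The single-step update below is a simplified illustration; the paper's federated, hierarchical, and over-the-air details are omitted, and the hyperparameters are assumptions, not values from the paper.

```python
import numpy as np

def gradnorm_weights(grad_norms, loss_ratios, weights, alpha=1.5, lr=0.025):
    """One GradNorm-style task-weight update (illustrative sketch only).

    grad_norms  : L2 norm of each task's weighted gradient w.r.t. shared layers
    loss_ratios : L_i(t) / L_i(0), a proxy for each task's inverse training rate
    """
    g = np.asarray(grad_norms, dtype=float)
    r = np.asarray(loss_ratios, dtype=float)
    r = r / r.mean()                      # relative inverse training rate
    target = g.mean() * r ** alpha        # desired gradient norm per task
    # descend on sum_i |g_i - target_i|, treating the target as constant
    weights = weights - lr * np.sign(g - target) * g
    weights = np.clip(weights, 1e-3, None)
    return len(weights) * weights / weights.sum()  # renormalize to sum to n

# the task whose gradient norm exceeds the common target is down-weighted
w = gradnorm_weights([2.0, 1.0], [1.0, 1.0], np.ones(2))
```

In a training loop this update would run once per round, after measuring the per-task gradient norms on the shared backbone.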

30 pages, 1134 KiB  
Article
k-Pareto Optimality-Based Sorting with Maximization of Choice and Its Application to Genetic Optimization
by Jean Ruppert, Marharyta Aleksandrova and Thomas Engel
Algorithms 2022, 15(11), 420; https://doi.org/10.3390/a15110420 - 08 Nov 2022
Viewed by 2235
Abstract
Deterioration of the searchability of Pareto dominance-based, many-objective evolutionary optimization algorithms is a well-known problem. Alternative solutions, such as scalarization-based and indicator-based approaches, have been proposed in the literature. However, Pareto dominance-based algorithms are still widely used. In this paper, we propose to redefine the calculation of Pareto dominance. Instead of assigning solutions to non-dominated fronts, they are ranked according to the measure of dominating solutions, referred to as k-Pareto optimality. In the case of probability measures, this redefinition results in an elegant and fast approximate procedure. Through experimental results on the many-objective 0/1 knapsack problem, we demonstrate the advantages of the proposed approach: (1) the approximate calculation procedure is much faster than standard sorting by Pareto dominance; (2) it allows higher hypervolume values to be achieved for both multi-objective (two objectives) and many-objective (25 objectives) optimization; (3) in the case of many-objective optimization, the increased ability to differentiate between solutions results in better performance compared to NSGA-II and NSGA-III. Apart from the numerical improvements, the probabilistic procedure can be considered a linear extension of multidimensional topological sorting. It produces almost no ties and, as opposed to other popular linear extensions, has an intuitive interpretation. Full article
(This article belongs to the Collection Feature Paper in Metaheuristic Algorithms and Applications)
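The core of the redefinition, ranking each solution by how many solutions dominate it rather than by non-dominated front number, can be sketched exactly in a few lines; the paper's fast probabilistic approximation of this count is not reproduced here.

```python
import numpy as np

def dominates(a, b):
    """True if solution a Pareto-dominates solution b (maximization)."""
    return bool(np.all(a >= b) and np.any(a > b))

def k_pareto_rank(points):
    """Rank each solution by the number of solutions dominating it.

    Rank 0 recovers the classical non-dominated set; higher ranks
    differentiate dominated solutions more finely than front numbers do.
    """
    n = len(points)
    ranks = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j and dominates(points[j], points[i]):
                ranks[i] += 1
    return ranks

pts = np.array([[4, 1], [3, 3], [1, 4], [2, 2], [1, 1]])
ranks = k_pareto_rank(pts)   # first three points are non-dominated (rank 0)
```

The exact count costs O(n²) comparisons, which is what motivates the paper's approximate probabilistic procedure for large populations.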

28 pages, 699 KiB  
Article
Recent Developments in Low-Power AI Accelerators: A Survey
by Christoffer Åleskog, Håkan Grahn and Anton Borg
Algorithms 2022, 15(11), 419; https://doi.org/10.3390/a15110419 - 08 Nov 2022
Cited by 3 | Viewed by 4814
Abstract
As machine learning and AI continue to develop rapidly, and with the ever-closer end of Moore’s law, new avenues and novel ideas in architecture design are being created and utilized. One avenue is accelerating AI as close to the user as possible, i.e., at the edge, to reduce latency and increase performance. To this end, researchers have developed low-power AI accelerators designed specifically to accelerate machine learning and AI on edge devices. In this paper, we present an overview of low-power AI accelerators published between 2019 and 2022, defined here by their acceleration target and power consumption. In total, 79 low-power AI accelerators are presented and discussed based on five criteria: (i) power, performance, and power efficiency; (ii) acceleration targets; (iii) arithmetic precision; (iv) neuromorphic accelerators; and (v) industry vs. academic accelerators. CNNs and DNNs are the most popular accelerator targets, while Transformers and SNNs are on the rise. Full article
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)

15 pages, 498 KiB  
Review
A Survey of Intellectual Property Rights Protection in Big Data Applications
by Rafik Hamza and Hilmil Pradana
Algorithms 2022, 15(11), 418; https://doi.org/10.3390/a15110418 - 08 Nov 2022
Cited by 3 | Viewed by 7544
Abstract
Big Data applications have the potential to transform any digital business platform by enabling the analysis of vast amounts of data. However, the biggest problem with Big Data is breaking down the intellectual property barriers to using that data, especially for cross-database applications. Achieving this trade-off and overcoming the difficulties of Big Data remains a challenge, even though intellectual property restrictions have been developed to limit misuse and regulate access. This study examines the scope of intellectual property rights in Big Data applications, together with a security framework for protecting those rights based on watermarking and fingerprinting algorithms. The emergence of Big Data necessitates the development of new conceptual frameworks, security standards, and laws. This study addresses the significant copyright difficulties on cross-database platforms and the paradigm shift from ownership to control of access to and use of Big Data, especially on such platforms. We provide a comprehensive overview of copyright applications for multimedia data and a summary of the main trends in the discussion of intellectual property protection, highlighting crucial issues and existing obstacles, and identifying three major findings on the relationship between them. Full article

25 pages, 5609 KiB  
Article
Integrated Design of a Supermarket Refrigeration System by Means of Experimental Design Adapted to Computational Problems
by Daniel Sarabia, María Cruz Ortiz and Luis Antonio Sarabia
Algorithms 2022, 15(11), 417; https://doi.org/10.3390/a15110417 - 07 Nov 2022
Cited by 1 | Viewed by 1828
Abstract
In this paper, an integrated design of a supermarket refrigeration system has been used to obtain a process with better operability. It is formulated as a multi-objective optimization problem in which control performance is evaluated by six indices and the design variables are the number and discrete power of the compressors to be installed. The functional dependence between design and performance is unknown, so the optimal configuration must be found through computational experimentation. This work has a double objective: to adapt response surface methodology (RSM) to optimization problems without experimental variability, such as computational ones, and to show the advantage of considering the integrated design. In the RSM framework, the problem is stated as a mixture design with constraints and a synergistic cubic model, where a D-optimal design is applied to perform the experiments. Finally, the multi-objective problem is reduced to a single-objective one by means of a desirability function. The optimal configuration of the power distribution of the three compressors, in percentage, is (50,20,20). This solution shows excellent behaviour with respect to the six indices proposed, with a significant reduction in the time oscillations of controlled variables and in power consumption compared with other possible power distributions. Full article
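The reduction of several performance indices to a single objective via a desirability function can be illustrated generically in the Derringer-Suich style: each index is mapped onto [0, 1] and the overall desirability is the geometric mean of the individual values. The smaller-is-better shape and the targets below are assumptions for illustration, not the paper's actual index transformations.

```python
import math

def desirability_smaller_is_better(y, target, upper, weight=1.0):
    """d = 1 at or below the target, d = 0 at or above the upper limit."""
    if y <= target:
        return 1.0
    if y >= upper:
        return 0.0
    return ((upper - y) / (upper - target)) ** weight

def overall_desirability(values, targets, uppers):
    """Geometric mean of individual desirabilities; any zero vetoes the design."""
    ds = [desirability_smaller_is_better(y, t, u)
          for y, t, u in zip(values, targets, uppers)]
    if min(ds) == 0.0:
        return 0.0
    return math.exp(sum(math.log(d) for d in ds) / len(ds))

D = overall_desirability([1.0, 1.0], [0.0, 0.0], [2.0, 2.0])  # 0.5
```

The geometric mean is the usual choice because a single unacceptable index (d = 0) drives the overall desirability to zero, which an arithmetic mean would not.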

14 pages, 392 KiB  
Article
An Algorithm for Generating a Diverse Set of Multi-Modal Journeys
by Federico Mosquera, Pieter Smet and Greet Vanden Berghe
Algorithms 2022, 15(11), 416; https://doi.org/10.3390/a15110416 - 07 Nov 2022
Cited by 1 | Viewed by 1334
Abstract
A direct way of reducing the number of cars on the road is to dissuade individuals from exclusively using their car and instead integrate public transport into their daily routine. Planning multi-modal journeys is a complex task for which individuals often rely on decision support tools. However, offering individuals different journey options represents a significant algorithmic challenge. The failure to provide users with a set of journey options that differ considerably from one another in terms of the modes of transport employed is currently preventing the widespread uptake of multi-modal journey planning among the general public. In this paper, we introduce a dynamic programming algorithm that remedies this situation by modeling different transport networks as a graph that is then pruned by various graph-reduction pre-processing techniques. This approach enables us to offer a diverse set of efficient multi-modal solutions to users almost instantaneously. A computational study on three datasets corresponding to various real-world mobility networks with up to 30,000 vertices and 596,000 arcs demonstrates the effectiveness of the proposed algorithm. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)

22 pages, 5465 KiB  
Article
Fast and Interactive Positioning of Proteins within Membranes
by André Lanrezac, Benoist Laurent, Hubert Santuz, Nicolas Férey and Marc Baaden
Algorithms 2022, 15(11), 415; https://doi.org/10.3390/a15110415 - 07 Nov 2022
Cited by 3 | Viewed by 1910
Abstract
(1) Background: We developed an algorithm to perform interactive molecular simulations (IMS) of protein alignment in membranes, allowing on-the-fly monitoring and manipulation of such molecular systems at various scales. (2) Methods: UnityMol, an advanced molecular visualization software; MDDriver, a socket for data communication; and BioSpring, a spring network simulation engine, were extended to perform IMS. These components are designed to communicate easily with each other, adapt to other molecular simulation software, and provide a development framework for adding new interaction models to simulate biological phenomena, such as protein alignment in the membrane, at a rate fast enough for real-time experiments. (3) Results: We describe in detail the integration of an implicit membrane model for Integral Membrane Protein And Lipid Association (IMPALA) into our IMS framework. Our implementation can cover multiple levels of representation, and the degrees of freedom can be tuned to optimize the experience. We explain the validation of this model in an interactive and exhaustive search mode. (4) Conclusions: Protein positioning in model membranes can now be performed interactively in real time. Full article
(This article belongs to the Special Issue Algorithms for Computational Biology 2022)

24 pages, 4724 KiB  
Article
Phase-Type Survival Trees to Model a Delayed Discharge and Its Effect in a Stroke Care Unit
by Lalit Garg, Sally McClean, Brian Meenan, Maria Barton, Ken Fullerton, Sandra C. Buttigieg and Alexander Micallef
Algorithms 2022, 15(11), 414; https://doi.org/10.3390/a15110414 - 05 Nov 2022
Viewed by 1789
Abstract
The problem of hospital patients’ delayed discharge or ‘bed blocking’ has long been a challenge for healthcare managers and policymakers. It negatively affects hospital performance metrics and has other severe consequences for the healthcare system, such as affecting patients’ health. In our previous work, we proposed phase-type survival tree (PHTST)-based analysis to cluster patients into clinically meaningful groups, and an extension of this approach to examine the relationship between the length of stay in hospital and the destination on discharge. This paper describes how PHTST-based clustering can be used for modelling delayed discharge and its effects in a stroke care unit, especially the extra beds required, the additional cost, and bed blocking. The PHTST length-of-stay distribution of each group of patients (each PHTST node) is modelled separately as a finite-state continuous-time Markov chain using Coxian phase-type distributions. Patients experiencing delayed discharge are modelled in a special state of the Markov chain, called the ‘blocking state’. We can use the model to recognise the association between demographic factors and discharge delays and their effects, and to identify groups of patients who require attention to resolve the most common delays and prevent them from recurring. The approach is illustrated using five years of retrospective data on patients admitted to Belfast City Hospital with a stroke diagnosis. Full article
(This article belongs to the Special Issue Process Mining and Its Applications)
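A Coxian phase-type distribution represents a length of stay as a sequence of exponential phases, with a chance of absorption (here, discharge) after each phase. A minimal sampling sketch follows; the rates and exit probabilities are invented for illustration and are not fitted values from the paper.

```python
import random

def sample_coxian(rates, exit_probs, rng=random):
    """Draw one length of stay from a Coxian phase-type distribution.

    rates[i]      : total rate of leaving phase i (exponential holding time)
    exit_probs[i] : probability of absorption (discharge) on leaving phase i;
                    otherwise the patient moves on to phase i + 1.
                    exit_probs[-1] must be 1.0 so the last phase always absorbs.
    """
    t = 0.0
    for rate, p_exit in zip(rates, exit_probs):
        t += rng.expovariate(rate)
        if rng.random() < p_exit:
            break
    return t

rng = random.Random(42)
stays = [sample_coxian([1.0, 0.2, 0.05], [0.6, 0.5, 1.0], rng)
         for _ in range(10_000)]
# analytic mean for these parameters: 1 + 0.4 * 5 + 0.2 * 20 = 7 days
```

A 'blocking state' as in the paper would simply be an extra phase appended after the clinical phases, whose holding time models the discharge delay.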

18 pages, 3219 KiB  
Article
AMR-Assisted Order Picking: Models for Picker-to-Parts Systems in a Two-Blocks Warehouse
by Giulia Pugliese, Xiaochen Chou, Dominic Loske, Matthias Klumpp and Roberto Montemanni
Algorithms 2022, 15(11), 413; https://doi.org/10.3390/a15110413 - 05 Nov 2022
Cited by 3 | Viewed by 2589
Abstract
Manual order picking, the process of retrieving stock keeping units from their storage locations to fulfil customer orders, is one of the most labour-intensive and costly activities in modern supply chains. To improve the outcome of order picking systems, automated and robotized components are increasingly introduced, creating hybrid order picking systems in which humans and machines work together. This study focuses on the application of a hybrid picker-to-parts order picking system in which human operators collaborate with Automated Mobile Robots (AMRs). A warehouse with a two-block layout is investigated. The main contributions are new mathematical models for the optimization of picking operations and synchronizations. Two alternative implementations of an AMR system are considered: in the first, the handover locations where pickers load AMRs are shared between pairs of opposite sub-aisles, while in the second they are not. It is shown that solving the proposed mathematical models by means of black-box solvers provides a viable algorithmic optimization approach that can be used in practice to derive efficient operational plans. The experimental study presented, based on a real warehouse and real orders, finally allows the two alternative AMR implementations to be evaluated and strategically compared. Full article

19 pages, 321 KiB  
Article
Branch and Price Algorithm for Multi-Trip Vehicle Routing with a Variable Number of Wagons and Time Windows
by Leila Karimi and Chowdhury Nawrin Ferdous
Algorithms 2022, 15(11), 412; https://doi.org/10.3390/a15110412 - 04 Nov 2022
Cited by 2 | Viewed by 1838
Abstract
Motivated by the transportation needs of modern-day retailers, we consider a variant of the vehicle routing problem with time windows in which each truck has a variable capacity. In our model, each vehicle can bring one or more wagons. The clients are visited within specified time windows, and the vehicles can also make multiple trips. We give a mathematical programming formulation for the problem and develop a branch-and-price algorithm to solve it. In each iteration of branch and price, column generation is used, and different subproblems are created based on the different capacities to find the best column. We use CPLEX to solve the problem computationally and extend Solomon’s instances to evaluate our approach. To our knowledge, ours is the first such study in this field. Full article

2 pages, 156 KiB  
Editorial
Special Issue “1st Online Conference on Algorithms (IOCA2021)”
by Frank Werner
Algorithms 2022, 15(11), 411; https://doi.org/10.3390/a15110411 - 04 Nov 2022
Viewed by 981
Abstract
This Special Issue of Algorithms is dedicated to the 1st Online Conference on Algorithms (IOCA 2021), which was held completely online from 27 September to 10 October 2021 [...] Full article
(This article belongs to the Special Issue 1st Online Conference on Algorithms (IOCA2021))
22 pages, 3259 KiB  
Article
Hybrid InceptionV3-SVM-Based Approach for Human Posture Detection in Health Monitoring Systems
by Roseline Oluwaseun Ogundokun, Rytis Maskeliūnas, Sanjay Misra and Robertas Damasevicius
Algorithms 2022, 15(11), 410; https://doi.org/10.3390/a15110410 - 04 Nov 2022
Cited by 9 | Viewed by 3455
Abstract
Posture detection aimed at providing assessments for monitoring the health and welfare of humans has been of great interest to researchers from different disciplines. The use of computer vision systems for posture recognition might yield useful improvements in healthy aging and support for elderly people in their daily activities in the field of health care. The computer vision and pattern recognition communities are particularly interested in automated fall recognition. Both human sensing and artificial intelligence research have paid great attention to human posture detection (HPD). The health status of elderly people can be remotely monitored using HPD, which can distinguish between positions such as standing, sitting, and walking. Recent research has identified posture using both deep learning (DL) and conventional machine learning (ML) classifiers. However, these techniques do not identify postures effectively, and the models overfit. Therefore, this study proposes a deep convolutional neural network (DCNN) framework to examine and classify human posture in health monitoring systems, combining a feature selection technique, a DCNN, and a machine learning technique to address the previously mentioned problems. The InceptionV3 DCNN model is hybridized with an SVM ML classifier and their performance is compared. Furthermore, the performance of the proposed system is validated against other transfer learning (TL) techniques such as InceptionV3, DenseNet121, and ResNet50. The study uses least absolute shrinkage and selection operator (LASSO)-based feature selection to enhance the feature vector, and techniques such as data augmentation, dropout, and early stopping to overcome model overfitting. The framework is tested on the benchmark Silhouettes of Human Posture dataset, attaining a classification accuracy, loss, and AUC value of 95.42%, 0.01, and 99.35%, respectively. The results offer a promising solution for indoor monitoring systems. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms for Medicine)
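LASSO-based feature selection keeps only the features whose coefficients survive the L1 penalty. A self-contained sketch using plain cyclic coordinate descent on synthetic data; this illustrates the mechanism only, not the paper's actual pipeline, data, or hyperparameters.

```python
import numpy as np

def lasso_coordinate_descent(X, y, lam=0.05, n_iter=200):
    """Minimize (1/2n)||y - Xw||^2 + lam * ||w||_1 by cyclic coordinate descent."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]          # residual excluding feature j
            rho = X[:, j] @ r
            # soft-thresholding pins w[j] at exactly zero for weak features
            w[j] = np.sign(rho) * max(abs(rho) - lam * n, 0.0) / col_sq[j]
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.standard_normal(200)
w = lasso_coordinate_descent(X, y)
selected = np.flatnonzero(np.abs(w) > 1e-6)  # indices of retained features
```

The soft-thresholding step is what produces exact zeros, so the selected index set falls out directly from the fitted coefficient vector.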
