Algorithms, Volume 15, Issue 6 (June 2022) – 44 articles

Cover Story: Safety and reliability are priorities for maximum profitability, ethical compliance, minimized downtime, and optimal utilization of equipment. Even with automatic regrinding equipment for micro drill bits, constant monitoring of the grinder extends the drill bits' life. Vibration monitoring offers a reliable solution for identifying the different stages of degradation. As part of a pre-processing algorithm, the spectral isolation technique ensures that only the most critical spectral segments of the inputs are retained, improving deep-learning-based diagnostic accuracy at reduced computational cost. While open issues remain, the possibilities for further improvement present ripe opportunities for continued research in the domain.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
24 pages, 577 KiB  
Article
Scale-Free Random SAT Instances
by Carlos Ansótegui , Maria Luisa Bonet and Jordi Levy
Algorithms 2022, 15(6), 219; https://doi.org/10.3390/a15060219 - 20 Jun 2022
Cited by 1 | Viewed by 2014
Abstract
We focus on the random generation of SAT instances that have properties similar to real-world instances. It is known that many industrial instances, even with a great number of variables, can be solved by a clever solver in a reasonable amount of time. This is not possible, in general, with classical randomly generated instances. We provide a different generation model of SAT instances, called scale-free random SAT instances. This is based on the use of a non-uniform probability distribution P(i) ∝ i^(−β) to select variable i, where β is a parameter of the model. This results in formulas where the number of occurrences k of variables follows a power-law distribution P(k) ∝ k^(−δ), where δ = 1 + 1/β. This property has been observed in most real-world SAT instances. For β = 0, our model extends classical random SAT instances. We prove the existence of a SAT–UNSAT phase transition phenomenon for scale-free random 2-SAT instances with β < 1/2 when the clause/variable ratio is m/n = (1 − 2β)/(1 − β)^2. We also prove that scale-free random k-SAT instances are unsatisfiable with high probability when the number of clauses exceeds ω(n^((1−β)k)). The proof of this result suggests that, when β > 1 − 1/k, the unsatisfiability of most formulas may be due to small cores of clauses. Finally, we show how this model allows us to generate random instances similar to industrial instances, of interest for testing purposes. Full article
(This article belongs to the Special Issue Algorithms in Complex Networks)
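The generation model in this abstract is concrete enough to sketch. Below is a minimal, illustrative Python generator (not the authors' code): variable i is drawn with probability proportional to i^(−β), each clause takes k distinct variables with random polarity, and β = 0 degenerates to the classical uniform model.

```python
import random

def scale_free_sat(n, m, k, beta, seed=0):
    """Sample a random k-SAT instance in which variable i (1-indexed)
    is drawn with probability proportional to i**(-beta)."""
    rng = random.Random(seed)
    weights = [i ** -beta for i in range(1, n + 1)]
    clauses = []
    for _ in range(m):
        # draw k distinct variables under the power-law distribution
        vars_ = set()
        while len(vars_) < k:
            vars_.add(rng.choices(range(1, n + 1), weights=weights)[0])
        # random polarity for each literal
        clauses.append([v if rng.random() < 0.5 else -v for v in vars_])
    return clauses

inst = scale_free_sat(n=100, m=300, k=3, beta=0.8)
```

With β > 0, low-index variables occur far more often, producing the heavy-tailed occurrence counts the abstract describes.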
34 pages, 2044 KiB  
Review
A Review of an Artificial Intelligence Framework for Identifying the Most Effective Palm Oil Prediction
by Fatini Nadhirah Mohd Nain, Nurul Hashimah Ahamed Hassain Malim, Rosni Abdullah, Muhamad Farid Abdul Rahim, Mohd Azinuddin Ahmad Mokhtar and Nurul Syafika Mohamad Fauzi
Algorithms 2022, 15(6), 218; https://doi.org/10.3390/a15060218 - 20 Jun 2022
Cited by 1 | Viewed by 3418
Abstract
Machine Learning (ML) offers new precision technologies with intelligent algorithms and robust computation. This technology benefits various agricultural industries, such as the palm oil sector, which is one of the most sustainable industries worldwide. Hence, an in-depth analysis was conducted, derived from previous research on ML utilisation in the palm oil industry. The study provides a brief overview of widely used features and prediction algorithms and critically analyses the current state of ML-based palm oil prediction. This analysis is extended to ML applications in the palm oil industry and a comparison of related studies. The analysis was predicated on a thorough examination of the advantages and disadvantages of ML-based palm oil prediction and the proper identification of current and future agricultural industry challenges. Potential solutions for palm oil prediction were added to this list. Artificial intelligence and machine vision were used to develop intelligent systems, revolutionising the palm oil industry. Overall, this article provides a framework for future research in the palm oil agricultural industry by highlighting the importance of ML. Full article
21 pages, 2055 KiB  
Article
An Online Algorithm for Routing an Unmanned Aerial Vehicle for Road Network Exploration Operations after Disasters under Different Refueling Strategies
by Lorena Reyes-Rubiano, Jana Voegl and Patrick Hirsch
Algorithms 2022, 15(6), 217; https://doi.org/10.3390/a15060217 - 20 Jun 2022
Cited by 1 | Viewed by 1890
Abstract
This paper is dedicated to studying on-line routing decisions for exploring a disrupted road network in the context of humanitarian logistics using an unmanned aerial vehicle (UAV) with flying range limitations. The exploration aims to extract accurate information for assessing damage to infrastructure and road accessibility of victim locations in the aftermath of a disaster. We propose an algorithm to conduct routing decisions involving the aerial and road network simultaneously, assuming that no information about the state of the road network is available in the beginning. Our solution approach uses different strategies to deal with the detected disruptions and refueling decisions during the exploration process. The strategies differ mainly regarding where and when the UAV is refueled. We analyze the interplay of the type and level of disruption of the network with the number of possible refueling stations and the refueling strategy chosen. The aim is to find the best combination of the number of refueling stations and refueling strategy for different settings of the network type and disruption level. Full article
(This article belongs to the Special Issue Advanced Graph Algorithms)
12 pages, 546 KiB  
Article
Pulsed Electromagnetic Field Transmission through a Small Rectangular Aperture: A Solution Based on the Cagniard–DeHoop Method of Moments
by Martin Štumpf
Algorithms 2022, 15(6), 216; https://doi.org/10.3390/a15060216 - 20 Jun 2022
Cited by 1 | Viewed by 1934
Abstract
Pulsed electromagnetic (EM) field transmission through a relatively small rectangular aperture is analyzed with the aid of the Cagniard–deHoop method of moments (CdH-MoM). The classic EM scattering problem is formulated using the EM reciprocity theorem of the time-convolution type. The resulting TD reciprocity relation is then, under the assumption of piecewise-linear, space–time magnetic-current distribution over the aperture, cast analytically into the form of discrete time-convolution equations. The latter equations are subsequently solved via a stable marching-on-in-time scheme. Illustrative examples are presented and validated using a 3D numerical EM tool. Full article
(This article belongs to the Special Issue Computational Methods and Optimization for Numerical Analysis)
21 pages, 3074 KiB  
Article
Domain Generalization Model of Deep Convolutional Networks Based on SAND-Mask
by Jigang Wang, Liang Chen and Rui Wang
Algorithms 2022, 15(6), 215; https://doi.org/10.3390/a15060215 - 18 Jun 2022
Viewed by 1545
Abstract
In the actual operation of a machine, data cannot be obtained for many operating conditions because the conditions are numerous and wide-ranging. However, the different data distributions between operating conditions reduce the performance of fault diagnosis. Most current studies address only the generalization caused by a change of working conditions along a single dimension. In scenarios where several factors, such as speed, load and temperature, jointly change the working conditions, problems such as the combinatorial explosion of working conditions and complex data distributions arise, making generalization harder than in previous research. To cope with this problem, this paper improves the generalization method SAND-Mask (Smoothed-AND (SAND) masking) by using the total gradient variance of the samples in a batch, instead of the gradient variance of each sample, to calculate the parameter σ. The SAND-Mask method is extended to the fault diagnosis domain, and the DCNG model (Deep Convolutional Network Generalization) is proposed. Finally, multi-angle experiments were conducted on three publicly available bearing datasets, and diagnostic performances of more than 90%, 99%, and 70% were achieved on all transfer tasks. The results show that the DCNG model has better stability as well as diagnostic performance compared to other generalization methods. Full article
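The modification described above — computing σ from the total gradient variance of a batch rather than per sample — can be sketched in a few lines of NumPy. This is a schematic reading of that one change; the full SAND-Mask gradient-masking machinery is elided, and the toy gradient values are invented.

```python
import numpy as np

def sigma_from_batch(per_sample_grads):
    """per_sample_grads: (batch, n_params) array of gradients."""
    g = np.asarray(per_sample_grads, dtype=float)
    # per-parameter variance of the gradient across the batch
    per_param_var = g.var(axis=0)
    # the variant described above: one scalar sigma for the whole batch
    sigma_total = float(per_param_var.mean())
    return sigma_total, per_param_var

sigma, per_param = sigma_from_batch([[1.0, 0.0], [3.0, 0.0]])
```

A single batch-level σ smooths out per-sample noise, which is plausibly what stabilizes the masking when working conditions vary along several dimensions at once.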
13 pages, 1958 KiB  
Article
Pendulum Search Algorithm: An Optimization Algorithm Based on Simple Harmonic Motion and Its Application for a Vaccine Distribution Problem
by Nor Azlina Ab. Aziz and Kamarulzaman Ab. Aziz
Algorithms 2022, 15(6), 214; https://doi.org/10.3390/a15060214 - 17 Jun 2022
Cited by 4 | Viewed by 2249
Abstract
This work mimics the harmonic motion of a pendulum swinging about a pivot point. The amplitudes of the harmonic motion on both sides of the pivot are equal, damped, and decrease with time. The agents of the pendulum search algorithm (PSA) mimic this behavior to move and look for an optimization solution within a search area. The high amplitude at the beginning encourages exploration and expands the search area, while the small amplitude towards the end encourages fine-tuning and exploitation. PSA is applied to a vaccine distribution problem. The extended SEIR model of Hong Kong's 2009 H1N1 influenza epidemic is adopted here. The results show that PSA generates a good solution that minimizes the total infection better than several other methods. PSA is also tested using 13 multimodal functions from the CEC2014 benchmark suite. To optimize multimodal functions, an algorithm must be able to avoid premature convergence and escape from local optima traps. Hence, these functions were chosen to validate the algorithm as a robust metaheuristic optimizer. PSA is found to provide low error values. PSA is then benchmarked against the state-of-the-art particle swarm optimization (PSO) and sine cosine algorithm (SCA). PSA is better than PSO and SCA on a greater number of test functions; these positive results show the potential of PSA. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms and Applications)
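A rough sketch of the damped-oscillation idea behind PSA: each agent steps toward the best-known position with a damped-cosine amplitude, so swings are large early (exploration) and small late (exploitation). The update formula below is an illustrative rendering, not necessarily the paper's exact equations.

```python
import math, random

def pendulum_search(f, dim, bounds, n_agents=20, iters=200, seed=1):
    """Minimize f over a box using damped oscillations toward the best agent."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    best = min(X, key=f)[:]
    for t in range(iters):
        for x in X:
            for d in range(dim):
                # damped harmonic step: amplitude shrinks as t grows
                pend = 2.0 * math.exp(-t / iters) * math.cos(2.0 * math.pi * rng.random())
                x[d] += pend * (best[d] - x[d])
                x[d] = min(max(x[d], lo), hi)  # keep agents inside the box
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best

sol = pendulum_search(lambda v: sum(z * z for z in v), dim=5, bounds=(-10.0, 10.0))
```

On the sphere function this behaves like a stochastic line search through the incumbent best, contracting as the amplitude decays.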
16 pages, 1801 KiB  
Article
A New Subject-Sensitive Hashing Algorithm Based on MultiRes-RCF for Blockchains of HRRS Images
by Kaimeng Ding, Shiping Chen, Jiming Yu, Yanan Liu and Jie Zhu
Algorithms 2022, 15(6), 213; https://doi.org/10.3390/a15060213 - 17 Jun 2022
Cited by 2 | Viewed by 2006
Abstract
To address the deficiency that blockchain technology is too sensitive to binary-level changes in high-resolution remote sensing (HRRS) images, we propose a new subject-sensitive hashing algorithm specifically for HRRS image blockchains. To implement this algorithm, we designed and implemented a deep neural network model, MultiRes-RCF (richer convolutional features), for extracting features from HRRS images. A MultiRes-RCF network is an improved RCF network that borrows the MultiRes mechanism of MultiResU-Net. The subject-sensitive hashing algorithm based on MultiRes-RCF can detect subtle tampering of HRRS images while remaining robust to operations that do not change the content of the images. Experimental results show that our MultiRes-RCF-based subject-sensitive hashing algorithm has better tamper sensitivity than existing deep learning models such as RCF, AAU-net, and Attention U-net, meeting the needs of HRRS image blockchains. Full article
(This article belongs to the Special Issue Advances in Blockchain Architecture and Consensus)
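The hash-and-compare pipeline (feature map → thresholded bit string → Hamming distance) can be illustrated with a stand-in feature extractor in place of MultiRes-RCF. Only the pipeline shape reflects the paper; the average-pooling "features" below are purely illustrative.

```python
import numpy as np

def perceptual_hash(img, grid=8):
    """Stand-in for a learned feature extractor: average-pool the image to a
    grid x grid map and threshold at the median to get a bit string."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    h2, w2 = h - h % grid, w - w % grid
    pooled = img[:h2, :w2].reshape(grid, h2 // grid, grid, w2 // grid).mean(axis=(1, 3))
    return (pooled > np.median(pooled)).astype(np.uint8).ravel()

def hamming(a, b):
    """Number of differing bits; small = 'same subject', large = tampered."""
    return int(np.count_nonzero(a != b))

h1 = perceptual_hash(np.random.default_rng(0).random((64, 64)))
```

In the paper the features come from MultiRes-RCF, which is what makes the hash sensitive to subject-level tampering yet robust to content-preserving operations.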
19 pages, 2519 KiB  
Article
A Cost-Efficient MCSA-Based Fault Diagnostic Framework for SCIM at Low-Load Conditions
by Chibuzo Nwabufo Okwuosa, Ugochukwu Ejike Akpudo and Jang-Wook Hur
Algorithms 2022, 15(6), 212; https://doi.org/10.3390/a15060212 - 16 Jun 2022
Cited by 7 | Viewed by 2156
Abstract
In industry, electric motors such as the squirrel cage induction motor (SCIM) generate motive power and are particularly popular due to their low acquisition cost, strength, and robustness. Along with these benefits, they have minimal maintenance costs and can run for extended periods before requiring repair and/or maintenance. Early fault detection in SCIMs, especially at low-load conditions, further helps minimize maintenance costs and mitigate abrupt equipment failure when loading is increased. Recent research on these devices is focused on fault/failure diagnostics with the aim of reducing downtime, minimizing costs, and increasing utility and productivity. Data-driven predictive maintenance offers a reliable avenue for intelligent monitoring whereby signals generated by the equipment are harnessed for fault detection and isolation (FDI). In particular, motor current signature analysis (MCSA) provides a reliable avenue for extracting and/or exploiting discriminant information from signals for FDI and/or fault diagnosis. This study presents a fault diagnostic framework that exploits underlying spectral characteristics following MCSA and intelligent classification for fault diagnosis based on extracted spectral features. Results show that the extracted features reflect induction motor fault conditions with significant diagnostic performance (a minimal false alarm rate) from the intelligent models, among which the random forest (RF) classifier was the most accurate, with an accuracy of 79.25%. Further assessment of the models showed that RF had the highest computational cost at 3.66 s, while the naïve Bayes classifier (NBC) had the lowest at 0.003 s. Other significant empirical assessments were conducted, and the results support the validity of the proposed FDI technique. Full article
(This article belongs to the Special Issue Artificial Intelligence for Fault Detection and Diagnosis)
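The MCSA step — transforming the stator current to the frequency domain and summarizing energy in selected bands — can be sketched as follows. The band limits and the synthetic "current" signal are placeholders, not the paper's settings.

```python
import numpy as np

def mcsa_features(current, fs, bands):
    """Windowed one-sided amplitude spectrum of a current signal, reduced to
    energies in the given (lo, hi) frequency bands."""
    n = len(current)
    spec = np.abs(np.fft.rfft(current * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return [float(np.sum(spec[(freqs >= lo) & (freqs < hi)] ** 2)) for lo, hi in bands]

fs = 2000.0
t = np.arange(0, 1, 1 / fs)
# dominant supply-frequency component plus a weak higher-frequency component
sig = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 130 * t)
feats = mcsa_features(sig, fs, bands=[(45.0, 55.0), (125.0, 135.0)])
```

Band energies like these are the kind of spectral features one would feed to the classifiers compared in the study (RF, NBC, etc.).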
16 pages, 345 KiB  
Article
Optimizing Cybersecurity Investments over Time
by Alessandro Mazzoccoli and Maurizio Naldi
Algorithms 2022, 15(6), 211; https://doi.org/10.3390/a15060211 - 16 Jun 2022
Cited by 2 | Viewed by 2037
Abstract
In the context of growing vulnerabilities, cyber-risk management cannot rely on a one-off approach, instead calling for a continuous re-assessment of the risk and adaptation of risk management strategies. Under the mixed investment–insurance approach, where both risk mitigation and risk transfer are employed, the adaptation implies the re-computation of the optimal amount to invest in security over time. In this paper, we deal with the problem of computing the optimal balance between investment and insurance payments to achieve the minimum overall security expense when the vulnerability grows over time according to a logistic function, adopting a greedy approach, where strategy adaptation is carried out periodically at each investment epoch. We consider three liability degrees, from full liability to partial liability with deductibles. We find that insurance represents by far the dominant component in the mix and may be relied on as a single protection tool when the vulnerability is very low. Full article
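A toy rendering of the greedy per-epoch balance, with a logistic vulnerability curve as in the abstract. The mitigation function, premium rule, and all parameter values below are assumptions for illustration, not the paper's model.

```python
import math

def logistic_vulnerability(t, v_max=0.9, r=1.0, t0=5.0):
    """Vulnerability growing over time along a logistic curve (illustrative parameters)."""
    return v_max / (1.0 + math.exp(-r * (t - t0)))

def optimal_split(v, loss=100.0, alpha=0.5, premium_rate=1.0, grid=1000):
    """Greedy per-epoch choice: grid-search the investment z minimizing
    z + premium, with an assumed mitigation v/(1 + alpha*z) and a premium
    proportional to the residual expected loss."""
    best = None
    for i in range(grid + 1):
        z = loss * i / grid
        residual = v / (1.0 + alpha * z)
        total = z + premium_rate * residual * loss
        if best is None or total < best[0]:
            best = (total, z)
    return best  # (minimal expense, optimal investment)

low_v = optimal_split(logistic_vulnerability(0.0))    # early epoch: vulnerability tiny
high_v = optimal_split(logistic_vulnerability(10.0))  # late epoch: near the ceiling
```

Even in this toy model the abstract's qualitative finding appears: at very low vulnerability the optimum is to buy insurance only (zero investment), while high vulnerability justifies a mixed strategy.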
28 pages, 1344 KiB  
Review
Overview of Distributed Machine Learning Techniques for 6G Networks
by Eugenio Muscinelli, Swapnil Sadashiv Shinde and Daniele Tarchi
Algorithms 2022, 15(6), 210; https://doi.org/10.3390/a15060210 - 15 Jun 2022
Cited by 22 | Viewed by 3938
Abstract
The main goal of this paper is to survey the influential research on distributed learning technologies playing a key role in the 6G world. Upcoming 6G technology is expected to create an intelligent, highly scalable, dynamic, and programmable wireless communication network able to serve many heterogeneous wireless devices. Various machine learning (ML) techniques are expected to be deployed over the intelligent 6G wireless network to provide solutions to highly complex networking problems. To this end, various 6G nodes and devices are expected to generate vast amounts of data through external sensors, and data analysis will be needed. With such massive and distributed data, and various innovations in computing hardware, distributed ML techniques are expected to play an important role in 6G. Though they have several advantages over centralized ML techniques, implementing distributed ML algorithms over resource-constrained wireless environments can be challenging. Therefore, it is important to select a proper ML algorithm based upon the characteristics of the wireless environment and the resource requirements of the learning process. In this work, we survey recently introduced distributed ML techniques with their characteristics and possible benefits by focusing our attention on the most influential papers in the area. We finally give our perspective on the main challenges and advantages for telecommunication networks, along with the main scenarios that could eventuate. Full article
(This article belongs to the Special Issue Algorithms for Communication Networks)
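As a concrete instance of the distributed ML techniques such surveys cover, here is a one-round federated-averaging (FedAvg) aggregation: each node trains locally and only model weights, weighted by local dataset size, are sent for aggregation, so raw data never leaves the device. Generic background, not a method from the paper.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate client model weights, each weighted by its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# two clients; the second holds three times as much data
w = federated_average([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [1, 3])
```

The communication cost is one weight vector per client per round, which is exactly the kind of resource constraint that makes algorithm selection in 6G environments delicate.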
16 pages, 1314 KiB  
Article
Maximum Entropy Approach to Massive Graph Spectrum Learning with Applications
by Diego Granziol, Binxin Ru, Xiaowen Dong, Stefan Zohren, Michael Osborne and Stephen Roberts
Algorithms 2022, 15(6), 209; https://doi.org/10.3390/a15060209 - 15 Jun 2022
Viewed by 1586
Abstract
We propose an alternative maximum entropy approach to learning the spectra of massive graphs. In contrast to the state-of-the-art Lanczos algorithm for spectral density estimation and applications thereof, our approach does not require kernel smoothing. As the choice of kernel function and associated bandwidth heavily affect the resulting output, our approach mitigates these issues. Furthermore, we prove that kernel smoothing biases the moments of the spectral density. Our approach can be seen as an information-theoretically optimal approach to learning a smooth graph spectral density that fully respects moment information. The proposed method has a computational cost linear in the number of edges and hence can be applied even to large networks with millions of nodes. We showcase the approach on problems of graph similarity learning and counting the number of clusters in a graph, where the proposed method outperforms existing iterative spectral approaches on both synthetic and real-world graphs. Full article
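The linear-in-edges cost rests on estimating spectral moments with matrix–vector products only. A standard Hutchinson-style moment estimator (generic background, not the authors' implementation) looks like this; a maximum-entropy density would then be fit to match these moments.

```python
import numpy as np

def spectral_moments(A, order, n_probes=30, seed=0):
    """Estimate m_k = (1/n) tr(A^k), k = 1..order, for a symmetric matrix A
    using Hutchinson's stochastic trace estimator. Only mat-vec products
    are needed, so the cost is linear in the edge count for sparse graphs."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    moments = np.zeros(order)
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        v = z.copy()
        for k in range(order):
            v = A @ v                         # v = A^(k+1) z
            moments[k] += z @ v               # accumulates z^T A^(k+1) z
    return moments / (n_probes * n)

A = np.diag([1.0, 2.0, 3.0])                  # toy symmetric matrix
m = spectral_moments(A, order=3)
```

For this diagonal toy case each probe recovers the trace exactly, so the estimates equal (1 + 2^k + 3^k)/3.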
25 pages, 3969 KiB  
Article
Eyes versus Eyebrows: A Comprehensive Evaluation Using the Multiscale Analysis and Curvature-Based Combination Methods in Partial Face Recognition
by Regina Lionnie, Catur Apriono and Dadang Gunawan
Algorithms 2022, 15(6), 208; https://doi.org/10.3390/a15060208 - 14 Jun 2022
Cited by 4 | Viewed by 1947
Abstract
This work aimed to find the most discriminative facial regions between the eyes and eyebrows for periocular biometric features in a partial face recognition system. We propose multiscale analysis methods combined with curvature-based methods. The goal of this combination was to capture the details of these features at finer scales and to characterize them in depth using curvature. Eye and eyebrow images cropped from four 2D face image datasets were evaluated. The recognition performance was calculated using the nearest neighbor and support vector machine classifiers. Our proposed method successfully produced richer details at finer scales, yielding high recognition performance. The highest accuracy results were 76.04% and 98.61% for the limited dataset and 96.88% and 93.22% for the larger dataset for the eye and eyebrow images, respectively. Moreover, we compared the results between our proposed methods and other works, and we achieved similarly high accuracy results using only eye and eyebrow images. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications III)
15 pages, 1543 KiB  
Article
Multi-View Graph Fusion for Semi-Supervised Learning: Application to Image-Based Face Beauty Prediction
by Fadi Dornaika and Abdelmalik Moujahid
Algorithms 2022, 15(6), 207; https://doi.org/10.3390/a15060207 - 14 Jun 2022
Cited by 4 | Viewed by 1873
Abstract
Facial Beauty Prediction (FBP) is an important visual recognition problem to evaluate the attractiveness of faces according to human perception. Most existing FBP methods are based on supervised solutions using geometric or deep features. Semi-supervised learning for FBP is an almost unexplored research area. In this work, we propose a graph-based semi-supervised method in which multiple graphs are constructed to find the appropriate graph representation of the face images (with and without scores). The proposed method combines both geometric and deep feature-based graphs to produce a high-level representation of face images instead of using a single face descriptor and also improves the discriminative ability of graph-based score propagation methods. In addition to the data graph, our proposed approach fuses an additional graph adaptively built on the predicted beauty values. Experimental results on the SCUT-FBP5500 facial beauty dataset demonstrate the superiority of the proposed algorithm compared to other state-of-the-art methods. Full article
(This article belongs to the Special Issue Advanced Graph Algorithms)
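Graph-based score propagation of the kind extended here can be sketched with the classic closed form F = (I − αS)⁻¹Y on an affinity matrix W (in the paper, a fusion of geometric- and deep-feature graphs). The fusion and adaptive graph construction are not reproduced; the 4-node graph below is invented.

```python
import numpy as np

def propagate_scores(W, y, labeled_mask, alpha=0.9):
    """Semi-supervised score propagation: spread known scores over the graph
    via F = (I - alpha*S)^(-1) (1 - alpha) Y with symmetric normalization."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))           # D^{-1/2} W D^{-1/2}
    Y = np.where(labeled_mask, y, 0.0)        # unlabeled nodes start at 0
    return np.linalg.solve(np.eye(len(y)) - alpha * S, (1 - alpha) * Y)

# two 2-node clusters; one scored node per cluster
W = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
y = np.array([1.0, 0.0, 5.0, 0.0])
mask = np.array([True, False, True, False])
F = propagate_scores(W, y, mask)
```

Each unscored node inherits a score close to its cluster's labeled neighbor, which is exactly the behavior a beauty-score propagation scheme relies on.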
1 page, 189 KiB  
Correction
Correction: Filion, G.J. Analytic Combinatorics for Computing Seeding Probabilities. Algorithms 2018, 11, 3
by Guillaume J. Filion
Algorithms 2022, 15(6), 206; https://doi.org/10.3390/a15060206 - 14 Jun 2022
Viewed by 1137
Abstract
The author wishes to make the following correction to this paper [...] Full article
43 pages, 565 KiB  
Review
A Review: Machine Learning for Combinatorial Optimization Problems in Energy Areas
by Xinyi Yang, Ziyi Wang, Hengxi Zhang, Nan Ma, Ning Yang, Hualin Liu, Haifeng Zhang and Lei Yang
Algorithms 2022, 15(6), 205; https://doi.org/10.3390/a15060205 - 13 Jun 2022
Cited by 12 | Viewed by 5842
Abstract
Combinatorial optimization problems (COPs) are a class of NP-hard problems of great practical significance. Traditional approaches to COPs suffer from high computational time and reliance on expert knowledge, and machine learning (ML) methods, as powerful tools, have been used to overcome these problems. This review mainly investigates COPs in energy areas tackled with a series of modern ML approaches, i.e., the intersection of COPs, ML and energy areas. Recent works on solving COPs using ML are sorted out first by method, including supervised learning (SL), deep learning (DL), reinforcement learning (RL) and recently proposed game-theoretic methods, and then by problem, laying out the timeline of improvements for some fundamental COPs. Practical applications of ML methods in the energy areas, including the petroleum supply chain, steel-making, electric power systems and wind power, are summarized for the first time, and challenges in this field are analyzed. Full article
(This article belongs to the Special Issue Algorithms for Games AI)
17 pages, 1416 KiB  
Article
Topic Modeling for Automatic Analysis of Natural Language: A Case Study in an Italian Customer Support Center
by Gabriele Papadia, Massimo Pacella and Vincenzo Giliberti
Algorithms 2022, 15(6), 204; https://doi.org/10.3390/a15060204 - 13 Jun 2022
Cited by 6 | Viewed by 2802
Abstract
This paper focuses on the automatic analysis of conversation transcriptions in the call center of a customer care service. The goal is to recognize topics related to problems and complaints discussed in several dialogues between customers and agents. Our study aims to implement a framework able to automatically cluster conversation transcriptions into cohesive and well-separated groups based on the content of the data. The framework can relieve the analyst of selecting proper values for the analysis and clustering processes. To pursue this goal, we consider a probabilistic model based on latent Dirichlet allocation, which associates transcriptions with a mixture of topics in different proportions. A case study consisting of transcriptions in the Italian natural language, collected in a customer support center of an energy supplier, is considered in the paper. A performance comparison of different inference techniques is discussed using the case study. The experimental results demonstrate the approach's efficacy in clustering Italian conversation transcriptions. It also yields a practical tool that simplifies the analytic process and off-loads parameter tuning from the end user. In light of recent works in the literature, this paper may be valuable for introducing latent Dirichlet allocation approaches to topic modeling for the Italian natural language. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
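To make the latent Dirichlet allocation model concrete, here is a tiny collapsed Gibbs sampler over an invented toy "customer support" corpus; real deployments would use an optimized inference library, and the paper compares several inference techniques rather than this one.

```python
import random

def lda_gibbs(docs, n_topics, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for LDA over tokenized documents."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    wid = {w: i for i, w in enumerate(vocab)}
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]   # topic of each token
    ndk = [[0] * n_topics for _ in docs]       # document-topic counts
    nkw = [[0] * V for _ in range(n_topics)]   # topic-word counts
    nk = [0] * n_topics                        # topic totals
    for di, d in enumerate(docs):
        for wi, w in enumerate(d):
            k = z[di][wi]
            ndk[di][k] += 1; nkw[k][wid[w]] += 1; nk[k] += 1
    for _ in range(iters):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                k = z[di][wi]                  # remove token, resample its topic
                ndk[di][k] -= 1; nkw[k][wid[w]] -= 1; nk[k] -= 1
                probs = [(ndk[di][t] + alpha) * (nkw[t][wid[w]] + beta) / (nk[t] + V * beta)
                         for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=probs)[0]
                z[di][wi] = k
                ndk[di][k] += 1; nkw[k][wid[w]] += 1; nk[k] += 1
    return ndk, nkw, vocab

docs = [["bill", "invoice", "payment", "bill"],
        ["invoice", "payment", "refund"],
        ["outage", "power", "blackout"],
        ["power", "outage", "power"]]
ndk, nkw, vocab = lda_gibbs(docs, n_topics=2)
```

The resulting document–topic counts give each transcription its mixture of topics, which is the representation the clustering framework operates on.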
17 pages, 349 KiB  
Article
An Algorithm for the Closed-Form Solution of Certain Classes of Volterra–Fredholm Integral Equations of Convolution Type
by Efthimios Providas
Algorithms 2022, 15(6), 203; https://doi.org/10.3390/a15060203 - 12 Jun 2022
Cited by 1 | Viewed by 1754
Abstract
In this paper, a direct operator method is presented for the exact closed-form solution of certain classes of linear and nonlinear integral Volterra–Fredholm equations of the second kind. The method is based on the existence of the inverse of the relevant linear Volterra operator. In the case of convolution kernels, the inverse is constructed using the Laplace transform method. For linear integral equations, results for existence and uniqueness are given. The solution of nonlinear integral equations depends on the existence and type of solutions of the corresponding nonlinear algebraic system. A complete algorithm for symbolic computations in a computer algebra system is also provided. The method finds many applications in science and engineering. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
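The Laplace-transform route for convolution kernels can be checked on a toy case: transforming x(t) = f(t) + ∫₀ᵗ k(t−s)·x(s) ds gives X(s) = F(s)/(1 − K(s)). The sketch below solves such an equation numerically and cross-checks the closed form obtained this way for f ≡ 1, k(t) = e^(−t); it is a worked example, not the paper's symbolic algorithm.

```python
import math

def solve_volterra(f, kern, T=1.0, n=200):
    """Trapezoidal-rule solver for x(t) = f(t) + integral_0^t k(t-s) x(s) ds."""
    h = T / n
    t = [i * h for i in range(n + 1)]
    x = [f(0.0)]                                  # at t = 0 the integral vanishes
    for i in range(1, n + 1):
        acc = 0.5 * kern(t[i]) * x[0]
        for j in range(1, i):
            acc += kern(t[i] - t[j]) * x[j]
        # the unknown x_i sits in the trapezoid's last half-weight term; solve it out
        x.append((f(t[i]) + h * acc) / (1.0 - 0.5 * h * kern(0.0)))
    return t, x

# Laplace check: f = 1, k(t) = e^(-t) give F = 1/s, K = 1/(s+1),
# so X = F/(1 - K) = (s+1)/s^2, i.e. x(t) = 1 + t.
t, x = solve_volterra(lambda u: 1.0, lambda u: math.exp(-u))
```

The numerical solution tracks the Laplace-derived closed form 1 + t to discretization accuracy, illustrating why the operator inverse exists whenever 1 − K(s) is invertible.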
23 pages, 1335 KiB  
Article
Constraint Preserving Mixers for the Quantum Approximate Optimization Algorithm
by Franz Georg Fuchs, Kjetil Olsen Lye, Halvor Møll Nilsen, Alexander Johannes Stasik and Giorgio Sartor
Algorithms 2022, 15(6), 202; https://doi.org/10.3390/a15060202 - 10 Jun 2022
Cited by 12 | Viewed by 2682
Abstract
The quantum approximate optimization algorithm/quantum alternating operator ansatz (QAOA) is a heuristic to find approximate solutions of combinatorial optimization problems. Most of the literature is limited to quadratic problems without constraints. However, many practically relevant optimization problems do have (hard) constraints that need [...] Read more.
The quantum approximate optimization algorithm/quantum alternating operator ansatz (QAOA) is a heuristic to find approximate solutions of combinatorial optimization problems. Most of the literature is limited to quadratic problems without constraints. However, many practically relevant optimization problems do have (hard) constraints that need to be fulfilled. In this article, we present a framework for constructing mixing operators that restrict the evolution to a subspace of the full Hilbert space given by these constraints. We generalize the “XY”-mixer designed to preserve the subspace of “one-hot” states to the general case of subspaces given by a number of computational basis states. We expose the underlying mathematical structure, which reveals more about how mixers work and how one can minimize their cost in terms of the number of CX gates, particularly when Trotterization is taken into account. Our analysis also leads to valid Trotterizations for an “XY”-mixer with fewer CX gates than is known to date. In view of practical implementations, we also describe algorithms for efficient decomposition into basis gates. Several examples of more general cases are presented and analyzed. Full article
(This article belongs to the Collection Feature Paper in Algorithms and Complexity Theory)
20 pages, 342 KiB  
Article
Comparing the Reasoning Capabilities of Equilibrium Theories and Answer Set Programs
by Jorge Fandinno, David Pearce, Concepción Vidal and Stefan Woltran
Algorithms 2022, 15(6), 201; https://doi.org/10.3390/a15060201 - 08 Jun 2022
Cited by 2 | Viewed by 1553
Abstract
Answer Set Programming (ASP) is a well established logical approach in artificial intelligence that is widely used for knowledge representation and problem solving. Equilibrium logic extends answer set semantics to more general classes of programs and theories. When intertheory relations are studied in [...] Read more.
Answer Set Programming (ASP) is a well-established logical approach in artificial intelligence that is widely used for knowledge representation and problem solving. Equilibrium logic extends answer set semantics to more general classes of programs and theories. When intertheory relations are studied in ASP, or in the more general form of equilibrium logic, they are usually understood in the form of comparisons of the answer sets or equilibrium models of theories or programs. This is the case for strong and uniform equivalence and their relativised and projective versions. However, there are many potential areas of application of ASP for which query answering is relevant and a comparison of programs in terms of what can be inferred from them may be important. We formulate and study some natural equivalence and entailment concepts for programs and theories that are couched in terms of inference and query answering. We show that, for the most part, these new intertheory relations coincide with their model-theoretic counterparts. We also extend some previous results on projective entailment for theories and for the new connective called fork. Full article
(This article belongs to the Special Issue Logic-Based Artificial Intelligence)
19 pages, 2550 KiB  
Article
MedicalSeg: A Medical GUI Application for Image Segmentation Management
by Christian Mata, Josep Munuera, Alain Lalande, Gilberto Ochoa-Ruiz and Raul Benitez
Algorithms 2022, 15(6), 200; https://doi.org/10.3390/a15060200 - 08 Jun 2022
Cited by 6 | Viewed by 2684
Abstract
In the field of medical imaging, the division of an image into meaningful structures using image segmentation is an essential step for pre-processing analysis. Many studies have been carried out to solve the general problem of the evaluation of image segmentation results. One [...] Read more.
In the field of medical imaging, the division of an image into meaningful structures using image segmentation is an essential step for pre-processing analysis. Many studies have been carried out to solve the general problem of the evaluation of image segmentation results. One of the main focuses in the computer vision field is based on artificial intelligence algorithms for segmentation and classification, including machine learning and deep learning approaches. The main drawback of supervised segmentation approaches is that a large dataset of ground truth validated by medical experts is required. In this sense, many research groups have developed their segmentation approaches according to their specific needs. However, a generalized application aimed at visualizing, assessing and comparing the results of different methods, facilitating the generation of a ground-truth repository, is not found in the recent literature. In this paper, a new graphical user interface application (MedicalSeg) for the management of medical imaging based on pre-processing and segmentation is presented. The objective is twofold: first, to create a test platform for comparing segmentation approaches; and second, to generate segmented images to create ground truths that can then be used by future artificial intelligence tools. An experimental demonstration and a performance analysis discussion are presented in this paper. Full article
(This article belongs to the Special Issue 1st Online Conference on Algorithms (IOCA2021))
29 pages, 3889 KiB  
Article
XAI in the Context of Predictive Process Monitoring: An Empirical Analysis Framework
by Ghada El-khawaga, Mervat Abu-Elkheir and Manfred Reichert
Algorithms 2022, 15(6), 199; https://doi.org/10.3390/a15060199 - 08 Jun 2022
Cited by 5 | Viewed by 2451
Abstract
Predictive Process Monitoring (PPM) has been integrated into process mining use cases as a value-adding task. PPM provides useful predictions on the future of the running business processes with respect to different perspectives, such as the upcoming activities to be executed next, the [...] Read more.
Predictive Process Monitoring (PPM) has been integrated into process mining use cases as a value-adding task. PPM provides useful predictions on the future of the running business processes with respect to different perspectives, such as the upcoming activities to be executed next, the final execution outcome, and performance indicators. In the context of PPM, Machine Learning (ML) techniques are widely employed. In order to gain the trust of stakeholders regarding the reliability of PPM predictions, eXplainable Artificial Intelligence (XAI) methods have been increasingly used to compensate for the lack of transparency of most predictive models. Multiple XAI methods exist providing explanations for almost all types of ML models. However, for the same data, under the same preprocessing settings, or with the same ML models, the generated explanations often vary significantly. Such variations might jeopardize the consistency and robustness of the explanations and, subsequently, the utility of the corresponding model and pipeline settings. This paper introduces a framework that enables the analysis of the impact PPM-related settings and ML-model-related choices may have on the characteristics and expressiveness of the generated explanations. Our framework provides a means to examine explanations generated either for the whole reasoning process of an ML model, or for the predictions made on the future of a certain business process instance. Using well-defined experiments with different settings, we uncover how choices made through a PPM workflow affect and can be reflected through explanations. This framework further provides the means to compare how different characteristics of explainability methods can shape the resulting explanations and reflect on the underlying model reasoning process. Full article
(This article belongs to the Special Issue Process Mining and Its Applications)
22 pages, 22712 KiB  
Article
Improved JPS Path Optimization for Mobile Robots Based on Angle-Propagation Theta* Algorithm
by Yuan Luo, Jiakai Lu, Qiong Qin and Yanyu Liu
Algorithms 2022, 15(6), 198; https://doi.org/10.3390/a15060198 - 08 Jun 2022
Cited by 7 | Viewed by 2876
Abstract
The Jump Point Search (JPS) algorithm ignores the possibility of any-angle walking, so the paths found by the JPS algorithm under the discrete grid map still have a gap with the real paths. To address the above problems, this paper improves the path [...] Read more.
The Jump Point Search (JPS) algorithm ignores the possibility of any-angle walking, so the paths found by the JPS algorithm on a discrete grid map still deviate from the truly optimal paths. To address this problem, this paper improves the path optimization strategy of the JPS algorithm by combining the viewable angle of the Angle-Propagation Theta* (AP Theta*) algorithm, and it proposes the AP-JPS algorithm based on an any-angle pathfinding strategy. First, based on the JPS algorithm, this paper proposes a vision triangle judgment method to optimize the generated path by selecting the successor search point. Secondly, the idea of the node viewable angle in the AP Theta* algorithm is introduced to modify the line of sight (LOS) reachability detection between two nodes. Finally, the paths are optimized using a seventh-order polynomial based on minimum snap, so that the AP-JPS algorithm generates paths that better match the actual robot motion. The feasibility and effectiveness of this method are proved by simulation experiments and comparison with other algorithms. The results show that the path planning algorithm in this paper obtains paths with good smoothness in environments with different obstacle densities and different map sizes. In the algorithm comparison experiments, it can be seen that the AP-JPS algorithm reduces the path length by 1.61–4.68% and the total turning angle of the path by 58.71–84.67% compared with the JPS algorithm. The AP-JPS algorithm reduces the computing time by 98.59–99.22% compared with the AP Theta* algorithm. Full article
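Line-of-sight (LOS) detection between two grid nodes, which this abstract modifies, is the basic primitive of any-angle planners. A generic textbook Bresenham-style walk (not the paper's modified test; the grid layout below is an illustrative assumption) looks like this:

```python
# Generic grid line-of-sight test, the primitive Theta*-family planners use.
def line_of_sight(grid, a, b):
    """Return True if the segment a->b crosses no blocked cell.
    grid[y][x] == 1 marks an obstacle; a and b are (x, y) tuples."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        if grid[y][x]:            # a cell on the segment is blocked
            return False
        if (x, y) == (x1, y1):    # reached the target with no obstacle
            return True
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy

grid = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(line_of_sight(grid, (0, 0), (3, 0)))  # top row is free -> True
print(line_of_sight(grid, (0, 0), (3, 2)))  # diagonal hits the wall -> False
```

Any-angle planners call this test to decide whether a path segment can skip intermediate grid vertices; the paper's contribution is making this detection cheaper via node viewable angles.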
22 pages, 5237 KiB  
Article
BAG-DSM: A Method for Generating Alternatives for Hierarchical Multi-Attribute Decision Models Using Bayesian Optimization
by Martin Gjoreski, Vladimir Kuzmanovski and Marko Bohanec
Algorithms 2022, 15(6), 197; https://doi.org/10.3390/a15060197 - 07 Jun 2022
Cited by 2 | Viewed by 1778
Abstract
Multi-attribute decision analysis is an approach to decision support in which decision alternatives are evaluated by multi-criteria models. An advanced feature of decision support models is the possibility to search for new alternatives that satisfy certain conditions. This task is important for practical [...] Read more.
Multi-attribute decision analysis is an approach to decision support in which decision alternatives are evaluated by multi-criteria models. An advanced feature of decision support models is the possibility to search for new alternatives that satisfy certain conditions. This task is important for practical decision support; however, the related work on generating alternatives for qualitative multi-attribute decision models is quite scarce. In this paper, we introduce Bayesian Alternative Generator for Decision Support Models (BAG-DSM), a method to address the problem of generating alternatives. More specifically, given a multi-attribute hierarchical model and an alternative representing the initial state, the goal is to generate alternatives that demand the least change in the provided alternative to obtain a desirable outcome. The brute force approach has exponential time complexity and has prohibitively long execution times, even for moderately sized models. BAG-DSM avoids these problems by using a Bayesian optimization approach adapted to qualitative DEX models. BAG-DSM was extensively evaluated and compared to a baseline method on 43 different DEX decision models with varying complexity, e.g., different depth and attribute importance. The comparison was performed with respect to: the time to obtain the first appropriate alternative, the number of generated alternatives, and the number of attribute changes required to reach the generated alternatives. BAG-DSM outperforms the baseline in all of the experiments by a large margin. Additionally, the evaluation confirms BAG-DSM’s suitability for the task, i.e., on average, it generates at least one appropriate alternative within two seconds. The relation between the depth of the multi-attribute hierarchical models—a parameter that increases the search space exponentially—and the time to obtaining the first appropriate alternative was linear and not exponential, by which BAG-DSM’s scalability is empirically confirmed. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems Vol. 2)
15 pages, 1894 KiB  
Article
Process Mining the Performance of a Real-Time Healthcare 4.0 Systems Using Conditional Survival Models
by Adele H. Marshall and Aleksandar Novakovic
Algorithms 2022, 15(6), 196; https://doi.org/10.3390/a15060196 - 07 Jun 2022
Viewed by 1611
Abstract
As the world moves into the exciting age of Healthcare 4.0, it is essential that patients and clinicians have confidence and reassurance that the real-time clinical decision support systems being used throughout their care guarantee robustness and optimal quality of care. However, current [...] Read more.
As the world moves into the exciting age of Healthcare 4.0, it is essential that patients and clinicians have confidence and reassurance that the real-time clinical decision support systems being used throughout their care guarantee robustness and optimal quality of care. However, current systems involving autonomic behaviour, and those with no prior clinical feedback, have to date generally placed little focus on demonstrating robustness in the use of data and the final output, thus generating a lack of confidence. This paper addresses this challenge by introducing a new process mining approach based on a statistically robust methodology that relies on the utilisation of conditional survival models for the purpose of evaluating the performance of Healthcare 4.0 systems and the quality of the care provided. Its effectiveness is demonstrated by analysing the performance of a clinical decision support system operating in an intensive care setting with the goal of monitoring ventilated patients in real-time and notifying clinicians if the patient is predicted to be at risk of receiving injurious mechanical ventilation. Additionally, we demonstrate how the same metrics can be used for evaluating the patient quality of care. The proposed methodology can be used to analyse the performance of any Healthcare 4.0 system and the quality of care provided to the patient. Full article
(This article belongs to the Special Issue Process Mining and Its Applications)
18 pages, 300 KiB  
Article
Machine Learning and rs-fMRI to Identify Potential Brain Regions Associated with Autism Severity
by Igor D. Rodrigues, Emerson A. de Carvalho, Caio P. Santana and Guilherme S. Bastos
Algorithms 2022, 15(6), 195; https://doi.org/10.3390/a15060195 - 07 Jun 2022
Cited by 5 | Viewed by 2760
Abstract
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder characterized primarily by social impairments that manifest in different severity levels. In recent years, many studies have explored the use of machine learning (ML) and resting-state functional magnetic resonance images (rs-fMRI) to investigate the disorder. [...] Read more.
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder characterized primarily by social impairments that manifest in different severity levels. In recent years, many studies have explored the use of machine learning (ML) and resting-state functional magnetic resonance images (rs-fMRI) to investigate the disorder. These approaches evaluate brain oxygen levels to indirectly measure brain activity and compare typical developmental subjects with ASD ones. However, none of these works have tried to classify the subjects into severity groups using ML exclusively applied to rs-fMRI data. Information on ASD severity is frequently available since some tools used to support ASD diagnosis also include a severity measurement as their outcomes. This is the case for the Autism Diagnostic Observation Schedule (ADOS), which splits the diagnosis into three groups: ‘autism’, ‘autism spectrum’, and ‘non-ASD’. Therefore, this paper aims to use ML and fMRI to identify potential brain regions as biomarkers of ASD severity. We used the ADOS score as a severity measurement standard. The experiment used fMRI data of 202 subjects with an ASD diagnosis and their ADOS scores available from the ABIDE I consortium to determine the correct ASD sub-class for each one. Our results suggest a functional difference between the ASD sub-classes by reaching 73.8% accuracy on cingulum regions. These results show the feasibility of classifying and characterizing ASD using rs-fMRI data, indicating potential areas that could lead to severity biomarkers in further research. However, we highlight the need for more studies to confirm our findings. Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
20 pages, 15507 KiB  
Article
Exploring the Efficiencies of Spectral Isolation for Intelligent Wear Monitoring of Micro Drill Bit Automatic Regrinding In-Line Systems
by Ugochukwu Ejike Akpudo and Jang-Wook Hur
Algorithms 2022, 15(6), 194; https://doi.org/10.3390/a15060194 - 06 Jun 2022
Viewed by 2001
Abstract
Despite the increasing digitalization of equipment diagnostic/condition monitoring systems, it remains a challenge to accurately harness discriminant information from multiple sensors with unique spectral (and transient) behaviors. High-precision systems such as the automatic regrinding in-line equipment provide intelligent regrinding of micro drill bits; [...] Read more.
Despite the increasing digitalization of equipment diagnostic/condition monitoring systems, it remains a challenge to accurately harness discriminant information from multiple sensors with unique spectral (and transient) behaviors. High-precision systems such as the automatic regrinding in-line equipment provide intelligent regrinding of micro drill bits; however, immediate monitoring of the grinder during the grinding process has become necessary because ignoring it directly affects the drill bit’s life and the equipment’s overall utility. Vibration signals from the frame and the high-speed grinding wheels reflect the different health stages of the grinding wheel and can be exploited for intelligent condition monitoring. The spectral isolation technique as a preprocessing tool ensures that only the critical spectral segments of the inputs are retained for improved diagnostic accuracy at reduced computational costs. This study explores artificial intelligence-based models for learning the discriminant spectral information stored in the vibration signals and considers the accuracy and cost implications of spectral isolation of the critical spectral segments of the signals for accurate equipment monitoring. Results from one-dimensional convolutional neural networks (1D-CNN) and multi-layer perceptron (MLP) neural networks, respectively, reveal that spectral isolation offers a higher condition monitoring accuracy at reduced computational costs. Experimental results using different 1D-CNN and MLP architectures reveal 4.6% and 7.5% improved diagnostic accuracy by the 1D-CNNs and MLPs, respectively, at about 1.3% and 5.71% reduced computational costs, respectively. Full article
(This article belongs to the Special Issue Algorithms in Data Classification)
22 pages, 848 KiB  
Review
A Survey on Network Optimization Techniques for Blockchain Systems
by Robert Antwi, James Dzisi Gadze, Eric Tutu Tchao, Axel Sikora, Henry Nunoo-Mensah, Andrew Selasi Agbemenu, Kwame Opunie-Boachie Obour Agyekum, Justice Owusu Agyemang, Dominik Welte and Eliel Keelson
Algorithms 2022, 15(6), 193; https://doi.org/10.3390/a15060193 - 04 Jun 2022
Cited by 10 | Viewed by 4892
Abstract
The increase of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of IoT can be potentially improved by blockchain. However, blockchain technology suffers scalability issues which hinders integration with IoT. Solutions to blockchain’s scalability issues, such as [...] Read more.
The growth of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of IoT can be potentially improved by blockchain. However, blockchain technology suffers from scalability issues, which hinder integration with IoT. Solutions to blockchain’s scalability issues, such as minimizing the computational complexity of consensus algorithms or blockchain storage requirements, have received attention. However, to realize the full potential of blockchain in IoT, the inefficiencies of its inter-peer communication must also be addressed. For example, blockchain uses a flooding technique to share blocks, resulting in duplicates and inefficient bandwidth usage. Moreover, blockchain peers use a random neighbor selection (RNS) technique to decide on other peers with whom to exchange blockchain data. As a result, the peer-to-peer (P2P) topology formation limits the effective achievable throughput. This paper provides a survey on the state-of-the-art network structures and communication mechanisms used in blockchain and establishes the need for network-based optimization. Additionally, it discusses the blockchain architecture and its layers, categorizes the existing literature into these layers, and provides a survey on state-of-the-art optimization frameworks, analyzing their effectiveness and ability to scale. Finally, this paper presents recommendations for future work. Full article
(This article belongs to the Special Issue Advances in Blockchain Architecture and Consensus)
15 pages, 928 KiB  
Article
Constructing the Neighborhood Structure of VNS Based on Binomial Distribution for Solving QUBO Problems
by Dhidhi Pambudi and Masaki Kawamura
Algorithms 2022, 15(6), 192; https://doi.org/10.3390/a15060192 - 02 Jun 2022
Cited by 1 | Viewed by 2100
Abstract
The quadratic unconstrained binary optimization (QUBO) problem is categorized as an NP-hard combinatorial optimization problem. The variable neighborhood search (VNS) algorithm is one of the leading algorithms used to solve QUBO problems. As neighborhood structure change is the central concept in the VNS [...] Read more.
The quadratic unconstrained binary optimization (QUBO) problem is categorized as an NP-hard combinatorial optimization problem. The variable neighborhood search (VNS) algorithm is one of the leading algorithms used to solve QUBO problems. As neighborhood structure change is the central concept in the VNS algorithm, the design of the neighborhood structure is crucial. This paper presents a modified VNS algorithm called “B-VNS”, which can be used to solve QUBO problems. A binomial trial was used to construct the neighborhood structure, with the aim of reducing computation time. The B-VNS and VNS algorithms were tested on standard QUBO problems from Glover and Beasley, on standard max-cut problems from Helmberg–Rendl, and on those proposed by Burer, Monteiro, and Zhang. Finally, Mann–Whitney tests were conducted using α=0.05 to statistically compare the performance of the two algorithms. It was shown that the B-VNS and VNS algorithms are able to provide good solutions, but the B-VNS algorithm runs substantially faster. Furthermore, the B-VNS algorithm performed the best in all of the max-cut problems, regardless of problem size, and it performed the best in QUBO problems with sizes less than 500. The results suggest that the use of the binomial distribution to construct the neighborhood structure has the potential for further development. Full article
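A binomially constructed neighborhood is easy to sketch: each move flips a Binomial(n, p)-distributed number of randomly chosen bits before the objective is re-evaluated. The example below is an illustrative toy descent on a random QUBO instance, not the paper's B-VNS (whose neighborhood-change schedule and instance sets differ); the instance size, flip probability, and iteration count are assumptions.

```python
# Toy local search on a QUBO with a binomial flip neighborhood.
import numpy as np

rng = np.random.default_rng(0)

def qubo_value(Q, x):
    # QUBO objective x^T Q x for a 0/1 vector x
    return x @ Q @ x

def binomial_neighbor(x, p, rng):
    n = len(x)
    k = max(1, rng.binomial(n, p))            # number of bits to flip
    idx = rng.choice(n, size=k, replace=False)
    y = x.copy()
    y[idx] ^= 1                               # flip the chosen bits
    return y

n = 20
Q = rng.integers(-5, 6, size=(n, n))
Q = (Q + Q.T) // 2                            # symmetric integer QUBO matrix
x = rng.integers(0, 2, size=n)
best = qubo_value(Q, x)
for _ in range(500):                          # simple first-improvement descent
    y = binomial_neighbor(x, 0.1, rng)
    v = qubo_value(Q, y)
    if v < best:                              # minimization
        x, best = y, v
print(best)
```

Sampling the flip count from a binomial distribution concentrates moves near n·p bits while still occasionally exploring larger jumps, which is the intuition behind using it to shape the neighborhood.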
27 pages, 1576 KiB  
Article
Clustering Algorithm with a Greedy Agglomerative Heuristic and Special Distance Measures
by Guzel Shkaberina, Leonid Verenev, Elena Tovbis, Natalia Rezova and Lev Kazakovtsev
Algorithms 2022, 15(6), 191; https://doi.org/10.3390/a15060191 - 01 Jun 2022
Viewed by 2628
Abstract
Automatic grouping (clustering) involves dividing a set of objects into subsets (groups) so that the objects from one subset are more similar to each other than to the objects from other subsets according to some criterion. Kohonen neural networks are a class of [...] Read more.
Automatic grouping (clustering) involves dividing a set of objects into subsets (groups) so that the objects from one subset are more similar to each other than to the objects from other subsets according to some criterion. Kohonen neural networks are a class of artificial neural networks, the main element of which is a layer of adaptive linear adders operating on the principle of “winner takes all”. One of the advantages of Kohonen networks is their ability to perform online clustering. Greedy agglomerative procedures in clustering consistently improve the result in some neighborhood of a known solution, choosing as the next solution the option that provides the least increase in the objective function. Algorithms using the agglomerative greedy heuristics demonstrate precise and stable results for a k-means model. In our study, we propose a greedy agglomerative heuristic algorithm based on a Kohonen neural network with distance measure variations to cluster industrial products. Computational experiments demonstrate the comparative efficiency and accuracy of using the greedy agglomerative heuristic in the problem of grouping of industrial products into homogeneous production batches. Full article
(This article belongs to the Collection Feature Papers in Algorithms)
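The greedy agglomerative idea under a k-means objective can be sketched in a few lines: start from an excessive set of centres and repeatedly drop the centre whose removal increases the total squared error the least. This is an illustrative reduction only; the paper couples the heuristic with a Kohonen network and special distance measures, and the toy data and seeding below are assumptions.

```python
# Greedy agglomerative centre removal under a k-means (SSE) objective.
import numpy as np

def sse(X, centres):
    # total squared distance from each point to its nearest centre
    d = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).sum()

def greedy_agglomerative(X, centres, k):
    centres = [np.asarray(c) for c in centres]
    while len(centres) > k:
        # try removing each centre in turn; keep the cheapest removal
        costs = [sse(X, np.array(centres[:i] + centres[i + 1:]))
                 for i in range(len(centres))]
        centres.pop(int(np.argmin(costs)))
    return np.array(centres)

# Two tight clusters; seed with two redundant centres per cluster.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
seeds = np.vstack([X[:2], X[20:22]])
centres = greedy_agglomerative(X, seeds, k=2)
print(centres.shape)  # (2, 2)
```

Because removing a redundant centre inside a cluster is cheap while removing a cluster's last centre is expensive, the greedy removals naturally leave one centre per cluster.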
16 pages, 7382 KiB  
Article
A Missing Data Reconstruction Method Using an Accelerated Least-Squares Approximation with Randomized SVD
by Siriwan Intawichai and Saifon Chaturantabut
Algorithms 2022, 15(6), 190; https://doi.org/10.3390/a15060190 - 31 May 2022
Cited by 3 | Viewed by 1836
Abstract
An accelerated least-squares approach is introduced in this work by incorporating a greedy point selection method with randomized singular value decomposition (rSVD) to reduce the computational complexity of missing data reconstruction. The rSVD is used to speed up the computation of a low-dimensional [...] Read more.
An accelerated least-squares approach is introduced in this work by incorporating a greedy point selection method with randomized singular value decomposition (rSVD) to reduce the computational complexity of missing data reconstruction. The rSVD is used to speed up the computation of a low-dimensional basis that is required for the least-squares projection by employing randomness to generate a small matrix instead of a large matrix from high-dimensional data. A greedy point selection algorithm, based on the discrete empirical interpolation method, is then used to speed up the reconstruction process in the least-squares approximation. The accuracy and computational time reduction of the proposed method are demonstrated through three numerical experiments. The first two experiments consider standard testing images with missing pixels uniformly distributed on them, and the last numerical experiment considers a sequence of many incomplete two-dimensional miscible flow images. The proposed method is shown to accelerate the reconstruction process while maintaining roughly the same order of accuracy when compared to the standard least-squares approach. Full article