Algorithms, Volume 15, Issue 8 (August 2022) – 41 articles

Cover Story: Recent studies have been evaluating the presence of patterns associated with the occurrence of cancer in different types of tissue. In this article, we describe preliminary results for the automatic detection of cancer (Walker 256 tumor) in laboratory animals using preclinical microphotographs of the subject’s liver tissue. In the proposed approach, two different types of descriptors were explored to capture texture properties from the images, one based on spectral information and another built by application of a granulometry given by a family of morphological filters. We tried three different classifier methods (SVM, kNN, and logistic regression). Promising results were obtained for both descriptors in isolation, and even better results were achieved by combining classifiers based on them.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and open them with the free Adobe Reader.
19 pages, 443 KiB  
Article
QFC: A Parallel Software Tool for Feature Construction, Based on Grammatical Evolution
by Ioannis G. Tsoulos
Algorithms 2022, 15(8), 295; https://doi.org/10.3390/a15080295 - 21 Aug 2022
Cited by 4 | Viewed by 2090
Abstract
This paper presents and analyzes a programming tool that implements a method for classification and function regression problems. This method builds new features from existing ones with the assistance of a hybrid algorithm that makes use of artificial neural networks and grammatical evolution. The implemented software exploits modern multi-core computing units for faster execution. The method has been applied to a variety of classification and function regression problems, and an extensive comparison with other methods of computational intelligence is made. Full article
(This article belongs to the Special Issue Algorithms in Data Classification)

20 pages, 593 KiB  
Article
Properties and Recognition of Atom Graphs
by Geneviève Simonet and Anne Berry
Algorithms 2022, 15(8), 294; https://doi.org/10.3390/a15080294 - 19 Aug 2022
Viewed by 1305
Abstract
The atom graph of a connected graph is a graph whose vertices are the atoms obtained by clique minimal separator decomposition of this graph, and whose edges are the edges of all its atom trees. A graph G is an atom graph if there is a graph whose atom graph is isomorphic to G. We study the class of atom graphs, which is also the class of atom graphs of chordal graphs, and the associated recognition problem. We prove that each atom graph is a perfect graph and give a characterization of atom graphs in terms of a spanning tree, inspired by the characterization of clique graphs of chordal graphs as expanded trees. We also characterize the chordal graphs having the same atom and clique graph, and solve the recognition problem of atom graphs of two graph classes. Full article
(This article belongs to the Special Issue Combinatorial Designs: Theory and Applications)

20 pages, 2415 KiB  
Article
Defending against FakeBob Adversarial Attacks in Speaker Verification Systems with Noise-Adding
by Zesheng Chen, Li-Chi Chang, Chao Chen, Guoping Wang and Zhuming Bi
Algorithms 2022, 15(8), 293; https://doi.org/10.3390/a15080293 - 17 Aug 2022
Cited by 2 | Viewed by 1693
Abstract
Speaker verification systems use human voices as an important biometric to identify legitimate users, thus adding a security layer to voice-controlled Internet-of-things smart homes against illegal access. Recent studies have demonstrated that speaker verification systems are vulnerable to adversarial attacks such as FakeBob. The goal of this work is to design and implement a simple and lightweight defense system that is effective against FakeBob. We specifically study two opposite pre-processing operations on input audios in speaker verification systems: denoising, which attempts to remove or reduce perturbations, and noise-adding, which adds small noise to an input audio. Through experiments, we demonstrate that both methods are able to weaken the ability of FakeBob attacks significantly, with noise-adding achieving even better performance than denoising. Specifically, with denoising, the targeted attack success rate of FakeBob attacks can be reduced from 100% to 56.05% in GMM speaker verification systems and from 95% to only 38.63% in i-vector speaker verification systems, respectively. With noise-adding, those numbers can be further lowered to 5.20% and 0.50%, respectively. As a proactive measure, we study several possible adaptive FakeBob attacks against the noise-adding method. Experiment results demonstrate that noise-adding can still provide a considerable level of protection against these countermeasures. Full article
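The noise-adding defense described in the abstract above can be illustrated with a short sketch (not the authors' implementation): white Gaussian noise is added to the waveform at a chosen signal-to-noise ratio before it is passed to the speaker verifier. The SNR value and the verifier interface in the usage comment are assumptions for illustration only.

```python
import numpy as np

def add_defensive_noise(audio, snr_db=20.0, rng=None):
    """Add white Gaussian noise to a waveform at a chosen signal-to-noise ratio.

    A small random perturbation can disrupt a carefully optimized adversarial
    perturbation (such as FakeBob's) while leaving speaker traits largely intact.
    """
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return audio + rng.normal(0.0, np.sqrt(noise_power), size=audio.shape)

# Hypothetical usage: score = verifier.score(add_defensive_noise(input_audio))
```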

29 pages, 10473 KiB  
Article
Improving the Efficiency of Oncological Diagnosis of the Breast Based on the Combined Use of Simulation Modeling and Artificial Intelligence Algorithms
by Alexander V. Khoperskov and Maxim V. Polyakov
Algorithms 2022, 15(8), 292; https://doi.org/10.3390/a15080292 - 17 Aug 2022
Cited by 5 | Viewed by 2047
Abstract
This work includes a brief overview of the applications of the powerful and easy-to-perform method of microwave radiometry (MWR) for the diagnosis of various diseases. The main goal of this paper is to develop a method for diagnosing breast oncology based on machine learning algorithms using thermometric data, both real medical measurements and simulation results of MWR examinations. The dataset includes distributions of deep and skin temperatures calculated in numerical models of the dynamics of thermal and radiation fields inside biological tissue. The constructed combined dataset allows us to explore the limits of applicability of the MWR method for detecting weak tumors. We use convolutional neural networks and classic machine learning algorithms (k-nearest neighbors, naive Bayes classifier, support vector machine) to classify data. The construction of Kohonen self-organizing maps to explore the structure of our combined dataset demonstrated differences between the temperatures of patients with positive and negative diagnoses. Our analysis shows that the MWR can detect tumors with a radius of up to 0.5 cm if they are at the stage of rapid growth, when the tumor volume doubling occurs in approximately 100 days or less. The use of convolutional neural networks for MWR provides both high sensitivity (sens=0.86) and specificity (spec=0.82), which is an advantage over other methods for diagnosing breast cancer. A new modified scheme for medical measurements of IR temperature and brightness temperature is proposed for a larger number of points in the breast compared to the classical scheme. This approach can increase the effectiveness and sensitivity of diagnostics by several percent. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms for Medicine)

23 pages, 3231 KiB  
Article
Social Media Hate Speech Detection Using Explainable Artificial Intelligence (XAI)
by Harshkumar Mehta and Kalpdrum Passi
Algorithms 2022, 15(8), 291; https://doi.org/10.3390/a15080291 - 17 Aug 2022
Cited by 19 | Viewed by 6537
Abstract
Explainable artificial intelligence (XAI) characteristics have flexible and multifaceted potential in hate speech detection by deep learning models. Interpreting and explaining the decisions made by complex artificial intelligence (AI) models in order to understand their decision-making process was the aim of this research. As a part of this research study, two datasets were taken to demonstrate hate speech detection using XAI. Data preprocessing was performed to clean the data of any inconsistencies, clean the text of the tweets, tokenize and lemmatize the text, etc. Categorical variables were also simplified in order to generate a clean dataset for training purposes. Exploratory data analysis was performed on the datasets to uncover various patterns and insights. Various pre-existing models were applied to the Google Jigsaw dataset, such as decision trees, k-nearest neighbors, multinomial naïve Bayes, random forest, logistic regression, and long short-term memory (LSTM), among which LSTM achieved an accuracy of 97.6%. Explainable methods such as LIME (local interpretable model-agnostic explanations) were applied to the HateXplain dataset. Variants of the BERT (bidirectional encoder representations from transformers) model, such as BERT + ANN (artificial neural network) with an accuracy of 93.55% and BERT + MLP (multilayer perceptron) with an accuracy of 93.67%, were created to achieve a good performance in terms of explainability using the ERASER (evaluating rationales and simple English reasoning) benchmark. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
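As a rough illustration of the LIME step mentioned in the abstract above, the sketch below explains a single text prediction with the lime package. The three class names and the predict_proba stand-in (which in practice would wrap a trained model such as the BERT + ANN variant) are assumptions, not the authors' code.

```python
import numpy as np
from lime.lime_text import LimeTextExplainer

def predict_proba(texts):
    # Stand-in for a trained classifier: must return an array of shape
    # (len(texts), n_classes). In practice this would wrap the BERT + ANN model.
    return np.full((len(texts), 3), 1.0 / 3.0)

explainer = LimeTextExplainer(class_names=["hateful", "offensive", "normal"])
explanation = explainer.explain_instance(
    "example tweet to be explained",  # instance whose prediction is explained
    predict_proba,                    # black-box prediction function
    num_features=6,                   # number of tokens to attribute weight to
)
print(explanation.as_list())          # [(token, weight), ...]
```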

17 pages, 2157 KiB  
Article
Discrete-Time Observations of Brownian Motion on Lie Groups and Homogeneous Spaces: Sampling and Metric Estimation
by Mathias Højgaard Jensen, Sarang Joshi and Stefan Sommer
Algorithms 2022, 15(8), 290; https://doi.org/10.3390/a15080290 - 17 Aug 2022
Viewed by 1608
Abstract
We present schemes for simulating Brownian bridges on complete and connected Lie groups and homogeneous spaces. We use this to construct an estimation scheme for recovering an unknown left- or right-invariant Riemannian metric on the Lie group from samples. We subsequently show how pushing forward the distributions generated by Brownian motions on the group results in distributions on homogeneous spaces that exhibit a non-trivial covariance structure. The pushforward measure gives rise to new non-parametric families of distributions on commonly occurring spaces such as spheres and symmetric positive tensors. We extend the estimation scheme to fit these distributions to homogeneous space-valued data. We demonstrate both the simulation schemes and estimation procedures on Lie groups and homogeneous spaces, including SPD(3) = GL+(3)/SO(3) and S² = SO(3)/SO(2). Full article

15 pages, 678 KiB  
Article
Biased-Randomized Discrete-Event Heuristics for Dynamic Optimization with Time Dependencies and Synchronization
by Juliana Castaneda, Mattia Neroni, Majsa Ammouriova, Javier Panadero and Angel A. Juan
Algorithms 2022, 15(8), 289; https://doi.org/10.3390/a15080289 - 16 Aug 2022
Cited by 3 | Viewed by 1618
Abstract
Many real-life combinatorial optimization problems are subject to a high degree of dynamism, while, simultaneously, a certain level of synchronization among agents and events is required. Thus, for instance, in ride-sharing operations, the arrival of vehicles at pick-up points needs to be synchronized with the times at which users reach these locations so that waiting times do not represent an issue. Likewise, in warehouse logistics, the availability of automated guided vehicles at an entry point needs to be synchronized with the arrival of new items to be stored. In many cases, as operational decisions are made, a series of interdependent events are scheduled for the future, thus making the synchronization task one that traditional optimization methods cannot handle easily. On the contrary, discrete-event simulation allows for processing a complex list of scheduled events in a natural way, although the optimization component is missing here. This paper discusses a hybrid approach in which a heuristic is driven by a list of discrete events and then extended into a biased-randomized algorithm. As the paper discusses in detail, the proposed hybrid approach allows us to efficiently tackle optimization problems with complex synchronization issues. Full article
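A minimal sketch of the two ingredients combined in the abstract above, under illustrative assumptions (the geometric bias and the event-handler interface are generic choices, not the authors' exact procedure): a time-ordered event list processed with a heap, and a biased-randomized selection that favors, but does not always pick, the greedy candidate.

```python
import heapq
import math
import random

def biased_pick(candidates, key, beta=0.3, rng=random):
    """Geometric-biased selection: the best-ranked candidate is most likely,
    but any candidate can be chosen (classic biased-randomization scheme)."""
    ranked = sorted(candidates, key=key)
    idx = int(math.log(rng.random()) / math.log(1.0 - beta)) % len(ranked)
    return ranked[idx]

def run_discrete_events(initial_events, horizon):
    """Process events in time order; each handler may schedule dependent events,
    which is how synchronization between agents and operations is represented."""
    heap = [(t, i, h) for i, (t, h) in enumerate(initial_events)]
    heapq.heapify(heap)
    counter = len(heap)
    while heap and heap[0][0] <= horizon:
        time, _, handler = heapq.heappop(heap)
        for new_time, new_handler in handler(time):
            counter += 1
            heapq.heappush(heap, (new_time, counter, new_handler))
```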

15 pages, 9279 KiB  
Article
A Coupled Variational System for Image Decomposition along with Edges Detection
by Jianlou Xu, Yuying Guo, Yan Hao and Leigang Huo
Algorithms 2022, 15(8), 288; https://doi.org/10.3390/a15080288 - 16 Aug 2022
Viewed by 1463
Abstract
In order to better decompose images and protect their edges, in this paper we propose a coupled variational system consisting of two steps. In the first step, an improved weighted variational model is introduced to obtain the cartoon and texture components. In the second step, using the obtained cartoon image, a new vector function describing the pseudo-edges of the image is obtained from a Tikhonov regularization variational model. Because the Tikhonov regularization model is equivalent to Gaussian linear filtering, the obtained vector function is smoother. To solve the coupled system, we give an alternating direction method, a primal-dual method and Gauss-Seidel iteration. Using the coupled system, we can not only separate the cartoon and texture parts, but also extract the edges. Extensive numerical experiments are given to show the effectiveness of the proposed method compared with other variational methods. Full article
(This article belongs to the Special Issue Young Researchers in Imaging Science: Modelling and Algorithms)
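The smoothing property invoked in the abstract above can be written in a generic form (the notation below is illustrative and may differ from the authors' exact model): a Tikhonov-regularized fit acts as a low-pass filter on its data term, which is why the recovered edge field is smooth.

```latex
\min_{\mathbf{v}}\; \lambda \|\mathbf{v}-\mathbf{g}\|_2^2 + \|\nabla \mathbf{v}\|_2^2
\;\Longrightarrow\;
(\lambda I - \Delta)\,\mathbf{v} = \lambda\,\mathbf{g}
\;\Longrightarrow\;
\widehat{\mathbf{v}}(\xi) = \frac{\lambda}{\lambda + |\xi|^2}\,\widehat{\mathbf{g}}(\xi),
```

i.e., high frequencies of the data g are attenuated, much like applying a Gaussian linear filter.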

17 pages, 3390 KiB  
Article
CNN Based on Transfer Learning Models Using Data Augmentation and Transformation for Detection of Concrete Crack
by Md. Monirul Islam, Md. Belal Hossain, Md. Nasim Akhtar, Mohammad Ali Moni and Khondokar Fida Hasan
Algorithms 2022, 15(8), 287; https://doi.org/10.3390/a15080287 - 15 Aug 2022
Cited by 39 | Viewed by 4555
Abstract
Cracks in concrete cause initial structural damage to civil infrastructures such as buildings, bridges, and highways, which in turn causes further damage and is thus regarded as a serious safety concern. Early detection can assist in preventing further damage and can enable safety in advance by avoiding any possible accident caused while using those infrastructures. Machine learning-based detection is gaining favor over time-consuming classical detection approaches that can only fulfill the objective of early detection. To identify concrete surface cracks from images, this research developed a transfer learning (TL) approach based on Convolutional Neural Networks (CNN). This work employs the transfer learning strategy by leveraging four existing deep learning (DL) models named VGG16, ResNet18, DenseNet161, and AlexNet with pre-trained (trained on ImageNet) weights. To validate the performance of each model, four performance indicators are used: accuracy, recall, precision, and F1-score. Using the publicly available CCIC dataset, the suggested technique on AlexNet outperforms existing models with a testing accuracy of 99.90%, precision of 99.92%, recall of 99.80%, and F1-score of 99.86% for the crack class. Our approach is further validated using an external dataset, BWCI, available on Kaggle. Using BWCI, the models VGG16, ResNet18, DenseNet161, and AlexNet achieved accuracies of 99.90%, 99.60%, 99.80%, and 99.90%, respectively. This proposed CNN-based transfer learning method is demonstrated to be more effective at detecting cracks in concrete structures and is also applicable to other detection tasks. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
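A minimal PyTorch/torchvision sketch of the transfer-learning recipe summarized above, under illustrative assumptions (the torchvision ≥ 0.13 weights API, the frozen feature extractor, the two-class head and the hyperparameters are choices for this sketch, not the authors' exact training setup):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load AlexNet with ImageNet-pretrained weights and replace the final layer
# for the two-class crack / no-crack task; the same pattern applies to VGG16,
# ResNet18 and DenseNet161.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

# Optionally freeze the pretrained convolutional features and train only the head.
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad),
                             lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One supervised update on a batch of crack images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```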

12 pages, 10060 KiB  
Article
Temari Balls, Spheres, SphereHarmonic: From Japanese Folkcraft to Music
by Maria Mannone and Takashi Yoshino
Algorithms 2022, 15(8), 286; https://doi.org/10.3390/a15080286 - 14 Aug 2022
Viewed by 3032
Abstract
Temari balls are traditional Japanese toys and artworks. The variety of their geometries and tessellations can be investigated formally and computationally by means of combinatorics. As a further step, we also propose a musical application of the core idea of Temari balls. In fact, inspired by the classical idea of the music of the spheres and by the CubeHarmonic, a musical application of the Rubik’s cube, we present the concept of a new musical instrument, the SphereHarmonic. The mathematical (and musical) description of Temari balls lies in the wide background of interactions between art and combinatorics. Concerning the methods, we present the tools of permutations and tessellations we adopted here, and the core idea for the SphereHarmonic. As results, we first describe a classification of structures according to the theory of groups. Then, we summarize the main steps implemented in our code to make the SphereHarmonic playable on a laptop. Our study explores an aspect of the deep connections between mutually inspiring scientific and artistic thinking. Full article
(This article belongs to the Special Issue Combinatorial Designs: Theory and Applications)

17 pages, 1176 KiB  
Article
Fast Conflict Detection for Multi-Dimensional Packet Filters
by Chun-Liang Lee, Guan-Yu Lin and Yaw-Chung Chen
Algorithms 2022, 15(8), 285; https://doi.org/10.3390/a15080285 - 14 Aug 2022
Cited by 1 | Viewed by 1561
Abstract
To support advanced network services, Internet routers must perform packet classification based on a set of rules called packet filters. If two or more filters overlap, a filter conflict will occur and lead to ambiguity in packet classification. Further, it may affect network security or even the correctness of packet routing. Hence, it is necessary to detect conflicts to avoid the above problems. In recent years, many conflict detection algorithms have been proposed, but most of them detect conflicts for only prefix fields (i.e., source/destination IP address fields) of filters. For greater practicality, conflict detection must include non-prefix fields such as source/destination IP port fields and the protocol field. In this study, we propose an efficient conflict detection algorithm for five-dimensional filters, which include both prefix and non-prefix fields. In the proposed algorithm, a tiny lookup table is created for quickly filtering out a large portion of non-conflicting filter pairs, thereby reducing the overall conflict detection time. Experimental results show that our algorithm reduces the detection time by 10% to 28% compared with other conflict detection algorithms for 20 K filter databases. More importantly, our algorithm can be used to extend any existing conflict detection algorithms for two-dimensional filters to support fast conflict detection for five-dimensional filters. Full article
(This article belongs to the Special Issue Algorithms for Communication Networks)
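As context for the abstract above, the sketch below shows the basic pairwise test that conflict detection builds on, under one common definition (two filters conflict when they overlap on every field yet specify different actions). The paper's table-based speed-up is not reproduced, and the range encoding of prefix fields is an illustrative choice.

```python
from typing import List, Tuple

Range = Tuple[int, int]                 # inclusive [low, high] on one dimension

def prefix_to_range(value: int, length: int, bits: int = 32) -> Range:
    """Encode an IP prefix (address value + prefix length) as an integer range,
    so prefix and non-prefix fields (ports, protocol) share one representation."""
    span = 1 << (bits - length)
    low = value & ~(span - 1)
    return (low, low + span - 1)

def overlap(a: Range, b: Range) -> bool:
    return a[0] <= b[1] and b[0] <= a[1]

def filters_conflict(f1: List[Range], f2: List[Range],
                     action1: str, action2: str) -> bool:
    """Five-dimensional filters conflict if they intersect on every field but
    map matching packets to different actions."""
    return action1 != action2 and all(overlap(a, b) for a, b in zip(f1, f2))
```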

32 pages, 1318 KiB  
Article
A Vibration Based Automatic Fault Detection Scheme for Drilling Process Using Type-2 Fuzzy Logic
by Satyam Paul, Rob Turnbull, Davood Khodadad and Magnus Löfstrand
Algorithms 2022, 15(8), 284; https://doi.org/10.3390/a15080284 - 12 Aug 2022
Cited by 7 | Viewed by 1538
Abstract
A fault detection system based on automated concepts is a crucial aspect of the industrial process. The automated system can contribute efficiently to minimizing equipment downtime, thereby reducing production costs. This paper highlights a novel model-based fault detection (FD) approach combined with an interval type-2 (IT2) Takagi–Sugeno (T–S) fuzzy system for fault detection in the drilling process. System uncertainty is considered prevalent during the process, and the type-2 fuzzy methodology is utilized to deal with these uncertainties in an effective way. Two theorems are developed: Theorem 1 proves the stability of the fuzzy modeling, and Theorem 2 establishes the stability of the fault detector algorithm. A Lyapunov stability analysis is implemented to validate the stability criteria of Theorems 1 and 2. In order to validate the effective implementation of the complex theoretical approach, a numerical analysis is carried out at the end. The proposed methodology can be implemented in real time to detect faults in the drilling tool while maintaining the stability of the proposed fault detection estimator. This is critical for increasing the productivity and quality of the machining process, and it also helps improve the surface finish of the workpiece, satisfying customer needs and expectations. Full article

27 pages, 1546 KiB  
Review
Adversarial Training Methods for Deep Learning: A Systematic Review
by Weimin Zhao, Sanaa Alwidian and Qusay H. Mahmoud
Algorithms 2022, 15(8), 283; https://doi.org/10.3390/a15080283 - 12 Aug 2022
Cited by 24 | Viewed by 11923
Abstract
Deep neural networks are exposed to the risk of adversarial attacks via the fast gradient sign method (FGSM), projected gradient descent (PGD) attacks, and other attack algorithms. Adversarial training is one of the methods used to defend against the threat of adversarial attacks. It is a training schema that utilizes an alternative objective function to provide model generalization for both adversarial data and clean data. In this systematic review, we focus particularly on adversarial training as a method of improving the defensive capacities and robustness of machine learning models. Specifically, we focus on adversarial sample accessibility through adversarial sample generation methods. The purpose of this systematic review is to survey state-of-the-art adversarial training and robust optimization methods in order to identify the research gaps within this field of applications. The literature search was conducted using Engineering Village (an engineering literature search tool that provides access to 14 engineering literature and patent databases), where we collected 238 related papers. The papers were filtered according to defined inclusion and exclusion criteria, and information was extracted from these papers according to a defined strategy. A total of 78 papers published between 2016 and 2021 were selected. Data were extracted and categorized using a defined strategy, and bar plots and comparison tables were used to show the data distribution. The findings of this review indicate that there are limitations to adversarial training methods and robust optimization. The most common problems are related to data generalization and overfitting. Full article
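For readers unfamiliar with the basic loop that the review surveys, here is a minimal PyTorch sketch of adversarial training with FGSM as the example generator; the epsilon value, the clean/adversarial mixing weight and the loss choice are illustrative assumptions, not a prescription from the review.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=8 / 255):
    """Craft an FGSM adversarial example: one signed-gradient ascent step."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, alpha=0.5):
    """One common schema: optimize a mixture of clean and adversarial losses."""
    x_adv = fgsm_example(model, x, y)
    optimizer.zero_grad()
    loss = alpha * F.cross_entropy(model(x), y) \
        + (1 - alpha) * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```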

27 pages, 363 KiB  
Review
Techniques and Paradigms in Modern Game AI Systems
by Yunlong Lu and Wenxin Li
Algorithms 2022, 15(8), 282; https://doi.org/10.3390/a15080282 - 12 Aug 2022
Cited by 5 | Viewed by 5794
Abstract
Games have long been benchmarks and test-beds for AI algorithms. With the development of AI techniques and the boost of computational power, modern game AI systems have achieved superhuman performance in many games played by humans. These games have various features and present different challenges to AI research, so the algorithms used in each of these AI systems vary. This survey aims to give a systematic review of the techniques and paradigms used in modern game AI systems. By decomposing each of the recent milestones into basic components and comparing them based on the features of games, we summarize the common paradigms to build game AI systems and their scope and limitations. We claim that deep reinforcement learning is the most general methodology to become a mainstream method for games with higher complexity. We hope this survey can both provide a review of game AI algorithms and bring inspiration to the game AI community for future directions. Full article
(This article belongs to the Special Issue Algorithms for Games AI)

22 pages, 3781 KiB  
Article
Automated Pixel-Level Deep Crack Segmentation on Historical Surfaces Using U-Net Models
by Esraa Elhariri, Nashwa El-Bendary and Shereen A. Taie
Algorithms 2022, 15(8), 281; https://doi.org/10.3390/a15080281 - 11 Aug 2022
Cited by 4 | Viewed by 2363
Abstract
Crack detection on historical surfaces is of significant importance for credible and reliable inspection in heritage structural health monitoring. Thus, several object detection deep learning models are utilized for crack detection. However, the majority of these models are powerful mainly for the classification task, providing only primitive detection of the crack location. On the other hand, several state-of-the-art studies have proven that pixel-level crack segmentation can powerfully locate objects in images for more accurate and reasonable classification. In order to realize pixel-level deep crack segmentation in images of historical buildings, this paper proposes an automated deep crack segmentation approach designed based on an exhaustive investigation of several U-Net deep learning network architectures. The utilization of pixel-level crack segmentation with U-Net deep learning ensures the identification of pixels that are important for the decision of image classification. Moreover, the proposed approach employs the deep learned features extracted by the U-Net deep learning model to precisely describe crack characteristics for better pixel-level crack segmentation. A primary image dataset of various crack types and severities is collected from historical building surfaces and used for training and evaluating the performance of the proposed approach. Three variants of the U-Net convolutional network architecture are considered for the deep pixel-level segmentation of different types of cracks on historical surfaces. Promising results of the proposed approach using the U2Net deep learning model are obtained, with a Dice score and mean Intersection over Union (mIoU) of 71.09% and 78.38%, respectively, achieved at the pixel level. Conclusively, the significance of this work is the investigation of the impact of utilizing pixel-level deep crack segmentation, supported by deep learned features, through adopting variants of the U-Net deep learning model for crack detection on historical surfaces. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection)
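For reference, the two segmentation metrics reported above are standard; in their usual definitions (P the predicted crack mask, G the ground-truth mask, C the number of classes):

```latex
\mathrm{Dice}(P,G) = \frac{2\,|P \cap G|}{|P| + |G|},
\qquad
\mathrm{mIoU} = \frac{1}{C}\sum_{c=1}^{C}\frac{|P_c \cap G_c|}{|P_c \cup G_c|}.
```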

15 pages, 435 KiB  
Article
Research Trends, Enabling Technologies and Application Areas for Big Data
by Lars Lundberg and Håkan Grahn
Algorithms 2022, 15(8), 280; https://doi.org/10.3390/a15080280 - 09 Aug 2022
Cited by 2 | Viewed by 2345
Abstract
The availability of large amounts of data in combination with Big Data analytics has transformed many application domains. In this paper, we provide insights into how the area has developed in the last decade. First, we identify seven major application areas and six groups of important enabling technologies for Big Data applications and systems. Then, using bibliometrics and an extensive literature review of more than 80 papers, we identify the most important research trends in these areas. In addition, our bibliometric analysis also includes trends in different geographical regions. Our results indicate that manufacturing and agriculture or forestry are the two application areas with the fastest growth. Furthermore, our bibliometric study shows that deep learning and edge or fog computing are the enabling technologies increasing the most. We believe that the data presented in this paper provide a good overview of the current research trends in Big Data and that this kind of information is very useful when setting strategic agendas for Big Data research. Full article

17 pages, 3352 KiB  
Article
High-Fidelity Surrogate Based Multi-Objective Optimization Algorithm
by Adel Younis and Zuomin Dong
Algorithms 2022, 15(8), 279; https://doi.org/10.3390/a15080279 - 07 Aug 2022
Cited by 1 | Viewed by 1537
Abstract
Despite significant gains in computing power, computationally expensive models hinder the use of conventional optimization procedures that must be invoked repeatedly during the optimization process in real-world engineering applications. As a result, surrogate models that require far less time and resources to analyze are used in place of these time-consuming analyses. In multi-objective optimization (MOO) problems involving pricey analysis and simulation techniques such as multi-physics modeling and simulation, finite element analysis (FEA), and computational fluid dynamics (CFD), surrogate models are found to be a promising endeavor, particularly for the optimization of complex engineering design problems involving black box functions. In order to reduce the expense of fitness function evaluations and locate the Pareto frontier for MOO problems, the automated multi-objective surrogate-based Pareto finder MOO algorithm (AMSP) is proposed. Utilizing data samples taken from the feasible design region, the algorithm creates three surrogate models. The algorithm repeats the process of sampling and updating the Pareto set, assigning weighting factors to those surrogates in accordance with their root mean squared error values, until a Pareto frontier is discovered. AMSP was successfully employed to identify the Pareto set and the Pareto frontier. The approach was tested on multi-objective benchmark functions and engineering design examples such as the airfoil shape geometry of a wind turbine. The cost of computing the Pareto optima for the test functions and a real engineering design problem is reduced, and promising results were obtained. Full article

24 pages, 11958 KiB  
Article
Simulation of Low-Speed Buoyant Flows with a Stabilized Compressible/Incompressible Formulation: The Full Navier–Stokes Approach versus the Boussinesq Model
by Guillermo Hauke and Jorge Lanzarote
Algorithms 2022, 15(8), 278; https://doi.org/10.3390/a15080278 - 05 Aug 2022
Cited by 2 | Viewed by 1472
Abstract
This paper compares two strategies to compute buoyancy-driven flows using stabilized methods. Both formulations are based on a unified approach for solving compressible and incompressible flows, which solves the continuity, momentum, and total energy equations in a coupled entropy-consistent way. The first approach introduces the variable density thermodynamics of the liquid or gas without any artificial buoyancy terms, i.e., without applying any approximate models into the Navier–Stokes equations. Furthermore, this formulation holds for flows driven by high temperature differences. Further advantages of this formulation are seen in the fact that it conserves the total energy and it lacks the incompressibility inconsistencies due to volume changes induced by temperature variations. The second strategy uses the Boussinesq approximation to account for temperature-driven forces. This method models the thermal terms in the momentum equation through a temperature-dependent nonlinear source term. Computer examples show that the thermodynamic approach, which does not introduce any artificial terms into the Navier–Stokes equations, is conceptually simpler and, with the incompressible stabilization matrix, attains similar residual convergence with iteration count to methods based on the Boussinesq approximation. For the Boussinesq model, the SUPG and SGS methods are compared, displaying very similar computational behavior. Finally, the VMS a posteriori error estimator is applied to adapt the mesh, helping to achieve better accuracy for the same number of degrees of freedom. Full article
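In generic notation (not necessarily the paper's), the Boussinesq model that the full variable-density formulation is compared against keeps the density constant everywhere except in a linearized buoyancy source term in the momentum equation:

```latex
\rho_0\!\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
= -\nabla p + \mu\,\nabla^{2}\mathbf{u} - \rho_0\,\beta\,(T - T_0)\,\mathbf{g},
\qquad \nabla\cdot\mathbf{u} = 0,
```

where β is the thermal expansion coefficient and T₀ a reference temperature; the full compressible/incompressible formulation instead retains ρ(T, p) in every term and conserves total energy, which is why it remains valid for large temperature differences.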

16 pages, 476 KiB  
Article
Solar Photovoltaic Integration in Monopolar DC Networks via the GNDO Algorithm
by Oscar Danilo Montoya, Walter Gil-González and Luis Fernando Grisales-Noreña
Algorithms 2022, 15(8), 277; https://doi.org/10.3390/a15080277 - 05 Aug 2022
Cited by 8 | Viewed by 1843
Abstract
This paper focuses on minimizing the annual operative costs in monopolar DC distribution networks with the inclusion of solar photovoltaic (PV) generators while considering a planning period of 20 years. This problem is formulated through a mixed-integer nonlinear programming (MINLP) model, in which binary variables define the nodes where the PV generators must be located, and continuous variables are related to the power flow solution and the optimal sizes of the PV sources. The implementation of a master–slave optimization approach is proposed in order to address the complexity of the MINLP formulation. In the master stage, the discrete-continuous generalized normal distribution optimizer (DCGNDO) is implemented to define the nodes for the PV sources along with their sizes. The slave stage corresponds to a specialized power flow approach for monopolar DC networks known as the successive approximation power flow method, which helps determine the total energy generation at the substation terminals and its expected operative costs in the planning period. Numerical results in the 33- and 69-bus grids demonstrate the effectiveness of the DCGNDO optimizer compared to the discrete-continuous versions of the Chu and Beasley genetic algorithm and the vortex search algorithm. Full article

5 pages, 180 KiB  
Editorial
Algorithms for Reliable Estimation, Identification and Control
by Andreas Rauh, Luc Jaulin and Julien Alexandre dit Sandretto
Algorithms 2022, 15(8), 276; https://doi.org/10.3390/a15080276 - 05 Aug 2022
Viewed by 1208
Abstract
The two-part Special Issue “Algorithms for Reliable Estimation, Identification and Control” deals with the optimization of feedforward and feedback controllers with respect to predefined performance criteria as well as the state and parameter estimation for systems with uncertainty [...] Full article
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control II)
16 pages, 4140 KiB  
Article
New Step Size Control Algorithm for Semi-Implicit Composition ODE Solvers
by Petr Fedoseev, Dmitriy Pesterev, Artur Karimov and Denis Butusov
Algorithms 2022, 15(8), 275; https://doi.org/10.3390/a15080275 - 04 Aug 2022
Cited by 7 | Viewed by 1561
Abstract
Composition is a powerful and simple approach for obtaining numerical integration methods of high accuracy order while preserving the geometric properties of a basic integrator. Adaptive step size control allows one to significantly increase the performance of numerical integration methods. However, there is a lack of efficient step size control algorithms for composition solvers due to some known difficulties in constructing a low-cost embedded local error estimator. In this paper, we propose a novel local error estimator based on the difference between the semi-implicit CD method and semi-explicit midpoint methods within a common composition scheme. We evaluate the performance of adaptive composition schemes with the proposed local error estimator, comparing it with other state-of-the-art approaches. Using a comprehensive set of nonlinear test problems, we show that composition ODE solvers with the proposed step size control algorithm possess higher numerical efficiency than known methods. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
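The embedded estimator proposed in the paper is not reproduced here, but the controller it feeds is the standard accept/reject rule sketched below; the safety factor and growth limits are conventional assumptions.

```python
def adapt_step(h, err, tol, order, safety=0.9, h_min=1e-12, h_max=1.0):
    """Standard step size controller: accept the step when err <= tol and
    rescale h by (tol / err) ** (1 / (order + 1)) either way."""
    factor = 2.0 if err == 0.0 else safety * (tol / err) ** (1.0 / (order + 1))
    factor = min(2.0, max(0.2, factor))       # limit growth and shrinkage
    h_new = min(h_max, max(h_min, h * factor))
    return err <= tol, h_new

# Usage inside an integrator loop: any embedded local error estimate `err`
# (for example, a difference between two composition sub-methods) plugs in here.
```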

16 pages, 1054 KiB  
Article
A Neural Network Approach for the Analysis of Reproducible Ribo–Seq Profiles
by Giorgia Giacomini, Caterina Graziani, Veronica Lachi, Pietro Bongini, Niccolò Pancino, Monica Bianchini, Davide Chiarugi, Angelo Valleriani and Paolo Andreini
Algorithms 2022, 15(8), 274; https://doi.org/10.3390/a15080274 - 04 Aug 2022
Cited by 2 | Viewed by 2861
Abstract
In recent years, the ribosome profiling technique (Ribo–seq) has emerged as a powerful method for globally monitoring the translation process in vivo at single-nucleotide resolution. Based on deep sequencing of mRNA fragments, Ribo–seq makes it possible to obtain profiles that reflect the time spent by ribosomes in translating each part of an open reading frame. Unfortunately, the profiles produced by this method can vary significantly in different experimental setups and are characterized by poor reproducibility. To address this problem, we have employed a statistical method for the identification of highly reproducible Ribo–seq profiles, which was tested on a set of E. coli genes. State-of-the-art artificial neural network models have been used to validate the quality of the produced sequences. Moreover, new insights into the dynamics of ribosome translation have been provided through a statistical analysis of the obtained sequences. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Bioinformatics Problems)

14 pages, 1141 KiB  
Article
Communication-Efficient Vertical Federated Learning
by Afsana Khan, Marijn ten Thij and Anna Wilbik
Algorithms 2022, 15(8), 273; https://doi.org/10.3390/a15080273 - 04 Aug 2022
Cited by 9 | Viewed by 3658
Abstract
Federated learning (FL) is a privacy-preserving distributed learning approach that allows multiple parties to jointly build machine learning models without disclosing sensitive data. Although FL has solved the problem of collaboration without compromising privacy, it has a significant communication overhead due to the repetitive updating of models during training. Several studies have proposed communication-efficient FL approaches to address this issue, but adequate solutions are still lacking in cases where parties must deal with different data features, also referred to as vertical federated learning (VFL). In this paper, we propose a communication-efficient approach for VFL that compresses the local data of clients and then aggregates the compressed data from all clients to build an ML model. Since local data are shared in compressed form, the privacy of these data is preserved. Experiments on publicly available benchmark datasets using our proposed method show that the final model obtained by aggregating the compressed data from the clients outperforms the local models of the clients. Full article
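A minimal sketch of the flow the abstract describes, under illustrative assumptions: each party compresses its own vertical feature block locally (PCA is used here purely as a stand-in compressor; the paper's compression method may differ), and the server concatenates the compressed blocks and trains a single model. The synthetic data are for demonstration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def client_compress(features, n_components=4):
    """Each party compresses its own (vertical) feature block locally,
    so only the compressed representation ever leaves the client."""
    return PCA(n_components=n_components).fit_transform(features)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                    # shared sample labels
client_a = rng.normal(size=(200, 20)) + y[:, None]  # party A's feature block
client_b = rng.normal(size=(200, 30))               # party B's feature block

# Server side: aggregate the compressed blocks and fit one global model.
aggregated = np.hstack([client_compress(client_a), client_compress(client_b)])
model = LogisticRegression(max_iter=1000).fit(aggregated, y)
print("training accuracy:", model.score(aggregated, y))
```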

17 pages, 1134 KiB  
Article
Building a Technology Recommender System Using Web Crawling and Natural Language Processing Technology
by Nathalie Campos Macias, Wilhelm Düggelin, Yesim Ruf and Thomas Hanne
Algorithms 2022, 15(8), 272; https://doi.org/10.3390/a15080272 - 03 Aug 2022
Cited by 7 | Viewed by 3115
Abstract
Finding, retrieving, and processing information on technology from the Internet can be a tedious task. This article investigates whether technological concepts such as web crawling and natural language processing are suitable means for knowledge discovery from unstructured information and for the development of a technology recommender system, by developing a prototype of such a system. It also analyzes how well the resulting prototype performs with regard to effectiveness and efficiency. The research strategy, based on design science research, consists of four stages: (1) awareness generation; (2) suggestion of a solution considering the information retrieval process; (3) development of an artefact in the form of a Python computer program; and (4) evaluation of the prototype within the scope of a comparative experiment. The evaluation shows that the prototype is highly efficient in retrieving basic and rather random extractive text summaries from websites that include the desired search terms. However, the effectiveness, measured by the quality of the results, is unsatisfactory due to the aforementioned random arrangement of extracted sentences within the resulting summaries. It is found that natural language processing and web crawling are indeed suitable technologies for such a program, whilst the use of additional technologies/concepts would add significant value for a potential user. Several areas for incremental improvement of the prototype are identified. Full article

26 pages, 7996 KiB  
Article
Optimal Motorcycle Engine Mount Design Parameter Identification Using Robust Optimization Algorithms
by Adel Younis, Fadi AlKhatib and Zuomin Dong
Algorithms 2022, 15(8), 271; https://doi.org/10.3390/a15080271 - 03 Aug 2022
Viewed by 2763
Abstract
Mechanical vibrations have a significant impact on ride comfort; the driver is constantly distracted as a result. Volumetric engine inertial unbalances and road profile irregularities create mechanical vibrations. The purpose of this study is to employ optimization algorithms to identify structural elements that contribute to vibration propagation and to provide optimal solutions for reducing structural vibrations induced by engine unbalance and/or road abnormalities in a motorcycle. The powertrain assembly, swing-arm assembly, and vibration-isolating mounts make up the vibration-isolating system. Engine mounts are used to restrict transferred forces to the motorbike frame owing to engine shaking or road irregularities. Two 12-degree-of-freedom (DOF) powertrain motorcycle engine systems (PMS) were modeled and examined for design optimization in this study. The first model was used to compute engine mount parameters by reducing the transmitted load through the mounts while only considering shaking loads, whereas the second model considered both shaking and road bump loads. In both configurations, the frame is infinitely stiff. The mount stiffness, location, and orientation are considered to be the design parameters. The purpose of this study is to employ computational methods to minimize the loads induced by shaking forces. To continue the optimization process, Grey Wolf Optimizer (GWO), a meta-heuristic swarm intelligence optimization algorithm inspired by grey wolves in nature, was utilized. To demonstrate GWO’s superior performance in PMS, other optimization methods such as a Genetic Algorithm (GA) and Sequential Quadratic Programming (SQP) were used for comparison. To minimize the engine’s transmitted force, GWO was employed to determine the optimal mounting design parameters. The cost and constraint functions were formulated and optimized, and promising results were obtained and documented. The vibration modes due to shaking and road loads were decoupled for a smooth ride. Full article
(This article belongs to the Special Issue Computational Methods and Optimization for Numerical Analysis)
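To make the optimizer named in the abstract concrete, here is a compact, generic Grey Wolf Optimizer for a box-constrained objective; the engine-mount cost function, constraints and parameter values used in the paper are not reproduced, and the test objective in the final comment is purely illustrative.

```python
import numpy as np

def gwo(objective, bounds, n_wolves=20, n_iter=200, seed=0):
    """Minimal Grey Wolf Optimizer: wolves move toward the three best solutions
    (alpha, beta, delta) with an exploration coefficient that decays from 2 to 0."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    wolves = rng.uniform(lo, hi, size=(n_wolves, lo.size))
    for t in range(n_iter):
        fitness = np.apply_along_axis(objective, 1, wolves)
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2.0 * (1.0 - t / n_iter)
        for i in range(n_wolves):
            new = np.zeros_like(wolves[i])
            for leader in (alpha, beta, delta):
                A = a * (2.0 * rng.random(lo.size) - 1.0)
                C = 2.0 * rng.random(lo.size)
                new += (leader - A * np.abs(C * leader - wolves[i])) / 3.0
            wolves[i] = np.clip(new, lo, hi)
    fitness = np.apply_along_axis(objective, 1, wolves)
    return wolves[np.argmin(fitness)]

# Illustrative use: best = gwo(lambda x: float(np.sum(x ** 2)), bounds=[(-5, 5)] * 6)
```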

22 pages, 2169 KiB  
Article
Validating and Testing an Agent-Based Model for the Spread of COVID-19 in Ireland
by Elizabeth Hunter and John D. Kelleher
Algorithms 2022, 15(8), 270; https://doi.org/10.3390/a15080270 - 03 Aug 2022
Cited by 7 | Viewed by 2442
Abstract
Agent-based models can be used to better understand the impacts of lifting restrictions or implementing interventions during a pandemic. However, agent-based models are computationally expensive, and running a model of a large population can result in a simulation taking too long to run for the model to be a useful analysis tool during a public health crisis. To reduce computing time and power while running a detailed agent-based model for the spread of COVID-19 in the Republic of Ireland, we introduce a scaling factor that equates 1 agent to 100 people in the population. We present the results from model validation and show that the scaling factor increases the variability in the model output, but the average model results are similar in scaled and un-scaled models of the same population, and the scaled model is able to accurately simulate the number of cases per day in Ireland during the autumn of 2020. We then test the usability of the model by using the model to explore the likely impacts of increasing community mixing when schools reopen after summer holidays. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)

18 pages, 6796 KiB  
Article
Research of Flexible Assembly of Miniature Circuit Breakers Based on Robot Trajectory Optimization
by Yan Han, Liang Shu, Ziran Wu, Xuan Chen, Gaoyan Zhang and Zili Cai
Algorithms 2022, 15(8), 269; https://doi.org/10.3390/a15080269 - 31 Jul 2022
Cited by 5 | Viewed by 1998
Abstract
This paper is dedicated to achieving flexible automatic assembly of miniature circuit breakers (MCBs) to resolve the high rigidity issue of existing MCB assembly by proposing a flexible automatic assembly process and method with industrial robots. To optimize the working performance of the robot, a time-optimal trajectory planning method of the improved Particle Swarm Optimization (PSO) with a multi-optimization mechanism is proposed. The solution uses a fitness switch function for particle sifting to improve the stability of the acceleration and jerk of the robot motion as well as to increase the computational efficiency. The experimental results show that the proposed method achieves flexible assembly for multi-type MCB parts of varying postures. Compared with other optimization algorithms, the proposed improved PSO is significantly superior in both computational efficiency and optimization accuracy. Compared with the standard PSO, the proposed trajectory planning method shortens the assembly time by 6.9 s and raises the assembly efficiency by 16.7%. The improved PSO is implemented on the experimental assembly platform and achieves smooth and stable operations, which proves the high significance and practicality for MCB fabrication. Full article
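For comparison with the improved PSO described above, this is the standard PSO core that such variants build on; the fitness switch function for particle sifting and the trajectory parameterization used in the paper are not reproduced, and all parameter values here are generic assumptions.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=300,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard particle swarm optimization: velocities are pulled toward each
    particle's personal best and the global best found so far."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest
```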

16 pages, 1677 KiB  
Article
Cancer Identification in Walker 256 Tumor Model Exploring Texture Properties Taken from Microphotograph of Rats Liver
by Mateus F. T. Carvalho, Sergio A. Silva, Jr., Carla Cristina O. Bernardo, Franklin César Flores, Juliana Vanessa C. M. Perles, Jacqueline Nelisis Zanoni and Yandre M. G. Costa
Algorithms 2022, 15(8), 268; https://doi.org/10.3390/a15080268 - 31 Jul 2022
Viewed by 1938
Abstract
Recent studies have been evaluating the presence of patterns associated with the occurrence of cancer in different types of tissue present in the individual affected by the disease. In this article, we describe preliminary results for the automatic detection of cancer (Walker 256 tumor) in laboratory animals using preclinical microphotograph images of the subject’s liver tissue. In the proposed approach, two different types of descriptors were explored to capture texture properties from the images, and we also evaluated the complementarity between them. The first texture descriptor experimented is the widely known Local Phase Quantization (LPQ), which is a descriptor based on spectral information. The second one is built by the application of a granulometry given by a family of morphological filters. For classification, we have evaluated the algorithms Support Vector Machine (SVM), k-Nearest Neighbor (k-NN) and Logistic Regression. Experiments carried out on a carefully curated dataset developed by the Enteric Neural Plasticity Laboratory of the State University of Maringá showed that both texture descriptors provide good results in this scenario. The accuracy rates obtained using the SVM classifier were 96.67% for the texture operator based on granulometry and 91.16% for the LPQ operator. The dataset was made available also as a contribution of this work. In addition, it is important to remark that the best overall result was obtained by combining classifiers created using both descriptors in a late fusion strategy, achieving an accuracy of 99.16%. The results obtained show that it is possible to automatically perform the identification of cancer in laboratory animals by exploring texture properties found on the tissue taken from the liver. Moreover, we observed a high level of complementarity between the classifiers created using LPQ and granulometry properties in the application addressed here. Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
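The granulometry-based descriptor mentioned in the abstract can be sketched in a few lines with scikit-image; the authors' exact filter family, radii and normalization may differ, as this is only the generic pattern-spectrum construction.

```python
import numpy as np
from skimage.morphology import opening, disk

def granulometry_descriptor(gray_image, radii=range(1, 8)):
    """Pattern spectrum: how much image 'mass' is removed by morphological
    openings with structuring elements of increasing radius."""
    mass = [float(gray_image.sum())]
    mass += [float(opening(gray_image, disk(r)).sum()) for r in radii]
    return -np.diff(mass) / mass[0]        # one normalized value per radius

# The resulting feature vector for each liver microphotograph can then be fed
# to a classifier such as sklearn.svm.SVC, as in the late-fusion setup above.
```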

14 pages, 678 KiB  
Article
Short Text Classification with Tolerance-Based Soft Computing Method
by Vrushang Patel, Sheela Ramanna, Ketan Kotecha and Rahee Walambe
Algorithms 2022, 15(8), 267; https://doi.org/10.3390/a15080267 - 30 Jul 2022
Cited by 3 | Viewed by 2171
Abstract
Text classification aims to assign labels to textual units such as documents, sentences and paragraphs. Some applications of text classification include sentiment classification and news categorization. In this paper, we present a soft computing technique-based algorithm (TSC) to classify sentiment polarities of tweets as well as news categories from text. The TSC algorithm is a supervised learning method based on tolerance near sets. Near sets theory is a more recent soft computing methodology inspired by rough sets where instead of set approximation operators used by rough sets to induce tolerance classes, the tolerance classes are directly induced from the feature vectors using a tolerance level parameter and a distance function. The proposed TSC algorithm takes advantage of the recent advances in efficient feature extraction and vector generation from pre-trained bidirectional transformer encoders for creating tolerance classes. Experiments were performed on ten well-researched datasets which include both short and long text. Both pre-trained SBERT and TF-IDF vectors were used in the experimental analysis. Results from transformer-based vectors demonstrate that TSC outperforms five well-known machine learning algorithms on four datasets, and it is comparable with all other datasets based on the weighted F1, Precision and Recall scores. The highest AUC-ROC (Area under the Receiver Operating Characteristics) score was obtained in two datasets and comparable in six other datasets. The highest ROC-PRC (Area under the Precision–Recall Curve) score was obtained in one dataset and comparable in four other datasets. Additionally, significant differences were observed in most comparisons when examining the statistical difference between the weighted F1-score of TSC and other classifiers using a Wilcoxon signed-ranks test. Full article
(This article belongs to the Special Issue Algorithms for Machine Learning and Pattern Recognition Tasks)
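The core notion named in the abstract, a tolerance class induced directly from feature vectors by a distance function and a tolerance level ε, can be illustrated with a simplified greedy grouping; the paper's construction of tolerance classes and its SBERT/TF-IDF pipeline are more involved, and the threshold in the usage comment is hypothetical.

```python
import numpy as np

def tolerance_classes(vectors, eps):
    """Greedy approximation of tolerance classes: index i joins a class only if
    it is within distance eps of every member already in that class."""
    classes = []
    for i, v in enumerate(vectors):
        placed = False
        for cls in classes:
            if all(np.linalg.norm(v - vectors[j]) <= eps for j in cls):
                cls.append(i)
                placed = True
        if not placed:
            classes.append([i])
    return classes

# Hypothetical usage with sentence embeddings:
# classes = tolerance_classes(np.asarray(sbert_embeddings), eps=0.4)
```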

21 pages, 370 KiB  
Article
Dark Type Dynamical Systems: The Integrability Algorithm and Applications
by Yarema A. Prykarpatsky, Ilona Urbaniak, Radosław A. Kycia and Anatolij K. Prykarpatski
Algorithms 2022, 15(8), 266; https://doi.org/10.3390/a15080266 - 28 Jul 2022
Cited by 4 | Viewed by 1213
Abstract
Based on a devised gradient-holonomic integrability testing algorithm, we analyze a class of dark type nonlinear dynamical systems on spatially one-dimensional functional manifolds possessing hidden symmetry properties and allowing their linearization on the associated cotangent spaces. We describe the main spectral properties of nonlinear Lax-type integrable dynamical systems on periodic functional manifolds, in particular within the classical Floquet theory, and present the determining functional relationships between the conserved quantities and the related geometric Poisson and recursion structures on functional manifolds. For evolution flows on functional manifolds that depend parametrically on additional functional variables and are naturally related to the classical Bellman–Pontryagin optimal control problem theory, we study a wide class of nonlinear dynamical systems of dark type on spatially one-dimensional functional manifolds, which belong to both the diffusion and dispersion classes and can have interesting applications in modern physics, optics, mechanics, hydrodynamics and the biological sciences. We prove that all of these dynamical systems possess rich hidden symmetry properties, are Lax-type linearizable and possess finite or infinite hierarchies of suitably ordered conserved quantities. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications III)