Special Issue "Artificial Intelligence and Mathematical Methods"

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 31 October 2023 | Viewed by 5147

Special Issue Editors

Faculty of Electronic Engineering, University of Nis, Nis, Serbia
Interests: quantization; compression; coding; speech processing; deep learning; approximation theory
Faculty of Sciences and Mathematics, Department of Computer Science, University of Nis, Nis, Serbia
Interests: generalized inverses of matrices; numerical linear algebra; dynamic and stochastic stability of mechanical systems; information theory and coding; Hankel determinants and integer sequences; soliton theory

Special Issue Information

Dear Colleagues,

The goal of this Special Issue is to encourage researchers dealing with problems in the field of artificial intelligence (AI) to present their latest results. Although great progress has been achieved in this field, modern trends place increasing demands on the application of advanced AI methods, particularly deep neural networks (DNNs), especially on resource-constrained devices (with memory, energy, and computational constraints). Solving these challenging problems requires not only various mathematical methods of optimization and approximation, but also different engineering points of view. One way to address them is the thoughtful application of DNN compression techniques, such as pruning and quantization. The transfer of knowledge from the field of mathematical methods of optimization and approximation to the field of AI is very important for effectively determining current and future solutions to the identified problems. With the effective application of engineering logic and studious analysis of the problems, the benefits can be even more significant. In brief, this Special Issue welcomes research in the fields of artificial intelligence and mathematical methods of optimization and approximation, as well as their joint harmonization in solutions to cutting-edge engineering problems.

Potential topics include but are not limited to the following:

  • Artificial intelligence (AI) in engineering and science;
  • Deep learning methods;
  • Machine learning algorithms;
  • Mathematical methods of optimization (cost functions, etc.);
  • Deep neural network (DNN) architecture and feasible applications;
  • Methods for solving linear and nonlinear equations with possible correlation with DNNs;
  • Artificial intelligence for solving prediction and classification problems;
  • Pruning technique for DNN compression;
  • Quantization techniques for neural network compression;
  • Fixed and variable length coding of DNN parameters;
  • Artificial intelligence for data and signal processing;
  • AI algorithms for applications on resource-constrained devices (memory, energy, and computational constraints);
  • Approximate methods for applications in engineering problems and AI.

Prof. Dr. Zoran H. Perić
Dr. Jelena Nikolić
Prof. Dr. Marko Petković
Prof. Dr. Vlado Delić
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2100 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (8 papers)


Research

Article
Reducing the Dimensionality of SPD Matrices with Neural Networks in BCI
Mathematics 2023, 11(7), 1570; https://doi.org/10.3390/math11071570 - 23 Mar 2023
Viewed by 232
Abstract
In brain–computer interface (BCI)-based motor imagery, the symmetric positive definite (SPD) covariance matrices of electroencephalogram (EEG) signals with discriminative information features lie on a Riemannian manifold, which is currently attracting increasing attention. Under a Riemannian manifold perspective, we propose a non-linear dimensionality reduction algorithm based on neural networks to construct a more discriminative low-dimensional SPD manifold. To this end, we design a novel non-linear shrinkage layer to properly modify the extreme eigenvalues of the SPD matrix, then combine it with the traditional bilinear mapping to non-linearly reduce the dimensionality of SPD matrices from manifold to manifold. Further, we build the SPD manifold network on a Siamese architecture that can learn the similarity metric from the data. Subsequently, the effective signal classification method named minimum distance to Riemannian mean (MDRM) can be implemented directly on the low-dimensional manifold. Finally, a regularization layer is proposed to perform subject-to-subject transfer by exploiting the geometric relationships of multiple subjects. Numerical experiments on synthetic data and EEG signal datasets indicate the effectiveness of the proposed manifold network.
(This article belongs to the Special Issue Artificial Intelligence and Mathematical Methods)
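The MDRM rule named in the abstract assigns a trial to the class whose Riemannian mean is nearest. As context only, here is a minimal sketch of that nearest-mean rule, using the log-Euclidean distance as a simpler stand-in for the affine-invariant metric; the 2×2 class means and sample below are illustrative, not taken from the paper:

```python
import numpy as np

def spd_log(A):
    # Matrix logarithm of a symmetric positive definite matrix
    # via eigendecomposition: logm(A) = U diag(log w) U^T.
    w, U = np.linalg.eigh(A)
    return (U * np.log(w)) @ U.T

def log_euclidean_dist(A, B):
    # Log-Euclidean distance: d(A, B) = || logm(A) - logm(B) ||_F.
    return np.linalg.norm(spd_log(A) - spd_log(B), ord="fro")

def mdrm_classify(X, class_means):
    # Minimum distance to mean: pick the class whose mean SPD matrix is nearest.
    return int(np.argmin([log_euclidean_dist(X, M) for M in class_means]))

# Illustrative 2x2 SPD "class means" and a sample close to class 0.
mean_a = np.array([[2.0, 0.3], [0.3, 1.0]])
mean_b = np.array([[1.0, -0.2], [-0.2, 3.0]])
sample = np.array([[1.9, 0.25], [0.25, 1.1]])
label = mdrm_classify(sample, [mean_a, mean_b])
```

The paper's contribution is to learn a lower-dimensional SPD manifold first, so that a rule of this shape runs on much smaller matrices.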

Article
Optimization of the 24-Bit Fixed-Point Format for the Laplacian Source
Mathematics 2023, 11(3), 568; https://doi.org/10.3390/math11030568 - 21 Jan 2023
Viewed by 641
Abstract
The 32-bit floating-point (FP32) binary format, commonly used for data representation in computers, introduces high complexity, requiring powerful and expensive hardware for data processing and entailing high energy consumption, hence being unsuitable for implementation on sensor nodes, edge devices, and other devices with limited hardware resources. Therefore, it is often necessary to use binary formats of lower complexity than FP32. This paper proposes the usage of the 24-bit fixed-point format, which reduces complexity in two ways: by decreasing the number of bits and by the fact that the fixed-point format is significantly less complex than the floating-point format. The paper optimizes the 24-bit fixed-point format and examines its performance for data with the Laplacian distribution, exploiting the analogy between fixed-point binary representation and uniform quantization. Firstly, the optimization of the 24-bit uniform quantizer is performed by deriving two new closed-form formulas for a very accurate calculation of its maximal amplitude. Then, the 24-bit fixed-point format is optimized through its key parameter and two proposed adaptation procedures, with the aim of obtaining the same performance as the optimal uniform quantizer over a wide range of input-data variance. It is shown that the proposed 24-bit fixed-point format achieves 18.425 dB higher performance than the floating-point format with the same number of bits while being less complex.
(This article belongs to the Special Issue Artificial Intelligence and Mathematical Methods)

Article
Analysis of Industrial Product Sound by Applying Image Similarity Measures
Mathematics 2023, 11(3), 498; https://doi.org/10.3390/math11030498 - 17 Jan 2023
Viewed by 528
Abstract
The sounds of certain industrial products (machines) carry important information about these products. Product classification or malfunction detection can be performed utilizing a product’s sound. In this regard, sound can be used as it is, or it can be mapped to either features or images. The latter enables the implementation of recently achieved performance improvements in image processing. In this paper, the sounds of seven industrial products are mapped into mel-spectrograms. The similarities of these images within the same class (machine type) and between classes, representing the intraclass and interclass similarities, respectively, are investigated. Three often-used image similarity measures are applied: Euclidean distance (ED), the Pearson correlation coefficient (PCC), and the structural similarity index (SSIM). These measures are mutually compared to analyze their behavior in a particular use case. According to the obtained results, the mel-spectrograms of five classes are similar, while two classes have unique properties manifested in considerably larger intraclass than interclass similarity. The applied image similarity measures lead to similar general results showing the same main trends, but they differ in the mutual relationships of similarity among classes. The differences between the images are more blurred when the SSIM is applied than when ED or the PCC is used.
(This article belongs to the Special Issue Artificial Intelligence and Mathematical Methods)
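The three measures compared in this abstract can be computed directly on spectrogram-like 2-D arrays. In the sketch below, synthetic arrays stand in for mel-spectrograms, and the SSIM is a single-window global simplification of the usual locally windowed SSIM:

```python
import numpy as np

def euclidean_distance(a, b):
    # ED: Frobenius norm of the pixel-wise difference.
    return float(np.linalg.norm(a - b))

def pearson_cc(a, b):
    # PCC between the flattened images.
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def global_ssim(a, b, L=1.0):
    # Single-window SSIM over the whole image (standard SSIM averages local windows).
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float((2 * mu_a * mu_b + C1) * (2 * cov + C2)
                 / ((mu_a**2 + mu_b**2 + C1) * (var_a + var_b + C2)))

rng = np.random.default_rng(0)
spec_a = rng.random((64, 64))  # stand-in "mel-spectrogram"
spec_b = np.clip(spec_a + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)
ed = euclidean_distance(spec_a, spec_b)
pcc = pearson_cc(spec_a, spec_b)
ssim = global_ssim(spec_a, spec_b)
```

ED grows with any pixel-wise deviation, while PCC and SSIM are bounded and normalize for scale, which is one reason the three measures can rank class similarities differently.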

Article
Feature Map Regularized CycleGAN for Domain Transfer
Mathematics 2023, 11(2), 372; https://doi.org/10.3390/math11020372 - 10 Jan 2023
Viewed by 478
Abstract
CycleGAN domain transfer architectures use cycle consistency loss mechanisms to enforce the bijectivity of the highly underconstrained domain transfer mapping. In this paper, in order to further constrain the mapping problem and reinforce the cycle consistency between two domains, we introduce a novel regularization method based on the alignment of feature map probability distributions. This type of optimization constraint, expressed via an additional loss function, further reduces the size of the regions that are mapped from the source domain into the same image in the target domain, which yields a mapping closer to bijective and thus better performance. By selecting feature maps of the network layers with the same depth d in the encoder of the direct generative adversarial network (GAN) and the decoder of the inverse GAN, it is possible to describe their d-dimensional probability distributions and, through a novel regularization term, enforce similarity between representations of the same image in both domains during the mapping cycle. We introduce several ground distances between Gaussian distributions of the corresponding feature maps used in the regularization. In experiments conducted on several real datasets, we achieved better performance in the unsupervised image transfer task in comparison to the baseline CycleGAN, and obtained results much closer to those of the fully supervised pix2pix method for all used datasets. The PSNR of the proposed method was, on average, 4.7% closer to the results of the pix2pix method than the baseline CycleGAN over all datasets. This also held for SSIM, where the corresponding percentage was 8.3% on average over all datasets.
(This article belongs to the Special Issue Artificial Intelligence and Mathematical Methods)

Article
Measure of Similarity between GMMs Based on Geometry-Aware Dimensionality Reduction
Mathematics 2023, 11(1), 175; https://doi.org/10.3390/math11010175 - 29 Dec 2022
Viewed by 539
Abstract
Gaussian Mixture Models (GMMs) are used, as simple statistical representations of underlying data, in many traditional expert systems and modern artificial intelligence tasks such as automatic speech recognition, image recognition and retrieval, pattern recognition, speaker recognition and verification, and financial forecasting applications. Those representations typically require many high-dimensional GMM components that consume large computing resources and increase computation time. On the other hand, real-time applications require computationally efficient algorithms, and for that reason, various GMM similarity measures and dimensionality reduction techniques have been examined to reduce the computational complexity. In this paper, a novel GMM similarity measure is proposed. The measure is based on a recently presented nonlinear geometry-aware dimensionality reduction algorithm for the manifold of Symmetric Positive Definite (SPD) matrices, applied over SPD representations of the original data. The local neighborhood information from the original high-dimensional parameter space is preserved by maintaining the distance to the local mean. Instead of dealing with the high-dimensional parameter space, the method operates on a much lower-dimensional space of transformed parameters, so that resolving the distance between such representations reduces to calculating the distance among lower-dimensional matrices. The method was tested within a texture recognition task, where superior state-of-the-art performance in terms of the trade-off between recognition accuracy and computational complexity was achieved in comparison with all baseline GMM similarity measures.
(This article belongs to the Special Issue Artificial Intelligence and Mathematical Methods)
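The geometry-aware measure proposed in the paper is not reproduced here, but many of the baseline GMM similarity measures it is compared against are built from the closed-form KL divergence between individual Gaussian components. A minimal sketch of that building block, with illustrative means and covariances:

```python
import numpy as np

def gauss_kl(mu0, S0, mu1, S1):
    # KL( N(mu0, S0) || N(mu1, S1) ) in closed form:
    # 0.5 * [ tr(S1^-1 S0) + (mu1-mu0)^T S1^-1 (mu1-mu0) - k + ln(det S1 / det S0) ]
    k = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def sym_kl(mu0, S0, mu1, S1):
    # Symmetrized KL, a common component-level GMM similarity baseline.
    return gauss_kl(mu0, S0, mu1, S1) + gauss_kl(mu1, S1, mu0, S0)

# Illustrative pair of 3-D Gaussian components.
mu0, S0 = np.zeros(3), np.eye(3)
mu1, S1 = np.array([1.0, 0.0, 0.0]), np.diag([2.0, 1.0, 0.5])
d = sym_kl(mu0, S0, mu1, S1)
```

For mixtures with many high-dimensional components, evaluating such pairwise terms becomes the dominant cost, which motivates the dimensionality reduction pursued in the paper.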

Article
Optimal Neural Network Model for Short-Term Prediction of Confirmed Cases in the COVID-19 Pandemic
Mathematics 2022, 10(20), 3804; https://doi.org/10.3390/math10203804 - 15 Oct 2022
Viewed by 516
Abstract
COVID-19 is one of the largest issues that humanity still has to cope with and has an impact on the daily lives of billions of people. Researchers from all around the world have made various attempts to establish accurate mathematical models of COVID-19 spread. In many branches of science, it is difficult to make accurate predictions about short time series with extremely irregular behavior. Artificial neural networks (ANNs) have lately been extensively used for such applications. Although ANNs may mimic the nonlinear behavior of short time series, they frequently struggle to handle all turbulences, so alternative methods must be used. In order to reduce errors and boost forecasting confidence, a novel methodology that combines Time Delay Neural Networks is suggested in this work. It is validated on six separate datasets showing the number of confirmed daily COVID-19 infections in 2021 for six world countries. It is demonstrated that the method may greatly improve the individual networks’ forecasting accuracy independent of their topologies, which broadens the applicability of the approach. A series of additional predictive experiments involving state-of-the-art Extreme Learning Machine (ELM) modeling was performed to quantitatively compare the accuracy of the proposed methodology with that of similar methodologies. It is shown that the forecasting accuracy of the system outperforms ELM modeling and is in the range of other state-of-the-art solutions.
(This article belongs to the Special Issue Artificial Intelligence and Mathematical Methods)

Article
Two Interval Upper-Bound Q-Function Approximations with Applications
Mathematics 2022, 10(19), 3590; https://doi.org/10.3390/math10193590 - 1 Oct 2022
Viewed by 643
Abstract
The Gaussian Q-function has considerable applications in numerous areas of science and engineering. However, the fact that a closed-form expression for this function does not exist encourages finding approximations or bounds of the Q-function. In this paper, we analytically determine two novel interval upper-bound Q-function approximations and show that they can be used efficiently not only for the symbol error probability (SEP) estimation of transmission over Nakagami-m fading channels, but also for the average symbol error probability (ASEP) evaluation for two modulation formats. Specifically, we analytically determine the composition of the upper-bound Q-function approximations specified on disjoint intervals of the input argument values so as to provide the highest accuracy within the intervals, by utilizing the selected one of the two upper-bound Q-function approximations. We show that a further increase in accuracy, beyond the case with two upper-bound approximations composing the interval approximation, can be obtained by forming a composite interval approximation of the Q-function that assumes an additional interval and a third form of the upper-bound Q-function approximation. The proposed analytical approach can be considered universal and widely applicable. The results presented in the paper indicate that the proposed Q-function approximations outperform, in terms of accuracy, other well-known approximations carefully chosen for comparison purposes. The approximation can be used in numerous theoretical communication problems based on the Q-function calculation. In this paper, we apply it to estimate the average bit error rate (ABER) for transmission over a Nakagami-m fading channel with binary phase-shift keying (BPSK) and differentially encoded quadrature phase-shift keying (DE-QPSK) modulation formats, as well as to design scalar quantization with equiprobable cells for variables from a Gaussian source.
(This article belongs to the Special Issue Artificial Intelligence and Mathematical Methods)
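The paper's specific bounds are not reproduced here, but the interval idea, selecting on each argument range whichever upper bound is tighter, can be sketched with two classic textbook bounds; the switch point below is an illustrative choice, not the optimized interval boundary from the paper:

```python
import math

def q_exact(x):
    # Gaussian Q-function via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_upper_interval(x, switch=1.0):
    # Piecewise (interval) upper bound on Q(x) for x > 0:
    # the Chernoff bound 0.5*exp(-x^2/2) for small arguments, and the
    # tighter tail bound exp(-x^2/2) / (x*sqrt(2*pi)) for larger ones.
    if x < switch:
        return 0.5 * math.exp(-x * x / 2.0)
    return math.exp(-x * x / 2.0) / (x * math.sqrt(2.0 * math.pi))
```

Because both pieces are valid upper bounds on their intervals, the composite stays an upper bound everywhere while being tighter than either piece used alone over the whole axis.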

Article
Two Novel Non-Uniform Quantizers with Application in Post-Training Quantization
Mathematics 2022, 10(19), 3435; https://doi.org/10.3390/math10193435 - 21 Sep 2022
Viewed by 682
Abstract
With increased network downsizing and cost minimization in the deployment of neural network (NN) models, edge computing takes a significant place in modern artificial intelligence. To bridge the memory constraints of less-capable edge systems, a plethora of quantizer models and quantization techniques have been proposed for NN compression, with the goal of enabling the quantized NN (QNN) to fit on the edge device while guaranteeing a high extent of accuracy preservation. NN compression by means of post-training quantization has attracted a lot of research attention, where the efficiency of uniform quantizers (UQs) has been promoted and heavily exploited. In this paper, we propose two novel non-uniform quantizers (NUQs) that prudently utilize one of the two properties of the simplest UQ. Although they share the same quantization rule for specifying the support region, both NUQs start from a different cell-width setting than a standard UQ. The first quantizer, named the simplest power-of-two quantizer (SPTQ), defines cell widths that are multiplied by powers of two. As is the case in the simplest UQ design, the representation levels of SPTQ are the midpoints of the quantization cells. The second quantizer, named the modified SPTQ (MSPTQ), is a more competitive quantizer model, representing an enhanced version of SPTQ in which the quantizer decision thresholds are centered between the nearest representation levels, similar to the UQ design. These properties make the novel NUQs relatively simple. Unlike in UQ, the quantization cells of MSPTQ are not of equal width and the representation levels are not the midpoints of the quantization cells. In this paper, we describe the design procedures of SPTQ and MSPTQ and perform their optimization for the assumed Laplacian source. Afterwards, we perform post-training quantization by implementing SPTQ and MSPTQ, study the viability of QNN accuracy, and show the implementation benefits over the case where a UQ with an equal number of quantization cells is utilized in the QNN for the same classification task. We believe that both NUQs are particularly suitable for memory-constrained environments, where simple and acceptably accurate solutions are of crucial importance.
(This article belongs to the Special Issue Artificial Intelligence and Mathematical Methods)
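As a hypothetical illustration of the power-of-two cell-width idea (the exact SPTQ/MSPTQ parameterization and its Laplacian-source optimization are in the paper; the base width delta and the cell count below are arbitrary):

```python
import numpy as np

def sptq_codebook(delta, n_cells):
    # Cell widths grow as powers of two: delta, 2*delta, 4*delta, ...
    widths = delta * 2.0 ** np.arange(n_cells)
    edges = np.concatenate([[0.0], np.cumsum(widths)])
    levels = (edges[:-1] + edges[1:]) / 2  # midpoints, as in the SPTQ reading
    return edges, levels

def sptq_quantize(x, edges, levels):
    # Symmetric quantizer: quantize the magnitude, keep the sign;
    # magnitudes beyond the support region are clipped into the last cell.
    mag = np.clip(np.abs(x), 0.0, edges[-1] - 1e-12)
    idx = np.searchsorted(edges, mag, side="right") - 1
    return np.sign(x) * levels[idx]

edges, levels = sptq_codebook(delta=0.5, n_cells=4)
y = sptq_quantize(np.array([0.2, -1.0, 4.0]), edges, levels)
```

Narrow cells near zero and exponentially wider outer cells suit heavy-tailed sources such as the Laplacian, while the power-of-two structure keeps the index arithmetic cheap on constrained hardware.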
