

Uncertainty in Large Neural Systems: Validation, Explanation and Correction of Multidimensional Intelligence in a Multidimensional World

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Multidisciplinary Applications".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 59269

Special Issue Editors


Prof. Alexander Gorban
Guest Editor
Department of Mathematics, University of Leicester, Leicester LE1 7RH, UK
Interests: neural networks; chemical and biological kinetics; human adaptation to hard living conditions; methods and technologies of collective thinking

Prof. Ivan Tyukin
Guest Editor
Department of Mathematics, University of Leicester, Leicester LE1 7RH, UK
Interests: industrial mathematics; dynamical systems; adaptive control; neural networks; certified artificial intelligence

Special Issue Information

Significant progress in data-driven neural Artificial Intelligence (AI) over recent years has brought great benefits to end users in areas ranging from health, banking, and security to advanced manufacturing and space. Modern AI systems are built from massive volumes of data, both curated and raw, with all the uncertainties inherent in these data. One of the major fundamental barriers limiting further advances and wider use of such AI systems is the problem of validating, explaining, and correcting AI decision-making. This is particularly important for safety-critical and infrastructural applications, but it is also crucial in other use cases, including financial, career, education, and health services. High-dimensional data and high-dimensional representations of reality are typical features of modern data-driven AI. There is a fundamental tension between the “curse of dimensionality” and the “blessing of dimensionality”: some popular low-dimensional methods break down in high-dimensional data spaces, whereas the blessing of dimensionality makes some simple methods unexpectedly powerful in high dimensions.

It is well known in brain science that small groups of neurons play an important role in pattern recognition. A mathematical explanation of this effect can be found in the multidimensional nature of the data. The single-cell revolution in neuroscience, the phenomenon of grandmother cells, and the sparse coding discovered in the human brain converge with the new mathematical ideas of the blessing of dimensionality.

This Special Issue focuses on the development of mathematical and algorithmic foundations underpinning the problems of uncertainty, validation, explanation, and correction in artificial and natural intelligence.

Prof. Alexander Gorban
Prof. Ivan Tyukin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (19 papers)


Research


22 pages, 3394 KiB  
Article
Ensuring Explainability and Dimensionality Reduction in a Multidimensional HSI World for Early XAI-Diagnostics of Plant Stress
by Maxim Lysov, Konstantin Pukhkiy, Evgeny Vasiliev, Alexandra Getmanskaya and Vadim Turlapov
Entropy 2023, 25(5), 801; https://doi.org/10.3390/e25050801 - 15 May 2023
Cited by 1 | Viewed by 1325
Abstract
This work is mostly devoted to the search for effective solutions to the problem of early diagnosis of plant stress (given an example of wheat and its drought stress), which would be based on explainable artificial intelligence (XAI). The main idea is to combine the benefits of two of the most popular agricultural data sources, hyperspectral images (HSI) and thermal infrared images (TIR), in a single XAI model. Our own dataset of a 25-day experiment was used, which was created via both (1) an HSI camera Specim IQ (400–1000 nm, 204, 512 × 512) and (2) a TIR camera Testo 885-2 (320 × 240, res. 0.1 °C). The HSI were a source of the k-dimensional high-level features of plants (k ≤ K, where K is the number of HSI channels) for the learning process. Such combination was implemented as a single-layer perceptron (SLP) regressor, which is the main feature of the XAI model and receives as input an HSI pixel-signature belonging to the plant mask, which then automatically through the mask receives a mark from the TIR. The correlation of HSI channels with the TIR image on the plant’s mask on the days of the experiment was studied. It was established that HSI channel 143 (820 nm) was the most correlated with TIR. The problem of training the HSI signatures of plants with their corresponding temperature value via the XAI model was solved. The RMSE of plant temperature prediction is 0.2–0.3 °C, which is acceptable for early diagnostics. Each HSI pixel was represented in training by a number (k) of channels (k ≤ K = 204 in our case). The number of channels used for training was minimized by a factor of 25–30, from 204 to eight or seven, while maintaining the RMSE value. The model is computationally efficient in training; the average training time was much less than one minute (Intel Core i3-8130U, 2.2 GHz, 4 cores, 4 GB). This XAI model can be considered a research-aimed model (R-XAI), which allows the transfer of knowledge about plants from the TIR domain to the HSI domain, with their contrasting onto only a few from hundreds of HSI channels. Full article
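For readers who want to experiment with the pipeline described above, the following minimal Python sketch ranks channels by their correlation with temperature and fits a single-layer (purely linear) regressor, reporting the RMSE. The arrays X and y are synthetic stand-ins for the authors' HSI signatures and TIR-derived temperatures, and scikit-learn's LinearRegression stands in for the SLP used in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Hypothetical training data: each row is an HSI pixel signature (K channels)
# taken from the plant mask, paired with the TIR temperature of the same pixel.
rng = np.random.default_rng(0)
K = 204
X = rng.random((5000, K))                                       # stand-in for real HSI signatures
y = 20.0 + 3.0 * X[:, 143] + 0.1 * rng.standard_normal(5000)    # stand-in temperatures

# Rank channels by absolute correlation with temperature and keep the top k.
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(K)])
top_channels = np.argsort(corr)[::-1][:8]

# A single-layer (no hidden units) regressor is just a linear map; fit and score it.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, top_channels], y, random_state=0)
slp = LinearRegression().fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((slp.predict(X_te) - y_te) ** 2))
print(f"selected channels: {sorted(top_channels.tolist())}, RMSE = {rmse:.3f} °C")
```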

20 pages, 6073 KiB  
Article
MyI-Net: Fully Automatic Detection and Quantification of Myocardial Infarction from Cardiovascular MRI Images
by Shuihua Wang, Ahmed M. S. E. K. Abdelaty, Kelly Parke, Jayanth Ranjit Arnold, Gerry P. McCann and Ivan Y. Tyukin
Entropy 2023, 25(3), 431; https://doi.org/10.3390/e25030431 - 28 Feb 2023
Cited by 5 | Viewed by 2450
Abstract
Myocardial infarction (MI) occurs when an artery supplying blood to the heart is abruptly occluded. The “gold standard” method for imaging MI is cardiovascular magnetic resonance imaging (MRI) with intravenously administered gadolinium-based contrast (with damaged areas apparent as late gadolinium enhancement [LGE]). However, no “gold standard” fully automated method for the quantification of MI exists. In this work, we propose an end-to-end fully automatic system (MyI-Net) for the detection and quantification of MI in MRI images. It has the potential to reduce uncertainty due to technical variability across labs and the inherent problems of data and labels. Our system consists of four processing stages designed to maintain the flow of information across scales. First, features from raw MRI images are generated using feature extractors built on ResNet and MobileNet architectures. This is followed by atrous spatial pyramid pooling (ASPP) to produce spatial information at different scales to preserve more image context. High-level features from ASPP and initial low-level features are concatenated at the third stage and then passed to the fourth stage where spatial information is recovered via up-sampling to produce final image segmentation output into: (i) background, (ii) heart muscle, (iii) blood and (iv) LGE areas. Our experiments show that the model named MI-ResNet50-AC provides the best global accuracy (97.38%), mean accuracy (86.01%), weighted intersection over union (IoU) of 96.47%, and bfscore of 64.46% for the global segmentation. However, in detecting only LGE tissue, a smaller model, MI-ResNet18-AC, exhibited higher accuracy (74.41%) than MI-ResNet50-AC (64.29%). New models were compared with state-of-the-art models and manual quantification. Our models demonstrated favorable performance in global segmentation and LGE detection relative to the state-of-the-art, including a four-fold better performance in matching LGE pixels to contours produced by clinicians. Full article
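The ASPP stage mentioned in the abstract can be illustrated with a generic PyTorch module (a sketch of the standard building block, not the authors' exact MyI-Net implementation): parallel dilated convolutions at several rates are applied to the backbone features, concatenated, and projected back to a common channel width.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convolutions whose
    outputs are concatenated and projected back to a common channel width."""
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.project = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# Example: features from a ResNet-style backbone, e.g. 2048 channels at 1/16 resolution.
feats = torch.randn(1, 2048, 14, 14)
print(ASPP(2048, 256)(feats).shape)  # torch.Size([1, 256, 14, 14])
```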

26 pages, 8909 KiB  
Article
Domain Adaptation Principal Component Analysis: Base Linear Method for Learning with Out-of-Distribution Data
by Evgeny M. Mirkes, Jonathan Bac, Aziz Fouché, Sergey V. Stasenko, Andrei Zinovyev and Alexander N. Gorban
Entropy 2023, 25(1), 33; https://doi.org/10.3390/e25010033 - 24 Dec 2022
Cited by 5 | Viewed by 2462
Abstract
Domain adaptation is a popular paradigm in modern machine learning which aims at tackling the problem of divergence (or shift) between the labeled training and validation datasets (source domain) and a potentially large unlabeled dataset (target domain). The task is to embed both datasets into a common space in which the source dataset is informative for training while the divergence between source and target is minimized. The most popular domain adaptation solutions are based on training neural networks that combine classification and adversarial learning modules, frequently making them both data-hungry and difficult to train. We present a method called Domain Adaptation Principal Component Analysis (DAPCA) that identifies a linear reduced data representation useful for solving the domain adaptation task. DAPCA algorithm introduces positive and negative weights between pairs of data points, and generalizes the supervised extension of principal component analysis. DAPCA is an iterative algorithm that solves a simple quadratic optimization problem at each iteration. The convergence of the algorithm is guaranteed, and the number of iterations is small in practice. We validate the suggested algorithm on previously proposed benchmarks for solving the domain adaptation task. We also show the benefit of using DAPCA in analyzing single-cell omics datasets in biomedical applications. Overall, DAPCA can serve as a practical preprocessing step in many machine learning applications leading to reduced dataset representations, taking into account possible divergence between source and target domains. Full article
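The pairwise-weight idea behind DAPCA can be illustrated with a simplified supervised PCA sketch: same-class pairs receive an attracting (negative) weight, different-class pairs a repelling (positive) weight, and the projection maximises the weighted sum of squared pairwise distances. This omits the source–target weighting and the iterations of the full DAPCA algorithm; the function and data below are illustrative only.

```python
import numpy as np

def supervised_pca(X, y, n_components=2, attract=1.0, repel=1.0):
    """Simplified supervised PCA with pairwise weights: same-class pairs get a
    negative (attracting) weight, different-class pairs a positive (repelling)
    weight; the projection maximises sum_ij w_ij ||P x_i - P x_j||^2."""
    X = X - X.mean(axis=0)
    same = (y[:, None] == y[None, :]).astype(float)
    W = repel * (1.0 - same) - attract * same
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    S = 2.0 * (X.T @ D @ X - X.T @ W @ X)   # = sum_ij w_ij (x_i - x_j)(x_i - x_j)^T
    vals, vecs = np.linalg.eigh(S)
    components = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return X @ components, components

# Toy usage with random labelled data standing in for the source domain.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 20))
y = rng.integers(0, 3, size=300)
Z, V = supervised_pca(X, y, n_components=2)
print(Z.shape, V.shape)   # (300, 2) (20, 2)
```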

13 pages, 717 KiB  
Article
Rosenblatt’s First Theorem and Frugality of Deep Learning
by Alexander Kirdin, Sergey Sidorov and Nikolai Zolotykh
Entropy 2022, 24(11), 1635; https://doi.org/10.3390/e24111635 - 10 Nov 2022
Cited by 2 | Viewed by 2199
Abstract
Rosenblatt’s first theorem about the omnipotence of shallow networks states that elementary perceptrons can solve any classification problem if there are no discrepancies in the training set. Minsky and Papert considered elementary perceptrons with restrictions on the neural inputs: a bounded number of connections or a relatively small diameter of the receptive field for each neuron at the hidden layer. They proved that under these constraints, an elementary perceptron cannot solve some problems, such as the connectivity of input images or the parity of pixels in them. In this note, we demonstrated Rosenblatt’s first theorem at work, showed how an elementary perceptron can solve a version of the travel maze problem, and analysed the complexity of that solution. We also constructed a deep network algorithm for the same problem; it is much more efficient. The shallow network uses an exponentially large number of neurons on the hidden layer (Rosenblatt’s A-elements), whereas for the deep network, the second-order polynomial complexity is sufficient. We demonstrated that for the same complex problem, the deep network can be much smaller, and we reveal a heuristic behind this effect. Full article

15 pages, 4905 KiB  
Article
Entropy as a High-Level Feature for XAI-Based Early Plant Stress Detection
by Maxim Lysov, Irina Maximova, Evgeny Vasiliev, Alexandra Getmanskaya and Vadim Turlapov
Entropy 2022, 24(11), 1597; https://doi.org/10.3390/e24111597 - 3 Nov 2022
Cited by 2 | Viewed by 1673
Abstract
This article is devoted to searching for high-level explainable features that can remain explainable for a wide class of objects or phenomena and become an integral part of explainable AI (XAI). The present study involved a 25-day experiment on early diagnosis of wheat stress using drought stress as an example. The state of the plants was periodically monitored via thermal infrared (TIR) and hyperspectral image (HSI) cameras. A single-layer perceptron (SLP)-based classifier was used as the main instrument in the XAI study. To provide explainability of the SLP input, the direct HSI was replaced by images of six popular vegetation indices and three HSI channels (R630, G550, and B480; referred to as indices), along with the TIR image. Furthermore, in the explainability analysis, each of the 10 images was replaced by its 6 statistical features: min, max, mean, std, max–min, and the entropy. For the SLP output explainability, seven output neurons corresponding to the key states of the plants were chosen. The inner layer of the SLP was constructed using 15 neurons, including 10 corresponding to the indices and 5 reserved neurons. The classification possibilities of all 60 features and 10 indices of the SLP classifier were studied. Study result: Entropy is the earliest high-level stress feature for all indices; entropy and an entropy-like feature (max–min) paired with one of the other statistical features can provide, for most indices, 100% accuracy (or near 100%), serving as an integral part of XAI. Full article
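The six per-image statistics listed in the abstract are easy to reproduce; the sketch below computes them for a synthetic index image, with Shannon entropy taken over an intensity histogram, which is one reasonable reading of the entropy feature.

```python
import numpy as np

def stat_features(img, bins=64):
    """The six per-image statistics used as classifier inputs in the abstract above:
    min, max, mean, std, max - min, and the Shannon entropy of the intensity
    histogram (in bits)."""
    x = np.asarray(img, dtype=float).ravel()
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return np.array([x.min(), x.max(), x.mean(), x.std(), x.max() - x.min(), entropy])

# Example on a synthetic vegetation-index image standing in for NDVI or a raw channel.
rng = np.random.default_rng(0)
index_image = rng.normal(loc=0.6, scale=0.05, size=(512, 512))
print(stat_features(index_image))
```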

18 pages, 1017 KiB  
Article
A Fast kNN Algorithm Using Multiple Space-Filling Curves
by Konstantin Barkalov, Anton Shtanyuk and Alexander Sysoyev
Entropy 2022, 24(6), 767; https://doi.org/10.3390/e24060767 - 30 May 2022
Cited by 6 | Viewed by 2417
Abstract
The paper considers a time-efficient implementation of the k nearest neighbours (kNN) algorithm. A well-known approach for accelerating the kNN algorithm is to utilise dimensionality reduction methods based on the use of space-filling curves. In this paper, we take this approach further and propose an algorithm that employs multiple space-filling curves and is faster (with comparable quality) compared with the kNN algorithm, which uses kd-trees to determine the nearest neighbours. A specific method for constructing multiple Peano curves is outlined, and statements are given about the preservation of object proximity information in the course of dimensionality reduction. An experimental comparison with known kNN implementations using kd-trees was performed using test and real-life data. Full article
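A minimal sketch of the general idea, using Z-order (Morton) codes as a stand-in for the Peano curves constructed in the paper: each of several shifted curves maps points to a one-dimensional key, candidate neighbours are gathered from small windows around the query on each curve, and the candidates are re-ranked by exact distance. The window size, number of curves and quantisation are illustrative parameters, not the paper's settings.

```python
import numpy as np

def morton_codes(P, bits=10):
    """Interleave the bits of integer grid coordinates (Z-order curve).
    bits * d must stay below 63 so the code fits in a signed 64-bit integer."""
    n, d = P.shape
    codes = np.zeros(n, dtype=np.int64)
    for b in range(bits):
        for j in range(d):
            codes |= ((P[:, j] >> b) & 1) << (b * d + j)
    return codes

def curve_knn(X, queries, k=5, n_curves=4, window=32, bits=10, seed=0):
    """Approximate kNN: candidates come from small windows around each query on
    several shifted space-filling curves, then are re-ranked by exact distance."""
    rng = np.random.default_rng(seed)
    lo, span = X.min(axis=0), np.ptp(X, axis=0) + 1e-12
    shifts = rng.random((n_curves, X.shape[1])) * span
    scale = (2 ** bits - 1) / (2.0 * span)
    results = []
    for q in queries:
        cand = set()
        for s in shifts:
            grid = np.floor((X + s - lo) * scale).astype(np.int64)
            codes = morton_codes(grid, bits)
            order = np.argsort(codes)
            qcode = morton_codes(np.floor((q + s - lo) * scale).astype(np.int64)[None, :], bits)[0]
            pos = int(np.searchsorted(codes[order], qcode))
            cand.update(order[max(0, pos - window): pos + window].tolist())
        cand = np.fromiter(cand, dtype=np.int64)
        results.append(cand[np.argsort(np.linalg.norm(X[cand] - q, axis=1))[:k]])
    return results

rng = np.random.default_rng(1)
X = rng.random((2000, 6))
print(curve_knn(X, X[:3], k=5)[0])   # indices of approximate neighbours of the first point
```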

29 pages, 2640 KiB  
Article
A Biomorphic Model of Cortical Column for Content-Based Image Retrieval
by Alexander Telnykh, Irina Nuidel, Olga Shemagina and Vladimir Yakhno
Entropy 2021, 23(11), 1458; https://doi.org/10.3390/e23111458 - 3 Nov 2021
Cited by 2 | Viewed by 2240
Abstract
How do living systems process information? The search for an answer to this question is ongoing. We have developed an intelligent video analytics system. The process of the formation of detectors for content-based image retrieval aimed at detecting objects of various types simulates the operation of the structural and functional modules for image processing in living systems. The process of detector construction is, in fact, a model of the formation (or activation) of connections in the cortical column (structural and functional unit of information processing in the human and animal brain). The process of content-based image retrieval, that is, the detection of various types of images in the developed system, reproduces the process of “triggering” a model biomorphic column, i.e., a detector in which connections are formed during the learning process. The recognition process is a reaction of the receptive field of the column to the activation by a given signal. Since the learning process of the detector can be visualized, it is possible to see how a column (a detector of specific stimuli) is formed: a face, a digit, a number, etc. The created artificial cognitive system is a biomorphic model of the recognition column of living systems. Full article

14 pages, 532 KiB  
Article
Acceleration of Global Optimization Algorithm by Detecting Local Extrema Based on Machine Learning
by Konstantin Barkalov, Ilya Lebedev and Evgeny Kozinov
Entropy 2021, 23(10), 1272; https://doi.org/10.3390/e23101272 - 28 Sep 2021
Cited by 3 | Viewed by 2513
Abstract
This paper features the study of global optimization problems and numerical methods of their solution. Such problems are computationally expensive since the objective function can be multi-extremal, nondifferentiable, and, as a rule, given in the form of a “black box”. This study used a deterministic algorithm for finding the global extremum. This algorithm is based neither on the concept of multistart, nor nature-inspired algorithms. The article provides computational rules of the one-dimensional algorithm and the nested optimization scheme which could be applied for solving multidimensional problems. Please note that the solution complexity of global optimization problems essentially depends on the presence of multiple local extrema. In this paper, we apply machine learning methods to identify regions of attraction of local minima. The use of local optimization algorithms in the selected regions can significantly accelerate the convergence of global search as it could reduce the number of search trials in the vicinity of local minima. The results of computational experiments carried out on several hundred global optimization problems of different dimensionalities presented in the paper confirm the effect of accelerated convergence (in terms of the number of search trials required to solve a problem with a given accuracy). Full article

25 pages, 3871 KiB  
Article
Learning from Scarce Information: Using Synthetic Data to Classify Roman Fine Ware Pottery
by Santos J. Núñez Jareño, Daniël P. van Helden, Evgeny M. Mirkes, Ivan Y. Tyukin and Penelope M. Allison
Entropy 2021, 23(9), 1140; https://doi.org/10.3390/e23091140 - 31 Aug 2021
Cited by 6 | Viewed by 3825
Abstract
In this article, we consider a version of the challenging problem of learning from datasets whose size is too limited to allow generalisation beyond the training set. To address the challenge, we propose to use a transfer learning approach whereby the model is first trained on a synthetic dataset replicating features of the original objects. In this study, the objects were smartphone photographs of near-complete Roman terra sigillata pottery vessels from the collection of the Museum of London. Taking the replicated features from published profile drawings of pottery forms allowed the integration of expert knowledge into the process through our synthetic data generator. After this first initial training the model was fine-tuned with data from photographs of real vessels. We show, through exhaustive experiments across several popular deep learning architectures, different test priors, and considering the impact of the photograph viewpoint and excessive damage to the vessels, that the proposed hybrid approach enables the creation of classifiers with appropriate generalisation performance. This performance is significantly better than that of classifiers trained exclusively on the original data, which shows the promise of the approach to alleviate the fundamental issue of learning from small datasets. Full article

55 pages, 3058 KiB  
Article
High-Dimensional Separability for One- and Few-Shot Learning
by Alexander N. Gorban, Bogdan Grechuk, Evgeny M. Mirkes, Sergey V. Stasenko and Ivan Y. Tyukin
Entropy 2021, 23(8), 1090; https://doi.org/10.3390/e23081090 - 22 Aug 2021
Cited by 15 | Viewed by 4900
Abstract
This work is driven by a practical question: corrections of Artificial Intelligence (AI) errors. These corrections should be quick and non-iterative. To solve this problem without modification of a legacy AI system, we propose special ‘external’ devices, correctors. Elementary correctors consist of two parts: a classifier that separates the situations with high risk of error from the situations in which the legacy AI system works well, and a new decision that should be recommended for situations with potential errors. Input signals for the correctors can be the inputs of the legacy AI system, its internal signals, and outputs. If the intrinsic dimensionality of data is high enough then the classifiers for correction of a small number of errors can be very simple. According to the blessing of dimensionality effects, even simple and robust Fisher’s discriminants can be used for one-shot learning of AI correctors. Stochastic separation theorems provide the mathematical basis for this one-shot learning. However, as the number of correctors needed grows, the cluster structure of data becomes important and a new family of stochastic separation theorems is required. We reject the classical hypothesis of the regularity of the data distribution and assume that the data can have a rich fine-grained structure with many clusters and corresponding peaks in the probability density. New stochastic separation theorems for data with fine-grained structure are formulated and proved. On the basis of these theorems, the multi-correctors for granular data are proposed. The advantages of the multi-corrector technology were demonstrated by examples of correcting errors and learning new classes of objects by a deep convolutional neural network on the CIFAR-10 dataset. The key problems of the non-classical high-dimensional data analysis are reviewed together with the basic preprocessing steps including the correlation transformation, supervised Principal Component Analysis (PCA), semi-supervised PCA, transfer component analysis, and new domain adaptation PCA. Full article
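A minimal sketch of the elementary corrector idea for a single error (the one-shot case): the "correct" feature vectors are centred, the Fisher-type discriminant direction is obtained from the single error sample through the (regularised) covariance, and new inputs are flagged when they fall on the error side of the threshold. The data here are synthetic stand-ins for the internal signals of a legacy AI system; the stochastic separation theorems in the paper quantify when such a simple separator works with high probability.

```python
import numpy as np

def fisher_corrector(Z_ok, z_err, reg=1e-3):
    """One-shot Fisher-type corrector: a linear functional separating a single
    error sample z_err from the cloud of correctly processed samples Z_ok."""
    mean = Z_ok.mean(axis=0)
    cov = np.cov(Z_ok, rowvar=False) + reg * np.eye(Z_ok.shape[1])
    w = np.linalg.solve(cov, z_err - mean)          # Fisher direction cov^{-1}(z_err - mean)
    threshold = 0.5 * w @ (z_err - mean)            # halfway between cloud centre and error
    def flags_error(z):
        return w @ (z - mean) > threshold
    return flags_error

# Toy usage: 5000 "correct" feature vectors in 200 dimensions and one error sample.
rng = np.random.default_rng(0)
Z_ok = rng.standard_normal((5000, 200))
z_err = 1.5 * rng.standard_normal(200)
corrector = fisher_corrector(Z_ok, z_err)
print(corrector(z_err), np.mean([corrector(z) for z in Z_ok[:500]]))  # error flagged; low false-positive rate
```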

38 pages, 31500 KiB  
Article
On the Scope of Lagrangian Vortex Methods for Two-Dimensional Flow Simulations and the POD Technique Application for Data Storing and Analyzing
by Kseniia Kuzmina, Ilia Marchevsky, Irina Soldatova and Yulia Izmailova
Entropy 2021, 23(1), 118; https://doi.org/10.3390/e23010118 - 18 Jan 2021
Cited by 12 | Viewed by 3665
Abstract
The possibilities of applying the pure Lagrangian vortex methods of computational fluid dynamics to viscous incompressible flow simulations are considered in relation to various problem formulations. The modification of vortex methods—the Viscous Vortex Domain method—is used which is implemented in the VM2D code developed by the authors. Problems of flow simulation around airfoils with different shapes at various Reynolds numbers are considered: the Blasius problem, the flow around circular cylinders at different Reynolds numbers, the flow around a wing airfoil at the Reynolds numbers 104 and 105, the flow around two closely spaced circular cylinders and the flow around rectangular airfoils with a different chord to the thickness ratio. In addition, the problem of the internal flow modeling in the channel with a backward-facing step is considered. To store the results of the calculations, the POD technique is used, which, in addition, allows one to investigate the structure of the flow and obtain some additional information about the properties of flow regimes. Full article
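The POD step can be illustrated with a generic snapshot-POD sketch based on the SVD (this is not the VM2D implementation): columns are flattened flow snapshots, and the number of retained modes is chosen by a cumulative energy threshold.

```python
import numpy as np

def snapshot_pod(snapshots, energy=0.99):
    """Snapshot POD: each column is one flow snapshot (e.g. a flattened vorticity
    field); returns the leading spatial modes and the time coefficients that
    reconstruct the data to the requested energy fraction."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    modes = U[:, :r]
    coeffs = np.diag(s[:r]) @ Vt[:r]
    return mean, modes, coeffs, r

# Toy usage: 200 snapshots of a 64x64 field, flattened to columns.
rng = np.random.default_rng(0)
data = rng.standard_normal((64 * 64, 200))
mean, modes, coeffs, r = snapshot_pod(data)
reconstruction = mean + modes @ coeffs
print(r, np.linalg.norm(data - reconstruction) / np.linalg.norm(data))
```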

17 pages, 787 KiB  
Article
Exploring Evolutionary Fitness in Biological Systems Using Machine Learning Methods
by Oleg Kuzenkov, Andrew Morozov and Galina Kuzenkova
Entropy 2021, 23(1), 35; https://doi.org/10.3390/e23010035 - 29 Dec 2020
Cited by 7 | Viewed by 2520
Abstract
Here, we propose a computational approach to explore evolutionary fitness in complex biological systems based on empirical data using artificial neural networks. The essence of our approach is the following. We first introduce a ranking order of inherited elements (behavioral strategies and/or life history traits) in the considered self-reproducing systems: we use available empirical information on selective advantages of such elements. Next, we introduce evolutionary fitness, which is formally described as a certain function reflecting the introduced ranking order. Then, we approximate fitness in the space of key parameters using a Taylor expansion. To estimate the coefficients in the Taylor expansion, we utilize artificial neural networks: we construct a surface to separate the domains of superior and inferior ranking of pairs of inherited elements in the space of parameters. Finally, we use the obtained approximation of the fitness surface to find the evolutionarily stable (optimal) strategy which maximizes fitness. As an ecologically important study case, we apply our approach to explore the evolutionarily stable diel vertical migration of zooplankton in marine and freshwater ecosystems. Using machine learning we reconstruct the fitness function of herbivorous zooplankton from empirical data and predict the daily trajectory of a dominant species in the northeastern Black Sea. Full article

13 pages, 2524 KiB  
Article
ML-Based Analysis of Particle Distributions in High-Intensity Laser Experiments: Role of Binning Strategy
by Yury Rodimkov, Evgeny Efimenko, Valentin Volokitin, Elena Panova, Alexey Polovinkin, Iosif Meyerov and Arkady Gonoskov
Entropy 2021, 23(1), 21; https://doi.org/10.3390/e23010021 - 25 Dec 2020
Cited by 2 | Viewed by 2493
Abstract
When entering the phase of big data processing and statistical inferences in experimental physics, the efficient use of machine learning methods may require optimal data preprocessing methods and, in particular, an optimal balance between details and noise. In experimental studies of strong-field quantum electrodynamics with intense lasers, this balance concerns data binning for the observed distributions of particles and photons. Here we analyze the aspect of binning with respect to different machine learning methods (Support Vector Machine (SVM), Gradient Boosting Trees (GBT), Fully-Connected Neural Network (FCNN), Convolutional Neural Network (CNN)) using numerical simulations that mimic expected properties of upcoming experiments. We see that binning can crucially affect the performance of SVM and GBT, and, to a lesser extent, FCNN and CNN. This can be interpreted as the latter methods being able to effectively learn the optimal binning, discarding unnecessary information. Nevertheless, given limited training sets, the results indicate that the efficiency can be increased by optimizing the binning scale along with other hyperparameters. We present specific measurements of accuracy that can be useful for the planning of experiments in the specified research area. Full article
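A toy illustration of the binning question, with synthetic gamma-distributed "spectra" standing in for the simulated particle distributions: histograms with different bin counts are fed to an SVM and a gradient-boosting classifier, so the dependence of cross-validated accuracy on the binning scale can be inspected. None of the parameters below come from the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def simulated_spectrum(shift, n_particles=2000):
    """Stand-in for a particle energy distribution from one simulated 'shot'."""
    return rng.gamma(shape=2.0 + shift, scale=1.0, size=n_particles)

# Two classes of shots differing slightly in an underlying physical parameter.
shots = [simulated_spectrum(s) for s in [0.0] * 200 + [0.15] * 200]
labels = np.array([0] * 200 + [1] * 200)

for bins in (4, 16, 64, 256):
    edges = np.linspace(0, 15, bins + 1)
    X = np.array([np.histogram(s, bins=edges, density=True)[0] for s in shots])
    for name, clf in [("SVM", SVC()), ("GBT", GradientBoostingClassifier())]:
        acc = cross_val_score(clf, X, labels, cv=3).mean()
        print(f"bins={bins:3d}  {name}: {acc:.2f}")
```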

22 pages, 748 KiB  
Article
Integrated Information in the Spiking–Bursting Stochastic Model
by Oleg Kanakov, Susanna Gordleeva and Alexey Zaikin
Entropy 2020, 22(12), 1334; https://doi.org/10.3390/e22121334 - 24 Nov 2020
Cited by 13 | Viewed by 2844
Abstract
Integrated information has been recently suggested as a possible measure to identify a necessary condition for a system to display conscious features. Recently, we have shown that astrocytes contribute to the generation of integrated information through the complex behavior of neuron–astrocyte networks. Still, it remained unclear which underlying mechanisms governing the complex behavior of a neuron–astrocyte network are essential to generating positive integrated information. This study presents an analytic consideration of this question based on exact and asymptotic expressions for integrated information in terms of exactly known probability distributions for a reduced mathematical model (discrete-time, discrete-state stochastic model) reflecting the main features of the “spiking–bursting” dynamics of a neuron–astrocyte network. The analysis was performed in terms of the empirical “whole minus sum” version of integrated information in comparison to the “decoder based” version. The “whole minus sum” information may change sign, and an interpretation of this transition in terms of “net synergy” is available in the literature. This motivated our particular interest in the sign of the “whole minus sum” information in our analytical considerations. The behaviors of the “whole minus sum” and “decoder based” information measures are found to bear a lot of similarity—they have mutual asymptotic convergence as time-uncorrelated activity increases, and the sign transition of the “whole minus sum” information is associated with a rapid growth in the “decoder based” information. The study aims at creating a theoretical framework for using the spiking–bursting model as an analytically tractable reference point for applying integrated information concepts to systems exhibiting similar bursting behavior. The model can also be of interest as a new discrete-state test bench for different formulations of integrated information. Full article
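For reference, the empirical "whole minus sum" measure discussed above is commonly written as follows, for a partition of the system X into parts M_1, ..., M_K and a time lag τ, with I denoting mutual information; its sign change is the transition interpreted as "net synergy" in the abstract. This is the generic form of the measure, not a formula copied from the paper.

```latex
\Phi_{\mathrm{WMS}} \;=\; I\!\left(X_{t-\tau};\, X_{t}\right) \;-\; \sum_{k=1}^{K} I\!\left(M_{k,\,t-\tau};\, M_{k,\,t}\right)
```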

20 pages, 1785 KiB  
Article
Linear and Fisher Separability of Random Points in the d-Dimensional Spherical Layer and Inside the d-Dimensional Cube
by Sergey Sidorov and Nikolai Zolotykh
Entropy 2020, 22(11), 1281; https://doi.org/10.3390/e22111281 - 12 Nov 2020
Cited by 2 | Viewed by 2108
Abstract
Stochastic separation theorems play important roles in high-dimensional data analysis and machine learning. It turns out that in high dimensional space, any point of a random set of points can be separated from other points by a hyperplane with high probability, even if the number of points is exponential in terms of dimensions. This and similar facts can be used for constructing correctors for artificial intelligent systems, for determining the intrinsic dimensionality of data and for explaining various natural intelligence phenomena. In this paper, we refine the estimations for the number of points and for the probability in stochastic separation theorems, thereby strengthening some results obtained earlier. We propose the boundaries for linear and Fisher separability, when the points are drawn randomly, independently and uniformly from a d-dimensional spherical layer and from the cube. These results allow us to better outline the applicability limits of the stochastic separation theorems in applications. Full article
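An empirical companion to the theorems: the sketch below samples points uniformly from the d-dimensional cube, whitens them, and measures the fraction of points x that are Fisher-separable from every other point y in the sense ⟨x, y⟩ < α⟨x, x⟩ (the criterion used in this line of work; α = 0.8 here is an illustrative threshold), so the growth of separability with dimension can be observed directly.

```python
import numpy as np

def fisher_separable_fraction(X, alpha=0.8):
    """Fraction of points x that are Fisher-separable from all other points y,
    i.e. <x, y> < alpha * <x, x>, after centring and whitening the sample."""
    X = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    X = X @ evecs / np.sqrt(np.maximum(evals, 1e-12))    # whitening
    G = X @ X.T                                          # Gram matrix of inner products
    diag = np.diag(G)
    separable = (G < alpha * diag[:, None]) | np.eye(len(X), dtype=bool)
    return separable.all(axis=1).mean()

rng = np.random.default_rng(0)
for d in (2, 10, 30, 100):
    X = rng.uniform(-1, 1, size=(2000, d))   # uniform points in the d-dimensional cube
    print(d, fisher_separable_fraction(X))
```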

21 pages, 4735 KiB  
Article
Minimum Spanning vs. Principal Trees for Structured Approximations of Multi-Dimensional Datasets
by Alexander Chervov, Jonathan Bac and Andrei Zinovyev
Entropy 2020, 22(11), 1274; https://doi.org/10.3390/e22111274 - 11 Nov 2020
Cited by 4 | Viewed by 2771
Abstract
Construction of graph-based approximations for multi-dimensional data point clouds is widely used in a variety of areas. Notable examples of applications of such approximators are cellular trajectory inference in single-cell data analysis, analysis of clinical trajectories from synchronic datasets, and skeletonization of images. Several methods have been proposed to construct such approximating graphs, with some based on computation of minimum spanning trees and some based on principal graphs generalizing principal curves. In this article we propose a methodology to compare and benchmark these two graph-based data approximation approaches, as well as to define their hyperparameters. The main idea is to avoid comparing graphs directly, but at first to induce clustering of the data point cloud from the graph approximation and, secondly, to use well-established methods to compare and score the data cloud partitioning induced by the graphs. In particular, mutual information-based approaches prove to be useful in this context. The induced clustering is based on decomposing a graph into non-branching segments, and then clustering the data point cloud by the nearest segment. Such a method allows efficient comparison of graph-based data approximations of arbitrary topology and complexity. The method is implemented in Python using the standard scikit-learn library which provides high speed and efficiency. As a demonstration of the methodology we analyse and compare graph-based data approximation methods using synthetic as well as real-life single cell datasets. Full article

31 pages, 2164 KiB  
Article
Fractional Norms and Quasinorms Do Not Help to Overcome the Curse of Dimensionality
by Evgeny M. Mirkes, Jeza Allohibi and Alexander Gorban
Entropy 2020, 22(10), 1105; https://doi.org/10.3390/e22101105 - 30 Sep 2020
Cited by 27 | Viewed by 5430
Abstract
The curse of dimensionality causes the well-known and widely discussed problems for machine learning methods. There is a hypothesis that using the Manhattan distance and even fractional lp quasinorms (for p less than 1) can help to overcome the curse of dimensionality in classification problems. In this study, we systematically test this hypothesis. It is illustrated that fractional quasinorms have a greater relative contrast and coefficient of variation than the Euclidean norm l2, but it is shown that this difference decays with increasing space dimension. It has been demonstrated that the concentration of distances shows qualitatively the same behaviour for all tested norms and quasinorms. It is shown that a greater relative contrast does not mean a better classification quality. It was revealed that for different databases the best (worst) performance was achieved under different norms (quasinorms). A systematic comparison shows that the difference in the performance of kNN classifiers for lp at p = 0.5, 1, and 2 is statistically insignificant. Analysis of curse and blessing of dimensionality requires careful definition of data dimensionality that rarely coincides with the number of attributes. We systematically examined several intrinsic dimensions of the data. Full article
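The relative contrast comparison is easy to reproduce in a few lines: for uniform random data and a random query, the sketch below computes (D_max − D_min)/D_min for the l_0.5 quasinorm, l_1 and l_2 across increasing dimensions, so the decay of contrast with dimension can be seen for all three.

```python
import numpy as np

def relative_contrast(X, q, p):
    """Relative contrast (D_max - D_min) / D_min of l_p distances (quasinorm
    'distances' for p < 1) from a query point q to the sample X."""
    d = np.sum(np.abs(X - q) ** p, axis=1) ** (1.0 / p)
    return (d.max() - d.min()) / d.min()

rng = np.random.default_rng(0)
for dim in (3, 10, 100, 1000):
    X = rng.random((2000, dim))
    q = rng.random(dim)
    contrasts = {p: relative_contrast(X, q, p) for p in (0.5, 1.0, 2.0)}
    print(dim, {p: round(c, 3) for p, c in contrasts.items()})
```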

Review


19 pages, 309 KiB  
Review
Limit Theorems as Blessing of Dimensionality: Neural-Oriented Overview
by Vladik Kreinovich and Olga Kosheleva
Entropy 2021, 23(5), 501; https://doi.org/10.3390/e23050501 - 22 Apr 2021
Cited by 3 | Viewed by 1980
Abstract
As a system becomes more complex, at first, its description and analysis becomes more complicated. However, a further increase in the system’s complexity often makes this analysis simpler. A classical example is Central Limit Theorem: when we have a few independent sources of uncertainty, the resulting uncertainty is very difficult to describe, but as the number of such sources increases, the resulting distribution gets close to an easy-to-analyze normal one—and indeed, normal distributions are ubiquitous. We show that such limit theorems often make analysis of complex systems easier—i.e., lead to blessing of dimensionality phenomenon—for all the aspects of these systems: the corresponding transformation, the system’s uncertainty, and the desired result of the system’s analysis. Full article
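A short numerical illustration of the central example: standardised sums of n independent uniform sources are compared with the normal law via the Kolmogorov–Smirnov statistic, which shrinks as the number of sources grows.

```python
import numpy as np
from scipy.stats import kstest

# Sums of n independent uniform(0, 1) sources: mean n/2, variance n/12.
rng = np.random.default_rng(0)
for n in (1, 2, 10, 100):
    sums = rng.random((20000, n)).sum(axis=1)
    z = (sums - n / 2) / np.sqrt(n / 12.0)       # standardise the sum
    print(n, kstest(z, "norm").statistic)        # KS distance to the normal law shrinks with n
```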

Other


12 pages, 15573 KiB  
Technical Note
Scikit-Dimension: A Python Package for Intrinsic Dimension Estimation
by Jonathan Bac, Evgeny M. Mirkes, Alexander N. Gorban, Ivan Tyukin and Andrei Zinovyev
Entropy 2021, 23(10), 1368; https://doi.org/10.3390/e23101368 - 19 Oct 2021
Cited by 42 | Viewed by 5590
Abstract
Dealing with uncertainty in applications of machine learning to real-life data critically depends on the knowledge of intrinsic dimensionality (ID). A number of methods have been suggested for the purpose of estimating ID, but no standard package to easily apply them one by one or all at once has been implemented in Python. This technical note introduces scikit-dimension, an open-source Python package for intrinsic dimension estimation. The scikit-dimension package provides a uniform implementation of most of the known ID estimators based on the scikit-learn application programming interface to evaluate the global and local intrinsic dimension, as well as generators of synthetic toy and benchmark datasets widespread in the literature. The package is developed with tools assessing the code quality, coverage, unit testing and continuous integration. We briefly describe the package and demonstrate its use in a large-scale (more than 500 datasets) benchmarking of methods for ID estimation for real-life and synthetic data. Full article
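A minimal usage sketch, assuming the scikit-learn-style API described above (estimator classes under skdim.id with a fit method and a dimension_ attribute, as in the package documentation); the data are a 5-dimensional Gaussian cloud zero-padded into 50 ambient dimensions, so both estimators should report an intrinsic dimension close to 5.

```python
import numpy as np
import skdim   # pip install scikit-dimension

# Synthetic data with a known intrinsic dimension: a 5-dimensional Gaussian
# cloud embedded in 50 ambient dimensions by zero-padding.
rng = np.random.default_rng(0)
X = np.hstack([rng.standard_normal((1000, 5)), np.zeros((1000, 45))])

# Two of the estimators provided by the package.
print(skdim.id.lPCA().fit(X).dimension_)    # global ID via local PCA
print(skdim.id.TwoNN().fit(X).dimension_)   # global ID via the TwoNN estimator
```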
