Foundations, Volume 5, Issue 2 (June 2025) – 4 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
18 pages, 1520 KiB  
Article
Bayesian Nonparametric Inference in Elliptic PDEs: Convergence Rates and Implementation
by Matteo Giordano
Foundations 2025, 5(2), 14; https://doi.org/10.3390/foundations5020014 - 23 Apr 2025
Abstract
Parameter identification problems in partial differential equations (PDEs) consist in determining one or more functional coefficients in a PDE. In this article, the Bayesian nonparametric approach to such problems is considered. Focusing on the representative example of inferring the diffusivity function in an elliptic PDE from noisy observations of the PDE solution, the performance of Bayesian procedures based on Gaussian process priors is investigated. Building on recent developments in the literature, we derive novel asymptotic theoretical guarantees that establish posterior consistency and convergence rates for methodologically attractive Gaussian series priors based on the Dirichlet–Laplacian eigenbasis. An implementation of the associated posterior-based inference is provided and illustrated via a numerical simulation study, where excellent agreement with the theory is obtained.
(This article belongs to the Section Mathematical Sciences)
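To make the prior concrete, here is a minimal sketch (not the paper's implementation) that draws samples from a truncated Gaussian series prior built on the Dirichlet–Laplacian eigenbasis; the one-dimensional domain (0, 1), the truncation level K, and the smoothness parameter alpha are illustrative assumptions.

```python
import numpy as np

# Sketch: sample from a Gaussian series prior on the Dirichlet-Laplacian
# eigenbasis of (0, 1), where e_k(x) = sqrt(2) sin(k*pi*x) has eigenvalue
# lam_k = (k*pi)^2.  The coefficient decay lam_k^(-alpha/2) is a standard
# way to encode a smoothness level alpha; K and alpha below are illustrative.

rng = np.random.default_rng(0)

def sample_prior(x, K=100, alpha=2.0):
    """One draw of f(x) = sum_k lam_k^(-alpha/2) g_k e_k(x), g_k ~ N(0, 1)."""
    k = np.arange(1, K + 1)                             # eigenbasis indices
    lam = (k * np.pi) ** 2                              # Dirichlet-Laplacian eigenvalues
    g = rng.standard_normal(K)                          # i.i.d. Gaussian weights
    e = np.sqrt(2.0) * np.sin(np.outer(x, k) * np.pi)   # e_k(x) on the grid
    return e @ (lam ** (-alpha / 2) * g)                # truncated Gaussian series

x = np.linspace(0.0, 1.0, 200)
draws = np.stack([sample_prior(x) for _ in range(5)])   # five prior draws
print(draws.shape)  # (5, 200); each draw vanishes at the boundary
```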
14 pages, 280 KiB  
Article
Fault-Tolerant Metric Dimension in Carbon Networks
by Kamran Azhar, Asim Nadeem and Yilun Shang
Foundations 2025, 5(2), 13; https://doi.org/10.3390/foundations5020013 - 16 Apr 2025
Viewed by 99
Abstract
In this paper, we study the fault-tolerant metric dimension in graph theory, an important measure of robustness against failures in unique vertex identification. The metric dimension of a graph is the smallest number of vertices required to uniquely identify every other vertex based on their distances from these chosen vertices. Building on existing work, we explore fault tolerance by considering the minimal number of vertices needed to ensure that all other vertices remain uniquely identifiable even if a specified number of these vertices fails. We compute the fault-tolerant metric dimension of various chemical graphs, namely fullerene, benzene, and polyphenyl graphs.
(This article belongs to the Section Mathematical Sciences)
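For readers unfamiliar with the definition, the following brute-force sketch (illustrative only, exponential in the number of vertices, and not the paper's method) computes the fault-tolerant metric dimension of a small graph; the 6-cycle used as input is a toy stand-in for the chemical graphs studied in the paper.

```python
from itertools import combinations
from collections import deque

# A set W is resolving if the distance vectors to W distinguish all vertex
# pairs; it is fault-tolerant resolving if W \ {w} is still resolving for
# every w in W.  The fault-tolerant metric dimension is the minimum size.

def bfs_distances(adj, source):
    """All shortest-path distances from `source` in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_resolving(adj, dist, W):
    """True if the distance vectors to W separate every pair of vertices."""
    codes = {v: tuple(dist[w][v] for w in W) for v in adj}
    return len(set(codes.values())) == len(adj)

def ft_metric_dimension(adj):
    dist = {v: bfs_distances(adj, v) for v in adj}
    vertices = sorted(adj)
    for size in range(1, len(vertices) + 1):
        for W in combinations(vertices, size):
            if all(is_resolving(adj, dist, [w for w in W if w != x]) for x in W):
                return size, W
    return None

# Example: the 6-cycle C6 (toy stand-in for the paper's chemical graphs).
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(ft_metric_dimension(c6))  # expected: (3, (0, 1, 2))
```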
10 pages, 259 KiB  
Perspective
Revisiting the Definition of Vectors—From ‘Magnitude and Direction’ to Abstract Tuples
by Reinout Heijungs
Foundations 2025, 5(2), 12; https://doi.org/10.3390/foundations5020012 - 15 Apr 2025
Viewed by 82
Abstract
Vectors are almost always introduced as objects having magnitude and direction. Following that idea, textbooks and courses introduce the concept of a vector norm and the angle between two vectors. While this is correct and useful for vectors in two- or three-dimensional Euclidean space, these concepts make no sense for more general vectors that are defined in abstract, non-metric vector spaces. This is the case even when an inner product exists. Here, we analyze how several textbooks are imprecise in presenting the restricted validity of the expressions for the norm and the angle. We also study one concrete example, the so-called ‘vector-based sustainability analytics’, in which scientists have gone astray by mistaking an abstract vector for a Euclidean vector. We recommend that future textbook authors introduce the distinction between vectors that do and do not have magnitude and direction, even in cases where an inner product exists.
(This article belongs to the Section Mathematical Sciences)
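To make the abstract's point concrete, the formulas at issue are the standard inner-product constructions (textbook definitions, not taken from the article), together with an illustrative mixed-unit tuple for which they break down:

```latex
% In an inner-product space, the induced norm and angle are defined by
\[
  \|\mathbf{v}\| = \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle},
  \qquad
  \cos\theta = \frac{\langle \mathbf{u}, \mathbf{v} \rangle}{\|\mathbf{u}\|\,\|\mathbf{v}\|}.
\]
% For a tuple of dimensioned quantities, however, the same recipe mixes
% incompatible units under the square root, so "magnitude" loses meaning:
\[
  \mathbf{v} = (2\,\mathrm{kg},\; 3\,\mathrm{m})
  \;\Longrightarrow\;
  \|\mathbf{v}\| \stackrel{?}{=} \sqrt{(2\,\mathrm{kg})^2 + (3\,\mathrm{m})^2}.
\]
```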
31 pages, 926 KiB  
Article
Introducing an Evolutionary Method to Create the Bounds of Artificial Neural Networks
by Ioannis G. Tsoulos, Vasileios Charilogis and Dimitrios Tsalikakis
Foundations 2025, 5(2), 11; https://doi.org/10.3390/foundations5020011 - 25 Mar 2025
Viewed by 168
Abstract
Artificial neural networks are widely used across various scientific fields and in a multitude of practical applications. In recent years, many publications have addressed the effective training of their parameters, but overfitting problems often appear, where the artificial neural network performs poorly on data that were not present during training. This paper proposes the incorporation of a three-stage evolutionary technique, rooted in differential evolution, for effectively training the parameters of artificial neural networks while avoiding overfitting. The new method constructs the parameter value range of an artificial neural network with a single processing level and sigmoid outputs, both reducing the training error and preventing overfitting. The technique was applied to a wide range of problems from the relevant literature with very promising results: in the conducted experiments, the proposed method reduced the average classification error by 30% and the average regression error by 45% compared with a genetic algorithm.
(This article belongs to the Section Mathematical Sciences)
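The abstract names differential evolution as the root technique. The sketch below shows classic DE/rand/1/bin minimizing the training error of a tiny one-hidden-layer sigmoid network on toy data; the paper's three-stage bound-construction scheme is not reproduced, and all sizes and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Toy data and a tiny 2-H-1 sigmoid network, used only as a stand-in
# objective for differential evolution.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (50, 2))               # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)     # toy binary targets

H = 4                                          # hidden units (illustrative)
DIM = H * 2 + H + H + 1                        # weights + biases of a 2-H-1 net

def mse(params):
    """Training MSE of the network encoded as a flat parameter vector."""
    W1 = params[:H * 2].reshape(H, 2)
    b1 = params[H * 2:H * 3]
    w2 = params[H * 3:H * 4]
    b2 = params[-1]
    hidden = 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))
    out = 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))
    return np.mean((out - y) ** 2)

def differential_evolution(f, dim, pop_size=30, F=0.8, CR=0.9, gens=200):
    """Classic DE/rand/1/bin with greedy one-to-one selection."""
    pop = rng.uniform(-5, 5, (pop_size, dim))        # initial population
    fit = np.array([f(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])  # DE/rand/1 mutation
            cross = rng.random(dim) < CR             # binomial crossover mask
            cross[rng.integers(dim)] = True          # keep at least one gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                    # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

params, err = differential_evolution(mse, DIM)
print(f"final training MSE: {err:.4f}")
```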