Foundations, Volume 5, Issue 2 (June 2025) – 8 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
25 pages, 325 KiB  
Review
Advances in Fractional Lyapunov-Type Inequalities: A Comprehensive Review
by Sotiris K. Ntouyas, Bashir Ahmad and Jessada Tariboon
Foundations 2025, 5(2), 18; https://doi.org/10.3390/foundations5020018 - 27 May 2025
Abstract
In this survey, we have included the recent results on Lyapunov-type inequalities for differential equations of fractional order associated with Dirichlet, nonlocal, multi-point, anti-periodic, and discrete boundary conditions. Our results involve a variety of fractional derivatives such as Riemann–Liouville, Caputo, Hilfer–Hadamard, ψ-Riemann–Liouville, Atangana–Baleanu, tempered, half-linear, and discrete fractional derivatives.
(This article belongs to the Section Mathematical Sciences)
22 pages, 365 KiB  
Article
Entropy Production Assumption and Objectivity in Continuum Physics Modelling
by Angelo Morro
Foundations 2025, 5(2), 17; https://doi.org/10.3390/foundations5020017 - 22 May 2025
Viewed by 148
Abstract
This paper revisits some aspects connected with the methods for the determination of thermodynamically consistent models. While the concepts apply to the general context of continuum physics, the details are developed for the modelling of deformable dielectrics. The symmetry condition arising from the balance of angular momentum is viewed as a constraint for the constitutive equations and is shown to be satisfied by sets of objective fields that account jointly for deformation and electric field. The second law of thermodynamics is considered in a generalized form where the entropy production is given by a constitutive function possibly independent of the other constitutive functions. Furthermore, a representation formula is applied for solving the Clausius–Duhem inequality with respect to the chosen unknown fields.
(This article belongs to the Section Physical Sciences)
12 pages, 414 KiB  
Article
Maxwell’s Demon Is Foiled by the Entropy Cost of Measurement, Not Erasure
by Ruth E. Kastner
Foundations 2025, 5(2), 16; https://doi.org/10.3390/foundations5020016 - 22 May 2025
Viewed by 397
Abstract
I dispute the conventional claim that the second law of thermodynamics is saved from a “Maxwell’s demon” by the entropy cost of information erasure and show that instead it is measurement that incurs the entropy cost. Thus, Brillouin, who identified measurement as savior of the second law, was essentially correct, and putative refutations of his view, such as Bennett’s claim to measure without entropy cost, are seen to fail when the applicable physics is taken into account. I argue that the tradition of attributing the defeat of Maxwell’s demon to erasure rather than to measurement arose from unphysical classical idealizations that do not hold for real gas molecules, as well as a physically ungrounded recasting of physical thermodynamical processes into computational and information-theoretic conceptualizations. I argue that the fundamental principle that saves the second law is the quantum uncertainty principle applying to the need to localize physical states to precise values of observables in order to effect the desired disequilibria aimed at violating the second law. I obtain the specific entropy cost for localizing a molecule in the Szilard engine and show that it coincides with the quantity attributed to Landauer’s principle. I also note that an experiment characterized as upholding an entropy cost of erasure in a “quantum Maxwell’s demon” actually demonstrates an entropy cost of measurement.
(This article belongs to the Section Physical Sciences)
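As an editorial aside (not part of the paper), the Landauer-scale figure the abstract refers to, an entropy cost of k_B ln 2 per binary localization and a corresponding energy cost of k_B T ln 2 at temperature T, is easy to evaluate numerically; a minimal sketch:

```python
import math

# Boltzmann constant (exact by the 2019 SI definition)
k_B = 1.380649e-23  # J/K
T = 300.0           # illustrative room temperature, K

# Landauer/Brillouin bound: the minimum entropy cost of one binary
# measurement or erasure is k_B * ln 2, with energy cost k_B * T * ln 2
delta_S = k_B * math.log(2)      # J/K
delta_E = k_B * T * math.log(2)  # J

print(f"{delta_S:.3e}")  # 9.570e-24 J/K
print(f"{delta_E:.3e}")  # 2.871e-21 J
```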
11 pages, 2150 KiB  
Article
Physical and Logical Synchronization of Clocks: The Ramsey Approach
by Edward Bormashenko
Foundations 2025, 5(2), 15; https://doi.org/10.3390/foundations5020015 - 28 Apr 2025
Viewed by 352
Abstract
Ramsey analysis is applied to the problem of the relativistic and quantum synchronization of clocks. Various protocols of synchronization are addressed. Einstein and Eddington special relativity synchronization procedures are considered, and quantum synchronization is discussed. Clocks are seen as the vertices of the graph. Clocks may be synchronized or unsynchronized. Thus, introducing complete, bi-colored Ramsey graphs emerging from the lattices of clocks becomes possible. The transitivity of synchronization plays a key role in the coloring of the Ramsey graph. Einstein synchronization is transitive, while general relativity and quantum synchronization procedures are not. This fact influences the value of the Ramsey number established for the synchronization graph arising from the lattice of clocks. Any lattice built of six clocks, synchronized with quantum entanglement, will inevitably contain a monochromatic triangle. The transitive synchronization of logical clocks is discussed. The interrelation between the symmetry of the clock lattice and the structure of the synchronization graph is addressed. Ramsey analysis of synchronization is important for the synchronization of computers in networks, for the LIGO and Virgo instruments intended for the registration of gravitational waves, and for GPS time-based synchronization.
(This article belongs to the Section Physical Sciences)
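The abstract's statement about six entanglement-synchronized clocks is an instance of the classical Ramsey fact R(3,3) = 6: every 2-coloring of the edges of the complete graph K6 contains a monochromatic triangle, while K5 admits a coloring with none. A brute-force check (our illustration, not code from the paper):

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    # coloring maps each edge (i, j), i < j, to a color 0 or 1
    for a, b, c in combinations(range(n), 3):
        if coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]:
            return True
    return False

def all_colorings_have_mono_triangle(n):
    # Exhaustively try every 2-coloring of the edges of K_n
    edges = list(combinations(range(n), 2))
    for colors in product([0, 1], repeat=len(edges)):
        if not has_mono_triangle(n, dict(zip(edges, colors))):
            return False
    return True

print(all_colorings_have_mono_triangle(5))  # False: K5 has a triangle-free 2-coloring
print(all_colorings_have_mono_triangle(6))  # True: R(3,3) = 6
```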
18 pages, 1520 KiB  
Article
Bayesian Nonparametric Inference in Elliptic PDEs: Convergence Rates and Implementation
by Matteo Giordano
Foundations 2025, 5(2), 14; https://doi.org/10.3390/foundations5020014 - 23 Apr 2025
Viewed by 223
Abstract
Parameter identification problems in partial differential equations (PDEs) consist in determining one or more functional coefficients in a PDE. In this article, the Bayesian nonparametric approach to such problems is considered. Focusing on the representative example of inferring the diffusivity function in an elliptic PDE from noisy observations of the PDE solution, the performance of Bayesian procedures based on Gaussian process priors is investigated. Building on recent developments in the literature, we derive novel asymptotic theoretical guarantees that establish posterior consistency and convergence rates for methodologically attractive Gaussian series priors based on the Dirichlet–Laplacian eigenbasis. An implementation of the associated posterior-based inference is provided and illustrated via a numerical simulation study, where excellent agreement with the theory is obtained.
(This article belongs to the Section Mathematical Sciences)
14 pages, 280 KiB  
Article
Fault-Tolerant Metric Dimension in Carbon Networks
by Kamran Azhar, Asim Nadeem and Yilun Shang
Foundations 2025, 5(2), 13; https://doi.org/10.3390/foundations5020013 - 16 Apr 2025
Viewed by 237
Abstract
In this paper, we study the fault-tolerant metric dimension in graph theory, an important measure against failures in unique vertex identification. The metric dimension of a graph is the smallest number of vertices required to uniquely identify every other vertex based on their distances from these chosen vertices. Building on existing work, we explore fault tolerance by considering the minimal number of vertices needed to ensure that all other vertices remain uniquely identifiable even if a specified number of these vertices fails. We compute the fault-tolerant metric dimension of various chemical graphs, namely fullerenes, benzene, and polyphenyl graphs.
(This article belongs to the Section Mathematical Sciences)
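The definitions in the abstract can be made concrete on a small example. In this hedged sketch (ours, using the cycle C6 rather than the fullerene or polyphenyl graphs studied in the paper), a set S resolves a graph if the vectors of distances to S are pairwise distinct, and S is fault-tolerant resolving if it still resolves after any single vertex of S is removed:

```python
from itertools import combinations

def cycle_distances(n):
    # All-pairs shortest-path distances on the cycle C_n
    return {(i, j): min(abs(i - j), n - abs(i - j))
            for i in range(n) for j in range(n)}

def resolves(S, n, dist):
    # S resolves C_n if the distance vectors to S are pairwise distinct
    vectors = {tuple(dist[(v, s)] for s in S) for v in range(n)}
    return len(vectors) == n

def metric_dimension(n):
    dist = cycle_distances(n)
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if resolves(S, n, dist):
                return k

def fault_tolerant_metric_dimension(n):
    # S is fault-tolerant resolving iff S minus any one vertex still resolves
    dist = cycle_distances(n)
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if all(resolves(tuple(x for x in S if x != v), n, dist) for v in S):
                return k

print(metric_dimension(6))                 # 2
print(fault_tolerant_metric_dimension(6))  # 3
```

As the output illustrates, fault tolerance costs at least one extra landmark: for C6, two vertices suffice to resolve the graph, but three are needed to survive the failure of any one of them.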
10 pages, 259 KiB  
Perspective
Revisiting the Definition of Vectors—From ‘Magnitude and Direction’ to Abstract Tuples
by Reinout Heijungs
Foundations 2025, 5(2), 12; https://doi.org/10.3390/foundations5020012 - 15 Apr 2025
Viewed by 233
Abstract
Vectors are almost always introduced as objects having magnitude and direction. Following that idea, textbooks and courses introduce the concept of a vector norm and the angle between two vectors. While this is correct and useful for vectors in two- or three-dimensional Euclidean space, these concepts make no sense for more general vectors, which are defined in abstract, non-metric vector spaces. This is the case even when an inner product exists. Here, we analyze how several textbooks are imprecise in presenting the restricted validity of the expressions for the norm and the angle. We also study one concrete example, the so-called ‘vector-based sustainability analytics’, in which scientists have gone astray by mistaking an abstract vector for a Euclidean vector. We recommend that future textbook authors introduce the distinction between vectors that do and do not have magnitude and direction, even in cases where an inner product exists.
(This article belongs to the Section Mathematical Sciences)
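As a hedged aside (not taken from the paper), the angle formula under discussion, cos θ = ⟨u, v⟩ / (‖u‖ ‖v‖), illustrates the point: the formula returns a number for any tuple, but that number is only geometrically meaningful in Euclidean space:

```python
import math

def angle(u, v):
    # cos(theta) = <u, v> / (||u|| * ||v||): valid geometry in Euclidean space
    dot = sum(a * b for a, b in zip(u, v))
    return math.degrees(math.acos(dot / (math.hypot(*u) * math.hypot(*v))))

print(round(angle((1, 0), (0, 1))))  # 90: meaningful for geometric vectors

# For an abstract tuple such as (mass in kg, price in EUR), the same formula
# still returns a number, but that number carries no geometric meaning,
# and it changes if either coordinate is rescaled to different units.
print(round(angle((2.0, 10.0), (4.0, 5.0))))
```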
31 pages, 926 KiB  
Article
Introducing an Evolutionary Method to Create the Bounds of Artificial Neural Networks
by Ioannis G. Tsoulos, Vasileios Charilogis and Dimitrios Tsalikakis
Foundations 2025, 5(2), 11; https://doi.org/10.3390/foundations5020011 - 25 Mar 2025
Viewed by 250
Abstract
Artificial neural networks are widely used across scientific fields and in a multitude of practical applications. In recent years, many scientific publications have addressed the effective training of their parameters, but in many cases overfitting problems appear, where the artificial neural network performs poorly on data that were not present during training. This text proposes the incorporation of a three-stage evolutionary technique, rooted in differential evolution, for the effective training of the parameters of artificial neural networks and the avoidance of overfitting. The new method effectively constructs the parameter value range of an artificial neural network with one processing level and sigmoid outputs, both reducing the training error and preventing the network from overfitting. The new technique was successfully applied to a wide range of problems from the relevant literature, with extremely promising results. In the conducted experiments, the proposed method reduced the average classification error by 30% and the average regression error by 45%, compared to the genetic algorithm.
(This article belongs to the Section Mathematical Sciences)
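The paper's three-stage method is not reproduced here, but the differential evolution step it builds on (mutation, binomial crossover, greedy selection) can be sketched on a toy objective; all parameter values below are illustrative defaults of ours, not the authors':

```python
import random

def differential_evolution(f, dim, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=300, seed=0):
    # Classic DE/rand/1/bin: mutation, binomial crossover, greedy selection
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Pick three distinct donors, none equal to the target vector
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantees one mutated coordinate
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            f_trial = f(trial)
            if f_trial <= fit[i]:  # greedy selection keeps the better vector
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Minimize the 3-dimensional sphere function as a toy objective
best_x, best_f = differential_evolution(lambda x: sum(v * v for v in x),
                                        dim=3, bounds=(-5.0, 5.0))
print(best_f)  # close to 0, the global minimum
```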