Foundations, Volume 5, Issue 2 (June 2025) – 12 articles

Cover Story: Parameter identification problems in partial differential equations (PDEs) consist in determining one or more functional coefficients in a PDE. In this article, the Bayesian nonparametric approach to such problems is considered. Focusing on the representative example of inferring the diffusivity function in an elliptic PDE from noisy observations of the PDE solution, the performance of Bayesian procedures based on Gaussian process priors is investigated. Building on recent developments in the literature, we derive novel asymptotic theoretical guarantees that establish posterior consistency and convergence rates for methodologically attractive Gaussian series priors based on the Dirichlet–Laplacian eigenbasis. An implementation of the associated posterior-based inference is provided and illustrated via a numerical simulation study, where excellent agreement with the theory is obtained.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF form, click the "PDF Full-text" link and open it with the free Adobe Reader.
12 pages, 951 KiB  
Article
Cross-Analysis of Magnetic and Current Density Field Topologies in a Quiescent High Confinement Mode Tokamak Discharge
by Marie-Christine Firpo
Foundations 2025, 5(2), 22; https://doi.org/10.3390/foundations5020022 - 17 Jun 2025
Viewed by 220
Abstract
In axisymmetric fusion devices like tokamaks, the winding of the magnetic field is characterized by its safety factor profile q=qB. Similarly, the winding of the current density field is characterized by qJ. Currently, the relationship between the qB and qJ profiles and their effect on tokamak plasma confinement remains unexplored, as the qJ profile is neither computed nor considered. This study presents a reconstruction of the current density winding profile from experimental data in the quiescent high confinement mode (QH-mode). The topology analysis derived from (qB,qJ) was carried out using Hamada coordinates. It shows a large central plasma region unaffected by current-filamentation-driven resonant magnetic perturbations, while the outer region harbors a spectrum of resonant magnetic modes, induced by current filaments located within the core plasma, which degrade peripheral confinement. These results suggest a QH-mode signature pattern that needs further validation with additional data. Implementing real-time monitoring of (qB,qJ) could provide insights into tokamak confinement regimes, with significant implications. Full article
(This article belongs to the Section Physical Sciences)
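The safety factor the abstract refers to can be illustrated with a minimal cylindrical (large-aspect-ratio) model. This is only a sketch of the quantity, not the paper's reconstruction from experimental data; the field, geometry, and current-profile values below are assumptions chosen for illustration.

```python
import numpy as np

# q_B(r) = r * B0 / (R0 * B_theta(r)), with B_theta from Ampere's law,
#   B_theta(r) = mu0 * I_enc(r) / (2*pi*r),
# using an assumed parabolic model current-density profile.
mu0 = 4e-7 * np.pi          # vacuum permeability [H/m]
B0, R0, a = 5.3, 6.2, 2.0   # toroidal field [T], major/minor radius [m] (assumed)
j0, nu = 1.5e6, 1.0         # peak current density [A/m^2], peaking exponent (assumed)

r = np.linspace(1e-6, a, 500)
j = j0 * (1 - (r / a) ** 2) ** nu   # model current density profile

# enclosed current I_enc(r) = 2*pi * integral_0^r j(r') r' dr' (trapezoid rule)
increments = 0.5 * (j[1:] * r[1:] + j[:-1] * r[:-1]) * np.diff(r)
I_enc = 2 * np.pi * np.concatenate(([0.0], np.cumsum(increments)))

B_theta = mu0 * I_enc / (2 * np.pi * r)
B_theta[0] = B_theta[1]             # avoid division by ~0 on axis
q_B = r * B0 / (R0 * B_theta)

print(f"q near axis ~ {q_B[1]:.2f}, q at edge ~ {q_B[-1]:.2f}")
```

For a peaked current profile, q rises from the axis toward the edge, which is why resonant surfaces (rational q values) stack up in the outer region.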
17 pages, 1223 KiB  
Article
Foreground Emission Randomization Due to Dynamics of Magnetized Interstellar Medium: WMAP and Planck Frequency Bands
by Alexander Bershadskii
Foundations 2025, 5(2), 21; https://doi.org/10.3390/foundations5020021 - 10 Jun 2025
Viewed by 623
Abstract
Using the results of numerical simulations and astrophysical observations (mainly in the WMAP and Planck frequency bands), it is shown that Galactic foreground emission becomes more sensitive to the mean magnetic field with increasing frequency, resulting in the appearance of two levels of its randomization due to the chaotic/turbulent dynamics of a magnetized interstellar medium dominated by magnetic helicity. The Galactic foreground emission is more randomized at higher frequencies. The Galactic synchrotron and polarized dust emissions have been studied in detail. It is shown that the magnetic field imposes its level of randomization on the synchrotron and dust emission. The main theoretical tool used in this study is the Kolmogorov–Iroshnikov phenomenology within the framework of the distributed-chaos notion. Despite the vast differences in the values of physical parameters and spatio-temporal scales between the numerical simulations and the astrophysical observations, quantitative agreement is obtained between the two within this framework. Full article
(This article belongs to the Section Physical Sciences)
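Distributed-chaos analyses of this kind characterize spectra by a stretched-exponential form, E(k) ∝ exp(−(k/k0)^β). A minimal sketch of fitting that form to synthetic data; the value β = 3/4 and all other numbers here are assumed test values, not results from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the log of E(k) = amp * exp(-(k/k0)**beta) to synthetic noisy data.
def log_spectrum(k, ln_amp, k0, beta):
    return ln_amp - (k / k0) ** beta

rng = np.random.default_rng(0)
k = np.logspace(-1, 2, 60)
ln_data = log_spectrum(k, 0.0, 5.0, 0.75) + rng.normal(scale=0.05, size=k.size)

popt, _ = curve_fit(log_spectrum, k, ln_data, p0=(0.0, 3.0, 1.0))
print(f"fitted k0 ~ {popt[1]:.2f}, beta ~ {popt[2]:.2f}")
```

Fitting in log space keeps the least-squares weights comparable across the many decades the spectrum spans.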
18 pages, 506 KiB  
Article
Comparing Different Specifications of Mean–Geometric Mean Linking
by Alexander Robitzsch
Foundations 2025, 5(2), 20; https://doi.org/10.3390/foundations5020020 - 6 Jun 2025
Viewed by 702
Abstract
Mean–geometric mean (MGM) linking compares group differences on a latent variable θ within the two-parameter logistic (2PL) item response theory model. This article investigates three specifications of MGM linking that differ in the weighting of item difficulty differences: unweighted (UW), discrimination-weighted (DW), and precision-weighted (PW). These methods are evaluated under conditions where random DIF effects are present in either item difficulties or item intercepts. The three estimators are analyzed both analytically and through a simulation study. The PW method outperforms the other two only in the absence of random DIF or in small samples when DIF is present. In larger samples, the UW method performs best when random DIF with homogeneous variances affects item difficulties, while the DW method achieves superior performance when such DIF is present in item intercepts. The analytical results and simulation findings consistently show that the PW method introduces bias in the estimated group mean when random DIF is present. Given that the effectiveness of MGM methods depends on the type of random DIF, the distribution of DIF effects was further examined using PISA 2006 reading data. The model comparisons indicate that random DIF with homogeneous variances in item intercepts provides a better fit than random DIF in item difficulties in the PISA 2006 reading dataset. Full article
(This article belongs to the Section Mathematical Sciences)
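The three MGM specifications differ only in how item-difficulty differences are weighted. A minimal sketch under simulated random DIF; the precision weights used here (inverse squared standard errors) and all data-generating values are assumptions for illustration, not the paper's simulation design.

```python
import numpy as np

# Group difference on theta estimated as a weighted mean of item-difficulty
# differences b2 - b1, with the three weightings named in the abstract.
rng = np.random.default_rng(1)
n_items = 20
a = rng.uniform(0.8, 2.0, n_items)              # item discriminations
b1 = rng.normal(0.0, 1.0, n_items)              # group-1 difficulties
mu_true = 0.5
b2 = b1 + mu_true + rng.normal(0.0, 0.1, n_items)  # group-2, with random DIF
se = rng.uniform(0.05, 0.3, n_items)            # assumed SEs of b2 - b1

def mgm(weights):
    return np.sum(weights * (b2 - b1)) / np.sum(weights)

mu_uw = mgm(np.ones(n_items))   # unweighted (UW)
mu_dw = mgm(a)                  # discrimination-weighted (DW)
mu_pw = mgm(1.0 / se**2)        # precision-weighted (PW), assumed weighting
print(mu_uw, mu_dw, mu_pw)
```

All three recover the group difference here; the paper's point is that their relative bias and efficiency diverge depending on where the random DIF enters (difficulties vs. intercepts) and on sample size.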
30 pages, 2290 KiB  
Article
Numerical Evidence for a Bipartite Pure State Entanglement Witness from Approximate Analytical Diagonalization
by Paul M. Alsing and Richard J. Birrittella
Foundations 2025, 5(2), 19; https://doi.org/10.3390/foundations5020019 - 4 Jun 2025
Viewed by 906
Abstract
We show numerical evidence for a bipartite d×d pure state entanglement witness that is readily calculated from the wavefunction coefficients directly, without the need for the numerical computation of eigenvalues. This is accomplished by using an approximate analytic diagonalization of the bipartite state that captures dominant contributions to the negativity of the partially transposed state. We relate this entanglement witness to the Log Negativity and show that it exactly agrees with it for the class of pure states whose quantum amplitudes form a positive Hermitian matrix. In this case, the Log Negativity is given by the negative logarithm of the purity of the amplitudes considered as a density matrix. In other cases, the witness forms a lower bound to the exact, numerically computed Log Negativity. The formula for the approximate Log Negativity achieves equality with the exact Log Negativity for the case of an arbitrary pure state of two qubits, which we show analytically. We compare these results to a witness of entanglement given by the linear entropy. Finally, we explore an attempt to extend these pure state results to mixed states. We show that the approximate formula for the Log Negativity is exact for the class of pure-state decompositions in which the quantum amplitudes of each pure state form a positive Hermitian matrix. Full article
(This article belongs to the Section Mathematical Sciences)
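The exact-agreement property quoted in the abstract can be checked numerically: for a pure state |ψ⟩ = Σᵢⱼ Cᵢⱼ|i⟩|j⟩ whose amplitude matrix C is positive Hermitian, the Log Negativity equals −log₂ of the purity of C treated as a density matrix. A small verification sketch (the dimension and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
d = 4
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
C = A @ A.conj().T                       # positive Hermitian amplitude matrix
C /= np.linalg.norm(C)                   # normalize so sum |C_ij|^2 = 1

# Exact Log Negativity of a pure state via its Schmidt coefficients
# (singular values of C): E_N = 2*log2(sum of singular values).
sing = np.linalg.svd(C, compute_uv=False)
log_neg_exact = 2 * np.log2(sing.sum())

# Witness: -log2 of the purity of the amplitudes "considered as a density matrix".
rho = C / np.trace(C)
purity = np.trace(rho @ rho).real
witness = -np.log2(purity)

print(log_neg_exact, witness)            # agree for this state class
```

The agreement follows because, with Σ|Cᵢⱼ|² = 1, the purity of ρ = C/Tr C reduces to 1/(Tr C)², and for a positive Hermitian C the singular values are its eigenvalues, so both expressions equal 2 log₂(Tr C).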
25 pages, 325 KiB  
Review
Advances in Fractional Lyapunov-Type Inequalities: A Comprehensive Review
by Sotiris K. Ntouyas, Bashir Ahmad and Jessada Tariboon
Foundations 2025, 5(2), 18; https://doi.org/10.3390/foundations5020018 - 27 May 2025
Viewed by 512
Abstract
In this survey, we have included the recent results on Lyapunov-type inequalities for differential equations of fractional order associated with Dirichlet, nonlocal, multi-point, anti-periodic, and discrete boundary conditions. Our results involve a variety of fractional derivatives such as Riemann–Liouville, Caputo, Hilfer–Hadamard, ψ-Riemann–Liouville, Atangana–Baleanu, tempered, half-linear, and discrete fractional derivatives. Full article
(This article belongs to the Section Mathematical Sciences)
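For orientation, the prototype of the class of inequalities surveyed is Lyapunov's classical result and its best-known Riemann–Liouville analogue; these are stated here as background under standard Dirichlet conditions, and the survey should be consulted for the precise hypotheses of each variant.

```latex
% Classical Lyapunov inequality: if y'' + q(t)\,y = 0 with y(a) = y(b) = 0
% admits a nontrivial solution, then
\int_a^b |q(s)|\, ds \;>\; \frac{4}{b-a}.
% Riemann--Liouville analogue of order 1 < \alpha \le 2 with Dirichlet
% boundary conditions y(a) = y(b) = 0:
\int_a^b |q(s)|\, ds \;>\; \Gamma(\alpha)\left(\frac{4}{b-a}\right)^{\alpha-1},
% which recovers the classical bound at \alpha = 2, since \Gamma(2) = 1.
```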
22 pages, 365 KiB  
Article
Entropy Production Assumption and Objectivity in Continuum Physics Modelling
by Angelo Morro
Foundations 2025, 5(2), 17; https://doi.org/10.3390/foundations5020017 - 22 May 2025
Viewed by 567
Abstract
This paper revisits some aspects connected with the methods for the determination of thermodynamically consistent models. While the concepts apply to the general context of continuum physics, the details are developed for the modelling of deformable dielectrics. The symmetry condition arising from the balance of angular momentum is viewed as a constraint for the constitutive equations and is shown to be satisfied by sets of objective fields that account jointly for deformation and electric field. The second law of thermodynamics is considered in a generalized form where the entropy production is given by a constitutive function possibly independent of the other constitutive functions. Furthermore, a representation formula is applied for solving the Clausius–Duhem inequality with respect to the chosen unknown fields. Full article
(This article belongs to the Section Physical Sciences)
12 pages, 414 KiB  
Article
Maxwell’s Demon Is Foiled by the Entropy Cost of Measurement, Not Erasure
by Ruth E. Kastner
Foundations 2025, 5(2), 16; https://doi.org/10.3390/foundations5020016 - 22 May 2025
Viewed by 1000
Abstract
I dispute the conventional claim that the second law of thermodynamics is saved from a “Maxwell’s demon” by the entropy cost of information erasure and show that instead it is measurement that incurs the entropy cost. Thus, Brillouin, who identified measurement as savior of the second law, was essentially correct, and putative refutations of his view, such as Bennett’s claim to measure without entropy cost, are seen to fail when the applicable physics is taken into account. I argue that the tradition of attributing the defeat of Maxwell’s demon to erasure rather than to measurement arose from unphysical classical idealizations that do not hold for real gas molecules, as well as a physically ungrounded recasting of physical thermodynamical processes into computational and information-theoretic conceptualizations. I argue that the fundamental principle that saves the second law is the quantum uncertainty principle, which applies because physical states must be localized to precise values of observables in order to effect the desired disequilibria aimed at violating the second law. I obtain the specific entropy cost for localizing a molecule in the Szilard engine and show that it coincides with the quantity attributed to Landauer’s principle. I also note that an experiment characterized as upholding an entropy cost of erasure in a “quantum Maxwell’s demon” actually demonstrates an entropy cost of measurement. Full article
(This article belongs to the Section Physical Sciences)
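The quantity the abstract refers to, the entropy cost of localizing the molecule to one half of the Szilard box, coincides with the Landauer bound k·ln 2. A minimal numerical sketch (the temperature of 300 K is an assumed example value):

```python
import math

k_B = 1.380649e-23          # Boltzmann constant [J/K], exact SI value
T = 300.0                   # assumed room temperature [K]

delta_S = k_B * math.log(2)  # minimal entropy cost of the one-bit localization [J/K]
q_min = T * delta_S          # corresponding minimal dissipated heat [J]
print(f"Delta S = {delta_S:.3e} J/K, minimal heat at 300 K = {q_min:.3e} J")
```

At room temperature this is on the order of 10⁻²¹ J per cycle, which is exactly the work the engine could extract, so the demon gains nothing on net.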
11 pages, 2150 KiB  
Article
Physical and Logical Synchronization of Clocks: The Ramsey Approach
by Edward Bormashenko
Foundations 2025, 5(2), 15; https://doi.org/10.3390/foundations5020015 - 28 Apr 2025
Viewed by 777
Abstract
Ramsey analysis is applied to the problem of the relativistic and quantum synchronization of clocks. Various synchronization protocols are addressed. Einstein and Eddington special relativity synchronization procedures are considered, and quantum synchronization is discussed. Clocks are seen as the vertices of a graph. Clocks may be synchronized or unsynchronized. Thus, introducing complete, bi-colored Ramsey graphs emerging from the lattices of clocks becomes possible. The transitivity of synchronization plays a key role in the coloring of the Ramsey graph. Einstein synchronization is transitive, while general relativity and quantum synchronization procedures are not. This fact influences the value of the Ramsey number established for the synchronization graph arising from the lattice of clocks. Any lattice built of six clocks synchronized with quantum entanglement will inevitably contain a monochromatic triangle. The transitive synchronization of logical clocks is discussed. The interrelation between the symmetry of the clock lattice and the structure of the synchronization graph is addressed. Ramsey analysis of synchronization is important for the synchronization of computers in networks, for the LIGO and Virgo instruments intended for the detection of gravitational waves, and for GPS time-based synchronization. Full article
(This article belongs to the Section Physical Sciences)
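The six-clock claim in the abstract is the Ramsey number R(3,3) = 6: every 2-coloring of the edges of K₆ (synchronized/unsynchronized) contains a monochromatic triangle, while K₅ admits a coloring with none. This is small enough to verify by exhaustive search:

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    # coloring assigns 0/1 to the edges of K_n in lexicographic order
    edges = list(combinations(range(n), 2))
    color = dict(zip(edges, coloring))
    return any(color[(i, j)] == color[(i, k)] == color[(j, k)]
               for i, j, k in combinations(range(n), 3))

def all_colorings_have_triangle(n):
    m = n * (n - 1) // 2
    return all(has_mono_triangle(n, c) for c in product((0, 1), repeat=m))

print(all_colorings_have_triangle(5))  # K5 can avoid a monochromatic triangle
print(all_colorings_have_triangle(6))  # K6 cannot, hence R(3,3) = 6
```

The K₆ check covers all 2¹⁵ = 32,768 colorings, each against 20 triangles, so the brute force runs in well under a second.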
18 pages, 1520 KiB  
Article
Bayesian Nonparametric Inference in Elliptic PDEs: Convergence Rates and Implementation
by Matteo Giordano
Foundations 2025, 5(2), 14; https://doi.org/10.3390/foundations5020014 - 23 Apr 2025
Cited by 1 | Viewed by 927
Abstract
Parameter identification problems in partial differential equations (PDEs) consist in determining one or more functional coefficients in a PDE. In this article, the Bayesian nonparametric approach to such problems is considered. Focusing on the representative example of inferring the diffusivity function in an elliptic PDE from noisy observations of the PDE solution, the performance of Bayesian procedures based on Gaussian process priors is investigated. Building on recent developments in the literature, we derive novel asymptotic theoretical guarantees that establish posterior consistency and convergence rates for methodologically attractive Gaussian series priors based on the Dirichlet–Laplacian eigenbasis. An implementation of the associated posterior-based inference is provided and illustrated via a numerical simulation study, where excellent agreement with the theory is obtained. Full article
(This article belongs to the Section Mathematical Sciences)
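A Gaussian series prior on the Dirichlet–Laplacian eigenbasis can be sketched in one dimension, where the eigenfunctions on (0,1) are eₖ(x) = √2 sin(kπx). The coefficient decay exponent α = 2 and truncation level below are assumed illustrative values (they control the prior's smoothness), not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, K = 2.0, 50
x = np.linspace(0.0, 1.0, 201)
k = np.arange(1, K + 1)

# One draw from theta = sum_k g_k * k^(-alpha) * sqrt(2) sin(k pi x),
# with g_k i.i.d. standard normal.
coeffs = rng.normal(size=K) * k.astype(float) ** (-alpha)
basis = np.sqrt(2.0) * np.sin(np.outer(x, k) * np.pi)   # shape (201, K)
theta = basis @ coeffs

print(theta[0], theta[-1])   # draws vanish at 0 and 1 (Dirichlet boundary)
```

Because every basis function satisfies the Dirichlet boundary conditions, so does each prior draw, which is one reason this basis is methodologically attractive for the elliptic problem.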
14 pages, 280 KiB  
Article
Fault-Tolerant Metric Dimension in Carbon Networks
by Kamran Azhar, Asim Nadeem and Yilun Shang
Foundations 2025, 5(2), 13; https://doi.org/10.3390/foundations5020013 - 16 Apr 2025
Viewed by 688
Abstract
In this paper, we study the fault-tolerant metric dimension in graph theory, an important measure against failures in unique vertex identification. The metric dimension of a graph is the smallest number of vertices required to uniquely identify every other vertex based on their distances from these chosen vertices. Building on existing work, we explore fault tolerance by considering the minimal number of vertices needed to ensure that all other vertices remain uniquely identifiable even if a specified number of these vertices fails. We compute the fault-tolerant metric dimension of various chemical graphs, namely fullerenes, benzene, and polyphenyl graphs. Full article
(This article belongs to the Section Mathematical Sciences)
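The two definitions can be made concrete by brute force on a small graph: a set W resolves a graph if every vertex has a distinct vector of distances to W, and W is fault-tolerant if removing any single vertex of W still leaves a resolving set. The cycle C₆ below is an assumed toy example, not one of the chemical graphs studied in the paper.

```python
from itertools import combinations

def dist_cycle(n, u, v):
    d = abs(u - v)
    return min(d, n - d)   # shortest-path distance on the cycle C_n

def resolves(n, W):
    vecs = {tuple(dist_cycle(n, v, w) for w in W) for v in range(n)}
    return len(vecs) == n  # all distance vectors distinct

def min_dim(n, fault_tolerant=False):
    for size in range(1, n + 1):
        for W in combinations(range(n), size):
            if fault_tolerant:
                ok = all(resolves(n, W[:i] + W[i + 1:]) for i in range(size))
            else:
                ok = resolves(n, W)
            if ok:
                return size
    return n

print(min_dim(6))                       # metric dimension of C6
print(min_dim(6, fault_tolerant=True))  # fault-tolerant metric dimension of C6
```

For C₆ the metric dimension is 2 and the fault-tolerant metric dimension is 3, illustrating the general pattern that fault tolerance costs at least one extra landmark vertex.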
10 pages, 259 KiB  
Perspective
Revisiting the Definition of Vectors—From ‘Magnitude and Direction’ to Abstract Tuples
by Reinout Heijungs
Foundations 2025, 5(2), 12; https://doi.org/10.3390/foundations5020012 - 15 Apr 2025
Viewed by 608
Abstract
Vectors are almost always introduced as objects having magnitude and direction. Following that idea, textbooks and courses introduce the concept of a vector norm and the angle between two vectors. While this is correct and useful for vectors in two- or three-dimensional Euclidean space, these concepts make no sense for more general vectors, which are defined in abstract, non-metric vector spaces. This is even the case when an inner product exists. Here, we analyze how several textbooks are imprecise in presenting the restricted validity of the expressions for the norm and the angle. We also study one concrete example, the so-called ‘vector-based sustainability analytics’, in which scientists have gone astray by mistaking an abstract vector for a Euclidean vector. We recommend that future textbook authors introduce the distinction between vectors that have and that do not have magnitude and direction, even in cases where an inner product exists. Full article
(This article belongs to the Section Mathematical Sciences)
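The abstract's point can be shown in miniature: for abstract coordinate tuples, the Euclidean angle formula is basis-dependent, hence not an intrinsic property of the vectors. Below, the same two abstract vectors get different naive "angles" when their coordinates are expressed in a different basis; the change-of-basis matrix P is an assumed example.

```python
import numpy as np

def naive_angle(u, v):
    # the textbook "angle" formula, applied blindly to coordinate tuples
    return np.degrees(np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v))))

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

P = np.array([[1.0, 1.0],
              [0.0, 1.0]])                              # assumed new basis (columns)
u2, v2 = np.linalg.solve(P, u), np.linalg.solve(P, v)   # same vectors, new coordinates

print(naive_angle(u, v), naive_angle(u2, v2))   # 90 degrees vs. 135 degrees
```

Only when the space carries a fixed inner product (and the bases are orthonormal with respect to it) does the formula yield a basis-independent quantity.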
31 pages, 926 KiB  
Article
Introducing an Evolutionary Method to Create the Bounds of Artificial Neural Networks
by Ioannis G. Tsoulos, Vasileios Charilogis and Dimitrios Tsalikakis
Foundations 2025, 5(2), 11; https://doi.org/10.3390/foundations5020011 - 25 Mar 2025
Viewed by 1490
Abstract
Artificial neural networks are widely used in applications from various scientific fields and in a multitude of practical applications. In recent years, a multitude of scientific publications have been presented on the effective training of their parameters, but in many cases overfitting problems appear, where the artificial neural network shows poor results when used on data that were not present during training. This text proposes the incorporation of a three-stage evolutionary technique, which has roots in the differential evolution technique, for the effective training of the parameters of artificial neural networks and the avoidance of the problem of overfitting. The new method effectively constructs the parameter value range of an artificial neural network with one processing level and sigmoid outputs, both achieving a reduction in training error and preventing the network from experiencing overfitting phenomena. This new technique was successfully applied to a wide range of problems from the relevant literature and the results were extremely promising. From the conducted experiments, it appears that the proposed method reduced the average classification error by 30% and the average regression error by 45%, compared to the genetic algorithm. Full article
(This article belongs to the Section Mathematical Sciences)
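The differential-evolution technique the method builds on can be sketched with the classic rand/1/bin operator. This is the textbook algorithm, not the paper's three-stage modification, and the sphere objective and hyperparameters are assumptions for illustration.

```python
import random

def sphere(x):
    # toy objective: global minimum 0 at the origin
    return sum(xi * xi for xi in x)

def differential_evolution(f, dim, pop_size=20, F=0.8, CR=0.9, gens=300, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: combine three distinct individuals other than i
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)   # guarantee at least one mutated gene
            trial = [a[j] + F * (b[j] - c[j])
                     if (rng.random() < CR or j == j_rand) else pop[i][j]
                     for j in range(dim)]
            if f(trial) <= f(pop[i]):     # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(sphere, dim=5)
print(sphere(best))   # close to 0 after 300 generations
```

The mutation scale F and crossover rate CR are the knobs the evolutionary literature tunes; the paper's contribution concerns constructing parameter bounds for the network rather than the base operator shown here.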