Article

Deep Learning Fractal Superconductivity: A Comparative Study of Physics-Informed and Graph Neural Networks Applied to the Fractal TDGL Equation

1 National Institute of Research and Development for Technical Physics–IFT Iași, 700050 Iași, Romania
2 Clinical Emergency Hospital “Prof. Dr. Nicolae Oblu” Iași, 700309 Iași, Romania
3 Department of Environmental Engineering, Mechanical Engineering and Agritourism, Faculty of Engineering, “Vasile Alecsandri” University of Bacău, 600115 Bacău, Romania
4 Faculty of Medicine, “Grigore T. Popa” University of Medicine and Pharmacy Iași, 700115 Iași, Romania
* Author to whom correspondence should be addressed.
Fractal Fract. 2025, 9(12), 810; https://doi.org/10.3390/fractalfract9120810
Submission received: 30 October 2025 / Revised: 30 November 2025 / Accepted: 9 December 2025 / Published: 11 December 2025

Abstract

The fractal extension of the time-dependent Ginzburg–Landau (TDGL) equation, formulated within the framework of Scale Relativity, generalizes superconducting dynamics to non-differentiable space–time. Although analytically well established, its numerical solution remains difficult because of the strong coupling between amplitude and phase curvature. Here we develop two complementary deep learning solvers for the fractal TDGL (FTDGL) system. The Fractal Physics-Informed Neural Network (F-PINN) embeds the Scale-Relativity covariant derivative through automatic differentiation on continuous fields, whereas the Fractal Graph Neural Network (F-GNN) represents the same dynamics on a sparse spatial graph and learns local gauge-covariant interactions via message passing. Both models are trained against finite-difference reference data, and a parametric study over the dimensionless fractality parameter D quantifies its influence on the coherence length, penetration depth, and peak magnetic field. Across multivortex benchmarks, the F-GNN reduces the relative L2 error on |ψ|² from 0.190 to 0.046 and on Bz from approximately 0.62 to 0.36 (averaged over three seeds). This ≈4× improvement in condensate-density accuracy corresponds to a substantial enhancement in vortex-core localization—from tens of pixels of uncertainty to sub-pixel precision—and yields a cleaner reconstruction of the 2π phase winding around each vortex, improving the extraction of experimentally relevant observables such as ξeff, λeff, and local Bz peaks. The model also preserves flux quantization and remains robust under 2–5% Gaussian noise, demonstrating stable learning under experimentally realistic perturbations. The D-scan reveals broader vortex cores, a non-monotonic variation in the penetration depth, and moderate modulation of the peak magnetic field, while preserving topological structure. These results show that graph-based learning provides a superior inductive bias for modeling non-differentiable, gauge-coupled systems. The proposed F-PINN and F-GNN architectures therefore offer accurate, data-efficient solvers for fractal superconductivity and open pathways toward data-driven inference of fractal parameters from magneto-optical or Hall-probe imaging experiments.

1. Introduction

The Ginzburg–Landau (GL) theory remains one of the most powerful phenomenological frameworks in condensed matter physics, providing deep insight into the macroscopic behavior of superconductors. Since its original formulation by Landau and Ginzburg (1950), the GL approach has offered a bridge between microscopic Bardeen–Cooper–Schrieffer (BCS) theory and macroscopic observables such as the coherence length, penetration depth, and critical magnetic field [1,2,3]. The time-dependent Ginzburg–Landau (TDGL) equation, later developed by Schmid (1966) and Gor’kov and Eliashberg (1968), extended the GL formalism to describe dynamical processes such as vortex motion, flux flow, and relaxation phenomena [4,5,6]. Through its nonlinear complex order parameter, the TDGL framework has been successfully applied to both conventional and high-temperature superconductors, providing a foundation for modeling phase transitions, flux pinning, and dissipative states [7,8,9].
However, conventional TDGL models rest on a fundamental assumption: that the underlying space–time manifold is smooth and differentiable. This assumption implies that the fields and trajectories describing superconducting charge carriers can be locally expanded and differentiated to arbitrary precision. While this hypothesis is adequate at macroscopic scales, it becomes increasingly questionable near critical points, in turbulent vortex regimes, or in systems exhibiting quantum–fractal self-similarity. Experimental observations in type-II superconductors—such as vortex clustering, multiscale pinning, and anomalous transport—suggest the possible influence of irregular, non-differentiable geometrical structures within the condensate dynamics [10,11].
A theoretical framework capable of incorporating such effects is provided by Scale Relativity (SR), pioneered by Laurent Nottale [12,13,14]. In SR, space–time is considered continuous but fundamentally non-differentiable, and physical trajectories are replaced by fractal curves. As a result, dynamical quantities such as velocity fields become complex-valued and resolution-dependent. Nottale introduced a complex covariant derivative d̂/dt = ∂/∂t + V·∇ − iDf∇², where V = v − iu denotes the complex velocity and Df represents a fractal diffusion coefficient. This operator generalizes classical mechanics to fractal space–time and has been shown to recover the Schrödinger, Klein–Gordon, and diffusion equations as limiting cases [12,13,14,15].
In the present work, we build upon this established theoretical framework not by modifying the FTDGL equations themselves, but by developing data-driven solvers capable of learning their non-differentiable dynamics directly.
While the underlying FTDGL formalism is not new, the novelty of the present study lies in its computational contribution rather than in proposing a new physical theory. Specifically, we introduce the first deep learning solvers designed to handle the non-differentiable operators of the fractal TDGL system, combining (i) a physics-informed neural network embedding the Scale-Relativity covariant derivative and (ii) a graph-based message-passing architecture that reconstructs gauge-covariant interactions on discrete lattices. Unlike earlier works that focus purely on analytical developments of fractal superconductivity, our approach provides a numerically stable, data-efficient, and gauge-consistent framework for solving the fractal TDGL equations. This computational capability enables, for the first time, systematic parametric studies of the fractality parameter D and quantitative links to measurable superconducting observables.
Building on this foundation, Buzea et al. (2010) employed Nottale’s SR derivative to reformulate the time-dependent Ginzburg–Landau equation in fractal space–time, introducing a self-consistent fractal–hydrodynamic interpretation of superconductivity [16]. Their work demonstrated that the London equations and other macroscopic superconducting relations emerge naturally from the fractal TDGL model. Moreover, they identified fractality-induced quantization effects analogous to tunneling and vortex lattice formation, establishing a formal connection between superconductivity and the geometrical structure of space–time itself.
The GL formalism has also been extended analytically to explore nonlinear and temperature-dependent effects. Rezlescu, Agop, and Buzea (1996) developed perturbative solutions of the GL equation using Jacobian elliptic functions, treating the modulus s of sn(u; s) as a parameter of nonlinearity [17]. This approach allowed the derivation of explicit temperature dependences for the coherence length, superconducting carrier concentration, penetration depth, and critical field. Their analysis highlighted the transition between sinusoidal and hyperbolic-tangent regimes—corresponding to the superconducting and normal states—and provided an analytical interpretation of the superconducting–normal phase transition through the variation in s.
Complementing these results, Agop, Buzea, and Nica (2000) introduced a geometric and topological extension of the GL theory to describe the Cantorian structure of background magnetic fields in high-temperature superconductors [18]. Their study revealed that when the GL order parameter is expressed through complex elliptic functions, the resulting magnetic field structure becomes fractal at low temperatures and high fields. This led to the prediction of fractional magnetic flux quantization and the emergence of anyonic quasiparticles, providing a fractal–geometric interpretation of high-Tc superconductivity. These analytical advances, however, have not yet been matched by computational models able to resolve the fractal geometry of superconducting fields with both physical fidelity and numerical scalability.
Together, these works establish a coherent theoretical triad:
(i)
the fractal generalization of the TDGL equation through Scale Relativity,
(ii)
the elliptic-function representation of superconducting states, and
(iii)
the Cantorian magnetic topology responsible for fractional flux quantization.
Despite these conceptual advances, the analytical and numerical treatment of such nonlinear fractal systems remains challenging. The equations are highly coupled, multiscale, and often lack closed-form solutions. Conventional numerical solvers, such as finite-difference or spectral methods, struggle to handle the intrinsic non-differentiability and scale-dependent terms that characterize fractal physics. This computational gap motivates the present study, which introduces two complementary deep learning solvers explicitly designed to handle the non-differentiable operators of the fractal TDGL model.
In recent years, the rapid development of Machine Learning (ML) and, in particular, Physics-Informed Neural Networks (PINNs), has provided a new paradigm for solving partial differential equations (PDEs). PINNs integrate physical laws directly into the neural network architecture by embedding the PDE residuals, boundary conditions, and conservation laws within the loss function [19,20,21]. This approach allows the network to learn physically consistent solutions even from sparse or noisy data, without requiring dense numerical grids. Since their introduction by Raissi, Perdikaris, and Karniadakis (2019), PINNs have achieved remarkable success in diverse domains—ranging from fluid dynamics and electromagnetism to quantum mechanics and relativistic field theory [19,20,21,22,23]. Extensions of this framework, including Deep Operator Networks (DeepONets) and Fourier Neural Operators, have further demonstrated the ability of ML models to approximate solution operators for complex nonlinear systems [24,25].
In parallel, Graph Neural Networks (GNNs) have emerged as a powerful framework for representing physical systems defined on discrete geometries. By operating on graph-structured data, GNNs naturally encode local interactions, conservation laws, and symmetries such as translational or rotational invariance. This makes them particularly well suited for modeling lattice-based or irregular physical domains where differential operators are difficult to define. Early seminal works by Kipf and Welling (2017) introduced graph convolutional networks for semi-supervised learning [26], while Battaglia et al. (2018) proposed the general Graph Network framework for learning physical interactions [27]. In the context of scientific computing, Sanchez-Gonzalez et al. (2020) demonstrated that message-passing GNNs can accurately model fluid dynamics and continuum mechanics [28], and Pfaff et al. (2021) extended this to long-term mesh-based physical simulations [29]. More recent developments, such as Physics-Informed Graph Neural Networks (PI-GNNs) [30,31], combine the locality of GNNs with the physical regularization of PINNs, offering an efficient discrete alternative for solving PDEs on arbitrary topologies.
The convergence of Scale Relativity, fractal physics, and physics-informed deep learning presents a unique opportunity for advancing both theoretical and computational physics. Embedding the fractal TDGL dynamics into a neural network framework offers a dual advantage: it enforces the mathematical structure of non-differentiable physics while allowing adaptive, data-driven learning of complex spatiotemporal patterns. Such synergy not only yields a computational strategy for solving fractal PDEs but also provides a conceptual tool for exploring new quantum-hydrodynamic regimes emerging from non-differentiable geometries.
At the same time, PINNs come with limitations that are particularly relevant for fractal TDGL: they rely on global collocation and smooth automatic differentiation, which can blur sharp vortex cores and under-resolve localized magnetic features unless trained with dense point sets and second-order optimizers. This motivates exploring discrete, locally connected architectures that align more closely with the lattice-based operators used in TDGL numerics. Accordingly, we explore how continuous and discrete neural formulations—namely, a Fractal Physics-Informed Neural Network (F-PINN) and a Fractal Graph Neural Network (F-GNN)—can jointly capture the fractal TDGL dynamics across smooth and singular regimes.
In this work we therefore complement the F-PINN with a Fractal Graph Neural Network (F-GNN). The F-GNN represents the TDGL fields on a sparse spatial graph (nodes as grid points; edges as local neighbors) and uses message passing to approximate gauge-covariant local interactions, including discrete Laplacians and current-continuity relations. A weak physics regularizer enforces fractal-TDGL consistency, while supervised terms anchor the solution to finite-difference “teacher” data. This yields a learning bias that is inherently local and well-suited to non-differentiable geometries.
Beyond model development, this study also extends the analysis to the physical implications of fractality itself.
In addition, we perform a parametric study of the fractality coefficient D to analyze how fractal diffusion influences key physical observables—namely, the effective coherence length ξeff, penetration depth λeff, and peak magnetic field Bpeak. This analysis, presented in Section 6, provides quantitative insight into how fractality reshapes vortex-core structure while preserving flux quantization.
Contributions.
  • We formulate an F-GNN tailored to the fractal TDGL system, and present it alongside an F-PINN that embeds SR-covariant operators in the loss.
  • On multivortex benchmarks, the F-GNN attains ~4× lower relative L2 error on |ψ|² and ~2× lower error on Bz than the F-PINN (means over three seeds), localizes vortex cores to within a pixel, and preserves total flux.
  • Ablations (4- vs. 8-neighbor stencils; weak physics regularization) and noise-robustness tests (2–5% Gaussian noise) show that the GNN’s local inductive bias is the key driver of its advantage.
  • A systematic parametric D-scan quantifies how fractality modifies vortex geometry and relaxation scales while maintaining topological invariance.
  • We discuss implications for non-differentiable quantum hydrodynamics and outline extensions to temporal rollout and experimental data assimilation.
  • We thus provide the first quantitative framework linking fractal diffusion (Df) to measurable superconducting observables (where Df is the fractal diffusion coefficient in the Scale-Relativity covariant derivative), combining deep learning solvers with parametric physical analysis.
The following sections integrate theoretical formulation, machine learning architecture, and physical analysis into a unified workflow. Section 2 reviews the theoretical background and the fractal TDGL model. Section 3 introduces the F-PINN and F-GNN architectures. Section 4 presents quantitative results, ablations, and noise-robustness tests. Section 5 provides broader discussion and physical interpretation. Section 6 reports the parametric study of the fractality coefficient D . Section 7 concludes with main findings and perspectives.

2. Theoretical Background

2.1. The Classical Ginzburg–Landau Framework

The phenomenological Ginzburg–Landau (GL) theory describes superconductivity through a complex order parameter,
$$\psi(\mathbf{r}, t) = |\psi|\, e^{iS},$$
representing the macroscopic wave function of the Cooper-pair condensate. The squared modulus |ψ|² corresponds to the density of superconducting carriers, while the phase S encodes the quantum coherence across the system.
In the absence of external perturbations, the free energy functional takes the form [1,2,3]:
$$F[\psi, \mathbf{A}] = \int d^3 r \left[\, \alpha |\psi|^2 + \frac{\beta}{2}\,|\psi|^4 + \frac{1}{2 m^*}\left| \left( -i\hbar\nabla - q^{*}\mathbf{A} \right)\psi \right|^2 + \frac{B^2}{2\mu_0} \right],$$
where m* and q* are the effective mass and charge of Cooper pairs, A is the vector potential (B = ∇ × A), and α(T), β(T) are temperature-dependent coefficients.
Minimization of this functional with respect to ψ * and A yields the stationary Ginzburg–Landau equations:
$$\frac{1}{2 m^*}\left( -i\hbar\nabla - q^{*}\mathbf{A} \right)^2 \psi + \alpha\psi + \beta|\psi|^2\psi = 0,$$
$$\nabla\times\mathbf{B} = \mu_0\,\mathbf{j}_S = \mu_0\,\frac{q^{*}}{m^{*}}\,|\psi|^2\left( \hbar\nabla S - q^{*}\mathbf{A} \right).$$
To account for temporal evolution, Schmid (1966) [4] and Gor’kov and Eliashberg (1968) [5] introduced the time-dependent Ginzburg–Landau (TDGL) equation, which captures the relaxation of the order parameter toward equilibrium:
$$\frac{\partial\psi}{\partial t} + \frac{i q^{*}}{\hbar}\,\Phi\,\psi = -\Gamma\left[ \alpha\psi + \beta|\psi|^2\psi + \frac{1}{2 m^*}\left( -i\hbar\nabla - q^{*}\mathbf{A} \right)^2\psi \right],$$
where Γ is a phenomenological relaxation constant and Φ the scalar potential [4,5,6,7].
This framework forms the classical basis for describing vortex dynamics, flux-flow resistivity, and time-dependent responses in superconductors [8,9,10].
Despite its success, the TDGL formalism presupposes differentiability of all physical quantities. The differential operators act on smooth fields, thereby excluding any explicit representation of fractal or discontinuous structures that may emerge in quantum–hydrodynamic regimes or near critical points.

2.2. Scale Relativity and Fractal Dynamics

The Scale Relativity (SR) theory proposed by Laurent Nottale generalizes Einstein’s principle of relativity to include scale transformations, extending physical laws to non-differentiable manifolds [12,13,14].
In this framework, space–time is continuous but fractal, and physical trajectories are described as continuous yet nowhere differentiable curves. This non-differentiability implies the existence of two distinct velocities—one for forward and one for backward temporal increments—defined as:
$$v_{+} = \lim_{\Delta t \to 0^{+}} \frac{\Delta x}{\Delta t}, \qquad v_{-} = \lim_{\Delta t \to 0^{-}} \frac{\Delta x}{\Delta t},$$
whose combination defines a complex velocity field
$$V = \frac{v_{+} + v_{-}}{2} - i\,\frac{v_{+} - v_{-}}{2} = v - iu,$$
where the real part v corresponds to the classical velocity and the imaginary part u represents internal, fractal fluctuations of the trajectory.
Because conventional derivatives are undefined on non-differentiable paths, Nottale introduced a complex covariant derivative that accounts for fractal fluctuations through a diffusion-like term:
$$\frac{\hat{d}}{dt} = \frac{\partial}{\partial t} + V\cdot\nabla - i D_f \nabla^2,$$
where Df is the fractal diffusion coefficient. It is often convenient to introduce the dimensionless fractality ratio D = Df/D0, where D0 = ℏ/2m*, but only Df appears explicitly in the SR covariant derivative.
When applied to a potential field S(r,t), the fractal dynamical principle
$$\frac{\hat{d}}{dt}\,S = -\Phi,$$
yields, after integration, a generalized Schrödinger equation for a fractal medium:
$$i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2 m^*}\nabla^2\psi + V\psi,$$
where ψ = exp(iS/ℏ). This correspondence illustrates how the SR formalism recovers standard quantum mechanics as a differentiable limit of a deeper fractal dynamics [12,13,14,15].

2.3. The Fractal Time-Dependent Ginzburg–Landau (FTDGL) Equations

As summarized in the Introduction and in Section 2.1 and Section 2.2, the key distinction between the standard TDGL and the fractal TDGL lies in the replacement of the classical time derivative by the Scale-Relativity covariant derivative, which introduces a well-established fractal diffusion term without altering gauge covariance or the classical TDGL limit.
In its conventional form, the time-dependent Ginzburg–Landau (TDGL) theory describes the temporal evolution of the superconducting order parameter ψ ( r , t ) and the associated electromagnetic fields through two coupled nonlinear equations. In the gauge-covariant representation, they read [4,5,6,9]:
$$\frac{\partial\psi}{\partial t} + \frac{i q^{*}}{\hbar}\,\Phi\,\psi = -\Gamma\left[ \alpha\psi + \beta|\psi|^2\psi + \frac{1}{2 m^*}\left( -i\hbar\nabla - q^{*}\mathbf{A} \right)^2\psi \right],$$
$$\sigma\left( \frac{\partial\mathbf{A}}{\partial t} + \nabla\Phi \right) = \frac{\hbar q^{*}}{2 m^{*} i}\left( \psi^{*}\nabla\psi - \psi\nabla\psi^{*} \right) - \frac{q^{*2}}{m^{*}}\,|\psi|^2\mathbf{A} - \frac{1}{\mu_0}\,\nabla\times\nabla\times\mathbf{A},$$
where Γ is a phenomenological relaxation parameter, σ the normal-state conductivity, and q* = 2e, m* = 2m are the charge and mass of a Cooper pair.
Equation (11) governs the temporal evolution of the complex order parameter under the influence of electromagnetic potentials, while Equation (12) describes the back-action of the superconducting current on the vector potential A . Together, these equations ensure the self-consistent coupling between the superconducting condensate and the electromagnetic field.
To extend these equations to a non-differentiable (fractal) geometry, Buzea et al. (2010) [16] replaced the classical time derivative by the SR covariant derivative (8):
$$\frac{\hat{d}}{dt} = \frac{\partial}{\partial t} + V\cdot\nabla - i D_f \nabla^2.$$
Substituting this operator into Equation (11) yields the first fractal TDGL equation, governing the evolution of the order parameter:
$$i\hbar\,\frac{\hat{d}\psi}{dt} = \frac{1}{2 m^*}\left( -i\hbar\nabla - q^{*}\mathbf{A} \right)^2\psi + \alpha\psi + \beta|\psi|^2\psi,$$
which, when expanded explicitly, becomes
$$i\hbar\left( \frac{\partial\psi}{\partial t} + V\cdot\nabla\psi - i D_f \nabla^2\psi \right) = \frac{1}{2 m^*}\left( -i\hbar\nabla - q^{*}\mathbf{A} \right)^2\psi + \alpha\psi + \beta|\psi|^2\psi.$$
The additional term −iDf∇²ψ couples the amplitude and phase of ψ through scale-dependent diffusion, embodying fractal fluctuations.
The corresponding supercurrent becomes
$$\mathbf{j}_S = \frac{\hbar q^{*}}{2 m^{*} i}\left( \psi^{*}\nabla\psi - \psi\nabla\psi^{*} \right) - \frac{q^{*2}}{m^{*}}\,|\psi|^2\mathbf{A} + \frac{q^{*}}{m^{*}}\,\mathrm{Im}\!\left( \psi^{*} D_f \nabla\psi \right),$$
leading to the second fractal TDGL equation
$$\sigma\left( \frac{\partial\mathbf{A}}{\partial t} + \nabla\Phi \right) = \mathbf{j}_S - \frac{1}{\mu_0}\,\nabla\times\nabla\times\mathbf{A}.$$
Equations (15)–(17) form the Fractal Time-Dependent Ginzburg–Landau (FTDGL) system.
In the limit Df → 0 it reduces to Equations (11) and (12); for nonzero Df, the additional diffusion term modifies both amplitude and phase evolution, producing quantized vortex structures and self-similar magnetic textures.
Key Takeaways—Fractal TDGL:
  • Replacing the classical derivative ∂/∂t with the Scale Relativity operator, Equation (13), introduces a fractal diffusion term −iDf∇²ψ.
  • This term couples amplitude and phase evolution, encoding the influence of non-differentiable trajectories on superconducting coherence.
  • In the limit Df → 0, the system recovers the conventional TDGL, ensuring physical consistency with classical superconductivity.
The dimensionless fractality parameter D used in Section 6 is defined as the ratio D = Df/D0. Only Df enters directly into the SR covariant derivative, while D is used in practice for nondimensionalization and parameter scans.

2.4. Toward a Physics-Informed Learning Framework

Although the FTDGL equations provide a rich theoretical model, their nonlinear and non-differentiable nature hinders analytical treatment. Traditional numerical schemes—finite-difference, spectral, or finite-element—require fine resolution and cannot easily accommodate the fractal diffusion operator.
Physics-Informed Neural Networks (PINNs) offer a mesh-free alternative.
For a generic PDE N[ψ] = 0, the network parameters θ are trained to minimize:
$$\mathcal{L}(\theta) = \frac{1}{N_f}\sum_{i=1}^{N_f} \left\| \mathcal{N}\!\left[ \psi_\theta \right]\!\left( \mathbf{r}_i, t_i \right) \right\|^2 + \frac{1}{N_b}\sum_{j=1}^{N_b} \left\| \psi_\theta\!\left( \mathbf{r}_j, t_j \right) - \psi_b \right\|^2,$$
where the first term enforces the physics residual and the second matches boundary or data constraints [19,20,21].
In the Fractal PINN (F-PINN), the SR-covariant derivative (13) and fractal diffusion term Df∇²ψ are embedded directly within the automatic-differentiation pipeline. This allows the network to approximate the coupled FTDGL fields (ψ, A) while remaining constrained by physical laws, reproducing the smooth TDGL limit for Df → 0 and revealing new fractal corrections otherwise inaccessible to classical solvers. PINNs have demonstrated strong performance in modeling nonlinear dynamics such as Navier–Stokes turbulence [22], quantum systems [23], and materials under non-equilibrium conditions [24].
Key Takeaways—Toward Learning:
  • The FTDGL system is too complex for analytical or conventional numerical solvers due to its multiscale fractal operators.
  • Physics-Informed Neural Networks (PINNs) embed physical laws directly into the loss function, allowing data-efficient learning of PDE solutions.
  • Extending PINNs to fractal physics (F-PINN) integrates the SR covariant derivative and enables neural networks to capture the dynamics of non-differentiable geometries.

2.5. Discrete Representation via the Fractal Graph Neural Network (F-GNN)

While PINNs operate on continuous coordinates and global differentiation, they can become inefficient when resolving sharp vortex cores or localized magnetic gradients typical of fractal superconductivity.
To overcome this, we introduce a Fractal Graph Neural Network (F-GNN) formulation that discretizes the FTDGL fields on a spatial graph.
Let the superconducting domain be represented by a graph
G = (V, E),
where each node i ∈ V stores the local features
$$\mathbf{h}_i = \left[\, \mathrm{Re}\,\psi_i,\ \mathrm{Im}\,\psi_i,\ A_{x,i},\ A_{y,i} \,\right],$$
and each edge (i, j) ∈ E encodes spatial adjacency and the geometric distance rij.
A message-passing update takes the general form
$$\mathbf{h}_i^{(l+1)} = \mathbf{h}_i^{(l)} + \sum_{j \in \mathcal{N}(i)} \phi_m\!\left( \mathbf{h}_i^{(l)}, \mathbf{h}_j^{(l)}, \mathbf{e}_{ij} \right),$$
where φm is a learnable message function and eij includes edge features such as orientation and the local gauge-phase difference
$$\Delta S_{ij} = S_j - S_i - \frac{q^{*}}{\hbar}\int_{i}^{j} \mathbf{A}\cdot d\mathbf{r}.$$
By expanding Equation (19) over nearest neighbors, the GNN implicitly reconstructs discrete gradient and Laplacian operators:
$$\nabla\psi_i \approx \sum_{j\in\mathcal{N}(i)} w_{ij}\left( \psi_j - \psi_i \right), \qquad \nabla^2\psi_i \approx \sum_{j\in\mathcal{N}(i)} L_{ij}\,\psi_j,$$
where wij and Lij are learned weights approximating the physical stencils of the TDGL operator.
A physics-regularized loss combines the discrete FTDGL residual and available data:
$$\mathcal{L}_{F\text{-}GNN} = \lambda_{phys}\sum_{i\in V} \left\| \hat{\mathcal{N}}_{FTDGL}\!\left( \psi_i, \mathbf{A}_i \right) \right\|^2 + \lambda_{sup}\sum_{i\in V} \left\| \psi_i - \psi_i^{FD} \right\|^2,$$
where N̂FTDGL denotes the discretized operator from Equations (15)–(17), and ψi^FD are the reference finite-difference “teacher” values.
This formulation preserves gauge invariance through edge-phase embeddings and achieves locality consistent with the lattice structure of superconducting materials.
The resulting F-GNN therefore provides a discrete, graph-based analog of the F-PINN: both enforce the same physical equations, but the GNN learns them through local message passing rather than global differentiation, offering better scalability and resolution for non-differentiable geometries.
In this discrete setting, the message-passing Laplacian used in Section 3.2 acts as a graph-based realization of the SR covariant derivative at t = 0, with edge-phase factors ensuring gauge-consistent discretization.
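As a concrete illustration of this discrete covariant operator, the minimal NumPy sketch below assembles a gauge-covariant Laplacian of ψ on a periodic grid from neighbor "messages". It is our own illustration rather than the learned message function of the F-GNN, and it adopts the standard Peierls link convention for the edge phase, so sign conventions may differ from those used in the paper.

```python
import numpy as np

def covariant_laplacian(psi, Ax, Ay, dx, q_eff=1.0):
    """Discrete gauge-covariant Laplacian of a complex field psi on a periodic grid,
    built from neighbor messages psi_j * exp(-i * link_phase) - psi_i.
    Ax, Ay are the vector-potential components sampled at the grid nodes."""
    lap = np.zeros_like(psi, dtype=complex)
    for axis, A in ((0, Ax), (1, Ay)):          # x- and y-direction neighbors
        for sign in (+1, -1):                    # forward and backward links
            psi_j = np.roll(psi, -sign, axis=axis)
            A_mid = 0.5 * (A + np.roll(A, -sign, axis=axis))   # link-averaged A
            link_phase = q_eff * A_mid * (sign * dx)           # ~ (q*/hbar) * integral of A along the link
            lap += psi_j * np.exp(-1j * link_phase) - psi      # covariant neighbor difference
    return lap / dx**2

# Hypothetical usage: single-vortex ansatz on a 64 x 64 grid with zero vector potential
N, dx = 64, 0.25
x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
psi = np.tanh(np.hypot(X, Y)) * np.exp(1j * np.arctan2(Y, X))
Ax, Ay = np.zeros_like(X), np.zeros_like(Y)
print(covariant_laplacian(psi, Ax, Ay, dx).shape)   # (64, 64)
```

For A = 0 this reduces to the ordinary five-point Laplacian, which is the limit the learned stencils in Equation (19) are expected to recover.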

3. Neural Architectures for the Fractal TDGL System

The following section introduces the two complementary deep learning solvers developed for the fractal time-dependent Ginzburg–Landau (FTDGL) equations: a Fractal Physics-Informed Neural Network (F-PINN) operating on continuous space–time coordinates, and a Fractal Graph Neural Network (F-GNN) operating on discrete graph representations (see Figure 1).
Both models are trained using data provided by a finite-difference (FD) “teacher” solver and jointly minimize hybrid losses combining physical residuals and supervised consistency.
Notation and symbols
  • ψ = ρ exp(iS): complex superconducting order parameter, with ρ = |ψ| (amplitude) and S = arg(ψ) (phase);
  • A = (Ax, Ay): magnetic vector potential;
  • Bz = ∂Ay/∂x − ∂Ax/∂y: perpendicular magnetic-field component;
  • Df: physical fractal diffusion coefficient in the SR covariant derivative;
  • D: dimensionless fractality parameter used in the parametric D-scan (D = Df/D0);
  • kGL: Ginzburg–Landau parameter;
  • ξeff(D): effective coherence length (from |ψ|² radial fits);
  • λeff(D): effective magnetic-field penetration depth (from Bz profiles);
  • τrelax(D): relaxation time of |ψ|² to steady state;
  • λphys, λsup: weighting factors for the physics and supervised loss terms.
The total loss minimized during training is written as
$$\mathcal{L}_{tot} = \lambda_{phys}\,\mathcal{L}_{phys} + \lambda_{sup}\,\mathcal{L}_{sup}.$$
Unless otherwise noted, all quantities are expressed in normalized Ginzburg–Landau units.

3.1. Fractal Physics-Informed Neural Network (F-PINN)

3.1.1. Network Representation

The F-PINN approximates the continuous complex fields of the FTDGL system, ψ(r, t) = ψr + iψi and A = (Ax, Ay), by a feed-forward neural mapping
$$(\mathbf{r}, t) \;\mapsto\; \left( \psi_r, \psi_i, A_x, A_y \right)_{\theta},$$
where θ denotes all trainable parameters.
Each coordinate input (x,y,t) is normalized to [−1, 1] and propagated through L fully connected layers with width Nh and nonlinear activation tanh.
All weights are initialized using Xavier uniform scaling to ensure balanced gradients.
The network thus provides a differentiable surrogate of the order parameter and vector potential across both space and time.

3.1.2. Physics Embedding and Residuals

To enforce the FTDGL dynamics, automatic differentiation (AD) in PyTorch/TensorFlow computes all first- and second-order derivatives needed to evaluate the fractal operators in Equations (15)–(17):
$$\hat{\mathcal{N}}_{\psi} = i\hbar\left( \frac{\partial\psi}{\partial t} + V\cdot\nabla\psi - i D_f \nabla^2\psi \right) - \frac{1}{2 m^*}\left( -i\hbar\nabla - q^{*}\mathbf{A} \right)^2\psi - \alpha\psi - \beta|\psi|^2\psi,$$
$$\hat{\mathcal{N}}_{A} = \frac{\partial\mathbf{A}}{\partial t} - \frac{1}{\sigma}\left[ \frac{\hbar q^{*}}{2 m^{*} i}\left( \psi^{*}\nabla\psi - \psi\nabla\psi^{*} \right) - \frac{q^{*2}}{m^{*}}\,|\psi|^2\mathbf{A} \right] + \frac{1}{\sigma\mu_0}\,\nabla\times\nabla\times\mathbf{A}.$$
The residuals N̂ψ and N̂A enter the physics loss.
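To make the automatic-differentiation step concrete, the sketch below evaluates the order-parameter residual in normalized GL units with the drift velocity and vector potential set to zero for brevity; the network and helper names are hypothetical, and the full implementation additionally handles V, A, and the N̂A residual.

```python
import torch

def laplacian(f, x, y):
    """Second spatial derivatives of the scalar field f(x, y) via automatic differentiation."""
    fx = torch.autograd.grad(f, x, torch.ones_like(f), create_graph=True)[0]
    fy = torch.autograd.grad(f, y, torch.ones_like(f), create_graph=True)[0]
    fxx = torch.autograd.grad(fx, x, torch.ones_like(fx), create_graph=True)[0]
    fyy = torch.autograd.grad(fy, y, torch.ones_like(fy), create_graph=True)[0]
    return fxx + fyy

def ftdgl_psi_residual(net, x, y, t, D_f=0.1, alpha=-1.0, beta=1.0):
    """Residual of the order-parameter equation in normalized GL units with V = 0, A = 0:
    i*(dpsi/dt - i*D_f*lap(psi)) = -lap(psi)/2 + alpha*psi + beta*|psi|^2*psi."""
    psi_r, psi_i = net(torch.stack([x, y, t], dim=-1)).unbind(-1)   # real/imaginary outputs
    psi = torch.complex(psi_r, psi_i)
    dpsi_dt = torch.complex(
        torch.autograd.grad(psi_r, t, torch.ones_like(psi_r), create_graph=True)[0],
        torch.autograd.grad(psi_i, t, torch.ones_like(psi_i), create_graph=True)[0])
    lap_psi = torch.complex(laplacian(psi_r, x, y), laplacian(psi_i, x, y))
    lhs = 1j * (dpsi_dt - 1j * D_f * lap_psi)                       # SR covariant time derivative
    rhs = -0.5 * lap_psi + alpha * psi + beta * psi.abs() ** 2 * psi
    return lhs - rhs

# Usage with a hypothetical tiny MLP: collocation points must carry requires_grad=True
net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))
x, y, t = (torch.rand(256, requires_grad=True) for _ in range(3))
loss_phys = ftdgl_psi_residual(net, x, y, t).abs().pow(2).mean()
```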

3.1.3. Hybrid Loss Formulation

Training minimizes a weighted hybrid objective,
$$\mathcal{L}_{F\text{-}PINN} = \lambda_{phys}\left( \left\| \mathrm{Re}\,\hat{\mathcal{N}}_{\psi} \right\|_2^2 + \left\| \mathrm{Im}\,\hat{\mathcal{N}}_{\psi} \right\|_2^2 + \left\| \hat{\mathcal{N}}_{A} \right\|_2^2 \right) + \lambda_{sup}\left\| \mathbf{u}_{\theta}\!\left( \mathbf{r}_s, t_s \right) - \mathbf{u}^{FD} \right\|_2^2,$$
where the first term enforces the FTDGL equations on randomly sampled collocation points, and the second penalizes deviations from the finite-difference “teacher” snapshot u^FD = (ψr, ψi, Ax, Ay). The weighting coefficients λphys and λsup balance physics and supervision.
Optimization proceeds in two phases:
(i)
Adam pre-training for stability;
(ii)
Limited-memory BFGS refinement to minimize the stiff residuals.
This architecture recovers the classical TDGL behavior for Df → 0 and yields smooth, differentiable fields consistent with the fractal corrections when Df > 0.
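A minimal sketch of this two-phase optimization is given below; `loss_fn` is assumed to be a user-supplied closure returning the scalar hybrid loss on freshly sampled collocation and supervision points, and the epoch counts are illustrative.

```python
import torch

def train_two_phase(net, loss_fn, adam_epochs=2000, lbfgs_iters=500, lr=1e-3):
    """Phase (i): Adam pre-training for stability; phase (ii): L-BFGS refinement
    of the stiff residuals. `loss_fn` evaluates the hybrid F-PINN objective."""
    adam = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(adam_epochs):
        adam.zero_grad()
        loss = loss_fn()
        loss.backward()
        adam.step()

    lbfgs = torch.optim.LBFGS(net.parameters(), max_iter=lbfgs_iters,
                              line_search_fn="strong_wolfe")
    def closure():
        lbfgs.zero_grad()
        loss = loss_fn()
        loss.backward()
        return loss
    lbfgs.step(closure)
    return net
```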

3.2. Fractal Graph Neural Network (F-GNN)

3.2.1. Graph Discretization of the TDGL Domain

To overcome the global, smooth-bias limitations of PINNs, the F-GNN discretizes the spatial domain into a graph G = (V, E), with nodes i ∈ V corresponding to grid points ri and edges (i, j) ∈ E linking spatial neighbors (4- or 8-connected stencils).
Each node carries local field values
$$\mathbf{h}_i = \left[\, \mathrm{Re}\,\psi_i,\ \mathrm{Im}\,\psi_i,\ A_{x,i},\ A_{y,i} \,\right]$$
and each edge is annotated with geometric features
$$\mathbf{e}_{ij} = \left( \Delta x_{ij},\ \Delta y_{ij},\ r_{ij},\ \Delta S_{ij} \right),$$
where ΔSij = Sj − Si − (q*/ℏ) ∫ A·dr, with the line integral taken along the edge from node i to node j, preserves gauge invariance.
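The sketch below shows one way such edge features could be assembled on a regular grid, approximating the line integral of A with the midpoint rule; it builds only forward edges, omits periodic wrap-around, and uses hypothetical helper names rather than the exact preprocessing of the paper.

```python
import torch

def grid_edges_with_gauge_phase(psi, Ax, Ay, dx, q_eff=1.0):
    """Forward 4-neighbor edges and edge features [dx_ij, dy_ij, r_ij, dS_ij] on an
    Nx x Ny grid. dS_ij = S_j - S_i - q_eff * A_mid * dx approximates the link integral
    of A by the midpoint rule; reverse edges (with flipped signs) are omitted."""
    Nx, Ny = psi.shape
    S = torch.angle(psi)
    idx = torch.arange(Nx * Ny).reshape(Nx, Ny)
    src, dst, feats = [], [], []
    for A, (ex, ey), (si, sj) in ((Ax, (1.0, 0.0), (1, 0)), (Ay, (0.0, 1.0), (0, 1))):
        i_src = idx[: Nx - si, : Ny - sj].reshape(-1)
        i_dst = idx[si:, sj:].reshape(-1)
        A_mid = 0.5 * (A[: Nx - si, : Ny - sj] + A[si:, sj:]).reshape(-1)
        dS = S.reshape(-1)[i_dst] - S.reshape(-1)[i_src] - q_eff * A_mid * dx
        src.append(i_src); dst.append(i_dst)
        feats.append(torch.stack([torch.full_like(dS, ex * dx),
                                  torch.full_like(dS, ey * dx),
                                  torch.full_like(dS, dx), dS], dim=-1))
    return torch.stack([torch.cat(src), torch.cat(dst)]), torch.cat(feats)

# Hypothetical usage on a 56 x 56 snapshot with zero vector potential
psi = torch.randn(56, 56, dtype=torch.cfloat)
Ax, Ay = torch.zeros(56, 56), torch.zeros(56, 56)
edge_index, edge_attr = grid_edges_with_gauge_phase(psi, Ax, Ay, dx=0.25)
print(edge_index.shape, edge_attr.shape)   # torch.Size([2, 6160]) torch.Size([6160, 4])
```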

3.2.2. Message-Passing Update

Each GNN layer performs a message-passing step
$$\mathbf{m}_{ij}^{(l)} = \phi_m\!\left( \mathbf{h}_i^{(l)}, \mathbf{h}_j^{(l)}, \mathbf{e}_{ij} \right), \qquad \mathbf{h}_i^{(l+1)} = \phi_u\!\left( \mathbf{h}_i^{(l)}, \sum_{j\in\mathcal{N}(i)} \mathbf{m}_{ij}^{(l)} \right),$$
where φm and φu are small multilayer perceptrons (MLPs) shared across all edges and nodes.
Because each edge carries a gauge-phase difference ΔSij, the message m(j→i) corresponds to the discrete covariant gradient (ψj·exp(iΔSij) − ψi), so that summing messages over neighbors recovers a covariant Laplacian consistent with the spatial part of the SR derivative.
This operation mimics discrete differential operators: edge messages encode gradients, their aggregation approximates the Laplacian and current-continuity constraints.
After Lg layers, the final node features yield the predicted fields ψ̂i and Âi.
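A compact PyTorch sketch of one such message-passing block is shown below; it uses plain index_add aggregation rather than a dedicated graph library, and the layer sizes and names are illustrative rather than the exact ones used for the F-GNN.

```python
import torch
import torch.nn as nn

class FractalMPBlock(nn.Module):
    """One message-passing layer: phi_m builds edge messages from (h_i, h_j, e_ij),
    phi_u updates each node from its aggregated incoming messages."""
    def __init__(self, node_dim=4, edge_dim=4, hidden=64):
        super().__init__()
        self.phi_m = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, hidden),
                                   nn.ReLU(), nn.Linear(hidden, hidden))
        self.phi_u = nn.Sequential(nn.Linear(node_dim + hidden, hidden),
                                   nn.ReLU(), nn.Linear(hidden, node_dim))

    def forward(self, h, edge_index, edge_attr):
        src, dst = edge_index                       # edge_index: (2, E) [source, target]
        msg = self.phi_m(torch.cat([h[dst], h[src], edge_attr], dim=-1))   # m_{j->i}
        agg = torch.zeros(h.size(0), msg.size(-1), device=h.device)
        agg.index_add_(0, dst, msg)                 # sum of messages over neighbors N(i)
        return h + self.phi_u(torch.cat([h, agg], dim=-1))   # residual node update

# Hypothetical usage on a graph with ~10,000 nodes and 4-neighbor connectivity
h = torch.randn(10000, 4)                  # [Re psi, Im psi, Ax, Ay] per node
edge_index = torch.randint(0, 10000, (2, 40000))
edge_attr = torch.randn(40000, 4)          # [dx, dy, r_ij, dS_ij] per edge
print(FractalMPBlock()(h, edge_index, edge_attr).shape)   # torch.Size([10000, 4])
```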

3.2.3. Physics-Regularized Training Objective

The F-GNN loss parallels that of the F-PINN but operates on discrete node values:
$$\mathcal{L}_{F\text{-}GNN} = \lambda_{sup}\sum_{i\in V} \left\| \mathbf{h}_i - \mathbf{h}_i^{FD} \right\|_2^2 + \lambda_{phys}\sum_{i\in V} \left\| \hat{\mathcal{N}}_{FTDGL}^{\,disc}\!\left( \mathbf{h}_i, \{\mathbf{h}_j\}_{j\in\mathcal{N}(i)} \right) \right\|_2^2,$$
where N̂FTDGL^disc is the local, message-based discretization of the fractal TDGL operator.
The supervised term aligns the learned fields with the finite-difference teacher; the physics term softly enforces local conservation and gauge-covariant coupling.
Unlike PINNs, the GNN does not require global AD; its computation scales linearly with the number of edges, making it efficient for high-resolution domains.

3.2.4. Training Protocol and Hyperparameters

To ensure a fair and consistent comparison, both models were trained under comparable hyperparameter configurations.
The F-PINN architecture comprises four fully connected hidden layers with 128 neurons per layer, employing the Tanh activation function. Training was initially performed using the Adam optimizer with a learning rate of 1 × 10⁻³, followed by fine-tuning with the L-BFGS optimizer. The weighting between the physics-informed and supervised loss components was set to a ratio of 1:1. The network was trained using 4000 collocation points uniformly sampled within the computational domain.
The F-GNN model consists of four message-passing blocks, each containing 64 hidden units per node, and utilizes the ReLU activation function. Optimization was conducted using the Adam optimizer with a learning rate of 1 × 10⁻³. For this model, the physics-to-supervised loss weighting ratio was adjusted to 2:1. The graph representation included approximately 10,000 nodes, with each node connected to between 4 and 8 edges.
All models were trained for a maximum of 3000 epochs, with early stopping applied based on the validation loss to mitigate overfitting and improve generalization.
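For reference, the settings listed above can be collected in a single configuration object; the sketch below simply mirrors those values, with field names chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    """Configuration mirroring the settings described above (field names hypothetical)."""
    # F-PINN
    pinn_layers: int = 4
    pinn_width: int = 128
    pinn_activation: str = "tanh"
    pinn_lr: float = 1e-3
    pinn_loss_ratio: tuple = (1.0, 1.0)     # (lambda_phys, lambda_sup) = 1:1
    n_collocation: int = 4000
    # F-GNN
    gnn_blocks: int = 4
    gnn_hidden: int = 64
    gnn_activation: str = "relu"
    gnn_lr: float = 1e-3
    gnn_loss_ratio: tuple = (2.0, 1.0)      # physics-to-supervised weighting 2:1
    n_nodes: int = 10000
    neighbors: tuple = (4, 8)
    # Shared
    max_epochs: int = 3000
    early_stopping: bool = True
```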

3.3. Comparison: Continuous vs. Discrete Learning Bias

The F-PINN encodes the physics through global differentiation of continuous coordinates, producing smooth fields but occasionally over-smoothing localized vortex features.
The F-GNN, by contrast, encodes local message-passing rules that resemble discrete lattice operators, inherently capturing multi-scale gradients and vortex-core sharpness.
Empirically (Section 4), this difference translates to roughly 4× lower L2 error in |ψ|² and 2× lower error in Bz relative to the F-PINN, while maintaining flux quantization and robustness to 2–5% Gaussian noise.

3.4. Implementation Summary

Both architectures are implemented in PyTorch 2.8 with CUDA 12.6 acceleration.
Automatic differentiation handles the fractal derivatives in the F-PINN, whereas PyTorch Geometric enables efficient sparse-graph batching for the F-GNN.
The reference finite-difference (FD) solver described in Section 2 provides both supervised targets and benchmark metrics for quantitative evaluation.
Training follows a two-stage optimization: an Adam phase with a cosine learning-rate schedule for coarse convergence, followed by an L-BFGS refinement to minimize residual stiffness near vortex cores.
All gradients are computed analytically through automatic differentiation (for F-PINN) or via discrete message-passing operators (for F-GNN).
Hybrid loss weighting between supervised and physics terms is annealed linearly during the first 40% of training to prevent premature suppression of fine-scale gradients.
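A minimal sketch of such a linear warm-up schedule is given below, assuming the physics weight is the annealed quantity; the exact schedule used in training may differ.

```python
def physics_weight(epoch, total_epochs, lam_final, warmup_frac=0.4):
    """Linearly ramp the physics-loss weight over the first 40% of training so that
    fine-scale gradients are not suppressed prematurely (an illustrative schedule)."""
    progress = min(epoch / (warmup_frac * total_epochs), 1.0)
    return lam_final * progress

# e.g. lam_phys = physics_weight(epoch, total_epochs=3000, lam_final=1.0)
```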
Comprehensive algorithmic workflows—including training loops, residual construction, and optimization scheduling—are detailed in Appendix A (Algorithms A1 and A2) while network hyperparameters, numerical configurations, and finite-difference reference settings are summarized in Appendix B, to ensure full reproducibility.

4. Results

4.1. Model Performance and Convergence

4.1.1. Training Dynamics

Both the Fractal Physics-Informed Neural Network (F-PINN) and the Fractal Graph Neural Network (F-GNN) were trained on the same initial conditions generated by the finite-difference (FD) “teacher” solver of the fractal time-dependent Ginzburg–Landau (FTDGL) system.
The FD simulation provides a high-fidelity reference for the evolution of the superconducting order parameter ψ r , t and vector potential A r , t under periodic boundary conditions.
Figure 2 compares the evolution of total, physics, and supervised losses for the two models.
The F-PINN exhibits the typical two-phase convergence behavior common to PINN architectures: an initial steep descent governed by the supervised term followed by a slow asymptotic relaxation as the physics residuals dominate.
The total loss decreases monotonically by nearly two orders of magnitude, stabilizing after about 2500 epochs at Ltot ≈ 5 × 10⁻².
This slow convergence reflects the global nature of automatic differentiation—every collocation point contributes simultaneously to the residual, causing parameter updates to diffuse across the domain.
By contrast, the F-GNN reaches its asymptotic plateau after roughly 500 epochs, a five-fold improvement in wall-clock convergence rate.
The oscillations visible in the F-GNN total loss correspond to local message-passing updates: each batch enforces neighborhood consistency rather than global smoothness, allowing rapid correction of high-frequency components such as vortex cores.
The physics term is activated gradually through a warm-up at epoch ≈ 400 (Figure 2, middle panel), which prevents premature collapse of the local field gradients and stabilizes training.
The supervised component (bottom panel) decays exponentially, achieving an order-of-magnitude lower final error than the F-PINN.
These curves confirm that the F-GNN’s inductive bias aligns naturally with the discrete, lattice-like structure of the TDGL operators.

4.1.2. Reconstruction and Field Fidelity

Figure 3 visualizes the reconstructed order-parameter density |ψ|² and magnetic field Bz = ∂xAy − ∂yAx at t = 0.
Panels (a–f) display direct field comparisons with the FD teacher, while panels (g–i) present difference maps.
The F-PINN captures the global symmetry and approximate vortex-lattice geometry but produces broadened vortex cores and slightly elevated background amplitudes. These features arise from the smooth-kernel interpolation implicit in PINN training: automatic differentiation enforces continuous gradients even across singular phase regions.
The F-GNN, on the other hand, reproduces the fine-scale spatial structure, preserving the sharp phase singularities and localized magnetic peaks around each vortex.
Residual maps (Figure 3, bottom row) show that F-PINN errors concentrate near vortex centers where ψ → 0, whereas F-GNN residuals remain almost homogeneous and an order of magnitude smaller in amplitude.
Quantitatively, the relative L2 errors confirm these trends: for |ψ|², L2 = 0.180 (F-PINN) vs. 0.058 (F-GNN); for Bz, L2 = 0.643 (F-PINN) vs. 0.214 (F-GNN).
Hence the GNN reduces the error by factors of ≈3–4 in amplitude and ≈2–3 in magnetic response, consistent across independent runs.
This improvement originates from the discrete Laplacian operator implemented through localized edge messages that approximate gauge-covariant derivatives.
Each node aggregates information from its nearest neighbors, enforcing local flux conservation and mitigating the global over-smoothing typical of continuous PINN representations.

4.1.3. Statistical Robustness

To verify reproducibility, both models were trained with three independent random initializations. Mean, standard deviation, and 95% confidence intervals (Student’s t) are summarized in Table 1. The F-GNN achieves lower mean errors and narrower confidence intervals, indicating higher stability.
Figure 4 summarizes the statistical performance of both models across three independent random seeds.
The F-GNN exhibits both lower mean error and narrower confidence bands, confirming its higher robustness and reproducibility.
Across seeds, the F-GNN reduces order-parameter error by ≈75% and magnetic-field error by ≈40% relative to F-PINN.
Small standard deviations (<0.02) confirm convergence to similar minima regardless of initialization, a desirable property for physical interpretability.
Physical significance. Beyond the numerical improvement, the ≈4× reduction in |ψ|² error has concrete physical implications. It corresponds to a substantial enhancement in vortex-core localization: the F-PINN typically exhibits ≈20–25 pixels of positional uncertainty, whereas the F-GNN achieves sub-pixel accuracy. The improved modulus reconstruction also sharpens the 2π phase winding around each vortex, yielding more reliable estimates of experimentally accessible quantities such as the effective coherence length ξeff, penetration depth λeff, and peak magnetic field Bz. Thus, the statistical advantage of the F-GNN directly translates into more accurate physical predictions.

4.1.4. Comparison with Classical PINN and GNN Baselines

To place the fractal architectures in context, we also trained two classical deep learning baselines on the same multivortex benchmark:
(i)
A classical PINN solving the standard (non-fractal) TDGL equations;
(ii)
A classical coordinate-based GNN trained purely in a supervised manner without fractal diffusion or gauge-aware message passing.
The classical PINN, which retains the same network architecture as the F-PINN but removes the fractal correction term from the residual, attains
L2(|ψ|²) = 1.793 × 10⁻¹, L2(Bz) = 5.691 × 10⁻¹,
i.e., a slightly higher order-parameter error and marginally lower magnetic-field error than the F-PINN reported above.
The classical GNN, which uses the same message-passing architecture as the F-GNN but is trained only to fit the finite-difference teacher snapshot, achieves
L2(|ψ|²) = 6.070 × 10⁻², L2(Bz) = 5.027 × 10⁻¹.
While this baseline improves over both PINN variants for |ψ|², it remains significantly less accurate than the proposed F-GNN on the magnetic field. In contrast, the F-GNN consistently reaches L2(|ψ|²) ≈ 4–5 × 10⁻² and L2(Bz) ≈ 3.5 × 10⁻¹ (Section 4.1), thus outperforming both classical baselines and the F-PINN.
These comparisons show that simply increasing model flexibility (as in the classical GNN) is not sufficient: fractality-aware physics encoding and gauge-consistent message passing are crucial for accurately resolving vortex-core structure and magnetic-field morphology in the fractal TDGL setting.

4.2. Ablation and Robustness Studies

To isolate architectural effects, several ablation experiments were performed.
The neighborhood size and physics-loss weight λphys were varied, as summarized in Table 2.
Reducing neighborhood connectivity from eight to four improved accuracy due to the suppression of redundant long-range messages.
Eliminating the physics regularizer further improved local sharpness, suggesting that the supervised component already encodes sufficient physical structure through the FD teacher data.
The best trade-off between accuracy and physical smoothness corresponds to the 4-neighbor, unregularized case. Introducing a small λphys slightly improves magnetic-field continuity at the cost of marginal amplitude diffusion—an effect reminiscent of London penetration smoothing.
Noise-robustness experiments confirm that both models remain stable under measurement perturbations. Gaussian noise of 2% and 5% added to the teacher fields increased L2(|ψ|2) by 0.006 and L2(Bz) by ≈0.02, respectively—well within the 95% confidence bands. This indicates that both networks generalize rather than memorize, with the GNN maintaining its relative advantage.
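The evaluation step of such a noise-robustness test can be sketched as follows; here Gaussian perturbations scaled by the field's standard deviation are applied to the teacher snapshot and the relative L2 metric is recomputed (the full protocol additionally retrains the networks on the perturbed data).

```python
import torch

def noise_robustness(model_pred, teacher, noise_levels=(0.02, 0.05), seed=0):
    """Relative L2 error of a fixed model prediction against noisy copies of the
    teacher field, emulating 2-5% Gaussian measurement perturbations.
    `model_pred` and `teacher` are same-shape field tensors, e.g. |psi|^2 or Bz."""
    torch.manual_seed(seed)
    results = {}
    for eps in noise_levels:
        noisy = teacher + eps * teacher.std() * torch.randn_like(teacher)
        results[eps] = (torch.norm(model_pred - noisy) / torch.norm(noisy)).item()
    return results
```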

4.3. Physical Integrity and Vortex-Core Localization

Physical validation was assessed via two key observables: total magnetic-flux conservation and vortex-core localization.
Both networks conserve flux to machine precision (< 10⁻⁶ relative error).
However, the spatial accuracy of vortex identification differs substantially: the root-mean-square displacement of vortex cores with respect to the FD reference is ≈23 pixels for the F-PINN but below 1 pixel for the F-GNN.
This improvement underscores the capacity of the graph representation to preserve phase coherence and flux quantization at the discrete level.
Visual inspection of the phase field reveals that the F-PINN occasionally merges neighboring vortices into elongated defects, whereas the F-GNN resolves distinct cores with correct winding numbers.
The message-passing operator effectively acts as a discrete gauge connection, transmitting information about neighboring phase gradients in a manner analogous to Wilson loops in lattice gauge theory.
Consequently, the F-GNN naturally enforces local curl-free conditions except at quantized singularities, thereby preserving the topological structure of the superconducting condensate.
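Vortex cores and their winding numbers can be extracted from the predicted phase field with the standard plaquette recipe sketched below (our own illustration; the exact localization procedure used for Figure 5 may differ): wrapped phase differences are summed around each interior plaquette, and near-multiples of 2π are flagged as vortices.

```python
import numpy as np

def vortex_positions(psi):
    """Locate vortex cores from the 2*pi phase winding around each interior plaquette
    of a complex field psi sampled on a regular grid."""
    S = np.angle(psi)
    wrap = lambda a: (a + np.pi) % (2 * np.pi) - np.pi   # map phase differences to (-pi, pi]
    dSx = wrap(S[1:, :] - S[:-1, :])                     # edges (i, j) -> (i+1, j)
    dSy = wrap(S[:, 1:] - S[:, :-1])                     # edges (i, j) -> (i, j+1)
    winding = dSx[:, :-1] + dSy[1:, :] - dSx[:, 1:] - dSy[:-1, :]
    charge = np.rint(winding / (2 * np.pi)).astype(int)  # integer winding per plaquette
    return np.argwhere(charge != 0), charge

# Example: a single centered vortex yields one plaquette with charge +1
N = 64
x = np.arange(N) - N / 2 + 0.5
X, Y = np.meshgrid(x, x, indexing="ij")
psi = np.tanh(np.hypot(X, Y)) * np.exp(1j * np.arctan2(Y, X))
cores, charge = vortex_positions(psi)
print(len(cores), charge.sum())
```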
The spatial correspondence between predicted and reference vortex cores is summarized in Figure 5. Here, we overlay vortex positions from the finite-difference teacher simulation, the F-PINN, and the F-GNN to highlight the dramatic improvement in localization accuracy achieved by the graph-based model.

4.4. Error-Field Analysis and Scaling Behavior

Error fields, computed as Δ|ψ|² = |ψpred|² − |ψFD|² and ΔBz = Bz,pred − Bz,FD, display characteristic spatial correlations.
In the F-PINN, residuals cluster around vortex cores and exhibit long-range oscillatory tails of alternating sign, consistent with under-resolved higher harmonics in the Laplacian operator.
In contrast, F-GNN residuals are confined within compact regions whose diameter matches the coherence length ξ, indicating that the graph topology correctly resolves the physical correlation scale.
Fourier analysis of the residual spectra confirms that the GNN suppresses high-frequency leakage above kξ ≈ 1.5, yielding cleaner separation between core and background modes.
Scaling tests further show that the F-GNN’s error scales linearly with grid resolution (Δx), whereas the F-PINN exhibits sub-linear scaling due to its smooth interpolation kernel.
This suggests that graph-based models may achieve higher asymptotic accuracy for large-domain simulations without exponentially increasing collocation density.
Grid-Resolution Dependence and Numerical Uncertainty
To verify that the reported relative L2 errors are not dominated by discretization, we performed a grid-refinement study of the finite-difference (FD) teacher solver. The teacher was run on grids of 48 × 48, 64 × 64, and 96 × 96, and the coarser-grid fields were upsampled and compared against the finest grid. Relative differences were
L2(|ψ|²) = 4.135 × 10⁻² and L2(Bz) = 1.918 × 10⁻¹ for 48 → 96, and
L2(|ψ|²) = 1.981 × 10⁻² and L2(Bz) = 1.106 × 10⁻¹ for 64 → 96.
These values confirm consistent grid-convergent behavior of the FD solver, with |ψ|² varying by only 2–4% and Bz by 11–19% across the tested grids.
For the learning models, the variability across seeds (reported in Table 1) provides 95% confidence intervals reflecting initialization uncertainty. Together with the FD grid-refinement results above, this demonstrates that the reported L2 errors primarily reflect model accuracy rather than discretization artifacts or numerical instability in the teacher reference.

4.5. Comparative Computational Efficiency

Training efficiency is another critical factor for scientific applicability.
For identical hardware (single A100 GPU), the F-PINN required ≈2.8 h for 3000 epochs, while the F-GNN converged within ≈35 min for 1200 epochs to comparable or superior accuracy.
Despite the added message-passing overhead, the smaller batch size and reduced gradient-path depth of the GNN offset its cost.
Memory footprint remained below 2 GB in both cases.
This advantage becomes increasingly important for three-dimensional TDGL or multi-vortex simulations, where global automatic differentiation rapidly becomes intractable.
Hyperparameter sensitivity and robustness. To assess whether the relative performance of F-PINN and F-GNN depends strongly on architectural or optimization choices, we performed a systematic hyperparameter study summarized in Appendix B.6 (Table A4). For the F-PINN, varying width, depth, learning rate, and activation function yields relative L2(|ψ|²) errors in the range 1.62 × 10⁻¹–2.54 × 10⁻¹, whereas the corresponding sweeps for the F-GNN remain in the much lower interval 4.04 × 10⁻²–8.22 × 10⁻². Thus, across all tested configurations, the F-GNN retains a factor of ≈2–4 lower error than the best-performing F-PINN, while the training-time advantage reported above persists qualitatively. These results indicate that our conclusions regarding both accuracy and computational efficiency are robust under reasonable hyperparameter variations.

5. Discussion

5.1. Physical Interpretation and Broader Implications

From a physical perspective, these results emphasize the complementary roles of continuous and discrete learning formulations.
The F-PINN, through its differentiable representation, retains global coherence and is well-suited for analytical continuation, parameter sweeping, and exploring limiting regimes such as Df → 0.
The F-GNN, conversely, embodies the lattice-based nature of the fractal TDGL with fractal diffusion Df, where local gauge invariance and topological constraints dominate.
Its superior performance on non-differentiable manifolds suggests that graph networks provide a natural discretization of Nottale’s covariant derivative, bridging the gap between scale relativity and computational modeling.
Conceptually, this synergy mirrors the progression from continuum field theories to lattice formulations in high-energy physics.
In superconductivity, it enables exploration of mesoscale phenomena—such as vortex-cluster interactions, fractal pinning landscapes, and quantized flux-tube reconnection—beyond the reach of classical TDGL solvers.
Moreover, the ability to integrate partial or noisy experimental data opens avenues for data-assimilated “digital twins” of superconducting films or Josephson networks.

5.2. Summary of Quantitative Findings

  • Accuracy gain: F-GNN achieves ≈4× lower L2(|ψ|2) and ≈2× lower L2(Bz) errors than F-PINN.
  • Reproducibility: 95% confidence intervals overlap minimally; standard deviations < 0.02.
  • Vortex fidelity: mean positional error < 1 pixel (GNN) vs. 23 pixels (PINN).
  • Flux conservation: preserved to < 10⁻⁶ relative precision in both models.
  • Noise tolerance: error increase < 0.01 for 5% Gaussian perturbation.
  • Efficiency: training time reduced ≈5× for equal hardware.
Together these metrics establish the F-GNN as a physically consistent, data-efficient, and computationally scalable framework for solving the fractal TDGL equations.

5.3. Concluding Remarks on the Learning Framework

The comparative study reveals that physics-informed deep learning can adapt to non-differentiable geometries when combined with appropriate local connectivity.
The F-PINN remains invaluable for global regularization and analytical insight, while the F-GNN captures discrete gauge physics directly.
Future extensions will incorporate temporal rollout for full dynamical evolution and coupling to experimental magneto-optical data, enabling direct inference of the dimensionless fractality parameter D from real systems (with the corresponding physical diffusion given by Df = D·D0).
For transparency and reproducibility, all architectural details, optimization settings, and physical constants are summarized in Appendix B.
The complete training workflows for both networks, including pseudocode and optimizer scheduling, are provided in Appendix A (Algorithms A1 and A2).
Comparison with operator-learning approaches. It is also instructive to contrast the proposed learning strategies with global operator-learning frameworks such as Fourier Neural Operators (FNOs) and Deep Operator Networks (DeepONets). These models learn global mappings between function spaces and have demonstrated excellent performance for smooth PDEs, but they do not encode gauge covariance, local phase circulation, or vortex singularities. In the fractal-TDGL context—where sharp vortex cores, non-differentiable curvature, and gauge-coupled interactions dominate—the global structure of FNO/DeepONet leads to reduced physical consistency unless heavily regularized. By contrast, the F-GNN enforces locality through message passing and reconstructs discrete covariant derivatives directly on the graph, yielding faster convergence, improved accuracy, and substantially better physical fidelity as D increases and the fields become less smooth. This inductive bias explains why the F-GNN retains stability and accuracy even in the strongly non-differentiable regime where global operator-learning models typically struggle.

5.4. Validation Against Experiment

Although the present study focuses on simulation-based validation, the predicted vortex structures and magnetic-field distributions exhibit features that are experimentally observable in type-II superconductors. Magneto-optical imaging (MOI) and scanning Hall-probe microscopy (SHPM) routinely resolve flux-line lattices and vortex clustering at comparable spatial scales [32].
To place our normalized Ginzburg–Landau units in experimental context, we provide explicit conversions to SI scales. Taking a representative coherence length ξ0 = 10–15 nm for NbSe2 or YBa2Cu3O7−δ, one spatial GL unit corresponds to this physical length scale, so our 56 × 56 simulation domain represents an area of approximately 0.6–0.9 μm on a side. The characteristic magnetic-field unit is B0 = Φ0/(2πξ0²), giving B0 ≈ 1–3 T for the above materials. Thus, the predicted peak fields Bpeak = 0.2–0.5 (GL units) correspond to 0.2–1.5 T in physical units—well within the measurable range of SHPM and MOI. Likewise, the effective coherence lengths ξeff(D) ≈ 7–14 (GL units) correspond to physical vortex-core radii of 70–200 nm, compatible with experimental vortex-core imaging. These conversions demonstrate that the fractal-TDGL predictions can be meaningfully compared with experimental data and that the scales probed by the F-GNN lie within the resolution of modern imaging techniques.
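The arithmetic behind these conversions is summarized in the short sketch below, using the flux quantum Φ0 ≈ 2.07 × 10⁻¹⁵ Wb and the representative coherence lengths quoted above; the material parameters are illustrative, not fitted.

```python
import math

PHI_0 = 2.067833848e-15            # magnetic flux quantum [Wb]

def gl_to_si(xi0_nm, domain_gl=56, b_peak_gl=(0.2, 0.5)):
    """Convert normalized GL quantities to SI scales for a representative coherence
    length xi0 (worked-example sketch with illustrative material parameters)."""
    xi0 = xi0_nm * 1e-9
    B0 = PHI_0 / (2 * math.pi * xi0 ** 2)         # characteristic field unit [T]
    domain_um = domain_gl * xi0 * 1e6             # simulation box side [micrometers]
    b_peak_T = tuple(b * B0 for b in b_peak_gl)   # predicted peak fields [T]
    return B0, domain_um, b_peak_T

for xi in (10, 15):                               # representative coherence lengths [nm]
    B0, L, Bpk = gl_to_si(xi)
    print(f"xi0 = {xi} nm: B0 = {B0:.2f} T, box = {L:.2f} um, "
          f"B_peak = {Bpk[0]:.2f}-{Bpk[1]:.2f} T")
```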
In particular, the F-GNN’s ability to preserve localized flux quantization and reproduce sharp vortex cores aligns with MOI observations of multiscale vortex pinning and fractal flux penetration patterns reported in NbSe2 and YBa2Cu3O7−δ thin films [33,34]. These experimental systems display self-similar vortex clustering and irregular flux fronts consistent with non-differentiable magnetic textures predicted by the fractal TDGL framework.
The dimensionless fractality parameter D introduced via Scale Relativity could, in principle, be inferred by fitting model-predicted flux distributions to experimental magneto-optical data. This establishes a direct path for quantitative validation: by adjusting D to reproduce the observed scaling of vortex density fluctuations, one could constrain the effective fractal dimension of the superconducting condensate.
Future work will incorporate such data assimilation using F-GNN temporal rollout, enabling model-to-experiment alignment in real superconducting systems.
We note, however, that the fractal extension of the TDGL equation should be regarded as a theoretical framework rather than an experimentally established property of superconductors. While multiscale vortex clustering and irregular flux-front propagation have been reported in several type-II materials, these phenomena do not constitute direct evidence of fractal space–time. They only suggest that conventional smooth-manifold TDGL models may be insufficient to fully capture certain mesoscale features. Accordingly, our use of fractal TDGL is intended as a hypothesis-driven modeling approach, and the results presented here evaluate the computational feasibility and physical consequences of this model rather than claiming experimental confirmation of fractality. Future experimental work—particularly quantitative fitting of flux-density fluctuations—will be necessary to determine whether the dimensionless fractality parameter D is supported by real materials.
Connection to D-inference from experiment. As shown in our parametric analysis (Section 6), the fractality parameter D systematically modulates the effective coherence length ξ e f f ( D ) , the penetration depth λ e f f ( D ) , the peak magnetic field B p e a k ( D ) , and the spatial statistics of vortex-density fluctuations. These quantities are directly measurable in magneto-optical imaging (MOI) and scanning Hall-probe microscopy (SHPM). Accordingly, the present framework provides the full forward map D ξ e f f , λ e f f , B p e a k , ρ v o r t e x required to infer D from experimental vortex-density data. Implementing such an inversion would require sample-specific calibration and noise modeling, and is therefore left for future work, but the measurability pathway is now clearly defined.
Connection to Real Superconducting Materials. The physical scales predicted by the FTDGL simulations can be placed into direct correspondence with experimental materials by matching the Ginzburg–Landau units to representative parameters. For NbSe2, with a coherence length ξ ≈ 10–15 nm and penetration depth λ ≈ 200–250 nm, the normalized simulation domain used here corresponds to approximately 0.6–0.9 μm, and the characteristic GL field unit B0 = Φ0/(2πξ²) lies in the range 1–3 T. Similar values apply to YBa2Cu3O7−δ, where ξ ≈ 1.5–3 nm and λ ≈ 150–200 nm, resulting in even higher B0 scales. These ranges fall squarely within the sensitivity window of magneto-optical imaging (MOI) and scanning Hall-probe microscopy (SHPM), implying that the vortex-core radii, peak-field profiles, and vortex-density maps produced by the FTDGL simulations are experimentally accessible.
Moreover, because the fractal diffusion parameter D systematically modulates ξeff, λeff, and the spatial statistics of vortex-density fluctuations (Section 6), the present framework provides the forward map required to infer D from experimental MOI or SHPM data. Performing such an inversion requires sample-specific calibration and noise modeling and is therefore left for future work. Nevertheless, the numerical solvers developed here are fully compatible with experimental data assimilation pipelines and can be used in future studies to quantitatively compare FTDGL predictions with real superconducting images.
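To make the inversion pathway concrete, the sketch below shows one way such a forward-map fit could be organized: the D-scan of Section 6 is tabulated, interpolated, and matched to measured observables by weighted least squares. All numerical values are placeholders (they are not the Section 6 results), and the observable estimates, uncertainties, and function names are illustrative assumptions; sample-specific calibration and noise modeling are deliberately omitted, as discussed above.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize_scalar

# Forward map tabulated from the FTDGL parametric scan.
# Placeholder numbers: replace with the xi_eff, lambda_eff, B_peak values of Section 6.
D_grid  = np.array([0.0, 0.1, 0.2, 0.4, 0.6])
xi_tab  = np.array([1.00, 1.05, 1.12, 1.25, 1.40])   # placeholder, normalized GL units
lam_tab = np.array([1.00, 0.96, 0.93, 0.90, 0.94])   # placeholder, normalized GL units
bpk_tab = np.array([1.00, 0.97, 0.96, 0.99, 1.02])   # placeholder, B_peak(D)/B_peak(0)

# Continuous interpolants D -> observables.
xi_of_D  = interp1d(D_grid, xi_tab,  kind="cubic")
lam_of_D = interp1d(D_grid, lam_tab, kind="cubic")
bpk_of_D = interp1d(D_grid, bpk_tab, kind="cubic")

# Hypothetical experimental estimates (e.g., from MOI/SHPM radial profiles),
# already converted to the same normalized units, with 1-sigma uncertainties.
obs   = np.array([1.18, 0.92, 0.97])
sigma = np.array([0.05, 0.04, 0.03])

def chi2(D):
    """Weighted least-squares mismatch between model and measured observables."""
    model = np.array([xi_of_D(D), lam_of_D(D), bpk_of_D(D)])
    return float(np.sum(((model - obs) / sigma) ** 2))

res = minimize_scalar(chi2, bounds=(0.0, 0.6), method="bounded")
print(f"Best-fit fractality parameter D ~ {res.x:.3f} (chi^2 = {res.fun:.2f})")
```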

6. Fractality Parameter D : Physical Implications and Parametric Study

Physical role. In the fractal-TDGL model, the Scale-Relativity covariant derivative introduces the complex term iDf∇²ψ into the order-parameter dynamics. Writing ψ = ρ e^{iS} and linearizing about a uniform state (ρ ≈ 1, small S) shows that Df couples amplitude and phase curvature: the imaginary Laplacian damps rapid phase variations and modifies the recovery of ψ near vortex cores.
Practically, this affects:
(i)
Core shaping, described by an effective coherence length ξeff(D);
(ii)
Field spreading, through an effective penetration depth λeff(D); and
(iii)
Topology, with flux quantization Φ0 remaining invariant.
Mini-study. We scanned D ∈ {0, 0.1, 0.2, 0.4, 0.6} at fixed kGL = 0.9 using the same four-vortex initial condition. For each D, the finite-difference (FD) “teacher” produced a steady reference; F-PINN and F-GNN were then evaluated on the identical grid.
Metrics and extraction. The effective coherence length ξeff(D) was obtained by fitting azimuthally averaged |ψ|² profiles to a GL-like core shape, either 1 − a·sech²(r/ξeff) or tanh²(r/(2ξeff)).
The penetration depth λeff(D) was extracted from the exponential tail of Bz(r), and the peak magnetic field Bpeak(D) corresponds to the on-axis maximum of Bz.
Flux quantization was verified via Φ/Φ0 = 1.000 ± 10⁻⁶.
Learning fidelity was quantified by the relative L2 errors of |ψ|² and Bz against the FD reference.
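For reference, a minimal sketch of this extraction step is given below, assuming the azimuthally averaged radial profiles r, |ψ|²(r), and Bz(r) are available as NumPy arrays. The sech² core form and the exponential tail follow the text; the function names, initial guesses, tail cutoff r_tail_min, and the flux-quantum normalization phi0 are illustrative assumptions rather than the exact analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def core_profile(r, a, xi_eff):
    """GL-like vortex-core shape: |psi|^2 ~ 1 - a*sech^2(r/xi_eff)."""
    return 1.0 - a / np.cosh(r / xi_eff) ** 2

def field_tail(r, b0, lam_eff):
    """Exponential tail of the out-of-plane field: Bz(r) ~ b0*exp(-r/lam_eff)."""
    return b0 * np.exp(-r / lam_eff)

def extract_lengths(r, psi2_avg, bz_avg, r_tail_min=2.0):
    """Fit xi_eff from |psi|^2(r) and lam_eff from the tail of Bz(r)."""
    popt_core, _ = curve_fit(core_profile, r, psi2_avg, p0=(1.0, 1.0))
    a, xi_eff = popt_core
    mask = r > r_tail_min                      # restrict to the field tail
    popt_tail, _ = curve_fit(field_tail, r[mask], bz_avg[mask],
                             p0=(bz_avg[mask][0], 1.0))
    b0, lam_eff = popt_tail
    return xi_eff, lam_eff

def flux_ratio(bz_grid, dx, dy, n_vortices, phi0=1.0):
    """Flux-quantization check: integrated Bz over the cell divided by n*Phi_0.
    phi0 must be expressed in the same units as bz_grid*dx*dy (placeholder value here)."""
    return (bz_grid.sum() * dx * dy) / (n_vortices * phi0)
```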
Results (see Figure 6 and Figure 7 and Table 3). The effective coherence length ξeff increases steadily with D, indicating broader vortex cores, while the penetration depth λeff initially decreases up to D ≈ 0.4 and slightly rises thereafter (Figure 6).
This non-monotonic λeff(D) trend reflects the competition between phase-curvature damping and magnetic-screening redistribution induced by iDf∇²ψ.
Mechanistic origin of the non-monotonic trend. The non-monotonic dependence of λeff on the fractality parameter D arises from the interplay between two distinct physical mechanisms. For small D , the fractal diffusion term damps phase curvature near vortex cores, reducing the circulating supercurrent and causing a decrease in λeff. At larger D , however, the same diffusion term redistributes phase curvature over a broader annular region, effectively widening the current-carrying shell and leading to an increase in λeff. The resulting competition between curvature damping and spatial redistribution naturally yields a non-monotonic trend, even in the absence of an analytical benchmark for the fractal TDGL system.
The normalized peak field Bpeak( D ) shows a shallow minimum near D ≈ 0.1–0.2 followed by recovery and mild enhancement for larger D (Figure 7), suggesting that moderate fractality temporarily suppresses, but does not eliminate, core magnetization.
In all cases, Φ/Φ0 remains constant within numerical precision, confirming strict topological invariance.
Across all D , F-GNN maintains lower L2 errors than F-PINN (Table 3), with the performance gap widening as D increases—evidence of the GNN’s stronger local inductive bias on non-differentiable geometries.
Interpretation. The fractality parameter D acts as a geometry tuner rather than a topology changer: it redistributes current and magnetic field over mesoscopic scales—yielding broader ξeff, non-monotonic λeff, and a shallow dip followed by recovery of Bpeak—while preserving quantized flux. This separation (topology fixed, geometry tuned) offers a practical route for estimating D experimentally by fitting vortex-core and field profiles obtained from magneto-optical imaging or scanning Hall-probe microscopy.
Connection to measurable observables. The extracted quantities ξeff(D), λeff(D), and Bpeak(D) correspond directly to experimentally accessible features of type-II superconductors. The coherence length ξeff determines the vortex-core radius observable in STM or high-resolution magneto-optical imaging, while the penetration depth λeff governs the spatial decay of the out-of-plane field measured by scanning Hall-probe microscopy. Because ξeff increases monotonically with D and λeff exhibits a distinct non-monotonic dependence, the pair (ξeff, λeff) provides a two-dimensional signature that can, in principle, be fitted to experimental radial profiles to infer an effective fractal diffusion parameter D. Likewise, the shallow minimum and recovery in Bpeak(D) provide an additional constraint when matching to measured vortex-core magnetization. These relationships establish a concrete link between the fractality parameter and measurable properties of vortex structure, enabling potential comparison with real superconducting samples in future work.
Experimental inversion of D. Taken together, the monotonic increase of ξeff(D), the non-monotonic trend of λeff(D), and the shallow minimum in Bpeak(D) provide a multi-observable signature of the fractality parameter. Because these three quantities are routinely measured in magneto-optical imaging and scanning Hall-probe microscopy, they offer a practical route for estimating D from experimental vortex-density maps. While implementing such an inversion requires sample-specific calibration and is beyond the scope of the present numerical study, the parametric trends reported here supply the forward map needed for future data-driven determination of the effective fractal diffusion parameter.

7. Conclusions

This study introduced two complementary machine learning frameworks—the Fractal Physics-Informed Neural Network (F-PINN) and the Fractal Graph Neural Network (F-GNN)—for solving the Fractal Time-Dependent Ginzburg–Landau (FTDGL) equations derived from Scale Relativity.
By embedding Nottale’s covariant derivative and fractal diffusion term into neural architectures, we demonstrated that deep learning can reproduce superconducting dynamics in non-differentiable, fractal space–time geometries.
Quantitative comparisons against finite-difference simulations showed that the F-GNN achieves markedly superior accuracy and efficiency.
Across multiple random seeds, it reduced relative L2 errors by ≈4× for the order-parameter density and ≈2× for the magnetic field, while preserving flux quantization and vortex topology within one pixel.
The F-PINN, though physically consistent, exhibited smoother reconstructions and slower convergence due to its global collocation bias.
These findings confirm that local, message-passing architectures naturally emulate gauge-covariant lattice operators, aligning more closely with the discrete structure of the TDGL formalism—particularly in its fractal extension.
Computationally, the F-GNN converged roughly five times faster and required an order of magnitude fewer effective points than the F-PINN, highlighting its scalability for high-dimensional or multivortex simulations.
The parametric study of the fractality coefficient D revealed that ξeff increases with D, λeff decreases up to D ≈ 0.4 before slightly rising, and Bpeak exhibits a shallow minimum around D ≈ 0.2, while flux quantization remains invariant.
Thus, D acts as a geometric modulator that redistributes field structure while preserving superconducting topology.
Conceptually, this work bridges three domains—Scale Relativity, non-differentiable quantum hydrodynamics, and geometric deep learning—within a unified computational framework.
By interpreting graph edges as discrete realizations of the fractal covariant derivative, we establish a physical correspondence between the geometry of the learning network and the fractal geometry of space–time itself.
This duality opens avenues for graph-based solvers of other non-differentiable PDEs, such as fractional Schrödinger, Lévy diffusion, or fractal-fluid equations.
Future directions include:
(i)
Temporal rollout for full vortex dynamics;
(ii)
Data assimilation and inverse modeling to infer D and pinning landscapes directly from experiments;
(iii)
Hybrid F-PINN/F-GNN architectures combining global regularization with local adaptability.
Experimental relevance. The predictive features of the F-GNN—sharper vortex cores, flux quantization, and realistic D -dependent field broadening—correspond closely to magneto-optical and scanning Hall-probe observations in type-II superconductors such as NbSe2 and YBa2Cu3O7−δ.
By fitting model outputs to such data, the fractal diffusion coefficient D could be quantitatively inferred, enabling data-driven characterization of fractal superconductivity.
Hence, beyond computational efficiency, the proposed framework offers a concrete path toward experimentally validated, scale-covariant digital twins of superconducting materials.
Limitations. While the present work demonstrates that fractality-aware PINN and GNN architectures can accurately reconstruct FTDGL fields, several limitations remain. First, our analysis focuses on the steady-state regime at a fixed time slice, and does not yet incorporate full temporal rollout of vortex dynamics. Second, the F-PINN relies on global collocation and may under-resolve sharp vortex cores unless a large number of training points is used. Third, the F-GNN enforces the fractal TDGL physics only in a weak, message-passing sense and has not yet been extended to irregular experimental geometries or to fully three-dimensional configurations. Finally, the fractal TDGL model itself—while physically motivated—remains to be directly validated against experiments for a quantitative determination of the fractality parameter D . These aspects represent natural directions for future investigation.
Comparison. Across all benchmarks, the F-GNN surpasses the F-PINN and classical TDGL PINN solvers. The F-GNN reduces the relative L2 error on ψ 2 by a factor of approximately four and on Bz by a factor of two, while improving vortex-core localization from 20–25 pixels (F-PINN) to below one pixel. Training time is reduced by roughly fivefold. Classical non-fractal PINNs and standard supervised GNNs remain consistently less accurate, confirming that fractality-aware message passing provides the most suitable inductive bias for non-differentiable TDGL physics.
Broader significance and inductive bias. Beyond the specific case of fractal superconductivity, the present results highlight a universal property of graph-based solvers: the local, neighborhood-driven inductive bias of message passing naturally aligns with physical systems whose governing equations exhibit non-differentiable structure, multiscale curvature, or gauge-coupled interactions. The ability of the F-GNN to reconstruct discrete covariant derivatives, respect topological constraints, and maintain accuracy under increasing fractality suggests that similar architectures are well suited to a wider class of non-smooth PDEs, including fractional Schrödinger equations, fractal-fluid models, and anomalous (Lévy-type) diffusion. Thus, the approach developed here provides a general blueprint for learning-based solvers of non-differentiable physics beyond superconductivity.

Author Contributions

Conceptualization, C.G.B. and F.N.; methodology, C.G.B. and M.A.; software, C.G.B.; validation, F.N., D.M. and D.V.; formal analysis, M.A.; investigation, C.G.B. and D.V.; resources, F.N.; data curation, D.M.; writing—original draft preparation, C.G.B.; writing—review and editing, F.N. and M.A.; visualization, D.M.; supervision, M.A.; project administration, F.N.; funding acquisition, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All source code and datasets used in this study are available in the Google Colab notebook FTDGL_PINN1.ipynb. The notebook contains the complete implementation and data used for analysis and can be accessed upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Algorithmic Implementations

The following pseudocode summaries provide transparent implementation details for the two learning frameworks used in this study: the Fractal Physics-Informed Neural Network (F-PINN) and the Fractal Graph Neural Network (F-GNN).
They correspond directly to the formulations discussed in Section 3 and were implemented in PyTorch (version 2.8, CUDA 12.6).
Both algorithms integrate fractal dynamics into the neural architecture through the Scale Relativity covariant derivative and are trained using hybrid physics–supervised losses.
Algorithm A1. Training loop for the Fractal Physics-Informed Neural Network (F-PINN)
(corresponds to Section 3.2 and Section 3.3)

Objective:
Learn the complex order parameter ψ = ψ_r + i ψ_i and vector potential components (A_x, A_y) by minimizing a hybrid loss composed of the supervised term (finite-difference “teacher” data) and the physics residuals of the fractal time-dependent Ginzburg–Landau (FTDGL) equations.
Pseudocode:
Input:
     Teacher dataset D_sup = {(x_j, y_j, t_j), ψ^FD(j), A^FD(j)}
     Physics collocation sampler D_phys = {(x_i, y_i, t_i)}
     Hyperparameters: λ_sup, λ_phys, learning rate η,
                                   k_GL, fractal parameter P_frac, gauge penalty w_gauge
Initialize:
     Neural network f_θ: (x, y, t) → (ψ_r, ψ_i, A_x, A_y)
     Optimizer (Adam, then LBFGS for refinement)
     Scheduler for λ_phys warm-up
For epoch = 1 to N_epochs do:
     1. Sample collocation points X_phys from domain
          Optionally append hard points with largest residuals (top-K mining)

     2. Compute predictions:
                 (ψ_r, ψ_i, A_x, A_y) ← f_θ(X_phys)
                  ψ ←ψ_r + i ψ_i

     3. Compute automatic derivatives:
                  ψ_t, ∇ψ, ∇2ψ, ∇·A, etc., via automatic differentiation

     4. Construct FTDGL physics residuals:
                  cov_ψ = ∇2ψ − 2i (A·∇ψ) − |A|2 ψ
                  fract = −i P_frac ∇2ψ
                  Rψ = ψ_t − [ cov_ψ + ψ − |ψ|2 ψ + fract ]
                   R_A = ∂_t A − [(∇S − A)|ψ|2 + k_GL2 ∇2A]
                  gauge = ∇·A

                   L_phys = ⟨|Rψ|2 + |R_A|2 + w_gauge·|gauge|2⟩

     5. Supervised loss on teacher data:
                 L_sup = MSE(ψ_pred, ψ_FD) + MSE(A_pred, A_FD)

     6. Combine total loss:
                  L_total = λ_sup·L_sup + λ_phys·L_phys

     7. Backpropagation and parameter update:
                 optimizer.zero_grad()
                 L_total.backward()
                 optimizer.step()

     8. Optional LBFGS refinement every few hundred epochs.

Return optimized parameters θ
Remarks:
  • The F-PINN enforces the fractal derivative through automatic differentiation, enabling implicit learning of non-differentiable structures.
  • The warm-up of λ p h y s prevents premature collapse of gradients.
  • Top-K residual sampling improves convergence near vortex cores.
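For concreteness, the following PyTorch sketch shows how the spatial part of the residual in step 4 (cov_ψ and the fractal term) can be assembled with automatic differentiation. It assumes a network `model` mapping (x, y, t) to (ψ_r, ψ_i, A_x, A_y); the helper names, output ordering, and the omission of the ψ_t and R_A contributions are illustrative simplifications, not the exact implementation of Algorithm A1.

```python
import torch

def laplacian(f, coords):
    """Scalar Laplacian of a batched field f(x, y) via two autograd passes."""
    grads = torch.autograd.grad(f.sum(), coords, create_graph=True)[0]
    lap = 0.0
    for k in range(2):  # x and y components
        lap = lap + torch.autograd.grad(grads[:, k].sum(), coords,
                                        create_graph=True)[0][:, k]
    return lap

def ftdgl_spatial_residual(model, coords_xy, t, p_frac):
    """Static (time-term-free) FTDGL residual for psi, split into real/imag parts."""
    coords_xy = coords_xy.requires_grad_(True)
    inp = torch.cat([coords_xy, t], dim=1)              # t has shape (N, 1)
    psi_r, psi_i, ax, ay = model(inp).unbind(dim=1)

    lap_r, lap_i = laplacian(psi_r, coords_xy), laplacian(psi_i, coords_xy)
    g_r = torch.autograd.grad(psi_r.sum(), coords_xy, create_graph=True)[0]
    g_i = torch.autograd.grad(psi_i.sum(), coords_xy, create_graph=True)[0]

    a_dot_gr = ax * g_r[:, 0] + ay * g_r[:, 1]           # A . grad(psi_r)
    a_dot_gi = ax * g_i[:, 0] + ay * g_i[:, 1]           # A . grad(psi_i)
    a2 = ax ** 2 + ay ** 2
    dens = psi_r ** 2 + psi_i ** 2

    # cov_psi = lap(psi) - 2i (A.grad psi) - |A|^2 psi, written component-wise
    cov_r = lap_r + 2.0 * a_dot_gi - a2 * psi_r
    cov_i = lap_i - 2.0 * a_dot_gr - a2 * psi_i
    # fractal term: -i * P_frac * lap(psi)
    frac_r, frac_i = p_frac * lap_i, -p_frac * lap_r
    # residual of cov_psi + psi - |psi|^2 psi + fract (psi_t omitted in this sketch)
    res_r = cov_r + psi_r - dens * psi_r + frac_r
    res_i = cov_i + psi_i - dens * psi_i + frac_i
    return res_r, res_i
```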
Algorithm A2. Training loop for the Fractal Graph Neural Network (F-GNN)
(corresponds to Section 3.4)
Objective:
Learn node-wise superconducting fields on a spatial graph using local message passing to approximate discrete gauge-covariant operators.
Each node represents a grid point, and edges encode local neighbor relations and spatial displacements.
Pseudocode:
Input:
       Graph G = (V, E) from spatial grid
       Node coordinates X_v = (x_v, y_v, t = 0)
       Teacher values Y_v = {ψ^FD_v, A^FD_v}
       Edge attributes = [Δx, Δy, |Δ|, stencil weights]
       Hyperparameters: λ_sup, λ_phys, k_GL, P_frac, w_gauge

Initialize:
       Node encoder φ_enc, message function φ_m, update function φ_u,
       readout φ_out
       Optimizer (Adam), with optional scheduler

For epoch = 1 to N_epochs do:
       For batch B ⊂ V:
             1. Node initialization:
                         h_v^(0) = φ_enc(X_v)

             2. Message passing (L layers):
                         For ℓ = 1 to L:
                              For each edge (u,v) ∈ E:
                                       m_{u → v} = φ_m(h_u, h_v, edge_attr_{u,v})
                             Aggregate M_v = Σ_{u∈N(v)} m_{u → v}
                             h_v = φ_u(h_v, M_v)

             3. Readout node outputs:
                         (ψ_r, ψ_i, A_x, A_y) = φ_out(h_v)
                         ψ = ψ_r + i ψ_i

             4. Compute graph-based derivatives (stencil Laplacians):
                          ∇ψ, ∇2ψ, ∇·A computed from edge differences and weights

             5. Weak physics residuals (no time term at t = 0):
                         cov_ψ = ∇2ψ − 2i (A·∇ψ) − |A|2 ψ
                         fract = −i P_frac ∇2ψ
                         Rψ = cov_ψ + ψ − |ψ|2 ψ + fract
                          R_A = (∇S − A)|ψ|2 + k_GL2 ∇2A
                         gauge = ∇·A

                          L_phys = ⟨|Rψ|2 + |R_A|2 + w_gauge·|gauge|2⟩

             6. Supervised node loss:
                         L_sup = MSE(ψ_pred, ψ_FD) + MSE(A_pred, A_FD)

             7. Combined objective and update:
                         L_total = λ_sup·L_sup + λ_phys·L_phys
                         optimizer.zero_grad()
                         L_total.backward()
                         optimizer.step()

Return trained parameters θ
Remarks:
  • The message-passing layers act as discrete local convolutions enforcing neighbor consistency.
  • The physics term is “weak,” focusing on static spatial consistency rather than time evolution.
  • Ablation studies (Section 4.4) confirm the importance of 4-neighbor connectivity and minimal λ_phys for sharp vortex recovery.
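To illustrate how the graph-based derivatives of step 4 work in practice, the short PyTorch sketch below builds a discrete Laplacian from edge differences aggregated per node. It is a minimal illustration of the stencil idea, not the learned message functions φ_m, φ_u of the F-GNN; the edge layout and unit weights are assumptions for the toy example.

```python
import torch

def graph_laplacian(f, edge_index, edge_weight):
    """Discrete Laplacian of a node field f from weighted edge differences.

    edge_index : (2, E) tensor of (source u, target v) pairs on the grid graph
    edge_weight: (E,) stencil weights, e.g. 1/h^2 for 4-neighbor connectivity
    """
    src, dst = edge_index
    diff = (f[src] - f[dst]) * edge_weight   # per-edge "messages"
    lap = torch.zeros_like(f)
    lap.index_add_(0, dst, diff)             # aggregate messages at each node
    return lap

# Tiny usage example on a 4-node periodic ring with unit spacing:
f = torch.tensor([0.0, 1.0, 0.0, -1.0])
edges = torch.tensor([[1, 3, 2, 0, 3, 1, 0, 2],
                      [0, 0, 1, 1, 2, 2, 3, 3]])   # both neighbors of each node
w = torch.ones(edges.shape[1])
print(graph_laplacian(f, edges, w))  # tensor([ 0., -2.,  0.,  2.]), the second difference
```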
Summary
Both algorithms implement the same underlying fractal physics principles through complementary computational paradigms: F-PINN enforces global differentiability constraints via continuous operators, whereas F-GNN employs local discrete stencils that naturally align with the non-differentiable fractal geometry of the FTDGL model.

Appendix B. Model Architectures, Training Hyperparameters and Numerical Setup

This appendix summarizes the numerical and architectural configurations used in all experiments reported in Section 4.
Both the Fractal Physics-Informed Neural Network (F-PINN) and the Fractal Graph Neural Network (F-GNN) were implemented in PyTorch 2.8 using CUDA 12.6 acceleration.
Training was performed on a single NVIDIA A100 (80 GB) GPU with mixed-precision enabled (float16/float32).
All hyperparameters were optimized empirically to balance stability, accuracy, and runtime.

Appendix B.1. F-PINN Architecture and Parameters

Table A1. Architecture and hyperparameter configuration of the Fractal Physics-Informed Neural Network (F-PINN).
Component | Specification
Input features | Spatial–temporal coordinates (x, y, t)
Output fields | ψ_r, ψ_i, A_x, A_y
Network type | Fully connected feed-forward (MLP)
Hidden layers | 4
Neurons per layer | 128
Activation | tanh
Initialization | Xavier uniform
Fractal derivative | Implemented via automatic differentiation; Scale-Relativity covariant form d̂/dt = ∂_t + V·∇ − iD∇²
Physics loss weight (λ_phys) | Linearly annealed 0 → 1 over 40% of epochs
Supervised loss weight (λ_sup) | Annealed 2 → 1 over 40% of epochs
Optimizers | Adam (η = 1 × 10−3, β1 = 0.9, β2 = 0.999) → L-BFGS refinement
Batch size | 4096 physics + 6000 supervised points per epoch
Regularization | Coulomb-gauge penalty w_gauge = 1.0
Training epochs | 3000
Runtime | ≈2.8 h (on A100)
Notes: The fractal diffusion coefficient D = P_frac·D0 was set to 0.4 × ℏ/(2m*). Gradient-path depth was reduced by sharing weights between paired layers (1–2, 3–4) to stabilize back-propagation through ∇2 terms.

Appendix B.2. F-GNN Architecture and Parameters

Table A2. Architecture and training setup of the Fractal Graph Neural Network (F-GNN).
Component | Specification
Graph construction | Regular (N_x × N_y) lattice, 4- or 8-neighbor connectivity
Node features | Position (x, y), teacher ψ_r, ψ_i (if available)
Edge attributes | Δx, Δy, |Δ| (per Algorithm A2)
Encoder dimension | 64
Message-passing layers (L) | 3
Hidden dimension | 64
Update function | GRU-style gated aggregation
Activation | LeakyReLU (α = 0.1)
Readout | Linear head → (ψ_r, ψ_i, A_x, A_y)
Physics-loss weight (λ_phys) | 3 × 10−3 (default), tested {0, 3 × 10−3}
Supervised-loss weight (λ_sup) | 1.0
Optimizer | Adam (η = 5 × 10−4)
Scheduler | Cosine decay (min η = 1 × 10−5)
Batch size | 1 graph (~10⁴ nodes)
Training epochs | 1200
Runtime | ≈35 min (on A100)
Notes: Edge messages encode local discrete Laplacians and current-continuity operators. The physics loss enforces weak fractal-TDGL consistency rather than full time-dependent evolution, leading to faster convergence. Ablation studies in Section 4.4 confirmed the 4-neighbor, unregularized configuration yields optimal vortex localization.

Appendix B.3. Numerical and Physical Parameters

Table A3. Numerical and physical parameters used across simulations and model training.
Quantity | Symbol | Value/Range | Description
Grid size | N_x × N_y | 56 × 56 | Spatial discretization for FD teacher and GNN
Ginzburg–Landau parameter | k_GL | 0.9 | Dimensionless coupling
Fractal diffusion ratio | P_frac | 0.4 | Relative to D0 = ℏ/(2m*)
Temperature parameter | α(T) | −0.5 → 0 | Controls phase transition
Noise levels (test) | σ | 0–5% | Gaussian noise added to teacher data
Boundary conditions | – | Periodic | All experiments

Appendix B.4. Reproducibility

All source code and datasets used for this study are available in the Google Colab notebook (FTDGL_PINN1.ipynb).
Random seeds were fixed (torch.manual_seed(0–2)) for reproducibility.
Each reported metric represents the mean ± standard deviation across three independent runs.
For each of the three runs used to compute the 95% confidence intervals, the random seeds control all stochastic components of the workflow: neural network initialization, batching order, and collocation-point sampling for the F-PINN, as well as message-passing weight initialization for the F-GNN.
Training sets are re-sampled independently for each seed by drawing fresh collocation points (for F-PINN) and regenerating shuffled node/edge orderings (for F-GNN). This ensures that the reported variability reflects genuine training uncertainty rather than repeated evaluation on an identical data partition.
The 95% confidence intervals reported in Table 1 are computed as
mean ± 1.96 σ/√3
using the empirical standard deviation over the three independent, fully re-randomized training runs.
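As a concrete illustration, the snippet below implements this interval directly; the three error values are placeholders rather than the actual per-seed results, and the use of the sample standard deviation (ddof = 1) is an assumption.

```python
import numpy as np

def ci95(values):
    """Mean and 95% CI half-width, mean ± 1.96*std/sqrt(n), over n seed runs."""
    v = np.asarray(values, dtype=float)
    half = 1.96 * v.std(ddof=1) / np.sqrt(v.size)   # sample std assumed
    return v.mean(), half

# e.g. relative L2(|psi|^2) errors from three seeds (placeholder numbers)
mean, half = ci95([0.046, 0.033, 0.059])
print(f"{mean:.3f} ± {half:.3f}")
```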

Appendix B.5. Finite-Difference Teacher: Numerical Scheme and Precision

The finite-difference (FD) “teacher” used to generate reference solutions throughout this work employs a second-order central stencil for all spatial derivatives, together with a semi-implicit Crank–Nicolson scheme for evolving the TDGL equations. Nonlinear coupling terms are relaxed iteratively each step until convergence.
Convergence is enforced through strict numerical tolerances:
max|ψⁿ⁺¹ − ψⁿ| < 10−6 and max|Aⁿ⁺¹ − Aⁿ| < 10−7
in normalized GL units.
Gauge fixing is performed at every iteration by projecting the vector potential onto the Coulomb gauge:
∇·A = 0
with the residual divergence reduced below 10−6.
As reported in Section 4.4, a grid-refinement study using 48 × 48, 64 × 64, and 96 × 96 meshes confirms the numerical precision of the FD solver:
48 → 96: L2(|ψ|2) = 4.135 × 10−2 and L2(Bz) = 1.918 × 10−1;
64 → 96: L2(|ψ|2) = 1.981 × 10−2 and L2(Bz) = 1.106 × 10−1.
These small variations across resolutions demonstrate stable convergence of the FD reference and justify treating it as a numerically reliable “ground truth” for supervising the F-PINN and F-GNN models.
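The Coulomb-gauge projection step can be made concrete as follows. The paper specifies projection onto ∇·A = 0 at every iteration but not the method; one common choice on a periodic grid (Table A3) is a Fourier-space Helmholtz projection, sketched below under that assumption, with illustrative function and variable names.

```python
import numpy as np

def project_coulomb_gauge(ax, ay, dx=1.0, dy=1.0):
    """Remove the longitudinal (curl-free) part of (Ax, Ay) on a periodic grid,
    leaving the divergence-free, Coulomb-gauge component."""
    kx = 2 * np.pi * np.fft.fftfreq(ax.shape[1], d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ax.shape[0], d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                       # avoid division by zero; zero mode left untouched
    axh, ayh = np.fft.fft2(ax), np.fft.fft2(ay)
    div_h = KX * axh + KY * ayh          # k · A_hat (common factor i cancels)
    axh -= KX * div_h / k2               # subtract longitudinal component
    ayh -= KY * div_h / k2
    return np.fft.ifft2(axh).real, np.fft.ifft2(ayh).real
```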

Appendix B.6. Hyperparameter Sensitivity Analysis

To assess the robustness of our conclusions with respect to architectural and optimization choices, we performed a hyperparameter sensitivity study for both the F-PINN and F-GNN models.
For the F-PINN, we varied
(i)
The number of hidden layers LPINN ∈ {3, 4, 6};
(ii)
The hidden width Nh ∈ {64, 128, 256};
(iii)
The learning rate η ∈ {5 × 10−4, 10−3, 3 × 10−3};
(iv)
The activation function (tanh, ReLU, SiLU).
The resulting relative errors on the benchmark vortex configuration satisfy
  • Width sweep: L2(|ψ|2) ∈ [1.62 × 10−1, 2.54 × 10−1];
  • Depth sweep: L2(|ψ|2) ∈ [1.62 × 10−1, 1.75 × 10−1];
  • Learning-rate sweep: L2(|ψ|2) ∈ [1.62 × 10−1, 2.47 × 10−1];
  • Activation sweep: L2(|ψ|2) ∈ [1.62 × 10−1, 1.69 × 10−1].
For the F-GNN, we varied
(i)
The hidden size hidden ∈ {96, 192, 256};
(ii)
The number of message-passing layers LGNN ∈ {3, 5, 7};
(iii)
The learning rate η ∈ {5 × 10−4, 10−3, 3 × 10−3}.
The corresponding ranges are
  • Hidden-size sweep: L2(|ψ|2) ∈ [4.24 × 10−2, 8.12 × 10−2];
  • Layer-count sweep: L2(|ψ|2) ∈ [4.26 × 10−2, 5.98 × 10−2];
  • Learning-rate sweep: L2(|ψ|2) ∈ [4.04 × 10−2, 8.22 × 10−2].
Across all tested configurations, the qualitative model ranking remains unchanged: even in its worst hyperparameter setting, the F-GNN achieves a lower L 2 ψ 2 error than the best-performing F-PINN configuration, typically by a factor of ∼2–4. While extreme choices (very large width or aggressive learning rate) can degrade either model’s accuracy, the overall conclusions of Section 4—namely, the superior reconstruction quality and robustness of the F-GNN on fractal TDGL benchmarks—are stable under reasonable variations in depth, width, learning rate, and activation function.
Table A4. Hyperparameter sensitivity of F-PINN and F-GNN: relative L2(|ψ|2) errors measured against the finite-difference (FD) teacher.
Model | Hyperparameter | Values Tested | L2(|ψ|2) Range
F-PINN | Width N_h | 64, 128, 256 | 1.62 × 10−1–2.54 × 10−1
F-PINN | Depth | 3, 4, 6 | 1.62 × 10−1–1.75 × 10−1
F-PINN | Learning rate η | 5 × 10−4, 10−3, 3 × 10−3 | 1.62 × 10−1–2.47 × 10−1
F-PINN | Activation | tanh, ReLU, SiLU | 1.62 × 10−1–1.69 × 10−1
F-GNN | Hidden size | 96, 192, 256 | 4.24 × 10−2–8.12 × 10−2
F-GNN | Layers | 3, 5, 7 | 4.26 × 10−2–5.98 × 10−2
F-GNN | Learning rate η | 5 × 10−4, 10−3, 3 × 10−3 | 4.04 × 10−2–8.22 × 10−2

References

  1. Ginzburg, V.L.; Landau, L.D. On the Theory of Superconductivity. In On Superconductivity and Superfluidity: A Scientific Autobiography; Springer: Berlin/Heidelberg, Germany, 1950; Volume 20, pp. 1064–1082. [Google Scholar]
  2. Tinkham, M. Introduction to Superconductivity, 2nd ed.; McGraw–Hill: New York, NY, USA, 1996. [Google Scholar]
  3. Burns, G. High-Temperature Superconductivity: An Introduction; Academic Press: San Diego, CA, USA, 1992. [Google Scholar]
  4. Schmid, A. A time dependent Ginzburg-Landau equation and its application to the problem of resistivity in the mixed state. Phys. Kondens. Mater. 1966, 5, 302–317. [Google Scholar] [CrossRef]
  5. Gor’kov, L.P.; Eliashberg, G.M. Generalization of the Ginzburg–Landau Equations for Nonstationary Problems in the Case of Alloys with Paramagnetic Impurities. Sov. Phys. JETP 1968, 27, 328–334. [Google Scholar]
  6. Kopnin, N. Theory of Nonequilibrium Superconductivity; Oxford University Press: Oxford, UK, 2001. [Google Scholar]
  7. Cyrot, M. Ginzburg–Landau Theory for Superconductors. Rep. Prog. Phys. 1973, 36, 103–158. [Google Scholar] [CrossRef]
  8. Buckel, W.; Kleiner, R. Superconductivity: Fundamentals and Applications, 2nd ed.; Wiley–VCH: Weinheim, Germany, 2004. [Google Scholar]
  9. Safar, H.; Worthington, T.K.; Gammel, P.L.; Huse, D.A.; Bishop, D.J.; Rice, J.P.; Ginsberg, D.M. Experimental Evidence for a First-Order Vortex-Lattice Melting Transition in YBa2Cu3O7. Phys. Rev. Lett. 1992, 69, 824–827. [Google Scholar] [CrossRef] [PubMed]
  10. Blatter, G.; Feigel’man, M.V.; Geshkenbein, V.B.; Larkin, A.I.; Vinokur, V.M. Vortices in High-Temperature Superconductors. Rev. Mod. Phys. 1994, 66, 1125–1388. [Google Scholar] [CrossRef]
  11. Ustinov, A.V. Solitons in Josephson Junctions. Phys. D Nonlinear Phenom. 1998, 123, 315–329. [Google Scholar] [CrossRef]
  12. Nottale, L. Fractal Space–Time and Microphysics: Towards a Theory of Scale Relativity; World Scientific: Singapore, 1993; pp. 283–307. [Google Scholar]
  13. Nottale, L. Scale Relativity and Quantization of the Universe. Astron. Astrophys. 1997, 327, 867–889. [Google Scholar]
  14. Nottale, L. Scale Relativity and Fractal Space-Time: Theory and Applications. Found. Sci. 2010, 15, 101–152. [Google Scholar] [CrossRef]
  15. Dobreci, L.; Saviuc, A.; Petrescu, T.C.; Paun, M.A.; Frasila, M.; Nedeff, F.; Paun, V.A.; Dumitrascu, C.; Paun, V.P.; Agop, M. Towards interactions through differentiable-non-differentiable scale transitions in scale relativity theory. Sci. Bull. Ser. A Appl. Math. Phys. Politeh. Univ. Buch. 2021, 83, 239–252. [Google Scholar]
  16. Buzea, C.G.; Rusu, I.; Bulancea, V.; Bădărău, G.; Păun, V.P.; Agop, M. The time dependent Ginzburg–Landau equation in fractal space–time. Phys. Lett. A 2010, 374, 2757–2765. [Google Scholar] [CrossRef]
  17. Rezlescu, N.; Agop, M.; Buzea, C.; Buzea, C.G. Perturbative solutions of the Ginzburg-Landau equation and the superconducting parameters. Phys. Rev. B 1996, 53, 2229–2232. [Google Scholar] [CrossRef]
  18. Agop, M.; Buzea, C.G.; Nica, P. The Cantorian structure of the background magnetic field and high temperature superconductors. Chaos Solitons Fractals 2000, 11, 2561–2569. [Google Scholar] [CrossRef]
  19. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
  20. Lu, L.; Meng, X.; Mao, Z.; Karniadakis, G.E. DeepXDE: A Deep Learning Library for Solving Differential Equations. SIAM Rev. 2021, 63, 208–228. [Google Scholar] [CrossRef]
  21. Cuomo, S.; Di Cola, V.S.; Giampaolo, F.; Rozza, G.; Raissi, M.; Piccialli, F. Scientific Machine Learning through Physics-Informed Neural Networks: Where We Are and What’s Next. J. Sci. Comput. 2022, 92, 88. [Google Scholar] [CrossRef]
  22. Li, Z.; Kovachki, N.B.; Azizzadenesheli, K.; Liu, B.; Bhattacharya, K.; Stuart, A.; Anandkumar, A. Fourier Neural Operator for Parametric Partial Differential Equations. arXiv 2020, arXiv:2010.08895. [Google Scholar]
  23. Han, J.; Jentzen, A.; E, W. Solving High-Dimensional Partial Differential Equations Using Deep Learning. Proc. Natl. Acad. Sci. USA 2018, 115, 8505–8510. [Google Scholar] [CrossRef]
  24. Kovachki, N.; Li, Z.; Liu, B.; Azizzadenesheli, K.; Bhattacharya, K.; Stuart, A.; Anandkumar, A. Neural Operator: Learning Maps Between Function Spaces with Applications to PDEs. J. Mach. Learn. Res. 2023, 24, 1−97. [Google Scholar]
  25. Wang, H.; Teng, Y.; Perdikaris, P. Understanding and Mitigating Gradient Flow Pathologies in Physics-Informed Neural Networks. SIAM J. Sci. Comput. 2021, 43, A3055–A3081. [Google Scholar] [CrossRef]
  26. Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. In Proceedings of the 5th International Conference on Learning Representations (ICLR), Toulon, France, 24–26 April 2017; Available online: https://arxiv.org/abs/1609.02907 (accessed on 11 November 2025).
  27. Battaglia, P.W.; Hamrick, J.B.; Bapst, V.; Sanchez-Gonzalez, A.; Zambaldi, V.; Malinowski, M.; Tacchetti, A.; Raposo, D.; Santoro, A.; Faulkner, R.; et al. Relational Inductive Biases, Deep Learning, and Graph Networks. arXiv 2018, arXiv:1806.01261. [Google Scholar] [CrossRef]
  28. Sanchez-Gonzalez, A.; Godwin, J.; Pfaff, T.; Ying, R.; Leskovec, J.; Battaglia, P.W. Learning to Simulate Complex Physics with Graph Networks. In Proceedings of the 37th International Conference on Machine Learning, Virtual, 12–18 July 2020. [Google Scholar]
  29. Pfaff, T.; Fortunato, M.; Sanchez-Gonzalez, A.; Battaglia, P.W. Learning Mesh-Based Simulation with Graph Networks. In Proceedings of the 9th International Conference on Learning Representations (ICLR), Virtual, 3–7 May 2021; Available online: https://arxiv.org/abs/2010.03409 (accessed on 18 November 2025).
  30. Jiang, L.; Wang, L.; Chu, X.; Xiao, Y.; Zhang, H. PhyGNNet: Solving spatiotemporal PDEs with Physics-informed Graph Neural Network. arXiv 2023, arXiv:2208.04319. [Google Scholar] [CrossRef]
  31. Brandstetter, J.; Worrall, D.; Welling, M. Message Passing Neural PDE Solvers. In Proceedings of the 10th International Conference on Learning Representations (ICLR), Virtual Conference, 25–29 April 2022; Available online: https://arxiv.org/abs/2202.03376 (accessed on 18 November 2025).
  32. Jooss, C.; Albrecht, J.; Kuhn, H.; Leonhardt, S.; Kronmüller, H. Magneto-Optical Studies of Vortex Matter in High-T(c) Superconductors. Rep. Prog. Phys. 2002, 65, 651–788. [Google Scholar] [CrossRef]
  33. Vlasko-Vlasov, V.K.; Welp, U.; Crabtree, G.W.; Gunter, D.; Kabanov, V.; Nikitenko, V.I. Meissner holes in superconductors. Phys. Rev. B 1997, 56, 5622–5630. [Google Scholar] [CrossRef]
  34. Leiderer, P.; Boneberg, J.; Brull, P.; Bujok, V.; Herminghaus, S. Nucleation and growth of a flux instability in superconducting YBa2Cu3O7−x films. Phys. Rev. Lett. 1993, 71, 2646–2649. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Overview of the hybrid learning framework for the fractal TDGL model. The finite-difference (FD) solver generates reference data used by both the Fractal Physics-Informed Neural Network (F-PINN) and the Fractal Graph Neural Network (F-GNN). The F-PINN enforces the fractal TDGL equations through automatic differentiation on continuous coordinates, whereas the F-GNN operates on a discrete spatial graph using message-passing layers to model gauge-covariant interactions.
Figure 2. Training convergence of the F-PINN and F-GNN models. (Top) Total loss (log scale) showing that F-GNN stabilizes after ≈500 epochs versus ≈2500 for F-PINN. (Middle) Physics-loss term illustrating the activation of the weak fractal-TDGL regularizer near epoch 400. (Bottom) Supervised data-fit loss exhibiting exponential decay and a lower final residual for the F-GNN. Together, these curves demonstrate the faster and more stable convergence of the graph-based model.
Figure 3. Comparison between finite-difference (FD), F-PINN, and F-GNN predictions for the fractal TDGL system at t = 0. (a–c) Order-parameter magnitude |ψ|2 for FD, F-PINN, and F-GNN. (d–f) Magnetic-field component Bz. (g–i) Difference maps showing residuals between the models and FD. Relative L2 errors are indicated. The F-GNN reproduces sharper vortex cores and magnetic peaks, while the F-PINN displays mild amplitude smoothing.
Figure 4. Mean relative L2 errors of F-PINN and F-GNN across three random seeds with 95% confidence intervals. The F-GNN exhibits both lower mean error and narrower confidence bands, confirming its higher robustness and reproducibility.
Figure 5. Vortex-core localization at t = 0. White circles denote vortex cores extracted from the finite-difference (FD) reference, blue crosses mark those identified by the F-PINN, and green triangles correspond to the F-GNN prediction. The F-GNN aligns vortex positions within one pixel of the FD reference, whereas the F-PINN exhibits an average displacement of ≈23 pixels and occasional vortex merging. The sharper alignment in the F-GNN reflects its local gauge-consistent message passing and discrete treatment of the Laplacian operator.
Figure 6. Coherence and penetration versus fractality. Effective coherence length ξeff(D) and penetration depth λeff(D) extracted from radial fits of |ψ|2 and Bz, respectively. ξeff increases with D, while λeff decreases up to D ≈ 0.4 and rises slightly thereafter, consistent with the interplay between phase-curvature damping and magnetic screening driven by iD∇²ψ.
Figure 7. Peak field versus fractality. Normalized peak magnetic field Bpeak(D)/Bpeak(0) as a function of D. Bpeak exhibits a shallow minimum around D ≈ 0.1–0.2, followed by gradual recovery and slight enhancement for larger D, indicating partial suppression of core magnetization at moderate fractality. Flux quantization remains Φ0 for all D.
Table 1. Statistical metrics of model accuracy (three independent seeds).
Model/Metric | Mean | Std | 95% CI Low | 95% CI High
F-PINN ψ2 L2 | 0.190 | 0.003 | 0.184 | 0.197
F-GNN ψ2 L2 | 0.046 | 0.013 | 0.023 | 0.070
F-PINN Bz L2 | 0.620 | 0.024 | 0.566 | 0.675
F-GNN Bz L2 | 0.355 | 0.016 | 0.323 | 0.387
Table 2. Effect of neighborhood connectivity and physics-loss weight on test errors.
Configuration | L2(|ψ|2) | L2(Bz)
4-neighbor (λphys = 0) | 0.0287 | 0.3608
4-neighbor (λphys = 3 × 10−3) | 0.0359 | 0.3516
8-neighbor (λphys = 0) | 0.0354 | 0.4191
8-neighbor (λphys = 3 × 10−3) | 0.0419 | 0.3630
Table 3. Model accuracy across fractality levels. Relative L2 errors (mean ± std, three seeds) of |ψ|2 and Bz for F-PINN and F-GNN against the FD teacher for D ∈ {0, 0.1, 0.2, 0.4, 0.6}. F-GNN outperforms F-PINN at all D, with the advantage increasing as fractality rises.
Model | Metric | D = 0.0 | D = 0.1 | D = 0.2 | D = 0.4 | D = 0.6
F-PINN | L2(|ψ|2) | 1.82 × 10−1 ± 3.4 × 10−3 | 1.95 × 10−1 ± 4.0 × 10−3 | 2.11 × 10−1 ± 3.6 × 10−3 | 2.46 × 10−1 ± 4.1 × 10−3 | 2.89 × 10−1 ± 4.5 × 10−3
F-GNN | L2(|ψ|2) | 5.6 × 10−2 ± 1.2 × 10−2 | 5.8 × 10−2 ± 1.0 × 10−2 | 6.4 × 10−2 ± 1.1 × 10−2 | 7.5 × 10−2 ± 1.4 × 10−2 | 8.8 × 10−2 ± 1.5 × 10−2
F-PINN | L2(Bz) | 6.43 × 10−1 ± 2.3 × 10−2 | 6.61 × 10−1 ± 2.4 × 10−2 | 6.77 × 10−1 ± 2.5 × 10−2 | 7.08 × 10−1 ± 2.6 × 10−2 | 7.34 × 10−1 ± 2.8 × 10−2
F-GNN | L2(Bz) | 2.14 × 10−1 ± 1.4 × 10−2 | 2.32 × 10−1 ± 1.6 × 10−2 | 2.55 × 10−1 ± 1.8 × 10−2 | 2.83 × 10−1 ± 2.0 × 10−2 | 3.17 × 10−1 ± 2.1 × 10−2
