Search Results (84)

Search Parameters:
Keywords = reproducing kernel space

17 pages, 332 KB  
Article
Fibonacci-Weighted Bicomplex Hardy Spaces: Reproducing Kernels, Shift Bounds, and Germ Sheaves
by Ji Eun Kim
Mathematics 2026, 14(6), 936; https://doi.org/10.3390/math14060936 - 10 Mar 2026
Viewed by 103
Abstract
Motivated by the fact that the Fibonacci sequence is the simplest nontrivial second-order recurrence with a rational generating function, we develop a Fibonacci-weighted Hardy theory for bicomplex holomorphic functions. Starting from the coefficient norm $\sum_{n \ge 0} |a_n|^2 / F_{n+1}$, we obtain a bicomplex Hilbert module whose reproducing kernel is governed by $(1 - t - t^2)^{-1}$ and whose maximal disk of holomorphy is determined sharply by the nearest kernel singularity, giving the radius $\rho_F = \varphi^{-1/2}$ (the square-root inverse of the golden ratio $\varphi$). The arithmetic recurrence makes several objects fully explicit: we derive closed formulas for the kernels through the idempotent decomposition of $\mathbb{BC}$, compute exact norms of the shift powers and a golden-ratio spectral radius, and package the local theory into a sheaf of Fibonacci-holomorphic germs that are compatible with the bicomplex idempotent splitting. We also treat $(p,q)$-Fibonacci weights, obtaining a one-parameter family of rational kernels $(1 - pt - qt^2)^{-1}$ and corresponding operator bounds. In addition to providing a concrete bicomplex model within weighted Hardy theory, the resulting explicit kernels furnish benchmark examples for kernel-based interpolation and for the operator theory of unilateral weighted shifts. Full article
(This article belongs to the Section C1: Difference and Differential Equations)
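The key identity behind this entry's kernel — that the Fibonacci weights have the rational generating function $(1 - t - t^2)^{-1}$, singular at $1/\varphi$ — is easy to check numerically. The following standalone sketch (not code from the paper) sums the kernel series at a point inside the disk of convergence:

```python
import math

# Fibonacci numbers F_1, F_2, ... = 1, 1, 2, 3, 5, ...
def fib(n):
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

phi = (1 + math.sqrt(5)) / 2  # golden ratio, nearest singularity at t = 1/phi

# Partial sum of sum_{n>=0} F_{n+1} t^n, which converges to
# 1 / (1 - t - t^2) for |t| < 1/phi ≈ 0.618.
def kernel_series(t, terms=200):
    return sum(fib(n + 1) * t**n for n in range(terms))

t = 0.3  # safely inside the disk of convergence
closed_form = 1.0 / (1.0 - t - t**2)
print(kernel_series(t), closed_form)
```

The radius $\rho_F = \varphi^{-1/2}$ quoted in the abstract follows because the kernel is evaluated at products $z\bar{w}$, so $|z|^2$ must stay below $1/\varphi$.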

22 pages, 1359 KB  
Article
Kernel VICReg for Self-Supervised Learning in Reproducing Kernel Hilbert Space
by M. Hadi Sepanj, Benyamin Ghojogh, Saed Moradi and Paul Fieguth
Big Data Cogn. Comput. 2026, 10(3), 78; https://doi.org/10.3390/bdcc10030078 - 5 Mar 2026
Viewed by 200
Abstract
Self-supervised learning (SSL) has emerged as a powerful paradigm for representation learning by optimizing geometric objectives, such as invariance to augmentations, variance preservation, and feature decorrelation, without requiring labels. However, most existing methods operate in Euclidean space, limiting their ability to capture nonlinear dependencies and geometric structures. In this work, we propose Kernel VICReg, a novel self-supervised learning framework that pulls the VICReg objective into a Reproducing Kernel Hilbert Space (RKHS). By kernelizing each term of the loss, variance, invariance, and covariance, we obtain a general formulation that operates on double-centered kernel matrices and Hilbert–Schmidt norms, enabling nonlinear feature learning without explicit mappings. We demonstrate that Kernel VICReg mitigates the risk of representational collapse under challenging conditions and improves performance on datasets exhibiting nonlinear structure or limited sample regimes. Empirical evaluations across MNIST, CIFAR-10, STL-10, TinyImageNet, and ImageNet100 show consistent gains over Euclidean VICReg, with particularly strong improvements on datasets where nonlinear structures are prominent. UMAP visualizations are provided only as a qualitative illustration of embedding geometry and are not used as a calibration or statistical validation. Our results suggest that kernelizing SSL objectives is a promising direction for bridging classical kernel methods with modern representation learning. Full article
(This article belongs to the Section Artificial Intelligence and Multi-Agent Systems)
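The double-centered kernel matrices and Hilbert–Schmidt norms mentioned in this abstract are standard constructions. A minimal sketch on synthetic two-view data (this is an illustration of the building blocks, not the authors' Kernel VICReg loss):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))                 # one augmented view (synthetic)
Y = X + 0.1 * rng.normal(size=(50, 8))       # second, perturbed view

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def double_center(K):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    return H @ K @ H

Kx = double_center(rbf_kernel(X, X))
Ky = double_center(rbf_kernel(Y, Y))

n = Kx.shape[0]
# HSIC-style statistic: squared Hilbert-Schmidt norm of the empirical
# cross-covariance operator, up to scaling.
hsic = np.trace(Kx @ Ky) / (n - 1) ** 2
print(hsic)
```

Because both centered Gram matrices are positive semidefinite, the statistic is nonnegative and vanishes only when the kernelized features are uncorrelated.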

15 pages, 1351 KB  
Article
An Operator Analysis on Stochastic Differential Equation (SDE)-Based Diffusion Generative Models
by Yunpei Wu and Yoshinobu Kawahara
Entropy 2026, 28(3), 290; https://doi.org/10.3390/e28030290 - 4 Mar 2026
Viewed by 269
Abstract
Score-based generative models, grounded in stochastic differential equations (SDEs), excel in producing high-quality data but suffer from slow sampling due to the extensive nonlinear computations required for iterative score function evaluations. We propose an innovative approach that integrates score-based reverse SDEs with kernel methods, leveraging the derivative reproducing property of reproducing kernel Hilbert spaces (RKHSs) to efficiently approximate the eigenfunctions and eigenvalues of the Fokker–Planck operator. This enables data generation through linear combinations of eigenfunctions, transforming computationally intensive nonlinear operations into efficient linear ones, thereby significantly reducing computational overhead. Notably, our experimental results demonstrate remarkable progress: despite a slight reduction in sample diversity, the sampling time for a single image on the CIFAR-10 dataset is reduced to an impressive 0.29 s, marking a substantial advancement in efficiency. This work introduces novel theoretical and practical tools for generative modeling, establishing a robust foundation for real-time applications. Full article
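The abstract's core move is approximating eigenfunctions of an operator from kernel data. A much simpler cousin of that idea — Nyström approximation of a kernel integral operator's eigenpairs from a sample Gram matrix — can be sketched as follows (a generic illustration, not the paper's Fokker–Planck construction):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-2, 2, size=100))    # sample points

def gauss_kernel(a, b, ell=0.5):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell**2))

K = gauss_kernel(x, x)
n = len(x)
# Nystrom approximation: eigenpairs of the integral operator are
# estimated from the eigendecomposition of K / n; the eigenvector
# columns evaluated at the samples approximate the eigenfunctions.
evals, evecs = np.linalg.eigh(K / n)
evals, evecs = evals[::-1], evecs[:, ::-1]   # descending order
print(evals[:5])
```

Once eigenpairs are in hand, quantities of interest become linear combinations of eigenfunctions, which is the source of the speedup claimed in the abstract.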

20 pages, 483 KB  
Article
Numerical Simulation of the Kudryashov–Sinelshchikov Equation for Modeling Pressure Waves in Liquids with Gas Bubbles
by Gayatri Das, Bibekananda Sitha, Rajesh Kumar Mohapatra, Predrag Stanimirović and Tzung-Pei Hong
Mathematics 2026, 14(4), 710; https://doi.org/10.3390/math14040710 - 17 Feb 2026
Viewed by 251
Abstract
The Kudryashov–Sinelshchikov equation (KSE) is crucial in modeling pressure waves in liquids containing gas bubbles, capturing both nonlinear wave phenomena and dispersion effects. This article applies the reproducing kernel Hilbert space method (RKHSM) to find a numerical solution for the time-fractional KSE. We develop a numerical solution to the KSE using the RKHSM, which offers an efficient and accurate approach for solving nonlinear partial differential equations due to its smoothness and orthogonality properties. The key components of this method include the reproducing kernel (RK) theory, important Hilbert spaces, normal basis, orthogonalization, and homogenization. We construct an appropriate RK and derive an iterative solution that converges rapidly to the exact solution. The effectiveness of this approach is demonstrated through numerical simulations in which we analyze the behavior of pressure waves and compare the results with existing analytical and numerical solutions. The RKHSM consistently demonstrates highly accurate, rapid convergence, and remarkable stability across a wide range of problems. Thus, the RKHSM is a promising tool for studying wave propagation in bubbly liquids. Full article
(This article belongs to the Special Issue Recent Developments in Theoretical and Applied Mathematics)
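One ingredient the abstract names — orthonormalizing a basis of kernel functions in the RKHS inner product — can be demonstrated compactly. The sketch below uses a Gaussian stand-in kernel (the RKHSM actually uses a piecewise-polynomial reproducing kernel of a Sobolev-type space) and orthonormalizes via Cholesky rather than explicit Gram–Schmidt:

```python
import numpy as np

# Collocation nodes and a stand-in reproducing kernel, for illustration only.
nodes = np.linspace(0.0, 1.0, 8)
def k(x, y, ell=0.15):
    return np.exp(-((x - y) ** 2) / (2 * ell**2))

# Gram matrix of the kernel functions psi_i = k(., x_i); in an RKHS,
# <psi_i, psi_j> = k(x_i, x_j) by the reproducing property.
G = np.array([[k(a, b) for b in nodes] for a in nodes])

# Cholesky-based orthonormalization: rows of inv(L) give coefficients
# of an orthonormal basis in terms of the psi_i.
L = np.linalg.cholesky(G)
C = np.linalg.inv(L)
# Check orthonormality in the RKHS inner product: C G C^T should be I.
I_hat = C @ G @ C.T
print(np.round(I_hat, 6))
```

The iterative RKHSM solution is then a truncated expansion in this orthonormal basis, which is why its convergence is controlled by the kernel's smoothness.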

14 pages, 3859 KB  
Article
Compact Analytic Two-Gaussian Representation of Universal Short-Range Coulomb Correlations in Soft-Core Fluids
by Hiroshi Frusawa
Axioms 2026, 15(2), 123; https://doi.org/10.3390/axioms15020123 - 6 Feb 2026
Viewed by 385
Abstract
Soft-core Coulomb fluids, exemplified by the two-dimensional Gaussian-charge one-component plasma, serve as fundamental benchmarks for both mathematical theory and computational modeling of coarse-grained dynamics, including stochastic density functional theory, dynamical density functional theory, and dissipative particle dynamics. In these systems, the conventional mean-field description, or the random phase approximation (RPA), is frequently employed due to its analytic simplicity; however, its validity is restricted to weak coupling regimes. Here we demonstrate that Coulomb correlations induce a structural crossover to a strongly correlated liquid where the nearest-neighbor distance saturates rather than decreasing monotonically, a behavior fundamentally incompatible with mean-field predictions. Central to our analysis is the emergence of a universal scaling law: when rescaled by the coupling constant, the short-range direct correlation function (DCF) collapses onto a single curve across the strong coupling regime. Exploiting this universality, we construct a closed-form analytic representation of the DCF using a two-Gaussian basis. This compact form accurately reproduces hypernetted-chain radial distribution functions and structure factors while ensuring exact compliance with thermodynamic sum rules. Beyond theoretical elegance, the proposed kernel offers a computationally efficient alternative to RPA-based approximations, enabling real-space dynamical methods to incorporate strong correlations without modifying long-range smoothed-charge electrostatics. Its analytic transparency bridges rigorous integral equation theory and practical dynamical kernels, additionally providing a physics-informed prior for emerging machine-learning models. Collectively, these results establish a mathematically rigorous testbed for advancing the modeling of strongly correlated soft matter systems. Full article
(This article belongs to the Section Mathematical Physics)
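The "two-Gaussian basis" representation amounts to a linear least-squares fit of a short-ranged function by two Gaussians of fixed width. A toy sketch with a synthetic target (the widths and the target are assumptions; the real DCF comes from hypernetted-chain data):

```python
import numpy as np

r = np.linspace(0.0, 3.0, 200)
# Synthetic "DCF-like" short-ranged target, for illustration only.
target = -np.exp(-r**2) * (1.0 + 0.5 * r**2)

# Two-Gaussian basis with fixed (assumed) widths; amplitudes by least squares.
s1, s2 = 0.7, 1.3
A = np.stack([np.exp(-(r / s1) ** 2), np.exp(-(r / s2) ** 2)], axis=1)
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
rms = np.sqrt(np.mean((A @ coef - target) ** 2))

# One-Gaussian baseline for comparison.
A1 = A[:, 1:]
coef1, *_ = np.linalg.lstsq(A1, target, rcond=None)
rms1 = np.sqrt(np.mean((A1 @ coef1 - target) ** 2))
print(coef, rms, rms1)
```

A closed form like this is cheap to evaluate inside dynamical solvers, which is the computational advantage the abstract claims over tabulated correlation functions.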

16 pages, 574 KB  
Article
Feasibility-Aware Design-Space Exploration of Transparent Coarse-Grained Reconfigurable Architectures
by Thiago R. B. S. Soares and Ivan S. Silva
Electronics 2026, 15(2), 313; https://doi.org/10.3390/electronics15020313 - 10 Jan 2026
Viewed by 332
Abstract
Coarse-Grained Reconfigurable Architectures (CGRAs) execute compute-intensive kernels on a reconfigurable processing mesh. Transparent CGRAs extend this model by generating configurations at runtime and storing them in a dedicated cache, removing compiler dependence and enabling adaptive behavior. Although prior work has explored mapping strategies and mesh scaling, the feasibility of the configuration cache remains unaddressed, as it is commonly treated as a generic storage block. This paper presents a feasibility study of configuration cache organizations and a design-space exploration of Transparent CGRAs, introducing a parameterized cache geometry model that relates cache parameters to the processing mesh and configuration structure. The model enables realistic estimates of area, latency, and energy at the digital system level and is applied to three Transparent CGRAs from the literature and five additional designs covering a wide range of spatial and temporal organizations. The results show that mesh scaling must be balanced with cache feasibility: wide I/O paths and large configurations lead to impractical caches, whereas well-proportioned meshes achieve competitive performance with modest overheads. Under the proposed exploration, selected expanded meshes outperform a two-issue out-of-order processor by up to 1.4× while increasing area by only 14.8% and energy by 2%. These findings demonstrate that Transparent CGRAs are viable, but their scalability depends on a realistic configuration cache design. The proposed parameterized cache model provides a structured and reproducible basis for analyzing transparency overheads and guiding future CGRA designs. Full article
(This article belongs to the Special Issue Design and Application of Digital Circuit and Systems)
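The "parameterized cache geometry model" idea — deriving cache line width and capacity from mesh dimensions and per-element configuration width — can be illustrated with a toy model. The field names and the linear formulas below are illustrative assumptions, not the paper's actual model:

```python
from dataclasses import dataclass

# Toy model relating processing-mesh size and per-PE configuration width
# to configuration-cache geometry (all formulas are assumptions).
@dataclass
class CacheGeometry:
    rows: int            # processing-mesh rows
    cols: int            # processing-mesh columns
    bits_per_pe: int     # configuration bits per processing element
    n_entries: int       # number of cached configurations

    @property
    def line_bits(self) -> int:
        # One cache line holds a full mesh configuration.
        return self.rows * self.cols * self.bits_per_pe

    @property
    def total_kib(self) -> float:
        return self.n_entries * self.line_bits / 8 / 1024

g = CacheGeometry(rows=8, cols=8, bits_per_pe=48, n_entries=64)
print(g.line_bits, round(g.total_kib, 1))
```

Even this crude model makes the abstract's point visible: line width grows with mesh area, so scaling the mesh without rethinking the cache quickly yields impractically wide storage.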

52 pages, 782 KB  
Article
Single-Stage Causal Incentive Design via Optimal Interventions
by Sebastián Bejos, Eduardo F. Morales, Luis Enrique Sucar and Enrique Munoz de Cote
Entropy 2026, 28(1), 4; https://doi.org/10.3390/e28010004 - 19 Dec 2025
Viewed by 533
Abstract
We introduce Causal Incentive Design (CID), a framework that applies causal inference to canonical single-stage principal–agent problems (PAPs) characterized by bilateral private information. Within CID, the operating rules of PAPs are formalized using an additive-noise causal graphical model (CGM). Incentives are modeled as interventions on a function space variable, Γ, which correspond to policy interventions in the principal–follower causal relation. The causal inference target estimand V(Γ) is defined as the expected value of the principal's utility variable under a specified policy intervention in the post-intervention distribution. In the context of additive-Gaussian independent noise, the estimand V(Γ) decomposes into a two-layer expectation: (i) an inner Gaussian smoothing of the principal's utility regression; and (ii) an outer averaging over the conditional probability of the follower's action given the incentive policy. A Gauss–Hermite quadrature method is employed to efficiently estimate the first layer, while a policy-local kernel reweighting approach is used for the second. For offline selection of a single incentive policy, a Functional Causal Bayesian Optimization (FCBO) algorithm is introduced. This algorithm models the objective functional $\gamma \mapsto V(\gamma)$ using a functional Gaussian process surrogate defined on a Reproducing Kernel Hilbert Space (RKHS) domain and utilizes an Upper Confidence Bound (UCB) acquisition functional. Consequently, the policy value V(γ) becomes an interventional query that can be answered using offline observational data under standard identifiability assumptions. High-probability cumulative-regret bounds are established in terms of differential information gain for the proposed FCBO algorithm. Collectively, these elements constitute the central contributions of the CID framework, which integrates causal inference through identification and estimation with policy search in principal–agent problems under private information.
This approach establishes a causal decision-making pipeline that enables commitment to a high-performing incentive in a single-shot game, supported by regret guarantees. Provided that the data used for estimation are sufficient, the resulting offline pipeline is appropriate for scenarios where adaptive deployment is impractical or costly. Beyond the methodological contribution, this work introduces a novel application of causal graphical models and causal reasoning to incentive design and principal–agent problems, which are central to economics and multi-agent systems. Full article
(This article belongs to the Special Issue Causal Graphical Models and Their Applications)
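The GP-surrogate-plus-UCB loop the abstract describes has a simple backbone. The sketch below works on a scalar policy parameter rather than an RKHS-valued policy (a deliberate simplification; the kernel, data, and β value are all assumptions):

```python
import numpy as np

def k(a, b, ell=0.4):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell**2))

# Candidate policies summarized by a 1-D parameter (simplification).
grid = np.linspace(0, 1, 101)
X = np.array([0.1, 0.5, 0.9])          # policies already evaluated
y = np.array([0.2, 0.8, 0.3])          # observed values V(gamma), synthetic

sigma2 = 1e-6                          # jitter / observation noise
Kxx = k(X, X) + sigma2 * np.eye(len(X))
Kgx = k(grid, X)
alpha = np.linalg.solve(Kxx, y)
mu = Kgx @ alpha                        # GP posterior mean
var = 1.0 - np.einsum('ij,ji->i', Kgx, np.linalg.solve(Kxx, Kgx.T))
beta = 2.0                              # exploration weight (assumed)
ucb = mu + beta * np.sqrt(np.clip(var, 0, None))
best = grid[np.argmax(ucb)]             # next policy to commit to / query
print(best)
```

The UCB acquisition trades off the posterior mean (exploitation) against posterior uncertainty (exploration), which is what the regret bounds in the paper quantify.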

27 pages, 2727 KB  
Article
The Module Gradient Descent Algorithm via L2 Regularization for Wavelet Neural Networks
by Khidir Shaib Mohamed, Ibrahim M. A. Suliman, Abdalilah Alhalangy, Alawia Adam, Muntasir Suhail, Habeeb Ibrahim, Mona A. Mohamed, Sofian A. A. Saad and Yousif Shoaib Mohammed
Axioms 2025, 14(12), 899; https://doi.org/10.3390/axioms14120899 - 4 Dec 2025
Viewed by 784
Abstract
Although wavelet neural networks (WNNs) combine the expressive capability of neural models with multiscale localization, there are currently few theoretical guarantees for their training. We investigate the weight decay (L2 regularization) optimization dynamics of gradient descent (GD) for WNNs. Using explicit rates controlled by the spectrum of the regularized Gram matrix, we first demonstrate global linear convergence to the unique ridge solution for the feature regime when wavelet atoms are fixed and only the linear head is trained. Second, for fully trainable WNNs, we demonstrate linear rates in regions satisfying a Polyak–Łojasiewicz (PL) inequality and establish convergence of GD to stationary locations under standard smoothness and boundedness of wavelet parameters; weight decay enlarges these regions by suppressing flat directions. Third, we characterize the implicit bias in the over-parameterized neural tangent kernel (NTK) regime: GD converges to the minimum reproducing kernel Hilbert space (RKHS) norm interpolant associated with the WNN kernel with L2. In addition to an assessment process on synthetic regression, denoising, and ablations across λ and stepsize, we supplement the theory with useful recommendations on initialization, stepsize schedules, and regularization scales. Together, our findings give a principled prescription for dependable training that has broad applicability to signal processing applications and shed light on when and why L2-regularized GD is stable and quick for WNNs. Full article
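The first regime in this abstract — fixed atoms, trainable linear head — reduces to ridge regression, where gradient descent provably converges to the unique closed-form solution at a linear rate. A self-contained check (generic features standing in for wavelet atoms):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 40, 6
Phi = rng.normal(size=(n, d))   # fixed features (e.g. frozen wavelet atoms)
y = rng.normal(size=n)
lam = 0.1                       # weight-decay strength

# Closed-form ridge solution for the linear head.
w_star = np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ y)

# Gradient descent on 0.5*||Phi w - y||^2 + 0.5*lam*||w||^2.
w = np.zeros(d)
L = np.linalg.eigvalsh(Phi.T @ Phi + lam * np.eye(d)).max()  # smoothness
eta = 1.0 / L
for _ in range(2000):
    grad = Phi.T @ (Phi @ w - y) + lam * w
    w = w - eta * grad
print(np.linalg.norm(w - w_star))
```

The convergence rate is governed by the spectrum of the regularized Gram matrix, exactly the quantity the abstract says controls the explicit rates; weight decay lifts the smallest eigenvalue, which is why it enlarges the well-conditioned regions.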

26 pages, 5845 KB  
Article
Automated 3D Multivariate Domaining of a Mine Tailings Deposit Using a Continuity-Aware Geostatistical–AI Workflow
by Keyumars Anvari and Jörg Benndorf
Minerals 2025, 15(12), 1249; https://doi.org/10.3390/min15121249 - 26 Nov 2025
Viewed by 781
Abstract
Geochemical data from mine tailings are layered, compositional, and noisy, complicating automated domaining. This study introduces a continuity-aware workflow, the Geostatistical k-means Recurrent Neural Network (GkRNN), that links compositional preprocessing and geostatistical continuity to sequence learning, allowing depth order and lateral context to influence final domain labels. The workflow begins with a centered log-ratio (CLR) transform, followed by construction of a spectral embedding derived from kernelized direct and cross variograms. Clustering is carried out in this embedded space, and depth sequences are regularized with a hidden Markov model (HMM) and a long short-term memory (LSTM) network. When applied to a multivariate set of tailing drillholes, stratigraphically coherent zones were obtained, depthwise proportions were stabilized, and vertical as well as lateral semivariograms remained consistent with laminated material. Compared with k-means and Gaussian Mixture baselines, over-segmentation was reduced and the intended layered architecture was recovered in most drillholes. The result is a reproducible domaining workflow that enables clearer grade estimation and more transparent risk evaluation. Full article
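The CLR transform that opens this workflow is a one-liner worth seeing concretely: each compositional part is divided by the sample's geometric mean and logged, mapping the simplex onto a zero-sum hyperplane where Euclidean tools are valid. A minimal sketch on synthetic compositions:

```python
import numpy as np

# Compositional data: rows are samples, columns are parts summing to 1
# (synthetic values for illustration).
X = np.array([
    [0.60, 0.25, 0.10, 0.05],
    [0.40, 0.30, 0.20, 0.10],
])

def clr(comp):
    # Centered log-ratio: log of each part over its sample's geometric
    # mean; each transformed row sums to zero.
    logx = np.log(comp)
    return logx - logx.mean(axis=1, keepdims=True)

Z = clr(X)
print(np.round(Z, 3))
```

Clustering and variography are then performed on the CLR coordinates rather than on the raw closed compositions, avoiding spurious correlations induced by the constant-sum constraint.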

22 pages, 57273 KB  
Article
Adaptive Software-Defined Network Control Using Kernel-Based Reinforcement Learning: An Empirical Study
by Yedil Nurakhov, Abzal Kyzyrkanov, Zhenis Otarbay and Danil Lebedev
Appl. Sci. 2025, 15(23), 12349; https://doi.org/10.3390/app152312349 - 21 Nov 2025
Viewed by 695
Abstract
Software-defined networking (SDN) requires adaptive control strategies to handle dynamic traffic conditions and heterogeneous network environments. Reinforcement learning (RL) has emerged as a promising solution, yet deep RL methods often face instability, non-stationarity, and reproducibility challenges that limit practical deployment. To address these issues, a kernel-based RL framework is introduced, embedding transition dynamics into reproducing kernel Hilbert spaces (RKHS) and combining kernel ridge regression with policy iteration. This approach enables stable value estimation, enhanced sample efficiency, and interpretability, making it suitable for large-scale and evolving SDN scenarios. Experimental evaluation demonstrates consistent convergence and robustness under traffic variability, with cumulative rewards exceeding those of baseline deep RL methods by more than 22%. The findings highlight the potential of kernel-embedded RL as a practical and theoretically grounded solution for adaptive SDN management and contribute to the broader development of intelligent systems in complex environments. Full article
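The "kernel ridge regression with policy iteration" combination rests on a simple primitive: smoothing sampled returns over states with a kernel ridge estimator. A standalone sketch with synthetic state features (the environment, kernel, and targets are assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(4)

# States from a hypothetical network environment and noisy sampled
# returns; kernel ridge regression yields a smooth value estimate.
S = rng.uniform(-1, 1, size=(30, 3))              # state features
v = np.sin(S[:, 0]) + 0.1 * rng.normal(size=30)   # noisy value targets

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 0.1
K = rbf(S, S)
alpha = np.linalg.solve(K + lam * np.eye(len(S)), v)

def value(s):
    # Evaluate the fitted value function at new states.
    return rbf(np.atleast_2d(s), S) @ alpha

print(value(np.zeros(3)))
```

Because the estimate lives in an RKHS, its norm bounds give the stability and interpretability properties the abstract cites as advantages over deep value networks.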

27 pages, 5940 KB  
Article
Manufacturability-Constrained Multi-Objective Optimization of an EV Battery Pack Enclosure for Side-Pole Impact
by Desheng Zhang, Zhenxin Sun, Han Zhang and Jieguo Liao
World Electr. Veh. J. 2025, 16(11), 632; https://doi.org/10.3390/wevj16110632 - 19 Nov 2025
Cited by 1 | Viewed by 655
Abstract
This work minimizes battery pack enclosure mass (kg) and peak deformation (mm) under a side-pole impact condition and validates the results by finite-element reruns complemented by coupon-level material tests. A 64-run optimal Latin hypercube dataset trained ARD Matérn-5/2 Gaussian-process surrogates, and NSGA-II performed a multi-objective search on a manufacturability grid (Δt = 0.5 mm). Decision-making used knee-region filtering and TOPSIS in the normalized objective space with robustness checks (uncertainty inflation, weight perturbation, and cross-kernel audit). The representative optimum reduced mass from 149.40 kg to 115.20 kg (−22.89%) while keeping peak deformation essentially unchanged (66.17 → 66.25 mm) in independent reruns. To examine material dependence, an orthotropic CFRP cross-check was performed by substituting the upper cover and side walls: the iso-thickness mapping yields 90.40 kg with 68.67 mm (+3.65% vs. aluminum), whereas a constrained iso-mass setting (H1 = 7.0 mm, H2 = 7.0 mm) gives 111.70 kg with 80.85 mm (+22.04%). The observed trends are consistent with the laminate's lower transverse-shear moduli and shear-sensitive load paths; damage evolution and lay-up optimization are outside the present scope. The workflow provides a reproducible route to balance lightweighting and deformation control for battery pack enclosures. Full article
(This article belongs to the Section Storage Systems)
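The TOPSIS step used for decision-making in this entry is a short, mechanical procedure: normalize the objective matrix, weight it, and rank candidates by relative closeness to the ideal point. A sketch on synthetic stand-in values (not the paper's data; equal weights are an assumption):

```python
import numpy as np

# Pareto candidates: columns are (mass_kg, deformation_mm), both minimized.
F = np.array([
    [115.2, 66.3],
    [125.0, 64.0],
    [140.0, 62.5],
])

# Vector-normalize each criterion, apply weights, score by closeness.
N = F / np.linalg.norm(F, axis=0)
w = np.array([0.5, 0.5])          # equal weights (an assumption)
V = N * w
ideal = V.min(axis=0)             # best value per cost criterion
anti = V.max(axis=0)              # worst value per cost criterion
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
best = int(np.argmax(closeness))
print(best, np.round(closeness, 3))
```

Knee-region filtering before TOPSIS simply restricts the candidate set to the bend of the Pareto front, so the closeness ranking only adjudicates among already-balanced designs.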

20 pages, 29354 KB  
Article
Two-Dimensional Reproducing Kernel-Based Interpolation Approximation for Best Regularization Parameter in Electrical Tomography Algorithm
by Fanpeng Dong and Shihong Yue
Symmetry 2025, 17(8), 1242; https://doi.org/10.3390/sym17081242 - 5 Aug 2025
Viewed by 733
Abstract
The regularization parameter plays an important role in regularization-based electrical tomography (ET) algorithms, but the existing methods generally cannot determine the parameter. Moreover, these methods are not real-time since a thorough search must be performed for the best parameter. To address the issue, a reproducing kernel-based interpolation approximation method is proposed to efficiently estimate the best regularization parameter from a group of representative samples. The optimization and generation of the new method have been verified by theoretical analysis and experimental demonstration. The theoretical evaluation is conducted in a Hilbert space with a known reproducing kernel, and its symmetry ensures the uniqueness of the interpolation. Experimental validation is carried out using both simulated and actual models, each with a range of distinct features. Results indicate that the new method can approximately find the best regularization parameter. Consequently, the new method can effectively improve both the spatial resolution and the steadiness of the ET imaging process. Full article
(This article belongs to the Section Computer)
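The mechanism behind this entry — interpolating the best regularization parameter across representative samples, with a symmetric Gram matrix guaranteeing a unique interpolant — can be sketched in one dimension (the paper's features are two-dimensional; the kernel, feature values, and parameter values below are all synthetic assumptions):

```python
import numpy as np

# Representative samples: a scalar feature describing the measurement
# setting, and the best regularization parameter found offline for each.
feat = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
best_lam = np.array([0.02, 0.05, 0.04, 0.08, 0.06])

def k(a, b, ell=0.25):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell**2))

# Symmetric positive-definite Gram matrix -> unique interpolant.
G = k(feat, feat)
c = np.linalg.solve(G, best_lam)

def predict(x):
    # Estimated best regularization parameter for a new setting.
    return k(np.atleast_1d(x), feat) @ c

print(predict(0.4))
```

At run time, estimating the parameter costs one kernel evaluation per representative sample instead of a full search, which is the real-time advantage the abstract claims.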

21 pages, 3670 KB  
Article
Quantum Data-Driven Modeling of Interactions and Vibrational Spectral Bands in Cationic Light Noble-Gas Hydrides: [He2H]+ and [Ne2H]+
by María Judit Montes de Oca-Estévez, Álvaro Valdés and Rita Prosmiti
Molecules 2025, 30(11), 2440; https://doi.org/10.3390/molecules30112440 - 3 Jun 2025
Cited by 2 | Viewed by 1091
Abstract
Motivated by two of the most unexpected discoveries in recent years—the detection of ArH+ and HeH+ noble gas molecules in the cold, low-pressure regions of the Universe—we investigate [He2H]+ and [Ne2H]+ as potentially detectable species in the interstellar medium, providing new insights into their energetic and spectral properties. These findings are crucial for advancing our understanding of noble gas chemistry in astrophysical environments. To achieve this, we employed a data-driven approach to construct a high-accuracy machine-learning potential energy surface using the reproducing kernel Hilbert space method. Training and testing datasets are generated via high-level CCSD(T)/CBS[56] quantum chemistry computations, followed by a rigorous validation protocol to ensure the reliability of the potential. The ML-PES is then used to compute vibrational states within the MCTDH framework, and assign spectral transitions for the most common isotopologues of these species in the interstellar medium. Our results are compared with previously recorded values, revealing that both cations exhibit a prominent proton-shuttle motion within the infrared spectral range, making them strong candidates for telescopic observation. This study provides a solid computational foundation, based on rigorous, fully quantum treatments, aiming to assist in the identification of these yet unobserved He/Ne hydride cations in astrophysical environments. Full article
(This article belongs to the Special Issue Advances in Computational Spectroscopy, 2nd Edition)

22 pages, 1130 KB  
Article
Two-Mode Hereditary Model of Solar Dynamo
by Evgeny Kazakov, Gleb Vodinchar and Dmitrii Tverdyi
Mathematics 2025, 13(10), 1669; https://doi.org/10.3390/math13101669 - 20 May 2025
Viewed by 642
Abstract
The magnetic field of the Sun is formed by the mechanism of hydromagnetic dynamo. In this mechanism, the flow of the conducting medium (plasma) of the convective zone generates a magnetic field, and this field corrects the flow using the Lorentz force, creating feedback. An important role in dynamo is played by memory (hereditary), when a change in the current state of a physical system depends on its states in the past. Taking these effects into account may provide a more accurate description of the generation of the Sun’s magnetic field. This paper generalizes classical dynamo models by including hereditary feedback effects. The feedback parameters such as the presence or absence of delay, delay duration, and memory duration are additional degrees of freedom. This can provide more diverse dynamic modes compared to classical memoryless models. The proposed model is based on the kinematic dynamo problem, where the large-scale velocity field is predetermined. The field in the model is represented as a linear combination of two stationary predetermined modes with time-dependent amplitudes. For these amplitudes, equations are obtained based on the kinematic dynamo equations. The model includes two generators of a large-scale magnetic field. In the first, the field is generated due to large-scale flow of the medium. The second generator has a turbulent nature; in it, generation occurs due to the nonlinear interaction of small-scale pulsations of the magnetic field and velocity. Memory in the system under study is implemented in the form of feedback distributed over all past states of the system. The feedback is represented by an integral term of the type of convolution of a quadratic form of phase variables with a kernel of a fairly general form. The quadratic form models the influence of the Lorentz force. This integral term describes the turbulent generator quenching. 
Mathematically, this model is written with a system of integro-differential equations for amplitudes of modes. The model was applied to a real space object, namely, the solar dynamo. The model representation of the Sun’s velocity field was constructed based on helioseismological data. Free field decay modes were chosen as components of the magnetic field. The work considered cases when hereditary feedback with the system arose instantly or with a delay. The simulation results showed that the model under study reproduces dynamic modes characteristic of the solar dynamo, if there is a delay in the feedback. Full article
(This article belongs to the Special Issue Advances in Nonlinear Dynamical Systems of Mathematical Physics)
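The hereditary feedback structure in this model — growth quenched by a convolution of a quadratic term with a memory kernel — can be mimicked by a scalar toy equation. The sketch below integrates $x'(t) = a\,x - x \int_0^t K(t-s)\,x(s)^2\,ds$ with an exponential kernel via an explicit Euler scheme (this mirrors the structure, not the paper's two-mode system; all constants are assumptions):

```python
import numpy as np

a, tau = 1.0, 0.5                 # growth rate and memory time (assumed)
dt, steps = 0.01, 2000
K = lambda u: np.exp(-u / tau)    # exponential memory kernel

t = np.arange(steps) * dt
x = np.zeros(steps)
x[0] = 0.1
for i in range(1, steps):
    # Left Riemann sum of the convolution integral up to t_i.
    mem = np.sum(K(t[i] - t[:i]) * x[:i] ** 2) * dt
    x[i] = x[i - 1] + dt * (a * x[i - 1] - x[i - 1] * mem)
print(x[-1])
```

The amplitude grows while the memory term is small and saturates once the accumulated quadratic feedback balances the growth rate, the same quenching mechanism the abstract attributes to the Lorentz-force term.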

19 pages, 2924 KB  
Article
An Efficient Multiple Empirical Kernel Learning Algorithm with Data Distribution Estimation
by Jinbo Huang, Zhongmei Luo and Xiaoming Wang
Electronics 2025, 14(9), 1879; https://doi.org/10.3390/electronics14091879 - 5 May 2025
Viewed by 1060
Abstract
The Multiple Random Empirical Kernel Learning Machine (MREKLM) typically generates multiple empirical feature spaces by selecting a limited group of samples, which helps reduce training duration. However, MREKLM does not incorporate data distribution information during the projection process, leading to inconsistent performance and issues with reproducibility. To address this limitation, we introduce a within-class scatter matrix that leverages the distribution of samples, resulting in the development of the Fast Multiple Empirical Kernel Learning Incorporating Data Distribution Information (FMEKL-DDI). This approach enables the algorithm to incorporate sample distribution data during projection, improving the decision boundary and enhancing classification accuracy. To further minimize sample selection time, we employ a border point selection technique utilizing locality-sensitive hashing (BPLSH), which helps in efficiently picking samples for feature space development. The experimental results from various datasets demonstrate that FMEKL-DDI significantly improves classification accuracy while reducing training duration, thereby providing a more efficient approach with strong generalization performance. Full article
(This article belongs to the Section Artificial Intelligence)
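The empirical feature spaces this entry builds on come from the empirical kernel map: projecting kernel evaluations onto scaled eigendirections of the Gram matrix so that ordinary dot products reproduce the kernel. A generic sketch (the sample-selection and scatter-matrix machinery of FMEKL-DDI is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(20, 4))   # training samples (synthetic)

def rbf(A, B, gamma=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Empirical kernel map: Phi(x) = Lambda^{-1/2} U^T k(x, X), built from the
# eigendecomposition of the Gram matrix K = U Lambda U^T.
K = rbf(X, X)
w, U = np.linalg.eigh(K)
keep = w > 1e-10                    # drop numerically null directions
M = U[:, keep] / np.sqrt(w[keep])   # n x r mapping matrix

def emp_map(x):
    return rbf(np.atleast_2d(x), X) @ M

E = emp_map(X)                      # mapped training set
# Dot products of mapped points reproduce the kernel on the training set.
print(np.max(np.abs(E @ E.T - K)))
```

Working in this explicit finite-dimensional space is what lets methods like MREKLM and FMEKL-DDI bring in data-distribution terms such as a within-class scatter matrix, which have no convenient form in the implicit feature space.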
