Search Results (856)

Search Parameters:
Keywords = Linear interpolation

17 pages, 2662 KB  
Article
Seasonal and Spatial Variations in General Extreme Value (GEV) Distribution Shape Parameter for Estimating Extreme Design Rainfall in Tasmania
by Iqbal Hossain, Shirley Gato-Trinidad and Monzur Alam Imteaz
Water 2026, 18(3), 319; https://doi.org/10.3390/w18030319 - 27 Jan 2026
Viewed by 118
Abstract
This paper demonstrates seasonal variations in the generalised extreme value (GEV) distribution shape parameter and discrepancies in GEV types within the same location. Daily rainfall data from 26 rain gauge stations located in Tasmania were used as a case study. Four GEV distribution parameter estimation techniques, namely MLE, GMLE, Bayesian, and L-moments, were used to determine the shape parameter of the distribution. With the estimated shape parameter, the spatial variations under different seasons were investigated through GIS interpolation maps. As there is strong evidence that shape parameters potentially vary across locations, spatial analysis focusing on the shape parameter across Tasmania (Australia) was performed. The outcomes of the analysis revealed that the shape parameters exhibit their highest and lowest values in winter, with a range from −0.234 to 0.529. The analysis of the rainfall data revealed that there is significant variation in the shape parameters among the seasons. The magnitude of the shape parameter decreases with elevation, and a non-linear relationship exists between these two parameters. This study extends knowledge on the current framework of GEV distribution shape parameter estimation techniques at the regional scale, enabling the adoption of appropriate GEV types and, thus, the appropriate determination of design rainfall to reduce hazards and protect our environments. Full article
Show Figures

Figure 1
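As a quick illustration of the kind of shape-parameter estimation this abstract describes, the sketch below fits a GEV distribution to synthetic annual-maximum rainfall with SciPy's maximum-likelihood fitter. The sample values are invented, and note that SciPy's shape parameter c is the negative of the ξ convention common in hydrology.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual-maximum rainfall (mm); not the paper's data.
rng = np.random.default_rng(42)
true_xi = 0.1  # hydrology-convention shape parameter
sample = genextreme.rvs(c=-true_xi, loc=50.0, scale=12.0, size=500,
                        random_state=rng)

# Maximum-likelihood fit; SciPy returns (shape c, location, scale).
c_hat, loc_hat, scale_hat = genextreme.fit(sample)
xi_hat = -c_hat  # convert back to the hydrology sign convention
```

The fitted ξ can then be mapped per station and per season, as the paper does with GIS interpolation.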

25 pages, 889 KB  
Article
Constructive Approximation of Nonlinear Operators Based on Piecewise Interpolation Technique
by Anatoli Torokhti and Peter Pudney
Axioms 2026, 15(2), 91; https://doi.org/10.3390/axioms15020091 - 26 Jan 2026
Viewed by 101
Abstract
Suppose K_Y and K_X are the image and the preimage of a nonlinear operator F : K_Y → K_X. It is supposed that the cardinality of each of K_Y and K_X is N, and that N is large. We provide an approximation to the map F that requires prior information on only a few elements p from K_Y, where p ≪ N, but still effectively represents F(K_Y). This is achieved under Lipschitz continuity assumptions. The device behind the proposed method is based on a special extension of the piecewise linear interpolation technique to the case of sets of stochastic elements. The proposed technique provides a single operator that transforms any element from the arbitrarily large set K_Y. The operator is determined in terms of pseudo-inverse matrices, so it always exists. Full article
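The core building block the abstract extends, plain piecewise linear interpolation from a few known samples, can be sketched in a couple of lines (the function and sample points below are invented for illustration):

```python
import numpy as np

# Known samples of a nonlinear map on a coarse grid.
x_known = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y_known = x_known ** 2            # stand-in for an expensive operator F

# Piecewise linear interpolation at new query points.
x_query = np.array([0.25, 0.75, 1.25])
y_approx = np.interp(x_query, x_known, y_known)
# y_approx = [0.125, 0.625, 1.625]; exact values are [0.0625, 0.5625, 1.5625]
```

The paper's contribution is to lift this idea from scalar functions to sets of stochastic elements via pseudo-inverse matrices.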
13 pages, 2027 KB  
Article
An Improved Diffusion Model for Generating Images of a Single Category of Food on a Small Dataset
by Zitian Chen, Zhiyong Xiao, Dinghui Wu and Qingbing Sang
Foods 2026, 15(3), 443; https://doi.org/10.3390/foods15030443 - 26 Jan 2026
Viewed by 215
Abstract
In the era of the digital food economy, high-fidelity food images are critical for applications ranging from visual e-commerce presentation to automated dietary assessment. However, developing robust computer vision systems for food analysis is often hindered by data scarcity for long-tail or regional dishes. To address this challenge, we propose a novel high-fidelity food image synthesis framework as an effective data augmentation tool. Unlike generic generative models, our method introduces an Ingredient-Aware Diffusion Model based on the Masked Diffusion Transformer (MaskDiT) architecture. Specifically, we design a Label and Ingredients Encoding (LIE) module and a Cross-Attention (CA) mechanism to explicitly model the relationship between food composition and visual appearance, simulating the “cooking” process digitally. Furthermore, to stabilize training on limited data samples, we incorporate a linear interpolation strategy into the diffusion process. Extensive experiments on the Food-101 and VireoFood-172 datasets demonstrate that our method achieves state-of-the-art generation quality even in data-scarce scenarios. Crucially, we validate the practical utility of our synthetic images: utilizing them for data augmentation improved the accuracy of downstream food classification tasks from 95.65% to 96.20%. This study provides a cost-effective solution for generating diverse, controllable, and realistic food data to advance smart food systems. Full article
Show Figures

Figure 1
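The linear interpolation strategy mentioned in the abstract can be pictured as a blend between a clean sample and Gaussian noise at diffusion time t. The sketch below illustrates that idea only; the paper's exact formulation inside MaskDiT is not reproduced here, and the arrays are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 8))      # stand-in for a clean food image
eps = rng.normal(size=(8, 8))     # Gaussian noise

def interpolate(x0, eps, t):
    """Linearly blend data and noise at diffusion time t in [0, 1]."""
    return (1.0 - t) * x0 + t * eps

x_mid = interpolate(x0, eps, 0.5)  # halfway between data and noise
```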

19 pages, 3630 KB  
Article
Normal Shock Wave Approximations for Flight at Hypersonic Mach Numbers
by Pasquale M. Sforza
Aerospace 2026, 13(2), 115; https://doi.org/10.3390/aerospace13020115 - 24 Jan 2026
Viewed by 170
Abstract
Normal shock pressure ratios in equilibrium air for Mach numbers up to 30 and altitudes to 300,000 feet are shown to be correlated by a simple power law which provides an accuracy of ±2%, thereby permitting direct calculation of corresponding enthalpy ratios accurate to ±1% without iteration; a slight change in power-law coefficients extends this capability to Mach 65. Temperature, density, and compressibility may be then found directly from tables for high temperature air. For Mach numbers up to at least 6, a linear approximation for specific heat provides direct solutions for post-shock state variables, while a complementary logarithmic model of the equation of state enables direct solutions for Mach numbers up to about 12. This approach, which provides accuracy within ±3% for all relevant variables in the practical flight corridor of vehicles at these low to moderate hypersonic Mach numbers, should prove useful in design and analysis because the algebraic solutions obtained require neither iteration nor interpolation. Full article
(This article belongs to the Section Aeronautics)
Show Figures

Figure 1

28 pages, 978 KB  
Article
Computable Reformulation of Data-Driven Distributionally Robust Chance Constraints: Validated by Solution of Capacitated Lot-Sizing Problems
by Hua Deng and Zhong Wan
Mathematics 2026, 14(2), 331; https://doi.org/10.3390/math14020331 - 19 Jan 2026
Viewed by 101
Abstract
Uncertainty in optimization models often causes awkward properties in their deterministic equivalent formulations (DEFs), even for simple linear models. Chance-constrained programming is a reasonable tool for handling optimization problems with random parameters in objective functions and constraints, but it assumes that the distribution of these random parameters is known, and its DEF is often associated with the complicated computation of multiple integrals, hence impeding its extensive applications. In this paper, for optimization models with chance constraints, the historical data of random model parameters are first exploited to construct an adaptive approximate density function by incorporating piecewise linear interpolation into the well-known histogram method, so as to remove the assumption of a known distribution. Then, in view of this estimation, a novel confidence set only involving finitely many variables is constructed to depict all the potential distributions for the random parameters, and a computable reformulation of data-driven distributionally robust chance constraints is proposed. By virtue of such a confidence set, it is proven that the deterministic equivalent constraints are reformulated as several ordinary constraints in line with the principles of the distributionally robust optimization approach, without the need to solve complicated semi-definite programming problems, compute multiple integrals, or solve additional auxiliary optimization problems, as done in existing works. 
The proposed method is further validated by the solution of the stochastic multiperiod capacitated lot-sizing problem, and the numerical results demonstrate that: (1) The proposed method can significantly reduce the computational time needed to find a robust optimal production strategy compared with similar ones in the literature; (2) The optimal production strategy provided by our method can maintain moderate conservatism, i.e., it has the ability to achieve a better trade-off between cost-effectiveness and robustness than existing methods. Full article
(This article belongs to the Section D: Statistics and Operational Research)
Show Figures

Figure 1

24 pages, 5523 KB  
Article
Impact of Satellite Clock Corrections and Different Precise Products on GPS and Galileo Precise Point Positioning Performance
by Damian Kiliszek and Karol Korolczuk
Sensors 2026, 26(2), 588; https://doi.org/10.3390/s26020588 - 15 Jan 2026
Viewed by 307
Abstract
This study assesses how satellite clock products affect Precise Point Positioning (PPP) for GPS, Galileo, and GPS+Galileo. Multi-GNSS data at 30 s were processed for 12 global IGS stations over one week in 2025, with each day split into eight independent three-hour sessions. SP3 clocks (ORB, 5 min) were compared with dedicated CLKs (CLO, 5 s, 30 s, 5 min) across final (FIN), rapid (RAP), and ultra-rapid (ULT; observed/predicted) product lines from multiple analysis centers. Two timing strategies were tested: nearest-epoch sampling (CLOCK0) and linear interpolation (CLOCK1). CLO consistently delivered the lowest 2D/3D errors and the fastest convergence. ORB degraded accuracy by a few millimeters and extended convergence by ~5–10 min, most notably for GPS. With 5 min clocks, CLOCK1 yielded small gains for Galileo but often hurt GPS; with 30 s clocks, interpolation was immaterial; 5 s clocks offered no measurable benefit. FIN outperformed RAP; OPS slightly outperformed MGEX; ESA/GFZ ranked highest. ULT solutions were weaker, especially in the predicted half. Zenith tropospheric delay (ZTD) biases were negligible; variance was smallest for GPS+Galileo with CLO (~7–10 mm), increased by ~1–2 mm with ORB, and was largest in ULT. Dense, high-quality clock products remain essential for reliable PPP. Full article
Show Figures

Figure 1
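The two timing strategies this abstract compares, nearest-epoch sampling (CLOCK0) and linear interpolation (CLOCK1), can be sketched as follows. The clock-correction values and epochs below are invented for illustration.

```python
import numpy as np

# 5 min product epochs (s) and satellite clock corrections (s); made-up values.
t_prod = np.array([0.0, 300.0, 600.0, 900.0])
clk_prod = np.array([1.00e-4, 1.03e-4, 1.02e-4, 1.05e-4])

t_obs = 390.0  # a 30 s observation epoch between product epochs

# CLOCK0: take the correction at the nearest product epoch.
clk0 = clk_prod[np.argmin(np.abs(t_prod - t_obs))]

# CLOCK1: linearly interpolate between the bracketing product epochs.
clk1 = np.interp(t_obs, t_prod, clk_prod)
```

With dense 30 s or 5 s clocks the two strategies nearly coincide, which matches the abstract's finding that interpolation mainly matters for 5 min products.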

18 pages, 1816 KB  
Article
A Biomass-Driven 3D Structural Model for Banana (Musa spp.) Fruit Fingers Across Genotypes
by Yongxia Liu, Ting Sun, Zhanwu Sheng, Bizun Wang, Lili Zheng, Yang Yang, Dao Xiao, Xiaoyan Zheng, Pingping Fang, Jing Cao and Wenyu Zhang
Agronomy 2026, 16(2), 204; https://doi.org/10.3390/agronomy16020204 - 14 Jan 2026
Viewed by 245
Abstract
Banana (Musa spp.) fruit morphology is a key determinant of yield and quality, yet modeling its 3D structural dynamics across genotypes remains difficult. To address this challenge, we developed a generic, biomass-driven 3D structural model for banana fruit fingers that quantitatively links growth and morphology. Field experiments were conducted over two growing seasons in Hainan, China, using three representative genotypes. Morphological traits, including outer and inner arc length, circumference, and pedicel length, along with dry weight (W_d) and fresh weight (W_f), were measured every 10 days after flowering until 110 days. Quantitative relationships between morphological traits and W_f, as well as between W_d and W_f, were fitted using linear or Gompertz functions with genotype-specific parameters. Based on these functions, a parameterized 3D reconstruction method was implemented in Python, combining biomass-driven growth equations, curvature geometry, and cross-sectional interpolation to simulate the fruit’s bending, tapering, and volumetric development. The resulting dynamic 3D models accurately reproduced genotype-specific differences in curvature, length, and shape with average fitting R² > 0.95. The proposed biomass-driven 3D structural model provides a methodological framework for integrating banana fruit morphology into functional–structural plant models. Full article
(This article belongs to the Section Precision and Digital Agriculture)
Show Figures

Figure 1

21 pages, 2930 KB  
Article
Robust Model Predictive Control with a Dynamic Look-Ahead Re-Entry Strategy for Trajectory Tracking of Differential-Drive Robots
by Diego Guffanti, Moisés Filiberto Mora Murillo, Santiago Bustamante Sanchez, Javier Oswaldo Obregón Gutiérrez, Marco Alejandro Hinojosa, Alberto Brunete, Miguel Hernando and David Álvarez
Sensors 2026, 26(2), 520; https://doi.org/10.3390/s26020520 - 13 Jan 2026
Viewed by 206
Abstract
Accurate trajectory tracking remains a central challenge in differential-drive mobile robots (DDMRs), particularly when operating under real-world conditions. Model Predictive Control (MPC) provides a powerful framework for this task, but its performance degrades when the robot deviates significantly from the nominal path. To address this limitation, robust recovery mechanisms are required to ensure stable and precise tracking. This work presents an experimental validation of an MPC controller applied to a four-wheel DDMR, whose odometry is corrected by a SLAM algorithm running in ROS 2. The MPC is formulated as a quadratic program with state and input constraints on linear (v) and angular (ω) velocities, using a prediction horizon of N_p = 15 future states, adjusted to the computational resources of the onboard computer. A novel dynamic look-ahead re-entry strategy is proposed, which activates when the robot exits a predefined lateral error band (δ = 0.05 m) and interpolates a smooth reconnection trajectory based on a forward look-ahead point, ensuring gradual convergence and avoiding abrupt re-entry actions. Accuracy was evaluated through lateral and heading errors measured via geometric projection onto the nominal path, ensuring fair comparison. From these errors, RMSE, MAE, P95, and in-band percentage were computed as quantitative metrics. The framework was tested on real hardware at 50 Hz through 5 nominal experiments and 3 perturbed experiments. Perturbations consisted of externally imposed velocity commands at specific points along the path, while configuration parameters were systematically varied across trials, including the weight R, smoothing distance L_smooth, and activation of the re-entry strategy. In nominal conditions, the best configuration (ID 2) achieved a lateral RMSE of 0.05 m, a heading RMSE of 0.06 rad, and maintained 68.8% of the trajectory within the validation band. Under perturbations, the proposed strategy substantially improved robustness.
For instance, in experiment ID 6 the robot sustained a lateral RMSE of 0.12 m and preserved 51.4% in-band, outperforming MPC without re-entry, which suffered from larger deviations and slower recoveries. The results confirm that integrating MPC with the proposed re-entry strategy enhances both accuracy and robustness in DDMR trajectory tracking. By combining predictive control with a spatially grounded recovery mechanism, the approach ensures consistent performance in challenging scenarios, underscoring its relevance for reliable mobile robot navigation in uncertain environments. Full article
(This article belongs to the Section Sensors and Robotics)
Show Figures

Figure 1
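The re-entry idea, interpolating a smooth reconnection from an off-path position toward a forward look-ahead point, can be sketched as a simple linear blend. This is a hypothetical illustration of the concept only, not the paper's algorithm, and all positions below are invented.

```python
import numpy as np

p_robot = np.array([1.0, 0.20])  # current position, outside the error band
p_ahead = np.array([2.0, 0.00])  # forward look-ahead point on the nominal path

# Blend parameter along the re-entry arc; 0 = robot, 1 = look-ahead point.
s = np.linspace(0.0, 1.0, 11)
reentry = (1.0 - s)[:, None] * p_robot + s[:, None] * p_ahead
```

Feeding such a reconnection path to the MPC as the reference avoids the abrupt corrections that tracking the nominal path directly would cause.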

24 pages, 6216 KB  
Article
Three-Dimensional Surface High-Precision Modeling and Loss Mechanism Analysis of Motor Efficiency Map Based on Driving Cycles
by Jiayue He, Yan Sui, Qiao Liu, Zehui Cai and Nan Xu
Energies 2026, 19(2), 302; https://doi.org/10.3390/en19020302 - 7 Jan 2026
Viewed by 207
Abstract
Amid fossil-fuel depletion and worsening environmental impacts, battery electric vehicles (BEVs) are pivotal to the energy transition. Energy management in BEVs relies on accurate motor efficiency maps, yet real-time onboard control demands models that balance fidelity with computational cost. To address map inaccuracy under real driving and the high runtime cost of 2-D interpolation, we propose a driving-cycle-aware, physically interpretable quadratic polynomial-surface framework. We extract priority operating regions on the speed–torque plane from typical driving cycles and model electrical power P_e as a function of motor speed n and mechanical power P_m. A nested model family (M3–M6) and three fitting strategies—global, local, and region-weighted—are assessed using R², RMSE, a computational complexity index (CCI), and an Integrated Criterion for accuracy–complexity and stability (ICS). Simulations on the Worldwide Harmonized Light Vehicles Test Cycle, the China Light-Duty Vehicle Test Cycle, and the Urban Dynamometer Driving Schedule show that region-weighted fitting consistently achieves the best or near-best ICS; relative to global fitting, mean ICS decreases by 49.0%, 46.4%, and 90.6%, with the smallest variance. Regarding model order, the four-term M4 + P_m² model offers the best accuracy–complexity trade-off. Finally, the region-weighted M4 + P_m² polynomial model was integrated into the vehicle-level economic speed planning model based on the dynamic programming algorithm. In simulations covering a 27 km driving distance, this model reduced computational time by approximately 87% compared to a linear interpolation method based on a two-dimensional lookup table, while achieving an energy consumption deviation of about 0.01% relative to the lookup table approach.
Results demonstrate that the proposed model significantly alleviates computational burden while maintaining high energy consumption prediction accuracy, thereby providing robust support for real-time in-vehicle applications in whole-vehicle energy management. Full article
(This article belongs to the Special Issue Challenges and Research Trends of Energy Management)
Show Figures

Figure 1
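A four-term polynomial surface in the spirit of the M4 + P_m² model described above can be fitted by ordinary least squares. The coefficients and operating ranges below are invented for the sketch; the point is only that such a model replaces 2-D table lookups with a few multiply-adds.

```python
import numpy as np

rng = np.random.default_rng(0)
n = rng.uniform(500.0, 5000.0, 200)   # motor speed (rpm), illustrative range
pm = rng.uniform(1.0, 50.0, 200)      # mechanical power (kW)
# Synthetic electrical power: Pe = b0 + b1*n + b2*Pm + b3*Pm^2
pe = 1.5 + 4.0e-4 * n + 1.05 * pm + 2.0e-3 * pm**2

# Least-squares fit of the four-term model.
A = np.column_stack([np.ones_like(n), n, pm, pm**2])
coef, *_ = np.linalg.lstsq(A, pe, rcond=None)
pe_hat = A @ coef
```

Region-weighted fitting, as in the paper, would additionally weight rows of A by how often each (n, P_m) cell is visited in the driving cycle.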

23 pages, 5241 KB  
Article
BAARTR: Boundary-Aware Adaptive Regression for Kinematically Consistent Vessel Trajectory Reconstruction from Sparse AIS
by Hee-jong Choi, Joo-sung Kim and Dae-han Lee
J. Mar. Sci. Eng. 2026, 14(2), 116; https://doi.org/10.3390/jmse14020116 - 7 Jan 2026
Viewed by 240
Abstract
The Automatic Identification System (AIS) frequently suffers from data loss and irregular report intervals in real maritime environments, compromising the reliability of downstream navigation, monitoring, and trajectory reconstruction tasks. To address these challenges, we propose BAARTR (Boundary-Aware Adaptive Regression for Kinematically Consistent Vessel Trajectory Reconstruction), a novel kinematically consistent interpolation framework. Operating solely on time, latitude, and longitude inputs, BAARTR explicitly enforces boundary velocities derived from raw AIS data. The framework adaptively selects a velocity-estimation strategy based on the AIS reporting gap: central differencing is applied for short intervals, while a hierarchical cubic velocity regression with a quadratic acceleration constraint is employed for long or irregular gaps to iteratively refine endpoint slopes. These boundary slopes are subsequently incorporated into a clamped quartic interpolation at a 1 s resolution, effectively suppressing overshoots and ensuring velocity continuity across segments. We evaluated BAARTR against Linear, Spline, Hermite, Bezier, Piecewise Cubic Hermite Interpolating Polynomial (PCHIP), and Modified Akima (Makima) methods using real-world AIS data collected from the Mokpo Port channel, Republic of Korea (2023–2024), across three representative vessels. The experimental results demonstrate that BAARTR achieves superior reconstruction accuracy while maintaining strictly linear time complexity (O(N)). BAARTR consistently achieved the lowest median Root Mean Square Error (RMSE) and the narrowest Interquartile Ranges (IQR), producing visibly smoother and more kinematically plausible paths, especially in high-curvature turns where standard geometric interpolations tend to oscillate. Furthermore, sensitivity analysis shows stable performance with a modest training window (n ≈ 16) and minimal regression iterations (m = 2–3).
By reducing reliance on large training datasets, BAARTR offers a lightweight, extensible foundation for post-processing in Maritime Autonomous Surface Ship (MASS) and Vessel Traffic Service (VTS), as well as for accident reconstruction and multi-sensor fusion. Full article
(This article belongs to the Special Issue Advanced Research on Path Planning for Intelligent Ships)
Show Figures

Figure 1
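Two of the baseline interpolators the paper benchmarks against, piecewise linear and PCHIP, can be run on sparse fixes in a few lines. The report times and latitudes below are invented; BAARTR itself is not reproduced here.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Sparse AIS fixes: report times (s) and latitude samples (deg), made up.
t = np.array([0.0, 60.0, 120.0, 180.0])
lat = np.array([34.780, 34.782, 34.786, 34.787])

t_fine = np.arange(0.0, 181.0, 1.0)            # 1 s resolution, as in the paper
lat_linear = np.interp(t_fine, t, lat)         # piecewise linear baseline
lat_pchip = PchipInterpolator(t, lat)(t_fine)  # shape-preserving cubic
```

PCHIP keeps the reconstruction within the data's range on monotone segments, which is why it is a natural baseline for overshoot-sensitive comparisons like this one.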

14 pages, 1038 KB  
Article
Designing Poly(vinyl formal) Membranes for Controlled Diclofenac Delivery: Integrating Classical Kinetics with GRNN Modeling
by Igor Garcia-Atutxa and Francisca Villanueva-Flores
Appl. Sci. 2026, 16(2), 562; https://doi.org/10.3390/app16020562 - 6 Jan 2026
Viewed by 191
Abstract
Controlled-release systems must translate material design choices into predictable pharmacokinetic (PK) profiles, yet purely mechanistic or purely data-driven models often underperform when tuning complex polymer networks. Here, we develop tunable poly(vinyl formal) membranes (PVFMs) for diclofenac delivery and integrate classical kinetic analysis with a Generalized Regression Neural Network (GRNN) to connect formulation variables to release behavior and PK-relevant targets. PVFMs were synthesized across a gradient of crosslink densities by varying HCl content; diclofenac release was quantified under standardized conditions with geometry and dosing rigorously controlled (thickness, effective area, surface-area-to-volume ratio, and areal drug loading are reported to ensure reproducibility). Release profiles were fitted to Korsmeyer–Peppas, zero-order, first-order, Higuchi, and hyperbolic tangent models, while a GRNN was trained on material descriptors and time to predict cumulative release and flux, including out-of-sample conditions. Increasing crosslink density monotonically reduced swelling, areal release rate, and overall release efficiency (strong linear trends; r ≈ 0.99) and shifted transport from anomalous to Super Case II at the highest crosslinking. Classical models captured regime transitions but did not sustain high accuracy across the full design space; in contrast, the GRNN delivered superior predictive performance and generalized to conditions absent from training, enabling accurate interpolation/extrapolation of release trajectories. Beyond prior work, we provide a material-to-PK design map in which crosslinking, porosity/tortuosity, and hydrophobicity act as explicit “knobs” to shape burst, flux, and near-zero-order behavior, and we introduce a hybrid framework where mechanistic models guide interpretation while GRNN supplies robust, data-driven prediction for formulation selection. 
This integrated PVFM–GRNN approach supports rational design and quality control of controlled-release devices for diclofenac and is extendable to other therapeutics given appropriate descriptors and training data. Full article
(This article belongs to the Section Materials Science and Engineering)
Show Figures

Figure 1

22 pages, 657 KB  
Article
Weighted Random Averages and Recursive Interpolation in Fibonacci Sequences
by Najmeddine Attia and Taoufik Moulahi
Fractal Fract. 2026, 10(1), 33; https://doi.org/10.3390/fractalfract10010033 - 5 Jan 2026
Viewed by 205
Abstract
We investigate the multifractal geometry of irregular sets arising from weighted averages of random variables, where the weights (w_n) form a positive sequence with exponential growth. Our analysis applies in particular to sequences generated by linear recurrence relations of Fibonacci type, including higher-order generalizations such as the Tetranacci sequence (T_n). Using a Cantor-type construction built from alternating free and forced blocks, we show that the associated exceptional sets may attain full Hausdorff and packing dimension, independently of the precise form of the recurrence. We further develop a probabilistic interpretation of (T_n) through an appropriate Markov representation that encodes its combinatorial evolution and yields sharp asymptotic behavior. Finally, given n+1 consecutive terms of a Fibonacci-type sequence, one may construct a polynomial P_n(x) of degree at most n via Lagrange interpolation; we show that this polynomial admits an implicit recursive representation consistent with the underlying recurrence. Full article
Show Figures

Figure 1
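The Lagrange construction mentioned in the abstract, a polynomial of degree at most n through n+1 consecutive Fibonacci terms, can be verified directly (here with n = 4 and the terms F_3 through F_7):

```python
import numpy as np
from scipy.interpolate import lagrange

# Five consecutive Fibonacci terms F_3..F_7.
fib = np.array([2.0, 3.0, 5.0, 8.0, 13.0])
x = np.arange(5.0)  # interpolation nodes 0..4

P = lagrange(x, fib)  # interpolating polynomial of degree <= 4
vals = P(x)           # reproduces the five terms exactly at the nodes
```

The paper's point is subtler, that this P_n inherits a recursive representation from the underlying recurrence, but the interpolant itself is this standard object.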

35 pages, 5897 KB  
Article
An Extrinsic Enriched Finite Element Method Based on RBFs for the Helmholtz Equation
by Qingliang Liu, Zhihong Zou, Yingbin Chai, Wei Li and Wei Chu
Mathematics 2026, 14(1), 200; https://doi.org/10.3390/math14010200 - 5 Jan 2026
Viewed by 307
Abstract
The traditional finite element method (FEM) usually exhibits significant numerical dispersion error for solving the Helmholtz equation in relatively high-frequency range, resulting in insufficiently accurate solutions. To address this problem, this paper proposes a novel enriched finite element method (EFEM) based on radial basis functions (RBFs) which are frequently used in meshless numerical techniques. In the proposed method, the partition of unity (PU) framework is retained, and nodal interpolation functions are formed using the RBFs. Furthermore, the linear dependence (LD) problem commonly encountered in many of the PU-based methods using polynomial basis functions (PBFs) is effectively avoided by using the present RBFs. To enrich the approximation space generated by the RBFs, the PBFs are introduced to construct the local enrichment functions. Several typical numerical experiments are conducted in this work. The results indicate that the proposed method can significantly reduce the dispersion error and yield accurate solutions even for relatively high-frequency Helmholtz problems. More importantly, the proposed method can be directly implemented with standard quadrilateral meshes as in FEM. Therefore, the proposed method represents a promising numerical scheme for solving relatively high-frequency Helmholtz problems. Full article
(This article belongs to the Special Issue Advances in Numerical Analysis of Partial Differential Equations)
Show Figures

Figure 1

23 pages, 2512 KB  
Article
Thermal Data Optimization Through Uncertainty Reduction in Fatigue Limits Estimation: A TCM–ANN Framework for C45 Steel
by Luca Corsaro, Mohsen Dehghanpour Abyaneh, Mohammad Sadegh Javadi, Francesca Curà and Raffaella Sesana
Metals 2026, 16(1), 42; https://doi.org/10.3390/met16010042 - 29 Dec 2025
Viewed by 238
Abstract
The combination of Passive Thermography and machine learning in materials science and engineering allows rapid progress in advanced fatigue analysis. Focusing on mechanical aspects, the combination of these approaches is capable of interpolating the fatigue resistance in diverse conditions with minimal data, compared to the classical solution, in which analyses are conducted using statistical processes such as the Staircase Method. Even though the thermal increment and thermal area are crucial parameters for the fatigue limit analysis, the implementation of machine-learning interpolation improves data consistency and reduces variability in the fatigue limit estimation through Type-A repeatability uncertainty reduction. The two-layer artificial neural network has two advantages: first, it does not assume any predefined functional form; second, it maintains the inherent non-linear features of the data. The proposed approach was validated for a C45 steel, with two different experimental campaigns conducted using a resonant machine. Finally, the fatigue limit was analysed by means of an interpolation-assisted Two-Curve Method, starting from the classical thermal data evolution properly optimized with a machine-learning approach, achieving a more precise estimate of the fatigue limit. Full article
Show Figures

Figure 1

26 pages, 3111 KB  
Article
Elevation-Dependent Glacier Albedo Modelling Using Machine Learning and a Multi-Algorithm Satellite Approach in Svalbard
by Dominik Cyran and Dariusz Ignatiuk
Remote Sens. 2026, 18(1), 87; https://doi.org/10.3390/rs18010087 - 26 Dec 2025
Viewed by 563
Abstract
Glacier surface albedo controls solar energy absorption and Arctic mass balance, yet comprehensive modelling approaches remain limited. This study develops and validates multiple modelling frameworks for glacier albedo prediction using automatic weather station (AWS) data from Hansbreen and Werenskioldbreen in southern Svalbard during the 2011 ablation season. We compared three point-based approaches across elevation zones. At lower elevations (190 m), linear regression models emphasising snowfall probability or temperature controls achieved excellent performance (R² = 0.84–0.86), with snowfall probability contributing 65% and daily positive temperature contributing 86.3% feature importance. At higher elevations (420 m) where snow persists, neural networks proved superior (R² = 0.65), with positive degree days (72.5% importance) driving albedo evolution in snow-dominated environments. Spatial modelling extended point predictions across glacier surfaces using elevation-dependent probability calculations. Validation with Landsat 7 imagery and multi-algorithm comparison (n = 5) revealed that while absolute albedo values varied by 12% (0.54–0.60), temporal dynamics showed remarkable consistency (27.8–35.2% seasonal decline). Point-to-pixel validation achieved excellent agreement (mean absolute difference = 0.03 ± 0.02, 5.3% relative error). Spatial validation across 173,133 pixel comparisons demonstrated good agreement (r = 0.62, R² = 0.40, RMSE = 0.15), with an accuracy of reproducing temporal evolution within 0.001–0.021 error. These findings demonstrate that optimal glacier albedo modelling requires elevation-dependent approaches combining physically based interpolation with machine learning, and that temporal pattern reproduction is more reliably validated than absolute values. The frameworks provide tools for understanding albedo-climate feedback and improving mass balance projections in response to Arctic warming. Full article
(This article belongs to the Special Issue New Insights in Remote Sensing of Snow and Glaciers)
Show Figures

Figure 1
