Search Results (59)

Search Parameters:
Keywords = constant pairing computations

15 pages, 6132 KB  
Article
AI-Guided Binding Mechanisms and Molecular Dynamics for MERS-CoV
by Pradyumna Kumar, Lingtao Chen, Rachel Yuanbao Chen, Yin Chen, Seyedamin Pouriyeh, Progyateg Chakma, Abdur Rahman Mohd Abul Basher and Yixin Xie
Int. J. Mol. Sci. 2026, 27(4), 1989; https://doi.org/10.3390/ijms27041989 - 19 Feb 2026
Viewed by 397
Abstract
The MERS-CoV (Middle East respiratory syndrome coronavirus) is a zoonotic virus with a high mortality rate and a lack of antiviral drugs, underscoring the need for effective therapeutic methods. Viral entry depends on interactions between viral surface proteins and human receptors, with Dipeptidyl Peptidase-4 (DPP4), a transmembrane glycoprotein, acting as the receptor for MERS-CoV. We employed Molecular Dynamics (MD) Simulations to identify critical interface residues under a high-performance computing (HPC) workflow for accelerated results. Target residue pairs were identified through analysis of salt bridge and hydrogen bond occupancy. The stability of these residues was confirmed through three independent MD Simulations at human body temperature and constant pressure. Additionally, binding affinity predictions were calculated to determine the interaction strength between the virus and human receptors. Applying the scientific threshold criteria, we narrowed our results to seven key interaction pairs; two of the identified pairs (Asp510-Arg317, and Arg511-Asp393) are consistent with findings published in previous research studies, and five novel interactions are proposed for future experimental studies with our active collaborators in Pharmacology. The results provide a molecular basis for targeted mutation-based experiments and support the rational design of structure-based inhibitors aimed at disrupting the MERS-CoV-DPP4 complex, thereby facilitating the translation of computational findings into antiviral drug discovery. Full article
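Occupancy analysis of this kind can be sketched in a few lines: the occupancy of a hydrogen bond or salt bridge is the fraction of trajectory frames in which a distance criterion holds. A minimal illustration with hypothetical per-frame distances and a common 3.5 Å cutoff (the paper's actual criteria and thresholds are not given here):

```python
def hbond_occupancy(distances, cutoff=3.5):
    """Fraction of trajectory frames in which a donor-acceptor
    distance (angstroms) falls below the cutoff."""
    hits = sum(1 for d in distances if d < cutoff)
    return hits / len(distances)

# Hypothetical per-frame distances (angstroms) for one residue pair.
frames = [2.9, 3.1, 3.8, 2.8, 3.4, 4.2, 3.0, 2.7]
occ = hbond_occupancy(frames)
print(f"occupancy = {occ:.2f}")  # occupancy = 0.75
```

In practice the same fraction would be computed per residue pair across each of the three independent runs, and pairs above a chosen occupancy threshold retained.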

22 pages, 2365 KB  
Article
Greedy-VoI Time-Mesh Design for Rolling-Horizon EMS: Optimizing Block-Variable Granularity and Horizon Under Compute Budgets
by Gregorio Fernández, J. F. Sanz Osorio, Adrián Alarcón, Miguel Torres and Alfonso Calavia
Smart Cities 2026, 9(2), 30; https://doi.org/10.3390/smartcities9020030 - 10 Feb 2026
Viewed by 363
Abstract
Rolling-horizon energy management systems (EMSs) and model predictive control (MPC) for microgrids in smart cities face a fundamental trade-off: finer temporal discretization improves operational performance but rapidly increases the size of the optimization problem and execution time, jeopardizing real-time feasibility. Furthermore, in short-horizon operation, only the first control actions are implemented, while long-horizon decisions primarily guide feasibility and constraints. This paper proposes a computation-aware temporal mesh design layer that jointly selects a variable granularity of blocks and an optimization horizon, explicitly bounded by market-aligned settlement steps and per-cycle computation budgets. Candidate configurations are represented as pairs ⟨B, H⟩, where B is a constant-step block programme, and H is the optimization horizon, and they are uniquely tracked through an auditable mesh signature. The method first evaluates a predefined, market-consistent set of solutions ⟨B, H⟩ to establish reproducible cost and execution-time benchmarks, then applies a greedy value-of-information (Greedy-VoI) search that generates valid neighbouring meshes through local refinement, thickening, and resolution reallocation without violating the basic requirements that every solution must meet. All candidates are evaluated using the same microgrid use case and the same comparative KPIs, enabling the systematic identification of near-optimal mesh–horizon designs for practical EMS implementation. Full article
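The greedy mesh search can be sketched as hill-climbing over ⟨B, H⟩ candidates under a compute budget. Everything below is hypothetical: solve_cost and solve_time are stand-in proxies for the paper's KPIs and solver benchmarks, and the neighbour moves only loosely mirror the refinement, thickening, and reallocation operators:

```python
def solve_cost(block_min, horizon_h):
    # Hypothetical KPI: finer blocks and longer horizons lower operating cost.
    return block_min / 6 + 50 / horizon_h

def solve_time(block_min, horizon_h):
    # Hypothetical per-cycle solver time (seconds): grows with mesh size.
    return 0.6 * horizon_h / block_min

def greedy_voi(start, budget_s):
    """Greedy value-of-information search over (block-minutes, horizon-hours)
    meshes: move to the best feasible neighbour until none improves."""
    def neighbours(b, h):
        cand = [(b // 2, h), (b * 2, h), (b, h + 6), (b, max(6, h - 6))]
        return [(bb, hh) for bb, hh in cand if 5 <= bb <= 60]
    cur = start
    while True:
        feas = [n for n in neighbours(*cur) if solve_time(*n) <= budget_s]
        best = min(feas + [cur], key=lambda n: solve_cost(*n))
        if best == cur:
            return cur
        cur = best

mesh = greedy_voi((60, 24), budget_s=1.0)
print(mesh)  # refinement stops where the compute budget binds
```

With these proxies the search refines the block size from 60 to 15 minutes and then stops, because the next refinement would exceed the per-cycle budget: the budget, not the cost surface, determines the final mesh.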

28 pages, 4886 KB  
Review
Energy Storage Systems for AI Data Centers: A Review of Technologies, Characteristics, and Applicability
by Saifur Rahman and Tafsir Ahmed Khan
Energies 2026, 19(3), 634; https://doi.org/10.3390/en19030634 - 26 Jan 2026
Viewed by 1840
Abstract
The fastest growth in electricity demand in the industrialized world will likely come from the broad adoption of artificial intelligence (AI)—accelerated by the rise of generative AI models such as OpenAI’s ChatGPT. The global “data center arms race” is driving up power demand and grid stress, creating local and regional challenges: residents understand that the additional data-center-related electricity demand originates in faraway places, yet they must support the additional infrastructure without directly benefiting from it. Data center operators therefore have an incentive to manage fast, unpredictable power surges internally so that their loads appear to the electricity grid as a constant baseload. Such high-intensity, short-duration loads can be served by hybrid energy storage systems (HESSs) that combine multiple storage technologies operating across different timescales. This review presents an overview of energy storage technologies, their classifications, and recent performance data, with a focus on their applicability to AI-driven computing. Technical requirements of storage systems, such as fast response, long cycle life, low degradation under frequent micro-cycling, and high ramping capability—which are critical for sustainable and reliable data center operations—are discussed. Based on these requirements, this review identifies lithium titanate oxide (LTO) and lithium iron phosphate (LFP) batteries paired with supercapacitors, flywheels, or superconducting magnetic energy storage (SMES) as the most suitable HESS configurations for AI data centers. This review also proposes AI-specific evaluation criteria, defines key performance metrics, and provides semi-quantitative guidance on power–energy partitioning for HESSs in AI data centers. It concludes by identifying key challenges, AI-specific research gaps, and future directions for integrating HESSs with on-site generation to optimally manage the high variability in the data center load and build sustainable, low-carbon, and intelligent AI data centers. Full article
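The power–energy partitioning idea can be sketched with a moving-average split of a load trace: the slow component is assigned to energy-dense storage (e.g. LFP/LTO batteries) and the fast residual to power-dense storage (supercapacitors, flywheels). The trace and window below are hypothetical:

```python
def partition_load(load, window=5):
    """Split a load trace into a slow component (battery-class storage)
    and a fast residual (supercapacitor/flywheel class) via a trailing
    moving average -- a minimal sketch of power-energy partitioning."""
    slow = []
    for i in range(len(load)):
        lo = max(0, i - window + 1)
        slow.append(sum(load[lo:i + 1]) / (i + 1 - lo))
    fast = [l - s for l, s in zip(load, slow)]
    return slow, fast

# Hypothetical AI-training load trace (MW): steady base with GPU bursts.
load = [10, 10, 30, 32, 10, 10, 28, 10, 10, 10]
slow, fast = partition_load(load)
print(max(fast))
```

By construction the two components sum back to the original load, so the partition only reallocates power between storage classes; real designs would also respect each technology's ramp and energy limits.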
(This article belongs to the Special Issue Modeling and Optimization of Energy Storage in Power Systems)

24 pages, 742 KB  
Article
Hybrid Poly Commitments for Scalable Binius Zero-Knowledge Proofs in Federated Learning
by Hasina Andriambelo, Hery Zo Andriamanohisoa and Naghmeh Moradpoor
Electronics 2026, 15(3), 500; https://doi.org/10.3390/electronics15030500 - 23 Jan 2026
Viewed by 307
Abstract
Federated learning enables collaborative model training without sharing raw data, but practical deployments increasingly require verifiable guarantees that clients compute updates correctly. Zero-knowledge proofs can provide such guarantees, yet existing approaches face scalability limits due to the combined cost of polynomial commitments and fast Fourier transform (FFT) intensive verification. Pairing-based schemes offer compact proofs but incur high prover and verifier overhead, while hash-based constructions reduce algebraic cost at the expense of rapidly growing proof sizes. This paper proposes Hybrid-Commit, a polynomial commitment architecture for Binius zero-knowledge proofs that aligns cryptographic primitives with the algebraic structure of federated learning workloads. The scheme separates verification into additive and multiplicative phases: linear aggregation is handled using batched additive commitments optimized for binary fields, while non-linear constraints are verified via hash-based commitments over sparsely selected FFT domains. Proofs from multiple clients are combined through recursive aggregation while preserving non-interactivity. Experiments demonstrate scalability in prover time and proof size (near-constant prover time across 4–11 clients; 160 bytes per client representing 341× and 813× reductions vs. FRI-PCS and Orion), although verification time (762 ms per client) does not scale favorably, making the scheme suitable for bandwidth-constrained scenarios. The scheme achieves under 2% end-to-end training overhead with no impact on model accuracy, indicating that workload-aware commitment design can improve specific scalability dimensions of zero-knowledge verification in federated learning systems. Full article
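A toy version of a hash-based commitment over polynomial evaluations: a Merkle root binds the prover to a vector of evaluations, which can later be opened selectively. This is a generic sketch, not Binius, FRI-PCS, or the paper's Hybrid-Commit scheme:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Commit to a list of field-element evaluations via a Merkle tree
    (a toy stand-in for the hash-based commitments discussed above)."""
    level = [h(str(x).encode()) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Evaluations of a toy polynomial p(x) = x^2 + 1 over a small domain mod 97.
evals = [(x * x + 1) % 97 for x in range(8)]
root = merkle_root(evals)
print(root.hex()[:16])
```

The binding property shows up directly: changing any single evaluation changes the root, while recomputing the same vector reproduces it.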

30 pages, 454 KB  
Article
Bell–CHSH Under Setting-Dependent Selection: Sharp Total-Variation Bounds and an Experimental Audit Protocol
by Parker Emmerson (Yaohushuason)
Quantum Rep. 2026, 8(1), 8; https://doi.org/10.3390/quantum8010008 - 23 Jan 2026
Viewed by 758
Abstract
Bell–CHSH is an inequality about unconditional expectations: under measurement independence, Bell locality, and bounded outcomes, the CHSH value satisfies |S| ≤ 2. Experimental correlators, however, are often computed on an accepted subset of trials defined by detection logic, coincidence matching, quality cuts, and analysis windows. We model this by an acceptance probability γ(a, b, λ) ∈ [0, 1] and the resulting accepted hidden-variable law ν_ab obtained by weighting the measurement-independent prior ρ by γ and renormalizing. If ν_ab depends on the setting pair, then the four correlators entering CHSH are expectations under four different measures, and a Bell-local measurement-independent model can yield S_obs > 2 by selection alone. We quantify the required setting dependence in total variation (TV) distance. For any reference law μ we prove the sharp bound S_obs ≤ 2 + 2 Σ_{q∈Q} TV(ν_q, μ) for a CHSH quartet Q. Optimizing over μ yields the intrinsic dispersion bound S_obs ≤ 2 + 2Δ_Q and, in particular, S_obs ≤ min{4, 2 + 6 D_Q}, where D_Q is the quartet TV diameter. The constants are optimal. Consequently, reproducing Tsirelson’s value 2√2 within Bell-local measurement-independent models via setting-dependent acceptance requires Δ_Q ≥ √2 − 1 (hence, D_Q ≥ (√2 − 1)/3). We then propose a two-lane experimental audit protocol: (i) prior-relative fair-sampling diagnostics using tags recorded on all trials, and (ii) prior-free dispersion diagnostics using accepted-tag distributions across settings, with Δ_Q,X computable by linear programming on finite tag alphabets. Full article
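The total-variation machinery here is easy to reproduce numerically. A sketch with hypothetical accepted-trial laws for the four setting pairs, evaluating a bound of the form S_obs ≤ 2 + 2 Σ_q TV(ν_q, μ) against a chosen reference law μ:

```python
def tv(p, q):
    """Total-variation distance between two distributions on a shared
    finite alphabet: TV(p, q) = (1/2) * sum_x |p(x) - q(x)|."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0) - q.get(k, 0)) for k in keys)

# Hypothetical accepted hidden-variable laws nu_ab for the four CHSH
# setting pairs, over a two-point hidden-variable alphabet.
quartet = {
    ("a0", "b0"): {"l1": 0.5, "l2": 0.5},
    ("a0", "b1"): {"l1": 0.6, "l2": 0.4},
    ("a1", "b0"): {"l1": 0.5, "l2": 0.5},
    ("a1", "b1"): {"l1": 0.3, "l2": 0.7},
}
mu = {"l1": 0.5, "l2": 0.5}  # reference law (the measurement-independent prior)
bound = 2 + 2 * sum(tv(nu, mu) for nu in quartet.values())
print(round(bound, 3))  # 2.6
```

For this quartet the TV terms are 0, 0.1, 0, and 0.2, so selection alone could inflate the ceiling on S_obs to 2.6; optimizing μ (as the abstract describes) can only tighten this.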

25 pages, 911 KB  
Article
Constraint-Efficient Comparators via Weighted Accumulation
by Marc Guzmán-Albiol, Marta Bellés-Muñoz, Rafael Genés-Durán and Jose Luis Muñoz-Tapia
Mathematics 2025, 13(24), 3959; https://doi.org/10.3390/math13243959 - 12 Dec 2025
Viewed by 461
Abstract
This article presents an optimized method for verifying the comparison of two binary numbers using the rank-1 constraint system (R1CS) representation, a standard framework for verifiable computation systems. In particular, we analyze different strategies for implementing strict comparisons of the form t>K, where K is a known constant and t is an integer input to the comparison. We first analyze a lexicographic approach that, although conceptually straightforward, results in a large number of constraints due to its branching logic. To address this inefficiency, we introduce a weighted-accumulation method that computes an accumulator whose sign determines the comparison outcome. By assigning position-dependent weights to bit pairs and formulating the computation through degree-2 constraints, this method eliminates branching and significantly reduces the total number of constraints. In order to validate our designs, we implemented the described comparison algorithms in an R1CS compiler called circom, allowing us to generate and analyze the corresponding R1CS constraint systems in practice. Overall, the presented design not only ensures correctness but also demonstrates how careful exploitation of the R1CS structure can lead to efficient constraint settings. Full article
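The weighted-accumulation idea can be illustrated outside R1CS: summing position-weighted bit differences yields an accumulator equal to t − K, whose sign decides t > K without any branching. A plain-Python sketch (the actual circom circuit encodes this with degree-2 constraints over a finite field, which this does not reproduce):

```python
def bits(n, width):
    """Little-endian bit decomposition of n."""
    return [(n >> i) & 1 for i in range(width)]

def gt_accumulator(t, K, width=8):
    """Branch-free comparison t > K: accumulate position-weighted bit
    differences. The accumulator equals t - K, so its sign encodes the
    comparison outcome (the weighted-accumulation idea, simplified)."""
    tb, kb = bits(t, width), bits(K, width)
    acc = sum((1 << i) * (tb[i] - kb[i]) for i in range(width))
    return acc > 0

print(gt_accumulator(200, 57))   # True
print(gt_accumulator(57, 200))   # False
```

The point of the construction in a constraint system is that the single sign test replaces the lexicographic case analysis, which is what removes the branching-induced constraint blow-up.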
(This article belongs to the Special Issue Applied Cryptography and Information Security with Application)

21 pages, 1244 KB  
Article
An Analytically Derived Gauss–Legendre Quadrature for Axis-Aligned Ellipse–Ellipse Intersection
by Mohamad Shatnawi and Péter Földesi
Mathematics 2025, 13(23), 3814; https://doi.org/10.3390/math13233814 - 27 Nov 2025
Viewed by 518
Abstract
Accurate and efficient evaluation of the intersection area between two axis-aligned ellipses is essential in applications where the coordinate system or underlying geometry naturally imposes alignment. However, most existing numerical integration techniques are designed for arbitrarily oriented ellipses, and their generality typically requires adaptive refinement or solving higher-degree algebraic intersection formulations, leading to greater computational cost than necessary in the axis-aligned case. This study introduces two analytically derived, fixed-cost Gauss–Legendre quadrature formulations for computing the intersection area in the axis-aligned configuration. The first is a sine-mapped Gauss–Legendre quadrature, which applies a trigonometric transformation to improve conditioning near endpoint singularities while retaining constant-time evaluation. The second is an enhanced two-panel affine-normalized formulation, which splits the intersection domain into two sub-intervals to increase local accuracy while maintaining a fixed computational cost. Both methods are benchmarked against adaptive Simpson integration, polygonal discretization, and Monte Carlo sampling over 10,000 randomly generated ellipse pairs. The two-panel formulation achieves a mean relative error of 0.003% with runtimes more than twenty times faster than the adaptive reference and remains consistently more efficient than the polygonal and Monte Carlo approaches while exhibiting comparable or superior numerical behavior across all tested regimes. Full article
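The sine-mapping trick can be shown on the simplest case, the full ellipse area: substituting x = a·sin θ turns the integrand 2b·sqrt(1 − x²/a²), whose derivative blows up at the endpoints, into the smooth 2ab·cos²θ, which a fixed 5-node Gauss–Legendre rule already handles to roughly four digits. This is a sketch of the mapping idea only, not the paper's intersection-area formulation:

```python
import math

# 5-point Gauss-Legendre nodes and weights on [-1, 1].
GL5 = [(-0.9061798459386640, 0.2369268850561891),
       (-0.5384693101056831, 0.4786286704993665),
       ( 0.0,                0.5688888888888889),
       ( 0.5384693101056831, 0.4786286704993665),
       ( 0.9061798459386640, 0.2369268850561891)]

def ellipse_area_sine_mapped(a, b):
    """Area of the ellipse x^2/a^2 + y^2/b^2 = 1 via the mapping
    x = a*sin(theta), theta in [-pi/2, pi/2]: the integrand becomes
    2*a*b*cos(theta)^2, smooth at the endpoints, so a fixed-node
    Gauss-Legendre rule converges fast (exact answer: pi*a*b)."""
    half = math.pi / 2  # maps the standard node u to theta = half * u
    return sum(w * half * 2 * a * b * math.cos(half * u) ** 2
               for u, w in GL5)

print(ellipse_area_sine_mapped(2.0, 1.0))  # close to 2*pi
```

Applying the same 5-node rule directly in x to 2b·sqrt(1 − x²/a²) converges much more slowly, because Gauss–Legendre assumes smoothness that the square-root endpoint behaviour violates; the mapping restores it at no extra cost per evaluation.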

15 pages, 2750 KB  
Article
Study on the Spreading Dynamics of Droplet Pairs near Walls
by Jing Li, Junhu Yang, Xiaobin Liu and Lei Tian
Fluids 2025, 10(10), 252; https://doi.org/10.3390/fluids10100252 - 26 Sep 2025
Viewed by 636
Abstract
This study develops an incompressible two-phase flow solver based on the open-source OpenFOAM platform, employing the volume-of-fluid (VOF) method to track the gas–liquid interface and utilizing the MULES algorithm to suppress numerical diffusion. This study provides a comprehensive investigation of the spreading dynamics of droplet pairs near walls, along with the presentation of a corresponding mathematical model. The numerical model is validated through a two-dimensional axisymmetric computational domain, demonstrating grid independence and confirming its reliability by comparing simulation results with experimental data in predicting droplet collision, spreading, and deformation dynamics. The study particularly investigates the influence of surface wettability on droplet impact dynamics, revealing that increased contact angle enhances droplet retraction height, leading to complete rebound on superhydrophobic surfaces. Finally, a mathematical model is presented to describe the relationship between spreading length, contact angle, and Weber number, and the study proves its accuracy. Analysis under logarithmic coordinates reveals that the contact angle exerts a significant influence on spreading length, while a constant contact angle condition yields a slight monotonic increase in spreading length with the Weber number. These findings provide an effective numerical and mathematical tool for analyzing the spreading dynamics of droplet pairs. Full article

19 pages, 3294 KB  
Article
Rotation- and Scale-Invariant Object Detection Using Compressed 2D Voting with Sparse Point-Pair Screening
by Chenbo Shi, Yue Yu, Gongwei Zhang, Shaojia Yan, Changsheng Zhu, Yanhong Cheng and Chun Zhang
Electronics 2025, 14(15), 3046; https://doi.org/10.3390/electronics14153046 - 30 Jul 2025
Viewed by 842
Abstract
The Generalized Hough Transform (GHT) is a powerful method for rigid shape detection under rotation, scaling, translation, and partial occlusion conditions, but its four-dimensional accumulator incurs prohibitive computational and memory demands that prevent real-time deployment. To address this, we propose a framework that compresses the 4-D search space into a concise 2-D voting scheme by combining two-level sparse point-pair screening with an accelerated lookup. In the offline stage, template edges are extracted using an adaptive Canny operator with Otsu-determined thresholds, and gradient-direction differences for all point pairs are quantized to retain only those in the dominant bin, yielding rotation- and scale-invariant descriptors that populate a compact 2-D reference table. During the online stage, an adaptive grid selects only the highest-gradient pixels per cell as a base points, while a precomputed gradient-direction bucket table enables constant-time retrieval of compatible subpoints. Each valid base–subpoint pair is mapped to indices in the lookup table, and “fuzzy” votes are cast over a 3 × 3 neighborhood in the 2-D accumulator, whose global peak determines the object center. Evaluation on 200 real industrial parts—augmented to 1000 samples with noise, blur, occlusion, and nonlinear illumination—demonstrates that our method maintains over 90% localization accuracy, matches the classical GHT, and achieves a ten-fold speedup, outperforming IGHT and LI-GHT variants by 2–3×, thereby delivering a robust, real-time solution for industrial rigid object localization. Full article
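The 2-D fuzzy-voting stage can be sketched on its own: each candidate centre casts votes over a 3×3 neighbourhood of the accumulator, and the global peak is taken as the object centre. The candidate list below is hypothetical, and the descriptor and lookup-table stages are omitted entirely:

```python
from collections import Counter

def fuzzy_vote(points, shape):
    """Cast 'fuzzy' votes over a 3x3 neighbourhood in a 2-D accumulator
    and return the peak cell (only the voting stage of the pipeline)."""
    acc = Counter()
    for (r, c) in points:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < shape[0] and 0 <= cc < shape[1]:
                    acc[(rr, cc)] += 1
    return max(acc, key=acc.get)

# Hypothetical centre candidates from base-subpoint pairs: four votes
# scattered around the true centre (5, 5), plus one outlier.
votes = [(4, 4), (4, 6), (6, 4), (6, 6), (9, 2)]
print(fuzzy_vote(votes, (12, 12)))  # (5, 5)
```

The fuzzy neighbourhood is what makes the slightly scattered candidates reinforce a single cell: here the four inliers each miss (5, 5) by a pixel, yet (5, 5) is the only cell all four neighbourhoods cover.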

29 pages, 1997 KB  
Article
An Efficient Sparse Twin Parametric Insensitive Support Vector Regression Model
by Shuanghong Qu, Yushan Guo, Renato De Leone, Min Huang and Pu Li
Mathematics 2025, 13(13), 2206; https://doi.org/10.3390/math13132206 - 6 Jul 2025
Cited by 1 | Viewed by 766
Abstract
This paper proposes a novel sparse twin parametric insensitive support vector regression (STPISVR) model, designed to enhance sparsity and improve generalization performance. Similar to twin parametric insensitive support vector regression (TPISVR), STPISVR constructs a pair of nonparallel parametric insensitive bound functions to indirectly determine the regression function. The optimization problems are reformulated as two sparse linear programming problems (LPPs), rather than traditional quadratic programming problems (QPPs). The two LPPs are originally derived from initial L1-norm regularization terms imposed on their respective dual variables, which are simplified to constants via the Karush–Kuhn–Tucker (KKT) conditions and consequently disappear. This simplification reduces model complexity, while the constraints constructed through the KKT conditions—particularly their geometric properties—effectively ensure sparsity. Moreover, a two-stage hybrid tuning strategy—combining grid search for coarse parameter space exploration and Bayesian optimization for fine-grained convergence—is proposed to precisely select the optimal parameters, reducing tuning time and improving accuracy compared to a single-method strategy. Experimental results on synthetic and benchmark datasets demonstrate that STPISVR significantly reduces the number of support vectors (SVs), thereby improving prediction speed and achieving a favorable trade-off among prediction accuracy, sparsity, and computational efficiency. Overall, STPISVR enhances generalization ability, promotes sparsity, and improves prediction efficiency, making it a competitive tool for regression tasks, especially in handling complex data structures. Full article
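The two-stage tuning strategy can be sketched as a coarse grid pass followed by local refinement; the refinement loop below is a simple halving-step stand-in for Bayesian optimization, and the 1-D objective is a hypothetical surrogate (assumes a sorted, uniformly spaced grid and a unimodal objective):

```python
def two_stage_tune(objective, grid, refine_steps=20):
    """Stage 1: coarse grid search. Stage 2: local refinement around the
    grid winner with a halving step (a simple stand-in for the Bayesian
    optimization stage described above)."""
    best = min(grid, key=objective)
    step = (grid[1] - grid[0]) / 2
    for _ in range(refine_steps):
        best = min([best - step, best, best + step], key=objective)
        step /= 2
    return best

# Hypothetical 1-D hyperparameter objective with its minimum at C = 3.7,
# deliberately placed off the grid.
f = lambda c: (c - 3.7) ** 2 + 1.0
grid = [1.0, 2.0, 3.0, 4.0, 5.0]
c_star = two_stage_tune(f, grid)
print(round(c_star, 3))  # 3.7
```

The division of labour mirrors the abstract's strategy: the grid stage cannot resolve below its spacing, while the refinement stage converges geometrically once it is started near the optimum.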

18 pages, 1902 KB  
Article
A Discrete Fracture Network Model for Coupled Variable-Density Flow and Dissolution with Dynamic Fracture Aperture Evolution
by Anis Younes, Husam Musa Baalousha, Lamia Guellouz and Marwan Fahs
Water 2025, 17(13), 1904; https://doi.org/10.3390/w17131904 - 26 Jun 2025
Viewed by 977
Abstract
Fluid flow and mass transfer processes in some fractured aquifers are negligible in the low-permeability rock matrix and occur mainly in the fracture network. In this work, we consider coupled variable-density flow (VDF) and mass transport with dissolution in discrete fracture networks (DFNs). These three processes are ruled by nonlinear and strongly coupled partial differential equations (PDEs) due to the (i) density variation induced by concentration and (ii) fracture aperture evolution induced by dissolution. In this study, we develop an efficient model to solve the resulting system of nonlinear PDEs. The new model leverages the method of lines (MOL) to combine the robust finite volume (FV) method for spatial discretization with a high-order method for temporal discretization. A suitable upwind scheme is used on the fracture network to eliminate spurious oscillations in the advection-dominated case. The time step size and the order of the time integration are adapted during simulations to reduce the computational burden while preserving accuracy. The developed VDF-DFN model is validated by simulating saltwater intrusion and dissolution in a coastal fractured aquifer. The results of the VDF-DFN model, in the case of a dense fracture network, show excellent agreement with the Henry semi-analytical solution for saltwater intrusion and dissolution in a coastal aquifer. The VDF-DFN model is then employed to investigate coupled flow, mass transfer and dissolution for an injection/extraction well pair problem. This test problem enables an exploration of how dissolution influences the evolution of the fracture aperture, considering both constant and variable dissolution rates. Full article
(This article belongs to the Section Hydrology)

22 pages, 7459 KB  
Article
Robust Line Feature Matching via Point–Line Invariants and Geometric Constraints
by Chenyang Zhang, Yunfei Xiang, Qiyuan Wang, Shuo Gu, Jianghua Deng and Rongchun Zhang
Sensors 2025, 25(10), 2980; https://doi.org/10.3390/s25102980 - 8 May 2025
Cited by 4 | Viewed by 1700
Abstract
Line feature matching is a crucial aspect of computer vision and image processing tasks, attracting significant research attention. Most line matching algorithms predominantly rely on local feature descriptors or deep learning modules, which often suffer from low robustness and poor generalization. In response, this paper presents a novel line feature matching approach grounded in point–line invariants through spatial invariant relationships. By leveraging a robust point feature matching algorithm, an initial set of point feature matches is acquired. Subsequently, the line feature supporting area is partitioned, and a constant ratio invariant is formulated based on the distances from point to line features within corresponding neighborhood domains. Additionally, a direction vector invariant is also introduced, jointly constructing a dual invariant for line matching. An initial matching matrix and line feature match pairs are derived using this dual invariant. Subsequent geometric constraints within line feature matches eliminate residual outliers. Comprehensive evaluations under diverse imaging conditions, along with comparisons to several state-of-the-art algorithms, demonstrate that our proposal achieved remarkable performance in terms of both accuracy and robustness. Our implementation code will be publicly released upon the acceptance of this paper. Full article
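The constant-ratio invariant is easy to verify numerically: the ratio of two point-to-line distances is unchanged by any similarity transform (rotation, uniform scale, translation), since every distance scales by the same factor. A toy check with arbitrary points (not the paper's neighbourhood construction):

```python
import math

def point_line_dist(p, a, b):
    """Distance from point p to the line through a and b:
    |cross(b - a, p - a)| / |b - a|."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (py - ay) - (by - ay) * (px - ax))
    return num / math.hypot(bx - ax, by - ay)

def similarity(p, s, theta, t):
    """Apply uniform scale s, rotation theta, translation t to point p."""
    x, y = p
    c, k = math.cos(theta), math.sin(theta)
    return (s * (c * x - k * y) + t[0], s * (k * x + c * y) + t[1])

a, b = (0.0, 0.0), (4.0, 1.0)        # a line segment
p1, p2 = (1.0, 3.0), (2.0, -1.5)     # two points off the line
r0 = point_line_dist(p1, a, b) / point_line_dist(p2, a, b)
T = lambda p: similarity(p, 1.7, 0.6, (3.0, -2.0))
r1 = point_line_dist(T(p1), T(a), T(b)) / point_line_dist(T(p2), T(a), T(b))
print(abs(r0 - r1) < 1e-9)  # True: the ratio is similarity-invariant
```

This is why a distance-ratio descriptor needs no appearance information: any candidate line match whose supporting points violate the ratio can be rejected on geometry alone.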
(This article belongs to the Special Issue Multi-Modal Data Sensing and Processing)

15 pages, 400 KB  
Article
On High-Order Runge–Kutta Pairs for Linear Inhomogeneous Problems
by Houssem Jerbi, Sanaa Maali, Sondess Ben Aoun, Arwa N. Aledaily, Vijipriya Jeyamani, Theodore E. Simos and Charalampos Tsitouras
Axioms 2025, 14(4), 245; https://doi.org/10.3390/axioms14040245 - 24 Mar 2025
Cited by 1 | Viewed by 1159
Abstract
This paper introduces a novel Runge–Kutta (RK) pair of orders 8(6) designed specifically for solving linear inhomogeneous initial value problems (IVPs) with constant coefficients. The proposed method requires only 11 stages per iteration, a significant improvement over conventional RK pairs of orders 8(7), which typically demand 13 stages. The reduction in stages is achieved by leveraging a smaller set of order conditions tailored to linear inhomogeneous problems, where traditional simplification techniques are not applicable. To address the complexity of deriving such methods, the authors employ the Differential Evolution algorithm, a global optimization technique, to solve the resulting system of equations. The new RK pair, named NEW8(6)Lin, is tested on several benchmark problems, including scalar, linear inhomogeneous, and larger systems, demonstrating a superior performance in terms of accuracy and computational efficiency. The method’s high phase-lag accuracy and efficiency make it particularly suitable for problems requiring high precision over extended intervals. The coefficients of the method are provided with high precision, enabling direct implementation in computational environments like Mathematica. The results highlight the method’s potential as a robust tool for solving linear inhomogeneous IVPs, offering a balance between computational cost and accuracy. This work contributes to the ongoing development of specialized numerical methods for differential equations, particularly in scenarios where traditional approaches struggle with efficiency or stability. Full article
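For context, a fixed-step Runge–Kutta integrator for such problems can be sketched with the classical 4th-order method on a linear inhomogeneous test equation with a known exact solution. This stand-in does not reproduce the paper's 11-stage NEW8(6)Lin pair or its embedded error control:

```python
import math

def rk4(f, t0, y0, t1, n):
    """Classical 4-stage RK4 marcher with n fixed steps (a 4th-order
    stand-in; the paper's 8(6) pair uses 11 stages per step)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Linear inhomogeneous IVP with constant coefficients: y' = -y + 1,
# y(0) = 0, exact solution y(t) = 1 - exp(-t).
f = lambda t, y: -y + 1.0
approx = rk4(f, 0.0, 0.0, 2.0, 100)
exact = 1.0 - math.exp(-2.0)
print(abs(approx - exact) < 1e-7)  # True
```

An embedded pair such as the paper's 8(6) computes two approximations of different orders from shared stages, using their difference to control the step size, which a plain fixed-step method like this cannot do.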
(This article belongs to the Special Issue The Numerical Analysis and Its Application)

13 pages, 4512 KB  
Article
The Nasal Septal Swell Body May Have a Regulatory Role in Nasal Airway Passage That Depends on the Degree of Septal Deviation
by Tomohisa Hirai, Takehiro Sera, Sachio Takeno, Yukako Okamoto, Tomohiro Kawasumi, Chie Ishikawa, Takashi Oda, Manabu Nishida, Yuichiro Horibe, Takashi Ishino, Takao Hamamoto, Tsutomu Ueda and Nobuhisa Ishikawa
J. Otorhinolaryngol. Hear. Balance Med. 2025, 6(1), 5; https://doi.org/10.3390/ohbm6010005 - 4 Mar 2025
Viewed by 3672
Abstract
Background: The nasal septal swell body (NSB) is a thickened area of the nasal septum with erectile tissues, located above the nasal floor. We hypothesized that the presence of the NSB in this space exerts favorable effects to generate laminar nasal airflow by developing its morphology as adjusted to nasal septal deviation (NSD). Patients and Methods: We objectively measured the NSB morphology in 152 patients by computed tomography (CT) and assessed its relationship with the width of the inferior turbinate (IT), the severity of NSD, and the patency of the nasal airflow passage (NAP). Results: In the patients with moderate or severe NSD, the mean widths of the NSB, IT, and NAP were significantly narrower at the convex side compared to the paired concave side, with the degree being more prominent in the severe-NSD group. A positive correlation was observed between the degree of the NSD angles and the difference in the widths of the NSB (r = 0.805) and IT (r = 0.609). Conclusions: These results imply novel roles of the NSB in the maintenance of physiological nasal airflow to generate a laminar airflow from the nostrils toward the middle nasal meatus at a constant rate. Full article

25 pages, 3204 KB  
Article
Fractional Partial Differential Equation Modeling for Solar Cell Charge Dynamics
by Waleed Mohammed Abdelfattah, Ola Ragb, Mohamed Salah, Mohamed S. Matbuly and Mokhtar Mohamed
Fractal Fract. 2024, 8(12), 729; https://doi.org/10.3390/fractalfract8120729 - 12 Dec 2024
Cited by 1 | Viewed by 1629
Abstract
This paper presents a groundbreaking numerical approach, the fractional differential quadrature method (FDQM), to simulate the complex dynamics of organic polymer solar cells. The method, which leverages polynomial-based differential quadrature and Cardinal sine functions coupled with the Caputo-type fractional derivative, offers a significant improvement in accuracy and efficiency over traditional methods. By employing a block-marching technique, we effectively address the time-dependent nature of the governing equations. The efficacy of the proposed method is validated through rigorous numerical simulations and comparisons with existing analytical and numerical solutions. Each scheme’s computational characteristics are tailored to achieve high accuracy, ensuring an error margin on the order of 10⁻⁸ or less. Additionally, a comprehensive parametric study is conducted to investigate the impact of key parameters on device performance. These parameters include supporting conditions, time evolution, carrier mobilities, charge carrier densities, geminate pair distances, recombination rate constants, and generation efficiency. The findings of this research offer valuable insights for optimizing and enhancing the performance of organic polymer solar cell devices. Full article
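The Caputo-type derivative underlying the FDQM can be approximated with the classical L1 finite-difference scheme. The sketch below is generic (not the paper's quadrature-based discretization) and checks against the known Caputo derivative of t², which is Γ(3)/Γ(3 − α) · t^(2−α):

```python
import math

def caputo_l1(f, t, alpha, n=2000):
    """Caputo fractional derivative of order alpha in (0, 1) at time t,
    via the classical L1 scheme: a weighted sum of first differences of
    f on a uniform grid, with weights b_k = (k+1)^(1-a) - k^(1-a)."""
    dt = t / n
    coef = dt ** (-alpha) / math.gamma(2 - alpha)
    total = 0.0
    for k in range(n):
        b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
        total += b * (f(t - k * dt) - f(t - (k + 1) * dt))
    return coef * total

t, alpha = 1.0, 0.5
approx = caputo_l1(lambda s: s * s, t, alpha)
exact = math.gamma(3) / math.gamma(3 - alpha) * t ** (2 - alpha)
print(abs(approx - exact) < 1e-3)  # True
```

The L1 scheme converges at order 2 − α, so its cost grows with the number of grid points retained in the memory sum; quadrature-based discretizations like the FDQM trade this stencil for global basis functions to reach far smaller errors with fewer points.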
(This article belongs to the Special Issue Fractional Mathematical Modelling: Theory, Methods and Applications)
