Search Results (162)

Search Parameters:
Keywords = complete monotonicity

13 pages, 1671 KB  
Article
Experimental Study of Hydrogen Combustion and Emissions for a Self-Developed Microturbine
by István Péter Kondor
Energies 2026, 19(3), 577; https://doi.org/10.3390/en19030577 - 23 Jan 2026
Viewed by 123
Abstract
This paper presents an experimental investigation of hydrogen enrichment effects on combustion behavior and exhaust emissions in a self-developed micro gas turbine fueled with a propane–butane mixture. Hydrogen was blended with the base fuel in volume fractions of 0–30%, and combustion was examined under unloaded operating conditions at three global equivalence ratios (ϕ = 0.7, 1.1, and 1.3). The global equivalence ratio (ϕ) is defined as the ratio of the actual fuel–air ratio to the corresponding stoichiometric fuel–air ratio, with ϕ < 1 representing lean, ϕ = 1 stoichiometric, and ϕ > 1 fuel-rich operating conditions. The micro gas turbine is based on an automotive turbocharger coupled with a custom-designed counterflow combustion chamber developed specifically for alternative gaseous fuel research. Exhaust gas emissions of CO, CO2, and NOx were measured using a laboratory-grade FTIR analyzer (Horiba MEXA FTIR, Horiba Ltd., Kyoto, Japan), while combustion chamber temperature was monitored with thermocouples. The results show that hydrogen addition significantly influences flame stability, combustion temperature, and emission characteristics. Increasing the hydrogen fraction led to a pronounced reduction in CO emissions across all equivalence ratios, indicating enhanced oxidation kinetics and improved combustion completeness. CO2 concentrations decreased monotonically with hydrogen enrichment due to the reduced carbon content of the blended fuel and the shift of combustion products toward higher H2O fractions. In contrast, NOx emissions increased with increasing hydrogen content for all tested equivalence ratios, which is attributed to elevated local flame temperatures, enhanced reaction rates, and the formation of locally near-stoichiometric zones in the compact combustor. A slight reduction in NOx at low hydrogen fractions was observed under near-stoichiometric conditions, suggesting a temporary shift toward a more distributed combustion regime. Overall, the findings demonstrate that hydrogen–propane–butane blends can be stably combusted in a micro gas turbine without major operational issues under unloaded conditions. While hydrogen addition offers clear benefits in terms of CO reduction and carbon-related emissions, effective NOx mitigation strategies will be essential for future high-hydrogen microturbine applications. Full article
(This article belongs to the Section A5: Hydrogen Energy)
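As context for the equivalence-ratio definition quoted in the abstract above, a minimal sketch of how ϕ is computed from fuel and air flows; the function name and the example numbers are illustrative assumptions, not values from the study.

def equivalence_ratio(fuel_flow, air_flow, stoich_fuel_air_ratio):
    # Global equivalence ratio: actual fuel-air ratio over the stoichiometric one
    return (fuel_flow / air_flow) / stoich_fuel_air_ratio

# phi < 1 lean, phi = 1 stoichiometric, phi > 1 fuel-rich (illustrative numbers only)
phi = equivalence_ratio(fuel_flow=0.8, air_flow=20.0, stoich_fuel_air_ratio=0.064)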

35 pages, 504 KB  
Article
Introducing a Resolvable Network-Based SAT Solver Using Monotone CNF–DNF Dualization and Resolution
by Gábor Kusper and Benedek Nagy
Mathematics 2026, 14(2), 317; https://doi.org/10.3390/math14020317 - 16 Jan 2026
Viewed by 340
Abstract
This paper is a theoretical contribution that introduces a new reasoning framework for SAT solving based on resolvable networks (RNs). RNs provide a graph-based representation of propositional satisfiability in which clauses are interpreted as directed reaches between disjoint subsets of Boolean variables (nodes). Building on this framework, we introduce a novel RN-based SAT solver, called RN-Solver, which replaces local assignment-driven branching by global reasoning over token distributions. Token distributions, interpreted as truth assignments, are generated by monotone CNF–DNF dualization applied to white (all-positive) clauses. New white clauses are derived via resolution along private-pivot chains, and the solver’s progression is governed by a taxonomy of token distributions (black-blocked, terminal, active, resolved, and non-resolved). The main results establish the soundness and completeness of the RN-Solver. Experimentally, the solver performs very well on pigeonhole formulas, where the separation between white and black clauses enables effective global reasoning. In contrast, its current implementation performs poorly on random 3-SAT instances, highlighting both practical limitations and significant opportunities for optimization and theoretical refinement. The presented RN-Solver implementation is a proof-of-concept which validates the underlying theory rather than a state-of-the-art competitive solver. One promising direction is the generalization of strongly connected components from directed graphs to resolvable networks. Finally, the token-based perspective naturally suggests a connection to token-superposition Petri net models. Full article
(This article belongs to the Special Issue Graph Theory and Applications, 3rd Edition)
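The abstract above does not spell out the dualization routine itself; as background, the DNF dual of a monotone (all-positive) CNF is given by the minimal hitting sets of its clause family. A brute-force sketch of that standard fact, exponential and purely illustrative:

from itertools import combinations

def monotone_cnf_to_dnf(clauses):
    # Each returned set is a prime implicant of the dual DNF:
    # a minimal set of variables intersecting every clause.
    variables = sorted(set().union(*clauses))
    minimal_hitting_sets = []
    for size in range(1, len(variables) + 1):
        for candidate in combinations(variables, size):
            cand = set(candidate)
            if all(cand & clause for clause in clauses):
                # enumeration is by increasing size, so any previously found subset means non-minimal
                if not any(h <= cand for h in minimal_hitting_sets):
                    minimal_hitting_sets.append(cand)
    return minimal_hitting_sets

# (a or b) and (b or c)  dualizes to  b or (a and c)
print(monotone_cnf_to_dnf([{"a", "b"}, {"b", "c"}]))
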
36 pages, 2297 KB  
Article
Decarbonizing Coastal Shipping: Voyage-Level CO2 Intensity, Fuel Switching and Carbon Pricing in a Distribution-Free Causal Framework
by Murat Yildiz, Abdurrahim Akgundogdu and Guldem Elmas
Sustainability 2026, 18(2), 723; https://doi.org/10.3390/su18020723 - 10 Jan 2026
Viewed by 205
Abstract
Coastal shipping plays a critical role in meeting maritime decarbonization targets under the International Maritime Organization’s (IMO) Carbon Intensity Indicator (CII) and the European Union Emissions Trading System (EU ETS); however, operators currently lack robust tools to forecast route-specific carbon intensity and evaluate the causal benefits of fuel switching. This study developed a distribution-free causal forecasting framework for voyage-level Carbon Dioxide (CO2) intensity using an enriched panel of 1440 real-world voyages across four Nigerian coastal routes (2022–2024). We employed a physics-informed monotonic Light Gradient Boosting Machine (LightGBM) model trained under a strict leave-one-route-out (LORO) protocol, integrated with split-conformal prediction for uncertainty quantification and Causal Forests for estimating heterogeneous treatment effects. The model predicted emission intensity on completely unseen corridors with a Mean Absolute Error (MAE) of 40.7 kg CO2/nm, while 90% conformal prediction intervals achieved 100% empirical coverage. While the global average effect of switching from heavy fuel oil to diesel was negligible (≈−0.07 kg CO2/nm), Causal Forests revealed significant heterogeneity, with effects ranging from −74 g to +29 g CO2/nm depending on route conditions. Economically, targeted diesel use becomes viable only when carbon prices exceed ~100 USD/tCO2. These findings demonstrate that effective coastal decarbonization requires moving beyond static baselines to uncertainty-aware planning and targeted, route-specific fuel strategies rather than uniform fleet-wide policies. Full article
(This article belongs to the Special Issue Sustainable Maritime Logistics and Low-Carbon Transportation)
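The split-conformal step mentioned above follows the standard recipe: hold out a calibration set, take a finite-sample-corrected quantile of its absolute residuals, and widen every point prediction by that amount. A minimal sketch under that reading; model, X_cal, and the other names are placeholders rather than objects from the paper.

import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_new, alpha=0.10):
    # Distribution-free intervals: a quantile of calibration residuals sets the half-width
    residuals = np.abs(y_cal - model.predict(X_cal))
    n = len(residuals)
    q = np.quantile(residuals, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    preds = model.predict(X_new)
    return preds - q, preds + q   # alpha = 0.10 targets 90% coverage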

18 pages, 816 KB  
Article
The Convergent Indian Buffet Process
by Ilsang Ohn
Mathematics 2025, 13(23), 3881; https://doi.org/10.3390/math13233881 - 3 Dec 2025
Viewed by 301
Abstract
We propose a new Bayesian nonparametric prior for latent feature models, called the Convergent Indian Buffet Process (CIBP). We show that under the CIBP, the number of latent features is distributed as a Poisson distribution, with the mean monotonically increasing but converging to a certain value as the number of objects goes to infinity. That is, the expected number of features is bounded above even when the number of objects goes to infinity, unlike the standard Indian Buffet Process, under which the expected number of features increases with the number of objects. We provide two alternative representations of the CIBP based on a hierarchical distribution and a completely random measure, which are of independent interest. The proposed CIBP is assessed on a high-dimensional sparse factor model. Full article
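For contrast with the boundedness property claimed above: under the standard Indian Buffet Process with mass parameter α, the expected number of distinct features after n objects is α times the n-th harmonic number, which grows without bound. The abstract does not give the CIBP's limiting value, so only this standard baseline is sketched.

def expected_ibp_features(n_objects, alpha=1.0):
    # Standard IBP: E[number of features] = alpha * H_n, diverging like alpha * log(n)
    return alpha * sum(1.0 / i for i in range(1, n_objects + 1))

# The CIBP instead keeps this expectation bounded as the number of objects grows
print([round(expected_ibp_features(n), 2) for n in (10, 100, 1000)])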

14 pages, 1008 KB  
Article
Extreme Events and Event Size Fluctuations in Resetting Random Walks on Networks
by Xiaohan Sun, Shaoxiang Zhu and Anlin Li
Entropy 2025, 27(12), 1215; https://doi.org/10.3390/e27121215 - 28 Nov 2025
Viewed by 354
Abstract
Random walks with stochastic resetting, where walkers periodically return to a designated node, have emerged as an important framework for understanding transport processes in complex networks. While resetting is known to optimize search times, its effects on extreme events—defined as exceedances of walker flux above a critical threshold—remain largely unexplored. Such events model critical network phenomena, including traffic congestion, server overloads, and infrastructure failures. In this work, we systematically investigate how stochastic resetting influences both the probability and magnitude of extreme events in complex networks. Through analytical derivation of the stationary occupation probabilities and comprehensive numerical simulations, we demonstrate that resetting significantly reduces the occurrence of extreme events while concentrating event-size fluctuations. Our results reveal a universal suppression effect: increasing the resetting rate γ monotonically decreases extreme event probabilities across all nodes, with complete elimination at γ=1. Notably, this suppression is most pronounced for vulnerable low-degree nodes and nodes distant from the resetting node, which experience the largest reduction in both event probability and fluctuation magnitude. These findings provide theoretical foundations for using resetting as a control mechanism to mitigate extreme events in networked systems. Full article
(This article belongs to the Special Issue Transport in Complex Environments)
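A discrete-time reading of the setup above: with per-step reset probability γ to a designated node, the walk's transition matrix is a convex mixture of the ordinary random-walk matrix and a rank-one reset matrix, and the stationary occupation probabilities follow by iteration. A numpy sketch under that assumption (the paper's exact formulation may differ):

import numpy as np

def stationary_with_resetting(A, reset_node, gamma, n_iter=5000):
    # A: adjacency matrix of an undirected graph; gamma: reset probability per step
    P = A / A.sum(axis=1, keepdims=True)     # ordinary random-walk transitions
    reset = np.zeros_like(P)
    reset[:, reset_node] = 1.0               # resetting sends the walker to reset_node
    P_reset = (1.0 - gamma) * P + gamma * reset
    pi = np.full(len(A), 1.0 / len(A))
    for _ in range(n_iter):                  # power iteration for the stationary vector
        pi = pi @ P_reset
    return pi / pi.sum()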

17 pages, 8537 KB  
Article
Physics-Informed Multi-Task Neural Network (PINN) Learning for Ultra-High-Performance Concrete (UHPC) Strength Prediction
by Long Yan, Pengfei Liu, Yufeng Yao, Fan Yang and Xu Feng
Buildings 2025, 15(23), 4243; https://doi.org/10.3390/buildings15234243 - 24 Nov 2025
Cited by 1 | Viewed by 648
Abstract
Ultra-high-performance concrete (UHPC) mixtures exhibit tightly coupled ingredient–property relations and heterogeneous reporting, which complicate the data-driven prediction of compressive and flexural strength. We present an end-to-end framework that (i) harmonizes mixture records, (ii) completes numeric features using a dependence-preserving Gaussian copula routine, and (iii) standardizes/encodes predictors with training-only fits. The feature space focuses on domain ratios and concise interactions (water–binder, superplasticizer–binder, total fiber, and fiber normalized by water–binder). A physics-informed multi-task neural network (PINN) is trained in log space with Smooth-L1 supervision and learned per-task noise scales for uncertainty-weighted balancing, while soft monotonicity penalties are applied to input gradients so that predicted strength is non-increasing in water–binder (both targets) and, when available, non-decreasing in fiber content for flexural response. In parallel, histogram-based gradient boosting is fitted per target; a convex combination is then selected on the validation slice and fixed for testing. On the held-out sets, the blended model attains an MAE/RMSE/R² of 10.86 MPa/14.68 MPa/0.848 for compressive strength and 2.78 MPa/3.67 MPa/0.841 for flexural peak, improving over the best single family by 0.5 RMSE (compressive) and 0.16 RMSE (flexural), with corresponding R² gains of 0.01–0.02. Residual-versus-prediction diagnostics and predicted–actual overlays indicate aligned trends and reduced heteroscedastic tail errors. Full article
(This article belongs to the Special Issue Trends and Prospects in Cementitious Material)
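The soft monotonicity penalty described above can be imposed by penalizing the sign of the network's input gradient. A minimal PyTorch-style sketch, assuming a feature index for the water–binder ratio; the names are illustrative and this is not the authors' implementation.

import torch

def monotonicity_penalty(model, x, wb_index, weight=1.0):
    # Penalize positive d(prediction)/d(water-binder): strength should be non-increasing in it
    x = x.clone().requires_grad_(True)
    y = model(x).sum()
    grads = torch.autograd.grad(y, x, create_graph=True)[0]
    # For a non-decreasing constraint (e.g. fiber content), penalize torch.relu(-grads[:, idx]) instead
    return weight * torch.relu(grads[:, wb_index]).mean()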

20 pages, 9137 KB  
Article
Study on the Separation Mechanism of Walnut Shell Kernels on Different Inclined Vibrating Screens
by Yongcheng Zhang, Changqi Wang, Wangyuan Zong, Hong Zhang, Zhanbiao Li, Guangxin Gai, Peiyu Chen and Jiale Ma
AgriEngineering 2025, 7(11), 396; https://doi.org/10.3390/agriengineering7110396 - 20 Nov 2025
Viewed by 977
Abstract
The separation of walnut kernels from shells is a crucial step in walnut processing. Pneumatic sorting is the mainstream method; however, because the suspension speeds of half-shells and eighth-shells overlap, complete separation cannot be achieved. This paper proposes using a toothed vibrating screen to separate the two. EDEM simulations of the motion forms, collision processes, and stress conditions of walnut shells and kernels on the vibrating screen demonstrated the effectiveness of this method and revealed the mechanisms of shell–kernel retention and loss during separation. Results indicate that 1/8 kernels, being smaller, readily fall into the tooth grooves and move upward step by step under the excitation force of the reciprocating vibration. The 1/2 shells, being larger, rarely fall into the tooth grooves, and their smooth surfaces cause them to slide easily, moving downward continuously under the combined action of reciprocating vibration and gravity. Using the cleaning rate and loss rate as evaluation indicators, it was found that the cleaning rate increased monotonically as the inclination angle of the vibrating screen increased. The loss rate initially rose slowly and then surged sharply beyond 22°; at this angle the loss rate was still low, around 10%, while the cleaning rate reached its maximum of 95%. The shortest retention time of walnut shells on the screen is 2.85 s and the longest is 10.6 s, with 458 and 2619 collisions, respectively; collisions between shells and kernels account for 51.8%. Incomplete separation arises when shells and kernels become entangled within the separation area and cannot be segregated; they then enter the opposite region, collide, and cause the observed loss and retention. Full article

22 pages, 2708 KB  
Article
Student Characteristics and ICT Usage as Predictors of Computational Thinking: An Explainable AI Approach
by Tongtong Guan, Liqiang Zhang, Xingshu Ji, Yuze He and Yonghe Zheng
J. Intell. 2025, 13(11), 145; https://doi.org/10.3390/jintelligence13110145 - 11 Nov 2025
Viewed by 872
Abstract
Computational thinking (CT) is recognized as a core competency for the 21st century, and its development is shaped by multiple factors, including students’ individual characteristics and their use of information and communication technology (ICT). Drawing on large-scale international data from the 2023 cycle of the International Computer and Information Literacy Study (ICILS), this study analyzes a sample of 81,871 Grade 8 students from 23 countries and one regional education system who completed the CT assessment. This study is the first to apply a predictive modeling framework that integrates two machine learning techniques to systematically identify and explain the key variables that predict CT and their nonlinear effects. The results reveal that various student-level predictors—such as educational expectations and the number of books at home—as well as ICT usage across different contexts, demonstrate significant nonlinear patterns in the model, including U-shaped, inverted U-shaped, and monotonic trends. Compared with traditional linear models, the SHapley Additive exPlanations (SHAP)-based approach facilitates the interpretation of the complex nonlinear effects that shape CT development. Methodologically, this study expands the integration of educational data mining and explainable artificial intelligence (XAI). Practically, it provides actionable insights for ICT-integrated instructional design and targeted educational interventions. Future research can incorporate longitudinal data to explore the developmental trajectories and causal mechanisms of students’ CT over time. Full article
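The workflow above pairs a tree ensemble with SHAP to expose nonlinear effect shapes. A minimal sketch on synthetic placeholder data; the feature names and the deliberately curved outcome are illustrative, not ICILS variables.

import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "educational_expectations": rng.integers(1, 6, 500),
    "books_at_home": rng.integers(0, 500, 500),
    "ict_use_for_learning": rng.normal(size=500),
})
# Placeholder outcome with an inverted-U effect of ICT use, to give SHAP something nonlinear
y = 2 * X["educational_expectations"] - 0.5 * X["ict_use_for_learning"] ** 2 + rng.normal(size=500)

model = GradientBoostingRegressor().fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)
shap.dependence_plot("ict_use_for_learning", shap_values, X)  # visualizes the effect shape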

25 pages, 1615 KB  
Article
A Unified Framework for Constructing Two-Branched Fuzzy Implications and Copulas via Monotone and Convex Function Composition
by Panagiotis G. Mangenakis and Basil K. Papadopoulos
Mathematics 2025, 13(22), 3604; https://doi.org/10.3390/math13223604 - 10 Nov 2025
Viewed by 794
Abstract
This paper presents a unified framework for constructing two-branched fuzzy implications and families of copulas based on the same composition principles involving monotone and convex functions. The proposed methodology yields operators with a genuine dual structure, where each branch satisfies distinct boundary and monotonicity conditions while remaining consistent with the general axioms of copulas. By systematically combining monotone generators with convex transformations, new families of fuzzy implications and copulas are obtained, both exhibiting enhanced analytical properties such as strengthened two-increasing behavior, adjustable dependence strength, and flexible convexity with continuous transitions. Convexity ensures the two-increasing property, while continuity guarantees the completeness and mathematical soundness of the constructions. Remarkably, certain copulas produced under this framework display Archimedean-like features—symmetry and associativity—thus providing new theoretical instruments for the advancement of fuzzy logic and dependence modeling. Full article
(This article belongs to the Special Issue Advances in Convex Analysis and Inequalities)
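For reference, the two-increasing behavior invoked above is the standard copula axiom that every rectangle in the unit square receives non-negative C-volume; in the usual notation, together with the boundary conditions:

\[
C(u_2, v_2) - C(u_1, v_2) - C(u_2, v_1) + C(u_1, v_1) \ge 0,
\qquad 0 \le u_1 \le u_2 \le 1,\ 0 \le v_1 \le v_2 \le 1,
\]
\[
C(u, 0) = C(0, v) = 0, \qquad C(u, 1) = u, \qquad C(1, v) = v .
\]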

13 pages, 1326 KB  
Article
Characterization of Alpha Particle Track Lengths in LR-115 Detectors
by Luiz Augusto Stuani Pereira and Carlos Alberto Tello Sáenz
Physics 2025, 7(4), 56; https://doi.org/10.3390/physics7040056 - 7 Nov 2025
Viewed by 503
Abstract
We investigate the dependence of the maximum etched track length (L_max) on alpha-particle energy and incidence angle in LR-115 type II nuclear track detectors by combining Geant4 Monte Carlo simulations with controlled chemical etching experiments. The bulk (V_B) and track (V_T) etch rates were determined under standardized conditions, yielding V_B = (3.1 ± 0.1) µm/h and V_T = (5.98 ± 0.06) µm/h, which correspond to a critical detection angle of about (58.8 ± 1.2)°. Simulations covering initial energies spanning 1 MeV to 5 MeV and incidence angles up to 70° confirmed that the maximum etched track length varies quadratically with particle energy E and depends systematically on incidence angle θ. Empirical parameterizations of L_max(E, θ) were obtained, and energy thresholds for complete track registration within the 12 µm sensitive layer were established. The angular acceptance predicted by the V_T/V_B ratio was validated, and the results demonstrate that L_max provides a monotonic and more reliable observable for energy calibration compared to track diameter. These findings improve the quantitative calibration of LR-115 detectors and strengthen their use in environmental radon monitoring, radiation dosimetry, and alpha spectrometry. In addition, they highlight the utility of Geant4-based modeling for refining solid state nuclear track detector response functions and guiding the development of optimized detector protocols for nuclear and environmental physics applications. Full article
(This article belongs to the Section Applied Physics)
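The quoted critical angle is consistent with the standard registration criterion for etched-track detectors, sin θ_c = V_B/V_T with θ_c measured from the detector surface, i.e. about 58.8° when measured from the surface normal for the etch rates above. A quick arithmetic check:

import math

V_B, V_T = 3.1, 5.98                                       # etch rates from the abstract, um/h
theta_from_surface = math.degrees(math.asin(V_B / V_T))    # ~31.2 deg from the surface
theta_from_normal = 90.0 - theta_from_surface              # ~58.8 deg from the normal
print(round(theta_from_surface, 1), round(theta_from_normal, 1))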

13 pages, 2719 KB  
Article
Validation of the Dermatologic Complexity Score for Dermatologic Triage
by Neil K. Jairath, Joshua Mijares, Kanika Garg, Katie Beier, Vartan Pahalyants, Andjela Nemcevic, Melissa Laughter, Jessica Quinn, Swetha Maddipuddi, George Jeha, Sultan Qiblawi and Vignesh Ramachandran
Diagnostics 2025, 15(21), 2765; https://doi.org/10.3390/diagnostics15212765 - 31 Oct 2025
Viewed by 702
Abstract
Background/Objectives: Demand for dermatologic services exceeds specialist capacity, with average wait times of 26–50 days in the United States. Current triage methods rely on subjective judgment or disease-specific indices that do not generalize across diagnoses or translate to operational decisions. We developed and validated the Dermatologic Complexity Score (DCS), a standardized instrument to guide case prioritization across dermatology care settings, and evaluated the DCS as a workload-reduction filter enabling safe delegation of approximately half of routine teledermatology cases (DCS ≤ 40) away from specialist review. Methods: We conducted a prospective validation study of the DCS using 100 consecutive teledermatology cases spanning 30 common conditions. The DCS decomposes complexity into five domains (Diagnostic, Treatment, Risk, Patient Complexity, Monitoring) summed to a 0–100 total with prespecified bands: ≤40 (low), 41–70 (moderate), 71–89 (high), and ≥90 (extreme). Five board-certified dermatologists and an automated module independently scored all cases. Two primary care physicians completed all ≤40 cases to assess feasibility. Primary outcomes were interrater reliability using ICC(2,1) and agreement with automation. Secondary outcomes included time-to-decision, referral rates, and primary care feasibility. Results: Mean patient age was 46.2 years; 47% of cases scored ≤40, 33% scored 41–70, 18% scored 71–89, and 2% scored ≥90. Interrater reliability was excellent (ICC(2,1) = 0.979; 95% CI 0.974–0.983), with near-perfect agreement between automated and mean dermatologist scores (r = 0.998). Time-to-decision increased monotonically across DCS bands, from 2.11 min (≤40) to 5.90 min (≥90) (p = 1.36 × 10⁻¹⁴). Referral rates were 0% for ≤40, 3% for 41–70, 27.8% for 71–89, and 100% for ≥90 cases. DCS strongly predicted referral decisions (AUC = 0.919). Primary care physicians successfully managed all ≤40 cases but required 6–8 additional minutes per case compared to dermatologists. Conclusions: The DCS demonstrates excellent reliability and strong construct validity, mapping systematically to clinically relevant outcomes, including decision time and referral patterns. The instrument enables standardized, reproducible triage decisions that can optimize resource allocation across teledermatology, clinic, procedural, and inpatient settings. Implementation could improve access to dermatologic care by supporting appropriate delegation of low-complexity cases to primary care while ensuring timely specialist evaluation for high-complexity conditions. Full article
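A trivial sketch of the prespecified banding used above, mapping a 0–100 DCS total to its complexity band; the function name is illustrative.

def dcs_band(score):
    # Prespecified bands: <=40 low, 41-70 moderate, 71-89 high, >=90 extreme
    if score <= 40:
        return "low"       # the delegation threshold evaluated in the study
    if score <= 70:
        return "moderate"
    if score <= 89:
        return "high"
    return "extreme"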

17 pages, 340 KB  
Article
Semi-Rings, Semi-Vector Spaces, and Fractal Interpolation
by Peter Massopust
Fractal Fract. 2025, 9(11), 680; https://doi.org/10.3390/fractalfract9110680 - 23 Oct 2025
Viewed by 506
Abstract
In this paper, we introduce fractal interpolation on complete semi-vector spaces. This approach is motivated by the requirements of the preservation of positivity or monotonicity of functions for some models in approximation and interpolation theory. The setting in complete semi-vector spaces does not require additional assumptions, since these properties are intrinsically built into the framework. For the purposes of this paper, fractal interpolation in the complete semi-vector spaces C^+ and L_p^+ is considered. Full article
(This article belongs to the Special Issue Applications of Fractal Interpolation in Mathematical Functions)
28 pages, 1237 KB  
Article
Counting Cosmic Cycles: Past Big Crunches, Future Recurrence Limits, and the Age of the Quantum Memory Matrix Universe
by Florian Neukart, Eike Marx and Valerii Vinokur
Entropy 2025, 27(10), 1043; https://doi.org/10.3390/e27101043 - 7 Oct 2025
Viewed by 1454
Abstract
We present a quantitative theory of contraction and expansion cycles within the Quantum Memory Matrix (QMM) cosmology. In this framework, spacetime consists of finite-capacity Hilbert cells that store quantum information. Each non-singular bounce adds a fixed increment of imprint entropy, defined as the cumulative quantum information written irreversibly into the matrix and distinct from coarse-grained thermodynamic entropy, thereby providing an intrinsic, monotonic cycle counter. By calibrating the geometry–information duality, inferring today’s cumulative imprint from CMB, BAO, chronometer, and large-scale-structure constraints, and integrating the modified Friedmann equations with imprint back-reaction, we find that the Universe has already completed N_past = 3.6 ± 0.4 cycles. The finite Hilbert capacity enforces an absolute ceiling: propagating the holographic write rate and accounting for instability channels implies only N_future = 7.8 ± 1.6 additional cycles before saturation halts further bounces. Integrating Kodama-vector proper time across all completed cycles yields a total cumulative age t_QMM = 62.0 ± 2.5 Gyr, compared to the 13.8 ± 0.2 Gyr of the current expansion usually described by ΛCDM. The framework makes concrete, testable predictions: an enhanced faint-end UV luminosity function at z ≳ 12 observable with JWST, a stochastic gravitational-wave background with f^(2/3) scaling in the LISA band from primordial black-hole mergers, and a nanohertz background with slope α ≈ 2/3 accessible to pulsar-timing arrays. These signatures provide near-term opportunities to confirm, refine, or falsify the cyclical QMM chronology. Full article

15 pages, 2750 KB  
Article
Study on the Spreading Dynamics of Droplet Pairs near Walls
by Jing Li, Junhu Yang, Xiaobin Liu and Lei Tian
Fluids 2025, 10(10), 252; https://doi.org/10.3390/fluids10100252 - 26 Sep 2025
Viewed by 562
Abstract
This study develops an incompressible two-phase flow solver based on the open-source OpenFOAM platform, employing the volume-of-fluid (VOF) method to track the gas–liquid interface and utilizing the MULES algorithm to suppress numerical diffusion. This study provides a comprehensive investigation of the spreading dynamics of droplet pairs near walls, along with the presentation of a corresponding mathematical model. The numerical model is validated through a two-dimensional axisymmetric computational domain, demonstrating grid independence and confirming its reliability by comparing simulation results with experimental data in predicting droplet collision, spreading, and deformation dynamics. The study particularly investigates the influence of surface wettability on droplet impact dynamics, revealing that increased contact angle enhances droplet retraction height, leading to complete rebound on superhydrophobic surfaces. Finally, a mathematical model is presented to describe the relationship between spreading length, contact angle, and Weber number, and the study proves its accuracy. Analysis under logarithmic coordinates reveals that the contact angle exerts a significant influence on spreading length, while a constant contact angle condition yields a slight monotonic increase in spreading length with the Weber number. These findings provide an effective numerical and mathematical tool for analyzing the spreading dynamics of droplet pairs. Full article
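For reference, the Weber number entering the spreading-length correlation above is the ratio of inertial to surface-tension forces. A minimal sketch with illustrative water-droplet values, not the study's parameters.

def weber_number(density, velocity, diameter, surface_tension):
    # We = rho * v^2 * D / sigma
    return density * velocity ** 2 * diameter / surface_tension

# e.g. a 2 mm water droplet impacting at 1 m/s (illustrative only)
We = weber_number(density=998.0, velocity=1.0, diameter=2e-3, surface_tension=0.072)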

35 pages, 33910 KB  
Article
ReduXis: A Comprehensive Framework for Robust Event-Based Modeling and Profiling of High-Dimensional Biomedical Data
by Neel D. Sarkar, Raghav Tandon, James J. Lah and Cassie S. Mitchell
Int. J. Mol. Sci. 2025, 26(18), 8973; https://doi.org/10.3390/ijms26188973 - 15 Sep 2025
Viewed by 860
Abstract
Event-based models (EBMs) are powerful tools for inferring probabilistic sequences of monotonic biomarker changes in progressive diseases, but their use is often hindered by data quality issues, high dimensionality, and limited interpretability. We introduce ReduXis, a streamlined pipeline that overcomes these challenges via three key innovations. First, upon dataset upload, ReduXis performs an automated data readiness assessment—verifying file formats, metadata completeness, column consistency, and measurement compatibility—while flagging preprocessing errors, such as improper scaling, and offering actionable feedback. Second, to prevent overfitting in high-dimensional spaces, ReduXis implements an ensemble voting-based feature selection strategy, combining gradient boosting, logistic regression, and random forest classifiers to identify a robust subset of biomarkers. Third, the pipeline generates interpretable outputs—subject-level staging and subtype assignments, comparative biomarker profiles across disease stages, and classification performance visualizations—facilitating transparency and downstream analysis. We validate ReduXis on three diverse cohorts: the Emory Healthy Brain Study (EHBS) cohort of patients with Alzheimer’s disease (AD), a Genomic Data Commons (GDC) cohort of transitional cell carcinoma (TCC) patients, and a GDC cohort of colorectal adenocarcinoma (CRAC) patients. Full article
(This article belongs to the Special Issue Artificial Intelligence in Molecular Biomarker Screening)
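The ensemble voting-based selection described above can be sketched as fitting the three model families, ranking features by each model's importance (or coefficient magnitude), and keeping the features that a majority place in their top k. The function, thresholds, and data names are illustrative assumptions, not the ReduXis defaults.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

def vote_select_features(X, y, top_k=20, min_votes=2):
    # One vote per model for each feature ranked in that model's top_k
    gb = GradientBoostingClassifier().fit(X, y)
    rf = RandomForestClassifier().fit(X, y)
    lr = LogisticRegression(max_iter=1000).fit(StandardScaler().fit_transform(X), y)
    importances = [gb.feature_importances_,
                   rf.feature_importances_,
                   np.abs(lr.coef_).sum(axis=0)]
    votes = np.zeros(X.shape[1], dtype=int)
    for imp in importances:
        votes[np.argsort(imp)[::-1][:top_k]] += 1
    return np.where(votes >= min_votes)[0]   # indices of the consensus feature subset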