Search Results (213)

Search Parameters:
Keywords = vanishing point

34 pages, 4612 KB  
Article
A Robust Numerical Framework for Hollow-Fiber Membrane Module Simulation and Solver Performance Analysis
by Diego Queiroz Faria de Menezes, Marília Caroline Cavalcante de Sá, Nayher Andres Clavijo Vallejo, Thainá Menezes de Melo, Luiz Felipe de Oliveira Campos, Thiago Koichi Anzai and José Carlos Costa da Silva Pinto
Membranes 2026, 16(4), 154; https://doi.org/10.3390/membranes16040154 - 21 Apr 2026
Abstract
Robust numerical frameworks are essential for the simulation, design, monitoring, and control of membrane-based separation units, particularly under highly nonlinear and industrially relevant operating conditions. In this context, a comprehensive phenomenological and numerical framework is proposed for the simulation of hollow-fiber membrane modules, incorporating coupled mass, momentum (through pressure drop), and energy transport equations. The governing equations are discretized using a rigorous orthogonal collocation formulation, and the performances of two numerical solution strategies are systematically investigated for the first time to allow the in-line and real-time implementation of the model: a steady-state approach based on the Newton–Raphson method with careful treatment of initial estimates, and a pseudotransient formulation. Particularly, an original and consistent numerical treatment is introduced for the energy balance at boundaries where the permeate flow vanishes, enabling the stable incorporation of thermal effects and Joule–Thomson phenomena. The results clearly show that the steady-state Newton–Raphson approach provides the best overall performance in terms of computational efficiency, numerical robustness, and accuracy when physically consistent initial profiles are employed. In particular, the combination of a linear initial guess and a numerical mesh constituted of four collocation points yielded the most favorable balance between convergence speed, numerical robustness, and accuracy for the base-case sensitivity analysis. For monitoring-oriented applications, the numerical choice should be weighted primarily toward computational performance once physical consistency and convergence criteria are satisfied, rather than toward maximum mesh-refinement accuracy. 
In this context, small differences in internal-fiber profiles can be compensated through real-time permeance estimation and are negligible when compared with measurement uncertainty in real industrial processes. Under extreme operating conditions involving low concentrations, low flow rates, and highly permeable species, the pseudotransient formulation proved to be a reliable auxiliary strategy, enabling robust convergence when suitable initial guesses were not readily available. The proposed framework is validated against experimental data from the literature and subjected to extensive convergence and sensitivity analyses, providing a reliable basis for simulation and for assessing computational feasibility in in-line and real-time monitoring-oriented applications. A full demonstration of digital-twin integration, online parameter updating, reduced-order coupling, and closed-loop control is beyond the scope of the present study and will be addressed in future work. Full article
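For readers weighing the two solver strategies compared above, the contrast can be sketched in a few lines of Python. This is a toy two-equation system standing in for the collocation residuals (the equations and the step-control rule are invented for illustration, not taken from the paper): a plain Newton–Raphson iteration, plus an optional pseudotransient mode that adds 1/Δt to the Jacobian diagonal and enlarges the pseudo time step as the iteration settles.

```python
# Toy illustration (hypothetical system, not the membrane model): compare
# plain Newton-Raphson with a pseudotransient-continuation fallback.

def F(x, y):
    # Invented nonlinear residuals; one root is at (x, y) = (1, 2).
    return (x**2 + y - 3.0, x + y**2 - 5.0)

def J(x, y):
    # Analytic Jacobian of F.
    return ((2.0*x, 1.0), (1.0, 2.0*y))

def solve(x, y, dt=None, tol=1e-12, max_iter=100):
    """Newton iteration; if dt is given, add 1/dt to the Jacobian diagonal
    (pseudotransient continuation) and double dt each step, so the scheme
    relaxes toward pure Newton as it approaches the steady state."""
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        if abs(f1) + abs(f2) < tol:
            break
        (a, b), (c, d) = J(x, y)
        if dt is not None:
            a += 1.0/dt
            d += 1.0/dt
            dt *= 2.0
        det = a*d - b*c
        # Solve the 2x2 linear system J*delta = -F by Cramer's rule.
        x -= ( d*f1 - b*f2) / det
        y -= (-c*f1 + a*f2) / det
    return x, y
```

With a reasonable initial guess, `solve(1.5, 1.5)` converges rapidly; from a poor guess, `solve(0.1, 0.1, dt=0.5)` still reaches the root because the damped early steps keep the iteration stable, mirroring the fallback role the abstract assigns to the pseudotransient formulation.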
27 pages, 4829 KB  
Article
Dual RANSAC with Rescue Midpoint Multi-Trend Vanishing Point Detection
by Nada Said, Bilal Nakhal, Ali El-Zaart and Lama Affara
J. Imaging 2026, 12(4), 172; https://doi.org/10.3390/jimaging12040172 - 16 Apr 2026
Abstract
Vanishing point detection is a fundamental step in computer vision that allows 3D scene understanding and autonomous navigation. Classical techniques face significant challenges in heavily cluttered scenes and in images containing multiple perspective cues, leading to poor or unreliable vanishing point determination. We present a Dual RANSAC with Rescue Midpoint-based Multi-Trend Vanishing Point Detection framework, which targets the simultaneous detection and fine-tuning of multiple, globally consistent vanishing points. The proposed framework introduces a novel Midpoint-based Multi-Trend Random Sample Consensus formulation that operates on line segment midpoints to infer dominant directional groups, thereby eliminating noisy or unstable midpoints and stabilizing subsequent vanishing point inference. The main novelty lies in using line segment midpoints to model the orientation variation as a linear regression in the midpoint–orientation space, which helps reduce sensitivity to endpoint instability. Candidate vanishing points are prioritized through inlier-based confidence ranking and subsequently optimized via an MSAC-based arbiter to resolve hypothesis conflicts and minimize geometric error. We evaluate our work against state-of-the-art techniques such as J-Linkage and Conditional Sample Consensus on two challenging public datasets, the York Urban Dataset and the Toulouse Vanishing Point Dataset. The results show that the proposed framework achieves a recall of up to 95% and an image success rate of almost 84%, outperforming both J-Linkage and Conditional Sample Consensus, especially under tighter angular thresholds. This demonstrates the ability of the proposed framework to provide enhanced stability and localization accuracy. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
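The midpoint-and-consensus idea can be illustrated with a minimal single-vanishing-point RANSAC in Python. This is a generic sketch, not the authors' Dual RANSAC / Multi-Trend pipeline, and all names and parameters are invented: each segment is reduced to a (midpoint, orientation) pair, two segments propose a candidate vanishing point, the candidate is scored by how many segment lines pass near it, and the consensus set is refit by least squares.

```python
import math, random

# Generic RANSAC vanishing point sketch (illustrative only, not the
# paper's method). Segments are (mx, my, theta) midpoint/angle tuples.

def segment_line(mx, my, theta):
    # Line through midpoint (mx, my) with direction angle theta,
    # written in normal form n.x = c with unit normal n = (-sin, cos).
    nx, ny = -math.sin(theta), math.cos(theta)
    return nx, ny, nx*mx + ny*my

def vp_least_squares(lines):
    # Solve the 2x2 normal equations of min_v sum (n.v - c)^2.
    a = b = d = e = f = 0.0
    for nx, ny, c in lines:
        a += nx*nx; b += nx*ny; d += ny*ny
        e += nx*c;  f += ny*c
    det = a*d - b*b
    return (d*e - b*f)/det, (a*f - b*e)/det

def ransac_vp(segments, iters=200, tol=2.0, seed=0):
    rng = random.Random(seed)
    lines = [segment_line(*s) for s in segments]
    best = []
    for _ in range(iters):
        l1, l2 = rng.sample(lines, 2)
        try:
            vx, vy = vp_least_squares([l1, l2])
        except ZeroDivisionError:
            continue  # near-parallel sample, no finite intersection
        inliers = [l for l in lines
                   if abs(l[0]*vx + l[1]*vy - l[2]) < tol]
        if len(inliers) > len(best):
            best = inliers
    return vp_least_squares(best)  # refit on the consensus set
```

Real detectors must also handle near-parallel segment groups (vanishing points at infinity) and several simultaneous trends, which is where the paper's dual-stage formulation and arbiter come in.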
36 pages, 491 KB  
Review
Weak Change Detection: A Review
by Fatma Aouissaoui and Joseph Ngatchou-Wandji
Mathematics 2026, 14(8), 1278; https://doi.org/10.3390/math14081278 - 12 Apr 2026
Abstract
This article provides a selective review of offline change-point detection methods, with a particular emphasis on weak change-points. The study of such weak changes plays a central role in establishing the asymptotic properties of test statistics designed for detecting fixed (non-vanishing) changes. In addition, it is crucial for analyzing the asymptotic behavior of change-point location estimators. We review the current state of the literature, identify key limitations, and outline promising directions for future research in this challenging setting. Full article
23 pages, 399 KB  
Article
Curvature–Cohomology Criterion for Projectivity: A Synthesis of Classical Results in Hodge Theory
by Ghaliah Alhamzi, Mona Bin-Asfour, Emad Solouma, Abdullah Alahmari, Mansoor Alsulami and Sayed Saber
Axioms 2026, 15(4), 265; https://doi.org/10.3390/axioms15040265 - 6 Apr 2026
Abstract
This paper synthesizes classical results in Hodge theory, curvature positivity, and vanishing theorems to give a concise curvature–cohomology criterion for the projectivity of compact Kähler manifolds. While each analytic component—Yau’s solution of the Calabi conjecture, the Bochner–Kodaira–Nakano identity, and Kodaira’s embedding theorem—is well-known, their combination yields a transparent geometric criterion: if the first Chern class c_1(M) admits a semi-positive real (1,1) representative that is strictly positive at some point (or equivalently has a maximal rank n somewhere), then M is projective. Beyond the maximal rank case, we refine Girbau’s classical vanishing theorem to obtain an optimal rank-sensitive bound: if 2π c_1(M) has a semi-positive representative whose pointwise rank is k somewhere, then H^{p,0}(M) = 0 for all p > n − k. This sharpens the classical Girbau–Griffiths–Harris vanishing theorem and quantifies how partial positivity of a Ricci representative constrains Hodge cohomology. We situate these criteria alongside classical tests (Kodaira integrality and Moishezon) and numerical descriptions of the Kähler cone (Demailly–Paun), discuss deformation-invariance properties, and relate them to RC positivity and Campana–Peternell-type statements. Examples illustrate the sharpness of the hypotheses, and we survey the effective bounds—ranging from rigorous uniform high ampleness results to conjectural optimal constants—with clear distinction between proven theorems, refinements of classical results, and open problems. The contribution of this work lies not in new analytic techniques but in (1) isolating a sharp curvature condition at the level of c_1(M); (2) organizing classical tools into a direct projectivity criterion; and (3) clarifying the rank-dependent vanishing behavior that follows from partial positivity. Full article
(This article belongs to the Special Issue Recent Advances in Complex Analysis and Applications, 2nd Edition)
24 pages, 2227 KB  
Article
Prime-Enforced Symmetry Constraints in Thermodynamic Recoils: Unifying Phase Behaviors and Transport Phenomena via a Covariant Fugacity Hessian
by Muhamad Fouad
Symmetry 2026, 18(4), 610; https://doi.org/10.3390/sym18040610 - 4 Apr 2026
Abstract
The Zeta-Minimizer Theorem establishes that the Riemann zeta function ζ(s) and the primes arise variationally as unique minimizers of a phase functional defined on a symmetric measure space (X, μ, G) equipped with helical operators. Three fundamental axioms—strict concave entropy maximization (Axiom 1), spectral Gibbs minima with non-vanishing ground states (Axiom 2), and irreducible bounded oscillations with flux conservation (Axiom 3)—allow for the selection of the non-proper Archimedean conical helix as the sole topology satisfying all constraints. Primes emerge as indivisible minimal cycles in the associated representation graph Γ (via Hilbert irreducibility and Maschke’s theorem), while the Euler product is recovered through the spectral Dirichlet mapping of the helical eigenvalues. The partial zeta product, Z_s = ∏_{j≥1} (1 − p_j^{−s})^{−1}, s ∈ ℝ_{>0}, constitutes the exact grand partition function of any finite subsystem. Numerical inversion of this product directly recovers the mixture frequency s from any experimental compressibility factor Z_mix. Mole fractions x_i(s), interaction parameters Δ(x_i), and the Lyapunov spectrum λ(x_i) then follow deductively via the helical transfer matrix and the closed-form linear ODE for Δ. Occupation numbers N(x_i) attain sharp maxima precisely at Fibonacci ratios F_r/F_{r+1}, leading to the molecular prime-ID rule. For twelve representative purely binary (irreducible) systems spanning atomic noble gases, simple diatomics, polar molecules, and an aromatic ring, the residuals satisfy |Z_s − Z_mix| < 1.5×10⁻⁸. The resulting λ(x_i) curves accurately reproduce critical points, liquid ranges, and thermodynamic anomalies with zero adjustable parameters. The Riemann Hypothesis follows rigorously as a theorem: the unique fixed point of the duality functor s ↦ 1−s that preserves the orthogonality condition cos²θ_k = 1 is Re(s) = 1/2, enforced by Axiom 1 concavity and Axiom 3 irreducibility. 
The framework is fully deductive and parameter-free and extends naturally to arbitrary mixtures and multiplicities through the helical representation graph. It provides a variational unification of analytic number theory, spectral geometry, thermodynamic phase behavior, and the Riemann Hypothesis from first principles. Full article
(This article belongs to the Section Physics)
17 pages, 330 KB  
Article
Boundary Value Problems and Propagation of Singularities for Several Partial Differential Equations of Mathematical Physics
by Angela Slavova and Petar Popivanov
Mathematics 2026, 14(5), 883; https://doi.org/10.3390/math14050883 - 5 Mar 2026
Abstract
This paper deals with several equations of mathematical physics written in explicit form with their solutions. In Theorem 1, an oblique derivative problem for the string equation is studied. More precisely, the initial-boundary value problem for the string equation is investigated. The corresponding vector field on the boundary is non-vanishing and does not have a characteristic direction, but can be tangential to some part of the boundary, and it is allowed to change sign. A classical solution exists with suitable compatibility conditions at the corner points. The picture changes significantly in the case of the wave equation with several (say two: 2D) space variables in a circular cylinder. The initial-boundary value problem turns out to be underdetermined with an infinite-dimensional kernel if the boundary vector field is orthogonal to the time axis. By prescribing extra conditions on the generatrices of the cylinder where the vector field is tangential to the cylinder, we obtain a unique classical solution. In Theorem 2, we consider the Cauchy problem for the Lorentzian-type eikonal equation in the interior of the parabola and find its unique classical solution in {0 ≤ x_2 ≤ 1/2} ∩ {x_2 ≥ x_1²/2}. Propagation of singularities for the 3D and 7D hyperbolic (Klein–Gordon) equations in R^4 and R^8 is studied in Theorem 3. In the double characteristic points, the wave front propagates either along the surface of the characteristic cone, or in the solid cone starting from (t_0, x_0). Full article
(This article belongs to the Section C1: Difference and Differential Equations)
14 pages, 1320 KB  
Article
An Adaptive Damped Double-Inertial Parallel Algorithm for Common Fixed-Point Problems with Applications to Image Restoration
by Supalin Tiammee, Suthep Suantai and Jukrapong Tiammee
Mathematics 2026, 14(5), 880; https://doi.org/10.3390/math14050880 - 5 Mar 2026
Abstract
Inertial methods are widely used to accelerate the convergence of iterative algorithms for solving fixed-point problems. However, standard inertial terms often introduce undesirable oscillations, particularly in high-dimensional settings. In this paper, we propose a novel parallel double inertial algorithm with adaptive damping control (D-DIMPMHA) for finding a common fixed point of a finite family of nonexpansive mappings in real Hilbert spaces. By integrating a double inertial step with a self-adaptive damping parameter, the proposed method effectively balances momentum and stability, thereby mitigating numerical oscillations without requiring vanishing inertial conditions. We establish the weak convergence theorem of the generated sequence under suitable control conditions. Furthermore, the practical efficiency of the algorithm is demonstrated through numerical experiments on large-scale convex feasibility problems and image restoration problems. Comparative results indicate that the proposed algorithm achieves superior convergence speed and higher restoration quality compared to existing single inertial methods and FISTA. Full article
(This article belongs to the Section C1: Difference and Differential Equations)
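A generic double-inertial Krasnoselskii–Mann step of the kind the abstract describes can be sketched as follows. The maps, weights, and inertial parameters here are invented for illustration (this is not the D-DIMPMHA algorithm, and the damping is fixed rather than adaptive): two nonexpansive maps on R² with a common fixed point at the origin, two inertial extrapolations from the last displacement, and a damped average of the parallel map evaluations.

```python
import math

# Hedged sketch of a double-inertial, parallel fixed-point step on R^2.
# The specific maps and parameters are made up; they merely satisfy the
# structural assumptions (nonexpansiveness, common fixed point at 0).

def T1(p):
    # Rotation by 30 degrees: an isometry with fixed point (0, 0).
    c, s = math.cos(math.pi/6), math.sin(math.pi/6)
    return (c*p[0] - s*p[1], s*p[0] + c*p[1])

def T2(p):
    # A contraction toward the origin (also nonexpansive).
    return (0.5*p[0], 0.5*p[1])

def combine(a, b, t):
    # Convex combination (1 - t)*a + t*b of two points.
    return ((1-t)*a[0] + t*b[0], (1-t)*a[1] + t*b[1])

def double_inertial_km(x0, steps=200, theta=0.2, delta=0.1, alpha=0.5):
    x_prev, x = x0, x0
    for _ in range(steps):
        # Two inertial extrapolations from the last displacement.
        y = (x[0] + theta*(x[0]-x_prev[0]), x[1] + theta*(x[1]-x_prev[1]))
        z = (x[0] + delta*(x[0]-x_prev[0]), x[1] + delta*(x[1]-x_prev[1]))
        # Parallel evaluation of both maps at y, then a damped average.
        avg = combine(T1(y), T2(y), 0.5)
        x_prev, x = x, combine(z, avg, alpha)
    return x
```

The iterate converges to the common fixed point (0, 0); the paper's contribution is, roughly, making the damping parameter self-adaptive so that the momentum terms accelerate without the oscillations a fixed schedule can cause.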
21 pages, 1497 KB  
Article
Assessing the White-in-Time Stochastic Forcing in Resolvent Prediction of Velocity and Pressure for Turbulent Channel Flow
by Xuan Zhu, Huan Liu and Liang Zhang
Processes 2026, 14(5), 737; https://doi.org/10.3390/pr14050737 - 24 Feb 2026
Abstract
Research on the frequency spectra of velocity and pressure fluctuations in turbulent channel flow is central to applications in petroleum engineering, including pipeline transport efficiency, erosion prediction, and flow-induced vibration in wellbores and surface facilities. Direct numerical simulation at high Reynolds numbers remains prohibitively expensive, motivating the use of resolvent analysis as a computationally efficient alternative. The resolvent analysis, formulated from the linearized Navier–Stokes equations, relies on appropriate modeling of stochastic forcing. In this work, we demonstrate that the conventional white-in-time stochastic forcing model exhibits fundamental deficiencies in predicting velocity and pressure statistics. Specifically, it fails to reproduce the correct two-point correlation of the wall-normal velocity, leading to inaccurate predictions of the rapid pressure spectrum. Moreover, it does not capture the correct wall-normal distribution of the forcing divergence, resulting in erroneous predictions of the slow pressure component. More fundamentally, we rigorously show that a linear convection–diffusion system driven by white-in-time stochastic forcing possesses an infinite frequency bandwidth, which implies unphysical vanishing Taylor time microscales for velocity fluctuations. These results highlight intrinsic limitations of white-in-time forcing and demonstrate the necessity of adopting colored-in-time stochastic forcing models to obtain physically consistent spectral predictions. Full article
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)
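The infinite-bandwidth argument can be made concrete on the simplest white-noise-driven system, a hypothetical first-order model x' = −x + w rather than the linearized channel-flow operator. Its Lorentzian spectrum S(ω) = 1/(1 + ω²) decays only like ω⁻², so the second spectral moment ∫ω²S dω grows without bound with the frequency cutoff, and a Taylor-microscale-like quantity λ ≈ sqrt(2∫S / ∫ω²S) shrinks toward zero:

```python
import math

# Illustrative numeric check (toy first-order model, not the channel-flow
# resolvent): the Lorentzian spectrum of a white-noise-driven system has a
# divergent second moment, so the implied time microscale vanishes as the
# frequency cutoff W grows.

def spectra_integrals(W, n=200000):
    # Midpoint rule on [0, W] for int S dw and int w^2 S dw (one-sided).
    h = W / n
    s0 = s2 = 0.0
    for i in range(n):
        w = (i + 0.5) * h
        S = 1.0 / (1.0 + w * w)
        s0 += S * h
        s2 += w * w * S * h
    return s0, s2

def taylor_microscale(W):
    s0, s2 = spectra_integrals(W)
    return math.sqrt(2.0 * s0 / s2)
```

Numerically, λ drops from about 0.59 at cutoff W = 10 to about 0.06 at W = 1000, mirroring the unphysical vanishing Taylor time microscale the abstract attributes to white-in-time forcing.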
18 pages, 2458 KB  
Perspective
From Statistical Mechanics to Nonlinear Dynamics and into Complex Systems
by Alberto Robledo
Complexities 2026, 2(1), 3; https://doi.org/10.3390/complexities2010003 - 13 Feb 2026
Abstract
We detail a procedure to transform the current empirical stage in the study of complex systems into a predictive phenomenological one. Our approach starts with the statistical-mechanical Landau-Ginzburg equation for dissipative processes, such as kinetics of phase change. Then, it imposes discrete time evolution to explicit back feeding, and adopts a power-law driving force to incorporate the onset of chaos, or, alternatively, criticality, the guiding principles of complexity. One obtains, in closed analytical form, a nonlinear renormalization-group (RG) fixed-point map descriptive of any of the three known (one-dimensional) transitions to or out of chaos. Furthermore, its Lyapunov function is shown to be the thermodynamic potential in q-statistics, because the regular or multifractal attractors at the transitions to chaos impose a severe impediment to access the system’s built-in configurations, leaving only a subset of vanishing measure available. To test the pertinence of our approach, we refer to the following complex systems issues: (i) Basic questions, such as demonstration of paradigms equivalence, illustration of self-organization, thermodynamic viewpoint of diversity, biological or other. (ii) Derivation of empirical laws, e.g., ranked data distributions (Zipf law), biological regularities (Kleiber law), river and cosmological structures (Hack law). (iii) Complex systems methods, for example, evolutionary game theory, self-similar networks, central-limit theorem questions. (iv) Condensed-matter physics complex problems (and their analogs in other disciplines), like, critical fluctuations (catastrophes), glass formation (traffic jams), localization transition (foraging, collective motion). Full article
27 pages, 3705 KB  
Article
Power Flow of Electric Dipole Radiation Extinction in an Absorbing Host Medium
by Henk F. Arnoldus
Optics 2026, 7(1), 16; https://doi.org/10.3390/opt7010016 - 12 Feb 2026
Abstract
We have studied the extinction power flow for a dipole in a laser beam, embedded in a dissipating medium. The power flows along the field lines of the Poynting vector. We have shown that near the particle, the field lines form closed loops, which start and end at the location of the dipole. A closed-form expression for these loops has been derived, and we have shown how the orientation of a loop is determined by the permittivities and permeabilities of the host medium and the particle. It is also shown that the spatial extent of these loops is determined by singularities in the flow pattern, and that the loop structure near the dipole shrinks markedly when the medium is dissipative, owing to damping-induced singularities that appear very close to the particle. At greater distances, flow lines run off to the far field or come in from the far field. Most flow lines change from incoming to outgoing, or vice versa, so they turn around somewhere in the flow field. Singularities, points where the Poynting vector vanishes, appear on the coordinate axes. At these points, field lines split. Off the axes, singularities appear as the centers of vortices. Near a vortex, energy swirls around the singular point. Field lines can come out of the center of a vortex or end there. Full article
13 pages, 4275 KB  
Article
Fluctuations of Temperature in the Polyakov Loop-Extended Nambu–Jona-Lasinio Model
by He Liu, Peng Wu, Hong-Ming Liu and Peng-Cheng Chu
Universe 2026, 12(2), 37; https://doi.org/10.3390/universe12020037 - 28 Jan 2026
Abstract
In this study, we investigate temperature fluctuations in hot QCD matter using a three-flavor Polyakov loop-extended Nambu–Jona-Lasinio (PNJL) model. The high-order cumulant ratios R_{n2} (n > 2) exhibit non-monotonic variations across the chiral phase transition, characterized by slight fluctuations in the chiral crossover region and significant oscillations around the critical point. In contrast, distinct peak and dip structures are observed in the cumulant ratios at low-baryon chemical potential. These structures gradually weaken and eventually vanish at high chemical potential as they compete with the sharpening of the chiral phase transition, particularly near the critical point and the first-order phase transition. Our results indicate that these non-monotonic peak and dip structures in high-order cumulant ratios are associated with the deconfinement phase transition. This study quantitatively analyzes temperature fluctuation behavior across different phase transition regions, and the findings are expected to be observed and validated in heavy-ion collision experiments through measurements of event-by-event mean transverse momentum fluctuations. Full article
(This article belongs to the Special Issue Relativistic Heavy-Ion Collisions: Theory and Observation)
19 pages, 11499 KB  
Article
A Novel Plasticization Mechanism in Poly(Lactic Acid)/PolyEthyleneGlycol Blends: From Tg Depression to a Structured Melt State
by Nawel Mechernene, Lina Benkraled, Assia Zennaki, Khadidja Arabeche, Abdelkader Berrayah, Lahcene Mechernene, Amina Bouriche, Sid Ahmed Benabdellah, Zohra Bouberka, Ana Barrera and Ulrich Maschke
Polymers 2026, 18(3), 317; https://doi.org/10.3390/polym18030317 - 24 Jan 2026
Abstract
Polylactic acid (PLA) is a promising biodegradable polymer whose widespread application is hindered by inherent brittleness. Polyethylene glycol (PEG) is a common plasticizer, but the effects of intermediate molecular weights, such as 4000 g/mol, on the coupled thermal, mechanical, and rheological properties of PLA remain insufficiently understood. This study presents a comprehensive analysis of PLA plasticized with 0–20 wt% PEG 4000, employing differential scanning calorimetry (DSC), dynamic mechanical analysis (DMA), and rheology. DSC confirmed excellent miscibility and a significant glass transition temperature (Tg) depression exceeding 19 °C for the highest concentration. A complex, non-monotonic evolution of crystallinity was observed, associated with the formation of different crystalline forms (α′ and α). Critically, DMA revealed that the material’s thermo-mechanical response is dominated by its thermal history: while the plasticizing effect is masked in highly crystalline, as-cast films, it is unequivocally demonstrated in quenched amorphous samples. The core finding emerges from a targeted rheological investigation. An anomalous increase in melt viscosity and elasticity at intermediate PEG concentrations (5–15 wt%), observed at 180 °C, was systematically shown to vanish at 190 °C and in amorphous samples. This proves that the anomaly stems from residual crystalline domains (α′ precursors) persisting near the melting point, not from a transient molecular network. These results establish that PEG 4000 is a highly effective PLA plasticizer whose impact is profoundly mediated by processing-induced crystallinity. This work provides essential guidelines for tailoring PLA properties by controlling thermal history to optimize flexibility and processability for advanced applications, specifically in melt-processing for flexible packaging. Full article
(This article belongs to the Section Polymer Physics and Theory)
26 pages, 23681 KB  
Article
Semantic-Guided Spatial and Temporal Fusion Framework for Enhancing Monocular Video Depth Estimation
by Hyunsu Kim, Yeongseop Lee, Hyunseong Ko, Junho Jeong and Yunsik Son
Appl. Sci. 2026, 16(1), 212; https://doi.org/10.3390/app16010212 - 24 Dec 2025
Abstract
Despite advancements in deep learning-based Monocular Depth Estimation (MDE), applying these models to video sequences remains challenging due to geometric ambiguities in texture-less regions and temporal instability caused by independent per-frame inference. To address these limitations, we propose STF-Depth, a novel post-processing framework that enhances depth quality by logically fusing heterogeneous information—geometric, semantic, and panoptic—without requiring additional retraining. Our approach introduces a robust RANSAC-based Vanishing Point Estimation to guide Dynamic Depth Gradient Correction for background separation, alongside Adaptive Instance Re-ordering to clarify occlusion relationships. Experimental results on the KITTI, NYU Depth V2, and TartanAir datasets demonstrate that STF-Depth functions as a universal plug-and-play module. Notably, it achieved a 25.7% reduction in Absolute Relative error (AbsRel) and significantly enhanced temporal consistency compared to state-of-the-art backbone models. These findings confirm the framework’s practicality for real-world applications requiring geometric precision and video stability, such as autonomous driving, robotics, and augmented reality (AR). Full article
(This article belongs to the Special Issue Advances in Computer Vision and Digital Image Processing)
23 pages, 339 KB  
Article
Composite Lyapunov Criteria for Stability and Convergence with Applications to Optimization Dynamics
by Hassan Saoud
Mathematics 2025, 13(23), 3859; https://doi.org/10.3390/math13233859 - 2 Dec 2025
Abstract
We propose a composite Lyapunov framework for nonlinear autonomous systems that ensures strict decay through a pair of differential inequalities. The approach yields integral estimates, quantitative convergence rates, vanishing of dissipation measures, convergence to a critical set, and semistability under mild conditions, without relying on invariance principles or compactness assumptions. The framework unifies convergence to points and sets and is illustrated through applications to inertial gradient systems and Primal–Dual gradient flows. Full article
(This article belongs to the Special Issue Advances in Nonlinear Analysis and Applications)
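As a concrete (and deliberately elementary) instance of Lyapunov decay for an inertial gradient system, consider the damped heavy-ball flow x'' + g·x' + ∇f(x) = 0 with f(x) = x²/2. Along trajectories the composite energy E = f(x) + v²/2 satisfies E' = −g·v² ≤ 0, so E decays and the state converges to the minimizer. This is a textbook example chosen for illustration, not the paper's composite framework or its specific pair of differential inequalities.

```python
# Textbook illustration (not the paper's framework): Lyapunov energy decay
# along a damped heavy-ball gradient flow  x'' + g*x' + grad f(x) = 0.

def simulate(x=2.0, v=0.0, g=1.0, dt=1e-3, steps=20000):
    f = lambda x: 0.5 * x * x      # convex objective, minimizer x* = 0
    grad = lambda x: x
    energies = []                  # composite energy E = f(x) + v^2/2
    for _ in range(steps):
        energies.append(f(x) + 0.5 * v * v)
        # Semi-implicit Euler step for the second-order flow.
        v += dt * (-g * v - grad(x))
        x += dt * v
    return x, v, energies
```

Running `simulate()` shows the energy shrinking by many orders of magnitude while the state spirals into the minimizer, the discrete shadow of the strict-decay estimates the abstract develops.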
17 pages, 2149 KB  
Article
Recreation of Gap Test with Damaged Plasticity Model
by Michał Szczecina and Andrzej Winnicki
Appl. Sci. 2025, 15(23), 12606; https://doi.org/10.3390/app152312606 - 28 Nov 2025
Abstract
A gap test is a relatively new experimental and numerical test proposed by Bažant and co-workers. The test is designed to show that the effective mode I (opening mode) fracture energy depends strongly on the crack-parallel normal stress. Other typical tests (for instance three-point bending) are not able to capture this influence. The gap test is therefore considered a breakthrough in fracture mechanics. The authors of this paper tried to recreate the gap test numerically in a concrete specimen using a plasticity-based damage model for concrete, the so-called concrete damaged plasticity model in Abaqus software. A specimen in the gap test takes the form of a concrete beam with a notch. At the first stage of the test the specimen is supported on polypropylene pads. The static scheme of the specimen changes when the gap between the specimen and the roller supports vanishes, i.e., exactly when the pads fully yield. The gap test revealed that the initial fracture energy varies with the normal stress parallel to the crack propagating from the notch. To calculate the initial fracture energy, the authors of this paper used the size effect law. The results obtained by the authors are comparable with the laboratory and numerical results presented by Nguyen and co-workers. Full article