Search Results (1,108)

Search Parameters:
Keywords = monotonicity methods

14 pages, 834 KB  
Article
Interrelationship Between Dyslipidemia and Hyperuricemia in Patients with Uncontrolled Type 2 Diabetes: Clinical Implications and a Risk Identification Algorithm
by Lorena Paduraru, Cosmin Mihai Vesa, Mihaela Simona Popoviciu, Timea Claudia Ghitea and Dana Carmen Zaha
Healthcare 2025, 13(20), 2605; https://doi.org/10.3390/healthcare13202605 - 16 Oct 2025
Abstract
Background and Objectives: Dyslipidemia and hyperuricemia frequently co-exist in uncontrolled type 2 diabetes mellitus (T2DM), amplifying renal and cardiovascular risk. This study aimed to develop and evaluate an optimized Renal–Metabolic Risk Score (RMRS) integrating renal and lipid parameters to identify patients with both conditions. Materials and Methods: We conducted a retrospective observational study including 304 patients with uncontrolled T2DM hospitalized at the Emergency County Hospital Oradea, Romania (2022–2023). Hyperuricemia was defined as uric acid > 6 mg/dL in females and >7 mg/dL in males; dyslipidemia was diagnosed according to standard lipid thresholds. RMRS was calculated from standardized values of urea, TG/HDL ratio, and eGFR, with variable weights derived from logistic regression coefficients. The score was normalized to a 0–100 scale. Receiver operating characteristic (ROC) analysis assessed discriminative performance; quartile analysis explored stratification ability. Results: The prevalence of dyslipidemia and hyperuricemia co-occurrence was 81.6%. RMRS was significantly higher in the co-occurrence group compared to others (median 16.9 vs. 10.0; p < 0.001). ROC analysis showed an AUC of 0.78, indicating good discrimination. Quartile analysis demonstrated a monotonic gradient in co-occurrence prevalence from 64.5% in Q1 to 96.1% in Q4. Conclusions: The Renal–metabolic Risk Score (RMRS) demonstrated moderate discriminative performance in identifying patients with uncontrolled T2DM at risk for combined hyperuricemia and dyslipidemia. Because it relies on inexpensive, routine laboratory parameters, RMRS may be particularly useful in resource-limited settings to support early risk stratification, dietary counseling, and timely referral. Further validation in larger and more diverse cohorts is required before its clinical adoption. Full article
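
The score construction described in this abstract (standardized laboratory values combined with logistic-regression-derived weights, rescaled to 0–100, then checked by ROC AUC) lends itself to a short sketch. The weights, feature distributions, and outcome rule below are hypothetical placeholders, not the published RMRS coefficients or cohort.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort: urea (mg/dL), TG/HDL ratio, eGFR (mL/min/1.73 m^2).
n = 300
X = np.column_stack([
    rng.normal(45, 15, n),    # urea
    rng.normal(3.5, 1.5, n),  # TG/HDL
    rng.normal(70, 20, n),    # eGFR
])
# Hypothetical label: 1 = dyslipidemia and hyperuricemia co-occur.
y = (X[:, 0] + 10 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 10, n) > 50).astype(int)

# Standardize each parameter (z-scores), as the abstract describes.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Placeholder weights standing in for logistic-regression coefficients;
# eGFR enters negatively (lower filtration -> higher risk).
w = np.array([0.9, 1.2, -1.1])
raw = Z @ w

# Normalize the composite score to a 0-100 scale.
rmrs = 100 * (raw - raw.min()) / (raw.max() - raw.min())

def roc_auc(score, label):
    """ROC AUC via the rank (Mann-Whitney U) formulation, no extra dependencies."""
    order = np.argsort(score)
    ranks = np.empty(len(score))
    ranks[order] = np.arange(1, len(score) + 1)
    pos = label == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(f"AUC of the illustrative score: {roc_auc(rmrs, y):.3f}")
```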

28 pages, 10614 KB  
Article
Assessment of Ecological Quality Dynamics and Driving Factors in the Ningdong Mining Area, China, Using the Coupled Remote Sensing Ecological Index and Ecological Grade Index
by Chengting Han, Peixian Li, He’ao Xie, Yupeng Pi, Yongliang Zhang, Xiaoqing Han, Jingjing Jin and Yuling Zhao
Sustainability 2025, 17(20), 9075; https://doi.org/10.3390/su17209075 (registering DOI) - 13 Oct 2025
Viewed by 186
Abstract
In response to the sustainability challenges of mining, restrictive policies aimed at improving ecological quality have been enacted in various countries and regions. The purpose of this study is to examine the environmental changes in the Ningdong mining area, located on the Loess Plateau, over the past 25 years, driven by many factors such as coal mining, using the area as a case study. In this study, Landsat satellite images from 2000 to 2024 were used to derive the remote sensing ecological index (RSEI), and the RSEI results were comprehensively analyzed using the Sen+Mann-Kendall method and Geodetector. Simultaneously, this study utilized land use datasets to calculate the ecological grade (EG) index. The EG index was then analyzed in conjunction with the RSEI. The results show that, in the time dimension, the ecological quality of the Ningdong mining area followed a non-monotonic trend of decreasing and then increasing during the 25-year period; the RSEI average reached its lowest value of 0.279 in 2011 and its highest value of 0.511 in 2022, and in 2024 the RSEI was 0.428. The coupling matrix between the EG and RSEI indicates that the ecological environment within the mining area has improved. Through ecological factor-driven analysis, we found that the ecological environment quality in the study area is stably controlled by natural topography (slope) and climate (precipitation) factors, while also being disturbed by human activities. This analysis demonstrates that ecological and environmental evolution is a complex process driven by the nonlinear synergistic interaction of natural and anthropogenic factors. The results of the study are of practical significance and provide scientific guidance for the development of coal mining and ecological environmental protection policies in other mining regions around the world. Full article
(This article belongs to the Special Issue Design for Sustainability in the Minerals Sector)
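
For readers unfamiliar with the trend tools named above, the sketch below implements the Theil–Sen slope and the Mann–Kendall test from their textbook definitions on a synthetic annual RSEI series; the authors' actual analysis runs per pixel on Landsat-derived RSEI imagery and pairs it with Geodetector, which is not reproduced here.

```python
import numpy as np
from math import erf, sqrt

def sen_slope(t, x):
    """Theil-Sen estimator: median of all pairwise slopes."""
    slopes = [(x[j] - x[i]) / (t[j] - t[i])
              for i in range(len(t)) for j in range(i + 1, len(t))]
    return float(np.median(slopes))

def mann_kendall(x):
    """Mann-Kendall S statistic, normal-approximation Z and two-sided p-value
    (no tie correction, which is acceptable for continuous data)."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - np.sign(s)) / sqrt(var_s)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return float(s), float(z), float(p)

# Synthetic annual mean RSEI, 2000-2024: dips to ~0.28 around 2011, then recovers.
years = np.arange(2000, 2025)
rsei = 0.28 + 0.0014 * (years - 2011) ** 2
rsei += np.random.default_rng(1).normal(0, 0.02, len(years))

print("Sen slope per year:", round(sen_slope(years, rsei), 5))
print("Mann-Kendall (S, Z, p):", mann_kendall(rsei))
```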

21 pages, 2666 KB  
Article
Maintenance-Aware Risk Curves: Correcting Degradation Models with Intervention Effectiveness
by F. Javier Bellido-Lopez, Miguel A. Sanz-Bobi, Antonio Muñoz, Daniel Gonzalez-Calvo and Tomas Alvarez-Tejedor
Appl. Sci. 2025, 15(20), 10998; https://doi.org/10.3390/app152010998 - 13 Oct 2025
Viewed by 97
Abstract
In predictive maintenance frameworks, risk curves are used as interpretable, real-time indicators of equipment degradation. However, existing approaches generally assume a monotonically increasing trend and neglect the corrective effect of maintenance, resulting in unrealistic or overly conservative risk estimations. This paper addresses this limitation by introducing a novel method that dynamically corrects risk curves through a quantitative measure of maintenance effectiveness. The method adjusts the evolution of risk to reflect the actual impact of preventive and corrective interventions, providing a more realistic and traceable representation of asset condition. The approach is validated with case studies on critical feedwater pumps in a combined-cycle power plant. First, individual maintenance actions are analyzed for a single failure mode to assess their direct effectiveness. Second, the cross-mode impact of a corrective intervention is evaluated, revealing both direct and indirect effects. Third, corrected risk curves are compared across two redundant pumps to benchmark maintenance performance, showing similar behavior until 2023, after which one unit accumulated uncontrolled risk while the other remained stable near zero, reflected in their overall performance indicators (0.67 vs. 0.88). These findings demonstrate that maintenance-corrected risk curves enhance diagnostic accuracy, enable benchmarking between comparable assets, and provide a missing piece for the development of realistic, risk-informed predictive maintenance strategies. Full article
(This article belongs to the Special Issue Big-Data-Driven Advances in Smart Maintenance and Industry 4.0)
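
The abstract does not give the correction formula, so the following toy sketch only illustrates the general idea of a maintenance-corrected risk curve: risk accumulates with operating time, and each intervention rescales the accumulated risk by its estimated effectiveness. The (1 − e) rescaling, the base rate, and the intervention times are assumptions for illustration, not the paper's model.

```python
import numpy as np

def corrected_risk(hours, base_rate, interventions):
    """Toy risk curve: risk grows linearly with operating hours, and each
    maintenance action at time t scales the accumulated risk by (1 - effectiveness).
    The (1 - e) rescaling is an illustrative assumption, not the paper's correction."""
    risk = np.zeros_like(hours, dtype=float)
    r = 0.0
    events = dict(interventions)      # {time: effectiveness in [0, 1]}
    for k in range(1, len(hours)):
        r += base_rate * (hours[k] - hours[k - 1])   # uncorrected accumulation
        if hours[k] in events:
            r *= (1.0 - events[hours[k]])            # correction at the intervention
        risk[k] = r
    return risk

hours = np.arange(0, 1001, 10)
# Hypothetical interventions: a partial preventive action and a near-complete overhaul.
curve = corrected_risk(hours, base_rate=0.002, interventions={300: 0.5, 700: 0.9})
i = hours.searchsorted(700)
print("Risk just before/after the 700 h overhaul:",
      round(curve[i - 1], 3), "->", round(curve[i], 3))
```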

16 pages, 2994 KB  
Article
Stiffness Degradation of Expansive Soil Stabilized with Construction and Demolition Waste Under Wetting–Drying Cycles
by Haodong Xu and Chao Huang
Coatings 2025, 15(10), 1154; https://doi.org/10.3390/coatings15101154 - 3 Oct 2025
Viewed by 387
Abstract
To address the challenge of long-term stiffness retention of subgrades in humid–hot climates, this study evaluates expansive soil stabilized with construction and demolition waste (CDW), focusing on the resilient modulus (Mr) under coupled stress states and wetting–drying histories. Basic physical and swelling tests identified an optimal CDW incorporation of about 40%, which was then used to prepare specimens subjected to controlled wetting–drying cycles (0, 1, 3, 6, 10) and multistage cyclic triaxial loading across confining and deviatoric stress combinations. Mr increased monotonically with both stresses, with stronger confinement hardening at higher deviatoric levels; with cycling, Mr exhibited a rapid then gradual degradation, and for most stress combinations, the ten-cycle loss was 20%–30%, slightly mitigated by higher confinement. Grey relational analysis ranked influence as follows: the number of wetting–drying cycles > deviatoric stress > confining pressure. A Lytton model, based on a modified prediction method, accurately predicted Mr across conditions (R² ≈ 0.95–0.98). These results integrate stress dependence with environmental degradation, offering guidance on material selection (approximately 40% incorporation), construction (adequate compaction), and maintenance (priority control of early moisture fluctuations), and provide theoretical support for durable expansive soil subgrades in humid–hot regions. Full article
(This article belongs to the Special Issue Novel Cleaner Materials for Pavements)
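
Grey relational analysis, used above to rank the influence of wetting–drying cycles, deviatoric stress, and confining pressure, is a standard computation; the sketch below shows a generic variant on made-up test data with the usual resolution coefficient of 0.5. The numbers and the factor ranking they produce are illustrative only.

```python
import numpy as np

def grey_relational_grades(reference, factors, rho=0.5):
    """Generic grey relational analysis: min-max normalize every series,
    take absolute deltas against the reference, form grey relational
    coefficients with resolution coefficient rho, and average into grades."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())
    ref = norm(reference)
    deltas = np.array([np.abs(ref - norm(f)) for f in factors])
    dmin, dmax = deltas.min(), deltas.max()          # global min/max over all factors
    coeffs = (dmin + rho * dmax) / (deltas + rho * dmax)
    return coeffs.mean(axis=1)

# Hypothetical test matrix: Mr degradation (MPa lost) as the reference series;
# candidate factors are wetting-drying cycles, deviatoric stress, confining pressure.
mr_loss   = [0, 10, 22, 32, 40, 45]
cycles    = [0, 1, 3, 6, 8, 10]
dev_sigma = [30, 45, 60, 75, 90, 105]   # kPa
conf_sig  = [15, 30, 45, 60, 75, 90]    # kPa

grades = grey_relational_grades(mr_loss, [cycles, dev_sigma, conf_sig])
for name, g in zip(["wet-dry cycles", "deviatoric stress", "confining pressure"], grades):
    print(f"{name:>18s}: grade = {g:.3f}")
```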

15 pages, 1337 KB  
Article
Sinusoidal Approximation Theorem for Kolmogorov–Arnold Networks
by Sergei Gleyzer, Hanh Nguyen, Dinesh P. Ramakrishnan and Eric A. F. Reinhardt
Mathematics 2025, 13(19), 3157; https://doi.org/10.3390/math13193157 - 2 Oct 2025
Viewed by 225
Abstract
The Kolmogorov–Arnold representation theorem states that any continuous multivariable function can be exactly represented as a finite superposition of continuous single-variable functions. Subsequent simplifications of this representation involve expressing these functions as parameterized sums of a smaller number of unique monotonic functions. Kolmogorov–Arnold Networks (KANs) have been recently proposed as an alternative to multilayer perceptrons. KANs feature learnable nonlinear activations applied directly to input values, modeled as weighted sums of basis spline functions. This approach replaces the linear transformations and sigmoidal post-activations used in traditional perceptrons. In this work, we propose a novel KAN variant by replacing both the inner and outer functions in the Kolmogorov–Arnold representation with weighted sinusoidal functions of learnable frequencies. We particularly fix the phases of the sinusoidal activations to linearly spaced constant values and provide a proof of their theoretical validity. We also conduct numerical experiments to evaluate its performance on a range of multivariable functions, comparing it with fixed-frequency Fourier transform methods, basis spline KANs (B-SplineKANs), and multilayer perceptrons (MLPs). We show that it outperforms both the fixed-frequency Fourier transform method and B-SplineKAN, and achieves performance comparable to the MLP. Full article
(This article belongs to the Section E: Applied Mathematics)
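
A minimal sketch of the idea behind the proposed activation, assuming a single univariate term of the form y(x) = Σ_k a_k sin(ω_k x + φ_k) with phases φ_k fixed to linearly spaced constants and frequencies/amplitudes learned by plain gradient descent on a toy target. This is a reconstruction from the abstract, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 16                                   # number of sinusoidal terms
phases = np.linspace(0.0, np.pi, K)      # fixed, linearly spaced phases (per the abstract)
omega = rng.normal(0.0, 2.0, K)          # learnable frequencies
amp = rng.normal(0.0, 0.1, K)            # learnable amplitudes

def activation(x):
    # y(x) = sum_k amp_k * sin(omega_k * x + phi_k), applied elementwise to a 1-D input
    return np.sin(np.outer(x, omega) + phases) @ amp

# Toy target: fit one univariate function with plain gradient descent on MSE.
x = np.linspace(-2, 2, 256)
target = np.exp(-x ** 2) * np.cos(3 * x)

lr = 0.05
for step in range(3000):
    S = np.sin(np.outer(x, omega) + phases)          # (N, K)
    C = np.cos(np.outer(x, omega) + phases)          # (N, K)
    err = S @ amp - target                           # residual
    grad_amp = S.T @ err / len(x)
    grad_omega = (C * x[:, None]).T @ err / len(x) * amp
    amp -= lr * grad_amp
    omega -= lr * grad_omega

print("final RMSE on the toy target:",
      round(float(np.sqrt(np.mean((activation(x) - target) ** 2))), 4))
```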

25 pages, 63826 KB  
Article
Mutual Effects of Face-Swap Deepfakes and Digital Watermarking—A Region-Aware Study
by Tomasz Walczyna and Zbigniew Piotrowski
Sensors 2025, 25(19), 6015; https://doi.org/10.3390/s25196015 - 30 Sep 2025
Viewed by 620
Abstract
Face swapping is commonly assumed to act locally on the face region, which motivates placing watermarks away from the face to preserve the integrity of the face. We demonstrate that this assumption is violated in practice. Using a region-aware protocol with tunable-strength visible and invisible watermarks and six face-swap families, we quantify both identity transfer and watermark retention on the VGGFace2 dataset. First, edits are non-local—generators alter background statistics and degrade watermarks even far from the face, as measured by background-only PSNR and Pearson correlation relative to a locality-preserving baseline. Second, dependencies between watermark strength, identity transfer, and retention are non-monotonic and architecture-dependent. Methods that better confine edits to the face—typically those employing segmentation-weighted objectives—preserve background signal more reliably than globally trained GAN pipelines. At comparable perceptual distortion, invisible marks tuned to the background retain higher correlation with the background than visible overlays. These findings indicate that classical robustness tests are insufficient alone—watermark evaluation should report region-wise metrics and be strength- and architecture-aware. Full article
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies—Second Edition)
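
The background-only PSNR mentioned above reduces to computing PSNR over a masked pixel set; the sketch below shows that computation on synthetic images, with a hypothetical rectangular face region standing in for a segmentation mask.

```python
import numpy as np

def masked_psnr(reference, test, mask, peak=255.0):
    """PSNR computed only over pixels where the 2-D mask is True
    (e.g., the background outside a face segmentation)."""
    ref = reference[mask].astype(np.float64)
    tst = test[mask].astype(np.float64)
    mse = np.mean((ref - tst) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Synthetic example: a 256x256 image, a hypothetical rectangular "face" region,
# and a processed image where the generator also perturbed the background.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
face = np.zeros((256, 256), dtype=bool)
face[64:192, 96:160] = True           # stand-in for a face segmentation mask
background = ~face

noise = rng.normal(0, 4, original.shape)
processed = np.clip(original.astype(np.float64) + noise, 0, 255).astype(np.uint8)

print("background-only PSNR:", round(masked_psnr(original, processed, background), 2), "dB")
print("face-region PSNR:    ", round(masked_psnr(original, processed, face), 2), "dB")
```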

15 pages, 278 KB  
Article
Bounds on Causal Effects Based on Expectations in Ordered-Outcome Models
by Ailei Ding and Hanmei Sun
Mathematics 2025, 13(19), 3103; https://doi.org/10.3390/math13193103 - 27 Sep 2025
Viewed by 280
Abstract
Bounding causal effects under unmeasured confounding is particularly challenging when the outcome variable is ordinal. When the goal is to assess whether an intervention leads to a better outcome, ordinal causal effects offer a more appropriate analytical framework. In contrast, the average causal effect (ACE), defined as the difference in expected outcomes, is more suitable for capturing population-level effects. In this paper, we derive sharp bounds for causal effects with ternary outcomes using an expectation-based framework, under both general conditions and monotonicity assumptions. We conduct numerical simulations to evaluate the width of the bounds under various scenarios. Finally, we demonstrate our method’s practical utility by applying it to the CDC Diabetes Health Indicators Dataset to assess the causal effect of health behaviors on diabetes risk. Full article
(This article belongs to the Special Issue Advances in Statistical AI and Causal Inference)
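
The paper's sharp expectation-based bounds are not spelled out in the abstract, so the sketch below shows only the classical no-assumption (Manski-type) worst-case bounds on the average causal effect for an outcome coded 0/1/2, as a baseline that sharper bounds (and tighter versions under monotonicity) would narrow.

```python
import numpy as np

def manski_ace_bounds(y, t, y_min=0, y_max=2):
    """Worst-case bounds on ACE = E[Y(1)] - E[Y(0)] for a bounded outcome,
    here a ternary outcome coded 0/1/2. These are the classical Manski bounds
    under no assumptions; they do not reproduce the paper's sharp bounds."""
    y, t = np.asarray(y, float), np.asarray(t, int)
    p1 = t.mean()                      # P(T = 1)
    m1 = y[t == 1].mean()              # E[Y | T = 1]
    m0 = y[t == 0].mean()              # E[Y | T = 0]
    # Each potential-outcome mean: observed arm contributes its mean,
    # the unobserved counterfactual term is bounded by [y_min, y_max].
    ey1_lo = p1 * m1 + (1 - p1) * y_min
    ey1_hi = p1 * m1 + (1 - p1) * y_max
    ey0_lo = (1 - p1) * m0 + p1 * y_min
    ey0_hi = (1 - p1) * m0 + p1 * y_max
    return ey1_lo - ey0_hi, ey1_hi - ey0_lo

rng = np.random.default_rng(0)
t = rng.integers(0, 2, 500)
y = np.clip(t + rng.integers(0, 2, 500), 0, 2)     # synthetic ordinal outcome in {0, 1, 2}
print("worst-case ACE bounds:", manski_ace_bounds(y, t))
```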

15 pages, 2750 KB  
Article
Study on the Spreading Dynamics of Droplet Pairs near Walls
by Jing Li, Junhu Yang, Xiaobin Liu and Lei Tian
Fluids 2025, 10(10), 252; https://doi.org/10.3390/fluids10100252 - 26 Sep 2025
Viewed by 228
Abstract
This study develops an incompressible two-phase flow solver based on the open-source OpenFOAM platform, employing the volume-of-fluid (VOF) method to track the gas–liquid interface and utilizing the MULES algorithm to suppress numerical diffusion. This study provides a comprehensive investigation of the spreading dynamics of droplet pairs near walls, along with the presentation of a corresponding mathematical model. The numerical model is validated through a two-dimensional axisymmetric computational domain, demonstrating grid independence and confirming its reliability by comparing simulation results with experimental data in predicting droplet collision, spreading, and deformation dynamics. The study particularly investigates the influence of surface wettability on droplet impact dynamics, revealing that increased contact angle enhances droplet retraction height, leading to complete rebound on superhydrophobic surfaces. Finally, a mathematical model is presented to describe the relationship between spreading length, contact angle, and Weber number, and the study proves its accuracy. Analysis under logarithmic coordinates reveals that the contact angle exerts a significant influence on spreading length, while a constant contact angle condition yields a slight monotonic increase in spreading length with the Weber number. These findings provide an effective numerical and mathematical tool for analyzing the spreading dynamics of droplet pairs. Full article

26 pages, 2781 KB  
Article
Iterative Optimization of Structural Entropy for Enhanced Network Fragmentation Analysis
by Fatih Ozaydin, Vasily Lubashevskiy and Seval Yurtcicek Ozaydin
Information 2025, 16(10), 828; https://doi.org/10.3390/info16100828 - 24 Sep 2025
Viewed by 318
Abstract
Identifying and ranking influential nodes is central to tasks such as targeted immunization, misinformation containment, and resilient design. Structural entropy (SE) offers a principled, community-aware scoring rule, yet the one-shot (static) use of SE may become suboptimal after each intervention, as the residual topology and its modular structure change. We introduce iterative structural entropy (ISE), a simple yet powerful modification that recomputes SE on the residual graph before every removal, thus turning node targeting into a sequential, feedback-driven policy. We evaluate SE and ISE on seven benchmark networks using (i) cumulative structural entropy (CSE), (ii) cumulative sum of largest connected component sizes (LCCs), and (iii) dynamic panels that track average shortest-path length and diameter within the residual LCC together with a near-threshold percolation proxy (expected outbreak size). Across datasets, ISE consistently fragments earlier and more decisively than SE; on the Netscience network, ISE reduces the cumulative LCC size by 43% (RLCCs =0.567). In parallel, ISE achieves perfect discriminability (monotonicity M=1.0) among positively scored nodes on all benchmarks, while SE and degree-based baselines display method-dependent ties. These results support ISE as a practical, adaptive alternative to static SE when sequential decisions matter, delivering sharper rankings and faster structural degradation under identical measurement protocols. Full article
(This article belongs to the Special Issue Optimization Algorithms and Their Applications)
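
The key difference between SE and ISE described above is procedural: re-score on the residual graph before every removal instead of ranking once. The loop below illustrates that procedure with networkx, using a simple degree-distribution entropy as a stand-in score, since the paper's community-aware structural entropy is not reproduced here.

```python
import numpy as np
import networkx as nx

def degree_entropy_drop(G, node):
    """Stand-in score: decrease in degree-distribution Shannon entropy if `node`
    is removed. This is NOT the paper's community-aware structural entropy,
    just a placeholder to illustrate the iterative (ISE-style) loop."""
    def H(graph):
        deg = np.array([d for _, d in graph.degree()], dtype=float)
        p = deg / deg.sum() if deg.sum() > 0 else deg
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())
    Gr = G.copy()
    Gr.remove_node(node)
    return H(G) - H(Gr)

def iterative_attack(G, k):
    """Remove k nodes, re-scoring on the residual graph before every removal."""
    G = G.copy()
    removed, lcc_sizes = [], []
    for _ in range(k):
        node = max(G.nodes, key=lambda n: degree_entropy_drop(G, n))
        G.remove_node(node)
        removed.append(node)
        lcc = max(nx.connected_components(G), key=len) if G.number_of_nodes() else set()
        lcc_sizes.append(len(lcc))
    return removed, lcc_sizes

G = nx.karate_club_graph()
nodes, lccs = iterative_attack(G, 5)
print("removed nodes:", nodes)
print("largest connected component after each removal:", lccs)
```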

23 pages, 901 KB  
Article
Time-of-Flow Distributions in Discrete Quantum Systems: From Operational Protocols to Quantum Speed Limits
by Mathieu Beau
Entropy 2025, 27(10), 996; https://doi.org/10.3390/e27100996 - 24 Sep 2025
Viewed by 372
Abstract
We propose a general and experimentally accessible framework to quantify transition timing in discrete quantum systems via the time-of-flow (TF) distribution. Defined from the rate of population change in a target state, the TF distribution can be reconstructed through repeated projective measurements at discrete times on independently prepared systems, thus avoiding Zeno inhibition. In monotonic regimes, it admits a clear interpretation as a time-of-arrival (TOA) or time-of-departure (TOD) distribution. We apply this approach to optimize time-dependent Hamiltonians, analyze shortcut-to-adiabaticity (STA) protocols, study non-adiabatic features in the dynamics of a three-level time-dependent detuning model, and derive a transition-based quantum speed limit (TF-QSL) for both closed and open quantum systems. We also establish a lower bound on temporal uncertainty and examine decoherence effects, demonstrating the versatility of the TF framework for quantum control and diagnostics. This method provides both a conceptual tool and an experimental protocol for probing and engineering quantum dynamics in discrete-state platforms. Full article
(This article belongs to the Special Issue Quantum Mechanics and the Challenge of Time)
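
A minimal sketch of how a TF distribution could be reconstructed from discretely sampled populations, assuming a two-level system under resonant Rabi driving with analytic populations standing in for the repeated projective measurements; in the monotonic regime shown, the normalized rate of population change reads as a time-of-arrival density.

```python
import numpy as np

# Target-state population of a two-level system under resonant Rabi driving,
# P(t) = sin^2(Omega * t / 2). In an experiment each P(t_k) would come from
# repeated projective measurements on independently prepared systems.
omega_rabi = 2 * np.pi            # illustrative Rabi frequency (rad per unit time)
t = np.linspace(0.0, 0.5, 201)    # first half Rabi cycle: P(t) rises monotonically
dt = t[1] - t[0]
P = np.sin(omega_rabi * t / 2) ** 2

# Time-of-flow density ~ dP/dt, estimated by finite differences and normalized.
# In this monotonically filling regime it reads as a time-of-arrival distribution.
dPdt = np.gradient(P, t)
tf = np.clip(dPdt, 0.0, None)
tf /= tf.sum() * dt

mean_arrival = (t * tf).sum() * dt
print("mean arrival time from the TF density:", round(float(mean_arrival), 4))
print("midpoint of the half cycle for comparison:", 0.25)
```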

32 pages, 898 KB  
Article
Heat Conduction Model Based on the Explicit Euler Method for Non-Stationary Cases
by Attila Érchegyi and Ervin Rácz
Entropy 2025, 27(10), 994; https://doi.org/10.3390/e27100994 - 24 Sep 2025
Viewed by 282
Abstract
This article presents an optimization of the explicit Euler method for a heat conduction model. The starting point of the paper was the analysis of the limitations of the explicit Euler scheme and the classical CFL condition in the transient domain, which pointed to the oscillation occurring in the intermediate states. To eliminate this phenomenon, we introduced the No-Sway Threshold given for the Fourier number (K), stricter than the CFL, which guarantees the monotonic approximation of the temperature–time evolution. Thereafter, by means of the identical inequalities derived based on the Method of Equating Coefficients, we determined the optimal values of Δt and Δx. Finally, for the construction of the variable grid spacing (M2), we applied the equation expressing the R of the identical inequality system and accordingly specified the thickness of the material elements (Δξ). As a proof-of-concept, we demonstrate the procedure on an application case with major simplifications: during an emergency shutdown of the Flexblue® SMR, the temperature of the air inside the tank instantly becomes 200 °C, while the initial temperatures of the water and the steel are 24 °C. For a 50.003 mm × 50.003 mm surface patch of the tank, we keep the leftmost and rightmost material elements of the uniform-grid (M1) and variable-grid (M2) single-line models at constant temperature; we scale the results up to the total external surface (6714.39 m2). In the M2 case, a larger portion of the heat power taken up from the air is expended on heating the metal, while the rise in the heat power delivered to the seawater is more moderate. At the 3000th min, the steel-wall temperature in M1 falls between 26.229 °C and 25.835 °C, whereas in M2 the temperature gradient varies between 34.648 °C and 30.041 °C, which confirms the advantage of the combination of variable grid spacing and the No-Sway Threshold. Full article
(This article belongs to the Special Issue Dissipative Physical Dynamics)
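
For context, the sketch below shows the generic 1-D explicit Euler conduction update and the grid Fourier number Fo = αΔt/Δx² that the classical stability condition (Fo ≤ 0.5) constrains; the paper's stricter No-Sway Threshold and variable-grid construction are not reproduced, and the material values are illustrative.

```python
import numpy as np

def explicit_euler_heat_1d(T0, alpha, dx, dt, steps, T_left, T_right):
    """1-D explicit Euler update T_i^{n+1} = T_i^n + Fo*(T_{i+1} - 2*T_i + T_{i-1}),
    with fixed-temperature end nodes. Fo = alpha*dt/dx^2 is the grid Fourier number;
    the classical explicit-scheme limit is Fo <= 0.5 (the paper's No-Sway Threshold
    is stricter and is not reproduced here)."""
    fo = alpha * dt / dx ** 2
    if fo > 0.5:
        raise ValueError(f"Fourier number {fo:.3f} exceeds the explicit stability limit 0.5")
    T = np.array(T0, dtype=float)
    for _ in range(steps):
        T[1:-1] = T[1:-1] + fo * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[0], T[-1] = T_left, T_right        # constant-temperature boundary nodes
    return T

# Illustrative steel wall: hot air on the left, seawater on the right (placeholder values).
alpha_steel = 1.2e-5        # m^2/s, approximate thermal diffusivity of steel
dx = 1e-3                   # 1 mm grid spacing, 51 nodes -> 50 mm wall
dt = 0.02                   # s, chosen so Fo = 0.24 < 0.5
T = explicit_euler_heat_1d(np.full(51, 24.0), alpha_steel, dx, dt,
                           steps=30000, T_left=200.0, T_right=24.0)
print("Fourier number:", alpha_steel * dt / dx ** 2)
print("mid-wall temperature after 600 s:", round(float(T[25]), 2), "degC")
```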

29 pages, 5817 KB  
Article
Unsupervised Segmentation and Alignment of Multi-Demonstration Trajectories via Multi-Feature Saliency and Duration-Explicit HSMMs
by Tianci Gao, Konstantin A. Neusypin, Dmitry D. Dmitriev, Bo Yang and Shengren Rao
Mathematics 2025, 13(19), 3057; https://doi.org/10.3390/math13193057 - 23 Sep 2025
Viewed by 389
Abstract
Learning from demonstration with multiple executions must contend with time warping, sensor noise, and alternating quasi-stationary and transition phases. We propose a label-free pipeline that couples unsupervised segmentation, duration-explicit alignment, and probabilistic encoding. A dimensionless multi-feature saliency (velocity, acceleration, curvature, direction-change rate) yields scale-robust keyframes via persistent peak–valley pairs and non-maximum suppression. A hidden semi-Markov model (HSMM) with explicit duration distributions is jointly trained across demonstrations to align trajectories on a shared semantic time base. Segment-level probabilistic motion models (GMM/GMR or ProMP, optionally combined with DMP) produce mean trajectories with calibrated covariances, directly interfacing with constrained planners. Feature weights are tuned without labels by minimizing cross-demonstration structural dispersion on the simplex via CMA-ES. Across UAV flight, autonomous driving, and robotic manipulation, the method reduces phase-boundary dispersion by 31% on UAV-Sim and by 30–36% under monotone time warps, noise, and missing data (vs. HMM); improves the sparsity–fidelity trade-off (higher time compression at comparable reconstruction error) with lower jerk; and attains nominal 2σ coverage (94–96%), indicating well-calibrated uncertainty. Ablations attribute the gains to persistence plus NMS, weight self-calibration, and duration-explicit alignment. The framework is scale-aware and computationally practical, and its uncertainty outputs feed directly into MPC/OMPL for risk-aware execution. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
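
The multi-feature saliency named above combines velocity, acceleration, curvature, and direction-change rate into one dimensionless signal. The sketch below computes those four features for a 2-D trajectory with uniform placeholder weights (the paper tunes the weights label-free with CMA-ES) and omits the keyframe selection via persistent peak–valley pairs and non-maximum suppression.

```python
import numpy as np

def saliency(xy, t, weights=(0.25, 0.25, 0.25, 0.25)):
    """Per-sample saliency from speed, acceleration magnitude, curvature, and
    direction-change rate of a 2-D trajectory. Each feature is min-max scaled
    so the combination is dimensionless; the uniform weights are placeholders."""
    x, y = xy[:, 0], xy[:, 1]
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
    speed = np.hypot(dx, dy)
    accel = np.hypot(ddx, ddy)
    curvature = np.abs(dx * ddy - dy * ddx) / np.maximum(speed, 1e-9) ** 3
    heading = np.unwrap(np.arctan2(dy, dx))
    turn_rate = np.abs(np.gradient(heading, t))
    feats = [speed, accel, curvature, turn_rate]
    scaled = [(f - f.min()) / (f.max() - f.min() + 1e-12) for f in feats]
    return sum(w * f for w, f in zip(weights, scaled))

# Toy demonstration: a straight segment, a sharp turn at t = 5, then another segment.
t = np.linspace(0, 10, 400)
x = np.where(t < 5, t, 5 + 0.2 * (t - 5))
y = np.where(t < 5, 0.1 * t, 0.5 + (t - 5))
s = saliency(np.column_stack([x, y]), t)
print("most salient sample index (near the turn):", int(np.argmax(s)))
```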

25 pages, 2438 KB  
Article
Interior Point-Driven Throughput Maximization for TS-SWIPT Multi-Hop DF Relays: A Log Barrier Approach
by Yang Yu, Xiaoqing Tang and Guihui Xie
Sensors 2025, 25(18), 5901; https://doi.org/10.3390/s25185901 - 21 Sep 2025
Viewed by 275
Abstract
This paper investigates a simultaneous wireless information and power transfer (SWIPT) decode-and-forward (DF) relay network, where a source node transmits data to a destination node through the assistance of multi-hop passive relays. We employ the time-switching (TS) protocol, enabling the relays to harvest energy from the received previous hop signal to support data forwarding. We first prove that the system throughput monotonically increases with the transmit power of the source node. Next, by employing logarithmic transformations, we convert the non-convex problem of obtaining optimal TS ratios at each relay to maximize the system throughput into a convex optimization problem. Comprehensively taking into account the convergence rate, computational complexity per iteration, and robustness, we selected the log barrier method—a type of interior point method—to address this convex optimization problem, along with providing a detailed implementation procedure. The simulation results validate the optimality of the proposed method and demonstrate its applicability to practical communication systems. For instance, the proposed scheme achieves 1437.3 bps throughput at 40 dBm maximum source power in a 2-relay network—278.6% higher than that of the scheme with TS ratio fixed at 0.75 (379.68 bps). On the other hand, it converges within a 1.36 ms computation time for 5 relays, 6 orders of magnitude faster than exhaustive search (1730 s). Full article
(This article belongs to the Section Communications)
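
A minimal sketch of the log barrier idea on a toy one-variable version of the problem: maximize a concave throughput surrogate over a TS ratio ρ ∈ (0, 1) by minimizing the barrier-augmented objective with a crude inner solver and increasing the barrier parameter. The surrogate objective, step sizes, and schedule are assumptions, not the paper's multi-hop model or solver settings.

```python
import numpy as np

def throughput(rho):
    """Placeholder concave surrogate for a single-hop TS-SWIPT link: a fraction rho
    of the block harvests energy, the remaining (1 - rho) carries data with an SNR
    proportional to the harvested energy. Not the paper's multi-relay throughput."""
    return (1 - rho) * np.log2(1 + 10 * rho / (1 - rho))

def log_barrier_maximize(f, lo=0.0, hi=1.0, t0=1.0, mu=10.0, outer=8, inner=300, lr=2e-3):
    """Maximize f on (lo, hi) by minimizing phi(x) = -t*f(x) - log(x - lo) - log(hi - x)
    with gradient steps (central-difference gradient), then increasing the barrier
    parameter t by a factor mu each outer iteration."""
    x, t = 0.5 * (lo + hi), t0
    eps = 1e-6
    for _ in range(outer):
        def phi(z):
            return -t * f(z) - np.log(z - lo) - np.log(hi - z)
        for _ in range(inner):
            g = (phi(x + eps) - phi(x - eps)) / (2 * eps)   # numerical gradient
            x = np.clip(x - (lr / t) * g, lo + 1e-4, hi - 1e-4)
        t *= mu
    return float(x)

rho_star = log_barrier_maximize(throughput)
print(f"optimal TS ratio ~ {rho_star:.3f}, surrogate throughput {throughput(rho_star):.3f} bit/s/Hz")
```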

20 pages, 10382 KB  
Article
Stability Analysis and Design of Composite Breakwater Based on Fluid-Solid Coupled Approach Using CFD/NDDA
by Xinyu Wang and Abdellatif Ouahsine
J. Mar. Sci. Eng. 2025, 13(9), 1817; https://doi.org/10.3390/jmse13091817 - 19 Sep 2025
Viewed by 285
Abstract
A composite breakwater is a commonly employed structure for coastal and harbor protection. However, strong hydrodynamic impact can lead to failure and instability of these protective structures. In this study, a two-dimensional fluid-porous-solid coupling model is developed to investigate the stability of composite breakwaters. The fluid-porous model is based on the Volume-Averaged Reynolds-Averaged Navier-Stokes equations, in which the nonlinear Forchheimer equations are added to describe the porous layer. The solid model employs the Nodal-based Discontinuous Deformation Analysis (NDDA) method to analyze the displacement of the caisson. NDDA is a nodal-based method that couples FEM and DDA to improve the handling of non-linear processes. This proposed coupled model permits the examination of the influence of the thickness and porosity of the porous layer on the maximum impacting wave height (IWHmax) and the turbulent kinetic energy (TKE) generation. The results show that high porosity values lead to the dissipation of TKE and reduce the IWHmax. However, the reduction in the IWHmax is not monotonic with increasing porous layer thickness. We observed that IWHmax reaches an optimum value as the porous layer thickness continues to increase. These results can contribute to improving the design of composite breakwaters. Full article
(This article belongs to the Section Coastal Engineering)

9 pages, 248 KB  
Article
Fixed-Point Theorem with a Novel Contraction Approach in Banach Algebras
by Hamza El Bazi, Younes Lahraoui, Cheng-Chi Lee, Loubna Omri and Abdellatif Sadrati
Mathematics 2025, 13(18), 3024; https://doi.org/10.3390/math13183024 - 19 Sep 2025
Viewed by 331
Abstract
In this paper, we establish a fixed-point theorem for mixed monotone operators in ordered Banach algebras by introducing a novel contraction condition formulated in terms of the product law, which represents a significant departure from the traditional additive approach. By exploiting the underlying algebraic structure, our method ensures both the existence and uniqueness of fixed points under broader conditions. To illustrate the effectiveness of the proposed theorem, we also provide a concrete example that demonstrates its applicability. Full article