Search Results (6,160)

Search Parameters:
Keywords = estimating nonlinearities

19 pages, 889 KB  
Article
Deep Spatiotemporal Forecasting and Reinforcement Optimization for Ambulance Allocation
by Yihjia Tsai, Yoshimasa Tokuyama, Jih Pin Yeh and Hwei Jen Lin
Mathematics 2026, 14(3), 483; https://doi.org/10.3390/math14030483 (registering DOI) - 29 Jan 2026
Abstract
Emergency Medical Services (EMS) require timely and equitable ambulance allocation supported by accurate demand estimation. In our prior work, we developed a statistical forecasting module based on Overall Smoothed Average Demand (OSAD) and Average Maximum (AMX) to estimate proportional EMS demand across spatial zones. Although this approach was interpretable and computationally efficient, it was limited in modeling nonlinear spatiotemporal dependencies and adapting to dynamic demand variations. This paper presents a unified deep learning-based EMS planning framework that integrates spatiotemporal demand forecasting with adaptive ambulance allocation. Specifically, the statistical OSAD/AMX estimators are replaced by graph-based spatiotemporal forecasting models capable of capturing spatial interactions and temporal dynamics. The predicted demand is then incorporated into a reinforcement learning-based allocator that dynamically optimizes ambulance placement under fairness, coverage, and operational constraints. Experiments conducted on real-world EMS datasets demonstrate that the proposed end-to-end framework not only improves demand forecasting accuracy but also translates these improvements into tangible operational benefits, including enhanced equity in resource distribution and reduced response distance. Compared with traditional statistical and heuristic-based baselines, the proposed approach provides a more adaptive and decision-aware solution for EMS planning. Full article
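
Editor's note: the following is not from the paper. It is a minimal toy sketch of the allocation step the abstract describes, namely turning forecast zone demand into an integer ambulance assignment for a fixed fleet; the names `allocate_fleet` and `demand_share` are illustrative, and the paper's reinforcement-learning allocator is not reproduced here.

```python
# Illustrative only: proportional ambulance allocation from forecast demand
# shares, using largest-remainder rounding. This is NOT the paper's
# reinforcement-learning allocator, just a simple baseline for intuition.
import numpy as np

def allocate_fleet(demand_share: np.ndarray, fleet_size: int) -> np.ndarray:
    """Assign an integer number of ambulances per zone, proportional to demand."""
    share = demand_share / demand_share.sum()   # normalize to proportions
    ideal = share * fleet_size                  # fractional allocation
    alloc = np.floor(ideal).astype(int)         # integer part first
    remainder = ideal - alloc
    leftover = fleet_size - alloc.sum()         # ambulances still unassigned
    # Give the remaining units to the zones with the largest fractional parts.
    for idx in np.argsort(remainder)[::-1][:leftover]:
        alloc[idx] += 1
    return alloc

# Hypothetical forecast shares for 5 spatial zones and a fleet of 12 ambulances.
print(allocate_fleet(np.array([0.30, 0.25, 0.20, 0.15, 0.10]), 12))
```
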
14 pages, 1104 KB  
Article
MAGE (Multimodal AI-Enhanced Gastrectomy Evaluation): Comparative Analysis of Machine Learning Models for Postoperative Complications in Central European Gastric Cancer Population
by Wojciech Górski, Marcin Kubiak, Amir Nour Mohammadi, Maksymilian Podleśny, Gian Luca Baiocchi, Manuele Gaioni, Santo Vincent Grasso, Andrew Gumbs, Timothy M. Pawlik, Bartłomiej Drop, Albert Chomątowski, Zuzanna Pelc, Katarzyna Sędłak, Michał Woś and Karol Rawicz-Pruszyński
Cancers 2026, 18(3), 443; https://doi.org/10.3390/cancers18030443 (registering DOI) - 29 Jan 2026
Abstract
Introduction: By leveraging dedicated datasets and predictive modeling, machine-learning (ML) algorithms can estimate the probability of both short- and long-term outcomes after surgery. The aim of this study was to evaluate the ability of ML-based models to predict postoperative complications in patients with gastric cancer (GC) undergoing multimodal therapy. In particular, we aimed to develop a free, publicly accessible online calculator based on preoperative variables. Materials and Methods: Patients with histologically confirmed locally advanced (cT2-4N0-3M0) GC who underwent multimodal treatment with curative intent between 2013 and 2023 were included in the study. An ML model evaluation pipeline with stratified 5-fold cross-validation was used. Results: A total of 368 patients were included in the final analytic cohort. Among five algorithm classes under 5-fold cross-validation, the area under the receiver operating characteristic curve (ROC AUC) was 0.9719, 0.9652, 0.9796, 0.8339 and 0.7581 for XGBoost, CatBoost, Random Forest, SVM and Logistic Regression, respectively. Macro F1 was 0.8714, 0.5094, 0.8820, 0.8714 and 0.4579 for XGBoost, SVM, Random Forest, CatBoost and Logistic Regression, respectively. Overall Accuracy was 0.8897, 0.5980, 0.8885, 0.8750 and 0.5466 for XGBoost, SVM, Random Forest, CatBoost and Logistic Regression models, respectively. Conclusions: In this Central and Eastern European cohort of patients with locally advanced GC, ML models using non-linear decision rules, particularly Random Forest and XGBoost, substantially outperformed conventional linear approaches in predicting the severity of postoperative complications. Prospective external validation is needed to clarify the model’s clinical utility and its potential role in perioperative decision support. Full article
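
Editor's note: a minimal sketch, not the study's pipeline, of comparing classifiers by ROC AUC under stratified 5-fold cross-validation as reported in the abstract. The data are synthetic (sized like the 368-patient cohort), and only scikit-learn models are shown to keep dependencies minimal.

```python
# Sketch: stratified 5-fold cross-validated ROC AUC comparison on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=368, n_features=20, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=300, random_state=0),
    "LogisticRegression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean ROC AUC = {scores.mean():.4f}")
```
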
24 pages, 7789 KB  
Article
Real-Time Acceleration Estimation for Low-Thrust Spacecraft Using a Dual-Layer Filter and an Interacting Multiple Model
by Zipeng Wu, Peng Zhang and Fanghua Jiang
Aerospace 2026, 13(2), 130; https://doi.org/10.3390/aerospace13020130 - 29 Jan 2026
Abstract
Orbit determination for non-cooperative targets represents a significant focus of research within the domain of space situational awareness. In contrast to cooperative targets, non-cooperative targets do not provide their orbital parameters, necessitating the use of observation data for accurate orbit determination. The increasing prevalence of low-cost, low-thrust spacecraft has heightened the demand for advancements in real-time orbit determination and parameter estimation for low-thrust maneuvers. This paper presents a novel dual-layer filter approach designed to facilitate real-time acceleration estimation for non-cooperative targets. Initially, the method employs a square-root cubature Kalman filter (SRCKF) to handle the nonlinearity of the system and a Jerk model to address the challenges in acceleration modeling, thereby yielding a preliminary estimation of the acceleration produced by the thruster of the non-cooperative target. Subsequently, a specialized filtering structure is established for the estimated acceleration, and two filtering frameworks are integrated into a dual-layer filter model via the cubature transform, significantly enhancing the estimation accuracy of acceleration parameters. Finally, to adapt to the potential on/off states of the thrusters, the Interacting Multiple Model (IMM) algorithm is employed to bolster the robustness of the proposed solution. Simulation results validate the effectiveness of the proposed method in achieving real-time orbit determination and acceleration estimation. Full article
(This article belongs to the Special Issue Precise Orbit Determination of the Spacecraft)
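
Editor's note: a minimal sketch of the cubature transform that underlies cubature Kalman filters such as the SRCKF named in the abstract; it shows only the transform itself on a toy nonlinear function, not the paper's dual-layer filter or IMM architecture.

```python
# Cubature transform: propagate 2n equally weighted cubature points through a
# nonlinear function and recover the predicted mean and covariance.
import numpy as np

def cubature_transform(f, x, P, Q):
    """Propagate mean x and covariance P through f; Q is additive process noise."""
    n = x.size
    S = np.linalg.cholesky(P)                                  # P = S S^T
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])       # unit cubature points
    points = x[:, None] + S @ xi                               # 2n sample points
    prop = np.array([f(points[:, i]) for i in range(2 * n)]).T
    mean = prop.mean(axis=1)
    dev = prop - mean[:, None]
    cov = dev @ dev.T / (2 * n) + Q
    return mean, cov

# Toy example: a mildly nonlinear 2-D dynamics function.
f = lambda s: np.array([s[0] + 0.1 * s[1], 0.9 * s[1] + 0.01 * s[0] ** 2])
m, C = cubature_transform(f, np.array([1.0, 2.0]), 0.1 * np.eye(2), 1e-3 * np.eye(2))
print(m, C)
```
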
33 pages, 3882 KB  
Article
Hybrid Feature Selection and Interpretable Random Forest Modeling for Olympic Medal Forecasting: Integrating CFO Optimization and Uncertainty Analysis
by Xinran Chen, Xuming Yan and Tanran Zhang
Mathematics 2026, 14(3), 478; https://doi.org/10.3390/math14030478 - 29 Jan 2026
Abstract
This study develops a data-driven predictive framework integrating hybrid feature selection, interpretable machine learning, and uncertainty quantification to forecast Olympic medal performance among elite nations. Focusing on the top ten countries from Paris 2024, the analysis employs a three-stage feature selection procedure combining Spearman correlation screening, random forest embedded importance, and the Caterpillar Fungus Optimizer (CFO) to identify stable long-term predictors. A novel test variable, rank, capturing historical competitive strength, and a refined continuous host-effect indicator derived from gravity-type trade models are introduced. Two complementary modeling strategies—a two-way fixed-effects econometric model and a CFO-optimized random forest—are implemented and validated. SHAP, LIME, and partial dependence plots enhance model interpretability, revealing nonlinear mechanisms underlying medal outcomes. Kernel density estimation generates probabilistic interval forecasts for Los Angeles 2028. Results demonstrate that historical performance and event-specific characteristics dominate medal predictions, while macroeconomic factors (GDP, population) and conventional host status contribute marginally once related variables are controlled. Consistent variable rankings across models and close alignment between 2028 projections and 2024 outcomes validate the framework’s robustness and practical applicability for sports policy and resource allocation decisions. Full article
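
Editor's note: a hedged sketch of one ingredient mentioned in the abstract, wrapping a random-forest point forecast with a kernel density estimate of its out-of-fold residuals to obtain a probabilistic interval. Data are synthetic and this is not the paper's CFO-optimized model or its exact interval procedure.

```python
# Random-forest point forecast + KDE of cross-validated residuals -> interval.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 30 + 10 * X[:, 0] - 5 * X[:, 1] ** 2 + rng.normal(scale=3, size=200)  # toy medal counts

model = RandomForestRegressor(n_estimators=300, random_state=0)
resid = y - cross_val_predict(model, X, y, cv=5)    # out-of-fold residuals
kde = gaussian_kde(resid)                           # smooth the residual distribution

model.fit(X, y)
point = model.predict(X[:1])[0]
samples = point + kde.resample(5000).ravel()        # simulate the predictive density
low, high = np.percentile(samples, [5, 95])
print(f"point={point:.1f}, 90% interval=({low:.1f}, {high:.1f})")
```
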
22 pages, 1360 KB  
Article
A Data-Driven Approach to Estimating Passenger Boarding in Bus Networks
by Gustavo Bongiovi, Teresa Galvão Dias, Jose Nauri Junior and Marta Campos Ferreira
Appl. Sci. 2026, 16(3), 1384; https://doi.org/10.3390/app16031384 - 29 Jan 2026
Abstract
This study explores the application of multiple predictive algorithms under general versus route-specialized modeling strategies to estimate passenger boarding demand in public bus transportation systems. Accurate estimation of boarding patterns is essential for optimizing service planning, improving passenger comfort, and enhancing operational efficiency. This research evaluates a range of predictive models to identify the most effective techniques for forecasting demand across different routes and times. Two modeling strategies were implemented: a generalist approach and a specialized one. The latter was designed to capture route-specific characteristics and variability. A real-world case study from a medium-sized metropolitan region in Brazil was used to assess model performance. Results indicate that ensemble-tree-based models, particularly XGBoost, achieved the highest accuracy and robustness in handling nonlinear relationships and complex interactions within the data. Compared to the generalist approach, the specialized approach demonstrated superior adaptability and precision, making it especially suitable for long-term and strategic planning applications. It reduced the average RMSE by 19.46% (from 13.84 to 11.15) and the MAE by 17.36% (from 9.60 to 7.93), while increasing the average R² from 0.289 to 0.344. However, these gains came with higher computational demands and an increased mean forecast bias (from 0.002 to 0.560), indicating a need for bias correction before operational deployment. The findings highlight the practical value of predictive modeling for transit authorities, enabling data-driven decision making in fleet allocation, route planning, and service frequency adjustment. Moreover, accurate demand forecasting contributes to cost reduction, improved passenger satisfaction, and environmental sustainability through optimized operations. Full article
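
Editor's note: not from the paper, just a small helper reproducing the kind of comparison the abstract reports (RMSE, MAE, R² and mean forecast bias) between a generalist and a route-specialized model's predictions. The demand series and predictions below are invented.

```python
# RMSE, MAE, R2 and mean forecast bias for two sets of predictions.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def summarize(y_true, y_pred, label):
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    mae = mean_absolute_error(y_true, y_pred)
    r2 = r2_score(y_true, y_pred)
    bias = np.mean(y_pred - y_true)   # positive = systematic over-prediction
    print(f"{label}: RMSE={rmse:.2f} MAE={mae:.2f} R2={r2:.3f} bias={bias:.3f}")

# Hypothetical boardings and two sets of predictions.
rng = np.random.default_rng(1)
y = rng.poisson(30, size=500).astype(float)
general = y + rng.normal(0.0, 14, 500)
special = y + rng.normal(0.5, 11, 500)   # lower error, slight positive bias
summarize(y, general, "generalist")
summarize(y, special, "specialized")
```
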
16 pages, 2451 KB  
Article
Stability Control of the DC/DC Converter in DC Microgrids Considering Negative Damping and Parameter Uncertainties
by Hao Deng, Wusong Wen, Yingchao Zhang, Sheng Long and Liping Jin
Energies 2026, 19(3), 697; https://doi.org/10.3390/en19030697 - 28 Jan 2026
Abstract
To address the issue of negative damping instability easily induced by DC/DC converters under constant power load (CPL) in DC microgrids and to enhance the control robustness of the system under uncertainties such as parameter perturbations, this paper designs a controller based on the linear active disturbance rejection control (LADRC) theory. Firstly, by establishing an equivalent model of the DC microgrid with CPL, the intrinsic relationship between the equivalent incremental admittance of the hybrid load and the system damping is revealed. Subsequently, treating the nonlinear characteristics of the CPL and model parameter variations as external disturbances, the linear extended state observer (LESO) is employed to estimate and compensate for the total system disturbance in real time. This effectively eliminates the risk of negative damping instability caused by the CPL and enhances the system’s robustness against parameter variations. Then, theoretical analysis is conducted from three perspectives, the convergence of disturbance estimation error, the stability of the closed-loop system, and robustness against parameter variations, thereby ensuring the reliability of the proposed control strategy. Finally, the proposed control strategy is validated through simulations and experiments. The results confirm that, even in the presence of negative damping effects and parameter variations, the strategy can effectively maintain fast tracking and stable control of the output voltage. Full article
(This article belongs to the Section F3: Power Electronics)
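
Editor's note: a minimal first-order LADRC sketch, illustrative only and not the paper's converter controller. A linear extended state observer (LESO) estimates the "total disturbance" (unmodeled dynamics, gain mismatch, load changes) and the control law cancels it; all plant parameters and bandwidths below are made up.

```python
# First-order LADRC with a second-order LESO, simulated by explicit Euler.
import numpy as np

dt, T = 1e-4, 0.2
wo, wc, b0 = 600.0, 150.0, 80.0            # observer/controller bandwidths, nominal gain
beta1, beta2, kp = 2 * wo, wo ** 2, wc

y, z1, z2, r = 0.0, 0.0, 0.0, 1.0          # plant output, LESO states, reference
for k in range(int(T / dt)):
    t = k * dt
    u = (kp * (r - z1) - z2) / b0          # cancel estimated disturbance, track r
    # "True" plant: uncertain gain and a load step at t = 0.1 s.
    d = -3.0 if t > 0.1 else 0.0
    ydot = -20.0 * y + d + 100.0 * u       # actual gain 100 vs nominal b0 = 80
    y += dt * ydot
    # LESO update: z1 tracks y, z2 tracks the total disturbance.
    e = y - z1
    z1 += dt * (z2 + b0 * u + beta1 * e)
    z2 += dt * (beta2 * e)

print(f"final output {y:.3f} (reference {r}), estimated disturbance {z2:.2f}")
```
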
29 pages, 1105 KB  
Article
Quantitative Modeling of Investment–Output Dynamics: A Panel NARDL and GMM-Arellano–Bond Approach with Evidence from the Circular Economy
by Dorin Jula, Nicolae-Marius Jula and Kamer-Ainur Aivaz
Mathematics 2026, 14(3), 463; https://doi.org/10.3390/math14030463 - 28 Jan 2026
Abstract
This study develops an integrated panel econometric framework for modeling investment–output dynamics in circular economy sectors, explicitly addressing dynamic propagation, long-run equilibrium relationships, endogeneity, and nonlinear responses. Building on the Samuelson–Hicks Multiplier–Accelerator model, the analysis combines two complementary approaches. A dynamic panel specification estimated by the Generalized Method of Moments (Arellano–Bond) is employed to capture output inertia, intertemporal transmission of investment shocks, and stability properties of the dynamic system. In parallel, a nonlinear panel ARDL model estimated using the Pooled Mean Group (PMG/NARDL) methodology is used to identify cointegration and to distinguish between the long-run and short-run effects of positive and negative investment variations. The empirical analysis relies on a balanced panel of 28 European economies (EU-27 and the United Kingdom) over the period 2005–2023, using sectoral circular economy data, with gross value added as the output variable and gross private investment as the main regressor. The results indicate the existence of a stable cointegrated relationship between investment and output, characterized by significant asymmetries, with expansionary investment shocks exerting larger and more persistent effects than contractionary shocks. Dynamic GMM estimates further confirm delayed investment effects and a stable autoregressive structure. Overall, the paper contributes to mathematical economic modeling by providing a unified dynamic–equilibrium panel framework and by extending the empirical relevance of Multiplier–Accelerator dynamics to circular economy systems. Full article
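
Editor's note: a sketch of the asymmetric decomposition used in NARDL-type models, not the paper's full PMG/GMM estimation. Investment changes are split into cumulative positive and negative partial sums, which then enter the regression as separate regressors; the series below is synthetic.

```python
# NARDL partial-sum decomposition of an investment series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
inv = pd.Series(100 + rng.normal(0, 5, 20).cumsum(), name="investment")

d = inv.diff().fillna(0.0)
pos_partial = d.clip(lower=0).cumsum()   # cumulative positive investment shocks
neg_partial = d.clip(upper=0).cumsum()   # cumulative negative investment shocks

decomp = pd.DataFrame({"inv": inv, "d_pos": pos_partial, "d_neg": neg_partial})
print(decomp.head())
```
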
17 pages, 2662 KB  
Article
Seasonal and Spatial Variations in General Extreme Value (GEV) Distribution Shape Parameter for Estimating Extreme Design Rainfall in Tasmania
by Iqbal Hossain, Shirley Gato-Trinidad and Monzur Alam Imteaz
Water 2026, 18(3), 319; https://doi.org/10.3390/w18030319 - 27 Jan 2026
Abstract
This paper demonstrates seasonal variations in the generalised extreme value (GEV) distribution shape parameter and discrepancies in GEV types within the same location. Daily rainfall data from 26 rain gauge stations located in Tasmania were used as a case study. Four GEV distribution parameter estimation techniques, namely MLE, GMLE, Bayesian, and L-moments, were used to determine the shape parameter of the distribution. With the estimated shape parameter, the spatial variations under different seasons were investigated through GIS interpolation maps. As there is strong evidence that shape parameters potentially vary across locations, spatial analysis focusing on the shape parameter across Tasmania (Australia) was performed. The outcomes of the analysis revealed that the shape parameters exhibit their highest and lowest values in winter, with a range from −0.234 to 0.529. The analysis of the rainfall data revealed that there is significant variation in the shape parameters among the seasons. The magnitude of the shape parameter decreases with elevation, and a non-linear relationship exists between these two parameters. This study extends knowledge on the current framework of GEV distribution shape parameter estimation techniques at the regional scale, enabling the adoption of appropriate GEV types and, thus, the appropriate determination of design rainfall to reduce hazards and protect our environments. Full article
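
Editor's note: an illustrative GEV fit to synthetic seasonal rainfall maxima using maximum likelihood only, not the station data or the four-estimator comparison in the paper. Note that scipy's shape parameter c corresponds to −ξ under the convention commonly used in hydrology, so a positive hydrological shape appears as a negative c.

```python
# Maximum-likelihood GEV fit to synthetic seasonal maxima.
import numpy as np
from scipy.stats import genextreme

# Synthetic winter seasonal maxima for one station (mm); c = -0.2 <=> xi = 0.2.
maxima = genextreme.rvs(c=-0.2, loc=40, scale=12, size=60, random_state=42)

c, loc, scale = genextreme.fit(maxima)
print(f"scipy shape c = {c:.3f}  ->  hydrological shape xi = {-c:.3f}")
print(f"location = {loc:.2f} mm, scale = {scale:.2f} mm")
```
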
24 pages, 1852 KB  
Article
State Estimation-Based Disturbance Rejection Control for Third-Order Fuzzy Parabolic PDE Systems with Hybrid Attacks
by Karthika Poornachandran, Elakkiya Venkatachalam, Oh-Min Kwon, Aravinth Narayanan and Sakthivel Rathinasamy
Mathematics 2026, 14(3), 444; https://doi.org/10.3390/math14030444 - 27 Jan 2026
Abstract
In this work, we develop a disturbance suppression-oriented fuzzy sliding mode secured sampled-data controller for third-order parabolic partial differential equations that ought to cope with nonlinearities, hybrid cyber attacks, and modeled disturbances. This endeavor is mainly driven by formulating an observer model with a T–S fuzzy mode of execution that retrieves the latent state variables of the perceived system. Progressing onward, the disturbance observers are formulated to estimate the modeled disturbances emerging from the exogenous systems. In due course, the information received from the system and disturbance estimators, coupled with the sliding surface, is compiled to fabricate the developed controller. Furthermore, in the realm of security, hybrid cyber attacks are scrutinized through the use of stochastic variables that abide by the Bernoulli-distributed white sequence, which capture their unpredictability. Proceeding further in this framework, a set of linear matrix inequality conditions is established that relies on the Lyapunov stability theory. Precisely, the refined looped Lyapunov–Krasovskii functional paradigm, which reflects in the sampling period that is intricately split into non-uniform intervals by leveraging a fractional-order parameter, is deployed. In line with this pursuit, a strictly (Φ₁, Φ₂, Φ₃)-ϱ dissipative framework is crafted with the intent to curb norm-bounded disturbances. A simulation-backed numerical example is unveiled in the closing segment to underscore the potency and efficacy of the developed control design technique. Full article
14 pages, 443 KB  
Article
A Bayesian Decision-Theoretic Optimization Model for Personalized Timing of Non-Invasive Prenatal Testing Based on Maternal BMI
by Yubu Ding, Kaixuan Ni, Xiaona Fan and Qinglun Yan
Mathematics 2026, 14(3), 437; https://doi.org/10.3390/math14030437 - 27 Jan 2026
Abstract
Non-invasive prenatal testing (NIPT) is widely used for fetal aneuploidy screening, but its clinical utility depends on gestational timing and maternal characteristics. Low fetal fraction can lead to unreportable tests and increased false negative risk, while GC-content-related sequencing bias may contribute to both false positive and false negative findings. We propose a Bayesian decision-theoretic optimization framework to recommend personalized NIPT timing across maternal body mass index (BMI) strata, explicitly incorporating test credibility and detection errors. We performed a retrospective analysis of de-identified NIPT records from a hospital in Guangdong Province, China, covering 1 January 2023 to 18 February 2024, including 1082 male fetus tests. Y chromosome concentration was used as a proxy for test reportability, with a 4 percent reporting threshold. Detection state proportions were empirically summarized from clinical reference information, with false positives at 10.35 percent and false negatives at 2.77 percent. A logistic regression model quantified the probability of obtaining a reportable result as a function of gestational week, maternal age, height, and weight, and the estimated probabilities were used to parameterize the Bayesian risk model. The optimized BMI-stratified schedule produced six BMI groups with recommended testing weeks ranging from 11 to 16, and the overall expected risk converged to 0.531. These results indicate a nonlinear BMI–timing relationship and suggest that a single universal testing week is suboptimal. The proposed framework provides quantitative decision support for BMI-stratified NIPT scheduling in clinical practice. Full article
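
Editor's note: a toy version of the decision idea in the abstract, not the paper's model. It estimates the probability of a reportable result by gestational week with a hypothetical logistic model, then picks the week minimizing an expected risk that trades off late testing against an unreportable result; every coefficient and cost below is invented, so the recommended weeks will not match the paper's.

```python
# Expected-risk minimization over testing weeks, per BMI value (toy numbers).
import numpy as np

def p_reportable(week, bmi):
    """Hypothetical logistic model: reportability rises with week, falls with BMI."""
    logit = -3.0 + 0.6 * week - 0.11 * bmi
    return 1.0 / (1.0 + np.exp(-logit))

def expected_risk(week, bmi, c_delay=0.07, c_fail=1.0):
    # Made-up costs: waiting one extra week vs. an unreportable (retest) result.
    return c_delay * (week - 10) + c_fail * (1.0 - p_reportable(week, bmi))

weeks = np.arange(10, 21)
for bmi in (22, 28, 34, 40):
    best = weeks[np.argmin([expected_risk(w, bmi) for w in weeks])]
    print(f"BMI {bmi}: recommended testing week {best}")
```
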
15 pages, 3073 KB  
Article
Categorical Prediction of the Anthropization Index in the Lake Tota Basin, Colombia, Using XGBoost, Remote Sensing and Geomorphometry Data
by Ana María Camargo-Pérez, Iván Alfonso Mayorga-Guzmán, Gloria Yaneth Flórez-Yepes, Ivan Felipe Benavides-Martínez and Yeison Alberto Garcés-Gómez
Earth 2026, 7(1), 17; https://doi.org/10.3390/earth7010017 - 27 Jan 2026
Abstract
This study presents a machine learning framework to automate the mapping of the Integrated Relative Anthropization Index (INRA, by its Spanish acronym). A predictive model was developed to estimate the degree of anthropization in the basin of Lake Tota, Colombia, using the XGBoost machine learning algorithm and remote sensing data. This research, part of a broader wetland monitoring project, aimed to identify the optimal spatial scale for analysis and the most influential predictor variables. Methodologically, models were tested at resolutions from 20 m to 500 m. The results indicate that a 50 m spatial scale provides the optimal balance between predictive accuracy and computational efficiency, achieving robust performance in identifying highly anthropized areas (sensitivity: 0.83, balanced accuracy: 0.91). SHAP analysis identified proximity to infrastructure and specific Sentinel-2 spectral bands as the most influential predictors in the INRA emulation model. The main result is a robust, replicable model that produces a detailed anthropization map, serving as a practical tool for monitoring human impact and supporting sustainable management strategies in threatened high-Andean ecosystems. Rather than a simple classification exercise, this approach serves to deconstruct the INRA methodology, using SHAP analysis to reveal the latent non-linear relationships between spectral variables and human impact, providing a transferable and explainable monitoring tool. Full article
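
Editor's note: a quick reference, not the paper's code, for the two headline metrics the abstract reports for the 50 m model (sensitivity and balanced accuracy); the labels below are made up.

```python
# Sensitivity (recall of the positive class) and balanced accuracy.
from sklearn.metrics import balanced_accuracy_score, recall_score

# Hypothetical labels: 1 = highly anthropized cell, 0 = otherwise.
y_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1]

sensitivity = recall_score(y_true, y_pred)           # true-positive rate
bal_acc = balanced_accuracy_score(y_true, y_pred)    # mean of TPR and TNR
print(f"sensitivity = {sensitivity:.2f}, balanced accuracy = {bal_acc:.2f}")
```
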
27 pages, 4350 KB  
Article
Reduced-Order Legendre–Galerkin Extrapolation Method with Scalar Auxiliary Variable for Time-Fractional Allen–Cahn Equation
by Chunxia Huang, Hong Li and Baoli Yin
Fractal Fract. 2026, 10(2), 83; https://doi.org/10.3390/fractalfract10020083 - 26 Jan 2026
Abstract
This paper presents a reduced-order Legendre–Galerkin extrapolation (ROLGE) method combined with the scalar auxiliary variable (SAV) approach (ROLGE-SAV) to numerically solve the time-fractional Allen–Cahn equation (tFAC). First, the nonlinear term is linearized via the SAV method, and the linearized system derived from this SAV-based linearization is time-discretized using the shifted fractional trapezoidal rule (SFTR), resulting in a semi-discrete unconditionally stable scheme (SFTR-SAV). The scheme is then fully discretized by incorporating Legendre–Galerkin (LG) spatial discretization. To enhance computational efficiency, a proper orthogonal decomposition (POD) basis is constructed from a small set of snapshots of the fully discrete solutions on an initial short time interval. A reduced-order LG extrapolation SFTR-SAV model (ROLGE-SFTR-SAV) is then implemented over a subsequent extended time interval, thereby avoiding redundant computations. Theoretical analysis establishes the stability of the reduced-order scheme and provides its error estimates. Numerical experiments validate the effectiveness of the proposed method and the correctness of the theoretical results. Full article
(This article belongs to the Section Numerical and Computational Methods)
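
Editor's note: a sketch of the proper orthogonal decomposition (POD) step the abstract describes, namely building a reduced basis from early-time snapshots via the SVD and projecting later states onto it. It uses random stand-in snapshots and is not the paper's ROLGE-SFTR-SAV scheme itself.

```python
# POD basis from a snapshot matrix and projection of a new state.
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snap, r = 200, 30, 6                   # spatial dofs, snapshots, POD modes

snapshots = rng.normal(size=(n_dof, n_snap))    # stand-in for early-time solutions
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :r]                                # first r left singular vectors

energy = s[:r] ** 2 / np.sum(s ** 2)            # fraction of snapshot energy captured
u_new = rng.normal(size=n_dof)                  # a later full-order state
u_reduced = basis.T @ u_new                     # r reduced coordinates
u_approx = basis @ u_reduced                    # lift back to full space
print(f"modes kept: {r}, energy captured: {energy.sum():.2f}")
```
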
26 pages, 1299 KB  
Article
Function Meets Environment–Approach for the Environmental Assessment of Food Packaging, Taking into Account Packaging Functionality
by Alina Siebler, Jonas Keller, Mara Strenger, Tim Prescher, Stefan Albrecht and Markus Schmid
Sustainability 2026, 18(3), 1222; https://doi.org/10.3390/su18031222 - 26 Jan 2026
Abstract
As food typically accounts for substantially higher resource use and potential environmental impacts than its packaging, packaging-related food wastage must be considered in environmental assessments of food packaging. However, this is not currently performed as standard in Life Cycle Assessment (LCA). This article proposes a conceptual framework as an approach to systematically integrating packaging functionality into LCA by incorporating packaging-related food wastage depending on shelf-life and due to technical emptiability. Based on literature data, a segmented non-linear regression is proposed to estimate shelf-life-dependent food wastage at retail level. Two exponential models provide a consistently decreasing relationship between shelf-life and food wastage, with S = 0.064 for products with ≤30 days shelf-life and S = 0.036 for >30 days shelf-life. These values indicate a satisfactory internal fit within both shelf-life segments. In addition, established experimental procedures for quantifying packaging emptiability are integrated to capture further packaging-related food wastage. The approach yields a pragmatic estimation of packaging-related food wastage that can be operationalized in packaging LCAs. Rather than predicting exact amounts of food wastage, the framework enables a more holistic, function-oriented assessment of food packaging by making environmental trade-offs between packaging design, shelf-life effects and emptiability transparent for screening-level LCA. Full article
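
Editor's note: a rough illustration on synthetic data of the segmented exponential regression the abstract describes, fitting wastage = a·exp(−b·shelf_life) separately for products with shelf-life ≤ 30 days and > 30 days. The generated data and the fitted parameters are not the paper's values.

```python
# Segmented exponential fit of retail wastage share vs. shelf-life (toy data).
import numpy as np
from scipy.optimize import curve_fit

def expo(t, a, b):
    return a * np.exp(-b * t)

rng = np.random.default_rng(0)
shelf_life = np.concatenate([rng.uniform(2, 30, 40), rng.uniform(31, 180, 40)])
wastage = expo(shelf_life, 0.20, 0.02) + rng.normal(0, 0.005, 80)

for label, mask in [("<=30 days", shelf_life <= 30), (">30 days", shelf_life > 30)]:
    params, _ = curve_fit(expo, shelf_life[mask], wastage[mask], p0=(0.2, 0.02))
    print(f"{label}: a = {params[0]:.3f}, b = {params[1]:.4f}")
```
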
18 pages, 16946 KB  
Article
Layer-Stripping Velocity Analysis Method for GPR/LPR Data
by Nan Huai, Tao Lei, Xintong Liu and Ning Liu
Appl. Sci. 2026, 16(3), 1228; https://doi.org/10.3390/app16031228 - 25 Jan 2026
Abstract
Diffraction-based velocity analysis is a key data interpretation technique in geophysical exploration, typically relying on the geometric characteristics, energy distribution, or propagation paths of diffraction waves. The hyperbola-based method is a classical strategy in this category, which extracts depth-dependent velocity (or dielectric properties) by correlating the hyperbolic shape of diffraction events with subsurface parameters for characterizing subsurface structures and material compositions. In this study, we propose a layer-stripping velocity analysis method applicable to ground-penetrating radar (GPR) and lunar-penetrating radar (LPR) data, with two main innovations: (1) replacing traditional local optimization algorithms with an intuitive parallelism check scheme, eliminating the need for complex nonlinear iterations; (2) performing depth-progressive velocity scanning of radargram diffraction signals, where shallow-layer velocity analysis constrains deeper-layer calculations. This strategy avoids misinterpretations of deep geological objects’ burial depth, morphology, and physical properties caused by a single average velocity or independent deep-layer velocity assumptions. The workflow of the proposed method is first demonstrated using a synthetic rock-fragment layered model, then applied to derive the near-surface dielectric constant distribution (down to 27 m) at the Chang’e-4 landing site. The estimated values range from 2.55 to 6, with the depth-dependent profile revealing lunar regolith stratification and interlayer material property variations. Consistent with previously reported results for the Chang’e-4 region, our findings confirm the method’s applicability to LPR data, providing a new technical framework for high-resolution subsurface structure reconstruction. Full article
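
Editor's note: a minimal example of the classical zero-offset hyperbola relation behind diffraction-based velocity analysis, t(x) = 2·sqrt(d² + (x − x₀)²)/v, fitted to synthetic picks to recover velocity, diffractor depth, and dielectric constant. Purely illustrative; it is not the paper's layer-stripping or parallelism-check scheme.

```python
# Fit a diffraction hyperbola to synthetic traveltime picks.
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(x, v, d, x0):
    return 2.0 * np.sqrt(d ** 2 + (x - x0) ** 2) / v

# Synthetic picks: v = 0.1 m/ns (dielectric constant ~9), diffractor at 1.5 m depth.
x = np.linspace(0, 4, 41)
t = hyperbola(x, 0.1, 1.5, 2.0) + np.random.default_rng(0).normal(0, 0.3, x.size)  # ns

(v, d, x0), _ = curve_fit(hyperbola, x, t, p0=(0.12, 1.0, 2.0))
eps_r = (0.3 / v) ** 2   # c = 0.3 m/ns in vacuum; epsilon_r = (c / v)^2
print(f"v = {v:.3f} m/ns, depth = {d:.2f} m, apex = {x0:.2f} m, eps_r = {eps_r:.1f}")
```
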
18 pages, 1760 KB  
Article
The Prognostic Nutritional Index and Glycemic Status Synergistically Predict Early Renal Function Decline in Type 2 Diabetes: A Community-Based Cohort Study
by Yuting Yu, Jianguo Yu, Jing Li, Jiedong Xu, Yunhui Wang, Lihua Jiang, Genming Zhao and Yonggen Jiang
Nutrients 2026, 18(3), 395; https://doi.org/10.3390/nu18030395 - 25 Jan 2026
Abstract
Background/Objectives: The Prognostic Nutritional Index (PNI), which integrates serum albumin and lymphocyte count, reflects both nutritional and inflammatory status. However, its role in early renal function decline among patients with type 2 diabetes (T2D), particularly in relation to glycemic control, remains unclear. This study aimed to: (1) characterize the dose–response relationship between PNI and early renal function decline in type 2 diabetes using restricted cubic splines; (2) identify whether glycemic control (HbA1c) modifies the PNI–renal decline association; and (3) evaluate the clinical utility of combining PNI and HbA1c for risk stratification. Methods: We analyzed data from 1711 community-based participants with T2D who had preserved renal function at baseline. The PNI was calculated as serum albumin (g/L) + 5 × lymphocyte count (×10⁹/L). The primary outcome was a composite of rapid estimated glomerular filtration rate (eGFR) decline (>3 mL/min/1.73 m² per year) or incident chronic kidney disease (CKD) stage 3. Restricted cubic spline models, multivariable regression, and Johnson–Neyman analyses were used to examine non-linearity and effect modification by glycated hemoglobin (HbA1c). Results: A consistent inverse linear association was observed between PNI and the rate of eGFR decline (P for non-linearity > 0.05). Johnson–Neyman analysis further demonstrated that the protective association of PNI was statistically significant within an HbA1c range of 7.24% to 8.71%. Stratification by clinical cut-offs revealed a significant effect modification by glycemic status. The inverse linear association between PNI and renal risk was most pronounced under hyperglycemic stress, as evidenced by the markedly elevated incidence (50.0%) among individuals with both poor glycemic control (HbA1c ≥ 8%) and low PNI (<50). Conversely, under good glycemic control (HbA1c < 8%), this inverse association was substantially attenuated, with a lower incidence observed in the low-PNI subgroup (6.7%) than in the high-PNI subgroup (15.9%). These findings indicate that the protective role of PNI is conditional upon the glycemic milieu. Conclusions: The PNI demonstrates a stable linear association with early renal function decline in T2D, with its protective effect most pronounced at suboptimal HbA1c levels. Combining PNI and HbA1c effectively identifies a high-risk subgroup characterized by synergistic risk, underscoring the need for integrated nutritional and glycemic management. Full article
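
Editor's note: a direct transcription of the PNI formula quoted in the abstract plus the two clinical cut-offs used for stratification (PNI < 50, HbA1c ≥ 8%); the example values are invented.

```python
# PNI = albumin (g/L) + 5 x lymphocyte count (x10^9/L), then flag the
# high-risk stratum (low PNI combined with poor glycemic control).
import pandas as pd

df = pd.DataFrame({
    "albumin_g_per_L": [42.0, 36.5, 45.1, 38.0],
    "lymphocytes_1e9_per_L": [1.8, 1.1, 2.3, 1.4],
    "hba1c_pct": [6.9, 8.6, 7.5, 8.2],
})

df["PNI"] = df["albumin_g_per_L"] + 5 * df["lymphocytes_1e9_per_L"]
df["high_risk"] = (df["PNI"] < 50) & (df["hba1c_pct"] >= 8)   # low PNI + poor control
print(df)
```
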