Search Results (1,378)

Search Parameters:
Keywords = estimation of systematic risk

18 pages, 351 KB  
Article
From FII Dependence to DII Dominance: Behavioral Dynamics and Minskyan Risk in India’s Stock Market
by Suneel Maheshwari and Deepak Raghava Naik
J. Risk Financial Manag. 2026, 19(5), 315; https://doi.org/10.3390/jrfm19050315 - 26 Apr 2026
Abstract
This study examines how market leadership in Indian equities has structurally shifted away from foreign institutional investors (FIIs) toward domestic institutional investors (DIIs) and mutual funds (MFs), and it evaluates the systemic risks created by this rebalancing. Using monthly transaction data from April 2007 to January 2026, we analyze evolving investment patterns among FIIs, DIIs, and MFs by employing trend analysis, Pearson’s and Spearman’s correlation analyses, phase decomposition, stationarity tests, Granger causality analysis, ARIMA modelling, and GARCH volatility estimation. Since 2021, FIIs have recorded cumulative net outflows exceeding ₹8.68 lakh crore (US$95.36 billion), while DIIs, led mainly by mutual funds financed largely through Systematic Investment Plans (SIPs), have made net purchases of over ₹19.37 lakh crore (US$212.67 billion), effectively absorbing FII selling and helping to maintain elevated index levels. The trend continues, with the SENSEX having remained above 80,000 points through 2025 despite persistent FII disengagement. The DII share of total market purchases rose from approximately 39% in 2017 to over 54% by January 2026, documenting a structural shift in market composition. The results show that DII flows have stayed positively and significantly correlated with the SENSEX, with FII flows being significantly negatively correlated. Granger causality tests suggest market-responsive rather than market-driving behavior by domestic institutions. Drawing upon Minsky’s financial instability hypothesis and behavioral finance frameworks, we interpret prolonged domestic absorption of FII selling as a potential source of fragility where direct fundamental evidence is unavailable. The Minsky-type fragility interpretation is offered as a structured hypothesis for future empirical investigation. The findings carry important implications for retail investors, fund managers, and regulators. Full article
(This article belongs to the Special Issue Behavioral Factors and Risk-Taking in Financial Markets)
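The correlation step named in the abstract (Pearson's r between institutional flows and index levels) has a simple closed form. A minimal sketch in pure Python; the two series below are invented toy numbers, not the paper's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy monthly series (hypothetical): index levels rising with DII net inflows.
dii_flows = [1.0, 2.0, 1.5, 3.0, 2.5, 4.0]
index_level = [100, 104, 103, 109, 107, 113]
r = pearson_r(dii_flows, index_level)  # strongly positive for these toy data
```

A positive r of this kind is what the abstract reports for DII flows against the SENSEX; the paper's significance tests and Granger causality analysis go well beyond this sketch.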
14 pages, 412 KB  
Article
Impact of Prehospital Lung Ultrasound on Diagnostic Precision and Hospital Transport in Patients with Dyspnea and Respiratory Failure: A Retrospective Comparative Analysis
by Damian Kowalczyk and Mikołaj Tyczyński
Diagnostics 2026, 16(9), 1297; https://doi.org/10.3390/diagnostics16091297 - 26 Apr 2026
Abstract
Background: Dyspnea is a common reason for emergency medical service (EMS) interventions and is associated with a substantial risk of severe clinical course, complications, and hospital admission. Its differential diagnosis in the prehospital setting remains challenging due to the limited availability of imaging modalities. Point-of-care ultrasound (POCUS), including lung ultrasound (LUS), is a rapid, field-applicable technique recommended in numerous acute respiratory diagnostic scenarios. Objective: To evaluate the use of lung ultrasound in the prehospital setting and its association with the precision of diagnoses related to respiratory failure, the frequency of transport to the emergency department (ED) among patients presenting with dyspnea/respiratory failure, and to characterize the profile of sonographic findings with their correlation to clinical diagnostic categories. Additionally, transport rates in the study population were compared with aggregated regional data for the Masovian Voivodeship (excluding the analyzed county). Methods: A retrospective observational study was conducted on EMS interventions performed between 01 January 2025 and 30 June 2025 in Legionowo County (N = 353). The analysis included ICD-10 codes assigned in prehospital documentation (one primary code and up to two additional codes) in patients presenting with dyspnea and/or respiratory failure, the performance of ultrasound examination, and resulting LUS findings (absence of pleural sliding and/or lung point; B-lines; consolidations; C-lines; pleural effusion). Descriptive analyses, frequency comparison tests (χ2/Fisher), estimation of relative risk (RR) with 95% confidence intervals (CI), and agreement analysis using Cohen’s kappa coefficient (κ) between etiological categories derived from ICD-10 codes and those inferred from LUS profiles were performed (κ with 95% CI estimated using bootstrap resampling). 
The study was reported in accordance with the STROBE guidelines for observational studies. Additionally, the distribution of ICD-10 coding and the proportion of hospital transports across the entire Masovian Voivodeship were compared with those observed in the analyzed area. Results: Ultrasound examination was performed in 72/353 (20.4%) EMS interventions; transport to the emergency department occurred in 239/353 (67.7%) cases. The most frequent clinical categories based on ICD-10 codes were: general/symptom-based 182/353 (51.6%), inflammatory 77/353 (21.8%), obstructive 66/353 (18.7%), and cardiological 20/353 (5.7%). Among abnormal LUS findings, the most common were B-lines (43/72; 61.4%) and consolidations (29/72; 41.4%). Consolidations were strongly associated with the inflammatory category (OR 9.72; p < 0.001), whereas B-lines were associated with the cardiological category (OR 23.41; p = 0.0011) among cases in which LUS was performed. Ultrasound use was associated with a higher frequency of assigning at least one targeted (non-symptom-based) diagnosis within ICD coding: 53/72 (73.6%) vs. 111/278 (39.9%), RR 1.84 (95% CI 1.51–2.25; p < 0.001). Agreement between the ICD-10 etiological category (inflammatory/cardiological/obstructive/other) and the category inferred from the LUS profile was moderate: κ = 0.36 (95% CI 0.21–0.51), with an observed agreement of 54.2%. Compared with aggregated regional data (Masovian Voivodeship excluding the analyzed county), the overall transport rate for comparable ICD-10 codes was lower in the study unit: 279/409 (68.2%) vs. 11,351/13,785 (82.3%), RR 0.83 (95% CI 0.78–0.89; p < 0.001). The largest differences were observed for dyspnea (R06.0: 72.9% vs. 88.2%; RR 0.83) and obstructive codes (J44/J45/J46 combined: 43.1% vs. 67.0%; RR 0.64). 
Conclusions: In this retrospective analysis, an EMS unit with systematically implemented ultrasound demonstrated a lower frequency of hospital transport for selected dyspnea/respiratory failure codes compared with regional data and greater precision in ICD-10 diagnostic coding in cases where ultrasound was performed. The profile of LUS findings correlated with clinical categories in a manner consistent with existing literature. Full article
(This article belongs to the Special Issue Application of Ultrasound Imaging in Clinical Diagnosis)
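The relative risk with a Wald confidence interval on the log scale, as used in this abstract, can be reproduced directly from the reported counts (53/72 targeted diagnoses with LUS vs. 111/278 without):

```python
import math

def relative_risk(a, n1, c, n2, z=1.96):
    """Relative risk of an event in an exposed group (a/n1) vs. an
    unexposed group (c/n2), with a Wald 95% CI computed on the log scale."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Counts from the abstract: 53/72 (LUS performed) vs. 111/278 (no LUS).
rr, lo, hi = relative_risk(53, 72, 111, 278)
# rr ≈ 1.84 with CI ≈ (1.51, 2.25), matching the reported RR 1.84 (95% CI 1.51–2.25)
```

This recovers the paper's headline RR exactly; the kappa and bootstrap analyses in the abstract are separate computations not sketched here.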
28 pages, 6628 KB  
Article
Unified AI Framework for Decarbonization in Large-Scale Building Energy Systems: Integrating Acoustic-Vision Leak Detection and Schedule-Aware Machine Learning
by Mooyoung Yoo
Buildings 2026, 16(9), 1698; https://doi.org/10.3390/buildings16091698 - 26 Apr 2026
Abstract
Compressed air systems (CASs) represent a significant portion of energy consumption in large-scale built environments and manufacturing facilities, suffering from both micro-level physical pipeline leaks and macro-level operational inefficiencies. This paper proposes a unified, dual-action artificial intelligence framework aimed at advancing building decarbonization by systematically integrating acoustic-vision leak quantification with schedule-aware machine learning. Specifically, the framework targets pneumatic pipe connection leaks, fitting leaks, and joint degradation faults within compressed air distribution networks, which are the primary sources of micro-level volumetric energy losses in industrial building systems. First, a probabilistic multimodal fusion algorithm (MPSF) using an ultrasonic camera is developed to detect and geometrically quantify physical leaks, successfully translating pixel areas into physical facility energy loss metrics (estimating 11.0 kW of wasted power from detected severe leaks). Second, to optimize the compressor’s supply matching the actual facility demand without risking data leakage from internal flow sensors, an eXtreme Gradient Boosting (XGBoost) model is proposed. By utilizing only external building environmental conditions and the real-time operational schedules of 13 distinct zones, the model achieves highly accurate dynamic power prediction (R2 = 0.9698). Finally, comprehensive simulations based on real-world digital monitoring data from a facility-scale built environment demonstrate that only the concurrent application of both modules ensures stable end-point pressure. The integrated framework achieves a substantial system-wide building energy reduction of over 20% to 40% compared to baseline constant-pressure operations, yielding an estimated annual reduction of 116 tons of CO2 emissions, thereby providing a direct pathway toward carbon-neutral building operations. Full article
(This article belongs to the Special Issue Built Environment and Building Energy for Decarbonization)
35 pages, 5864 KB  
Review
The State of Practice in Application of Natural Language Processing in Transportation Safety Analysis
by Mohammadjavad Bazdar, Hyun Kim, Branislav Dimitrijevic and Joyoung Lee
Appl. Sci. 2026, 16(9), 4223; https://doi.org/10.3390/app16094223 - 25 Apr 2026
Abstract
This paper provides a systematic review of recent applications of NLP methods for analyzing traffic crash reports, with a focus on estimating crash severity, crash duration, and crash causation. The review covers prior research using probabilistic topic modeling methods such as LDA, STM, and hierarchical Dirichlet processes in addition to research using transformer-based language models, which include encoder-based models like BERT and PubMedBERT as well as decoder-based models like GPT, GPT2, ChatGPT, GPT-3, and LLaMA. The review starts with a systematic literature selection process with predefined inclusion criteria. We categorize the reviewed studies into the following application areas: crash severity prediction, risk factor identification in crashes, and road safety analysis. The results show several complementary advantages of using different NLP techniques to achieve different analytical goals. Topic models allow for interpretable and exploratory pattern discovery, while encoder models are well-suited for structured prediction problems. Decoder models have the additional flexibility to perform zero-shot and few-shot reasoning, which makes them useful for reasoning about under-sampled or under-reported data. Across the literature, hybrid methods that combine text and structured data outperform individual methods in terms of prediction accuracy and broad applicability. Challenges across the literature include class imbalance, lack of standardization in preprocessing and evaluation methods, and the tradeoff between prediction accuracy and interpretability of prediction models. These findings highlight the importance of aligning model selection with data availability and operational constraints, pointing toward future research directions in hybrid modeling frameworks, standardized evaluation protocols, and real-world deployment of NLP-driven traffic safety systems. Full article
(This article belongs to the Special Issue Traffic Safety Measures and Assessment: 2nd Edition)
26 pages, 1233 KB  
Article
Does Exchange Rate Volatility Matter for Banking-Sector Financial Stability? A Global Analysis
by Olajide O. Oyadeyi, Md Mizanur Rahman, Obinna Ugwu, Bisayo O. Otokiti and Adekunle Adewole
J. Risk Financial Manag. 2026, 19(5), 313; https://doi.org/10.3390/jrfm19050313 - 25 Apr 2026
Abstract
Exchange rate volatility has intensified in recent decades, yet its systematic implications for banking-sector stability remain contested. This study investigates whether exchange rate volatility constitutes a meaningful source of financial fragility using a global panel of 103 countries over the period 2000–2021. Financial stability is proxied by the banking-sector Z-score, while exchange rate volatility is estimated using an EGARCH-based framework to capture time-varying uncertainty. To address cross-sectional dependence, heterogeneity, and endogeneity, the analysis employs Driscoll–Kraay fixed effects, two-step system GMM, and quantile regressions. The results reveal that exchange rate volatility exerts a statistically and economically significant negative effect on banking stability, reducing Z-scores across countries and income groups. The findings remain robust across alternative specifications and estimators. Bank-level fundamentals—capitalisation, liquidity, and credit—enhance stability, whereas higher non-performing loans and risk exposure amplify fragility. Macroeconomic conditions also matter, with stronger growth, institutional quality and external balances supporting resilience, while inflation, economic policy uncertainty and expansionary government spending weaken stability. By integrating time-varying volatility modelling with dynamic panel techniques in a large cross-country setting, this study provides new global evidence that exchange rate volatility is not merely a macroeconomic fluctuation but a structural source of banking-sector risk. The findings carry important implications for macroprudential policy, foreign-exchange management, and coordinated monetary–fiscal responses aimed at safeguarding financial stability in open economies. Full article
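The banking-sector Z-score used as the stability proxy above has a standard closed form: the distance to insolvency in standard deviations of return on assets. A minimal sketch with invented inputs:

```python
import statistics

def bank_z_score(roa_series, equity_to_assets):
    """Banking Z-score: (mean ROA + capital ratio) / sd(ROA).
    Higher values mean more standard deviations of ROA separate the
    bank from insolvency, i.e. greater stability."""
    mean_roa = statistics.mean(roa_series)
    sd_roa = statistics.stdev(roa_series)
    return (mean_roa + equity_to_assets) / sd_roa

# Illustrative numbers only: ~1% average ROA, 8% capital ratio.
z = bank_z_score([0.012, 0.008, 0.011, 0.009], 0.08)
```

The study's contribution is in what moves this quantity (EGARCH-estimated exchange rate volatility, via dynamic panel estimators), not in the proxy itself.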
32 pages, 875 KB  
Systematic Review
Genetic Determinants of Stress Reactivity in Pregnancy: A Systematic Review and Meta-Analysis: Implications for Maternal and Fetal Health
by Socol Ioana Denisa, Socol Flavius George, Farcaș Simona Sorina, Dumitriu Bogdan-Ionel, Dumitriu Alina-Iasmina, Antal Andreea, Boarta Aris, Iacob Daniela and Andreescu Nicoleta Ioana
Genes 2026, 17(5), 509; https://doi.org/10.3390/genes17050509 - 25 Apr 2026
Abstract
Background: Gestation is a period of significant biological plasticity where the intrauterine environment influences fetal development via “fetal programming”. This study systematically reviews and meta-analyzes the association between genetic determinants—specifically the NR3C1, FKBP5, and CRHR1 genes, chosen for their pivotal role in the functional regulation and feedback sensitivity of the hypothalamic–pituitary–adrenal (HPA) axis—and stress reactivity during pregnancy. Methods: Following PRISMA guidelines, a systematic search was conducted across PubMed, Scopus, and Web of Science, yielding an initial total of 1430 records. After removing duplicates and screening 669 studies, a total of 34 primary observational studies were included in the systematic review and qualitative synthesis. For the quantitative synthesis, 27 articles provided sufficient data, resulting in k = 39 independent effect sizes analyzed via a mixed-effects model to account for tissue-specific and cohort-specific outcomes. Results: Systematic analysis reveals that maternal psychosocial stress significantly correlates with NR3C1 hypermethylation, acting as a biological mediator for neonatal cortisol dysregulation and hippocampal volume reduction. The FKBP5 rs1360780 polymorphism emerged as a key moderator of structural vulnerability, showing a “double-hit” effect when combined with epigenetic alterations. Furthermore, the study identifies sex-specific susceptibility, with divergent placental trajectories for male and female fetuses. Meta-analytic estimates confirmed the robustness of these associations (Rosenthal Fail-Safe N = 431,000), despite a general trend toward statistical significance (p = 0.079) in heterogeneous cohorts. Conclusions: The findings underscore a stable link between genetic determinants and prenatal stress reactivity. The interaction between molecular predisposition and environmental factors defines the health of the mother–infant dyad. 
These results advocate for a transition toward Precision Prenatal Medicine, integrating polygenic risk scores and epigenetic monitoring to implement early, targeted preventive interventions. Full article
(This article belongs to the Section Human Genomics and Genetic Diseases)
28 pages, 1065 KB  
Article
Normalising Flow Enhanced GARCH Models: A Two-Stage Framework for Flexible Innovation Modelling in Financial Time Series
by Abdullah Hassan, Farai Mlambo and Wilson Tsakane Mongwe
Risks 2026, 14(5), 100; https://doi.org/10.3390/risks14050100 - 24 Apr 2026
Abstract
We introduce the Normalising Flow GARCH (NF-GARCH), a two-stage hybrid framework that enhances traditional GARCH models by replacing restrictive parametric innovation distributions with learned densities via normalising flows. Our approach preserves the interpretability of standard variance dynamics while addressing the common issue of innovation misspecification. In the first stage, we estimate standard GARCH variants (sGARCH, TGARCH, and gjrGARCH) to extract standardised residuals. In the second stage, a Masked Autoregressive Flow learns the underlying residual distribution, with samples from the flow subsequently driving the GARCH recursion for out-of-sample forecasting. Evaluated on 13 daily financial series (six FX pairs and seven equities), NF-GARCH demonstrates systematic, statistically significant improvements in forecast accuracy over skewed-t baselines. Wilcoxon signed-rank tests confirm superior performance specifically for gjrGARCH-sstd and sGARCH-sstd specifications. While the framework offers enhanced flexibility and generative realism, we observe that computational overhead is increased, and the log-variance specification of eGARCH exhibits instability when paired with flow-based innovations. These results suggest that while NF-GARCH effectively captures empirical tail behaviour in univariate settings, future research should explore conditional flow architectures and multivariate extensions to account for time-varying innovation shapes. For risk management, gains are most relevant where skewed-t baselines are used and where closer residual realism supports scenario analysis; effect sizes remain modest relative to model risk and implementation cost. Full article
(This article belongs to the Special Issue Volatility Modeling in Financial Market)
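The second-stage idea above, externally sampled innovations driving the GARCH(1,1) variance recursion, can be sketched in a few lines. Here a standard normal generator stands in as a placeholder for the fitted flow; the parameter values are invented for illustration:

```python
import random

def garch_path(omega, alpha, beta, innovations, sigma2_0):
    """Simulate returns from the GARCH(1,1) recursion
        sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1},
    driven by externally supplied standardised innovations z_t.
    In NF-GARCH these z_t would be samples from the trained flow;
    random.gauss below is only a stand-in for the learned density."""
    sigma2, returns = sigma2_0, []
    for z in innovations:
        r = z * sigma2 ** 0.5   # scale the innovation by current volatility
        returns.append(r)
        sigma2 = omega + alpha * r * r + beta * sigma2  # update variance
    return returns

random.seed(0)
z_samples = [random.gauss(0, 1) for _ in range(1000)]
path = garch_path(omega=1e-5, alpha=0.08, beta=0.90,
                  innovations=z_samples, sigma2_0=1e-4)
```

With alpha + beta < 1 the recursion is covariance-stationary, so the simulated returns hover around the unconditional variance omega / (1 - alpha - beta).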
22 pages, 3918 KB  
Article
Probabilistic Aseismic Performance Assessment of Rubber–Sand–Concrete Tunnel Linings Considering Spatial Variability of Rock Mass
by Kaichen Li, Xiancheng Mei, Baiyi Li, Hao Sheng, Zhen Cui, Yiheng Wang, Hegao Wu and Tao Wang
Materials 2026, 19(9), 1741; https://doi.org/10.3390/ma19091741 - 24 Apr 2026
Abstract
In tunnel engineering, the integration of aseismic materials and structural designs has become a prevalent strategy to reduce earthquake-induced damage. However, previous studies on the seismic performance of tunnel structures predominantly employed deterministic methods, overlooking the spatial variability of the surrounding rock mass. This oversight often leads to an overestimation of structural performance, posing potential risks to the project. This study develops a probabilistic framework based on random field theory to evaluate the aseismic performance of tunnel linings incorporating a rubber–sand–concrete (RSC) constrained damping layer. The analysis systematically evaluates the aseismic performance of RSC across varying peak ground acceleration (PGA) levels and tunnel depth conditions. The findings are compared with results from traditional deterministic approaches. The probabilistic analysis indicates the following: (1) a reduction of approximately 70% in the dispersion of maximum principal stresses across various PGAs; (2) a decrease in RSC’s aseismic performance with greater burial depths, though it remains substantial overall; and (3) a reduction in the failure probability from 31.8% to 16.3% at PGA = 1.2 g. Furthermore, deterministic methods tend to produce overly optimistic estimates of tunnel aseismic performance, highlighting the need for probabilistic analysis. Full article
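The failure probabilities quoted in the abstract come from a probabilistic framework; the basic mechanics of such an estimate can be illustrated with a crude Monte Carlo sketch. All numbers below are hypothetical, and a single normal draw is a drastic simplification of the paper's random-field response model:

```python
import random

def mc_failure_probability(n, demand_mu, demand_sd, capacity, seed=1):
    """Crude Monte Carlo failure probability: draw the lining's peak
    stress demand from a normal distribution (a stand-in for the
    random-field response) and count draws exceeding a fixed capacity."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n)
                   if rng.gauss(demand_mu, demand_sd) > capacity)
    return failures / n

# Hypothetical inputs: mean demand 20 MPa, sd 5 MPa, capacity 25 MPa.
pf = mc_failure_probability(100_000, 20.0, 5.0, 25.0)
# Capacity sits one sd above mean demand, so pf ≈ P(Z > 1) ≈ 0.16
```

In the paper this role is played by random-field realisations of rock mass properties fed through seismic response analyses, not a one-dimensional normal draw.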
41 pages, 1836 KB  
Article
Shocks from Extreme Temperatures: Climate Sensitivity of Urban Digital Economy in China
by Yi Yang, Yufei Ruan, Jingjing Wu and Rui Su
Sustainability 2026, 18(9), 4244; https://doi.org/10.3390/su18094244 - 24 Apr 2026
Abstract
This study systematically examines the impacts of extreme temperatures on the digital economy development index and the underlying mechanisms based on panel data from 281 prefecture-level cities in China from 2012 to 2023. This study explicitly distinguishes the distinctive adaptive capacity of the digital economy in responding to climate risks. Through global and local spatial autocorrelation analysis, the study finds that both extreme temperatures and the digital economy exhibit significant spatial clustering. This study employs the spatial Durbin model (SDM) and effect decomposition and further incorporates the GS2SLS estimator alongside dual instrumental variables constructed from historical geographic characteristics to address endogeneity, thereby identifying the asymmetrical impacts of extreme heat and extreme cold on the digital economy with great rigor. Specifically, extreme heat fosters short-term local digital demand that is subsequently translated into long-term growth in IT human capital and infrastructure, thereby increasing the DEDI. However, its net spatial effect is inhibitory due to energy crowding out. Extreme cold, by contrast, primarily disrupts supply chains and intensifies energy consumption, with its impact largely confined to the local scope. Green technological innovation mitigates the impact of extreme heat on the digital economy through demand substitution, while, under extreme cold, it manifests as the physical protection of infrastructure. Meanwhile, an optimized industrial structure substantially reduces the economy’s dependence on supply chains, amplifying the promotional effect of extreme temperatures on the digital economy and reflecting the transformation capacity of regions under complex environmental conditions. Heterogeneity analysis demonstrates that the effects of extreme temperatures vary significantly across different urban agglomerations, economic zones, geographic regions and city types. 
This study not only extends the theoretical framework for the economic assessment of climate risks and spatial econometric analysis to the climate sensitivity of the digital economy but also provides empirical evidence for understanding the complex relationship between climate change and digital economy development and offers references for differentiated policies in a coordinated regional digital economy. Full article
(This article belongs to the Section Economic and Business Aspects of Sustainability)
22 pages, 5421 KB  
Article
Recalibrating Resting Energy Expenditure Prediction Equations in Asian Older Adults with Multimorbidity
by Pei San Kua, Musfirah Albakri, Su Mei Tay, Phoebe Si-En Thong, Olivia Jiawen Xia, Wendelynn Hui Ping Chua, Kevin Chong, Nicholas Wei Kiat Tan, Xin Hui Loh, Jia Hui Tan and Lian Leng Low
Nutrients 2026, 18(9), 1345; https://doi.org/10.3390/nu18091345 - 24 Apr 2026
Abstract
Background/Objective: Accurate resting energy expenditure (REE) estimation is paramount for the nutritional management of older Asian adults with multimorbidity. However, standard predictive equations (PEs) lack precision for this cohort. This study aimed to recalibrate PEs using BMI-stratified, slope-only regression to enhance bedside accuracy. Methods: REE was measured via indirect calorimetry in 400 hospitalized patients (age ≥ 65). Sensitivity analyses identified significant proportional bias in existing models. Models were recalibrated and validated using 1000-iteration bootstrap resampling. Results: Standard PEs exhibited significant bias, particularly underpredicting requirements for 68% of underweight patients. The new Singapore Older Adults Resting energy expenditure (SOAR) PE 1 (963.67 + 8.56 × weight − 5.6 × age) eliminated weight-dependent systematic errors. The recalibrated models utilizing actual body weight achieved accuracy rates of up to 64% in obese cohorts, comparable to complex adjusted-weight protocols. Conclusions: Population-specific recalibration is essential to mitigate the bidirectional risks of malnutrition and overfeeding in geriatric rehabilitation. The BMI-stratified multipliers provided offer a robust, clinically efficient framework for individualized nutritional care. Full article
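The recalibrated equation SOAR PE 1 is given explicitly in the abstract, so its bedside use is a one-line computation:

```python
def soar_pe1(weight_kg, age_years):
    """SOAR PE 1 from the abstract: resting energy expenditure
    (kcal/day) estimated as 963.67 + 8.56 * weight - 5.6 * age."""
    return 963.67 + 8.56 * weight_kg - 5.6 * age_years

# Example patient (illustrative): 60 kg, 75 years old.
ree = soar_pe1(60, 75)  # 963.67 + 513.6 - 420.0 = 1057.27 kcal/day
```

The paper's BMI-stratified multipliers and validity bounds (hospitalized Asian adults aged 65 and over) are not captured by this bare formula, so it should not be applied outside that context.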
24 pages, 281 KB  
Article
Insurance Institutional Ownership, Corporate Resilience, and Sustainable Development: Evidence from Chinese A-Share Firms
by Zongjun Zhang and Xinyu Dang
Sustainability 2026, 18(9), 4230; https://doi.org/10.3390/su18094230 - 24 Apr 2026
Abstract
Enhancing the resilience of real-economy firms is essential to sustainable development because firms must not only absorb shocks but also maintain long-term adaptive and renewal capacity. Against this background, this study examines whether insurance institutional ownership, as a form of patient capital, is systematically associated with corporate resilience. Using panel data for Chinese A-share listed firms from 2008 to 2024, we construct a multidimensional corporate resilience index based on risk resistance, adaptive recovery, and renewal and development, and we estimate two-way fixed-effects models. The results show that insurance ownership is positively associated with the baseline corporate resilience index, and this pattern remains qualitatively similar when we examine stock-return volatility, financial performance growth, and a stricter capability-oriented resilience index. The positive association is stronger for state-owned enterprises, small firms, non-manufacturing firms, and firms located in northern China. Channel analysis suggests that insurance ownership is associated with lower agency costs, stronger internal controls, greater external scrutiny, and lower financing constraints, patterns that are consistent with the proposed channels linking insurance ownership to corporate resilience. Further analyses show that higher insurance ownership and increases in insurance holdings are associated with stronger resilience, whereas decreases in holdings are associated with weaker resilience. Long holding duration is negatively associated with resilience, suggesting that performance-evaluation pressure may weaken the long-term governance role of insurance capital. Overall, the findings suggest that insurance investors may support corporate resilience and, when governance incentives and evaluation mechanisms are appropriately aligned, contribute to the sustainable development of the real economy. Full article
40 pages, 1903 KB  
Review
Volatile Organic Compound Exposure and Neurodevelopmental Outcomes in Children: A Systematic Review and Meta-Analysis
by Nurul Farehah Shahrir, Nur Nabila Abd Rahim, Fadly Syah Arsad, Imanul Hassan Abdul Shukor, Mohd Faiz Ibrahim, Nurul Amalina Khairul Hasni, Nadia Mohamad, Siti Aishah Rashid, Nai Ming Lai, Izzah Athirah Rosli and Sharifah Mazrah Sayed Mohamed Zain
Atmosphere 2026, 17(5), 433; https://doi.org/10.3390/atmos17050433 - 22 Apr 2026
Abstract
Volatile organic compounds (VOCs) are ubiquitous pollutants, and exposure from in utero through childhood may impair neurodevelopment. However, compound-specific risks remain unclear. This systematic review and meta-analysis examined associations between VOC exposure and child neurodevelopmental outcomes. A systematic search of PubMed, Scopus, and Embase was conducted until August 2025, yielding 1213 records. Quality assessment was performed using the Newcastle–Ottawa Scale, and risk of bias was assessed using the ROBINS-E tool. Pooled odds ratios (ORs) were calculated using random-effects models, with heterogeneity evaluated via I2 statistics. Subgroup analyses explored differences by study design, exposure timing, and country income level. Twenty-eight studies were included in the final analysis. Of the 18 VOCs analyzed, five compounds, propionaldehyde (pooled OR = 1.84; 95% CI 1.19–2.49), styrene (pooled OR = 1.69; 95% CI 1.30–2.21), vinyl chloride (pooled OR = 1.53; 95% CI 1.24–1.89), acrolein (pooled OR = 1.48; 95% CI 1.08–2.04), and trichloroethylene (pooled OR = 1.21; 95% CI 1.04–1.41), demonstrated statistically significant adverse associations with neurodevelopment. Benzene showed borderline significance. Heterogeneity ranged from 0% to 47%. Subgroup analyses identified significant effect modification for 1,3-butadiene by study design and exposure timing, and higher pooled estimates for ethylbenzene in high-income countries. Full article
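Random-effects pooling of odds ratios of the kind reported in this abstract can be sketched with the DerSimonian-Laird estimator. The helper name and the inputs below are illustrative; the published pooled ORs are not recomputed here.

```python
import math

def pooled_or_dl(ors, cis):
    """DerSimonian-Laird random-effects pooling of odds ratios.
    ors: per-study odds ratios; cis: (lower, upper) 95% CIs.
    Returns (pooled OR, I^2 heterogeneity in percent)."""
    y = [math.log(o) for o in ors]                          # log odds ratios
    se = [(math.log(u) - math.log(l)) / (2 * 1.96)          # SE from CI width
          for l, u in cis]
    w = [1.0 / s ** 2 for s in se]                          # inverse-variance weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    df = len(y) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                           # between-study variance
    wr = [1.0 / (s ** 2 + tau2) for s in se]                # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return math.exp(pooled), i2
```

With identical studies the pooled OR equals the common OR and I² is 0%; heterogeneous inputs pull the estimate toward a variance-weighted average of the study effects.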
(This article belongs to the Section Air Quality and Health)
42 pages, 966 KB  
Article
Garbage In, Garbage Out? The Impact of Data Quality on the Performance of Financial Distress Prediction Models
by Veronika Labosova, Lucia Duricova, Katarina Kramarova and Marek Durica
Forecasting 2026, 8(3), 35; https://doi.org/10.3390/forecast8030035 - 22 Apr 2026
Abstract
Financial distress prediction remains a central topic in corporate finance and risk management, with extensive research devoted to improving classification accuracy through increasingly sophisticated statistical and machine learning techniques. Nevertheless, the influence of data preparation on predictive performance has received comparatively less systematic attention. This study examines how an economically grounded data-preparation process affects the predictive performance of selected statistical and machine-learning models dedicated to predicting corporate financial distress. Using chosen financial ratios, generally accepted indicators of corporate financial stability and economic performance, financial distress models are estimated on both raw, unprocessed input data and pre-processed data involving the exclusion of economically implausible accounting values, treatment of missing observations, and class balancing. In light of the above, the study adopts a structured methodological approach to assess the predictive performance of selected classification models, namely decision tree algorithms (CART, CHAID, and C5.0), artificial neural networks (ANNs), logistic regression (LR), and linear discriminant analysis (DA), using confusion-matrix–based evaluation and a comprehensive set of evaluation measures. The results suggest that the process of input data preparation is a critical factor, significantly improving the predictive performance of financial distress prediction models across most modelling techniques employed. The most pronounced gains are observed in decision tree models. ANNs also demonstrate marked improvement after input data preparation, whereas LR benefits more moderately, and linear DA remains limited despite preprocessing. The average gain in accuracy across all six modelling techniques, calculated as the difference between pre-processed and raw performance for each method and averaged across methods, was approximately 15.6 percentage points, and specificity improved by approximately 26.9 percentage points on average. These gains amount to roughly half the performance variation attributable to algorithm choice, underscoring that data preparation is a primary determinant of model reliability alongside algorithm selection. A detailed step-level analysis further shows that missing value imputation is the dominant driver of improvement for tree-based models, while class balancing contributes most for ANNs and logistic regression. The findings highlight that reliable financial distress prediction depends not only on technique selection but also on the consistency and economic plausibility of the input data, underscoring the central role of structured data preparation in developing robust early-warning models. Full article
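The confusion-matrix evaluation this abstract relies on reduces to a handful of ratios over the four cell counts. A minimal sketch (function name and counts are illustrative, with "positive" meaning a financially distressed firm):

```python
def confusion_metrics(tp, fp, fn, tn):
    """Standard confusion-matrix measures for a binary
    distress-prediction model (positive = distressed firm)."""
    total = tp + fp + fn + tn
    return {
        "accuracy":    (tp + tn) / total,   # share of all firms classified correctly
        "sensitivity": tp / (tp + fn),      # recall on distressed firms
        "specificity": tn / (tn + fp),      # recall on healthy firms
        "precision":   tp / (tp + fp),      # share of flagged firms truly distressed
    }

# Hypothetical counts: 80 distressed firms caught, 20 missed,
# 90 healthy firms cleared, 10 falsely flagged.
m = confusion_metrics(tp=80, fp=10, fn=20, tn=90)
```

Specificity is the measure the study reports as improving most after preprocessing; on imbalanced distress data it can be low even when overall accuracy looks acceptable, which is why class balancing matters.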
16 pages, 307 KB  
Article
Dysphagia Risk and Its Association with Nutritional Status in Multiple Sclerosis: A Preliminary Study
by Nicole Vanessa Franchina Vergel, Jorge Molina-López and Elena Planells
Nutrients 2026, 18(9), 1315; https://doi.org/10.3390/nu18091315 - 22 Apr 2026
Abstract
Background/Objectives: Multiple sclerosis (MS) is a chronic, demyelinating and neurodegenerative disease frequently associated with dysphagia, nutritional imbalances, and alterations in body composition. This study aims to describe the anthropometric profile and body composition in people with MS, estimate the risk and type of dysphagia, analyse dietary intake and habits, and evaluate the evolution of these parameters over six months. Methods: This descriptive analytical longitudinal study included 30 patients with MS (20 women, 10 men), with a median age of 53.3 years at baseline and 54.0 years at final assessment. The prevalence of dysphagia risk was determined, dietary patterns and body composition were characterised, and their interactions were explored through two assessments conducted six months apart. Results: Overall, 90% of the sample had relapsing–remitting MS (RRMS). At both the initial and final assessments, the median BMI was above 25 kg/m2, and there was a high prevalence of dysphagia risk (63.3% and 76.7%, respectively), particularly for liquids. Frequent inadequacies were observed in the intake of certain macronutrients and micronutrients, including energy, fibre, potassium and magnesium. Likewise, the analysis by food groups revealed low adherence to recommendations, particularly for fruits, cereals, legumes, fish and lean meats. No significant differences were detected between the two time points. Conclusions: Dysphagia, dietary intake, habits, and body composition are interconnected dimensions in MS; systematically integrating nutritional assessment and dysphagia screening into clinical practice would contribute to a more comprehensive management and to improvements in swallowing disorders and nutritional status in people with MS. Full article
(This article belongs to the Section Nutritional Epidemiology)
12 pages, 1761 KB  
Systematic Review
Global Longitudinal Strain Improves After Revascularization of Chronic Total Occlusion: A Systematic Review and Meta-Analysis
by Oguz Kaan Kaya and Ahmet Serbülent Savcıoğlu
J. Clin. Med. 2026, 15(9), 3186; https://doi.org/10.3390/jcm15093186 - 22 Apr 2026
Abstract
Background: The clinical benefit of percutaneous coronary intervention (PCI) for chronic total occlusion (CTO) remains controversial, particularly regarding left ventricular (LV) functional recovery. Global longitudinal strain (GLS) has emerged as a more sensitive marker of myocardial function than left ventricular ejection fraction (LVEF). This study aimed to evaluate the effect of CTO revascularization on LV function using GLS. Methods: This systematic review and meta-analysis was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) guidelines. A comprehensive literature search was performed in the PubMed/MEDLINE database from inception through March 2026 using predefined search terms and Boolean operators. Reference lists of relevant articles were also screened to ensure completeness. Studies evaluating GLS before and after PCI for CTO and reporting quantitative strain data were included. Pooled effect estimates were calculated as mean differences (MDs) with 95% confidence intervals (CIs) using a random-effects model. Subgroup and sensitivity analyses were performed to explore heterogeneity and assess the robustness of the findings. Results: Six studies involving 376 patients were included. Successful CTO-PCI may be associated with an improvement in GLS (MD = 1.69; 95% CI: 1.09–2.29; p < 0.001), with substantial heterogeneity (I2 = 81%). Subgroup analysis demonstrated greater GLS improvement in studies with longer follow-up durations. Sensitivity analyses confirmed the robustness of the results. Conclusions: CTO revascularization may be associated with an improvement in LV myocardial function as assessed by GLS, even in the absence of marked changes in conventional parameters such as LVEF. These findings support the clinical utility of GLS as a sensitive imaging biomarker for detecting early myocardial recovery and for guiding risk stratification in patients undergoing CTO-PCI. Full article
(This article belongs to the Section Cardiology)