Search Results (3,197)

Search Parameters:
Keywords = baseline estimation

13 pages, 819 KiB  
Article
Evaluating the Effectiveness of Biodiverse Green Schoolyards on Child BMI z-Score and Physical Metrics: A Pilot Quasi-Experimental Study
by Bo H. W. van Engelen, Lore Verheyen, Bjorn Winkens, Michelle Plusquin and Onno C. P. van Schayck
Children 2025, 12(7), 944; https://doi.org/10.3390/children12070944 - 17 Jul 2025
Abstract
Background: Childhood obesity is a significant public health issue linked to poor diet, low physical activity, and limited access to supportive environments. Green schoolyards may promote physical activity and improve health outcomes. This study evaluated the impact of the Green Healthy Primary School of the Future (GHPSF) intervention—greening schoolyards—on children’s BMI z-scores, waist circumference, and hip circumference over 18 months, and compared these effects to those observed in the earlier Healthy Primary School of the Future (HPSF) initiative. Methods: This longitudinal quasi-experimental study included two intervention and two control schools in Limburg, a province in both the Netherlands and Belgium. Children aged 8–12 years (n = 159) were assessed at baseline, 12 months, and 18 months for anthropometric outcomes. Linear mixed models were used to estimate intervention effects over time, adjusting for sex, age, country, and socioeconomic background. Standardized effect sizes (ESs) were calculated. Results: The intervention group showed a greater reduction in BMI z-scores at 12 months (ES = −0.15, p = 0.084), though this was not statistically significant. Waist circumference increased in both groups, but less so in the intervention group, at 12 months (ES = −0.23, p = 0.057) and 18 months (ES = −0.13, p = 0.235). Hip circumference and waist–hip ratio changes were minimal and non-significant. GHPSF effect sizes were comparable to or greater than those from the HPSF initiative. Conclusions: Though not statistically significant, trends suggest that greening schoolyards may support favorable changes in anthropometric outcomes. Further research with larger samples and longer follow-up is recommended. Full article
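The standardized effect sizes (ESs) quoted above are between-group standardized mean differences; a minimal sketch of how such a value is computed, using made-up change scores rather than study data:

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) with pooled standard deviation."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# 12-month change in BMI z-score per child (illustrative numbers, not study data)
intervention_delta = [-0.2, -0.1, 0.0, -0.3, -0.15]
control_delta = [0.0, 0.1, -0.05, 0.05, 0.0]
es = cohens_d(intervention_delta, control_delta)  # negative = larger reduction in intervention arm
```

A negative ES here, as in the abstract, means the intervention group's change was more favorable than the control group's.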

59 pages, 11250 KiB  
Article
Automated Analysis of Vertebral Body Surface Roughness for Adult Age Estimation: Ellipse Fitting and Machine-Learning Approach
by Erhan Kartal and Yasin Etli
Diagnostics 2025, 15(14), 1794; https://doi.org/10.3390/diagnostics15141794 - 16 Jul 2025
Abstract
Background/Objectives: Vertebral degenerative features are promising but often subjectively scored indicators for adult age estimation. We evaluated an objective surface roughness metric, the “average distance to the fitted ellipse” score (DS), calculated automatically for every vertebra from C7 to S1 on routine CT images. Methods: CT scans of 176 adults (94 males, 82 females; 21–94 years) were retrospectively analyzed. For each vertebra, the mean orthogonal deviation of the anterior superior endplate from an ideal ellipse was extracted. Sex-specific multiple linear regression served as a baseline; support vector regression (SVR), random forest (RF), k-nearest neighbors (k-NN), and Gaussian naïve-Bayes pseudo-regressor (GNB-R) were tuned with 10-fold cross-validation and evaluated on a 20% hold-out set. Performance was quantified with the standard error of the estimate (SEE). Results: DS values correlated moderately to strongly with age (peak r = 0.60 at L3–L5). Linear regression explained 40% (males) and 47% (females) of age variance (SEE ≈ 11–12 years). Non-parametric learners improved precision: RF achieved an SEE of 8.49 years in males (R2 = 0.47), whereas k-NN attained 10.8 years (R2 = 0.45) in females. Conclusions: Automated analysis of vertebral cortical roughness provides a transparent, observer-independent means of estimating adult age with accuracy approaching that of more complex deep learning pipelines. Streamlining image preparation and validating the approach across diverse populations are the next steps toward forensic adoption. Full article
(This article belongs to the Special Issue New Advances in Forensic Radiology and Imaging)
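The SEE reported above is the standard error of the estimate of an age regression. A minimal numpy sketch under assumed data (the `ds` and `age` arrays are illustrative, not the paper's measurements):

```python
import numpy as np

def see_linear(x, y):
    """Standard error of the estimate for a simple linear regression y ~ x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    n = len(y)
    return float(np.sqrt(np.sum(resid ** 2) / (n - 2)))  # n-2 dof: slope + intercept

# hypothetical roughness scores (DS) against chronological age
ds  = [0.8, 1.1, 1.5, 1.9, 2.4, 2.6, 3.0, 3.3]
age = [25, 31, 40, 48, 55, 61, 70, 78]
see = see_linear(ds, age)
```

An SEE of ~11–12 years, as in the baseline models above, says a typical age prediction misses by about that much.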

16 pages, 2355 KiB  
Article
Generalising Stock Detection in Retail Cabinets with Minimal Data Using a DenseNet and Vision Transformer Ensemble
by Babak Rahi, Deniz Sagmanli, Felix Oppong, Direnc Pekaslan and Isaac Triguero
Mach. Learn. Knowl. Extr. 2025, 7(3), 66; https://doi.org/10.3390/make7030066 - 16 Jul 2025
Abstract
Generalising deep-learning models to perform well on unseen data domains with minimal retraining remains a significant challenge in computer vision. Even when the target task—such as quantifying the number of elements in an image—stays the same, variations in data quality, shape, or form can deviate from the training conditions, often necessitating manual intervention. As a real-world industry problem, we aim to automate stock level estimation in retail cabinets. As technology advances, new cabinet models with varying shapes emerge alongside new camera types. This evolving scenario poses a substantial obstacle to deploying long-term, scalable solutions. To surmount the challenge of generalising to new cabinet models and cameras from minimal sample images, this research introduces a new solution. This paper proposes a novel ensemble model that combines DenseNet-201 and Vision Transformer (ViT-B/8) architectures to achieve generalisation in stock-level classification. The novelty of our solution lies in combining a transformer with a DenseNet model to capture both the local, hierarchical details and the long-range dependencies within the images, improving generalisation accuracy with less data. Key contributions include (i) a novel DenseNet-201 + ViT-B/8 feature-level fusion, (ii) an adaptation workflow that needs only two images per class, (iii) a balanced layer-unfreezing schedule, (iv) a publicly described domain-shift benchmark, and (v) a 47-percentage-point accuracy gain over four standard few-shot baselines. Our approach leverages fine-tuning techniques to adapt two pre-trained models to the new retail cabinets (i.e., standing or horizontal) and camera types using only two images per class. Experimental results demonstrate that our method achieves high accuracy rates of 91% on new cabinets with the same camera and 89% on new cabinets with different cameras, significantly outperforming standard few-shot learning methods. Full article
(This article belongs to the Section Data)
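A sketch of the two ideas named in contributions (i) and (ii) — feature-level fusion of two backbones and classification from two support images per class — with toy 2-D vectors standing in for the DenseNet-201/ViT-B/8 embeddings. The nearest-class-mean rule is an illustrative stand-in, not the paper's fine-tuned classifier:

```python
import numpy as np

def fuse(feat_a, feat_b):
    """Feature-level fusion: L2-normalise each backbone's embedding, then concatenate."""
    a = feat_a / np.linalg.norm(feat_a, axis=-1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=-1, keepdims=True)
    return np.concatenate([a, b], axis=-1)

def nearest_class_mean(support, labels, query):
    """Classify each query embedding by its nearest per-class mean of the support set."""
    classes = sorted(set(labels))
    means = np.stack([support[np.array(labels) == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(query[:, None, :] - means[None, :, :], axis=-1)
    return [classes[i] for i in dists.argmin(axis=1)]

# two support images per stock-level class; toy 2-D features stand in for
# DenseNet-201 and ViT-B/8 embeddings (real ones are far wider)
dense = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
vit   = np.array([[0.8, 0.2], [1.0, 0.0], [0.2, 0.8], [0.0, 1.0]])
support = fuse(dense, vit)
labels = ["empty", "empty", "full", "full"]
preds = nearest_class_mean(support, labels, support)
```

Normalising each backbone before concatenation keeps either feature stream from dominating the fused distance.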

21 pages, 705 KiB  
Article
Diabetes Risk Perception in Women with a Recent History of Gestational Diabetes Mellitus: A Secondary Analysis from a Belgian Randomized Controlled Trial (MELINDA Study)
by Yana Vanlaer, Caro Minschart, Ine Snauwaert, Nele Myngheer, Toon Maes, Christophe De Block, Inge Van Pottelbergh, Pascale Abrams, Wouter Vinck, Liesbeth Leuridan, Sabien Driessens, Jaak Billen, Christophe Matthys, Annick Bogaerts, Annouschka Laenen, Chantal Mathieu and Katrien Benhalima
J. Clin. Med. 2025, 14(14), 4998; https://doi.org/10.3390/jcm14144998 - 15 Jul 2025
Abstract
Background/Objectives: To evaluate diabetes risk perception in women with prior gestational diabetes mellitus (GDM) and prediabetes in early postpartum. Methods: Secondary analysis of a multi-center randomized controlled trial assessing the effectiveness of a mobile-based postpartum lifestyle intervention in women with prediabetes after GDM. Data were collected from the Risk Perception Survey for Developing Diabetes at baseline (6–16 weeks postpartum) and one year post-randomization. Logistic regression was used to analyze the difference between the intervention and control groups on diabetes risk estimation. Results: Among 165 women with prediabetes in early postpartum (mean age: 32.1 years, mean BMI: 27.3 kg/m2), 58.9% (96) adequately estimated their diabetes risk (moderate or high chance) at baseline. These women smoked less often [2.06% (2) vs. 10.3% (7), p = 0.034], reported less anxiety (11.6 ± 3.0 vs. 12.6 ± 3.5, p = 0.040), and reported fewer symptoms of depression [30.9% (21) vs. 15.6% (15), p = 0.023] compared to women who underestimated their risk. At one year, 58.3% (95) of all women adequately estimated their diabetes risk. In the intervention group, 50.6% (41) adequately estimated their risk at baseline, increasing to 56.8% (46) by the end of the intervention after one year (p = 0.638). In the control group, a higher proportion of women adequately estimated their risk at baseline [67.1% (55), p = 0.039], which decreased to 59.8% (49) at one year (p = 0.376), with no significant difference in risk perception between the groups at one year (p = 0.638). Conclusions: Almost 60% of this high-risk population adequately estimated their diabetes risk, with no significant impact of the lifestyle intervention on risk perception. Full article
(This article belongs to the Special Issue Gestational Diabetes: Cutting-Edge Research and Clinical Practice)

27 pages, 9829 KiB  
Article
An Advanced Ensemble Machine Learning Framework for Estimating Long-Term Average Discharge at Hydrological Stations Using Global Metadata
by Alexandr Neftissov, Andrii Biloshchytskyi, Ilyas Kazambayev, Serhii Dolhopolov and Tetyana Honcharenko
Water 2025, 17(14), 2097; https://doi.org/10.3390/w17142097 - 14 Jul 2025
Abstract
Accurate estimation of long-term average (LTA) discharge is fundamental for water resource assessment, infrastructure planning, and hydrological modeling, yet it remains a significant challenge, particularly in data-scarce or ungauged basins. This study introduces an advanced machine learning framework to estimate long-term average discharge using globally available hydrological station metadata from the Global Runoff Data Centre (GRDC). The methodology involved comprehensive data preprocessing, extensive feature engineering, log-transformation of the target variable, and the development of multiple predictive models, including a custom deep neural network with specialized pathways and gradient boosting machines (XGBoost, LightGBM, CatBoost). Hyperparameters were optimized using Bayesian techniques, and a weighted Meta Ensemble model, which combines predictions from the best individual models, was implemented. Performance was rigorously evaluated using R2, RMSE, and MAE on an independent test set. The Meta Ensemble model demonstrated superior performance, achieving a Coefficient of Determination (R2) of 0.954 on the test data, significantly surpassing baseline and individual advanced models. Model interpretability analysis using SHAP (Shapley Additive explanations) confirmed that catchment area and geographical attributes are the most dominant predictors. The resulting model provides a robust, accurate, and scalable data-driven solution for estimating long-term average discharge, enhancing water resource assessment capabilities and offering a powerful tool for large-scale hydrological analysis. Full article
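A weighted meta-ensemble of the kind described — a convex blend of base-model predictions, with the weight chosen on a validation split and the target log-transformed — can be sketched as follows. The numbers are toy values, not GRDC data, and the paper blends more models with Bayesian-tuned hyperparameters:

```python
import numpy as np

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def fit_ensemble_weight(pred_a, pred_b, y_val):
    """Pick the convex weight w minimising validation RMSE of w*A + (1-w)*B."""
    ws = np.linspace(0.0, 1.0, 101)
    scores = [rmse(y_val, w * pred_a + (1 - w) * pred_b) for w in ws]
    return float(ws[int(np.argmin(scores))])

# toy validation targets in log1p space (discharge is heavy-tailed, hence the transform)
y = np.log1p(np.array([12.0, 340.0, 5.5, 78.0]))
pred_a = y + np.array([0.2, -0.1, 0.1, 0.0])   # e.g. gradient-boosting predictions
pred_b = y + np.array([-0.3, 0.3, -0.2, 0.1])  # e.g. neural-network predictions
w = fit_ensemble_weight(pred_a, pred_b, y)
blend = w * pred_a + (1 - w) * pred_b
```

Because the weight grid includes w = 0 and w = 1, the blend can never do worse on the validation split than the better base model.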

22 pages, 2775 KiB  
Article
Surface Broadband Radiation Data from a Bipolar Perspective: Assessing Climate Change Through Machine Learning
by Alice Cavaliere, Claudia Frangipani, Daniele Baracchi, Maurizio Busetto, Angelo Lupi, Mauro Mazzola, Simone Pulimeno, Vito Vitale and Dasara Shullani
Climate 2025, 13(7), 147; https://doi.org/10.3390/cli13070147 - 13 Jul 2025
Abstract
Clouds modulate the net radiative flux that interacts with both shortwave (SW) and longwave (LW) radiation, but the uncertainties regarding their effect in polar regions are especially high because ground observations are lacking and evaluation through satellites is made difficult by high surface reflectance. In this work, sky conditions for six polar stations, two in the Arctic (Ny-Ålesund and Utqiagvik [formerly Barrow]) and four in Antarctica (Neumayer, Syowa, South Pole, and Dome C), are presented for the decade between 2010 and 2020. Measurements of broadband SW and LW radiation components (both downwelling and upwelling) are collected within the frame of the Baseline Surface Radiation Network (BSRN). Sky conditions—categorized as clear sky, cloudy, or overcast—were determined using cloud fraction estimates obtained through the RADFLUX method, which integrates SW and LW radiative fluxes. RADFLUX was applied with daily fitting for all BSRN stations, producing two cloud fraction values: one derived from shortwave downward (SWD) measurements and the other from longwave downward (LWD) measurements. The variation in cloud fraction used to classify conditions from clear sky to overcast appeared consistent and reasonable when compared to seasonal changes in SWD and diffuse (DIF) radiation, as well as LWD and longwave upward (LWU) fluxes. These classifications served as labels for a machine learning-based classification task. Three algorithms were evaluated: Random Forest, K-Nearest Neighbors (KNN), and XGBoost. Input features include downward LW radiation, solar zenith angle, surface air temperature (Ta), relative humidity, and the ratio of water vapor pressure to Ta. Among these models, XGBoost achieved the highest balanced accuracy, with best scores of 0.78 at both Ny-Ålesund (Arctic) and Syowa (Antarctica). The evaluation employed a leave-one-year-out approach to ensure robust temporal validation. Finally, the results from cross-station models highlighted the need for deeper investigation, particularly through clustering stations with similar environmental and climatic characteristics to improve generalization and transferability across locations. Additionally, feature normalization strategies proved effective in reducing inter-station variability and promoting more stable model performance across diverse settings. Full article
(This article belongs to the Special Issue Addressing Climate Change with Artificial Intelligence Methods)
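The leave-one-year-out protocol mentioned above can be sketched generically: hold out each calendar year in turn as the test set, training on the rest. The record structure and field names here are assumptions for illustration:

```python
def leave_one_year_out(records):
    """Yield (year, train, test) splits where each test set is one whole year of data."""
    years = sorted({r["year"] for r in records})
    for held_out in years:
        train = [r for r in records if r["year"] != held_out]
        test = [r for r in records if r["year"] == held_out]
        yield held_out, train, test

# toy observations tagged with the year they were measured in
data = [{"year": y, "lwd": 250 + i} for y in (2010, 2011, 2012) for i in range(3)]
splits = list(leave_one_year_out(data))
```

Splitting by whole years, rather than at random, prevents the model from being tested on conditions it has effectively already seen from adjacent samples of the same season.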

24 pages, 19550 KiB  
Article
TMTS: A Physics-Based Turbulence Mitigation Network Guided by Turbulence Signatures for Satellite Video
by Jie Yin, Tao Sun, Xiao Zhang, Guorong Zhang, Xue Wan and Jianjun He
Remote Sens. 2025, 17(14), 2422; https://doi.org/10.3390/rs17142422 - 12 Jul 2025
Abstract
Atmospheric turbulence severely degrades high-resolution satellite videos through spatiotemporally coupled distortions, including temporal jitter, spatial-variant blur, deformation, and scintillation, thereby constraining downstream analytical capabilities. Restoring turbulence-corrupted videos poses a challenging ill-posed inverse problem due to the inherent randomness of turbulent fluctuations. While existing turbulence mitigation methods for long-range imaging demonstrate partial success, they exhibit limited generalizability and interpretability in large-scale satellite scenarios. Inspired by refractive-index structure constant (Cn2) estimation from degraded sequences, we propose a physics-informed turbulence signature (TS) prior that explicitly captures spatiotemporal distortion patterns to enhance model transparency. Integrating this prior into a lucky imaging framework, we develop a Physics-Based Turbulence Mitigation Network guided by Turbulence Signature (TMTS) to disentangle atmospheric disturbances from satellite videos. The framework employs deformable attention modules guided by turbulence signatures to correct geometric distortions, iterative gated mechanisms for temporal alignment stability, and adaptive multi-frame aggregation to address spatially varying blur. Comprehensive experiments on synthetic and real-world turbulence-degraded satellite videos demonstrate TMTS’s superiority, achieving 0.27 dB PSNR and 0.0015 SSIM improvements over the DATUM baseline while maintaining practical computational efficiency. By bridging turbulence physics with deep learning, our approach provides both performance enhancements and interpretable restoration mechanisms, offering a viable solution for operational satellite video processing under atmospheric disturbances. Full article

21 pages, 15482 KiB  
Article
InSAR Detection of Slow Ground Deformation: Taking Advantage of Sentinel-1 Time Series Length in Reducing Error Sources
by Machel Higgins and Shimon Wdowinski
Remote Sens. 2025, 17(14), 2420; https://doi.org/10.3390/rs17142420 - 12 Jul 2025
Abstract
Using interferometric synthetic aperture radar (InSAR) to observe slow ground deformation can be challenging due to many sources of error, with tropospheric phase delay and unwrapping errors being the most significant. While analytical methods, weather models, and data exist to mitigate tropospheric error, most of these techniques are unsuitable for all InSAR applications (e.g., complex tropospheric mixing in the tropics) or are deficient in spatial or temporal resolution. Likewise, there are methods for removing the unwrapping error, but they cannot resolve the true phase when there is a high prevalence (>40%) of unwrapping error in a set of interferograms. Applying tropospheric delay removal techniques is unnecessary for C-band Sentinel-1 InSAR time series studies, and the effect of unwrapping error can be minimized if the full dataset is utilized. We demonstrate that using interferograms with long temporal baselines (800 to 1600 days) but very short perpendicular baselines (<5 m) (LTSPB) can lower the velocity detection threshold to 2–3 mm/yr for long-term coherent permanent scatterers. The LTSPB interferograms can measure slow deformation rates because the expected differential phases are larger than those of small-baseline pairs and potentially exceed the typical noise amplitude, while also reducing the sensitivity of the time series estimation to the noise sources. The method takes advantage of the Sentinel-1 mission length (2016 to present), which, for most regions, can yield up to 300 interferograms that meet the LTSPB baseline criteria. We demonstrate that low velocity detection can be achieved by comparing the expected LTSPB differential phase measurements to synthetic tests and tropospheric delay from the Global Navigation Satellite System. We then characterize the slow (~3 mm/yr) ground deformation of the Socorro Magma Body, New Mexico, and the Tampa Bay Area using LTSPB InSAR analysis. The method we describe has implications for simplifying the InSAR time series processing chain and enhancing the velocity detection threshold. Full article
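Why long temporal baselines lift the signal above the noise floor follows from the standard InSAR phase relation φ = 4πd/λ: at a fixed velocity, the line-of-sight displacement d, and hence the differential phase, grows linearly with the temporal baseline. A sketch with Sentinel-1's C-band wavelength (~5.55 cm; the velocity and baselines are the values discussed above):

```python
import math

C_BAND_WAVELENGTH_M = 0.0555  # Sentinel-1 C-band, ~5.55 cm

def expected_phase_rad(velocity_mm_per_yr, temporal_baseline_days):
    """Expected line-of-sight differential phase for a steady deformation rate."""
    displacement_m = (velocity_mm_per_yr / 1000.0) * (temporal_baseline_days / 365.25)
    return 4 * math.pi * displacement_m / C_BAND_WAVELENGTH_M

slow = 3.0  # mm/yr, near the detection threshold discussed above
short_pair = expected_phase_rad(slow, 12)    # a typical small-baseline pair
long_pair = expected_phase_rad(slow, 1600)   # an LTSPB pair
```

For a 12-day pair the expected phase is a few hundredths of a radian, easily buried in noise; an LTSPB pair of the same scatterer accumulates on the order of radians.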

34 pages, 2617 KiB  
Article
Toward Low-Carbon Mobility: Greenhouse Gas Emissions and Reduction Opportunities in Thailand’s Road Transport Sector
by Pantitcha Thanatrakolsri and Duanpen Sirithian
Clean Technol. 2025, 7(3), 60; https://doi.org/10.3390/cleantechnol7030060 - 11 Jul 2025
Abstract
Road transportation is a major contributor to greenhouse gas (GHG) emissions in Thailand. This study assesses the potential for GHG mitigation in the road transport sector from 2018 to 2030. Emission factors for various vehicle types and technologies were derived using the International Vehicle Emissions (IVE) model. Emissions were then estimated based on country-specific vehicle data. In the baseline year 2018, total emissions were estimated at 23,914.02 GgCO2eq, primarily from pickups (24.38%), trucks (20.96%), passenger cars (19.48%), and buses (16.95%). Multiple mitigation scenarios were evaluated, including the adoption of electric vehicles (EVs), improvements in fuel efficiency, and a shift to renewable energy. Results indicate that transitioning all newly registered passenger cars (PCs) to EVs while phasing out older models could lead to a 16.42% reduction in total GHG emissions by 2030. The most effective integrated scenario, combining the expansion of electric vehicles with improvements in internal combustion engine efficiency, could achieve a 41.96% reduction, equivalent to 18,378.04 GgCO2eq. These findings highlight the importance of clean technology deployment and fuel transition policies in meeting Thailand’s climate goals, while providing a valuable database to support strategic planning and implementation. Full article
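Inventory arithmetic of this kind reduces to activity times emission factor, summed over vehicle types; a sketch with invented fleet numbers (not the study's IVE-derived inventory) showing how a scenario's reduction percentage is obtained:

```python
def fleet_emissions(fleet):
    """Total emissions in GgCO2eq: sum of count x km/yr x EF (g/km), converted g -> Gg."""
    return sum(v["count"] * v["km_per_yr"] * v["ef_g_per_km"] for v in fleet) / 1e9

def reduction_pct(baseline, scenario):
    """Percentage reduction of a scenario relative to the baseline inventory."""
    return 100.0 * (baseline - scenario) / baseline

# illustrative fleet, not Thailand's registration data
base = [
    {"name": "pickup", "count": 4_000_000, "km_per_yr": 15_000, "ef_g_per_km": 240},
    {"name": "passenger car", "count": 6_000_000, "km_per_yr": 12_000, "ef_g_per_km": 180},
]
# hypothetical EV scenario: halve the average passenger-car emission factor
ev_shift = [dict(v, ef_g_per_km=v["ef_g_per_km"] * (0.5 if v["name"] == "passenger car" else 1.0))
            for v in base]
cut = reduction_pct(fleet_emissions(base), fleet_emissions(ev_shift))
```

Scenario comparisons like the 16.42% and 41.96% figures above come from exactly this kind of baseline-versus-scenario ratio, just with many more vehicle classes and year-by-year fleet turnover.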

20 pages, 108154 KiB  
Article
Masks-to-Skeleton: Multi-View Mask-Based Tree Skeleton Extraction with 3D Gaussian Splatting
by Xinpeng Liu, Kanyu Xu, Risa Shinoda, Hiroaki Santo and Fumio Okura
Sensors 2025, 25(14), 4354; https://doi.org/10.3390/s25144354 - 11 Jul 2025
Abstract
Accurately reconstructing tree skeletons from multi-view images is challenging. While most existing works use skeletonization from 3D point clouds, thin branches with low texture contrast often cause multi-view stereo (MVS) to produce noisy and fragmented point clouds, which break branch connectivity. Leveraging the recent development in accurate mask extraction from images, we introduce a mask-guided graph optimization framework that estimates a 3D skeleton directly from multi-view segmentation masks, bypassing the reliance on point cloud quality. In our method, a skeleton is modeled as a graph whose nodes store positions and radii while its adjacency matrix encodes branch connectivity. We use 3D Gaussian splatting (3DGS) to render silhouettes of the graph and directly optimize the nodes and the adjacency matrix to fit given multi-view silhouettes in a differentiable manner. Furthermore, we use a minimum spanning tree (MST) algorithm during the optimization loop to regularize the graph to a tree structure. Experiments on synthetic and real-world plants show consistent improvements in completeness and structural accuracy over existing point-cloud-based and heuristic baseline methods. Full article
(This article belongs to the Section Remote Sensors)
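The MST regularization step can be illustrated in isolation: given candidate skeleton nodes, keep the n−1 shortest edges that connect all nodes without cycles (Kruskal's algorithm here; the 3D points are invented, and the paper applies this inside a differentiable optimization loop):

```python
import numpy as np

def mst_edges(points):
    """Kruskal's minimum spanning tree over the complete graph of 3D node positions."""
    pts = np.asarray(points, float)
    n = len(pts)
    edges = sorted(
        (float(np.linalg.norm(pts[i] - pts[j])), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))
    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:           # joining two components cannot create a cycle
            parent[ri] = rj
            tree.append((i, j))
    return tree

# trunk node at the origin plus two branch tips; MST keeps n-1 connecting edges
nodes = [(0, 0, 0), (0, 0, 1), (0.5, 0, 1.5), (-0.5, 0, 1.5)]
edges = mst_edges(nodes)
```

Projecting the optimized adjacency matrix onto an MST at each iteration is what guarantees the recovered skeleton stays a connected, cycle-free tree.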

11 pages, 862 KiB  
Article
Level 3 Cardiopulmonary Exercise Testing to Guide Therapeutic Decisions in Non-Severe Pulmonary Hypertension with Lung Disease
by Raj Parikh, Chebly Dagher and Harrison W. Farber
Life 2025, 15(7), 1089; https://doi.org/10.3390/life15071089 - 11 Jul 2025
Abstract
Inhaled treprostinil is approved for the treatment of pulmonary hypertension-associated interstitial lung disease (PH-ILD); however, it has not shown significant benefit in patients with a pulmonary vascular resistance (PVR) < 4 WU. As such, treatment for non-severe PH-ILD remains controversial. A total of 16 patients with non-severe PH-ILD were divided into two groups based on changes in PVR during exercise: a dynamic PVR group (n = 10), characterized by an increase in PVR with exertion, and a static PVR group (n = 6), with no increase in PVR with exercise. The dynamic PVR group received inhaled treprostinil, while the static PVR group was monitored off therapy. Baseline and 16-week follow-up values were compared within each group. At 16 weeks, the dynamic PVR group demonstrated significant improvements in mean 6 min walk distance (6MWD) (+32.5 m, p < 0.05), resting PVR (−1.04 WU, p < 0.05), resting mean pulmonary arterial pressure (mPAP) (−5.8 mmHg, p < 0.05), exercise PVR (−1.7 WU, p < 0.05), exercise mPAP (−13 mmHg, p < 0.05), and estimated right ventricular systolic pressure (−9.2 mmHg, p < 0.05). In contrast, the static PVR group remained clinically stable. These observations suggest that an exercise-induced increase in PVR, identified through Level 3 CPET, may help select patients with non-severe PH-ILD who are more likely to benefit from early initiation of inhaled treprostinil. Full article
(This article belongs to the Section Physiology and Pathology)
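The PVR values discussed are Wood units from the standard hemodynamic relation (mPAP − PAWP) / cardiac output; a sketch with illustrative rest and exercise numbers, not patient data:

```python
def pvr_wood_units(mpap_mmhg, pawp_mmhg, cardiac_output_l_min):
    """Pulmonary vascular resistance in Wood units: transpulmonary gradient over CO."""
    return (mpap_mmhg - pawp_mmhg) / cardiac_output_l_min

# illustrative rest vs. exercise values showing a 'dynamic PVR' pattern,
# where resistance rises with exertion despite a non-severe resting value
rest = pvr_wood_units(mpap_mmhg=28, pawp_mmhg=12, cardiac_output_l_min=5.0)  # 3.2 WU
peak = pvr_wood_units(mpap_mmhg=45, pawp_mmhg=15, cardiac_output_l_min=7.5)  # 4.0 WU
```

In the framing above, a patient whose PVR rises from rest to exercise (dynamic pattern) would fall in the group that benefited from inhaled treprostinil, even though the resting value sits below the 4 WU severity cutoff.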

24 pages, 1616 KiB  
Systematic Review
Artificial Intelligence in Risk Stratification and Outcome Prediction for Transcatheter Aortic Valve Replacement: A Systematic Review and Meta-Analysis
by Shayan Shojaei, Asma Mousavi, Sina Kazemian, Shiva Armani, Saba Maleki, Parisa Fallahtafti, Farzin Tahmasbi Arashlow, Yasaman Daryabari, Mohammadreza Naderian, Mohamad Alkhouli, Jamal S. Rana, Mehdi Mehrani, Yaser Jenab and Kaveh Hosseini
J. Pers. Med. 2025, 15(7), 302; https://doi.org/10.3390/jpm15070302 - 11 Jul 2025
Abstract
Background/Objectives: Transcatheter aortic valve replacement (TAVR) has been introduced as an optimal treatment for patients with severe aortic stenosis, offering a minimally invasive alternative to surgical aortic valve replacement. Predicting clinical outcomes following TAVR is crucial. Artificial intelligence (AI) has emerged as a promising tool for improving post-TAVR outcome prediction. In this systematic review and meta-analysis, we aim to summarize the current evidence on utilizing AI in predicting post-TAVR outcomes. Methods: A comprehensive search was conducted to evaluate the studies focused on TAVR that applied AI methods for risk stratification. We assessed various machine learning (ML) algorithms, including random forests, neural networks, extreme gradient boosting, and support vector machines. Model performance metrics—recall, area under the curve (AUC), and accuracy—were collected with 95% confidence intervals (CIs). A random-effects meta-analysis was conducted to pool effect estimates. Results: We included 43 studies evaluating 366,269 patients (mean age 80 ± 8.25 years; 52.9% men) following TAVR. Meta-analyses of AI model performance demonstrated the following results: all-cause mortality (AUC = 0.78 (0.74–0.82), accuracy = 0.81 (0.69–0.89), and recall = 0.90 (0.70–0.97)); permanent pacemaker implantation or new left bundle branch block (AUC = 0.75 (0.68–0.82), accuracy = 0.73 (0.59–0.84), and recall = 0.87 (0.50–0.98)); valve-related dysfunction (AUC = 0.73 (0.62–0.84), accuracy = 0.79 (0.57–0.91), and recall = 0.54 (0.26–0.80)); and major adverse cardiovascular events (AUC = 0.79 (0.67–0.92)). Subgroup analyses based on the model development approaches indicated that models incorporating baseline clinical data, imaging, and biomarker information enhanced predictive performance. Conclusions: AI-based risk prediction for TAVR complications has demonstrated promising performance. However, it is necessary to evaluate the efficiency of these models in external validation datasets. Full article
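Random-effects pooling of per-study performance estimates is commonly done with the DerSimonian–Laird estimator; a sketch over invented AUCs and variances (real analyses often pool on a transformed scale, e.g. logit AUC):

```python
import numpy as np

def pool_random_effects(estimates, variances):
    """DerSimonian-Laird random-effects pooled estimate with a 95% CI."""
    y = np.asarray(estimates, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                  # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)             # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance, floored at 0
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# hypothetical per-study AUCs with their sampling variances
aucs = [0.74, 0.80, 0.78, 0.82]
variances = [0.002, 0.001, 0.0015, 0.003]
pooled, ci = pool_random_effects(aucs, variances)
```

The pooled value is a variance-weighted average, so it always lies between the smallest and largest study estimates, with the CI widened by any between-study heterogeneity.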

21 pages, 1847 KiB  
Article
Fusion of Recurrence Plots and Gramian Angular Fields with Bayesian Optimization for Enhanced Time-Series Classification
by Maria Mariani, Prince Appiah and Osei Tweneboah
Axioms 2025, 14(7), 528; https://doi.org/10.3390/axioms14070528 - 10 Jul 2025
Abstract
Time-series classification remains a critical task across various domains, demanding models that effectively capture both local recurrence structures and global temporal dependencies. We introduce a novel framework that transforms time series into image representations by fusing recurrence plots (RPs) with both Gramian Angular Summation Fields (GASFs) and Gramian Angular Difference Fields (GADFs). This fusion enriches the structural encoding of temporal dynamics. To ensure optimal performance, Bayesian Optimization is employed to automatically select the ideal image resolution, eliminating the need for manual tuning. Unlike prior methods that rely on individual transformations, our approach concatenates RP, GASF, and GADF into a unified representation and generalizes to multivariate data by stacking transformation channels across sensor dimensions. Experiments on seven univariate datasets show that our method significantly outperforms traditional classifiers such as one-nearest neighbor with Dynamic Time Warping, Shapelet Transform, and RP-based convolutional neural networks. For multivariate tasks, the proposed fusion model achieves macro F1 scores of 91.55% on the UCI Human Activity Recognition dataset and 98.95% on the UCI Room Occupancy Estimation dataset, outperforming standard deep learning baselines. These results demonstrate the robustness and generalizability of our framework, establishing a new benchmark for image-based time-series classification through principled fusion and adaptive optimization. Full article

18 pages, 1276 KiB  
Article
A Pressure-Driven Recovery Factor Equation for Enhanced Oil Recovery Estimation in Depleted Reservoirs: A Practical Data-Driven Approach
by Tarek Al Arabi Omar Ganat
Energies 2025, 18(14), 3658; https://doi.org/10.3390/en18143658 - 10 Jul 2025
Abstract
This study presents a new equation, the dynamic recovery factor (DRF), for evaluating the recovery factor (RF) in homogeneous and heterogeneous reservoirs. The DRF method's outcomes are validated against the decline curve analysis (DCA) method. Real measured field data from 15 wells in a homogeneous sandstone reservoir and 10 wells in a heterogeneous carbonate reservoir are utilized for this study. The DRF approach is based on the material balance principle and integrates several components (weighted average cumulative pressure drop (ΔPcum), total compressibility (Ct), and oil saturation (So)) to predict RF. The motivation for this study stems from the practical limitations of conventional RF estimation techniques, which often require extensive datasets and rely on simplifying assumptions that do not hold in complex heterogeneous reservoirs. For the homogeneous reservoir, the DRF approach predicts an RF of 8%, whereas the DCA method predicts 9.2%. In the heterogeneous reservoir, the DRF approach produces an RF of 6% compared with 5% for the DCA technique. Sensitivity analysis shows that RF is highly sensitive to variations in Ct, ΔPcum, and So, with values ranging from 6.00% to 10.71% for homogeneous reservoirs and from 4.43% to 7.91% for heterogeneous reservoirs. Uncertainty analysis indicates that errors in Ct, ΔPcum, and So propagate to RF, with weighting factor (Wi) uncertainties causing changes of ±3.7% and ±4.4% in RF for homogeneous and heterogeneous reservoirs, respectively. This study shows the new DRF approach's ability to provide reliable RF estimates from pressure dynamics, with DCA used as a validation and comparison baseline. The sensitivity and uncertainty analyses provide a strong foundation for RF estimation, supporting well-informed reservoir management decisions.
The novelty of the new DRF equation lies in its capability to reliably estimate RFs from the limited historical data available, making it appropriate for early-stage development and data-scarce situations. The new DRF equation was applied across a range of reservoir qualities, and the results show strong alignment with those obtained from DCA, demonstrating high accuracy. This agreement validates the applicability of the DRF equation for estimating recovery factors across different reservoir qualities. Full article
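The abstract does not reproduce the DRF equation itself, so the sketch below illustrates only the material-balance idea it builds on: the classic expansion-drive estimate RF ≈ Ct · ΔPcum, with ΔPcum taken as a well-weighted average pressure drop. The function name, inputs, and values are hypothetical; the published DRF form additionally incorporates oil saturation (So) and is not claimed to reduce to this textbook relation.

```python
def expansion_drive_rf(ct, dp_wells, weights):
    """Textbook material-balance expansion-drive recovery estimate:
    RF ~= Ct * dP_cum, where dP_cum is the weighted average cumulative
    pressure drop over the wells. Illustrative only; NOT the paper's
    proprietary DRF equation, which also incorporates oil saturation."""
    dp_cum = sum(w * dp for w, dp in zip(weights, dp_wells)) / sum(weights)
    return ct * dp_cum  # fractional recovery factor

# Hypothetical inputs: Ct in 1/psi, per-well pressure drops in psi,
# and dimensionless weighting factors Wi
rf = expansion_drive_rf(ct=15e-6,
                        dp_wells=[1800, 2200, 2000],
                        weights=[1.0, 1.5, 1.2])
```

Even this simplified form makes the abstract's sensitivity findings intuitive: RF scales linearly with Ct and with the weighted ΔPcum, so proportional errors in either propagate directly into the recovery estimate.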
(This article belongs to the Special Issue Petroleum Exploration, Development and Transportation)

14 pages, 1161 KiB  
Article
Robot-Assisted Radical Cystectomy with Ureterocutaneostomy: A Potentially Optimal Solution for Octogenarian and Frail Patients with Bladder Cancer
by Angelo Porreca, Filippo Marino, Davide De Marchi, Alessandro Crestani, Daniele D’Agostino, Paolo Corsi, Francesca Simonetti, Susy Dal Bello, Gian Maria Busetto, Francesco Claps, Aldo Massimo Bocciardi, Eugenio Brunocilla, Antonio Celia, Alessandro Antonelli, Andrea Gallina, Riccardo Schiavina, Andrea Minervini, Giuseppe Carrieri, Antonio Amodeo and Luca Di Gianfrancesco
J. Clin. Med. 2025, 14(14), 4898; https://doi.org/10.3390/jcm14144898 - 10 Jul 2025
Abstract
Background/Objectives: Robot-assisted radical cystectomy (RARC) has become the primary approach for treating bladder cancer, replacing the traditional open procedure. The robotic approach, when combined with ureterocutaneostomy (UCS), offers significant advantages for octogenarians, who are at increased risk for perioperative complications. Methods: This observational, prospective, multicenter analysis is based on data from the Italian Radical Cystectomy Registry (RIC), collected from January 2017 to June 2020 across 28 major urological centers in Italy. We analyzed consecutive male and female patients undergoing radical cystectomy (RC) and urinary diversion via the open, laparoscopic, or robot-assisted technique. Inclusion criteria: patients aged 80 years or older, with a WHO Performance Status (PS) of 2–3, an American Society of Anesthesiologists (ASA) score ≥ 3, a Charlson Comorbidity Index (CCI) ≥ 4, and a glomerular filtration rate (GFR) < 60 mL/min. Results: A total of 128 consecutive patients were included: 41 underwent RARC with UCS (Group 1), 65 open RC (ORC) with UCS (Group 2), and 22 laparoscopic RC (LRC) with UCS (Group 3). The cystectomy operative time was longer in robotic surgeries, while the lymph node dissection time was shorter. RARC with UCS showed statistically significant advantages in terms of lower median estimated blood loss (EBL), transfusion rate, and length of hospital stay (LOS) compared to the open and laparoscopic procedures. Intra- and postoperative complications were also lower in the RARC group. Conclusions: Robotic cystectomy in high-volume referral centers (≥20 cystectomies per year) provides the best outcomes for frail patients. Beyond addressing the baseline pathology, RARC with UCS may represent a leading option, offering oncological control while reducing complications in this vulnerable age group. Full article
(This article belongs to the Special Issue The Current State of Robotic Surgery in Urology)
