Search Results (2,470)

Search Parameters:
Keywords = density independent

15 pages, 2487 KiB  
Article
Feasibility of Sodium and Amide Proton Transfer-Weighted Magnetic Resonance Imaging Methods in Mild Steatotic Liver Disease
by Diana M. Lindquist, Mary Kate Manhard, Joel Levoy and Jonathan R. Dillman
Tomography 2025, 11(8), 89; https://doi.org/10.3390/tomography11080089 - 6 Aug 2025
Abstract
Background/Objectives: Fat and inflammation confound current magnetic resonance imaging (MRI) methods for assessing fibrosis in liver disease. Sodium or amide proton transfer-weighted MRI methods may be more specific for assessing liver fibrosis. The purpose of this study was to determine the feasibility of sodium and amide proton transfer-weighted MRI in individuals with liver disease and to determine if either method correlated with clinical markers of fibrosis. Methods: T1 and T2 relaxation maps, proton density fat fraction maps, liver shear stiffness maps, amide proton transfer-weighted (APTw) images, and sodium images were acquired at 3T. Image data were extracted from regions of interest placed in the liver. ANOVA tests were run with disease status, age, and body mass index as independent factors; significance was set to p < 0.05. Post-hoc t-tests were run when the ANOVA showed significance. Results: A total of 36 participants were enrolled, 34 of whom were included in the final APTw analysis and 24 in the sodium analysis. Estimated liver tissue sodium concentration differentiated participants with liver disease from those without, whereas amide proton transfer-weighted MRI did not. Estimated liver tissue sodium concentration negatively correlated with the Fibrosis-4 score, but amide proton transfer-weighted MRI did not correlate with any clinical marker of disease. Conclusions: Amide proton transfer-weighted imaging did not differ between groups. Estimated liver tissue sodium concentrations did differ between groups but did not provide additional information over conventional methods.
(This article belongs to the Section Abdominal Imaging)
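
As a rough illustration of the statistical workflow described in the abstract, the sketch below runs a one-way ANOVA on region-of-interest values for two groups and a post-hoc t-test when the ANOVA is significant. The group sizes, values, and units are invented, and the study's actual model also included age and body mass index as factors.

```python
# Hypothetical sketch of the analysis style described above: one-way ANOVA
# on ROI values by disease status, followed by a post-hoc t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
healthy = rng.normal(30.0, 4.0, size=12)   # e.g., estimated tissue sodium (made up)
disease = rng.normal(25.0, 4.0, size=12)

f_stat, p_anova = stats.f_oneway(healthy, disease)
if p_anova < 0.05:  # significance threshold used in the paper
    t_stat, p_post = stats.ttest_ind(healthy, disease)
    print(f"ANOVA p={p_anova:.3f}; post-hoc t-test p={p_post:.3f}")
```
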

14 pages, 1971 KiB  
Article
High-Density Arrayed Spectrometer with Microlens Array Grating for Multi-Channel Parallel Spectral Analysis
by Fangyuan Zhao, Zhigang Feng and Shuonan Shan
Sensors 2025, 25(15), 4833; https://doi.org/10.3390/s25154833 - 6 Aug 2025
Abstract
To enable multi-channel parallel spectral analysis in array-based devices such as micro-light-emitting diodes (Micro-LEDs) and line-scan spectral confocal systems, the development of compact array spectrometers has become increasingly important. In this work, a novel spectrometer architecture based on a microlens array grating (MLAG) is proposed, which addresses the major limitations of conventional spectrometers, including limited parallel detection capability, bulky structures, and insufficient spatial resolution. By integrating dispersion and focusing within a monolithic device, the system enables simultaneous acquisition across more than 2000 parallel channels within a 10 mm × 10 mm unit consisting of an f = 4 mm microlens and a 600 lines/mm blazed grating. Optimized microlens and aperture alignment allows for flexible control of the divergence angle of the incident light, and the system theoretically achieves nanometer-scale spectral resolution across a 380–780 nm wavelength range, with inter-channel measurement deviation below 1.25%. Experimental results demonstrate that this spectrometer system can theoretically support up to 2070 independently addressable subunits. At a wavelength of 638 nm, the coefficient of variation (CV) of spot spacing among array elements is as low as 1.11%, indicating high uniformity. The spectral repeatability precision is better than 1.0 nm, and after image enhancement, the standard deviation of the diffracted light shift is reduced to just 0.26 nm. The practical spectral resolution achieved is as fine as 3.0 nm. This platform supports wafer-level spectral screening of high-density Micro-LEDs, offering a practical hardware solution for high-precision industrial inline sorting, such as Micro-LED defect inspection.
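
The uniformity figure quoted above is a coefficient of variation (CV), i.e., the standard deviation of the spot spacings divided by their mean. A minimal sketch with invented spacing values:

```python
# CV = sample standard deviation / mean of the spot spacings (values made up).
import numpy as np

spacings_um = np.array([102.1, 101.8, 102.4, 101.9, 102.0, 102.3])  # hypothetical
cv = spacings_um.std(ddof=1) / spacings_um.mean()
print(f"CV of spot spacing: {cv * 100:.2f}%")
```
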

20 pages, 6555 KiB  
Article
Statistical Study of Whistler-Mode Waves in the Magnetospheric Magnetic Ducts
by Salman A. Nejad and Anatoly V. Streltsov
Universe 2025, 11(8), 260; https://doi.org/10.3390/universe11080260 - 6 Aug 2025
Abstract
This paper presents a comprehensive statistical analysis of extremely/very low-frequency (ELF/VLF) whistler-mode waves observed within magnetic ducts (B-ducts) using data from NASA’s Magnetospheric Multiscale (MMS) mission. A total of 687 events were analyzed, comprising 504 occurrences on the dawn-side flank of the magnetosphere and 183 in the nightside magnetotail, to investigate the spatial distribution and underlying mechanisms of wave–particle interactions. We identify distinct differences between these regions by examining key parameters such as event width, frequency, plasma density, and magnetic field extrema within B-ducts. Using an independent two-sample t-test, we assess the statistical significance of variations in these parameters between different observation periods. This study provides valuable insights into the magnetospheric conditions influencing B-duct formation and wave propagation, offering a framework for understanding ELF/VLF wave dynamics in Earth’s space environment.
(This article belongs to the Section Space Science)
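
A minimal sketch of the independent two-sample t-test used to compare duct parameters between the two regions. The arrays are placeholders with the reported sample sizes (504 dawn-side, 183 nightside), not MMS data; Welch's variant is used here, which the paper may or may not have assumed.

```python
# Independent two-sample t-test on a B-duct parameter (e.g., event width).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
dawn = rng.normal(1.2, 0.4, size=504)       # hypothetical event widths
nightside = rng.normal(1.0, 0.4, size=183)

# Welch's variant avoids assuming equal variances between the two regions.
t_stat, p_val = stats.ttest_ind(dawn, nightside, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.3g}")
```
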

23 pages, 5135 KiB  
Article
Strategic Multi-Stage Optimization for Asset Investment in Electricity Distribution Networks Under Load Forecasting Uncertainties
by Clainer Bravin Donadel
Eng 2025, 6(8), 186; https://doi.org/10.3390/eng6080186 - 5 Aug 2025
Abstract
Electricity distribution systems face increasing challenges due to demand growth, regulatory requirements, and the integration of distributed generation. In this context, distribution companies must make strategic and well-supported investment decisions, particularly in asset reinforcement actions such as reconductoring. This paper presents a multi-stage methodology to optimize reconductoring investments under load forecasting uncertainties. The approach combines a decomposition strategy with Monte Carlo simulation to capture demand variability. By discretizing a lognormal probability density function and selecting the largest loads in the network, the methodology balances computational feasibility with modeling accuracy. The optimization model employs exhaustive search techniques independently for each network branch, ensuring precise and consistent investment decisions. Tests conducted on the IEEE 123-bus feeder consider both operational and regulatory constraints from the Brazilian context. Results show that uncertainty-aware planning leads to a narrow investment range—between USD 55,108 and USD 66,504—highlighting the necessity of reconductoring regardless of demand scenarios. A comparative analysis of representative cases reveals consistent interventions, changes in conductor selection, and schedule adjustments based on load conditions. The proposed methodology enables flexible, cost-effective, and regulation-compliant investment planning, offering valuable insights for utilities seeking to enhance network reliability and performance while managing demand uncertainties.
(This article belongs to the Section Electrical and Electronic Engineering)
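
One building block named above is the discretization of a lognormal probability density function into a handful of load scenarios for Monte Carlo simulation. The sketch below bins a hypothetical lognormal load into five equiprobable scenarios; the distribution parameters and bin count are assumptions, not the paper's values.

```python
# Discretize a lognormal load PDF into equiprobable scenarios with
# representative values (the conditional median of each bin).
import numpy as np
from scipy import stats

mu, sigma = np.log(100.0), 0.25          # hypothetical load distribution (kW)
dist = stats.lognorm(s=sigma, scale=np.exp(mu))

edges = dist.ppf(np.linspace(0.0, 1.0, 6)[1:-1])   # interior bin edges
edges = np.concatenate(([0.0], edges, [np.inf]))
probs = np.diff(dist.cdf(edges))                   # probability mass per bin
reps = dist.ppf(dist.cdf(edges[:-1]) + probs * 0.5)  # bin-median scenario values
for r, p in zip(reps, probs):
    print(f"scenario {r:7.1f} kW with probability {p:.2f}")
```
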

15 pages, 2255 KiB  
Article
Nonnormalized Field Statistics in Coupled Reverberation Chambers
by Angelo Gifuni, Anett Kenderes and Giuseppe Grassini
Symmetry 2025, 17(8), 1239; https://doi.org/10.3390/sym17081239 - 5 Aug 2025
Abstract
In this work, we show the probability density functions (PDFs) and cumulative distribution functions (CDFs) of the nonnormalized field components and the associated powers received inside coupled reverberation chambers (CRCs), considering two canonical cases of single electrically small coupling apertures (ESCAs). These two cases involve one-dimensional (1D) and two-dimensional (2D) single electrically small CAs, respectively. We achieve normalized statistics from the nonnormalized ones for both field components and associated powers. We show that comparing the mean square values (MSVs) of the nonnormalized PDFs of the field components to the mean values (MVs) of the related nonnormalized PDFs of the powers is a proper method to corroborate the accuracy of the achieved theoretical distributions when they are derived independently. The achieved theoretical results are also validated by measurements. Moreover, for the sake of completeness and rigor of published results, we show two useful cases of results from measurements using two electrically large CAs.
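
The corroboration idea can be illustrated in the ideal (normalized) limit, where a rectangular field component in a well-stirred chamber is complex Gaussian, so its magnitude is Rayleigh distributed and its power is exponential: the MSV implied by the Rayleigh fit of the field samples must match the MV obtained independently from the exponential fit of the power samples. This is a Monte Carlo stand-in, not the paper's nonnormalized coupled-chamber PDFs.

```python
# Fit the field-magnitude and power distributions independently, then check
# that the MSV of |E| (= 2*sigma^2 for a Rayleigh variable) equals the mean
# of the exponential power distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
e = rng.normal(size=200_000) + 1j * rng.normal(size=200_000)  # ideal field component
mag, power = np.abs(e), np.abs(e) ** 2

sigma = stats.rayleigh.fit(mag, floc=0)[1]   # Rayleigh scale of |E|
mean_p = stats.expon.fit(power, floc=0)[1]   # exponential mean of P

msv_from_rayleigh = 2 * sigma**2
print(np.isclose(msv_from_rayleigh, mean_p, rtol=1e-2))  # should print True
```
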

24 pages, 4294 KiB  
Article
Post Hoc Event-Related Potential Analysis of Kinesthetic Motor Imagery-Based Brain-Computer Interface Control of Anthropomorphic Robotic Arms
by Miltiadis Spanos, Theodora Gazea, Vasileios Triantafyllidis, Konstantinos Mitsopoulos, Aristidis Vrahatis, Maria Hadjinicolaou, Panagiotis D. Bamidis and Alkinoos Athanasiou
Electronics 2025, 14(15), 3106; https://doi.org/10.3390/electronics14153106 - 4 Aug 2025
Abstract
Kinesthetic motor imagery (KMI), the mental rehearsal of a motor task without its actual performance, constitutes one of the most common techniques used for brain–computer interface (BCI) control for movement-related tasks. The effect of neural injury on motor cortical activity during execution and imagery remains under investigation in terms of activations, processing of motor onset, and BCI control. The current work aims to conduct a post hoc investigation of the event-related potential (ERP)-based processing of KMI during BCI control of anthropomorphic robotic arms by spinal cord injury (SCI) patients and healthy control participants in a completed clinical trial. For this purpose, we analyzed 14-channel electroencephalography (EEG) data from 10 patients with cervical SCI and 8 healthy individuals, recorded through the Emotiv EPOC BCI, as the participants attempted to move anthropomorphic robotic arms using KMI. EEG data were pre-processed by band-pass filtering (8–30 Hz) and independent component analysis (ICA). ERPs were calculated at the sensor space, and analysis of variance (ANOVA) was used to determine potential differences between groups. Our results showed no statistically significant differences between the SCI patient and healthy control groups in mean amplitude and latency (at the p < 0.05 level) across the recorded channels at various time points during stimulus presentation. Notably, no significant differences were observed in ERP components, except for the P200 component at the T8 channel. These findings suggest that brain circuits associated with motor planning and sensorimotor processes are not disrupted by anatomical damage following SCI. The temporal dynamics of motor-related areas—particularly in channels like F3, FC5, and F7—indicate that essential motor imagery (MI) circuits remain functional. Limitations include the relatively small sample size that may hamper the generalization of our findings, the sensor-space analysis that restricts anatomical specificity and neurophysiological interpretations, and the use of a low-density EEG headset lacking coverage over key motor regions. Non-invasive EEG-based BCI systems for motor rehabilitation in SCI patients could effectively leverage intact neural circuits to promote neuroplasticity and facilitate motor recovery. Future work should include validation against larger, longitudinal, high-density, source-space EEG datasets.
(This article belongs to the Special Issue EEG Analysis and Brain–Computer Interface (BCI) Technology)
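
A generic sketch of the 8–30 Hz band-pass step mentioned in the pre-processing (the ICA stage and any Emotiv-specific handling are omitted); the sampling rate and data are placeholders.

```python
# Zero-phase 8-30 Hz band-pass filtering of multichannel EEG.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 128.0                                  # Hz, hypothetical sampling rate
b, a = butter(4, [8.0, 30.0], btype="bandpass", fs=fs)

rng = np.random.default_rng(3)
eeg = rng.normal(size=(14, int(10 * fs)))   # 14 channels x 10 s of noise as stand-in
filtered = filtfilt(b, a, eeg, axis=1)      # filter each channel along time
```
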

17 pages, 310 KiB  
Article
Statistical Entropy Based on the Generalized-Uncertainty-Principle-Induced Effective Metric
by Soon-Tae Hong, Yong-Wan Kim and Young-Jai Park
Universe 2025, 11(8), 256; https://doi.org/10.3390/universe11080256 - 2 Aug 2025
Abstract
We investigate the statistical entropy of black holes within the framework of the generalized uncertainty principle (GUP) by employing effective metrics that incorporate leading-order and all-order quantum gravitational corrections. We construct three distinct effective metrics induced by the GUP, which are derived from the GUP-corrected temperature, entropy, and all-order GUP corrections, and analyze their impact on black hole entropy using ’t Hooft’s brick wall method. Our results show that, despite the differences in the effective metrics and the corresponding ultraviolet cutoffs, the statistical entropy consistently satisfies the Bekenstein–Hawking area law when expressed in terms of an invariant (coordinate-independent) distance near the horizon. Furthermore, we demonstrate that the GUP naturally regularizes the ultraviolet divergence in the density of states, eliminating the need for artificial cutoffs and yielding finite entropy even when counting quantum states only in the vicinity of the event horizon. These findings highlight the universality and robustness of the area law under GUP modifications and provide new insights into the interplay between quantum gravity effects and black hole thermodynamics.
(This article belongs to the Collection Open Questions in Black Hole Physics)
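
For orientation, the standard leading-order GUP and the Bekenstein–Hawking area law that the analysis recovers are shown below; the paper's specific effective metrics and all-order corrections are not reproduced here.

```latex
% Leading-order generalized uncertainty principle (beta > 0 is a deformation
% parameter) and the Bekenstein-Hawking area law:
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}\Bigl(1 + \beta\,(\Delta p)^{2}\Bigr),
\qquad
S_{\mathrm{BH}} \;=\; \frac{k_{B}\,A}{4\,\ell_{P}^{2}} .
```
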
13 pages, 1168 KiB  
Article
Importance of Imaging Assessment Criteria in Predicting the Need for Post-Dilatation in Transcatheter Aortic Valve Implantation with a Self-Expanding Bioprosthesis
by Matthias Hammerer, Philipp Hasenbichler, Nikolaos Schörghofer, Christoph Knapitsch, Nikolaus Clodi, Uta C. Hoppe, Klaus Hergan, Elke Boxhammer and Bernhard Scharinger
J. Cardiovasc. Dev. Dis. 2025, 12(8), 296; https://doi.org/10.3390/jcdd12080296 - 1 Aug 2025
Abstract
Background: Transcatheter aortic valve implantation (TAVI) has revolutionized the treatment of severe aortic valve stenosis (AS). Balloon post-dilatation (PD) remains an important procedural step to optimize valve function by resolving incomplete valve expansion, which may lead to paravalvular regurgitation and other potentially adverse effects. There are only limited data on the predictors, incidence, and clinical impact of PD during TAVI. Methods: This retrospective, single-center study analyzed 585 patients who underwent TAVI (2016–2022). Pre-procedural evaluations included transthoracic echocardiography and CT angiography to assess key parameters, including the aortic valve calcium score (AVCS); aortic valve calcium density (AVCd); aortic valve maximal systolic transvalvular flow velocity (AV Vmax); and aortic valve mean systolic pressure gradient (AV MPG). We identified imaging predictors of PD and evaluated associated clinical outcomes by analyzing procedural endpoints (according to VARC-3 criteria) and long-term survival. Results: PD was performed in 67 out of 585 patients, with elevated AV Vmax (OR: 1.424, 95% CI: 1.039–1.950; p = 0.028) and AVCd (OR: 1.618, 95% CI: 1.227–2.132; p = 0.001) emerging as significant independent predictors of PD in TAVI. Kaplan–Meier survival analysis revealed no significant differences in short- and mid-term survival between patients who underwent PD and those who did not. Interestingly, patients requiring PD exhibited a lower incidence of adverse events regarding major vascular complications, permanent pacemaker implantation, and stroke. Conclusions: The study highlights AV Vmax and AVCd as key predictors of PD. Importantly, PD was not associated with increased procedural adverse events and did not predict adverse events in this contemporary cohort.
(This article belongs to the Special Issue Clinical Applications of Cardiovascular Computed Tomography (CT))
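
Odds ratios of the kind reported above typically come from a logistic regression of the PD outcome on the imaging predictors. A sketch with entirely synthetic data and made-up effect sizes, not the study's cohort:

```python
# Logistic regression of post-dilatation (PD) on two imaging predictors;
# odds ratios are exp() of the fitted coefficients. All values hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 585
av_vmax = rng.normal(4.0, 0.6, n)              # m/s, hypothetical
avcd = rng.normal(500.0, 150.0, n)             # hypothetical calcium density
logit_p = -14.0 + 0.35 * av_vmax + 0.02 * avcd # invented true effects
pd_event = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

X = sm.add_constant(np.column_stack([av_vmax, avcd]))
fit = sm.Logit(pd_event, X).fit(disp=False)
print(np.exp(fit.params[1:]))                  # odds ratios per unit increase
```
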

13 pages, 1482 KiB  
Article
Effect of Surrounding Detritus on Phragmites australis Litter Decomposition: Evidence from Laboratory Aquatic Microcosms
by Franca Sangiorgio, Daniela Santagata, Fabio Vignes, Maurizio Pinna and Alberto Basset
Limnol. Rev. 2025, 25(3), 34; https://doi.org/10.3390/limnolrev25030034 - 1 Aug 2025
Abstract
The availability of detritus is a key factor influencing aquatic biota and can significantly affect decomposition processes. In this study, we investigated how varying quantities of surrounding detritus affect leaf litter decay rates in flowing and still-water microcosms, to highlight context-dependent effects of surrounding detritus on leaf litter decomposition. To isolate the effect of detritus amount, experiments were conducted in laboratory microcosms simulating lotic and lentic ecosystems, each containing leaf fragments for decomposition assessments. Four detritus quantities were tested, with invertebrates either allowed or restricted from moving among detritus patches. Leaf decomposition rates were influenced by the amount of surrounding detritus, with slower decay observed at higher detritus conditions, regardless of invertebrate mobility. Detritivore distribution responded to both detritus quantity and oxygen availability, showing a preference for high detritus conditions. Additionally, detritus quantity affected microbial activity with a quadratic response, as indicated by leaf respiration rates. Overall, our findings indicate that the amount of surrounding detritus modulates leaf litter decomposition independently of invertebrate density, by influencing oxygen dynamics and, consequently, the activity of biological decomposers.
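
The abstract does not spell out how decay rates were computed; a common convention in litter-decomposition studies is the exponential model M_t = M_0 e^(-kt), with k estimated by a log-linear fit. A sketch with invented mass-loss data:

```python
# Estimate the litter decay constant k from remaining-mass fractions by
# regressing ln(M_t / M_0) on time. Data are invented for illustration.
import numpy as np

t_days = np.array([0, 15, 30, 60, 90])
mass_frac = np.array([1.00, 0.88, 0.77, 0.61, 0.48])   # hypothetical remaining mass
k = -np.polyfit(t_days, np.log(mass_frac), 1)[0]       # -slope of ln(M/M0) vs t
print(f"decay constant k = {k:.4f} per day")
```
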

12 pages, 456 KiB  
Article
From Variability to Standardization: The Impact of Breast Density on Background Parenchymal Enhancement in Contrast-Enhanced Mammography and the Need for a Structured Reporting System
by Graziella Di Grezia, Antonio Nazzaro, Luigi Schiavone, Elisa Cisternino, Alessandro Galiano, Gianluca Gatta, Vincenzo Cuccurullo and Mariano Scaglione
Cancers 2025, 17(15), 2523; https://doi.org/10.3390/cancers17152523 - 30 Jul 2025
Abstract
Introduction: Breast density is a well-recognized factor in breast cancer risk assessment, with higher density linked to increased malignancy risk and reduced sensitivity of conventional mammography. Background parenchymal enhancement (BPE), observed in contrast-enhanced imaging, reflects physiological contrast uptake in non-pathologic breast tissue. While extensively characterized in breast MRI, the role of BPE in contrast-enhanced mammography (CEM) remains uncertain due to inconsistent findings regarding its correlation with breast density and cancer risk. Unlike breast density—standardized through the ACR BI-RADS lexicon—BPE lacks a uniform classification system in CEM, leading to variability in clinical interpretation and research outcomes. To address this gap, we introduce the BPE-CEM Standard Scale (BCSS), a structured four-tiered classification system specifically tailored to the two-dimensional characteristics of CEM, aiming to improve consistency and diagnostic alignment in BPE evaluation. Materials and Methods: In this retrospective single-center study, 213 patients who underwent mammography (MG), ultrasound (US), and contrast-enhanced mammography (CEM) between May 2022 and June 2023 at the “A. Perrino” Hospital in Brindisi were included. Breast density was classified according to ACR BI-RADS (categories A–D). BPE was categorized into four levels: Minimal (<10% enhancement), Light (10–25%), Moderate (25–50%), and Marked (>50%). Three radiologists independently assessed BPE in a subset of 50 randomly selected cases to evaluate inter-observer agreement using Cohen’s kappa. Correlations between BPE, breast density, and age were examined through regression analysis. Results: BPE was Minimal in 57% of patients, Light in 31%, Moderate in 10%, and Marked in 2%. A significant positive association was found between higher breast density (BI-RADS C–D) and increased BPE (p < 0.05), whereas lower-density breasts (A–B) were predominantly associated with minimal or light BPE. Regression analysis confirmed a modest but statistically significant association between breast density and BPE (R2 = 0.144), while age showed no significant effect. Inter-observer agreement for BPE categorization using the BCSS was excellent (κ = 0.85; 95% CI: 0.78–0.92), supporting its reproducibility. Conclusions: Our findings indicate that breast density is a key determinant of BPE in CEM. The proposed BCSS offers a reproducible, four-level framework for standardized BPE assessment tailored to the imaging characteristics of CEM. By reducing variability in interpretation, the BCSS has the potential to improve diagnostic consistency and facilitate integration of BPE into personalized breast cancer risk models. Further prospective multicenter studies are needed to validate this classification and assess its clinical impact.
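
Inter-observer agreement of the kind reported above can be computed with Cohen's kappa; the sketch below scores two raters' BPE categories (the study used three readers on 50 cases, for which pairwise kappas are one common reporting choice). Labels are invented.

```python
# Cohen's kappa between two raters' four-level BPE categorizations.
from sklearn.metrics import cohen_kappa_score

categories = ["Minimal", "Light", "Moderate", "Marked"]
rater_a = ["Minimal", "Light", "Minimal", "Moderate", "Light", "Minimal"]
rater_b = ["Minimal", "Light", "Light",   "Moderate", "Light", "Minimal"]
kappa = cohen_kappa_score(rater_a, rater_b, labels=categories)
print(f"Cohen's kappa = {kappa:.2f}")
```
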

20 pages, 1899 KiB  
Case Report
Ruptured Posterior Inferior Cerebellar Artery Aneurysms: Integrating Microsurgical Expertise, Endovascular Challenges, and AI-Driven Risk Assessment
by Matei Șerban, Corneliu Toader and Răzvan-Adrian Covache-Busuioc
J. Clin. Med. 2025, 14(15), 5374; https://doi.org/10.3390/jcm14155374 - 30 Jul 2025
Abstract
Background/Objectives: Posterior inferior cerebellar artery (PICA) aneurysms are among the most difficult cerebrovascular lesions to treat and account for 0.5–3% of all intracranial aneurysms. Their deep anatomical locations, broad-neck configurations, high perforator density, and close association with the brainstem create considerable technical challenges for either microsurgical or endovascular treatment. Although endovascular therapy is accepted as the standard of care for most posterior circulation aneurysms, coiling or flow diversion of PICA aneurysms is often compromised by incomplete occlusion, parent vessel compromise, and a high rate of recurrence. This case describes the utility of microsurgical clipping as a durable and definitive option, demonstrating the value of tailored surgical planning, preservation of anatomy, and ancillary technologies in securing a good outcome in ruptured PICA aneurysms. Methods: A 66-year-old male was evaluated for an acute subarachnoid hemorrhage from a ruptured, broad-necked fusiform left PICA aneurysm at the vertebral artery–PICA junction. Endovascular therapy was not an option due to the aneurysm's morphology and the risk of recurrence; therefore, a microsurgical approach was essential. A far-lateral craniotomy with a partial C1 laminectomy was carried out for proximal vascular control, with careful dissection of the perforating arteries and precise clip application for the complete exclusion of the aneurysm while preserving distal PICA flow. Results: Post-operative imaging demonstrated the complete obliteration of the aneurysm with unchanged cerebrovascular flow dynamics. The patient had progressive neurological recovery with no new cranial nerve deficits or ischemic complications. Long-term follow-up demonstrated stable aneurysm exclusion and full functional independence, emphasizing the sustainability of microsurgical intervention in challenging PICA aneurysms. Conclusions: This case highlights the current and evolving role of microsurgery for posterior circulation aneurysms, particularly where endovascular alternatives are limited by anatomy and hemodynamics. Advances in artificial intelligence for cerebral aneurysm rupture prediction, high-resolution vessel wall imaging, robotic-assisted microsurgery, and new-generation flow-modifying implants have the potential to revolutionize treatment paradigms by embedding precision medicine principles into aneurysm management. As cerebrovascular surgery expands, microsurgery, endovascular technologies, and computational knowledge can be combined to ensure individualized, durable, and minimally invasive treatment options for high-risk PICA aneurysms.
(This article belongs to the Special Issue Neurovascular Diseases: Clinical Advances and Challenges)

16 pages, 880 KiB  
Article
Probabilistic Estimates of Extreme Snow Avalanche Runout Distance
by David McClung and Peter Hoeller
Geosciences 2025, 15(8), 278; https://doi.org/10.3390/geosciences15080278 - 24 Jul 2025
Abstract
The estimation of runout distances for long return period avalanches is vital in zoning schemes for mountainous countries. There are two broad methods to estimate snow avalanche runout distance. One involves the use of a physical model to calculate speeds along the incline, with runout distance determined when the speed drops to zero. The second method, which is used here, is based on empirical or statistical models from databases of extreme runout for a given mountain range or area. The second method has been used for more than 40 years with diverse datasets collected from North America and Europe. The primary reason for choosing it is that it is independent of physical models such as avalanche dynamics, which allows comparisons between methods. In this paper, data from diverse datasets are analyzed to explain the relation between them and give an overall view of the meaning of the data. Runout is formulated from nine different datasets and 738 values of extreme runout, mostly with average return periods of about 100 years. Each dataset was initially fit to 65 probability density functions (PDFs) using five goodness-of-fit tests. Detailed discussion and analysis are presented for a set of extreme value distributions (Gumbel, Fréchet, Weibull). Two distributions had exemplary results in terms of goodness of fit: the generalized logistic (GLO) and the generalized extreme value (GEV) distributions. Considerations included both the goodness of fit and the heaviness of the tail, the latter being important in engineering decisions. The results showed that, generally, the GLO has a heavier tail. Our paper is the first to compare median extreme runout distances, the first to compare exceedance probability of extreme runout, and the first to analyze many probability distributions for a diverse set of datasets rigorously using five goodness-of-fit tests. Previous papers contained analysis mostly for the Gumbel distribution using only one goodness-of-fit test. Given ongoing climate change, the stationarity of the distributions is also considered; studies of climate change and avalanches so far suggest that stationarity is a reasonable assumption for the extreme avalanche data considered here.
(This article belongs to the Section Natural Hazards)
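
A sketch of the distribution-fitting step described above: fitting a generalized extreme value (GEV) distribution to runout data and checking it with a Kolmogorov–Smirnov test, one of several goodness-of-fit tests the paper applies (note that KS p-values are optimistic when parameters are fitted from the same data). The data here are synthetic.

```python
# Fit a GEV to extreme runout values, run a KS goodness-of-fit check, and
# look at a high quantile, since tail heaviness drives engineering decisions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
runout = stats.genextreme.rvs(c=-0.1, loc=0.2, scale=0.1,
                              size=120, random_state=rng)  # stand-in data

shape, loc, scale = stats.genextreme.fit(runout)
ks_stat, ks_p = stats.kstest(runout, "genextreme", args=(shape, loc, scale))
print(f"GEV shape={shape:.3f}, KS p-value={ks_p:.3f}")
print("99% runout quantile:", stats.genextreme.ppf(0.99, shape, loc, scale))
```
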

28 pages, 835 KiB  
Article
Progressive First-Failure Censoring in Reliability Analysis: Inference for a New Weibull–Pareto Distribution
by Rashad M. EL-Sagheer and Mahmoud M. Ramadan
Mathematics 2025, 13(15), 2377; https://doi.org/10.3390/math13152377 - 24 Jul 2025
Abstract
This paper explores statistical techniques for estimating unknown lifetime parameters using data from a progressive first-failure censoring scheme. The failure times are modeled with a new Weibull–Pareto distribution. Maximum likelihood estimators are derived for the model parameters, as well as for the survival and hazard rate functions, although these estimators do not have explicit closed-form solutions. The Newton–Raphson algorithm is employed for the numerical computation of these estimates. Confidence intervals for the parameters are approximated based on the asymptotic normality of the maximum likelihood estimators. The Fisher information matrix is calculated using the missing information principle, and the delta technique is applied to approximate confidence intervals for the survival and hazard rate functions. Bayesian estimators are developed under squared error, linear exponential, and general entropy loss functions, assuming independent gamma priors. Markov chain Monte Carlo sampling is used to obtain Bayesian point estimates and the highest posterior density credible intervals for the parameters and reliability measures. Finally, the proposed methods are demonstrated through the analysis of a real dataset.
(This article belongs to the Section D1: Probability and Statistics)
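
As an illustration only, the sketch below does maximum likelihood for a plain two-parameter Weibull on complete data via quasi-Newton (BFGS) optimization, standing in for the Newton–Raphson step described above; the new Weibull–Pareto distribution and the progressive first-failure censoring scheme are not reproduced.

```python
# Numerical MLE for a Weibull(shape, scale) by minimizing the negative
# log-likelihood; parameters are log-transformed to enforce positivity.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(6)
x = stats.weibull_min.rvs(c=1.8, scale=2.0, size=200, random_state=rng)

def neg_log_lik(theta):
    c, scale = np.exp(theta)
    return -np.sum(stats.weibull_min.logpdf(x, c, scale=scale))

res = optimize.minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="BFGS")
c_hat, scale_hat = np.exp(res.x)
print(f"shape={c_hat:.3f}, scale={scale_hat:.3f}")
```
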

18 pages, 1154 KiB  
Article
Predicting Major Adverse Cardiovascular Events After Cardiac Surgery Using Combined Clinical, Laboratory, and Echocardiographic Parameters: A Machine Learning Approach
by Mladjan Golubovic, Velimir Peric, Marija Stosic, Vladimir Stojiljkovic, Sasa Zivic, Aleksandar Kamenov, Dragan Milic, Vesna Dinic, Dalibor Stojanovic and Milan Lazarevic
Medicina 2025, 61(8), 1323; https://doi.org/10.3390/medicina61081323 - 23 Jul 2025
Abstract
Background and Objectives: Despite significant advances in surgical techniques and perioperative care, major adverse cardiovascular events (MACE) remain a leading cause of postoperative morbidity and mortality in patients undergoing coronary artery bypass grafting and/or aortic valve replacement. Accurate preoperative risk stratification is essential yet often limited by models that overlook atrial mechanics and underutilized biomarkers. Materials and Methods: This study aimed to develop an interpretable machine learning model for predicting perioperative MACE by integrating clinical, biochemical, and echocardiographic features, with a particular focus on novel physiological markers. A retrospective cohort of 131 patients was analyzed. An Extreme Gradient Boosting (XGBoost) classifier was trained on a comprehensive feature set, and SHapley Additive exPlanations (SHAP) values were used to quantify each variable’s contribution to model predictions. Results: In a stratified 80:20 train–test split, the model initially achieved an AUC of 1.00. Acknowledging the potential for overfitting in small datasets, additional validation was performed using 10 independent random splits and 5-fold cross-validation. These analyses yielded an average AUC of 0.846 ± 0.092 and an F1-score of 0.807 ± 0.096, supporting the model’s stability and generalizability. The most influential predictors included total atrial conduction time, mitral and tricuspid annular orifice areas, and high-density lipoprotein (HDL) cholesterol. These variables, spanning electrophysiological, structural, and metabolic domains, significantly enhanced discriminative performance, even in patients with preserved left ventricular function. The model’s transparency provides clinically intuitive insights into individual risk profiles, emphasizing the significance of non-traditional parameters in perioperative assessments. Conclusions: This study demonstrates the feasibility and potential clinical value of combining advanced echocardiographic, biochemical, and machine learning tools for individualized cardiovascular risk prediction. While promising, these findings require prospective validation in larger, multicenter cohorts before being integrated into routine clinical decision-making.
(This article belongs to the Section Intensive Care/Anesthesiology)
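
A sketch of the modeling recipe described above: an XGBoost classifier scored with stratified cross-validated AUC, then explained with SHAP. The features and labels are synthetic stand-ins for the clinical, laboratory, and echocardiographic variables.

```python
# XGBoost + stratified 5-fold cross-validated AUC + SHAP attributions.
import numpy as np
import shap
import xgboost as xgb
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(131, 10))                 # 131 patients, 10 mock features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=131) > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"AUC = {auc.mean():.3f} +/- {auc.std():.3f}")

model.fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)  # per-feature attributions
```
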

20 pages, 1461 KiB  
Article
Vulnerability-Based Economic Loss Rate Assessment of a Frame Structure Under Stochastic Sequence Ground Motions
by Zheng Zhang, Yunmu Jiang and Zixin Liu
Buildings 2025, 15(15), 2584; https://doi.org/10.3390/buildings15152584 - 22 Jul 2025
Abstract
Modeling mainshock–aftershock ground motions is essential for seismic risk assessment, especially in regions experiencing frequent earthquakes. Recent studies have often employed Copula-based joint distributions or machine learning techniques to simulate the statistical dependency between mainshock and aftershock parameters. While effective at capturing nonlinear correlations, these methods are typically black-box in nature, data-dependent, and difficult to generalize across tectonic settings. More importantly, they tend to focus solely on marginal or joint parameter correlations, which implicitly treat mainshocks and aftershocks as independent stochastic processes, thereby overlooking their inherent spectral interaction. To address these limitations, this study proposes an explicit and parameterized modeling framework based on the evolutionary power spectral density (EPSD) of random ground motions. Using the magnitude difference between a mainshock and an aftershock as the control variable, we derive attenuation relationships for the amplitude, frequency content, and duration. A coherence function model is further developed from real seismic records, treating the mainshock–aftershock pair as a vector-valued stochastic process and thus enabling a more accurate representation of their spectral dependence. Coherence analysis shows that the function remains relatively stable between 0.3 and 0.6 across the 0–30 rad/s frequency range. Validation results indicate that the simulated response spectra align closely with recorded spectra, achieving R2 values exceeding 0.90 and 0.91. To demonstrate the model’s applicability, a case study is conducted on a representative frame structure to evaluate seismic vulnerability and economic loss. As the mainshock PGA increases from 0.2 g to 1.2 g, the structure progresses from slight damage to complete collapse, with loss rates saturating near 1.0 g. These findings underscore the engineering importance of incorporating mainshock–aftershock spectral interaction in seismic damage and risk modeling, offering a transparent and transferable tool for future seismic resilience assessments.
(This article belongs to the Special Issue Structural Vibration Analysis and Control in Civil Engineering)
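
A sketch of estimating a coherence function between a mainshock and an aftershock record, in the spirit of the spectral-dependence model above (the paper's EPSD construction and attenuation relationships are not reproduced); signals and sampling rate are synthetic.

```python
# Magnitude-squared coherence between two correlated synthetic records,
# averaged over the 0-30 rad/s band discussed in the abstract.
import numpy as np
from scipy import signal

fs = 50.0                                      # Hz, hypothetical sampling rate
rng = np.random.default_rng(8)
common = rng.normal(size=3000)                 # shared source content
main = common + 0.5 * rng.normal(size=3000)
after = 0.6 * common + 0.5 * rng.normal(size=3000)

f_hz, coh = signal.coherence(main, after, fs=fs, nperseg=256)
omega = 2 * np.pi * f_hz                       # angular frequency in rad/s
print(coh[omega <= 30.0].mean())               # average coherence in the band
```
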
