Search Results (4,991)

Search Parameters:
Keywords = threshold determination

16 pages, 1677 KiB  
Article
222Rn Exhalation Rate of Building Materials: Comparison of Standard Experimental Protocols and Radiological Health Hazard Assessment
by Francesco Caridi, Lorenzo Pistorino, Federica Minissale, Giuseppe Paladini, Michele Guida, Simona Mancini, Domenico Majolino and Valentina Venuti
Appl. Sci. 2025, 15(14), 8015; https://doi.org/10.3390/app15148015 - 18 Jul 2025
Abstract
This study evaluates the accuracy of 222Rn exhalation rates from building materials using two standard experimental protocols, addressing the increasing importance of rapid radon assessment driven by health concerns and regulatory limits. Six types of natural stones frequently employed in the construction of buildings of historical-artistic relevance were analyzed using the closed chamber method (CCM) combined with the Durridge Rad7 system, under two experimental protocols that differed in measurement duration: 10 days (Method 1) versus 24 h (Method 2). The results revealed that radon exhalation rates ranged from 0.004 to 0.072 Bq h−1, which is moderate to low compared with studies in other regions. Statistical comparison using the u-test confirmed equivalence between the protocols (u ≤ 2), supporting the validity of the faster Method 2 for practical applications. Furthermore, the Markkanen room model was employed to estimate potential indoor radon levels and the associated radiological health risks for the investigated natural stones. The simulated indoor radon concentrations remained well below regulatory thresholds (maximum value: 37.3 Bq m−3), excluding any significant health concern under typical indoor conditions.
(This article belongs to the Section Environmental Sciences)
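The u-test equivalence check cited in this abstract can be sketched numerically. A minimal sketch assuming the standard compatibility form u = |x1 − x2| / sqrt(u1² + u2²) for two measurements with uncertainties u1 and u2; the exact formulation used by the authors and the sample values below are illustrative assumptions, not data from the paper:

```python
import math

def u_score(x1, u1, x2, u2):
    # Compatibility index between two measurements x1 +/- u1 and x2 +/- u2
    return abs(x1 - x2) / math.sqrt(u1 ** 2 + u2 ** 2)

# Hypothetical exhalation rates (Bq/h) from a 10-day and a 24-h protocol
rate_10d, sd_10d = 0.045, 0.004
rate_24h, sd_24h = 0.050, 0.006
u = u_score(rate_10d, sd_10d, rate_24h, sd_24h)
print(u <= 2)  # protocols treated as equivalent when u <= 2
```

Under this form, two protocols whose results differ by less than twice the combined standard uncertainty are declared equivalent.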

12 pages, 932 KiB  
Article
Determining Large Trees and Population Structures of Typical Tree Species in Northeast China
by Yutong Yang, Zhiyuan Jia, Shusen Ge, Yutang Li, Dongwei Kang and Junqing Li
Diversity 2025, 17(7), 491; https://doi.org/10.3390/d17070491 - 18 Jul 2025
Abstract
Specialized research on large trees in Northeast China is rare. To strengthen the understanding of local large trees, a survey of 4055 tree individuals from 75 plots in southeastern Jilin Province was conducted. The individual number and species composition of large trees in the community, as well as the criteria for large individuals in diameter at breast height (DBH) and the population structures of typical tree species, were analyzed. By setting DBH ≥ 50 cm as the threshold, 155 individuals across all the recorded trees were identified as large trees in the community, and 32.9% (51/155) of them were national second-class protected plant species in China. By setting the top 5% in DBH of a given tree species as the threshold for large individuals of that species, the large-individual criteria of six typical tree species were determined. The proportion of basal area of large trees to all trees was 30.4%, and the mean proportion of basal area of large individuals across the six typical tree species was 23.9% (±4.0%). As for population characteristics, Abies nephrolepis and Picea jezoensis had large population sizes but relatively thin individuals, Tilia amurensis and Pinus koraiensis had small population sizes but relatively thick individuals, while Betula costata and Larix olgensis had medium population sizes and medium-sized individuals.
(This article belongs to the Section Plant Diversity)
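The species-specific "top 5% of DBH" rule described above amounts to a percentile cutoff over a sorted sample. A minimal sketch; the DBH values and the index-based cutoff choice are invented for illustration, and the survey's actual interpolation method may differ:

```python
def large_individual_threshold(dbh_cm, top_fraction=0.05):
    # DBH cutoff: the value at the (1 - top_fraction) position of the sorted sample
    ordered = sorted(dbh_cm)
    k = int(len(ordered) * (1 - top_fraction))
    return ordered[min(k, len(ordered) - 1)]

# Hypothetical DBH sample (cm) for one species
dbh = [12, 18, 22, 25, 28, 31, 35, 38, 42, 47, 53, 61]
cutoff = large_individual_threshold(dbh)
large = [d for d in dbh if d >= cutoff]  # individuals counted as "large"
```

With only 12 trees, the top 5% resolves to the single largest individual; on the survey's real samples the cutoff would fall inside the distribution.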

26 pages, 54898 KiB  
Article
MSWF: A Multi-Modal Remote Sensing Image Matching Method Based on a Side Window Filter with Global Position, Orientation, and Scale Guidance
by Jiaqing Ye, Guorong Yu and Haizhou Bao
Sensors 2025, 25(14), 4472; https://doi.org/10.3390/s25144472 - 18 Jul 2025
Abstract
Multi-modal remote sensing image (MRSI) matching suffers from severe nonlinear radiometric distortions and geometric deformations, and conventional feature-based techniques are generally ineffective. This study proposes a novel and robust MRSI matching method using the side window filter (MSWF). First, a novel side window scale space is constructed based on the side window filter (SWF), which can preserve shared image contours and facilitate the extraction of feature points within this newly defined scale space. Second, noise thresholds in phase congruency (PC) computation are adaptively refined with the Weibull distribution; weighted phase features are then exploited to determine the principal orientation of each point, from which a maximum index map (MIM) descriptor is constructed. Third, coarse position, orientation, and scale information obtained through global matching are employed to estimate image-pair geometry, after which descriptors are recalculated for precise correspondence search. MSWF is benchmarked against eight state-of-the-art multi-modal methods—six hand-crafted (PSO-SIFT, LGHD, RIFT, RIFT2, HAPCG, COFSM) and two learning-based (CMM-Net, RedFeat) methods—on three public datasets. Experiments demonstrate that MSWF consistently achieves the highest number of correct matches (NCM) and the highest rate of correct matches (RCM) while delivering the lowest root mean square error (RMSE), confirming its superiority for challenging MRSI registration tasks.
(This article belongs to the Section Remote Sensors)

15 pages, 898 KiB  
Review
Heart Failure Syndromes: Different Definitions of Different Diseases—Do We Need Separate Guidelines? A Narrative Review
by Massimo Romanò
J. Clin. Med. 2025, 14(14), 5090; https://doi.org/10.3390/jcm14145090 - 17 Jul 2025
Abstract
Heart failure (HF) is a well-known leading cause of mortality, associated with a high symptom burden in advanced stages, frequent hospitalizations, and increasing economic costs. HF is typically classified into three main subgroups based on left ventricular ejection fraction (LVEF): HF with reduced ejection fraction (HFrEF), HF with mildly reduced ejection fraction (HFmrEF), and HF with preserved ejection fraction (HFpEF). Recently, two additional subgroups have been proposed: HF with improved ejection fraction (HFimpEF) and HF with supernormal ejection fraction (HFsnEF). These five phenotypes exhibit distinct risk factors, clinical presentations, therapeutic responses, and prognoses. However, the LVEF thresholds used to define these subgroups remain a subject of considerable debate, with significant differences of opinion among leading experts. A major criticism concerns the reliability of LVEF in accurately classifying HF subgroups. Due to substantial intra- and interobserver variability, determining the appropriate therapy and prognosis can be challenging, particularly in patients with HFmrEF. Additionally, patients classified under HFpEF are often too heterogeneous to be effectively managed as a single group. This narrative review explores these issues and suggests a possible need for a new approach to HF classification, one that revises the LVEF reference values for HF phenotypes and emphasizes LVEF trajectories rather than relying on a single measurement. Moreover, in light of the relatively limited therapeutic options for patients with LVEF > 40%, a new, simplified classification may be proposed: HF with reduced EF (LVEF ≤ 40%), HF with below-normal EF (41% ≤ LVEF ≤ 55%), and HF with normal EF (LVEF > 55%). This approach would better equip clinical cardiologists to manage the diverse spectrum of HF syndromes, always with the patient at the center.
(This article belongs to the Special Issue Clinical Update on the Diagnosis and Treatment of Heart Failure)
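The simplified classification proposed in this review maps directly to a three-way cutoff on LVEF. A sketch of that scheme using the cut points quoted in the abstract; the function name and boundary handling are my assumptions:

```python
def classify_hf(lvef_percent: float) -> str:
    # Simplified phenotypes proposed in the review:
    # LVEF <= 40 -> reduced; 41-55 -> below-normal; > 55 -> normal
    if lvef_percent <= 40:
        return "HF with reduced EF"
    if lvef_percent <= 55:
        return "HF with below-normal EF"
    return "HF with normal EF"

print(classify_hf(35))  # HF with reduced EF
print(classify_hf(48))  # HF with below-normal EF
print(classify_hf(62))  # HF with normal EF
```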

16 pages, 1637 KiB  
Article
Contextualizing Radon Mitigation into Healthy and Sustainable Home Design in the Commonwealth of Kentucky: A Conjoint Analysis
by Osama E. Mansour, Lydia (Niang) Cing and Omar Mansour
Sustainability 2025, 17(14), 6543; https://doi.org/10.3390/su17146543 - 17 Jul 2025
Abstract
Indoor radon constitutes a public health issue in various regions across the United States as the second leading cause of lung cancer after tobacco smoke. The U.S. Environmental Protection Agency advises radon mitigation interventions for residential buildings with indoor radon concentrations exceeding the threshold level of 4 pCi/L. Despite considerable research assessing the technical effectiveness of radon mitigation systems, there remains a gap in understanding their broader influence on occupant behavior and preferences in residential design. This study investigates the impact of residing in radon-mitigated homes within the Commonwealth of Kentucky—an area known for elevated radon concentrations—on occupants’ preferences regarding healthy home design attributes. The objectives of this research are twofold: first, to determine whether living in radon-mitigated homes enhances occupant awareness and consequently influences preferences toward health-related home attributes; and second, to quantitatively evaluate and compare the relative significance homeowners assign to health-related attributes such as indoor air quality, thermal comfort, and water quality relative to conventional attributes including home size, architectural style, and neighborhood quality. The overarching purpose is to explore the potential role radon mitigation initiatives may play in motivating occupants toward healthier home construction and renovation practices. Using choice-based conjoint (CBC) analysis, this paper compares preferences reported by homeowners from radon-mitigated homes against those from non-mitigated homes. While the findings suggest a relationship between radon mitigation and increased preference for indoor air quality, the cross-sectional design limits causal interpretation, and the possibility of reverse causation—where health-conscious individuals are more likely to seek mitigation—must be considered. The results provide novel insights into how radon mitigation efforts might effectively influence occupant priorities toward integrating healthier design elements in residential environments.
(This article belongs to the Section Pollution Prevention, Mitigation and Sustainability)

13 pages, 1723 KiB  
Article
Effects of Trimethylamine Concentrations in Hatching Eggs on Chick Quality in Dwarf Hens
by Xuefeng Shi, Lin Xuan, Jiahui Lai, Caiyun Jiang, Junying Li, Guiyun Xu and Jiangxia Zheng
Animals 2025, 15(14), 2121; https://doi.org/10.3390/ani15142121 - 17 Jul 2025
Abstract
Microbial contamination of hatching eggs often leads to reduced hatchability and poor chick quality. As trimethylamine (TMA), a metabolite derived from dietary choline, has antimicrobial properties, increasing yolk TMA content may increase eggs' resistance to bacteria; however, the effects of TMA concentrations on chick quality remain unknown. Hence, this study was conducted to determine the effects of yolk TMA concentrations on the hatchability and chick quality of dwarf hens with different FMO3 genotypes. Hens (n = 140) were divided into control and experimental groups; the latter received choline chloride (2800 mg/kg) to elevate yolk TMA concentrations. TMA content, Pasgar score, hatchability, and post-hatching performance were evaluated. The results showed that choline supplementation significantly increased TMA concentrations in hens with AT and TT genotypes. Higher yolk TMA concentrations (≥4 µg/g) correlated with improved Pasgar scores and reduced abnormalities in vitality, navel condition, and yolk sac absorption. Hatchability peaked at 6.49 µg/g TMA, suggesting a threshold effect. Although the growth rate remained unaffected, chick mortality decreased in the high-TMA group. Therefore, moderate TMA concentrations can enhance egg antimicrobial defenses and improve reproductive performance. This strategy provides a biologically grounded alternative to traditional chemical disinfection in hatcheries.
(This article belongs to the Section Poultry)

24 pages, 1481 KiB  
Article
Sources of Environmental Exposure to the Naturally Occurring Anabolic Steroid Ecdysterone in Horses
by Martin N. Sillence, Kathi Holt, Fang Ivy Li, Patricia A. Harris, Mitchell Coyle and Danielle M. Fitzgerald
Animals 2025, 15(14), 2120; https://doi.org/10.3390/ani15142120 - 17 Jul 2025
Abstract
Ecdysterone controls moulting and reproduction in insects, crustaceans, and helminths. It is also produced by many plants, probably as an insect deterrent. The steroid is not made by vertebrates but has anabolic effects in mammals and could be useful for treating sarcopenia in aged horses. However, ecdysterone is banned in horseracing and equestrian sports, and with no limit of reporting, the risk of unintended exposure to this naturally occurring prohibited substance is a concern. To explore this risk, pasture plants and hay samples were analysed for ecdysterone content, as well as samples of blood, faeces, and intestinal mucosa from horses (euthanized for non-research purposes) with varying degrees of endo-parasite infestation. The variability in serum ecdysterone concentrations between different horses after administering a fixed dose was also examined. Ecdysterone was detected in 24 hay samples (0.09 to 3.74 µg/g) and several weeds, with particularly high concentrations in Chenopodium album (244 µg/g) and Solanum nigrum (233 µg/g). There was a positive correlation between faecal ecdysterone and faecal egg counts, but no effect of anthelmintic treatment and no relation to the number of encysted cyathostome larvae in the large intestine mucosa. Certain horses maintained an unusually high serum ecdysterone concentration over several weeks and/or displayed an abnormally large response to oral ecdysterone administration. Thus, the risk of environmental exposure to ecdysterone is apparent, and several factors must be considered when determining an appropriate dosage for clinical studies or setting a reporting threshold for equine sports.

16 pages, 667 KiB  
Article
Strength Training vs. Aerobic Interval Training: Effects on Anaerobic Capacity, Aerobic Power and Second Ventilatory Threshold in Men
by Aleksander Drwal and Marcin Maciejczyk
Appl. Sci. 2025, 15(14), 7953; https://doi.org/10.3390/app15147953 - 17 Jul 2025
Abstract
The purpose of this non-randomized study was to determine the effects of strength training and aerobic interval training on anaerobic and aerobic power and endurance (assessed by determination of the second ventilatory threshold (VT2)) in young, non-trained men. Participants (n = 45) were recruited into three groups of 15 each. The first group performed strength training (ST), the second performed aerobic interval training (AIT), and the third served as the control group (CON). In each group, somatic measurements and tests of aerobic (graded test with VT2 determination) and anaerobic capacity (Wingate test) were performed twice (before and after the exercise intervention in the training groups). In the graded test, the maximal load (Pmax), maximal oxygen uptake (VO2max), and the intensity and oxygen uptake at VT2 were determined. In the Wingate test, peak power (PP) and mean power (MP) were determined. The exercise intervention in the ST and AIT groups lasted 6 weeks, with three workouts per week. Training in the ST and AIT groups resulted in significant increases in absolute Pmax (p < 0.001, ES = 0.52 and p < 0.05, ES = 0.36) and VO2max (p < 0.001, ES = 0.50 and p = 0.02, ES = 0.55). Only AIT was significantly effective in improving oxygen uptake at VT2 (p < 0.04, ES = 0.64), and only ST in improving PP. Strength training can be an effective method for developing aerobic and anaerobic capacity (it significantly increases Pmax, VO2max, and PP), while it does not significantly affect work intensity at VT2. Our results suggest that, particularly in anaerobic–aerobic sports, strength training may simultaneously improve both anaerobic power and maximal oxygen uptake. It can also complement endurance training.
(This article belongs to the Special Issue Recent Research on Biomechanics and Sports)

16 pages, 2247 KiB  
Article
Feasibility of Hypotension Prediction Index-Guided Monitoring for Epidural Labor Analgesia: A Randomized Controlled Trial
by Okechukwu Aloziem, Hsing-Hua Sylvia Lin, Kourtney Kelly, Alexandra Nicholas, Ryan C. Romeo, C. Tyler Smith, Ximiao Yu and Grace Lim
J. Clin. Med. 2025, 14(14), 5037; https://doi.org/10.3390/jcm14145037 - 16 Jul 2025
Abstract
Background: Hypotension following epidural labor analgesia (ELA) is its most common complication, affecting approximately 20% of patients and posing risks to both maternal and fetal health. As digital tools and predictive analytics increasingly shape perioperative and obstetric anesthesia practices, real-world implementation data are needed to guide their integration into clinical care. Current monitoring practices rely on intermittent non-invasive blood pressure (NIBP) measurements, which may delay recognition and treatment of hypotension. The Hypotension Prediction Index (HPI) algorithm uses continuous arterial waveform monitoring to predict hypotension for potentially earlier intervention. This clinical trial evaluated the feasibility, acceptability, and efficacy of continuous HPI-guided treatment in reducing time-to-treatment for ELA-associated hypotension and improving maternal hemodynamics. Methods: This was a prospective randomized controlled trial involving healthy pregnant individuals receiving ELA. Participants were randomized into two groups: Group CM (conventional monitoring with NIBP) and Group HPI (continuous noninvasive blood pressure monitoring). In Group HPI, hypotension treatment was guided by HPI output; in Group CM, treatment was based on NIBP readings. Feasibility, appropriateness, and acceptability outcomes were assessed among subjects and their bedside nurses using the Acceptability of Intervention Measure (AIM), Intervention Appropriateness Measure (IAM), and Feasibility of Intervention Measure (FIM) instruments. The primary efficacy outcome was time-to-treatment of hypotension, defined as the duration between onset of hypotension and administration of a vasopressor or fluid therapy. This outcome was chosen to evaluate the clinical responsiveness enabled by HPI monitoring. Hypotension was defined as a mean arterial pressure (MAP) < 65 mmHg for more than 1 min in Group CM and an HPI threshold < 75 for more than 1 min in Group HPI. Secondary outcomes included total time in hypotension, vasopressor doses, and hemodynamic parameters. Results: There were 30 patients (Group HPI, n = 16; Group CM, n = 14) included in the final analysis. Subjects and clinicians alike rated the acceptability, appropriateness, and feasibility of the continuous monitoring device highly, with median scores ≥ 4 across all domains, indicating favorable perceptions of the intervention. At 75 min after ELA initiation, the cumulative probability of time-to-treatment of hypotension was lower in Group HPI (65%) than in Group CM (71%), although this difference was not statistically significant (log-rank p = 0.66). Mixed models indicated trends toward higher cardiac output (β = 0.58, 95% confidence interval −0.18 to 1.34, p = 0.13) and lower systemic vascular resistance (β = −97.22, 95% confidence interval −200.84 to 6.40, p = 0.07) in Group HPI throughout the monitoring period. No differences were found in total vasopressor use or intravenous fluid administration. Conclusions: Continuous monitoring and precision hypotension treatment is feasible, appropriate, and acceptable to both patients and clinicians in a labor and delivery setting. These hypothesis-generating results support that HPI-guided treatment may be associated with hemodynamic trends that warrant further investigation to determine definitive efficacy in labor analgesia contexts.
(This article belongs to the Section Anesthesiology)
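The trial's sustained-threshold definition of hypotension (MAP < 65 mmHg for more than 1 min in the conventional arm) can be sketched as a simple detector over sampled MAP readings. The sampling interval, function name, and trace below are illustrative assumptions, not the study's implementation:

```python
def hypotension_onset(map_samples, interval_s=10, threshold=65, sustain_s=60):
    # Return the time (s) at which MAP has stayed below `threshold`
    # for more than `sustain_s`; None if the criterion is never met.
    below_s = 0
    for i, m in enumerate(map_samples):
        below_s = below_s + interval_s if m < threshold else 0
        if below_s > sustain_s:
            return (i + 1) * interval_s
    return None

# Hypothetical MAP trace (mmHg) sampled every 10 s after ELA initiation
trace = [72, 70, 64, 63, 62, 61, 60, 59, 58]
print(hypotension_onset(trace))
```

Any single reading at or above the threshold resets the timer, so only an uninterrupted sub-threshold run triggers the definition.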

14 pages, 15062 KiB  
Article
Short-Term Effects of Visceral Manual Therapy on Autonomic Nervous System Modulation in Individuals with Clinically Based Bruxism: A Randomized Controlled Trial
by Cayetano Navarro-Rico, Hermann Fricke-Comellas, Alberto M. Heredia-Rizo, Juan Antonio Díaz-Mancha, Adolfo Rosado-Portillo and Lourdes M. Fernández-Seguín
Dent. J. 2025, 13(7), 325; https://doi.org/10.3390/dj13070325 - 16 Jul 2025
Abstract
Background/Objectives: Bruxism has been associated with dysregulation of the autonomic nervous system (ANS). Visceral manual therapy (VMT) has shown beneficial effects on vagal tone and modulation of ANS activity. This study aimed to evaluate the immediate and short-term effects of VMT in individuals with clinically based bruxism. Methods: A single-blind randomized controlled trial was conducted including 24 individuals with clinically based bruxism. Participants received two sessions of either VMT or a sham placebo technique. Outcome measures included heart rate variability (HRV), both the standard deviation of normal-to-normal intervals (HRV-SDNN) and the root mean square of successive normal-to-normal intervals (HRV-RMSSD), as well as muscle tone and stiffness and pressure pain thresholds (PPTs). Measurements were made at T1 (baseline), T2 (post-first intervention), T3 (pre-second intervention), T4 (post-second intervention), and T5 (4-week follow-up). Results: A significant time × group interaction was observed for HRV-SDNN (p = 0.04, η2 = 0.12). No significant changes were found for muscle tone or stiffness. PPTs significantly increased at C4 after the second session (p = 0.049, η2 = 0.16) and at the left temporalis muscle after the first session (p = 0.01, η2 = 0.07). Conclusions: The findings suggest that two sessions of VMT may lead to significant improvements in HRV-SDNN compared with the placebo, suggesting a modulatory effect on autonomic function. No consistent changes were observed in the viscoelastic properties of the masticatory muscles. Isolated improvements in pressure pain sensitivity were found at C4 and the left temporalis muscle. Further research with larger sample sizes and long-term follow-up is needed to determine the clinical relevance of VMT in the management of signs and symptoms in individuals with bruxism.
(This article belongs to the Special Issue Dentistry in the 21st Century: Challenges and Opportunities)

24 pages, 1188 KiB  
Article
Toward an Experimental Common Framework for Measuring Double Materiality in Companies
by Christian Bux, Paola Geatti, Serena Sebastiani, Andrea Del Chicca, Pasquale Giungato, Angela Tarabella and Caterina Tricase
Sustainability 2025, 17(14), 6518; https://doi.org/10.3390/su17146518 - 16 Jul 2025
Abstract
In Europe, corporate sustainability reporting through the double materiality assessment was formally introduced with the Corporate Sustainability Reporting Directive in response to the European Sustainability Reporting Standards. The double materiality assessment is essential not only to determine the scope of corporate sustainability reporting but also to guide companies toward an efficient allocation of resources and shape corporate sustainability strategies. However, although EFRAG represents the technical adviser of the European Commission, there are numerous “interoperable” standards related to the assessment of double materiality, including the Global Reporting Initiative (GRI), or UNI 11919-1:2023. This research intends to systematically analyze similarities and divergences between the most widespread double materiality assessment standards at the global scale, highlighting their strengths and weaknesses and trying to identify a comparable path toward the creation of a set of common guidelines. This analysis is carried out through the systematic study of seven standards and by answering nine questions ranging from generic ones, such as “what is the concept of double materiality?”, to more technical questions like “does the standard identify thresholds?”, but adding original prospects such as “does the standard refer to different types of capital?”. Findings highlight that EFRAG, UNI 11919-1:2023, and GRI represent the most complete and least-discretionary standards, but some methodological aspects need to be enhanced. In the double materiality assessment, companies must identify key stakeholders, material topics and material risks, and must develop the double materiality matrix, promoting transparent disclosure, continuous monitoring, and stakeholders’ engagement. While comparability is principally required among companies operating within the same sector and of similar size, this does not preclude the possibility of comparing firms across different sectors with respect to specific indicators, when appropriate or necessary.

24 pages, 5824 KiB  
Article
Evaluation of Highway Pavement Structural Conditions Based on Measured Crack Morphology by 3D GPR and Finite Element Modeling
by Zhonglu Cao, Dianguang Cao, Haolei Chang, Yaoguo Fu, Xiyuan Shen, Weiping Huang, Huiping Wang, Wanlu Bao, Chao Feng, Zheng Tong, Xiaopeng Lin and Weiguang Zhang
Materials 2025, 18(14), 3336; https://doi.org/10.3390/ma18143336 - 16 Jul 2025
Abstract
Structural cracks are internal distresses that cannot be observed from pavement surfaces. However, existing evaluation methods for asphalt pavement structures do not account for these cracks, which are crucial for accurate pavement assessment and effective maintenance planning. This study develops a novel framework combining three-dimensional (3D) ground penetrating radar (GPR) and finite element modeling (FEM) to evaluate the severity of structural cracks. First, the size and depth development of structural cracks on a four-layer asphalt pavement were determined using 3D GPR. Then, the range of influence of structural cracks on structural bearing capacity was analyzed using a 3D FEM simulation model. Structural cracks have a distance-dependent diminishing influence on horizontal deflection, with the most pronounced effects within a 20-cm-wide zone surrounding the cracks. Finally, two indices are proposed: the pavement structural crack index (PSCI), to assess the depth of crack damage, and the structural crack reflection ratio (SCRR), to evaluate surface reflection. PSCI and SCRR are then used to classify the severity of structural cracks as none, low, or high. The threshold between none and low damage is a structural crack damage rate of 0.19%, and the threshold between low and high damage is 0.663%. An experiment on a 132-km expressway indicated that the proposed method achieved 94.4% accuracy, as verified by coring. The results also demonstrate a strong correlation between PSCI and pavement deflection (R2 = 0.92), supporting performance-based maintenance strategies, and a correlation between structural and surface cracks, with 65.8% of cracked sections exhibiting both structural and surface cracks.
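The reported severity thresholds lend themselves to a straightforward classifier. A sketch using the abstract's cut points (0.19% and 0.663% structural crack damage rate); the function name and the handling of values exactly at the boundaries are my assumptions:

```python
def crack_severity(damage_rate_percent: float) -> str:
    # Severity from structural crack damage rate:
    # <= 0.19% -> none; <= 0.663% -> low; above -> high
    if damage_rate_percent <= 0.19:
        return "none"
    if damage_rate_percent <= 0.663:
        return "low"
    return "high"

print(crack_severity(0.10))  # none
print(crack_severity(0.40))  # low
print(crack_severity(0.90))  # high
```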
18 pages, 2154 KiB  
Article
Performance Limits of Hydraulic-Binder Stabilization for Dredged Sediments: Comparative Case Studies
by Abdeljalil Zri, Nor-Edine Abriak, Amine el Mahdi Safhi, Shima Pilehvar and Mahdi Kioumarsi
Buildings 2025, 15(14), 2484; https://doi.org/10.3390/buildings15142484 - 15 Jul 2025
Viewed by 208
Abstract
Maintenance dredging produces large volumes of fine sediments that are commonly discarded, despite increasing pressure for beneficial reuse. Lime–cement stabilization offers one pathway, yet field performance is highly variable. This study juxtaposes two French marine dredged sediments (DS): DS-F (low plasticity, organic matter (OM) ≈ 2 wt.%) and DS-M (high plasticity, OM ≈ 18 wt.%), each treated with practical hydraulic road binder (HRB) dosages. It is the first French study to directly contrast two different DS types under identical HRB treatment and to propose practical boundary thresholds. Physical indexes (particle size, methylene blue value (MBV), Atterberg limits, OM) were measured; mixtures were compacted (Modified Proctor), and their immediate bearing index (IBI), unconfined compressive strength (UCS), indirect tensile strength (ITS), and elastic modulus were determined. DS-F reached IBI ≈ 90–125%, UCS ≈ 4.7–5.9 MPa, and ITS ≈ 0.40–0.47 MPa with only 6–8 wt.% HRB, satisfying LCPC-SETRA class S2–S3 requirements for road subgrades. DS-M never exceeded IBI ≈ 8%, despite 3 wt.% lime + 6 wt.% cement. A decision matrix distilled from these cases and the recent literature shows that successful stabilization requires MBV < 3 g/100 g, plasticity index < 25%, OM < 7 wt.%, and fine particles < 35%. These thresholds permit rapid screening of dredged lots before costly treatment. Highlighting both positive and negative evidence clarifies the realistic performance envelope of soil–cement reuse and supports circular-economy management of DS. Full article
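The decision matrix described above reduces to four inequality checks, which can be sketched as a screening function. This is an illustrative sketch, not the paper's decision matrix itself; the function name is hypothetical, and only the four boundary values are taken from the abstract.

```python
def screen_dredged_sediment(mbv: float,
                            plasticity_index: float,
                            organic_matter: float,
                            fines: float) -> bool:
    """Rapid screening of a dredged-sediment lot before treatment,
    using the boundary thresholds from the abstract's decision matrix:
    MBV < 3 g/100 g, plasticity index < 25%, OM < 7 wt.%,
    and fine particles < 35%. Returns True if all criteria pass."""
    return (mbv < 3.0
            and plasticity_index < 25.0
            and organic_matter < 7.0
            and fines < 35.0)
```

Under these criteria, a low-plasticity, low-OM sediment such as DS-F (OM ≈ 2 wt.%) would pass, while a high-plasticity, high-OM sediment such as DS-M (OM ≈ 18 wt.%) would be rejected before any costly treatment.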
(This article belongs to the Collection Advanced Concrete Materials in Construction)
13 pages, 1819 KiB  
Article
Numerical Investigation of 2D Ordered Pillar Array Columns: An Algorithm of Unit-Cell Automatic Generation and the Corresponding CFD Simulation
by Qihao Jiang, Stefano Rocca, Kareem Shaikhuzzaman and Simone Dimartino
Separations 2025, 12(7), 184; https://doi.org/10.3390/separations12070184 - 15 Jul 2025
Viewed by 161
Abstract
This paper presents a numerical investigation into 2D ordered pillar array columns for liquid chromatography, focusing on the development of an algorithm for the automatic creation of unit-cell morphologies and their subsequent computational fluid dynamics (CFD) simulation. The algorithm incorporates functional and operational constraints that ensure the generated structures are permeable and suitable for chromatographic separations: the functional constraints comprise the principal pathway and no dry void constraints, while the operational constraints involve symmetry and porosity thresholds. The algorithm's efficacy is demonstrated by a reduction rate of 97.8% for order-5 matrices. CFD simulations of the generated morphologies reveal that the homogeneity of the fluid velocity profile within the unit cell is a key determinant of separation performance, suggesting that refining the resolution of discrete unit cells could enhance separation efficiency. Future work will explore the inclusion of more complex morphologies and the impact of particle shape and size on separation efficiency. Full article
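The operational constraints mentioned in the abstract (symmetry and a porosity threshold) can be sketched as a filter over candidate binary unit-cell matrices. This is a rough sketch under stated assumptions: the porosity window and the choice of mirror symmetry are illustrative, not the paper's actual values, and the functional constraints (principal pathway, no dry void) would require a connectivity search that is omitted here.

```python
def satisfies_operational_constraints(cell,
                                      min_porosity: float = 0.4,
                                      max_porosity: float = 0.8) -> bool:
    """Filter a candidate binary unit-cell matrix (list of rows,
    1 = pillar, 0 = void) against two operational constraints:
    left-right mirror symmetry and a porosity window.
    The porosity bounds are illustrative assumptions only."""
    # Mirror symmetry about the vertical axis: each row is a palindrome.
    symmetric = all(row == row[::-1] for row in cell)
    # Porosity is the void fraction of the unit cell.
    n_cells = sum(len(row) for row in cell)
    porosity = 1.0 - sum(map(sum, cell)) / n_cells
    return symmetric and (min_porosity <= porosity <= max_porosity)
```

Enumerating all order-n binary matrices and discarding those that fail such checks is one way a large candidate space can be reduced before CFD simulation, consistent with the high reduction rate reported for order-5 matrices.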
31 pages, 4652 KiB  
Article
A Delayed Malware Propagation Model Under a Distributed Patching Mechanism: Stability Analysis
by Wei Zhang, Xiaofan Yang and Luxing Yang
Mathematics 2025, 13(14), 2266; https://doi.org/10.3390/math13142266 - 14 Jul 2025
Viewed by 83
Abstract
Antivirus patches are among the most powerful tools for defending against malware spread, and distributed patching is superior to its centralized counterpart in terms of a significantly lower bandwidth requirement. Under the distributed patching mechanism, a novel malware propagation model with double delays and double saturation effects is proposed. The basic properties of the model are discussed, and a pair of thresholds, the first threshold R0 and the second threshold R1, are determined. It is shown that (a) the model admits no malware-endemic equilibrium if R0 ≤ 1, (b) the model admits a unique patch-free malware-endemic equilibrium and no patch-endemic malware-endemic equilibrium if 1 < R0 ≤ R1, and (c) the model admits a unique patch-free malware-endemic equilibrium and a unique patch-endemic malware-endemic equilibrium if R0 > R1. A criterion for the global asymptotic stability of the malware-free equilibrium is given, and pairs of criteria for the local asymptotic stability of the patch-free and the patch-endemic malware-endemic equilibria are derived. In cybersecurity terms, these theoretical outcomes mean the following: (a) if the first threshold can be kept at or below unity, the malware can be eradicated through distributed patching; (b) if the first threshold can only be kept between unity and the second threshold, the patches may fail completely, and the malware cannot be eradicated through distributed patching; (c) if the first threshold cannot be kept below the second threshold, the patches may work permanently, but the malware still cannot be eradicated through distributed patching. The influence of the delays and the saturation effects on malware propagation is examined experimentally, and the relevant conclusions reveal how the delays and saturation effects modulate these outcomes. Full article
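The three equilibrium regimes determined by the thresholds R0 and R1 can be summarized as a simple case analysis. This sketch only restates the trichotomy from the abstract; the function name and regime labels are illustrative, and R1 > 1 is assumed so that case (b) is non-empty.

```python
def patching_regime(r0: float, r1: float) -> str:
    """Map the threshold pair (R0, R1) of the delayed malware
    propagation model to its equilibrium regime, following the three
    cases stated in the abstract. Assumes r1 > 1."""
    if r0 <= 1.0:
        # Case (a): no malware-endemic equilibrium exists.
        return "malware-free: eradicable by distributed patching"
    if r0 <= r1:
        # Case (b): unique patch-free malware-endemic equilibrium only.
        return "patch-free endemic: patches may fail completely"
    # Case (c): both patch-free and patch-endemic endemic equilibria.
    return "patch-endemic: patches persist but malware is not eradicated"
```

For instance, with R1 = 2, lowering R0 from 3 to 1.5 moves the model from case (c) to case (b), and lowering it below 1 reaches the eradicable regime.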