Search Results (2,818)

Search Parameters:
Keywords = distance coefficient

20 pages, 3858 KB  
Article
Vulnerability Assessment of Levee Failure Due to Underseepage in the Szigetköz Floodplain Area
by Edina Koch and Richard Ray
Water 2026, 18(5), 634; https://doi.org/10.3390/w18050634 - 7 Mar 2026
Abstract
Analysis of the likelihood of structural or hydraulic failures of levees is a crucial part of flood risk assessment and is affected by many uncertainties. This paper evaluates a critical area along the Danube River in the Szigetköz floodplain, where hydraulic risk has been increasing due to rising flood levels. Blanket theory approaches demonstrated the probability of failure relative to erosional failure, and Monte Carlo simulations generated fragility curves. The results of the case study show that the thickness of the aquifer layer has a slight effect if it is deeper than 30 m. The research also reveals that the probability of failure is highly affected by the distance from the river to the riverside levee toe; the shorter this distance, the higher the hydraulic risk. Sensitivity analyses emphasize the effect of variable inhomogeneity; as the leakage factor increases, the probability of failure due to underseepage increases. Comparing coefficients of variation across different floodwater levels showed that at low floodwater levels, a lower coefficient of variation corresponds to a lower probability of failure. In contrast, at higher floodwater levels, the same coefficient produced a higher probability of failure.
(This article belongs to the Section Soil and Water)
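The fragility-curve step described in this abstract, Monte Carlo sampling of a random resistance against a level-dependent load, can be sketched generically. This is not the authors' model: the lognormal critical-gradient distribution and the linear demand relation below are illustrative assumptions only.

```python
import numpy as np

def fragility_curve(water_levels, crit_mean, crit_cov, n_sims=100_000, seed=0):
    """Estimate P(failure) at each floodwater level by Monte Carlo.

    Failure occurs when a (placeholder) exit-gradient demand exceeds a
    random critical gradient, modelled as lognormal with the given mean
    and coefficient of variation.
    """
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1.0 + crit_cov**2))   # lognormal shape from the CoV
    mu = np.log(crit_mean) - 0.5 * sigma**2      # chosen so the mean matches crit_mean
    capacity = rng.lognormal(mu, sigma, n_sims)
    probs = []
    for h in water_levels:
        demand = 0.1 * h  # hypothetical demand model: gradient grows with head
        probs.append(np.mean(demand > capacity))
    return np.array(probs)
```

Plotting the returned probabilities against the water levels gives the fragility curve; the failure probability rises monotonically with the flood level.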

23 pages, 2843 KB  
Article
Robust Multiblock STATICO for Modeling Environmental Indicator Structures: A Methodological Framework for Sustainability Monitoring in Complex Systems
by Harry Vite-Cevallos, Omar Ruiz-Barzola and Purificación Galindo-Villardón
Sustainability 2026, 18(5), 2607; https://doi.org/10.3390/su18052607 - 6 Mar 2026
Abstract
Sustainability monitoring relies on environmental indicator systems that integrate heterogeneous multivariate measurements across space and time; however, collinearity, non-Gaussian variability, and influential observations frequently destabilize classical multiblock methods and may bias indicator-based assessment and decision support. This study proposes a robust extension of the STATICO (STATIS–CO-inertia) framework to model common structures among paired environmental indicator blocks under realistic data contamination. The approach preserves the original triadic algebraic formulation while incorporating robust covariance estimation and adaptive weighting to reduce the influence of outliers and structurally unstable blocks. Robustification is implemented at the interstructure stage through a reformulated Escoufier’s RV coefficient and in the construction of the compromise space via robust distances. The RV coefficient, a multivariate generalization of the squared Pearson correlation computed between cross-product matrices, is used to quantify structural similarity between paired data blocks and to evaluate the stability of the compromise structure. Performance is evaluated using simulated datasets calibrated to represent Ecuadorian coastal monitoring conditions. The results show that Robust STATICO increases compromise dominance and stability, redistributes inter-block similarities more coherently, and improves discriminative representation in the factorial space, yielding more interpretable and environmentally plausible structures. Overall, the proposed method provides a reliable analytical tool for sustainability-oriented environmental monitoring by supporting stable identification of persistent multivariate patterns and robust comparison of indicator structures in complex systems.
(This article belongs to the Section Environmental Sustainability and Applications)
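Escoufier's RV coefficient mentioned above has a compact closed form over cross-product matrices. A minimal plain-NumPy sketch (the classical coefficient, not the robust reformulation the paper proposes):

```python
import numpy as np

def rv_coefficient(X, Y):
    """Escoufier's RV coefficient between two data blocks measured on the
    same n observations: 0 = no shared structure, 1 = identical
    configuration up to rotation and scaling."""
    X = X - X.mean(axis=0)        # column-centre each block
    Y = Y - Y.mean(axis=0)
    WX, WY = X @ X.T, Y @ Y.T     # n x n configuration (cross-product) matrices
    return np.trace(WX @ WY) / np.sqrt(np.trace(WX @ WX) * np.trace(WY @ WY))
```

Because the numerator equals the squared Frobenius norm of the between-block cross-covariance, RV behaves as a multivariate analogue of a squared Pearson correlation, consistent with the description in the abstract.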

11 pages, 866 KB  
Technical Note
CTV Delineation in the Era of Artificial Intelligence: A Multicenter Assessment of a 3D U-Net Model as Predictive Peer Review for Hypofractionated Prostate Cancer Treatment
by Luca Capone, Giorgio H. Raza, Chiara D’Ambrosio, Francesco Tortorelli, Francesco Aquilanti and Pier Carlo Gentile
AI 2026, 7(3), 97; https://doi.org/10.3390/ai7030097 - 6 Mar 2026
Abstract
Purpose: The aim is to evaluate the effectiveness of artificial intelligence (AI)-based automatic segmentation as a predictive tool for clinical peer review in prostate cancer patients treated with hypofractionated radiotherapy. Methodology: A retrospective analysis was conducted on 62 patients treated across three Italian centers between 2020 and 2025. CT images were segmented using software based on 3D U-net models. Three workflows were compared: manual segmentation (C man), automatic segmentation (C AI), and AI-based segmentation adjusted by clinicians (C adj). Quantitative metrics used for comparison included the Dice Similarity Coefficient (DSC) and Hausdorff Distance (HDmax). Statistical analysis involved Welch’s t-test and Cohen’s d for effect size. Results: The results showed a significant improvement in agreement between C AI and C adj compared to C man. Median DSC for CTV increased from 0.80 (C man) to 0.92 (C adj), while HDmax decreased from 12.33 mm to 9.22 mm. Similar improvements were observed for the bladder and anorectum. All differences were statistically significant (p < 0.0001), with large effect sizes (Cohen’s d > 0.8). Discussion: AI use demonstrated a reduction in interobserver variability and segmentation time, enhancing workflow standardization. The C adj workflow, where the physician acts as a reviewer of AI-generated contours, proved effective and potentially integrable into clinical peer review. The predictive peer review refers to a preliminary support step in the clinical review process rather than a substitute for medical decision-making.
(This article belongs to the Special Issue Applications of Artificial Intelligence in Medicine)
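The two agreement metrics used above, the Dice Similarity Coefficient and the maximum Hausdorff distance, can be computed directly from binary masks and contour points. A minimal sketch (brute-force pairwise distances; production toolkits typically use distance transforms):

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_max(pts_a, pts_b):
    """Symmetric (maximum) Hausdorff distance between two point sets."""
    pts_a, pts_b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```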

24 pages, 3661 KB  
Article
A CNN-Based Model of Cross-Immunity to Influenza A(H3N2) Virus: Testing Under “Real-World” Conditions
by Marina N. Asatryan, Vaagn G. Agasaryan, Boris I. Timofeev, Ilya S. Shmyr, Dmitrii N. Shcherbinin, Elita R. Gerasimuk, Tatiana A. Timofeeva, Ivan F. Ershov, Tatiana A. Semenenko, Denis Yu. Logunov and Alexander L. Gintsburg
Viruses 2026, 18(3), 327; https://doi.org/10.3390/v18030327 - 6 Mar 2026
Abstract
A cross-immunity model for influenza A(H3N2) based on convolutional neural networks (CNNs) was developed and validated under temporally structured conditions that mimic real-world forecasting. Antigenic distance was derived from hemagglutination inhibition (HI) titers. The model was trained on WHO data (2011–2023) and tested in a time-split fashion on independent recent data (2022–2024). Hemagglutinin sequences (HA/HA1) were encoded into 3D tensors using five physicochemical indices from AAindex. Two- and three-layer CNN architectures were tested. Performance was evaluated using Accuracy, Sensitivity, Specificity, and Matthews Correlation Coefficient (MCC) with 95% confidence intervals. Validation on the classic Smith’s dataset showed high accuracy (Accuracy = 0.9996, MCC = 0.9964), serving as a necessary sanity check. Testing on current data yielded lower but robust results (Accuracy: 0.73–0.81, MCC: 0.48–0.60), reflecting real-world forecasting complexity. ROC analysis confirmed the strong discriminative ability (AUC ≥ 0.805) and good calibration (Brier scores ≤ 0.192). The three-layer CNN demonstrated greater robustness on challenging data. This CNN model is an effective tool for assessing influenza A(H3N2) antigenic distances and holds promise for integration into epidemiological models to aid vaccine strain selection. Further accuracy improvements may arise from modeling the structural impact of amino acid substitutions and polyclonal immune responses.
(This article belongs to the Section General Virology)
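The Matthews Correlation Coefficient reported above reduces to a single confusion-matrix formula, which is what makes it robust to class imbalance. A minimal sketch for binary labels:

```python
import math

def mcc(y_true, y_pred):
    """Matthews Correlation Coefficient for 0/1 labels; returns a value
    in [-1, 1], with 0 for a degenerate (single-class) prediction."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```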

23 pages, 5979 KB  
Article
Physics-Informed Graph Attention Network with Topology Masking for Probabilistic Load Forecasting in Active Distribution Networks
by Wenting Lei, Weifeng Peng, Chenxi Dai and Shufeng Dong
Energies 2026, 19(5), 1294; https://doi.org/10.3390/en19051294 - 4 Mar 2026
Abstract
The integration of distributed photovoltaics (PV) introduces time-varying electrical coupling in active distribution networks, limiting the efficacy of conventional forecasting methods that rely on incomplete topological information and static physical models. This paper proposes a physics-informed spatio-temporal graph attention network (PI-STGAT) for probabilistic load forecasting under highly fluctuating conditions. A condition-adaptive correlation blending mechanism, derived from voltage–power sensitivity principles, fuses physical priors with statistical correlations using a PV-weighted strategy to capture time-varying electrical connectivity. An impedance-weighted continuous physical gating architecture maps voltage correlation coefficients into continuous attention biases, reflecting the spatial continuity of electrical distances while suppressing long-range noise. An uncertainty-aware adaptive physical constraint strategy dynamically modulates physical loss weights based on prediction variance and PV penetration, balancing fitting accuracy against physical consistency. Validation on real-world distribution network data demonstrates that, over a 24 h day-ahead horizon, PI-STGAT achieves a MAPE of 5.50%, a 3.7% relative reduction compared with LSTM. The model further attains a prediction interval coverage probability of 97.9%, confirming reliable uncertainty estimates under complex conditions.
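The two accuracy figures quoted above, MAPE and the prediction interval coverage probability (PICP), are standard and easy to reproduce. A generic sketch, independent of the PI-STGAT model itself:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent (actuals must be nonzero)."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def picp(actual, lower, upper):
    """Prediction Interval Coverage Probability: the share of actual
    values falling inside their predicted interval."""
    actual = np.asarray(actual, float)
    return np.mean((np.asarray(lower, float) <= actual) & (actual <= np.asarray(upper, float)))
```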

15 pages, 2281 KB  
Article
Potential for Distribution Expansion of Stephanitis chinensis in China Based on MaxEnt Model
by Hongyan Jiang, Yizhe Wang, Shichun Chen, Shuran Liao, Tingxu Chen and Xiaoqing Wang
Insects 2026, 17(3), 279; https://doi.org/10.3390/insects17030279 - 4 Mar 2026
Abstract
The tea lace bug, Stephanitis chinensis, is an important pest in the southwest tea region in China. It has recently emerged in some parts of the tea areas, severely impacting the profitability of spring tea. To clarify the distribution dynamics of S. chinensis under current and future climate change, this study used the MaxEnt model and ArcGIS software to predict the distribution and dominant environmental factors of S. chinensis. The results show that the mean precipitation of the warmest quarter (Bio18), the minimum temperature of the coldest month (Bio6), annual precipitation (Bio12), and the variation coefficient of temperature (Bio4) are the dominant environmental factors affecting S. chinensis distribution. Under the current climatic conditions, the suitable habitats for S. chinensis are mainly distributed in East and South Asia, with only a small distribution in southern Europe, southeastern North America, and coastal areas of southeastern South America; the highly suitable habitats are primarily distributed in China, southern Japan, and southern South Korea. The total suitable area of S. chinensis accounts for approximately 28.58% of China’s land area. The high-suitability regions are primarily concentrated in the Guizhou, Chongqing, Sichuan, Hubei, Hunan, Shaanxi, and Jiangsu provinces. Under future climate conditions, the total suitable area of S. chinensis will increase to varying degrees, primarily expanding northward, with the extension of high-suitability areas mainly concentrated in Hubei, Anhui, and Henan. The migration distance of the geographical distribution center ranges between 32.27 km and 96.13 km, with a primary shift toward the northeast. This study predicts potential suitable areas for the tea lace bug under different climate change scenarios. Specifically, regions at the highest risk, such as the Hubei, Anhui, and Henan provinces, should enhance monitoring and early warning systems and implement timely prevention and control measures to ensure the safe production of tea.
(This article belongs to the Section Insect Ecology, Diversity and Conservation)
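The reported 32.27 km to 96.13 km shifts of the distribution center are great-circle distances between centroid coordinates. A standard haversine sketch (spherical Earth of radius 6371 km, an approximation):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    r = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))
```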

33 pages, 5521 KB  
Article
Contrast-Free Myocardial Infarction Segmentation with Attention U-Net
by Khaled Ali Deeb, Yasmeen Alshelle, Hala Hammoud, Andrey Briko, Vladislava Kapravchuk, Alexey Tikhomirov, Amaliya Latypova and Ahmad Hammoud
Diagnostics 2026, 16(5), 768; https://doi.org/10.3390/diagnostics16050768 - 4 Mar 2026
Abstract
Background: Cardiovascular magnetic resonance (CMR) is the clinical gold standard for assessing cardiac anatomy and function. However, the manual segmentation of cardiac structures and myocardial infarction (MI) is time-consuming, prone to inter-observer variability, and often depends on contrast-enhanced imaging. Although deep learning (DL) has enabled substantial automation, challenges remain in generalizability, particularly for MI detection from non-contrast cine CMR. Objective: This study proposes a comprehensive DL-based framework for automatic segmentation of cardiac structures and myocardial infarction using contrast-free cine CMR. Methods: The framework integrates multiple convolutional neural network (CNN) architectures for cardiac structure segmentation with an attention-based deep learning model for MI localization. Post-processing refinement using stacked autoencoders and active contour modeling is applied to improve anatomical consistency. Segmentation performance is evaluated using overlap-based and boundary-based metrics, including the Dice Similarity Coefficient (DSC), Mean Contour Distance (MCD), and Hausdorff Distance (HD). Results: The best-performing model achieved Dice scores of 0.93 ± 0.05 for the left ventricular (LV) cavity, 0.89 ± 0.04 for the LV myocardium, and 0.91 ± 0.06 for the right ventricular (RV) cavity, with consistently low boundary errors across all structures. Myocardial infarction segmentation achieved a Dice score of 0.80 ± 0.02 with high recall, demonstrating reliable infarct localization without the use of contrast agents. Conclusions: By enabling accurate cardiac structure and myocardial infarction segmentation from contrast-free cine CMR, the proposed framework supports broader clinical applicability, particularly for patients with contraindications to gadolinium-based contrast agents and in emergency or resource-limited settings. This approach facilitates scalable, contrast-independent cardiac assessment.
(This article belongs to the Special Issue Artificial Intelligence and Computational Methods in Cardiology 2026)

29 pages, 1303 KB  
Article
Assessing the Effect of Digital Financial Inclusion on Provincial Sustainable Development in China from the Perspective of Synergistic Efficiency of Pollution Reduction and Carbon Abatement Based on DDF Measurement and a Bartik Instrumental Variable (2012–2022)
by Mingwei Song, Pingkai Wang, Mixue Liu and Shibo Chen
Sustainability 2026, 18(5), 2421; https://doi.org/10.3390/su18052421 - 2 Mar 2026
Abstract
Under the background of the “dual-carbon” goals and the ecological-civilization-construction strategy, improving the synergistic efficiency of pollution reduction and carbon abatement is key to promoting green, high-quality development. Based on a panel of 30 provincial-level regions in China for 2012–2022, this paper evaluates the impact of digital financial inclusion on the synergistic efficiency of pollution reduction and carbon abatement. First, using a global-frontier directional-distance function (DDF), we characterize the improvement space of “desirable-output expansion—simultaneous contraction of pollution and carbon emissions” under given input constraints, and construct a synergistic efficiency indicator (eff_main). Second, we present a correlation benchmark within a two-way fixed-effects (TWFE) framework and use lead/lag (placebo) tests to probe potential endogeneity; we further construct a Bartik (shift–share) instrumental variable and employ Two-Stage Least Squares (2SLS) to strengthen causal identification. The results show that in TWFE regressions, digital financial inclusion (dif100) is positively and significantly correlated with synergistic efficiency, with a coefficient of 0.113 (i.e., an increase of 100 index points in the digital financial inclusion index is associated with an average increase of 0.113 in eff_main), but a significant lead effect is present, so this result should be interpreted as correlational only; 2SLS estimates indicate a robust positive causal effect of digital financial inclusion on synergistic efficiency, with a baseline coefficient of 0.405, rising to 0.501 under lagged specifications—exhibiting a dynamic feature of “gradual release in subsequent years.” The study suggests that developing digital financial inclusion helps raise regions’ comprehensive green-transition performance and sustainable development capacity; policy implications include accelerating the closing of digital infrastructure gaps, improving green-finance institutions and performance constraints, and guiding funds more effectively toward energy-saving, emission reduction and low-carbon technology areas.
(This article belongs to the Section Environmental Sustainability and Applications)

23 pages, 1736 KB  
Article
Enhancing Sustainable Traffic Safety Through Machine Learning: A Risk Assessment and Feature Selection Framework Using NGSIM Data
by Meltem Aslantas and Fatma Kutlu Gündoğdu
Sustainability 2026, 18(5), 2423; https://doi.org/10.3390/su18052423 - 2 Mar 2026
Abstract
Precisely assessing driving danger is essential for various applications, including the advancement of autonomous driving systems and traffic engineering decisions. This study presents a driving risk analysis framework based on the Next-Generation Simulation (NGSIM) dataset. First, vehicles were classified into four risk classes using the Fuzzy C-Means algorithm using five key risk indicators. Subsequently, comprehensive driving behavior features representing vehicle movements were extracted and evaluated for both risk class prediction and driving behavior feature selection. A new driving risk score was developed using Spearman’s rho coefficient weights, which reflect the relationship of each risk indicator to risk levels. This score was observed to exhibit an increasing trend consistent with the sequential structure of the Fuzzy C-Means (FCM) clustering based on risk labels, thus confirming that it accurately reflects the labeling process. Furthermore, the findings show that the 26 key driving behavior features selected can predict the driving risk score developed using the XGBoost algorithm with over 85% accuracy. Moreover, feature importance analysis reveals that the following distances and inter-vehicle distance variability are particularly effective in determining driving risk. The study discusses the limitations of driving risk assessment based solely on vehicle dynamics and highlights the importance of developing enriched datasets that include multidimensional data sources such as environmental conditions, infrastructure features, traffic density, and autonomous vehicles in future risk prediction studies. Ultimately, this framework contributes to the development of safer and more efficient transportation systems, supporting environmental sustainability by reducing accident-related congestion and promoting resource-efficient traffic management.
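The indicator weighting described above, Spearman's rho between each risk indicator and the risk level, can be sketched as follows. This is a generic re-implementation, not the authors' code; ties receive average ranks:

```python
import numpy as np

def rankdata(x):
    """1-based ranks with average ranks for ties."""
    x = np.asarray(x, float)
    order = np.argsort(x, kind="stable")
    ranks = np.empty(len(x), float)
    sorted_x = x[order]
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and sorted_x[j + 1] == sorted_x[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1   # average rank for the tie group
        i = j + 1
    return ranks

def spearman_weights(indicators, risk_level):
    """Weight each indicator column by |Spearman rho| with the risk level,
    normalised so the weights sum to 1."""
    rr = rankdata(risk_level)
    rhos = np.array([np.corrcoef(rankdata(col), rr)[0, 1] for col in indicators.T])
    w = np.abs(rhos)
    return w / w.sum()
```

A weighted sum of the (normalised) indicators with these weights then yields a scalar risk score of the kind the abstract describes.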

24 pages, 4414 KB  
Article
Modelling of Location Uncertainties of Leakages in Pressurized Buried Water Mains Using Leak Noise Correlator (LNC)
by Alex Yu-Ching Cheng, Tom Chun-Wai Lau and Wallace Wai-Lok Lai
Water 2026, 18(5), 588; https://doi.org/10.3390/w18050588 - 28 Feb 2026
Abstract
This paper investigates the specific positioning accuracies and uncertainties associated with the measurement of acoustic leakage noise correlation (LNC) in underground pressurized water mains, treating them as acoustic waveguides. It begins by identifying three key intrinsic sources of measurement errors: (1) the speed of acoustic waves in the water mains as influenced by pipe material, wall thickness, modulus of elasticity, and bulk modulus; (2) the distance between the two accelerometers used for correlation; (3) the time delay from the point of leakage to the accelerometers. A mathematical uncertainty model was developed to compute sensitivity coefficients, enabling the propagation of measurement errors from these sources. This was validated through seven sets of full-scale experiments conducted at Q-Leak, a 25,000 sq. ft. test site in Hong Kong. This study ultimately quantified and assessed the contributions of individual error sources to the overall uncertainty, allowing for the prioritization of factors that have the most significant impact in various scenarios. The findings reveal that Young’s modulus and pipe wall thickness are the primary factors affecting measurements for both plastic and metal pipes. Additionally, a universal in-house program, “LNC uncertainty calculator,” was developed to provide insights into the buffer ranges for confirming suspected leak locations while considering constraints within the uncertainty budget. This research highlights the critical but often overlooked area of uncertainty modeling in leak detection for pressurized buried water mains, offering valuable insights intended to enhance operational strategies and maintenance practices within the industry. This research provides a robust framework for understanding the accuracy of leak detection. This means operators can better interpret the reliability of their measurements, leading to consistent decision-making across different situations and minimizing the risk of misidentifying the presence or absence of leakage. In addition, the insights gained from prioritizing factors that affect measurement accuracy allow engineers and operators to make informed decisions about where to focus their resources and efforts. This can lead to more effective maintenance strategies that are tailored to specific conditions, thereby optimizing operational efficiency.
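The correlator geometry behind this kind of uncertainty model is compact: with sensors a distance D apart, wave speed v, and correlation time delay dt, the leak lies at L = (D - v*dt)/2 from sensor 1, and first-order propagation through the sensitivity coefficients dL/dD = 1/2, dL/dv = -dt/2, dL/d(dt) = -v/2 gives the position uncertainty. A minimal sketch, illustrative only and not the authors' "LNC uncertainty calculator":

```python
import math

def leak_position(D, v, dt):
    """Distance (m) from sensor 1 to the leak, for sensor spacing D (m),
    wave speed v (m/s), and correlation delay dt = t1 - t2 (s)."""
    return (D - v * dt) / 2.0

def leak_position_uncertainty(v, dt, u_D, u_v, u_dt):
    """First-order propagated standard uncertainty of the leak position,
    combining the three error sources via their sensitivity coefficients."""
    return math.sqrt((0.5 * u_D) ** 2 + (0.5 * dt * u_v) ** 2 + (0.5 * v * u_dt) ** 2)
```

With dt = 0 the leak sits at the midpoint D/2, and errors in the assumed wave speed matter more the farther the leak is from the midpoint, which is consistent with the paper's emphasis on pipe-material parameters.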

29 pages, 5973 KB  
Article
Mitigating Regional Disparities in Green Development Amid the Trade-Off Between Economic Growth and Environmental Protection: Evidence from China
by Xianhong Su and Yunyan Li
Sustainability 2026, 18(5), 2343; https://doi.org/10.3390/su18052343 - 28 Feb 2026
Abstract
Achieving green development demands simultaneously balancing between economic growth and pollutant emission reduction, which can have complex impacts on regional disparities. This study measures the green development efficiency (GDE) for 266 Chinese cities during the 11th–13th National Five-Year Plan periods (2006–2020) by a global non-oriented endogenous directional distance function (endogenous DDF) that endogenizes direction vectors using the maximum improvement potential. Regional disparities are then quantified and decomposed by the Dagum Gini coefficient decomposition (intra-group, net inter-group, and hypervariable density) across seven regions. To make the policy implications operational, we further derive the separable economic efficiency loss and environmental efficiency loss components from the endogenous DDF and identify cities’ optimal preference options (γ); scenario experiments (5–30% improvement) are used to validate whether differentiated improvement directions can simultaneously raise GDE and narrow disparities. The main findings are as follows: (1) From the 11th Five to the 13th Five, cities’ GDE evolved from a median-centered distribution to a bimodal distribution, accompanied by spatial polarization and widening regional disparities. (2) By the 13th Five period, regional disparities had deepened, mainly driven by inter-group differences, notably hypervariable density. The original regional development patterns were disrupted, leading to increased overlap across cities. (3) Most cities have shifted from a “green-oriented” to an “economic-oriented” development preference since the 13th Five period. North China, South China, and East China favor pollution reduction, while others prioritize economic growth. (4) Preference options for cities with varying resource endowments should adapt over time. Under various hypothetical scenarios, cities adopting differentiated optimal options can enhance their GDE while simultaneously narrowing regional disparities. Reducing arbitrariness in balancing emission reduction and economic growth can promote regionally coordinated and environmentally sustainable development.
(This article belongs to the Section Sustainable Urban and Rural Development)

33 pages, 3628 KB  
Article
Stone Matrix Asphalt with Fischer–Tropsch Wax and Recycled Rubber: A Multi-Scale Evaluation of Mechanical and Functional Performance
by Roman Pacholak, Biruh Alemayehu Seyoum and Mohamed Eladly
Materials 2026, 19(5), 928; https://doi.org/10.3390/ma19050928 - 28 Feb 2026
Abstract
This study investigates the synergistic use of Fischer–Tropsch wax (FTW) and recycled rubber powder (RP) as dual modifiers in stone mastic asphalt (SMA11) to improve its mechanical and functional performance. Rheological analysis demonstrated that an FTW content of 4% achieves the optimal balance of high-temperature rutting resistance, aging resistance, and workability, with a binder viscosity of 1.6 Pa·s at 135 °C. When incorporated into SMA11 mixtures at 15%, RP yielded the best overall mechanical performance, including a reduction in rut depth to 1.22 mm and a 25% decrease in wheel tracking slope (WTS). The 15% RP mixtures also exhibited superior long-term skid resistance (μm = 0.329 after 180,000 polishing cycles, corresponding to a 13% reduction in braking distance) and enhanced thermal cracking resistance (failure temperature improved by 8.0 °C to −32.7 °C). An RP content of 5% maximized moisture resistance (ITSR = 100%), while 10% RP produced the highest mid-frequency sound absorption coefficient (α = 0.050). The hybrid modification system enables a 20 °C reduction in production temperature, consistent with published data on wax-based warm-mix technologies, and is associated with reduced energy consumption and lower emissions. The approach simultaneously supports sustainable pavement design through the high-value reuse of waste tire rubber.

13 pages, 11104 KB  
Article
A Highly Compact and Isolated Triple-Band MIMO Antenna for Wireless Capsule Endoscopy and Cardiac Implant
by Tahir Bashir, Guanjie Feng, Shunbiao Chen, Yunqi Cao and Wei Li
Micromachines 2026, 17(3), 296; https://doi.org/10.3390/mi17030296 - 27 Feb 2026
Viewed by 181
Abstract
This work presents a highly compact triple-band multiple-input multiple-output (MIMO) implantable antenna for wireless capsule endoscopy (WCE) and leadless cardiac pacemakers. The proposed antenna operates at industrial, scientific, and medical (ISM) bands of 2.400 to 2.480 GHz and 5.725 to 5.875 GHz for data [...] Read more.
This work presents a highly compact triple-band multiple-input multiple-output (MIMO) implantable antenna for wireless capsule endoscopy (WCE) and leadless cardiac pacemakers. The proposed antenna operates at industrial, scientific, and medical (ISM) bands of 2.400 to 2.480 GHz and 5.725 to 5.875 GHz for data telemetry and the wireless medical telemetry service (WMTS) band of 1.395 to 1.432 GHz for efficient wireless power transfer. The four-element design measures 8.5 × 8.5 × 0.26 mm³ and achieves low mutual coupling through a planar four-port configuration with optimized inter-element spacing. The antenna is integrated within realistic capsule devices containing batteries, sensors, and electronic components, and evaluated in both homogeneous and realistic heterogeneous body phantoms, including the large intestine and heart. The design yields maximum reflection coefficients of −26.15 dB, −15 dB, and −36.32 dB, −10 dB bandwidths of 260 MHz, 160 MHz, and 160 MHz, mutual coupling of −37.74 dB, −44.55 dB, and −26.48 dB, and peak realized gains of −35 dBi, −25 dBi, and −15 dBi at 1.4 GHz, 2.45 GHz, and 5.8 GHz, respectively. Specific absorption rate (SAR) analysis satisfies implantation safety limits. Link budget analysis confirms reliable communication over distances > 20 m in all bands with data rates up to 100 Mbps. MIMO channel parameters such as envelope correlation coefficient (ECC) and diversity gain (DG) remain within acceptable limits. Owing to its multi-band operation, miniaturization, and isolation, the proposed four-port antenna is a good candidate for next-generation WCE and leadless pacemaker systems. Full article
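A link budget over >20 m is dominated by the free-space path-loss term, FSPL = 20·log10(4πdf/c), to which tissue losses and the (very low) implant antenna gains reported above are added. A minimal sketch of that free-space term alone at the three band centres:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Free-space segment of a 20 m telemetry link at each band centre
for f_ghz in (1.4, 2.45, 5.8):
    print(f"{f_ghz:.2f} GHz: {fspl_db(20.0, f_ghz * 1e9):.1f} dB")
```

The higher bands pay roughly 5–12 dB more free-space loss than the 1.4 GHz WMTS band at the same distance, which is one reason the lowest band is reserved for power transfer.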
28 pages, 11887 KB  
Article
Effect of Layer Thickness and Scanning Parameters on Melt Pool Geometry and Track Continuity in Powder-Bed Arc Additive Manufacturing
by Arif Balci and Fatih Alibeyoglu
Metals 2026, 16(3), 259; https://doi.org/10.3390/met16030259 - 26 Feb 2026
Viewed by 194
Abstract
Powder-bed arc additive manufacturing (PBAAM) may reduce the cost of powder-bed metal additive manufacturing and enable thicker layers than laser powder bed fusion (LPBF), but melt-track stability limits are not well established. Here, 316L stainless steel powder (15–53 µm) was melted by a [...] Read more.
Powder-bed arc additive manufacturing (PBAAM) may reduce the cost of powder-bed metal additive manufacturing and enable thicker layers than laser powder bed fusion (LPBF), but melt-track stability limits are not well established. Here, 316L stainless steel powder (15–53 µm) was melted by a TIG-based arc in a custom powder-bed system while varying current, travel speed, layer thickness and hatch distance. Single tracks on an inclined bed (≈0–0.4 mm thickness) were used to identify continuity loss and melt-pool width, quantified from top-view images via width profiles, a gap-based continuity metric and the coefficient of variation. Parallel-track tests at 0.15, 0.20 and 0.25 mm layer thickness with hatch distances set to 25%, 50% and 75% of the measured melt-pool width assessed inter-track bonding and lack of fusion, and selected parameters were validated in five-layer builds. Higher current with low-to-moderate travel speeds produced wider, more stable melt pools on the inclined bed. Hatch ratios of 25–50% were the most effective for sustaining fusion in single layers and multi-layer builds, whereas 75% promoted unbonded regions and narrow-track morphologies. Overall, PBAAM can process substantially thicker layers with relatively simple equipment, but requires a narrow, carefully tuned window to balance continuity, fusion and heat accumulation. Full article
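The gap-based continuity metric and coefficient of variation described above can be sketched from a sampled top-view width profile. A minimal sketch under assumed definitions: the gap threshold `min_width_mm` and the sample profile are hypothetical, and the paper's exact image-processing pipeline may differ.

```python
import statistics

def track_metrics(widths_mm, min_width_mm=0.05):
    """Continuity and width-stability metrics for a melt track.

    widths_mm: measured track width at each sampled position along
    the track (from a top-view image). A sample narrower than
    min_width_mm counts as a gap; continuity is the non-gap fraction.
    The coefficient of variation (CV) is computed over fused samples.
    """
    gaps = sum(1 for w in widths_mm if w < min_width_mm)
    continuity = 1.0 - gaps / len(widths_mm)
    fused = [w for w in widths_mm if w >= min_width_mm]
    cv = statistics.stdev(fused) / statistics.mean(fused)
    return continuity, cv

# Hypothetical width profile (mm) with two gap samples
profile = [1.2, 1.1, 0.0, 1.3, 1.2, 1.25, 0.0, 1.15, 1.2, 1.3]
cont, cv = track_metrics(profile)
print(f"continuity = {cont:.2f}, CV = {cv:.3f}")
```

The mean fused width from such a profile is also what the hatch-distance ratios (25%, 50%, 75% of melt-pool width) would be computed against.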
16 pages, 1387 KB  
Article
Between-Session Reliability of GPS Technology for Quantifying Linear and Curvilinear Base-Running Performance
by José Antonio Martínez-Rodríguez, Ryan L. Crotin, Jonathon Neville, Roderick A. Barcelo and John B. Cronin
Appl. Sci. 2026, 16(5), 2224; https://doi.org/10.3390/app16052224 - 25 Feb 2026
Viewed by 165
Abstract
The purpose of this study was to quantify the between-session reliability of time, velocity, and distance measures over 54.7 m straight-line and home-to-second base sprints (curvilinear), using global positioning system (GPS) technology. Twelve trained male high school baseball position players attended four sessions: [...] Read more.
The purpose of this study was to quantify the between-session reliability of time, velocity, and distance measures over 54.7 m straight-line and home-to-second base sprints (curvilinear), using global positioning system (GPS) technology. Twelve trained male high school baseball position players attended four sessions: one familiarization session and three identical testing sessions, separated by at least two days, each consisting of two linear and two curvilinear trials. There was no statistically significant (p < 0.05) systematic change in any of the variables between sessions; most mean percent changes ranged from −2.7 to 2.5%, and only four between-session comparisons exceeded 2% (−6.2 to 3.4%). In terms of absolute consistency, no measure exceeded a coefficient of variation (CV) of 10%, and most (93%) of the CVs were under 5%. With regard to relative consistency, 66% of the measures had intraclass correlation coefficients (ICCs) greater than 0.74, ranging from 0.76 to 0.98. Comparison of smallest worthwhile change (SWC) values with CV-derived typical error indicated that several key time- and speed-based metrics were sensitive to meaningful performance changes, with error estimates comparable to or smaller than the SWC. In contrast, event-timed typical errors (e.g., time to peak speed) were substantially greater than the SWC, indicating limited sensitivity for detecting small performance changes. The non-significant changes in the mean, low CVs, and high ICCs over repeated testing occasions indicate acceptable between-session reliability for most of the procedures and GPS-derived variables examined in this study. Practitioners should prioritize linear time at 41.1 m and 54.7 m and velocity at 27.4 m and 41.1 m for return-to-play and short-term performance tracking. For curvilinear running, peak speed before first base, peak speed between first and second base, and speed at 41.1 m are the most suitable monitoring metrics. Specifically, speed at 41.1 m should be considered for return-to-play and short-term performance tracking, while the peak-speed metrics may be used cautiously when larger performance changes are expected. Full article
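The typical-error, CV, and SWC comparison described above can be sketched numerically. A minimal sketch under assumed inputs: the sprint times are invented for illustration (not the study's data), and the SD used for the SWC is pooled across sessions for simplicity, which slightly inflates it relative to a pure between-subject SD.

```python
import math
import statistics

def reliability(session_a, session_b):
    """Between-session reliability summary from paired trials.

    Typical error TE = SD of paired differences / sqrt(2);
    CV% = 100 * TE / grand mean;
    SWC = 0.2 * subject SD (pooled across sessions for simplicity).
    A metric is deemed sensitive to meaningful change when TE <= SWC.
    """
    diffs = [b - a for a, b in zip(session_a, session_b)]
    te = statistics.stdev(diffs) / math.sqrt(2)
    pooled = list(session_a) + list(session_b)
    cv_pct = 100 * te / statistics.mean(pooled)
    swc = 0.2 * statistics.stdev(pooled)
    return te, cv_pct, swc

# Illustrative 54.7 m sprint times (s) for five players, two sessions
s1 = [7.1, 7.4, 6.9, 7.6, 7.2]
s2 = [7.0, 7.5, 7.0, 7.5, 7.3]
te, cv_pct, swc = reliability(s1, s2)
print(f"TE = {te:.3f} s, CV = {cv_pct:.1f}%, sensitive: {te <= swc}")
```

With these invented values the CV is low (about 1%), yet the typical error still exceeds the SWC, illustrating how a metric can be reliable in relative terms while remaining insensitive to small worthwhile changes.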