Search Results (864)

Search Parameters:
Keywords = partition coefficients

22 pages, 3382 KB  
Article
Sources of Heavy Metals and Their Effects on Distribution at the Sediment–Water Interface of the Yellow Sea Shelf off Northern Jiangsu
by Wenyu Liu, Yu Li, Xinjun Wang and Yuhan Cao
Toxics 2026, 14(2), 133; https://doi.org/10.3390/toxics14020133 - 29 Jan 2026
Abstract
To investigate the distribution, sources, and partitioning of heavy metals at the sediment–water interface in the northern Jiangsu coastal waters, seawater and sediment samples were collected from 24 stations east of Yanwei Port in April 2021. The concentrations of seven heavy metals (Cu, Pb, Zn, Cd, Cr, Hg, and As) and environmental parameters were determined. Methods including principal component analysis (PCA), random forest (RF), positive matrix factorization (PMF), the partition coefficient (Kp), and the source-specific partition coefficient (S-Kp) were applied. The results showed the following: (1) The overall concentration order was Zn > Cu > As > Pb > Cd > Hg in seawater and Zn > Cr > Cu > Pb > As > Hg > Cd in sediments, with Cd and Pb characterized by high spatial variability. (2) PCA and RF indicated that dissolved heavy metals were mainly influenced by dissolved oxygen, petroleum, phosphate, and dissolved inorganic nitrogen, with DIN playing a common dominant role. PMF revealed three main sources for sediment metals: agricultural (contributing notably to Cu and Zn), traffic and industrial exhaust (dominating Pb, Cr, and Hg inputs), and industrial (primarily affecting Cd, Cr, and Pb). (3) Kp analysis suggested that Pb, As, and Cu were readily adsorbed by sediments, while Cd, Hg, and Zn tended to remain dissolved. Critically, S-Kp demonstrated source dependent partitioning: Pb derived from industrial sources was almost entirely associated with sediments, while Cu and Zn originating from traffic and industrial exhaust emissions were predominantly present in the aqueous phase, and Cu and Pb derived from agricultural sources were largely deposited in sediments. These findings provide a scientific basis for heavy metal pollution control in the region. Full article
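The partition coefficient this abstract builds on is conventionally the ratio of the sediment-bound to the dissolved concentration, Kp = C_sediment / C_dissolved. A minimal sketch of that calculation, with invented concentrations and units (not the study's data; the paper's source-specific S-Kp additionally conditions this ratio on the PMF source apportionment):

```python
# Hypothetical illustration of the sediment-water partition coefficient:
# Kp = C_sediment / C_dissolved, here in (mg/kg) / (mg/L) = L/kg.
# Concentrations below are invented for demonstration only.

def partition_coefficient(c_sediment_mg_per_kg, c_dissolved_mg_per_l):
    """Higher Kp means the metal is preferentially bound to sediment."""
    return c_sediment_mg_per_kg / c_dissolved_mg_per_l

kp_pb = partition_coefficient(25.0, 0.0005)   # ~5e4 L/kg: sediment-bound
kp_cd = partition_coefficient(0.12, 0.0004)   # ~3e2 L/kg: mostly dissolved
print(kp_pb > kp_cd)
```

A metal with a large Kp (like Pb in the abstract) is readily scavenged by sediments, while a small Kp (like Cd) indicates the metal tends to stay in the aqueous phase.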

17 pages, 1210 KB  
Article
Modeling Multi-Fracture Propagation in Fractured Reservoirs: Impacts of Limited-Entry and Temporary Plugging
by Wenjie Li, Hongjian Li, Tianbin Liao, Chao Duan, Tianyu Nie, Pan Hou, Minghao Hu and Bo Wang
Processes 2026, 14(3), 450; https://doi.org/10.3390/pr14030450 - 27 Jan 2026
Abstract
Staged multi-cluster fracturing in horizontal wells is a key technology for efficiently developing unconventional oil and gas reservoirs. Extreme Limited-Entry Fracturing (ELF) and Temporary Plugging Fracturing (TPF) are effective techniques to enhance the uniformity of fracture stimulation within a stage. However, in fractured reservoirs, the propagation morphology of multiple intra-stage fractures and fluid distribution patterns become significantly more complex under the influence of ELF and TPF. This complexity results in a lack of theoretical guidance for optimizing field operational parameters. This study establishes a competitive propagation model for multiple hydraulic fractures (HFs) within a stage under ELF and TPF conditions in fractured reservoirs based on the Displacement Discontinuity Method (DDM) and fluid mechanics theory. The accuracy of the model was verified by comparing it with laboratory experimental results and existing numerical simulation results. Using this model, the influence of ELF and TPF on intra-stage fracture propagation morphology and fluid partitioning was investigated. Results demonstrate that extremely limited-entry perforation and ball-sealer diversion effectively mitigate the additional flow resistance induced by both the stress shadow effect and the connection of natural fractures (NFs), thereby alleviating uneven fluid distribution and imbalanced fracture propagation among clusters. ELF artificially creates extremely high perforation friction by drastically reducing the number of perforations or the perforation diameter, thereby forcing the fracturing fluid to enter multiple perforation clusters relatively uniformly. Compared to the unlimited-entry scheme (16 perforations/cluster), the limited-entry scheme (5 perforations/cluster) yielded a 37.84% improvement in fluid distribution uniformity and reduced the coefficient of variation (CV) for fracture length and fluid intake by 54.28% and 44.16%, respectively.
The essence of the TPF is non-uniform perforation distribution, which enables the perforation clusters with large fluid intake to obtain more temporary plugging balls (TPBs), so that their perforation friction can be increased and their fluid intake can be reduced, thereby diverting the fluid to the perforation clusters with small fluid intake. Deploying TPBs (50% of total perforations) at the mid-stage of fracturing (50% time) increased fluid distribution uniformity by 37.86% and reduced the CV of fracture length and fluid intake by 72.54% and 58.39%, respectively. This study provides methodological and modeling foundations for systematic optimization of balanced stimulation parameters in fractured reservoirs. Full article
(This article belongs to the Special Issue New Technology of Unconventional Reservoir Stimulation and Protection)
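The uniformity figures in the abstract above rest on the coefficient of variation, CV = σ/μ, computed across perforation clusters; a lower CV means more balanced stimulation. A small sketch with invented per-cluster fluid intakes (not the paper's values):

```python
# Sketch (not from the paper): coefficient of variation used to quantify
# how evenly fluid intake distributes across perforation clusters.
# CV = population standard deviation / mean; lower CV = more uniform.
from statistics import mean, pstdev

def cv(values):
    return pstdev(values) / mean(values)

# Illustrative (made-up) per-cluster fluid intakes, m^3:
unlimited_entry = [40.0, 18.0, 12.0, 30.0]   # strongly uneven
limited_entry = [27.0, 24.0, 23.0, 26.0]     # closer to uniform
print(cv(unlimited_entry), cv(limited_entry))
```

The percentage reductions quoted in the abstract are presumably relative changes in such CV values between the unlimited-entry and limited-entry (or temporary-plugging) schemes.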
18 pages, 502 KB  
Article
A Multi-Key Homomorphic Scheme Based on Multivariate Polynomial Look-Up Tables Evaluation
by Jiang Shen, Ruwei Huang, Lei Lei, Junjie Wang and Junbin Qiu
Mathematics 2026, 14(3), 430; https://doi.org/10.3390/math14030430 - 26 Jan 2026
Abstract
Multi-key homomorphic encryption (MKHE) is crucial for secure collaborative computing, yet it suffers from high multiplicative depth and computational overhead during Look-Up Table (LUT) evaluations, particularly for large input domains. To address these challenges, this paper proposes an optimized LUT evaluation method based on multivariate polynomial approximation. Specifically, we partition the high-dimensional input space into several lower-dimensional variables to design low-depth multivariate polynomials. By integrating blockwise encoding and tensor-based transformations, we construct a parallelizable evaluation framework that maps multivariate functions into a high-dimensional polynomial-coefficient space. This approach allows for efficient parallel processing and effective noise management. Theoretical analysis demonstrates that our method significantly reduces the multiplicative depth from O(l) to O(l/α), indicating its robustness and efficiency in large-scale LUT scenarios. Full article
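The depth reduction from O(l) to O(l/α) described above comes from splitting an l-bit LUT index into ⌈l/α⌉ variables of α bits each, so each polynomial factor only ever handles a small domain. A plaintext toy of that index partitioning (no encryption involved; function names and parameters are illustrative, not from the paper):

```python
# Toy sketch of the input-space partitioning idea: an l-bit LUT index x
# is split into ceil(l/alpha) digits of alpha bits each, turning one huge
# univariate LUT into a multivariate function of small variables -- the
# step the abstract uses to cut multiplicative depth from O(l) to O(l/alpha).

def split_index(x, l, alpha):
    """Return the base-2**alpha digits of x, least significant first."""
    n_digits = -(-l // alpha)          # ceil(l / alpha)
    mask = (1 << alpha) - 1
    return [(x >> (i * alpha)) & mask for i in range(n_digits)]

def recombine(digits, alpha):
    return sum(d << (i * alpha) for i, d in enumerate(digits))

x = 0b1011_0110_0101                   # a 12-bit index
digits = split_index(x, l=12, alpha=4)
assert recombine(digits, 4) == x       # lossless decomposition
print(digits)                          # three 4-bit variables: [5, 6, 11]
```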

11 pages, 1164 KB  
Article
Electron Energies of Two-Dimensional Lithium with the Dirac Equation
by Raúl García-Llamas, Jesús D. Valenzuela-Sau, Jorge A. Gaspar-Armenta and Rafael A. Méndez-Sánchez
Crystals 2026, 16(2), 79; https://doi.org/10.3390/cryst16020079 - 23 Jan 2026
Abstract
The electronic band structure of two-dimensional lithium is calculated using the Dirac equation. Lithium is modeled as a two-dimensional square lattice in which the two strongly bound inner electrons and the fixed nucleus are treated as a positively charged ion (+e), while the outer electron is assumed to be uniformly distributed within the cell. The electronic potential is obtained by considering Coulomb-type interactions between the charges inside the unit cell and those in the surrounding cells. A numerical method that divides the unit cell into small pieces is employed to calculate the potential and then the Fourier coefficients are obtained. The Bloch method is used to determine the energy bands, leading to an eigenvalue matrix equation (in momentum space) of infinite dimension, which is truncated and solved using standard matrix diagonalization techniques. Convergence is analyzed with respect to the key parameters influencing the calculation: the lattice period, the dimension of the eigenvalue matrix, the unit-cell partition used to compute the potential’s Fourier coefficients, and the number of neighboring cells that contribute to the electronic interaction. Full article
(This article belongs to the Section Materials for Energy Applications)

12 pages, 248 KB  
Article
Blockwise Exponential Covariance Modeling for High-Dimensional Portfolio Optimization
by Congying Fan and Jacquline Tham
Symmetry 2026, 18(1), 171; https://doi.org/10.3390/sym18010171 - 16 Jan 2026
Abstract
This paper introduces a new framework for high-dimensional covariance matrix estimation, the Blockwise Exponential Covariance Model (BECM), which extends the traditional block-partitioned representation to the log-covariance domain. By exploiting the block-preserving properties of the matrix logarithm and exponential transformations, the proposed model guarantees strict positive definiteness while substantially reducing the number of parameters to be estimated through a blockwise log-covariance parameterization, without imposing any rank constraint. Within each block, intra- and inter-group dependencies are parameterized through interpretable coefficients and kernel-based similarity measures of factor loadings, enabling a data-driven representation of nonlinear groupwise associations. Using monthly stock return data from the U.S. stock market, we conduct extensive rolling-window tests to evaluate the empirical performance of the BECM in minimum-variance portfolio construction. The results reveal three main findings. First, the BECM consistently outperforms the Canonical Block Representation Model (CBRM) and the naive 1/N benchmark in terms of out-of-sample Sharpe ratios and risk-adjusted returns. Second, adaptive determination of the number of clusters through cross-validation effectively balances structural flexibility and estimation stability. Third, the model maintains numerical robustness under fine-grained partitions, avoiding the loss of positive definiteness common in high-dimensional covariance estimators. Overall, the BECM offers a theoretically grounded and empirically effective approach to modeling complex covariance structures in high-dimensional financial applications.
(This article belongs to the Section Mathematics)
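The positive-definiteness guarantee in the log-covariance domain follows from a standard fact: for any symmetric S, Σ = exp(S) is symmetric positive definite. A pure-Python toy demonstrating this with a truncated power-series matrix exponential on a 2×2 matrix (illustrative only, not the BECM estimator):

```python
# Toy check: exp of a symmetric matrix is symmetric positive definite,
# even when the matrix itself is indefinite. Power-series expm is fine
# for small, well-scaled matrices like this example.
from math import factorial

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(s, terms=30):
    """Truncated series exp(S) = sum_k S^k / k!."""
    n = len(s)
    out = [[float(i == j) for j in range(n)] for i in range(n)]   # identity
    power = [row[:] for row in out]
    for k in range(1, terms):
        power = mat_mul(power, s)
        out = [[o + p / factorial(k) for o, p in zip(ro, rp)]
               for ro, rp in zip(out, power)]
    return out

S = [[0.2, -0.5], [-0.5, 0.1]]          # symmetric but NOT positive definite
Sigma = expm(S)
det = Sigma[0][0] * Sigma[1][1] - Sigma[0][1] * Sigma[1][0]
print(det > 0 and Sigma[0][0] > 0)      # 2x2 PD check -> True
```

Parameterizing in the log domain and exponentiating back is what lets the model estimate blockwise structure freely without risking an invalid covariance matrix.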
21 pages, 10154 KB  
Article
Sea Ice Concentration Retrieval in the Arctic and Antarctic Using FY-3E GNSS-R Data
by Tingyu Xie, Cong Yin, Weihua Bai, Dongmei Song, Feixiong Huang, Junming Xia, Xiaochun Zhai, Yueqiang Sun, Qifei Du and Bin Wang
Remote Sens. 2026, 18(2), 285; https://doi.org/10.3390/rs18020285 - 15 Jan 2026
Abstract
Recognizing the critical role of polar Sea Ice Concentration (SIC) in climate feedback mechanisms, this study presents the first comprehensive investigation of China’s Fengyun-3E(FY-3E) GNOS-II Global Navigation Satellite System Reflectometry (GNSS-R) for bipolar SIC retrieval. Specifically, reflected signals from multiple Global Navigation Satellite Systems (GNSS) are utilized to extract characteristic parameters from Delay Doppler Maps (DDMs). By integrating regional partitioning and dynamic thresholding for sea ice detection, a Random Forest Regression (RFR) model incorporating a rolling-window training strategy is developed to estimate SIC. The retrieved SIC products are generated at the native GNSS-R observation resolution of approximately 1 × 6 km, with each SIC estimate corresponding to an individual GNSS-R observation time. Owing to the limited daily spatial coverage of GNSS-R measurements, the retrieved SIC results are further aggregated into monthly composites for spatial distribution analysis. The model is trained and validated across both polar regions, including targeted ice–water boundary zones. Retrieved SIC estimates are compared with reference data from the OSI SAF Special Sensor Microwave Imager Sounder (SSMIS), demonstrating strong agreement. Based on an extensive dataset, the average correlation coefficient (R) reaches 0.9450 in the Arctic and 0.9602 in the Antarctic for the testing set, with corresponding Root Mean Squared Error (RMSE) of 0.1262 and 0.0818, respectively. Even in the more challenging ice–water transition zones, RMSE values remain within acceptable ranges, reaching 0.1486 in the Arctic and 0.1404 in the Antarctic. This study demonstrates the feasibility and accuracy of GNSS-R-based SIC retrieval, offering a robust and effective approach for cryospheric monitoring at high latitudes in both polar regions. Full article
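The rolling-window training strategy mentioned above can be sketched as a generator of sliding train/test index ranges (window lengths here are illustrative, not the study's configuration):

```python
# Sketch of a rolling-window split like the one described for the RFR
# model: train on a fixed-length window, test on the following block,
# then slide forward. Lengths below are made up for illustration.

def rolling_windows(n_samples, train_len, test_len):
    """Yield (train_idx, test_idx) ranges, sliding forward by test_len."""
    start = 0
    while start + train_len + test_len <= n_samples:
        yield (range(start, start + train_len),
               range(start + train_len, start + train_len + test_len))
        start += test_len

for tr, te in rolling_windows(10, train_len=4, test_len=2):
    print(list(tr), list(te))   # three windows, each tested on unseen data
```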

25 pages, 5552 KB  
Article
Predicting Carbonation Depth of Recycled Aggregate Concrete Using Optuna-Optimized Explainable Machine Learning
by Yuxin Chen, Xiaoyuan Li, Enming Li and Jian Zhou
Buildings 2026, 16(2), 349; https://doi.org/10.3390/buildings16020349 - 14 Jan 2026
Abstract
Accurately predicting the carbonation depth of recycled aggregate (RA) concrete is essential for durability assessment. Based on a dataset of 682 experimental samples, this study employed seven machine learning algorithms to develop prediction models for the carbonation depth of RA concrete. The Optuna framework was utilized to conduct 500 trials of hyperparameter optimization for these models, with the objective of minimizing the 5-fold cross-validated mean squared error. Results indicate that model performance improved significantly after optimization. Among them, the XGBoost model achieved the best performance, with a coefficient of determination (R2) of 0.9789, root mean squared error (RMSE) of 1.0811, mean absolute error (MAE) of 0.6972, mean absolute percentage error (MAPE) of 8.7932%, variance accounted for (VAF) of 97.8966%, and mean bias error (MBE) of 0.0641 on the test set. Explainability analysis using SHapley Additive exPlanations (SHAP) further revealed that exposure time is the most significant factor influencing the carbonation depth prediction. Additionally, because the database incorporates both natural and accelerated carbonation conditions, the samples were partitioned based on CO2 concentration and a stratified performance evaluation was conducted. The results demonstrate that the model maintains high predictive accuracy under natural carbonation as well as across different accelerated carbonation intervals, indicating that, within the scope covered by the current dataset, the proposed approach provides a highly accurate and interpretable tool for predicting the carbonation depth of recycled aggregate concrete.
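The error metrics reported in this abstract follow their standard definitions; a compact sketch with made-up values (not the study's data), omitting VAF and MBE:

```python
# Standard regression metrics (R2, RMSE, MAE, MAPE) on invented numbers,
# purely to illustrate the definitions behind the figures in the abstract.
from math import sqrt

def metrics(y_true, y_pred):
    n = len(y_true)
    resid = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(r * r for r in resid) / n
    mean_y = sum(y_true) / n
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return {
        "R2": 1 - n * mse / ss_tot,
        "RMSE": sqrt(mse),
        "MAE": sum(abs(r) for r in resid) / n,
        "MAPE": 100 * sum(abs(r / t) for r, t in zip(resid, y_true)) / n,
    }

# Made-up carbonation depths (mm) vs. predictions:
m = metrics([10.0, 12.0, 8.0, 15.0], [9.5, 12.5, 8.2, 14.0])
print(m)   # R2 ~0.94, RMSE ~0.62, MAE 0.55, MAPE ~4.6%
```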

34 pages, 14353 KB  
Article
Nationwide Prediction of Flood Damage Costs in the Contiguous United States Using ML-Based Models: A Data-Driven Approach
by Khaled M. Adel, Hany G. Radwan and Mohamed M. Morsy
Hydrology 2026, 13(1), 31; https://doi.org/10.3390/hydrology13010031 - 14 Jan 2026
Abstract
Flooding remains one of the most disruptive and costly natural hazards worldwide. Conventional approaches for estimating flood damage cost rely on empirical loss curves or historical insurance data, which often lack spatial resolution and predictive robustness. This study develops a data-driven framework for estimating flood damage costs across the contiguous United States, where comprehensive hydrologic, climatic, and socioeconomic data are available. A database of 17,407 flood events was compiled, incorporating approximately 38 parameters obtained from the National Oceanic and Atmospheric Administration (NOAA), the National Water Model (NWM), the United States Geological Survey (USGS NED), and the U.S. Census Bureau. Data preprocessing addressed missing values and outliers using the interquartile range and Walsh tests, followed by partitioning into training (70%), testing (15%), and validation (15%) subsets. Four modeling configurations were examined to improve predictive accuracy. The optimal hybrid regression–classification framework achieved correlation coefficients of 0.97 (training), 0.77 (testing), and 0.81 (validation) with minimal bias (−5.85, −107.8, and −274.5 USD, respectively). The findings demonstrate the potential of nationwide, event-based predictive approaches to enhance flood-damage cost assessment, providing a practical tool for risk evaluation and resource planning. Full article
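The interquartile-range outlier screen used in the preprocessing above is typically Tukey's 1.5×IQR fence; a minimal sketch with invented damage costs (the study's exact rule and the Walsh test are not reproduced here):

```python
# Sketch of an IQR outlier screen (Tukey fence): values outside
# [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are dropped. Costs below are made up.
from statistics import quantiles

def iqr_filter(values, k=1.5):
    q1, _, q3 = quantiles(values, n=4)   # quartiles (exclusive method)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

damages = [1.2, 0.8, 1.5, 2.0, 1.1, 45.0, 0.9, 1.7]   # made-up costs, $M
print(iqr_filter(damages))   # the 45.0 event falls outside the fence
```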

13 pages, 683 KB  
Article
Translational Model to Predict Lung and Prostate Distribution of Levofloxacin in Humans
by Estevan Sonego Zimmermann, Teresa Dalla Costa, Brian Cicali, Mohammed Almoslem, Rodrigo Cristofoletti and Stephan Schmidt
Pharmaceutics 2026, 18(1), 107; https://doi.org/10.3390/pharmaceutics18010107 - 13 Jan 2026
Abstract
Background/Objectives: Levofloxacin (LVX) is a fluoroquinolone approved for the treatment of bacterial pneumonia, sinusitis, and prostatitis. Emerging in vitro and preclinical evidence suggests that efflux transporters are involved in LVX’s target tissue site distribution. Methods: The objective of this research was to characterize tissue exposure using a physiologically based pharmacokinetic (PBPK) model to be able to make more educated choices for optimal doses using target site pharmacokinetics data. Results: The final PBPK model in humans was applied to simulate free target site concentrations of LVX in lung and prostate, linking to minimum inhibitory concentrations (MIC) to assess appropriateness of currently approved dosing regimens for infections in both tissues. The clinical PBPK model was able to reproduce total plasma as well as free lung and prostate exposure of LVX in humans. Efflux transporters participate in LVX distribution to prostatic but not pulmonary tissue. Our results show a good penetration of LVX in both tissues with unbound partition coefficient (Kp,uu) equal to 0.79 and 0.72 for lung and prostate, respectively. Since LVX penetration in lung and prostate is similar, different sensitivities of the pathogens to LVX will dictate the effectiveness of the approved therapeutic regimen in the treatment of bacterial pneumonia, sinusitis, and prostatitis. Conclusions: Our research provides relevant insight into LVX’s target site exposure in lung and prostate. When integrated with pathogen-specific susceptibility data, these findings can be applied to refine current dosing regimens and help optimize the pharmacological treatment outcomes. Full article
(This article belongs to the Section Pharmacokinetics and Pharmacodynamics)

27 pages, 11326 KB  
Article
Numerical Study on Lost Circulation Mechanism in Complex Fracture Network Coupled Wellbore and Its Application in Lost-Circulation Zone Diagnosis
by Zhichao Xie, Yili Kang, Chengyuan Xu, Lijun You, Chong Lin and Feifei Zhang
Processes 2026, 14(1), 143; https://doi.org/10.3390/pr14010143 - 31 Dec 2025
Abstract
Deep and ultra-deep drilling operations commonly encounter fractured and fracture-vuggy formations, where weak wellbore strength and well-developed fracture networks lead to frequent lost circulation, presenting a key challenge to safe and efficient drilling. Existing diagnostic practices mostly rely on drilling fluid loss dynamic models of single fractures or simplified discrete fractures to invert fracture geometry, which cannot capture the spatiotemporal evolution of loss in complex fracture networks, resulting in limited inversion accuracy and a lack of quantitative, fracture-network-based loss-dynamics support for bridge-plugging design. In this study, a geologically realistic wellbore–fracture-network coupled loss dynamic model is constructed to overcome the limitations of single- or simplified-fracture descriptions. Within a unified computational fluid dynamics (CFD) framework, solid–liquid two-phase flow and Herschel–Bulkley rheology are incorporated to quantitatively characterise fracture connectivity. This approach reveals how instantaneous and steady losses are controlled by key geometrical factors, thereby providing a computable physical basis for loss-zone inversion and bridge-plugging design. Validation against experiments shows a maximum relative error of 7.26% in pressure and loss rate, indicating that the model can reasonably reproduce actual loss behaviour. Different encounter positions and node types lead to systematic variations in loss intensity and flow partitioning. Compared with a single fracture, a fracture network significantly amplifies loss intensity through branch-induced capacity enhancement, superposition of shortest paths, and shortening of loss paths. In a typical network, the shortest path accounts for only about 20% of the total length, but contributes 40–55% of the total loss, while extending branch length from 300 mm to 1500 mm reduces the steady loss rate by 40–60%. 
Correlation analysis shows that the instantaneous loss rate is mainly controlled by the maximum width and height of fractures connected to the wellbore, whereas the steady loss rate has a correlation coefficient of about 0.7 with minimum width and effective path length, and decreases monotonically with the number of connected fractures under a fixed total width, indicating that the shortest path and bottleneck width are the key geometrical factors governing long-term loss in complex fracture networks. This work refines the understanding of fractured-loss dynamics and proposes the concept of coupling hydraulic deviation codes with deep learning to build a mapping model from mud-logging curves to fracture geometrical parameters, thereby providing support for lost-circulation diagnosis and bridge-plugging optimisation in complex fractured formations. Full article

14 pages, 1776 KB  
Article
Theoretical Computation-Driven Screening and Mechanism Study of Washing Oil Composite Solvents for Benzene Waste Gas Absorption
by Chengyi Qiu, Zekai Jin, Meisi Chen, Li Wang, Sisi Li, Gang Zhang, Muhua Chen, Xinbao Zhu and Bo Fu
Atmosphere 2026, 17(1), 52; https://doi.org/10.3390/atmos17010052 - 31 Dec 2025
Abstract
In order to solve the problems of high volatility and insufficient absorption effect when using chemical by-product washing oil to treat benzene-containing waste gas, this study innovatively proposed a composite solvent screening method based on the solvation free energy (ΔGsol), and reasonably predicted the absorption performance of 26 solvents for benzene. Through theoretical calculation and experimental verification, tetraethylene glycol dimethyl ether (TGDE) was finally determined to be the optimal composite component of washing oil. The absorption efficiency of the composite solvent reached 96.2%, and the regeneration efficiency was stable after 12 cycles with a mass loss of only 2.4%. Quantum computing simulation revealed that the dispersion force is dominant between benzene and the solvent, and TGDE enhances the electrostatic interaction through weak hydrogen bonds. The synergistic effect of the two improves the absorption performance. This study provides theoretical and technical support for the development of efficient and renewable benzene waste gas recovery solvent systems. Full article
(This article belongs to the Section Air Pollution Control)

15 pages, 3206 KB  
Article
Austenite Formation Kinetics of Dual-Phase Steels: Insights from a Mixed-Control Model Under Different Heating Conditions
by Huifang Lan, Xiaoying Hui, Jiangbo Du, Shuai Tang and Linxiu Du
Modelling 2026, 7(1), 7; https://doi.org/10.3390/modelling7010007 - 29 Dec 2025
Abstract
A semi-analytical mixed-control model based on the Non-Partitioned Local Equilibrium (NPLE) assumption was developed to simulate the austenite phase transformation kinetics during heating and isothermal processes. The model was validated by comparing the simulation results with experimental data, showing excellent agreement. The effects of various model parameters and process conditions on the phase transformation kinetics were investigated. The results indicate that higher heating rates lead to an increase in the austenite volume fraction at the start of the isothermal hold, accelerating the transformation and resulting in a more complete phase transformation. The transformation during the isothermal stage was found to follow a mixed control mode at all investigated heating rates. Increasing the mobility coefficient enhances interface migration, thereby accelerating the transformation kinetics, while decreasing the grain size promotes nucleation, further accelerating the phase transformation. Modifying the diffusion coefficient had a minor effect on transformation kinetics. Additionally, raising the isothermal temperature increased both the austenite volume fraction at the beginning and end of the isothermal process and the interface migration velocity, suggesting that temperature dominates the phase transformation rather than time. The phase transformation mode under different process conditions was also investigated. For both 5 °C/s and 100 °C/s heating rates, the phase transformation during the isothermal process was predominantly interface-controlled, as indicated by the mixed-mode parameter approaching 1, with a rapid increase followed by a decrease.

51 pages, 5351 KB  
Article
Isogeometric Transfinite Elements: A Unified B-Spline Framework for Arbitrary Node Layouts
by Christopher G. Provatidis
Axioms 2026, 15(1), 28; https://doi.org/10.3390/axioms15010028 - 29 Dec 2025
Abstract
This paper presents a unified framework for constructing partially unstructured B-spline transfinite finite elements with arbitrary nodal distributions. Three novel, distinct classes of elements are investigated and compared with older single Coons-patch elements. The first consists of classical transfinite elements reformulated using B-spline basis functions. The second includes elements defined by arbitrary control point networks arranged in parallel layers along one direction. The third features arbitrarily placed boundary nodes combined with a tensor-product structure in the interior. For all three classes, novel macro-element formulations are introduced, enabling flexible and customizable nodal configurations while preserving the partition of unity property. The key innovation lies in reinterpreting the generalized coefficients as discrete samples of an underlying continuous univariate function, which is independently approximated at each station in the transfinite element. This perspective generalizes the classical transfinite interpolation by allowing both the blending functions and the univariate trial functions to be defined using non-cardinal bases such as Bernstein polynomials or B-splines, offering enhanced adaptability for complex geometries and nonuniform node layouts. Full article

36 pages, 35595 KB  
Article
Robust ISAR Autofocus for Maneuvering Ships Using Centerline-Driven Adaptive Partitioning and Resampling
by Wenao Ruan, Chang Liu and Dahu Wang
Remote Sens. 2026, 18(1), 105; https://doi.org/10.3390/rs18010105 - 27 Dec 2025
Viewed by 342
Abstract
Synthetic aperture radar (SAR) is a critical enabling technology for maritime surveillance. However, maneuvering ships often appear defocused in SAR images, posing significant challenges for subsequent ship detection and recognition. To address this problem, this study proposes an improved iteration phase gradient resampling autofocus (IIPGRA) method. First, we extract the defocused ships from SAR images, followed by azimuth decompression and translational motion compensation. Subsequently, a centerline-driven adaptive azimuth partitioning strategy is proposed: the geometric centerline of the vessel is extracted from coarsely focused images using an enhanced RANSAC algorithm, and the target is partitioned into upper and lower sub-blocks along the azimuth direction to maximize the separation of rotational centers between sub-blocks, establishing a foundation for the accurate estimation of spatially variant phase errors. Next, phase gradient autofocus (PGA) is employed to estimate the phase error of each sub-block and compute their differential. Then, resampling the original echoes based on this differential phase error linearizes the non-uniform rotational motion. Furthermore, this study introduces the Rotational Uniformity Coefficient (β) as the convergence criterion. This coefficient stably and reliably quantifies the linearity of the rotational phase, thereby ensuring robust termination of the iterative process. Simulations and real airborne SAR data validate the effectiveness of the proposed algorithm. Full article
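The core PGA step mentioned in this abstract, estimating an azimuth phase error from the data itself, can be sketched as follows. This is a heavily simplified illustration: windowing, circular-shifting of dominant scatterers, and the paper's sub-block differencing, resampling, and β criterion are all omitted, and the function name is an assumption.

```python
import numpy as np

def estimate_phase_error(G):
    """Estimate a common azimuth phase error from complex data G of
    shape (range_bins, azimuth_samples), assuming dominant scatterers
    are already shifted to the azimuth center (PGA's usual preprocessing,
    omitted here). The phase-error gradient is taken as the angle of the
    lag-1 autocorrelation summed over range bins; integrating it and
    removing the linear trend leaves the focusable part of the error."""
    corr = (G[:, 1:] * np.conj(G[:, :-1])).sum(axis=0)      # lag-1 autocorrelation
    phi = np.concatenate(([0.0], np.cumsum(np.angle(corr))))  # integrate gradient
    n = np.arange(phi.size)
    return phi - np.polyval(np.polyfit(n, phi, 1), n)       # drop linear term (shift only)
```

Conjugating `exp(1j * phi_est)` out of each azimuth line is then one correction pass; PGA iterates this until a convergence criterion (here, the paper's β) is met.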

33 pages, 795 KB  
Article
Estimating the Impact of Government Green Subsidies on Corporate ESG Performance: Double Machine Learning for Causal Inference
by Yingzhao Cao, Mohd Hizam-Hanafiah, Mohd Fahmi Ghazali, Ruzanna Ab Razak and Yang Zheng
Sustainability 2026, 18(1), 281; https://doi.org/10.3390/su18010281 - 26 Dec 2025
Viewed by 533
Abstract
In this study, we examine the impact of government green subsidies on corporate ESG performance, employing double machine learning for causal inference. We use all A-share listed companies in China from 2013 to 2023 as the research sample. After excluding financial and insurance companies, those in ST/*ST/PT status, and those with missing key indicators, we ultimately obtain 2337 sample observations. Our baseline results based on double machine learning reveal that government green subsidies significantly enhance corporate ESG performance. The findings suggest that this enhancement operates notably through the mediating variables of digital technology innovation and technology conversion efficiency. We also examine heterogeneity along dimensions such as the level of digital inclusive finance, the intensity of environmental regulations, and enterprise scale. Meanwhile, we adopt multiple robustness tests, including changing the dependent variable, excluding data from special years, controlling for exogenous policy shocks, using instrumental variable methods, and re-specifying the double machine learning model (adjusting the sample partition ratio from the original 1:4 to 1:9 and replacing the random forest prediction algorithm with gradient boosting, lasso regression, and ensemble machine learning methods) to ensure the reliability of the research conclusions. Additional tests indicate that the regression coefficient remains positive and significant, confirming the robustness of our conclusions. This research offers implications for further optimizing the design of government green subsidy policies and for promoting improvements in enterprises' ESG performance and the green transformation of the economy. Full article
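The generic double machine learning recipe behind results like these can be sketched for a partially linear model; this is an illustrative reconstruction using scikit-learn, not the authors' exact specification, and all variable names and settings are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dml_plr(y, d, X, n_folds=2, seed=0):
    """Partialling-out DML for a partially linear model
    y = theta*d + g(X) + e, d = m(X) + v. Nuisance functions are
    predicted out-of-fold (cross-fitting), then theta is the
    residual-on-residual regression coefficient."""
    y_res = np.empty_like(y)
    d_res = np.empty_like(d)
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for train, test in kf.split(X):
        my = RandomForestRegressor(n_estimators=100, random_state=seed).fit(X[train], y[train])
        md = RandomForestRegressor(n_estimators=100, random_state=seed).fit(X[train], d[train])
        y_res[test] = y[test] - my.predict(X[test])   # outcome residual
        d_res[test] = d[test] - md.predict(X[test])   # treatment residual
    return float(d_res @ y_res / (d_res @ d_res))     # residual-on-residual slope
```

Changing `n_folds` loosely mirrors the abstract's 1:4 versus 1:9 partition-ratio robustness check, and swapping `RandomForestRegressor` for a gradient boosting or lasso learner mirrors its algorithm-replacement check.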
