Search Results (2,055)

Search Parameters:
Keywords = Second Law

28 pages, 12791 KB  
Article
Empirical Validation of Fitts’ Law in Virtual Reality: Modeling, Prediction, and Modality Comparison
by Nikolina Rodin, Dario Ogrizović, Luka Batistić and Sandi Ljubic
Multimodal Technol. Interact. 2026, 10(5), 49; https://doi.org/10.3390/mti10050049 - 1 May 2026
Abstract
Fitts’ law is a foundational model for predicting pointing performance and has been increasingly explored in immersive virtual reality (VR) environments. This paper presents a controlled experimental framework for deriving modality-specific Fitts’ law models in VR and evaluating their predictive transfer to applied interaction tasks. The framework comprises two scenarios. The first replicates a standardized ISO 9241 pointing task in a 3D virtual environment to derive predictive movement time models by systematically varying target distance (20–50 cm), target size (2.5–5 cm), and spatial configuration (0°, 45°, 90°, 135°). The second simulates an applied warehouse-inspired task involving tool sorting and structured placement actions to evaluate the generalizability of the derived models in more ecologically valid VR interactions. Thirty-two participants completed all tasks using the Meta Quest 3 headset and two interaction modalities: a handheld controller and hand tracking with gesture recognition. Results show that Fitts’ law remains a strong predictor of movement time for 3D pointing in VR, with high linear fits for both the controller (R² = 0.9615) and hand tracking (R² = 0.9668). However, models derived from standardized pointing tasks showed limited transferability to applied object-manipulation scenarios, producing prediction errors of approximately 27–35% and systematically underestimating movement times. Additionally, both objective metrics and subjective evaluations indicated that controller-based interaction outperformed hand tracking in efficiency, accuracy, perceived workload, and usability. These findings highlight both the robustness and limitations of Fitts-based performance modeling in realistic VR interaction contexts. Full article
19 pages, 2463 KB  
Article
Leveraging Electrical Network Models for Solving Fick’s Second Law of Diffusion in Membrane Gas Permeation
by Zheng Cao, Boguslaw Kruczek and Jules Thibault
Membranes 2026, 16(5), 165; https://doi.org/10.3390/membranes16050165 - 1 May 2026
Abstract
The permeation of gases through membranes is a fundamental process with wide-ranging applications, from gas separation and fuel cell technology to respiratory physiology. Governed by Fick’s second law of diffusion, the mathematical modelling of such transport processes often becomes analytically and computationally challenging, especially in heterogeneous, mixed matrix, or multilayered systems. To navigate these complexities, this study revisits and expands upon the use of electrical analogies as an intuitive and powerful modelling approach rooted in mid-20th-century analog computing. By leveraging the mathematical equivalence between diffusion and electrical conduction, we construct an equivalent electrical network that mirrors the transient behaviour of gas permeation across membranes. In this framework, concentration gradients are represented as voltage differences, diffusive fluxes as electrical currents, and diffusional resistances as circuit resistances. While traditional applications of electrical analogies have largely focused on steady-state phenomena, our approach enables dynamic analysis, offering conceptual clarity and computational efficiency. This methodology not only simplifies the solution of Fick’s second law but also reinforces the enduring relevance of analogical thinking in modern engineering practice. Comparative results demonstrate that the equivalent electrical circuit closely aligns with both analytical and finite difference solutions, validating its effectiveness and accuracy. Full article
(This article belongs to the Section Membrane Applications for Gas Separation)
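The RC-network analogy the authors revisit can be sketched in a few lines: discretizing the membrane into slabs gives a resistor–capacitor ladder whose node voltages (concentrations) obey the same difference equation as a finite-difference solution of Fick's second law. The parameter values below are assumed for illustration, not taken from the paper.

```python
# RC-ladder analog of 1D transient diffusion across a membrane:
# node voltage <-> concentration, resistor R = dx/D carries flux as
# current, capacitor C = dx stores accumulated penetrant (per unit area).
N = 20            # interior nodes
D = 1e-9          # diffusivity, m^2/s (assumed)
L = 1e-4          # membrane thickness, m (assumed)
dx = L / (N + 1)
R = dx / D
C = dx
tau = R * C       # elemental RC time constant
dt = 0.4 * tau    # explicit Euler step; stable for dt < 0.5 * tau

v = [0.0] * (N + 2)
v[0] = 1.0        # upstream face held at unit concentration; downstream at 0
for _ in range(50000):
    new = v[:]
    for i in range(1, N + 1):
        # Kirchhoff current balance at node i == discrete Fick's second law
        new[i] = v[i] + dt / tau * (v[i - 1] - 2 * v[i] + v[i + 1])
    v = new
# At steady state the ladder settles to the linear concentration profile.
```

The same circuit solved with a SPICE-style simulator would reproduce the transient permeation curve, which is the equivalence the paper exploits.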

25 pages, 4142 KB  
Article
Evolutionary Patterns and Advanced Strategies of Health Policies Based on Topic Modeling and Social Network Analysis
by Kaixuan Zhu, Lirong Song, Xuejie Yang, Wenxing Lu and Dongxiao Gu
Systems 2026, 14(5), 497; https://doi.org/10.3390/systems14050497 - 1 May 2026
Abstract
We systematically analyze the evolutionary characteristics of China’s public health policies, focusing on the dynamic changes in policy content, stage-specific differences, and inter-subject collaborative relationships. Based on 137 public health policy documents issued by the central government, the analysis is conducted from a dual perspective: first, the BERTopic model is employed to identify prominent policy themes and track their evolutionary paths; second, Social Network Analysis (SNA) is utilized to deconstruct the collaborative mechanisms and network structural characteristics among policy actors, goals, and tools. The findings indicate: (1) Collaboration among core policy actors is close, yet inter-departmental transparency and collaborative inclusivity remain limited for certain organizations. (2) Policy goals show a diversifying trend, with the strategic focus shifting from infectious disease prevention and control to comprehensive public health services. (3) There are significant preferences in the selection of policy tools for balancing rapid emergency response with sustainable long-term health governance. These findings reveal the evolutionary laws of the public health policy system and provide a theoretical basis for optimizing the policy framework and enhancing governance efficacy. Full article
(This article belongs to the Topic Data Science and Intelligent Management)

23 pages, 985 KB  
Article
Analysis of Power System Cost Evolution Characteristics Under Different Thermal Power Substitution Modes
by Xiuyu Yang, Yi Wang, Gangui Yan, Hongda Dong and Chenggang Li
Energies 2026, 19(9), 2174; https://doi.org/10.3390/en19092174 - 30 Apr 2026
Abstract
With the continuous decline in the cost of renewable energy such as wind and photovoltaic generation, its economic competitiveness within the power supply structure is increasing and traditional thermal power units are gradually being replaced, driving a profound adjustment of the power supply structure. However, an unclear substitution relationship between units can lead to redundant system configuration or power supply shortages. At the same time, the volatility of renewable output and the flexible allocation of resources jointly cause system costs to exhibit complex evolution characteristics. It is therefore important to study how system cost evolves during thermal power substitution. This paper first analyzes the internal mechanism of cost change in the new power system. Second, a cost accounting model of the power system is constructed to reveal the relationship between thermal power substitution mode and system cost as thermal installed capacity is replaced. Finally, the Garver-6 system is taken as an example for simulation analysis, solving for the optimal thermal power substitution mode under different renewable energy penetration rates and exploring the evolution of system cost. The results show that as renewable penetration increases, the total system cost first decreases and then increases, and the optimal substitution method is ‘unit thermal power replacing more renewable energy’. Full article
(This article belongs to the Section F1: Electrical Power System)
67 pages, 531 KB  
Article
Photon Entanglement, Bell Inequality Violation, and Energy Interpretation of the Born Rule in Maxwell–Schwartz Field Theory
by David Carfì
Mathematics 2026, 14(9), 1490; https://doi.org/10.3390/math14091490 - 28 Apr 2026
Abstract
In this paper we study photon entanglement in the framework of Maxwell–Schwartz field theory. The ambient state space is the complex Maxwellian distribution space W = S′(M⁴, ℂ³), whose elements are fields of the form F = E + icB. Polarization is realized as a two-dimensional complex subspace of W, generated by suitable linearly polarized Maxwellian solutions associated with opposite propagation directions. This yields canonical polarization sectors P_A and P_B, each naturally isomorphic to ℂ². Within this setting, the Bell singlet state is represented by a non-factorizable tensorial Maxwellian field in P_A ⊗ P_B ⊂ W ⊗ W. By means of the induced rotated polarization bases, the standard joint probabilities of the photon polarization experiment are recovered exactly, and the correlation law E(a, b) = −cos(2(a − b)) is obtained. Consequently, the usual CHSH value 2√2 is reproduced in the Maxwell–Schwartz framework. To clarify the meaning of this violation, we first formulate the CHSH inequality in a purely measure-theoretic form, as a theorem about four correlators represented on a single probability space by bounded measurable functions. We then show that the correlators produced by the intrinsic Maxwellian Bell state do not admit such a common representation. The obstruction is structural: the ontic state is a global non-product field configuration, and the four correlations arise from different polarization resolutions of the same tensorial Maxwellian state. A second main result concerns the Born rule. For L² scalar quantum states in the domain of the Maxwellian correspondence, we prove that the squared Hilbert norm, times the constant ε₀, coincides with the electromagnetic energy of the associated field. This leads to an energy interpretation of the Born rule: the Born probability density is identified with the normalized electromagnetic energy density up to an interference term depending on the chosen Maxwell–Schwartz isomorphism, which assumes the role of a quantum context.
In the context of the Aspect and collaborators’ experiment, we prove that, on the other hand, the polarization probabilities become energy contributions of the corresponding field components. These results show that photon entanglement, Bell inequality violation, and the Born rule admit a coherent interpretation within Maxwell–Schwartz field theory, where the basic ontological objects are electromagnetic-like fields rather than abstract state vectors. Full article
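The quoted correlation law and CHSH value can be checked numerically at the standard analyzer settings; this is the textbook computation, not code from the paper.

```python
import math

# Singlet-state photon polarization correlation E(a, b) = -cos(2(a - b)),
# evaluated at the standard CHSH analyzer angles (radians).
def E(a, b):
    return -math.cos(2 * (a - b))

a, a2 = 0.0, math.pi / 4               # Alice: 0 deg, 45 deg
b, b2 = math.pi / 8, 3 * math.pi / 8   # Bob: 22.5 deg, 67.5 deg

# CHSH combination; local hidden-variable models are bounded by 2.
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # approaches 2*sqrt(2), the Tsirelson bound
```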
36 pages, 1539 KB  
Article
PGT-Net: A Physics-Guided Transformer–CNN Hybrid Network for Low-Light Image Enhancement and Object Detection in Traffic Scenes
by Bin Chen, Jian Qiao, Baowei Li, Shipeng Liu and Wei She
J. Imaging 2026, 12(5), 191; https://doi.org/10.3390/jimaging12050191 - 28 Apr 2026
Abstract
In autonomous driving and intelligent transportation systems, the degradation of image quality under low-light conditions severely impacts the reliability of subsequent object detection. Existing methods predominantly employ data-driven deep learning models for image enhancement, often lacking physical interpretability and struggling to maintain robustness in complex lighting-varying traffic scenarios. To address this, this paper proposes a Physics-Guided Transformer–CNN Hybrid Network (PGT-Net) for end-to-end joint optimization of low-light enhancement and object detection. PGT-Net innovatively integrates the atmospheric scattering physical model with deep learning architecture: first, a learnable physical guidance branch estimates the scene’s atmospheric illumination map and transmittance map, providing explicit physical priors for the network; second, a dual-branch enhancement backbone is designed, where the local CNN branch (based on an improved UNet) restores fine textures, while the global Transformer branch (based on Swin Transformer) models long-range dependencies to correct global uneven illumination, with features adaptively combined via a Physical Fusion Module to ensure enhancement results align with physical laws while retaining rich visual features; finally, the enhanced images are directly fed into a lightweight detection head (e.g., YOLOv7) for joint training and optimization. Comprehensive experiments on public datasets (ExDark, BDD100K-night, etc.) demonstrate that PGT-Net significantly outperforms mainstream methods (e.g., RetinexNet, KinD, Zero-DCE) in both low-light image enhancement quality (PSNR/SSIM) and object detection accuracy (mAP), while maintaining high inference efficiency. This research offers an interpretable, high-performance solution for visual perception tasks under adverse lighting conditions, holding strong theoretical significance and practical value. Full article
(This article belongs to the Section AI in Imaging)
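The atmospheric scattering model underlying the physical guidance branch, I = J·t + A·(1 − t), can be inverted per pixel once the transmittance t and illumination A are estimated. The scalar sketch below uses made-up values and a hypothetical `restore` helper, not PGT-Net's learned branches.

```python
# Scalar form of the atmospheric scattering model (an assumed
# simplification; the paper's network estimates t and A as maps):
# observed intensity I = J*t + A*(1 - t), with scene radiance J,
# transmittance t, and atmospheric/ambient illumination A, all in [0, 1].
def restore(I, t, A, t_min=0.05):
    """Invert the scattering model for one pixel value."""
    t = max(t, t_min)  # clamp t so division stays well-conditioned
    J = (I - A * (1.0 - t)) / t
    return min(max(J, 0.0), 1.0)  # clip back to valid intensity range

# A dim pixel observed at 0.12 with estimated t = 0.3, A = 0.08
J = restore(0.12, 0.3, 0.08)
```

In the full network these scalar operations run per pixel over the estimated transmittance and illumination maps before the dual-branch enhancement.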
21 pages, 398 KB  
Article
Modified Gravity as Entropic Cosmology
by Shin’ichi Nojiri, Sergei D. Odintsov, Tanmoy Paul and Soumitra SenGupta
Universe 2026, 12(5), 126; https://doi.org/10.3390/universe12050126 - 27 Apr 2026
Abstract
The present work reveals a direct correspondence between modified theories of gravity (cosmology) and entropic cosmology based on the thermodynamics of the apparent horizon. It turns out that, due to the total differentiability of entropy, the usual thermodynamic law (used for Einstein gravity) needs to be generalized for modified gravity theories having more than one thermodynamic degree of freedom (d.o.f.). For modified theories having n thermodynamic d.o.f., the corresponding horizon entropy is given by S_h ≃ S_BH + terms containing the time derivatives of S_BH up to (n−1)-th order; moreover, the coefficient(s) of the derivative term(s) are proportional to the modification parameter of the gravity theory (relative to Einstein gravity; S_BH is the Bekenstein–Hawking entropy). By identifying the independent thermodynamic variables from the first law of thermodynamics, we show that the equivalent thermodynamic description of modified gravity naturally allows the time derivative of the Bekenstein–Hawking entropy in the horizon entropy. Full article

35 pages, 10652 KB  
Article
Unveiling Long-Memory Dynamics in Turbulent Markets: A Novel Fractional-Order Attention-Based GRU-LSTM Framework with Multifractal Analysis
by Yangxin Wang and Yuxuan Zhang
Fractal Fract. 2026, 10(5), 293; https://doi.org/10.3390/fractalfract10050293 - 26 Apr 2026
Abstract
Financial time series in turbulent markets exhibit complex long-memory dynamics and multifractal features that traditional deep learning models fail to capture due to inherent exponential forgetting mechanisms. To address this, we propose Frac-Attn-GL, a novel Fractional-order Spatiotemporal Attention-based GRU-LSTM framework. Grounded in the Fractal Market Hypothesis, the model embeds Grünwald–Letnikov fractional-order operators into a dual-channel architecture (FracLSTM and FracGRU) to characterize long-range memory with rigorous power-law decay priors. Furthermore, an extreme-aware asymmetric loss function is designed to drive a dynamic spatiotemporal routing mechanism, enabling adaptive shifts between long-term macro trends and short-term micro shocks. Empirical tests on major U.S. stock indices reveal three significant findings. First, the Frac-Attn-GL framework substantially reduces prediction errors, achieving up to a 93.1% RMSE reduction on the highly volatile NASDAQ index compared to standard baselines. Second, the adaptively learned fractional-order parameters exhibit a consistent quantitative alignment with the market’s empirical multifractal singularity spectrum, supporting the physical interpretability of the model’s endogenous memory mechanism. Finally, hybrid residual multifractal diagnostics indicate that the framework effectively captures deep long-range correlations, reducing the Hurst exponent of the prediction residuals from ~0.83 to approximately 0.50, a level consistent with the absence of significant long-range dependence. Full article
(This article belongs to the Special Issue Fractal Approaches and Machine Learning in Financial Markets)
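The Grünwald–Letnikov operator at the heart of such frameworks reduces, in discrete time, to a weighted history sum whose weights follow a simple recursion and decay as a power law (roughly k^(−α−1) for 0 < α < 1) rather than exponentially, which is the long-memory prior embedded in the FracLSTM/FracGRU cells. The computation below is illustrative only, not the authors' implementation.

```python
# Grünwald–Letnikov fractional-difference weights w_k = (-1)^k * C(alpha, k),
# via the standard recursion w_k = w_{k-1} * (1 - (alpha + 1) / k).
def gl_weights(alpha, n):
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

# For alpha = 0.5 the weights decay slowly (power law), encoding long memory;
# for alpha = 1 they collapse to the ordinary first difference [1, -1, 0, ...].
w_frac = gl_weights(0.5, 8)
w_int = gl_weights(1.0, 4)
```

The fractional derivative of a series x is then approximated by summing w_k · x[t − k] over the history window, which is how a power-law forgetting profile replaces the exponential one of a plain gated RNN state.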
20 pages, 946 KB  
Article
Minimum-Entropy Optimal Control of Electromechanical Linkages for Energy Harvesting
by Meysam Fathizadeh and Hanz Richter
Entropy 2026, 28(5), 489; https://doi.org/10.3390/e28050489 (registering DOI) - 24 Apr 2026
Abstract
This work considers optimal mechanical–electrical power conversion across rigid linkages equipped with current-controlled actuators. A novel cost function derived from a generalization of the Second Law of Thermodynamics is adopted from our previous work, where cycle-averaged energies are interpreted as generalized temperatures. A cost function based on generalized entropy generation is used to formulate an optimal control problem yielding a decoupled velocity feedback controller. Suboptimal gains are found which are independent of both the excitation characteristics and the mechanical subsystem dynamics, and which yield closed-loop stability. The effectiveness and simplicity of the resulting controller are demonstrated by a Monte Carlo simulation study, where random episodes of unknown, periodic forcing are applied under the proposed controller and compared with a maximum-efficiency controller. Results show that the proposed controller offers a higher statistical expectation for the average harvested power. Full article
(This article belongs to the Section Multidisciplinary Applications)
17 pages, 310 KB  
Article
Second Law of Thermodynamics and Strain Gradient Theories of Elasticity
by Claudio Giorgi and Angelo Morro
Entropy 2026, 28(5), 487; https://doi.org/10.3390/e28050487 (registering DOI) - 24 Apr 2026
Abstract
The paper addresses the second law of thermodynamics through the Clausius–Duhem inequality in its general form, with entropy flux and entropy production given by suitable constitutive functions. For definiteness the paper investigates possible models of elastic solids where, to account for non-local properties, the stress depends on strain gradients up to second order. While previous approaches are developed through variational formulations or by applying the virtual power method, here it is shown that no change in the energy balance or the form of kinetic energy is necessary; it is sufficient that the entropy flux be given by a suitable constitutive function. The paper also emphasizes that non-local constitutive properties arise from the Clausius–Duhem thermodynamic inequality, while variational formulations and the virtual power method are in fact limited to the purely mechanical context, as they involve only the equation of motion. Full article
15 pages, 5064 KB  
Article
Physics-Guided Machine Learning with Flowing Material Balance Integration: A Novel Approach for Reliable Production Forecasting and Well Performance Analytics
by Eghbal Motaei, Tarek Ganat and Hai T. Nguyen
Energies 2026, 19(9), 2022; https://doi.org/10.3390/en19092022 - 22 Apr 2026
Abstract
Reliable production forecasting is a critical task for evaluating asset value and commercial performance in oil and gas reservoirs. Conventional short-term forecasting methods, such as Arps’ decline curve analysis, rely on simple mathematical curve fitting and often oversimplify reservoir performance. On the other hand, long-term forecasting requires complex multidisciplinary models that integrate geophysics, reservoir engineering, and production engineering, but these approaches are time-consuming and have high turnaround times. To bridge the gap between long- and short-term production forecasts, reduced-physics models such as Blasingame type curves have been developed, incorporating transient well behaviour derived from diffusivity equations and Darcy’s law. These models assume homogeneity and uniform reservoir properties, enabling faster results while honouring pressure performance. However, despite their efficiency, they still face limitations in reliability, particularly when extended to long-term forecasts. This paper proposes a hybrid modelling approach that integrates flowing material balance (FMB) concepts into physics-informed neural networks (PiNNs) and machine learning models to improve the accuracy and reliability of production forecasting. The proposed methodology introduces two hybrid strategies: physics-informed models enriched with an FMB-derived feature, and PiNNs. The first hybrid model uses a created FMB-derived feature as input to neural networks. The second, the PiNN model, combines data-driven loss functions with a physics-based envelope so that the reservoir response is reflected in the machine learning model. The primary loss function is mean squared error, ensuring minimization of data misfit between predicted and observed production rates. The study validates both proposed physics-informed neural network models through performance metrics such as RMSE, MAE, MAPE, and R².
Application of the models to field data shows that integrating FMB into neural network models via the PiNN concept guides them to predict production rates more reliably over the full span of the tested period, which was the last year of unseen production data. Additionally, the proposed PiNN model can predict the well productivity index via hyper-tuning. Notably, the PiNN does not improve on the raw metric performance of conventional neural networks, since it must also satisfy an additional material balance equation and therefore has fewer degrees of freedom. Full article
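For context, the Arps decline baseline the paper contrasts against is a one-line formula; the sketch below uses hypothetical parameters, not field values.

```python
import math

# Arps hyperbolic decline: q(t) = qi / (1 + b * Di * t)^(1/b),
# with the exponential decline q(t) = qi * exp(-Di * t) as the b -> 0 limit.
# qi: initial rate, Di: initial decline rate (1/yr), b: hyperbolic exponent.
def arps_rate(qi, Di, b, t):
    if b < 1e-9:                       # exponential decline limit
        return qi * math.exp(-Di * t)
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

q_2yr = arps_rate(qi=1000.0, Di=0.3, b=0.5, t=2.0)  # rate after 2 years
```

Because this is pure curve fitting with no pressure or material-balance input, it is exactly the kind of short-term extrapolation the FMB-informed PiNN is meant to improve upon.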

19 pages, 2779 KB  
Article
Study on the Characteristics of Positive and Negative Corona Discharge of an Independent Lightning Rod Under Different Background Electric Field Amplitude
by He Zhang, Xiufeng Guo, Zhaoxia Wang, Yubin Zhao, Yuhang Zheng and Shijie Liu
Atmosphere 2026, 17(5), 428; https://doi.org/10.3390/atmos17050428 - 22 Apr 2026
Abstract
Corona discharge at the tip of buildings in a thunderstorm environment is an important factor causing changes in the near-ground electric field, but quantitative research on its growth law and parameters is still rare. Therefore, based on a three-dimensional corona discharge model, this paper studies the influence of positive and negative symmetrical triangular-wave electric fields with different amplitudes on the corona discharge of an independent lightning rod. The studies show that the corona current is synchronized with the peak of the background electric field. When the polarity of the electric field changes from positive to negative, the positive charge accumulated in the positive half-cycle promotes the subsequent negative corona, so the negative corona starts earlier when the polarity reverses. Compared with unipolar discharge, the amplitude of the negative current and the amount of negative charge increase significantly. However, owing to neutralization between positive and negative charges, the total corona charge remains at a low level, giving a net negative-polarity result. The corona current and the amount of charge increase nonlinearly with the background electric field amplitude. Under the symmetrical triangular-wave electric field, quantitative fits between the peak negative corona current in the second half-cycle and the amount of charge are established for the 5 m independent lightning rod: I = −0.0532 − 0.153E − 0.0682E² and Q = −3.18 × 10⁻³ + 7.762 × 10⁻⁴E − 4.671 × 10⁻⁵E², respectively. A larger background electric field amplitude aggravates the disturbance of the corona discharge to the near-surface electric field.
When the background electric field returns to zero, residual space charge still causes a significant change in the strength and polarity of the ground electric field. When the thunderstorm background electric field changes from positive to negative, the corona effect reverses the polarity of the ground electric field in advance, and the larger the peak background field, the earlier the reversal. The corona interference mechanism revealed by this study provides an important reference for correcting electric field monitoring data and improving the accuracy of lightning warnings. Full article
(This article belongs to the Section Meteorology)
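The fitted quadratic relations quoted in the abstract can be evaluated directly. The coefficients below are copied from the abstract; the units of E follow the paper (not restated there), and the sample amplitude is arbitrary.

```python
# Fitted relations for the 5 m independent lightning rod, second half-cycle
# (coefficients from the abstract; E is the background-field amplitude in
# the paper's units, chosen arbitrarily here for illustration).
def corona_current(E):
    return -0.0532 - 0.153 * E - 0.0682 * E ** 2   # peak negative current

def corona_charge(E):
    return -3.18e-3 + 7.762e-4 * E - 4.671e-5 * E ** 2

I_peak = corona_current(2.0)
Q_total = corona_charge(2.0)
```

Both curves grow (in magnitude) faster than linearly with E, matching the stated nonlinear increase of corona current and charge with field amplitude.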

30 pages, 2367 KB  
Article
Estimating Households’ Willingness-to-Pay for Improved Waste Treatment Service in Vietnam
by Van Quy Khuc, Ngoc Duc Doan, Thuy Nguyen, Thi Vinh Ha Nguyen, Nguyen Thi Mai Huong, Nguyen Duc Lam, Thi Quynh Trang Tran and Thi Nguyet Nuong Nguyen
Sustainability 2026, 18(8), 4102; https://doi.org/10.3390/su18084102 - 20 Apr 2026
Abstract
Waste pollution is becoming a major health issue in many developing nations. Waste-reduction options have been investigated and proposed, but environmental culture-based initiatives have not. This study explores and advances Vietnamese families’ environmental culture related to waste management, using the Culture Tower framework and a contingent valuation method coupled with a Bayesian model (CVBM). Specifically, descriptive statistics measure environmental literacy, while CVBM determines household willingness-to-pay (WTP) and estimates WTP for waste treatment services (WTP4WTS). Based on our survey of 487 households across 11 communes and wards in Hai Phong City, local waste pollution has decreased over time, although the respondents remain concerned. Over 13% of households were dissatisfied with waste treatment services (WTSs), while approximately 50% were neutral. Most respondents (79.26%) were willing to pay for improved WTSs, with an average WTP of 60,200 VND (US$2.32) per household per month. Behavioral and perceptual factors, such as the desire for improved waste services, current perceived waste pollution, and the perception that pollution has worsened, were found to significantly influence this willingness. Our study makes three major contributions. First, it develops a novel CVBM framework that links environmental culture and an economic valuation method, strengthening green economy micro-behavioral research. Second, it advances the circular economy literature by highlighting household engagement and willingness-to-pay as key drivers of sustainable waste financing and resource-loop closure. Third, it provides empirical evidence to inform and refine Vietnam’s revised Law on Environmental Protection (2020), particularly in implementing the “polluter pays” principle, promoting waste classification at the source, and designing socially acceptable environmental financing mechanisms. Full article
(This article belongs to the Special Issue Advancing Awareness in Sustainability and Integrated Waste Management)
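The abstract's headline numbers (79.26% willing to pay; mean WTP of 60,200 VND ≈ US$2.32) come from aggregating stated household bids. A minimal sketch of that aggregation step is shown below; the response list and the VND/USD rate of 25,900 are illustrative assumptions, not values from the paper, and this simple average stands in for the paper's full Bayesian (CVBM) estimation.

```python
def wtp_summary(responses, vnd_per_usd=25900.0):
    """Summarize stated willingness-to-pay from survey responses.

    responses: list of (willing: bool, bid_vnd: float) tuples.
    Returns (share willing, mean bid in VND among willing households,
    the same mean converted to USD at an assumed exchange rate).
    """
    willing_bids = [bid for willing, bid in responses if willing]
    share_willing = len(willing_bids) / len(responses)
    mean_vnd = sum(willing_bids) / len(willing_bids)
    return share_willing, mean_vnd, mean_vnd / vnd_per_usd

# Hypothetical mini-sample: three willing households, one unwilling.
sample = [(True, 60000.0), (True, 60400.0), (False, 0.0), (True, 60200.0)]
share, mean_vnd, mean_usd = wtp_summary(sample)
```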
19 pages, 4280 KB  
Article
Adaptive Recursive Model Predictive Current Control for Linear Motor Drives in CNC Machine Tools Based on Cartesian Distance Minimization
by Lin Song, Ziling Nie, Jun Sun, Yangwei Zhou, Jingxin Yuan and Huayu Li
Mathematics 2026, 14(8), 1377; https://doi.org/10.3390/math14081377 - 20 Apr 2026
Abstract
With the increasing demand for high-speed, high-precision motion control in CNC machine tools, permanent magnet linear synchronous motors (PMLSMs) have been widely adopted in feed drive systems due to their excellent dynamic performance and positioning accuracy. However, existing model predictive current control (MPCC) variants still face challenges regarding high computational overhead and strong dependency on accurate motor parameters, which limit their industrial applicability. To address these issues, this paper proposes an adaptive recursive MPCC for PMLSM drives based on the Cartesian distance minimization principle. An adaptive recursive prediction scheme, inspired by the feedback structure of recurrent architectures, is first introduced. By cyclically utilizing the previously sampled current to predict the next period's state, the strategy effectively decouples the control law from inductance variations. The dependence on resistance is further mitigated by analyzing the correlation between the ideal current vector and voltage vector deviations. Second, the selection of the optimal voltage vector is transformed into a geometric problem: minimizing the Cartesian distance between the reference voltage and 19 candidate deviations within a proposed virtual voltage vector hexagon. To minimize the computational burden, the vector space is partitioned into eight regions, allowing the optimal candidate to be selected from only two pre-derived deviations. The experimental results demonstrate that the proposed method significantly outperforms existing MPCC benchmarks. Specifically, the execution time is reduced by 63.6%. Under severe parameter mismatch, the current THD is reduced from 14.82% to 6.35%, and the thrust ripple is improved from 12.06 N to 5.25 N, validating its superior robustness and efficiency. Full article
(This article belongs to the Special Issue Advances in Control Theory and Applications in Energy Systems)
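The core of the vector-selection step described above is a nearest-neighbor search: pick the candidate voltage vector whose Cartesian (Euclidean) distance to the reference is smallest. A minimal sketch follows; the candidate set here is a generic placeholder in the alpha-beta plane, not the paper's 19 deviations or its eight-region partition, which further prunes the search to two candidates.

```python
import math

def best_vector(v_ref, candidates):
    """Select the candidate voltage vector closest to v_ref.

    v_ref: (alpha, beta) reference voltage components.
    candidates: iterable of (alpha, beta) candidate vectors.
    Returns the candidate minimizing the Euclidean (Cartesian) distance.
    """
    return min(candidates, key=lambda v: math.dist(v_ref, v))

# Illustrative candidates (placeholder values, not the paper's set):
candidates = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866), (-0.5, 0.866)]
chosen = best_vector((0.9, 0.1), candidates)
```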
18 pages, 3217 KB  
Article
Machine Learning-Based Prediction of Multi-Year Cumulative Atmospheric Corrosion Loss in Low-Alloy Steels with SHAP Analysis
by Saurabh Tiwari, Seong Jun Heo and Nokeun Park
Coatings 2026, 16(4), 488; https://doi.org/10.3390/coatings16040488 - 17 Apr 2026
Abstract
Atmospheric corrosion of carbon and low-alloy steels causes direct economic losses that are estimated at around 3.4% of the global GDP, and its accurate multi-year prediction is essential for protective coating selection, service-life estimation, and infrastructure maintenance scheduling. In this study, machine learning (ML) algorithms, including gradient boosting regressor (GBR), eXtreme gradient boosting (XGBoost), random forest (RF), support vector regression (SVR), and ridge regression, were trained on a 600-sample physics-grounded dataset to predict the cumulative atmospheric corrosion loss (µm) of low-alloy steels over 1–10 years of exposure. The dataset was constructed using the exact ISO 9223:2012 dose–response function (DRF) for a first-year corrosion rate and the ISO 9224:2012 power-law multi-year kinetic model (C(t) = C1·t^0.5), spanning ISO 9223 corrosivity categories C2–CX across 11 environmental and material input features. All models were evaluated on the original (untransformed) corrosion scale under an 80/20 train/test split and five-fold cross-validation. Gradient boosting achieved the best overall performance with test set R2 = 0.968, CV-R2 = 0.969, RMSE = 10.58 µm, MAE = 5.99 µm, and MAPE = 12.6%. XGBoost was a close second (R2 = 0.958, CV-R2 = 0.960). RF achieved an R2 of 0.944. SHAP (SHapley Additive exPlanations) analysis identified SO2 deposition rate, exposure time, relative humidity, Cl deposition rate, and temperature as the five most influential predictors. The dominance of the SO2 deposition rate (mean |SHAP| = 26.37 µm) and the high second-place ranking of exposure time (13.67 µm) are fully consistent with the ISO 9223:2012 dose–response function and ISO 9224:2012 power-law kinetics, respectively, while among the material features, Cu and Cr contents showed the strongest negative SHAP contributions, confirming their corrosion-inhibiting roles in weathering steels. These results establish a physics-consistent, interpretable ML benchmark exceeding R2 = 0.90 for multi-year cumulative corrosion loss prediction and provide a quantitative tool for alloy screening, coating selection in aggressive atmospheric environments, and service-life planning. Full article
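The ISO 9224:2012 power-law kinetic model cited in the abstract, C(t) = C1·t^0.5, can be sketched directly; the first-year loss value below is an illustrative assumption, and the exponent b = 0.5 is the value the abstract reports for its dataset (ISO 9224 allows material-dependent exponents).

```python
def cumulative_loss(c1_um, t_years, b=0.5):
    """ISO 9224-style power-law kinetics: C(t) = C1 * t**b.

    c1_um: first-year corrosion loss in micrometres (from the
           ISO 9223 dose-response function).
    t_years: exposure time in years.
    b: kinetic exponent; the paper's dataset uses b = 0.5.
    """
    return c1_um * t_years ** b

# Illustrative first-year loss of 25 µm (assumed, not from the paper):
ten_year_loss = cumulative_loss(25.0, 10)  # cumulative loss after 10 years
```

With b = 0.5 the loss quadruples in time for each doubling of cumulative loss, e.g. four years of exposure gives exactly twice the first-year loss.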