Search Results (9,704)

Search Parameters:
Keywords = multi-cost

20 pages, 19537 KiB  
Article
Submarine Topography Classification Using ConDenseNet with Label Smoothing Regularization
by Jingyan Zhang, Kongwen Zhang and Jiangtao Liu
Remote Sens. 2025, 17(15), 2686; https://doi.org/10.3390/rs17152686 (registering DOI) - 3 Aug 2025
Abstract
The classification of submarine topography and geomorphology is essential for marine resource exploitation and ocean engineering, with wide-ranging implications in marine geology, disaster assessment, resource exploration, and autonomous underwater navigation. Submarine landscapes are highly complex and diverse. Traditional visual interpretation methods are not only inefficient and subjective but also lack the precision required for high-accuracy classification. While many machine learning and deep learning models have achieved promising results in image classification, limited work has been performed on integrating backscatter and bathymetric data for multi-source processing. Existing approaches often suffer from high computational costs and excessive hyperparameter demands. In this study, we propose a novel approach that integrates pruning-enhanced ConDenseNet with label smoothing regularization to reduce misclassification, strengthen the cross-entropy loss function, and significantly lower model complexity. Our method improves classification accuracy by 2% to 10%, reduces the number of hyperparameters by 50% to 96%, and cuts computation time by 50% to 85.5% compared to state-of-the-art models, including AlexNet, VGG, ResNet, and Vision Transformer. These results demonstrate the effectiveness and efficiency of our model for multi-source submarine topography classification. Full article
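The label smoothing regularization used in this abstract to soften the cross-entropy loss can be sketched in a few lines of NumPy. This is a generic illustration, not the paper's ConDenseNet implementation, and the smoothing factor `eps` is a common default rather than a value from the study:

```python
import numpy as np

def smoothed_cross_entropy(probs, target, eps=0.1):
    """Cross-entropy against a label-smoothed target distribution.

    The one-hot target is mixed with a uniform distribution:
    q_k = (1 - eps) * 1[k == target] + eps / K.
    """
    k = probs.shape[-1]
    q = np.full(k, eps / k)
    q[target] += 1.0 - eps
    return float(-np.sum(q * np.log(probs)))

# A confident, correct prediction is penalized slightly more than with
# hard labels, which discourages over-confident class probabilities.
p = np.array([0.90, 0.05, 0.05])
hard = -np.log(p[0])                  # plain cross-entropy with a hard label
soft = smoothed_cross_entropy(p, 0)   # smoothed version, strictly larger here
```

With `eps=0` the function reduces exactly to the ordinary cross-entropy, which is the intended sanity check.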
24 pages, 985 KiB  
Article
A Spatiotemporal Deep Learning Framework for Joint Load and Renewable Energy Forecasting in Stability-Constrained Power Systems
by Min Cheng, Jiawei Yu, Mingkang Wu, Yihua Zhu, Yayao Zhang and Yuanfu Zhu
Information 2025, 16(8), 662; https://doi.org/10.3390/info16080662 (registering DOI) - 3 Aug 2025
Abstract
With the increasing uncertainty introduced by the large-scale integration of renewable energy sources, traditional power dispatching methods face significant challenges, including severe frequency fluctuations, substantial forecasting deviations, and the difficulty of balancing economic efficiency with system stability. To address these issues, a deep learning-based dispatching framework is proposed, which integrates spatiotemporal feature extraction with a stability-aware mechanism. A joint forecasting model is constructed using Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) to handle multi-source inputs, while a reinforcement learning-based stability-aware scheduler is developed to manage dynamic system responses. In addition, an uncertainty modeling mechanism combining Dropout and Bayesian networks is incorporated to enhance dispatch robustness. Experiments conducted on real-world power grid and renewable generation datasets demonstrate that the proposed forecasting module achieves approximately a 2.1% improvement in accuracy compared with Autoformer and reduces Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) by 18.1% and 14.1%, respectively, compared with traditional LSTM models. The achieved Mean Absolute Percentage Error (MAPE) of 5.82% outperforms all baseline models. In terms of scheduling performance, the proposed method reduces the total operating cost by 5.8% relative to Autoformer, decreases the frequency deviation from 0.158 Hz to 0.129 Hz, and increases the Critical Clearing Time (CCT) to 2.74 s, significantly enhancing dynamic system stability. Ablation studies reveal that removing the uncertainty modeling module increases the frequency deviation to 0.153 Hz and raises operational costs by approximately 6.9%, confirming the critical role of this module in maintaining robustness. 
Furthermore, under diverse load profiles and meteorological disturbances, the proposed method maintains stable forecasting accuracy and scheduling policy outputs, demonstrating strong generalization capabilities. Overall, the proposed approach achieves a well-balanced performance in terms of forecasting precision, system stability, and economic efficiency in power grids with high renewable energy penetration, indicating substantial potential for practical deployment and further research. Full article
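The MAE, RMSE, and MAPE figures quoted above are standard forecast-error metrics; a minimal NumPy sketch with illustrative data (not the grid or renewable-generation datasets from the study):

```python
import numpy as np

def forecast_errors(y_true, y_pred):
    """Return (MAE, RMSE, MAPE in %) for a point forecast."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mape = float(100.0 * np.mean(np.abs(err / y_true)))  # y_true must be nonzero
    return mae, rmse, mape

# Hypothetical load values (MW) and their forecasts.
mae, rmse, mape = forecast_errors([100, 200, 400], [110, 190, 420])
```

Reported relative improvements (e.g., "reduces MAE by 18.1%") compare these quantities between models on the same test set.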
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
18 pages, 5178 KiB  
Article
Quantification of Suspended Sediment Concentration Using Laboratory Experimental Data and Machine Learning Model
by Sathvik Reddy Nookala, Jennifer G. Duan, Kun Qi, Jason Pacheco and Sen He
Water 2025, 17(15), 2301; https://doi.org/10.3390/w17152301 (registering DOI) - 2 Aug 2025
Abstract
Monitoring sediment concentration in water bodies is crucial for assessing water quality, ecosystems, and environmental health. However, physical sampling and sensor-based approaches are labor-intensive and unsuitable for large-scale, continuous monitoring. This study employs machine learning models to estimate suspended sediment concentration (SSC) using images captured under natural-light (RGB) and near-infrared (NIR) conditions. A controlled dataset of approximately 1300 images with SSC values ranging from 1000 mg/L to 150,000 mg/L was developed, incorporating temperature, time of image capture, and solar irradiance as additional features. Random forest regression and gradient boosting regression were trained on mean RGB values, red reflectance, time of capture, and temperature for natural-light images, achieving up to 72.96% accuracy within a 30% relative error. In contrast, NIR images leveraged gray-level co-occurrence matrix texture features and temperature, reaching 83.08% accuracy. Comparative analysis showed that ensemble models outperformed deep learning models such as Convolutional Neural Networks and Multi-Layer Perceptrons, which struggled with high-dimensional feature extraction. These findings suggest that machine learning models combined with RGB and NIR imagery offer a scalable, non-invasive, and cost-effective means of sediment monitoring in support of water quality assessment and environmental management. Full article
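The "accuracy within a 30% relative error" metric used in this abstract can be computed as the fraction of predictions whose relative error does not exceed the tolerance; a short sketch with hypothetical SSC values, not the study's data:

```python
import numpy as np

def within_rel_error(y_true, y_pred, tol=0.30):
    """Fraction of predictions whose relative error is within `tol`."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    rel = np.abs(y_pred - y_true) / np.abs(y_true)
    return float(np.mean(rel <= tol))

# SSC in mg/L: three of the four predictions fall within 30% of the measurement.
acc = within_rel_error([1000, 5000, 20000, 100000],
                       [1200, 4000, 30000, 95000])
```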
25 pages, 2100 KiB  
Article
Flexible Demand Side Management in Smart Cities: Integrating Diverse User Profiles and Multiple Objectives
by Nuno Souza e Silva and Paulo Ferrão
Energies 2025, 18(15), 4107; https://doi.org/10.3390/en18154107 (registering DOI) - 2 Aug 2025
Abstract
Demand Side Management (DSM) plays a crucial role in modern energy systems, enabling more efficient use of energy resources and contributing to the sustainability of the power grid. This study examines DSM strategies within a multi-environment context encompassing residential, commercial, and industrial sectors, with a focus on diverse appliance types that exhibit distinct operational characteristics and user preferences. Initially, a single-objective optimization approach using Genetic Algorithms (GAs) is employed to minimize the total energy cost under a real Time-of-Use (ToU) pricing scheme. This heuristic method allows for the effective scheduling of appliance operations while factoring in their unique characteristics such as power consumption, usage duration, and user-defined operational flexibility. This study extends the optimization problem to a multi-objective framework that incorporates the minimization of CO2 emissions under a real annual energy mix while also accounting for user discomfort. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is utilized for this purpose, providing a Pareto-optimal set of solutions that balances these competing objectives. The inclusion of multiple objectives ensures a comprehensive assessment of DSM strategies, aiming to reduce environmental impact and enhance user satisfaction. Additionally, this study monitors the Peak-to-Average Ratio (PAR) to evaluate the impact of DSM strategies on load balancing and grid stability. It also analyzes the impact of considering different periods of the year with the associated ToU hourly schedule and CO2 emissions hourly profile. A key innovation of this research is the integration of detailed, category-specific metrics that enable the disaggregation of costs, emissions, and user discomfort across residential, commercial, and industrial appliances. 
This granularity enables stakeholders to implement tailored strategies that align with specific operational goals and regulatory compliance. Also, the emphasis on a user discomfort indicator allows us to explore the flexibility available in such DSM mechanisms. The results demonstrate the effectiveness of the proposed multi-objective optimization approach in achieving significant cost savings that may reach 20% for industrial applications, while the order of magnitude of the trade-offs involved in terms of emissions reduction, improvement in discomfort, and PAR reduction is quantified for different frameworks. The outcomes not only underscore the efficacy of applying advanced optimization frameworks to real-world problems but also point to pathways for future research in smart energy management. This comprehensive analysis highlights the potential of advanced DSM techniques to enhance the sustainability and resilience of energy systems while also offering valuable policy implications. Full article
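The Peak-to-Average Ratio (PAR) monitored in this study is simply the peak load divided by the mean load of a profile; a minimal sketch with illustrative load vectors:

```python
import numpy as np

def peak_to_average_ratio(load):
    """PAR of a load profile; DSM scheduling aims to drive this toward 1."""
    load = np.asarray(load, float)
    return float(load.max() / load.mean())

flat = peak_to_average_ratio([2, 2, 2, 2])    # perfectly balanced profile
peaky = peak_to_average_ratio([1, 1, 1, 5])   # one hour carries most demand
```

Shifting flexible appliance runs away from peak hours lowers the numerator without changing total consumption, which is how DSM improves grid stability.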
19 pages, 1159 KiB  
Article
A Biased–Randomized Iterated Local Search with Round-Robin for the Periodic Vehicle Routing Problem
by Juan F. Gomez, Antonio R. Uguina, Javier Panadero and Angel A. Juan
Mathematics 2025, 13(15), 2488; https://doi.org/10.3390/math13152488 (registering DOI) - 2 Aug 2025
Abstract
The periodic vehicle routing problem (PVRP) is a well-known challenge in real-life logistics, requiring the planning of vehicle routes over multiple days while enforcing visitation frequency constraints. Although numerous metaheuristic and exact methods have tackled various PVRP extensions, real-world settings call for additional features such as depot configurations, tight visitation frequency constraints, and heterogeneous fleets. In this paper, we present a two-phase biased–randomized algorithm that addresses these complexities. In the first phase, a round-robin assignment quickly generates feasible and promising solutions, ensuring each customer’s frequency requirement is met across the multi-day horizon. The second phase refines these assignments via an iterative search procedure, improving route efficiency and reducing total operational costs. Extensive experimentation on standard PVRP benchmarks shows that our approach generates solutions of comparable quality to established state-of-the-art algorithms in relatively low computational times and stands out in many instances, making it a practical choice for real-life multi-day vehicle routing applications. Full article
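The first-phase round-robin assignment described here can be sketched as cycling through the planning days while handing out each customer's required visits. This is a hypothetical simplification of the paper's constructive phase, not its actual algorithm:

```python
from itertools import cycle

def round_robin_days(frequencies, horizon=7):
    """Spread each customer's required visits over the horizon in
    round-robin fashion, so every frequency constraint is met and
    days are loaded evenly. `frequencies` maps customer -> visits needed
    (assumed <= horizon)."""
    days = cycle(range(horizon))
    plan = {}
    for customer, freq in frequencies.items():
        visits = set()
        while len(visits) < freq:
            visits.add(next(days))
        plan[customer] = sorted(visits)
    return plan

# Three customers needing 2, 3, and 1 visits over a 7-day horizon.
plan = round_robin_days({"A": 2, "B": 3, "C": 1}, horizon=7)
```

A second, local-search phase would then reassign visits and reorder routes to cut travel cost while keeping each customer's visit count fixed.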
16 pages, 4733 KiB  
Article
Vibratory Pile Driving in High Viscous Soil Layers: Numerical Analysis of Penetration Resistance and Prebored Hole of CEL Method
by Caihui Li, Changkai Qiu, Xuejin Liu, Junhao Wang and Xiaofei Jing
Buildings 2025, 15(15), 2729; https://doi.org/10.3390/buildings15152729 (registering DOI) - 2 Aug 2025
Abstract
High-viscosity stratified strata, characterized by complex geotechnical properties such as strong cohesion, low permeability, and pronounced layered structures, exhibit significant lateral friction resistance and high end resistance during steel sheet pile installation. These factors substantially increase construction difficulty and may even cause structural damage. This study addresses two critical mechanical challenges observed during vibratory pile driving at a hydraulic engineering project in Fujian Province: prolonged high-frequency driving durations, and severe U-shaped steel sheet pile head damage in high-viscosity stratified soils. Employing the Coupled Eulerian–Lagrangian (CEL) numerical method, a systematic investigation was conducted into the penetration resistance, stress distribution, and damage patterns during vibratory pile driving under varying conditions of cohesive soil layer thickness, predrilled hole spacing, and aperture dimensions. The correlation between pile stress and penetration depth was established, and the influence mechanisms of key factors on driving-induced damage in high-viscosity stratified strata under multi-factor coupling effects were elucidated. Finally, the feasibility of predrilling techniques for resistance reduction was explored. This study applies a damage prediction model based on the CEL method to U-shaped sheet piles in high-viscosity stratified formations, solving the mesh distortion problem of traditional finite element methods. The findings provide scientific guidance for steel sheet pile construction in high-viscosity stratified formations, offering significant implications for enhancing construction efficiency, ensuring operational safety, and reducing costs in such challenging geological conditions. Full article
(This article belongs to the Section Building Structures)
19 pages, 2359 KiB  
Article
Research on Concrete Crack Damage Assessment Method Based on Pseudo-Label Semi-Supervised Learning
by Ming Xie, Zhangdong Wang and Li’e Yin
Buildings 2025, 15(15), 2726; https://doi.org/10.3390/buildings15152726 (registering DOI) - 1 Aug 2025
Abstract
To address the inefficiency of traditional concrete crack detection methods and the heavy reliance of supervised learning on extensive labeled data, in this study, an intelligent assessment method of concrete damage based on pseudo-label semi-supervised learning and fractal geometry theory is proposed to solve two core tasks: one is binary classification of pixel-level cracks, and the other is multi-category assessment of damage state based on crack morphology. Using three-channel RGB images as input, a dual-path collaborative training framework based on U-Net encoder–decoder architecture is constructed, and a binary segmentation mask of the same size is output to achieve the accurate segmentation of cracks at the pixel level. By constructing a dual-path collaborative training framework and employing a dynamic pseudo-label refinement mechanism, the model achieves an F1-score of 0.883 using only 50% labeled data—a mere 1.3% decrease compared to the fully supervised benchmark DeepCrack (F1 = 0.896)—while reducing manual annotation costs by over 60%. Furthermore, a quantitative correlation model between crack fractal characteristics and structural damage severity is established by combining a U-Net segmentation network with the differential box-counting algorithm. The experimental results demonstrate that under a cyclic loading of 147.6–221.4 kN, the fractal dimension monotonically increases from 1.073 (moderate damage) to 1.189 (failure), with 100% accuracy in damage state identification, closely aligning with the degradation trend of macroscopic mechanical properties. In complex crack scenarios, the model attains a recall rate (Re = 0.882), surpassing U-Net by 13.9%, with significantly enhanced edge reconstruction precision. Compared with the mainstream models, this method effectively alleviates the problem of data annotation dependence through a semi-supervised strategy while maintaining high accuracy. 
It provides an efficient structural health monitoring solution for engineering practice, which is of great value to promote the application of intelligent detection technology in infrastructure operation and maintenance. Full article
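The fractal dimension used above to grade damage severity can be estimated from a binary crack mask by box counting: count the boxes of side s that contain crack pixels and fit the slope of log N(s) versus log(1/s). This is a plain box-counting sketch, not the differential box-counting variant the paper applies to gray-level images:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary mask by box counting."""
    mask = np.asarray(mask, bool)
    h, w = mask.shape
    counts = []
    for s in sizes:
        # count boxes of side s that contain at least one crack pixel
        n = sum(mask[i:i + s, j:j + s].any()
                for i in range(0, h, s)
                for j in range(0, w, s))
        counts.append(n)
    # slope of log N versus log(1/s) is the dimension estimate
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(coeffs[0])

# Sanity checks: a straight crack has dimension ~1, a filled region ~2.
line = np.zeros((32, 32), bool)
line[16, :] = True
d_line = box_counting_dimension(line)
d_full = box_counting_dimension(np.ones((32, 32), bool))
```

More tortuous, branching crack networks occupy space more densely, pushing the estimate upward, which matches the paper's reported rise from 1.073 (moderate damage) to 1.189 (failure).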
25 pages, 6272 KiB  
Article
Research on Energy-Saving Control of Automotive PEMFC Thermal Management System Based on Optimal Operating Temperature Tracking
by Qi Jiang, Shusheng Xiong, Baoquan Sun, Ping Chen, Huipeng Chen and Shaopeng Zhu
Energies 2025, 18(15), 4100; https://doi.org/10.3390/en18154100 (registering DOI) - 1 Aug 2025
Abstract
To further enhance the economic performance of fuel cell vehicles (FCVs), this study develops a model-adaptive model predictive control (MPC) strategy. This strategy leverages the dynamic relationship between proton exchange membrane fuel cell (PEMFC) output characteristics and temperature to track its optimal operating temperature (OOT), addressing challenges of temperature control accuracy and high energy consumption in the PEMFC thermal management system (TMS). First, PEMFC and TMS models were developed and experimentally validated. Subsequently, the PEMFC power–temperature coupling curve was experimentally determined under multiple operating conditions to serve as the reference trajectory for TMS multi-objective optimization. For MPC controller design, the TMS model was linearized and discretized, yielding a predictive model adaptable to different load demands for stack temperature across the full operating range. A multi-constrained quadratic cost function was formulated, aiming to minimize the deviation of the PEMFC operating temperature from the OOT while accounting for TMS parasitic power consumption. Finally, simulations under Worldwide Harmonized Light Vehicles Test Cycle (WLTC) conditions evaluated the OOT tracking performance of both PID and MPC control strategies, as well as their impact on stack efficiency and TMS energy consumption at different ambient temperatures. The results indicate that, compared to PID control, MPC reduces temperature tracking error by 33%, decreases fan and pump speed fluctuations by over 24%, and lowers TMS energy consumption by 10%. These improvements enhance PEMFC operational stability and improve FCV energy efficiency. Full article
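The multi-constrained quadratic cost described here trades off temperature tracking against parasitic power. A minimal stage-cost sketch, with illustrative weights and values rather than the paper's tuned parameters:

```python
import numpy as np

def tms_stage_cost(temps, powers, t_opt, w_temp=1.0, w_power=0.01):
    """Quadratic MPC-style cost over a prediction horizon: penalize
    deviation of stack temperature from the optimal operating
    temperature (OOT) plus parasitic fan/pump power.
    Weights here are illustrative, not from the study."""
    temps = np.asarray(temps, float)
    powers = np.asarray(powers, float)
    return float(w_temp * np.sum((temps - t_opt) ** 2)
                 + w_power * np.sum(powers ** 2))

# Tracking closer to the OOT at the same parasitic power costs less,
# which is what the MPC optimizer exploits at each step.
tight = tms_stage_cost([74, 75, 75], [200, 210, 205], t_opt=75.0)
loose = tms_stage_cost([70, 72, 78], [200, 210, 205], t_opt=75.0)
```

At each control step the MPC solves for the fan and pump commands minimizing this cost over the horizon, subject to actuator and temperature constraints.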
22 pages, 29737 KiB  
Article
A Comparative Investigation of CFD Approaches for Oil–Air Two-Phase Flow in High-Speed Lubricated Rolling Bearings
by Ruifeng Zhao, Pengfei Zhou, Jianfeng Zhong, Duan Yang and Jie Ling
Machines 2025, 13(8), 678; https://doi.org/10.3390/machines13080678 (registering DOI) - 1 Aug 2025
Abstract
Analyzing the two-phase flow behavior in bearing lubrication is crucial for understanding friction and wear mechanisms, optimizing lubrication design, and improving bearing operational efficiency and reliability. However, the complexity of oil–air two-phase flow in high-speed bearings poses significant research challenges. Currently, there is [...] Read more.
Analyzing the two-phase flow behavior in bearing lubrication is crucial for understanding friction and wear mechanisms, optimizing lubrication design, and improving bearing operational efficiency and reliability. However, the complexity of oil–air two-phase flow in high-speed bearings poses significant research challenges. Currently, there is a lack of comparative studies employing different simulation strategies to address this issue, leaving a gap in evidence-based guidance for selecting appropriate simulation approaches in practical applications. This study begins with a comparative analysis between experimental and simulation results to validate the reliability of the adopted simulation approach. Subsequently, a comparative evaluation of different simulation methods is conducted to provide a scientific basis for relevant decision-making. Evaluated from three dimensions—adaptability to rotational speed conditions, research focuses (oil distribution and power loss), and computational economy—the findings reveal that FVM excels at medium-to-high speeds, accurately predicting continuous oil film distribution and power loss, while MPS, leveraging its meshless Lagrangian characteristics, demonstrates superior capability in describing physical phenomena under extreme conditions, albeit with higher computational costs. Economically, FVM, supported by mature software ecosystems and parallel computing optimization, is more suitable for industrial design applications, whereas MPS, being more reliant on high-performance hardware, is better suited for academic research and customized scenarios. The study further proposes that future research could adopt an FVM-MPS coupled approach to balance efficiency and precision, offering a new paradigm for multi-scale lubrication analysis in bearings. Full article
(This article belongs to the Section Machine Design and Theory)
Show Figures

Figure 1

19 pages, 5340 KiB  
Article
Potential of Multi-Source Multispectral vs. Hyperspectral Remote Sensing for Winter Wheat Nitrogen Monitoring
by Xiaokai Chen, Yuxin Miao, Krzysztof Kusnierek, Fenling Li, Chao Wang, Botai Shi, Fei Wu, Qingrui Chang and Kang Yu
Remote Sens. 2025, 17(15), 2666; https://doi.org/10.3390/rs17152666 (registering DOI) - 1 Aug 2025
Abstract
Timely and accurate monitoring of crop nitrogen (N) status is essential for precision agriculture. UAV-based hyperspectral remote sensing offers high-resolution data for estimating plant nitrogen concentration (PNC), but its cost and complexity limit large-scale application. This study compares the performance of UAV hyperspectral data (S185 sensor) with simulated multispectral data from DJI Phantom 4 Multispectral (P4M), PlanetScope (PS), and Sentinel-2A (S2) in estimating winter wheat PNC. Spectral data were collected across six growth stages over two seasons and resampled to match the spectral characteristics of the three multispectral sensors. Three variable selection strategies (one-dimensional (1D) spectral reflectance, optimized two-dimensional (2D), and three-dimensional (3D) spectral indices) were combined with Random Forest Regression (RFR), Support Vector Machine Regression (SVMR), and Partial Least Squares Regression (PLSR) to build PNC prediction models. Results showed that, while hyperspectral data yielded slightly higher accuracy, optimized multispectral indices, particularly from PS and S2, achieved comparable performance. Among models, SVM and RFR showed consistent effectiveness across strategies. These findings highlight the potential of low-cost multispectral platforms for practical crop N monitoring. Future work should validate these models using real satellite imagery and explore multi-source data fusion with advanced learning algorithms. Full article
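The optimized two-dimensional spectral indices mentioned above are typically normalized-difference forms evaluated over candidate band pairs; a generic sketch with hypothetical reflectance values, not the S185 or satellite band definitions:

```python
import numpy as np

def normalized_difference(band_a, band_b):
    """Generic 2D spectral index (a - b) / (a + b); index optimization
    searches over band pairs (a, b) for the best correlation with PNC."""
    a = np.asarray(band_a, float)
    b = np.asarray(band_b, float)
    return (a - b) / (a + b)

# NDVI-style example with hypothetical NIR and red reflectances.
ndvi = normalized_difference([0.45, 0.50], [0.05, 0.10])
```

The resulting index values would then feed a regressor such as random forest or SVM, as in the study's modeling strategies.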
(This article belongs to the Special Issue Perspectives of Remote Sensing for Precision Agriculture)
34 pages, 434 KiB  
Article
Mobile Banking Adoption: A Multi-Factorial Study on Social Influence, Compatibility, Digital Self-Efficacy, and Perceived Cost Among Generation Z Consumers in the United States
by Santosh Reddy Addula
J. Theor. Appl. Electron. Commer. Res. 2025, 20(3), 192; https://doi.org/10.3390/jtaer20030192 (registering DOI) - 1 Aug 2025
Abstract
The introduction of mobile banking is essential in today’s financial sector, where technological innovation plays a critical role. To remain competitive in the current market, businesses must analyze client attitudes and perspectives, as these influence long-term demand and overall profitability. While previous studies have explored general adoption behaviors, limited research has examined how individual factors such as social influence, lifestyle compatibility, financial technology self-efficacy, and perceived usage cost affect mobile banking adoption among specific generational cohorts. This study addresses that gap by offering insights into these variables, contributing to the growing literature on mobile banking adoption, and presenting actionable recommendations for financial institutions targeting younger market segments. Using a structured questionnaire survey, data were collected from both users and non-users of mobile banking among the Gen Z population in the United States. The regression model significantly predicts mobile banking adoption, with an intercept of 0.548 (p < 0.001). Among the independent variables, perceived cost of usage has the strongest positive effect on adoption (B=0.857, β=0.722, p < 0.001), suggesting that adoption increases when mobile banking is perceived as more affordable. Social influence also has a significant positive impact (B=0.642, β=0.643, p < 0.001), indicating that peer influence is a central driver of adoption decisions. However, self-efficacy shows a significant negative relationship (B=0.343, β=0.339, p < 0.001), and lifestyle compatibility was found to be statistically insignificant (p=0.615). These findings suggest that reducing perceived costs, through lower fees, data bundling, or clearer communication about affordability, can directly enhance adoption among Gen Z consumers. 
Furthermore, leveraging peer influence via referral rewards, partnerships with influencers, and in-app social features can increase user adoption. Since digital self-efficacy presents a barrier for some users, banks should prioritize simplifying user interfaces and offering guided assistance, such as tutorials or chat-based support. Future research may employ longitudinal designs or analyze real-life transaction data for a more objective understanding of behavior. Additional variables such as trust, perceived risk, and regulatory policies, not included in this study, should be integrated into future models to offer a more comprehensive analysis. Full article
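The reported regression can be written out as a linear predictor. The intercept and coefficient magnitudes below come from the abstract; the negative sign on self-efficacy follows the stated "negative relationship", and the predictor scaling/coding is not given, so this is illustrative only:

```python
def predicted_adoption(perceived_cost, social_influence, self_efficacy,
                       lifestyle_compat):
    """Linear predictor assembled from the coefficients reported in the
    abstract: intercept 0.548; B = 0.857 (perceived cost), 0.642 (social
    influence), 0.343 applied negatively (self-efficacy, described as a
    negative relationship); lifestyle compatibility was statistically
    insignificant and is given weight 0 here. Illustrative sketch only."""
    return (0.548
            + 0.857 * perceived_cost
            + 0.642 * social_influence
            - 0.343 * self_efficacy
            + 0.0 * lifestyle_compat)

# Hypothetical unit-scaled inputs.
y = predicted_adoption(1.0, 1.0, 1.0, 1.0)
```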
21 pages, 360 KiB  
Review
Prognostic Models in Heart Failure: Hope or Hype?
by Spyridon Skoularigkis, Christos Kourek, Andrew Xanthopoulos, Alexandros Briasoulis, Vasiliki Androutsopoulou, Dimitrios Magouliotis, Thanos Athanasiou and John Skoularigis
J. Pers. Med. 2025, 15(8), 345; https://doi.org/10.3390/jpm15080345 (registering DOI) - 1 Aug 2025
Abstract
Heart failure (HF) poses a substantial global burden due to its high morbidity, mortality, and healthcare costs. Accurate prognostication is crucial for optimizing treatment, resource allocation, and patient counseling. Prognostic tools range from simple clinical scores such as ADHERE and MAGGIC to more complex models incorporating biomarkers (e.g., NT-proBNP, sST2), imaging, and artificial intelligence techniques. In acute HF, models like EHMRG and STRATIFY aid early triage, while in chronic HF, tools like SHFM and BCN Bio-HF support long-term management decisions. Despite their utility, most models are limited by poor generalizability, reliance on static inputs, lack of integration into electronic health records, and underuse in clinical practice. Novel approaches involving machine learning, multi-omics profiling, and remote monitoring hold promise for dynamic and individualized risk assessment. However, these innovations face challenges regarding interpretability, validation, and ethical implementation. For prognostic models to transition from theoretical promise to practical impact, they must be continuously updated, externally validated, and seamlessly embedded into clinical workflows. This review emphasizes the potential of prognostic models to transform HF care but cautions against uncritical adoption without robust evidence and practical integration. In the evolving landscape of HF management, prognostic models represent a hopeful avenue, provided their limitations are acknowledged and addressed through interdisciplinary collaboration and patient-centered innovation. Full article
(This article belongs to the Special Issue Personalized Treatment for Heart Failure)
13 pages, 251 KiB  
Article
On Solution Set Associated with a Class of Multiple Objective Control Models
by Savin Treanţă and Omar Mutab Alsalami
Mathematics 2025, 13(15), 2484; https://doi.org/10.3390/math13152484 (registering DOI) - 1 Aug 2025
Abstract
In this paper, necessary and sufficient efficiency conditions in new multi-cost variational models are formulated and proved. To this end, we introduce a new notion of (ϑ0, ϑ1)-(σ0, σ1)-type I functionals determined by multiple integrals. To better emphasize the significance of the suggested (ϑ0, ϑ1)-(σ0, σ1)-type I functionals and how they add to previous studies, we mention that the (ϑ0, ϑ1)-(σ0, σ1)-type I and generalized (ϑ0, ϑ1)-(σ0, σ1)-type I assumptions associated with the involved multiple integral functionals cover broader and more general classes of problems, where the convexity of the functionals is not fulfilled or the functionals considered are not of simple integral type. In addition, innovative proofs are provided for the main results. Full article
(This article belongs to the Special Issue Applied Functional Analysis and Applications: 2nd Edition)
20 pages, 3027 KiB  
Article
Evolutionary Game Analysis of Multi-Agent Synergistic Incentives Driving Green Energy Market Expansion
by Yanping Yang, Xuan Yu and Bojun Wang
Sustainability 2025, 17(15), 7002; https://doi.org/10.3390/su17157002 (registering DOI) - 1 Aug 2025
Abstract
Achieving the construction sector’s dual carbon objectives necessitates scaling green energy adoption in new residential buildings. The current literature overlooks four unresolved problems: oversimplified penalty mechanisms that ignore escalating regulatory costs; static subsidies misaligned with the evolution of market maturity; systematic exclusion of innovation feedback from energy suppliers; and the underexplored behavioral evolution of building owners. This study establishes a government–suppliers–owners evolutionary game framework with dynamically calibrated policies, simulated through multi-scenario analysis in MATLAB. The findings demonstrate: (1) a dual-threshold penalty effect in which excessive fines diminish policy returns due to regulatory costs, requiring dynamic calibration distinct from fixed-penalty approaches; (2) market-maturity-phased subsidies that increase owner adoption probability by 30% through staged progression; (3) energy suppliers’ cost-reducing innovations as pivotal feedback drivers resolving coordination failures overlooked in prior tripartite models; and (4) a shift in owners’ adoption motivation from short-term economic incentives to environmentally driven decisions under policy guidance. The framework resolves these gaps through integrated dynamic mechanisms, providing policymakers with evidence-based regulatory thresholds, energy suppliers with cost-reduction targets, and academia with replicable modeling tools. Full article
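Tripartite evolutionary games of this kind are typically simulated with replicator dynamics, where each player's strategy probability grows in proportion to the payoff advantage of that strategy. As a minimal illustrative sketch (not the authors' MATLAB model — the payoff parameters `g`, `f`, `s`, `c_r`, `r`, `b`, `p` below are hypothetical), the coupled government–supplier–owner dynamics might look like:

```python
# Hypothetical payoff parameters (illustrative only, not from the paper):
# g: government's net gain from active regulation, f: fine collected from
# non-innovating suppliers, s: subsidy paid per adopting owner, c_r: regulatory
# cost, r: supplier's innovation return, b: owner's benefit from green energy,
# p: green-energy price premium borne by suppliers/owners.
g, f, s, c_r = 2.0, 1.5, 1.0, 0.8
r, b, p = 1.2, 1.5, 0.6

def replicator_step(x, y, z, dt=0.01):
    """One Euler step of tripartite replicator dynamics.
    x, y, z = probabilities that the government regulates actively,
    suppliers innovate, and owners adopt green energy."""
    # Assumed payoff differences between each player's two pure strategies
    du_gov = g + f * (1 - y) - s * z - c_r   # regulate vs. laissez-faire
    du_sup = r * z - p + s * x               # innovate vs. status quo
    du_own = b * y + s * x - p               # adopt vs. not adopt
    # Replicator equation: dx/dt = x(1 - x) * (payoff advantage)
    x += dt * x * (1 - x) * du_gov
    y += dt * y * (1 - y) * du_sup
    z += dt * z * (1 - z) * du_own
    return x, y, z

# Iterate from an interior starting point toward a stable strategy profile
x, y, z = 0.5, 0.5, 0.5
for _ in range(10_000):
    x, y, z = replicator_step(x, y, z)
```

With these (assumed) parameters all three payoff advantages stay positive, so the system converges toward full regulation, innovation, and adoption; multi-scenario analysis of the kind described above amounts to sweeping such parameters and observing which equilibrium the trajectories approach.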
22 pages, 1937 KiB  
Review
Carbon Dot Nanozymes in Orthopedic Disease Treatment: Comprehensive Overview, Perspectives and Challenges
by Huihui Wang
C 2025, 11(3), 58; https://doi.org/10.3390/c11030058 (registering DOI) - 1 Aug 2025
Abstract
Nanozymes, as a new generation of artificial enzymes, have attracted increasing attention in the field of biomedicine due to their multiple enzyme-like characteristics, multi-functionality, low cost, and high stability. Among them, carbon dot nanozymes (CDzymes) possess excellent enzyme-like catalytic activity and biocompatibility and have been developed for the diagnosis and treatment of various diseases. Here, we briefly review representative research on CDzymes in recent years, including their synthesis, modification, and applications, especially in orthopedic diseases such as osteoarthritis, osteoporosis, osteomyelitis, intervertebral disc degenerative diseases, bone tumors, and bone injury repair, as well as periodontitis. Additionally, we briefly discuss the potential future applications, opportunities, and challenges of CDzymes. We hope this review can serve as a useful reference on CDzymes and offer insights for advancing their application in the treatment of orthopedic disease. Full article
(This article belongs to the Special Issue Carbon Nanohybrids for Biomedical Applications (2nd Edition))