Search Results (417)

Search Parameters:
Keywords = early cost estimates

13 pages, 491 KiB  
Article
Optimizing One-Sample Tests for Proportions in Single- and Two-Stage Oncology Trials
by Alan David Hutson
Cancers 2025, 17(15), 2570; https://doi.org/10.3390/cancers17152570 - 4 Aug 2025
Viewed by 1
Abstract
Background/Objectives: Phase II oncology trials often rely on single-arm designs to test H0: π = π0 versus Ha: π > π0, especially when randomized trials are infeasible due to cost or disease rarity. Traditional approaches, such as the exact binomial test and Simon’s two-stage design, tend to be conservative, with actual Type I error rates falling below the nominal α due to the discreteness of the underlying binomial distribution. This study aims to develop a more efficient and flexible method that maintains accurate Type I error control in such settings. Methods: We propose a convolution-based method that combines the binomial distribution with a simulated normal variable to construct an unbiased estimator of π. This method is designed to precisely control the Type I error rate while enabling more efficient trial designs. We derive its theoretical properties and assess its performance against traditional exact tests in both one-stage and two-stage trial designs. Results: The proposed method results in more efficient designs with reduced sample sizes compared to standard approaches, without compromising the control of Type I error rates. We introduce a new two-stage design incorporating interim futility analysis and compare it with Simon’s design. Simulations and real-world examples demonstrate that the proposed approach can significantly lower trial cost and duration. Conclusions: This convolution-based approach offers a flexible and efficient alternative to traditional methods for early-phase oncology trial design. It addresses the conservativeness of existing designs and provides practical benefits in terms of resource use and study timelines.
(This article belongs to the Special Issue Application of Biostatistics in Cancer Research)
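The conservativeness described above is easy to verify numerically. Below is a minimal sketch (illustrative values of n, pi0, and alpha; not the author's code) that finds the exact binomial rejection cutoff and its attained Type I error, which sits strictly below the nominal level because the binomial test statistic is discrete.

```python
# Exact binomial test of H0: pi = pi0 vs. Ha: pi > pi0 (illustrative only).
from scipy.stats import binom

n, pi0, alpha = 20, 0.20, 0.05

# Smallest cutoff c with P(X >= c | pi0) <= alpha; binom.sf(k - 1, n, p) is P(X >= k).
c = next(k for k in range(n + 2) if binom.sf(k - 1, n, pi0) <= alpha)
attained = binom.sf(c - 1, n, pi0)
print(f"reject H0 if successes >= {c}; attained Type I error = {attained:.4f}")
# Here c = 8 and the attained level is ~0.032, well below the nominal 0.05;
# this is the gap the proposed convolution-based estimator is designed to close.
```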

21 pages, 6219 KiB  
Article
Semi-Supervised Density Estimation with Background-Augmented Data for In Situ Seed Counting
by Baek-Gyeom Sung, Chun-Gu Lee, Yeong-Ho Kang, Seung-Hwa Yu and Dae-Hyun Lee
Agriculture 2025, 15(15), 1682; https://doi.org/10.3390/agriculture15151682 - 4 Aug 2025
Viewed by 76
Abstract
Direct seeding has gained prominence as a labor-efficient and environmentally sustainable alternative to conventional transplanting in rice cultivation. In direct seeding systems, early-stage management is crucial for stable seedling establishment, with sowing uniformity measured by seed counts being a critical indicator of success. However, conventional manual seed counting methods are time-consuming, prone to human error, and impractical for large-scale or repetitive tasks, necessitating advanced automated solutions. Recent advances in computer vision technologies and precision agriculture tools offer the potential to automate seed counting tasks. Nevertheless, challenges such as domain discrepancies and limited labeled data restrict robust real-world deployment. To address these issues, we propose a density estimation-based seed counting framework integrating semi-supervised learning and background augmentation. This framework includes a cost-effective data acquisition system enabling diverse domain data collection through indoor background augmentation, combined with semi-supervised learning to utilize augmented data effectively while minimizing labeling costs. The experimental results on field data from unknown domains show that our approach reduces seed counting errors by up to 58.5% compared to conventional methods, highlighting its potential as a scalable and effective solution for agricultural applications in real-world environments.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
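For readers unfamiliar with density-estimation counting, the core mechanism is that the network regresses a per-pixel density map whose integral is the object count. A minimal sketch with a synthetic map (not the authors' network):

```python
import numpy as np

def count_from_density_map(density: np.ndarray) -> float:
    """The count is the integral (sum) of the predicted density map."""
    return float(density.sum())

# Toy stand-in for a model output: two blobs of unit mass, i.e., ~2 seeds.
density = np.zeros((64, 64))
density[10:13, 10:13] = 1.0 / 9.0
density[40:43, 50:53] = 1.0 / 9.0
print(count_from_density_map(density))  # ~2.0
```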

19 pages, 18533 KiB  
Article
Modeling of Marine Assembly Logistics for an Offshore Floating Photovoltaic Plant Subject to Weather Dependencies
by Lu-Jan Huang, Simone Mancini and Minne de Jong
J. Mar. Sci. Eng. 2025, 13(8), 1493; https://doi.org/10.3390/jmse13081493 - 2 Aug 2025
Viewed by 111
Abstract
Floating solar technology has gained significant attention as part of the global expansion of renewable energy due to its potential for installation in underutilized water bodies. Several countries, including the Netherlands, have initiated efforts to extend this technology from inland freshwater applications to open offshore environments, particularly within offshore wind farm areas. This development is motivated by the synergistic benefits of increasing site energy density and leveraging the existing offshore grid infrastructure. The deployment of offshore floating photovoltaic (OFPV) systems involves assembling multiple modular units in a marine environment, introducing operational risks that may give rise to safety concerns. To mitigate these risks, weather windows must be considered prior to task execution to ensure continuity between weather-sensitive activities, which can also lead to additional time delays and increased costs. Consequently, optimizing marine logistics becomes crucial to achieving the cost reductions necessary for making OFPV technology economically viable. This study employs a simulation-based approach to estimate the installation duration of a 5 MWp OFPV plant at a Dutch offshore wind farm site, starting in different months and under three distinct risk management scenarios. Based on 20 years of hindcast wave data, the results reveal the impacts of campaign start months and risk management policies on installation duration. Across all the scenarios, the installation duration during the autumn and winter period is 160% longer than that in the spring and summer period. The average installation durations, based on results from 12 campaign start months, are 70, 80, and 130 days for the three risk management policies analyzed. This variation highlights the additional time required to mitigate operational risks arising from potential discontinuity between highly interdependent tasks (e.g., offshore platform assembly and mooring). Additionally, the weather-induced delays are found to be mainly associated with the anchor pre-lay and the platform and mooring line installation campaigns. In conclusion, this study presents a logistics modeling methodology for OFPV systems, demonstrated through a representative case study based on a state-of-the-art truss-type design. The primary contribution lies in providing a framework to quantify the performance of OFPV installation strategies at an early design stage. The findings of this case study further highlight that marine installation logistics are highly sensitive to local marine conditions and the chosen installation strategy, and should be integrated early in the OFPV design process to help reduce the levelized cost of electricity.
(This article belongs to the Special Issue Design, Modeling, and Development of Marine Renewable Energy Devices)
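The weather-window logic that drives these durations can be sketched as follows. The wave-height limits and the iid wave proxy below are our assumptions (the study uses 20 years of hindcast data and a full task network): a task starts only once the significant wave height stays below its limit for the task's entire duration.

```python
import numpy as np

rng = np.random.default_rng(0)
hs = rng.gamma(shape=2.0, scale=0.6, size=24 * 365)  # crude hourly Hs proxy (m)

def wait_and_execute(start: int, duration_h: int, hs_limit: float) -> int:
    """Finish hour of a task needing `duration_h` consecutive hours with
    Hs below `hs_limit`, waiting as long as necessary for a window."""
    t = start
    while t + duration_h <= len(hs):
        if hs[t:t + duration_h].max() < hs_limit:
            return t + duration_h
        t += 1
    raise RuntimeError("no feasible weather window in the record")

t = wait_and_execute(start=0, duration_h=12, hs_limit=1.5)  # e.g., anchor pre-lay
t = wait_and_execute(start=t, duration_h=24, hs_limit=2.0)  # e.g., platform + mooring
print(f"campaign complete after {t} hours")
```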

24 pages, 624 KiB  
Systematic Review
Integrating Artificial Intelligence into Perinatal Care Pathways: A Scoping Review of Reviews of Applications, Outcomes, and Equity
by Rabie Adel El Arab, Omayma Abdulaziz Al Moosa, Zahraa Albahrani, Israa Alkhalil, Joel Somerville and Fuad Abuadas
Nurs. Rep. 2025, 15(8), 281; https://doi.org/10.3390/nursrep15080281 - 31 Jul 2025
Viewed by 143
Abstract
Background: Artificial intelligence (AI) and machine learning (ML) have been reshaping maternal, fetal, neonatal, and reproductive healthcare by enhancing risk prediction, diagnostic accuracy, and operational efficiency across the perinatal continuum. However, no comprehensive synthesis has yet been published. Objective: To conduct a scoping review of reviews of AI/ML applications spanning reproductive, prenatal, postpartum, neonatal, and early child-development care. Methods: We searched PubMed, Embase, the Cochrane Library, Web of Science, and Scopus through April 2025. Two reviewers independently screened records, extracted data, and assessed methodological quality using AMSTAR 2 for systematic reviews, ROBIS for bias assessment, SANRA for narrative reviews, and JBI guidance for scoping reviews. Results: Thirty-nine reviews met our inclusion criteria. In preconception and fertility treatment, convolutional neural network-based platforms can identify viable embryos and key sperm parameters with over 90 percent accuracy, and machine-learning models can personalize follicle-stimulating hormone regimens to boost mature oocyte yield while reducing overall medication use. Digital sexual-health chatbots have enhanced patient education, pre-exposure prophylaxis adherence, and safer sexual behaviors, although data-privacy safeguards and bias mitigation remain priorities. During pregnancy, advanced deep-learning models can segment fetal anatomy on ultrasound images with more than 90 percent overlap compared to expert annotations and can detect anomalies with sensitivity exceeding 93 percent. Predictive biometric tools can estimate gestational age to within one week and fetal weight to within approximately 190 g. In the postpartum period, AI-driven decision-support systems and conversational agents can facilitate early screening for depression and can guide follow-up care. Wearable sensors enable remote monitoring of maternal blood pressure and heart rate to support timely clinical intervention. Within neonatal care, the Heart Rate Observation (HeRO) system has reduced mortality among very low-birth-weight infants by roughly 20 percent, and additional AI models can predict neonatal sepsis, retinopathy of prematurity, and necrotizing enterocolitis with area-under-the-curve values above 0.80. From an operational standpoint, automated ultrasound workflows deliver biometric measurements at about 14 milliseconds per frame, and dynamic scheduling in IVF laboratories lowers staff workload and per-cycle costs. Home-monitoring platforms for pregnant women are associated with 7–11 percent reductions in maternal mortality and preeclampsia incidence. Despite these advances, most evidence derives from retrospective, single-center studies with limited external validation. Low-resource settings, especially in Sub-Saharan Africa, remain under-represented, and few AI solutions are fully embedded in electronic health records. Conclusions: AI holds transformative promise for perinatal care but will require prospective multicenter validation, equity-centered design, robust governance, transparent fairness audits, and seamless electronic health record integration to translate these innovations into routine practice and improve maternal and neonatal outcomes.

21 pages, 2704 KiB  
Article
A BIM-Based Integrated Model for Low-Cost Housing Mass Customization in Brazil: Real-Time Variability with Data Control
by Alexander Lopes de Aquino Brasil and Andressa Carmo Pena Martinez
Architecture 2025, 5(3), 54; https://doi.org/10.3390/architecture5030054 - 25 Jul 2025
Viewed by 441
Abstract
Addressing the growing demand for affordable housing requires innovative solutions that strike a balance between cost efficiency and user-specific needs. Mass customization (MC) presents a promising approach that enables the creation of tailored housing solutions at scale. In this context, this study introduces a model for mass customization of affordable single-family housing units in the city of Teresina, PI, Brazil. Our approach integrates algorithmic–parametric modeling and BIM technologies, facilitating the flow of information and enabling informed decision-making throughout the design process. The work assumes that, from the earliest design stages, these integrated technologies provide real-time control over design variables and associated construction data. To develop the model, the method proceeded through the following phases: (1) analysis of the context and definition of the design language; (2) definition of the design process; (3) definition of the cost calculation method and estimation of construction time; (4) definition of the computing model based on the specified technologies; and (5) quantitative and qualitative evaluation of the computational model. As a result, this research aims to contribute to the state-of-the-art by formalizing the knowledge generated through the systematic description of the processes involved in this workflow, with a special focus on the Brazilian context, where the issue of social housing is a critical challenge.
(This article belongs to the Special Issue Shaping Architecture with Computation)
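As a rough illustration of the real-time data control described above, the sketch below (hypothetical element schema and unit costs, not the study's cost calculation method) rolls quantities extracted from the parametric model up into a cost estimate that updates with every design change.

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    quantity: float   # e.g., m2 of wall, m3 of concrete
    unit_cost: float  # currency per unit, from a cost database

def estimate_cost(elements: list[Element]) -> float:
    """Total construction cost of the current design variant."""
    return sum(e.quantity * e.unit_cost for e in elements)

variant = [Element("masonry wall", 182.0, 95.0),
           Element("concrete slab", 48.0, 310.0),
           Element("roof structure", 60.0, 120.0)]
print(f"estimated cost: {estimate_cost(variant):,.2f}")
```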

31 pages, 2179 KiB  
Article
Statistical Analysis and Modeling for Optical Networks
by Sudhir K. Routray, Gokhan Sahin, José R. Ferreira da Rocha and Armando N. Pinto
Electronics 2025, 14(15), 2950; https://doi.org/10.3390/electronics14152950 - 24 Jul 2025
Viewed by 331
Abstract
Optical networks serve as the backbone of modern communication, requiring statistical analysis and modeling to optimize performance, reliability, and scalability. This review paper explores statistical methodologies for analyzing network characteristics, dimensioning, parameter estimation, and cost prediction of optical networks, and provides a generalized framework based on the idea of convex areas and on link length and shortest path length distributions. Accurate dimensioning and cost estimation are crucial for optical network planning, especially during early-stage design, network upgrades, and optimization. However, detailed information is often unavailable or too complex to compute. Basic parameters like coverage area and node count, along with statistical insights such as distribution patterns and moments, aid in determining appropriate modulation schemes, compensation techniques, and repeater placement, and in estimating the fiber length. Statistical models also help predict link lengths and shortest path lengths, ensuring efficiency in design. Probability distributions, stochastic processes, and machine learning improve network optimization and fault prediction. Metrics like bit error rate, quality of service, and spectral efficiency can be statistically assessed to enhance data transmission. This paper provides a review of statistical analysis and modeling of optical networks, which supports intelligent optical network management, dimensioning of optical networks, performance prediction, and estimation of important optical network parameters with partial information.
(This article belongs to the Special Issue Optical Networking and Computing)
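To give a flavor of dimensioning from partial information, the sketch below (illustrative, not the paper's expressions) estimates mean link length and total fiber length from only a coverage area and a node count, sampling random node placements and using nearest-neighbour links as a crude proxy for a sparse topology.

```python
import numpy as np

rng = np.random.default_rng(1)
area_km2, n_nodes, n_trials = 250_000.0, 60, 200
side = np.sqrt(area_km2)  # treat the convex region as a square for sampling

samples = []
for _ in range(n_trials):
    pts = rng.uniform(0.0, side, size=(n_nodes, 2))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    samples.append(d.min(axis=1).mean())  # mean nearest-neighbour link length

mean_link = float(np.mean(samples))
print(f"mean link length ~ {mean_link:.1f} km; total fiber ~ {mean_link * n_nodes:.0f} km")
```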

22 pages, 594 KiB  
Article
Information-Theoretic Cost–Benefit Analysis of Hybrid Decision Workflows in Finance
by Philip Beaucamp, Harvey Maylor and Min Chen
Entropy 2025, 27(8), 780; https://doi.org/10.3390/e27080780 - 23 Jul 2025
Viewed by 243
Abstract
Analyzing and leveraging data effectively has been an advantageous strategy in the management workflows of many contemporary organizations. In business and finance, data-informed decision workflows are nowadays essential for enabling development and growth. However, there is as yet no theoretical or quantitative approach for analyzing the cost–benefit of the processes in such workflows, e.g., in determining the trade-offs between machine- and human-centric processes and quantifying biases. The aim of this work is to translate an information-theoretic concept and measure for cost–benefit analysis into a methodology that is relevant to the analysis of hybrid decision workflows in business and finance. We propose to combine an information-theoretic approach (i.e., information-theoretic cost–benefit analysis) and an engineering approach (e.g., workflow decomposition), which enables us to utilize information-theoretic measures to estimate the cost–benefit of individual processes quantitatively. We provide three case studies to demonstrate the feasibility of the proposed methodology, including (i) the use of a statistical and computational algorithm, (ii) incomplete information and humans’ soft knowledge, and (iii) cognitive biases in a committee meeting. While this is an early application of information-theoretic cost–benefit analysis to business and financial workflows, it is a significant step towards the development of a systematic, quantitative, and computer-assisted approach for optimizing data-informed decision workflows.
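A minimal sketch of the general idea, under our own simplifying assumption (not necessarily the authors' exact measure) that a workflow step's benefit is the uncertainty it removes minus the distortion it introduces, normalized by its cost:

```python
import numpy as np

def entropy(p: np.ndarray) -> float:
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def cost_benefit(p_in: np.ndarray, p_out: np.ndarray,
                 distortion_bits: float, cost: float) -> float:
    """(entropy reduced - distortion) per unit cost, in bits per cost unit."""
    return (entropy(p_in) - entropy(p_out) - distortion_bits) / cost

# A screening step narrows 8 equally likely options to 2, with a small
# estimated distortion, at a cost of 3 hours of committee time.
p_in = np.full(8, 1 / 8)   # H = 3 bits
p_out = np.full(2, 1 / 2)  # H = 1 bit
print(cost_benefit(p_in, p_out, distortion_bits=0.4, cost=3.0))  # ~0.53 bits/hour
```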

27 pages, 839 KiB  
Article
AI-Powered Forecasting of Environmental Impacts and Construction Costs to Enhance Project Management in Highway Projects
by Joon-Soo Kim
Buildings 2025, 15(14), 2546; https://doi.org/10.3390/buildings15142546 - 19 Jul 2025
Viewed by 340
Abstract
The accurate early-stage estimation of environmental load (EL) and construction cost (CC) in road infrastructure projects remains a significant challenge, constrained by limited data and the complexity of construction activities. To address this, our study proposes a machine learning-based predictive framework utilizing artificial neural networks (ANNs) and deep neural networks (DNNs), enhanced by autoencoder-driven feature selection. A structured dataset of 150 completed national road projects in South Korea was compiled, covering both planning and design phases. The database focused on 19 high-impact sub-work types to reduce noise and improve prediction precision. A hybrid imputation approach—combining mean substitution with random forest regression—was applied to handle 4.47% missing data in the design-phase inputs, reducing variance by up to 5% and improving data stability. Dimensionality reduction via autoencoder retained 16 core variables, preserving 97% of explanatory power while minimizing redundancy. ANN models benefited from cross-validation and hyperparameter tuning, achieving consistent performance across training and validation sets without overfitting (MSE = 0.06, RMSE = 0.24). The optimal ANN yielded average error rates of 29.8% for EL and 21.0% for CC at the design stage. DNN models, with their deeper architectures and dropout regularization, further improved performance—achieving 27.1% (EL) and 17.0% (CC) average error rates at the planning stage and 24.0% (EL) and 14.6% (CC) at the design stage. These results met all predefined accuracy thresholds, underscoring the DNN’s advantage in handling complex, high-variance data while the ANN excelled in structured cost prediction. Overall, the synergy between deep learning and autoencoder-based feature selection offers a scalable and data-informed approach for enhancing early-stage environmental and economic assessments in road infrastructure planning—supporting more sustainable and efficient project management.
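One common way to realize autoencoder-driven feature reduction of the kind described, sketched in PyTorch with hypothetical dimensions (the paper retains 16 core variables; its exact selection procedure may differ):

```python
import torch
import torch.nn as nn

n_features, n_code = 40, 16  # stand-in raw variable count; 16 core variables

autoencoder = nn.Sequential(
    nn.Linear(n_features, n_code), nn.ReLU(),  # encoder
    nn.Linear(n_code, n_features),             # decoder
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
x = torch.randn(150, n_features)  # stand-in for 150 projects' scaled inputs

for _ in range(200):  # reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(autoencoder(x), x)
    loss.backward()
    opt.step()

encoder = autoencoder[:2]  # keep the trained encoder only
codes = encoder(x)         # 150 x 16 compressed inputs for the ANN/DNN regressor
print(codes.shape)
```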

18 pages, 3691 KiB  
Article
A Field Study on Sampling Strategy of Short-Term Pumping Tests for Hydraulic Tomography Based on the Successive Linear Estimator
by Xiaolan Hou, Rui Hu, Huiyang Qiu, Yukun Li, Minhui Xiao and Yang Song
Water 2025, 17(14), 2133; https://doi.org/10.3390/w17142133 - 17 Jul 2025
Viewed by 223
Abstract
Hydraulic tomography (HT) based on the successive linear estimator (SLE) offers the high-resolution characterization of aquifer heterogeneity but conventionally requires prolonged pumping to achieve steady-state conditions, limiting its applicability in contamination-sensitive or low-permeability settings. This study bridged theoretical and practical gaps (1) by identifying spatial periodicity (hole effect) as the mechanism underlying divergences in steady-state cross-correlation patterns between the random finite element method (RFEM) and first-order analysis, modeled via an oscillatory covariance function, and (2) by validating a novel short-term sampling strategy for SLE-based HT using field experiments at the University of Göttingen test site. Utilizing early-time drawdown data, we reconstructed spatially congruent distributions of hydraulic conductivity, specific storage, and hydraulic diffusivity after rigorous wavelet denoising. The results demonstrate that the short-term sampling strategy achieves accuracy comparable to that of the long-term sampling strategy in characterizing aquifer heterogeneity. Critically, by decoupling SLE from steady-state requirements, this approach minimizes groundwater disturbance and time costs, expanding HT’s feasibility to challenging environments.
(This article belongs to the Special Issue Hydrogeophysical Methods and Hydrogeological Models)
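The wavelet-denoising step applied to the drawdown records can be sketched as follows (PyWavelets on a synthetic drawdown-like curve; the wavelet choice and thresholding rule are our assumptions):

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
t = np.linspace(0.01, 2.0, 512)
drawdown = 0.8 * np.log1p(t / 0.05)  # smooth early-time drawdown stand-in
noisy = drawdown + 0.03 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise scale from finest details
thresh = sigma * np.sqrt(2 * np.log(noisy.size))  # universal threshold
coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: noisy.size]

print(f"rms error before: {np.std(noisy - drawdown):.4f}, after: {np.std(denoised - drawdown):.4f}")
```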

16 pages, 860 KiB  
Article
Cost–Effectiveness of Newborn Screening for X-Linked Adrenoleukodystrophy in the Netherlands: A Health-Economic Modelling Study
by Rosalie C. Martens, Hana M. Broulikova, Marc Engelen, Stephan Kemp, Anita Boelen, Robert de Jonge, Judith E. Bosmans and Annemieke C. Heijboer
Int. J. Neonatal Screen. 2025, 11(3), 53; https://doi.org/10.3390/ijns11030053 - 16 Jul 2025
Viewed by 366
Abstract
X-linked adrenoleukodystrophy (ALD) is an inherited metabolic disorder that can cause adrenal insufficiency and cerebral ALD (cALD) in childhood. Early detection prevents adverse health outcomes and can be achieved by newborn screening (NBS) followed by monitoring disease progression. However, monitoring is associated with high costs. This study evaluates the cost–effectiveness of NBS for ALD in The Netherlands compared to no screening using a health economic model. A decision tree combined with a Markov model was developed to estimate societal costs, including screening costs, healthcare costs, and productivity losses of parents, and health outcomes over an 18-year time horizon. Model parameters were derived from the literature and expert opinion. A probabilistic sensitivity analysis (PSA) was performed to assess uncertainty. The screening cost of detecting one ALD case by NBS was EUR 40,630. Until the age of 18 years, the total societal cost per ALD case was EUR 120,779 for screening and EUR 62,914 for no screening. Screening gained an average of 1.7 QALYs compared with no screening. This resulted in an incremental cost–effectiveness ratio (ICER) of EUR 34,084 per QALY gained for screening compared to no screening. Although the results are sensitive to uncertainty surrounding costs and effectiveness due to limited data, NBS for ALD is likely to be cost–effective using a willingness-to-pay (WTP) threshold of EUR 50,000–80,000 per QALY gained.
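The headline ICER follows directly from the figures reported above; a quick check (the 1.7 QALY figure is rounded, which explains the small gap from the reported EUR 34,084):

```python
cost_screening = 120_779  # societal cost per ALD case, screening (EUR)
cost_no_screen = 62_914   # societal cost per ALD case, no screening (EUR)
qalys_gained = 1.7        # incremental QALYs, screening vs. no screening (rounded)

icer = (cost_screening - cost_no_screen) / qalys_gained
print(f"ICER ~ EUR {icer:,.0f} per QALY gained")  # ~EUR 34,038
```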

20 pages, 1370 KiB  
Article
Interpretable Machine Learning for Osteopenia Detection: A Proof-of-Concept Study Using Bioelectrical Impedance in Perimenopausal Women
by Dimitrios Balampanos, Christos Kokkotis, Theodoros Stampoulis, Alexandra Avloniti, Dimitrios Pantazis, Maria Protopapa, Nikolaos-Orestis Retzepis, Maria Emmanouilidou, Panagiotis Aggelakis, Nikolaos Zaras, Maria Michalopoulou and Athanasios Chatzinikolaou
J. Funct. Morphol. Kinesiol. 2025, 10(3), 262; https://doi.org/10.3390/jfmk10030262 - 11 Jul 2025
Viewed by 396
Abstract
Objectives: The early detection of low bone mineral density (BMD) is essential for preventing osteoporosis and related complications. While dual-energy X-ray absorptiometry (DXA) remains the gold standard for diagnosis, its cost and limited availability restrict its use in large-scale screening. This study investigated whether raw bioelectrical impedance analysis (BIA) data combined with explainable machine learning (ML) models could accurately classify osteopenia in women aged 40 to 55. Methods: In a cross-sectional design, 138 women underwent same-day BIA and DXA assessments. Participants were categorized as osteopenic (T-score between −1.0 and −2.5; n = 33) or normal (T-score ≥ −1.0) based on DXA results. Overall, 24.1% of the sample were classified as osteopenic, and 32.85% were postmenopausal. Raw BIA outputs were used as input features, including impedance values, phase angles, and segmental tissue parameters. A sequential forward feature selection (SFFS) algorithm was employed to optimize input dimensionality. Four ML classifiers were trained using stratified five-fold cross-validation, and SHapley Additive exPlanations (SHAP) were applied to interpret feature contributions. Results: The neural network (NN) model achieved the highest classification accuracy (92.12%) using 34 selected features, including raw impedance measurements, derived body composition indices such as regional lean mass estimates and the edema index, as well as a limited number of categorical variables, including self-reported physical activity status. SHAP analysis identified muscle mass indices and fluid distribution metrics, features previously associated with bone health, as the most influential predictors in the current model. Other classifiers performed comparably but with lower precision or interpretability. Conclusions: ML models based on raw BIA data can classify osteopenia with high accuracy and clinical transparency. This approach provides a cost-effective and interpretable alternative for the early identification of individuals at risk for low BMD in resource-limited or primary care settings.
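The selection-plus-classification pattern can be sketched with scikit-learn on synthetic stand-in data. A light linear selector and 10 features keep the sketch fast; the study itself selected 34 features and found a neural network best.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for 138 participants x 60 raw BIA outputs, ~24% osteopenic.
X, y = make_classification(n_samples=138, n_features=60, n_informative=12,
                           weights=[0.76, 0.24], random_state=0)

sfs = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                n_features_to_select=10, direction="forward",
                                cv=5).fit(X, y)

nn = make_pipeline(StandardScaler(),
                   MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                 random_state=0))
scores = cross_val_score(nn, sfs.transform(X), y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```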

16 pages, 3497 KiB  
Article
Utilizing Circadian Heart Rate Variability Features and Machine Learning for Estimating Left Ventricular Ejection Fraction Levels in Hypertensive Patients: A Composite Multiscale Entropy Analysis
by Nanxiang Zhang, Qi Pan, Shuo Yang, Leen Huang, Jianan Yin, Hai Lin, Xiang Huang, Chonglong Ding, Xinyan Zou, Yongjun Zheng and Jinxin Zhang
Biosensors 2025, 15(7), 442; https://doi.org/10.3390/bios15070442 - 10 Jul 2025
Viewed by 399
Abstract
Background: Early identification of left ventricular ejection fraction (LVEF) levels during the progression of hypertension is essential to prevent cardiac deterioration. However, achieving a non-invasive, cost-effective, and definitive assessment is challenging. This has prompted us to develop a comprehensive machine learning framework for the automatic quantitative estimation of LVEF levels from electrocardiography (ECG) signals. Methods: We enrolled 200 hypertensive patients from Zhongshan City, Guangdong Province, China, from 1 November 2022 to 1 January 2025. Participants underwent 24 h Holter monitoring and echocardiography for LVEF estimation. We developed a comprehensive machine learning framework that first preprocessed ECG signals in one-hour intervals to extract CMSE-based heart rate variability (HRV) features, then applied machine learning models such as linear regression (LR), support vector machines (SVMs), and random forests (RFs) with recursive feature elimination for optimal LVEF estimation. Results: The LR model, notably during the early-night interval (20:00–21:00), achieved an RMSE of 4.61% and an MAE of 3.74%, highlighting its superiority. Compared with other similar studies, key CMSE parameters (Scales 1, 5, Slope 1–5, and Area 1–5) can effectively enhance regression models’ estimation performance. Conclusion: Our findings suggest that CMSE-derived circadian HRV features from Holter ECG could serve as a non-invasive, cost-effective, and interpretable solution for LVEF assessment in community settings. From a machine learning interpretability perspective, the proposed method emphasized CMSE’s clinical potential in capturing autonomic dynamics and cardiac function fluctuations.
(This article belongs to the Special Issue Latest Wearable Biosensors—2nd Edition)
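Composite multiscale entropy, the feature family at the heart of this framework, averages sample entropy over all coarse-graining offsets at each scale. A compact sketch on a synthetic RR series (m = 2 and r = 0.15 are conventional defaults, not necessarily the paper's settings):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """SampEn(m, r), with r given as a fraction of the series SD."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    def matches(mm):
        tmpl = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(tmpl[:, None, :] - tmpl[None, :, :]), axis=-1)
        return (np.sum(d <= tol) - len(tmpl)) / 2  # unordered pairs, no self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def cmse(x, tau, m=2, r=0.15):
    """Average SampEn over the tau coarse-graining offsets at scale tau."""
    ents = []
    for k in range(tau):
        n = (len(x) - k) // tau
        cg = np.asarray(x[k:k + n * tau]).reshape(n, tau).mean(axis=1)
        ents.append(sample_entropy(cg, m, r))
    return float(np.mean(ents))

rng = np.random.default_rng(3)
rr = 0.8 + 0.05 * rng.standard_normal(1200)  # stand-in RR-interval series (s)
print({tau: round(cmse(rr, tau), 3) for tau in (1, 5)})  # scales 1 and 5
```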

16 pages, 3215 KiB  
Article
Proactive and Data-Driven Decision-Making Using Earned Value Analysis in Infrastructure Projects
by Bayram Ateş and Mohammad Azim Eirgash
Buildings 2025, 15(14), 2388; https://doi.org/10.3390/buildings15142388 - 8 Jul 2025
Viewed by 929
Abstract
Timely and informed decision-making is essential for the successful execution of construction projects, where delays and cost overruns frequently pose significant risks. Earned value analysis (EVA) provides a robust, integrated framework that combines scope, schedule, and cost performance to support proactive project control. This study investigates the effectiveness of EVA as a decision-support tool by applying it to two real-life construction case studies. Key performance indicators, including Cost Performance Index (CPI), Schedule Performance Index (SPI), Estimate at Completion (EAC), and Estimate to Complete (ETC), are calculated and analyzed over a specific monitoring period. The analysis revealed a 15.36% cost savings and a 10.42% schedule improvement during the monitored period. By comparing planned and actual performance data, the study demonstrates how EVA enables early detection of deviations, thereby empowering project managers to implement timely corrective actions. The findings highlight EVA’s practical utility in improving project transparency, enhancing cost and schedule control, and supporting strategic decision-making in real-world construction environments.
(This article belongs to the Section Construction Management, and Computers & Digitization)
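The indicators named above follow the standard EVA relations; a worked example with hypothetical project figures (the formulas are textbook, the numbers are not from the paper):

```python
def eva(bac: float, pv: float, ev: float, ac: float) -> dict:
    cpi = ev / ac    # Cost Performance Index (>1: under budget)
    spi = ev / pv    # Schedule Performance Index (>1: ahead of schedule)
    eac = bac / cpi  # Estimate at Completion
    return {"CPI": cpi, "SPI": spi, "EAC": eac, "ETC": eac - ac}

# Budget 10 M; 4.0 M of work planned, 4.2 M earned, 3.8 M spent to date.
print(eva(bac=10_000_000, pv=4_000_000, ev=4_200_000, ac=3_800_000))
# CPI ~ 1.105 and SPI = 1.05: the project is under budget and ahead of schedule.
```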

18 pages, 1123 KiB  
Article
Corrosion Risk Assessment in Coastal Environments Using Machine Learning-Based Predictive Models
by Marta Terrados-Cristos, Marina Diaz-Piloneta, Francisco Ortega-Fernández, Gemma Marta Martinez-Huerta and José Valeriano Alvarez-Cabal
Sensors 2025, 25(13), 4231; https://doi.org/10.3390/s25134231 - 7 Jul 2025
Viewed by 439
Abstract
Atmospheric corrosion, especially in coastal environments, presents a major challenge for the long-term durability of metallic and concrete infrastructure due to chloride deposition from marine aerosols. With a significant portion of the global population residing in coastal zones—often associated with intense industrial activity—there is growing demand for accurate and early corrosion prediction methods. Traditional standards for assessing atmospheric corrosivity depend on long-term empirical data, limiting their usefulness during the design stage of infrastructure projects. To address this limitation, this study develops predictive models using machine-learning techniques, namely gradient boosting, support vector machines, and neural networks, to estimate chloride deposition levels based on easily accessible climatic and geographical parameters. Our models were trained on a comprehensive dataset that included variables such as land coverage, wind speed, and orientation. Among the models tested, tree-based algorithms, particularly gradient boosting, provided the highest prediction accuracy (F1 score: 0.8673). This approach not only highlights the most influential environmental variables driving chloride deposition but also offers a scalable and cost-effective solution to support corrosion monitoring and structural life assessment in coastal infrastructure.
(This article belongs to the Special Issue Advanced Sensor Technologies for Corrosion Monitoring)
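The winning model family can be sketched with scikit-learn on synthetic stand-in data (the feature set and labeling rule below are hypothetical; the study's inputs include land coverage, wind speed, and orientation):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 600
X = np.column_stack([
    rng.uniform(0, 15, n),   # mean wind speed (m/s)
    rng.uniform(0, 50, n),   # distance to coast (km)
    rng.integers(0, 4, n),   # land-coverage category
])
# Hypothetical rule: windy, near-coast, open sites see high chloride deposition.
y = ((X[:, 0] > 7) & (X[:, 1] < 10) & (X[:, 2] < 2)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print(f"F1 = {f1_score(y_te, model.predict(X_te)):.3f}")
```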

22 pages, 3183 KiB  
Article
Surrogate Modeling for Building Design: Energy and Cost Prediction Compared to Simulation-Based Methods
by Navid Shirzadi, Dominic Lau and Meli Stylianou
Buildings 2025, 15(13), 2361; https://doi.org/10.3390/buildings15132361 - 5 Jul 2025
Viewed by 517
Abstract
Designing energy-efficient buildings is essential for reducing global energy consumption and carbon emissions. However, traditional physics-based simulation models require substantial computational resources, detailed input data, and domain expertise. To address these limitations, this study investigates the use of three machine learning-based surrogate models—Random Forest (RF), Extreme Gradient Boosting (XGBoost), and Multilayer Perceptron (MLP)—trained on a synthetic dataset of 2000 EnergyPlus-simulated building design scenarios to predict both energy use intensity (EUI) and cost estimates for midrise apartment buildings in the Toronto area. All three models exhibit strong predictive performance, with R2 values exceeding 0.9 for both EUI and cost. XGBoost achieves the best performance in cost prediction on the testing dataset with a root mean squared error (RMSE) of 5.13 CAD/m2, while MLP outperforms others in EUI prediction with a testing RMSE of 0.002 GJ/m2. In terms of computational efficiency, the surrogate models significantly outperform a physics-based simulation model, with MLP running approximately 340 times faster and XGBoost and RF achieving over 200 times speedup. This study also examines the effect of training dataset size on model performance, identifying a point of diminishing returns where further increases in data size yield minimal accuracy gains but substantially higher training times. To enhance model interpretability, SHapley Additive exPlanations (SHAP) analysis is used to quantify feature importance, revealing how different model types prioritize design parameters. A parametric design configuration analysis further evaluates the models’ sensitivity to changes in building envelope features. Overall, the findings demonstrate that machine learning-based surrogate models can serve as fast, accurate, and interpretable alternatives to traditional simulation methods, supporting efficient decision-making during early-stage building design.
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
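The surrogate pattern evaluated above can be sketched with XGBoost on a synthetic stand-in dataset (the design variables and response function below are hypothetical; the real dataset comes from 2000 EnergyPlus runs):

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(5)
n = 2000  # mirrors the 2000 simulated design scenarios
X = np.column_stack([
    rng.uniform(0.1, 0.6, n),  # wall U-value
    rng.uniform(1.0, 3.0, n),  # window U-value
    rng.uniform(0.2, 0.6, n),  # window-to-wall ratio
])
# Hypothetical response standing in for EnergyPlus-simulated EUI (GJ/m2).
eui = (0.5 + 0.8 * X[:, 0] + 0.3 * X[:, 1] + 0.6 * X[:, 2] ** 2
       + 0.02 * rng.standard_normal(n))

X_tr, X_te, y_tr, y_te = train_test_split(X, eui, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4).fit(X_tr, y_tr)
print(f"R2 = {r2_score(y_te, model.predict(X_te)):.3f}")  # typically > 0.9
```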