Search Results (473)

Search Parameters:
Keywords = limitation of probabilistic method

18 pages, 18468 KB  
Article
Assessment of Heavy Metal Transfer from Soil to Forage and Milk in the Tungurahua Volcano Area, Ecuador
by Lourdes Carrera-Beltrán, Irene Gavilanes-Terán, Víctor Hugo Valverde-Orozco, Steven Ramos-Romero, Concepción Paredes, Ángel A. Carbonell-Barrachina and Antonio J. Signes-Pastor
Agriculture 2025, 15(19), 2072; https://doi.org/10.3390/agriculture15192072 - 2 Oct 2025
Abstract
The Bilbao parish, located on the slopes of the Tungurahua volcano (Ecuador), was heavily impacted by ashfall during eruptions between 1999 and 2016. Volcanic ash may contain toxic metals such as Pb, Cd, Hg, As, and Se, which are linked to neurological, renal, skeletal, pulmonary, and dermatological disorders. This study evaluated metal concentrations in soil (40–50 cm depth, corresponding to the rooting zone of forage grasses), forage (English ryegrass and Kikuyu grass), and raw milk to assess potential risks to livestock and human health. Sixteen georeferenced sites were selected using a simple random probabilistic sampling method considering geological variability, vegetation cover, accessibility, and cattle presence. Samples were digested and analyzed with a SpectrAA 220 atomic absorption spectrophotometer (Varian Inc., Victoria, Australia). Soils (Andisols) contained Hg (1.82 mg/kg), Cd (0.36 mg/kg), As (1.36 mg/kg), Pb (1.62 mg/kg), and Se (1.39 mg/kg); all were below the Ecuadorian limits, except for Hg and Se. Forage exceeded FAO thresholds for Pb, Cd, As, Hg, and Se. Milk contained Pb, Cd, and Hg below detection limits, while Se averaged 0.047 mg/kg, exceeding water safety guidelines. Findings suggest soils act as sources with significant bioaccumulation in forage but limited transfer to milk. Although immediate consumer risk is low, forage contamination highlights long-term hazards, emphasizing the need for monitoring, soil management, and farmer guidance. Full article
(This article belongs to the Section Agricultural Soils)
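
The soil-to-forage transfer described in this abstract is usually quantified with a transfer factor, TF = C_forage / C_soil, where TF > 1 suggests bioaccumulation. A minimal sketch, using the soil means reported above; the forage concentrations here are hypothetical placeholders, since the paper reports them per species:

```python
# Illustrative soil-to-forage transfer factor (TF) calculation.
# Soil means are taken from the abstract; the forage values are
# hypothetical placeholders (the paper reports them per grass species).
soil_mg_kg = {"Hg": 1.82, "Cd": 0.36, "As": 1.36, "Pb": 1.62, "Se": 1.39}
forage_mg_kg = {"Hg": 0.9, "Cd": 0.4, "As": 1.1, "Pb": 2.0, "Se": 1.2}  # hypothetical

for metal, c_soil in soil_mg_kg.items():
    tf = forage_mg_kg[metal] / c_soil  # TF > 1 suggests bioaccumulation
    print(f"{metal}: TF = {tf:.2f}")
```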

21 pages, 1106 KB  
Article
Risk Assessment Method for CPS-Based Distributed Generation Cluster Control in Active Distribution Networks Under Cyber Attacks
by Jinxin Ouyang, Fan Mo, Fei Huang and Yujie Chen
Sensors 2025, 25(19), 6053; https://doi.org/10.3390/s25196053 - 1 Oct 2025
Abstract
In modern power systems, distributed generation (DG) clusters such as wind and solar resources are increasingly being integrated into active distribution networks through DG cluster control, which enhances the economic efficiency and adaptability of the DGs. However, cyber attacks on cyber–physical systems (CPS) may disable control links within the DG cluster, leading to the loss of control over slave DGs and resulting in power deficits, thereby threatening system stability. Existing CPS security assessment methods have limited capacity to capture cross-domain propagation effects caused by cyber attacks and lack a comprehensive evaluation framework from the attacker's perspective. This paper establishes a CPS model and control–communication framework and then analyzes the cyber–physical interaction characteristics under DG cluster control. A logical model of cyber attack strategies targeting DG cluster inverters is proposed. Based on the control topology and master–slave logic, a probabilistic failure model for DG cluster control is developed. By considering power deficits both at the cluster point of common coupling (PCC) and within the internal network of the DG cluster, a physical consequence quantification method is introduced. Finally, a cyber risk assessment method is proposed for DG cluster control under cyber attacks. Simulation results validate the effectiveness of the proposed method. Full article
(This article belongs to the Section Sensor Networks)
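
The abstract does not detail the probabilistic failure model; as a loose illustration of how per-link attack probabilities translate into an expected power deficit, under hypothetical link probabilities and slave-DG ratings:

```python
# Minimal sketch of a probabilistic consequence estimate for DG cluster
# control under cyber attack. Link failure probabilities and slave-DG
# ratings are hypothetical; the paper's model also covers master-slave
# logic and deficits at the PCC vs. the cluster's internal network.
link_fail_prob = [0.10, 0.05, 0.20]   # P(control link i disabled by attack)
slave_power_mw = [2.0, 1.5, 3.0]      # power of the slave DG behind link i

# Expected power deficit if each link fails independently
expected_deficit = sum(p * s for p, s in zip(link_fail_prob, slave_power_mw))
print(f"Expected power deficit: {expected_deficit:.2f} MW")
```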

24 pages, 5751 KB  
Article
Multiscale Uncertainty Quantification of Woven Composite Structures by Dual-Correlation Sampling for Stochastic Mechanical Behavior
by Guangmeng Yang, Sinan Xiao, Chi Hou, Xiaopeng Wan, Jing Gong and Dabiao Xia
Polymers 2025, 17(19), 2648; https://doi.org/10.3390/polym17192648 - 30 Sep 2025
Abstract
Woven composite structures are inherently influenced by uncertainties across multiple scales, ranging from constituent material properties to mesoscale geometric variations. These uncertainties give rise to both spatial autocorrelation and cross-correlation among material parameters, resulting in stochastic strength performance and damage morphology at the macroscopic structural level. This study established a comprehensive multiscale uncertainty quantification framework to systematically propagate uncertainties from the microscale to the macroscale. A novel dual-correlation sampling approach, based on multivariate random field (MRF) theory, was proposed to simultaneously capture spatial autocorrelation and cross-correlation with clear physical interpretability. This method enabled a realistic representation of both inter-specimen variability and intra-specimen heterogeneity of material properties. Experimental validation via in-plane tensile tests demonstrated that the proposed approach accurately predicts not only probabilistic mechanical responses but also discrete damage morphology in woven composite structures. In contrast, traditional independent sampling methods exhibited inherent limitations in representing spatially distributed correlations of material properties, leading to inaccurate predictions of stochastic structural behavior. The findings offered valuable insights into structural reliability assessment and risk management in engineering applications. Full article
(This article belongs to the Section Polymer Composites and Nanocomposites)
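
One standard way to realize the dual correlation the abstract describes, spatial autocorrelation plus cross-correlation between material parameters, is a separable (Kronecker) Gaussian covariance sampled via a Cholesky factor. A sketch under hypothetical grid, correlation-length, and cross-correlation values; the paper's MRF construction may differ in detail:

```python
import numpy as np

# Sketch: sampling two material-property fields with both spatial
# autocorrelation and cross-correlation, via a separable (Kronecker)
# covariance. Grid size, correlation length, and the 2x2 cross-correlation
# matrix are hypothetical.
n = 50                                   # points along a 1D section
x = np.linspace(0.0, 1.0, n)
ell = 0.2                                # spatial correlation length
R = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)   # spatial autocorrelation
C = np.array([[1.0, 0.6],                # cross-correlation between two
              [0.6, 1.0]])               # material parameters (e.g., E1, E2)

cov = np.kron(C, R)                      # dual-correlation covariance
L = np.linalg.cholesky(cov + 1e-10 * np.eye(2 * n))
z = L @ np.random.standard_normal(2 * n) # one correlated realization
field_1, field_2 = z[:n], z[n:]          # the two correlated property fields
```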

16 pages, 769 KB  
Article
Evaluating Google Gemini’s Capability to Generate NBME-Standard Pharmacology Questions Using a 16-Criterion NBME Rubric
by Wesam Almasri, Marwa Saad and Changiz Mohiyeddini
Algorithms 2025, 18(10), 612; https://doi.org/10.3390/a18100612 - 29 Sep 2025
Abstract
Background: Large language models (LLMs) such as Google Gemini have demonstrated strong capabilities in natural language generation, but their ability to create medical assessment items aligned with National Board of Medical Examiners (NBME) standards remains underexplored. Objective: This study evaluated the quality of Gemini-generated NBME-style pharmacology questions using a structured rubric to assess accuracy, clarity, and alignment with examination standards. Methods: Ten pharmacology questions were generated using a standardized prompt and assessed independently by two pharmacology experts. Each item was evaluated using a 16-criterion NBME rubric with binary scoring. Inter-rater reliability was calculated (Cohen’s Kappa = 0.81) following a calibration session. Results: On average, questions met 14.3 of 16 criteria. Strengths included logical structure, appropriate distractors, and clinically relevant framing. Limitations included occasional pseudo-vignettes, cueing issues, and one instance of factual inaccuracy (albuterol mechanism of action). The evaluation highlighted Gemini’s ability to produce high-quality NBME-style questions, while underscoring concerns regarding sample size, reproducibility, and factual reliability. Conclusions: Gemini shows promise as a tool for generating pharmacology assessment items, but its probabilistic outputs, factual inaccuracies, and limited scope necessitate caution. Larger-scale studies, inclusion of multiple medical disciplines, incorporation of student performance data, and use of broader expert panels are recommended to establish reliability and educational applicability. Full article
(This article belongs to the Special Issue Algorithms for Computer Aided Diagnosis: 2nd Edition)
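
The reported inter-rater reliability (Cohen's Kappa = 0.81) follows the standard formula kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is chance agreement. A sketch with hypothetical binary rubric scores:

```python
import numpy as np

# Cohen's kappa for binary rubric scores from two raters. The rating
# vectors below are hypothetical; the study reports kappa = 0.81 across
# its 16-criterion NBME rubric.
rater_a = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1])
rater_b = np.array([1, 1, 0, 1, 0, 1, 0, 1, 1, 1])

po = np.mean(rater_a == rater_b)                       # observed agreement
p_yes = rater_a.mean() * rater_b.mean()                # chance "both score 1"
p_no = (1 - rater_a.mean()) * (1 - rater_b.mean())     # chance "both score 0"
pe = p_yes + p_no                                      # expected chance agreement
kappa = (po - pe) / (1 - pe)
print(f"Cohen's kappa = {kappa:.2f}")
```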

20 pages, 12343 KB  
Article
Geographical Origin Identification of Dendrobium officinale Using Variational Inference-Enhanced Deep Learning
by Changqing Liu, Fan Cao, Yifeng Diao, Yan He and Shuting Cai
Foods 2025, 14(19), 3361; https://doi.org/10.3390/foods14193361 - 28 Sep 2025
Abstract
Dendrobium officinale is an important medicinal and edible plant in China, widely used in the dietary health industry and pharmaceutical field. Due to the different geographical origins and cultivation methods, the nutritional value, medicinal quality, and price of Dendrobium are significantly different, and accurate identification of the origin is crucial. Current origin identification relies on expert judgment or requires costly instruments, lacking an efficient solution. This study proposes a Variational Inference-enabled Data-Efficient Learning (VIDE) model for high-precision, non-destructive origin identification using a small number of image samples. VIDE integrates dual probabilistic networks: a prior network generating latent feature prototypes and a posterior network employing variational inference to model feature distributions via mean and variance estimators. This synergistic design enhances intra-class feature diversity while maximizing inter-class separability, achieving robust classification with limited samples. Experiments on a self-built dataset of Dendrobium officinale samples from six major Chinese regions show the VIDE model achieves 91.51% precision, 92.63% recall, and 92.07% F1-score, outperforming state-of-the-art models. The study offers a practical solution for geographical origin identification and advances intelligent quality assessment in Dendrobium officinale. Full article
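
The abstract's posterior network with mean and variance estimators matches the usual reparameterized variational head. A minimal PyTorch sketch with hypothetical layer sizes; VIDE's prior/posterior coupling is not reproduced here:

```python
import torch
import torch.nn as nn

# Minimal posterior head that models a feature distribution with mean and
# variance estimators and samples via the reparameterization trick, as
# variational-inference classifiers typically do. Dimensions are hypothetical.
class PosteriorHead(nn.Module):
    def __init__(self, in_dim=512, z_dim=64):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)        # mean estimator
        self.log_var = nn.Linear(in_dim, z_dim)   # (log-)variance estimator

    def forward(self, h):
        mu, log_var = self.mu(h), self.log_var(h)
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * log_var) * eps   # reparameterized sample
        # KL divergence to a standard normal prior, per sample
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=1)
        return z, kl

head = PosteriorHead()
z, kl = head(torch.randn(8, 512))   # 8 image feature vectors
```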

18 pages, 313 KB  
Article
Prevalence and Exposure Assessment of Alternaria Toxins in Zhejiang Province, China
by Zijie Lu, Ronghua Zhang, Pinggu Wu, Dong Zhao, Jiang Chen, Xiaodong Pan, Jikai Wang, Hexiang Zhang, Xiaojuan Qi, Shufeng Ye and Biao Zhou
Foods 2025, 14(19), 3298; https://doi.org/10.3390/foods14193298 - 23 Sep 2025
Abstract
This study aimed to assess the prevalence of four Alternaria toxins (alternariol [AOH], alternariol monomethyl ether [AME], tenuazonic acid [TeA], and tentoxin [TEN]) in various foods and to assess the risk of Alternaria-toxin exposure in Zhejiang Province, China. A total of 325 samples were collected, and at least one type of Alternaria toxin was detected in 53.85% of them. Wheat flour had a high detection rate of 97.41%, and TeA was the most prevalent compound in terms of both concentration and detection rate. Assessment of Alternaria toxins using the threshold of toxicological concern (TTC) method showed that the majority of the population had a low exposure risk. Population-wide dietary exposure assessment suggested a potential health risk for some residents, with 95th percentile (P95) values of 0.0038 µg/kg b.w. for AOH from wheat flour and 0.0128 and 0.0047 µg/kg b.w. for AOH and AME from Coix rice, respectively, all exceeding the TTC value of 0.0025 µg/kg b.w. Probabilistic assessment showed that exposure of children aged ≤6 years to AOH via wheat flour reached 0.0025 µg/kg b.w. at the 92nd percentile (P92), as did that of children aged 7–12 years at P93. Exposures to TeA and TEN were within acceptable limits (below the TTC value of 1.5 µg/kg b.w.). Age-group probabilistic and point assessments indicated that children aged ≤6 and 7–12 years are at higher exposure risk. This study provides a useful reference for developing limit values and legislation for Alternaria toxins in food. Full article
(This article belongs to the Special Issue Advances in Food Toxin Analysis and Risk Assessment)
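
The probabilistic exposure assessment follows the generic pattern exposure = concentration × intake / body weight, compared at a high percentile against the TTC. A Monte Carlo sketch with hypothetical distribution parameters; the study derives its inputs from measured concentrations and consumption survey data:

```python
import numpy as np

# Monte Carlo dietary exposure assessment against a TTC threshold.
# All distribution parameters below are hypothetical.
rng = np.random.default_rng(0)
n = 100_000
conc = rng.lognormal(mean=np.log(5.0), sigma=0.8, size=n)      # µg/kg food
intake = rng.lognormal(mean=np.log(120.0), sigma=0.5, size=n)  # g/day
body_weight = rng.normal(20.0, 3.0, size=n)                    # kg (children)

exposure = conc * (intake / 1000.0) / body_weight              # µg/kg b.w./day
p95 = np.percentile(exposure, 95)
ttc = 0.0025                                                   # µg/kg b.w. (AOH/AME)
print(f"P95 exposure = {p95:.4f} µg/kg b.w.; exceeds TTC: {p95 > ttc}")
```
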
22 pages, 5876 KB  
Article
Development of a Methodology Used to Predict the Wheel–Surface Friction Coefficient in Challenging Climatic Conditions
by Viktor V. Petin, Andrey V. Keller, Sergey S. Shadrin, Daria A. Makarova and Yury M. Furletov
Future Transp. 2025, 5(4), 129; https://doi.org/10.3390/futuretransp5040129 - 23 Sep 2025
Abstract
This paper presents a novel methodology for predicting the tire–road friction coefficient in real-time under challenging climatic conditions based on a fuzzy logic inference system. The core innovation of the proposed approach lies in the integration and probabilistic weighting of a diverse set of input data, which includes signals from ambient temperature and precipitation intensity sensors, activation events of the anti-lock braking system (ABS) and electronic stability control (ESP), windshield wiper operation modes, and road marking recognition via a front-facing camera. This multi-sensor data fusion strategy significantly enhances prediction accuracy compared to traditional methods that rely on limited data sources (e.g., temperature and precipitation alone), especially in transient or non-uniform road conditions such as compacted snow or shortly after rainfall. The reliability of the fuzzy-logic-based predictor was experimentally validated through extensive road tests on dry asphalt, wet asphalt, and wet basalt (simulating packed snow). The results demonstrate a high degree of convergence between predicted and actual values, with a maximum modeling error of less than 10% across all tested scenarios. The developed methodology provides a robust and adaptive solution for enhancing the performance of Advanced Driver Assistance Systems (ADASs), particularly Automatic Emergency Braking (AEB), by enabling more accurate braking distance calculations. Full article
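
A toy version of the fuzzy inference step, with two inputs, hypothetical triangular memberships, and three rules; the paper fuses many more signals, including ABS/ESP activation events and camera-based road-marking recognition:

```python
import numpy as np

# Toy fuzzy-inference sketch for friction-coefficient prediction from two
# inputs (ambient temperature, wiper activity). Membership functions, rules,
# and output friction levels are all hypothetical.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def predict_mu(temp_c, wiper_level):        # wiper_level in [0, 1]
    cold = tri(temp_c, -20, -5, 3)          # membership of "cold"
    warm = tri(temp_c, 0, 15, 40)           # membership of "warm"
    wet = wiper_level                       # treat wiper duty as "wet" degree
    dry = 1.0 - wiper_level
    # Rules: (cold & wet) -> icy 0.2; (warm & wet) -> wet 0.5; (warm & dry) -> dry 0.8
    w = np.array([min(cold, wet), min(warm, wet), min(warm, dry)])
    mu_levels = np.array([0.2, 0.5, 0.8])
    # Weighted-average defuzzification
    return float(np.dot(w, mu_levels) / w.sum()) if w.sum() > 0 else 0.8

print(predict_mu(-2.0, 0.7))   # near-freezing, heavy wiper use -> low friction
```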

22 pages, 883 KB  
Article
Development of a Model for Increasing the Capacity of Small and Medium-Sized Ports Using the Principles of Probability Theory
by Vytautas Paulauskas, Donatas Paulauskas and Vytas Paulauskas
J. Mar. Sci. Eng. 2025, 13(9), 1833; https://doi.org/10.3390/jmse13091833 - 22 Sep 2025
Abstract
Every year, more and more general and other types of cargo are transported in containers, and many ports, including small and medium-sized ones, are trying to join container transportation processes. Port connectivity with container shipping is associated with easier and faster cargo processing and a reduced environmental impact through optimized ship arrivals and processing in small and medium-sized ports. Small and medium-sized ports are often limited by their infrastructure, especially suitable quays; it is therefore very important to assess the capabilities of such ports correctly, so that ships do not have to wait for entry and quays and other port infrastructure are used optimally. The research is relevant because small and medium-sized ports are increasingly involved in logistics chains and are becoming very important for the development of individual regions; their wider use in logistics chains is a new and original research direction. Optimal assessment of port, terminal, and berth utilization is possible using the principles of probability theory. The article develops and presents a probabilistic method for assessing port and terminal capacity and ship mooring at berths, using possible and actual time periods, based on the principles of transport process organization and linked to the capabilities of the port infrastructure and terminal superstructure. Conditional probability was used to assess port and terminal capacity, together with a method for assessing ship maneuverability under constrained conditions. The developed probabilistic method for assessing port terminals and ship berthing at port quays can be applied in any port or terminal, taking local conditions into account. The combination of theoretical research and experimental results on the optimal use of small and medium-sized ports ensures sufficient research quality. Full article
(This article belongs to the Special Issue Smart Seaport and Maritime Transport Management, Second Edition)
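
One classical way to express berth utilization probabilistically, offered here only as an illustration rather than the authors' own formulation, is the Erlang-B blocking probability for ships arriving at a limited number of berths. Arrival and service figures are hypothetical:

```python
# Erlang-B view of berth utilization: the probability that an arriving ship
# finds all berths occupied. Arrival rate, service time, and berth counts
# are hypothetical; the paper develops its own conditional-probability method.
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability for `servers` berths at `offered_load` Erlangs."""
    b = 1.0
    for k in range(1, servers + 1):
        b = (offered_load * b) / (k + offered_load * b)
    return b

arrivals_per_day = 3.0
service_days = 0.8                       # mean berth occupancy per ship
load = arrivals_per_day * service_days   # offered load in Erlangs
for berths in (2, 3, 4):
    print(f"{berths} berths: P(all berths busy on arrival) = {erlang_b(berths, load):.3f}")
```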

23 pages, 2011 KB  
Article
A Second-Order Second-Moment Approximate Probabilistic Design Method for Structural Components Considering the Curvature of Limit State Surfaces
by Hanmin Liu, Yicheng Mao, Zhenhao Zhang, Fang Yuan and Fuming Wang
Buildings 2025, 15(18), 3421; https://doi.org/10.3390/buildings15183421 - 22 Sep 2025
Abstract
The current engineering structural design code employs a direct probability design method based on the Taylor series expansion of the performance function at verification points, retaining only linear terms. This approach ignores the curvature and other nonlinear properties of the performance function, leading to insufficient accuracy. To address these deficiencies, this paper develops an approximate probability design method that accounts for the curvature of the limit state surface, integrating second-order moment theory with the direct probability design method. Using a simply supported plate as a representative example, the paper systematically compares the performance of the proposed method with the direct probability design method, the partial coefficient method, and the design value method in reinforcement design. The reinforcement areas calculated by the four methods are similar, confirming the correctness and practicality of the proposed method for engineering applications. The accuracy of the design outcomes from the various methods is validated through Monte Carlo simulation. The results indicate that the proposed method exhibits high accuracy, with the relative errors of the reliability indices in the two examples being 0.346% and 0.228%, respectively, significantly lower than those of the direct probability design method (2.919% and 0.769%). This underscores the effectiveness and substantial benefits of the proposed method in structural reliability design, offering a dependable, highly accurate, and economically viable design tool for engineering applications. Full article
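
The curvature correction the paper targets is what second-order methods add to first-order reliability analysis; Breitung's textbook approximation, Pf ≈ Φ(−β) · Π(1 + β·κ_i)^(−1/2), is sketched below with hypothetical β and curvature values:

```python
from math import prod
from scipy.stats import norm

# Second-order (curvature-corrected) failure probability via Breitung's
# formula, where κ_i are the principal curvatures of the limit state
# surface at the design point. β and κ values below are hypothetical.
beta = 3.2                      # first-order reliability index
kappas = [0.05, -0.02]          # principal curvatures at the design point

pf_form = norm.cdf(-beta)                                        # FORM estimate
pf_sorm = pf_form / prod((1 + beta * k) ** 0.5 for k in kappas)  # SORM estimate
print(f"FORM Pf = {pf_form:.3e}, SORM Pf = {pf_sorm:.3e}")
```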

30 pages, 3270 KB  
Article
Tree–Hillclimb Search: An Efficient and Interpretable Threat Assessment Method for Uncertain Battlefield Environments
by Zuoxin Zeng, Jinye Peng and Qi Feng
Entropy 2025, 27(9), 987; https://doi.org/10.3390/e27090987 - 21 Sep 2025
Abstract
In uncertain battlefield environments, rapid and accurate detection and identification of hostile targets and assessment of their threat levels are crucial for supporting effective decision-making. Despite offering the advantage of structural transparency, traditional analytical methods rely on expert knowledge to construct models and often fail to comprehensively capture the non-linear causal relationships among complex threat factors. In contrast, data-driven methods excel at uncovering patterns in data but suffer from limited interpretability due to their black-box nature. Owing to their probabilistic graphical modeling capabilities, Bayesian networks possess unique advantages in threat assessment. However, existing models are either constrained by the limits of expert experience or suffer from the excessive complexity of structure learning algorithms, making it difficult to meet the stringent real-time requirements of uncertain battlefield environments. To address these issues, this paper proposes Tree–Hillclimb Search, an efficient and interpretable threat assessment method specifically designed for uncertain battlefield environments. The core of the method is an expert-constrained structure learning algorithm: the initial network structure constructed from expert knowledge serves as a constraint, while structure learning uncovers hidden causal dependencies among variables. The model is then refined under these expert knowledge constraints and can effectively balance accuracy and complexity. Sensitivity analysis further validates the consistency between the model structure and the influence of the threat factors, providing a theoretical basis for formulating hierarchical threat assessment strategies under resource-constrained conditions and effectively optimizing sensor resource allocation. The Tree–Hillclimb Search method features (1) enhanced interpretability; (2) high predictive accuracy; (3) high efficiency and real-time performance; (4) practical impact on battlefield decision-making; and (5) good generality and broad applicability. Full article
(This article belongs to the Special Issue Bayesian Networks and Causal Discovery)
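
A compact sketch of the general family this method belongs to, hill-climbing structure search with a BIC score under an expert-supplied edge whitelist. Variables, whitelist, and data are hypothetical, and the paper's tree initialization and refinement details are not reproduced:

```python
import numpy as np
from itertools import product

# Expert-constrained hill-climbing with a BIC score over binary data.
# Data, variables, and whitelist are hypothetical. The whitelist only
# contains edges from lower to higher index, so the graph stays acyclic
# and no explicit cycle check is needed.
rng = np.random.default_rng(1)
data = rng.integers(0, 2, size=(500, 3))   # 500 samples of X0, X1, X2
n = data.shape[0]

def bic_node(child, parent_set):
    """BIC contribution of one node given its parent set."""
    parents = sorted(parent_set)
    score = 0.0
    for ps in product([0, 1], repeat=len(parents)):
        mask = np.all(data[:, parents] == ps, axis=1) if parents else np.ones(n, bool)
        counts = np.bincount(data[mask, child], minlength=2) + 1e-9
        score += np.sum(counts * np.log(counts / counts.sum()))
    return score - 0.5 * np.log(n) * 2 ** len(parents)   # penalty per parent state

allowed = [(0, 1), (1, 2), (0, 2)]          # expert whitelist of candidate edges
parents = {0: set(), 1: set(), 2: set()}    # start from the empty graph

improved = True
while improved:                             # greedy hill-climb over allowed edges
    improved = False
    for u, v in allowed:
        if u in parents[v]:
            continue
        gain = bic_node(v, parents[v] | {u}) - bic_node(v, parents[v])
        if gain > 0:
            parents[v].add(u)
            improved = True
print(parents)
```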

25 pages, 9998 KB  
Article
A Study on the Soil Seismic Liquefaction Artificial Neural Network Probabilistic Assessment Method Based on Standard Penetration Test Data
by Jingjun Li, Meng Fan, Zhengquan Yang, Xiaosheng Liu and Jianming Zhao
Appl. Sci. 2025, 15(18), 10229; https://doi.org/10.3390/app151810229 - 19 Sep 2025
Abstract
Constructing a probabilistic assessment method is the primary task and key step in liquefaction research. This paper presents a systematic investigation into liquefaction potential evaluation methods. Through a comparative analysis of three conventional assessment methods, we identify critical limitations in existing approaches regarding accuracy and adaptability. A probabilistic ANN model was developed using field-collected standard penetration test (SPT) data from 311 liquefaction case histories. The model demonstrates superior performance with an overall accuracy of 86.17%, achieving 83.33% and 90.00% recognition rates for liquefied and non-liquefied cases, respectively. Key metrics, including precision (91.84%), recall (83.33%), and F1-score (87.38%), indicate robust discriminative capability. Comparative studies confirm the ANN model’s advantages over traditional methods in terms of prediction reliability and operational practicality. The research outcomes offer significant value for improving current liquefaction hazard assessment protocols in geotechnical engineering practice. Full article
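
A minimal sketch of the modeling pipeline, an MLP classifier on SPT-style features evaluated with precision, recall, and F1; the synthetic features and labels below are placeholders for the paper's 311 field case histories:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score

# Probabilistic ANN liquefaction classifier on SPT-style inputs. Feature
# names (blow count N60, effective stress, cyclic stress ratio) and the
# label rule are hypothetical stand-ins for the 311 case histories.
rng = np.random.default_rng(0)
X = rng.normal(size=(311, 3))                      # [N60, sigma'_v, CSR], standardized
y = (X[:, 2] - 0.5 * X[:, 0] + rng.normal(scale=0.5, size=311) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)

y_hat = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]              # liquefaction probability
print(f"precision={precision_score(y_te, y_hat):.3f}, "
      f"recall={recall_score(y_te, y_hat):.3f}, f1={f1_score(y_te, y_hat):.3f}")
```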

21 pages, 5337 KB  
Article
SC-NBTI: A Smart Contract-Based Incentive Mechanism for Federated Knowledge Sharing
by Yuanyuan Zhang, Jingwen Liu, Jingpeng Li, Yuchen Huang, Wang Zhong, Yanru Chen and Liangyin Chen
Sensors 2025, 25(18), 5802; https://doi.org/10.3390/s25185802 - 17 Sep 2025
Abstract
With the rapid expansion of digital knowledge platforms and intelligent information systems, organizations and communities are producing vast amounts of unstructured knowledge data, including annotated corpora, technical diagrams, collaborative whiteboard content, and domain-specific multimedia archives. However, knowledge sharing across institutions is hindered by privacy risks, high communication overhead, and fragmented data ownership. Federated learning promises to overcome these barriers by enabling collaborative model training without exchanging raw knowledge artifacts, but its success depends on motivating data holders to bear the additional computational and communication costs. Most existing incentive schemes, based on non-cooperative game formulations, neglect unstructured interactions and communication efficiency, limiting their applicability in knowledge-driven scenarios. To address these challenges, we introduce SC-NBTI, a smart contract and Nash bargaining-based incentive framework for federated learning in knowledge collaboration environments. We cast the reward allocation problem as a cooperative game, devise a heuristic algorithm to approximate the NP-hard Nash bargaining solution, and integrate a probabilistic gradient sparsification method to trim communication costs while safeguarding privacy. Experiments on the FMNIST image classification task show that SC-NBTI requires fewer training rounds while achieving 5.89% higher accuracy than the DRL-Incentive baseline. Full article
(This article belongs to the Section Internet of Things)
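
For the simplest case of linear utilities, the Nash bargaining solution that the framework approximates has a closed form: the surplus over the participants' disagreement payoffs is split equally. A sketch with hypothetical budget and disagreement values; SC-NBTI approximates a harder, NP-hard variant heuristically:

```python
# Nash bargaining split for linear utilities: maximize the product of
# gains (r_i - d_i) subject to sum(r_i) = B and r_i >= d_i, whose solution
# shares the surplus equally. Budget and disagreement values are hypothetical.
def nash_bargaining_split(budget, disagreement):
    surplus = budget - sum(disagreement)
    assert surplus >= 0, "budget must cover all disagreement payoffs"
    share = surplus / len(disagreement)          # equal surplus per participant
    return [d + share for d in disagreement]

rewards = nash_bargaining_split(budget=100.0, disagreement=[10.0, 25.0, 5.0])
print(rewards)   # [30.0, 45.0, 25.0]
```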

26 pages, 2889 KB  
Article
Advanced Implementation of the Asymmetric Distribution Expectation-Maximum Algorithm in Fault-Tolerant Control for Turbofan Acceleration
by Xinhai Zhang, Jia Geng, Kang Wang, Ming Li and Zhiping Song
Aerospace 2025, 12(9), 829; https://doi.org/10.3390/aerospace12090829 - 16 Sep 2025
Abstract
For the safety and performance of turbofan engines, fault-tolerant control of acceleration schedules is becoming increasingly necessary. However, traditional probabilistic approaches struggle to satisfy the single-side surge boundary limits and to handle the control asymmetry. Moreover, the baseline fault tolerance of the acceleration schedule cannot depend on whether fault detection is available, and model-dependent data-driven approaches are inherently limited in generalizability. To address these challenges, this paper proposes a probabilistic viewpoint belonging to neither the frequentist nor the Bayesian school, along with the asymmetric distribution expectation-maximum algorithm (ADEMA) built on this viewpoint, and presents their detailed theoretical derivations. Because the surge boundary raises the safety requirements for acceleration control, the simulations and verifications consider disturbance combinations involving a single significant fault alongside normal deviations from other factors, including minor faults. Under such disturbances, ADEMA effectively prevents the acceleration process from approaching the surge boundary, both at sea level and within the flight envelope. It demonstrates the smallest median estimation error (0.27% at sea level and 0.96% within the flight envelope) compared to other methods, such as the Bayesian weighted average method. Although its performance retention is not exceptionally strong, its independence from model data makes it a valuable reference. Full article
(This article belongs to the Section Aeronautics)
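
For contrast with ADEMA's asymmetric distributions, the standard symmetric baseline is expectation-maximization on a Gaussian mixture. A generic sketch with hypothetical data, shown only to fix the EM mechanics the paper builds on, not the paper's asymmetric variant:

```python
import numpy as np

# Generic two-component Gaussian-mixture EM: the symmetric baseline that
# ADEMA departs from (the paper replaces the symmetric component
# distributions with asymmetric ones to respect the one-sided surge
# boundary). Data and initialization here are hypothetical.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])

w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 4.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: responsibility of each component for each sample
    pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations
    nk = r.sum(axis=0)
    w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
print(mu, sd, w)
```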

13 pages, 1001 KB  
Article
Transient-Aware Multi-Objective Optimization of Water Distribution Systems for Cost and Fire Flow Reliability
by Bongseog Jung, Dongwon Ko and Sanghyun Kim
Sustainability 2025, 17(18), 8274; https://doi.org/10.3390/su17188274 - 15 Sep 2025
Abstract
Urban water distribution systems, as integral parts of underground pipeline networks, face challenges from aging infrastructure, operational demands, and transient pressure surges that can compromise structural integrity and service reliability. This work introduces a cost-oriented multi-objective design framework that explicitly accounts for both the likelihood of fire flow failure and the risks posed by transient pressures. The approach links a probabilistic reliability model with a transient pressure evaluation module, and couples both within a non-dominated sorting genetic algorithm to generate Pareto-optimal design solutions. Design solutions are constrained to maintain transient pressures within permissible limits, ensuring enhanced pipeline safety while optimizing capital costs. Case studies show that adopting a minimum 150 mm distribution main improves fire flow capacity and reduces transient-induced failure risks. The proposed method provides a predictive, computational tool that can be integrated into digital twin environments, supporting sustainable infrastructure planning, long-term monitoring, and proactive maintenance for resilient urban water supply systems. Full article
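
At the core of the non-dominated sorting step is the Pareto dominance test over the two objectives, capital cost and fire-flow failure probability. A sketch with hypothetical candidate designs:

```python
import numpy as np

# Pareto-dominance filter: a design is kept unless another design is at
# least as good on both objectives and strictly better on at least one.
# The candidate designs below are hypothetical.
designs = np.array([   # columns: [capital cost (M$), P(fire-flow failure)]
    [1.2, 0.08],
    [1.5, 0.03],
    [1.1, 0.12],
    [1.6, 0.04],       # dominated by [1.5, 0.03]
])

def nondominated(f):
    keep = np.ones(len(f), dtype=bool)
    for i in range(len(f)):
        dominates_i = np.all(f <= f[i], axis=1) & np.any(f < f[i], axis=1)
        keep[i] = not dominates_i.any()
    return keep

print(designs[nondominated(designs)])   # the Pareto-optimal designs
```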

37 pages, 3679 KB  
Review
Application of Artificial Intelligence in Hydrological Modeling for Streamflow Prediction in Ungauged Watersheds: A Review
by Jerome G. Gacu, Cris Edward F. Monjardin, Ronald Gabriel T. Mangulabnan and Jerime Chris F. Mendez
Water 2025, 17(18), 2722; https://doi.org/10.3390/w17182722 - 14 Sep 2025
Abstract
Streamflow prediction in ungauged watersheds remains a critical challenge in hydrological science due to the absence of in situ measurements, particularly in remote, data-scarce, and developing regions. This review synthesizes recent advancements in artificial intelligence (AI) for streamflow modeling, focusing on machine learning (ML), deep learning (DL), and hybrid modeling frameworks. Three core methodological domains are examined: regionalization techniques that transfer models from gauged to ungauged basins using physiographic similarity and transfer learning; synthetic data generation through proxy variables such as NDVI, soil moisture, and digital elevation models; and model performance evaluation using both deterministic and probabilistic metrics. Findings from recent literature consistently demonstrate that AI-based models, especially Long Short-Term Memory (LSTM) networks and hybrid attention-based architectures, outperform traditional conceptual and physically based models in capturing nonlinear hydrological responses across diverse climatic and physiographic settings. The integration of AI with remote sensing enhances generalizability, particularly in ungauged and human-impacted basins. This review also addresses several persistent research gaps, including inconsistencies in model evaluation protocols, limited transferability across heterogeneous regions, a lack of reproducibility and open-source tools, and insufficient integration of physical hydrological knowledge into AI models. To bridge these gaps, future research should prioritize the development of physics-informed AI frameworks, standardized benchmarking datasets, uncertainty quantification methods, and interpretable modeling tools to support robust, scalable, and operational streamflow forecasting in ungauged watersheds. Full article
(This article belongs to the Special Issue Application of Machine Learning in Hydrologic Sciences)
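
A minimal PyTorch sketch of the LSTM setup the review finds dominant, mapping a window of daily forcings to next-day discharge; the feature count, window length, and layer sizes are hypothetical:

```python
import torch
import torch.nn as nn

# Minimal LSTM regressor for streamflow: a sequence of daily forcings
# (e.g., rainfall, temperature, NDVI) maps to the next day's discharge.
# All dimensions below are hypothetical.
class StreamflowLSTM(nn.Module):
    def __init__(self, n_features=5, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict from the last time step

model = StreamflowLSTM()
forcings = torch.randn(8, 365, 5)         # 8 basins, one year of daily inputs
discharge = model(forcings)               # (8, 1) predicted streamflow
```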
