Search Results (453)

Search Parameters:
Keywords = generalized information criterion

21 pages, 4569 KB  
Article
Parameter Estimation of MSNBurr-Based Hidden Markov Model: A Simulation Study
by Didik Bani Unggul, Nur Iriawan and Irhamah Irhamah
Symmetry 2025, 17(11), 1931; https://doi.org/10.3390/sym17111931 - 11 Nov 2025
Abstract
The Hidden Markov Model (HMM) is a well-known probabilistic framework for representing sequential phenomena governed by doubly stochastic processes. Specifically, it features a Markov chain with hidden (unobserved) states, where each state emits observable values through a state-conditioned emission distribution at every time step. In this framework, selecting an appropriate emission distribution is essential, because an unsuitable choice may prevent the HMM from accurately representing the observed phenomenon. To accommodate emission phenomena with situational symmetry, we propose an HMM framework with an adaptive emission distribution, named MSNBurr-HMM. This method is based on the MSNBurr distribution, which can effectively represent symmetric, right-skewed, and left-skewed emission patterns. We also provide its parameter estimation algorithm using the Baum–Welch algorithm. For model validation, we conduct fitting simulations across diverse scenarios and compare the findings against Gaussian-HMM and Fernández–Steel Skew Normal-HMM using log-likelihood, the Akaike Information Criterion (AIC), the corrected AIC (AICc), and the Bayesian Information Criterion (BIC). The results demonstrate that the algorithm estimates the target parameters accurately in all tested scenarios. In terms of performance, MSNBurr-HMM generally outperforms the competing models across all evaluation metrics, confirming the promise of the proposed method.
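
The abstract ranks candidate HMMs by log-likelihood, AIC, AICc, and BIC. As a reference point only, a minimal sketch of those criteria computed from a fitted model's maximized log-likelihood, parameter count k, and sample size n (generic textbook formulas, independent of the paper's MSNBurr-HMM implementation; the example numbers are hypothetical):

```python
import numpy as np

def information_criteria(loglik: float, k: int, n: int) -> dict:
    """Model-selection criteria from a maximized log-likelihood.

    loglik : maximized log-likelihood of the fitted model
    k      : number of free parameters
    n      : number of observations
    """
    aic = -2.0 * loglik + 2.0 * k
    aicc = aic + (2.0 * k * (k + 1)) / (n - k - 1)  # small-sample correction
    bic = -2.0 * loglik + k * np.log(n)
    return {"AIC": aic, "AICc": aicc, "BIC": bic}

# Hypothetical example: two fitted HMMs on the same series of length 500;
# lower values indicate the preferred model under each criterion.
print(information_criteria(loglik=-1432.7, k=8, n=500))
print(information_criteria(loglik=-1420.1, k=14, n=500))
```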

51 pages, 56694 KB  
Article
Spatial Flows of Information Entropy as Indicators of Climate Variability and Extremes
by Bernard Twaróg
Entropy 2025, 27(11), 1132; https://doi.org/10.3390/e27111132 - 31 Oct 2025
Viewed by 269
Abstract
The objective of this study is to analyze spatial entropy flows that reveal the directional dynamics of climate change—patterns that remain obscured in traditional statistical analyses. This approach enables the identification of pathways for “climate information transport”, highlights associations with atmospheric circulation types, and allows for the localization of both sources and “informational voids”—regions where entropy is dissipated. The analytical framework is grounded in a quantitative assessment of long-term climate variability across Europe over the period 1901–2010, utilizing Shannon entropy as a measure of atmospheric system uncertainty and variability. The underlying assumption is that the variability of temperature and precipitation reflects the inherently dynamic character of climate as a nonlinear system prone to fluctuations. The study focuses on calculating entropy estimated within a 70-year moving window for each calendar month, using bivariate distributions of temperature and precipitation modeled with copula functions. Marginal distributions were selected based on the Akaike Information Criterion (AIC). To improve the accuracy of the estimation, a block bootstrap resampling technique was applied, along with numerical integration to compute the Shannon entropy values at each of the 4165 grid points with a spatial resolution of 0.5° × 0.5°. The results indicate that entropy and its derivative are complementary indicators of atmospheric system instability—entropy proving effective in long-term diagnostics, while its derivative provides insight into the short-term forecasting of abrupt changes. A lag analysis and Spearman rank correlation between entropy values and their potential supported the investigation of how circulation variability influences the occurrence of extreme precipitation events. Particularly noteworthy is the temporal derivative of entropy, which revealed strong nonlinear relationships between local dynamic conditions and climatic extremes. A spatial analysis of the information entropy field was also conducted, revealing distinct structures with varying degrees of climatic complexity on a continental scale. This field appears to be clearly structured, reflecting not only the directional patterns of change but also the potential sources of meteorological fluctuations. A field-theory-based spatial classification allows for the identification of transitional regions—areas with heightened susceptibility to shifts in local dynamics—as well as entropy source and sink regions. The study is embedded within the Fokker–Planck formalism, wherein the change in the stochastic distribution characterizes the rate of entropy production. In this context, regions of positive divergence are interpreted as active generators of variability, while sink regions function as stabilizing zones that dampen fluctuations.
(This article belongs to the Special Issue 25 Years of Sample Entropy)
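
The entropy step reduces to integrating -f ln f for a bivariate density over a grid. A minimal sketch, using a bivariate normal as a placeholder density (the study instead builds the joint temperature-precipitation density from AIC-selected marginals and fitted copulas, with block-bootstrap resampling):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import multivariate_normal

def shannon_entropy_2d(pdf, x, y):
    """Differential Shannon entropy H = -int int f ln f dx dy on a regular grid."""
    X, Y = np.meshgrid(x, y, indexing="ij")
    f = pdf(np.dstack([X, Y]))
    integrand = np.where(f > 0, f * np.log(f), 0.0)
    return -trapezoid(trapezoid(integrand, y, axis=1), x)

# Placeholder joint density; for a bivariate normal the analytic value is
# ln(2*pi*e) + 0.5*ln(det(Sigma)), which the grid estimate should reproduce.
rv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.6], [0.6, 1.0]])
x = np.linspace(-6, 6, 401)
y = np.linspace(-6, 6, 401)
print(shannon_entropy_2d(rv.pdf, x, y))
```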

21 pages, 5085 KB  
Article
Finite Element Model Updating of a Steel Cantilever Beam: Experimental Validation and Digital Twin Integration
by Mohammad Amin Oyarhossein, Gabriel Sugiyama, Fernanda Rodrigues and Hugo Rodrigues
Buildings 2025, 15(21), 3890; https://doi.org/10.3390/buildings15213890 - 28 Oct 2025
Viewed by 362
Abstract
Accurate identification of modal properties in a steel cantilever beam is crucial for enhancing numerical models and supporting structural health monitoring, particularly when numerical and experimental data are combined. This study investigates the modal system identification of a steel cantilever beam using finite element method (FEM) simulations, which are validated by experimental testing. The beam was bolted to a reinforced concrete block and subjected to dynamic testing, where natural frequencies and mode shapes were extracted through Frequency Domain Decomposition (FDD). The experimental outcomes were compared with FEM predictions from SAP2000, and discrepancies were analysed using the Modal Assurance Criterion (MAC). A model updating procedure was applied, refining boundary conditions and considering sensor mass effects, which improved model accuracy. The updated FEM achieved closer agreement, with frequency deviations reduced to less than 4% and MAC values above 0.9 for the first three modes. Beyond validation, the research links the updated FEM results with a Building Information Modelling (BIM) framework to enable the development of a digital twin of the beam. A workflow was designed to connect vibration monitoring data with BIM, providing visualisation of structural performance through colour-coded alerts. The findings confirm the effectiveness of FEM updating in generating reliable modal representations and demonstrate the potential of BIM-based digital twins for advancing structural condition assessment, maintenance planning and decision-making in civil engineering practice.
(This article belongs to the Collection Innovation in Structural Analysis and Dynamics for Constructions)
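
The MAC used to compare experimental and FEM mode shapes has a standard closed form. A minimal sketch for real-valued mode shapes, with hypothetical sensor values (not the study's measurements):

```python
import numpy as np

def mac(phi_exp: np.ndarray, phi_fem: np.ndarray) -> float:
    """Modal Assurance Criterion for real-valued mode shapes:
    MAC = (phi_e . phi_a)^2 / ((phi_e . phi_e) * (phi_a . phi_a)); 1 means perfect correlation."""
    num = (phi_exp @ phi_fem) ** 2
    return float(num / ((phi_exp @ phi_exp) * (phi_fem @ phi_fem)))

# Hypothetical first-mode shape sampled at four sensor locations along the beam.
phi_test = np.array([0.12, 0.41, 0.78, 1.00])
phi_model = np.array([0.10, 0.43, 0.80, 1.00])
print(f"MAC = {mac(phi_test, phi_model):.3f}")  # values above 0.9 indicate good agreement
```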

34 pages, 7348 KB  
Article
Unsupervised Profiling of Operator Macro-Behaviour in the Italian Ancillary Service Market via Stability-Driven k-Means
by Mahmood Hosseini Imani and Atefeh Khalili Param
Energies 2025, 18(20), 5446; https://doi.org/10.3390/en18205446 - 15 Oct 2025
Viewed by 281
Abstract
The transition toward sustainability in the electric power sector, driven by increasing renewable integration, has amplified the need to understand complex market dynamics. This study addresses a critical gap in the existing literature by presenting a systematic and reproducible methodology for profiling generating-unit operators’ macro-behaviour in the Italian Ancillary Services market (MSD). Focusing on the Northern zone (NORD) during the pivotal period of 2022–2024, a stability-driven k-means clustering framework is applied to a dataset of capacity-normalized features from the day-ahead market (MGP), intraday market (MI), and MSD. The number of clusters is determined using the Gap Statistic with a 1-SE criterion and validated with bootstrap stability (Adjusted Rand Index), resulting in a robust and reproducible 13-group taxonomy. The use of up-to-date data (2022–2024) enabled a unique investigation into post-2021 market phenomena, including the effects of geopolitical events and extreme price volatility. The findings reveal clear operator-coherent archetypes, ranging from units that mainly trade in the day-ahead market to specialists that monetize flexibility in the MSD. The analysis further highlights the dominance of thermoelectric and dispatchable hydro technologies in providing ancillary services, while illustrating varying degrees of responsiveness to price signals. The proposed taxonomy offers regulators and policymakers a practical tool to identify inefficiencies, monitor concentration risks, and inform future market design and policy decisions.
(This article belongs to the Special Issue Policy and Economic Analysis of Energy Systems: 2nd Edition)
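
The cluster-number step follows the gap statistic with a one-standard-error rule. A generic sketch with scikit-learn's KMeans on toy data, using uniform reference sets over the bounding box (the paper additionally validates k with bootstrap Adjusted Rand Index, which is not shown):

```python
import numpy as np
from sklearn.cluster import KMeans

def gap_statistic(X, k_max=10, n_refs=20, random_state=0):
    """Tibshirani's gap statistic with the one-standard-error selection rule."""
    rng = np.random.default_rng(random_state)
    lo, hi = X.min(axis=0), X.max(axis=0)
    gaps, sk = [], []
    for k in range(1, k_max + 1):
        wk = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X).inertia_
        ref_logs = []
        for _ in range(n_refs):  # reference datasets: uniform over the data's bounding box
            Xref = rng.uniform(lo, hi, size=X.shape)
            km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(Xref)
            ref_logs.append(np.log(km.inertia_))
        ref_logs = np.asarray(ref_logs)
        gaps.append(ref_logs.mean() - np.log(wk))
        sk.append(ref_logs.std(ddof=1) * np.sqrt(1.0 + 1.0 / n_refs))
    for k in range(1, k_max):  # 1-SE rule: smallest k with Gap(k) >= Gap(k+1) - s(k+1)
        if gaps[k - 1] >= gaps[k] - sk[k]:
            return k, gaps, sk
    return k_max, gaps, sk

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(60, 2)) for c in ([0, 0], [3, 0], [0, 3])])
print(gap_statistic(X, k_max=6)[0])  # expected to pick k = 3 for this toy data
```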

21 pages, 512 KB  
Article
A Decision Tree Classification Algorithm Based on Two-Term RS-Entropy
by Ruoyue Mao, Xiaoyang Shi and Zhiyan Shi
Entropy 2025, 27(10), 1069; https://doi.org/10.3390/e27101069 - 14 Oct 2025
Viewed by 446
Abstract
Classification is an important task in the field of machine learning. Decision tree algorithms are a popular choice for handling classification tasks due to their high accuracy, simple algorithmic process, and good interpretability. Traditional decision tree algorithms, such as ID3, C4.5, and CART, differ primarily in their criteria for splitting trees. Shannon entropy, Gini index, and mean squared error are all examples of measures that can be used as splitting criteria. However, their performance varies on different datasets, making it difficult to determine the optimal splitting criterion. As a result, the algorithms lack flexibility. In this paper, we introduce the concept of generalized entropy from information theory, which unifies many splitting criteria under one free parameter, as the split criterion for decision trees. We propose a new decision tree algorithm called RSE (RS-Entropy decision tree). Additionally, we improve upon a two-term information measure method by incorporating penalty terms and coefficients into the split criterion, leading to a new decision tree algorithm called RSEIM (RS-Entropy Information Method). In theory, the improved algorithms RSE and RSEIM are more flexible due to the presence of multiple free parameters. In experiments conducted on several datasets, using genetic algorithms to optimize the parameters, our proposed RSE and RSEIM methods significantly outperform traditional decision tree methods in terms of classification accuracy without increasing the complexity of the resulting trees.
(This article belongs to the Section Multidisciplinary Applications)
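
The RS-entropy family and the RSE/RSEIM criteria are defined in the article itself. As a generic illustration of a one-parameter entropy family used as a split criterion, a sketch based on Tsallis entropy, which recovers Shannon entropy as q approaches 1 and the Gini index at q = 2 (a stand-in for illustration, not the paper's measure):

```python
import numpy as np

def tsallis_impurity(labels, q=2.0):
    """One-parameter impurity family: q -> 1 gives Shannon entropy, q = 2 gives the Gini index."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    if np.isclose(q, 1.0):
        return float(-(p * np.log(p)).sum())
    return float((1.0 - (p ** q).sum()) / (q - 1.0))

def split_gain(parent, left, right, q=2.0):
    """Impurity decrease of a candidate binary split, weighted by child sizes."""
    n, nl, nr = len(parent), len(left), len(right)
    return (tsallis_impurity(parent, q)
            - (nl / n) * tsallis_impurity(left, q)
            - (nr / n) * tsallis_impurity(right, q))

y = np.array([0, 0, 0, 1, 1, 1, 1, 1])
print(split_gain(y, y[:3], y[3:], q=2.0))  # perfect split: gain equals the parent impurity
```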

17 pages, 1996 KB  
Article
Short-Term Probabilistic Prediction of Photovoltaic Power Based on Bidirectional Long Short-Term Memory with Temporal Convolutional Network
by Weibo Yuan, Jinjin Ding, Li Zhang, Jingyi Ni and Qian Zhang
Energies 2025, 18(20), 5373; https://doi.org/10.3390/en18205373 - 12 Oct 2025
Viewed by 380
Abstract
To mitigate the impact of photovoltaic (PV) power generation uncertainty on power systems and accurately depict the PV output range, this paper proposes a quantile regression probabilistic prediction model (TCN-QRBiLSTM) integrating a Temporal Convolutional Network (TCN) and Bidirectional Long Short-Term Memory (BiLSTM). First, the historical dataset is divided into three weather scenarios (sunny, cloudy, and rainy) to generate training and test samples under the same weather conditions. Second, a TCN is used to extract local temporal features, and BiLSTM captures the bidirectional temporal dependencies between power and meteorological data. To address the non-differentiability of the traditional interval-prediction quantile loss function, the Huber norm is introduced as a differentiable approximation of the original loss, yielding an improved Quantile Regression (QR) model that generates confidence intervals. Finally, Kernel Density Estimation (KDE) is integrated to output probability density prediction results. Taking a distributed PV power station in East China as the research object, using data from July to September 2022 (15 min resolution, 4128 samples), comparative verification with TCN-QRLSTM and QRBiLSTM models shows that under a 90% confidence level, the Prediction Interval Coverage Probability (PICP) of the proposed model under sunny/cloudy/rainy weather reaches 0.9901, 0.9553, and 0.9674, respectively, which is 0.56–3.85% higher than that of the comparative models; the Prediction Interval Normalized Average Width (PINAW) is 0.1432, 0.1364, and 0.1246, respectively, which is 1.35–6.49% lower than that of the comparative models; the comprehensive interval evaluation index (I) is the smallest; and the Bayesian Information Criterion (BIC) is the lowest under all three weather conditions. The results demonstrate that the model can effectively quantify and mitigate PV power generation uncertainty, verifying its reliability and superiority in short-term PV power probabilistic prediction, and it has practical significance for ensuring the safe and economical operation of power grids with high PV penetration.
(This article belongs to the Special Issue Advanced Load Forecasting Technologies for Power Systems)
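
A sketch of two pieces named in the abstract: a Huber-smoothed pinball (quantile) loss and the PICP/PINAW interval metrics. The smoothing shown is one common construction and may differ from the paper's exact formulation; all numbers are hypothetical.

```python
import numpy as np

def huber(u, delta=0.05):
    """Huber function: quadratic near zero, linear in the tails, differentiable at the kink."""
    a = np.abs(u)
    return np.where(a <= delta, 0.5 * u ** 2, delta * (a - 0.5 * delta))

def smoothed_pinball(y_true, y_pred, tau, delta=0.05):
    """Pinball loss at quantile tau with its kink replaced by the Huber function."""
    u = y_true - y_pred
    w = np.where(u >= 0, tau, 1.0 - tau)
    return float(np.mean(w * huber(u, delta)))

def picp_pinaw(y_true, lower, upper):
    """Prediction Interval Coverage Probability and Normalized Average Width."""
    picp = np.mean((y_true >= lower) & (y_true <= upper))
    pinaw = np.mean(upper - lower) / (y_true.max() - y_true.min())
    return picp, pinaw

# Hypothetical normalized PV-power observations with a 90% prediction interval.
y = np.array([0.62, 0.71, 0.55, 0.80, 0.47])
lo = np.array([0.55, 0.60, 0.45, 0.70, 0.40])
hi = np.array([0.75, 0.85, 0.65, 0.95, 0.60])
print(picp_pinaw(y, lo, hi), smoothed_pinball(y, (lo + hi) / 2, tau=0.5))
```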

20 pages, 5063 KB  
Article
AI Diffusion Models Generate Realistic Synthetic Dental Radiographs Using a Limited Dataset
by Brian Kirkwood, Byeong Yeob Choi, James Bynum and Jose Salinas
J. Imaging 2025, 11(10), 356; https://doi.org/10.3390/jimaging11100356 - 11 Oct 2025
Viewed by 702
Abstract
Generative Artificial Intelligence (AI) has the potential to address the limited availability of dental radiographs for the development of Dental AI systems by creating clinically realistic synthetic dental radiographs (SDRs). Evaluation of artificially generated images requires both expert review and objective measures of fidelity. A stepwise approach was used to process 10,000 dental radiographs. First, a single dentist screened images to determine whether a specific image selection criterion was met; this identified 225 images. From these, 200 images were randomly selected for training an AI image generation model. Second, 100 images were randomly selected from the previous training dataset and evaluated by four dentists; the expert review identified 57 images that met image selection criteria to refine training for two additional AI models. The three models were used to generate 500 SDRs each, and the clinical realism of the SDRs was assessed through expert review. In addition, the SDRs generated by each model were objectively evaluated using quantitative metrics: Fréchet Inception Distance (FID) and Kernel Inception Distance (KID). Evaluation of the SDRs by a dentist determined that expert-informed curation improved SDR realism, and refinement of the model architecture produced further gains. FID and KID analysis confirmed that expert input and technical refinement improve image fidelity. The convergence of subjective and objective assessments strengthens confidence that the refined model architecture can serve as a foundation for SDR image generation, while highlighting the importance of expert-informed data curation and domain-specific evaluation metrics.
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
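
FID has a standard closed form over the means and covariances of Inception feature vectors. A sketch with random placeholder features (real use would pass Inception-v3 activations of real and synthetic radiographs; KID is not shown):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(feats_real, feats_fake):
    """FID between two feature sets (rows = images):
    ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 * sqrtm(C_r @ C_f))."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c_r = np.cov(feats_real, rowvar=False)
    c_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(c_r @ c_f)
    if np.iscomplexobj(covmean):  # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(c_r + c_f - 2.0 * covmean))

# Toy 64-dimensional "features" for 200 real and 200 synthetic images.
rng = np.random.default_rng(0)
real = rng.normal(size=(200, 64))
fake = rng.normal(loc=0.1, size=(200, 64))
print(frechet_inception_distance(real, fake))
```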

20 pages, 464 KB  
Article
A Generalized Estimation Strategy for the Finite Population Median Using Transformation Methods Under a Two-Phase Sampling Design
by Huda M. Alshanbari
Symmetry 2025, 17(10), 1696; https://doi.org/10.3390/sym17101696 - 10 Oct 2025
Viewed by 239
Abstract
This paper introduces an efficient, improved class of estimators for the finite population median under a two-phase sampling scheme. The proposed estimators are developed using transformation techniques to improve the estimation precision over that of conventional approaches. Two-phase sampling is employed to reduce data collection costs and enhance estimation accuracy, especially when complete auxiliary information is not easily available. Expressions for the bias and mean squared error (MSE) are derived using a first-order approximation. To assess performance, simulation studies were carried out using data generated from various statistical distributions, alongside several real-life datasets. Estimators are compared using the mean squared error criterion, and the results show that the proposed methods consistently outperform the existing ones in terms of accuracy and efficiency. Graphical comparisons further support the improved performance of the new estimators, highlighting their practical effectiveness in median estimation problems.
(This article belongs to the Section Mathematics)
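
The proposed estimators and their bias and MSE expressions are given in the paper; the sketch below only illustrates the evaluation criterion itself: Monte Carlo bias and MSE of a median estimator (here the plain sample median of a lognormal population, with assumed parameters):

```python
import numpy as np

def empirical_bias_mse(estimator, sampler, true_value, n=200, reps=5000, seed=1):
    """Monte Carlo estimates of the bias and mean squared error of an estimator."""
    rng = np.random.default_rng(seed)
    est = np.array([estimator(sampler(rng, n)) for _ in range(reps)])
    return est.mean() - true_value, np.mean((est - true_value) ** 2)

# Sample median of a lognormal(0, 1) population, whose true median is exp(0) = 1.
sampler = lambda rng, n: rng.lognormal(mean=0.0, sigma=1.0, size=n)
bias, mse = empirical_bias_mse(np.median, sampler, true_value=1.0)
print(f"bias = {bias:.4f}, MSE = {mse:.4f}")
```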

26 pages, 5202 KB  
Article
Time-Varying Bivariate Modeling for Predicting Hydrometeorological Trends in Jakarta Using Rainfall and Air Temperature Data
by Suci Nur Setyawati, Sri Nurdiati, I Wayan Mangku, Ionel Haidu and Mohamad Khoirun Najib
Hydrology 2025, 12(10), 252; https://doi.org/10.3390/hydrology12100252 - 26 Sep 2025
Viewed by 696
Abstract
Changes in rainfall patterns and irregular air temperature have become essential issues in analyzing hydrometeorological trends in Jakarta. This study aims to select the best copula from among stationary and non-stationary copula models, and to visualize and explore the relationship between rainfall and air temperature in order to predict hydrometeorological trends. The methods used include combining univariate Lognormal and Generalized Extreme Value (GEV) distributions with Clayton, Gumbel, and Frank copulas, as well as parameter estimation using the fminsearch algorithm, Markov Chain Monte Carlo (MCMC) simulation, and a combination of both. The results show that the best model is the non-stationary Clayton copula estimated using MCMC simulation, which has the lowest Akaike Information Criterion (AIC) value. This model effectively captures extreme dependence in the lower tail of the distribution, indicating a potential increase in extreme low events such as cold droughts. Visualization of the best model through contour plots shows a shifting center of the distribution over time. This study contributes to developing dynamic hydrometeorological models for adaptation planning under changing hydrometeorological trends in Indonesia.
(This article belongs to the Special Issue Trends and Variations in Hydroclimatic Variables: 2nd Edition)
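
A reduced sketch of the AIC-based marginal selection step: fit Lognormal and GEV candidates by maximum likelihood with scipy.stats and keep the lower-AIC model (synthetic data; the study applies this to observed Jakarta series and extends the comparison to the copula models):

```python
import numpy as np
from scipy import stats

def aic(loglik: float, k: int) -> float:
    return -2.0 * loglik + 2.0 * k

def select_marginal(sample):
    """Fit Lognormal and GEV marginals by maximum likelihood and pick the lower-AIC model."""
    fits = {
        "lognormal": (stats.lognorm, stats.lognorm.fit(sample)),
        "gev": (stats.genextreme, stats.genextreme.fit(sample)),
    }
    scores = {name: aic(dist.logpdf(sample, *params).sum(), len(params))
              for name, (dist, params) in fits.items()}
    return min(scores, key=scores.get), scores

# Synthetic rainfall-like sample; the study fits observed monthly data instead.
rng = np.random.default_rng(42)
sample = rng.gamma(shape=2.0, scale=60.0, size=240)
print(select_marginal(sample))
```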

16 pages, 9648 KB  
Article
A Novel Classification Framework for VLF/LF Lightning-Radiation Electric-Field Waveforms
by Wenxing Sun, Tingxiu Jiang, Duanjiao Li, Yun Zhang, Xinru Li, Yunlong Wang and Jiachen Gao
Atmosphere 2025, 16(10), 1130; https://doi.org/10.3390/atmos16101130 - 26 Sep 2025
Viewed by 352
Abstract
The classification of very-low-frequency and low-frequency (VLF/LF) lightning-radiation electric-field waveforms is of paramount importance for lightning-disaster prevention and mitigation. However, traditional waveform classification methods suffer from the complex characteristics of lightning waveforms, such as non-stationarity, strong noise interference, and feature coupling, limiting classification accuracy and generalization. To address this problem, a novel framework is proposed for VLF/LF lightning-radiated electric-field waveform classification. Firstly, an improved Kalman filter (IKF) is meticulously designed to eliminate possible high-frequency interferences (such as atmospheric noise, electromagnetic radiation from power systems, and electronic noise from measurement equipment) embedded within the waveforms based on the maximum entropy criterion. Subsequently, an attention-based multi-fusion convolutional neural network (AMCNN) is developed for waveform classification. In the AMCNN architecture, waveform information is comprehensively extracted and enhanced through an optimized feature fusion structure, which allows for a more thorough consideration of feature diversity, thereby significantly improving the classification accuracy. An actual dataset from Anhui province in China is used to validate the proposed classification framework. Experimental results demonstrate that our framework achieves a classification accuracy of 98.9% within a processing time of no more than 5.3 ms, proving its superior classification performance for lightning-radiation electric-field waveforms.
(This article belongs to the Section Meteorology)
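
The improved Kalman filter and its maximum-entropy-based criterion are specific to the article. For orientation only, a minimal sketch of a standard scalar Kalman filter (random-walk state model) smoothing a synthetic noisy transient; it does not reproduce the IKF:

```python
import numpy as np

def kalman_denoise(z, q=1e-4, r=1e-2):
    """Standard scalar Kalman filter with a random-walk state model (predict/update only)."""
    x_hat = np.empty_like(z)
    x, p = z[0], 1.0
    for k, zk in enumerate(z):
        p = p + q              # predict: state assumed constant, uncertainty grows by q
        kg = p / (p + r)       # Kalman gain given measurement noise variance r
        x = x + kg * (zk - x)  # update with the new noisy sample
        p = (1.0 - kg) * p
        x_hat[k] = x
    return x_hat

# Hypothetical noisy transient (double-exponential pulse plus white noise).
t = np.linspace(0.0, 1e-3, 2000)
clean = np.exp(-t / 2e-4) - np.exp(-t / 2e-5)
noisy = clean + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(np.abs(kalman_denoise(noisy) - clean).mean())
```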

26 pages, 10719 KB  
Article
MPGH-FS: A Hybrid Feature Selection Framework for Robust Multi-Temporal OBIA Classification
by Xiangchao Xu, Huijiao Qiao, Zhenfan Xu and Shuya Hu
Sensors 2025, 25(18), 5933; https://doi.org/10.3390/s25185933 - 22 Sep 2025
Viewed by 540
Abstract
Object-Based Image Analysis (OBIA) generates high-dimensional features that frequently induce the curse of dimensionality, impairing classification efficiency and generalizability in high-resolution remote sensing images. To address these challenges while simultaneously overcoming the limitations of single-criterion feature selection and enhancing temporal adaptability, we propose a novel feature selection framework named Mutual information Pre-filtering and Genetic-Hill climbing hybrid Feature Selection (MPGH-FS), which integrates Mutual Information Correlation Coefficient (MICC) pre-filtering, Genetic Algorithm (GA) global search, and Hill Climbing (HC) local optimization. Experiments based on multi-temporal GF-2 imagery from 2018 to 2023 demonstrated that MPGH-FS could reduce the feature dimension from 232 to 9, and it achieved the highest Overall Accuracy (OA) of 85.55% and a Kappa coefficient of 0.75 in full-scene classification, with training and inference times limited to 6 s and 1 min, respectively. Cross-temporal transfer experiments further validated the method’s robustness to inter-annual variation within the same area, with classification accuracy fluctuations remaining below 4% across different years, outperforming comparative methods. These results confirm that MPGH-FS offers significant advantages in feature compression, classification performance, and temporal adaptability, providing a robust technical foundation for efficient and accurate multi-temporal remote sensing classification.
(This article belongs to the Special Issue Remote Sensing Image Processing, Analysis and Application)
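
A generic sketch of the pre-filtering stage, using scikit-learn's mutual_info_classif as a stand-in score for the paper's MICC (toy data; the GA global search and hill-climbing refinement are not shown):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mi_prefilter(X, y, keep_ratio=0.25, random_state=0):
    """Rank object features by mutual information with the class labels and keep the top subset."""
    mi = mutual_info_classif(X, y, random_state=random_state)
    n_keep = max(1, int(keep_ratio * X.shape[1]))
    keep_idx = np.argsort(mi)[::-1][:n_keep]
    return keep_idx, mi[keep_idx]

# Toy object-feature matrix: 300 segments x 40 features, 4 land-cover classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))
y = rng.integers(0, 4, size=300)
X[:, 0] += y  # make the first feature informative
idx, scores = mi_prefilter(X, y)
print(idx[:5], np.round(scores[:5], 3))
```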

17 pages, 918 KB  
Article
Criteria and Protocol: Assessing Generative AI Efficacy in Perceiving EULAR 2019 Lupus Classification
by Gerald H. Lushington, Sandeep Nair, Eldon R. Jupe, Bernard Rubin and Mohan Purushothaman
Diagnostics 2025, 15(18), 2409; https://doi.org/10.3390/diagnostics15182409 - 22 Sep 2025
Viewed by 564
Abstract
Background/Objectives: In clinical informatics, the term ‘information overload’ is increasingly used to describe the operational impediments of excessive documentation. While electronic health records (EHRs) are growing in abundance, many medical records (MRs) remain in legacy formats that impede efficient, systematic processing, compounding the challenges of care fragmentation. Thus, there is growing interest in using generative AI (genAI) for automated MR summarization and characterization. Methods: MRs for a set of 78 individuals were digitized. Some were known systemic lupus erythematosus (SLE) cases, while others were under evaluation for possible SLE classification. A two-pass genAI assessment strategy was implemented using the Claude 3.5 large language model (LLM) to mine MRs for information relevant to classifying SLE vs. undifferentiated connective tissue disorder (UCTD) vs. neither via the 22-criteria EULAR 2019 model. Results: Compared to clinical determination, the antinuclear antibody (ANA) criterion (whose results are crucial for classifying SLE-negative cases) exhibited favorable sensitivity (0.78 ± 0.09; 95% confidence intervals throughout) and positive predictive value (0.85 ± 0.08), but marginal specificity (0.60 ± 0.11) and an uncertain negative predictive value (0.48 ± 0.11). Averaged over the remaining 21 criteria, these four performance metrics were 0.69 ± 0.11, 0.87 ± 0.04, 0.54 ± 0.10, and 0.93 ± 0.03. Conclusions: The ANA performance statistics imply that genAI yields confident assessments of SLE negativity (per the high sensitivity) but weaker assessments of positivity. The remaining genAI criterion determinations support (per specificity) confident assertions of SLE positivity but tend to misclassify a significant fraction of clinical positives as UCTD.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
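
The reported performance metrics follow from a 2x2 confusion matrix; a sketch that reproduces the "value ± half-width" style with Wald 95% intervals, using hypothetical counts rather than the study's data:

```python
import numpy as np

def diagnostic_metrics(tp, fp, fn, tn, z=1.96):
    """Sensitivity, specificity, PPV and NPV with Wald 95% confidence half-widths."""
    def prop_ci(successes, trials):
        p = successes / trials
        return round(p, 2), round(z * np.sqrt(p * (1 - p) / trials), 2)
    return {
        "sensitivity": prop_ci(tp, tp + fn),
        "specificity": prop_ci(tn, tn + fp),
        "PPV": prop_ci(tp, tp + fp),
        "NPV": prop_ci(tn, tn + fn),
    }

# Hypothetical confusion counts for a single criterion (not the study's data).
print(diagnostic_metrics(tp=40, fp=7, fn=11, tn=20))
```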

16 pages, 3447 KB  
Article
Predicting the Dynamic Modulus of Elasticity of Logs at the Standing Tree Stage: A Site-Specific Approach to Streamline Log Trading
by Kiichi Harada, Yasutaka Nakata, Masahiko Nakazawa, Keisuke Kojiro and Keiko Nagashima
Forests 2025, 16(9), 1438; https://doi.org/10.3390/f16091438 - 9 Sep 2025
Viewed by 354
Abstract
As wooden buildings become larger and taller, wood properties such as the dynamic modulus of elasticity (MOEdyn), a criterion for evaluating structural timber, are becoming increasingly important. However, the MOEdyn of logs is rarely considered in forestry management. In this study, standing trees capable of producing logs with high MOEdyn were identified at the standing tree stage to facilitate log sales decisions based on MOEdyn values. In the generalized linear mixed model-based prediction of log MOEdyn, bucking position and site index were selected as random effects. Incorporating random effects improved the coefficient of determination to 0.651, and log MOEdyn could be predicted using the site index class, which reflects site productivity. The results indicate that detailed site conditions conventionally used to assess forest productivity are also useful for predicting the MOEdyn of logs before harvesting. Moreover, the MOEdyn of logs estimated at the standing tree stage can inform decisions regarding appropriate sales destinations.
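
A minimal sketch of a linear mixed model with a random intercept per site-index class, fitted with statsmodels on synthetic stand-in data; the article's model is a generalized linear mixed model that also includes bucking position as a random effect:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: log MOEdyn explained by a tree-level predictor,
# with a random intercept for each site-index class.
rng = np.random.default_rng(0)
n, sites = 200, np.array(["I", "II", "III", "IV"])
df = pd.DataFrame({
    "site_class": rng.choice(sites, size=n),
    "tree_predictor": rng.normal(10.0, 1.5, size=n),
})
site_effect = dict(zip(sites, [0.6, 0.2, -0.2, -0.6]))
df["log_moe"] = (2.0 + 0.15 * df["tree_predictor"]
                 + df["site_class"].map(site_effect)
                 + rng.normal(scale=0.2, size=n))

model = smf.mixedlm("log_moe ~ tree_predictor", df, groups=df["site_class"])
print(model.fit().summary())
```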

18 pages, 3048 KB  
Article
Estimation of Wheat Leaf Water Content Based on UAV Hyper-Spectral Remote Sensing and Machine Learning
by Yunlong Wu, Shouqi Yuan, Junjie Zhu, Yue Tang and Lingdi Tang
Agriculture 2025, 15(17), 1898; https://doi.org/10.3390/agriculture15171898 - 7 Sep 2025
Cited by 1 | Viewed by 667
Abstract
Leaf water content is a critical metric during the growth and development of winter wheat. Rapid and efficient monitoring of leaf water content in winter wheat is essential for achieving precision irrigation and assessing crop quality. Unmanned aerial vehicle (UAV)-based hyperspectral remote sensing technology has enormous application potential in the field of crop monitoring. In this study, a UAV platform was used to conduct six campaigns of canopy hyperspectral data acquisition and field measurement of leaf water content (LWC) across four growth stages of winter wheat. Six spectral transformations were then applied to the original spectral data and assessed through correlation analysis with wheat LWC; multiple scattering correction (MSC), standard normal variate (SNV), and first derivative (FD) were selected as the subsequent transformation methods. Additionally, competitive adaptive reweighted sampling (CARS) and the Hilbert–Schmidt independence criterion lasso (HSICLasso) were employed for feature selection to eliminate redundant information from the spectral data. Finally, three machine learning algorithms—partial least squares regression (PLSR), support vector regression (SVR), and random forest (RF)—were combined with different data preprocessing methods, and 50 random data-partition and model-evaluation experiments were conducted to compare the accuracy of the resulting models in assessing wheat LWC. The results showed significant differences in the predictive performance of the different combinations. Comparing prediction accuracy on the test set, the optimal combinations for the three models were MSC + CARS + SVR (R2 = 0.713, RMSE = 0.793, RPD = 2.097), SNV + CARS + PLSR (R2 = 0.692, RMSE = 0.866, RPD = 2.053), and FD + CARS + RF (R2 = 0.689, RMSE = 0.848, RPD = 2.002). All three models can accurately and stably predict winter wheat LWC, and the CARS feature extraction method improves prediction accuracy and enhances model stability, with the SVR algorithm showing better robustness and generalization ability.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
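
SNV and MSC as commonly defined in chemometrics (per-spectrum standardization, and regression of each spectrum against a reference spectrum), shown on toy spectra; the CARS and HSICLasso selection steps and the regression models are not reproduced here:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

def msc(spectra, reference=None):
    """Scatter correction: regress each spectrum on the mean (or supplied) reference spectrum."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        b, a = np.polyfit(ref, s, deg=1)  # s ~ a + b * ref
        corrected[i] = (s - a) / b
    return corrected

# Toy canopy spectra: 5 samples x 200 bands with multiplicative and additive scatter effects.
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0.0, 3.0, 200)) + 1.5
spectra = rng.uniform(0.8, 1.2, (5, 1)) * base + rng.uniform(-0.1, 0.1, (5, 1))
print(snv(spectra).shape, msc(spectra).shape)
```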

12 pages, 1642 KB  
Article
A Bayesian Approach for Designing Experiments Based on Information Criteria to Reduce Epistemic Uncertainty of Fuel Fracture During Loss-of-Coolant Accidents
by Shusuke Hamaguchi, Takafumi Narukawa and Takashi Takata
J. Nucl. Eng. 2025, 6(3), 35; https://doi.org/10.3390/jne6030035 - 1 Sep 2025
Viewed by 749
Abstract
In probabilistic risk assessment (PRA), the fracture limit of fuel cladding tubes under loss-of-coolant accident conditions plays a critical role in determining core damage, highlighting the need for accurate modeling of cladding tube fracture behavior. However, for high-burnup cladding tubes, it is often infeasible to conduct extensive experiments due to limited material availability, high costs, and technical constraints. These limitations make it difficult to acquire sufficient data, leading to substantial epistemic uncertainty in fracture modeling. To enhance the realism of PRA results under such constraints, it is essential to develop methods that can effectively reduce epistemic uncertainty using limited experimental data. In this study, we propose a Bayesian approach for designing experimental conditions based on the widely applicable information criterion (WAIC) in order to effectively reduce the uncertainty in the prediction of fuel cladding tube fracture with limited data. We conduct numerical experiments to evaluate the effectiveness of the proposed method in comparison with conventional approaches based on empirical loss and functional variance. Two cases are considered: one where the true and predictive models share the same mathematical structure (Case 1) and one where they differ (Case 2). In Case 1, the empirical loss-based design performs best when the number of added data points is fewer than approximately 10. In Case 2, the WAIC-based design consistently achieves the lowest Bayes generalization loss, demonstrating superior robustness in situations where the true model is unknown. These results indicate that the proposed method enables more informative experimental designs on average and contributes to the effective reduction of epistemic uncertainty in practical applications.
(This article belongs to the Special Issue Probabilistic Safety Assessment and Management of Nuclear Facilities)
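
WAIC itself has a standard estimator from pointwise log-likelihoods evaluated at posterior draws (Watanabe's criterion in the usual deviance-scale convention); the paper's design criterion builds on it. A sketch with a random placeholder log-likelihood matrix:

```python
import numpy as np
from scipy.special import logsumexp

def waic(log_lik: np.ndarray) -> float:
    """WAIC from an (S posterior draws x n observations) matrix of pointwise log-likelihoods."""
    s = log_lik.shape[0]
    lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(s))  # log pointwise predictive density
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))       # effective number of parameters
    return -2.0 * (lppd - p_waic)

# Placeholder: pointwise log-likelihoods from 400 posterior draws for 30 observations.
rng = np.random.default_rng(0)
print(waic(rng.normal(loc=-1.2, scale=0.3, size=(400, 30))))
```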
