Search Results (141)

Search Parameters:
Keywords = variational Bayes

22 pages, 12611 KiB  
Article
Banana Fusarium Wilt Recognition Based on UAV Multi-Spectral Imagery and Automatically Constructed Enhanced Features
by Ye Su, Longlong Zhao, Huichun Ye, Wenjiang Huang, Xiaoli Li, Hongzhong Li, Jinsong Chen, Weiping Kong and Biyao Zhang
Agronomy 2025, 15(8), 1837; https://doi.org/10.3390/agronomy15081837 - 29 Jul 2025
Viewed by 157
Abstract
Banana Fusarium wilt (BFW, also known as Panama disease) is a highly infectious and destructive disease that threatens global banana production, requiring early recognition for timely prevention and control. Current monitoring methods primarily rely on continuous variable features—such as band reflectances (BRs) and vegetation indices (VIs)—collectively referred to as basic features (BFs)—which are prone to noise during the early stages of infection and struggle to capture subtle spectral variations, thus limiting the recognition accuracy. To address this limitation, this study proposes a discretized enhanced feature (EF) construction method, the automated kernel density segmentation-based feature construction algorithm (AutoKDFC). By analyzing the differences in the kernel density distributions between healthy and diseased samples, the AutoKDFC automatically determines the optimal segmentation threshold, converting continuous BFs into binary features with higher discriminative power for early-stage recognition. Using UAV-based multi-spectral imagery, BFW recognition models are developed and tested with the random forest (RF), support vector machine (SVM), and Gaussian naïve Bayes (GNB) algorithms. The results show that EFs exhibit significantly stronger correlations with BFW’s presence than original BFs. Feature importance analysis via RF further confirms that EFs contribute more to the model performance, with VI-derived features outperforming BR-based ones. The integration of EFs results in average performance gains of 0.88%, 2.61%, and 3.07% for RF, SVM, and GNB, respectively, with SVM achieving the best performance, averaging over 90%. Additionally, the generated BFW distribution map closely aligns with ground observations and captures spectral changes linked to disease progression, validating the method’s practical utility. Overall, the proposed AutoKDFC method demonstrates high effectiveness and generalizability for BFW recognition. Its core concept of “automatic feature enhancement” has strong potential for broader applications in crop disease monitoring and supports the development of intelligent early warning systems in plant health management. Full article
(This article belongs to the Section Pest and Disease Management)
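
To make the discretization idea concrete, the sketch below (illustrative only, not the authors' AutoKDFC implementation) estimates class-wise kernel densities for one continuous basic feature, takes the crossing point of the two densities as the segmentation threshold, and binarizes the feature there; the NDVI-like values are simulated assumptions.

```python
# Hedged sketch of kernel-density-based feature discretization: estimate
# class-wise KDEs for one continuous feature (e.g. a vegetation index),
# take the point where the two densities cross as the segmentation
# threshold, and binarize the feature at that threshold.
import numpy as np
from scipy.stats import gaussian_kde

def kde_threshold(healthy, diseased, grid_size=512):
    """Return the first crossing point of the two class densities."""
    lo = min(healthy.min(), diseased.min())
    hi = max(healthy.max(), diseased.max())
    grid = np.linspace(lo, hi, grid_size)
    density_h = gaussian_kde(healthy)(grid)
    density_d = gaussian_kde(diseased)(grid)
    diff = density_h - density_d
    crossings = np.where(np.diff(np.sign(diff)) != 0)[0]
    # Fall back to the midpoint of the class means if the densities never cross.
    if len(crossings) == 0:
        return 0.5 * (healthy.mean() + diseased.mean())
    return grid[crossings[0]]

def enhance_feature(values, threshold):
    """Convert a continuous basic feature into a binary enhanced feature."""
    return (values > threshold).astype(int)

# Example with synthetic reflectance-like data (assumed values, for demonstration only).
rng = np.random.default_rng(0)
healthy = rng.normal(0.65, 0.05, 200)   # hypothetical healthy NDVI values
diseased = rng.normal(0.45, 0.08, 200)  # hypothetical diseased NDVI values
t = kde_threshold(healthy, diseased)
binary_feature = enhance_feature(np.concatenate([healthy, diseased]), t)
```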

18 pages, 12574 KiB  
Article
A Framework Integrating GWAS and Genomic Selection to Enhance Prediction Accuracy of Economical Traits in Common Carp
by Zhipeng Sun, Yuhan Fu, Xiaoyue Zhu, Ruixin Zhang, Yongjun Shu, Xianhu Zheng and Guo Hu
Int. J. Mol. Sci. 2025, 26(14), 7009; https://doi.org/10.3390/ijms26147009 - 21 Jul 2025
Viewed by 197
Abstract
Common carp (Cyprinus carpio) is one of the most significant fish species worldwide, with its natural distribution spanning Europe and Asia. To conduct a genome-wide association study (GWAS) and compare the prediction accuracy of genomic selection (GS) models for the growth traits of common carp in spring and autumn at 2 years of age, a total of 325 carp individuals were re-sequenced and phenotypic measurements were taken. Three GWAS methods (FarmCPU, GEMMA, and GLM) were applied and their performance was evaluated in conjunction with various GS models, using significance levels based on p-values. GWAS analyses were performed on eight traits (including the body length, body weight, fat content of fillet, and condition factor) for both spring and autumn seasons. Eleven different GS models (such as Bayes A, Bayes B, and SVR-linear) were combined to evaluate their performance in genomic selection. The results demonstrate that the FarmCPU method consistently exhibits superior stability and predictive accuracy across most traits, particularly under higher SNP densities (e.g., 5K), where prediction accuracies frequently exceed 0.8. Notably, when integrated with Bayesian approaches, FarmCPU achieves a substantial performance boost, with the prediction accuracy reaching as high as 0.95 for the autumn body weight, highlighting its potential for high-resolution genomic prediction. In contrast, GEMMA and GLM exhibited a more variable performance at lower SNP densities. Overall, the integration of FarmCPU with genomic selection (GS) models offers one of the most reliable and efficient frameworks for trait prediction, particularly for complex traits with substantial genetic variation. This approach proves especially powerful when coupled with Bayesian methodologies, further enhancing its applicability in advanced breeding programs. Full article
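
As a generic illustration of the GWAS step described above (a simple per-SNP regression scan, not the FarmCPU, GEMMA, or GLM implementations used in the study), the following sketch regresses a simulated phenotype on each simulated SNP and flags markers passing a Bonferroni threshold.

```python
# Illustrative single-marker GWAS scan: regress the phenotype on each SNP
# (coded 0/1/2) and record the p-value of the slope. Simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_individuals, n_snps = 325, 1000          # sample size mirrors the study; SNP count is arbitrary
genotypes = rng.integers(0, 3, size=(n_individuals, n_snps)).astype(float)
phenotype = 0.4 * genotypes[:, 10] + rng.normal(0, 1, n_individuals)  # SNP 10 is truly associated

p_values = np.empty(n_snps)
for j in range(n_snps):
    slope, intercept, r, p, se = stats.linregress(genotypes[:, j], phenotype)
    p_values[j] = p

# Bonferroni threshold for significance in this toy scan.
significant = np.where(p_values < 0.05 / n_snps)[0]
print("significant SNP indices:", significant)
```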

16 pages, 1037 KiB  
Article
Generative Learning from Semantically Confused Label Distribution via Auto-Encoding Variational Bayes
by Xinhai Li, Chenxu Meng, Heng Zhou, Yi Guo, Bowen Xue, Tianzuo Yu and Yunan Lu
Electronics 2025, 14(13), 2736; https://doi.org/10.3390/electronics14132736 - 7 Jul 2025
Viewed by 221
Abstract
Label Distribution Learning (LDL) has emerged as a powerful paradigm for addressing label ambiguity, offering a more nuanced quantification of the instance–label relationship compared to traditional single-label and multi-label learning approaches. This paper focuses on the challenge of noisy label distributions, which is ubiquitous in real-world applications due to the annotator subjectivity, algorithmic biases, and experimental errors. Existing related LDL algorithms often assume a linear combination of true and random label distributions when modeling the noisy label distributions, an oversimplification that fails to capture the practical generation processes of noisy label distributions. Therefore, this paper introduces an assumption that the noise in label distributions primarily arises from the semantic confusion between labels and proposes a novel generative label distribution learning algorithm to model the confusion-based generation process of both the feature data and the noisy label distribution data. The proposed model is inferred using variational methods and its effectiveness is demonstrated through extensive experiments across various real-world datasets, showcasing its superiority in handling noisy label distributions. Full article
(This article belongs to the Special Issue Neural Networks: From Software to Hardware)
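
Since the model above is inferred with auto-encoding variational Bayes, a minimal generic AEVB/VAE skeleton may help fix ideas; this PyTorch sketch shows only the reparameterization trick and the ELBO (reconstruction term plus KL to a standard normal prior), not the paper's confusion-based generative model, and all sizes and data are placeholders.

```python
# Generic auto-encoding variational Bayes (VAE) skeleton: an encoder maps x to
# q(z|x) = N(mu, diag(sigma^2)), z is sampled with the reparameterization trick,
# and the loss is the negative ELBO. Layer sizes and data are placeholders.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=20, z_dim=4, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(), nn.Linear(hidden, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
        return self.dec(z), mu, logvar

def negative_elbo(x, x_hat, mu, logvar):
    recon = ((x - x_hat) ** 2).sum(dim=1).mean()                  # Gaussian reconstruction term
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()  # KL(q(z|x) || N(0, I))
    return recon + kl

model = VAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 20)                                          # placeholder feature batch
for _ in range(5):                                                # a few illustrative steps
    x_hat, mu, logvar = model(x)
    loss = negative_elbo(x, x_hat, mu, logvar)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```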

29 pages, 3774 KiB  
Article
Improving the Minimum Free Energy Principle to the Maximum Information Efficiency Principle
by Chenguang Lu
Entropy 2025, 27(7), 684; https://doi.org/10.3390/e27070684 - 26 Jun 2025
Viewed by 998
Abstract
Friston proposed the Minimum Free Energy Principle (FEP) based on the Variational Bayesian (VB) method. This principle emphasizes that the brain and behavior coordinate with the environment, promoting self-organization. However, it has a theoretical flaw, a possibility of being misunderstood, and a limitation (only likelihood functions are used as constraints). This paper first introduces the semantic information G theory and the R(G) function (where R is the minimum mutual information for the given semantic mutual information G). The G theory is based on the P-T probability framework and, therefore, allows for the use of truth, membership, similarity, and distortion functions (related to semantics) as constraints. Based on the study of the R(G) function and logical Bayesian Inference, this paper proposes the Semantic Variational Bayesian (SVB) method and the Maximum Information Efficiency (MIE) principle. Theoretical analysis and computing experiments prove that R(G) = F − H(X|Y) (where F denotes the VFE and H(X|Y) is the Shannon conditional entropy), rather than F itself, continues to decrease when optimizing latent variables; SVB is a reliable and straightforward approach for latent variables and active inference. This paper also explains the relationship between information, entropy, free energy, and VFE in local non-equilibrium and equilibrium systems, concluding that Shannon information, semantic information, and VFE are analogous to the increment of free energy, the increment of exergy, and physical conditional entropy. The MIE principle builds upon the fundamental ideas of the FEP, making them easier to understand and apply. It needs to be combined with deep learning methods for wider applications. Full article
(This article belongs to the Special Issue Information-Theoretic Approaches for Machine Learning and AI)

21 pages, 5516 KiB  
Article
Hyperspectral Imaging for Non-Destructive Moisture Prediction in Oat Seeds
by Peng Zhang and Jiangping Liu
Agriculture 2025, 15(13), 1341; https://doi.org/10.3390/agriculture15131341 - 22 Jun 2025
Viewed by 537
Abstract
Oat is a highly nutritious cereal crop, and the moisture content of its seeds plays a vital role in cultivation management, storage preservation, and quality control. To enable efficient and non-destructive prediction of this key quality parameter, this study presents a modeling framework integrating hyperspectral imaging (HSI) technology with a dual-optimization machine learning strategy. Seven spectral preprocessing techniques—standard normal variate (SNV), multiplicative scatter correction (MSC), first derivative (FD), second derivative (SD), and combinations such as SNV + FD, SNV + SD, and SNV + MSC—were systematically evaluated. Among them, SNV combined with FD was identified as the optimal preprocessing scheme, effectively enhancing spectral feature expression. To further refine the predictive model, three feature selection methods—successive projections algorithm (SPA), competitive adaptive reweighted sampling (CARS), and principal component analysis (PCA)—were assessed. PCA exhibited superior performance in information compression and modeling stability. Subsequently, a dual-optimized neural network model, termed Bayes-ASFSSA-BP, was developed by incorporating Bayesian optimization and the Adaptive Spiral Flight Sparrow Search Algorithm (ASFSSA). Bayesian optimization was used for global tuning of network structural parameters, while ASFSSA was applied to fine-tune the initial weights and thresholds, improving convergence efficiency and predictive accuracy. The proposed Bayes-ASFSSA-BP model achieved determination coefficients (R2) of 0.982 and 0.963, and root mean square errors (RMSEs) of 0.173 and 0.188 on the training and test sets, respectively. The corresponding mean absolute error (MAE) on the test set was 0.170, indicating excellent average prediction accuracy. These results significantly outperformed benchmark models such as SSA-BP, ASFSSA-BP, and Bayes-BP. Compared to the conventional BP model, the proposed approach increased the test R2 by 0.046 and reduced the RMSE by 0.157. Moreover, the model produced the narrowest 95% confidence intervals for test set performance (Rp2: [0.961, 0.971]; RMSE: [0.185, 0.193]), demonstrating outstanding robustness and generalization capability. Although the model incurred a slightly higher computational cost (480.9 s), the accuracy gain was deemed worthwhile. In conclusion, the proposed Bayes-ASFSSA-BP framework shows strong potential for accurate and stable non-destructive prediction of oat seed moisture content. This work provides a practical and efficient solution for quality assessment in agricultural products and highlights the promise of integrating Bayesian optimization with ASFSSA in modeling high-dimensional spectral data. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
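
The two preprocessing steps selected above (SNV followed by a first derivative) can be sketched in a few lines; the window length and polynomial order below are illustrative assumptions, not the paper's settings.

```python
# Illustrative spectral preprocessing: standard normal variate (SNV) scaling per
# spectrum, followed by a Savitzky-Golay first derivative (FD) along wavelength.
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Center and scale each spectrum (row) to zero mean and unit variance."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def first_derivative(spectra, window=11, polyorder=2):
    """Savitzky-Golay smoothed first derivative along the wavelength axis."""
    return savgol_filter(spectra, window, polyorder, deriv=1, axis=1)

rng = np.random.default_rng(2)
raw = rng.random((50, 200))          # 50 placeholder spectra, 200 bands
preprocessed = first_derivative(snv(raw))
```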

23 pages, 1370 KiB  
Article
Machine Learning-Based Identification of Phonological Biomarkers for Speech Sound Disorders in Saudi Arabic-Speaking Children
by Deema F. Turki and Ahmad F. Turki
Diagnostics 2025, 15(11), 1401; https://doi.org/10.3390/diagnostics15111401 - 31 May 2025
Viewed by 647
Abstract
Background/Objectives: This study investigates the application of machine learning (ML) techniques in diagnosing speech sound disorders (SSDs) in Saudi Arabic-speaking children, with a specific focus on phonological biomarkers, particularly Infrequent Variance (InfrVar), to improve diagnostic accuracy. SSDs are a significant concern in pediatric speech pathology, affecting an estimated 10–15% of preschool-aged children worldwide. However, accurate diagnosis remains challenging, especially in linguistically diverse populations. Traditional diagnostic tools, such as the Percentage of Consonants Correct (PCC), often fail to capture subtle phonological variations. This study explores the potential of machine learning models to enhance diagnostic accuracy by incorporating culturally relevant phonological biomarkers like InfrVar, aiming to develop a more effective diagnostic approach for SSDs in Saudi Arabic-speaking children. Methods: Data from 235 Saudi Arabic-speaking children aged 2;6 to 5;11 years were analyzed using several machine learning models: Random Forest, Support Vector Machine (SVM), XGBoost, Logistic Regression, K-Nearest Neighbors, and Naïve Bayes. The dataset was used to classify speech patterns into four categories: Atypical, Typical Development (TD), Articulation, and Delay. Phonological features such as Phonological Variance (PhonVar), InfrVar, and Percentage of Consonants Correct (PCC) were used as key variables. SHapley Additive exPlanations (SHAP) analysis was employed to interpret the contributions of individual features to model predictions. Results: The XGBoost and Random Forest models demonstrated the highest performance, with an accuracy of 91.49% and an AUC of 99.14%. SHAP analysis revealed that articulation patterns and phonological patterns were the most influential features for distinguishing between Atypical and TD categories. The K-Means clustering approach identified four distinct subgroups based on speech development patterns: TD (46.61%), Articulation (25.42%), Atypical (18.64%), and Delay (9.32%). Conclusions: Machine learning models, particularly XGBoost and Random Forest, effectively classified speech development categories in Saudi Arabic-speaking children. This study highlights the importance of incorporating culturally specific phonological biomarkers like InfrVar and PhonVar to improve diagnostic precision for SSDs. These findings lay the groundwork for the development of AI-assisted diagnostic tools tailored to diverse linguistic contexts, enhancing early intervention strategies in pediatric speech pathology. Full article
(This article belongs to the Special Issue Artificial Intelligence for Health and Medicine)

17 pages, 2429 KiB  
Article
Identification of Loci and Candidate Genes Associated with Arginine Content in Soybean
by Jiahao Ma, Qing Yang, Cuihong Yu, Zhi Liu, Xiaolei Shi, Xintong Wu, Rongqing Xu, Pengshuo Shen, Yuechen Zhang, Ainong Shi and Long Yan
Agronomy 2025, 15(6), 1339; https://doi.org/10.3390/agronomy15061339 - 30 May 2025
Viewed by 583
Abstract
Soybean (Glycine max) seeds are rich in amino acids, offering key nutritional and physiological benefits. In this study, 290 soybean accessions from the USDA Germplasm Resources Information Network (GRIN) collection, based in Urbana, IL, were analyzed. Four Genome-Wide Association Study (GWAS) models—Bayesian-information and Linkage-disequilibrium Iteratively Nested Keyway (BLINK), Mixed Linear Model (MLM), Fixed and Random Model Circulating Probability Unification (FarmCPU), and Multi-Locus Mixed Model (MLMM)—identified two significant Single Nucleotide Polymorphisms (SNPs) associated with arginine content: Gm06_19014194_ss715593808 (LOD = 9.91, 3.91% variation) at 19,014,194 bp on chromosome 6 and Gm11_2054710_ss715609614 (LOD = 9.05, 19% variation) at 2,054,710 bp on chromosome 11. Two candidate genes, Glyma.06g203200 and Glyma.11G028600, were found in the two SNP marker regions, respectively. Genomic Prediction (GP) was performed for arginine content using several models: Bayes A (BA), Bayes B (BB), Bayesian LASSO (BL), Bayesian Ridge Regression (BRR), Ridge Regression Best Linear Unbiased Prediction (rrBLUP), Random Forest (RF), and Support Vector Machine (SVM). High GP accuracy was observed in both across- and cross-population analyses, supporting Genomic Selection (GS) for breeding high-arginine soybean cultivars. This study holds significant commercial potential by providing valuable genetic resources and molecular tools for improving the nutritional quality and market value of soybean cultivars. Through the identification of SNP markers associated with high arginine content and the demonstration of high prediction accuracy using genomic selection, this research supports the development of soybean accessions with enhanced protein profiles. These advancements can better meet the demands of health-conscious consumers and serve high-value food and feed markets. Full article
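
As a hedged illustration of the genomic prediction step (an rrBLUP-style ridge regression rather than the Bayesian models listed above), the sketch below reports cross-validated accuracy as the correlation between observed and predicted values on simulated marker data.

```python
# Generic ridge-regression genomic prediction: fit phenotype ~ all SNPs with an
# L2 penalty and report cross-validated accuracy as the correlation between
# observed and predicted values. Marker data and effects are simulated.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
n, p = 290, 2000                                  # accession count mirrors the study; marker count is arbitrary
X = rng.integers(0, 3, size=(n, p)).astype(float)
effects = rng.normal(0, 0.05, p)
y = X @ effects + rng.normal(0, 1, n)             # simulated arginine-like trait

accuracies = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=100.0).fit(X[train], y[train])   # penalty strength is an arbitrary choice
    pred = model.predict(X[test])
    accuracies.append(np.corrcoef(pred, y[test])[0, 1])
print("mean GP accuracy (r):", np.mean(accuracies))
```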

31 pages, 15699 KiB  
Article
Preliminary Machine Learning-Based Classification of Ink Disease in Chestnut Orchards Using High-Resolution Multispectral Imagery from Unmanned Aerial Vehicles: A Comparison of Vegetation Indices and Classifiers
by Lorenzo Arcidiaco, Roberto Danti, Manuela Corongiu, Giovanni Emiliani, Arcangela Frascella, Antonietta Mello, Laura Bonora, Sara Barberini, David Pellegrini, Nicola Sabatini and Gianni Della Rocca
Forests 2025, 16(5), 754; https://doi.org/10.3390/f16050754 - 28 Apr 2025
Cited by 1 | Viewed by 480
Abstract
Ink disease, primarily caused by the pathogen Phytophthora × cambivora, significantly threatens the health and productivity of sweet chestnut (Castanea sativa Mill.) orchards, highlighting the need for accurate detection methods. This study investigates the efficacy of machine learning (ML) classifiers combined with high-resolution multispectral imagery acquired via unmanned aerial vehicles (UAVs) to assess chestnut tree health at a site in Tuscany, Italy. Three machine learning algorithms—support vector machines (SVMs), Gaussian Naive Bayes (GNB), and logistic regression (Log)—were evaluated in combination with eight vegetation indices (VIs), including NDVI, GnDVI, and RdNDVI, to classify chestnut tree crowns as either symptomatic or asymptomatic. High-resolution multispectral images were processed to derive vegetation indices that effectively captured subtle spectral variations indicative of disease presence. Ground-truthing involved visual tree health assessments performed by expert forest pathologists, subsequently validated through leaf area index (LAI) measurements. Correlation analysis confirmed significant associations between LAI and most VIs, supporting LAI as a robust physiological metric for validating visual health assessments. GnDVI and RdNDVI combined with SVM and GNB classifiers achieved the highest classification accuracy (95.2%), demonstrating their superior sensitivity in discriminating symptomatic from asymptomatic trees. Indices such as MCARI and SAVI showed limited discriminative power, underscoring the importance of selecting appropriate VIs that are tailored to specific disease symptoms. This study highlights the potential of integrating UAV-derived multispectral imagery and machine learning techniques, validated by LAI, as an effective approach for the detection of ink disease, enabling precision forestry practices and informed orchard management strategies. Full article
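
The vegetation indices compared above are simple band ratios; the sketch below computes NDVI and GnDVI from placeholder UAV band rasters (the band arrays and the stabilizing epsilon are assumptions, and this is not the authors' processing chain).

```python
# Per-pixel vegetation indices computed from UAV multispectral band arrays.
import numpy as np

def ndvi(nir, red, eps=1e-9):
    return (nir - red) / (nir + red + eps)

def gndvi(nir, green, eps=1e-9):
    return (nir - green) / (nir + green + eps)

rng = np.random.default_rng(4)
nir, red, green = (rng.random((128, 128)) for _ in range(3))  # placeholder reflectance rasters
features = np.stack([ndvi(nir, red), gndvi(nir, green)], axis=-1)
# Crown-level means of these indices could then feed an SVM or GNB classifier.
```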

19 pages, 2209 KiB  
Article
Optimizing the Genomic Evaluation Model in Crossbred Cattle for Smallholder Production Systems in India
by Kashif Dawood Khan, Rani Alex, Ashish Yadav, Varadanayakanahalli N. Sahana, Amritanshu Upadhyay, Rajesh V. Mani, Thankappan Sajeev Kumar, Rajeev Raghavan Pillai, Vikas Vohra and Gopal Ramdasji Gowane
Agriculture 2025, 15(9), 945; https://doi.org/10.3390/agriculture15090945 - 27 Apr 2025
Viewed by 1209
Abstract
Implementing genomic selection in smallholder dairy systems is challenging due to limited genetic connectedness and diverse management practices. This study aimed to optimize genomic evaluation models for crossbred cattle in South India. Data included 305-day first lactation milk yield (FLMY) records from 17,650 cows (1984–2021), with partial pedigree and genotypes for 1004 bulls and 1568 cows. Non-genetic factors such as geography, season and period of calving, and age at first calving were significant sources of variation. The average milk yield was 2875 ± 123.54 kg. Genetic evaluation models used a female-only reference. Heritability estimates using different approaches were 0.32 ± 0.03 (REML), 0.40 ± 0.03 (ssGREML), and 0.25 ± 0.08 (GREML). Bayesian estimates (Bayes A, B, C, Cπ, and ssBR) ranged from 0.20 ± 0.02 to 0.43 ± 0.04. Genomic-only models showed reduced variance due to the Bulmer effect, as genomic data belonged to recent generations. Breeding value prediction accuracies were 0.60 (PBLUP), 0.45 (GBLUP), and 0.65 (ssGBLUP). Using the LR method, the estimates of bias, dispersion, and ratio of accuracies for ssGBLUP were −39.83, 1.09, and 0.69; for ssBR, they were 71.83, 0.83, and 0.76. ssGBLUP resulted in more accurate and less biased GEBVs than ssBR. We recommend ssGBLUP for genomic evaluation of crossbred cattle for milk production under smallholder systems. Full article

45 pages, 6952 KiB  
Review
A Semantic Generalization of Shannon’s Information Theory and Applications
by Chenguang Lu
Entropy 2025, 27(5), 461; https://doi.org/10.3390/e27050461 - 24 Apr 2025
Cited by 1 | Viewed by 1057
Abstract
Does semantic communication require a semantic information theory parallel to Shannon’s information theory, or can Shannon’s work be generalized for semantic communication? This paper advocates for the latter and introduces a semantic generalization of Shannon’s information theory (G theory for short). The core idea is to replace the distortion constraint with the semantic constraint, achieved by utilizing a set of truth functions as a semantic channel. These truth functions enable the expressions of semantic distortion, semantic information measures, and semantic information loss. Notably, the maximum semantic information criterion is equivalent to the maximum likelihood criterion and similar to the Regularized Least Squares criterion. This paper shows G theory’s applications to daily and electronic semantic communication, machine learning, constraint control, Bayesian confirmation, portfolio theory, and information value. The improvements in machine learning methods involve multi-label learning and classification, maximum mutual information classification, mixture models, and solving latent variables. Furthermore, insights from statistical physics are discussed: Shannon information is similar to free energy; semantic information to free energy in local equilibrium systems; and information efficiency to the efficiency of free energy in performing work. The paper also proposes refining Friston’s minimum free energy principle into the maximum information efficiency principle. Lastly, it compares G theory with other semantic information theories and discusses its limitation in representing the semantics of complex data. Full article
(This article belongs to the Special Issue Semantic Information Theory)

8 pages, 252 KiB  
Article
On the Bayesian Two-Sample Problem for Ranking Data
by Mayer Alvo
Axioms 2025, 14(4), 292; https://doi.org/10.3390/axioms14040292 - 14 Apr 2025
Viewed by 230
Abstract
We consider the two-sample problem involving a new class of angle-based models for ranking data. These models are functions of the cosine of the angle between a ranking and a consensus vector. A Bayesian approach is employed to determine the corresponding predictive densities. Two competing hypotheses are considered, and we compute the Bayes factor to quantify the evidence provided by the observed data under each hypothesis. We apply the results to a real data set. Full article
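
The angle-based ranking models are specific to the paper, but the Bayes-factor machinery can be illustrated with a much simpler conjugate two-sample setup; the sketch below compares a shared-rate versus separate-rate Beta-Binomial model, with invented counts and flat priors.

```python
# Toy two-sample Bayes factor with conjugate Beta marginal likelihoods:
# H0 assumes one shared success probability for both samples, H1 gives each
# sample its own. Counts and priors are invented; the binomial coefficients
# are identical under both hypotheses and therefore cancel in the ratio.
from math import lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal(successes, failures, a=1.0, b=1.0):
    """log of the integral of p^s (1-p)^f under a Beta(a, b) prior."""
    return log_beta(a + successes, b + failures) - log_beta(a, b)

s1, f1 = 18, 12   # sample 1: made-up successes and failures
s2, f2 = 9, 21    # sample 2: made-up successes and failures

log_m_h1 = log_marginal(s1, f1) + log_marginal(s2, f2)   # separate rates
log_m_h0 = log_marginal(s1 + s2, f1 + f2)                # one shared rate
bayes_factor_10 = exp(log_m_h1 - log_m_h0)
print("BF10 =", round(bayes_factor_10, 3))
```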

26 pages, 3639 KiB  
Article
An Adaptive Combined Filtering Algorithm for Non-Holonomic Constraints with Time-Varying and Thick-Tailed Measurement Noise
by Zijian Wang, Jianghua Liu, Jinguang Jiang, Jiaji Wu, Qinghai Wang and Jingnan Liu
Remote Sens. 2025, 17(7), 1126; https://doi.org/10.3390/rs17071126 - 21 Mar 2025
Cited by 1 | Viewed by 485
Abstract
Aiming at the problem that the pseudo-velocity measurement noise of non-holonomic constraints (NHCs) in vehicle-mounted global navigation satellite system/inertial navigation system (GNSS/INS) integrated navigation is time-varying and thick-tailed in complex road conditions (turning, sideslip, etc.) and cannot be accurately predicted, an adaptive estimation method for the initial value of NHC lateral velocity noise based on multiple linear regression is proposed. Building on this method, a Gaussian–Student's t distribution variational Bayesian adaptive Kalman filtering algorithm (Ga-St VBAKF) is proposed through modeling and analysis of the NHC pseudo-velocity measurement noise. Firstly, in order to adaptively adjust the initial value of NHC lateral velocity noise, a vehicle turning detection algorithm is used to detect whether the vehicle is turning. Secondly, based on the vehicle motion state, the variational Bayesian method is used to adaptively estimate the statistical characteristics of the measurement noise in real time, modeling the lateral velocity noise as either Gaussian white noise or Student's t distributed thick-tailed noise. The test results show that, compared to the traditional Kalman filtering algorithm with fixed noise, the Ga-St VBAKF algorithm with noise adaptation reduces the maximum horizontal position error by 65.9% in the GNSS/NHC/OD/INS (where OD stands for odometer and INS for inertial navigation system) system when the vehicle is in a turning state, and by 42.3% in the NHC/OD/INS system. This indicates that the algorithm can effectively suppress the divergence of positioning errors during turning and improve the performance of integrated navigation. Full article
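
A minimal sketch of the underlying idea of variational Bayesian noise adaptation follows, for the Gaussian-noise case only: the measurement noise variance is given an inverse-gamma model and refined jointly with the state in an iterated measurement update. This is a generic textbook-style VB adaptive Kalman update, not the paper's Ga-St VBAKF (the Student's t extension and turning detection are not reproduced), and all numbers are synthetic.

```python
# Variational Bayesian adaptive Kalman measurement update for a scalar
# measurement with unknown noise variance modeled as inverse-gamma(alpha, beta).
import numpy as np

def vb_adaptive_update(m_prior, P_prior, y, H, alpha_prior, beta_prior, n_iter=5):
    """Jointly refine the state estimate and the inverse-gamma posterior over
    the measurement noise variance by iterating the two conditional updates."""
    alpha = alpha_prior + 0.5
    beta = beta_prior
    m, P = m_prior, P_prior
    for _ in range(n_iter):
        R_eff = beta / alpha                      # expected noise variance used by the KF
        S = H @ P_prior @ H.T + R_eff
        K = P_prior @ H.T / S
        m = m_prior + K.flatten() * (y - H @ m_prior)
        P = P_prior - np.outer(K, H @ P_prior)
        residual = float(y - H @ m)
        beta = beta_prior + 0.5 * residual**2 + 0.5 * float(H @ P @ H.T)
    return m, P, alpha, beta

# Synthetic example with one NHC-like pseudo-measurement of lateral velocity.
m0 = np.array([0.0, 1.0])                         # state: [lateral position, lateral velocity]
P0 = np.eye(2)
H = np.array([[0.0, 1.0]])                        # pseudo-measurement picks the lateral velocity
m1, P1, a1, b1 = vb_adaptive_update(m0, P0, y=0.3, H=H, alpha_prior=1.0, beta_prior=0.1)
```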

19 pages, 296 KiB  
Article
Affine Calculus for Constrained Minima of the Kullback–Leibler Divergence
by Giovanni Pistone
Stats 2025, 8(2), 25; https://doi.org/10.3390/stats8020025 - 21 Mar 2025
Viewed by 364
Abstract
The non-parametric version of Amari’s dually affine Information Geometry provides a practical calculus to perform computations of interest in statistical machine learning. The method uses the notion of a statistical bundle, a mathematical structure that includes both probability densities and random variables to capture the spirit of Fisherian statistics. We focus on computations involving a constrained minimization of the Kullback–Leibler divergence. We show how to obtain neat and principled versions of known computations in applications such as mean-field approximation, adversarial generative models, and variational Bayes. Full article
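
A textbook instance of the constrained KL minimization discussed above is the mean-field (fully factorized) approximation of a correlated bivariate Gaussian; the coordinate-ascent sketch below uses arbitrary target moments and is only an illustration, not the statistical-bundle calculus developed in the paper.

```python
# Coordinate-ascent mean-field approximation of a correlated bivariate Gaussian:
# the factorized q(x1)q(x2) minimizing KL(q || p) has Gaussian factors with
# variance 1/Lambda[i, i] and means coupled through the off-diagonal precision.
import numpy as np

mu = np.array([1.0, -1.0])
Lambda = np.array([[2.0, 0.8],       # target precision matrix (positive definite)
                   [0.8, 1.5]])

m = np.zeros(2)                      # initial factor means
for _ in range(50):
    m[0] = mu[0] - (Lambda[0, 1] / Lambda[0, 0]) * (m[1] - mu[1])
    m[1] = mu[1] - (Lambda[1, 0] / Lambda[1, 1]) * (m[0] - mu[0])

variances = 1.0 / np.diag(Lambda)
print("mean-field means:", m, "vs exact means:", mu)
print("mean-field variances:", variances, "vs exact:", np.diag(np.linalg.inv(Lambda)))
```

The fixed point recovers the exact means, while the factorized variances underestimate the true marginal variances, which is the usual behavior of this direction of KL minimization.
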
23 pages, 4309 KiB  
Article
Comparison of Deep Learning and Traditional Machine Learning Models for Predicting Mild Cognitive Impairment Using Plasma Proteomic Biomarkers
by Kesheng Wang, Donald A. Adjeroh, Wei Fang, Suzy M. Walter, Danqing Xiao, Ubolrat Piamjariyakul and Chun Xu
Int. J. Mol. Sci. 2025, 26(6), 2428; https://doi.org/10.3390/ijms26062428 - 8 Mar 2025
Viewed by 2115
Abstract
Mild cognitive impairment (MCI) is a clinical condition characterized by a decline in cognitive ability and progression of cognitive impairment. It is often considered a transitional stage between normal aging and Alzheimer’s disease (AD). This study aimed to compare deep learning (DL) and traditional machine learning (ML) methods in predicting MCI using plasma proteomic biomarkers. A total of 239 adults were selected from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) cohort along with a pool of 146 plasma proteomic biomarkers. We evaluated seven traditional ML models (support vector machines (SVMs), logistic regression (LR), naïve Bayes (NB), random forest (RF), k-nearest neighbor (KNN), gradient boosting machine (GBM), and extreme gradient boosting (XGBoost)) and six variations of a deep neural network (DNN) model—the DL model in the H2O package. Least Absolute Shrinkage and Selection Operator (LASSO) selected 35 proteomic biomarkers from the pool. Based on grid search, the DNN model with an activation function of “Rectifier With Dropout” with 2 layers and 32 of 35 selected proteomic biomarkers revealed the best model with the highest accuracy of 0.995 and an F1 Score of 0.996, while among seven traditional ML methods, XGBoost was the best with an accuracy of 0.986 and an F1 Score of 0.985. Several biomarkers were correlated with the APOE-ε4 genotype, polygenic hazard score (PHS), and three clinical cerebrospinal fluid biomarkers (Aβ42, tTau, and pTau). Bioinformatics analysis using Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) revealed several molecular functions and pathways associated with the selected biomarkers, including cytokine-cytokine receptor interaction, cholesterol metabolism, and regulation of lipid localization. The results showed that the DL model may represent a promising tool in the prediction of MCI. These plasma proteomic biomarkers may help with early diagnosis, prognostic risk stratification, and early treatment interventions for individuals at risk for MCI. Full article
(This article belongs to the Special Issue New Advances in Proteomics in Disease)
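
A hedged sketch of the select-then-classify pattern described above: an L1-penalized logistic regression stands in for the LASSO selection step and a gradient-boosting classifier stands in for the DNN/XGBoost models; the data are simulated, not ADNI biomarkers.

```python
# Feature-selection-then-classify pipeline: L1-penalized logistic regression
# selects a sparse subset of features, and a gradient-boosting classifier is
# fit on the selected columns. All data are simulated placeholders.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
X = rng.normal(size=(239, 146))                       # sample and feature counts mirror the study
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, 239) > 0).astype(int)

selector = SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
pipeline = make_pipeline(selector, GradientBoostingClassifier())
scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
print("cross-validated accuracy:", scores.mean().round(3))
```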

40 pages, 6118 KiB  
Article
Single-Source and Multi-Source Cross-Subject Transfer Based on Domain Adaptation Algorithms for EEG Classification
by Rito Clifford Maswanganyi, Chunling Tu, Pius Adewale Owolawi and Shengzhi Du
Mathematics 2025, 13(5), 802; https://doi.org/10.3390/math13050802 - 27 Feb 2025
Cited by 2 | Viewed by 763
Abstract
Transfer learning (TL) has been employed in electroencephalogram (EEG)-based brain–computer interfaces (BCIs) to enhance performance for cross-session and cross-subject EEG classification. However, domain shifts coupled with a low signal-to-noise ratio between EEG recordings have been demonstrated to contribute to significant variations in EEG neural dynamics from session to session and subject to subject. Critical factors—such as mental fatigue, concentration, and physiological and non-physiological artifacts—can constitute the immense domain shifts seen between EEG recordings, leading to massive inter-subject variations. Consequently, such variations increase the distribution shifts across the source and target domains, in turn weakening the discriminative knowledge of classes and resulting in poor cross-subject transfer performance. In this paper, domain adaptation algorithms, including two machine learning (ML) algorithms, are contrasted based on the single-source-to-single-target (STS) and multi-source-to-single-target (MTS) transfer paradigms, mainly to mitigate the challenge of immense inter-subject variations in EEG neural dynamics that lead to poor classification performance. Afterward, we evaluate the effect of the STS and MTS transfer paradigms on cross-subject transfer performance utilizing three EEG datasets. In this case, to evaluate the effect of STS and MTS transfer schemes on classification performance, domain adaptation algorithms (DAA)—including ML algorithms implemented through a traditional BCI—are compared, namely, manifold embedded knowledge transfer (MEKT), multi-source manifold feature transfer learning (MMFT), k-nearest neighbor (K-NN), and Naïve Bayes (NB). The experimental results illustrated that compared to traditional ML methods, DAA can significantly reduce immense variations in EEG characteristics, in turn resulting in superior cross-subject transfer performance. Notably, superior classification accuracies (CAs) were noted when MMFT was applied, with mean CAs of 89% and 83% recorded, while MEKT recorded mean CAs of 87% and 76% under the STS and MTS transfer paradigms, respectively. Full article
(This article belongs to the Special Issue Learning Algorithms and Neural Networks)
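
As a compact, generic illustration of aligning feature distributions across subjects (CORrelation ALignment, a simple stand-in and not the MEKT or MMFT algorithms compared in the paper), the sketch below whitens simulated source-subject features and re-colors them with the target-subject covariance before training a simple classifier.

```python
# CORAL: whiten source-domain features and re-color them with the target-domain
# covariance so a classifier trained on the source transfers better. Features
# and labels are simulated stand-ins for subject-specific EEG features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def matrix_power(mat, power):
    """Symmetric matrix power via eigendecomposition (matrix is assumed PD)."""
    vals, vecs = np.linalg.eigh(mat)
    return vecs @ np.diag(vals ** power) @ vecs.T

def coral(source, target, reg=1.0):
    cov_s = np.cov(source, rowvar=False) + reg * np.eye(source.shape[1])
    cov_t = np.cov(target, rowvar=False) + reg * np.eye(target.shape[1])
    return source @ matrix_power(cov_s, -0.5) @ matrix_power(cov_t, 0.5)

rng = np.random.default_rng(6)
Xs = rng.normal(0, 1, (200, 8)); ys = (Xs[:, 0] > 0).astype(int)       # source subject (simulated)
Xt = rng.normal(0.5, 2, (200, 8)); yt = (Xt[:, 0] > 0.5).astype(int)   # target subject, shifted statistics

clf = KNeighborsClassifier().fit(coral(Xs, Xt), ys)
print("target accuracy after CORAL:", clf.score(Xt, yt))
```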
