Search Results (139)

Search Parameters:
Keywords = Fisher discriminant analysis

18 pages, 7743 KB  
Article
Improved Daytime Cloud Detection Algorithm in FY-4A’s Advanced Geostationary Radiation Imager
by Xiao Zhang, Song-Ying Zhao and Rui-Xuan Tang
Atmosphere 2025, 16(9), 1105; https://doi.org/10.3390/atmos16091105 - 20 Sep 2025
Viewed by 330
Abstract
Cloud detection is an indispensable step in satellite remote sensing of cloud properties and of objects obscured by cloud cover. Nevertheless, interfering targets such as snow and haze pollution are easily misclassified as clouds by most current algorithms, so a robust cloud detection algorithm is urgently needed, especially for high-latitude regions and regions with severe air pollution. This paper demonstrates that the passive Advanced Geostationary Radiation Imager (AGRI) onboard the FY-4A satellite is prone to misclassifying the dense aerosols in haze pollution as clouds during the daytime, and constructs a concise, fast algorithm based on the spectral information of AGRI’s 14 bands. The study adapted a previously proposed cloud mask rectification algorithm for the Moderate-Resolution Imaging Spectroradiometer (MODIS), rectified the MODIS cloud detection result, and used it as the reference cloud mask. The algorithm was built on adjusted Fisher discriminant analysis (AFDA) and spectral spatial variability (SSV) methods over four underlying surfaces (land, desert, snow, and water) and two seasons (summer and winter). Identification proceeds in two steps, screening first the confident cloud clusters and then the broken clouds that are harder to recognize. In the first step, channels with obvious differences between cloudy and cloud-free areas were selected, and AFDA was used to build a weighted-sum formula over the normalized spectral data of the selected bands; this transforms the traditional dynamic-threshold test on multiple bands into a single test of the computed sum. In the second step, SSV was used to capture broken clouds by calculating the standard deviation (STD) of spectra in every 3 × 3-pixel window to quantify small-scale spectral homogeneity. To assess the algorithm’s spatial and temporal generalizability, two evaluations were conducted: one examining four key regions and another assessing three different times on a single day in East China. The results show that the algorithm achieves excellent accuracy across the four underlying surfaces, is insensitive to the main interferences such as haze and snow, and detects broken clouds well. The algorithm can be applied widely across regions and times of day with low computational complexity, indicating that a fast and robust cloud detection method is achievable. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
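As a concrete illustration of the two-step scheme summarized in this abstract, the sketch below (synthetic data; the band weights, thresholds, and array shapes are assumptions, not the authors' settings) derives a Fisher weighted-sum score from labeled cloudy/clear spectra and computes the 3 × 3-window standard deviation used as the SSV measure.

```python
# Minimal sketch of the two-step idea described above (not the authors' code):
# (1) a Fisher-discriminant weighted sum over normalized bands, thresholded once,
# (2) spectral spatial variability (SSV) as the standard deviation in 3x3 windows.
import numpy as np
from scipy.ndimage import uniform_filter

def fisher_weights(cloudy, clear):
    """Fisher discriminant direction w = Sw^-1 (m_cloudy - m_clear).

    cloudy, clear: (n_samples, n_bands) normalized spectra from labeled pixels.
    """
    m1, m0 = cloudy.mean(axis=0), clear.mean(axis=0)
    sw = np.cov(cloudy, rowvar=False) + np.cov(clear, rowvar=False)
    return np.linalg.solve(sw, m1 - m0)

def weighted_sum_score(bands, w):
    """bands: (rows, cols, n_bands) normalized image; returns one score per pixel."""
    return bands @ w

def local_std(img, size=3):
    """Standard deviation in each size x size window (SSV for one 2D field)."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

# Example usage with synthetic data (thresholds 0.5 and 0.2 are arbitrary):
rng = np.random.default_rng(0)
w = fisher_weights(rng.normal(1, 0.2, (500, 14)), rng.normal(0, 0.2, (500, 14)))
score = weighted_sum_score(rng.normal(0, 1, (64, 64, 14)), w)
cloud_mask = score > 0.5           # single threshold instead of per-band tests
broken = local_std(score) > 0.2    # high small-scale variability flags broken cloud
```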

14 pages, 819 KB  
Article
Neurocognitive Impairment in ART-Experienced People Living with HIV: An Analysis of Clinical Risk Factors, Injection Drug Use, and the sCD163
by Syed Zaryab Ahmed, Faiq Amin, Nida Farooqui, Zhannur Omarova, Syed Faisal Mahmood, Qurat ul ain Khan, Haider A. Naqvi, Aida Mumtaz, Saeeda Baig, Muhammad Rehan Khan, Sharaf A. Shah, Ali Hassan, Srinivasa Bolla, Shamim Mushtaq and Syed Hani Abidi
Viruses 2025, 17(9), 1232; https://doi.org/10.3390/v17091232 - 10 Sep 2025
Viewed by 582
Abstract
Background: In people living with HIV (PLHIV), ongoing neuronal injury has shown a correlation with elevated levels of soluble markers of immune activation, such as sCD163. Additionally, various risk factors, such as injection drug use (IDU), can independently affect immune and cognitive functions, leading to neurocognitive impairment (NCI). However, the potential sCD163-IDU-NCI axis in ART-experienced PLHIV is not clear. This study aims to determine NCI prevalence and investigate the interplay between risk factors and sCD163 in Pakistani PLHIV. Methods: For this cross-sectional study, 150 PLHIV and 30 HIV-negative people who inject drugs (PWID) were recruited using a convenience sampling strategy. NCI screening was performed using the International HIV Dementia Scale (IHDS) tool. Blood samples from PLHIV were used to perform HIV recency testing using the Asante Rapid Recency Assay, and to evaluate sCD163 levels using ELISA. Sociodemographic and clinical data were collected from medical records. Subsequently, descriptive statistics were used to summarize data variables, while comparisons (two and multiple groups) between participants with and without NCI were conducted, respectively, using the Mann–Whitney test or Kruskal–Wallis test for continuous variables, and Fisher’s exact test for categorial variables. Receiver Operating Characteristic (ROC) curve analysis was performed to assess the discriminative ability of sCD163. Logistic regression was used to identify predictors of neurocognitive impairment. Results: The majority of PLHIV had IDU as a high-risk behavior. In PLHIV, the median age was 34.5 years (IQR: 30–41), ART duration was 35 months (IQR: 17–54), and median CD4 count was 326.5 cells/µL (IQR: 116–545.5). Long-term infections (>6 months post-seroconversion; median ART duration: 35 months; median CD4 counts: 326.5 cells/μL) were noted in 83.3% of PLHIV. IHDS-based screening showed that 83.33% (all PLHIV) and 50% (PLHIV with no IDU history) scored ≤ 9 on the IHDS, suggestive of NCI. IHDS-component analysis showed the memory recall to be significantly affected in PLHIV compared to controls (median score 3.2 versus 3.7, respectively, p < 0.001). Regression analysis showed only long-term infection (OR: 2.99, p = 0.03) to be significantly associated with neurocognitive impairment. sCD163 levels were significantly lower in PLHIV with NCI (mean = 7.48 ng/mL, SD = 7.05) compared to those without NCI (mean = 14.82 ng/mL, SD = 8.23; p < 0.0001), with an AUC of 0.803 (95% CI: 0.72–0.88). However, after adjusting for IDU history, the regression analysis showed an odds ratio for sCD163 of 0.998 (95% CI: 0.934, 1.067, p = 0.957), indicating no association between sCD163 levels and NCI. Conclusion: This study reports a high prevalence of NCI in Pakistani PLHIV, and no association between sCD163 and neurocognitive impairment in PLHIV after adjustment for a history of IDU. Long-term infection and IDU were significantly linked to NCI, while only IDU was associated with lower sCD163 levels, regardless of NCI. Full article
(This article belongs to the Special Issue HIV Neurological Disorders: 2nd Edition)
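To make the statistical workflow in this abstract concrete, here is a minimal sketch (entirely synthetic counts and sCD163 values, chosen only to mimic the reported group means and SDs) of a Fisher's exact test on an IDU-by-NCI contingency table and an ROC/AUC analysis of a continuous marker with SciPy and scikit-learn.

```python
import numpy as np
from scipy.stats import fisher_exact
from sklearn.metrics import roc_auc_score, roc_curve

# 2x2 contingency table: rows = IDU / no IDU, columns = NCI / no NCI (made-up counts)
table = np.array([[90, 10],
                  [35, 15]])
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_value:.4f}")

# ROC analysis of a continuous marker (simulated sCD163, ng/mL)
rng = np.random.default_rng(1)
nci = np.r_[np.ones(100), np.zeros(50)]                        # 1 = NCI, 0 = no NCI
scd163 = np.r_[rng.normal(7.5, 7.0, 100), rng.normal(14.8, 8.2, 50)].clip(min=0)
auc = roc_auc_score(nci, -scd163)                              # lower sCD163 associated with NCI
fpr, tpr, thresholds = roc_curve(nci, -scd163)
print(f"AUC = {auc:.3f}")
```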

16 pages, 1094 KB  
Article
Recognition of EEG Features in Autism Disorder Using SWT and Fisher Linear Discriminant Analysis
by Fahmi Fahmi, Melinda Melinda, Prima Dewi Purnamasari, Elizar Elizar and Aufa Rafiki
Diagnostics 2025, 15(18), 2291; https://doi.org/10.3390/diagnostics15182291 - 10 Sep 2025
Viewed by 521
Abstract
Background/Objectives: An ASD diagnosis from EEG is challenging due to non-stationary, low-SNR signals and small cohorts. We propose a compact, interpretable pipeline that pairs a shift-invariant Stationary Wavelet Transform (SWT) with Fisher’s Linear Discriminant (FLDA) as a supervised projection method, delivering band-level insight and subject-wise evaluation suitable for resource-constrained clinics. Methods: EEG from the KAU dataset (eight ASD, eight controls; 256 Hz) was decomposed with SWT (db4). We retained levels 3, 4, and 6 (γ/β/θ) as features. FLDA learned a low-dimensional discriminant subspace, followed by a linear decision rule. Evaluation was conducted using a subject-wise 70/30 split (no subject overlap) with accuracy, precision, recall, F1, and confusion matrices. Results: The β band (Level 4) achieved the best performance (accuracy/precision/recall/F1 = 0.95), followed by γ (0.92) and θ (0.85). Despite partial overlap in FLDA scores, the projection maximized between-class separation relative to within-class variance, yielding robust linear decisions. Conclusions: Unlike earlier FLDA-only pipelines and wavelet–entropy–ANN approaches, our study (1) employs SWT (undecimated, shift-invariant) rather than DWT to stabilize sub-band features on short resting segments, (2) uses FLDA as a supervised projection to mitigate small-sample covariance pathologies before classification, (3) provides band-specific discriminative insight (β > γ/θ) under a subject-wise protocol, and (4) targets low-compute deployment. These choices yield a reproducible baseline with competitive accuracy and clear clinical interpretability. Future work will benchmark kernel/regularized discriminants and lightweight deep models as cohort size and compute permit. Full article
(This article belongs to the Special Issue Advances in the Diagnosis of Nervous System Diseases—3rd Edition)
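A minimal sketch of the SWT-plus-FLDA pipeline named above, assuming toy EEG segments (the channel count, segment length, and labels are placeholders, not the KAU dataset); it uses PyWavelets for the shift-invariant stationary wavelet transform and scikit-learn's LinearDiscriminantAnalysis as the supervised projection and linear decision rule.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def swt_band_features(segment, wavelet="db4", level=6, keep_levels=(3, 4, 6)):
    """segment: (n_channels, n_samples) EEG; mean log-power of selected SWT sub-bands."""
    feats = []
    for ch in segment:
        coeffs = pywt.swt(ch, wavelet, level=level)      # [(cA_level, cD_level), ..., (cA_1, cD_1)]
        for lvl in keep_levels:
            detail = coeffs[level - lvl][1]              # detail coefficients at decomposition level `lvl`
            feats.append(np.log(np.mean(detail ** 2) + 1e-12))
    return np.array(feats)

# Toy cohort: 16 "subjects", 8 channels, 512 samples (length must be divisible by 2**level)
rng = np.random.default_rng(0)
X = np.stack([swt_band_features(rng.standard_normal((8, 512))) for _ in range(16)])
y = np.r_[np.ones(8), np.zeros(8)]                       # 1 = ASD, 0 = control (placeholder labels)

flda = LinearDiscriminantAnalysis()                      # supervised projection + linear decision rule
flda.fit(X, y)
print("training accuracy:", flda.score(X, y))
```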

20 pages, 2409 KB  
Article
Brainwave Biometrics: A Secure and Scalable Brain–Computer Interface-Based Authentication System
by Mashael Aldayel, Nouf Alsedairy and Abeer Al-Nafjan
AI 2025, 6(9), 205; https://doi.org/10.3390/ai6090205 - 28 Aug 2025
Viewed by 1143
Abstract
This study introduces a promising authentication framework utilizing brain–computer interface (BCI) technology to enhance both security protocols and user experience. A key strength of this approach lies in its reliance on objective, physiological signals—specifically, brainwave patterns—which are inherently difficult to replicate or forge, thereby providing a robust foundation for secure authentication. The authentication system was developed and implemented in four sequential stages: signal acquisition, preprocessing, feature extraction, and classification. Objective feature extraction methods, including Fisher’s Linear Discriminant (FLD) and Discrete Wavelet Transform (DWT), were employed to isolate meaningful brainwave features. These features were then classified using advanced machine learning techniques, with Quadratic Discriminant Analysis (QDA) and Convolutional Neural Networks (CNN) achieving accuracy rates exceeding 99%. These results highlight the effectiveness of the proposed BCI-based system and underscore the value of objective, data-driven methodologies in developing secure and user-friendly authentication solutions. To further address usability and efficiency, the number of BCI channels was systematically reduced from 64 to 32, and then to 16, resulting in accuracy rates of 92.64% and 80.18%, respectively. This reduction streamlined the authentication process, demonstrating that objective methods can maintain high performance even with simplified hardware and pointing to future directions for practical, real-world implementation. Additionally, we developed a real-time application using our custom dataset, reaching 99.75% accuracy with a CNN model. Full article
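The feature-extraction and classification stages named in this abstract can be sketched as follows (toy multi-user EEG with assumed epoch shapes and a regularized QDA; this is not the authors' pipeline): DWT sub-band energies as features, QDA as the classifier.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split

def dwt_energy_features(epoch, wavelet="db4", level=4):
    """epoch: (n_channels, n_samples) EEG; log-energy of every DWT sub-band per channel."""
    feats = []
    for ch in epoch:
        for c in pywt.wavedec(ch, wavelet, level=level):
            feats.append(np.log(np.sum(c ** 2) + 1e-12))
    return np.array(feats)

# Toy enrollment data: 5 "users", 40 epochs each, 16 channels x 256 samples
rng = np.random.default_rng(0)
X = np.stack([dwt_energy_features(rng.standard_normal((16, 256)) + 0.3 * user)
              for user in range(5) for _ in range(40)])
y = np.repeat(np.arange(5), 40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
qda = QuadraticDiscriminantAnalysis(reg_param=0.1)       # regularization keeps class covariances stable
print("test accuracy:", qda.fit(X_tr, y_tr).score(X_te, y_te))
```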

13 pages, 367 KB  
Article
Psychometric Properties of the Greek Version of the Claustrophobia Questionnaire
by Varvara Pantoleon, Petros Galanis, Athanasios Tsochatzis, Foteini Christidi, Efstratios Karavasilis, Nikolaos Kelekis and Georgios Velonakis
Behav. Sci. 2025, 15(8), 1059; https://doi.org/10.3390/bs15081059 - 5 Aug 2025
Viewed by 473
Abstract
Background: Claustrophobia is defined as the fear of enclosed spaces, and it is a rather common specific phobia. Although the Claustrophobia Questionnaire (CLQ) is a valid questionnaire to measure claustrophobia, there have been no studies validating this tool in Greek. Thus, our aim was to translate and validate the CLQ in Greek. Methods: We applied the forward–backward translation method to translate the English CLQ into Greek. We conducted confirmatory factor analysis (CFA) to examine the two-factor model of the CLQ. We examined the convergent and divergent validity of the Greek CLQ by using the Fear Survey Schedule-III (FSS-III-CL), the NEO Five-Factor Inventory (NEO-FFI-NL-N), and the Spielberger’s State-Trait Anxiety Inventory (STAI). We examined the convergent validity of the Greek CLQ by calculating Pearson’s correlation coefficient between the CLQ scores and scores on FSS-III-CL, NEO-FFI-NL-N, STAI-S (state anxiety), and STAI-T (trait anxiety). We examined the divergent validity of the Greek CLQ using the Fisher r-to-z transformation. To further evaluate the discriminant validity of the CLQ, we calculated the average variance extracted (AVE) score and the Composite Reliability (CR) score. We calculated the intraclass correlation coefficient (ICC) and Cronbach’s alpha to assess the reliability of the Greek CLQ. Results: Our CFA confirmed the two-factor model of the CLQ since all the model fit indices were very good. Standardized regression weights between the 26 items of the CLQ and the two factors ranged from 0.559 to 0.854. The convergent validity of the Greek CLQ was very good since it correlated strongly with the FSS-III-CL and moderately with the NEO-FFI-NL-N and the STAI. Additionally, the Greek CLQ correlated more highly with the FSS-III-CL than with the NEO-FFI-NL-N and the STAI, indicating very good divergent validity. The AVE for the suffocation factor was 0.573, while for the restriction factor, it was 0.543, which are both higher than the acceptable value of 0.50. Moreover, the CR score for the suffocation factor was 0.949, while for the restriction factor, it was 0.954. The reliability of the Greek CLQ was excellent since the ICC in test–retest study was 0.986 and the Cronbach’s alpha was 0.956. Conclusions: The Greek version of the CLQ is a reliable and valid tool to measure levels of claustrophobia among individuals. Full article
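For readers unfamiliar with the Fisher r-to-z step used here for divergent validity, the following worked sketch compares two correlations from independent samples (the r values and n are illustrative only; when both correlations come from the same participants, a dependent-correlation test such as Steiger's is the appropriate variant).

```python
import numpy as np
from scipy.stats import norm

def fisher_r_to_z(r):
    return np.arctanh(r)                       # z = 0.5 * ln((1 + r) / (1 - r))

def compare_independent_correlations(r1, n1, r2, n2):
    """Two-sided z-test of H0: rho1 == rho2 for correlations from independent samples."""
    z1, z2 = fisher_r_to_z(r1), fisher_r_to_z(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    return z, 2 * norm.sf(abs(z))

# e.g. CLQ vs. FSS-III-CL (r = 0.70) compared with CLQ vs. STAI-T (r = 0.40), n = 200 each
z, p = compare_independent_correlations(0.70, 200, 0.40, 200)
print(f"z = {z:.2f}, p = {p:.4f}")
```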

23 pages, 4949 KB  
Article
Hybrid LDA-CNN Framework for Robust End-to-End Myoelectric Hand Gesture Recognition Under Dynamic Conditions
by Hongquan Le, Marc in het Panhuis, Geoffrey M. Spinks and Gursel Alici
Robotics 2025, 14(6), 83; https://doi.org/10.3390/robotics14060083 - 17 Jun 2025
Viewed by 1260
Abstract
Gesture recognition based on conventional machine learning is the main control approach for advanced prosthetic hand systems. Its primary limitation is the need for feature extraction, which must meet real-time control requirements. On the other hand, deep learning models could potentially overfit when trained on small datasets. For these reasons, we propose a hybrid Linear Discriminant Analysis–convolutional neural network (LDA-CNN) framework to improve the gesture recognition performance of sEMG-based prosthetic hand control systems. Within this framework, 1D-CNN filters are trained to generate latent representation that closely approximates Fisher’s (LDA’s) discriminant subspace, constructed from handcrafted features. Under the train-one-test-all evaluation scheme, our proposed hybrid framework consistently outperformed the 1D-CNN trained with cross-entropy loss only, showing improvements from 4% to 11% across two public datasets featuring hand gestures recorded under various limb positions and arm muscle contraction levels. Furthermore, our framework exhibited advantages in terms of induced spectral regularization, which led to a state-of-the-art recognition error of 22.79% with the extended 23 feature set when tested on the multi-limb position dataset. The main novelty of our hybrid framework is that it decouples feature extraction in regard to the inference time, enabling the future incorporation of a more extensive set of features, while keeping the inference computation time minimal. Full article
(This article belongs to the Special Issue AI for Robotic Exoskeletons and Prostheses)

21 pages, 360 KB  
Article
Linear Dimensionality Reduction: What Is Better?
by Mohit Baliyan and Evgeny M. Mirkes
Data 2025, 10(5), 70; https://doi.org/10.3390/data10050070 - 6 May 2025
Viewed by 747
Abstract
This research paper focuses on dimensionality reduction, which is a major subproblem in any data processing operation. Dimensionality reduction based on principal components is the most used methodology. Our paper examines three heuristics, namely Kaiser’s rule, the broken stick, and the conditional number rule, for selecting informative principal components when using principal component analysis to reduce high-dimensional data to lower dimensions. This study uses 22 classification datasets and three classifiers, namely Fisher’s discriminant classifier, logistic regression, and K nearest neighbors, to test the effectiveness of the three heuristics. The results show that there is no universal answer to the best intrinsic dimension, but the conditional number heuristic performs better, on average. This means that the conditional number heuristic is the best candidate for automatic data pre-processing. Full article
(This article belongs to the Section Information Systems and Data Management)
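The three heuristics compared in this study can be written down compactly; the sketch below applies Kaiser's rule, the broken-stick rule, and a condition-number rule to the eigenvalues of a standardized dataset (the condition-number cutoff of 10 is a common convention and not necessarily the paper's setting).

```python
import numpy as np

def component_counts(X, cond_threshold=10.0):
    """Number of principal components kept by Kaiser, broken-stick, and condition-number rules."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)                  # standardize -> correlation-matrix PCA
    eig = np.linalg.eigvalsh(np.cov(Xs, rowvar=False))[::-1]   # eigenvalues, descending
    p = len(eig)

    kaiser = int(np.sum(eig > 1.0))                            # eigenvalues above the average (= 1)

    stick = np.array([np.sum(1.0 / np.arange(k, p + 1)) / p for k in range(1, p + 1)])
    props = eig / eig.sum()
    broken = 0
    while broken < p and props[broken] > stick[broken]:        # leading components beating the broken stick
        broken += 1

    cond = int(np.sum(eig > eig[0] / cond_threshold))          # keep while eig_1 / eig_k stays below the cutoff
    return kaiser, broken, cond

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10)) @ rng.standard_normal((10, 10))   # correlated toy features
print(component_counts(X))
```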

16 pages, 3628 KB  
Article
A Gene Ontology-Based Pipeline for Selecting Significant Gene Subsets in Biomedical Applications
by Sergii Babichev, Oleg Yarema, Igor Liakh and Nataliia Shumylo
Appl. Sci. 2025, 15(8), 4471; https://doi.org/10.3390/app15084471 - 18 Apr 2025
Cited by 1 | Viewed by 1249
Abstract
The growing volume and complexity of gene expression data necessitate biologically meaningful and statistically robust methods for feature selection to enhance the effectiveness of disease diagnosis systems. The present study addresses this challenge by proposing a pipeline that integrates RNA-seq data preprocessing, differential gene expression analysis, Gene Ontology (GO) enrichment, and ensemble-based machine learning. The pipeline employs the non-parametric Kruskal–Wallis test to identify differentially expressed genes, followed by dual enrichment analysis using both Fisher’s exact test and the Kolmogorov–Smirnov test across three GO categories: Biological Process (BP), Molecular Function (MF), and Cellular Component (CC). Genes associated with GO terms found significant by both tests were used to construct multiple gene subsets, including subsets based on individual categories, their union, and their intersection. Classification experiments using a random forest model, validated via 5-fold cross-validation, demonstrated that gene subsets derived from the CC category and the union of all categories achieved the highest accuracy and weighted F1-scores, exceeding 0.97 across 14 cancer types. In contrast, subsets derived from BP, MF, and especially their intersection exhibited lower performance. These results confirm the discriminative power of spatially localized gene annotations and underscore the value of integrating statistical and functional information into gene selection. The proposed approach improves the reliability of biomarker discovery and supports downstream analyses such as clustering and biclustering, providing a strong foundation for developing precise diagnostic tools in personalized medicine. Full article
(This article belongs to the Special Issue Advances in Bioinformatics and Biomedical Engineering)
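A toy sketch of the dual enrichment test described above: for a single GO term, Fisher's exact test on a 2 × 2 membership table and a Kolmogorov–Smirnov test on per-gene scores inside versus outside the term. The gene lists and scores are simulated; the paper's actual annotation sets and RNA-seq data are not reproduced here.

```python
import numpy as np
from scipy.stats import fisher_exact, ks_2samp

rng = np.random.default_rng(0)
all_genes = [f"g{i}" for i in range(2000)]
de_genes = set(rng.choice(all_genes, 200, replace=False))       # "differentially expressed" genes
go_term_genes = set(rng.choice(all_genes, 80, replace=False))   # genes annotated to one GO term

# 2x2 table: (in term / not in term) x (DE / not DE)
a = len(go_term_genes & de_genes)
b = len(go_term_genes - de_genes)
c = len(de_genes - go_term_genes)
d = len(set(all_genes) - go_term_genes - de_genes)
odds, p_fisher = fisher_exact([[a, b], [c, d]], alternative="greater")

# KS test: are per-gene scores (e.g. Kruskal-Wallis p-values) shifted inside the term?
scores = dict(zip(all_genes, rng.uniform(size=len(all_genes))))
in_term = [scores[g] for g in go_term_genes]
out_term = [scores[g] for g in set(all_genes) - go_term_genes]
ks_stat, p_ks = ks_2samp(in_term, out_term)

keep_term = (p_fisher < 0.05) and (p_ks < 0.05)     # the pipeline keeps terms significant in both tests
print(p_fisher, p_ks, keep_term)
```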

17 pages, 1066 KB  
Article
Covariation of Amino Acid Substitutions in the HIV-1 Envelope Glycoprotein gp120 and the Antisense Protein ASP Associated with Coreceptor Usage
by Angelo Pavesi and Fabio Romerio
Viruses 2025, 17(3), 323; https://doi.org/10.3390/v17030323 - 26 Feb 2025
Viewed by 744
Abstract
The tropism of the Human Immunodeficiency Virus type 1 (HIV-1) is determined by the use of either or both chemokine coreceptors CCR5 (R5) and CXCR4 (X4) for entry into the target cell. The ability of HIV-1 to bind R5 or X4 is determined primarily by the third variable loop (V3) of the viral envelope glycoprotein gp120. HIV-1 strains of pandemic group M contain an antisense gene termed asp, which overlaps env outside the region encoding the V3 loop. We previously showed that the ASP protein localizes on the envelope of infectious HIV-1 virions, suggesting that it may play a role in viral entry. In this study, we first developed a statistical method to predict coreceptor tropism based on Fisher’s linear discriminant analysis. We obtained three linear discriminant functions able to predict coreceptor tropism with high accuracy (94.4%) when applied to a training dataset of V3 sequences of known tropism. Using these functions, we predicted the tropism in a dataset of HIV-1 strains containing a full-length asp gene. In the amino acid sequence of ASP proteins expressed from these asp genes, we identified five positions with substitutions significantly associated with viral tropism. Interestingly, we found that these substitutions correlate significantly with substitutions at six amino acid positions of the V3 loop domain associated with tropism. Altogether, our computational analyses identify ASP amino acid signatures coevolving with V3 and potentially affecting HIV-1 tropism, which can be validated through in vitro and in vivo experiments. Full article
(This article belongs to the Section Human Virology and Viral Diseases)

30 pages, 23945 KB  
Article
Assessment Model of Channelized Debris Flow Potential Based on Hillslope Debris Flow Characteristics in Taiwan’s Sedimentary Watersheds
by Tien-Chien Chen, Kun-Ting Chen, Yu-Shan Hsu, Ming-Hsiu Chung and Jia-Zhen Huang
Water 2025, 17(4), 492; https://doi.org/10.3390/w17040492 - 9 Feb 2025
Viewed by 1202
Abstract
This study proposes a novel assessment model to evaluate the occurrence potential of channelized debris flows (CDFs) in sedimentary rock regions of Central and Southern Taiwan, with a particular emphasis on the characteristics of hillslope debris flows (HDFs) within watersheds. CDFs are significantly related to the occurrence of HDFs in the upper reaches of watersheds, suggesting a high correlation between the potential for both phenomena. The study initially developed a hillslope debris flow (HDF) recognition model utilizing Fisher’s discriminant analysis, based on data from 40 HDF events and 40 landslide events. This model was subsequently applied to identify HDF units within channelized debris flow (CDF) watersheds. Subsequently, a CDF potential assessment model was constructed using data from 32 streams, which included 16 CDFs and 16 non-debris flow streams. Two key indicators emerged as the most effective: “Total WA(>8)” and “number of WA(>10).” These indicators achieved an accuracy rate exceeding 84%, significantly outperforming the official assessment model, which had an accuracy rate of 60%. The newly developed assessment models offer substantial improvements in predicting CDF occurrences, enhancing disaster preparedness and sustainable environment development. Full article
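Since Fisher's discriminant analysis is the core of the HDF recognition model described above, here is a generic two-class Fisher discriminant written from scratch on toy data (the three "indicators" and class means are invented; the paper's watershed variables are not used).

```python
import numpy as np

def fisher_discriminant(X1, X0):
    """Fisher direction w = Sw^-1 (m1 - m0) and a midpoint threshold for two classes."""
    m1, m0 = X1.mean(axis=0), X0.mean(axis=0)
    Sw = np.cov(X1, rowvar=False) * (len(X1) - 1) + np.cov(X0, rowvar=False) * (len(X0) - 1)
    w = np.linalg.solve(Sw, m1 - m0)                  # maximizes between- vs. within-class variance
    threshold = 0.5 * ((X1 @ w).mean() + (X0 @ w).mean())
    return w, threshold

rng = np.random.default_rng(0)
X1 = rng.normal([2.0, 1.0, 0.5], 0.8, size=(40, 3))   # 40 "HDF" events, 3 hypothetical indicators
X0 = rng.normal([0.0, 0.0, 0.0], 0.8, size=(40, 3))   # 40 "landslide" events
w, t = fisher_discriminant(X1, X0)
labels = np.r_[np.ones(40), np.zeros(40)].astype(bool)
pred = (np.vstack([X1, X0]) @ w) > t
print("training accuracy:", (pred == labels).mean())
```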

19 pages, 4353 KB  
Article
Fusarium Wilt of Banana Latency and Onset Detection Based on Visible/Near Infrared Spectral Technology
by Cuiling Li, Dandan Xiang, Shuo Yang, Xiu Wang and Chunyu Li
Agronomy 2024, 14(12), 2994; https://doi.org/10.3390/agronomy14122994 - 16 Dec 2024
Cited by 1 | Viewed by 1358
Abstract
Fusarium wilt of banana is a soil-borne vascular disease caused by Fusarium oxysporum f. sp. cubense. The rapid and accurate detection of this disease is of great significance to controlling its spread. The research objective was to explore rapid banana Fusarium wilt latency and onset detection methods and establish a disease severity grading model. Visible/near-infrared spectroscopy analysis combined with machine learning methods were used for the rapid in vivo detection of banana Fusarium wilt. A portable visible/near-infrared spectrum acquisition system was constructed to collect the spectra data of banana Fusarium wilt leaves representing five different disease grades, totaling 106 leaf samples which were randomly divided into a training set with 80 samples and a test set with 26 samples. Different data preprocessing methods were utilized, and Fisher discriminant analysis (FDA), an extreme learning machine (ELM), and a one-dimensional convolutional neural network (1D-CNN) were used to establish the classification models of the disease grades. The classification accuracies of the FDA, ELM, and 1D-CNN models reached 0.891, 0.989, and 0.904, respectively. The results showed that the proposed visible/near infrared spectroscopy detection method could realize the detection of the incubation period of banana Fusarium wilt and the classification of the disease severity and could be a favorable tool for the field diagnosis of banana Fusarium wilt. Full article
(This article belongs to the Section Pest and Disease Management)

16 pages, 1428 KB  
Article
A Definition of a Heywood Case in Item Response Theory Based on Fisher Information
by Jay Verkuilen and Peter J. Johnson
Entropy 2024, 26(12), 1096; https://doi.org/10.3390/e26121096 - 14 Dec 2024
Viewed by 1140
Abstract
Heywood cases and other improper solutions occur frequently in latent variable models, e.g., factor analysis, item response theory, latent class analysis, multilevel models, or structural equation models, all of which are models with response variables taken from an exponential family. They have important consequences for scoring with the latent variable model and are indicative of issues in a model, such as poor identification or model misspecification. In the context of the 2PL and 3PL models in IRT, they are more frequently known as Guttman items and are identified by having a discrimination parameter that is deemed excessively large. Other IRT models, such as the newer asymmetric item response theory (AsymIRT) or polytomous IRT models often have parameters that are not easy to interpret directly, so scanning parameter estimates are not necessarily indicative of the presence of problematic values. The graphical examination of the IRF can be useful but is necessarily subjective and highly dependent on choices of graphical defaults. We propose using the derivatives of the IRF, item Fisher information functions, and our proposed Item Fraction of Total Information (IFTI) decomposition metric to bypass the parameters, allowing for the more concrete and consistent identification of Heywood cases. We illustrate the approach by using empirical examples by using AsymIRT and nominal response models. Full article
(This article belongs to the Special Issue Applications of Fisher Information in Sciences II)
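To illustrate the information-based view proposed here, the sketch below computes the Fisher information function of 2PL items and each item's share of the total test information, which is one plausible reading of an "Item Fraction of Total Information" style summary; the paper's exact IFTI definition should be consulted, the item parameters are invented, and the high-discrimination item is meant to mimic a Guttman/Heywood case.

```python
import numpy as np

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)          # Fisher information of a 2PL item

theta = np.linspace(-4, 4, 801)
items = [(1.0, -1.0), (1.2, 0.0), (6.0, 0.5), (0.8, 1.0)]   # (a, b); item 3 looks Guttman-like
info = np.array([item_information(theta, a, b) for a, b in items])

total = np.trapz(info.sum(axis=0), theta)
ifti = [np.trapz(row, theta) / total for row in info]        # each item's share of total information
for k, frac in enumerate(ifti, 1):
    print(f"item {k}: fraction of total information = {frac:.2f}")
```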

12 pages, 810 KB  
Article
Diagnostic Performance of Serum Leucine-Rich Alpha-2-Glycoprotein 1 in Pediatric Acute Appendicitis: A Prospective Validation Study
by Javier Arredondo Montero, Raquel Ros Briones, Amaya Fernández-Celis, Natalia López-Andrés and Nerea Martín-Calvo
Biomedicines 2024, 12(8), 1821; https://doi.org/10.3390/biomedicines12081821 - 11 Aug 2024
Cited by 5 | Viewed by 1546
Abstract
Introduction: Leucine-rich alpha-2-glycoprotein 1(LRG-1) is a human protein that has shown potential usefulness as a biomarker for diagnosing pediatric acute appendicitis (PAA). This study aims to validate the diagnostic performance of serum LRG-1 in PAA. Material and Methods: This work is a subgroup analysis from BIDIAP (BIomarkers for DIagnosing Appendicitis in Pediatrics), a prospective single-center observational cohort, to validate serum LRG-1 as a diagnostic tool in PAA. This analysis included 200 patients, divided into three groups: (1) healthy patients undergoing major outpatient surgery (n = 56), (2) patients with non-surgical abdominal pain (n = 52), and (3) patients with a confirmed diagnosis of PAA (n = 92). Patients in group 3 were divided into complicated and uncomplicated PAA. In all patients, a serum sample was obtained during recruitment, and LRG-1 concentration was determined by Enzyme-Linked ImmunoSorbent Assay (ELISA). Comparative statistical analyses were performed using the Mann–Whitney U, Kruskal–Wallis, and Fisher’s exact tests. The area under the receiver operating characteristic curves (AUC) was calculated for all pertinent analyses. Results: Serum LRG-1 values, expressed as median (interquartile range) were 23,145 (18,246–27,453) ng/mL in group 1, 27,655 (21,151–38,795) ng/mL in group 2 and 40,409 (32,631–53,655) ng/mL in group 3 (p < 0.0001). Concerning the type of appendicitis, the serum LRG-1 values obtained were 38,686 (31,804–48,816) ng/mL in the uncomplicated PAA group and 51,857 (34,013–64,202) ng/mL in the complicated PAA group (p = 0.02). The area under the curve (AUC) obtained (group 2 vs. 3) was 0.75 (95% CI 0.67–0.84). For the discrimination between complicated and uncomplicated PAA, the AUC obtained was 0.66 (95% CI 0.52–0.79). Conclusions: This work establishes normative health ranges for serum LRG-1 values in the pediatric population and shows that serum LRG-1 could be a potentially helpful tool for diagnosing PAA in the future. Future prospective multicenter studies, with the parallel evaluation of urinary and salivary LRG-1, are necessary to assess the implementability of this molecule in actual clinical practice. Full article

14 pages, 282 KB  
Article
Biomarker Profiling with Targeted Metabolomic Analysis of Plasma and Urine Samples in Patients with Type 2 Diabetes Mellitus and Early Diabetic Kidney Disease
by Maria Mogos, Carmen Socaciu, Andreea Iulia Socaciu, Adrian Vlad, Florica Gadalean, Flaviu Bob, Oana Milas, Octavian Marius Cretu, Anca Suteanu-Simulescu, Mihaela Glavan, Lavinia Balint, Silvia Ienciu, Lavinia Iancu, Dragos Catalin Jianu, Sorin Ursoniu and Ligia Petrica
J. Clin. Med. 2024, 13(16), 4703; https://doi.org/10.3390/jcm13164703 - 10 Aug 2024
Cited by 1 | Viewed by 2045
Abstract
Background: The number of patients with diabetes worldwide has reached an alarming level. Diabetes presents many complications, including diabetic kidney disease (DKD), which can be considered the leading cause of end-stage renal disease. Current biomarkers such as serum creatinine and albuminuria have limitations for early detection of DKD. Methods: In our study, we used UHPLC-QTOF-ESI+-MS techniques to quantify previously analyzed metabolites. Based on one-way ANOVA and Fisher’s LSD, the untargeted analysis discriminated six metabolites between subgroup P1 and subgroups P2 and P3: tryptophan, kynurenic acid, taurine, l-acetylcarnitine, glycine, and tiglylglycine. Results: Several metabolites exhibited significant differences among the patient groups and can be considered putative biomarkers of early DKD, including glycine and kynurenic acid in serum (p < 0.001) and tryptophan and tiglylglycine (p < 0.001) in urine. Conclusions: Although we identified metabolites as potential biomarkers in the present study, additional studies are needed to validate these results. Full article
(This article belongs to the Section Endocrinology & Metabolism)
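A minimal sketch of the one-way ANOVA followed by Fisher's LSD named in the Methods, run on synthetic metabolite concentrations for three made-up subgroups P1–P3 (group sizes, means, and units are arbitrary).

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)
groups = {"P1": rng.normal(10, 2, 20), "P2": rng.normal(12, 2, 20), "P3": rng.normal(14, 2, 20)}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Fisher's LSD: pairwise t-tests using the pooled within-group variance (MSE) from the ANOVA,
# carried out only when the overall ANOVA is significant.
if p_anova < 0.05:
    n_total = sum(len(g) for g in groups.values())
    k = len(groups)
    mse = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / (n_total - k)
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        t = (a.mean() - b.mean()) / np.sqrt(mse * (1 / len(a) + 1 / len(b)))
        p = 2 * stats.t.sf(abs(t), df=n_total - k)
        print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p:.4f}")
```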
21 pages, 3618 KB  
Article
Dynamic Evaluation and Risk Projection of Heat Exposure Based on Disaster Events for Single-Season Rice along the Middle and Lower Reaches of the Yangtze River, China
by Mengyuan Jiang, Zhiguo Huo, Lei Zhang, Fengyin Zhang, Meixuan Li, Qianchuan Mi and Rui Kong
Agronomy 2024, 14(8), 1737; https://doi.org/10.3390/agronomy14081737 - 7 Aug 2024
Cited by 2 | Viewed by 1404
Abstract
Along with climate warming, extreme heat events have become more frequent and severe and seriously threaten rice production. Precisely evaluating rice heat levels based on heat duration and a cumulative intensity index dominated by temperature and humidity is of great merit for effectively assessing regional heat risk and minimizing the deleterious impact of heat on rice along the middle and lower reaches of the Yangtze River (MLRYR). This study quantified the contribution of daytime heat accumulation, night-time temperature, and relative humidity to disaster-causing intensity in three categories of single-season rice heat (dry, medium, and wet conditions) using Fisher discriminant analysis to obtain the daily Heat Comprehensive Intensity Index (HCIId). Relative humidity exhibited a negative contribution under dry heat, i.e., heat disaster-causing intensity increased with decreasing relative humidity, with the opposite being true for medium and wet heat. The Kappa coefficient, combined with heat duration and cumulative HCIId, was used to determine classification thresholds for different disaster levels (mild, moderate, and severe) and thereby construct heat evaluation levels. Spatiotemporal changes in heat risk for single-season rice over the periods 1986–2005, 2046–2065, and 2080–2099 under SSP2-4.5 and SSP5-8.5 were then evaluated using climate scenario datasets and the constructed heat evaluation levels. Regional risk projection revealed that future risk would reach its maximum at booting and flowering, followed by the tillering stage, and its minimum at filling. The future heat risk for single-season rice increases substantially more under SSP5-8.5 than under SSP2-4.5 in the MLRYR. The higher risk is concentrated in eastern Hubei, eastern Hunan, most of Jiangxi, and northern Anhui. Over time, the heat risk for single-season rice in eastern Jiangsu and southern Zhejiang will progressively shift from low to mid-high by the end of the twenty-first century. Understanding the potential risk of heat exposure at different growth stages can help decision-makers implement targeted measures to address climate change. The proposed methodology also makes it possible to assess the exposure of other crops to heat stress or other extreme events. Full article
(This article belongs to the Section Farming Sustainability)