Search Results (1,529)

Search Parameters:
Keywords = cross-classified data

25 pages, 1138 KiB  
Article
Quality over Quantity: An Effective Large-Scale Data Reduction Strategy Based on Pointwise V-Information
by Fei Chen and Wenchi Zhou
Electronics 2025, 14(15), 3092; https://doi.org/10.3390/electronics14153092 - 1 Aug 2025
Abstract
In order to increase the effectiveness of model training, data reduction is essential to data-centric Artificial Intelligence (AI). It achieves this by locating the most instructive examples in massive datasets. To increase data quality and training efficiency, the main difficulty is choosing the best examples rather than the complete datasets. In this paper, we propose an effective data reduction strategy based on Pointwise 𝒱-Information (PVI). To enable a static method, we first use PVI to quantify instance difficulty and remove instances with low difficulty. Experiments show that classifier performance is maintained with only a 0.0001% to 0.76% decline in accuracy when 10–30% of the data is removed. Second, we train the classifiers using a progressive learning strategy on examples sorted by increasing PVI, accelerating convergence and achieving a 0.8% accuracy gain over conventional training. Our findings imply that training a classifier on the chosen optimal subset may improve model performance and increase training efficiency when combined with an efficient data reduction strategy. Furthermore, we have adapted the PVI framework, which was previously limited to English datasets, to a variety of Chinese Natural Language Processing (NLP) tasks and base models, yielding insightful results for faster training and cross-lingual data reduction. Full article
(This article belongs to the Special Issue Data Retrieval and Data Mining)
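For orientation, a minimal sketch of the PVI-style filtering idea described in this abstract, assuming two already-trained classifiers (one seeing the real input, one a null input) and assuming that "low difficulty" corresponds to high PVI; the drop fraction and toy probabilities are placeholders, not the authors' settings.

```python
# Minimal sketch of PVI-based data reduction, assuming two already-trained
# classifiers: one conditioned on the real input x and one on a null input.
import numpy as np

def pointwise_v_information(p_y_given_x: np.ndarray, p_y_given_null: np.ndarray) -> np.ndarray:
    """PVI(x -> y) = log2 p(y | x) - log2 p(y | null), computed per instance."""
    return np.log2(p_y_given_x) - np.log2(p_y_given_null)

def reduce_dataset(indices: np.ndarray, pvi: np.ndarray, drop_fraction: float = 0.2) -> np.ndarray:
    """Keep the harder (lower-PVI) share of the data and drop the easiest examples."""
    order = np.argsort(pvi)                       # ascending PVI: hardest instances first
    keep = int(len(indices) * (1.0 - drop_fraction))
    return indices[order[:keep]]

# Toy usage: probability of the gold label under each model, for four instances.
p_x = np.array([0.95, 0.60, 0.99, 0.40])
p_null = np.array([0.50, 0.50, 0.50, 0.50])
pvi = pointwise_v_information(p_x, p_null)
print(pvi)                                         # per-instance difficulty scores
print(reduce_dataset(np.arange(4), pvi, drop_fraction=0.25))
```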
31 pages, 1370 KiB  
Article
AIM-Net: A Resource-Efficient Self-Supervised Learning Model for Automated Red Spider Mite Severity Classification in Tea Cultivation
by Malathi Kanagarajan, Mohanasundaram Natarajan, Santhosh Rajendran, Parthasarathy Velusamy, Saravana Kumar Ganesan, Manikandan Bose, Ranjithkumar Sakthivel and Baskaran Stephen Inbaraj
AgriEngineering 2025, 7(8), 247; https://doi.org/10.3390/agriengineering7080247 - 1 Aug 2025
Abstract
Tea cultivation faces significant threats from red spider mite (RSM: Oligonychus coffeae) infestations, which reduce yields and economic viability in major tea-producing regions. Current automated detection methods rely on supervised deep learning models requiring extensive labeled data, limiting scalability for smallholder farmers. This article proposes AIM-Net (AI-based Infestation Mapping Network) by evaluating SwAV (Swapping Assignments between Views), a self-supervised learning framework, for classifying RSM infestation severity (Mild, Moderate, Severe) using a geo-referenced, field-acquired dataset of RSM infested tea-leaves, Cam-RSM. The methodology combines SwAV pre-training on unlabeled data with fine-tuning on labeled subsets, employing multi-crop augmentation and online clustering to learn discriminative features without full supervision. Comparative analysis against a fully supervised ResNet-50 baseline utilized 5-fold cross-validation, assessing accuracy, F1-scores, and computational efficiency. Results demonstrate SwAV’s superiority, achieving 98.7% overall accuracy (vs. 92.1% for ResNet-50) and macro-average F1-scores of 98.3% across classes, with a 62% reduction in labeled data requirements. The model showed particular strength in Mild_RSM-class detection (F1-score: 98.5%) and computational efficiency, enabling deployment on edge devices. Statistical validation confirmed significant improvements (p < 0.001) over baseline approaches. These findings establish self-supervised learning as a transformative tool for precision pest management, offering resource-efficient solutions for early infestation detection while maintaining high accuracy. Full article
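A rough sketch of the fine-tuning stage this abstract describes (not the authors' AIM-Net code): a ResNet-50 backbone, whose weights would in practice come from SwAV self-supervised pre-training, is fine-tuned on the three labeled severity classes.

```python
# Illustrative fine-tuning sketch only; in practice the backbone weights would
# come from SwAV self-supervised pre-training rather than random initialization.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=None)             # placeholder for SwAV-pretrained weights
backbone.fc = nn.Linear(backbone.fc.in_features, 3)  # Mild / Moderate / Severe

optimizer = torch.optim.AdamW(backbone.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised fine-tuning step on labeled severity patches."""
    backbone.train()
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch of four 224x224 RGB crops.
print(train_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 2, 1])))
```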
28 pages, 2174 KiB  
Article
Validating Lava Tube Stability Through Finite Element Analysis of Real-Scene 3D Models
by Jiawang Wang, Zhizhong Kang, Chenming Ye, Haiting Yang and Xiaoman Qi
Electronics 2025, 14(15), 3062; https://doi.org/10.3390/electronics14153062 - 31 Jul 2025
Abstract
The structural stability of lava tubes is a critical factor for their potential use in lunar base construction. Previous studies could not reflect the details of lava tube boundaries and perform accurate mechanical analysis. To this end, this study proposes a robust method to construct a high-precision, real-scene 3D model based on ground lava tube point cloud data. By employing finite element analysis, this study investigated the impact of real-world cross-sectional geometry, particularly the aspect ratio, on structural stability under surface pressure simulating meteorite impacts. A high-precision 3D reconstruction was achieved using UAV-mounted LiDAR and SLAM-based positioning systems, enabling accurate geometric capture of lava tube profiles. The original point cloud data were processed to extract cross-sections, which were then classified by their aspect ratios for analysis. Experimental results confirmed that the aspect ratio is a significant factor in determining stability. Crucially, unlike the monotonic trends often suggested by idealized models, analysis of real-world geometries revealed that the greatest deformation and structural vulnerability occur in sections with an aspect ratio between 0.5 and 0.6. For small lava tubes buried 3 m deep, the ground pressure they can withstand does not exceed 6 GPa. This process helps identify areas with weaker load-bearing capacity. The analysis demonstrated that a realistic 3D modeling approach provides a more accurate and reliable assessment of lava tube stability. This framework is vital for future evaluations of lunar lava tubes as safe habitats and highlights that complex, real-world geometry can lead to non-intuitive structural weaknesses not predicted by simplified models. Full article
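As a toy illustration of the geometric factor highlighted above, the sketch below bins a synthetic cross-section by height-to-width aspect ratio; the binning edges and the elliptical section are assumptions, whereas the actual study derives sections from LiDAR point clouds and evaluates them with finite element analysis.

```python
# Toy illustration (not the authors' FEA workflow): bin an extracted cross-section
# by its height-to-width aspect ratio, the geometric factor tied to stability.
import numpy as np

def aspect_ratio(section_xz: np.ndarray) -> float:
    """Height/width ratio of a cross-section sampled as (x, z) points."""
    width = np.ptp(section_xz[:, 0])
    height = np.ptp(section_xz[:, 1])
    return height / width

def ratio_band(r: float, edges=(0.3, 0.4, 0.5, 0.6, 0.7, 0.8)) -> str:
    for lo, hi in zip(edges[:-1], edges[1:]):
        if lo <= r < hi:
            return f"{lo:.1f}-{hi:.1f}"
    return "outside binned range"

# Synthetic elliptical section, 8 m wide and 4.4 m tall.
theta = np.linspace(0, 2 * np.pi, 200)
section = np.column_stack([4.0 * np.cos(theta), 2.2 * np.sin(theta)])
r = aspect_ratio(section)
print(r, ratio_band(r))   # ~0.55 -> the 0.5-0.6 band flagged as most vulnerable
```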
27 pages, 2653 KiB  
Article
Attacker Attribution in Multi-Step and Multi-Adversarial Network Attacks Using Transformer-Based Approach
by Romina Torres and Ana García
Appl. Sci. 2025, 15(15), 8476; https://doi.org/10.3390/app15158476 - 30 Jul 2025
Abstract
Recent studies on network intrusion detection using deep learning primarily focus on detecting attacks or classifying attack types, but they often overlook the challenge of attributing each attack to its specific source among many potential adversaries (multi-adversary attribution). This is a critical and underexplored issue in cybersecurity. In this study, we address the problem of attacker attribution in complex, multi-step network attack (MSNA) environments, aiming to identify the responsible attacker (e.g., IP address) for each sequence of security alerts, rather than merely detecting the presence or type of attack. We propose a deep learning approach based on Transformer encoders to classify sequences of network alerts and attribute them to specific attackers among many candidates. Our pipeline includes data preprocessing, exploratory analysis, and robust training/validation using stratified splits and 5-fold cross-validation, all applied to real-world multi-step attack datasets from capture-the-flag (CTF) competitions. We compare the Transformer-based approach with a multilayer perceptron (MLP) baseline to quantify the benefits of advanced architectures. Experiments on this challenging dataset demonstrate that our Transformer model achieves near-perfect accuracy (99.98%) and F1-scores (macro and weighted ≈ 99%) in attack attribution, significantly outperforming the MLP baseline (accuracy 80.62%, macro F1 65.05% and weighted F1 80.48%). The Transformer generalizes robustly across all attacker classes, including those with few samples, as evidenced by per-class metrics and confusion matrices. Our results show that Transformer-based models are highly effective for multi-adversary attack attribution in MSNA, a scenario not addressed or only under-addressed in the previous intrusion detection systems (IDS) literature. The adoption of advanced architectures and rigorous validation strategies is essential for reliable attribution in complex and imbalanced environments. Full article
(This article belongs to the Special Issue Application of Deep Learning for Cybersecurity)
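A minimal sketch of a Transformer-encoder sequence classifier for attacker attribution, in the spirit of the approach described above; the alert vocabulary, layer sizes, and mean pooling are assumed placeholders rather than the paper's architecture.

```python
# Minimal sketch of a Transformer-encoder classifier over sequences of alert IDs;
# hyperparameters and the pooling strategy are assumptions, not the paper's model.
import torch
import torch.nn as nn

class AlertSequenceAttributor(nn.Module):
    def __init__(self, n_alert_types: int, n_attackers: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_alert_types, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_attackers)

    def forward(self, alert_ids: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(alert_ids))   # (batch, seq_len, d_model)
        return self.head(h.mean(dim=1))           # pool over the alert sequence

model = AlertSequenceAttributor(n_alert_types=50, n_attackers=12)
logits = model(torch.randint(0, 50, (8, 20)))      # 8 sequences of 20 alerts each
print(logits.shape)                                 # torch.Size([8, 12])
```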
14 pages, 2178 KiB  
Article
State-of-the-Art Document Image Binarization Using a Decision Tree Ensemble Trained on Classic Local Binarization Algorithms and Image Statistics
by Nicolae Tarbă, Costin-Anton Boiangiu and Mihai-Lucian Voncilă
Appl. Sci. 2025, 15(15), 8374; https://doi.org/10.3390/app15158374 - 28 Jul 2025
Abstract
Image binarization algorithms reduce the original color space to only two values, black and white. They are an important preprocessing step in many computer vision applications. Image binarization is typically performed using a threshold value by classifying the pixels into two categories: lower and higher than the threshold. Global thresholding uses a single threshold value for the entire image, whereas local thresholding uses different values for the different pixels. Although slower and more complex than global thresholding, local thresholding can better classify pixels in noisy areas of an image by considering not only the pixel’s value, but also its surrounding neighborhood. This study introduces a local thresholding method that uses the results of several local thresholding algorithms and other image statistics to train a decision tree ensemble. Through cross-validation, we demonstrate that the model is robust and performs well on new data. We compare the results with state-of-the-art solutions and reveal significant improvements in the average F-measure for all DIBCO datasets, obtaining an F-measure of 95.8%, whereas the previous high score was 93.1%. The proposed solution significantly outperformed the previous state-of-the-art algorithms on the DIBCO 2019 dataset, obtaining an F-measure of 95.8%, whereas the previous high score was 73.8%. Full article
(This article belongs to the Special Issue Statistical Signal Processing: Theory, Methods and Applications)
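The following sketch illustrates the general idea of combining classic binarization outputs with local statistics as per-pixel features for a tree ensemble; the window sizes, the ExtraTrees model, and the stand-in labels are assumptions, not the paper's DIBCO-trained configuration.

```python
# Hedged sketch: per-pixel features from classic binarization algorithms plus
# local statistics feed a tree ensemble; labels here are stand-ins, whereas the
# paper trains on DIBCO ground truth.
import numpy as np
from scipy import ndimage
from skimage import data, filters
from sklearn.ensemble import ExtraTreesClassifier

gray = data.page().astype(float) / 255.0

local_mean = ndimage.uniform_filter(gray, size=15)
local_std = np.sqrt(np.clip(ndimage.uniform_filter(gray ** 2, size=15) - local_mean ** 2, 0, None))

features = np.stack([
    gray,
    (gray > filters.threshold_otsu(gray)).astype(float),                     # global Otsu vote
    (gray > filters.threshold_sauvola(gray, window_size=25)).astype(float),  # Sauvola vote
    (gray > filters.threshold_niblack(gray, window_size=25)).astype(float),  # Niblack vote
    local_mean,
    local_std,
], axis=-1).reshape(-1, 6)

# Stand-in labels for illustration only; real training would use DIBCO ground truth.
labels = (gray > filters.threshold_sauvola(gray, window_size=41)).astype(int).ravel()

clf = ExtraTreesClassifier(n_estimators=50, n_jobs=-1).fit(features, labels)
binarized = clf.predict(features).reshape(gray.shape)
print("foreground fraction:", 1.0 - binarized.mean())
```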
19 pages, 5198 KiB  
Article
Research on a Fault Diagnosis Method for Rolling Bearings Based on the Fusion of PSR-CRP and DenseNet
by Beining Cui, Zhaobin Tan, Yuhang Gao, Xinyu Wang and Lv Xiao
Processes 2025, 13(8), 2372; https://doi.org/10.3390/pr13082372 - 25 Jul 2025
Abstract
To address the challenges of unstable vibration signals, indistinct fault features, and difficulties in feature extraction during rolling bearing operation, this paper presents a novel fault diagnosis method based on the fusion of PSR-CRP and DenseNet. The Phase Space Reconstruction (PSR) method transforms one-dimensional bearing vibration data into a three-dimensional space. Euclidean distances between phase points are calculated and mapped into a Color Recurrence Plot (CRP) to represent the bearings’ operational state. This approach effectively reduces feature extraction ambiguity compared to RP, GAF, and MTF methods. Fault features are extracted and classified using DenseNet’s densely connected topology. Compared with CNN and ViT models, DenseNet improves diagnostic accuracy by reusing limited features across multiple dimensions. The training set accuracy was 99.82% and 99.90%, while the test set accuracy is 97.03% and 95.08% for the CWRU and JNU datasets under five-fold cross-validation; F1 scores were 0.9739 and 0.9537, respectively. This method achieves highly accurate diagnosis under conditions of non-smooth signals and inconspicuous fault characteristics and is applicable to fault diagnosis scenarios for precision components in aerospace, military systems, robotics, and related fields. Full article
(This article belongs to the Section Process Control and Monitoring)
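A compact sketch of the PSR-to-CRP transformation described above: a 1-D vibration signal is embedded in phase space, pairwise Euclidean distances are computed, and the distance matrix is saved as a colour image; the embedding dimension, delay, and toy signal are assumptions, not the paper's settings.

```python
# Sketch of the PSR -> distance matrix -> colour recurrence plot idea;
# parameters and the synthetic signal are assumptions, not the paper's.
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import cdist

def phase_space_reconstruct(signal: np.ndarray, dim: int = 3, delay: int = 5) -> np.ndarray:
    """Embed a 1-D vibration signal as a dim-dimensional phase-space trajectory."""
    n = len(signal) - (dim - 1) * delay
    return np.stack([signal[i * delay: i * delay + n] for i in range(dim)], axis=1)

t = np.linspace(0, 1, 1200)
vibration = np.sin(2 * np.pi * 60 * t) + 0.3 * np.random.randn(t.size)   # toy bearing signal

points = phase_space_reconstruct(vibration)     # (n, 3) phase-space points
crp = cdist(points, points)                     # pairwise Euclidean distances

plt.imsave("crp.png", crp, cmap="jet")          # colour-coded image later fed to a CNN such as DenseNet
print(crp.shape)
```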
13 pages, 543 KiB  
Article
Subclinical Hypothyroidism in Moderate-to-Severe Psoriasis: A Cross-Sectional Study of Prevalence and Clinical Implications
by Ricardo Ruiz-Villaverde, Marta Cebolla-Verdugo, Carlos Llamas-Segura, Pedro José Ezomo-Gervilla, Jose Molina-Espinosa and Jose Carlos Ruiz-Carrascosa
Diseases 2025, 13(8), 237; https://doi.org/10.3390/diseases13080237 - 25 Jul 2025
Abstract
Background: Psoriasis is a chronic inflammatory skin disease linked to systemic comorbidities, including metabolic, cardiovascular, and autoimmune disorders. Thyroid dysfunction, particularly hypothyroidism, has been observed in patients with moderate-to-severe psoriasis, suggesting possible shared inflammatory pathways. Objectives: This study aims to explore the relationship between psoriasis and thyroid dysfunction in adults with moderate-to-severe psoriasis undergoing biologic therapy to determine whether psoriasis predisposes individuals to thyroid disorders and to identify demographic or clinical factors influencing this association. Materials and Methods: A cross-sectional study included adult patients with moderate-to-severe psoriasis receiving biologic therapy, recruited from the Psoriasis Unit at the Dermatology Department of Hospital Universitario San Cecilio in Granada, Spain, from 2017 to 2023. Patients with mild psoriasis or those treated with conventional systemic therapies were excluded. The data collected included demographics and clinical characteristics, such as age, sex, BMI (body mass index), and psoriasis severity (psoriasis severity was evaluated using the Psoriasis Area Severity Index (PASI), body surface area (BSA) involvement, Investigator’s Global Assessment (IGA), pruritus severity using the Numerical Rating Scale (NRS), and impact on quality of life through the Dermatology Life Quality Index (DLQI)). Thyroid dysfunction, including hypothyroidism and subclinical hypothyroidism, was assessed based on records from the Endocrinology Department. Results: Thyroid dysfunction was found in 4.2% of patients, all classified as hypothyroidism, primarily subclinical. The affected patients were generally older, with a mean age of 57.4 years. No significant differences in psoriasis severity (PASI, BSA) or treatment response were observed between patients with and without thyroid dysfunction. Conclusion: Our findings suggest hypothyroidism is the main thyroid dysfunction in psoriatic patients, independent of psoriasis severity. The lack of impact on psoriasis severity suggests hypothyroidism may be an independent comorbidity, warranting further research into shared inflammatory mechanisms. Full article
35 pages, 4256 KiB  
Article
Automated Segmentation and Morphometric Analysis of Thioflavin-S-Stained Amyloid Deposits in Alzheimer’s Disease Brains and Age-Matched Controls Using Weakly Supervised Deep Learning
by Gábor Barczánfalvi, Tibor Nyári, József Tolnai, László Tiszlavicz, Balázs Gulyás and Karoly Gulya
Int. J. Mol. Sci. 2025, 26(15), 7134; https://doi.org/10.3390/ijms26157134 - 24 Jul 2025
Abstract
Alzheimer’s disease (AD) involves the accumulation of amyloid-β (Aβ) plaques, whose quantification plays a central role in understanding disease progression. Automated segmentation of Aβ deposits in histopathological micrographs enables large-scale analyses but is hindered by the high cost of detailed pixel-level annotations. Weakly supervised learning offers a promising alternative by leveraging coarse or indirect labels to reduce the annotation burden. We evaluated a weakly supervised approach to segment and analyze thioflavin-S-positive parenchymal amyloid pathology in AD and age-matched brains. Our pipeline integrates three key components, each designed to operate under weak supervision. First, robust preprocessing (including retrospective multi-image illumination correction and gradient-based background estimation) was applied to enhance image fidelity and support training, as models rely more on image features. Second, class activation maps (CAMs), generated by a compact deep classifier SqueezeNet, were used to identify, and coarsely localize amyloid-rich parenchymal regions from patch-wise image labels, serving as spatial priors for subsequent refinement without requiring dense pixel-level annotations. Third, a patch-based convolutional neural network, U-Net, was trained on synthetic data generated from micrographs based on CAM-derived pseudo-labels via an extensive object-level augmentation strategy, enabling refined whole-image semantic segmentation and generalization across diverse spatial configurations. To ensure robustness and unbiased evaluation, we assessed the segmentation performance of the entire framework using patient-wise group k-fold cross-validation, explicitly modeling generalization across unseen individuals, critical in clinical scenarios. Despite relying on weak labels, the integrated pipeline achieved strong segmentation performance with an average Dice similarity coefficient (≈0.763) and Jaccard index (≈0.639), widely accepted metrics for assessing segmentation quality in medical image analysis. The resulting segmentations were also visually coherent, demonstrating that weakly supervised segmentation is a viable alternative in histopathology, where acquiring dense annotations is prohibitively labor-intensive and time-consuming. Subsequent morphometric analyses on automatically segmented Aβ deposits revealed size-, structural complexity-, and global geometry-related differences across brain regions and cognitive status. These findings confirm that deposit architecture exhibits region-specific patterns and reflects underlying neurodegenerative processes, thereby highlighting the biological relevance and practical applicability of the proposed image-processing pipeline for morphometric analysis. Full article
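As a hedged illustration of the CAM-based weak-localization step, the sketch below derives a coarse per-class activation map from a SqueezeNet patch classifier; the two-class setup and the threshold are assumptions, not the study's trained model.

```python
# Hedged sketch of class-activation-map (CAM) pseudo-labels from a SqueezeNet
# patch classifier; the untrained two-class network here is a stand-in only.
import torch
import torch.nn.functional as F
from torchvision import models

net = models.squeezenet1_1(weights=None, num_classes=2)  # amyloid-rich vs. background patches
net.eval()

def cam_for_class(image: torch.Tensor, cls: int) -> torch.Tensor:
    """Return a coarse activation map for class `cls`, upsampled to the input size."""
    with torch.no_grad():
        feats = net.features(image)                       # (1, 512, h, w)
        # SqueezeNet's head is a 1x1 conv per class, so its pre-pooling output
        # already acts as a per-class activation map.
        maps = net.classifier[1](feats)                   # (1, num_classes, h, w)
        cam = maps[:, cls:cls + 1]
        cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]

mask = cam_for_class(torch.randn(1, 3, 224, 224), cls=1) > 0.5   # crude pseudo-label
print(mask.shape, mask.float().mean())
```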
22 pages, 2952 KiB  
Article
Raw-Data Driven Functional Data Analysis with Multi-Adaptive Functional Neural Networks for Ergonomic Risk Classification Using Facial and Bio-Signal Time-Series Data
by Suyeon Kim, Afrooz Shakeri, Seyed Shayan Darabi, Eunsik Kim and Kyongwon Kim
Sensors 2025, 25(15), 4566; https://doi.org/10.3390/s25154566 - 23 Jul 2025
Abstract
Ergonomic risk classification during manual lifting tasks is crucial for the prevention of workplace injuries. This study addresses the challenge of classifying lifting task risk levels (low, medium, and high risk, labeled as 0, 1, and 2) using multi-modal time-series data comprising raw facial landmarks and bio-signals (electrocardiography [ECG] and electrodermal activity [EDA]). Classifying such data presents inherent challenges due to multi-source information, temporal dynamics, and class imbalance. To overcome these challenges, this paper proposes a Multi-Adaptive Functional Neural Network (Multi-AdaFNN), a novel method that integrates functional data analysis with deep learning techniques. The proposed model introduces a novel adaptive basis layer composed of micro-networks tailored to each individual time-series feature, enabling end-to-end learning of discriminative temporal patterns directly from raw data. The Multi-AdaFNN approach was evaluated across five distinct dataset configurations: (1) facial landmarks only, (2) bio-signals only, (3) full fusion of all available features, (4) a reduced-dimensionality set of 12 selected facial landmark trajectories, and (5) the same reduced set combined with bio-signals. Performance was rigorously assessed using 100 independent stratified splits (70% training and 30% testing) and optimized via a weighted cross-entropy loss function to manage class imbalance effectively. The results demonstrated that the integrated approach, fusing facial landmarks and bio-signals, achieved the highest classification accuracy and robustness. Furthermore, the adaptive basis functions revealed specific phases within lifting tasks critical for risk prediction. These findings underscore the efficacy and transparency of the Multi-AdaFNN framework for multi-modal ergonomic risk assessment, highlighting its potential for real-time monitoring and proactive injury prevention in industrial environments. Full article
(This article belongs to the Special Issue (Bio)sensors for Physiological Monitoring)
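The evaluation protocol described above (100 stratified 70/30 splits with class weighting) can be sketched as follows; the logistic-regression stand-in and synthetic features are placeholders for the Multi-AdaFNN model and the real facial/bio-signal data.

```python
# Sketch of the evaluation protocol only: 100 stratified 70/30 splits with a
# class-weighted stand-in classifier (not the Multi-AdaFNN model itself).
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))            # placeholder features
y = rng.integers(0, 3, size=300)          # risk levels 0/1/2 (imbalanced in practice)

splitter = StratifiedShuffleSplit(n_splits=100, test_size=0.3, random_state=0)
scores = []
for train_idx, test_idx in splitter.split(X, y):
    clf = LogisticRegression(max_iter=1000, class_weight="balanced")  # weighted-loss analogue
    clf.fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"mean accuracy over {len(scores)} splits: {np.mean(scores):.3f}")
```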
16 pages, 851 KiB  
Article
Impact of Combined Hypertension and Diabetes on the Prevalence of Disability in Brazilian Older People—Evidence from Population Studies in 2013 and 2019
by Rafaela Gonçalves Ribeiro-Lucas, Barbara Niegia Garcia de Goulart and Patricia Klarmann Ziegelmann
Int. J. Environ. Res. Public Health 2025, 22(7), 1157; https://doi.org/10.3390/ijerph22071157 - 21 Jul 2025
Abstract
Disability in basic and instrumental activities of daily living (BADL and IADL) reflects functional decline in older adults and can be associated with chronic conditions like type 2 diabetes (T2DM) and hypertension (SAH). This cross-sectional study utilized data from the 2013 and 2019 Brazilian National Health Surveys to investigate the associations between T2DM, SAH, and disability levels. Exposures were self-reported diagnoses and outcomes were classified as independent, moderate, or severe. Multivariable Poisson regression models, with robust variance estimates, estimated adjusted prevalence ratios (PRa), accounting for sociodemographic variables and the survey design. In 2013, the absence of diabetes and hypertension was associated with a lower prevalence (PRa = 0.70; 95% CI: 0.58–0.85) of moderate disability in BADL when compared with the presence of only one of the conditions. On the other hand, the coexistence of T2DM and SAH was associated with a higher prevalence (PRa = 1.39; 95% CI: 1.01–1.91). A similar result was found in 2019 with the addition that coexistence was also associated with a higher prevalence of severe disability in BADLs (PRa = 1.82; 95% CI: 1.59–2.07). For IADL, the absence of T2DM and SAH was associated with a lower prevalence of severe disability in 2013 and 2019 and a lower prevalence of moderate disability only in 2019. However, coexistence showed a higher prevalence in both degrees of disability and both years of the survey. These findings highlight the impact of T2DM and SAH on disability in older people. Therefore, it is crucial to develop targeted strategies for vulnerable subgroups to enhance functional independence in aging populations. Full article
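A minimal sketch of the modified Poisson approach described above, estimating an adjusted prevalence ratio with robust (sandwich) variance; the variables are illustrative, and the sketch ignores the survey design weights the study accounts for.

```python
# Hedged sketch of a Poisson model with robust variance for prevalence ratios;
# variable names and the synthetic data are illustrative, not the survey's fields.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "disability": rng.integers(0, 2, 500),   # 1 = moderate BADL disability
    "t2dm_sah":   rng.integers(0, 2, 500),   # 1 = diabetes and hypertension coexist
    "age":        rng.integers(60, 90, 500),
    "female":     rng.integers(0, 2, 500),
})

X = sm.add_constant(df[["t2dm_sah", "age", "female"]])
fit = sm.GLM(df["disability"], X, family=sm.families.Poisson()).fit(cov_type="HC0")

pr = np.exp(fit.params["t2dm_sah"])                 # adjusted prevalence ratio
ci = np.exp(fit.conf_int().loc["t2dm_sah"])
print(f"PRa = {pr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```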
21 pages, 2395 KiB  
Article
A Robust Stacking-Based Ensemble Model for Predicting Cardiovascular Diseases
by Hayat Bihri, Lalla Amina Charaf, Salma Azzouzi and My El Hassan Charaf
AI 2025, 6(7), 160; https://doi.org/10.3390/ai6070160 - 21 Jul 2025
Abstract
Background/Objectives: Cardiovascular diseases (CVDs) remain the primary cause of mortality worldwide, underscoring the critical importance of developing accurate early prediction models. In this study, we propose an advanced stacking ensemble learning framework to improve the predictive performance for CVD diagnosis. Methods: The methodology encompasses comprehensive data preprocessing, feature selection, cross-validation, and the construction of a stacking architecture integrating Random Forest (RF), Support Vector Machine (SVM), and CatBoost as base learners. Two meta-learning configurations were examined: Logistic Regression (LR) and a Multilayer Perceptron (MLP). Results: Experimental results indicate that the MLP-based stacking model achieves superior performance, with an accuracy of 97.06%, outperforming existing approaches reported in the literature. Furthermore, the model demonstrates high recall (96.08%) and precision (98%), confirming its robustness and generalization capacity. Conclusions: Compared to individual classifiers and traditional ensemble methods, the proposed approach yields significantly enhanced predictive outcomes, highlighting the potential of deep learning-based stacking strategies in cardiovascular risk assessment. Full article
(This article belongs to the Section Medical & Healthcare AI)
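A minimal stacking sketch mirroring the architecture described above (RF, SVM, and CatBoost base learners with an MLP meta-learner); the stand-in dataset and hyperparameters are placeholders, and the catboost package is assumed to be installed.

```python
# Minimal stacking sketch with placeholder data and hyperparameters;
# assumes the catboost package is installed alongside scikit-learn.
from sklearn.datasets import load_breast_cancer            # stand-in tabular dataset
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from catboost import CatBoostClassifier

X, y = load_breast_cancer(return_X_y=True)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("svm", SVC(probability=True)),
        ("cat", CatBoostClassifier(iterations=200, verbose=0)),
    ],
    final_estimator=MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
    cv=5,
)

print(cross_val_score(stack, X, y, cv=5, scoring="accuracy").mean())
```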
19 pages, 1109 KiB  
Article
Machine Learning Approach to Select Small Compounds in Plasma as Predictors of Alzheimer’s Disease
by Eleonora Stefanini, Alberto Iglesias, Joan Serrano-Marín, Juan Sánchez-Navés, Hanan A. Alkozi, Mercè Pallàs, Christian Griñán-Ferré, David Bernal-Casas and Rafael Franco
Int. J. Mol. Sci. 2025, 26(14), 6991; https://doi.org/10.3390/ijms26146991 - 21 Jul 2025
Abstract
This study employs a machine learning approach to identify a small-molecule-based signature capable of predicting Alzheimer’s disease (AD). Utilizing metabolomics data from the plasma of a well-characterized cohort of 94 AD patients and 62 healthy controls, metabolite levels were assessed using the Biocrates MxP® Quant 500 platform. Data preprocessing involved removing low-quality samples, selecting relevant biochemical groups, and normalizing metabolite data based on demographic variables such as age, sex, and fasting time. Linear regression models were used to identify concomitant parameters that consisted of the data for a given metabolite within each of the biochemical families that were considered. Detection of these “concomitant” metabolites facilitates normalization and allows sample comparison. Residual analysis revealed distinct metabolite profiles between AD patients and controls across groups, such as amino acid-related compounds, bile acids, biogenic amines, indoles, carboxylic acids, and fatty acids. Correlation heatmaps illustrated significant interdependencies, highlighting specific molecules like carnosine, 5-aminovaleric acid (5-AVA), cholic acid (CA), and indoxyl sulfate (Ind-SO4) as promising indicators. Linear Discriminant Analysis (LDA), validated using Leave-One-Out Cross-Validation, demonstrated that combinations of four or five molecules could classify AD with accuracy exceeding 75%, sensitivity up to 80%, and specificity around 79%. Notably, optimal combinations integrated metabolites with both a tendency to increase and a tendency to decrease in AD. A multivariate strategy consistently identified 5-AVA, carnosine, CA, and hypoxanthine as having predictive potential. Overall, this study supports the utility of combining data of plasma small molecules as predictors for AD, offering a novel diagnostic tool and paving the way for advancements in personalized medicine. Full article
(This article belongs to the Section Molecular Neurobiology)
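The validation idea above, LDA on a handful of plasma metabolites scored with leave-one-out cross-validation, can be sketched as follows; the metabolite values are synthetic stand-ins, not the Biocrates measurements.

```python
# Sketch of LDA with leave-one-out cross-validation on synthetic stand-in data
# for a few plasma metabolites (e.g. 5-AVA, carnosine, cholic acid, hypoxanthine).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(42)
n_ad, n_ctrl = 94, 62
X = np.vstack([rng.normal(0.3, 1, (n_ad, 4)), rng.normal(-0.3, 1, (n_ctrl, 4))])
y = np.array([1] * n_ad + [0] * n_ctrl)            # 1 = AD, 0 = control

pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print("accuracy:",    accuracy_score(y, pred))
print("sensitivity:", recall_score(y, pred))                 # recall on the AD class
print("specificity:", recall_score(y, pred, pos_label=0))    # recall on controls
```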
12 pages, 630 KiB  
Systematic Review
Advancing Diagnostic Tools in Forensic Science: The Role of Artificial Intelligence in Gunshot Wound Investigation—A Systematic Review
by Francesco Sessa, Mario Chisari, Massimiliano Esposito, Elisa Guardo, Lucio Di Mauro, Monica Salerno and Cristoforo Pomara
Forensic Sci. 2025, 5(3), 30; https://doi.org/10.3390/forensicsci5030030 - 20 Jul 2025
Abstract
Background/Objectives: Artificial intelligence (AI) is beginning to be applied in wound ballistics, showing preliminary potential to improve the accuracy and objectivity of forensic analyses. This review explores the current state of AI applications in forensic firearm wound analysis, emphasizing its potential to address challenges such as subjective interpretations and data heterogeneity. Methods: A systematic review adhering to PRISMA guidelines was conducted using databases such as Scopus and Web of Science. Keywords focused on AI and GSW classification identified 502 studies, narrowed down to 4 relevant articles after rigorous screening based on inclusion and exclusion criteria. Results: These studies examined the role of deep learning (DL) models in classifying GSWs by type, shooting distance, and entry or exit characteristics. The key findings demonstrated that DL models like TinyResNet, ResNet152, and ConvNext Tiny achieved accuracy ranging from 87.99% to 98%. Models were effective in tasks such as classifying GSWs and estimating shooting distances. However, most studies were exploratory in nature, with small sample sizes and, in some cases, reliance on animal models, which limits generalizability to real-world forensic scenarios. Conclusions: Comparisons with other forensic AI applications revealed that large, diverse datasets significantly enhance model performance. Transparent and interpretable AI systems utilizing techniques are essential for judicial acceptance and ethical compliance. Despite the encouraging results, the field remains in an early stage of development. Limitations highlight the need for standardized protocols, cross-institutional collaboration, and the integration of multimodal data for robust forensic AI systems. Future research should focus on overcoming current data and validation constraints, ensuring the ethical use of human forensic data, and developing AI tools that are scientifically sound and legally defensible. Full article
19 pages, 7782 KiB  
Article
Two Novel Multidimensional Data Analysis Approaches Using InSAR Products for Landslide Prone Areas
by Hamit Beran Gunce and Bekir Taner San
Appl. Sci. 2025, 15(14), 8024; https://doi.org/10.3390/app15148024 - 18 Jul 2025
Abstract
Successfully detecting ground deformation, especially landslides, using InSAR has not always been possible. Improvements to existing InSAR tools are needed to address this issue. This study develops and evaluates two novel approaches that use multidimensional InSAR products to detect surface displacements in the landslide-prone region of Büyükalan, Antalya. Multi-temporal InSAR analysis of Sentinel-1 data (2015–2020) is performed using LiCSAR–LiCSBAS, followed by two novel approaches: multi-dimensional InSAR research and analysis (MIRA) and Crosta’s InSAR application (InCROSS). Cumulative LOS velocity maps reveal deformation rates of −1.1 cm/year to 1.0 cm/year for descending tracks and −3.8 cm/year to 3.8 cm/year for ascending tracks. Vertical displacements range from −1.9 cm/year to 2.3 cm/year and east–west components from −2.8 cm/year to 2.9 cm/year. MIRA uses an n-Dimensional Visualizer and SVM classifier to identify deformation clusters, and InCROSS applies PCA to enhance deformation features. MIRA increases the deformation detection capacity compared to conventional InSAR products, and InCROSS integrates these products. A comparison of the results reveals 80.48% consistency between them. Overall, the integration of InSAR with statistical and multidimensional analysis significantly enhances the detection and interpretation of ground deformation patterns in landslide-prone areas. Full article
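A toy sketch of the Crosta-style principal component idea behind InCROSS: several InSAR-derived velocity rasters are stacked per pixel and PCA concentrates the shared deformation signal; array names, sizes, and noise levels are purely illustrative.

```python
# Toy PCA-on-stacked-rasters sketch (illustrative synthetic data, not LiCSBAS output).
import numpy as np
from sklearn.decomposition import PCA

rows, cols = 200, 200
rng = np.random.default_rng(7)
deformation = np.zeros((rows, cols))
deformation[80:120, 80:120] = -2.5                    # synthetic subsiding patch (cm/yr)

# Stand-ins for ascending/descending LOS, vertical, and east-west velocity maps.
products = [deformation * s + rng.normal(0, 0.4, (rows, cols)) for s in (1.0, 0.8, 0.9, -0.6)]
stack = np.stack(products, axis=-1).reshape(-1, len(products))

pcs = PCA(n_components=len(products)).fit_transform(stack)
pc1 = pcs[:, 0].reshape(rows, cols)                   # first component concentrates the shared signal
print(abs(pc1[100, 100]) > abs(pc1[10, 10]))          # deforming pixel stands out from background
```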
15 pages, 3326 KiB  
Article
Radiomics and Machine Learning Approaches for the Preoperative Classification of In Situ vs. Invasive Breast Cancer Using Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE–MRI)
by Luana Conte, Rocco Rizzo, Alessandra Sallustio, Eleonora Maggiulli, Mariangela Capodieci, Francesco Tramacere, Alessandra Castelluccia, Giuseppe Raso, Ugo De Giorgi, Raffaella Massafra, Maurizio Portaluri, Donato Cascio and Giorgio De Nunzio
Appl. Sci. 2025, 15(14), 7999; https://doi.org/10.3390/app15147999 - 18 Jul 2025
Abstract
Accurate preoperative distinction between in situ and invasive Breast Cancer (BC) is critical for clinical decision-making and treatment planning. Radiomics and Machine Learning (ML) have shown promise in enhancing diagnostic performance from breast MRI, yet their application to this specific task remains underexplored. The aim of this study was to evaluate the performance of several ML classifiers, trained on radiomic features extracted from DCE–MRI and supported by basic clinical information, for the classification of in situ versus invasive BC lesions. In this study, we retrospectively analysed 71 post-contrast DCE–MRI scans (24 in situ, 47 invasive cases). Radiomic features were extracted from manually segmented tumour regions using the PyRadiomics library, and a limited set of basic clinical variables was also included. Several ML classifiers were evaluated in a Leave-One-Out Cross-Validation (LOOCV) scheme. Feature selection was performed using two different strategies: Minimum Redundancy Maximum Relevance (MRMR), mutual information. Axial 3D rotation was used for data augmentation. Support Vector Machine (SVM), K Nearest Neighbors (KNN), Random Forest (RF), and Extreme Gradient Boosting (XGBoost) were the best-performing models, with an Area Under the Curve (AUC) ranging from 0.77 to 0.81. Notably, KNN achieved the best balance between sensitivity and specificity without the need for data augmentation. Our findings confirm that radiomic features extracted from DCE–MRI, combined with well-validated ML models, can effectively support the differentiation of in situ vs. invasive breast cancer. This approach is quite robust even in small datasets and may aid in improving preoperative planning. Further validation on larger cohorts and integration with additional imaging or clinical data are recommended. Full article
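A brief sketch of the feature-selection plus LOOCV evaluation described above, using mutual-information ranking and an SVM on synthetic radiomic-like features; in a rigorous setup the selection would be nested inside the cross-validation loop, and the real pipeline extracts features with PyRadiomics and compares several models.

```python
# Sketch of mutual-information feature selection followed by LOOCV scoring on
# synthetic stand-in features (71 lesions: 24 in situ, 47 invasive).
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(71, 120))                  # placeholder "radiomic" features
y = np.array([0] * 24 + [1] * 47)               # 0 = in situ, 1 = invasive
X[y == 1, :5] += 0.8                             # make a few features weakly informative

# Rank features by mutual information and keep the top 10.
# (For an unbiased estimate, selection should be nested inside the CV loop.)
top = np.argsort(mutual_info_classif(X, y, random_state=0))[-10:]

scores = cross_val_predict(SVC(probability=True), X[:, top], y,
                           cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("LOOCV AUC:", roc_auc_score(y, scores))
```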