Search Results (3,965)

Search Parameters:
Keywords = machine learning (ML) methods

29 pages, 632 KB  
Article
ML-PSDFA: A Machine Learning Framework for Synthetic Log Pattern Synthesis in Digital Forensics
by Wafa Alorainy
Electronics 2025, 14(19), 3947; https://doi.org/10.3390/electronics14193947 - 6 Oct 2025
Abstract
This study introduces the Machine Learning (ML)-Driven Pattern Synthesis for Digital Forensics in Synthetic Log Analysis (ML-PSDFA) framework to address critical gaps in digital forensics, including the reliance on real-world data, limited pattern diversity, and forensic integration challenges. A key innovation is the introduction of a novel temporal forensics loss (L_TFL) in the Synthetic Attack Pattern Generator (SAPG), which enhances the preservation of temporal sequences in synthetic logs that are crucial for forensic analysis. The framework employs the SAPG with hybrid seed data (UNSW-NB15 and CICIDS2017) to create 500,000 synthetic log entries using Google Colab, achieving a realism score of 0.96, a temporal consistency score of 0.90, and an entropy of 4.0. The methodology employs a three-layer architecture that integrates data generation, pattern analysis, and forensic training, utilizing TimeGAN, XGBoost classification with hyperparameter tuning via Optuna, and reinforcement learning (RL) to optimize the extraction of evidence. Owing to the enhanced synthetic data quality and advanced modeling, the results exhibit an average classification precision of 98.5% (best fold: 98.7%), outperforming previously reported approaches. Feature importance analysis highlights timestamps (0.40) and event types (0.30), while the RL workflow reduces false positives by 17% over 1000 episodes, aligning with RL benchmarks. The temporal forensics loss improves the realism score from 0.92 to 0.96 and introduces a temporal consistency score of 0.90, demonstrating enhanced forensic relevance. This work presents a scalable and accessible training platform for legally constrained environments, as well as a novel RL-based evidence extraction method. Limitations include a lack of real-system validation and resource constraints. Future work will explore dynamic reward tuning and simulated benchmarks to enhance precision and generalizability. Full article
(This article belongs to the Special Issue AI and Cybersecurity: Emerging Trends and Key Challenges)
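As a rough illustration of the XGBoost-plus-Optuna step this abstract names, here is a minimal sketch; the dataset, feature space, and search ranges are invented stand-ins, not the authors' configuration.

```python
# Hypothetical sketch: Optuna-tuned XGBoost classifier, tuning for precision
# as in the abstract; data and search space are illustrative assumptions.
import optuna
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

def objective(trial):
    params = {
        "max_depth": trial.suggest_int("max_depth", 3, 10),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "n_estimators": trial.suggest_int("n_estimators", 100, 500),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
    }
    clf = xgb.XGBClassifier(**params, eval_metric="logloss")
    # Mean cross-validated precision is the quantity the study reports
    return cross_val_score(clf, X, y, cv=5, scoring="precision").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```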
22 pages, 1273 KB  
Article
Explainable Instrument Classification: From MFCC Mean-Vector Models to CNNs on MFCC and Mel-Spectrograms with t-SNE and Grad-CAM Insights
by Tommaso Senatori, Daniela Nardone, Michele Lo Giudice and Alessandro Salvini
Information 2025, 16(10), 864; https://doi.org/10.3390/info16100864 - 5 Oct 2025
Abstract
This paper presents an automatic system for the classification of musical instruments from audio recordings. The project leverages deep learning (DL) techniques to achieve its objective, exploring three different classification approaches based on distinct input representations. The first method involves the extraction of Mel-Frequency Cepstral Coefficients (MFCCs) from the audio files, which are then fed into a two-dimensional convolutional neural network (Conv2D). The second approach makes use of mel-spectrogram images as input to a similar Conv2D architecture. The third approach employs conventional machine learning (ML) classifiers, including Logistic Regression, K-Nearest Neighbors, and Random Forest, trained on MFCC-derived feature vectors. To gain insight into the behavior of the DL model, explainability techniques were applied to the Conv2D model using mel-spectrograms, allowing for a better understanding of how the network interprets relevant features for classification. Additionally, t-distributed stochastic neighbor embedding (t-SNE) was employed on the MFCC vectors to visualize how instrument classes are organized in the feature space. One of the main challenges encountered was the class imbalance within the dataset, which was addressed by assigning class-specific weights during training. The results, in terms of classification accuracy, were very satisfactory across all approaches, with the convolutional models and Random Forest achieving around 97–98%, and Logistic Regression yielding slightly lower performance. In conclusion, the proposed methods proved effective for the selected dataset, and future work may focus on further improving class balance techniques. Full article
(This article belongs to the Special Issue Artificial Intelligence for Acoustics and Audio Signal Processing)
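To make the first pipeline concrete, a minimal sketch of MFCC extraction feeding a Conv2D classifier follows; librosa is assumed for feature extraction, and the shapes, class count, and architecture are illustrative, not the paper's exact model.

```python
# Hypothetical MFCC -> Conv2D sketch; file paths, shapes, and the 10-class
# head are assumptions for illustration.
import numpy as np
import librosa
import tensorflow as tf

def mfcc_image(path, n_mfcc=40, max_frames=128):
    y, sr = librosa.load(path, sr=22050)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    m = librosa.util.fix_length(m, size=max_frames, axis=1)  # pad/trim time axis
    return m[..., np.newaxis]  # channel dimension for Conv2D

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(40, 128, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Class imbalance can be handled as the paper describes, via class-specific
# weights: model.fit(X, y, class_weight={...})
```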
35 pages, 5316 KB  
Review
Machine Learning for Quality Control in the Food Industry: A Review
by Konstantinos G. Liakos, Vassilis Athanasiadis, Eleni Bozinou and Stavros I. Lalas
Foods 2025, 14(19), 3424; https://doi.org/10.3390/foods14193424 - 4 Oct 2025
Abstract
The increasing complexity of modern food production demands advanced solutions for quality control (QC), safety monitoring, and process optimization. This review systematically explores recent advancements in machine learning (ML) for QC across six domains: Food Quality Applications; Defect Detection and Visual Inspection Systems; Ingredient Optimization and Nutritional Assessment; Packaging—Sensors and Predictive QC; Supply Chain—Traceability and Transparency and Food Industry Efficiency; and Industry 4.0 Models. Following a PRISMA-based methodology, a structured search of the Scopus database using thematic Boolean keywords identified 124 peer-reviewed publications (2005–2025), from which 25 studies were selected based on predefined inclusion and exclusion criteria, methodological rigor, and innovation. Neural networks dominated the reviewed approaches, with ensemble learning as a secondary method, and supervised learning prevailing across tasks. Emerging trends include hyperspectral imaging, sensor fusion, explainable AI, and blockchain-enabled traceability. Limitations in current research include domain coverage biases, data scarcity, and underexplored unsupervised and hybrid methods. Real-world implementation challenges involve integration with legacy systems, regulatory compliance, scalability, and cost–benefit trade-offs. The novelty of this review lies in combining a transparent PRISMA approach, a six-domain thematic framework, and Industry 4.0/5.0 integration, providing cross-domain insights and a roadmap for robust, transparent, and adaptive QC systems in the food industry. Full article
(This article belongs to the Special Issue Artificial Intelligence for the Food Industry)
17 pages, 1613 KB  
Article
Superimposed CSI Feedback Assisted by Inactive Sensing Information
by Mintao Zhang, Haowen Jiang, Zilong Wang, Linsi He, Yuqiao Yang, Mian Ye and Chaojin Qing
Sensors 2025, 25(19), 6156; https://doi.org/10.3390/s25196156 - 4 Oct 2025
Abstract
In massive multiple-input and multiple-output (mMIMO) systems, superimposed channel state information (CSI) feedback is developed to improve the utilization of uplink bandwidth resources. Nevertheless, the interference from this superimposed mode degrades the recovery performance of both downlink CSI and uplink data sequences. Although machine learning (ML)-based methods effectively mitigate superimposed interference by leveraging the multi-domain features of downlink CSI, the complex interactions among network model parameters place a significant burden on system resources. To address these issues, inspired by sensing-assisted communication, we propose a novel superimposed CSI feedback method assisted by inactive sensing information that previously existed but was not utilized at the base station (BS). To the best of our knowledge, this is the first time that inactive sensing information has been utilized to enhance superimposed CSI feedback. In this method, a new type of modal data, different from communication data, is developed to aid in interference suppression without requiring additional hardware at the BS. Specifically, the proposed method utilizes location, speed, and path information extracted from sensing devices to derive prior information. Then, based on the derived prior information, denoising is applied to both the delay and Doppler dimensions of the downlink CSI in the delay-Doppler (DD) domain, significantly enhancing recovery accuracy. Simulation results demonstrate the improved recovery of downlink CSI and uplink data sequences compared to both classic and recent superimposed CSI feedback methods. Simulation results also validate the robustness of the proposed method against parameter variations. Full article
(This article belongs to the Section Communications)
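The prior-informed delay-Doppler denoising can be pictured with a small sketch: transform a time-frequency channel estimate to the DD domain, then zero bins outside the delay/Doppler support suggested by the sensing priors. The transform here is a simplified 2D (I)FFT, and every shape and bound is an assumption, not the authors' algorithm.

```python
# Illustrative DD-domain denoising sketch; the transform and support bounds
# are simplified assumptions.
import numpy as np

def to_delay_doppler(H_tf):
    # time-frequency -> delay-Doppler via FFT over time, IFFT over frequency
    return np.fft.ifft(np.fft.fft(H_tf, axis=0), axis=1)

def denoise_dd(H_dd, max_doppler_bins, max_delay_bins):
    # Keep only bins inside the support implied by location/speed/path priors
    mask = np.zeros_like(H_dd, dtype=bool)
    mask[:max_doppler_bins, :max_delay_bins] = True
    return np.where(mask, H_dd, 0)

H_tf = np.random.randn(64, 128) + 1j * np.random.randn(64, 128)
H_clean = denoise_dd(to_delay_doppler(H_tf), max_doppler_bins=8, max_delay_bins=16)
```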
46 pages, 3080 KB  
Review
Machine Learning for Structural Health Monitoring of Aerospace Structures: A Review
by Gennaro Scarselli and Francesco Nicassio
Sensors 2025, 25(19), 6136; https://doi.org/10.3390/s25196136 - 4 Oct 2025
Abstract
Structural health monitoring (SHM) plays a critical role in ensuring the safety and performance of aerospace structures throughout their lifecycle. As aircraft and spacecraft systems grow in complexity, the integration of machine learning (ML) into SHM frameworks is revolutionizing how damage is detected, localized, and predicted. This review presents a comprehensive examination of recent advances in ML-based SHM methods tailored to aerospace applications. It covers supervised, unsupervised, deep, and hybrid learning techniques, highlighting their capabilities in processing high-dimensional sensor data, managing uncertainty, and enabling real-time diagnostics. Particular focus is given to the challenges of data scarcity, operational variability, and interpretability in safety-critical environments. The review also explores emerging directions such as digital twins, transfer learning, and federated learning. By mapping current strengths and limitations, this paper provides a roadmap for future research and outlines the key enablers needed to bring ML-based SHM from laboratory development to widespread aerospace deployment. Full article
(This article belongs to the Special Issue Feature Review Papers in Fault Diagnosis & Sensors)
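Among the unsupervised techniques the review surveys, reconstruction-based anomaly detection is a common baseline; the sketch below trains an autoencoder on healthy-state features and flags damage via reconstruction error. The data and architecture are invented for illustration.

```python
# Hypothetical autoencoder damage-detection sketch for SHM sensor features.
import numpy as np
import tensorflow as tf

X_healthy = np.random.randn(1000, 64).astype("float32")  # baseline features (assumed)

ae = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(64,)),  # bottleneck
    tf.keras.layers.Dense(64),
])
ae.compile(optimizer="adam", loss="mse")
ae.fit(X_healthy, X_healthy, epochs=10, verbose=0)

def damage_score(x):
    # Reconstruction error grows as signals depart from the healthy baseline
    return np.mean((ae.predict(x, verbose=0) - x) ** 2, axis=1)
```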
21 pages, 2769 KB  
Article
Computational Intelligence-Based Modeling of UAV-Integrated PV Systems
by Mohammad Hosein Saeedinia, Shamsodin Taheri and Ana-Maria Cretu
Solar 2025, 5(4), 45; https://doi.org/10.3390/solar5040045 - 3 Oct 2025
Abstract
The optimal utilization of UAV-integrated photovoltaic (PV) systems demands accurate modeling that accounts for dynamic flight conditions. This paper introduces a novel computational intelligence-based framework that models the behavior of a moving PV system mounted on a UAV. A unique mathematical approach is developed to translate UAV flight dynamics, specifically roll, pitch, and yaw, into the tilt and azimuth angles of the PV module. To adaptively estimate the diode ideality factor under varying conditions, the Grey Wolf Optimization (GWO) algorithm is employed, outperforming traditional methods like Particle Swarm Optimization (PSO). Using a one-year environmental dataset, multiple machine learning (ML) models are trained to predict maximum power point (MPP) parameters for a commercial PV panel. The best-performing model, Rational Quadratic Gaussian Process Regression (RQGPR), demonstrates high accuracy and low computational cost. Furthermore, the proposed ML-based model is experimentally integrated into an incremental conductance (IC) MPPT technique, forming a hybrid MPPT controller. Hardware and experimental validations confirm the model’s effectiveness in real-time MPP prediction and tracking, highlighting its potential for enhancing UAV endurance and energy efficiency. Full article
(This article belongs to the Special Issue Efficient and Reliable Solar Photovoltaic Systems: 2nd Edition)
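The GWO-based estimation of the diode ideality factor can be sketched as a one-dimensional search minimizing the fit error of a single-diode model; the model constants, bounds, and synthetic measurements below are assumptions, not the paper's setup.

```python
# Hypothetical GWO sketch for fitting a diode ideality factor n.
import numpy as np

def diode_current(v, n, i_s=1e-9, v_t=0.02585):
    return i_s * (np.exp(v / (n * v_t)) - 1.0)

v_meas = np.linspace(0.1, 0.6, 50)
i_meas = diode_current(v_meas, n=1.3)  # synthetic "measurements"

def fitness(n):
    return np.mean((diode_current(v_meas, n) - i_meas) ** 2)

def gwo(obj, lo, hi, wolves=20, iters=100):
    pos = np.random.uniform(lo, hi, wolves)
    for t in range(iters):
        scores = np.array([obj(p) for p in pos])
        alpha, beta, delta = pos[np.argsort(scores)[:3]]  # three best wolves
        a = 2 - 2 * t / iters  # exploration factor decays linearly
        new = np.zeros_like(pos)
        for leader in (alpha, beta, delta):
            r1, r2 = np.random.rand(wolves), np.random.rand(wolves)
            A, C = 2 * a * r1 - a, 2 * r2
            new += leader - A * np.abs(C * leader - pos)
        pos = np.clip(new / 3, lo, hi)  # average of the three leader-guided moves
    return min(pos, key=obj)

print(gwo(fitness, 1.0, 2.0))  # should recover n close to 1.3
```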
17 pages, 2126 KB  
Article
Explainable Machine Learning Applied to Bioelectrical Impedance for Low Back Pain: Classification and Pain-Score Prediction
by Seungwan Jang, Seung Mo Yoo, Se Dong Min and Changwon Wang
Sensors 2025, 25(19), 6135; https://doi.org/10.3390/s25196135 - 3 Oct 2025
Abstract
(1) Background: Low back pain (LBP) is the most prevalent cause of disability worldwide, yet current assessment relies mainly on subjective questionnaires, underscoring the need for objective and interpretable biomarkers. Bioelectrical impedance parameters (BIPs), quantified by resistance (R), impedance magnitude (Z), and phase angle (PA), reflect tissue hydration and cellular integrity and may provide physiological correlates of pain; (2) Methods: This cross-sectional study used lumbar BIPs and demographic characteristics from 83 participants (38 with LBP and 45 healthy controls). We applied Extreme Gradient Boosting (XGBoost), a regularized tree-based machine learning (ML) algorithm, with stratified five-fold cross-validation. Model interpretability was ensured using SHapley Additive exPlanations (SHAP), which provide global importance rankings and local feature attributions. Outcomes included classification of LBP versus healthy status and regression-based prediction of pain scales: the Visual Analog Scale (VAS), Oswestry Disability Index (ODI), and Roland–Morris Disability Questionnaire (RMDQ); (3) Results: The classifier achieved high discrimination (ROC–AUC = 0.996 ± 0.009, sensitivity = 0.950 ± 0.068, specificity = 0.977 ± 0.049). Pain prediction showed the best performance for VAS (R2 = 0.70 ± 0.14; mean absolute error = 1.23 ± 0.27), with weaker performance for ODI and RMDQ; (4) Conclusions: These findings suggest that explainable ML models applied to BIPs could discriminate between LBP and healthy groups and could estimate pain intensity, providing an objective complement to subjective assessments. Full article
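A minimal sketch of the XGBoost-plus-SHAP recipe this abstract describes follows; the synthetic features merely stand in for the study's impedance and demographic variables.

```python
# Hypothetical XGBoost + SHAP sketch; data are synthetic stand-ins.
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=83, n_features=8, random_state=0)
model = xgb.XGBClassifier(max_depth=3, n_estimators=200).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # local attributions per participant
shap.summary_plot(shap_values, X)       # global importance ranking
```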
28 pages, 650 KB  
Systematic Review
Systematic Review of Optimization Methodologies for Smart Home Energy Management Systems
by Abayomi A. Adebiyi and Mathew Habyarimana
Energies 2025, 18(19), 5262; https://doi.org/10.3390/en18195262 - 3 Oct 2025
Abstract
Power systems are undergoing a transformative transition as consumers seek greater participation in managing electricity systems. This shift has given rise to the concept of “prosumers,” individuals who both consume and produce electricity, primarily through renewable energy sources. While renewables offer undeniable environmental benefits, they also introduce significant energy management challenges. One major concern is the variability in energy consumption patterns within households, which can lead to inefficiencies. Improper energy management can also result in economic losses due to unbalanced energy control or inefficient systems. Home Energy Management Systems (HEMSs) have emerged as a promising solution to address these challenges. A well-designed HEMS enables users to achieve greater efficiency in managing their energy consumption, optimizing asset usage while ensuring cost savings and system reliability. This paper presents a comprehensive systematic review of optimization techniques applied to HEMS development between 2019 and 2024, focusing on key technical and computational factors influencing their advancement. The review categorizes optimization techniques into three main groups: conventional methods, emerging techniques, and machine learning methods. By analyzing recent developments, this study provides an integrated perspective on the evolving role of HEMSs in modern power systems, highlighting trends that enhance the efficiency and effectiveness of energy management in smart grids. Unifying a taxonomy of HEMSs (2019–2024) that integrates mathematical, heuristic/metaheuristic, and ML/DRL approaches across horizons, controllability, and uncertainty, we assess algorithmic complexity versus tractability, benchmark comparative evidence (cost, PAR, runtime), and highlight deployment gaps (privacy, cybersecurity, AMI/HAN, and explainability), offering a novel synthesis for AI-enabled HEMSs. Full article
(This article belongs to the Special Issue Advanced Application of Mathematical Methods in Energy Systems)
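As a toy instance of the mathematical-optimization family the review covers, the sketch below shifts a deferrable appliance load to the cheapest hours under a per-slot power cap; prices, energy demand, and the cap are invented.

```python
# Hypothetical HEMS load-shifting sketch as a linear program.
import numpy as np
from scipy.optimize import linprog

price = np.array([0.30, 0.28, 0.12, 0.10, 0.11, 0.25])  # $/kWh per 1 h slot
energy_needed = 3.0  # total kWh the appliance must consume
cap = 1.5            # max kWh drawable per slot

res = linprog(c=price,                      # minimize total cost
              A_eq=[np.ones(6)], b_eq=[energy_needed],
              bounds=[(0, cap)] * 6, method="highs")
print(res.x)  # kWh scheduled in each slot (cheapest slots fill first)
```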
34 pages, 3263 KB  
Systematic Review
From Network Sensors to Intelligent Systems: A Decade-Long Review of Swarm Robotics Technologies
by Fouad Chaouki Refis, Nassim Ahmed Mahammedi, Chaker Abdelaziz Kerrache and Sahraoui Dhelim
Sensors 2025, 25(19), 6115; https://doi.org/10.3390/s25196115 - 3 Oct 2025
Abstract
Swarm Robotics (SR) is a relatively new field, inspired by the collective intelligence of social insects. It involves using local rules to control and coordinate large groups (swarms) of relatively simple physical robots. Important tasks that robot swarms can handle include demining, search, rescue, and cleaning up toxic spills. Over the past decade, research effort in the field of Swarm Robotics has intensified significantly in terms of hardware, software, and integrated systems development, yet significant challenges remain, particularly regarding standardization, scalability, and cost-effective deployment. To contextualize the state of Swarm Robotics technologies, this paper provides a systematic literature review (SLR) of Swarm Robotics technologies published from 2014 to 2024, with an emphasis on how hardware and software subsystems have co-evolved. This work provides an overview of 40 studies in peer-reviewed journals along with a well-defined and replicable systematic review protocol. The protocol describes criteria for including and excluding studies and outlines a data extraction approach. We explored trends in sensor hardware, actuation methods, communication devices, and energy systems, along with software platforms used to produce swarm behavior, covering meta-heuristic algorithms and generic middleware platforms such as ROS. Our results demonstrate how interdependent hardware and software are in achieving swarm intelligence, the lack of uniform standards for their design, and the practical limits that hinder scalability and deployment. We conclude by noting ongoing challenges and proposing future directions for developing interoperable, energy-efficient Swarm Robotics (SR) systems incorporating machine learning (ML). Full article
(This article belongs to the Special Issue Cooperative Perception and Planning for Swarm Robot Systems)
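The local-rules-to-collective-behavior idea at the heart of SR can be shown in a few lines; the cohesion/separation rules and weights below are generic illustrations, not drawn from the reviewed studies.

```python
# Hypothetical local-rule swarm sketch (cohesion + separation) in 2D.
import numpy as np

def step(pos, r_sep=0.5, w_coh=0.05, w_sep=0.2):
    new = pos.copy()
    for i, p in enumerate(pos):
        others = np.delete(pos, i, axis=0)
        coh = others.mean(axis=0) - p                 # drift toward the group
        d = others - p
        close = d[np.linalg.norm(d, axis=1) < r_sep]  # neighbors that are too close
        new[i] = p + w_coh * coh - w_sep * close.sum(axis=0)  # push away from them
    return new

pos = np.random.rand(30, 2)
for _ in range(100):
    pos = step(pos)
```

Note that close.sum(axis=0) over an empty selection is the zero vector, so robots with no near neighbors simply drift toward the group.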
53 pages, 3207 KB  
Review
Cognitive Bias Mitigation in Executive Decision-Making: A Data-Driven Approach Integrating Big Data Analytics, AI, and Explainable Systems
by Leonidas Theodorakopoulos, Alexandra Theodoropoulou and Constantinos Halkiopoulos
Electronics 2025, 14(19), 3930; https://doi.org/10.3390/electronics14193930 - 3 Oct 2025
Abstract
Cognitive biases continue to pose significant challenges in executive decision-making, often leading to strategic inefficiencies, misallocation of resources, and flawed risk assessments. While traditional decision-making relies on intuition and experience, these methods are increasingly proving inadequate in addressing the complexity of modern business environments. Despite the growing integration of big data analytics into executive workflows, existing research lacks a comprehensive examination of how AI-driven methodologies can systematically mitigate biases while maintaining transparency and trust. This paper addresses these gaps by analyzing how big data analytics, artificial intelligence (AI), machine learning (ML), and explainable AI (XAI) contribute to reducing heuristic-driven errors in executive reasoning. Specifically, it explores the role of predictive modeling, real-time analytics, and decision intelligence systems in enhancing objectivity and decision accuracy. Furthermore, this study identifies key organizational and technical barriers—such as biases embedded in training data, model opacity, and resistance to AI adoption—that hinder the effectiveness of data-driven decision-making. By reviewing empirical findings from A/B testing, simulation experiments, and behavioral assessments, this research examines the applicability of AI-powered decision support systems in strategic management. The contributions of this paper include a detailed analysis of bias mitigation mechanisms, an evaluation of current limitations in AI-driven decision intelligence, and practical recommendations for fostering a more data-driven decision culture. By addressing these research gaps, this study advances the discourse on responsible AI adoption and provides actionable insights for organizations seeking to enhance executive decision-making through big data analytics. Full article
(This article belongs to the Special Issue Feature Papers in Artificial Intelligence)
36 pages, 462 KB  
Article
No Reproducibility, No Progress: Rethinking CT Benchmarking
by Dmitry Polevoy, Danil Kazimirov, Marat Gilmanov and Dmitry Nikolaev
J. Imaging 2025, 11(10), 344; https://doi.org/10.3390/jimaging11100344 - 2 Oct 2025
Abstract
Reproducibility is a cornerstone of scientific progress, yet in X-ray computed tomography (CT) reconstruction, it remains a critical and unresolved challenge. Current benchmarking practices in CT are hampered by the scarcity of openly available datasets, the incomplete or task-specific nature of existing resources, and the lack of transparent implementations of widely used methods and evaluation metrics. As a result, even the fundamental property of reproducibility is frequently violated, undermining objective comparison and slowing methodological progress. In this work, we analyze the systemic limitations of current CT benchmarking, drawing parallels with broader reproducibility issues across scientific domains. We propose an extended data model and formalized schemes for data preparation and quality assessment, designed to improve reproducibility and broaden the applicability of CT datasets across multiple tasks. Building on these schemes, we introduce checklists for dataset construction and quality assessment, offering a foundation for reliable and reproducible benchmarking pipelines. A key aspect of our recommendations is the integration of virtual CT (vCT), which provides highly realistic data and analytically computable phantoms, yet remains underutilized despite its potential to overcome many current barriers. Our work represents a first step toward a methodological framework for reproducible benchmarking in CT. This framework aims to enable transparent, rigorous, and comparable evaluation of reconstruction methods, ultimately supporting their reliable adoption in clinical and industrial applications. Full article
(This article belongs to the Special Issue Tools and Techniques for Improving Radiological Imaging Applications)
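In the spirit of the transparent metric implementations the paper calls for, here is a minimal, explicit PSNR for comparing a reconstruction against an analytic phantom; the paper's own metric definitions are not reproduced here, and the peak convention is stated openly.

```python
# Minimal explicit PSNR sketch for CT benchmarking; defining peak as the
# reference dynamic range is one common convention (an assumption here).
import numpy as np

def psnr(recon, phantom):
    mse = np.mean((recon - phantom) ** 2)
    peak = phantom.max() - phantom.min()  # dynamic range of the reference
    return 10 * np.log10(peak ** 2 / mse)
```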
13 pages, 1292 KB  
Article
Development and Internal Validation of Machine Learning Algorithms to Predict 30-Day Readmission in Patients Undergoing a C-Section: A Nation-Wide Analysis
by Audrey Andrews, Nadia Islam, George Bcharah, Hend Bcharah and Misha Pangasa
J. Pers. Med. 2025, 15(10), 476; https://doi.org/10.3390/jpm15100476 - 2 Oct 2025
Abstract
Background/Objectives: Cesarean section (C-section) is a common surgical procedure associated with an increased risk of 30-day postpartum hospital readmissions. This study utilized machine learning (ML) to predict readmissions using a nationwide database. Methods: A retrospective analysis of the National Surgical Quality Improvement Project (2012–2022) included 54,593 patients who underwent C-sections. Random Forest (RF) and Extreme Gradient Boosting (XGBoost) models were developed and compared to logistic regression (LR) using demographic, preoperative, and perioperative data. Results: Of the cohort, 1306 (2.39%) patients were readmitted. Readmitted patients had higher rates of African American race (17.99% vs. 9.83%), diabetes (11.03% vs. 8.19%), and hypertension (11.49% vs. 4.68%) (p < 0.001). RF achieved the highest performance (AUC = 0.737, sensitivity = 72.03%, specificity = 61.33%), and a preoperative-only RF model achieved a sensitivity of 83.14%. Key predictors included age, BMI, operative time, white blood cell count, and hematocrit. Conclusions: ML effectively predicts C-section readmissions, supporting early identification and interventions to improve patient outcomes and reduce healthcare costs. Full article
(This article belongs to the Special Issue Advances in Prenatal Diagnosis and Maternal Fetal Medicine)
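A minimal sketch of the model comparison this abstract describes is below; the synthetic cohort mimics the roughly 2.4% readmission rate but none of the NSQIP variables.

```python
# Hypothetical RF-vs-LR comparison on an imbalanced synthetic cohort.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.976], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

for model in (RandomForestClassifier(class_weight="balanced", random_state=0),
              LogisticRegression(max_iter=1000, class_weight="balanced")):
    auc = roc_auc_score(yte, model.fit(Xtr, ytr).predict_proba(Xte)[:, 1])
    print(type(model).__name__, round(auc, 3))
```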
34 pages, 3092 KB  
Review
Processing and Real-Time Monitoring Strategies of Aflatoxin Reduction in Pistachios: Innovative Nonthermal Methods, Advanced Biosensing Platforms, and AI-Based Predictive Approaches
by Seyed Mohammad Taghi Gharibzahedi and Sumeyra Savas
Foods 2025, 14(19), 3411; https://doi.org/10.3390/foods14193411 - 2 Oct 2025
Abstract
Aflatoxin (AF) contamination in pistachios remains a critical food safety and trade challenge, given the potent carcinogenicity of AF-B1 and the nut’s high susceptibility to Aspergillus infection throughout production and storage. Traditional decontamination methods such as roasting, irradiation, ozonation, and acid/alkaline treatments can reduce AF levels but often degrade sensory and nutritional quality, implying the need for more sustainable approaches. In recent years, innovative nonthermal interventions, including pulsed light, cold plasma, nanomaterial-based adsorbents, and bioactive coatings, have demonstrated significant potential to decrease fungal growth and AF accumulation while preserving product quality. Biosensing technologies such as electrochemical immunosensors, aptamer-based systems, and optical or imaging tools are advancing rapid, portable, and sensitive detection capabilities. Complementing these experimental strategies, artificial intelligence (AI) and machine learning (ML) models can increasingly be applied to integrate spectral, sensor, and imaging data for predicting fungal development and AF risk in real time. This review brings together progress in nonthermal reduction strategies, biosensing innovations, and data-driven approaches, presenting a comprehensive perspective on emerging tools that could transform pistachio safety management and strengthen compliance with global regulatory standards. Full article
18 pages, 1460 KB  
Article
AI-Based Severity Classification of Dementia Using Gait Analysis
by Gangmin Moon, Jaesung Cho, Hojin Choi, Yunjin Kim, Gun-Do Kim and Seong-Ho Jang
Sensors 2025, 25(19), 6083; https://doi.org/10.3390/s25196083 - 2 Oct 2025
Abstract
This study aims to explore the utility of artificial intelligence (AI) in classifying dementia severity based on gait analysis data and to examine how machine learning (ML) can address the limitations of conventional statistical approaches. The study included 34 individuals with mild cognitive impairment (MCI), 25 with mild dementia, 26 with moderate dementia, and 54 healthy controls. A support vector machine (SVM) classifier was employed to categorize dementia severity using gait parameters. As the complexity and dimensionality of gait data increase, traditional statistical methods may struggle to capture subtle patterns and interactions among variables. In contrast, ML techniques, including dimensionality reduction methods such as principal component analysis (PCA) and gradient-based feature selection, can effectively identify key gait features relevant to dementia severity classification. This study shows that ML can complement traditional statistical analyses by efficiently handling high-dimensional data and uncovering meaningful patterns that may be overlooked by conventional methods. Our findings highlight the promise of AI-based tools in advancing our understanding of gait characteristics in dementia and supporting the development of more accurate diagnostic models for complex or large datasets. Full article
(This article belongs to the Section Intelligent Sensors)
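The PCA-plus-SVM pipeline this abstract outlines can be sketched briefly; the synthetic four-class data match the cohort size (139) but nothing else about the study's gait features.

```python
# Hypothetical PCA + SVM severity-classification sketch.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=139, n_features=30, n_classes=4,
                           n_informative=10, random_state=0)
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())  # mean accuracy across folds
```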
17 pages, 3120 KB  
Article
Pre-Treatment PET Radiomics for Prediction of Disease-Free Survival in Cervical Cancer
by Fereshteh Yousefirizi, Ghasem Hajianfar, Maziar Sabouri, Caroline Holloway, Pete Tonseth, Abraham Alexander, Tahir I. Yusufaly, Loren K. Mell, Sara Harsini, François Bénard, Habib Zaidi, Carlos Uribe and Arman Rahmim
Cancers 2025, 17(19), 3218; https://doi.org/10.3390/cancers17193218 - 2 Oct 2025
Abstract
Background: Cervical cancer remains a major global health concern, with high recurrence rates in advanced stages. [18F]FDG PET/CT provides prognostic biomarkers such as SUV, MTV, and TLG, though these are not routinely integrated into clinical protocols. Radiomics offers quantitative analysis of tumor heterogeneity, supporting risk stratification. Purpose: To evaluate the prognostic value of clinical and radiomic features for disease-free survival (DFS) in locoregionally advanced cervical cancer using machine learning (ML). Methods: Sixty-three patients (mean age 47.9 ± 14.5 years) were diagnosed between 2015 and 2020. Radiomic features were extracted from pre-treatment PET/CT (IBSI-compliant PyRadiomics). Clinical variables included age, T-stage, Dmax, lymph node involvement, SUVmax, and TMTV. Forty-two models were built by combining six feature-selection techniques (UCI, MD, MI, VH, VH.VIMP, IBMA) with seven ML algorithms (CoxPH, CB, GLMN, GLMB, RSF, ST, EV) using nested 3-fold cross-validation with bootstrap resampling. External validation was performed on 95 patients (mean age 50.6 years, FIGO IIB–IIIB) from an independent cohort with different preprocessing protocols. Results: Recurrence occurred in 31.7% (n = 20). SUVmax of lymph nodes, lymph node involvement, and TMTV were the most predictive individual features (C-index ≤ 0.77). The highest performance was achieved by UCI + EV/GLMB on combined clinical + radiomic features (C-index = 0.80, p < 0.05). For single feature sets, IBMA + RSF performed best for clinical (C-index = 0.72), and VH.VIMP + GLMN for radiomics (C-index = 0.71). External validation confirmed moderate generalizability (best C-index = 0.64). Conclusions: UCI-based feature selection with GLMB or EV yielded the best predictive accuracy, while VH.VIMP + GLMN offered superior external generalizability for radiomics-only models. These findings support the feasibility of integrating radiomics and ML for individualized DFS risk stratification in cervical cancer. Full article
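A small sketch of fitting a Cox proportional-hazards model on clinical-plus-radiomic covariates and scoring it with the C-index follows (the lifelines library is an assumption); the data are synthetic, and the study's feature-selection/ML pairings are not reproduced.

```python
# Hypothetical CoxPH + C-index sketch for DFS modeling.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "SUVmax_LN": rng.normal(5, 2, 63),   # invented stand-in covariates
    "TMTV": rng.normal(40, 15, 63),
    "time": rng.exponential(30, 63),     # months to event/censoring
    "event": rng.integers(0, 2, 63),     # 1 = recurrence
})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
# Higher hazard should mean shorter DFS, hence the sign flip below
print(concordance_index(df["time"], -cph.predict_partial_hazard(df), df["event"]))
```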