Search Results (8,669)

Search Parameters:
Keywords = metric learning

31 pages, 4926 KB  
Article
Interpretable Optimized Extreme Gradient Boosting for Prediction of Higher Heating Value from Elemental Composition of Coal Resource to Energy Conversion
by Paulino José García-Nieto, Esperanza García-Gonzalo, José Pablo Paredes-Sánchez and Luis Alfonso Menéndez-García
Big Data Cogn. Comput. 2026, 10(4), 112; https://doi.org/10.3390/bdcc10040112 (registering DOI) - 7 Apr 2026
Abstract
The higher heating value (HHV), sometimes referred to as the gross calorific value, is a crucial metric for determining a fuel’s primary energy potential in energy production systems. By combining extreme gradient boosting (XGBoost) with the differential evolution (DE) optimizer, an innovative machine learning-based model was created in this study to forecast the HHV (dependent variable). As input variables, the model included the constituents of the coal’s ultimate analysis: carbon (C), oxygen (O), hydrogen (H), nitrogen (N), and sulfur (S). For comparative purposes, random forest regression (RFR), M5 model tree, multivariate linear regression (MLR), and previously reported empirical correlations were also applied to the experimental dataset. The results showed that the XGBoost strategy produced the most accurate predictions. An initial XGBoost analysis was carried out to identify the relative contribution of the input variables to coal HHV prediction. In particular, for coal HHV estimates reliant on experimental samples, the XGBoost regression produced a correlation coefficient of 0.9858 and a coefficient of determination of 0.9691. The excellent agreement between observed and anticipated values shows that the DE/XGBoost-based approximation performed satisfactorily. Lastly, a synopsis of the investigation’s key conclusions is provided.
(This article belongs to the Special Issue Smart Manufacturing in the AI Era)
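For readers unfamiliar with the DE optimizer paired with XGBoost above, the sketch below shows classic DE/rand/1/bin minimizing a toy objective; in the paper's setting the objective would be XGBoost's cross-validated error over its hyperparameters. All names and default settings here are illustrative, not the authors' values.

```python
import numpy as np

def de_optimize(obj, bounds, pop_size=20, mutation=0.8, crossover=0.9, iters=200, seed=0):
    """Classic DE/rand/1/bin over a box-constrained real vector."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([obj(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + mutation * (b - c), lo, hi)  # mutation step
            mask = rng.random(dim) < crossover                # binomial crossover
            mask[rng.integers(dim)] = True                    # keep at least one mutant gene
            trial = np.where(mask, mutant, pop[i])
            f = obj(trial)
            if f < fit[i]:                                    # greedy selection
                pop[i], fit[i] = trial, f
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])
```

On a smooth low-dimensional objective this converges quickly; for hyperparameter tuning the per-evaluation cost (one cross-validated model fit) dominates, which is why modest population sizes are typical.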
14 pages, 917 KB  
Article
Cost-Sensitive Threshold Optimization for Network Intrusion Detection: A Per-Class Approach with XGBoost
by Jaehyeok Cha, Jisoo Jang, Dongil Shin and Dongkyoo Shin
Electronics 2026, 15(7), 1542; https://doi.org/10.3390/electronics15071542 (registering DOI) - 7 Apr 2026
Abstract
Machine learning-based Network Intrusion Detection Systems (NIDSs) typically optimize uniform metrics such as accuracy and F1-score, overlooking the asymmetric cost structure of real-world security operations, where a missed attack (False Negative (FN)) far outweighs a false alarm (False Positive (FP)). We propose a cost-sensitive threshold optimization framework based on XGBoost, using a 10:1 FN-to-FP cost ratio derived from established cost models. We first demonstrate that the default threshold of 0.5 is suboptimal and that a globally optimized threshold of 0.08 substantially reduces total cost. However, a single global threshold cannot accommodate the heterogeneous detection characteristics of diverse attack types. We therefore introduce Per-Class Thresholding, which assigns independently optimized thresholds to each attack class. Evaluated on CIC-IDS2018 and UNSW-NB15 across five independent random seeds, our method achieves a 28.19% cost reduction over the Random Forest baseline on CIC-IDS2018, demonstrating that attack classes undetectable under the global threshold—including DDoS attack-LOIC-UDP (100%), DoS attacks-SlowHTTPTest (99.79%), and FTP-BruteForce (98.16%)—can achieve near-complete cost elimination through individual per-class threshold search. Cross-dataset validation on UNSW-NB15 further confirms that per-class thresholding consistently improves class-level detection, with cost reductions of 74.10% for Reconnaissance, 69.06% for Backdoor, and 54.42% for Analysis attacks. These results confirm that class-specific threshold calibration is essential for cost-effective intrusion detection.
(This article belongs to the Special Issue IoT Security in the Age of AI: Innovative Approaches and Technologies)
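The threshold search the abstract describes reduces to sweeping a grid and picking the cutoff that minimizes total cost under the 10:1 FN-to-FP ratio; under per-class thresholding this search would run once per attack class on that class's scores. A minimal sketch (grid and names are illustrative):

```python
import numpy as np

def optimal_threshold(y_true, p_attack, c_fn=10.0, c_fp=1.0):
    """Return the (threshold, cost) pair minimizing c_fn * FN + c_fp * FP."""
    y_true = np.asarray(y_true, bool)
    p_attack = np.asarray(p_attack, float)
    best_t, best_cost = 0.5, np.inf
    for t in np.linspace(0.01, 0.99, 99):
        pred = p_attack >= t
        fn = np.sum(y_true & ~pred)   # missed attacks
        fp = np.sum(~y_true & pred)   # false alarms
        cost = c_fn * fn + c_fp * fp
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, float(best_cost)
```

Because missed attacks cost ten times more than false alarms, the optimal cutoff lands well below the default 0.5, mirroring the paper's globally optimized threshold of 0.08.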
19 pages, 4124 KB  
Article
Prediction of Maximum Usable Frequency Based on a New Hybrid Deep Learning Model
by Yuyang Li, Zhigang Zhang and Jian Shen
Electronics 2026, 15(7), 1539; https://doi.org/10.3390/electronics15071539 (registering DOI) - 7 Apr 2026
Abstract
The reliability of high-frequency (HF) frequency selection technology relies on the prediction accuracy of the Maximum Usable Frequency of the ionospheric F2 layer (MUF-F2). To improve its short-term prediction performance, a novel hybrid deep learning prediction model is proposed, which achieves accurate modeling of the complex spatiotemporal variation patterns of MUF-F2 by integrating a feature enhancement mechanism, a dual-branch feature extraction structure, and a bidirectional temporal dependency capture network. The hybrid prediction model integrates the Channel Attention mechanism (CA), Dual-Branch Convolutional Neural Network (DCNN), and Bidirectional Long Short-Term Memory network (BiLSTM). The model is trained and validated using MUF-F2 data from 5 communication links over China during geomagnetically quiet periods and 4 during geomagnetic storm periods, with the difference in the number of links attributed to experimental constraints and the disruptive effects of geomagnetic storms. Its performance is evaluated via multiple metrics, and a comparative analysis is conducted with commonly used prediction models such as the Long Short-Term Memory (LSTM) network. Experimental results show that during geomagnetically quiet periods, the proposed model achieves lower prediction errors (Root Mean Square Error (RMSE) < 1.1 MHz, Mean Absolute Percentage Error (MAPE) < 3.8%) and a higher goodness of fit (coefficient of determination (R²) > 0.94), with the average error reduction across all links ranging from 6.2% to 46.9% compared with the baseline model. Under geomagnetic storm disturbance conditions, the model still maintains robust prediction performance, with R² > 0.89 for all communication links, as well as RMSE < 0.6 MHz, Mean Absolute Error (MAE) < 0.4 MHz, and MAPE < 3.3%. The study demonstrates that the proposed CA-DCNN-BiLSTM model exhibits excellent prediction accuracy and anti-interference capability under different geomagnetic activity conditions, which can effectively improve the short-term prediction accuracy of MUF-F2 and provide more reliable technical support for HF communication frequency decision-making.
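The evaluation metrics quoted in the abstract (RMSE, MAE, MAPE, R²) have standard definitions; a minimal numpy sketch, illustrative rather than the authors' code:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Standard RMSE, MAE, MAPE (%), and coefficient of determination R^2."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    err = y_true - y_pred
    return {
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "MAE": float(np.mean(np.abs(err))),
        "MAPE": float(100.0 * np.mean(np.abs(err / y_true))),
        "R2": float(1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)),
    }
```

Note that MAPE divides by the true values, so it is only meaningful when targets stay away from zero, as MUF values (several MHz) do.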
35 pages, 3163 KB  
Article
An LLM-Based Agentic Network Traffic Incident-Report Approach Towards Explainable-AI Network Defense
by Chia-Hong Chou, Arjun Sudheer and Younghee Park
J. Sens. Actuator Netw. 2026, 15(2), 32; https://doi.org/10.3390/jsan15020032 (registering DOI) - 7 Apr 2026
Abstract
Traditional intrusion detection systems for IoT networks achieve high classification accuracy but lack interpretability and actionable incident-response capabilities, limiting their operational value in security-critical environments. This paper presents a graph-based multi-agent framework that integrates ensemble machine learning with Large Language Model (LLM)-powered incident report generation via Retrieval-Augmented Generation (RAG). The system employs a three-phase architecture: (1) a lightweight Random Forest binary pre-detection, achieving 99.49% accuracy with a 6 MB model size for edge deployment; (2) ensemble classification combining Multi-Layer Perceptron, Random Forest, and XGBoost with soft voting and SHAP-based feature attribution for explainability; and (3) a ReAct-based summary agent that synthesizes classification results with external threat intelligence from Web search and scholarly databases to generate evidence-grounded incident reports. To address the challenge of evaluating non-deterministic LLM outputs, we introduce custom RAG evaluation metrics, faithfulness and groundedness, implemented via the LLM-as-Judge framework. Experimental validation on the ACI IoT Network Dataset 2023 demonstrates ensemble accuracy exceeding 99.8% across 11 attack classes; perfect groundedness scores (1.0), indicating all generated claims derive from the retrieved context; and moderate faithfulness (0.64), reflecting appropriate analytical synthesis. The ensemble approach mitigates individual model weaknesses, improving the UDP Flood F1 score from 48% (MLP alone) to 95% through soft voting. This work bridges the gap between high-accuracy detection and trustworthy, actionable security analysis for automated incident-response systems.
(This article belongs to the Special Issue Feature Papers in the Section of Network Security and Privacy)
37 pages, 28225 KB  
Article
Hierarchical Spectral Modelling of Pasture Nutrition: From Laboratory to Sentinel-2 via UAV Hyperspectral
by Jason Barnetson, Hemant Raj Pandeya and Grant Fraser
AgriEngineering 2026, 8(4), 143; https://doi.org/10.3390/agriengineering8040143 - 7 Apr 2026
Abstract
This study demonstrates a hierarchical spectral modelling approach for predicting pasture nutrition metrics using TabPFN (Tabular Prior-Data Fitted Network), a transformer-based machine learning architecture. In the face of climate variability, aligning stocking rates with pasture resources is crucial for sustainable livestock grazing, requiring accurate assessments of both pasture biomass and nutrient composition. Our research, conducted across diverse growth stages at five tropical and subtropical savanna rangeland properties in Queensland, Australia, with native and introduced C4 grasses, employed a hierarchical sampling and modelling strategy that scales from laboratory spectroscopy to Sentinel-2 satellite predictions via uncrewed aerial vehicle (UAV) hyperspectral imaging. Spectral data were collected from leaf (laboratory spectroscopy) through field (point measurements), UAV hyperspectral imaging, and Sentinel-2 satellite imagery. Traditional laboratory wet chemistry methods determined plant leaf and stem nutrient content, from which crude protein (CP = total nitrogen (TN) × 6.25) and dry matter digestibility (DMD = 88.9 − 0.779 × acid detergent fibre (ADF)) were derived. TabPFN models were trained at each spatial scale, achieving validation R² of 0.76 for crude protein at the leaf scale, 0.95 at the UAV scale, and 0.92 at the Sentinel-2 satellite scale. For dry matter digestibility, validation R² was 0.88 at the UAV scale and 0.73 at the Sentinel-2 scale. A pasture classification masking approach using a deep neural network with 98.6% accuracy (7 classes) was implemented to focus predictions on productive pasture areas, excluding bare soil and woody vegetation. The Sentinel-2 models were trained on 462 samples from 19 site–date combinations across 11 field sites. The TabPFN architecture provided notable advantages over traditional neural networks: no hyperparameter tuning required, faster training, and superior generalisation from limited training samples. These results demonstrate the potential for accurate and efficient prediction and mapping of pasture quality across large areas (100s–1000s of km²) using freely available satellite imagery and open-source machine learning frameworks.
(This article belongs to the Special Issue The Application of Remote Sensing for Agricultural Monitoring)
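The two derived nutrition targets are simple linear transforms of the wet-chemistry measurements, using exactly the formulas quoted in the abstract; a minimal sketch (percent units assumed):

```python
def crude_protein(total_nitrogen_pct: float) -> float:
    """CP = TN x 6.25, the standard nitrogen-to-protein conversion cited in the abstract."""
    return total_nitrogen_pct * 6.25

def dry_matter_digestibility(adf_pct: float) -> float:
    """DMD = 88.9 - 0.779 x ADF, the regression on acid detergent fibre cited in the abstract."""
    return 88.9 - 0.779 * adf_pct
```

Because both targets are deterministic functions of lab measurements, the spectral models effectively learn to predict TN and ADF proxies from reflectance at each spatial scale.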
24 pages, 648 KB  
Article
Intuitive Risk Equation for Post-Transplant Bloodstream Infection Prediction: A Symbolic Regression Approach
by Sungsu Oh, Jeogin Jang, Yunseong Ko, Hyunsu Lee and Seungjin Lim
Biomedicines 2026, 14(4), 840; https://doi.org/10.3390/biomedicines14040840 - 7 Apr 2026
Abstract
Background: Liver transplant recipients are highly susceptible to infectious complications due to surgical invasiveness and immunosuppressive therapy, and post-transplant bloodstream infection is associated with substantial morbidity and mortality. Although several prediction models for bloodstream infection have been proposed, most focus on emergency department or general ward populations and rely on black-box approaches. This limits their applicability and clinical interpretability in liver transplant settings. Therefore, this study aimed to develop predictive models for post-transplant bloodstream infection using preoperative and perioperative clinical data and to derive an interpretable risk equation through symbolic regression. Methods: We conducted a retrospective observational study including 245 adult liver transplant recipients treated at a single tertiary center. Clinical and laboratory variables were extracted from electronic medical records and analyzed using standard statistical methods. For prediction tasks, multiple conventional machine learning models were developed and compared with a symbolic regression-based model. Predictive performance and model interpretability were evaluated using discrimination metrics and Shapley Additive Explanations. Results: Post-transplant bloodstream infection occurred in 82 patients (33.4%). In the test set, conventional machine learning models showed modest discriminative performance (area under the curve, 0.53–0.64). The symbolic regression model achieved comparable discrimination (area under the curve, 0.63) while providing transparent, threshold-based risk equations. While conventional models primarily relied on laboratory variables, symbolic regression additionally identified perioperative clinical factors and viral serologic markers as important predictors. Discussion: Although overall predictive performance was modest, symbolic regression highlighted viral serologic markers as potential indicators of immunologic vulnerability, extending beyond standard laboratory predictors. Conclusions: This interpretability-focused approach may inform future risk stratification models incorporating richer perioperative data.
(This article belongs to the Section Microbiology in Human Health and Disease)
31 pages, 2540 KB  
Review
Comparative Diagnostic Performance of Artificial Intelligence Versus Conventional Approaches for Early Detection of Mosquito-Borne Viral Infections: A Systematic Review and Meta-Analysis, with Evidence Predominantly from Dengue Studies
by Flavia Pennisi, Antonio Pinto, Claudia Cozzolino, Andrea Cozza, Giovanni Rezza, Carlo Signorelli, Vincenzo Baldo and Vincenza Gianfredi
Mach. Learn. Knowl. Extr. 2026, 8(4), 93; https://doi.org/10.3390/make8040093 - 7 Apr 2026
Abstract
Background: Early differentiation of mosquito-borne viral infections from other causes of acute febrile illness remains challenging, particularly in endemic and resource-limited settings. Artificial intelligence (AI) models have been proposed to improve early diagnosis, but their incremental value over conventional approaches is unclear. Methods: We conducted a systematic review and meta-analysis of comparative studies evaluating AI/machine learning models versus conventional approaches (clinical assessment, laboratory-based pathways, or traditional statistical models) for early detection of mosquito-borne viral infections. PubMed, Embase, and Scopus were searched through August 2025. Paired performance metrics were synthesized using fixed- and random-effects models. Outcomes included AUC, sensitivity, specificity, accuracy, positive predictive value (PPV), and negative predictive value (NPV). Risk of bias was assessed using PROBAST. Results: Thirteen studies met inclusion criteria. Under random-effects models, AI improved sensitivity (ES = 2.64, p = 0.028), specificity (ES = 5.55, p < 0.001), accuracy (ES = 3.19, p < 0.001), and NPV (ES = 13.84, p < 0.001). No consistent advantage was observed for AUC, and PPV findings were inconsistent. Substantial heterogeneity was present across outcomes (I² = 100%). Most studies relied on internal validation, and PROBAST identified high risk of bias in the analysis domain in over half. Conclusions: AI-based models may enhance threshold-dependent performance metrics, supporting their use as adjunctive decision-support tools for early triage and case exclusion, while external validation and implementation-focused research remain essential.
(This article belongs to the Section Thematic Reviews)
14 pages, 2118 KB  
Article
AI Method for Classification of Diagnosis of Near-Infrared Breast Lesion Images
by Kaiquan Chen, Fangyang Shen, Honggang Wang, Zhengchao Dong, Jizhong Xiao, Ming Ma, Afroza Aktar, Christopher Chow and Wenxiong Zhang
AI 2026, 7(4), 133; https://doi.org/10.3390/ai7040133 - 7 Apr 2026
Abstract
In near-infrared optical breast lesion screening and diagnosis systems, high-speed four-dimensional scanners can dynamically acquire tens of thousands of lesion images within a five-minute period. Currently, manual computer annotation is required to generate standard samples from these scanned breast lesion images, a process that depends heavily on physicians with clinical expertise. On average, a single physician can annotate only approximately ten samples per working day. As a result, this process is time-consuming and labor-intensive, and the collected samples often suffer from low accuracy, large variability, and limited diagnostic reliability. Several AI-based annotation tools, such as QuPath, HALO AI™, and X-AnyLabeling, have been developed to assist this process. However, these tools are primarily manual or semi-automated and are unable to provide rapid and high-precision recognition. To address these limitations, this study proposes a new AI-based method for the rapid, accurate, and fully automated detection and diagnosis of breast lesions. The proposed approach complements existing AI-based annotation and diagnostic methods by enabling automated detection and classification of breast lesion samples. The proposed system employs a deep learning–based classification framework to construct a professional-level AI diagnostic model. The system automatically generates diagnostic outputs based on the annotation criteria used by professional physicians, including positive/negative classification and accuracy metrics. Compared with conventional manual diagnostic methods, the proposed approach provides faster and more reliable diagnostic estimates for new patients. These results demonstrate the potential of the proposed AI-based method to advance automated breast lesion screening and diagnosis and to contribute to future research and clinical applications in this field.
(This article belongs to the Section AI Systems: Theory and Applications)
20 pages, 1234 KB  
Article
Lightweight Real-Time Navigation for Autonomous Driving Using TinyML and Few-Shot Learning
by Wajahat Ali, Arshad Iqbal, Abdul Wadood, Herie Park and Byung O Kang
Sensors 2026, 26(7), 2271; https://doi.org/10.3390/s26072271 - 7 Apr 2026
Abstract
Autonomous vehicle navigation requires low-latency and energy-efficient machine learning models capable of operating in dynamic and resource-constrained environments. Conventional deep learning approaches are often unsuitable for real-time deployment on embedded edge devices due to their high computational and memory demands. In this work, we propose a unified TinyML-optimized navigation framework that integrates a lightweight convolutional feature extractor (MobileNetV2) with a metric-based few-shot learning classifier to enable rapid adaptation to unseen driving scenarios with minimal data. The proposed framework jointly combines feature extraction, few-shot generalization, and edge-aware optimization into a single end-to-end pipeline designed specifically for real-time autonomous decision-making. Furthermore, post-training quantization and structured pruning are employed to significantly reduce the memory footprint and inference latency while preserving the classification performance. Experimental results demonstrate that the proposed model achieved a 93.4% accuracy on previously unseen road conditions, with an average inference latency of 68 ms and a memory usage of 18 MB, outperforming traditional CNN and LSTM models in efficiency while maintaining a competitive predictive performance. These results highlight the effectiveness of the proposed approach in enabling scalable, real-time navigation on low-power edge devices.
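A metric-based few-shot classifier of the kind described reduces, in its simplest (prototypical-network-style) form, to nearest-class-mean in embedding space. The sketch below uses toy 2-D vectors standing in for MobileNetV2 features; it is an illustration of the general technique, not this paper's implementation:

```python
import numpy as np

def nearest_prototype(support, labels, query):
    """Metric-based few-shot classification: each class is represented by the
    mean (prototype) of its support embeddings, and each query takes the label
    of the nearest prototype under Euclidean distance."""
    classes = np.unique(labels)
    protos = np.stack([support[labels == c].mean(axis=0) for c in classes])
    # distance from every query to every class prototype, shape (n_query, n_classes)
    dists = np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]
```

Adapting to a new driving scenario then requires only a handful of labeled support embeddings per class, with no gradient updates on the edge device.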
23 pages, 2118 KB  
Article
IDBspRS: An Interior Design-Built Service Package Recommendation System Using Artificial Intelligence
by Pranabanti Karmaakar, Muhammad Aslam Jarwar, Junaid Abdul Wahid and Najam Ul Hasan
Sustainability 2026, 18(7), 3605; https://doi.org/10.3390/su18073605 - 7 Apr 2026
Abstract
Digital transformation in the interior design industry has opened new opportunities for innovation; however, many cost-conscious homeowners still face difficulties in selecting and customizing design packages that achieve a balance between overall cost and sustainable quality. Existing interior design platforms lack seamless support and often require homeowners to invest considerable time and effort to tailor services to their needs while staying within budget. To address these challenges, this paper explores the use of machine learning to build a predictive modelling framework that supports personalized and value-driven interior design recommendations. The proposed approach uses a hybrid recommendation system that combines content-based and collaborative filtering. It also incorporates lightweight techniques such as TF–IDF (Term Frequency–Inverse Document Frequency) and logistic regression to more effectively capture user preferences, budget limits, and several interior-design service categories. Primary data was collected from small to medium-sized interior design companies. To demonstrate the proposed approach, a user-friendly web application tool is developed to integrate machine learning-enabled recommendation services. The resulting solution provides access to professional interior design services, enhancing customization and customer satisfaction while reducing the time and effort required from homeowners. To validate and compare the performance of the proposed approach, several machine learning models including Random Forest, XGBoost and KNN (K-Nearest Neighbors) were tested using standard metrics such as accuracy, precision, recall, and ROC-AUC (Receiver Operating Characteristic-Area Under the Curve). The proposed logistic regression hybrid model achieved the strongest overall results, with an accuracy of 83.62%. These findings demonstrate the significant contribution of this work to enhancing personalization and accessibility in the interior design sector via machine learning-enabled recommendation systems. The proposed approach bridges the gap between expert-level services and financial limits, making it a practical choice for cost-conscious homeowners.
(This article belongs to the Special Issue AI and ML Applications for a Sustainable Future)
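The TF–IDF weighting mentioned above turns each service description into a sparse vector of term weights, down-weighting terms that appear in every document. A toy formulation is sketched below; real pipelines (e.g., scikit-learn's TfidfVectorizer) use smoothed and normalized variants, and the example documents are hypothetical:

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document term weights: relative term frequency times idf = ln(N / df)."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # document frequency: in how many documents does each term occur?
    df = Counter(term for toks in tokenized for term in set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: (c / len(toks)) * math.log(n / df[t]) for t, c in tf.items()})
    return vectors
```

Terms unique to one package (here, "luxury") get higher weights than terms shared across packages, which is what lets a downstream classifier separate user preferences.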
12 pages, 6028 KB  
Article
A Universal Deep Learning Model for Predicting Detection Performance and Single-Event Effects of SPAD Devices
by Yilei Chen, Jin Huang, Yuxiang Zeng, Yi Jiang, Shulong Wang, Shupeng Chen and Hongxia Liu
Micromachines 2026, 17(4), 452; https://doi.org/10.3390/mi17040452 - 7 Apr 2026
Abstract
Single-event effects (SEEs) present a significant challenge to the radiation reliability of integrated circuits. Conventional SEE analysis methods for single-photon avalanche diode (SPAD) devices primarily rely on Sentaurus Technology Computer-Aided Design (TCAD) numerical simulation, which is computationally intensive and time-consuming. In this study, we propose a generalized deep learning (DL) model, using a silicon-based SPAD device with a double-junction double-buried-layer (DJDB) structure fabricated in a 180 nm CMOS process as the research subject. By incorporating key parameters that influence SEEs as model inputs, the proposed approach enables rapid prediction of critical parameter metrics, including transient current peaks and dark count rates. Experimental results show that the DL model achieves a prediction accuracy of 97.32% for transient current peaks and 99.87% for dark count rates, demonstrating extremely high prediction precision. To further validate the generalization capability of the proposed network, the model is applied to predict the detection performance of the DJDB-SPAD device. The prediction accuracies for four key performance parameters all exceed 97.5%, further confirming the accuracy and robustness of the developed model. Meanwhile, compared with the conventional Sentaurus TCAD simulation method, the proposed method achieves a 336-fold improvement in computational efficiency. Overall, this method realizes the dual advantages of high precision and high efficiency, which provides an efficient and accurate technical solution for the rapid characteristic analysis and reliability evaluation of SPAD devices under single-event effects.
15 pages, 2566 KB  
Article
Custom Deep Learning Framework for Interpreting Diabetic Retinopathy in Healthcare Diagnostics
by Tamoor Aziz, Chalie Charoenlarpnopparut, Srijidtra Mahapakulchai, Babatunde Oluwaseun Ajayi and Mayowa Emmanuel Bamisaye
Signals 2026, 7(2), 34; https://doi.org/10.3390/signals7020034 - 7 Apr 2026
Abstract
Diabetic retinopathy is a prevalent condition and a major public health concern due to its detrimental impact on eyesight. Diabetes is the root cause of its development: prolonged high blood sugar levels damage the small blood vessels of the retina. The degenerative consequences of diabetic retinopathy are irreversible if it is not diagnosed in the early stages of its progression. This ailment triggers the development of retinal lesions, which can be identified for diagnosis and prognosis. However, lesion detection is challenging due to their similarity in intensity profiles to other retinal features, inconsistent sizes, and random locations. This research evaluates a custom deep learning network for classifying retinal images and compares it with state-of-the-art classifiers. A novel preprocessing method is introduced to reduce the complexity of the diagnostic process and to enhance classification performance by adaptively enhancing images. Despite being a shallow network, the proposed model yields competitive results with an accuracy of 87.66% and an F1-score of 0.78. The evaluation metrics indicate that class imbalance affects the performance of the proposed model despite using the weighted cross-entropy loss. The future contribution will be the inclusion of generative adversarial networks for generating synthetic images to balance the dataset. This research aims to develop a robust computer-aided diagnostic system as a second interpreter for ophthalmologists during the diagnosis and prognosis stages.
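Weighted cross-entropy, the imbalance mitigation mentioned above, simply scales each sample's log-loss by a per-class weight (commonly the inverse class frequency). A minimal numpy sketch of the general loss, not the authors' implementation:

```python
import numpy as np

def weighted_cross_entropy(probs, targets, class_weights):
    """Mean over samples of -w[y] * log p[y], where w is a per-class weight
    and p[y] is the predicted probability of the true class.
    probs has shape (n_samples, n_classes); targets are integer class ids."""
    probs = np.clip(np.asarray(probs, float), 1e-12, 1.0)  # avoid log(0)
    targets = np.asarray(targets)
    w = np.asarray(class_weights, float)[targets]
    p_true = probs[np.arange(len(targets)), targets]
    return float(np.mean(-w * np.log(p_true)))
```

Raising the weight of a minority class makes its misclassifications more expensive, though, as the abstract notes, weighting alone may not fully compensate for severe imbalance.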
20 pages, 2366 KB  
Article
Multimodal Machine Learning Framework for Driver Mental Workload Classification: A Comparative and Interpretable Approach
by Xiaojun Shao, Xiaoxiang Ma, Feng Chen and Xiaodong Pan
Appl. Sci. 2026, 16(7), 3581; https://doi.org/10.3390/app16073581 - 7 Apr 2026
Abstract
Understanding and monitoring driver mental workload is essential for improving road safety. This study proposes a multimodal machine learning framework to classify drivers’ mental workload using eye movement metrics, physiological signals, and driving behavior features. A driving simulator experiment was conducted with 26 participants under two workload levels induced by a secondary auditory task. Seven feature combinations and six classification algorithms were evaluated. The results showed that eye metrics were the most informative modality, and that feature selection had a greater impact on classification performance than algorithm choice. A support vector machine with optimized features was selected as the final model based on performance and stability, achieving an accuracy of 87.8% and an AUC of 0.95. To improve model transparency, SHapley Additive exPlanations (SHAP) was applied, highlighting key predictors such as blink rate and heart rate, and uncovering synergistic effects between visual and physiological variables. The model was further validated in a tunnel entrance scenario, where it identified increased workload associated with steeper longitudinal slopes. These findings emphasize the importance of multimodal data integration—particularly eye movements—for assessing mental workload. Future applications should prioritize feature diversity over algorithm complexity to enhance real-world implementation in workload monitoring systems. Full article
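The evaluation pipeline described, an SVM scored by cross-validated AUC on a selected feature subset, can be sketched as below. The synthetic data is a stand-in for the study's eye-movement, physiological, and driving-behavior features, which are not available here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in: 12 candidate features, of which 4 are informative,
# mimicking a multimodal feature pool with an informative subset.
X, y = make_classification(n_samples=200, n_features=12, n_informative=4,
                           random_state=0)

# RBF-kernel SVM evaluated by cross-validated AUC, as in the study design.
clf = SVC(kernel="rbf")
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
```

Comparing `scores` across different feature subsets (e.g., eye metrics only vs. all modalities) reproduces the study's central comparison: the feature combination, not the classifier, drives most of the performance difference.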
20 pages, 11231 KB  
Article
YOLO-Based Shading Artifact Reduction for CBCT-to-MDCT Translation Using Two-Stage Learning
by Yangheon Lee and Hyun-Cheol Park
Mathematics 2026, 14(7), 1223; https://doi.org/10.3390/math14071223 - 6 Apr 2026
Abstract
Cone-beam computed tomography (CBCT) offers advantages of low radiation dose and rapid acquisition but suffers from scatter-induced shading artifacts that limit diagnostic value compared to multi-detector CT (MDCT). While CycleGAN enables unpaired image translation, its uniform loss application struggles with localized artifact removal. We propose a two-stage learning framework with YOLO-based region correction loss. Stage 1 trains a standard CycleGAN to establish stable CBCT-MDCT domain mapping. Stage 2 fine-tunes the model by applying gradient magnitude minimization loss selectively to artifact regions detected by a pretrained YOLO detector, enabling focused correction while preserving anatomical structures. Using 11,000 2D CBCT slices from 17 patients (14 training, 3 testing) and 23,500 2D MDCT slices from 50 patients, our method achieves a 14.0% reduction in artifact score compared to baseline CycleGAN while maintaining high structural similarity (SSIM > 0.96). Independent evaluation using integral nonuniformity (INU) and shading index (SI) confirms consistent improvement across physics-based metrics. The self-regulating mechanism, where YOLO detection confidence naturally decreases as artifacts diminish, provides automatic adjustment without manual intervention. This work demonstrates that combining staged learning with object detection offers an effective solution for localized artifact removal in medical image translation, potentially improving diagnostic accuracy while preserving the low-dose benefits of CBCT. Full article
18 pages, 2383 KB  
Article
Position-Independent Lactate Kinetic Phenotypes in Professional Soccer Players: A Machine Learning Approach for Maximal Running Velocity Prediction
by Erkan Tortu, İzzet İnce, Salih Çabuk, Süleyman Ulupınar, Cebrail Gençoğlu, Serhat Özbay and Kaan Kaya
Sensors 2026, 26(7), 2252; https://doi.org/10.3390/s26072252 - 6 Apr 2026
Abstract
This study aimed to identify distinct lactate kinetic phenotypes in professional soccer players using unsupervised machine learning and determine their relationship with maximal running velocity (Vmax) through explainable artificial intelligence methods. A total of 361 professional male soccer players from the First Division participated in the study. Incremental treadmill tests measured lactate concentrations at five standardized velocities, alongside VO2max, Vmax, lactate threshold (LT), and anaerobic threshold (AT) parameters. Three distinct lactate kinetic phenotypes emerged: Economical Aerobic (n = 216), Balanced Metabolic (n = 19), and High Producer (n = 126). The Economical Aerobic phenotype demonstrated superior performance metrics compared to High Producer (Vmax: 15.85 ± 0.85 km/h; VO2max: 56.20 ± 4.26 mL/kg/min; p < 0.001). Initial multicollinearity assessment revealed notable collinearity among all 10 candidate predictors (VIF > 10; maximum VIF = 10.75 for VAT), necessitating rigorous feature selection. Ridge regression with 4 selected features (VAT, VO2max, 9.5 km/h lactate, 14 km/h lactate) achieved moderate but statistically significant predictive performance: 10-fold cross-validation R2 = 0.392 ± 0.147 (permutation test p = 0.001). Standardized coefficients identified VAT (β = 0.399) as the dominant predictor, followed by VO2max (β = 0.253), 9.5 km/h lactate (β = 0.107), and 14 km/h lactate (β = −0.066). Lactate kinetic phenotyping reveals position-independent metabolic profiles with potentially meaningful performance associations in professional soccer. The Economical Aerobic phenotype demonstrates performance advantages associated with superior anaerobic threshold capacity.
These exploratory findings suggest that individualized training strategies based on metabolic phenotype rather than playing position alone warrant further investigation, with potential applications for talent identification, training periodization, and return-to-play protocols pending prospective validation. Full article
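The reported model, ridge regression over 4 predictors evaluated by 10-fold cross-validated R2, can be sketched on synthetic data. The feature columns are hypothetical stand-ins for VAT, VO2max, and the two lactate measurements, and the coefficients below merely echo the reported standardized betas; none of this is the study's data:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 361  # cohort size from the study; the data itself is synthetic

# Hypothetical standardized stand-ins for VAT, VO2max,
# 9.5 km/h lactate, and 14 km/h lactate.
X = rng.normal(size=(n, 4))

# Target built from the reported standardized betas plus noise
# (illustrative only, not a reconstruction of the real Vmax data).
y = (0.399 * X[:, 0] + 0.253 * X[:, 1]
     + 0.107 * X[:, 2] - 0.066 * X[:, 3]
     + rng.normal(scale=0.35, size=n))

r2_scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=10, scoring="r2")
```

Ridge's L2 penalty is what makes the fit stable despite the strong collinearity (VIF > 10) the abstract reports; ordinary least squares on the same predictors would yield unstable coefficients.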