Search Results (281)

Search Parameters:
Keywords = weighted voting

13 pages, 517 KB  
Proceeding Paper
Reinforcement Learning for the Optimization of Adaptive Intrusion Detection Systems
by Óscar Mogollón-Gutiérrez, David Escudero García, José Carlos Sancho Núñez and Noemí DeCastro-García
Eng. Proc. 2026, 123(1), 2; https://doi.org/10.3390/engproc2026123002 - 29 Jan 2026
Viewed by 23
Abstract
Network intrusion detection is of growing importance due to the annual rise in attacks. In the literature, machine learning is one of the most common mechanisms for improving the performance of detection systems. One of the main problems with these approaches is data imbalance: the volume of malicious traffic is much lower than that of normal traffic, making it difficult to build an effective model. Ensemble models, which combine several individual models, can increase robustness against data imbalance. To further improve the effectiveness of ensemble models against imbalance, in this work we apply reinforcement learning to combine the individual predictions of the ensemble's base models, with the objective of outperforming classic weighted-voting algorithms.
(This article belongs to the Proceedings of First Summer School on Artificial Intelligence in Cybersecurity)
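
For context, a minimal sketch of the classic weighted-voting baseline that the abstract contrasts with the reinforcement-learning combiner (detector predictions and weights are invented for illustration; this is not the authors' code):

```python
# Weighted-voting baseline: each detector casts a binary vote weighted
# by its (hypothetical) validation accuracy.
import numpy as np

def weighted_vote(preds: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """preds: (n_models, n_samples) binary labels; weights: (n_models,)."""
    scores = weights @ preds                           # weighted sum of positive votes
    return (scores >= weights.sum() / 2).astype(int)   # majority by total weight

preds = np.array([[1, 0, 1, 1],        # detector A
                  [0, 0, 1, 1],        # detector B
                  [1, 1, 1, 0]])       # detector C
weights = np.array([0.92, 0.88, 0.75])  # assumed validation accuracies
print(weighted_vote(preds, weights))    # -> [1 0 1 1]
```

An RL combiner, as the abstract describes, would replace these fixed weights with a policy learned from feedback on past classification decisions.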

33 pages, 550 KB  
Article
Intelligent Information Processing for Corporate Performance Prediction: A Hybrid Natural Language Processing (NLP) and Deep Learning Approach
by Qidi Yu, Chen Xing, Yanjing He, Sunghee Ahn and Hyung Jong Na
Electronics 2026, 15(2), 443; https://doi.org/10.3390/electronics15020443 - 20 Jan 2026
Viewed by 197
Abstract
This study proposes a hybrid machine learning framework that integrates structured financial indicators and unstructured textual strategy disclosures to improve firm-level management performance prediction. Using corporate business reports from South Korean listed firms, strategic text was extracted and categorized under the Balanced Scorecard (BSC) framework into financial, customer, internal process, and learning and growth dimensions. Various machine learning and deep learning models—including k-nearest neighbors (KNN), support vector machine (SVM), light gradient boosting machine (LightGBM), convolutional neural network (CNN), long short-term memory (LSTM), autoencoder, and transformer—were evaluated, with results showing that the inclusion of strategic textual data significantly enhanced prediction accuracy, precision, recall, area under the curve (AUC), and F1-score. Among individual models, the transformer architecture demonstrated superior performance in extracting context-rich semantic features. A soft-voting ensemble combining the autoencoder, LSTM, and transformer achieved the best overall performance, leading in accuracy and AUC, while the best single deep learning model (the transformer) obtained a marginally higher F1-score, confirming the value of hybrid learning. Furthermore, analysis revealed that customer-oriented strategy disclosures were the most predictive among BSC dimensions. These findings highlight the value of integrating financial and narrative data using advanced NLP and artificial intelligence (AI) techniques to develop interpretable and robust corporate performance forecasting models. In addition, we operationalize information security narratives using a reproducible cybersecurity lexicon and derive security disclosure intensity and weight-share features that are jointly evaluated with BSC-based strategic vectors.
(This article belongs to the Special Issue Advances in Intelligent Information Processing)
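
As an illustration of the soft-voting step described above, a minimal sketch that averages per-class probabilities from the three deep models (the probability values are placeholders, not outputs of the study's models):

```python
# Soft voting: average class probabilities, then take the argmax.
import numpy as np

# Hypothetical probability outputs (n_samples=3, n_classes=2).
p_autoencoder = np.array([[0.60, 0.40], [0.20, 0.80], [0.55, 0.45]])
p_lstm        = np.array([[0.70, 0.30], [0.40, 0.60], [0.35, 0.65]])
p_transformer = np.array([[0.80, 0.20], [0.30, 0.70], [0.45, 0.55]])

avg = (p_autoencoder + p_lstm + p_transformer) / 3  # equal-weight soft vote
print(avg.argmax(axis=1))  # -> [0 1 1]
```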

31 pages, 1485 KB  
Article
Explainable Multi-Modal Medical Image Analysis Through Dual-Stream Multi-Feature Fusion and Class-Specific Selection
by Naeem Ullah, Ivanoe De Falco and Giovanna Sannino
AI 2026, 7(1), 30; https://doi.org/10.3390/ai7010030 - 16 Jan 2026
Viewed by 414
Abstract
Effective and transparent medical diagnosis relies on accurate and interpretable classification of medical images across multiple modalities. This paper introduces an explainable multi-modal image analysis framework based on a dual-stream architecture that fuses handcrafted descriptors with deep features extracted from a custom MobileNet. Handcrafted descriptors include frequency-domain and texture features, while deep features are summarized using 26 statistical metrics to enhance interpretability. In the fusion stage, complementary features are combined at both the feature and decision levels. Decision-level integration combines calibrated soft voting, weighted voting, and stacking ensembles with optimized classifiers, including decision trees, random forests, gradient boosting, and logistic regression. To further refine performance, a hybrid class-specific feature selection strategy is proposed, combining mutual information, recursive elimination, and random forest importance to select the most discriminative features for each class. This hybrid selection approach eliminates redundancy, improves computational efficiency, and ensures robust classification. Explainability is provided through Local Interpretable Model-Agnostic Explanations, which offer transparent details about the ensemble model’s predictions and link influential handcrafted features to clinically meaningful image characteristics. The framework is validated on three benchmark datasets, i.e., BTTypes (brain MRI), Ultrasound Breast Images, and ACRIMA Retinal Fundus Images, demonstrating generalizability across modalities (MRI, ultrasound, retinal fundus) and disease categories (brain tumor, breast cancer, glaucoma).
(This article belongs to the Special Issue Digital Health: AI-Driven Personalized Healthcare and Applications)
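
A sketch in the spirit of the hybrid feature-selection strategy described above, assuming scikit-learn and synthetic data; keeping features nominated by at least two of the three selectors is one plausible combination rule, not necessarily the authors' exact one:

```python
# Hybrid selection: mutual information + RFE + random-forest importance,
# combined by a two-of-three vote across selectors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif, RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
k = 8  # assumed number of features each selector keeps

mi_top = set(np.argsort(mutual_info_classif(X, y, random_state=0))[-k:])
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=k).fit(X, y)
rfe_top = set(np.where(rfe.support_)[0])
rf = RandomForestClassifier(random_state=0).fit(X, y)
rf_top = set(np.argsort(rf.feature_importances_)[-k:])

votes = {f: (f in mi_top) + (f in rfe_top) + (f in rf_top) for f in range(20)}
selected = sorted(f for f, v in votes.items() if v >= 2)
print(selected)
```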

19 pages, 1973 KB  
Article
Continuous Smartphone Authentication via Multimodal Biometrics and Optimized Ensemble Learning
by Chia-Sheng Cheng, Ko-Chien Chang, Hsing-Chung Chen and Chao-Lung Chou
Mathematics 2026, 14(2), 311; https://doi.org/10.3390/math14020311 - 15 Jan 2026
Viewed by 451
Abstract
The ubiquity of smartphones has transformed them into primary repositories of sensitive data; however, traditional one-time authentication mechanisms create a critical trust gap by failing to verify identity post-unlock. Our aim is to mitigate these vulnerabilities and align with the Zero Trust Architecture (ZTA) framework and philosophy of “never trust, always verify,” as formally defined by the National Institute of Standards and Technology (NIST) in Special Publication 800-207. This study introduces a robust continuous authentication (CA) framework leveraging multimodal behavioral biometrics. A dedicated application was developed to synchronously capture touch, sliding, and inertial sensor telemetry. For feature modeling, a heterogeneous deep learning pipeline was employed to capture modality-specific characteristics, utilizing Convolutional Neural Networks (CNNs) for sensor data, Long Short-Term Memory (LSTM) networks for curvilinear sliding, and Gated Recurrent Units (GRUs) for discrete touch. To resolve performance degradation caused by class imbalance in Zero Trust environments, a Grid Search Optimization (GSO) strategy was applied to optimize a weighted voting ensemble, identifying the global optimum for decision thresholds and modality weights. Empirical validation on a dataset of 35,519 samples from 15 subjects demonstrates that the optimized ensemble achieves a peak accuracy of 99.23%. Sensor kinematics emerged as the primary biometric signature, followed by touch and sliding features. This framework enables high-precision, non-intrusive continuous verification, bridging the critical security gap in contemporary mobile architectures.
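
A minimal sketch of grid-searching modality weights and a decision threshold for a weighted-score fusion, as the abstract describes; the scores, labels, and grid resolution are invented for illustration:

```python
# Exhaustive grid search over three modality weights and a threshold,
# keeping the combination with the best validation accuracy.
import itertools
import numpy as np

rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, 200)     # hypothetical validation labels
scores = rng.random((3, 200))       # hypothetical CNN / LSTM / GRU scores

best = (0.0, None)
grid = np.linspace(0.1, 1.0, 10)
for w1, w2, w3 in itertools.product(grid, repeat=3):
    fused = (w1 * scores[0] + w2 * scores[1] + w3 * scores[2]) / (w1 + w2 + w3)
    for thr in np.linspace(0.3, 0.7, 9):
        acc = ((fused >= thr).astype(int) == y_val).mean()
        if acc > best[0]:
            best = (acc, (w1, w2, w3, thr))
print(best)
```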

43 pages, 6570 KB  
Article
A Multimodal Phishing Website Detection System Using Explainable Artificial Intelligence Technologies
by Alexey Vulfin, Alexey Sulavko, Vladimir Vasiliev, Alexander Minko, Anastasia Kirillova and Alexander Samotuga
Mach. Learn. Knowl. Extr. 2026, 8(1), 11; https://doi.org/10.3390/make8010011 - 4 Jan 2026
Viewed by 405
Abstract
The purpose of the present study is to improve the efficiency of phishing web resource detection through multimodal analysis and explainable artificial intelligence methods. We propose a late-fusion architecture in which independent specialized models process four modalities and are combined using weighted voting. The first branch uses CatBoost for URL features and metadata; the second uses a character-level 1D CNN for the URL string; the third uses a Transformer based on a pretrained CodeBERT for the homepage HTML code; and the fourth uses EfficientNet-B7 for page screenshot analysis. SHAP, Grad-CAM, and attention matrices are used to interpret decisions; a local LLM generates a consolidated textual explanation. A prototype system based on a microservice architecture, integrated with the SOC, has been developed; this integration enables streaming processing and reproducible validation. Computational experiments using our own updated dataset and the public MTLP dataset show high performance: F1-scores of up to 0.989 on our own dataset and 0.953 on MTLP; multimodal fusion consistently outperforms single-modal baseline models. The practical significance of this approach for zero-day detection and false-positive reduction, through feature alignment across modalities and explainability, is demonstrated. All limitations and operational aspects (data drift, adversarial robustness, LLM latency) of the proposed prototype are presented, and we outline areas for further research.
(This article belongs to the Section Safety, Security, Privacy, and Cyber Resilience)
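
A minimal late-fusion sketch matching the four-branch design described above; the per-modality probabilities, weights, and threshold are placeholders, not the paper's tuned values:

```python
# Late fusion: each modality model returns a phishing probability;
# a weighted vote produces the final decision.
import numpy as np

def fuse(p_url_meta, p_url_char, p_html, p_screenshot,
         weights=(0.3, 0.2, 0.3, 0.2), threshold=0.5):
    probs = np.array([p_url_meta, p_url_char, p_html, p_screenshot])
    score = float(np.dot(weights, probs))
    return score, score >= threshold

score, is_phishing = fuse(0.9, 0.7, 0.8, 0.4)
print(round(score, 2), is_phishing)  # -> 0.73 True
```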

21 pages, 45399 KB  
Article
Co-Seismic Landslide Detection Combining Multiple Classifiers Based on Weighted Voting: A Case Study of the Jiuzhaigou Earthquake in 2017
by Yaohui Liu, Xinkai Wang, Jie Zhou and Zhengguang Zhao
GeoHazards 2026, 7(1), 3; https://doi.org/10.3390/geohazards7010003 - 1 Jan 2026
Viewed by 348
Abstract
Co-seismic landslides are major secondary hazards of earthquakes, and their rapid detection is essential for emergency response, disaster assessment, and post-earthquake reconstruction. However, single classifiers often fail to meet practical detection requirements. This study proposes WPU, a weighted-voting-based multi-classifier method that assigns category-specific weights using the producer’s accuracy (PA) and user’s accuracy (UA). A case study was conducted in Jiuzhaigou County, Sichuan Province, China, affected by the Ms 7.0 earthquake on 8 August 2017. A dataset of 193 co-seismic landslides was built through manual interpretation, and six commonly used remote-sensing-based detection methods were employed. The WPU method fused the outputs of all classifiers using PA- and UA-based weights. Results show that WPU achieved an overall accuracy of 0.9755 and a Kappa coefficient of 0.7848, a substantial improvement over the individual classifiers while maintaining efficiency and timeliness. The proposed approach supports rapid emergency assessment and enhances the effectiveness of co-seismic landslide detection, providing a valuable reference for future post-earthquake hazard evaluations and enabling governments to respond more quickly to landslide disasters.
(This article belongs to the Special Issue Landslide Research: State of the Art and Innovations)
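
For illustration, a sketch of the category-specific weights described above, using the standard remote-sensing definitions (producer's accuracy = per-class recall, user's accuracy = per-class precision) on a hypothetical confusion matrix; the paper's exact weighting scheme may differ in detail:

```python
# PA/UA from one classifier's confusion matrix, and one plausible
# per-class weight built from them (the averaging is an assumption).
import numpy as np

def pa_ua(cm: np.ndarray):
    """cm[i, j]: reference class i predicted as class j."""
    pa = np.diag(cm) / cm.sum(axis=1)   # producer's accuracy (recall)
    ua = np.diag(cm) / cm.sum(axis=0)   # user's accuracy (precision)
    return pa, ua

# Hypothetical 2-class (landslide / non-landslide) confusion matrix.
cm = np.array([[90, 10],
               [ 5, 95]])
pa, ua = pa_ua(cm)
w = (pa + ua) / 2                       # assumed per-class vote weight
print(pa, ua, w)
```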

18 pages, 2688 KB  
Article
Rolling Bearing Fault Diagnosis Based on Multi-Source Domain Joint Structure Preservation Transfer with Autoencoder
by Qinglei Jiang, Tielin Shi, Xiuqun Hou, Biqi Miao, Zhaoguang Zhang, Yukun Jin, Zhiwen Wang and Hongdi Zhou
Sensors 2026, 26(1), 222; https://doi.org/10.3390/s26010222 - 29 Dec 2025
Viewed by 322
Abstract
Domain adaptation methods have been extensively studied for rolling bearing fault diagnosis under various conditions. However, some existing methods only consider the one-way embedding of the original space into a low-dimensional subspace without backward validation, which leads to inaccurate embeddings of the data and poor diagnostic performance. In this paper, a rolling bearing fault diagnosis method based on multi-source domain joint structure preservation transfer with autoencoder (MJSPTA) is proposed. Firstly, similar source domains are screened by inter-domain metrics; then, during the encoding stage, the high-dimensional data of both the source and target domains are projected into a shared subspace with different projection matrices. The decoding stage then reconstructs the low-dimensional data back to the original high-dimensional space to minimize the reconstruction error. In the shared subspace, the difference between source and target domains is reduced through distribution matching and sample weighting. Meanwhile, graph embedding theory is introduced to maximally preserve the local manifold structure of the samples during domain adaptation. Finally, label propagation is used to obtain the predicted labels, and a voting mechanism ultimately determines the fault type. The effectiveness and robustness of the method are verified through a series of diagnostic tests.
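
A minimal sketch of the final voting step, assuming each screened source domain contributes one predicted label per target sample (the labels below are invented):

```python
# Majority vote over per-source fault-type predictions.
import numpy as np

# Hypothetical labels from 4 source domains for 6 target samples
# (0 = normal, 1 = inner-race fault, 2 = outer-race fault).
preds = np.array([[0, 1, 2, 2, 1, 0],
                  [0, 1, 2, 1, 1, 0],
                  [0, 2, 2, 2, 1, 1],
                  [0, 1, 1, 2, 1, 0]])
final = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)
print(final)  # -> [0 1 2 2 1 0]
```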

29 pages, 9470 KB  
Article
Dendro-AutoCount Enhanced Using Pith Localization and Peak Analysis Method for Anomalous Images
by Sumitra Nuanmeesri and Lap Poomhiran
Mathematics 2026, 14(1), 94; https://doi.org/10.3390/math14010094 - 26 Dec 2025
Viewed by 327
Abstract
Dendrochronology serves as a vital tool for analyzing the long-term interactions between commercial timber growth and environmental variables such as soil, water, and climate. This study presents Dendro-AutoCount, an image processing framework designed to identify obscured tree rings in cross-sectional images of Pinus taeda L. The methodology integrates Hessian-based ridge detection with a weighted radial voting gradient method to precisely locate the pith. Following pith detection, the system performs radial cropping to generate directional sub-images (north, east, south, west), where rings are identified via intensity profile analysis, signal smoothing, and peak detection. By filtering outliers and averaging directional counts, the system effectively mitigates common visual interference from black mold, fungus, structural cracks, buds, and knots. Experimental results confirm the high efficacy of Dendro-AutoCount in processing anomalous tree ring images.
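
An illustrative sketch of the intensity-profile peak counting described above, using a synthetic radial profile and SciPy's peak detector; the smoothing window and prominence threshold are assumptions, not Dendro-AutoCount's tuned values:

```python
# Smooth a noisy radial intensity profile and count peaks as rings.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)
x = np.linspace(0, 10 * np.pi, 500)
profile = np.sin(x) + 0.3 * rng.standard_normal(x.size)  # fake 5-ring profile

kernel = np.ones(11) / 11                        # moving-average smoothing
smooth = np.convolve(profile, kernel, mode="same")
peaks, _ = find_peaks(smooth, prominence=0.5)    # assumed prominence threshold
print(len(peaks))                                # approximate ring count (~5)
```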

27 pages, 4420 KB  
Article
Real-Time Quarry Truck Monitoring with Deep Learning and License Plate Recognition: Weighbridge Reconciliation for Production Control
by Ibrahima Dia, Bocar Sy, Ousmane Diagne, Sidy Mané and Lamine Diouf
Mining 2025, 5(4), 84; https://doi.org/10.3390/mining5040084 - 14 Dec 2025
Viewed by 516
Abstract
This paper presents a real-time quarry truck monitoring system that combines deep learning and license plate recognition (LPR) for operational monitoring and weighbridge reconciliation. Rather than estimating load volumes directly from imagery, the system ensures auditable matching between detected trucks and official weight records. Deployed at quarry checkpoints, fixed cameras stream to an edge stack that performs truck detection, line-crossing counts, and per-frame plate Optical Character Recognition (OCR); a temporal voting and format-constrained post-processing step consolidates plate strings for registry matching. The system exposes a dashboard with auditable session bundles (model/version hashes, Region of Interest (ROI)/line geometry, thresholds, logs) to ensure replay and traceability between offline evaluation and live operations. We evaluate detection (precision, recall, mAP@0.5, and mAP@0.5:0.95), tracking (ID metrics), and LPR usability, and we quantify operational validity by reconciling estimated shift-level tonnage T against weighbridge tonnage T* using Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), R2, and Bland–Altman analysis. Results show stable convergence of the detection models, reliable plate usability under varied optics (day, dusk, night, and dust), low-latency processing suitable for commodity hardware, and close agreement with weighbridge references at the shift level. The study demonstrates that vision-based counting coupled with plate linkage can provide regulator-ready KPIs and auditable evidence for production control in quarry operations.
(This article belongs to the Special Issue Mine Management Optimization in the Era of AI and Advanced Analytics)
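
A minimal sketch of the temporal-voting consolidation described above, assuming equal-length per-frame OCR reads; a production system would add the format-constrained post-processing the abstract mentions:

```python
# Per-position character majority vote across frames.
from collections import Counter

def consolidate(frames: list[str]) -> str:
    # Assumes all reads have the same length; zip pairs characters by position.
    return "".join(Counter(chars).most_common(1)[0][0]
                   for chars in zip(*frames))

reads = ["AB123CD", "A8123CD", "AB123CO", "AB123CD"]  # hypothetical OCR reads
print(consolidate(reads))  # -> 'AB123CD'
```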

16 pages, 271 KB  
Article
Preferences Among Expert Physicians in Areas of Uncertainty in Venous Thromboembolism Management: Results from a Multiple-Choice Questionnaire
by Alessandro Di Minno, Gaia Spadarella, Ilenia Lorenza Calcaterra, Antonella Tufano, Alessandro Monaco, Franco Maria Pio Mondello Malvestiti, Elena Tremoli and Domenico Prisco
J. Clin. Med. 2025, 14(23), 8531; https://doi.org/10.3390/jcm14238531 - 1 Dec 2025
Viewed by 447
Abstract
Background/Objectives: Prevention and treatment of venous thromboembolism (VTE), including deep vein thrombosis (DVT) and pulmonary embolism (PE), is a major clinical issue in hospitalized patients. Some aspects of VTE management lack clarity due to differing physicians’ opinions and behaviors. Methods: A multidisciplinary steering committee identified two main areas of uncertainty: VTE prophylaxis and PE management in special settings. A multiple-choice questionnaire including 10 statements was circulated to 183 doctors trained in VTE management. The expected benefit-to-harm ratio was represented on a nine-point Likert scale, with consensus (≥75% agreement) on scores of 1–3 indicating inappropriate and 7–9 indicating appropriate care measures. Results: In online voting, a consensus was reached for 9/10 statements. Respondents considered the following to be appropriate: risk assessment of VTE (93.44%) and bleeding (91.6%) in hospitalized medical patients; low-molecular-weight heparin (LMWH) prophylaxis for inpatients with pneumonia and malignancy (82.78%); therapeutic doses of LMWH/fondaparinux in patients with intermediate/high risk of PE with (80.9%) or without (77.97%) instability criteria; and echocardiography to manage patients with a post-PE syndrome (93.99%). Respondents considered the following to be inappropriate: use of 4000 IU LMWH in chronic renal failure (80.46%); use of 2000 IU LMWH in persons on dual antiplatelet therapy (77.01%); and use of low-dose apixaban (2.5 mg) in pregnancy (88.57%) or in subsegmental PE with hypoxemia (82.46%). No consensus was reached on the identification of PE cases eligible for outpatient treatment. Conclusions: Our findings show persistent gaps between guideline recommendations and clinical implementation despite improved awareness among physicians. Uncertainty persists regarding criteria for outpatient PE eligibility and for validation of bleeding-risk models.
(This article belongs to the Section Hematology)
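
A worked example of the consensus rule defined above, with invented Likert scores: consensus requires at least 75% of responses in the 1–3 band (inappropriate) or the 7–9 band (appropriate):

```python
# Apply the >= 75% agreement rule to a list of nine-point Likert scores.
def consensus(scores: list[int]) -> str:
    n = len(scores)
    low = sum(1 <= s <= 3 for s in scores) / n    # share rating 1-3
    high = sum(7 <= s <= 9 for s in scores) / n   # share rating 7-9
    if high >= 0.75:
        return "appropriate"
    if low >= 0.75:
        return "inappropriate"
    return "no consensus"

print(consensus([8, 9, 7, 8, 7, 9, 8, 2]))  # -> 'appropriate' (7/8 = 87.5%)
```
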
22 pages, 666 KB  
Article
A Multi-Scale Suitability Assessment Framework for Deep Geological Storage of High-Salinity Mine Water in Coal Mines
by Zhe Jiang, Song Du, Songyu Ren, Qiaohui Che, Xiao Zhang and Yinglin Fan
Water 2025, 17(23), 3407; https://doi.org/10.3390/w17233407 - 29 Nov 2025
Viewed by 619
Abstract
Deep well injection and storage (DWIS) technology provides an effective alternative to address the high cost, energy intensity, and limited scalability of conventional treatments for high-salinity mine water from coal mines. However, the absence of a dedicated site suitability evaluation framework remains a major gap. Unlike previous approaches that directly applied CO2 storage criteria, this study refines and restructures the framework based on a systematic analysis of the fundamental differences in mechanisms and risk characteristics unique to mine water storage. Building on the experience of CO2 geological storage assessment, this study analyzes the key differences in fluid properties and storage mechanisms between water and CO2 and, for the first time, establishes a comprehensive site suitability evaluation framework for mine water geological storage. The framework integrates three main dimensions—stability and safety, effectiveness, and socio-economic factors—covering 80 key parameters. The indicator system is organized hierarchically at the basin, target-area, and site levels, and incorporates a multi-scale weight adaptation mechanism that assigns scale-dependent weights to the most influential indicators at each evaluation level. An innovative evaluation methodology combining a “one-vote veto” mechanism, progressive filtering, and multi-factor weighted superposition is proposed to determine storage suitability. This work fills a critical research gap in systematic site selection for deep mine water storage in China. It offers theoretical guidance and an engineering paradigm for overcoming technological bottlenecks in high-salinity water treatment, enabling efficient and low-carbon disposal. The study has important implications for promoting the green transformation of the mining industry and achieving national carbon peaking and neutrality goals.
(This article belongs to the Special Issue Mine Water Treatment, Utilization and Storage Technology)
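
A sketch of how the “one-vote veto” and multi-factor weighted superposition could compose (indicator names, weights, and values are invented for illustration; the actual framework spans 80 parameters across three dimensions):

```python
# One-vote veto followed by a weighted sum of normalized indicators.
def suitability(indicators: dict, weights: dict, veto_keys: set) -> float:
    # Veto: any failed critical indicator disqualifies the site outright.
    if any(indicators[k] == 0 for k in veto_keys):
        return 0.0
    # Weighted superposition over the remaining (0-1 normalized) indicators.
    return sum(weights[k] * indicators[k] for k in weights)

site = {"fault_free": 1, "caprock_integrity": 0.8,
        "injectivity": 0.7, "distance_to_city": 0.9}   # hypothetical scores
w = {"caprock_integrity": 0.4, "injectivity": 0.35, "distance_to_city": 0.25}
print(suitability(site, w, veto_keys={"fault_free"}))  # -> 0.79
```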

32 pages, 16310 KB  
Article
AI-Driven Multi-Model Classification of Rural Settlements for Targeted Rural Revitalization: A Case Study of Gaoqing County, Shandong Province, China
by Jing He, Xinlei Wang, Yingtao Qi, Jinghan Jiang, Dian Zhou, Ding Ma and Jing Ying
Land 2025, 14(12), 2298; https://doi.org/10.3390/land14122298 - 21 Nov 2025
Viewed by 882
Abstract
Rural settlements are the fundamental socio-economic units of China’s countryside. In line with national strategies that emphasize place-based and category-specific pathways for rural revitalization, accurate classification of rural settlements is essential for differentiated planning and policy delivery. However, given the sheer number of settlements, manual classification is time-consuming and resource-intensive, limiting scalability. This study proposes an AI-driven, multi-model framework to automate rural settlement classification with high stability and accuracy. First, informed by a rigorous literature review, we construct a multidimensional indicator system that integrates natural conditions, socio-economic attributes, and land-use factors to capture spatial and functional characteristics at the settlement scale. Using Gaoqing County (Shandong Province) as the study area, we collect and curate survey data and apply outlier detection for preprocessing. We then benchmark multiple machine learning models and find that algorithms with native handling of missing values perform markedly better—a critical advantage given the prevalence of missingness in survey-based datasets. Finally, we assemble the three best-performing models—LightGBM, CatBoost, and XGBoost—into a weighted-voting ensemble, achieving an overall classification accuracy of approximately 88%. The results demonstrate that the refined indicator system, coupled with a multi-model ensemble, substantially improves both accuracy and robustness. This work provides a methodological foundation and empirical evidence to support differentiated planning and targeted rural revitalization at the settlement level, offering a scalable blueprint for broader regional and national implementation.
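
A minimal sketch of the weighted-voting ensemble named above, assuming the lightgbm, catboost, and xgboost packages and scikit-learn's VotingClassifier; the data and weights are placeholders, not the study's tuned values:

```python
# Weighted soft-voting ensemble over three gradient-boosting models.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier
from xgboost import XGBClassifier

X, y = make_classification(n_samples=500, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)
ensemble = VotingClassifier(
    estimators=[("lgbm", LGBMClassifier(verbose=-1)),
                ("cat", CatBoostClassifier(verbose=0)),
                ("xgb", XGBClassifier())],
    voting="soft", weights=[0.4, 0.35, 0.25])  # illustrative weights
ensemble.fit(X, y)
print(ensemble.score(X, y))
```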

20 pages, 1550 KB  
Article
Machine Learning-Based Algorithm for Tacrolimus Dose Optimization in Hospitalized Kidney Transplant Patients
by Dong Jin Park, Mihyeong Kim, Hyungjin Cho, Jung Soo Kim, Jeongkye Hwang and Jehoon Lee
Diagnostics 2025, 15(23), 2948; https://doi.org/10.3390/diagnostics15232948 - 21 Nov 2025
Cited by 1 | Viewed by 736
Abstract
Background: Tacrolimus is a cornerstone immunosuppressant in kidney transplantation, but its narrow therapeutic index and marked inter-patient variability complicate dose optimization. Conventional therapeutic drug monitoring (TDM) relies on empirical adjustments that often overlook individual pharmacokinetics. Machine learning (ML) offers a precision dosing alternative by integrating diverse clinical and biochemical variables into predictive models. Methods: We retrospectively analyzed 1351 data points from 87 kidney transplant patients at Eunpyeong St. Mary’s Hospital (April 2019–November 2023). Clinical, demographic, and laboratory information, including tacrolimus trough levels and dosing history, were extracted from electronic medical records. Four predictive models—XGBoost, CatBoost, LightGBM, and a multilayer perceptron (MLP)—were trained to forecast next-day tacrolimus concentrations, and model performance was evaluated using R-squared (R2), mean absolute error (MAE), and root-mean-squared error (RMSE). An ensemble model with weighted soft voting was applied to enhance predictive accuracy, and model interpretability was assessed using SHapley Additive exPlanations (SHAP). Results: The ensemble model achieved the best overall performance (R2 = 0.6297, MAE = 1.0181, RMSE = 1.2999), outperforming all individual models, whereas the MLP model showed superior predictive power among single models, reflecting the significance of nonlinear interactions in tacrolimus pharmacokinetics. SHAP analysis highlighted prior tacrolimus levels, cumulative dose, renal function markers (eGFR level, serum creatinine level), and albumin concentration as the most influential predictors. Conclusions: We present a robust ML-based algorithm for tacrolimus dose optimization in hospitalized kidney transplant recipients. By improving predictions of tacrolimus concentrations, the model may help reduce inter-patient dose variability and lower the risk of nephrotoxicity, supporting safer and more individualized immunosuppressive management. This approach advances AI-driven precision medicine in transplant care, offering a pathway to safer and more effective immunosuppression.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
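
Since the models predict a continuous next-day concentration, the weighted “soft voting” amounts to a weighted average of the individual predictions; a minimal sketch with invented predictions and weights:

```python
# Weighted average of regression outputs from the four models.
# Hypothetical next-day trough predictions (ng/mL) and weights.
pred = {"xgboost": 6.8, "catboost": 7.1, "lightgbm": 6.5, "mlp": 7.4}
weight = {"xgboost": 0.2, "catboost": 0.2, "lightgbm": 0.2, "mlp": 0.4}

ensemble = sum(weight[m] * pred[m] for m in pred)
print(round(ensemble, 2))  # -> 7.04
```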

31 pages, 5285 KB  
Article
Ensemble Deep Learning for Real–Bogus Classification with Sky Survey Images
by Pakpoom Prommool, Sirikan Chucherd, Natthakan Iam-On and Tossapon Boongoen
Biomimetics 2025, 10(11), 781; https://doi.org/10.3390/biomimetics10110781 - 17 Nov 2025
Viewed by 758
Abstract
The discovery of the fifth gravitational wave, GW170817, and its electromagnetic counterpart, resulting from the merger of two neutron stars, by the LIGO and Virgo teams marked a major milestone in astronomy: it was the first time that gravitational waves and light from the same cosmic event were observed simultaneously. The LIGO detectors in the United States recorded the signal for 100 s, longer than in previous detections. The merger of neutron stars emits both gravitational and electromagnetic waves across all frequencies, from radio to gamma rays. However, pinpointing the exact source remains difficult, requiring rapid sky scanning to locate it. To address this challenge, the Gravitational-Wave Optical Transient Observer (GOTO) project was established. It is specifically designed to detect optical light from transient events associated with gravitational waves, enabling faster follow-up observations and deeper study of these short-lived astronomical phenomena, which appear and disappear quickly. In astrophysics, detecting transient events such as supernovae, gamma-ray bursts, and stellar flares has become increasingly important because they are linked to extreme cosmic processes; however, finding these short-lived events in huge sky survey datasets, like those from the GOTO project, is very hard for traditional analysis methods. This study proposes a deep learning methodology employing Convolutional Neural Networks (CNNs) to enhance transient classification. CNNs draw on the structure of biological vision systems, mimicking how animal brains hierarchically process visual information to automatically detect complex spatial patterns in astronomical images. Transfer learning and fine-tuning of pretrained ImageNet models emulate the adaptive learning observed in biological organisms, enabling swift adaptation to new tasks with minimal data. Data augmentation methods such as rotation, flipping, and noise injection mimic environmental variation to improve model generalization, while dropout and varied batch sizes curb overfitting, analogous to the redundancy and noise tolerance of biological systems. Ensemble learning strategies, such as soft voting and weighted voting, draw inspiration from collective intelligence in biological systems, integrating multiple CNN models to enhance decision-making robustness. Our findings indicate that this bio-inspired framework substantially improves the precision and dependability of transient detection, providing a scalable solution for real-time applications in extensive sky surveys such as GOTO.
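
A minimal sketch contrasting the two ensemble rules named above on hypothetical CNN softmax outputs (soft voting averages the probabilities; weighted voting applies per-model weights first):

```python
# Soft vs weighted voting over real/bogus probability outputs.
import numpy as np

probs = np.array([[0.55, 0.45],   # CNN 1: P(real), P(bogus)
                  [0.40, 0.60],   # CNN 2
                  [0.70, 0.30]])  # CNN 3
w = np.array([0.5, 0.2, 0.3])     # assumed validation-based weights

soft = probs.mean(axis=0)         # soft voting: plain average
weighted = w @ probs              # weighted voting: weighted average
print(soft.argmax(), weighted.argmax())  # -> 0 0 (both pick "real")
```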

15 pages, 1165 KB  
Article
Multiscale Bootstrap Correction for Random Forest Voting: A Statistical Inference Approach to Stock Index Trend Prediction
by Aizhen Ren, Yanqiong Duan and Juhong Liu
Mathematics 2025, 13(22), 3601; https://doi.org/10.3390/math13223601 - 10 Nov 2025
Viewed by 368
Abstract
This paper proposes a novel multiscale random forest model for stock index trend prediction, incorporating statistical inference principles to improve classification confidence. Traditional random forest classifiers rely on majority voting, which can yield biased estimates of class probabilities, especially under small sample sizes. To address this, we introduce a multiscale bootstrap correction mechanism into the ensemble framework, enabling the estimation of third-order accurate approximately unbiased p-values. This modification replaces naive voting with statistically grounded decision thresholds, improving the robustness of the model. Additionally, stepwise regression is employed for feature selection to enhance generalization. Experimental results on CSI 300 index data demonstrate that the proposed method consistently outperforms standard classifiers, including standard random forest, support vector machine, and weighted k-nearest neighbors model, across multiple performance metrics. The contribution of this work lies in the integration of hypothesis testing techniques into ensemble learning and the pioneering application of multiscale bootstrap inference to financial time series forecasting.
(This article belongs to the Section E1: Mathematics and Computer Science)
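
A heavily simplified sketch of the correction's final step, following Shimodaira's multiscale-bootstrap formulation (fit z(σ) = vσ + d/σ to scale-dependent vote shares, then AU = 1 − Φ(v − d)); the scales and vote shares below are invented, and the paper's third-order version and its integration into the random forest are more involved:

```python
# Fit the two-term multiscale model to vote shares observed at several
# bootstrap scales, then extrapolate to an approximately unbiased p-value.
import numpy as np
from scipy.stats import norm

sigma2 = np.array([0.5, 0.75, 1.0, 1.25, 1.5])   # assumed scales (n/n')
bp = np.array([0.62, 0.58, 0.55, 0.52, 0.50])    # hypothetical vote shares
sigma = np.sqrt(sigma2)
z = norm.ppf(1 - bp)                              # normal quantiles of 1 - BP
A = np.column_stack([sigma, 1 / sigma])           # model: z = v*sigma + d/sigma
(v, d), *_ = np.linalg.lstsq(A, z, rcond=None)    # least-squares fit of v, d
au = 1 - norm.cdf(v - d)                          # approximately unbiased p
print(round(au, 3))
```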
