Search Results (12,150)

Search Parameters:
Keywords = learning rate

19 pages, 923 KB  
Article
Youth and ChatGPT: Perceptions of Usefulness and Usage Patterns of Generation Z in Polish Higher Education
by Marian Oliński and Kacper Sieciński
Youth 2025, 5(4), 106; https://doi.org/10.3390/youth5040106 - 5 Oct 2025
Abstract
This article examines how young adults in higher education (Generation Z) perceive the usefulness of ChatGPT by analyzing five learning-support roles within the Technology Acceptance Model (TAM), Expectation–Confirmation Theory (ECT), and Task–Technology Fit (TTF). Drawing on an online survey of 409 students from Polish universities and nonparametric analyses, this study consistently finds that students rate ChatGPT’s potential higher than its current usefulness. The tool is evaluated most favorably as a tutor, task assistant, text editor, and teacher, while its motivational role is rated least effective. Usage patterns matter: students who used ChatGPT for writing tasks rated its assistance with educational assignments more highly, and those who used it for learning activities rated its teaching role more strongly. The strongest evaluations appear when model capabilities such as structuring, summarizing, step-by-step explanations, and personalization align with task requirements. By integrating TAM, ECT, and TTF, this study advances evidence on how Gen Z engages with conversational AI and offers practical guidance for educators, support services, and youth-focused policymakers on equitable and responsible use. Full article
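
The usage-pattern comparisons described above rest on nonparametric group tests. As a rough, hypothetical sketch (the file, column names, and rating scale below are invented, not the authors' survey instrument), such a comparison could be run as follows:

```python
# Compare usefulness ratings between students who did vs. did not use ChatGPT for
# writing tasks with a Mann-Whitney U test. All names and data are hypothetical.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("survey_responses.csv")  # hypothetical survey export

writers = df.loc[df["used_for_writing"] == 1, "assignment_usefulness_rating"]
non_writers = df.loc[df["used_for_writing"] == 0, "assignment_usefulness_rating"]

stat, p_value = mannwhitneyu(writers, non_writers, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")
```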

29 pages, 9032 KB  
Article
Multi-Agent Deep Reinforcement Learning for Joint Task Offloading and Resource Allocation in IIoT with Dynamic Priorities
by Yongze Ma, Yanqing Zhao, Yi Hu, Xingyu He and Sifang Feng
Sensors 2025, 25(19), 6160; https://doi.org/10.3390/s25196160 - 4 Oct 2025
Abstract
The rapid growth of Industrial Internet of Things (IIoT) terminals has resulted in tasks exhibiting increased concurrency, heterogeneous resource demands, and dynamic priorities, significantly increasing the complexity of task scheduling in edge computing. Cloud–edge–end collaborative computing leverages cross-layer task offloading to alleviate edge node resource contention and improve task scheduling efficiency. However, existing methods generally neglect the joint optimization of task offloading, resource allocation, and priority adaptation, making it difficult to balance task execution and resource utilization under resource-constrained and competitive conditions. To address this, this paper proposes a two-stage dynamic-priority-aware joint task offloading and resource allocation method (DPTORA). In the first stage, an improved Multi-Agent Proximal Policy Optimization (MAPPO) algorithm integrated with a Priority-Gated Attention Module (PGAM) enhances the robustness and accuracy of offloading strategies under dynamic priorities; in the second stage, the resource allocation problem is formulated as a single-objective convex optimization task and solved globally using the Lagrangian dual method. Simulation results show that DPTORA significantly outperforms existing multi-agent reinforcement learning baselines in terms of task latency, energy consumption, and the task completion rate. Full article
(This article belongs to the Section Internet of Things)
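
As context for the second stage, once offloading decisions are fixed, dividing one node's capacity among its tasks to minimize weighted latency is a small convex program. The sketch below is only illustrative (weights, cycle counts, capacity, and the cvxpy formulation are assumptions, not the paper's DPTORA implementation):

```python
# Toy convex resource-allocation step: split a node's CPU capacity among tasks to
# minimize priority-weighted computation latency. All numbers are made up.
import numpy as np
import cvxpy as cp

priorities = np.array([3.0, 1.0, 2.0])   # task priority weights (hypothetical)
cycles = np.array([2e9, 5e8, 1e9])       # CPU cycles each task requires (hypothetical)
capacity = 4e9                           # node capacity in cycles per second (hypothetical)

f = cp.Variable(3, nonneg=True)          # cycles per second allocated to each task
latency = cp.sum(cp.multiply(priorities * cycles, cp.inv_pos(f)))  # sum_i w_i * c_i / f_i
problem = cp.Problem(cp.Minimize(latency), [cp.sum(f) <= capacity])
problem.solve()
print("allocation (GHz):", np.round(f.value / 1e9, 3))
```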

15 pages, 2358 KB  
Article
Optimized Lung Nodule Classification Using CLAHE-Enhanced CT Imaging and Swin Transformer-Based Deep Feature Extraction
by Dorsaf Hrizi, Khaoula Tbarki and Sadok Elasmi
J. Imaging 2025, 11(10), 346; https://doi.org/10.3390/jimaging11100346 - 4 Oct 2025
Abstract
Lung cancer remains one of the most lethal cancers globally. Its early detection is vital to improving survival rates. In this work, we propose a hybrid computer-aided diagnosis (CAD) pipeline for lung cancer classification using Computed Tomography (CT) scan images. The proposed CAD pipeline integrates ten image preprocessing techniques and ten pretrained deep learning models for feature extraction including convolutional neural networks and transformer-based architectures, and four classical machine learning classifiers. Unlike traditional end-to-end deep learning systems, our approach decouples feature extraction from classification, enhancing interpretability and reducing the risk of overfitting. A total of 400 model configurations were evaluated to identify the optimal combination. The proposed approach was evaluated on the publicly available Lung Image Database Consortium and Image Database Resource Initiative dataset, which comprises 1018 thoracic CT scans annotated by four thoracic radiologists. For the classification task, the dataset included a total of 6568 images labeled as malignant and 4849 images labeled as benign. Experimental results show that the best performing pipeline, combining Contrast Limited Adaptive Histogram Equalization, Swin Transformer feature extraction, and eXtreme Gradient Boosting, achieved an accuracy of 95.8%. Full article
(This article belongs to the Special Issue Advancements in Imaging Techniques for Detection of Cancer)
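
The decoupled design (preprocessing, frozen feature extractor, classical classifier) can be outlined in a few lines. The snippet below is a hedged sketch using off-the-shelf components (OpenCV CLAHE, a timm Swin backbone, and XGBoost); the dataset handling, model choices, and hyperparameters are placeholders rather than the authors' exact pipeline:

```python
# CLAHE-enhance a CT slice, embed it with a frozen pretrained Swin Transformer,
# and (separately) classify the resulting feature vectors with XGBoost.
import cv2
import numpy as np
import torch
import timm
from xgboost import XGBClassifier

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
backbone = timm.create_model("swin_base_patch4_window7_224", pretrained=True, num_classes=0)
backbone.eval()

def extract_features(gray_slice: np.ndarray) -> np.ndarray:
    """Return a pooled Swin embedding for one grayscale CT slice (ImageNet normalization omitted)."""
    enhanced = clahe.apply(gray_slice.astype(np.uint8))
    rgb = cv2.cvtColor(cv2.resize(enhanced, (224, 224)), cv2.COLOR_GRAY2RGB)
    x = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        return backbone(x).squeeze(0).numpy()

# X_train, y_train would stack feature vectors and benign/malignant labels:
# clf = XGBClassifier(n_estimators=300, learning_rate=0.1).fit(X_train, y_train)
```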
15 pages, 1603 KB  
Article
EEG-Powered UAV Control via Attention Mechanisms
by Jingming Gong, He Liu, Liangyu Zhao, Taiyo Maeda and Jianting Cao
Appl. Sci. 2025, 15(19), 10714; https://doi.org/10.3390/app151910714 - 4 Oct 2025
Abstract
This paper explores the development and implementation of a brain–computer interface (BCI) system that utilizes electroencephalogram (EEG) signals for real-time monitoring of attention levels to control unmanned aerial vehicles (UAVs). We propose an innovative approach that combines spectral power analysis and machine learning classification techniques to translate cognitive states into precise UAV command signals. This method overcomes the limitations of traditional threshold-based approaches by adapting to individual differences and improving classification accuracy. Through comprehensive testing with 20 participants in both controlled laboratory environments and real-world scenarios, our system achieved an 85% accuracy rate in distinguishing between high and low attention states and successfully mapped these cognitive states to vertical UAV movements. Experimental results demonstrate that our machine learning-based classification method significantly enhances system robustness and adaptability in noisy environments. This research not only advances UAV operability through neural interfaces but also broadens the practical applications of BCI technology in aviation. Our findings contribute to the expanding field of neurotechnology and underscore the potential for neural signal processing and machine learning integration to revolutionize human–machine interaction in industries where dynamic relationships between cognitive states and automated systems are beneficial. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
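
The core signal path (band-power features from EEG, then a learned high/low-attention classifier) can be sketched generically. Band boundaries, sampling rate, and labels below are assumptions for illustration, not the system described in the paper:

```python
# Estimate theta/alpha/beta band power with Welch's method and classify attention state.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

FS = 256  # EEG sampling rate in Hz (assumed)

def band_powers(window: np.ndarray) -> np.ndarray:
    """Return theta, alpha, and beta power for one single-channel EEG window."""
    freqs, psd = welch(window, fs=FS, nperseg=FS)
    bands = [(4, 8), (8, 13), (13, 30)]
    return np.array([psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])

# X = np.vstack([band_powers(w) for w in eeg_windows])    # eeg_windows: hypothetical data
# clf = SVC(kernel="rbf").fit(X, attention_labels)        # labels: high vs. low attention
# commands = clf.predict(X_new)                           # map predictions to up/down UAV commands
```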

18 pages, 1641 KB  
Article
Using Non-Lipschitz Signum-Based Functions for Distributed Optimization and Machine Learning: Trade-Off Between Convergence Rate and Optimality Gap
by Mohammadreza Doostmohammadian, Amir Ahmad Ghods, Alireza Aghasi, Zulfiya R. Gabidullina and Hamid R. Rabiee
Math. Comput. Appl. 2025, 30(5), 108; https://doi.org/10.3390/mca30050108 - 4 Oct 2025
Abstract
In recent years, the prevalence of large-scale datasets and the demand for sophisticated learning models have necessitated the development of efficient distributed machine learning (ML) solutions. Convergence speed is a critical factor influencing the practicality and effectiveness of these distributed frameworks. Recently, non-Lipschitz continuous optimization algorithms have been proposed to improve the slow convergence rate of the existing linear solutions. The use of signum-based functions was previously considered in consensus and control literature to reach fast convergence in the prescribed time and also to provide robust algorithms to noisy/outlier data. However, as shown in this work, these algorithms lead to an optimality gap and steady-state residual of the objective function in discrete-time setup. This motivates us to investigate the distributed optimization and ML algorithms in terms of trade-off between convergence rate and optimality gap. In this direction, we specifically consider the distributed regression problem and check its convergence rate by applying both linear and non-Lipschitz signum-based functions. We check our distributed regression approach by extensive simulations. Our results show that although adopting signum-based functions may give faster convergence, it results in large optimality gaps. The findings presented in this paper may contribute to and advance the ongoing discourse of similar distributed algorithms, e.g., for distributed constrained optimization and distributed estimation. Full article
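
A self-contained toy makes the trade-off concrete: on a least-squares problem, sign-based updates take fixed-size steps and therefore stall at a residual gap, while plain gradient descent converges smoothly. This centralized toy is only illustrative and is not the paper's distributed protocol:

```python
# Compare plain gradient descent with a signum-based update on a toy regression problem.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
b = A @ rng.normal(size=5) + 0.01 * rng.normal(size=100)
x_lin = np.zeros(5)
x_sgn = np.zeros(5)

for _ in range(500):
    g_lin = A.T @ (A @ x_lin - b) / len(b)
    g_sgn = A.T @ (A @ x_sgn - b) / len(b)
    x_lin -= 0.05 * g_lin              # linear (Lipschitz) update
    x_sgn -= 0.05 * np.sign(g_sgn)     # signum-based update: fixed-size steps

loss = lambda x: 0.5 * np.mean((A @ x - b) ** 2)
print(f"linear GD loss: {loss(x_lin):.6f}")
print(f"signum GD loss: {loss(x_sgn):.6f}  # stalls at a nonzero optimality gap")
```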
58 pages, 4299 KB  
Article
Optimisation of Cryptocurrency Trading Using the Fractal Market Hypothesis with Symbolic Regression
by Jonathan Blackledge and Anton Blackledge
Commodities 2025, 4(4), 22; https://doi.org/10.3390/commodities4040022 - 3 Oct 2025
Abstract
Cryptocurrencies such as Bitcoin can be classified as commodities under the Commodity Exchange Act (CEA), giving the Commodity Futures Trading Commission (CFTC) jurisdiction over those cryptocurrencies deemed commodities, particularly in the context of futures trading. This paper presents a method for predicting both long- and short-term trends in selected cryptocurrencies based on the Fractal Market Hypothesis (FMH). The FMH applies the self-affine properties of fractal stochastic fields to model financial time series. After introducing the underlying theory and mathematical framework, a fundamental analysis of Bitcoin and Ethereum exchange rates against the U.S. dollar is conducted. The analysis focuses on changes in the polarity of the ‘Beta-to-Volatility’ and ‘Lyapunov-to-Volatility’ ratios as indicators of impending shifts in Bitcoin/Ethereum price trends. These signals are used to recommend long, short, or hold trading positions, with corresponding algorithms (implemented in Matlab R2023b) developed and back-tested. An optimisation of these algorithms identifies ideal parameter ranges that maximise both accuracy and profitability, thereby ensuring high confidence in the predictions. The resulting trading strategy provides actionable guidance for cryptocurrency investment and quantifies the likelihood of bull or bear market dominance. Under stable market conditions, machine learning (using the ‘TuringBot’ platform) is shown to produce reliable short-horizon estimates of future price movements and fluctuations. This reduces trading delays caused by data filtering and increases returns by identifying optimal positions within rapid ‘micro-trends’ that would otherwise remain undetected—yielding gains of up to approximately 10%. Empirical results confirm that Bitcoin and Ethereum exchanges behave as self-affine (fractal) stochastic fields with Lévy distributions, exhibiting a Hurst exponent of roughly 0.32, a fractal dimension of about 1.68, and a Lévy index near 1.22. These findings demonstrate that the Fractal Market Hypothesis and its associated indices provide a robust market model capable of generating investment returns that consistently outperform standard Buy-and-Hold strategies. Full article
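
For readers unfamiliar with the self-affinity measurements quoted above, a generic Hurst-exponent estimate can be obtained from how the spread of aggregated log-returns scales with the aggregation window. The snippet is a simple illustrative estimator, not the Matlab/TuringBot pipeline used in the paper:

```python
# Estimate a Hurst exponent from the scaling of aggregated log-return standard deviations.
import numpy as np

def hurst_exponent(prices: np.ndarray, scales=(2, 4, 8, 16, 32, 64)) -> float:
    returns = np.diff(np.log(prices))
    stds = []
    for n in scales:
        m = (len(returns) // n) * n
        blocks = returns[:m].reshape(-1, n).sum(axis=1)   # non-overlapping blocks of length n
        stds.append(blocks.std())
    slope, _ = np.polyfit(np.log(scales), np.log(stds), 1)  # std(n) ~ n**H in log-log space
    return slope

# prices = np.loadtxt("btc_usd_daily_close.csv")   # hypothetical daily closes
# H = hurst_exponent(prices)
# print(f"H = {H:.2f}, fractal dimension ~ {2 - H:.2f}")
```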

19 pages, 827 KB  
Article
Optimized Hybrid Ensemble Intrusion Detection for VANET-Based Autonomous Vehicle Security
by Ahmad Aloqaily, Emad E. Abdallah, Aladdin Baarah, Mohammad Alnabhan, Esra’a Alshdaifat and Hind Milhem
Network 2025, 5(4), 43; https://doi.org/10.3390/network5040043 - 3 Oct 2025
Abstract
Connected and Autonomous Vehicles are promising for advancing traffic safety and efficiency. However, the increased connectivity makes these vehicles vulnerable to a broad array of cyber threats. This paper presents a novel hybrid approach for intrusion detection in in-vehicle networks, specifically focusing on the Controller Area Network bus. Ensemble learning techniques are combined with sophisticated optimization and dynamic adaptation mechanisms to develop a robust, accurate, and computationally efficient intrusion detection system. The proposed system is evaluated on real-world automotive network datasets that include various attack types (e.g., Denial of Service, fuzzy, and spoofing attacks). In these evaluations, the proposed hybrid adaptive system achieves an unprecedented accuracy of 99.995% with a 0.00001% false positive rate, significantly outperforming traditional methods. In addition, the system is robust to novel attack patterns, tolerant of varying computational constraints, and suitable for real-time deployment across a range of automotive platforms. This research represents a significant advancement in automotive cybersecurity, providing the scalable and proactive defense mechanism needed to operate next-generation vehicles safely. Full article
(This article belongs to the Special Issue Emerging Trends and Applications in Vehicular Ad Hoc Networks)
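
The ensemble idea, stripped of the paper's optimization and adaptation layers, reduces to combining heterogeneous learners over per-message CAN features. A minimal soft-voting sketch (feature extraction and class names are placeholders) might look like this:

```python
# Soft-voting ensemble over tabular CAN-bus features for multi-class intrusion detection.
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("gb", GradientBoostingClassifier()),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",
)

# X: per-message features (IDs, inter-arrival times, payload bytes); y: normal / DoS / fuzzy / spoofing
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2)
# ensemble.fit(X_tr, y_tr)
# print(classification_report(y_te, ensemble.predict(X_te)))
```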
14 pages, 2927 KB  
Systematic Review
Real-Time Artificial Intelligence Versus Standard Colonoscopy in the Early Detection of Colorectal Cancer: A Systematic Review and Meta-Analysis
by Abdullah Sultany, Rahul Chikatimalla, Adishwar Rao, Mohamed A. Omar, Abdulkader Shaar, Hassam Ali, Fariha Hasan, Sheza Malik, Saqr Alsakarneh and Dushyant Singh Dahiya
Healthcare 2025, 13(19), 2517; https://doi.org/10.3390/healthcare13192517 - 3 Oct 2025
Abstract
Background: Colonoscopy remains the gold standard for colorectal cancer screening. Deep learning systems with real-time computer-aided polyp detection (CADe) demonstrate high accuracy in controlled research settings, and preliminary randomized controlled trials (RCTs) report favorable outcomes in clinical settings. This study aims to evaluate the efficacy of AI-assisted colonoscopy compared to standard colonoscopy, focusing on Polyp Detection Rate (PDR) and Adenoma Detection Rate (ADR), and to explore their implications for clinical practice. Methods: A systematic search was conducted using multiple indexing databases for RCTs comparing AI-assisted to standard colonoscopy. Random-effects models were used to calculate pooled odds ratios (ORs) with 95% confidence intervals. The risk of bias was assessed using the Cochrane Risk of Bias Tool, and heterogeneity was quantified using the I² statistic. Results: From 22,762 studies, 12 RCTs (n = 11,267) met the inclusion criteria. AI-assisted colonoscopy significantly improved PDR (OR 1.31, 95% CI 1.08–1.59, p = 0.005), despite heterogeneity among studies (I² = 79%). While ADR showed improvement with AI-assisted colonoscopy (OR 1.24, 95% CI 0.98–1.58, p = 0.08), the result was not statistically significant and had high heterogeneity (I² = 81%). Conclusions: AI-assisted colonoscopy significantly enhances PDR, highlighting its potential role in colorectal cancer screening programs. However, while an improvement in the ADR was observed, the results were not statistically significant and showed considerable variability. These findings highlight the promise of AI in improving diagnostic accuracy but also point to the need for further research to better understand its impact on meaningful clinical outcomes. Full article
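
The pooling step behind figures such as "OR 1.31, 95% CI 1.08–1.59" is a standard random-effects combination of per-study log odds ratios. The worked sketch below uses the DerSimonian-Laird estimator with invented inputs, not the trial data from this review:

```python
# Random-effects (DerSimonian-Laird) pooling of per-study log odds ratios.
import numpy as np

log_or = np.array([0.35, 0.10, 0.55, 0.20])   # per-study log ORs (hypothetical)
se = np.array([0.15, 0.20, 0.25, 0.10])       # their standard errors (hypothetical)

w_fixed = 1.0 / se**2
pooled_fixed = np.sum(w_fixed * log_or) / np.sum(w_fixed)
Q = np.sum(w_fixed * (log_or - pooled_fixed) ** 2)
k = len(log_or)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)))

w_rand = 1.0 / (se**2 + tau2)                  # random-effects weights
pooled = np.sum(w_rand * log_or) / np.sum(w_rand)
se_pooled = np.sqrt(1.0 / np.sum(w_rand))
ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
i2 = max(0.0, (Q - (k - 1)) / Q) * 100         # I² heterogeneity statistic

print(f"Pooled OR = {np.exp(pooled):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), I² = {i2:.0f}%")
```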

12 pages, 1436 KB  
Article
Enhancing Lesion Detection in Rat CT Images: A Deep Learning-Based Super-Resolution Study
by Sungwon Ham, Sang Hoon Jeong, Hong Lee, Yoon Jeong Nam, Hyejin Lee, Jin Young Choi, Yu-Seon Lee, Yoon Hee Park, Su A Park, Wooil Kim, Hangseok Choi, Haewon Kim, Ju-Han Lee and Cherry Kim
Biomedicines 2025, 13(10), 2421; https://doi.org/10.3390/biomedicines13102421 - 3 Oct 2025
Abstract
Background/Objectives: Preclinical chest computed tomography (CT) imaging in small animals is often limited by low resolution due to scan time and dose constraints, which hinders accurate detection of subtle lesions. Traditional super-resolution (SR) metrics, such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), may not adequately reflect clinical interpretability. We aimed to evaluate whether deep learning-based SR models could enhance image quality and lesion detectability in rat chest CT, balancing quantitative metrics with radiologist assessment. Methods: We retrospectively analyzed 222 chest CT scans acquired from polyhexamethylene guanidine phosphate (PHMG-p) exposure studies in Sprague Dawley rats. Three SR models were implemented and compared: single-image SR (SinSR), segmentation-guided SinSR with lung cropping (SinSR3), and omni-super-resolution (OmniSR). Models were trained on rat CT data and evaluated using PSNR and SSIM. Two board-certified thoracic radiologists independently performed blinded evaluations of lesion margin clarity, nodule detectability, image noise, artifacts, and overall image quality. Results: SinSR1 achieved the highest PSNR (33.64 ± 1.30 dB), while SinSR3 showed the highest SSIM (0.72 ± 0.08). Despite lower PSNR (29.21 ± 1.46 dB), OmniSR received the highest radiologist ratings for lesion margin clarity, nodule detectability, and overall image quality (mean score 4.32 ± 0.41, κ = 0.74). Reader assessments diverged from PSNR and SSIM, highlighting the limited correlation between conventional metrics and clinical interpretability. Conclusions: Deep learning-based SR improved visualization of rat chest CT images, with OmniSR providing the most clinically interpretable results despite modest numerical scores. These findings underscore the need for reader-centered evaluation when applying SR techniques to preclinical imaging. Full article
(This article belongs to the Section Molecular and Translational Medicine)
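
The two conventional metrics the readers' scores diverged from are easy to compute; the snippet below shows how PSNR and SSIM are typically evaluated between a restored slice and its reference, using random placeholder arrays rather than the study's CT data:

```python
# Compute PSNR and SSIM between a reference image and a (synthetically perturbed) restoration.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(256, 256).astype(np.float32)
restored = np.clip(reference + 0.05 * np.random.randn(256, 256).astype(np.float32), 0, 1)

psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)
ssim = structural_similarity(reference, restored, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```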

38 pages, 5753 KB  
Article
EfficientNet-B3-Based Automated Deep Learning Framework for Multiclass Endoscopic Bladder Tissue Classification
by A. A. Abd El-Aziz, Mahmood A. Mahmood and Sameh Abd El-Ghany
Diagnostics 2025, 15(19), 2515; https://doi.org/10.3390/diagnostics15192515 - 3 Oct 2025
Abstract
Background: Bladder cancer (BLCA) is a malignant growth that originates from the urothelial lining of the urinary bladder. Diagnosing BLCA is complex due to the variety of tumor features and its heterogeneous nature, which leads to significant morbidity and mortality. Understanding tumor histopathology is crucial for developing tailored therapies and improving patient outcomes. Objectives: Early diagnosis and treatment are essential to lower the mortality rate associated with bladder cancer. Manual classification of muscular tissues by pathologists is labor-intensive and relies heavily on experience, which can result in interobserver variability due to the similarities in cancerous cell morphology. Traditional methods for analyzing endoscopic images are often time-consuming and resource-intensive, making it difficult to efficiently identify tissue types. Therefore, there is a strong demand for a fully automated and reliable system for classifying smooth muscle images. Methods: This paper proposes a deep learning (DL) technique utilizing the EfficientNet-B3 model and a five-fold cross-validation method to assist in the early detection of BLCA. This model enables timely intervention and improved patient outcomes while streamlining the diagnostic process, ultimately reducing both time and costs for patients. We conducted experiments using the Endoscopic Bladder Tissue Classification (EBTC) dataset for multiclass classification tasks. The dataset was preprocessed using resizing and normalization methods to ensure consistent input. In-depth experiments were carried out utilizing the EBTC dataset, along with ablation studies to evaluate the best hyperparameters. A thorough statistical analysis and comparisons with five leading DL models—ConvNeXtBase, DenseNet-169, MobileNet, ResNet-101, and VGG-16—showed that the proposed model outperformed the others. Conclusions: The EfficientNet-B3 model achieved impressive results: accuracy of 99.03%, specificity of 99.30%, precision of 97.95%, recall of 96.85%, and an F1-score of 97.36%. These findings indicate that the EfficientNet-B3 model demonstrates significant potential in accurately and efficiently diagnosing BLCA. Its high performance and ability to reduce diagnostic time and cost make it a valuable tool for clinicians in the field of oncology and urology. Full article
(This article belongs to the Special Issue AI and Big Data in Medical Diagnostics)
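
As a rough outline of the model and evaluation setup (not the authors' training configuration), an EfficientNet-B3 classifier head can be swapped for the tissue classes and wrapped in a stratified five-fold split; the class count below is an assumption:

```python
# EfficientNet-B3 with a replaced classification head, evaluated with stratified 5-fold CV.
import torch.nn as nn
from torchvision.models import efficientnet_b3, EfficientNet_B3_Weights
from sklearn.model_selection import StratifiedKFold

NUM_CLASSES = 4  # EBTC tissue classes (assumed)

def build_model() -> nn.Module:
    model = efficientnet_b3(weights=EfficientNet_B3_Weights.IMAGENET1K_V1)
    model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CLASSES)
    return model

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
# for fold, (train_idx, val_idx) in enumerate(skf.split(image_paths, labels)):
#     model = build_model()
#     ...train on train_idx, then report accuracy/specificity/precision/recall/F1 on val_idx...
```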
36 pages, 9762 KB  
Article
Mineral Prospectivity Mapping for Exploration Targeting of Porphyry Cu-Polymetallic Deposits Based on Machine Learning Algorithms, Remote Sensing and Multi-Source Geo-Information
by Jialiang Tang, Hongwei Zhang, Ru Bai, Jingwei Zhang and Tao Sun
Minerals 2025, 15(10), 1050; https://doi.org/10.3390/min15101050 - 3 Oct 2025
Abstract
Machine learning (ML) algorithms have promoted the development of predictive modeling of mineral prospectivity, enabling data-driven decision-making processes by integrating multi-source geological information, leading to efficient and accurate prediction of mineral exploration targets. However, it is challenging to conduct ML-based mineral prospectivity mapping (MPM) in under-explored areas where scarce data are available. In this study, the Narigongma district of the Qiangtang block in the Himalayan–Tibetan orogen was chosen as a case study. Five typical alterations related to porphyry mineralization in the study area, namely pyritization, sericitization, silicification, chloritization and propylitization, were extracted by remote sensing interpretation to enrich the data source for MPM. The extracted alteration evidence, combined with geological, geophysical and geochemical multi-source information, was employed to train the ML models. Four machine learning models, including artificial neural network (ANN), random forest (RF), support vector machine and logistic regression, were employed to map the Cu-polymetallic prospectivity in the study area. The predictive performances of the models were evaluated through confusion matrix-based indices and success-rate curves. The results show that the classification accuracies of the four models all exceed 85%, among which the ANN model achieves the highest accuracy of 96.43% and a leading Kappa value of 92.86%. In terms of predictive efficiency, the RF model outperforms the other models, capturing 75% of the mineralization sites within only 3.5% of the predicted area. A total of eight exploration targets were delineated upon a comprehensive assessment of all ML models, and these targets were further ranked based on the verification of high-resolution geochemical anomalies and evaluation of the transportation condition. The interpretability analyses emphasize the key roles of spatial proxies of porphyry intrusions and geochemical exploration in model prediction, as well as significant influences exerted by pyritization and chloritization, which accords well with the established knowledge about porphyry mineral systems in the study area. The findings of this study provide a robust ML-based framework for exploration targeting in greenfield areas with good outcrops but low exploration extent, where the fusion of remote sensing techniques and multi-source geo-information serves as an effective exploration strategy. Full article
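
The "75% of sites within 3.5% of the area" figure comes from a success-rate curve, which is straightforward to compute from a predicted prospectivity map and known occurrence locations. The sketch below is generic (input arrays are placeholders):

```python
# Success-rate curve: fraction of known mineralization sites captured vs. fraction of area flagged.
import numpy as np

def success_rate_curve(scores: np.ndarray, is_deposit: np.ndarray):
    """scores: per-cell prospectivity; is_deposit: 1 where a known occurrence falls, else 0."""
    order = np.argsort(-scores)                        # most prospective cells first
    hits = np.cumsum(is_deposit[order]) / is_deposit.sum()
    area = np.arange(1, len(scores) + 1) / len(scores)
    return area, hits

# area, hits = success_rate_curve(rf_probability_map.ravel(), deposit_mask.ravel())
# idx = np.searchsorted(hits, 0.75)
# print(f"75% of known sites captured within {100 * area[idx]:.1f}% of the area")
```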
17 pages, 6267 KB  
Article
Local and Remote Digital Pre-Distortion for 5G Power Amplifiers with Safe Deep Reinforcement Learning
by Christian Spano, Damiano Badini, Lorenzo Cazzella and Matteo Matteucci
Sensors 2025, 25(19), 6102; https://doi.org/10.3390/s25196102 - 3 Oct 2025
Abstract
The demand for higher data rates and energy efficiency in wireless communication systems drives power amplifiers (PAs) into nonlinear operation, causing signal distortions that hinder performance. Digital Pre-Distortion (DPD) addresses these distortions, but existing systems face challenges with complexity, adaptability, and resource limitations. This paper introduces DRL-DPD, a Deep Reinforcement Learning-based solution for DPD that aims to reduce computational burden, improve adaptation to dynamic environments, and minimize resource consumption. To ensure safety and regulatory compliance, we integrate an ad-hoc Safe Reinforcement Learning algorithm, CRE-DDPG (Cautious-Recoverable-Exploration Deep Deterministic Policy Gradient), which prevents ACLR measurements from falling below safety thresholds. Simulations and hardware experiments demonstrate the potential of DRL-DPD with CRE-DDPG to surpass current DPD limitations in both local and remote configurations, paving the way for more efficient communication systems, especially in the context of 5G and beyond. Full article

13 pages, 1292 KB  
Article
Development and Internal Validation of Machine Learning Algorithms to Predict 30-Day Readmission in Patients Undergoing a C-Section: A Nation-Wide Analysis
by Audrey Andrews, Nadia Islam, George Bcharah, Hend Bcharah and Misha Pangasa
J. Pers. Med. 2025, 15(10), 476; https://doi.org/10.3390/jpm15100476 - 2 Oct 2025
Abstract
Background/Objectives: Cesarean section (C-section) is a common surgical procedure associated with an increased risk of 30-day postpartum hospital readmissions. This study utilized machine learning (ML) to predict readmissions using a nationwide database. Methods: A retrospective analysis of the National Surgical Quality Improvement Project (2012–2022) included 54,593 patients who underwent C-sections. Random Forests (RF) and Extreme Gradient Boosting (XGBoost) models were developed and compared to logistic regression (LR) using demographic, preoperative, and perioperative data. Results: Of the cohort, 1306 (2.39%) patients were readmitted. Readmitted patients were more likely to be of African American race (17.99% vs. 9.83%) and had higher rates of diabetes (11.03% vs. 8.19%) and hypertension (11.49% vs. 4.68%) (p < 0.001). RF achieved the highest performance (AUC = 0.737, sensitivity = 72.03%, specificity = 61.33%), and a preoperative-only RF model achieved a sensitivity of 83.14%. Key predictors included age, BMI, operative time, white blood cell count, and hematocrit. Conclusions: ML effectively predicts C-section readmissions, supporting early identification and interventions to improve patient outcomes and reduce healthcare costs. Full article
(This article belongs to the Special Issue Advances in Prenatal Diagnosis and Maternal Fetal Medicine)
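
The model comparison reported here follows a common tabular-ML recipe. A minimal sketch (feature names, split, and hyperparameters are placeholders, not the study's configuration) of comparing LR, RF, and XGBoost by AUC:

```python
# Train three classifiers on perioperative features and compare them by AUROC.
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

models = {
    "LR": LogisticRegression(max_iter=2000),
    "RF": RandomForestClassifier(n_estimators=400, class_weight="balanced"),
    "XGB": XGBClassifier(n_estimators=400, learning_rate=0.05, scale_pos_weight=40),
}

# X: age, BMI, operative time, WBC count, hematocrit, ...; y: 30-day readmission (~2.4% positive)
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2, random_state=0)
# for name, model in models.items():
#     model.fit(X_tr, y_tr)
#     print(name, roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```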

25 pages, 6498 KB  
Article
SCPL-TD3: An Intelligent Evasion Strategy for High-Speed UAVs in Coordinated Pursuit-Evasion
by Xiaoyan Zhang, Tian Yan, Tong Li, Can Liu, Zijian Jiang and Jie Yan
Drones 2025, 9(10), 685; https://doi.org/10.3390/drones9100685 - 2 Oct 2025
Abstract
The rapid advancement of kinetic pursuit technologies has significantly increased the difficulty of evasion for high-speed UAVs (HSUAVs), particularly in scenarios where two collaboratively operating pursuers approach from the same direction with optimized initial space intervals. This paper begins by deriving an optimal initial space interval to enhance cooperative pursuit effectiveness and introduces an evasion difficulty classification framework, thereby providing a structured approach for evaluating and optimizing evasion strategies. Based on this, an intelligent maneuver evasion strategy using semantic classification progressive learning with twin delayed deep deterministic policy gradient (SCPL-TD3) is proposed to address the challenging scenarios identified through the analysis. Training efficiency is enhanced by the proposed SCPL-TD3 algorithm through the employment of progressive learning to dynamically adjust training complexity and the integration of semantic classification to guide the learning process via meaningful state-action pattern recognition. Built upon the twin delayed deep deterministic policy gradient framework, the algorithm further enhances both stability and efficiency in complex environments. A specially designed reward function is incorporated to balance evasion performance with mission constraints, ensuring the fulfillment of HSUAV’s operational objectives. Simulation results demonstrate that the proposed approach significantly improves training stability and evasion effectiveness, achieving a 97.04% success rate and a 7.10–14.85% improvement in decision-making speed. Full article
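
For orientation, SCPL-TD3 builds on TD3, whose defining step is the clipped double-Q target. The fragment below shows that textbook target computation only; the networks and replay batch are assumed to exist, and nothing here reproduces the paper's semantic classification or progressive-learning machinery:

```python
# Clipped double-Q target used by TD3: smoothed target action, then the minimum of two critics.
import torch

def td3_target(reward, next_state, done, actor_target, critic1_target, critic2_target,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, max_action=1.0):
    with torch.no_grad():
        noise = (torch.randn_like(actor_target(next_state)) * noise_std).clamp(-noise_clip, noise_clip)
        next_action = (actor_target(next_state) + noise).clamp(-max_action, max_action)
        q1 = critic1_target(next_state, next_action)
        q2 = critic2_target(next_state, next_action)
        return reward + gamma * (1.0 - done) * torch.min(q1, q2)
```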

14 pages, 879 KB  
Article
Predicting Factors Associated with Extended Hospital Stay After Postoperative ICU Admission in Hip Fracture Patients Using Statistical and Machine Learning Methods: A Retrospective Single-Center Study
by Volkan Alparslan, Sibel Balcı, Ayetullah Gök, Can Aksu, Burak İnner, Sevim Cesur, Hadi Ufuk Yörükoğlu, Berkay Balcı, Pınar Kartal Köse, Veysel Emre Çelik, Serdar Demiröz and Alparslan Kuş
Healthcare 2025, 13(19), 2507; https://doi.org/10.3390/healthcare13192507 - 2 Oct 2025
Abstract
Background: Hip fractures are common in the elderly and often require ICU admission post-surgery due to high ASA scores and comorbidities. Length of hospital stay after ICU is a crucial indicator affecting patient recovery, complication rates, and healthcare costs. This study aimed to develop and validate a machine learning-based model to predict the factors associated with extended hospital stay (>7 days from surgery to discharge) in hip fracture patients requiring postoperative ICU care. The findings could help clinicians optimize ICU bed utilization and improve patient management strategies. Methods: In this retrospective single-centre cohort study conducted in a tertiary ICU in Turkey (2017–2024), 366 ICU-admitted hip fracture patients were analysed. Conventional statistical analyses were performed using SPSS 29, including Mann–Whitney U and chi-squared tests. To identify independent predictors associated with extended hospital stay, Least Absolute Shrinkage and Selection Operator (LASSO) regression was applied for variable selection, followed by multivariate binary logistic regression analysis. In addition, machine learning models (binary logistic regression, random forest (RF), extreme gradient boosting (XGBoost) and decision tree (DT)) were trained to predict the likelihood of extended hospital stay, defined as the total number of days from the date of surgery until hospital discharge, including both ICU and subsequent ward stay. Model performance was evaluated using AUROC, F1 score, accuracy, precision, recall, and Brier score. SHAP (SHapley Additive exPlanations) values were used to interpret feature contributions in the XGBoost model. Results: The XGBoost model showed the best performance, except for precision. The XGBoost model gave an AUROC of 0.80, precision of 0.67, recall of 0.92, F1 score of 0.78, accuracy of 0.71 and Brier score of 0.18. According to SHAP analysis, time from fracture to surgery, hypoalbuminaemia and ASA score were the variables that most affected the length of stay of hospitalisation. Conclusions: The developed machine learning model successfully classified hip fracture patients into short and extended hospital stay groups following postoperative intensive care. This classification model has the potential to aid in patient flow management, resource allocation, and clinical decision support. External validation will further strengthen its applicability across different settings. Full article
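
The variable-selection and interpretation steps in the Methods can be sketched with standard tooling: an L1-penalized logistic regression to shortlist predictors, XGBoost on the retained columns, and SHAP for attribution. Column names and parameters below are invented placeholders, not the study's analysis code:

```python
# LASSO-style feature selection, XGBoost classification, and SHAP attribution (sketch).
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

# X: candidate predictors (time from fracture to surgery, albumin, ASA score, ...); y: stay > 7 days
# lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
# selected = X.columns[np.abs(lasso.coef_).ravel() > 0]

# model = XGBClassifier(n_estimators=300).fit(X[selected], y)
# explainer = shap.TreeExplainer(model)
# shap_values = explainer.shap_values(X[selected])
# shap.summary_plot(shap_values, X[selected])
```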