Search Results (97)

Search Parameters:
Keywords = approximate learning entropy

16 pages, 1560 KB  
Article
Performance Comparison of U-Net and Its Variants for Carotid Intima–Media Segmentation in Ultrasound Images
by Seungju Jeong, Minjeong Park, Sumin Jeong and Dong Chan Park
Diagnostics 2026, 16(1), 2; https://doi.org/10.3390/diagnostics16010002 - 19 Dec 2025
Viewed by 260
Abstract
Background/Objectives: This study systematically compared the performance of U-Net and its variants for automatic analysis of carotid intima–media thickness (CIMT) in ultrasound images, focusing on segmentation accuracy and real-time efficiency. Methods: Ten models were trained and evaluated using the publicly available Carotid Ultrasound Boundary Study (CUBS) dataset (2176 images from 1088 subjects). Images were preprocessed using histogram-based smoothing and resized to a resolution of 256 × 256 pixels. Model training was conducted using identical hyperparameters (50 epochs, batch size 8, Adam optimizer with a learning rate of 1 × 10⁻⁴, and binary cross-entropy loss). Segmentation accuracy was assessed using Dice, Intersection over Union (IoU), Precision, Recall, and Accuracy metrics, while real-time performance was evaluated based on training/inference times and model parameter counts. Results: All models achieved high accuracy, with Dice/IoU scores above 0.80/0.67. Attention U-Net achieved the highest segmentation accuracy, while UNeXt demonstrated the fastest training/inference speeds (approximately 420,000 parameters). Qualitatively, UNet++ produced smooth and natural boundaries, highlighting its strength in boundary reconstruction. Additionally, the relationship between model parameter count and Dice performance was visualized to illustrate the tradeoff between accuracy and efficiency. Conclusions: This study provides a quantitative/qualitative evaluation of the accuracy, efficiency, and boundary reconstruction characteristics of U-Net-based models for CIMT segmentation, offering guidance for model selection according to clinical requirements (accuracy vs. real-time performance). Full article
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)
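The 0.80/0.67 Dice/IoU thresholds quoted above are two views of the same overlap: for a single mask pair, IoU = Dice / (2 − Dice), so Dice = 0.80 corresponds to IoU ≈ 0.67. A minimal numpy sketch of both metrics (the helper name `dice_iou` is illustrative, not from the paper):

```python
import numpy as np

def dice_iou(pred, target, eps=1e-7):
    """Dice coefficient and IoU for binary segmentation masks (0/1 arrays)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return float(dice), float(iou)
```

For example, masks overlapping in one of three occupied pixels give Dice = 0.5 and IoU = 1/3, consistent with IoU = Dice / (2 − Dice).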

25 pages, 7436 KB  
Article
Assessing the Functional–Efficiency Mismatch of Territorial Space Using Explainable Machine Learning: A Case Study of Quanzhou, China
by Zehua Ke, Wei Wei, Mengyao Hong, Junnan Xia and Liming Bo
Land 2025, 14(12), 2403; https://doi.org/10.3390/land14122403 - 11 Dec 2025
Viewed by 256
Abstract
As the foundational carrier of socio-economic development and ecological security, territorial space reflects the degree of coordination between functional structure and efficiency output. However, most existing evaluation methods overlook the heterogeneous functional endowments of spatial units and therefore cannot reasonably assess the efficiency that each unit should achieve under comparable conditions. To address this limitation, this study proposes a function-oriented and interpretable framework for territorial spatial efficiency evaluation based on the Production–Living–Ecological (PLE) paradigm. An entropy-weighted indicator system is constructed to measure production, living, and ecological efficiency, and an XGBoost–SHAP model is developed to infer the nonlinear mapping between functional attributes and efficiency performance and to estimate the ideal efficiency of each spatial unit under Quanzhou’s prevailing macro-environment. By comparing ideal and observed efficiency, functional–efficiency deviations are identified and spatially diagnosed. The results show that territorial efficiency exhibits strong spatial heterogeneity: production and living efficiency concentrate in the southeastern coastal belt, whereas ecological efficiency dominates in the northwestern mountainous region. The mechanisms differ substantially across dimensions. Production efficiency is primarily driven by neighborhood living and productive conditions; living efficiency is dominated by structural inheritance and strengthened by service-related spillovers; and ecological efficiency depends overwhelmingly on local ecological endowments with additional neighborhood synergy. Approximately 45% of spatial units achieve functional–efficiency alignment, while peri-urban transition zones and hilly areas present significant negative deviations. 
This study advances territorial efficiency research by linking functional structure to efficiency generation through explainable machine learning, providing an interpretable analytical tool and actionable guidance for place-based spatial optimization and high-quality territorial governance. Full article
(This article belongs to the Special Issue Land Space Optimization and Governance)
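The entropy-weighted indicator system mentioned above typically follows the standard entropy weight method: indicators whose values are spread more unevenly across spatial units carry more information and receive larger weights. A sketch under that assumption (the paper's exact normalization steps are not given):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: X is (n_units, n_indicators), non-negative,
    larger-is-better. Indicators with more dispersion get larger weights."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    P = X / X.sum(axis=0, keepdims=True)       # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(n)    # normalized entropy per indicator
    d = 1.0 - e                                # degree of divergence
    return d / d.sum()
```

A perfectly uniform indicator has normalized entropy 1 and thus weight 0; all weight shifts to indicators that actually discriminate between units.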

28 pages, 4585 KB  
Article
Uncertainty-Aware Adaptive Intrusion Detection Using Hybrid CNN-LSTM with cWGAN-GP Augmentation and Human-in-the-Loop Feedback
by Clinton Manuel de Nascimento and Jin Hou
Safety 2025, 11(4), 120; https://doi.org/10.3390/safety11040120 - 5 Dec 2025
Viewed by 486
Abstract
Intrusion detection systems (IDSs) must operate under severe class imbalance, evolving attack behavior, and the need for calibrated decisions that integrate smoothly with security operations. We propose a human-in-the-loop IDS that combines a convolutional neural network and a long short-term memory network (CNN–LSTM) classifier with a variational autoencoder (VAE)-seeded conditional Wasserstein generative adversarial network with gradient penalty (cWGAN-GP) augmentation and entropy-based abstention. Minority classes are reinforced offline via conditional generative adversarial (GAN) sampling, whereas high-entropy predictions are escalated for analysts and are incorporated into a curated retraining set. On CIC-IDS2017, the resulting framework delivered well-calibrated binary performance (ACC = 98.0%, DR = 96.6%, precision = 92.1%, F1 = 94.3%; baseline ECE ≈ 0.04, Brier ≈ 0.11) and substantially improved minority recall (e.g., Infiltration from 0% to >80%, Web Attack–XSS +25 pp, and DoS Slowhttptest +15 pp, for an overall +11 pp macro-recall gain). The deployed model remained lightweight (~42 MB, <10 ms per batch; ≈32 k flows/s on RTX-3050 Ti), and only approximately 1% of the flows were routed for human review. Extensive evaluation, including ROC/PR sweeps, reliability diagrams, cross-domain tests on CIC-IoT2023, and FGSM/PGD adversarial stress, highlights both the strengths and remaining limitations, notably residual errors on rare web attacks and limited IoT transfer. Overall, the framework provides a practical, calibrated, and extensible machine learning (ML) tier for modern IDS deployment and motivates future research on domain alignment and adversarial defense. Full article
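Entropy-based abstention, as described, escalates predictions whose softmax distribution is too flat for a confident automatic decision. A minimal sketch (the threshold value and helper name are illustrative):

```python
import numpy as np

def entropy_abstain(probs, threshold=0.3):
    """Split predictions into auto-decided vs. analyst-review index sets based
    on the Shannon entropy (nats) of each softmax output row."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    h = -(p * np.log(p)).sum(axis=1)
    review = h > threshold
    return np.where(~review)[0], np.where(review)[0]
```

A confident prediction like (0.99, 0.01) has entropy ≈ 0.056 nats and is auto-decided, while a flat (0.5, 0.5) output (entropy ln 2 ≈ 0.693) is routed for human review.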

18 pages, 1993 KB  
Article
Prediction, Uncertainty Quantification, and ANN-Assisted Operation of Anaerobic Digestion Guided by Entropy Using Machine Learning
by Zhipeng Zhuang, Xiaoshan Liu, Jing Jin, Ziwen Li, Yanheng Liu, Adriano Tavares and Dalin Li
Entropy 2025, 27(12), 1233; https://doi.org/10.3390/e27121233 - 5 Dec 2025
Viewed by 261
Abstract
Anaerobic digestion (AD) is a nonlinear and disturbance-sensitive process in which instability is often induced by feedstock variability and biological fluctuations. To address this challenge, this study develops an entropy-guided machine learning framework that integrates parameter prediction, uncertainty quantification, and entropy-based evaluation of AD operation. Using six months of industrial data (~10,000 samples), three models—support vector machine (SVM), random forest (RF), and artificial neural network (ANN)—were compared for predicting biogas yield, fermentation temperature, and volatile fatty acid (VFA) concentration. The ANN achieved the highest performance (accuracy = 96%, F1 = 0.95, root mean square error (RMSE) = 1.2 m³/t) and also exhibited the lowest prediction error entropy, indicating reduced uncertainty compared to RF and SVM. Feature entropy and permutation analysis consistently identified feed solids, organic matter, and feed rate as the most influential variables (>85% contribution), in agreement with the RF importance ranking. When applied as a real-time prediction and decision-support tool in the plant (“sensor → prediction → programmable logic controller (PLC)/operation → feedback”), the ANN model was associated with a reduction in gas-yield fluctuation from approximately ±18% to ±5%, a decrease in process entropy, and an improvement in operational stability of about 23%. Techno-economic and life-cycle assessments further indicated a 12–15 USD/t lower operating cost, 8–10% energy savings, and 5–7% CO₂ reduction compared with baseline operation. Overall, this study demonstrates that combining machine learning with entropy-based uncertainty analysis offers a reliable and interpretable pathway for more stable and low-carbon AD operation. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications, 2nd Edition)
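The "prediction error entropy" compared across models can be estimated as the Shannon entropy of a histogram of residuals; the paper does not specify its exact estimator, so this is one common choice:

```python
import numpy as np

def error_entropy(residuals, bins=20):
    """Shannon entropy (nats) of a histogram of prediction errors; lower
    values indicate a more concentrated, less uncertain error profile."""
    counts, _ = np.histogram(np.asarray(residuals, dtype=float), bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]  # 0 * log(0) contributes nothing
    return float(-(p * np.log(p)).sum())
```

A model whose residuals pile into a single bin scores 0; errors split evenly over k bins score ln k, the maximum for that bin count.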

24 pages, 1694 KB  
Systematic Review
Advanced Clustering for Mobile Network Optimization: A Systematic Literature Review
by Claude Mukatshung Nawej, Pius Adewale Owolawi and Tom Mmbasu Walingo
Sensors 2025, 25(23), 7370; https://doi.org/10.3390/s25237370 - 4 Dec 2025
Viewed by 457
Abstract
5G technology represents a transformative shift in mobile communications, delivering ultra-low latency, higher data throughput, and the capacity to support massive device connectivity, surpassing the capabilities of LTE systems. As global telecommunication operators shift toward widespread 5G implementation, ensuring optimal network performance and intelligent resource management has become increasingly critical. To address these challenges, this study explored the role of advanced clustering methods in optimizing cellular networks under heterogeneous and dynamic conditions. A systematic literature review (SLR) was conducted by analyzing 40 peer-reviewed and non-peer-reviewed studies selected from an initial collection of 500 papers retrieved from the Semantic Scholar Open Research Corpus. This review examines a diversity of clustering approaches, including spectral clustering with Bayesian non-parametric models and K-means, density-based clustering such as DBSCAN, and deep representation-based methods like Differential Evolution Memetic Clustering (DEMC) and Domain Adaptive Neighborhood Clustering via Entropy Optimization (DANCE). Key performance outcomes reported across studies include anomaly detection accuracy of up to 98.8%, delivery rate improvements of up to 89.4%, and handover prediction accuracy improvements of approximately 43%, particularly when clustering techniques are combined with machine learning models. In addition to summarizing their effectiveness, this review highlights methodological trends in clustering parameters, mechanisms, experimental setups, and quality metrics. The findings suggest that advanced clustering models play a crucial role in intelligent spectrum sensing, adaptive mobility management, and efficient resource allocation, thereby contributing meaningfully to the development of intelligent 5G/6G mobile network infrastructures. Full article
(This article belongs to the Section Sensor Networks)

24 pages, 2583 KB  
Article
Hybrid Demand Forecasting in Fuel Supply Chains: ARIMA with Non-Homogeneous Markov Chains and Feature-Conditioned Evaluation
by Daniel Kubek and Paweł Więcek
Energies 2025, 18(22), 6044; https://doi.org/10.3390/en18226044 - 19 Nov 2025
Viewed by 539
Abstract
In the context of growing data availability and increasing complexity of demand patterns in retail fuel distribution, selecting effective forecasting models for large collections of time series is becoming a key operational challenge. This study investigates the effectiveness of a hybrid forecasting approach combining ARIMA models with dynamically updated Markov Chains. Unlike many existing studies that focus on isolated or small-scale experiments, this research evaluates the hybrid model across a full set of approximately 150 time series collected from multiple petrol stations, without pre-clustering or manual selection. A comprehensive set of statistical and structural features is extracted from each time series to analyze their relation to forecast performance. The results show that the hybrid ARIMA–Markov approach significantly outperforms both individual statistical models and commonly applied machine learning methods in many cases, particularly for non-stationary or regime-shifting series. In 100% of cases, the hybrid model reduced the error compared to both baseline models—the median RMSE improvement over ARIMA was 13.03%, and 15.64% over the Markov model, with statistical significance confirmed by the Wilcoxon signed-rank test. The analysis also highlights specific time series features—such as entropy, regime shift frequency, and autocorrelation structure—as strong indicators of whether hybrid modeling yields performance gains. Feature-conditioning analyses (e.g., lag-1 autocorrelation, volatility, entropy) explain when hybridization helps, enabling a feature-aware workflow that selectively deploys model components and narrows parameter searches. 
The greatest benefits of applying the hybrid model were observed for time series characterized by high variability, moderate entropy of differences, and a well-defined temporal dependency structure—the correlation values between these features and the improvement in hybrid performance relative to ARIMA and Markov models reached 0.55–0.58, ensuring adequate statistical significance. Such approaches are particularly valuable in enterprise environments dealing with thousands of time series, where automated model configuration becomes essential. The findings position interpretable, adaptive hybrids as a practical default for short-horizon demand forecasting in fuel supply chains and, more broadly, in energy-use applications characterized by heterogeneous profiles and evolving regimes. Full article
(This article belongs to the Section A: Sustainable Energy)
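The "dynamically updated Markov chain" component can be sketched as an exponentially forgetting transition-count matrix over discretized demand states; the state definition and decay factor here are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def markov_update(counts, state_seq, decay=0.95):
    """One refresh of a non-homogeneous Markov chain: exponentially forget old
    transition counts, add newly observed transitions, re-normalize each row.
    counts: (k, k) running transition counts; state_seq: new state indices."""
    counts = counts * decay
    for a, b in zip(state_seq[:-1], state_seq[1:]):
        counts[a, b] += 1.0
    row = counts.sum(axis=1, keepdims=True)
    k = counts.shape[1]
    # Unvisited states fall back to a uniform row.
    T = np.where(row > 0, counts / np.maximum(row, 1e-12), 1.0 / k)
    return T, counts
```

Repeating this update each period lets the transition probabilities track regime shifts, which is why the hybrid helps most on non-stationary series.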

32 pages, 2788 KB  
Article
Improving Variable-Rate Learned Image Compression with Transformer-Based QR Prediction and Perceptual Optimization
by Yong-Hwan Lee and Wan-Bum Lee
Appl. Sci. 2025, 15(22), 12151; https://doi.org/10.3390/app152212151 - 16 Nov 2025
Viewed by 1152
Abstract
We present a variable-rate learned image compression (LIC) model that integrates Transformer-based quantization–reconstruction (QR) offset prediction, entropy-guided hyper-latent quantization, and perceptually informed multi-objective optimization. Unlike existing LIC frameworks that train separate networks for each bitrate, the proposed method achieves continuous rate adaptation within a single model by dynamically balancing rate, distortion, and perceptual objectives. Channel-wise asymmetric quantization and a composite loss combining MSE and LPIPS further enhance reconstruction fidelity and subjective quality. Experiments on the Kodak, CLIC2020, and Tecnick datasets show gains of +1.15 dB PSNR, +0.065 MS-SSIM, and −0.32 LPIPS relative to the baseline variable-rate method, while improving bitrate-control accuracy by 62.5%. With approximately 15% computational overhead, the framework achieves competitive compression efficiency and enhanced perceptual quality, offering a practical solution for adaptive, high-quality image delivery. Full article

50 pages, 837 KB  
Article
FedEHD: Entropic High-Order Descent for Robust Federated Multi-Source Environmental Monitoring
by Koffka Khan, Winston Elibox, Treina Dinoo Ramlochan, Wayne Rajkumar and Shanta Ramnath
AI 2025, 6(11), 293; https://doi.org/10.3390/ai6110293 - 14 Nov 2025
Viewed by 777
Abstract
We propose Federated Entropic High-Order Descent (FedEHD), a drop-in client optimizer that augments local SGD with (i) an entropy (sign) term and (ii) quadratic and cubic gradient components for drift control and implicit clipping. Across non-IID CIFAR-10 and CIFAR-100 benchmarks (100 clients, 10% sampled per round), FedEHD achieves faster and higher convergence than strong baselines including FedAvg, FedProx, SCAFFOLD, FedDyn, MOON, and FedAdam. On CIFAR-10, it reaches 70% accuracy in approximately 80 rounds (versus 100 for MOON and 130 for SCAFFOLD) and attains a final accuracy of 72.5%. On CIFAR-100, FedEHD surpasses 60% accuracy by about 150 rounds (compared with 250 for MOON and 300 for SCAFFOLD) and achieves a final accuracy of 68.0%. In an environmental monitoring case study involving four distributed air-quality stations, FedEHD yields the highest macro AUC/F1 and improved calibration (ECE 0.183 versus 0.186–0.210 for competing federated methods) without additional communication and with only O(d) local overhead. The method further provides scale-invariant coefficients with optional automatic adaptation, theoretical guarantees for surrogate descent and drift reduction, and convergence curves that illustrate smooth and stable learning dynamics. Full article
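The abstract describes augmenting local SGD with a sign (entropy) term plus quadratic and cubic gradient components. One plausible reading of such a client step — the coefficients and the exact form of the higher-order terms are assumptions, not the paper's formulas:

```python
import numpy as np

def ehd_step(w, grad, lr=0.01, a=1e-3, b=1e-3, c=1e-3):
    """Sketch of an entropic high-order descent update: plain gradient plus a
    sign (entropy) term and odd quadratic/cubic gradient terms. The odd
    quadratic term sign(g) * g**2 keeps every component a descent direction.
    Coefficients a, b, c are illustrative placeholders."""
    s = np.sign(grad)
    update = grad + a * s + b * s * grad**2 + c * grad**3
    return w - lr * update
```

The cost of the extra terms is elementwise, which matches the abstract's claim of only O(d) local overhead and no extra communication.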

25 pages, 1653 KB  
Article
Dynamic Heterogeneous Multi-Agent Inverse Reinforcement Learning Based on Graph Attention Mean Field
by Li Song, Irfan Ali Channa, Zeyu Wang and Guangyu Sun
Symmetry 2025, 17(11), 1951; https://doi.org/10.3390/sym17111951 - 13 Nov 2025
Viewed by 841
Abstract
Multi-agent inverse reinforcement learning (MA-IRL) infers the underlying reward functions or objectives of multiple agents by observing their behavioral data, thereby providing insights into collaboration, competition, or mixed interaction strategies among agents, and addressing the symmetrical ambiguity problem where multiple rewards may correspond to the same strategy. However, most existing algorithms focus on cooperative and non-cooperative tasks among homogeneous multi-agent systems and therefore struggle to adapt to the dynamic topologies and heterogeneous behavioral strategies of real-world applications with locally sparse interactions, such as autonomous driving, drone swarms, and robot clusters. To address this problem, this study proposes a dynamic heterogeneous multi-agent inverse reinforcement learning framework (GAMF-DHIRL) based on a graph attention mean field (GAMF) to infer the potential reward functions of agents. In GAMF-DHIRL, we introduce a graph attention mean field theory based on adversarial maximum entropy inverse reinforcement learning to dynamically model dependencies between agents and adaptively adjust the influence weights of neighboring nodes through attention mechanisms. Specifically, the GAMF module uses a dynamic adjacency matrix to capture the time-varying characteristics of the interactions among agents. Meanwhile, the typed mean-field approximation reduces computational complexity. Experiments demonstrate that the proposed method can efficiently recover reward functions of heterogeneous agents in collaborative tasks and adversarial environments, and it outperforms traditional MA-IRL methods. Full article

16 pages, 4636 KB  
Article
Radiomics for Dynamic Lung Cancer Risk Prediction in USPSTF-Ineligible Patients
by Morteza Salehjahromi, Hui Li, Eman Showkatian, Maliazurina B. Saad, Mohamed Qayati, Sherif M. Ismail, Sheeba J. Sujit, Amgad Muneer, Muhammad Aminu, Lingzhi Hong, Xiaoyu Han, Simon Heeke, Tina Cascone, Xiuning Le, Natalie Vokes, Don L. Gibbons, Iakovos Toumazis, Edwin J. Ostrin, Mara B. Antonoff, Ara A. Vaporciyan, David Jaffray, Fernando U. Kay, Brett W. Carter, Carol C. Wu, Myrna C. B. Godoy, J. Jack Lee, David E. Gerber, John V. Heymach, Jianjun Zhang and Jia Wu
Cancers 2025, 17(21), 3406; https://doi.org/10.3390/cancers17213406 - 23 Oct 2025
Viewed by 1154
Abstract
Background: Non-smokers and individuals with minimal smoking history represent a significant proportion of lung cancer cases but are often overlooked in current risk assessment models. Pulmonary nodules are commonly detected incidentally—appearing in approximately 24–31% of all chest CT scans regardless of smoking status. However, most established risk models, such as the Brock model, were developed using cohorts heavily enriched with individuals who have substantial smoking histories. This limits their generalizability to non-smoking and light-smoking populations, highlighting the need for more inclusive and tailored risk prediction strategies. Purpose: We aimed to develop a longitudinal radiomics-based approach for lung cancer risk prediction, integrating time-varying radiomic modeling to enhance early detection in USPSTF-ineligible patients. Methods: Unlike conventional models that rely on a single scan, we conducted a longitudinal analysis of 122 patients who were later diagnosed with lung cancer, with a total of 622 CT scans analyzed. Of these patients, 69% were former smokers, while 30% had never smoked. Quantitative radiomic features were extracted from serial chest CT scans to capture temporal changes in nodule evolution. A time-varying survival model was implemented to dynamically assess lung cancer risk. Additionally, we evaluated the integration of handcrafted radiomic features and the deep learning-based Sybil model to determine the added value of combining local nodule characteristics with global lung assessments. Results: Our radiomic analysis identified specific CT patterns associated with malignant transformation, including increases in nodule size, voxel intensity, and textural entropy as indicators of tumor heterogeneity and progression. 
Integrating radiomics, delta-radiomics, and longitudinal imaging features resulted in the optimal predictive performance during cross-validation (concordance index [C-index]: 0.69), surpassing that of models using demographics alone (C-index: 0.50) and Sybil alone (C-index: 0.54). Compared to the Brock model (67% accuracy, 100% sensitivity, 33% specificity), our composite risk model achieved 78% accuracy, 89% sensitivity, and 67% specificity, demonstrating improved early cancer risk stratification. Kaplan–Meier curves and individualized cancer development probability functions further validated the model’s ability to track dynamic risk progression for individual patients. Visual analysis of longitudinal CT scans confirmed alignment between predicted risk and evolving nodule characteristics. Conclusions: Our study demonstrates that integrating radiomics, Sybil, and clinical factors enhances future lung cancer risk prediction in USPSTF-ineligible patients, outperforming existing models and supporting personalized screening and early intervention strategies. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Lung Cancer)
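The C-index values reported above measure how often the model ranks patients' risks consistently with their observed outcomes. A from-scratch sketch of Harrell's concordance index (not the paper's implementation):

```python
def c_index(times, events, scores):
    """Harrell's concordance index: among comparable patient pairs (one has an
    observed event before the other's follow-up time), the fraction whose
    predicted risk ordering matches the observed ordering; score ties count 0.5."""
    num = den = 0.0
    for i in range(len(times)):
        if not events[i]:
            continue  # comparable pairs are anchored by an observed event
        for j in range(len(times)):
            if times[j] > times[i]:
                den += 1.0
                if scores[i] > scores[j]:
                    num += 1.0
                elif scores[i] == scores[j]:
                    num += 0.5
    return num / den
```

A value of 0.5 (the demographics-only model above) is chance-level ranking; 1.0 is perfect risk ordering.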

21 pages, 2556 KB  
Article
Comparison of Machine Learning Models in Nonlinear and Stochastic Signal Classification
by Elzbieta Olejarczyk and Carlo Massaroni
Appl. Sci. 2025, 15(20), 11226; https://doi.org/10.3390/app152011226 - 20 Oct 2025
Viewed by 502
Abstract
This study aims to compare different classifiers in the context of distinguishing two classes of signals: nonlinear electrocardiography (ECG) signals and stochastic artifacts occurring in ECG signals. The ECG signals from a single-lead wearable Movesense device were analyzed with a set of eight features: variance (VAR), three fractal dimension measures (Higuchi fractal dimension (HFD), Katz fractal dimension (KFD), and Detrended Fluctuation Analysis (DFA)), and four entropy measures (approximate entropy (ApEn), sample entropy (SampEn), and multiscale entropy (MSE) for scales 1 and 2). The minimum-redundancy maximum-relevance algorithm was applied for evaluation of feature importance. A broad spectrum of machine learning models was considered for classification. The proposed approach allowed for comparison of classifier features, as well as providing a broader insight into the characteristics of the signals themselves. The most important features for classification were VAR, DFA, ApEn, and HFD. The best performance among 34 classifiers was obtained using an optimized RUSBoosted Trees ensemble classifier (sensitivity, specificity, and positive and negative predictive values were 99.8%, 73.7%, 99.8%, and 74.3%, respectively). The accuracy of the Movesense device was very high (99.6%). Moreover, the multifractality of ECG during sleep was observed in the relationship between SampEn (or ApEn) and MSE. Full article
(This article belongs to the Special Issue New Advances in Electrocardiogram (ECG) Signal Processing)
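Approximate entropy (ApEn), one of the eight features used here, follows Pincus's definition: the drop in average log match-frequency when the template length grows from m to m + 1, with self-matches included. A direct numpy sketch (the tolerance default 0.2·std is a common convention, not necessarily this paper's setting):

```python
import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy: a regularity statistic; higher values indicate a
    more irregular (less predictable) signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()  # common tolerance choice for physiological signals

    def phi(mm):
        # All length-mm templates, pairwise Chebyshev distances, match fractions.
        emb = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)  # self-matches included, per the definition
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

A constant signal yields ApEn = 0, a strictly alternating one stays near 0, and noise scores much higher — the property that lets it separate ECG from stochastic artifacts.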

27 pages, 1706 KB  
Article
An End-to-End Framework for Spatiotemporal Data Recovery and Unsupervised Cluster Partitioning in Distributed PV Systems
by Bingxu Zhai, Yuanzhuo Li, Wei Qiu, Rui Zhang, Zhilin Jiang, Yinuo Zeng, Tao Qian and Qinran Hu
Processes 2025, 13(10), 3186; https://doi.org/10.3390/pr13103186 - 7 Oct 2025
Viewed by 544
Abstract
The growing penetration of distributed photovoltaic (PV) systems presents significant operational challenges for power grids, driven by the scarcity of historical data and the high spatiotemporal variability of PV generation. To address these challenges, we propose Generative Reconstruction and Adaptive Identification via Latents (GRAIL), a unified, end-to-end framework that integrates generative modeling with adaptive clustering to discover latent structures and representative scenarios in PV datasets. GRAIL operates through a closed-loop mechanism where clustering feedback guides a cluster-aware data generation process, and the resulting generative augmentation strengthens partitioning in the latent space. Evaluated on a real-world, multi-site PV dataset with a high missing data rate of 45.4%, GRAIL consistently outperforms both classical clustering algorithms and deep embedding-based methods. Specifically, GRAIL achieves a Silhouette Score of 0.969, a Calinski–Harabasz index exceeding 4.132 × 10⁶, and a Davies–Bouldin index of 0.042, demonstrating superior intra-cluster compactness and inter-cluster separation. The framework also yields a normalized entropy of 0.994, which indicates highly balanced partitioning. These results underscore that coupling data generation with clustering is a powerful strategy for expressive and robust structure learning in data-sparse environments. Notably, GRAIL achieves significant performance gains over the strongest deep learning baseline that lacks a generative component, securing the highest composite score among all evaluated methods. The framework is also computationally efficient. Its alternating optimization converges rapidly, and clustering and reconstruction metrics stabilize within approximately six iterations. Beyond quantitative performance, GRAIL produces physically interpretable clusters that correspond to distinct weather-driven regimes and capture cross-site dependencies.
These clusters serve as compact and robust state descriptors, valuable for downstream applications such as PV forecasting, dispatch optimization, and intelligent energy management in modern power systems. Full article
(This article belongs to the Section Energy Systems)
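The clustering-quality metrics the abstract reports (Silhouette, Calinski–Harabasz, Davies–Bouldin, normalized entropy) can be computed for any labeled partition. A minimal sketch using scikit-learn on synthetic stand-in data (the paper's PV dataset and GRAIL's latent embeddings are not reproduced here):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                             davies_bouldin_score)

# Synthetic stand-in for latent embeddings of PV operating scenarios
X, _ = make_blobs(n_samples=600, centers=4, cluster_std=0.4, random_state=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

sil = silhouette_score(X, labels)        # in (-1, 1]; higher is better
ch = calinski_harabasz_score(X, labels)  # higher is better
db = davies_bouldin_score(X, labels)     # lower is better

# Normalized entropy of cluster sizes: 1.0 means perfectly balanced partitions
counts = np.bincount(labels)
p = counts / counts.sum()
norm_entropy = -(p * np.log(p)).sum() / np.log(len(counts))
print(sil, ch, db, norm_entropy)
```

On well-separated blobs like these, the Silhouette Score and normalized entropy both approach 1, matching the qualitative pattern the abstract reports for GRAIL.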

21 pages, 4721 KB  
Article
Automated Brain Tumor MRI Segmentation Using ARU-Net with Residual-Attention Modules
by Erdal Özbay and Feyza Altunbey Özbay
Diagnostics 2025, 15(18), 2326; https://doi.org/10.3390/diagnostics15182326 - 13 Sep 2025
Viewed by 1239
Abstract
Background/Objectives: Accurate segmentation of brain tumors in Magnetic Resonance Imaging (MRI) scans is critical for diagnosis and treatment planning due to their life-threatening nature. This study aims to develop a robust and automated method capable of precisely delineating heterogeneous tumor regions while improving segmentation accuracy and generalization. Methods: We propose Attention Res-UNet (ARU-Net), a novel Deep Learning (DL) architecture integrating residual connections, Adaptive Channel Attention (ACA), and Dimensional-space Triplet Attention (DTA) modules. The encoding module efficiently extracts and refines relevant feature information by applying ACA to the lower layers of convolutional and residual blocks. The DTA is applied to the upper layers of the decoding module, decoupling channel weights to better extract and fuse multi-scale features, enhancing both performance and efficiency. Input MRI images are pre-processed using Contrast Limited Adaptive Histogram Equalization (CLAHE) for contrast enhancement, denoising filters, and Linear Kuwahara filtering to preserve edges while smoothing homogeneous regions. The network is trained using categorical cross-entropy loss with the Adam optimizer on the BTMRII dataset, and comparative experiments are conducted against baseline U-Net, DenseNet121, and Xception models. Performance is evaluated using accuracy, precision, recall, F1-score, Dice Similarity Coefficient (DSC), and Intersection over Union (IoU) metrics. Results: Baseline U-Net showed significant performance gains after adding residual connections and ACA modules, with DSC improving by approximately 3.3%, accuracy by 3.2%, IoU by 7.7%, and F1-score by 3.3%. ARU-Net further enhanced segmentation performance, achieving 98.3% accuracy, 98.1% DSC, 96.3% IoU, and a superior F1-score, representing additional improvements of 1.1–2.0% over the U-Net + Residual + ACA variant. 
Visualizations confirmed smoother boundaries and more precise tumor contours across all six tumor classes, highlighting ARU-Net’s ability to capture heterogeneous tumor structures and fine structural details more effectively than both baseline U-Net and other conventional DL models. Conclusions: ARU-Net, combined with an effective pre-processing strategy, provides a highly reliable and precise solution for automated brain tumor segmentation. Its improvements across multiple evaluation metrics over U-Net and other conventional models highlight its potential for clinical application and contribute novel insights to medical image analysis research. Full article
(This article belongs to the Special Issue Advances in Functional and Structural MR Image Analysis)
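The abstract does not specify the internal design of the ACA module, but channel-attention blocks of this family typically follow a squeeze-and-excitation pattern: global-average-pool each channel, pass the pooled vector through a small bottleneck, and rescale channels with a sigmoid gate. An illustrative NumPy sketch with random (untrained) weights, purely to show the data flow:

```python
import numpy as np

def channel_attention(feat, reduction=4, rng=None):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) feature map.

    Illustrative stand-in for the paper's ACA module, whose exact design is
    not given in the abstract; the bottleneck weights here are random.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    c = feat.shape[0]
    w1 = rng.standard_normal((c // reduction, c)) * 0.1  # squeeze FC
    w2 = rng.standard_normal((c, c // reduction)) * 0.1  # excite FC
    squeeze = feat.mean(axis=(1, 2))                 # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)           # bottleneck + ReLU
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid -> per-channel gate
    return feat * scale[:, None, None]               # reweight channels

feat = np.random.default_rng(1).standard_normal((8, 16, 16))
out = channel_attention(feat)
print(out.shape)  # (8, 16, 16)
```

In a trained network the gate learns to amplify informative channels and suppress the rest; here it only demonstrates the shape-preserving reweighting.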

25 pages, 1689 KB  
Article
A Data-Driven Framework for Modeling Car-Following Behavior Using Conditional Transfer Entropy and Dynamic Mode Decomposition
by Poorendra Ramlall and Subhradeep Roy
Appl. Sci. 2025, 15(17), 9700; https://doi.org/10.3390/app15179700 - 3 Sep 2025
Viewed by 1057
Abstract
Accurate modeling of car-following behavior is essential for understanding traffic dynamics and enabling predictive control in intelligent transportation systems. This study presents a novel data-driven framework that combines information-theoretic input selection via conditional transfer entropy (CTE) with dynamic mode decomposition with control (DMDc) for identifying and forecasting car-following dynamics. In the first step, CTE is employed to identify the specific vehicles that exert directional influence on a given subject vehicle, thereby systematically determining the relevant control inputs for modeling its behavior. In the second step, DMDc is applied to estimate and predict the dynamics by reconstructing the closed-form expression of the dynamical system governing the subject vehicle’s motion. Unlike conventional machine learning models that typically seek a single generalized representation across all drivers, our framework develops individualized models that explicitly preserve driver heterogeneity. Using both synthetic data from multiple traffic models and real-world naturalistic driving datasets, we demonstrate that DMDc accurately captures nonlinear vehicle interactions and achieves high-fidelity short-term predictions. Analysis of the estimated system matrices reveals that DMDc naturally approximates kinematic relationships, further reinforcing its interpretability. Importantly, this is the first study to apply DMDc to model and predict car-following behavior using real-world driving data. The proposed framework offers a computationally efficient and interpretable tool for traffic behavior analysis, with potential applications in adaptive traffic control, autonomous vehicle planning, and human-driver modeling. Full article
(This article belongs to the Section Transportation and Future Mobility)
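The DMDc step described above reduces to a least-squares regression: given state snapshots and the control inputs selected by CTE, stack them and solve for the system matrices of x_{k+1} = A x_k + B u_k. A minimal sketch on a toy linear system (not the paper's car-following data):

```python
import numpy as np

# Toy system standing in for a subject vehicle's dynamics, with u as the
# CTE-selected influencing signal (e.g., the lead vehicle's motion).
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])

n_steps = 200
X = np.zeros((2, n_steps + 1))
U = rng.standard_normal((1, n_steps))
for k in range(n_steps):
    X[:, k + 1] = A_true @ X[:, k] + (B_true @ U[:, k:k + 1]).ravel()

# DMDc core: [A B] = X' * pinv([X; U])
Omega = np.vstack([X[:, :-1], U])
G = X[:, 1:] @ np.linalg.pinv(Omega)
A_est, B_est = G[:, :2], G[:, 2:]
print(np.allclose(A_est, A_true, atol=1e-6))  # True
```

For noise-free linear data the recovery is exact; on real driving data the same regression yields the best linear approximation, which is why the paper can inspect the estimated matrices for kinematic structure.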

31 pages, 18843 KB  
Article
Liquid Adaptive AI: A Theoretical Framework for Continuously Self-Improving Artificial Intelligence
by Thomas R. Caulfield, Naeyma N. Islam and Rohit Chitale
AI 2025, 6(8), 186; https://doi.org/10.3390/ai6080186 - 14 Aug 2025
Viewed by 3738
Abstract
We present Liquid Adaptive AI as a theoretical framework and mathematical basis for artificial intelligence systems capable of continuous structural adaptation and autonomous capability development. This work explores the conceptual boundaries of adaptive AI by formalizing three interconnected mechanisms: (1) entropy-guided hyperdimensional knowledge graphs that could autonomously restructure based on information-theoretic criteria; (2) a self-development engine using hierarchical Bayesian optimization for runtime architecture modification; and (3) a federated multi-agent framework with emergent specialization through distributed reinforcement learning. We address fundamental limitations in current AI systems through mathematically formalized processes of dynamic parameter adjustment, structural self-modification, and cross-domain knowledge synthesis. While immediate implementation faces substantial computational challenges, requiring infrastructure on the scale of current large language model training facilities, we provide architectural specifications, theoretical convergence bounds, and evaluation criteria as a foundation for future research. This theoretical exploration establishes mathematical foundations for a potential new paradigm in artificial intelligence that would transition from episodic training to persistent autonomous development, offering a long-term research direction for the field. A comprehensive Supplementary Materials document provides detailed technical analysis, computational requirements, and an incremental development roadmap spanning approximately a decade. Full article
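The abstract names entropy-guided restructuring but does not spell out the criterion. One plausible building block is the Shannon entropy of a node's edge-weight distribution; the threshold and split/merge rule below are hypothetical, purely to illustrate how such a criterion could operate:

```python
import numpy as np

def edge_entropy(weights):
    """Shannon entropy (bits) of a node's outgoing-edge weight distribution."""
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical rule (not from the paper): a node whose edges are near-uniform
# (high entropy) is a candidate for splitting; a node dominated by one edge
# (low entropy) is a candidate for merging into its dominant neighbor.
uniform_node = [1.0, 1.0, 1.0, 1.0]    # entropy = 2 bits
skewed_node = [0.97, 0.01, 0.01, 0.01]  # entropy ~ 0.24 bits
threshold = 1.0
print(edge_entropy(uniform_node) > threshold)  # True  -> split candidate
print(edge_entropy(skewed_node) > threshold)   # False -> merge candidate
```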
