Search Results (101)

Search Parameters:
Keywords = approximate learning entropy

20 pages, 5063 KB  
Article
Comparative Analysis of Surrogate Models for Organic Rankine Cycle Turbine Optimization
by Yeun-Seop Kim, Jong-Beom Seo, Ho-Saeng Lee and Sang-Jo Han
Energies 2026, 19(5), 1372; https://doi.org/10.3390/en19051372 - 8 Mar 2026
Viewed by 290
Abstract
To enhance the aerodynamic performance of organic Rankine cycle (ORC) turbines under increasing energy demands, surrogate-based optimization was applied to a 100 kW ORC turbine rotor. Four representative surrogate models—a radial basis neural network (RBNN), Kriging, response surface approximation (RSA), and a PRESS-based weighted (PBW) ensemble—were comparatively evaluated under identical numerical conditions. Independent optimizations of the first- and second-stage rotors enabled an examination of how different design variable space characteristics influenced surrogate predictive behavior. A fractional factorial sampling strategy was used to construct the training dataset, and learning curve analysis was conducted to assess sample size adequacy. Sensitivity estimation revealed distinct response surface characteristics between stages, allowing the interpretation of variations in surrogate stability. In both stages, geometric modifications were primarily concentrated near the outlet blade angle, identified as a dominant variable influencing efficiency. CFD validation confirmed that surrogate-based exploration successfully identified improved rotor geometries. Flow-field analysis indicated reduced entropy generation near the trailing edge region, suggesting the mitigation of aerodynamic losses. The results demonstrate that surrogate-based optimization can reliably improve turbine performance within a bounded design space, while the relative effectiveness of surrogate models depends on the sensitivity structure of the underlying problem. Full article
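The PRESS-based weighted (PBW) ensemble idea lends itself to a compact sketch: fit several surrogates, score each by its PRESS statistic (sum of squared leave-one-out residuals), and weight predictions by inverse PRESS. The linear and quadratic toy surrogates and the sample data below are illustrative stand-ins, not the paper's RBNN, Kriging, or RSA models:

```python
import numpy as np

def fit_ls(phi, x, y):
    """Least-squares fit of y on the feature map phi(x)."""
    coef, *_ = np.linalg.lstsq(phi(x), y, rcond=None)
    return coef

def press(phi, x, y):
    """PRESS: sum of squared leave-one-out prediction errors."""
    err = 0.0
    idx = np.arange(len(x))
    for i in idx:
        coef = fit_ls(phi, x[idx != i], y[idx != i])
        err += ((phi(x[i:i + 1]) @ coef)[0] - y[i]) ** 2
    return err

# Two toy response-surface surrogates (stand-ins for the real models).
lin = lambda x: np.c_[np.ones_like(x), x]
quad = lambda x: np.c_[np.ones_like(x), x, x ** 2]

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 12)
y = 1.0 + 2.0 * x + 3.0 * x ** 2 + 0.01 * rng.standard_normal(12)

p = np.array([press(lin, x, y), press(quad, x, y)])
w = (1.0 / p) / (1.0 / p).sum()  # PBW weights: inverse PRESS, sum to 1

def pbw_predict(xq):
    """PRESS-weighted ensemble prediction at query points xq."""
    return (w[0] * (lin(xq) @ fit_ls(lin, x, y))
            + w[1] * (quad(xq) @ fit_ls(quad, x, y)))
```

Because the data are near-quadratic, the quadratic surrogate earns almost all the weight, which is exactly the behavior PBW is designed to produce.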

18 pages, 10989 KB  
Article
Ensemble Entropy with Adaptive Deep Fusion for Short-Term Power Load Forecasting
by Yiling Wang, Yan Niu, Xuejun Li, Xianglong Dai, Xiaopeng Wang, Yong Jiang, Chenghu He and Li Zhou
Entropy 2026, 28(2), 158; https://doi.org/10.3390/e28020158 - 31 Jan 2026
Viewed by 285
Abstract
Accurate power load forecasting is crucial for ensuring the safety and economic operation of power systems. However, the complex, non-stationary, and heterogeneous nature of power load data presents significant challenges for traditional prediction methods, particularly in capturing instantaneous dynamics and effectively fusing multi-feature information. This paper proposes a novel framework—Ensemble Entropy with Adaptive Deep Fusion (EEADF)—for short-term multi-feature power load forecasting. The framework introduces an ensemble instantaneous entropy extraction module to compute and fuse multiple entropy types (approximate, sample, and permutation entropies) in real-time within sliding windows, creating a sensitive representation of system states. A task-adaptive hierarchical fusion mechanism is employed to balance computational efficiency and model expressivity. For time-series forecasting tasks with relatively structured patterns, feature concatenation fusion is used that directly combines LSTM sequence features with multimodal entropy features. For complex multimodal understanding tasks requiring nuanced cross-modal interactions, multi-head self-attention fusion is implemented that dynamically weights feature importance based on contextual relevance. A dual-branch deep learning model is constructed that processes both raw sequences (via LSTM) and extracted entropy features (via MLP) in parallel. Extensive experiments on a carefully designed simulated multimodal dataset demonstrate the framework’s robustness in recognizing diverse dynamic patterns, achieving MSE of 0.0125, MAE of 0.0794, and R2 of 0.9932. Validation on the real-world ETDataset for power load forecasting confirms that the proposed method significantly outperforms baseline models (LSTM, TCN, transformer, and informer) and traditional entropy methods across standard evaluation metrics (MSE, MAE, RMSE, MAPE, and R2). 
Ablation studies further verify the critical roles of both the entropy features and the fusion mechanism. Full article
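The ensemble instantaneous entropy extraction module can be approximated in numpy; the sketch below implements two of the three entropies (permutation and sample entropy) over sliding windows, with illustrative defaults for the embedding dimension m, delay tau, tolerance r, and window sizes:

```python
import math
import numpy as np

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy of ordinal patterns of length m
    (close to 1 for a fully random ordering)."""
    x = np.asarray(x, float)
    n = len(x) - (m - 1) * tau
    pats = np.array([np.argsort(x[i:i + (m - 1) * tau + 1:tau]) for i in range(n)])
    _, counts = np.unique(pats, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(m)))

def sample_entropy(x, m=2, r=None):
    """Sample entropy sketch: -log of the conditional probability that
    sequences similar for m points stay similar for m + 1 points."""
    x = np.asarray(x, float)
    r = 0.2 * x.std() if r is None else r
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
        return ((d <= r).sum() - len(t)) / 2.0  # exclude self-matches
    a, b = matches(m + 1), matches(m)
    return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")

def windowed_entropies(x, win=100, step=50):
    """Entropy features per sliding window, one row per window."""
    return np.array([[permutation_entropy(x[s:s + win]),
                      sample_entropy(x[s:s + win])]
                     for s in range(0, len(x) - win + 1, step)])
```

A noisy signal scores higher on both measures than a smooth periodic one, which is what makes these features a sensitive representation of system state changes.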
(This article belongs to the Section Multidisciplinary Applications)

40 pages, 5081 KB  
Article
HAO-AVP: An Entropy-Gini Reinforcement Learning Assisted Hierarchical Void Repair Protocol for Underwater Wireless Sensor Networks
by Lijun Hao, Chunbo Ma and Jun Ao
Sensors 2026, 26(2), 684; https://doi.org/10.3390/s26020684 - 20 Jan 2026
Viewed by 278
Abstract
Wireless Sensor Networks (WSNs) are pivotal for data acquisition, yet reliability is severely constrained by routing voids induced by sparsity, uneven energy, and high dynamicity. To address these challenges, the Hybrid Acoustic-Optical Adaptive Void-handling Protocol (HAO-AVP) is proposed to satisfy the requirements for highly reliable communication in complex underwater environments. First, targeting uneven energy, a reinforcement learning mechanism utilizing Gini coefficient and entropy is adopted. By optimizing energy distribution, voids are proactively avoided. Second, to address routing interruptions caused by the high dynamicity of topology, a collaborative mechanism for active prediction and real-time identification is constructed. Specifically, this mechanism integrates a Markov chain energy prediction model with on-demand hop discovery technology. Through this integration, precise anticipation and rapid localization of potential void risks are achieved. Finally, to recover damaged links at the minimum cost, a four-level progressive recovery strategy, comprising intra-medium adjustment, cross-medium hopping, path backtracking, and Autonomous Underwater Vehicle (AUV)-assisted recovery, is designed. This strategy is capable of adaptively selecting recovery measures based on the severity of the void. Simulation results demonstrate that, compared with existing mainstream protocols, the void identification rate of the proposed protocol is improved by approximately 7.6%, 8.4%, 13.8%, 19.5%, and 25.3%, respectively, and the void recovery rate is increased by approximately 4.3%, 9.6%, 12.0%, 18.4%, and 24.2%, respectively. In particular, enhanced robustness and a prolonged network life cycle are exhibited in sparse and dynamic networks. Full article
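The Gini-plus-entropy signal used to steer energy balancing can be sketched as a reward term over node residual energies; the mixing weight alpha and the exact shaping below are assumptions, not the protocol's definition:

```python
import numpy as np

def gini(energies):
    """Gini coefficient of node residual energies (0 = perfectly balanced)."""
    e = np.sort(np.asarray(energies, float))
    n = len(e)
    cum = np.cumsum(e)
    return float((n + 1 - 2 * (cum / cum[-1]).sum()) / n)

def energy_entropy(energies):
    """Normalized Shannon entropy of the energy share per node (1 = even)."""
    p = np.asarray(energies, float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(energies)))

def balance_reward(energies, alpha=0.5):
    """Reward shaping term: favors low Gini and high entropy, steering
    next-hop choices away from nodes that would unbalance the network."""
    return alpha * (1.0 - gini(energies)) + (1.0 - alpha) * energy_entropy(energies)
```

An evenly charged neighborhood maximizes the reward, so a learner optimizing it proactively avoids draining individual relays into routing voids.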
(This article belongs to the Section Sensor Networks)

33 pages, 4465 KB  
Article
Environmentally Sustainable HVAC Management in Smart Buildings Using a Reinforcement Learning Framework SACEM
by Abdullah Alshammari, Ammar Ahmed E. Elhadi and Ashraf Osman Ibrahim
Sustainability 2026, 18(2), 1036; https://doi.org/10.3390/su18021036 - 20 Jan 2026
Viewed by 473
Abstract
Heating, ventilation, and air-conditioning (HVAC) systems dominate energy consumption in hot-climate buildings, where maintaining occupant comfort under extreme outdoor conditions remains a critical challenge, particularly under emerging time-of-use (TOU) electricity pricing schemes. While deep reinforcement learning (DRL) has shown promise for adaptive HVAC control, existing approaches often suffer from comfort violations, myopic decision making, and limited robustness to uncertainty. This paper proposes a comfort-first hybrid control framework that integrates Soft Actor–Critic (SAC) with a Cross-Entropy Method (CEM) refinement layer, referred to as SACEM. The framework combines data-efficient off-policy learning with short-horizon predictive optimization and safety-aware action projection to explicitly prioritize thermal comfort while minimizing energy use, operating cost, and peak demand. The control problem is formulated as a Markov Decision Process using a simplified thermal model representative of commercial buildings in hot desert climates. The proposed approach is evaluated through extensive simulation using Saudi Arabian summer weather conditions, realistic occupancy patterns, and a three-tier TOU electricity tariff. Performance is assessed against state-of-the-art baselines, including PPO, TD3, and standard SAC, using comfort, energy, cost, and peak demand metrics, complemented by ablation and disturbance-based stress tests. Results show that SACEM achieves a comfort score of 95.8%, while reducing energy consumption and operating cost by approximately 21% relative to the strongest baseline. The findings demonstrate that integrating comfort-dominant reward design with decision-time look-ahead yields robust, economically viable HVAC control suitable for deployment in hot-climate smart buildings. Full article
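The CEM refinement layer follows the generic cross-entropy method loop: sample candidate actions from a Gaussian, keep the lowest-cost elites, refit the Gaussian, and repeat. This sketch omits SACEM's safety-aware projection, and the population size, elite fraction, and iteration count are illustrative:

```python
import numpy as np

def cem_refine(cost_fn, mu0, sigma0, iters=20, pop=64, elite_frac=0.25, rng=None):
    """Cross-Entropy Method refinement of an action vector against cost_fn."""
    rng = np.random.default_rng(0) if rng is None else rng
    mu = np.array(mu0, float)
    sigma = np.array(sigma0, float)
    k = max(1, int(pop * elite_frac))
    for _ in range(iters):
        cand = rng.normal(mu, sigma, size=(pop, len(mu)))      # sample actions
        costs = np.array([cost_fn(c) for c in cand])
        elites = cand[np.argsort(costs)[:k]]                   # keep cheapest
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu
```

In the paper's setting, a short-horizon comfort-plus-energy cost evaluated on the thermal model would take the place of cost_fn, refining the actor's proposed setpoint at decision time.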

16 pages, 1560 KB  
Article
Performance Comparison of U-Net and Its Variants for Carotid Intima–Media Segmentation in Ultrasound Images
by Seungju Jeong, Minjeong Park, Sumin Jeong and Dong Chan Park
Diagnostics 2026, 16(1), 2; https://doi.org/10.3390/diagnostics16010002 - 19 Dec 2025
Viewed by 721
Abstract
Background/Objectives: This study systematically compared the performance of U-Net and variants for automatic analysis of carotid intima-media thickness (CIMT) in ultrasound images, focusing on segmentation accuracy and real-time efficiency. Methods: Ten models were trained and evaluated using a publicly available Carotid Ultrasound Boundary Study (CUBS) dataset (2176 images from 1088 subjects). Images were preprocessed using histogram-based smoothing and resized to a resolution of 256 × 256 pixels. Model training was conducted using identical hyperparameters (50 epochs, batch size 8, Adam optimizer with a learning rate of 1 × 10−4, and binary cross-entropy loss). Segmentation accuracy was assessed using Dice, Intersection over Union (IoU), Precision, Recall, and Accuracy metrics, while real-time performance was evaluated based on training/inference times and the model parameter counts. Results: All models achieved high accuracy, with Dice/IoU scores above 0.80/0.67. Attention U-Net achieved the highest segmentation accuracy, while UNeXt demonstrated the fastest training/inference speeds (approximately 420,000 parameters). Qualitatively, UNet++ produced smooth and natural boundaries, highlighting its strength in boundary reconstruction. Additionally, the relationship between the model parameter count and Dice performance was visualized to illustrate the tradeoff between accuracy and efficiency. Conclusions: This study provides a quantitative/qualitative evaluation of the accuracy, efficiency, and boundary reconstruction characteristics of U-Net-based models for CIMT segmentation, offering guidance for model selection according to clinical requirements (accuracy vs. real-time performance). Full article
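The Dice and IoU scores used to rank the models are computed from mask overlaps; a minimal reference implementation for binary masks:

```python
import numpy as np

def dice_iou(pred, target, eps=1e-7):
    """Dice and IoU for binary segmentation masks of any shape."""
    pred = np.asarray(pred, bool)
    target = np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return float(dice), float(iou)
```

The two metrics are linked by Dice = 2·IoU / (1 + IoU), which is why model rankings under the two scores largely agree.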
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)

25 pages, 7436 KB  
Article
Assessing the Functional–Efficiency Mismatch of Territorial Space Using Explainable Machine Learning: A Case Study of Quanzhou, China
by Zehua Ke, Wei Wei, Mengyao Hong, Junnan Xia and Liming Bo
Land 2025, 14(12), 2403; https://doi.org/10.3390/land14122403 - 11 Dec 2025
Viewed by 465
Abstract
As the foundational carrier of socio-economic development and ecological security, territorial space reflects the degree of coordination between functional structure and efficiency output. However, most existing evaluation methods overlook the heterogeneous functional endowments of spatial units and therefore cannot reasonably assess the efficiency that each unit should achieve under comparable conditions. To address this limitation, this study proposes a function-oriented and interpretable framework for territorial spatial efficiency evaluation based on the Production–Living–Ecological (PLE) paradigm. An entropy-weighted indicator system is constructed to measure production, living, and ecological efficiency, and an XGBoost–SHAP model is developed to infer the nonlinear mapping between functional attributes and efficiency performance and to estimate the ideal efficiency of each spatial unit under Quanzhou’s prevailing macro-environment. By comparing ideal and observed efficiency, functional–efficiency deviations are identified and spatially diagnosed. The results show that territorial efficiency exhibits strong spatial heterogeneity: production and living efficiency concentrate in the southeastern coastal belt, whereas ecological efficiency dominates in the northwestern mountainous region. The mechanisms differ substantially across dimensions. Production efficiency is primarily driven by neighborhood living and productive conditions; living efficiency is dominated by structural inheritance and strengthened by service-related spillovers; and ecological efficiency depends overwhelmingly on local ecological endowments with additional neighborhood synergy. Approximately 45% of spatial units achieve functional–efficiency alignment, while peri-urban transition zones and hilly areas present significant negative deviations. 
This study advances territorial efficiency research by linking functional structure to efficiency generation through explainable machine learning, providing an interpretable analytical tool and actionable guidance for place-based spatial optimization and high-quality territorial governance. Full article
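The entropy-weighted indicator system follows the standard entropy weight method: indicators with more dispersion across spatial units carry more information and receive larger weights. A minimal sketch, assuming the indicator matrix has already been normalized to positive values:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: rows are spatial units, columns are indicators;
    returns one weight per indicator, summing to 1."""
    X = np.asarray(X, float)
    n = X.shape[0]
    P = X / X.sum(axis=0, keepdims=True)            # share of each unit
    logs = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    E = -(P * logs).sum(axis=0) / np.log(n)          # entropy per indicator
    d = 1.0 - E                                      # information divergence
    return d / d.sum()
```

An indicator that is constant across all units has maximal entropy and near-zero divergence, so it contributes essentially nothing to the composite efficiency score.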
(This article belongs to the Special Issue Land Space Optimization and Governance)

28 pages, 4585 KB  
Article
Uncertainty-Aware Adaptive Intrusion Detection Using Hybrid CNN-LSTM with cWGAN-GP Augmentation and Human-in-the-Loop Feedback
by Clinton Manuel de Nascimento and Jin Hou
Safety 2025, 11(4), 120; https://doi.org/10.3390/safety11040120 - 5 Dec 2025
Viewed by 1324
Abstract
Intrusion detection systems (IDSs) must operate under severe class imbalance, evolving attack behavior, and the need for calibrated decisions that integrate smoothly with security operations. We propose a human-in-the-loop IDS that combines a convolutional neural network and a long short-term memory network (CNN–LSTM) classifier with a variational autoencoder (VAE)-seeded conditional Wasserstein generative adversarial network with gradient penalty (cWGAN-GP) augmentation and entropy-based abstention. Minority classes are reinforced offline via conditional generative adversarial (GAN) sampling, whereas high-entropy predictions are escalated for analysts and are incorporated into a curated retraining set. On CIC-IDS2017, the resulting framework delivered well-calibrated binary performance (ACC = 98.0%, DR = 96.6%, precision = 92.1%, F1 = 94.3%; baseline ECE ≈ 0.04, Brier ≈ 0.11) and substantially improved minority recall (e.g., Infiltration from 0% to >80%, Web Attack–XSS +25 pp, and DoS Slowhttptest +15 pp, for an overall +11 pp macro-recall gain). The deployed model remained lightweight (~42 MB, <10 ms per batch; ≈32 k flows/s on RTX-3050 Ti), and only approximately 1% of the flows were routed for human review. Extensive evaluation, including ROC/PR sweeps, reliability diagrams, cross-domain tests on CIC-IoT2023, and FGSM/PGD adversarial stress, highlights both the strengths and remaining limitations, notably residual errors on rare web attacks and limited IoT transfer. Overall, the framework provides a practical, calibrated, and extensible machine learning (ML) tier for modern IDS deployment and motivates future research on domain alignment and adversarial defense. Full article
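The entropy-based abstention rule reduces to a threshold on predictive entropy: uncertain flows go to an analyst, confident ones are handled automatically. The threshold value below is illustrative, not the paper's operating point:

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy (nats) of a class-probability vector."""
    p = np.clip(np.asarray(probs, float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def route(probs, threshold=0.5):
    """Escalate high-entropy (uncertain) predictions for human review."""
    return "analyst" if predictive_entropy(probs) > threshold else "auto"
```

Tuning the threshold directly controls the fraction of traffic routed to analysts, which is how a deployment can hit a budget like the ~1% review rate reported above.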

18 pages, 1993 KB  
Article
Prediction, Uncertainty Quantification, and ANN-Assisted Operation of Anaerobic Digestion Guided by Entropy Using Machine Learning
by Zhipeng Zhuang, Xiaoshan Liu, Jing Jin, Ziwen Li, Yanheng Liu, Adriano Tavares and Dalin Li
Entropy 2025, 27(12), 1233; https://doi.org/10.3390/e27121233 - 5 Dec 2025
Viewed by 632
Abstract
Anaerobic digestion (AD) is a nonlinear and disturbance-sensitive process in which instability is often induced by feedstock variability and biological fluctuations. To address this challenge, this study develops an entropy-guided machine learning framework that integrates parameter prediction, uncertainty quantification, and entropy-based evaluation of AD operation. Using six months of industrial data (~10,000 samples), three models—support vector machine (SVM), random forest (RF), and artificial neural network (ANN)—were compared for predicting biogas yield, fermentation temperature, and volatile fatty acid (VFA) concentration. The ANN achieved the highest performance (accuracy = 96%, F1 = 0.95, root mean square error (RMSE) = 1.2 m3/t) and also exhibited the lowest prediction error entropy, indicating reduced uncertainty compared to RF and SVM. Feature entropy and permutation analysis consistently identified feed solids, organic matter, and feed rate as the most influential variables (>85% contribution), in agreement with the RF importance ranking. When applied as a real-time prediction and decision-support tool in the plant (“sensor → prediction → programmable logic controller (PLC)/operation → feedback”), the ANN model was associated with a reduction in gas-yield fluctuation from approximately ±18% to ±5%, a decrease in process entropy, and an improvement in operational stability of about 23%. Techno-economic and life-cycle assessments further indicated a 12–15 USD/t lower operating cost, 8–10% energy savings, and 5–7% CO2 reduction compared with baseline operation. Overall, this study demonstrates that combining machine learning with entropy-based uncertainty analysis offers a reliable and interpretable pathway for more stable and low-carbon AD operation. Full article
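The abstract does not define "prediction error entropy" precisely; one common construction, assumed here, is the Shannon entropy of the residual histogram, where a model with tightly concentrated errors scores lower than one whose errors scatter:

```python
import numpy as np

def error_entropy(y_true, y_pred, bins=20):
    """Shannon entropy (nats) of the prediction-residual histogram."""
    resid = np.asarray(y_true, float) - np.asarray(y_pred, float)
    counts, _ = np.histogram(resid, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```

Under this reading, the ANN's lower error entropy relative to RF and SVM means its residual distribution is more concentrated, i.e., its predictions are less uncertain.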
(This article belongs to the Special Issue Entropy in Machine Learning Applications, 2nd Edition)

24 pages, 1694 KB  
Systematic Review
Advanced Clustering for Mobile Network Optimization: A Systematic Literature Review
by Claude Mukatshung Nawej, Pius Adewale Owolawi and Tom Mmbasu Walingo
Sensors 2025, 25(23), 7370; https://doi.org/10.3390/s25237370 - 4 Dec 2025
Cited by 1 | Viewed by 780
Abstract
5G technology represents a transformative shift in mobile communications, delivering improved ultra-low latency, data throughput, and the capacity to support huge device connectivity, surpassing the capabilities of LTE systems. As global telecommunication operators shift toward widespread 5G implementation, ensuring optimal network performance and intelligent resource management has become increasingly obvious. To address these challenges, this study explored the role of advanced clustering methods in optimizing cellular networks under heterogeneous and dynamic conditions. A systematic literature review (SLR) was conducted by analyzing 40 peer-reviewed and non-peer-reviewed studies selected from an initial collection of 500 papers retrieved from the Semantic Scholar Open Research Corpus. This review examines a diversity of clustering approaches, including spectral clustering with Bayesian non-parametric models and K-means, density-based clustering such as DBSCAN, and deep representation-based methods like Differential Evolution Memetic Clustering (DEMC) and Domain Adaptive Neighborhood Clustering via Entropy Optimization (DANCE). Key performance outcomes reported across studies include anomaly detection accuracy of up to 98.8%, delivery rate improvements of up to 89.4%, and handover prediction accuracy improvements of approximately 43%, particularly when clustering techniques are combined with machine learning models. In addition to summarizing their effectiveness, this review highlights methodological trends in clustering parameters, mechanisms, experimental setups, and quality metrics. The findings suggest that advanced clustering models play a crucial role in intelligent spectrum sensing, adaptive mobility management, and efficient resource allocation, thereby contributing meaningfully to the development of intelligent 5G/6G mobile network infrastructures. Full article
(This article belongs to the Section Sensor Networks)

24 pages, 2583 KB  
Article
Hybrid Demand Forecasting in Fuel Supply Chains: ARIMA with Non-Homogeneous Markov Chains and Feature-Conditioned Evaluation
by Daniel Kubek and Paweł Więcek
Energies 2025, 18(22), 6044; https://doi.org/10.3390/en18226044 - 19 Nov 2025
Cited by 1 | Viewed by 860
Abstract
In the context of growing data availability and increasing complexity of demand patterns in retail fuel distribution, selecting effective forecasting models for large collections of time series is becoming a key operational challenge. This study investigates the effectiveness of a hybrid forecasting approach combining ARIMA models with dynamically updated Markov Chains. Unlike many existing studies that focus on isolated or small-scale experiments, this research evaluates the hybrid model across a full set of approximately 150 time series collected from multiple petrol stations, without pre-clustering or manual selection. A comprehensive set of statistical and structural features is extracted from each time series to analyze their relation to forecast performance. The results show that the hybrid ARIMA–Markov approach significantly outperforms both individual statistical models and commonly applied machine learning methods in many cases, particularly for non-stationary or regime-shifting series. In 100% of cases, the hybrid model reduced the error compared to both baseline models—the median RMSE improvement over ARIMA was 13.03%, and 15.64% over the Markov model, with statistical significance confirmed by the Wilcoxon signed-rank test. The analysis also highlights specific time series features—such as entropy, regime shift frequency, and autocorrelation structure—as strong indicators of whether hybrid modeling yields performance gains. Feature-conditioning analyses (e.g., lag-1 autocorrelation, volatility, entropy) explain when hybridization helps, enabling a feature-aware workflow that selectively deploys model components and narrows parameter searches. 
The greatest benefits of applying the hybrid model were observed for time series characterized by high variability, moderate entropy of differences, and a well-defined temporal dependency structure—the correlation values between these features and the improvement in hybrid performance relative to ARIMA and Markov models reached 0.55–0.58, ensuring adequate statistical significance. Such approaches are particularly valuable in enterprise environments dealing with thousands of time series, where automated model configuration becomes essential. The findings position interpretable, adaptive hybrids as a practical default for short-horizon demand forecasting in fuel supply chains and, more broadly, in energy-use applications characterized by heterogeneous profiles and evolving regimes. Full article
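A dependency-free toy of the hybrid idea: an AR(1) base model stands in for ARIMA (an assumption to keep the sketch self-contained), and a Laplace-smoothed Markov chain over discretized residual states supplies a one-step correction. Edges, state count, and smoothing are illustrative:

```python
import numpy as np

def fit_ar1(x):
    """Least-squares AR(1): x[t] ~ a * x[t-1] + b."""
    a, b = np.polyfit(x[:-1], x[1:], 1)
    return a, b

def residual_markov(resid, edges):
    """Laplace-smoothed transition matrix over discretized residual states,
    plus the mean residual in each state."""
    states = np.digitize(resid, edges)
    k = len(edges) + 1
    T = np.ones((k, k))                              # Laplace smoothing
    for s, t in zip(states[:-1], states[1:]):
        T[s, t] += 1
    T /= T.sum(axis=1, keepdims=True)
    means = np.array([resid[states == s].mean() if (states == s).any() else 0.0
                      for s in range(k)])
    return T, means, states

def hybrid_forecast(x, edges):
    """One-step forecast: AR(1) base plus the expected residual under the
    Markov chain's next-state distribution."""
    a, b = fit_ar1(x)
    resid = x[1:] - (a * x[:-1] + b)
    T, means, states = residual_markov(resid, edges)
    return a * x[-1] + b + T[states[-1]] @ means
```

When residuals carry regime structure the base model misses, the chain learns the state dynamics and the correction pays off, matching the paper's finding that hybridization helps most on regime-shifting series.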
(This article belongs to the Section A: Sustainable Energy)

32 pages, 2788 KB  
Article
Improving Variable-Rate Learned Image Compression with Transformer-Based QR Prediction and Perceptual Optimization
by Yong-Hwan Lee and Wan-Bum Lee
Appl. Sci. 2025, 15(22), 12151; https://doi.org/10.3390/app152212151 - 16 Nov 2025
Viewed by 1796
Abstract
We present a variable-rate learned image compression (LIC) model that integrates Transformer-based quantization–reconstruction (QR) offset prediction, entropy-guided hyper-latent quantization, and perceptually informed multi-objective optimization. Unlike existing LIC frameworks that train separate networks for each bitrate, the proposed method achieves continuous rate adaptation within a single model by dynamically balancing rate, distortion and perceptual objectives. Channel-wise asymmetric quantization and a composite loss combining MSE and LPIPS further enhance reconstruction fidelity and subjective quality. Experiments on the Kodak, CLIC2020 and Tecnick datasets show gains of +1.15 dB PSNR, +0.065 MS-SSIM, and −0.32 LPIPS relative to the baseline variable-rate method, while improving bitrate-control accuracy by 62.5%. With approximately 15% computational overhead, the framework achieves competitive compression efficiency and enhanced perceptual quality, offering a practical solution for adaptive, high-quality image delivery. Full article

50 pages, 837 KB  
Article
FedEHD: Entropic High-Order Descent for Robust Federated Multi-Source Environmental Monitoring
by Koffka Khan, Winston Elibox, Treina Dinoo Ramlochan, Wayne Rajkumar and Shanta Ramnath
AI 2025, 6(11), 293; https://doi.org/10.3390/ai6110293 - 14 Nov 2025
Viewed by 1063
Abstract
We propose Federated Entropic High-Order Descent (FedEHD), a drop-in client optimizer that augments local SGD with (i) an entropy (sign) term and (ii) quadratic and cubic gradient components for drift control and implicit clipping. Across non-IID CIFAR-10 and CIFAR-100 benchmarks (100 clients, 10% sampled per round), FedEHD achieves faster and higher convergence than strong baselines including FedAvg, FedProx, SCAFFOLD, FedDyn, MOON, and FedAdam. On CIFAR-10, it reaches 70% accuracy in approximately 80 rounds (versus 100 for MOON and 130 for SCAFFOLD) and attains a final accuracy of 72.5%. On CIFAR-100, FedEHD surpasses 60% accuracy by about 150 rounds (compared with 250 for MOON and 300 for SCAFFOLD) and achieves a final accuracy of 68.0%. In an environmental monitoring case study involving four distributed air-quality stations, FedEHD yields the highest macro AUC/F1 and improved calibration (ECE 0.183 versus 0.186–0.210 for competing federated methods) without additional communication and with only O(d) local overhead. The method further provides scale-invariant coefficients with optional automatic adaptation, theoretical guarantees for surrogate descent and drift reduction, and convergence curves that illustrate smooth and stable learning dynamics. Full article
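The client update, local SGD augmented with a sign (entropy) term plus quadratic and cubic gradient components, might look like the following; the sign-preserving quadratic term and all coefficients are illustrative guesses rather than the paper's formulation:

```python
import numpy as np

def ehd_step(w, grad, lr=0.01, a=0.001, b=0.01, c=0.001):
    """One local parameter update: plain gradient plus a sign (entropy) term
    and sign-preserving quadratic and cubic gradient components.
    Coefficients a, b, c are illustrative, not the paper's values."""
    g = np.asarray(grad, float)
    update = g + a * np.sign(g) + b * np.sign(g) * g ** 2 + c * g ** 3
    return w - lr * update
```

The added terms keep the O(d) per-step cost the abstract cites: everything is elementwise, with no extra state beyond the gradient itself.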

25 pages, 1653 KB  
Article
Dynamic Heterogeneous Multi-Agent Inverse Reinforcement Learning Based on Graph Attention Mean Field
by Li Song, Irfan Ali Channa, Zeyu Wang and Guangyu Sun
Symmetry 2025, 17(11), 1951; https://doi.org/10.3390/sym17111951 - 13 Nov 2025
Cited by 1 | Viewed by 1211
Abstract
Multi-agent inverse reinforcement learning (MA-IRL) infers the underlying reward functions or objectives of multiple agents by observing their behavioral data, thereby providing insights into collaboration, competition, or mixed interaction strategies among agents, and addressing the symmetrical ambiguity problem where multiple rewards may correspond to the same strategy. However, most existing algorithms mainly focus on solving cooperative and non-cooperative tasks among homogeneous multi-agent systems, making it difficult to adapt to the dynamic topologies and heterogeneous behavioral strategies of multi-agent systems in real-world applications. This makes it difficult for the algorithm to adapt to scenarios with locally sparse interactions and dynamic heterogeneity, such as autonomous driving, drone swarms, and robot clusters. To address this problem, this study proposes a dynamic heterogeneous multi-agent inverse reinforcement learning framework (GAMF-DHIRL) based on a graph attention mean field (GAMF) to infer the potential reward functions of agents. In GAMF-DHIRL, we introduce a graph attention mean field theory based on adversarial maximum entropy inverse reinforcement learning to dynamically model dependencies between agents and adaptively adjust the influence weights of neighboring nodes through attention mechanisms. Specifically, the GAMF module uses a dynamic adjacency matrix to capture the time-varying characteristics of the interactions among agents. Meanwhile, the typed mean-field approximation reduces computational complexity. Experiments demonstrate that the proposed method can efficiently recover reward functions of heterogeneous agents in collaborative tasks and adversarial environments, and it outperforms traditional MA-IRL methods. Full article

16 pages, 4636 KB  
Article
Radiomics for Dynamic Lung Cancer Risk Prediction in USPSTF-Ineligible Patients
by Morteza Salehjahromi, Hui Li, Eman Showkatian, Maliazurina B. Saad, Mohamed Qayati, Sherif M. Ismail, Sheeba J. Sujit, Amgad Muneer, Muhammad Aminu, Lingzhi Hong, Xiaoyu Han, Simon Heeke, Tina Cascone, Xiuning Le, Natalie Vokes, Don L. Gibbons, Iakovos Toumazis, Edwin J. Ostrin, Mara B. Antonoff, Ara A. Vaporciyan, David Jaffray, Fernando U. Kay, Brett W. Carter, Carol C. Wu, Myrna C. B. Godoy, J. Jack Lee, David E. Gerber, John V. Heymach, Jianjun Zhang and Jia Wu
Cancers 2025, 17(21), 3406; https://doi.org/10.3390/cancers17213406 - 23 Oct 2025
Viewed by 1619
Abstract
Background: Non-smokers and individuals with minimal smoking history represent a significant proportion of lung cancer cases but are often overlooked by current risk assessment models. Pulmonary nodules are commonly detected incidentally, appearing in approximately 24–31% of all chest CT scans regardless of smoking status. However, most established risk models, such as the Brock model, were developed using cohorts heavily enriched with individuals who have substantial smoking histories, which limits their generalizability to non-smoking and light-smoking populations and highlights the need for more inclusive and tailored risk prediction strategies. Purpose: We aimed to develop a longitudinal radiomics-based approach for lung cancer risk prediction, integrating time-varying radiomic modeling to enhance early detection in USPSTF-ineligible patients. Methods: Unlike conventional models that rely on a single scan, we conducted a longitudinal analysis of 122 patients who were later diagnosed with lung cancer, with a total of 622 CT scans analyzed. Of these patients, 69% were former smokers and 30% had never smoked. Quantitative radiomic features were extracted from serial chest CT scans to capture temporal changes in nodule evolution, and a time-varying survival model was implemented to dynamically assess lung cancer risk. Additionally, we evaluated the integration of handcrafted radiomic features with the deep learning-based Sybil model to determine the added value of combining local nodule characteristics with global lung assessments. Results: Our radiomic analysis identified specific CT patterns associated with malignant transformation, including increases in nodule size, voxel intensity, and textural entropy, as indicators of tumor heterogeneity and progression. Integrating radiomics, delta-radiomics, and longitudinal imaging features yielded the best predictive performance during cross-validation (concordance index [C-index]: 0.69), surpassing models using demographics alone (C-index: 0.50) and Sybil alone (C-index: 0.54). Compared to the Brock model (67% accuracy, 100% sensitivity, 33% specificity), our composite risk model achieved 78% accuracy, 89% sensitivity, and 67% specificity, demonstrating improved early cancer risk stratification. Kaplan–Meier curves and individualized cancer development probability functions further validated the model's ability to track dynamic risk progression for individual patients, and visual analysis of longitudinal CT scans confirmed alignment between predicted risk and evolving nodule characteristics. Conclusions: Our study demonstrates that integrating radiomics, Sybil, and clinical factors enhances future lung cancer risk prediction in USPSTF-ineligible patients, outperforming existing models and supporting personalized screening and early intervention strategies.
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Lung Cancer)
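The concordance index (C-index) reported in this abstract measures how often a model ranks patients' risks consistently with their observed event times: 0.5 is chance, 1.0 is perfect ranking. A minimal sketch of Harrell's C-index for right-censored data follows; this is an illustrative pure-Python version, not the authors' code, and it assumes at least one comparable pair exists.

```python
def concordance_index(times, events, risks):
    """Harrell's C-index for right-censored survival data.

    times:  observed times (event or censoring) for each patient
    events: 1 if the event (e.g. cancer diagnosis) was observed, 0 if censored
    risks:  model risk scores; higher risk should imply an earlier event
    """
    concordant, comparable = 0.0, 0.0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # pairs are only comparable when i has an observed event
        for j in range(n):
            if times[j] > times[i]:  # j outlived i's event time
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0  # model ranked the pair correctly
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties count as half-concordant
    return concordant / comparable
```

The quadratic pairwise loop is fine at this cohort size (122 patients); production survival libraries compute the same quantity with faster algorithms.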

21 pages, 2556 KB  
Article
Comparison of Machine Learning Models in Nonlinear and Stochastic Signal Classification
by Elzbieta Olejarczyk and Carlo Massaroni
Appl. Sci. 2025, 15(20), 11226; https://doi.org/10.3390/app152011226 - 20 Oct 2025
Viewed by 741
Abstract
This study compares different classifiers in the context of distinguishing two classes of signals: nonlinear electrocardiography (ECG) signals and stochastic artifacts occurring in ECG signals. ECG signals from a single-lead wearable Movesense device were analyzed with a set of eight features: variance (VAR), three fractal measures (Higuchi fractal dimension (HFD), Katz fractal dimension (KFD), and detrended fluctuation analysis (DFA)), and four entropy measures (approximate entropy (ApEn), sample entropy (SampEn), and multiscale entropy (MSE) at scales 1 and 2). The minimum-redundancy maximum-relevance algorithm was applied to evaluate feature importance, and a broad spectrum of machine learning models was considered for classification. The proposed approach allowed comparison of the classifiers and features, as well as providing broader insight into the characteristics of the signals themselves. The most important features for classification were VAR, DFA, ApEn, and HFD. The best performance among 34 classifiers was obtained using an optimized RUSBoosted Trees ensemble classifier (sensitivity, specificity, and positive and negative predictive values of 99.8%, 73.7%, 99.8%, and 74.3%, respectively). The accuracy of the Movesense device was very high (99.6%). Moreover, multifractality of the ECG during sleep was observed in the relationship between SampEn (or ApEn) and MSE.
(This article belongs to the Special Issue New Advances in Electrocardiogram (ECG) Signal Processing)
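Approximate entropy (ApEn), one of the most discriminative features identified above, quantifies signal irregularity: regular signals score near zero, irregular or noisy signals score higher, which is what makes it useful for separating ECG from stochastic artifacts. A minimal sketch of Pincus' ApEn follows, using the common defaults of embedding dimension m = 2 and tolerance r = 0.2 times the standard deviation; this is an illustrative implementation, not the study's code.

```python
import numpy as np

def approx_entropy(x, m=2, r=None):
    """Approximate entropy (ApEn) of a 1-D signal.

    m: embedding dimension; r: tolerance (defaults to 0.2 * std, a common choice).
    Regular signals give values near 0; irregular signals give larger values.
    """
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def phi(m):
        n = len(x) - m + 1
        # Embed the signal: all overlapping length-m windows.
        emb = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev (max-coordinate) distance between every pair of windows.
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # Fraction of windows within tolerance r (self-matches included).
        c = (d <= r).mean(axis=1)
        return np.log(c).mean()

    return phi(m) - phi(m + 1)
```

Sample entropy (SampEn), also used in the study, is a close relative that excludes self-matches, removing ApEn's bias toward regularity on short records.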
