Search Results (272)

Search Parameters:
Keywords = heuristic feature optimization

18 pages, 2728 KB  
Article
Monthly Power Outage Maintenance Scheduling for Power Grids Based on Interpretable Reinforcement Learning
by Wei Tang, Xun Mao, Kai Lv, Zhichen Cai and Zhenhuan Ding
Energies 2025, 18(20), 5454; https://doi.org/10.3390/en18205454 - 16 Oct 2025
Abstract
This paper proposes an interpretable optimization method for power grid outage scheduling based on reinforcement learning. An outage scheduling optimization model is formulated, taking the convergence of power flow calculations, voltage violations, and operational economics as objectives, subject to constraints such as simultaneous outages, mutually exclusive outages, and maintenance windows. Key features of the outage schedule are selected based on Shapley values to construct a Markov optimization model for outage scheduling. A deep reinforcement learning agent is established to optimize the outage schedule. The proposed method is applied to the IEEE-39 and IEEE-118 bus systems for validation. Experimental results show that the proposed method outperforms existing algorithms in terms of voltage violations, total power losses, and computational time. The proposed method eliminates all voltage violations and reduces active power losses by up to 5.7% and computation time by 6.8 h compared with conventional heuristic algorithms. Full article
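The abstract describes selecting key schedule features via Shapley values before building the Markov decision model. As a rough illustration of that idea only, the sketch below estimates Shapley-style feature importances by Monte Carlo permutation sampling; the `value_fn`, toy weights, and all parameters are hypothetical stand-ins, not the paper's power-grid evaluation.

```python
import numpy as np

def shapley_importance(value_fn, n_features, n_samples=2000, rng=None):
    """Monte Carlo estimate of Shapley values for feature importance.

    value_fn(subset) -> float scores a feature subset (e.g., validation
    performance of a model restricted to those features); here it is a
    placeholder for whatever evaluation a study actually uses.
    """
    rng = np.random.default_rng(rng)
    phi = np.zeros(n_features)
    for _ in range(n_samples):
        perm = rng.permutation(n_features)
        prev, subset = value_fn(frozenset()), set()
        for f in perm:
            subset.add(f)
            cur = value_fn(frozenset(subset))
            phi[f] += cur - prev   # marginal contribution of f in this ordering
            prev = cur
    return phi / n_samples

# Toy value function: square-root gain with diminishing returns (illustrative only).
weights = np.array([0.5, 0.3, 0.15, 0.05])
toy_value = lambda s: float(np.sqrt(sum(weights[i] for i in s)))
print(shapley_importance(toy_value, n_features=4, n_samples=500, rng=0))
```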

29 pages, 3437 KB  
Article
Integrating Process Mining and Machine Learning for Surgical Workflow Optimization: A Real-World Analysis Using the MOVER EHR Dataset
by Ufuk Celik, Adem Korkmaz and Ivaylo Stoyanov
Appl. Sci. 2025, 15(20), 11014; https://doi.org/10.3390/app152011014 - 14 Oct 2025
Viewed by 106
Abstract
The digitization of healthcare has enabled the application of advanced analytics, such as process mining and machine learning, to electronic health records (EHRs). This study aims to identify workflow inefficiencies, temporal bottlenecks, and risk factors for delayed recovery in surgical pathways using the open-access MOVER dataset. A multi-stage framework was implemented, including heuristic control-flow discovery, Petri net-based conformance checking, temporal performance analysis, unsupervised clustering, and Random Forest-based classification. All analyses were simulated on pre-discharge (“preliminary”) patient records to enhance real-time applicability. Control-flow models revealed deviations from expected pathways and issues with data quality. Conformance checking yielded perfect fitness (1.0) and moderate precision (0.46), indicating that the model generalizes despite clinical variability. Stratified performance analysis exposed duration differences across ASA scores and age groups. Clustering revealed latent patient subgroups with distinct perioperative timelines. The predictive model achieved 90.33% accuracy, though recall for delayed recovery cases was limited (24.23%), reflecting class imbalance challenges. Key features included procedural delays, ICU status, and ASA classification. This study highlights the translational potential of integrating process mining and predictive modeling to optimize perioperative workflows, stratify recovery risk, and plan resources. Full article
(This article belongs to the Special Issue Machine Learning for Healthcare Analytics)
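Heuristic control-flow discovery, as used in this study, starts from directly-follows frequencies extracted from the event log. A minimal sketch of that first step is shown below on a made-up toy log (not the MOVER data), including the heuristic-miner dependency measure; the activity names are illustrative.

```python
from collections import Counter

# Toy event log: one list of activity labels per case (illustrative, not MOVER data).
log = [
    ["admission", "anesthesia", "surgery", "recovery", "discharge"],
    ["admission", "surgery", "recovery", "discharge"],
    ["admission", "anesthesia", "surgery", "icu", "recovery", "discharge"],
]

# Directly-follows counts: how often activity a is immediately followed by b.
df = Counter()
for trace in log:
    for a, b in zip(trace, trace[1:]):
        df[(a, b)] += 1

# Heuristic-miner style dependency measure in [-1, 1]; values near 1 indicate
# a strong a -> b ordering relation.
def dependency(a, b):
    ab, ba = df[(a, b)], df[(b, a)]
    return (ab - ba) / (ab + ba + 1)

for (a, b), n in sorted(df.items(), key=lambda kv: -kv[1]):
    print(f"{a} -> {b}: count={n}, dependency={dependency(a, b):.2f}")
```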

17 pages, 1985 KB  
Article
Game-Theoretic Secure Socket Transmission with a Zero Trust Model
by Evangelos D. Spyrou, Vassilios Kappatos and Chrysostomos Stylios
Appl. Sci. 2025, 15(19), 10535; https://doi.org/10.3390/app151910535 - 29 Sep 2025
Viewed by 244
Abstract
A significant problem in cybersecurity is to accurately detect malicious network activities in real-time by analyzing patterns in socket-level packet transmissions. This challenge involves distinguishing between legitimate and adversarial behaviors while optimizing detection strategies to minimize false alarms and resource costs under intelligent, adaptive attacks. This paper presents a comprehensive framework for network security by modeling socket-level packet transmissions and extracting key features for temporal analysis. A long short-term memory (LSTM)-based anomaly detection system predicts normal traffic behavior and identifies significant deviations as potential cyber threats. Integrating this with a zero trust signaling game, the model updates beliefs about agent legitimacy based on observed signals and anomaly scores. The interaction between defender and attacker is formulated as a Stackelberg game, where the defender optimizes detection strategies anticipating attacker responses. This unified approach combines machine learning and game theory to enable robust, adaptive cybersecurity policies that effectively balance detection performance and resource costs in adversarial environments. Two baselines are considered for comparison. The static baseline applies fixed transmission and defense policies, ignoring anomalies and environmental feedback, and thus serves as a control case of non-reactive behavior. In contrast, the adaptive non-strategic baseline introduces simple threshold-based heuristics that adjust to anomaly scores, allowing limited adaptability without strategic reasoning. The proposed fully adaptive Stackelberg strategy outperforms both partial and discrete adaptive baselines, achieving higher robustness across trust thresholds, superior attacker–defender utility trade-offs, and more effective anomaly mitigation under varying strategic conditions. Full article
(This article belongs to the Special Issue Wireless Networking: Application and Development)
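To make the Stackelberg structure concrete: the defender (leader) commits to a detection policy, and the attacker (follower) best-responds. The sketch below grid-searches a leader strategy against an inner best-response loop; the utility functions, costs, and parameter shapes are invented placeholders and do not reproduce the paper's LSTM-driven, zero-trust model.

```python
import numpy as np

# Hypothetical utilities: detection probability rises with defender sensitivity t;
# the attacker chooses an intensity a in [0, 1] to maximize its own payoff.
def p_detect(t, a):
    return 1.0 - np.exp(-5.0 * t * a)            # aggressive attacks are easier to catch

def attacker_utility(t, a):
    return a * (1.0 - p_detect(t, a)) - 0.1 * a   # gain from undetected activity minus effort

def defender_utility(t, a):
    false_alarm_cost = 0.3 * t                    # higher sensitivity -> more false alarms
    return -a * (1.0 - p_detect(t, a)) - false_alarm_cost

ts = np.linspace(0.0, 1.0, 101)
a_grid = np.linspace(0.0, 1.0, 101)

best_t, best_u = None, -np.inf
for t in ts:
    a_star = a_grid[np.argmax([attacker_utility(t, a) for a in a_grid])]  # attacker best response
    u = defender_utility(t, a_star)                                       # leader anticipates it
    if u > best_u:
        best_t, best_u = t, u

print(f"Stackelberg defender sensitivity: {best_t:.2f}, utility {best_u:.3f}")
```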

21 pages, 1231 KB  
Article
Two Modifications of MinSum Algorithm for Efficient System-Optimal Traffic Assignment
by Nikica Hlupić, Danko Basch, Edouard Ivanjko and Martin Gregurić
Algorithms 2025, 18(10), 609; https://doi.org/10.3390/a18100609 - 29 Sep 2025
Viewed by 279
Abstract
Traffic assignment in large urban areas is an old but increasingly important problem because of the rapid growth of the world population and traffic demands. Many algorithms have been developed, but their convergence rates and complexities remain prohibitive for real-time applications. The recently developed MinSum algorithm introduces a new approach: it is a highly efficient discrete-domain optimization algorithm for system-optimized route assignment between two city zones, with complexity (the number of critical operations) of O(R³), where R is the number of routes. Nonetheless, there is still room for improvement, and this paper presents two modified MinSum variants, heuristic and approximate, that are significantly faster and of lower complexity while retaining MinSum’s prominent features. The heuristic variant MinSumH is up to five times faster than MinSum; its complexity theoretically remains O(R³), though experiments indicate it is closer to O(R²). The approximate variant MinSumA is up to more than 100 times faster and reduces the complexity to O(R). Both proposed variants become progressively faster as R grows. Due to their high convergence rate and exceptionally low complexity, along with other prominent features, the proposed algorithms are ready for real-time system-optimal traffic assignment in a real urban environment. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
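For readers unfamiliar with discrete system-optimal assignment, the baseline sketch below assigns vehicles one at a time to the route with the smallest marginal increase in total travel time, under an assumed BPR-style route cost. It is a naive reference point only and is not the MinSum, MinSumH, or MinSumA algorithm; the free-flow times, capacities, and demand are made up.

```python
# Greedy discrete system-optimal assignment baseline (NOT the MinSum algorithm):
# assign vehicles one at a time to the route whose marginal total-cost increase
# is smallest, using an assumed BPR-style cost t(x) = t0 * (1 + 0.15 * (x / c)^4).

def route_cost(t0, cap, x):
    return t0 * (1.0 + 0.15 * (x / cap) ** 4)

def total_cost(t0s, caps, flows):
    return sum(x * route_cost(t0, c, x) for t0, c, x in zip(t0s, caps, flows))

def greedy_assign(t0s, caps, demand):
    flows = [0] * len(t0s)
    for _ in range(demand):
        base = total_cost(t0s, caps, flows)
        deltas = []
        for r in range(len(flows)):          # marginal system cost of one extra vehicle
            flows[r] += 1
            deltas.append(total_cost(t0s, caps, flows) - base)
            flows[r] -= 1
        best = min(range(len(flows)), key=deltas.__getitem__)
        flows[best] += 1
    return flows

t0s, caps = [10.0, 12.0, 15.0], [300, 400, 500]   # assumed free-flow times and capacities
flows = greedy_assign(t0s, caps, demand=600)
print(flows, total_cost(t0s, caps, flows))
```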

25 pages, 471 KB  
Article
Mitigating Membership Inference Attacks via Generative Denoising Mechanisms
by Zhijie Yang, Xiaolong Yan, Guoguang Chen and Xiaoli Tian
Mathematics 2025, 13(19), 3070; https://doi.org/10.3390/math13193070 - 24 Sep 2025
Viewed by 427
Abstract
Membership Inference Attacks (MIAs) pose a significant threat to privacy in modern machine learning systems, enabling adversaries to determine whether a specific data record was used during model training. Existing defense techniques often degrade model utility or rely on heuristic noise injection, which fails to provide a robust, mathematically grounded defense. In this paper, we propose Diffusion-Driven Data Preprocessing (D3P), a novel privacy-preserving framework leveraging generative diffusion models to transform sensitive training data before learning, thereby reducing the susceptibility of trained models to MIAs. Our method integrates a mathematically rigorous denoising process into a privacy-oriented diffusion pipeline, which ensures that the reconstructed data maintains essential semantic features for model utility while obfuscating fine-grained patterns that MIAs exploit. We further introduce a privacy–utility optimization strategy grounded in formal probabilistic analysis, enabling adaptive control of the diffusion noise schedule to balance attack resilience and predictive performance. Experimental evaluations across multiple datasets and architectures demonstrate that D3P significantly reduces MIA success rates by up to 42.3% compared to state-of-the-art defenses, with a less than 2.5% loss in accuracy. This work provides a theoretically principled and empirically validated pathway for integrating diffusion-based generative mechanisms into privacy-preserving AI pipelines, which is particularly suitable for deployment in cloud-based and blockchain-enabled machine learning environments. Full article
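The forward half of a diffusion pipeline like the one described, which progressively noises data before a learned denoiser reconstructs it, can be written in a few lines. The sketch below implements a standard DDPM-style linear schedule and the closed-form q(x_t | x_0) sampling; the schedule values and toy data are assumptions, and the paper's privacy-oriented denoiser and adaptive noise schedule are not reproduced.

```python
import numpy as np

# Standard DDPM-style forward process: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps.
# Only the noising half is sketched; the learned reverse (denoising) model is omitted.
T = 1000
betas = np.linspace(1e-4, 0.02, T)          # assumed linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)        # cumulative product \bar{alpha}_t

def noise(x0, t, rng):
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 8))            # a toy batch of feature vectors
for t in (50, 300, 900):                    # larger t => stronger obfuscation of x_0
    xt = noise(x0, t, rng)
    print(t, float(np.corrcoef(x0.ravel(), xt.ravel())[0, 1]))
```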

26 pages, 1823 KB  
Article
Scalable Gender Profiling from Turkish Texts Using Deep Embeddings and Meta-Heuristic Feature Selection
by Hakan Gunduz
J. Theor. Appl. Electron. Commer. Res. 2025, 20(4), 253; https://doi.org/10.3390/jtaer20040253 - 24 Sep 2025
Viewed by 420
Abstract
Accurate gender identification from written text is critical for author profiling, recommendation systems, and demographic analytics in digital ecosystems. This study introduces a scalable framework for gender classification in Turkish, combining contextualized BERTurk and subword-aware FastText embeddings with three meta-heuristic feature selection algorithms: Genetic Algorithm (GA), Jaya and Artificial Rabbit Optimization (ARO). Evaluated on the IAG-TNKU corpus of 43,292 balanced Turkish news articles, the best-performing model—BERTurk+GA+LSTM—achieves 89.7% accuracy, while ARO reduces feature dimensionality by 90% with minimal performance loss. Beyond in-domain results, exploratory zero-shot and few-shot adaptation experiments on Turkish e-commerce product reviews demonstrate the framework’s transferability: while zero-shot performance dropped to 59.8%, few-shot adaptation with only 200–400 labeled samples raised accuracy to 69.6–72.3%. These findings highlight both the limitations of training exclusively on news articles and the practical feasibility of adapting the framework to consumer-generated content with minimal supervision. In addition to technical outcomes, we critically examine ethical considerations in gender inference, including fairness, representation, and the binary nature of current datasets. This work contributes a reproducible and linguistically informed baseline for gender profiling in morphologically rich, low-resource languages, with demonstrated potential for adaptation across domains such as social media and e-commerce personalization. Full article
(This article belongs to the Special Issue Human–Technology Synergies in AI-Driven E-Commerce Environments)
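As a generic illustration of the kind of meta-heuristic feature selection combined with the embeddings here, the sketch below runs a small genetic algorithm over binary feature masks with cross-validated accuracy minus a sparsity penalty as fitness. The synthetic data, classifier, and GA settings are placeholders rather than the paper's BERTurk/FastText pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in data; in the paper the columns would be embedding dimensions.
X, y = make_classification(n_samples=600, n_features=60, n_informative=12, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(LogisticRegression(max_iter=500),
                          X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.001 * mask.sum()          # small penalty rewards compact feature sets

def ga_select(n_feat, pop=20, gens=10, p_mut=0.02):
    popu = rng.integers(0, 2, size=(pop, n_feat))
    for _ in range(gens):
        scores = np.array([fitness(m) for m in popu])
        elite = popu[np.argsort(-scores)[: pop // 2]]        # truncation selection
        children = []
        while len(children) < pop - len(elite):
            a, b = elite[rng.integers(len(elite), size=2)]
            cut = rng.integers(1, n_feat)                     # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_feat) < p_mut                 # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        popu = np.vstack([elite, children])
    scores = np.array([fitness(m) for m in popu])
    return popu[np.argmax(scores)]

best = ga_select(X.shape[1])
print("selected features:", int(best.sum()), "of", X.shape[1])
```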

32 pages, 2075 KB  
Review
A Comprehensive Review of AI Integration for Fault Detection in Modern Power Systems: Data Processing, Modeling, and Optimization
by Youping Liu, Pin Li, Yang Si and Linrui Ma
Energies 2025, 18(18), 4983; https://doi.org/10.3390/en18184983 - 19 Sep 2025
Viewed by 839
Abstract
Driven by the high penetration of renewable energy sources and power electronic devices, modern power systems have become increasingly complex, intensifying the demand for accurate and intelligent fault detection. This paper analyzes a total of 81 references to explore the integrated application of artificial intelligence (AI) technologies across all stages of fault data processing, modeling, and optimization. The application potential of AI in fault data processing is first analyzed in terms of its performance in mitigating class imbalance, extracting feature information, handling data noise, and performing classification. Then, the modeling of fault detection is classified into rule-driven, data-driven, and hybrid-driven methods to evaluate their applicability in scenarios such as transmission lines and distribution networks. The accuracy of fault detection models is also investigated by studying hyperparameter optimization (HPO) methods. The results indicate that the utilization of AI-driven imbalance handling enhances model accuracy by a range of 16.2% to 26.2%, while deep learning-based feature extraction techniques sustain accuracy levels exceeding 98.5% under a signal-to-noise ratio (SNR) of 10 dB. With a 99.96% detection accuracy, hybrid-driven models applied in fault detection perform the best. For the optimization of fault detection models, heuristic algorithms provide a 6.92–19.375% improvement over the baseline models. The findings suggest that AI-driven methodologies in data processing demonstrate notable noise resilience and other benefits. For modeling fault detection, data-driven and hybrid-driven models are currently widely employed for detecting short-circuit faults, predicting transformer gas trends, and identifying faults in complex and uncertain scenarios. Conversely, rule-driven models are better suited for scenarios possessing a comprehensive experience library and are used less frequently. In the optimization of fault detection models, heuristic algorithms occupy a pivotal position, whereas hyperparameter optimization incorporating reinforcement learning (RL) is better suited for real-time fault detection. The findings presented in this paper facilitate the seamless integration of AI with fault detection in modern power systems, thereby advancing their intelligent evolution. Full article

26 pages, 3077 KB  
Review
A Point-Line-Area Paradigm: 3D Printing for Next-Generation Health Monitoring Sensors
by Mei Ming, Xiaohong Yin, Yinchen Luo, Bin Zhang and Qian Xue
Sensors 2025, 25(18), 5777; https://doi.org/10.3390/s25185777 - 16 Sep 2025
Viewed by 510
Abstract
Three-dimensional printing technology is fundamentally reshaping the design and fabrication of health monitoring sensors. While it holds great promise for achieving miniaturization, multi-material integration, and personalized customization, the lack of a clear selection framework hinders the optimal matching of printing technologies to specific sensor requirements. This review presents a classification framework based on existing standards and specifically designed to address sensor-related requirements, categorizing 3D printing technologies into point-based, line-based, and area-based modalities according to their fundamental fabrication unit. This framework directly bridges the capabilities of each modality, such as nanoscale resolution, multi-material versatility, and high-throughput production, with the critical demands of modern health monitoring sensors. We systematically demonstrate how this approach guides technology selection: Point-based methods (e.g., stereolithography, inkjet) enable micron-scale features for ultra-sensitive detection; line-based techniques (e.g., Direct Ink Writing, Fused Filament Fabrication) excel in multi-material integration for creating complex functional devices such as sweat-sensing patches; and area-based approaches (e.g., Digital Light Processing) facilitate rapid production of sensor arrays and intricate structures for applications like continuous glucose monitoring. The point–line–area paradigm offers a powerful heuristic for designing and manufacturing next-generation health monitoring sensors. We also discuss strategies to overcome existing challenges, including material biocompatibility and cross-scale manufacturing, through the integration of AI-driven design and stimuli-responsive materials. This framework not only clarifies the current research landscape but also accelerates the development of intelligent, personalized, and sustainable health monitoring systems. Full article
(This article belongs to the Section Electronic Sensors)

20 pages, 775 KB  
Article
Optimization Scheduling of Dynamic Industrial Systems Based on Reinforcement Learning
by Xiang Zhang, Zhongfu Li, Simin Fu, Qiancheng Xu, Zhaolong Du and Guan Yuan
Appl. Sci. 2025, 15(18), 10108; https://doi.org/10.3390/app151810108 - 16 Sep 2025
Viewed by 498
Abstract
The flexible job shop scheduling problem (FJSP) is a fundamental challenge in modern industrial manufacturing, where efficient scheduling is critical for optimizing both resource utilization and overall productivity. Traditional heuristic algorithms have been widely used to solve the FJSP, but they are often tailored to specific scenarios and struggle to cope with the dynamic and complex nature of real-world manufacturing environments. Although deep learning approaches have been proposed recently, they typically require extensive feature engineering, lack interpretability, and fail to generalize well under unforeseen disturbances such as machine failures or order changes. To overcome these limitations, we introduce a novel hierarchical reinforcement learning (HRL) framework for FJSP, which decomposes the scheduling task into high-level strategic decisions and low-level task allocations. This hierarchical structure allows for more efficient learning and decision-making. By leveraging policy gradient methods at both levels, our approach learns adaptive scheduling policies directly from raw system states, eliminating the need for manual feature extraction. Our HRL-based method enables real-time, autonomous decision-making that adapts to changing production conditions. Experimental results show our approach achieves a cumulative reward of 199.50 for Brandimarte, 2521.17 for Dauzère, and 2781.56 for Taillard, with success rates of 25.00%, 12.30%, and 19.00%, respectively, demonstrating the robustness of our approach in real-world job shop scheduling tasks. Full article
(This article belongs to the Section Applied Industrial Technologies)

11 pages, 361 KB  
Article
Theoretical Analysis and Verification of Loop Cutsets in Bayesian Network Inference
by Jie Wei, Wenxian Xie and Zhanbin Yuan
Mathematics 2025, 13(18), 2992; https://doi.org/10.3390/math13182992 - 16 Sep 2025
Viewed by 320
Abstract
Bayesian networks are widely used in probabilistic graphical modeling, but inference in multiply connected networks remains computationally challenging due to loop structures. The loop cutset, a critical component of Pearl’s conditioning method, directly determines inference complexity. This paper presents a systematic theoretical analysis of loop cutsets and develops a Bayesian estimation framework that quantifies the probability of nodes and node pairs being included in the minimal loop cutset. By incorporating structural features such as node degree and shared nodes into a posterior probability model, we provide a unified statistical framework for interpreting cutset membership. Experiments on synthetic and real-world networks validate the proposed approach, demonstrating that Bayesian estimation effectively captures the influence of structural metrics and achieves better predictive accuracy and stability than classical heuristic and randomized algorithms. The findings offer new insights and practical strategies for optimizing loop cutset computation, thereby improving the efficiency and reliability of Bayesian network inference. Full article
(This article belongs to the Special Issue Bayesian Statistical Analysis of Big Data and Complex Data)
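For contrast with the classical heuristics the paper benchmarks against, the sketch below shows a simple degree-based loop-cutset heuristic: repeatedly remove the highest-degree node from the undirected skeleton until no cycles remain. It glosses over the finer conditions of Pearl's conditioning method and is only a baseline; the example DAG is made up.

```python
import networkx as nx

def greedy_loop_cutset(dag):
    """Degree-based heuristic: repeatedly remove the highest-degree node from the
    undirected skeleton until no loops (undirected cycles) remain.
    A simple baseline, not the Bayesian estimation procedure from the paper."""
    g = dag.to_undirected()
    cutset = []
    while nx.cycle_basis(g):
        node = max(g.degree, key=lambda kv: kv[1])[0]
        cutset.append(node)
        g.remove_node(node)
    return cutset

# A small multiply connected DAG: two converging paths A->B->D and A->C->D form a loop.
dag = nx.DiGraph([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")])
print(greedy_loop_cutset(dag))   # ['D'] for this graph
```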

29 pages, 5370 KB  
Article
Quadratic Control Model for Shuttle Dispatching in Automated Overhead Rail Systems
by Thuy Duy Truong, Xuan Tuan Nguyen and Tuong Quan Vo
Symmetry 2025, 17(9), 1518; https://doi.org/10.3390/sym17091518 - 11 Sep 2025
Viewed by 427
Abstract
Automated Overhead Rail Systems (AORs) play a key role in warehouse and in-house industrial settings, as well as in modern hospitals, where efficient shuttle dispatching directly impacts throughput and reliability. This paper presents a quadratic control model formulated in symmetric quadratic matrix form to capture balanced interactions between shuttles, tasks, and priorities. A Genetic Algorithm (GA) is employed to solve the optimization problem, with operators adapted to exploit quadratic symmetry for faster convergence and stable performance. Simulation results on a microcontroller-based testbed demonstrated that the proposed model achieved shorter dispatching times, reduced waiting costs, and more symmetrically distributed workloads compared with conventional heuristic approaches. The study shows that symmetry is not only a modeling feature but also a design principle, supporting future extensions such as emergency handling and multi-priority dispatching. Full article
(This article belongs to the Section Engineering and Materials)

25 pages, 6752 KB  
Article
Hybrid Deep Learning Combining Mode Decomposition and Intelligent Optimization for Discharge Forecasting: A Case Study of the Baiquan Karst Spring
by Yanling Li, Tianxing Dong, Yingying Shao and Xiaoming Mao
Sustainability 2025, 17(18), 8101; https://doi.org/10.3390/su17188101 - 9 Sep 2025
Viewed by 517
Abstract
Karst springs play a critical strategic role in regional economic and ecological sustainability, yet their spatiotemporal heterogeneity and hydrological complexity pose substantial challenges for flow prediction. This study proposes FMD-mGTO-BiGRU-KAN, a four-stage hybrid deep learning architecture for daily spring flow prediction that integrates multi-feature signal decomposition, meta-heuristic optimization, and interpretable neural network design: constructing a Feature Mode Decomposition (FMD) layer to mitigate modal aliasing in meteorological signals; employing the improved Gorilla Troops Optimizer (mGTO) to enable autonomous hyperparameter evolution, overcoming the limitations of traditional grid search; designing a Bidirectional Gated Recurrent Unit (BiGRU) network to capture long-term historical dependencies in spring flow sequences through bidirectional recurrent mechanisms; and introducing Kolmogorov–Arnold Networks (KAN) to replace the fully connected layer, improving model interpretability through differentiable symbolic operations. Additionally, residual modules and dropout blocks are incorporated to enhance generalization capability and reduce overfitting risk. By integrating multiple deep learning algorithms, this hybrid model leverages their respective strengths to accommodate intricate meteorological conditions, thereby enhancing its capacity to discern the underlying patterns within complex and dynamic input features. Comparative results against benchmark models (LSTM, GRU, and Transformer) show that the proposed framework achieves 82.47% and 50.15% reductions in MSE and RMSE, respectively, with the NSE increasing by 8.01% to 0.9862. The prediction errors are more tightly distributed, and the proposed model surpasses the benchmark models in overall performance, validating its superiority. The model’s strong prediction ability offers a novel high-precision solution for spring flow prediction in complex hydrological systems. Full article

24 pages, 1064 KB  
Article
Arabic Abstractive Text Summarization Using an Ant Colony System
by Amal M. Al-Numai and Aqil M. Azmi
Mathematics 2025, 13(16), 2613; https://doi.org/10.3390/math13162613 - 15 Aug 2025
Viewed by 809
Abstract
Arabic abstractive summarization presents a complex multi-objective optimization challenge, balancing readability, informativeness, and conciseness. While extractive approaches dominate NLP, abstractive methods—particularly for Arabic—remain underexplored due to linguistic complexity. This study introduces, for the first time, ant colony system (ACS) for Arabic abstractive summarization (named AASAC—Arabic Abstractive Summarization using Ant Colony), framing it as a combinatorial evolutionary optimization task. Our method integrates collocation and word-relation features into heuristic-guided fitness functions, simultaneously optimizing content coverage and linguistic coherence. Evaluations on a benchmark dataset using LemmaRouge, a lemma-based metric that evaluates semantic similarity rather than surface word forms, demonstrate consistent superiority. For 30% summaries, AASAC achieves 51.61% (LemmaRouge-1) and 46.82% (LemmaRouge-L), outperforming baselines by 13.23% and 20.49%, respectively. At 50% summary length, it reaches 64.56% (LemmaRouge-1) and 61.26% (LemmaRouge-L), surpassing baselines by 10.73% and 3.23%. These results highlight AASAC’s effectiveness in addressing multi-objective NLP challenges and establish its potential for evolutionary computation applications in language generation, particularly for complex morphological languages like Arabic. Full article

18 pages, 768 KB  
Article
Uncertainty-Aware Design of High-Entropy Alloys via Ensemble Thermodynamic Modeling and Search Space Pruning
by Roman Dębski, Władysław Gąsior, Wojciech Gierlotka and Adam Dębski
Appl. Sci. 2025, 15(16), 8991; https://doi.org/10.3390/app15168991 - 14 Aug 2025
Viewed by 563
Abstract
The discovery and design of high-entropy alloys (HEAs) faces significant challenges due to the vast combinatorial design space and uncertainties in thermodynamic data. This work presents a modular, uncertainty-aware computational framework with the primary objective of accelerating the discovery of solid-solution HEA candidates. The proposed pipeline integrates ensemble thermodynamic modeling, Monte Carlo-based estimation, and a structured three-phase pruning algorithm for efficient search space reduction. Key quantitative results are achieved in two main areas. First, for binary alloy thermodynamics, a Bayesian Neural Network (BNN) ensemble trained on domain-informed features predicts mixing enthalpies with high accuracy, yielding a mean absolute error (MAE) of 0.48 kJ/mol—substantially outperforming the classical Miedema model (MAE = 4.27 kJ/mol). These probabilistic predictions are propagated through Monte Carlo sampling to estimate multi-component thermodynamic descriptors, including ΔHmix and the Ω parameter, while capturing predictive uncertainty. Second, in a case study on the Al-Cu-Fe-Ni-Ti system, the framework reduces a 2.4 million (2.4 M) candidate pool to just 91 high-confidence compositions. Final selection is guided by an uncertainty-aware viability metric, P(HEA), and supported by interpretable radar plot visualizations for multi-objective assessment. The results demonstrate the framework’s ability to combine physical priors, probabilistic modeling, and design heuristics into a data-efficient and interpretable pipeline for materials discovery. This establishes a foundation for future HEA optimization, dataset refinement, and adaptive experimental design under uncertainty. Full article
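The Monte Carlo propagation step can be illustrated with the standard regular-solution extension ΔHmix = Σ_{i<j} 4 H_ij c_i c_j and the descriptor Ω = T_m ΔS_mix / |ΔHmix|, with ΔS_mix = -R Σ c_i ln c_i. In the sketch below the binary enthalpies, their uncertainties, and the melting temperature are made-up placeholders standing in for the BNN ensemble outputs, not values from the paper.

```python
import itertools
import numpy as np

R = 8.314                      # gas constant, J/(mol K)
c = np.full(5, 0.2)            # equiatomic 5-component composition (illustrative)
Tm = 1500.0                    # assumed composition-averaged melting temperature, K

pairs = list(itertools.combinations(range(5), 2))
rng = np.random.default_rng(0)
# Hypothetical binary mixing enthalpies (kJ/mol) and predictive standard deviations,
# standing in for the BNN ensemble predictions; values are placeholders, not data.
H_mean = rng.uniform(-20, 5, size=len(pairs))
H_std = np.full(len(pairs), 0.5)

n_mc = 10_000
H_samples = rng.normal(H_mean, H_std, size=(n_mc, len(pairs)))
# Regular-solution extension: dH_mix = sum_{i<j} 4 * H_ij * c_i * c_j
pair_weights = np.array([c[i] * c[j] for i, j in pairs])
dH = (4.0 * H_samples * pair_weights).sum(axis=1)          # kJ/mol, one value per MC draw
dS = -R * np.sum(c * np.log(c)) / 1000.0                   # ideal mixing entropy, kJ/(mol K)
omega = Tm * dS / np.abs(dH)                               # Omega descriptor, dimensionless

print(f"dH_mix = {dH.mean():.2f} ± {dH.std():.2f} kJ/mol")
print(f"P(Omega > 1.1) = {(omega > 1.1).mean():.2f}")
```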

17 pages, 1850 KB  
Article
Cloud–Edge Collaborative Model Adaptation Based on Deep Q-Network and Transfer Feature Extraction
by Jue Chen, Xin Cheng, Yanjie Jia and Shuai Tan
Appl. Sci. 2025, 15(15), 8335; https://doi.org/10.3390/app15158335 - 26 Jul 2025
Viewed by 675
Abstract
With the rapid development of smart devices and the Internet of Things (IoT), the explosive growth of data has placed increasingly higher demands on real-time processing and intelligent decision making. Cloud-edge collaborative computing has emerged as a mainstream architecture to address these challenges. However, in sky-ground integrated systems, the limited computing capacity of edge devices and the inconsistency between cloud-side fusion results and edge-side detection outputs significantly undermine the reliability of edge inference. To overcome these issues, this paper proposes a cloud-edge collaborative model adaptation framework that integrates deep reinforcement learning via Deep Q-Networks (DQN) with local feature transfer. The framework enables category-level dynamic decision making, allowing for selective migration of classification head parameters to achieve on-demand adaptive optimization of the edge model and enhance consistency between cloud and edge results. Extensive experiments conducted on a large-scale multi-view remote sensing aircraft detection dataset demonstrate that the proposed method significantly improves cloud-edge consistency. The detection consistency rate reaches 90%, with some scenarios approaching 100%. Ablation studies further validate the necessity of the DQN-based decision strategy, which clearly outperforms static heuristics. In the model adaptation comparison, the proposed method improves the detection precision of the A321 category from 70.30% to 71.00% and the average precision (AP) from 53.66% to 53.71%. For the A330 category, the precision increases from 32.26% to 39.62%, indicating strong adaptability across different target types. This study offers a novel and effective solution for cloud-edge model adaptation under resource-constrained conditions, enhancing both the consistency of cloud-edge fusion and the robustness of edge-side intelligent inference. Full article
