Search Results (1,799)

Search Parameters:
Keywords = black box

22 pages, 2241 KB  
Article
Game-Theoretic Cost-Sensitive Adversarial Training for Robust Cloud Intrusion Detection Against GAN-Based Evasion Attacks
by Jianbo Ding, Zijian Shen and Wenhe Liu
Appl. Sci. 2026, 16(8), 3944; https://doi.org/10.3390/app16083944 (registering DOI) - 18 Apr 2026
Abstract
Cloud-based intrusion detection systems (IDSs) increasingly rely on deep learning classifiers to identify malicious traffic; however, this reliance exposes them to adversarial evasion attacks in which adversaries craft near-imperceptible perturbations to bypass detection. Existing defenses based on conventional adversarial training often recover robustness against known perturbation patterns at the cost of degraded detection accuracy on canonical attack categories—a robustness–accuracy trade-off that remains an open challenge in the field. In this paper, we propose GT-CSAT (Game-Theoretic Cost-Sensitive Adversarial Training), a novel defense framework tailored for cloud security environments. GT-CSAT couples an improved Wasserstein GAN with Gradient Penalty (WGAN-GP) threat generator—conditioned on attack semantics to simulate functionally consistent and highly covert traffic variants—with a minimax adversarial training loop governed by a game-theoretic cost-sensitive loss function. The proposed loss function assigns asymmetric misclassification penalties derived from a two-player zero-sum payoff matrix, enabling the detector to maintain vigilance over both novel adversarial variants and well-characterized conventional threats simultaneously. Specifically, misclassifying an adversarially perturbed attack as benign incurs a strictly higher penalty than the symmetric cross-entropy baseline, while the cost weights are dynamically adapted via a Nash equilibrium-inspired update rule during training. We conduct comprehensive experiments on the Cloud Vulnerabilities Dataset (CVD), CICIDS-2017, and UNSW-NB15, which encompass diverse cloud-specific attack scenarios including denial-of-service, port scanning, brute-force, and SQL injection traffic. 
Under six representative evasion strategies—FGSM, PGD, C&W, BIM, DeepFool, and IDSGAN-style black-box perturbations—GT-CSAT achieves an average robust accuracy of 94.3%, surpassing standard adversarial training by 6.8 percentage points and the undefended baseline by 21.4 percentage points, while preserving clean-traffic detection at 97.1%. These results confirm that the game-theoretic cost structure effectively decouples robustness from accuracy, yielding a Pareto-superior detection profile relative to competing baselines across all evaluated threat models. The source code and experimental configurations have been publicly released to facilitate reproducibility. Full article
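The asymmetric-penalty idea at the core of GT-CSAT (missing an adversarially perturbed attack costs strictly more than a false alarm) can be illustrated with a minimal expected-cost loss. The function name, class labels, and cost values below are illustrative assumptions, not the paper's implementation, which additionally adapts the costs during training via its Nash equilibrium-inspired rule:

```python
import numpy as np

def expected_cost_loss(probs, labels, cost_matrix):
    """Mean expected misclassification cost under a (true, predicted) cost matrix.

    probs:       (N, C) predicted class probabilities
    labels:      (N,) integer true classes
    cost_matrix: (C, C) penalty for predicting class j when the truth is class i
    """
    # For each sample, the expected cost is sum_j cost[y_i, j] * p_ij
    return float((cost_matrix[labels] * probs).sum(axis=1).mean())

# Illustrative 2-class payoff: class 0 = benign, class 1 = attack.
# Letting an attack through costs 5x a false alarm.
COSTS = np.array([[0.0, 1.0],
                  [5.0, 0.0]])

# Confidently labeling an attack "benign" is penalized far more than the
# mirror-image mistake on benign traffic.
miss_attack = expected_cost_loss(np.array([[0.9, 0.1]]), np.array([1]), COSTS)
false_alarm = expected_cost_loss(np.array([[0.1, 0.9]]), np.array([0]), COSTS)
```

Minimizing such a loss pushes probability mass away from the costlier error, which is how a cost-sensitive objective can keep recall on attacks high without abandoning clean-traffic accuracy.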
17 pages, 2443 KB  
Article
Knowledge-Based XGBoost Model for Predicting Corrosion-Fatigue Crack Growth Rate in Aluminum Alloys
by Peng Wang, Xin Chen and Yongzhen Zhang
Crystals 2026, 16(4), 273; https://doi.org/10.3390/cryst16040273 (registering DOI) - 18 Apr 2026
Abstract
Accurate prediction of corrosion-fatigue crack growth rate in aluminum alloys is critical for the safety assessment of aerospace structures. Conventional empirical fracture-mechanics models often struggle to capture multiphysics coupling effects, whereas purely data-driven machine-learning models may lack physical interpretability and generalize poorly beyond the training distribution. To address this challenge, this study proposes a physics-guided knowledge-based XGBoost (KBXGB) model. Based on a comprehensive dataset comprising 2786 experimental records, Permutation Feature Importance was utilized to identify 11 key features, including the stress intensity factor range, stress ratio, frequency, and environmental parameters. The KBXGB framework learns the residual between physics-based empirical models (e.g., the Paris and Walker laws) and measured experimental data, recasting the complex nonlinear mapping into a correction of the systematic deviations of the physical models, thereby achieving deep integration of domain knowledge and data-driven learning. Test results demonstrate that the KBXGB model achieves a coefficient of determination (R2) of 0.9545 and a reduced Mean Relative Error (MRE) of 1.61% on the test set, outperforming standard XGBoost and traditional regression models. Crucially, in independent extrapolation validation, the standard XGBoost model failed (R2 = 0.2858) with non-physical staircase artifacts, whereas the KBXGB model maintained high predictive fidelity (R2 = 0.8646) and successfully reproduced physical crack growth trends. The proposed approach effectively mitigates the “black-box” limitations of machine learning in sparse data regions, offering a high-precision and physically robust tool for corrosion-fatigue life prediction under complex service conditions. Full article
(This article belongs to the Section Crystalline Metals and Alloys)
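The residual-learning recipe behind KBXGB (fit a physics baseline first, then learn only its systematic deviation) can be sketched with ordinary least squares standing in for both the Paris-law fit and the XGBoost correction. The synthetic data, feature names, and coefficients below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic records: a Paris-law trend in dK plus a stress-ratio effect that
# the pure fracture-mechanics baseline cannot see (values are made up)
dK = rng.uniform(5.0, 30.0, 200)               # stress intensity factor range
R = rng.uniform(0.0, 0.7, 200)                 # stress ratio
log_rate = -10.0 + 3.0 * np.log(dK) + 1.5 * R  # "measured" log crack growth rate

# Step 1: physics baseline, i.e. fit log(da/dN) = log C + m * log(dK)
A = np.column_stack([np.ones_like(dK), np.log(dK)])
coef, *_ = np.linalg.lstsq(A, log_rate, rcond=None)
physics_pred = A @ coef

# Step 2: learn only the residual (a linear model standing in for XGBoost)
# against the remaining feature
resid = log_rate - physics_pred
B = np.column_stack([np.ones_like(R), R])
rcoef, *_ = np.linalg.lstsq(B, resid, rcond=None)

# Hybrid prediction = physics baseline + learned correction
hybrid_pred = physics_pred + B @ rcoef
rmse_physics = float(np.sqrt(np.mean((physics_pred - log_rate) ** 2)))
rmse_hybrid = float(np.sqrt(np.mean((hybrid_pred - log_rate) ** 2)))
```

Because the learner only corrects the baseline's systematic deviation, predictions degrade toward the physics model rather than toward arbitrary values outside the training range, which is the extrapolation behavior the abstract reports.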
25 pages, 9088 KB  
Article
MambaKAN: An Interpretable Framework for Alzheimer’s Disease Diagnosis via Selective State Space Modeling of Dynamic Functional Connectivity
by Libin Gao and Zhongyi Hu
Brain Sci. 2026, 16(4), 421; https://doi.org/10.3390/brainsci16040421 - 17 Apr 2026
Abstract
Background/Objectives: Alzheimer’s disease (AD) is an irreversible neurodegenerative disorder that imposes a profound burden on global public health. While resting-state functional magnetic resonance imaging (rs-fMRI)-based dynamic functional connectivity (dFC) analysis has demonstrated promise in capturing time-varying brain network abnormalities, existing deep learning methods suffer from three fundamental limitations: (1) an inability to model temporal dependencies across dynamic connectivity windows, (2) reliance on post hoc black-box explainability tools, and (3) misalignment between feature learning and classification objectives. Methods: To address these challenges, we propose MambaKAN, an end-to-end interpretable framework integrating a Variational Autoencoder (VAE), a Selective State Space Model (Mamba), and a Kolmogorov–Arnold Network (KAN). The VAE encodes each dFC snapshot into a compact latent representation, preserving nonlinear connectivity patterns. The Mamba encoder captures long-range temporal dynamics across the sequence of latent representations via input-selective state transitions. The KAN classifier provides intrinsic interpretability through learnable B-spline activation functions, enabling direct visualization of how latent features influence diagnostic decisions without post-hoc approximation. The entire pipeline is trained end-to-end with a joint loss function that aligns feature learning with classification. Results: Evaluated on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset across five classification tasks (CN vs. AD, CN vs. EMCI, EMCI vs. LMCI, LMCI vs. AD, and four-class), MambaKAN achieves accuracies of 95.1%, 89.8%, 84.0%, 86.7%, and 70.5%, respectively, outperforming strong baselines including LSTM, Transformer, and MLP-based variants. 
Conclusions: Comprehensive ablation studies confirm the indispensable contribution of each module, and the three-layer interpretability analysis reveals key temporal patterns and brain regions associated with AD progression. Full article
24 pages, 1136 KB  
Review
Explainable Deep Learning for Research on the Synergistic Mechanisms of Multiple Pollutants: A Critical Review
by Chang Liu, Anfei He, Jie Gu, Mulan Ji, Jie Hu, Shufeng Qiao, Fenghe Wang, Jing Hua and Jian Wang
Toxics 2026, 14(4), 335; https://doi.org/10.3390/toxics14040335 - 16 Apr 2026
Abstract
The synergistic control of multiple pollutants is critically challenged by complex nonlinear interactions, strong spatiotemporal heterogeneity, and the difficulty of tracing causal drivers. Deep learning offers high predictive power but suffers from the “black-box” problem, limiting its acceptance in environmental decision-making. Explainable Deep Learning (XDL) integrates physical mechanisms with interpretable algorithms, achieving both prediction accuracy and explanatory transparency. This review systematically evaluates the effectiveness and limitations of XDL in analyzing multi-pollutant interactions, with a comparative focus on atmospheric and aquatic environments. Key techniques, including SHAP, attention mechanisms, and physics-informed neural networks, are examined for their roles in synergistic monitoring, source apportionment, and regulatory optimization. The main findings reveal that: (1) XDL, particularly the “tree model + SHAP” paradigm, has become a dominant tool for quantifying driving factors, yet most attributions remain correlational rather than causal; (2) physics-informed fusion (soft vs. hard constraints) improves physical consistency but faces unresolved conflicts between data and physical laws, with current models lacking a conflict detection mechanism; (3) cross-media comparison shows a unified technical logic of “physical mechanism guidance + post hoc feature attribution”, but atmospheric applications lead in embedding advection–diffusion constraints, while aquatic research excels in spatial topology modeling via graph neural networks; (4) critical bottlenecks include the lack of causal inference, uncertainty-unaware interpretations, and data scarcity. Future directions demand a shift from correlation-only to causal-aware attribution, from blind fusion to conflict-detecting systems, and from no evaluation standards to domain-specific validation benchmarks. 
XDL is poised to transform multi-pollutant governance from experience-driven to intelligence-driven approaches, provided that verifiable interpretability and physical consistency become core design principles. Full article
24 pages, 30745 KB  
Review
Vision–Language Models in Medical Imaging for Cancer Diagnosis: A Bibliometric Review
by Musa Adamu Wakili, Aminu Bashir Suleiman, Kaloma Usman Majikumna, Harisu Abdullahi Shehu, Huseyin Kusetogullari and Md. Haidar Sharif
Bioengineering 2026, 13(4), 466; https://doi.org/10.3390/bioengineering13040466 - 16 Apr 2026
Abstract
The demand for advanced detection methods and accurate staging remains a global challenge in cancer diagnosis. Even though traditional deep learning models in medical imaging achieve high precision, they suffer from limited explainability and multimodal reasoning due to their black-box nature, thereby limiting their clinical applicability. To address this gap, recent research has increasingly explored multimodal approaches that integrate visual and textual clinical data to enhance diagnostic accuracy and interpretability. This study presents a bibliometric analysis of 408 publications from 2021 to 2025, collected from Web of Science and Scopus, using VOSviewer and R-Bibliometrix to map citation networks, co-authorship, and keyword co-occurrences. The results reveal a rapid growth from 1 publication in 2021 to 269 in 2025, with significant contributions from leading countries and institutions. Thematic analysis indicates a shift from conventional convolutional approaches toward transformer-based and self-supervised methods, alongside increasing attention to multimodal learning in cancer imaging tasks such as breast, lung, and brain cancer analysis. Overall, this study provides a structured overview of the evolving research landscape, highlighting key trends, emerging themes, and research gaps to inform future developments in multimodal artificial intelligence for cancer diagnosis. Full article
16 pages, 351 KB  
Article
A Black-Box Multiobjective Optimization Method for Discrete Markov Chains
by Julio B. Clempner
Math. Comput. Appl. 2026, 31(2), 63; https://doi.org/10.3390/mca31020063 - 16 Apr 2026
Abstract
In this paper, we propose a Newton-inspired black-box optimization algorithm for multiobjective optimization in constrained ergodic Markov chain environments. The method is motivated by challenges in application areas, where decision-making under uncertainty and limited access to structural information is pervasive. A central contribution of the proposed algorithm is the complexity analysis, which yields substantial computational advantages over conventional optimization approaches. Operating in a purely black-box setting, the algorithm relies exclusively on function evaluations and derivative approximations, without requiring explicit knowledge of the objective function’s internal structure. To approximate system dynamics, we employ an Euler-based scheme that enhances the scalability and adaptability of convex optimization problems. While Markov chains are seldom leveraged in black-box optimization, we demonstrate that constrained ergodic Markov chains constitute a powerful and underexplored modeling framework for learning and decision-making under structural constraints. We provide a complexity analysis and illustrate the effectiveness of the proposed method through a numerical example, highlighting its potential to advance applications in multiobjective optimization and decision-making. Full article
18 pages, 2181 KB  
Article
Explainable AI in Pharmaceutics: Grad-CAM Analysis of Surface Dissolution Imaging Using Convolutional Neural Networks
by Abdullah Al-Baghdadi, Adam Pacławski, Jakub Szlęk and Aleksander Mendyk
Pharmaceutics 2026, 18(4), 481; https://doi.org/10.3390/pharmaceutics18040481 - 14 Apr 2026
Abstract
Background: The dissolution of oral solid dosage forms is a key determinant of drug bioavailability, yet traditional testing methods do not capture the real-time surface dynamics of drug release. This study introduces a novel framework combining surface dissolution imaging (SDi2) with an interpretable, dual-wavelength convolutional neural network (CNN) to predict and understand dissolution behavior. Methods: Eight tablet formulations containing acetylsalicylic acid, sodium salicylate, or salicylamide, combined with either lactose or methylcellulose, were analyzed under two distinct, compendial conditions (pH 1.2 and pH 6.8). Results: Our final CNN model, which synergistically processes spectral images (280 nm for API release and 520 nm for structural changes), temporal data, and formulation composition, accurately predicted dissolution profiles, achieving a coefficient of determination of 0.89 and a root mean square error (RMSE) of 11.57. To overcome the “black-box” nature of deep learning, we employed Gradient-weighted Class Activation Mapping (Grad-CAM) to interpret the model’s predictions. The analysis revealed that the model focused on tablet edges at 280 nm, consistent with surface dissolution, and on bulk regions at 520 nm, reflecting structural changes including erosion and gel-layer growth. Conclusions: These findings suggest that integrating real-time imaging with explainable AI methods can support better understanding of dissolution processes in pharmaceutical formulation development. Full article
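The Grad-CAM computation the abstract relies on is compact: gradients of the target score with respect to one convolutional layer are average-pooled into channel weights, which then re-weight that layer's feature maps. A minimal NumPy sketch follows; in practice the activation and gradient arrays would come from the trained CNN, so the shapes and names here are assumptions:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap for one input from one conv layer.

    activations: (K, H, W) feature maps
    gradients:   (K, H, W) d(score) / d(activations)
    """
    weights = gradients.mean(axis=(1, 2))             # global-average-pooled grads
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positive evidence
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1]
    return cam
```

The resulting (H, W) map is upsampled to the input resolution and overlaid on the image, which is how the study could localize attention to tablet edges at 280 nm and bulk regions at 520 nm.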
16 pages, 1470 KB  
Article
Physics-Guided Deep Learning for Interpretable Biomedical Image Reconstruction and Pattern Recognition in Diagnostic Frameworks
by Akeel Qadir, Saad Arif, Prajoona Valsalan and Osama Khan
Bioengineering 2026, 13(4), 457; https://doi.org/10.3390/bioengineering13040457 - 13 Apr 2026
Abstract
This study introduces a physics-guided deep learning architecture designed for the simulation, reconstruction, and pattern recognition of biomedical images. By explicitly integrating physical priors into the learning model, the framework addresses the black-box nature of traditional artificial intelligence (AI). It provides an explainable AI pathway that enhances diagnostic accuracy, robustness, and clinical interpretation. The proposed framework was evaluated through systematic simulation studies. It involved complex geometric configurations, multimodal physical fields, and noise-corrupted synthetic three-dimensional brain volumes. Quantitative analysis demonstrates consistent improvements in reconstruction fidelity, with the peak signal-to-noise ratio (PSNR) reaching 47 dB and the structural similarity index exceeding 0.90 across all scenarios. Notably, at moderate noise levels (0.05), the framework maintains a PSNR greater than 32 dB, ensuring structural integrity essential for computer-aided diagnosis. Volumetric brain experiments further reveal a 38–44% reduction in activation localization errors, highlighting the framework’s utility in functional imaging and disease prognosis. By grounding deep learning in physical constraints, this study provides a transparent and robust solution for automated disease classification and advanced biomedical imaging tasks within clinical decision support systems. Full article
22 pages, 2471 KB  
Article
Interpretable Grey-Box Residual Learning Framework for State-of-Health Prognostics in Electric Vehicle Batteries Using Real-World Data
by Zahra Tasnim, Kian Lun Soon, Wei Hown Tee, Lam Tatt Soon, Wai Leong Pang, Sui Ping Lee, Fazliyatul Azwa Md Rezali, Nai Shyan Lai and Wen Xun Lian
World Electr. Veh. J. 2026, 17(4), 201; https://doi.org/10.3390/wevj17040201 - 11 Apr 2026
Abstract
Conventional black-box models for electric vehicle (EV) battery State-of-Health (SOH) prediction achieve high accuracy but lack interpretability, limiting their practical deployment in Battery Management Systems (BMSs). To circumvent these limitations, this study proposes a novel Grey-Box Residual-Driven Framework (GBRDF) that synergizes Deep Symbolic Regression (DSR) with a residual-learning BiLSTM network, making two contributions: (1) the DSR component derives explicit, interpretable mathematical expressions governing global degradation trajectories based on electrochemical features, and (2) the BiLSTM network models the residual errors to capture high-frequency nonlinearities and complex sequential dependencies not addressed by the symbolic baseline. By fusing the physics-informed transparency of DSR with the data-driven refinement of BiLSTM, the GBRDF significantly enhances forecasting precision. Experimental validation across four independent EV datasets shows that the GBRDF achieves the highest coefficient of determination (R2) of 0.982, and the lowest mean absolute error (MAE) of 0.1398 and root mean square error (RMSE) of 0.3176, significantly outperforming existing methods. Furthermore, the DSR-derived SOH equation shows that battery degradation is primarily driven by high voltage exposure and charging time, with mathematical transformations reflecting how degradation accelerates initially then slows, matching real-world aging patterns where voltage stress dominates over temperature and usage variations. Full article
(This article belongs to the Section Storage Systems)
23 pages, 4811 KB  
Article
Improving Transferability of Adversarial Attacks via Frequency-Consistent Regularization
by Tengfei Shi, Shihai Wang and Bin Liu
Appl. Sci. 2026, 16(8), 3748; https://doi.org/10.3390/app16083748 - 11 Apr 2026
Abstract
Adversarial examples have revealed the vulnerability of deep neural networks, and their transferability makes black-box attacks particularly concerning. However, perturbations crafted on a surrogate model often do not remain sufficiently effective on unseen target models. In this paper, we revisit this issue from a frequency-domain perspective and observe that perturbation optimization can become overly dependent on specific spectral patterns, which weakens cross-model transfer. To address this problem, we propose frequency-consistent regularization (FCR), a simple plug-in strategy that can be combined with existing iterative attacks. FCR introduces multiple low-frequency preserving views with randomly sampled frequency ranges at each iteration and optimizes perturbations across these varied views. In this way, the generated perturbations are less tied to a specific frequency configuration and show improved transferability. Experimental results show that FCR consistently improves the transfer performance of various iterative attacks. The improvement is observed not only in standard target models but also in adversarially trained models, where the gain is often more pronounced. Full article
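The "low-frequency preserving views with randomly sampled frequency ranges" that FCR optimizes over can be sketched as an FFT-domain band mask. This is a plausible reading of the abstract, not the authors' code; the function name and band parameterization are assumptions:

```python
import numpy as np

def low_frequency_view(x, keep_ratio):
    """Return x with only a centered low-frequency band of its 2-D spectrum kept.

    x: (H, W) array; keep_ratio in (0, 1] sets the width of the preserved band.
    """
    H, W = x.shape
    F = np.fft.fftshift(np.fft.fft2(x))        # move zero frequency to the center
    rh, rw = int(H * keep_ratio / 2), int(W * keep_ratio / 2)
    mask = np.zeros((H, W))
    mask[H // 2 - rh:H // 2 + rh + 1, W // 2 - rw:W // 2 + rw + 1] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# An FCR-style attack step would evaluate its loss on several such views, with
# keep_ratio drawn at random each iteration, and average the resulting
# gradients so the perturbation is not tied to one frequency configuration.
```

Averaging over randomly sized bands is what makes the perturbation less dependent on any single spectral pattern of the surrogate model.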
24 pages, 3387 KB  
Article
Optimisation-Based Tuning of a Triple-Loop Vehicle Controller to Mimic Professional Driver Performance in a DiL Simulator
by Vincenzo Palermo, Marco Gabiccini, Eugeniu Grabovic, Massimo Guiggiani, Matteo Pergoli and Luca Bergianti
Vehicles 2026, 8(4), 87; https://doi.org/10.3390/vehicles8040087 - 10 Apr 2026
Abstract
This paper presents a simulation-based methodology for automated tuning of a triple-loop controller (steering, throttle, and braking) for a Dallara single-seater race car. The approach targets on-track driving at handling limits, where strong nonlinearities and coupled dynamics dominate, treating the vehicle as a black box. Five controller gains are optimized via derivative-free pattern search, using reference trajectories from a professional driver in a Driver-in-the-Loop (DiL) simulator. Human-likeness is promoted by penalty terms on state and control trajectories while maximizing distance over a fixed horizon as a proxy for lap-time reduction. The application uses a high-fidelity multibody vehicle model with realistic tire, suspension, and actuator dynamics in the DiL environment, rather than simplified single-track representations. Contributions are: (i) effective application of derivative-free optimization to complex, high-dimensional, black-box vehicle systems; and (ii) a systematic, reproducible procedure for automatic tuning of controller parameters with a predetermined architecture to reproduce a professional driver’s performance and embed human-likeness. Optimization required approximately 2.4 h. Results show that the optimized controller improves track coverage by 63.6 m (1.1% increase) compared to manual tuning while maintaining a realistic driving style, offering a more systematic and reliable solution than manual, trial-and-error calibration. Full article
(This article belongs to the Special Issue Advanced Control Strategies for Vehicle Dynamics and Aerodynamics)
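The derivative-free pattern search used for the controller gains follows the classic compass-search pattern: poll each coordinate in both directions, accept improvements, and shrink the step when no poll helps. The sketch below uses a simple 2-parameter quadratic as a stand-in for the simulator objective (the paper tunes five gains against a high-fidelity DiL model, which is not reproduced here):

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Compass/pattern search: poll +/- step along each axis, shrink on failure.

    Uses only function evaluations, so f can be an opaque simulator call.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                cand = x.copy()
                cand[i] += s
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5          # no polling direction helped: refine the mesh
            if step < tol:
                break
    return x, fx

# Stand-in objective: a smooth 2-parameter bowl with its optimum at (2, -1)
x_opt, f_opt = pattern_search(lambda v: (v[0] - 2.0) ** 2 + (v[1] + 1.0) ** 2,
                              x0=[0.0, 0.0])
```

Because each iteration needs only objective values, the method tolerates the noisy, non-differentiable responses typical of black-box vehicle simulations, at the cost of many function evaluations.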
18 pages, 2083 KB  
Article
GenAI-Enabled AI Teachers and Student Learning Engagement Across International Higher Education Contexts
by Anders Berglund, Pauldy C. J. Otermans and Dev Aditya
Educ. Sci. 2026, 16(4), 600; https://doi.org/10.3390/educsci16040600 - 9 Apr 2026
Abstract
Generative Artificial Intelligence (GenAI) is reshaping how students engage with learning both within and beyond traditional classroom settings. In a time when the development of transferable skills is essential for enabling students to thrive in varied and rapidly evolving environments, the potential of GenAI to enhance learning engagement remains insufficiently understood. Despite rising interest in interactive, personalised learning companions that enable deep engagement and ongoing skills development, scholarly research remains limited. This gap constrains effective institutional use of GenAI, reinforces black-box thinking, and restricts understanding of meaningful student engagement and skills acquisition. This paper investigates how a GenAI-enabled AI teacher supports student learning engagement, focusing on behavioral engagement as evidenced by learner interaction and participation patterns across diverse international higher education institutions. Using a combination of quantitative engagement metrics and qualitative learner reflections, the study examines how GenAI supports personalised learning, sustained interaction, autonomy, and cognitive engagement among students with varying educational backgrounds. The findings demonstrate that GenAI-based teaching systems can promote meaningful learning engagement, enhance motivation, and strengthen the development of transferable and employability skills. The study contributes empirical evidence to current debates on GenAI integration, teacher practices, and student engagement, offering implications for curriculum design and institutional adoption of GenAI-enabled learning tools. Full article
21 pages, 337 KB  
Article
Black Box Optimization for Ergodic Systems in Markov Chains
by Julio B. Clempner
Mathematics 2026, 14(8), 1246; https://doi.org/10.3390/math14081246 - 9 Apr 2026
Abstract
This paper studies a black-box methodology for optimizing ergodic stochastic systems, focusing on the construction of scalar measures that reliably indicate progress toward optimality. Our starting point is a state-value quantity that inherently exhibits oscillatory behavior and does not converge under standard conditions. We show that, despite its fluctuations, this quantity admits a recursive representation derived from a one-step-ahead fixed-local-optimal policy. The approach relies on identifying a Lyapunov-like function whose evolution reflects the long-run behavior of the system without requiring explicit knowledge of its internal dynamics. Such a function provides a monotonic indicator—non-increasing over time—that remains valid for any initial probability distribution. Whenever an optimal trajectory of the Markov chain exists, the proposed method guarantees convergence to it. We also provide a constructive procedure for obtaining the Lyapunov-like function and validate the methodology through theoretical analysis and numerical simulations. Full article
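The flavor of a monotonic indicator "valid for any initial probability distribution" can be illustrated on a small ergodic chain: the L1 distance between the current state distribution and the stationary one never increases under the chain update. The 3-state matrix below is an arbitrary example for illustration, not the paper's Lyapunov-like construction:

```python
import numpy as np

# A small ergodic chain (all entries positive, hence irreducible and aperiodic)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution via power iteration on the row-stochastic matrix
pi = np.full(3, 1.0 / 3.0)
for _ in range(500):
    pi = pi @ P
pi /= pi.sum()

# The L1 distance to pi is non-increasing under mu -> mu @ P, whatever the
# starting distribution: a simple scalar measure of progress toward equilibrium
mu = np.array([1.0, 0.0, 0.0])
dists = []
for _ in range(20):
    dists.append(float(np.abs(mu - pi).sum()))
    mu = mu @ P
```

Monotonicity here follows from stochastic matrices being non-expansive in the L1 norm; the paper's contribution is constructing such a function in a black-box setting, without explicit knowledge of P.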
56 pages, 3022 KB  
Review
From Mechanics to Machine Learning in Additive Manufacturing: A Review of Deformation, Fatigue, and Fracture
by Murat Demiral and Murat Otkur
Technologies 2026, 14(4), 218; https://doi.org/10.3390/technologies14040218 - 9 Apr 2026
Abstract
Additive manufacturing (AM) enables a level of design flexibility that is difficult to achieve with conventional techniques, yet it inherently yields materials marked by significant variability, anisotropy, and sensitivity to defects that challenge classical mechanics-of-materials assumptions. Process-driven microstructural heterogeneity, stochastic defect populations, and residual stresses strongly influence deformation, fatigue, and fracture behavior, often outweighing nominal material properties and constraining the predictive capability of traditional constitutive and fracture mechanics models. Machine learning (ML) has emerged as a powerful means of handling the complexity of AM data; however, many current approaches depend on black-box models that lack physical transparency, extrapolate poorly, and treat uncertainty inadequately. This review contends that ML should augment—rather than replace—mechanics-based modeling, and that dependable prediction of AM material behavior requires mechanics-informed ML frameworks. We critically analyze the central mechanics challenges in AM and evaluate established modeling strategies alongside emerging ML methods relevant to deformation, damage, fatigue, and fracture. Particular emphasis is given to physics-informed and hybrid ML approaches that explicitly incorporate anisotropy, defect sensitivity, residual stress effects, and uncertainty quantification within learning architectures. Recent progress in ML-assisted constitutive modeling, fatigue and fracture prediction, and digital twin development is synthesized, and the implications for qualification, certification, and structural deployment of AM components are discussed. Full article
(This article belongs to the Collection Review Papers Collection for Advanced Technologies)
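One way to read "mechanics-informed ML" concretely is as a composite loss: a data-misfit term plus a penalty for violating a known mechanics constraint. The sketch below is illustrative only (not taken from the review): it fits the Basquin fatigue relation S = A·N^b in log-log space, with a penalty discouraging a positive exponent b, since stress amplitude cannot increase with cycles to failure. The data, weights, and optimizer are our own toy choices.

```python
import numpy as np

# Illustrative physics-constrained fit, not from the review:
# data loss (least squares in log-log space) + mechanics penalty (b <= 0).

rng = np.random.default_rng(0)
N = np.logspace(3, 7, 30)                            # cycles to failure
S = 900.0 * N ** (-0.09) * rng.lognormal(0.0, 0.05, N.size)  # noisy S-N data

x, y = np.log(N), np.log(S)
xc = x - x.mean()                                    # center the regressor

def loss(theta, lam=100.0):
    c, b = theta
    data = np.mean((c + b * xc - y) ** 2)            # data-misfit term
    physics = max(b, 0.0) ** 2                       # penalize non-physical b>0
    return data + lam * physics

theta = np.array([0.0, 0.0])
for _ in range(2000):                                # finite-difference descent
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = 1e-6
        g[i] = (loss(theta + e) - loss(theta - e)) / 2e-6
    theta = theta - 0.05 * g

b_hat = theta[1]
A_hat = np.exp(theta[0] - b_hat * x.mean())          # recover A from intercept
```

The same pattern scales to the hybrid frameworks the review advocates: swap the least-squares term for a neural network loss and the exponent-sign penalty for residuals of equilibrium, anisotropy, or defect-sensitivity constraints.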

32 pages, 7135 KB  
Article
Evolutionary Multi-Objective Prompt Learning for Synthetic Text Data Generation with Black-Box Large Language Models
by Diego Pastrián, Nicolás Hidalgo, Víctor Reyes and Erika Rosas
Appl. Sci. 2026, 16(8), 3623; https://doi.org/10.3390/app16083623 - 8 Apr 2026
Abstract
High-quality training data are essential for the performance and generalization of artificial intelligence systems, particularly in dynamic environments such as adaptive stream processing for disaster response. However, constructing large and representative datasets remains costly and time-consuming, especially in domains where real data are scarce or difficult to obtain. Large Language Models (LLMs) provide powerful capabilities for synthetic text generation, yet the quality of generated data strongly depends on the design of input prompts. Prompt engineering is therefore critical, but it remains largely manual and difficult to scale, particularly in black-box settings where model internals are inaccessible. This work introduces EVOLMD-MO, a multi-objective evolutionary framework for automated prompt learning aimed at generating high-quality synthetic text datasets using black-box LLMs. The proposed approach formulates prompt optimization as a multi-objective search problem in which candidate prompts evolve through genetic operators guided by two complementary objectives: semantic fidelity to reference data and generative diversity of the produced samples. To support scalable optimization, the framework integrates a modular multi-agent architecture that decouples prompt evolution, LLM interaction, and evaluation mechanisms. The evolutionary process is implemented using the NSGA-II algorithm, enabling the discovery of diverse Pareto-optimal prompts that balance semantic preservation and diversity. Experimental evaluation using large-scale disaster-related social media data demonstrates that the proposed approach consistently improves prompt quality across generations while maintaining a stable trade-off between fidelity and diversity. Compared with a single-objective baseline, EVOLMD-MO explores a significantly broader semantic search space and produces more diverse yet semantically coherent synthetic datasets. These results indicate that multi-objective evolutionary prompt learning constitutes a promising strategy for black-box LLM-driven data generation, with potential applicability to adaptive data analytics and real-time decision-support systems in highly dynamic environments, pending broader validation across domains and models. Full article
(This article belongs to the Special Issue Resource Management for AI-Centric Computing Systems)
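The two-objective selection step at the heart of NSGA-II-style prompt evolution can be sketched in a few lines. This toy version uses mocked (fidelity, diversity) scores and invented prompt strings; EVOLMD-MO's actual scoring, genetic operators, and multi-agent architecture are not shown.

```python
# Toy sketch of Pareto selection over candidate prompts, two objectives to
# maximize: (fidelity, diversity). Scores are mocked, not computed from an LLM.

def dominates(a, b):
    """a dominates b: no worse on every objective, strictly better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population, scores):
    """Return the non-dominated subset (the first NSGA-II front)."""
    return [p for i, p in enumerate(population)
            if not any(dominates(scores[j], scores[i])
                       for j in range(len(population)) if j != i)]

prompts = ["summarize the report", "describe the flood event",
           "list affected areas", "write a tweet about the storm"]
scores = [(0.9, 0.2), (0.7, 0.7), (0.4, 0.9), (0.3, 0.3)]  # mocked values

front = pareto_front(prompts, scores)
# The last prompt is dominated by "describe the flood event" on both objectives.
```

A full NSGA-II run adds ranking of successive fronts and crowding-distance tie-breaking, then applies crossover and mutation to the survivors; the front extraction above is the piece that yields the fidelity-diversity trade-off curve of Pareto-optimal prompts.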
