Search Results (3,097)

Search Parameters:
Keywords = constraint-based modelling approach

42 pages, 8197 KB  
Article
A Hybrid Game Engine–Generative AI Framework for Overcoming Data Scarcity in Open-Pit Crack Detection
by Rohan Le Roux, Siavash Khaksar, Mohammadali Sepehri and Iain Murray
Mach. Learn. Knowl. Extr. 2026, 8(4), 99; https://doi.org/10.3390/make8040099 - 12 Apr 2026
Abstract
Open-pit mining operations rely heavily on visual inspection to identify indicators of slope instability such as surface cracks. Early identification of these geotechnical hazards enables timely safety interventions to protect both workers and assets in the event of slope failures or landslides. While computer vision (CV) approaches offer a promising avenue for autonomous crack detection, their effectiveness remains constrained by the scarcity of labelled geotechnical datasets. Deep learning (DL)-based models, in particular, require large amounts of representative training data to generalize to unseen conditions; however, collecting such data from operational mine sites is limited by safety, cost, and data confidentiality constraints. To address this challenge, this study proposes a novel hybrid game engine–generative artificial intelligence (AI) framework for large-scale dataset generation without requiring real-world training data. Leveraging a parameterized virtual environment developed in Unreal Engine 5 (UE5), the framework generates realistic images of open-pit surface cracks and enhances their fidelity and diversity using StyleGAN2-ADA. The synthesized datasets were used to train the YOLOv11 real-time object detection model and evaluated on a held-out real-world dataset of open-pit slope imagery to assess the effectiveness of the proposed framework in improving model generalizability under extreme data scarcity. Experimental results demonstrated that models trained using the proposed framework consistently outperformed the UE5 baseline, with average precision (AP) at intersection over union (IoU) thresholds of 0.5 and [0.5:0.95] increasing from 0.792 to 0.922 (+16.4%) and 0.536 to 0.722 (+34.7%), respectively, across the best-performing configurations. 
These findings demonstrate the effectiveness of hybrid generative AI frameworks in mitigating data scarcity in CV applications and supporting the development of scalable automated slope monitoring systems for improved worker safety and operational efficiency in open-pit mining. Full article
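The reported AP gains can be sanity-checked with a one-line relative-improvement calculation (a generic formula, not code from the paper):

```python
def relative_gain(before: float, after: float) -> float:
    """Relative improvement in percent: (after - before) / before * 100."""
    return (after - before) / before * 100.0

# AP@0.5: 0.792 -> 0.922; AP@[0.5:0.95]: 0.536 -> 0.722
print(round(relative_gain(0.792, 0.922), 1))  # 16.4
print(round(relative_gain(0.536, 0.722), 1))  # 34.7
```

Both values match the percentages quoted in the abstract.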
18 pages, 5351 KB  
Article
Dual-Factor Adaptive Robust Aggregation for Secure Federated Learning in IoT Networks
by Zuan Song, Wuzheng Tan, Hailong Wang, Guilong Zhang and Jian Weng
Future Internet 2026, 18(4), 201; https://doi.org/10.3390/fi18040201 - 10 Apr 2026
Abstract
Federated Learning (FL) has been widely adopted in privacy-sensitive and distributed environments. However, training stability becomes significantly challenged when differential privacy (DP) noise and Byzantine client behaviors coexist, as these heterogeneous perturbations jointly introduce time-varying distortions to model updates. Existing approaches typically address privacy and robustness in isolation. Under DP constraints, noise injection increases gradient variance and obscures the distinction between benign and adversarial updates, causing many robust aggregation methods to misclassify normal clients or fail to detect malicious ones. As a result, their effectiveness degrades substantially in practical IoT environments where noise and attacks interact. In this work, we propose a dual-factor adaptive and robust aggregation framework (DARA) to improve the stability of FL under such combined disturbances. DARA adjusts the differential privacy noise scale by jointly considering local update magnitudes and training-round dynamics, aiming to mitigate noise-induced bias under a fixed privacy budget. Meanwhile, a direction-aware weighted aggregation scheme assigns continuous trust weights based on cosine similarity between updates, thereby suppressing the influence of potentially anomalous or adversarial clients. We conduct extensive experiments on multiple benchmark datasets to evaluate DARA under differential privacy constraints and Byzantine attack scenarios. The results indicate that DARA achieves favorable robustness and convergence behavior compared with representative aggregation baselines, while maintaining competitive model accuracy. Full article
(This article belongs to the Special Issue Federated Learning: Challenges, Methods, and Future Directions)
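DARA's exact weighting rule is not given in the abstract; the following is a minimal sketch of the general idea it describes — continuous trust weights from cosine similarity to a robust reference direction — using the coordinate-wise median as an assumed reference (a common choice, not necessarily the authors'):

```python
import math
import statistics

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def trust_weighted_aggregate(updates):
    """Aggregate client updates, weighting each by its cosine similarity
    to a coordinate-wise-median reference direction; negative similarity
    is clipped to zero, suppressing anomalous or adversarial clients."""
    ref = [statistics.median(col) for col in zip(*updates)]
    weights = [max(cosine(u, ref), 0.0) for u in updates]
    total = sum(weights) or 1.0
    dim = len(updates[0])
    return [sum(w * u[i] for w, u in zip(weights, updates)) / total
            for i in range(dim)]

benign = [[1.0, 1.0], [0.9, 1.1], [1.1, 0.9]]
adversarial = [[-5.0, -5.0]]  # update pointing against the benign direction
agg = trust_weighted_aggregate(benign + adversarial)  # stays near [1.0, 1.0]
```

A plain mean of the four updates would be dragged to [-0.5, -0.5]; the clipped cosine weighting leaves the adversarial update with zero weight.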

29 pages, 2742 KB  
Article
AH-CGAN: An Adaptive Hybrid-Loss Conditional GAN for Class-Imbalance Mitigation in Intrusion Detection Systems
by Ya Zhang, Faizan Qamar, Ravie Chandren Muniyandi and Yuqing Dai
Mathematics 2026, 14(8), 1264; https://doi.org/10.3390/math14081264 - 10 Apr 2026
Abstract
With the explosive growth of the Internet of Things (IoT) and cloud-computing traffic, Intrusion Detection Systems (IDSs) have become a cornerstone of network security. However, modern traffic data often exhibits extreme class imbalance and long-tailed distributions, leading to persistently high miss rates for minority attack categories in Machine Learning (ML)-based IDSs. Conventional oversampling may introduce decision noise, whereas standard Generative Adversarial Networks (GANs) can suffer from training instability and mode collapse when modeling high-dimensional tabular traffic features. To address these challenges, we propose a high-fidelity traffic augmentation framework based on an Adaptive Hybrid-loss Conditional GAN (AH-CGAN). Specifically, AH-CGAN introduces an iteration-dependent adaptive gradient penalty (AGP) schedule to enforce the Lipschitz continuity constraint more effectively during training and incorporates a feature-matching objective to align intermediate critic representations between real and synthetic traffic. Experiments on the CIC-IDS2017 benchmark show that AH-CGAN generates distribution-consistent synthetic samples and that augmentation improves downstream detection across multiple classifiers. In particular, the weighted F1-score of Logistic Regression increases from 0.8237 to 0.8697 (Δ = +0.0460, i.e., +4.6%). Overall, the proposed approach enhances minority coverage in the feature space and can improve class separability, providing a practical solution for long-tailed IDS. Full article
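The paper's exact AGP formula is not given in the abstract; one plausible shape — a gradient-penalty coefficient that grows with the training iteration, so the Lipschitz constraint is enforced more strongly as the critic matures — can be sketched as follows (the coefficient range and linear ramp are illustrative assumptions):

```python
def adaptive_gp_coeff(step: int, total_steps: int,
                      lam_start: float = 5.0, lam_end: float = 15.0) -> float:
    """Iteration-dependent gradient-penalty coefficient for a WGAN-GP-style
    critic loss, L = E[D(fake)] - E[D(real)] + lam(t) * E[(||grad|| - 1)^2].
    Ramps linearly from lam_start to lam_end over training (hypothetical
    schedule; the paper's AGP schedule may differ)."""
    frac = min(max(step / total_steps, 0.0), 1.0)
    return lam_start + (lam_end - lam_start) * frac

# Early training penalizes gently; late training enforces Lipschitz strictly
lams = [adaptive_gp_coeff(t, 1000) for t in (0, 500, 1000)]  # [5.0, 10.0, 15.0]
```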

30 pages, 939 KB  
Article
AI-Driven Financial Solutions for Climate Resilience and Geopolitical Risk Mitigation in Low- and Middle-Income Countries
by Abdelrahman Mohamed Mohamed Saeed and Muhammad Ali
Economies 2026, 14(4), 134; https://doi.org/10.3390/economies14040134 - 10 Apr 2026
Abstract
Climate change disproportionately threatens low- and middle-income countries, yet integrated assessments combining socio-economic fragility with physical hazards remain limited. This study quantifies multi-dimensional climate vulnerability and derives optimized adaptation policies for six representative nations (Bangladesh, Colombia, Kenya, Morocco, Pakistan, Vietnam) by fusing socio-economic indicators with climate risk data (2000–2024). A computational framework integrating unsupervised learning, dimensionality reduction, and predictive modeling was employed. Principal Component Analysis synthesized eight indicators into a Compound Vulnerability Score (CVS), while K-Means and DBSCAN identified distinct vulnerability regimes. XGBoost quantified driver importance, and Graph Neural Networks captured systemic interconnections. XGBoost identified projected drought risk (31.2%), precipitation change (18.1%), and poverty headcount (14.3%) as primary drivers. Graph networks demonstrated significant risk amplification in African nations (Morocco SRS: 0.728–0.874; Kenya SRS: 0.504–0.641) versus damping in Asian countries. A Reinforcement Learning (RL) agent was trained using Deep Q-Networks with experience replay to optimize intervention portfolios under budget constraints. The RL policy achieved a 23% reduction in systemic risk compared to uniform allocation baselines, generating context-specific priorities: drought management for Morocco (score 50) and Pakistan (40); poverty alleviation for Kenya (40); coastal protection for Bangladesh (40); agricultural resilience for Vietnam (35); and institutional capacity building for Colombia (50). In conclusion, socio-economic fragility non-linearly amplifies climate hazards, with poverty and drought risk constituting critical vulnerability multipliers. The AI-driven framework demonstrates that targeted interventions in high-sensitivity systems maximize systemic risk reduction. 
This integrated approach provides a replicable, evidence-based foundation for strategic adaptation finance allocation in an increasingly uncertain climate future. Full article
(This article belongs to the Special Issue Energy Consumption, Financial Development and Economic Growth)

29 pages, 2174 KB  
Review
Energy Management Technologies for All-Electric Ships: A Comprehensive Review for Sustainable Maritime Transport
by Lyu Xing, Yiqun Wang, Han Zhang, Guangnian Xiao, Xinqiang Chen, Qingjun Li, Lan Mu and Li Cai
Sustainability 2026, 18(8), 3778; https://doi.org/10.3390/su18083778 - 10 Apr 2026
Abstract
To systematically review the research progress, methodological frameworks, and application characteristics of energy management technologies for All-Electric Ships (AES), this review provides a comprehensive and critical survey of studies published over the past two decades, following the technical trajectory of multi-energy coupling–multi-objective optimization–engineering-oriented operation. Based on a structured analysis of representative literature, the review first elucidates the overall architecture and operational characteristics of AES energy systems from a system-level perspective, highlighting their core advantages as “mobile microgrids” in terms of multi-energy coordination and dispatch flexibility. On this basis, a structured classification framework for energy management strategies is established, and the theoretical foundations, applicable scenarios, and engineering feasibility of rule-based, optimization-based, uncertainty-aware, and intelligent/data-driven approaches are comparatively reviewed and discussed. Furthermore, focusing on key research themes—including multi-energy system optimization, ship–port–microgrid coordinated operation, battery safety and lifetime-oriented management, and real-time energy management strategies—the review synthesizes the main findings and engineering validation progress reported in recent studies. The analysis indicates that, with the integration of fuel cells, renewable energy sources, and Hybrid Energy Storage Systems (HESS), energy management for AES has evolved from a single power allocation problem into a system-level optimization challenge involving multiple time scales, multiple objectives, and diverse sources of uncertainty. 
Optimization-based and Model Predictive Control (MPC) methods have shown promising performance in many simulation and pilot-scale studies for improving energy efficiency and emission performance, while robust optimization and data-driven approaches offer useful support for enhancing operational resilience, prediction capability, and decision quality under complex and uncertain conditions. These advances collectively contribute to the environmental, economic, and operational sustainability of maritime transport by reducing greenhouse gas emissions, extending equipment lifetime, and enabling efficient integration of renewable energy sources. At the same time, the current literature still reveals important limitations related to model fidelity, data availability, validation maturity, and the gap between methodological sophistication and practical deployment. Overall, an increasingly structured but still evolving research framework has emerged in this field. Future research should further strengthen ship–port–microgrid coordinated energy management frameworks, develop system-level optimization methods that integrate safety constraints and uncertainty, and advance intelligent Energy Management Systems (EMS) oriented toward sustainable zero-carbon shipping objectives. Full article

15 pages, 4228 KB  
Article
Interpretable Machine-Learning Prediction of Atmospheric Zinc Corrosion Depth Under Diverse Environmental Conditions
by Sandeep Jain, Rahul Singh Mourya, Reliance Jain, Sheetal Kumar Dewangan and Saurabh Tiwari
Processes 2026, 14(8), 1214; https://doi.org/10.3390/pr14081214 - 10 Apr 2026
Abstract
Understanding the depth and severity of corrosion is vital for evaluating the long-term durability and economic performance of Zn-based structures. In this study, a machine learning (ML) framework was applied to forecast the corrosion depth of zinc under varying environmental conditions. A dataset of 300 samples, compiled from previously published atmospheric corrosion studies across various environments, was used to develop and evaluate the models. Seven ML algorithms were developed using environmental parameters such as temperature, time of wetness (TOW), SO2 concentration, Cl concentration, and exposure time as inputs. The models were trained with cross-validation and hyperparameter optimization to ensure robust predictive performance and minimize overfitting. Among all models, the Random Forest (RF) model achieved the best predictive performance, with an R2 of 96.4% and an RMSE of 0.642 µm. The predictive ability of the optimized RF model was further confirmed on five new environmental systems, attaining excellent agreement between predicted and measured values (R2 = 97.9%, RMSE = 0.87 µm). Model interpretability analysis using SHAP (SHapley Additive exPlanations) revealed that exposure time and SO2 concentration are the most significant parameters governing zinc corrosion behaviour. The developed ML framework offers interpretable insights into the influence of environmental parameters on atmospheric zinc corrosion and a reliable tool for forecasting corrosion depth. These findings highlight the potential of ML approaches to support corrosion mitigation strategies and accelerate materials design by reducing reliance on conventional trial-and-error experimentation. Full article
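The figures of merit reported here (R², RMSE) are standard and can be computed from predictions directly; the toy corrosion depths below are illustrative, not the paper's data:

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error, in the units of y (here: micrometres)."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Illustrative corrosion depths (um): measured vs. model-predicted
measured = [2.1, 3.4, 5.0, 7.8, 9.2]
predicted = [2.0, 3.6, 4.9, 7.5, 9.6]
print(rmse(measured, predicted), r2(measured, predicted))
```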

31 pages, 2759 KB  
Article
Uncertainty-Aware Groundwater Potential Mapping in Arid Basement Terrain Using AHP and Dirichlet-Based Monte Carlo Simulation: Evidence from the Sudanese Nubian Shield
by Mahmoud M. Kazem, Fadlelsaid A. Mohammed, Abazar M. A. Daoud and Tamás Buday
Water 2026, 18(8), 901; https://doi.org/10.3390/w18080901 - 9 Apr 2026
Abstract
Groundwater sustains human activity in arid crystalline terrains where surface water is scarce and hydrogeological data are limited. However, most groundwater potential mapping approaches depend on deterministic weighting methods without quantifying model variability. This study describes an uncertainty-aware Remote Sensing and Geographic Information Systems (RS–GIS) framework to delineate groundwater potential zones in the Wadi Arab Watershed, Northeastern Sudan. Nine thematic factors—geology and lithology, rainfall, slope, drainage density, lineament density, soil, land use/land cover, topographic wetness index, and height above nearest drainage—were integrated using the Analytical Hierarchy Process (AHP), with acceptable consistency (Consistency Ratio (CR) < 0.1). To address subjectivity in weights, a Dirichlet-based Monte Carlo simulation (500 iterations) was implemented to perturb AHP weights whilst preserving compositional constraints. The resulting Groundwater Potential Index (GWPI) classified 32.69% of the watershed as high to very high potential, primarily associated with alluvial deposits and fractured crystalline rocks. Model validation using Receiver Operating Characteristic (ROC) analysis yielded an Area Under the Curve (AUC) of 0.704, indicating acceptable predictive performance. Uncertainty assessment showed low spatial variability (mean standard deviation (SD) = 0.215) and stable exceedance probabilities, verifying the robustness of predicted high-potential zones. The proposed probabilistic AHP framework augments decision reliability and provides a transferable, cost-effective tool for groundwater planning in data-limited arid basement environments. Full article
(This article belongs to the Section Hydrogeology)
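The Dirichlet perturbation step can be sketched with the standard gamma-draw construction: sampling from Dirichlet(c·w) keeps every weight vector non-negative and summing to one (the compositional constraint) while centering on the AHP weights w. The abstract gives 500 iterations; the concentration value and the weight vector below are illustrative assumptions:

```python
import random

def dirichlet_sample(alpha, rng):
    """Draw one Dirichlet(alpha) sample via normalized gamma draws."""
    draws = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(draws)
    return [d / total for d in draws]

def perturb_ahp_weights(base_weights, concentration=200.0,
                        n_iter=500, seed=42):
    """Monte Carlo perturbation of AHP weights: each iteration samples
    from Dirichlet(concentration * w), whose mean is w itself, so the
    ensemble stays centered on the AHP weights. The concentration
    controls spread and is an illustrative choice, not the paper's."""
    rng = random.Random(seed)
    alpha = [concentration * w for w in base_weights]
    return [dirichlet_sample(alpha, rng) for _ in range(n_iter)]

# Nine thematic-factor weights (illustrative, not the paper's values)
w = [0.30, 0.20, 0.15, 0.10, 0.08, 0.07, 0.05, 0.03, 0.02]
samples = perturb_ahp_weights(w)
```

Each of the 500 sampled vectors can then drive one GWPI overlay, and the per-pixel standard deviation across runs gives the uncertainty surface described above.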
33 pages, 2162 KB  
Article
Hybrid Narwhale Optimization with Super Modified Simplex and Runge–Kutta Enhancements: Benchmark Validation and Application to Fuzzy Aggregate Production Planning
by Pasura Aungkulanon, Anucha Hirunwat, Roberto Montemanni and Pongchanun Luangpaiboon
Algorithms 2026, 19(4), 295; https://doi.org/10.3390/a19040295 - 9 Apr 2026
Abstract
Aggregate production planning (APP) aligns medium-term production, manpower, inventory, and subcontracting decisions with expected demand. Deterministic planning models are generally ineffective in manufacturing because of demand and operational variability, so fuzzy linear programming (FLP) has frequently been used to describe imprecision through membership functions and satisfaction levels. Despite its versatility, exact approaches for solving multi-objective FLP-based APP models become computationally expensive as problem size and complexity increase. Metaheuristic algorithms are therefore widely used, although many still suffer from premature convergence, parameter sensitivity, and limited scalability. This study investigates the Narwhal Optimization Algorithm (NO) as a population-based metaheuristic framework and proposes two hybrid variants to improve convergence reliability and constraint-handling capability: NO combined with the Super Modified Simplex Method (SMS) for local refinement, and NO integrated with a Runge–Kutta-based optimizer (RK) for search stability. These hybrid techniques are tested for solution quality, convergence behavior, and robustness on eight response-surface benchmark functions and four constrained optimization problems. The best-performing variants are then applied to a real-parameter fuzzy APP problem with three products and a six-month planning horizon. The Elevator Kinematic Optimization (EKO) algorithm, chosen for its compatibility with the same mathematical framework and consistent parameter values, serves as the benchmark for a fair and controlled comparison. The fuzzy programming formulation uses a max–min satisfaction framework with linear membership functions derived from positive and negative ideal solutions. Computational experiments assess solution quality, stability, and efficiency under nominal and ±10% demand disturbances. The hybrid NO variants resist premature convergence more effectively, produce more stable solutions, and achieve higher satisfaction levels than the original NO and the benchmark approaches. For small and medium-sized organizations operating in dynamic environments, hybrid narwhal-based optimization appears to be a reliable and scalable decision-support solution for APP problems under uncertainty. Full article
(This article belongs to the Special Issue Optimizing Logistics Activities: Models and Applications)

22 pages, 6746 KB  
Article
Bidirectional T1–T2 Brain MRI Synthesis Using a Fusion U-Net Transformer for Real-World Clinical Data
by Zeynep Cantemir, Hacer Karacan, Emetullah Cindil and Burak Kalafat
Appl. Sci. 2026, 16(8), 3674; https://doi.org/10.3390/app16083674 - 9 Apr 2026
Abstract
Obtaining multiple MRI contrasts for each patient prolongs scan acquisition time, increases healthcare costs, and may not always be feasible due to patient-specific constraints. Deep learning-based MRI contrast synthesis offers a potential solution, yet most existing approaches are evaluated on preprocessed public benchmarks that do not reflect real-world clinical variability. In this study, we propose a fusion U-Net transformer framework for bidirectional T1-weighted ↔ T2-weighted brain MRI synthesis trained and evaluated exclusively on retrospectively acquired clinical data. The proposed architecture integrates multiscale convolutional feature extraction with axial attention mechanisms and a transformer bottleneck for efficient global context modeling. A fusion refinement block is incorporated to mitigate skip-connection artifacts. An adversarial training strategy with the least-squares GAN objective and a hybrid loss combining L1 reconstruction and structural similarity (SSIM) is employed to promote both pixel-level accuracy and perceptual fidelity. The model is evaluated using SSIM and PSNR metrics alongside qualitative expert assessment conducted by two board-certified radiologists. For both synthesis directions, the framework achieves competitive quantitative performance against baseline models under the challenging conditions of clinical data. Expert evaluation confirms high anatomical fidelity and clinically acceptable image quality across both synthesis directions. These results indicate that the proposed framework represents a promising approach for multi-contrast MRI synthesis in clinically heterogeneous data environments. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

34 pages, 24391 KB  
Article
Multi-Objective Sizing of a Run-of-River Hydro–PV–Battery–Diesel Microgrid Under Seasonal River-Flow Variability Using MOPSO
by Yining Chen, Rovick P. Tarife, Jared Jan A. Abayan, Sophia Mae M. Gascon and Yosuke Nakanishi
Electricity 2026, 7(2), 36; https://doi.org/10.3390/electricity7020036 - 9 Apr 2026
Abstract
Hybrid hydro–solar microgrids offer a practical electrification option for remote and weak-grid communities by combining run-of-river hydropower with photovoltaic generation. However, their performance depends strongly on coordinated decisions across three layers: (i) system sizing and architecture, (ii) turbine selection and rating under variable river flow, and (iii) operational energy dispatch under time-varying solar resource and demand. This paper develops an optimization-driven planning framework for a run-of-river hydro–PV microgrid that co-optimizes component capacities and turbine-related design choices while enforcing time-series operational feasibility. Physics-based component models translate river discharge into hydroelectric output via turbine efficiency characteristics and operating limits, and compute PV generation and storage trajectories under dispatch and state-of-charge constraints. The planning problem is formulated as a multi-objective optimization that quantifies trade-offs among life-cycle cost, supply reliability (e.g., unmet-load metrics), and sustainability indicators (e.g., diesel-free operation or emissions when backup generation is present). A Pareto-optimal set of designs is obtained using a population-based multi-objective algorithm, and representative knee-point (balanced) solutions are selected to illustrate how turbine choice and dispatch strategy interact with seasonal hydrology and solar variability. The proposed approach supports transparent and robust design decisions for hybrid hydro–solar microgrids. Full article
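The core physics translating river discharge into hydroelectric output reduces to P = η·ρ·g·Q·H. A minimal sketch follows; the efficiency value and operating-range clipping are illustrative placeholders for the turbine efficiency curves and limits used in the paper's full model:

```python
RHO_WATER = 1000.0  # water density, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def hydro_power_kw(q_m3s: float, head_m: float, eta: float,
                   q_min: float = 0.0, q_max: float = float("inf")) -> float:
    """Run-of-river output P = eta * rho * g * Q * H (returned in kW),
    with the usable discharge clipped to the turbine's operating range.
    In a full model, eta would itself be a function of Q (the turbine's
    efficiency curve) rather than a constant."""
    q = min(max(q_m3s, q_min), q_max)
    return eta * RHO_WATER * G * q * head_m / 1000.0

# e.g. 2 m^3/s through 25 m of head at 85% turbine efficiency -> ~417 kW
p = hydro_power_kw(2.0, 25.0, 0.85)
```

Evaluating this over a seasonal discharge time series is what gives the MOPSO sizing loop its hydro generation trajectory.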
33 pages, 3526 KB  
Review
A Comprehensive Survey of AI/ML-Driven Optimization, Predictive Control, and Innovative Solar Technologies
by Ali Alhazmi
Energies 2026, 19(8), 1847; https://doi.org/10.3390/en19081847 - 9 Apr 2026
Abstract
By 2024, global photovoltaic (PV) capacity exceeded 2000 GW, accompanied by a decline in levelized costs of approximately 90% since 2010. Artificial intelligence (AI) and machine learning (ML) are enabling novel approaches to solar energy system design and implementation. This survey offers a detailed evaluation of AI/ML methodologies across the solar energy value chain, with a focus on solar irradiance forecasting, maximum power point tracking (MPPT), fault identification, and rapid discovery of system materials. The distinction between AI as the broader paradigm and ML as its data-driven subset is drawn and maintained throughout. Key results include forecasting improvements of 10–40% over traditional methods from deep learning architectures (LSTM, CNN, Transformer), while hybrid numerical weather prediction and deep learning models achieve mean absolute error reductions of 15–25%. Reinforcement learning-based MPPT achieves tracking efficiencies in excess of 99% under partial shading, CNN-based fault classification reaches accuracies above 95%, and ML-based materials screening accelerates perovskite optimization by a factor of 5–10. Promising paradigms such as explainable AI, federated learning, digital twins, and physics-informed neural networks are evaluated alongside technical, economic, and regulatory constraints. This survey provides a consolidated reference and practical roadmap for the advancement of AI-driven solar energy technologies. Full article

44 pages, 8017 KB  
Article
Reinforcement Learning-Based Landing Impact Mitigation and Stabilization Control for Lunar Quadruped Robots Under Complex Operating Conditions
by Jianfei Li, Yeqing Yuan, Zhiyong Liu and Shengxin Sun
Machines 2026, 14(4), 417; https://doi.org/10.3390/machines14040417 - 9 Apr 2026
Abstract
Lunar quadruped robots face landing challenges including weak gravity, large mass variations, uncertain sloped terrain, and strict payload acceleration limits, requiring effective impact mitigation and rapid post-landing stabilization. This paper presents a novel end-to-end reinforcement learning-based landing controller with three key novelties: (i) a phase-structured yet implicitly encoded formulation that distinguishes contact preparation, energy dissipation, and stabilization without explicit phase switching; (ii) a terrain-agnostic state and control representation using equivalent support direction construction and contact-gated modulation to decouple normal–tangential dynamics; and (iii) an extremum-oriented learning strategy that directly captures peak impact suppression and buffering sufficiency, addressing limitations of cumulative rewards in hybrid, peak-dominated tasks. A hybrid control model for lunar quadruped landing dynamics is established, incorporating variable mass, low impact, and full stroke as key constraints during training. Simulation and full-scale experimental prototypes are built to validate the controller. Simulation results demonstrate robust landing buffering and stability control under varying mass, landing velocity, and slope conditions, with favorable robustness against parameter variations. Experimental verification is conducted under diverse conditions including different masses (200 kg, 250 kg), vertical/horizontal landing velocities (0.8 m/s, 0.2 m/s), and slopes (0°, 8°). The deviation between simulation and experimental results does not exceed 30%, confirming the effectiveness and transferability of the proposed approach. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
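The extremum-oriented idea above, scoring an episode by its peak impact rather than only by a sum of per-step costs, can be illustrated with a minimal sketch. All names, weights, and the force scale below are hypothetical illustrations, not the paper's actual reward function:

```python
import numpy as np

def episode_reward(contact_forces, stabilization_errors,
                   peak_weight=0.5, max_force=4000.0):
    """Toy extremum-oriented reward (hypothetical): combine a cumulative
    stabilization cost with a direct penalty on the episode's PEAK contact
    force, so a single hard impact cannot hide inside an otherwise good sum."""
    cumulative = -float(np.mean(np.square(stabilization_errors)))
    peak = float(np.max(contact_forces))          # extremum statistic
    peak_penalty = -peak_weight * max(0.0, peak / max_force)
    return cumulative + peak_penalty
```

Under a purely cumulative reward, two episodes with the same average force are indistinguishable; the peak term separates them, which is the limitation of cumulative rewards the abstract points to.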
24 pages, 1584 KB  
Review
From Dialogue Systems to Autonomous Agents: A Modeling Framework for Ethical Generative AI in Healthcare
by James C. L. Chow and Kay Li
Information 2026, 17(4), 361; https://doi.org/10.3390/info17040361 - 9 Apr 2026
Viewed by 240
Abstract
The advancement of generative artificial intelligence (GAI) in healthcare is driving a transition from dialogue-based medical chatbots to workflow-embedded clinical AI agents. These agentic systems incorporate persistent state management, coordinated tool invocation, and bounded autonomy, enabling multi-step reasoning within institutional processes. As a result, traditional response-level evaluation frameworks are insufficient for understanding system behavior. This review provides a conceptual synthesis of the evolution from conversational systems to agentic architectures and proposes a system-level modeling framework for ethical clinical AI agents. We identify core architectural dimensions, including autonomy gradients, state persistence, tool orchestration, workflow coupling, and human–AI co-agency, and examine how these features reshape bias propagation pathways, error cascade dynamics, trust calibration, and accountability structures. Emphasizing that ethical risks emerge from longitudinal system interactions rather than isolated outputs, we argue for embedding fairness constraints, transparency mechanisms, and lifecycle governance directly within AI design. By outlining trajectory-level evaluation strategies, equity-aware development approaches, collaborative oversight models, and adaptive regulatory frameworks, this paper establishes a foundation for the responsible and trustworthy integration of agentic AI in healthcare. Full article
(This article belongs to the Special Issue Modeling in the Era of Generative AI)
26 pages, 565 KB  
Article
Multi-Strategy Improvement and Comparative Research on Data-Driven Social Network Construction in Edge-Deficient Scenarios for Social Bot Account Detection
by Junjie Wang and Minghu Tang
Information 2026, 17(4), 360; https://doi.org/10.3390/info17040360 - 9 Apr 2026
Viewed by 126
Abstract
Accurate social bot detection relies on simulated data to alleviate the scarcity of labeled real-world datasets. Synthetic graph data serves as the core training resource for detection models within simulated data; nevertheless, edge deficiency in real social networks (induced by privacy constraints and data collection limitations) gives rise to “pseudo-isolated nodes” and distorts the quality of synthetic graph data. Furthermore, mainstream data-driven synthetic graph generation methods lack systematic and credible comparative analyses. To tackle these problems, this study optimizes two representative synthetic graph generation approaches (the Chung-Lu model and the Random Classifier-based Multi-Hop (RCMH) sampling + diffusion model) and puts forward an edge completion strategy grounded in sociological theories. Multiple groups of comparative experiments are conducted to assess the performance of the improved methods and the edge completion strategy. Experimental results demonstrate that the “interest + social association” edge completion strategy achieves an F1-score (F1) of 0.7051, and the improved sampling + diffusion model integrated with edge completion reaches an F1-score of 0.7071, modestly outperforming both the traditional and unmodified methods. This work preliminarily enhances the reliability of synthetic graph generation methods and provides relatively high-quality synthetic social graph data for social bot detection. It should be noted that the proposed methods are validated solely on Twitter-derived datasets, and their effectiveness remains to be verified in cross-platform adaptation and dynamic social network scenarios. Full article
(This article belongs to the Section Information Security and Privacy)
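The Chung-Lu model named above is a standard random-graph construction in which each pair of nodes is connected independently with probability proportional to the product of their target degree weights. A minimal sketch of the basic (unoptimized) model, using hypothetical function and parameter names:

```python
import numpy as np

def chung_lu_graph(weights, rng=None):
    """Sample an undirected Chung-Lu random graph: edge (i, j) is
    included independently with probability min(1, w_i * w_j / sum(w)),
    so each node's expected degree approximates its target weight."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, dtype=float)
    total = w.sum()
    n = len(w)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = min(1.0, w[i] * w[j] / total)
            if rng.random() < p:
                edges.append((i, j))
    return edges
```

Because edges are sampled independently, nodes with small weights can easily end up with degree zero, which is one way the “pseudo-isolated node” problem the abstract describes arises in synthetic graphs.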
28 pages, 1349 KB  
Review
Adversarial Robustness in Quantum Machine Learning: A Scoping Review
by Yanche Ari Kustiawan and Khairil Imran Ghauth
Computers 2026, 15(4), 233; https://doi.org/10.3390/computers15040233 - 9 Apr 2026
Viewed by 125
Abstract
Quantum machine learning (QML) is emerging as a promising paradigm at the intersection of quantum computing and artificial intelligence, yet its security under adversarial conditions remains insufficiently understood. This scoping review aims to systematically map empirical research on adversarial robustness in QML and to identify dominant threat models, defense strategies, evaluation approaches, practical constraints, and future research directions. Following PRISMA-ScR guidelines, four major databases were searched, resulting in 53 eligible empirical studies published between 2020 and 2026. The findings show that most research concentrates on input-level evasion attacks, particularly adversarial examples, and primarily evaluates robustness in classification-oriented models such as variational quantum circuits and quantum neural networks. Defense strategies are largely adapted from classical adversarial training and noise-based mitigation, with limited deployment on real quantum hardware. Robustness assessment is predominantly empirical, relying on accuracy degradation and attack success rate, while formal certification methods remain less common. The literature also highlights substantial constraints related to hardware limitations, noise on noisy intermediate-scale quantum (NISQ) devices, computational cost, and dataset scale. Overall, the evidence indicates that adversarial robustness research in QML is expanding but remains methodologically concentrated, underscoring the need for standardized benchmarking, scalable defenses, and hardware-validated robustness evaluation frameworks. Full article
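The input-level evasion attacks the review centers on are typified by the classical Fast Gradient Sign Method (FGSM), which QML studies adapt to quantum classifiers. A minimal classical sketch on a logistic model (all variable names here are illustrative, not from any surveyed study):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps=0.1):
    """Fast Gradient Sign Method on a logistic classifier: move the
    input a small step eps in the sign of the loss gradient w.r.t. x,
    the canonical input-level evasion attack."""
    z = float(x @ w + b)
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid prediction
    grad_x = (p - y) * w              # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad_x)
```

The same recipe carries over to variational quantum circuits by differentiating the measured expectation value with respect to the classical input encoding, which is why accuracy degradation under such perturbations is the dominant empirical robustness metric the review reports.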