Search Results (166)

Search Parameters:
Keywords = particle swarm optimisation

82 pages, 6468 KB  
Article
Correction Functions and Refinement Algorithms for Enhancing the Performance of Machine Learning Models
by Attila Kovács, Judit Kovácsné Molnár and Károly Jármai
Automation 2026, 7(2), 45; https://doi.org/10.3390/automation7020045 - 6 Mar 2026
Viewed by 795
Abstract
The aim of this study is to investigate and demonstrate the role of correction functions and optimisation-based refinement algorithms in enhancing the performance of machine learning models, particularly in predictive anomaly detection tasks applied in industrial environments. The performance of machine learning models is highly dependent on the quality of data preprocessing, model architecture, and post-processing methodology. In many practical applications—particularly in time-series forecasting and anomaly detection—the conventional training pipeline alone is insufficient, because model uncertainty, structural bias and the handling of rare events require specialised post hoc calibration and refinement mechanisms. This study provides a systematic overview of the role of correction functions (e.g., Principal Component Analysis (PCA), Squared Prediction Error (SPE)/Q-statistics, Hotelling’s T2, Bayesian calibration) and adaptive improvement algorithms (e.g., Genetic Algorithms (GA), Particle Swarm Optimisation (PSO), Simulated Annealing (SA), Gaussian Mixture Model (GMM) and ensemble-based techniques) in enhancing the performance of machine learning pipelines. The models were trained on a real industrial dataset compiled from power network analytics and harmonic-injection-based loading conditions. Model validation and equipment-level testing were performed using a large-scale harmonic measurement dataset collected over a five-year period. The reliability of the approach was confirmed by comparing predicted state transitions with actual fault occurrences, demonstrating its practical applicability and suitability for integration into predictive maintenance frameworks. The analysis demonstrates that correction functions introduce deterministic transformations in the data or error space, whereas improvement algorithms apply adaptive optimisation to fine-tune model parameters or decision boundaries. 
The combined use of these approaches significantly reduces overfitting, improves predictive accuracy and lowers false alarm rates. This work introduces the concept of an Organically Adaptive Predictive (OAP) ML model. The proposed model presents organic adaptivity, continuously adjusting its predictive behaviour in response to dynamic variations in network loading and harmonic spectrum composition. The introduced terminology characterises the organically emergent nature of the adaptive learning mechanism. Full article
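The abstract above lists PCA-based SPE/Q-statistics and Hotelling's T2 among its correction functions. A minimal, generic sketch of how these two fault-detection statistics are commonly computed from a PCA decomposition (an illustration only, not the authors' implementation; the function name and defaults are mine):

```python
import numpy as np

def pca_fault_stats(X, n_components=2):
    """Hotelling's T^2 and SPE (Q-statistic) per sample via PCA.

    X: (n_samples, n_features) matrix, assumed already preprocessed.
    """
    Xc = X - X.mean(axis=0)                        # centre the data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T                        # loading matrix
    T = Xc @ P                                     # scores in the PCA subspace
    var = (s[:n_components] ** 2) / (len(X) - 1)   # variance of each score
    t2 = np.sum(T ** 2 / var, axis=1)              # Hotelling's T^2
    resid = Xc - T @ P.T                           # reconstruction residual
    spe = np.sum(resid ** 2, axis=1)               # SPE / Q-statistic
    return t2, spe
```

In monitoring practice, samples whose T2 or SPE exceed a control limit (e.g. a chi-square or kernel-density threshold fitted on healthy data) are flagged as anomalous.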

21 pages, 5351 KB  
Article
PSO-Based Ensemble Learning Enhanced with Explainable Artificial Intelligence for Breast Glandular Dose Estimation in Mammography
by Sevgi Ünal and Remzi Gürfidan
Appl. Sci. 2026, 16(5), 2514; https://doi.org/10.3390/app16052514 - 5 Mar 2026
Viewed by 384
Abstract
Objectives: This study aims to predict patient-specific Average Glandular Dose (AGD) in mammography using machine learning-based models to support personalised radiation dose optimisation and reduce unnecessary exposure during breast cancer screening. Methods: A retrospective dataset of 671 female patients who underwent full-field digital mammography between 2020 and 2024 was analysed. Right craniocaudal (CC) images were used to construct a structured dataset including mAs, kVp, compressed breast thickness, air kerma (k_air), half-value layer (HVL), and breast pattern. Five regression-based machine learning models (CatBoost, Gradient Boosting, Random Forest, Extra Trees, and AdaBoost) and their Particle Swarm Optimisation (PSO)-enhanced versions were evaluated. Model performance was assessed using MSE, RMSE, MAE, MAPE, and R2. SHAP analysis was applied to interpret model predictions and determine variable importance. Results: PSO integration significantly reduced prediction errors, particularly in boosting-based models. The CatBoost + PSO model achieved the best performance (RMSE = 0.0100, MAPE ≈ 1.74%, R2 = 0.9846), followed by the Gradient Boosting + PSO model (R2 = 0.9787). PSO reduced RMSE and MAPE by approximately 55% and 52%, respectively. SHAP analysis identified k_air, breast thickness, and breast pattern as the most influential factors affecting AGD. Conclusions: Machine learning models enhanced with PSO, especially CatBoost + PSO, provide accurate and reliable patient-specific AGD predictions. The proposed approach enables rapid and clinically applicable dose estimation and highlights breast pattern as a critical parameter influencing glandular dose, supporting personalised radiation dose optimisation in mammography. Full article
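Several results on this page tune model hyperparameters with PSO. A minimal global-best PSO over box-bounded variables shows the core loop (a generic sketch under standard inertia/cognitive/social parameters; the function name and constants are mine, not taken from any listed paper):

```python
import numpy as np

def pso_minimise(f, bounds, n_particles=20, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over box bounds with a basic global-best PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()                 # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                     # keep inside bounds
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

For hyperparameter tuning, f would wrap a cross-validated training run and return the validation error for the candidate parameter vector.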

23 pages, 4100 KB  
Article
A Comparative Study of Hybridized Machine Learning Models for Short-Term Load Prediction in Medium-Voltage Electricity Networks
by Augustine B. Makokha, Simiyu Sitati and Abraham Arusei
Electricity 2026, 7(1), 21; https://doi.org/10.3390/electricity7010021 - 2 Mar 2026
Viewed by 397
Abstract
Increasing variability in electricity load patterns, driven by end-use behaviour, grid-related technological changes, and socio-economic factors, calls for more accurate and efficient short-term load prediction (STLP) models. This study evaluates the predictive performance of four hybrid models for short-term Amp-load prediction: Adaptive Neuro-Fuzzy Inference System (ANFIS) combined with Genetic Algorithms (GA) and Particle Swarm Optimisation (PSO), as well as convolutional neural networks (CNN) integrated with long short-term memory (LSTM) and extreme gradient boosting (XGB). The models were developed using hourly Amp-load data collected from a power utility substation in Kenya, together with corresponding meteorological data (temperature, wind speed, and humidity) covering a period from January 2023 to June 2024. Results show that the ANFIS-PSO and ANFIS-GA models outperform the CNN-based models, achieving MAPE values of 4.519 and 4.363, RMSE values of 0.3901 and 0.4024, and R2 scores of 0.8513 and 0.8481, respectively, due to the adaptive nature of ANFIS, which enables effective modelling of the irregular, nonlinear, and complex temporal behaviour of the Amp load. Enhanced prediction accuracy was observed across all models when variational mode decomposition (VMD) was applied to pre-process the input data. This result was corroborated through further analysis of the Amp-load signals using Taylor plots. Among all of the configurations tested, the CNN-LSTM-VMD model exhibited the highest overall prediction accuracy, with MAPE of 2.625, RMSE of 0.1898, and R2 of 0.9702, marginally outperforming the ANFIS-PSO-VMD model, thus making it more suitable for short-term load prediction applications. Full article

18 pages, 2969 KB  
Article
Comminution Fault Detection and Diagnosis via Autoencoders and the Sobol Method
by Freddy A. Lucay
Minerals 2026, 16(3), 244; https://doi.org/10.3390/min16030244 - 27 Feb 2026
Viewed by 331
Abstract
Fault detection and diagnosis (FDD) are critical for maintaining efficiency and operational stability of comminution systems. However, conventional methods struggle to capture their complex dynamic behaviour, while data-driven approaches are constrained by limited labelled fault data and the need for interpretable diagnostic models. Progress is further hindered by the scarcity of publicly available industrial datasets. This study presents an explainable FDD framework that integrates unsupervised autoencoder (AE)-based anomaly detection with variance-based global sensitivity analysis (GSA) for quantitative fault diagnosis. A simulated comminution control system was developed to enable controlled validation under realistic operating variability. Multiple AE architectures were trained with hyperparameters optimised using chaotic particle swarm optimisation and evaluated using statistical and reconstruction-based metrics combined with multi-criteria decision analysis. The sparse AE achieved the best performance, with an MSE of 5.6 × 10−5, F1-score of 0.9930, and accuracy of 0.986 in detecting faults in P80 and P20. To diagnose detected faults, Sobol’s variance-based GSA was applied to quantify both the main and interaction effects of operational variables on particle size distribution. The results identify circuit feed rate, ball mill critical speed, and the pulp solids fraction supplied to the hydrocyclones as dominant drivers of faults associated with product coarsening, whereas circuit feed rate and ball mill critical speed primarily govern ultrafine particle generation. By integrating deep learning with explainable sensitivity analysis, this study advances transparent and quantitative diagnosis of complex mineral processing systems. Full article
(This article belongs to the Section Mineral Processing and Extractive Metallurgy)
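The diagnosis step above relies on Sobol's variance-based sensitivity indices. A compact Monte Carlo estimator of first-order indices in the Saltelli style illustrates the idea (a sketch on the unit hypercube with my own function name; the paper's actual sampling scheme may differ):

```python
import numpy as np

def sobol_first_order(f, dim, n=10000, seed=0):
    """Estimate first-order Sobol indices of f on [0, 1]^dim.

    f: vectorised model taking an (n, dim) array, returning (n,) outputs.
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n, dim))                 # base sample
    B = rng.random((n, dim))                 # independent resample
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))   # total output variance
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # swap in column i from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S
```

An input with a large first-order index explains a large share of the output variance on its own, which is how dominant fault drivers such as feed rate or mill speed can be ranked.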

19 pages, 7242 KB  
Article
Artificial Neural Network-Based Optimisation of Geometric Characteristics in Laser Metal Deposition of TiC/Ti6Al4V
by Thabo Tlale, Peter Mashinini and Bathusile Masina
Metals 2026, 16(3), 242; https://doi.org/10.3390/met16030242 - 24 Feb 2026
Viewed by 404
Abstract
Laser metal deposition operates on the principle of layer-by-layer material addition, wherein each layer is formed by overlapping individual single tracks. Consequently, clads formed serve as the fundamental building blocks for this technology. Their quality directly affects the overall build quality, particularly the geometric characteristics, which are also critical to process productivity. In the present work, geometric characteristics of TiC/Ti6Al4V single tracks fabricated via laser metal deposition are optimised. An artificial neural network model was developed to predict the clad width, height, and dilution using processing parameters, laser power, scan speed, and powder feed rate, as model inputs. The Particle Swarm Optimisation algorithm was employed for hyperparameter selection. The hyperparameter-optimised model achieved a mean squared error of 0.00183 and an R2 score of 0.979 during training, and a mean squared error of 0.00709 and an R2 score of 0.887 during testing. Although the small discrepancy between training and testing metrics suggests slight overfitting, likely due to the size of the dataset, the model achieved a mean absolute percentage error of less than 10% during testing. Subsequently, process plots generated by the model predictions were used to identify suitable parameters, and a processing map was developed to highlight the window that achieves suitable dilution (14–24%), defect-free sound bonding, and thick and dense clads. Full article

26 pages, 1919 KB  
Article
Optimising Harbour Construction Projects for Environmental Sustainability: A Hybrid Artificial Intelligence Approach
by Mohamed T. Elnabwy, Mohamed ElAgroudy, Emad Elbeltagi, Mahmoud M. El Banna, Ehab A. Mlybari and Hossam Wefki
Sustainability 2026, 18(5), 2162; https://doi.org/10.3390/su18052162 - 24 Feb 2026
Viewed by 374
Abstract
Harbour sedimentation represents a major challenge to the environmental sustainability and operational efficiency of coastal infrastructure, as frequent dredging activities increase maintenance costs, ecological disturbance, and carbon emissions. Conventional physical and numerical sediment transport models, while widely applied, are computationally intensive and often unsuitable for early-stage, sustainability-oriented design optimisation. To address these limitations, this study proposes a hybrid artificial intelligence-based optimisation framework integrating Artificial Neural Networks (ANNs), Genetic Algorithms (GAs), and Particle Swarm Optimisation (PSO) for sustainable breakwater and harbour layout design. Hydrodynamic simulations using the Coastal Modelling System (CMS) were conducted to generate a comprehensive dataset describing sediment transport behaviour under varying geometric and structural configurations. An ANN surrogate model was trained to capture nonlinear relationships between breakwater parameters and accumulated sedimentation volume, while GA-based global optimisation and PSO-based validation and local refinement were employed to identify optimal design solutions. Comparative assessment demonstrated consistent convergence of ANN–GA and ANN–PSO solutions within the same design region, with a maximum deviation of 8.46% between design variables and a sedimentation difference of 2.4%. The hybrid ANN–GA–PSO framework achieved the lowest predicted sedimentation volume, representing an improvement of approximately 2.3% relative to the ANN–GA baseline. The proposed framework supports Integrated Coastal Structures Management (ICSM) by enabling proactive, design-stage reduction in long-term sediment accumulation and dredging requirements, offering a scalable pathway toward sustainable and digital-twin-enabled harbour planning. Full article

16 pages, 1586 KB  
Article
Gamma-Ray Burst Polarimetry with the COMCUBE-S CubeSat Swarm—Design and Performance Simulations
by Nathan Franel, Vincent Tatischeff, David Murphy, Alexey Ulyanov, Caimin McKenna, Lorraine Hanlon, Prerna Baranwal, Christophe Beigbeder, Arnaud Claret, Ion Cojocari, Nicolas de Séréville, Nicolas Dosme, Eric Doumayrou, Mariya Georgieva, Clarisse Hamadache, Sally Hankache, Jimmy Jeglot, Mózsi Kiss, Beng-Yun Ky, Vincent Lafage, Philippe Laurent, Christine Le Galliard, Joseph Mangan, Aline Meuris, Mark Pearce, Jean Peyré, Arjun Poitaya, Diana Renaud, Arnaud Saussac, Varun Varun, Matias Vecchio and Colin Wade
Particles 2026, 9(1), 13; https://doi.org/10.3390/particles9010013 - 6 Feb 2026
Cited by 1 | Viewed by 661
Abstract
COMCUBE-S (Compton Telescope CubeSat Swarm) is a proposed mission aimed at understanding the radiation mechanisms of ultra-relativistic jets from Gamma-Ray Bursts (GRBs). It consists of a swarm of 16U CubeSats carrying a state-of-the-art Compton polarimeter and a bismuth germanium oxide (BGO) spectrometer to perform timing, spectroscopic and polarimetric measurements of the prompt emission from GRBs. The mission is currently in a feasibility study phase (Phase A) with the European Space Agency to prepare an in-orbit demonstration. Here, we present the simulation work used to optimise the design and operational concept of the microsatellite constellation, as well as estimate the mission performance in terms of GRB detection rate and polarimetry. We used the MEGAlib software to simulate the response function of the gamma-ray instruments, together with a detailed model for the background particle and radiation fluxes in low-Earth orbit. We also developed a synthetic GRB population model to best estimate the detection rate. These simulations show that COMCUBE-S will detect about 2 GRBs per day, which is significantly higher than that of all past and current GRB missions. Furthermore, simulated performance for linear polarisation measurements shows that COMCUBE-S will be able to uniquely distinguish between competing models of the GRB prompt emission, thereby shedding new light on some of the most fundamental aspects of GRB physics. Full article

14 pages, 1968 KB  
Article
Multispectral Camouflage Photonic Structure for Visible–IR–LiDAR Bands with Radiative Cooling
by Lehong Huang, Yuting Gao, Bo Peng and Caiwen Ma
Photonics 2026, 13(1), 31; https://doi.org/10.3390/photonics13010031 - 30 Dec 2025
Viewed by 752
Abstract
The rapid development of detection technologies has increased the demand for multispectral camouflage materials capable of broadband concealment and effective thermal management. To address the conflicting optical requirements between infrared camouflage and LiDAR camouflage, we propose a composite design combining a germanium–ytterbium fluoride (Ge/YbF3) selective emitter with an amorphous silicon (a-Si) two-dimensional periodic microstructure. The multilayer film, optimized using the transfer-matrix method and a particle swarm optimisation algorithm, achieves low emissivity in the 3–5 μm and 8–14 μm infrared atmospheric windows and high emissivity within 5–8 μm for radiative cooling, while introducing a narrowband absorption peak at 1.55 μm. Additionally, the a-Si microstructure provides strong narrowband absorption at 10.6 μm via a grating-resonance mechanism. FDTD simulations confirm low emissivity in the infrared windows, high absorptance at LiDAR wavelengths, and good angular and polarization robustness. This work demonstrates a multifunctional photonic structure capable of integrating infrared camouflage, laser camouflage, and thermal-radiation control. Full article
(This article belongs to the Section Optoelectronics and Optical Materials)

28 pages, 7867 KB  
Article
Efficiency and Running Time Robustness in Real Metro Automatic Train Operation Systems: Insights from a Comprehensive Comparative Study
by María Domínguez, Adrián Fernández-Rodríguez, Asunción P. Cucala and Antonio Fernández-Cardador
Sustainability 2025, 17(24), 11371; https://doi.org/10.3390/su172411371 - 18 Dec 2025
Viewed by 590
Abstract
Automatic Train Operation (ATO) systems are widely deployed in metro networks to improve punctuality, service regularity, and ultimately the sustainability of rail operation. Although eco-driving optimisation has been extensively studied, no previous work has provided a systematic, side-by-side comparison of the two ATO control philosophies most commonly implemented in metro systems worldwide: (i) Type 1, based on speed holding followed by a single terminal coasting at a kilometre point, and (ii) Type 2, which uses speed thresholds to apply either continuous speed holding or iterative coasting–remotoring cycles. These strategies differ fundamentally in their control logic and may lead to distinct operational and energetic behaviours. This paper presents a comprehensive comparison of these two ATO philosophies using a high-fidelity train movement simulator and Pareto-front optimisation via a multi-objective particle swarm algorithm. 40 interstations of a real metro line were evaluated under realistic comfort and operational constraints, and robustness was assessed through sensitivity to three different passenger-load variations (empty train, nominal load and full load). Results show that, once nominal profiles are implemented, Type 1 has up to 5% variability in running times, and Type 2 has up to 20% variability in energy consumption. In conclusion, a new ATO deployment combining both strategies could better balance energy efficiency and timetable robustness in metro operations. Full article
(This article belongs to the Section Sustainable Transportation)

22 pages, 13704 KB  
Article
Application of Metaheuristic Optimisation Techniques for the Optimisation of a Solid-State Circuit Breaker
by Adam P. Lewis, Gerardo Calderon-Lopez, Ingo Lüdtke, Jason Vincent-Newson, Sahil Upadhaya, Jas Singh and Matt Grubb
Appl. Sci. 2025, 15(24), 12983; https://doi.org/10.3390/app152412983 - 9 Dec 2025
Viewed by 610
Abstract
Designing solid-state circuit breakers (SSCBs) involves a large discrete design space spanning MOSFET type, bypass configuration, and heatsink selection. This work formulates SSCB design as a multi-objective combinatorial optimisation problem that minimises conduction loss and material cost subject to electrothermal feasibility constraints. A validated electrothermal model was developed using experimentally measured RDSon(T) data and thermal-impedance characterisation, allowing rapid and accurate evaluation of candidate configurations. Because the full design space exceeds one million combinations, five representative metaheuristic algorithms: Genetic Algorithm (GA), Particle Swarm Optimisation (PSO), Grey Wolf Optimisation (GWO), Ant Colony Optimisation (ACO), and Gorilla Troops Optimisation (GTO), were benchmarked under an identical computational budget of 2000 evaluations. Sobol sequence initialisation was used to enhance search diversity. Each algorithm was executed 100 times, and its performance was quantitatively assessed using hypervolume, generational distance (GD), inverted generational distance (IGD), Hausdorff distance, overlapping-point score (OP), overall spread (OS), and distribution metrics (DM). GA consistently produced the closest approximation to the true Pareto front obtained from brute-force enumeration, achieving superior accuracy, coverage, and robustness. GTO offered strong secondary performance, while PSO, GWO, and ACO delivered partial front reconstruction. The results demonstrate that metaheuristic optimisation, particularly GA, can reduce SSCB design time significantly while retaining high fidelity, offering a scalable and efficient framework for future power-electronics design tasks. Full article
(This article belongs to the Special Issue New Challenges in Low-Power Electronics Design)
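The benchmarking above scores each metaheuristic with Pareto-front quality indicators such as hypervolume. For two minimisation objectives, hypervolume reduces to summing the area strips dominated by the front relative to a reference point; a minimal sketch (my own helper, not the paper's evaluation code):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimisation front w.r.t. a reference point.

    front: iterable of (f1, f2) points; ref: (r1, r2) dominated by all points.
    """
    hv = 0.0
    prev_y = ref[1]
    for x, y in sorted(front):        # sweep in increasing f1
        if y < prev_y:                # dominated points add no new area
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv
```

A larger hypervolume means the front is both closer to the true optimum and more widely spread, which is why it is a common single-number summary in such comparisons.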

21 pages, 1766 KB  
Article
Floating Offshore Wind Farm Inter-Array Cabling Topology Optimisation with Metaheuristic Particle Swarm Optimisation
by Sergi Vilajuana Llorente, José Ignacio Rapha, Magnus Daniel Kallinger and José Luis Domínguez-García
Clean Technol. 2025, 7(4), 110; https://doi.org/10.3390/cleantechnol7040110 - 4 Dec 2025
Viewed by 960
Abstract
Floating offshore wind is now receiving much attention as an expansion to bottom-fixed, especially in deep waters with large wind resources. In this regard, improving the performance and efficiency of floating offshore wind farms (FOWFs) is currently a highly addressed topic. The inter-array (IA) cable connection is a key aspect to be optimised. Due to floating offshore wind (FOW) particularities such as dynamic cable designs, higher power capacities, and challenging installation, IA cabling is expected to be a primary cost driver for commercial-scale FOWFs. Therefore, IA cabling optimisation can lead to large cost reductions. In this work, an optimisation with an adaptive particle swarm optimisation (PSO) algorithm for such wind farms is proposed, considering the floating substructures’ horizontal translations and its impact on the dynamic cable length. The method provides an optimised IA connection, reducing acquisition costs and power losses by using a clustered minimum spanning tree (MST) as an initial solution and improving it with the PSO algorithm. The PSO achieves a reduction in the levelised cost of energy (LCOE) between 0.018% (0.022 EUR/MWh) and 0.10% (0.12 EUR/MWh) and a reduction in cable acquisition costs between 0.18% (0.3 M EUR) and 1.34% (3.8 M EUR) compared to the initial solution, showing great potential for future commercial-sized FOWFs. Full article
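The clustered minimum spanning tree used above as the PSO's initial cabling solution can be illustrated with a plain Prim's-algorithm sketch over turbine coordinates (Euclidean distance only; real inter-array costing also involves cable capacities, dynamic-section lengths, and crossing constraints, and the function name here is mine):

```python
import numpy as np

def prim_mst(coords):
    """Minimum spanning tree over 2-D points via Prim's algorithm.

    coords: (n, 2) positions; returns a list of (i, j) edge index pairs.
    """
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    in_tree = {0}                                    # grow the tree from node 0
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:                            # cheapest edge leaving the tree
            for j in range(n):
                if j not in in_tree and (best is None or dist[i, j] < best[2]):
                    best = (i, j, dist[i, j])
        edges.append((best[0], best[1]))
        in_tree.add(best[1])
    return edges
```

An optimiser like PSO then perturbs this tree (rerouting branches, changing cluster assignments) to trade cable cost against electrical losses.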

33 pages, 8481 KB  
Article
Assessment of Hybrid Renewable Energy System: A Particle Swarm Optimization Approach to Power Demand Profile and Generation Management
by Luis José Turcios, José Luis Torres-Madroñero, Laura M. Cárdenas, Maritza Jiménez and César Nieto-Londoño
Energies 2025, 18(23), 6141; https://doi.org/10.3390/en18236141 - 24 Nov 2025
Cited by 1 | Viewed by 851
Abstract
The use of non-renewable energy resources is one of the main drivers of climate change. In response, the United Nations established the seventh Sustainable Development Goal, “Affordable and clean energy”, which promotes the transition toward renewable and environmentally friendly sources such as wind and solar energy. However, the intermittent nature of these resources poses challenges for maintaining a stable, continuous power supply, highlighting the need for hybrid technology approaches, such as Hybrid Renewable Energy Systems (HRES), which integrate complementary renewable sources with energy storage. In this context, this study applies a Particle Swarm Optimisation (PSO)-based approach to determine the optimal sizing and operating strategy for a hybrid system comprising photovoltaic, wind, battery storage, and diesel backup units under various synthetic load profiles. The results indicate that diesel-assisted configurations achieve lower levelized costs of energy (0.23–0.35 USD/kWh) and maintain high reliability (LPSP < 0.25%), although at the expense of higher fuel consumption and CO2 emissions. Conversely, fully renewable configurations present higher energy costs (0.29–0.44 USD/kWh), but reduce annual CO2 emissions by up to 50% and create more employment opportunities, particularly in regions with abundant wind resources such as La Guajira, Colombia. Full article

32 pages, 9121 KB  
Review
Generative Design of Concentrated Solar Thermal Tower Receivers—State of the Art and Trends
by Jorge Moreno García-Moreno and Kypros Milidonis
Energies 2025, 18(22), 5890; https://doi.org/10.3390/en18225890 - 8 Nov 2025
Viewed by 1035
Abstract
The rapid advances in artificial intelligence (AI) and high-performance computing (HPC) are transforming the landscape of engineering design, and the concentrated solar power (CSP) tower sector is no exception. As these technologies increasingly penetrate the energy domain, they bring new capabilities for addressing the complex, multi-variable nature of receiver design and optimisation. This review explores the application of AI-driven generative design techniques in the context of CSP tower receivers, with a particular focus on the use of metaheuristic algorithms and machine learning models. A structured classification is presented, highlighting the most commonly employed methods, such as Genetic Algorithms (GAs), Particle Swarm Optimisation (PSO), and Artificial Neural Networks (ANNs), and mapping them to specific receiver types: cavity, external, and volumetric. GAs are found to dominate multi-objective optimisation tasks, especially those involving trade-offs between thermal efficiency and heat flux uniformity, while ANNs offer strong potential as surrogate models for accelerating design iterations. The review also identifies existing gaps in the literature and outlines future opportunities, including the integration of high-fidelity simulations and experimental validation into AI design workflows. These insights demonstrate the growing relevance and impact of AI in advancing the next generation of high-performance CSP receiver systems. Full article

38 pages, 766 KB  
Article
Sustainable Swarm Intelligence: Assessing Carbon-Aware Optimization in High-Performance AI Systems
by Vasileios Alevizos, Nikitas Gerolimos, Eleni Aikaterini Leligkou, Giorgos Hompis, Georgios Priniotakis and George A. Papakostas
Technologies 2025, 13(10), 477; https://doi.org/10.3390/technologies13100477 - 21 Oct 2025
Cited by 8 | Viewed by 1509
Abstract
Carbon-aware AI demands clear links between algorithmic choices and verified emission outcomes. This study measures and steers the carbon footprint of swarm-based optimization in HPC by coupling a job-level Emission Impact Metric with sub-minute power and grid-intensity telemetry. Across 480 runs covering 41 algorithms, we report grams CO2 per successful optimization and an efficiency index η (objective gain per kg CO2). Results show faster swarms achieve lower integral energy: Particle Swarm emits 24.9 g CO2 per optimum versus 61.3 g for GridSearch on identical hardware; Whale and Cuckoo approach the best η frontier, while L-SHADE exhibits front-loaded power spikes. Conservative scale factor schedules and moderate populations reduce emissions without degrading fitness; idle-node suppression further cuts leakage. Agreement between CodeCarbon, MLCO2, and vendor telemetry is within 1.8%, supporting reproducibility. The framework offers auditable, runtime controls (throttle/hold/release) that embed carbon objectives without violating solution quality budgets. Full article
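The job-level emission metric and efficiency index η described in this abstract can be illustrated with a small sketch. The energy and grid-intensity figures below are hypothetical placeholders, not the paper's telemetry, and the function names are assumptions for illustration.

```python
def grams_co2(energy_kwh, grid_intensity_g_per_kwh):
    # Job-level emissions: energy drawn by the run times the grid's
    # carbon intensity during execution.
    return energy_kwh * grid_intensity_g_per_kwh

def efficiency_index(objective_gain, emitted_g):
    # η: objective gain achieved per kilogram of CO2 emitted.
    return objective_gain / (emitted_g / 1000.0)

# Hypothetical figures: two optimisers reaching the same objective gain
# at a fixed grid intensity of 415 g CO2/kWh.
pso_g = grams_co2(0.05, 415.0)          # faster run, less energy
grid_search_g = grams_co2(0.12, 415.0)  # slower exhaustive run
eta_pso = efficiency_index(1.0, pso_g)
eta_grid = efficiency_index(1.0, grid_search_g)
```

The study's headline comparison (24.9 g vs. 61.3 g CO2 per optimum) is exactly this kind of ratio computed from measured, rather than assumed, energy and intensity.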

30 pages, 15268 KB  
Article
Multi-Objective Two-Layer Robust Optimisation Model for Water Resource Allocation in the Basin: A Case Study of Yellow River Basin, China
by Danyang Di, Hao Hu, Shikun Duan, Qi Shi, Huiliang Wang and Lizhong Xiao
Water 2025, 17(20), 3009; https://doi.org/10.3390/w17203009 - 20 Oct 2025
Viewed by 854
Abstract
The continuous growth of the social economy and the accelerated urbanisation process have led to rising demand for water resources in river basins. The uneven temporal and spatial distribution of water resources has further exacerbated the contradiction between supply and demand. The traditional extensive water resource allocation model is no longer suitable for the diverse demands of sustainable development in river basins. There is therefore an urgent need to determine how to reconcile the supply and demand of water resources in river basins to achieve a rational allocation. Taking the Yellow River Basin as an example, an optimal water allocation framework based on a multi-objective robust optimisation method was proposed in this study. Robust constraint boundary conditions for industrial, agricultural, construction and service, ecological, and social water demand were selected from the perspective of the economy–society–ecology nexus. Then, Latin hypercube sampling was adopted to modify the Monte Carlo method, improving the dispersion of sampling values used to quantify the uncertainty of water allocation parameters. Furthermore, a multi-dimensional spatial equilibrium optimal allocation combining adjustable robust optimisation and multi-objective optimisation was established. Finally, a multi-objective particle swarm optimisation algorithm based on a crossover operator was constructed to obtain the Pareto-optimal solution for the multi-dimensional spatial equilibrium optimal allocation. The primary findings were as follows: (1) Parameter uncertainty had a significant effect on the provincial/regional revenues of water resources but no obvious effect on basin revenue. (2) The uncertainty in runoff and parameters had a significant influence on decisions for optimal water allocation. The optimal volume of water purchased by different provinces (regions) varied greatly under different scenarios. Full article
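The Latin hypercube step this abstract describes, i.e. stratifying each parameter's range so samples spread more evenly than plain Monte Carlo draws, can be sketched generically as follows. This is not the authors' code; the parameter bounds below are illustrative assumptions.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Draw n_samples points from the box `bounds`, placing exactly one
    sample in each of n_samples equal-probability strata per dimension."""
    rng = random.Random(seed)
    dim = len(bounds)
    samples = [[0.0] * dim for _ in range(n_samples)]
    for d in range(dim):
        # One uniform point inside each stratum [k/n, (k+1)/n), then
        # shuffle the stratum order so dimensions pair up randomly.
        points = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(points)
        lo, hi = bounds[d]
        for i in range(n_samples):
            samples[i][d] = lo + points[i] * (hi - lo)
    return samples

# Illustrative two-parameter uncertainty space.
runs = latin_hypercube(10, [(0.0, 1.0), (200.0, 800.0)])
```

Because each one-dimensional stratum is hit exactly once, even a small sample budget covers the full range of every uncertain parameter, which is the dispersion improvement the study exploits.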
(This article belongs to the Section Water Resources Management, Policy and Governance)