Search Results (17)

Search Parameters:
Keywords = sequential least squares programming algorithm

18 pages, 1486 KB  
Article
A Deep Learning-Based Ensemble System for Brent and WTI Crude Oil Price Analysis and Prediction
by Yiwen Zhang and Salim Lahmiri
Entropy 2025, 27(11), 1122; https://doi.org/10.3390/e27111122 - 31 Oct 2025
Cited by 1 | Viewed by 898
Abstract
Crude oil price forecasting is an important task in energy management and storage. In this regard, deep learning has been applied in the literature to generate accurate forecasts. The main purpose of this study is to design an ensemble prediction system based on various deep learning systems. Specifically, in the first stage of our proposed ensemble system, convolutional neural networks (CNNs), long short-term memory networks (LSTMs), bidirectional LSTM (BiLSTM), gated recurrent units (GRUs), bidirectional GRU (BiGRU), and deep feedforward neural networks (DFFNNs) are used as individual predictive systems to predict crude oil prices. Their respective parameters are fine-tuned by Bayesian optimization (BO). In the second stage, forecasts from the previous stage are all weighted by using the sequential least squares programming (SLSQP) algorithm. The standard tree-based ensemble models, namely, extreme gradient boosting (XGBoost) and random forest (RF), are implemented as baseline models. The main findings can be summarized as follows. First, the proposed ensemble system outperforms the individual CNN, LSTM, BiLSTM, GRU, BiGRU, and DFFNN models. Second, it outperforms the standard XGBoost and RF models. Governments and policymakers can use these models to design more effective energy policies and better manage supply in fluctuating markets. For investors, improved predictions of price trends present opportunities for strategic investments, reducing risk while maximizing returns in the energy market.
(This article belongs to the Section Multidisciplinary Applications)
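
The abstract only names the SLSQP weighting step; the sketch below is a minimal, hypothetical illustration of how such second-stage forecast weights could be obtained with SciPy's SLSQP solver. The forecast arrays are synthetic placeholders standing in for the six deep learning models, not the study's data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y_true = rng.normal(70, 5, size=200)                     # placeholder "price" series
# Stand-ins for the six first-stage model forecasts (CNN, LSTM, ..., DFFNN)
forecasts = np.stack([y_true + rng.normal(0, s, 200) for s in (1.0, 1.5, 2.0, 1.2, 1.8, 2.5)])

def ensemble_mse(w):
    """MSE of the weighted combination of the base-model forecasts."""
    return np.mean((w @ forecasts - y_true) ** 2)

n = forecasts.shape[0]
res = minimize(
    ensemble_mse,
    x0=np.full(n, 1.0 / n),                              # start from equal weights
    method="SLSQP",
    bounds=[(0.0, 1.0)] * n,                             # non-negative weights
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # weights sum to one
)
print("optimal weights:", np.round(res.x, 3))
```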

25 pages, 2377 KB  
Article
A FinTech-Aligned Optimization Framework for IoT-Enabled Smart Agriculture to Mitigate Greenhouse Gas Emissions
by Sofia Polymeni, Dimitrios N. Skoutas, Georgios Kormentzas and Charalabos Skianis
Information 2025, 16(9), 797; https://doi.org/10.3390/info16090797 - 14 Sep 2025
Viewed by 800
Abstract
With agriculture being the second biggest contributor to greenhouse gas (GHG) emissions through the excessive use of fertilizers, machinery, and inefficient farming practices, global efforts to reduce emissions have been intensified, opting for smarter, data-driven solutions. However, while machine learning (ML) offers powerful predictive capabilities, its black-box nature presents a challenge for trust and adoption, particularly when integrated with auditable financial technology (FinTech) principles. To address this gap, this work introduces a novel, explanation-focused GHG emission optimization framework for IoT-enabled smart agriculture that is both transparent and prescriptive, distinguishing itself from macro-level land-use solutions by focusing on optimizable management practices while aligning with core FinTech principles and pollutant stock market mechanisms. The framework employs a two-stage statistical methodology that first identifies distinct agricultural emission profiles from macro-level data, and then models these emissions by developing a cluster-oriented principal component regression (PCR) model, which outperforms simpler variants by approximately 35% on average across all clusters. This interpretable model then serves as the core of a FinTech-aligned optimization framework that combines cluster-oriented modeling knowledge with a sequential least squares quadratic programming (SLSQP) algorithm to minimize emission-related costs under a carbon pricing mechanism, showcasing forecasted cost reductions as high as 43.55%.
(This article belongs to the Special Issue Technoeconomics of the Internet of Things)
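
As a rough, self-contained illustration of the optimization stage described above, the sketch below minimizes a carbon-priced cost built on a fitted linear (PCR-style) emission surrogate with SLSQP. The coefficients, input bounds, and carbon price are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical fitted emission surrogate for one cluster: emissions = b0 + beta @ x,
# where x = (fertilizer use, machinery energy, livestock density), scaled units.
beta = np.array([0.8, 0.5, 1.1])
b0 = 2.0
carbon_price = 40.0                       # invented cost per unit of emissions
input_price = np.array([3.0, 2.0, 5.0])   # invented cost per unit of each input

def cost(x):
    emissions = b0 + beta @ x
    return input_price @ x + carbon_price * emissions

# Keep each management input within invented agronomic bounds, and require a
# minimum total "production effort" so output is not simply optimized away.
bounds = [(0.2, 2.0), (0.1, 1.5), (0.3, 1.0)]
cons = [{"type": "ineq", "fun": lambda x: x.sum() - 1.0}]

res = minimize(cost, x0=np.array([1.0, 0.8, 0.6]), method="SLSQP",
               bounds=bounds, constraints=cons)
print("optimal inputs:", np.round(res.x, 3), "cost:", round(res.fun, 2))
```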

28 pages, 16152 KB  
Article
A Smooth-Delayed Phase-Type Mixture Model for Human-Driven Process Duration Modeling
by Dongwei Wang, Sally McClean, Lingkai Yang, Ian McChesney and Zeeshan Tariq
Algorithms 2025, 18(9), 575; https://doi.org/10.3390/a18090575 - 11 Sep 2025
Viewed by 574
Abstract
Activities in business processes primarily depend on human behavior for completion. Due to human agency, the behavior underlying individual activities may occur in multiple phases and can vary in execution. As a result, the execution duration and nature of such activities may exhibit complex multimodal characteristics. Phase-type distributions are useful for analyzing the underlying behavioral structure, which may consist of multiple sub-activities. The phenomenon of delayed start is also common in such activities, possibly due to the minimum task completion time or prerequisite tasks. As a result, the distribution of durations or certain components does not start at zero but has a minimum value, and the probability below this value is zero. When using phase-type models to fit such distributions, a large number of phases are often required, exceeding the actual number of sub-activities. This reduces the interpretability of the parameters and may also lead to optimization difficulties due to overparameterization. In this paper, we propose a smooth-delayed phase-type mixture model that introduces delay parameters to address the difficulty of fitting this kind of distribution. Since durations shorter than the delay should have zero probability, such hard truncation renders the parameter not estimable under the Expectation–Maximization (EM) framework. To overcome this, we design a soft-truncation mechanism to improve model convergence. We further develop an inference framework that combines the EM algorithm, Bayesian inference, and Sequential Least Squares Programming for comprehensive and efficient parameter estimation. The method is validated on a synthetic dataset and two real-world datasets. Results demonstrate that the proposed approach maintains performance comparable to purely data-driven methods while providing good interpretability to reveal the potential underlying structure behind human-driven activities.
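
The paper's EM/Bayesian/SLSQP framework cannot be reproduced from the abstract alone; the toy sketch below only illustrates the soft-truncation idea, replacing a hard cutoff at the delay with a sigmoid gate and estimating a delay and rate for a shifted exponential with SLSQP. The model and data are simplified stand-ins, not the paper's mixture model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
durations = 2.0 + rng.exponential(scale=1.5, size=500)    # synthetic delayed durations

def neg_log_lik(params, x=durations, k=20.0):
    """Approximate (unnormalized) negative log-likelihood of a shifted exponential
    with a smooth sigmoid gate standing in for the hard truncation x > d."""
    d, rate = params
    gate = 1.0 / (1.0 + np.exp(-k * (x - d)))
    dens = gate * rate * np.exp(-rate * np.clip(x - d, 0.0, None)) + 1e-12
    return -np.sum(np.log(dens))

res = minimize(neg_log_lik, x0=[0.5, 1.0], method="SLSQP",
               bounds=[(0.0, durations.min()), (1e-3, 10.0)])
print("estimated delay and rate:", np.round(res.x, 3))
```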

27 pages, 11820 KB  
Article
Collaborative Optimization Method of Structural Lightweight Design Integrating RSM-GA for an Electric Vehicle BIW
by Hongjiang Li, Shijie Sun, Hong Fang, Xiaojuan Hu, Junjian Hou and Yudong Zhong
World Electr. Veh. J. 2025, 16(8), 415; https://doi.org/10.3390/wevj16080415 - 23 Jul 2025
Viewed by 1155
Abstract
The body-in-white (BIW) is an important part of the electric vehicle body; its mass accounts for about 30% of the vehicle mass, and reducing this mass can contribute significantly to energy savings and emission reduction. In this paper, a collaborative optimization method combining the response surface method and a genetic algorithm (RSM-GA) is developed to perform lightweight optimization of the body-in-white of an electric vehicle. Seventeen design variables were screened by relative sensitivity calculations based on modal and stiffness sensitivity analysis, and data samples were collected using the Taguchi and Hammersley designs of experiments. To maintain accuracy, least squares regression, the moving least squares method, and radial basis functions are applied to fit the data and obtain the response surface, and an error analysis of the fitting results is carried out to correct the response surface. Finally, the genetic algorithm based on the response surface is employed to optimize the structure of the body-in-white, and the results are compared with those of the adaptive response surface method and the sequential quadratic programming method. The comparison shows that the optimization results obtained by the proposed method achieve relatively high accuracy.
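
A minimal sketch of the response-surface-plus-evolutionary-search pattern, under invented numbers: a quadratic response surface is fitted to synthetic design-of-experiments samples by least squares, and SciPy's differential evolution is used as a stand-in for the paper's genetic algorithm, with the stiffness requirement handled by a penalty term.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)
# Synthetic DOE samples: two thickness variables -> (mass, stiffness) responses.
X = rng.uniform(1.0, 3.0, size=(40, 2))
mass = 5.0 * X[:, 0] + 4.0 * X[:, 1] + rng.normal(0, 0.05, 40)
stiff = 10.0 + 3.0 * X[:, 0] + 2.5 * X[:, 1] - 0.3 * X[:, 0] * X[:, 1] + rng.normal(0, 0.05, 40)

def quad_features(x):
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2], axis=-1)

# Least-squares quadratic response surfaces for mass and stiffness.
A = quad_features(X)
c_mass, *_ = np.linalg.lstsq(A, mass, rcond=None)
c_stiff, *_ = np.linalg.lstsq(A, stiff, rcond=None)

def objective(x):
    f = quad_features(np.asarray(x))
    penalty = 1e3 * max(0.0, 20.0 - f @ c_stiff)     # require stiffness >= 20 (illustrative)
    return f @ c_mass + penalty

res = differential_evolution(objective, bounds=[(1.0, 3.0), (1.0, 3.0)], seed=0)
print("thicknesses:", np.round(res.x, 3), "predicted mass:", round(float(res.fun), 2))
```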

20 pages, 12773 KB  
Article
Multi-Scale Sponge Capacity Trading and SLSQP for Stormwater Management Optimization
by An-Kang Liu, Qing Xu, Wen-Jin Zhu, Yang Zhang, De-Long Huang, Qing-Hai Xie, Chun-Bo Jiang and Hai-Ruo Wang
Sustainability 2025, 17(10), 4646; https://doi.org/10.3390/su17104646 - 19 May 2025
Viewed by 788
Abstract
Low-impact development (LID) facilities serve as a fundamental approach in urban stormwater management. However, significant variations in land use among different plots lead to discrepancies in runoff reduction demands, frequently resulting in either the over- or under-implementation of LID infrastructure. To address this issue, we propose a cost-effective optimization framework grounded in the concept of “Capacity Trading (CT)”. The study area was partitioned into multi-scale grids (CT-100, CT-200, CT-500, and CT-1000) to systematically investigate runoff redistribution across heterogeneous land parcels. Integrated with the Sequential Least Squares Programming (SLSQP) optimization algorithm, LID facilities are allocated according to demand under two independent constraint conditions: runoff coefficient (φ ≤ 0.49) and runoff control rate (η ≥ 70%). A quantitative analysis was conducted to evaluate the construction cost and reduction effectiveness across different trading scales. The key findings include the following: (1) At a constant return period, increasing the trading scale significantly reduces the demand for LID facility construction. Expanding trading scales from CT-100 to CT-1000 reduces LID area requirements by 28.33–142.86 ha under the φ-constraint and 25.5–197.19 ha under the η-constraint. (2) Systematic evaluations revealed that CT-500 optimized cost-effectiveness by balancing infrastructure investments and hydrological performance. This scale allows for coordinated construction, avoiding the high costs associated with small-scale trading (CT-100 and CT-200) while mitigating the diminishing returns observed in large-scale trading (CT-1000). This study provides a refined and efficient solution for urban stormwater management, overcoming the limitations of traditional approaches and demonstrating significant practical value.
(This article belongs to the Special Issue Sustainable Stormwater Management and Green Infrastructure)
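
As an illustration of demand-based LID allocation under the runoff-coefficient constraint (φ ≤ 0.49), the sketch below sizes LID areas across a few hypothetical parcels with SLSQP; parcel areas, runoff coefficients, and unit costs are invented, and the hydrology is reduced to an area-weighted coefficient rather than the paper's model.

```python
import numpy as np
from scipy.optimize import minimize

# Invented parcel data: area (ha), baseline runoff coefficient, LID unit cost per ha.
area = np.array([12.0, 8.0, 20.0, 15.0])
phi0 = np.array([0.75, 0.60, 0.85, 0.55])
unit_cost = np.array([1.2, 0.9, 1.5, 1.0])
phi_lid = 0.15                                     # runoff coefficient of LID-treated area

def total_cost(a_lid):
    return unit_cost @ a_lid

def runoff_margin(a_lid):
    # Area-weighted runoff coefficient after converting a_lid hectares to LID per parcel.
    phi = (phi0 * (area - a_lid) + phi_lid * a_lid) / area
    return 0.49 - np.sum(phi * area) / area.sum()  # must be >= 0

res = minimize(total_cost, x0=0.3 * area, method="SLSQP",
               bounds=[(0.0, a) for a in area],
               constraints=[{"type": "ineq", "fun": runoff_margin}])
print("LID area per parcel (ha):", np.round(res.x, 2), "relative cost:", round(res.fun, 2))
```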

16 pages, 1716 KB  
Article
Research on Prediction of Dissolved Gas Concentration in a Transformer Based on Dempster–Shafer Evidence Theory-Optimized Ensemble Learning
by Pan Zhang, Kang Hu, Yuting Yang, Guowei Yi, Xianya Zhang, Runze Peng and Jiaqi Liu
Electronics 2025, 14(7), 1266; https://doi.org/10.3390/electronics14071266 - 24 Mar 2025
Cited by 4 | Viewed by 862
Abstract
The variation in dissolved gas concentration in the transformer serves as a crucial indicator for assessing the health status and potential faults of the transformer. However, traditional models and existing machine learning and deep learning models exhibit limitations when applied to real-world scenarios in power systems, lacking adaptability and failing to meet the requirements for accuracy and efficiency of prediction in practical applications. This paper proposes a Dempster–Shafer evidence theory-optimized Bagging ensemble learning model, aiming to improve the accuracy and stability of dissolved gas concentration prediction in transformers. By incorporating Dempster–Shafer evidence theory for the fusion of base learners and optimizing the basic probability distribution parameters by using the sequential least squares programming algorithm, this model significantly improves the adaptability and robustness of prediction. The experimental results show that, compared with the ordinary Bagging method and the SARIMA model, the overall mean squared error of the Dempster–Shafer-optimized Bagging predictions is only 22% of that of the ordinary Bagging predictions and 38% of that of the SARIMA predictions.
(This article belongs to the Special Issue Artificial Intelligence Applications in Electrical and Energy Systems)
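
The fusion details are not given in the abstract; below is a generic implementation of Dempster's rule of combination for two basic probability assignments, the building block that the paper's SLSQP step would then tune. The frame of discernment and mass values are illustrative only.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q                      # mass assigned to disjoint hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Illustrative masses from two base learners over hypotheses {up, down}.
m1 = {frozenset({"up"}): 0.6, frozenset({"up", "down"}): 0.4}
m2 = {frozenset({"up"}): 0.5, frozenset({"down"}): 0.3, frozenset({"up", "down"}): 0.2}
print(dempster_combine(m1, m2))
```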

14 pages, 1424 KB  
Article
Rice Disease Classification Using a Stacked Ensemble of Deep Convolutional Neural Networks
by Zhibin Wang, Yana Wei, Cuixia Mu, Yunhe Zhang and Xiaojun Qiao
Sustainability 2025, 17(1), 124; https://doi.org/10.3390/su17010124 - 27 Dec 2024
Cited by 3 | Viewed by 2415
Abstract
Rice is a staple food for almost half of the world’s population, and the stability and sustainability of rice production plays a decisive role in food security. Diseases are a major cause of loss in rice crops. The timely discovery and control of diseases are important in reducing the use of pesticides, protecting the agricultural eco-environment, and improving the yield and quality of rice crops. Deep convolutional neural networks (DCNNs) have achieved great success in disease image classification. However, most models have complex network structures that frequently cause problems, such as redundant network parameters, low training efficiency, and high computational costs. To address this issue and improve the accuracy of rice disease classification, a lightweight deep convolutional neural network (DCNN) ensemble method for rice disease classification is proposed. First, a new lightweight DCNN model (called CG-EfficientNet), which is based on an attention mechanism and EfficientNet, was designed as the base learner. Second, CG-EfficientNet models with different optimization algorithms and network parameters were trained on rice disease datasets to generate seven different CG-EfficientNets, and a resampling strategy was used to enhance the diversity of the individual models. Then, the sequential least squares programming algorithm was used to calculate the weight of each base model. Finally, logistic regression was used as the meta-classifier for stacking. To verify the effectiveness, classification experiments were performed on five classes of rice tissue images: rice bacterial blight, rice kernel smut, rice false smut, rice brown spot, and healthy leaves. The accuracy of the proposed method was 96.10%, which is higher than the results of the classic CNN models VGG16, InceptionV3, ResNet101, and DenseNet201 and four integration methods. The experimental results show that the proposed method is not only capable of accurately identifying rice diseases but is also computationally efficient.
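
A compact sketch of the stacking stage with scikit-learn, using small random-forest classifiers as stand-ins for the CG-EfficientNet base learners: out-of-fold class probabilities become meta-features for a logistic-regression meta-classifier. The data are synthetic, and the paper's SLSQP weighting of base models is not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split

# Synthetic 5-class data standing in for features of the five rice tissue classes.
X, y = make_classification(n_samples=600, n_features=40, n_informative=20,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Base learners (stand-ins for the seven CG-EfficientNet variants).
bases = [RandomForestClassifier(n_estimators=50, random_state=s) for s in range(3)]

# Out-of-fold class probabilities become meta-features for stacking.
meta_tr = np.hstack([cross_val_predict(b, X_tr, y_tr, cv=5, method="predict_proba")
                     for b in bases])
meta_te = np.hstack([b.fit(X_tr, y_tr).predict_proba(X_te) for b in bases])

meta = LogisticRegression(max_iter=1000).fit(meta_tr, y_tr)
print("stacked accuracy:", round(meta.score(meta_te, y_te), 3))
```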

25 pages, 396 KB  
Article
Causal Economic Machine Learning (CEML): “Human AI”
by Andrew Horton
AI 2024, 5(4), 1893-1917; https://doi.org/10.3390/ai5040094 - 11 Oct 2024
Viewed by 4265
Abstract
This paper proposes causal economic machine learning (CEML) as a research agenda that utilizes causal machine learning (CML), built on causal economics (CE) decision theory. Causal economics is better suited for use in machine learning optimization than expected utility theory (EUT) and behavioral economics (BE) based on its central feature of causal coupling (CC), which models decisions as requiring upfront costs, some certain and some uncertain, in anticipation of future uncertain benefits that are linked by causation. This multi-period causal process, incorporating certainty and uncertainty, replaces the single-period lottery outcomes augmented with intertemporal discounting used in EUT and BE, providing a more realistic framework for AI machine learning modeling and real-world application. It is mathematically demonstrated that EUT and BE are constrained versions of CE. With the growing interest in natural experiments in statistics and causal machine learning (CML) across many fields, such as healthcare, economics, and business, there is a large potential opportunity to run AI models on CE foundations and compare results to models based on traditional decision-making models that focus only on rationality, bounded to various degrees. To be most effective, machine learning must mirror human reasoning as closely as possible, an alignment established through CEML, which represents an evolution to truly “human AI”. This paper maps out how the non-linear optimization required for the CEML structural response functions can be accomplished through Sequential Least Squares Programming (SLSQP) and applied to data sets through the S-Learner CML meta-algorithm. Upon this foundation, the next phase of research is to apply CEML to appropriate data sets in various areas of practice where causality and accurate modeling of human behavior are vital, such as precision healthcare, economic policy, and marketing.
(This article belongs to the Section AI Systems: Theory and Applications)
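
The S-Learner meta-algorithm mentioned above can be sketched in a few lines: fit a single model on the features plus a treatment indicator, then estimate effects as the difference between predictions with the treatment toggled on and off. The data below are synthetic and unrelated to the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 3))
t = rng.integers(0, 2, n)                      # treatment (e.g., an upfront-cost action)
y = X[:, 0] + 2.0 * t * (1 + 0.5 * X[:, 1]) + rng.normal(0, 0.5, n)

# S-Learner: a single model over (features, treatment).
model = GradientBoostingRegressor().fit(np.column_stack([X, t]), y)

# Estimated individual effects: prediction with t=1 minus prediction with t=0.
tau = (model.predict(np.column_stack([X, np.ones(n)]))
       - model.predict(np.column_stack([X, np.zeros(n)])))
print("estimated average treatment effect:", round(tau.mean(), 2))  # true ATE ≈ 2.0
```
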
17 pages, 3989 KB  
Article
An Obstacle Avoidance Trajectory Planning Methodology Based on Energy Minimization (OTPEM) for the Tilt-Wing eVTOL in the Takeoff Phase
by Guangyu Zheng, Peng Li and Dongsu Wu
World Electr. Veh. J. 2024, 15(7), 300; https://doi.org/10.3390/wevj15070300 - 6 Jul 2024
Viewed by 2143
Abstract
Electric tilt-wing flying cars are an efficient, economical, and environmentally friendly solution to urban traffic congestion and travel efficiency issues. This article addresses the high energy consumption and obstacle interference during the takeoff phase of the tilt-wing eVTOL (electric Vertical Takeoff and Landing), proposing a trajectory planning method based on energy minimization and obstacle avoidance. Firstly, based on the dynamics analysis, the relationship between energy consumption, spatial trajectory, and obstacles is sorted out and the decision variables for the trajectory planning problem with obstacle avoidance are determined. Secondly, based on the power discretization during the takeoff phase, the energy minimization objective function is established and the constraints of performance limitations and spatial obstacles are derived. Thirdly, by integrating the optimization model with the SLSQP (Sequential Least Squares Quadratic Programming algorithm), the second-order sequential quadratic programming model and decision variable update equations are derived, establishing the solution process for the trajectory planning problem of the tilt-wing eVTOL takeoff with obstacle avoidance. Finally, the Airbus Vahana A3 is taken as an example to verify and validate the effectiveness, stability, and robustness of the model and optimization algorithm proposed. The validation results show that the OTPEM (obstacle avoidance trajectory planning methodology based on energy minimization) can effectively handle changes in the takeoff end state and exhibits good stability and robustness in different obstacle environments. It can provide a certain reference for the three-dimensional obstacle avoidance trajectory planning of Airbus Vahana A3 and other tilt-wing eVTOL trajectory planning problems.
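
A stripped-down 2D analogue of the takeoff problem, with invented geometry: SLSQP optimizes intermediate waypoints to minimize a smooth path-energy proxy while keeping clearance from a circular obstacle. It illustrates only the constrained-SQP formulation, not the paper's eVTOL dynamics or power model.

```python
import numpy as np
from scipy.optimize import minimize

start, goal = np.array([0.0, 0.0]), np.array([10.0, 8.0])
obstacle, radius = np.array([5.0, 4.0]), 2.0
n_way = 8                                               # free intermediate waypoints

def path(z):
    return np.vstack([start, z.reshape(n_way, 2), goal])

def energy_proxy(z):
    p = path(z)
    return np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1) ** 2)  # smooth length proxy

def clearance(z):
    p = path(z)
    return np.linalg.norm(p - obstacle, axis=1) - radius            # all must be >= 0

z0 = np.linspace(start, goal, n_way + 2)[1:-1].ravel()              # straight-line guess
res = minimize(energy_proxy, z0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": clearance}])
print("feasible:", res.success, "waypoints:\n", np.round(path(res.x), 2))
```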

39 pages, 1740 KB  
Review
Stress-Constrained Topology Optimization for Commercial Software: A Python Implementation for ABAQUS®
by Pedro Fernandes, Àlex Ferrer, Paulo Gonçalves, Marco Parente, Ricardo Pinto and Nuno Correia
Appl. Sci. 2023, 13(23), 12916; https://doi.org/10.3390/app132312916 - 2 Dec 2023
Cited by 8 | Viewed by 6055
Abstract
Topology optimization has evidenced its capacity to provide new optimal designs in many different disciplines. However, most novel methods are difficult to apply in commercial software, limiting their use in the academic field and hindering their application in the industry. This article presents a new open methodology for solving geometrically complex non-self-adjoint topology optimization problems, including stress-constrained and stress minimization formulations, using validated FEM commercial software. The methodology was validated by comparing the sensitivity analysis with the results obtained through finite differences and solving two benchmark problems with the following optimizers: Optimality Criteria, Method of Moving Asymptotes, Sequential Least-Squares Quadratic Programming (SLSQP), and Trust-constr optimization algorithms. The SLSQP and Trust-constr optimization algorithms obtained better results in stress-minimization problem statements than the methodology available in ABAQUS®. A Python implementation of this methodology is proposed, working in conjunction with the commercial software ABAQUS® 2023 to allow a straightforward application to new problems while benefiting from a graphic user interface and validated finite element solver.
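
The SciPy optimizer interface compared in the article can be shown on a toy constrained problem; this is not the article's topology-optimization code, only the SLSQP and trust-constr calls applied to a stand-in "mass versus stress limit" problem.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Toy stand-in for a stress-constrained minimization: minimize "mass" x1 + x2
# subject to a nonlinear "stress" limit 1/x1 + 1/x2 <= 1 and positive bounds.
objective = lambda x: x[0] + x[1]
stress = lambda x: 1.0 / x[0] + 1.0 / x[1]
bounds = [(0.1, 10.0), (0.1, 10.0)]
x0 = np.array([3.0, 3.0])

res_slsqp = minimize(objective, x0, method="SLSQP", bounds=bounds,
                     constraints=[{"type": "ineq", "fun": lambda x: 1.0 - stress(x)}])
res_trust = minimize(objective, x0, method="trust-constr", bounds=bounds,
                     constraints=[NonlinearConstraint(stress, -np.inf, 1.0)])
print("SLSQP:       ", np.round(res_slsqp.x, 3))
print("trust-constr:", np.round(res_trust.x, 3))   # both should approach (2, 2)
```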

14 pages, 5878 KB  
Article
Novel Multivariable Evolutionary Algorithm-Based Method for Modal Reconstruction of the Corneal Surface from Sparse and Incomplete Point Clouds
by Francisco L. Sáez-Gutiérrez, Jose S. Velázquez, Jorge L. Alió del Barrio, Jorge L. Alio and Francisco Cavas
Bioengineering 2023, 10(8), 989; https://doi.org/10.3390/bioengineering10080989 - 21 Aug 2023
Cited by 5 | Viewed by 2015
Abstract
Three-dimensional reconstruction of the corneal surface provides a powerful tool for managing corneal diseases. This study proposes a novel method for reconstructing the corneal surface from elevation point clouds, using modal schemes capable of reproducing corneal shapes using surface polynomial functions. The multivariable polynomial fitting was performed using a non-dominated sorting multivariable genetic algorithm (NS-MVGA). Standard reconstruction methods using least-squares discrete fitting (LSQ) and sequential quadratic programming (SQP) were compared with the evolutionary algorithm-based approach. The study included 270 corneal surfaces of 135 eyes of 102 patients (ages 11–63) sorted in two groups: control (66 eyes of 33 patients) and keratoconus (KC) (69 eyes of 69 patients). Tomographic information (Sirius, Costruzione Strumenti Oftalmici, Italy) was processed using Matlab. The goodness of fit for each method was evaluated using mean squared error (MSE), measured at the same nodes where the elevation data were collected. Polynomial fitting based on NS-MVGA improves MSE values by 86% compared to LSQ-based methods in healthy patients. Moreover, this new method improves aberrated surface reconstruction by an average value of 56% if compared with LSQ-based methods in keratoconus patients. Finally, significant improvements were also found in morpho-geometric parameters, such as asphericity and corneal curvature radii.
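
As a baseline-style illustration, the sketch below fits a plain bivariate polynomial surface to a synthetic elevation point cloud by least squares and reports the MSE at the sampled nodes; the paper's corneal modal basis and NS-MVGA fitting are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic "elevation point cloud": scattered (x, y, z) samples of a curved surface.
x, y = rng.uniform(-1, 1, 400), rng.uniform(-1, 1, 400)
z = 0.5 * (x**2 + y**2) + 0.05 * x * y + rng.normal(0, 0.01, 400)

def design_matrix(x, y, degree=4):
    # All monomials x^i * y^j with i + j <= degree.
    cols = [x**i * y**j for i in range(degree + 1) for j in range(degree + 1 - i)]
    return np.column_stack(cols)

A = design_matrix(x, y)
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)       # least-squares modal coefficients
mse = np.mean((A @ coeffs - z) ** 2)                 # goodness of fit at the same nodes
print("fitted terms:", len(coeffs), "MSE:", f"{mse:.2e}")
```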

20 pages, 2370 KB  
Article
Simpler Is Better—Calibration of Pipe Roughness in Water Distribution Systems
by Qi Zhao, Wenyan Wu, Angus R. Simpson and Ailsa Willis
Water 2022, 14(20), 3276; https://doi.org/10.3390/w14203276 - 17 Oct 2022
Cited by 15 | Viewed by 4729
Abstract
Hydraulic models of water distribution systems (WDSs) need to be calibrated, so they can be used to help to make informed decisions. Usually, hydraulic model calibration follows an iterative process of comparing the simulation results from the model with field observations and making adjustments to model parameters to make sure an acceptable level of agreement between predicted and measured values (e.g., water pressure) has been achieved. However, the manual process can be time-consuming, and the termination criterion relies on the modeler’s judgment. Therefore, various optimization-based calibration methods have been developed. In this study, three different optimization methods, i.e., Sequential Least Squares Programming (SLSQP), a Genetic Algorithm (GA) and Differential Evolution (DE), are compared for calibrating the pipe roughness of WDS models. Their performance is investigated over four different decision variable set formulations with different levels of discretization of the search space. Results obtained from a real-world case study demonstrate that compared to traditional engineering practice, optimization is effective for hydraulic model calibration. However, a finer search space discretization does not necessarily guarantee better results; and when multiple methods lead to similar performance, a simpler method is better. This study provides guidance on method and formulation selection for calibrating WDS models.
(This article belongs to the Special Issue Optimization Studies for Water Distribution Systems)
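
A hedged sketch of the calibration loop: roughness coefficients are adjusted so simulated pressures match observations, with both SLSQP and differential evolution run on the same residual objective. Here simulate_pressures is a hypothetical linear stand-in for a hydraulic solver run (a real study would call EPANET or a similar engine).

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

rng = np.random.default_rng(5)
true_roughness = np.array([100.0, 130.0, 90.0])        # one coefficient per pipe group

def simulate_pressures(roughness):
    """Hypothetical stand-in for a hydraulic solver; returns pressures at four nodes."""
    sensitivity = np.array([[0.12, 0.05, 0.02],
                            [0.03, 0.10, 0.04],
                            [0.02, 0.06, 0.11],
                            [0.07, 0.02, 0.08]])
    return 30.0 + sensitivity @ roughness

observed = simulate_pressures(true_roughness) + rng.normal(0, 0.05, 4)

def residual(roughness):
    return np.sum((simulate_pressures(roughness) - observed) ** 2)

bounds = [(60.0, 150.0)] * 3
res_slsqp = minimize(residual, x0=[110.0] * 3, method="SLSQP", bounds=bounds)
res_de = differential_evolution(residual, bounds=bounds, seed=0)
print("SLSQP estimate:", np.round(res_slsqp.x, 1))
print("DE estimate:   ", np.round(res_de.x, 1))
```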

22 pages, 1277 KB  
Article
Techno-Economic Optimization Study of Interconnected Heat and Power Multi-Microgrids with a Novel Nature-Inspired Evolutionary Method
by Paolo Fracas, Edwin Zondervan, Meik Franke, Kyle Camarda, Stanimir Valtchev and Svilen Valtchev
Electronics 2022, 11(19), 3147; https://doi.org/10.3390/electronics11193147 - 30 Sep 2022
Cited by 5 | Viewed by 2848
Abstract
The world is once again facing massive energy and environmental challenges caused by global warming. This time, the situation is complicated by the increase in energy demand after the pandemic years and a dramatic lack of basic energy supply. Purely “green” energy is still not ready to substitute for fossil energy, but this year fossil supplies have been heavily questioned. Consequently, engineering must take flexible, adaptive, unexpected directions. For example, even natural gas power plants are currently considered “green” by the European Union Taxonomy, joining “green” hydrogen. Through tight integration of highly intermittent renewables and other distributed energy resources, the microgrid is the technology of choice to guarantee the expected impacts, making clean energy affordable. The focus of this work lies in the techno-economic optimization analysis of Combined Heat and Power (CHP) Multi-Micro Grids (MMG), a novel distribution system architecture comprising two interconnected hybrid microgrids. High computational resources are needed to investigate the CHP-MMG. To this aim, a novel nature-inspired two-layer optimization-simulation algorithm is discussed. The proposed algorithm is used to execute a techno-economic analysis and find the best settings at which the energy balance is achieved at minimum operational cost and highest revenue. At the lower level, inside the algorithm, a Sequential Least Squares Programming (SLSQP) method ensures that the stochastic generation and consumption of energy derived from CHP-MMG trial settings are balanced at each time step. At the upper level, a novel multi-objective self-adaptive evolutionary algorithm is discussed. This upper level searches for the best design, sizing, siting, and setting that guarantees the highest internal rate of return (IRR) and the lowest Levelized Cost of Energy (LCOE). The Artificial Immune Evolutionary (AIE) algorithm imitates how the immune system fights harmful viruses that enter the body. The optimization method is used for sensitivity analysis of hydrogen costs in off-grid and on-grid highly perturbed contexts. It has been observed that the best CHP-MMG settings are those that promote a tight thermal and electrical energy balance between interconnected microgrids. The results demonstrate that such an energy-swarm mechanism can keep the LCOE below 15 c€/kWh and the IRR above 55%.
(This article belongs to the Special Issue Smart Energy Control & Conversion Systems)
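
The lower-layer idea, balancing generation and consumption at each time step at minimum operating cost, can be illustrated with a single-step dispatch over a few controllable units solved by SLSQP. Unit costs and limits are invented; the CHP-MMG model and the AIE upper layer are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# One time step: renewable output and demand (kW) are given, and three dispatchable
# sources (CHP, battery discharge, grid import) must close the gap at minimum cost.
renewable, demand = 180.0, 420.0
marginal_cost = np.array([0.09, 0.12, 0.22])       # €/kWh, invented
p_max = np.array([200.0, 80.0, 300.0])

def op_cost(p):
    return marginal_cost @ p

def balance(p):
    return renewable + p.sum() - demand            # equality: generation == demand

res = minimize(op_cost, x0=p_max / 2, method="SLSQP",
               bounds=[(0.0, pm) for pm in p_max],
               constraints=[{"type": "eq", "fun": balance}])
print("dispatch (kW):", np.round(res.x, 1), "cost (€):", round(res.fun, 2))
```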

16 pages, 3416 KB  
Article
Closed-Loop Combustion Optimization Based on Dynamic and Adaptive Models with Application to a Coal-Fired Boiler
by Chuanpeng Zhu, Pu Huang and Yiguo Li
Energies 2022, 15(14), 5289; https://doi.org/10.3390/en15145289 - 21 Jul 2022
Cited by 3 | Viewed by 3297
Abstract
To increase combustion efficiency and reduce pollutant emissions, this study presents an online closed-loop optimization method and its application in a boiler combustion system. To begin with, three adaptive dynamic models are established to predict NOx emission, the carbon content of fly ash (Cfh), and exhaust gas temperature (Teg), respectively. In these models, the orders of the input variables are considered to enable them to reflect the dynamics of the combustion system under load changes. Meanwhile, an adaptive least squares support vector machine (ALSSVM) algorithm is adopted to cope with the nonlinearity and the time-varying characteristics of the combustion system. Subsequently, based on the established models, an economic model predictive control (EMPC) problem is formulated and solved by a sequential quadratic programming (SQP) algorithm to calculate the optimal control variables satisfying the constraints on the control and control moves. The closed-loop optimization system is applied on a 600 MW boiler, and the performance analysis is conducted based on the operation data. The results show that the system can effectively increase boiler efficiency by about 0.5%.
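
The ALSSVM used in the paper is adaptive; as a minimal illustration of the underlying model class only, the sketch below fits a plain least squares support vector machine for regression by solving its dual linear system on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)

def rbf(A, B, sigma=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

gamma = 10.0                                   # regularization constant
K = rbf(X, X)
n = len(y)
# The LS-SVM regression dual reduces to a single linear system in (b, alpha).
A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)), K + np.eye(n) / gamma]])
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

X_new = np.array([[0.5]])
pred = rbf(X_new, X) @ alpha + b
print("prediction at x=0.5:", round(float(pred[0]), 3), "(true sin(0.5) ≈ 0.479)")
```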

19 pages, 14175 KB  
Article
Numerical Analysis of Electrohydrodynamic Flow in a Circular Cylindrical Conduit by Using Neuro Evolutionary Technique
by Naveed Ahmad Khan, Muhammad Sulaiman, Carlos Andrés Tavera Romero and Fawaz Khaled Alarfaj
Energies 2021, 14(22), 7774; https://doi.org/10.3390/en14227774 - 19 Nov 2021
Cited by 17 | Viewed by 3137
Abstract
This paper analyzes the mathematical model of electrohydrodynamic (EHD) fluid flow in a circular cylindrical conduit with an ion drag configuration. The phenomenon was modelled as a nonlinear differential equation. Furthermore, an application of artificial neural networks (ANNs) with a generalized normal distribution optimization algorithm (GNDO) and sequential quadratic programming (SQP) was utilized to suggest approximate solutions for the velocity, displacements, and acceleration profiles of the fluid by varying the Hartmann electric number (Ha²) and the strength of nonlinearity (α). ANNs were used to model the fitness function for the governing equation in terms of mean square error (MSE), which was further optimized initially by GNDO to exploit the global search. Then SQP was implemented to complement its local convergence. Numerical solutions obtained by the design scheme were compared with RK-4, the least square method (LSM), and the orthonormal Bernstein collocation method (OBCM). Stability, convergence, and robustness of the proposed algorithm were endorsed by the statistics and analysis on results of absolute errors, mean absolute deviation (MAD), Theil’s inequality coefficient (TIC), and error in Nash Sutcliffe efficiency (ENSE).
(This article belongs to the Special Issue Computational Fluid Dynamics (CFD) 2021)
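
A simplified sketch of the neuro-evolutionary idea on a toy ODE (not the EHD equation): a small tanh network defines a trial solution, the fitness is the mean squared ODE residual, and SciPy's differential evolution (standing in for GNDO) is followed by SLSQP refinement (standing in for SQP).

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Toy stand-in ODE: u'(t) = -u(t), u(0) = 1 on [0, 2] (exact solution exp(-t)).
t = np.linspace(0.0, 2.0, 30)
H = 5                                              # hidden tanh neurons

def unpack(p):
    return p[:H], p[H:2 * H], p[2 * H:]            # weights w, biases b, output weights v

def fitness(p):
    w, b, v = unpack(p)
    z = np.outer(t, w) + b                         # (30, H)
    N = np.tanh(z) @ v                             # network output N(t)
    dN = (1.0 - np.tanh(z) ** 2) @ (w * v)         # analytic dN/dt
    u = 1.0 + t * N                                # trial solution satisfies u(0) = 1
    du = N + t * dN
    return np.mean((du + u) ** 2)                  # MSE of the ODE residual

bounds = [(-3.0, 3.0)] * (3 * H)
global_best = differential_evolution(fitness, bounds, seed=0, maxiter=200)   # GNDO stand-in
polished = minimize(fitness, global_best.x, method="SLSQP", bounds=bounds)   # SQP refinement
w, b, v = unpack(polished.x)
u = 1.0 + t * (np.tanh(np.outer(t, w) + b) @ v)
print("max abs error vs exp(-t):", f"{np.max(np.abs(u - np.exp(-t))):.1e}")
```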
