Topic Editors

Prof. Dr. Jaroslaw Krzywanski
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, Armii Krajowej 13/15, 42-200 Czestochowa, Poland
Dr. Yunfei Gao
Shanghai Engineering Research Center of Coal Gasification, East China University of Science and Technology, Shanghai 200237, China
Dr. Marcin Sosnowski
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, Armii Krajowej 13/15, 42-200 Czestochowa, Poland
Dr. Karolina Grabowska
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, Armii Krajowej 13/15, 42-200 Czestochowa, Poland
Dr. Dorian Skrobek
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, Armii Krajowej 13/15, 42-200 Czestochowa, Poland
Dr. Ghulam Moeen Uddin
Department of Mechanical Engineering, University of Engineering & Technology, Lahore, Punjab 54890, Pakistan
Dr. Anna Kulakowska
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, Armii Krajowej 13/15, 42-200 Czestochowa, Poland
Dr. Anna Zylka
Division of Advanced Computational Methods, Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, 42-200 Czestochowa, Poland
Dr. Bachil El Fil
Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA

Artificial Intelligence and Computational Methods: Modeling, Simulations and Optimization of Complex Systems

Abstract submission deadline: closed (30 September 2022)
Manuscript submission deadline: 20 October 2023
Viewed by 43049

Topic Information

Dear Colleagues,

Due to the increasing computational capability of current data processing systems, new opportunities emerge in the modeling, simulations, and optimization of complex systems and devices. Methods that are difficult to apply, highly demanding, and time-consuming may now be considered when developing complete and sophisticated models in many areas of science and technology. The combination of computational methods and AI algorithms allows conducting multi-threaded analyses to solve advanced and interdisciplinary problems. This article collection aims to bring together research on advances in modeling, simulations, and optimization issues of complex systems. Original research, as well as review articles and short communications, with a particular focus on (but not limited to) artificial intelligence and other computational methods, are welcomed.

Prof. Dr. Jaroslaw Krzywanski
Dr. Yunfei Gao
Dr. Marcin Sosnowski
Dr. Karolina Grabowska
Dr. Dorian Skrobek
Dr. Ghulam Moeen Uddin
Dr. Anna Kulakowska
Dr. Anna Zylka
Dr. Bachil El Fil
Topic Editors

Keywords

  • artificial intelligence
  • machine learning
  • artificial neural networks
  • deep learning
  • genetic and evolutionary algorithms
  • artificial immune systems
  • fuzzy logic
  • expert systems
  • bio-inspired methods
  • CFD
  • modeling
  • simulation
  • optimization
  • complex systems

Participating Journals

Journal Name                              | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Entropy                                   | 2.738         | 4.4       | 1999          | 19.9 days               | 2000 CHF
Algorithms                                | -             | 3.3       | 2008          | 17.6 days               | 1600 CHF
Computation                               | -             | 3.3       | 2013          | 16.2 days               | 1600 CHF
Machine Learning and Knowledge Extraction | -             | -         | 2019          | 16.7 days               | 1400 CHF
Energies                                  | 3.252         | 5.0       | 2008          | 15.5 days               | 2200 CHF
Materials                                 | 3.748         | 4.7       | 2008          | 13.9 days               | 2300 CHF

Preprints is a platform dedicated to making early versions of research outputs permanently available and citable. MDPI journals allow posting on preprint servers such as Preprints.org prior to publication. For more details about preprints, please visit https://www.preprints.org.

Published Papers (51 papers)

Article
Reviving the Dynamics of Attacked Reservoir Computers
Entropy 2023, 25(3), 515; https://doi.org/10.3390/e25030515 - 16 Mar 2023
Viewed by 321
Abstract
Physically implemented neural networks are subject to external perturbations and internal variations. Existing works focus on adversarial attacks but seldom consider attacks on the network structure and the corresponding recovery methods. Inspired by the biological neural compensation mechanism and the neuromodulation technique in clinical practice, we propose a novel framework for reviving attacked reservoir computers, consisting of several strategies directed at different types of structural attacks that adjust only a minor fraction of edges in the reservoir. Numerical experiments demonstrate the efficacy and broad applicability of the framework and reveal inspiring insights into the mechanisms. This work provides a vehicle to improve the robustness of reservoir computers and can be generalized to broader types of neural networks. Full article
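The abstract above concerns echo state networks whose internal edge structure can be attacked. As a minimal sketch of the setting only (not the paper's recovery method), assuming a standard echo-state update and representing a structural attack as zeroing a random fraction of reservoir edges, with all names hypothetical:

```python
import math, random

rng = random.Random(1)
N = 5  # reservoir size
W = [[rng.gauss(0, 0.3) for _ in range(N)] for _ in range(N)]  # reservoir edge weights
w_in = [rng.gauss(0, 0.5) for _ in range(N)]                   # input weights

def step(state, u, W):
    # Standard echo-state update: x_{t+1} = tanh(W x_t + w_in u_t)
    return [math.tanh(sum(W[i][j] * state[j] for j in range(N)) + w_in[i] * u)
            for i in range(N)]

def attack_edges(W, fraction, rng):
    # Structural attack: zero out a random fraction of reservoir edges.
    Wa = [row[:] for row in W]
    edges = [(i, j) for i in range(N) for j in range(N)]
    for i, j in rng.sample(edges, int(fraction * len(edges))):
        Wa[i][j] = 0.0
    return Wa
```

The paper's framework would then restore the dynamics by re-tuning only a minor fraction of the surviving edges; that repair step is not reproduced here.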

Article
Implicit Solutions of the Electrical Impedance Tomography Inverse Problem in the Continuous Domain with Deep Neural Networks
Entropy 2023, 25(3), 493; https://doi.org/10.3390/e25030493 - 13 Mar 2023
Viewed by 261
Abstract
Electrical impedance tomography (EIT) is a non-invasive imaging modality used for estimating the conductivity of an object Ω from boundary electrode measurements. In recent years, researchers have achieved substantial progress in analytical and numerical methods for the EIT inverse problem. Despite this success, numerical instability remains a major hurdle due to many factors, including the discretization error of the problem. Furthermore, most algorithms with good performance are relatively time-consuming and do not allow real-time applications. In our approach, the goal is to separate the unknown conductivity into two regions, namely the region of homogeneous background conductivity and the region of non-homogeneous conductivity. Therefore, we pose and solve the problem of shape reconstruction using machine learning. We propose a novel and simple yet intriguing neural network architecture capable of solving the EIT inverse problem. It addresses previous difficulties, including instability, and is easily adaptable to other ill-posed coefficient inverse problems. That is, the proposed model estimates, for each point of the continuous space R^d ⊇ Ω with d ∈ {2, 3}, the probability that the conductivity there belongs to the background region or to the non-homogeneous region. The proposed model makes no assumptions about the forward model and allows for solving the inverse problem in real time. The proposed machine learning approach for shape reconstruction is also used to improve gradient-based methods for estimating the unknown conductivity. In this paper, we propose a piece-wise constant reconstruction method that is novel in the inverse problem setting but inspired by recent approaches from the 3D vision community. We also extend this method into a novel constrained reconstruction method. We present extensive numerical experiments to show the performance of the architecture and compare the proposed method with previous analytic algorithms, mainly the monotonicity-based shape reconstruction algorithm and the iteratively regularized Gauss–Newton method. Full article
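The "implicit solution" above is a network that maps a spatial coordinate directly to the probability of non-homogeneous conductivity. The toy sketch below (untrained random weights, purely illustrative of the point-wise input/output contract, not the paper's architecture) shows such a coordinate classifier for d = 2:

```python
import math, random

rng = random.Random(42)

# Hypothetical stand-in for the implicit network: a tiny MLP mapping a 2D
# point (x, y) to the probability that the conductivity there is non-homogeneous.
W1 = [[rng.gauss(0, 1) for _ in range(2)] for _ in range(8)]
b1 = [0.0] * 8
W2 = [rng.gauss(0, 1) for _ in range(8)]
b2 = 0.0

def prob_inclusion(x, y):
    # Hidden layer with tanh activation, then a sigmoid output unit.
    h = [math.tanh(w[0] * x + w[1] * y + b) for w, b in zip(W1, b1)]
    z = sum(wi * hi for wi, hi in zip(W2, h)) + b2
    return 1.0 / (1.0 + math.exp(-z))
```

In the paper, such a network is trained on measurement data; here the weights are random, so the output is only a valid probability, not a meaningful reconstruction.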

Article
Feature Selection Using New Version of V-Shaped Transfer Function for Salp Swarm Algorithm in Sentiment Analysis
Computation 2023, 11(3), 56; https://doi.org/10.3390/computation11030056 - 08 Mar 2023
Viewed by 412
Abstract
(1) Background: Feature selection is the biggest challenge in feature-rich sentiment analysis: selecting the best (relevant) feature set, offering information about the relationships between features (informative), and remaining noise-free in high-dimensional datasets, in order to improve classifier performance. This study aims to propose a binary version of a metaheuristic optimization algorithm based on swarm intelligence, namely the Salp Swarm Algorithm (SSA), for feature selection in sentiment analysis. (2) Methods: Significant feature subsets were selected using the SSA. Transfer functions of various types, of the forms S-TF, V-TF, X-TF, U-TF, and Z-TF, together with a new V-TF type with a simpler mathematical formula, are used as the binarization approach that enables search agents to move in the search space. The stages of the study include data pre-processing, feature selection using SSA-TF and other conventional feature selection methods, modelling using K-Nearest Neighbor (KNN), Support Vector Machine, and Naïve Bayes, and model evaluation. (3) Results: The results showed an increase of 31.55%, to a best accuracy of 80.95%, for the KNN model using the SSA-based new V-TF. (4) Conclusions: We have found that SSA-New V3-TF is the feature selection method with the highest accuracy and the shortest runtime compared to other algorithms in sentiment analysis. Full article
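The abstract does not give the new V-TF formula, so as a hedged illustration the sketch below uses a classic V-shaped transfer function, |tanh(x)|, to show how any V-TF binarizes a continuous salp step into bit flips over a feature-selection mask (helper names are hypothetical):

```python
import math, random

def v_tf(x):
    # Classic V-shaped transfer function: maps a real-valued step to [0, 1).
    return abs(math.tanh(x))

def binarize_step(position_delta, bits, rng=random.Random(0)):
    # Flip each feature-selection bit with probability given by the transfer
    # function applied to the corresponding continuous position change.
    return [b ^ (rng.random() < v_tf(d)) for d, b in zip(position_delta, bits)]

bits = [0, 1, 0, 1]          # current feature mask (1 = feature selected)
deltas = [3.0, -3.0, 0.0, 0.1]  # continuous SSA position update
new_bits = binarize_step(deltas, bits)
```

Large-magnitude steps flip bits with high probability, while a zero step never flips, which is exactly the behavior V-shaped functions are chosen for.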

Article
Remora Optimization Algorithm with Enhanced Randomness for Large-Scale Measurement Field Deployment Technology
Entropy 2023, 25(3), 450; https://doi.org/10.3390/e25030450 - 04 Mar 2023
Viewed by 344
Abstract
In the large-scale measurement field, deployment planning usually relies on Monte Carlo simulation analysis, which has high algorithmic complexity. At the same time, traditional station planning is inefficient and unable to calculate overall accessibility due to the occlusion of tooling. To solve this problem, in this study, we first introduced a Poisson-like randomness strategy and an enhanced randomness strategy to improve the remora optimization algorithm (ROA), yielding the PROA. Its convergence speed and robustness were verified in different dimensions using the CEC benchmark functions: the convergence speed of 67.5–74% of the results is better than that of the ROA, and 66.67–75% of the robustness results are better than those of the ROA. Second, a deployment model was established for the large-scale measurement field to obtain the maximum visible area of the target to be measured. Finally, the PROA was used as the optimizer to solve for optimal deployment planning, and its performance was verified by simulation analysis. In the case of six stations, the maximum visible area of the PROA reaches 83.02%, which is 18.07% higher than that of the ROA. Compared with the traditional method, this model shortens the deployment time and calculates the overall accessibility, which is of practical significance for improving assembly efficiency in large-size measurement field environments. Full article

Review
Introduction of Materials Genome Technology and Its Applications in the Field of Biomedical Materials
Materials 2023, 16(5), 1906; https://doi.org/10.3390/ma16051906 - 25 Feb 2023
Viewed by 412
Abstract
Traditional research and development (R&D) on biomedical materials depends heavily on trial and error, thereby incurring huge economic and time burdens. Most recently, materials genome technology (MGT) has been recognized as an effective approach to addressing this problem. In this paper, the basic concepts involved in MGT are introduced, and the applications of MGT in the R&D of metallic, inorganic non-metallic, polymeric, and composite biomedical materials are summarized. In view of the existing limitations of MGT for the R&D of biomedical materials, potential strategies are proposed for the establishment and management of material databases, the upgrading of high-throughput experimental technology, the construction of data mining prediction platforms, and the training of relevant materials talent. Finally, future trends in MGT for the R&D of biomedical materials are discussed. Full article

Article
Parametric Analysis of Thick FGM Plates Based on 3D Thermo-Elasticity Theory: A Proper Generalized Decomposition Approach
Materials 2023, 16(4), 1753; https://doi.org/10.3390/ma16041753 - 20 Feb 2023
Viewed by 425
Abstract
In the present work, the general and well-known model reduction technique, PGD (Proper Generalized Decomposition), is used for parametric analysis of thermo-elasticity of FGMs (Functionally Graded Materials). The FGMs have important applications in space technologies, especially when a part undergoes an extreme thermal environment. In the present work, material gradation is considered in one, two and three directions, and 3D heat transfer and theory of elasticity equations are solved to have an accurate temperature field and be able to consider all shear deformations. A parametric analysis of FGM materials is especially useful in material design and optimization. In the PGD technique, the field variables are separated to a set of univariate functions, and the high-dimensional governing equations reduce to a set of one-dimensional problems. Due to the curse of dimensionality, solving a high-dimensional parametric problem is considerably more computationally intensive than solving a set of one-dimensional problems. Therefore, the PGD makes it possible to handle high-dimensional problems efficiently. In the present work, some sample examples in 4D and 5D computational spaces are solved, and the results are presented. Full article
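The core PGD idea of separating a multivariate field into a sum of products of univariate functions can be illustrated, under strong simplifying assumptions, by greedy rank-one enrichment of a discrete 2D field with alternating one-dimensional solves. This is a toy analogue of the separated representation only, not the paper's thermo-elastic solver, and all names are hypothetical:

```python
def rank_one_enrich(R, iters=20):
    # Alternating least squares for one separated mode: R[i][j] ~ a[i] * b[j].
    n, m = len(R), len(R[0])
    a = [1.0] * n
    b = [1.0] * m
    for _ in range(iters):
        nb = sum(bj * bj for bj in b)
        a = [sum(R[i][j] * b[j] for j in range(m)) / nb for i in range(n)]
        na = sum(ai * ai for ai in a)
        b = [sum(R[i][j] * a[i] for i in range(n)) / na for j in range(m)]
    return a, b

def pgd_separated(F, modes=3):
    # Greedy enrichment: extract one mode, subtract it, repeat on the residual.
    R = [row[:] for row in F]
    terms = []
    for _ in range(modes):
        a, b = rank_one_enrich(R)
        terms.append((a, b))
        R = [[R[i][j] - a[i] * b[j] for j in range(len(b))] for i in range(len(a))]
    return terms
```

For a separable field F(x, y) = f(x)g(y), a single enrichment mode recovers the field exactly; this mirrors why PGD reduces a high-dimensional problem to a set of one-dimensional ones.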

Article
Quick Estimate of Information Decomposition for Text Style Transfer
Entropy 2023, 25(2), 322; https://doi.org/10.3390/e25020322 - 10 Feb 2023
Viewed by 618
Abstract
A growing number of papers on style transfer for texts rely on information decomposition. The performance of the resulting systems is usually assessed empirically in terms of the output quality or requires laborious experiments. This paper suggests a straightforward information theoretical framework to assess the quality of information decomposition for latent representations in the context of style transfer. Experimenting with several state-of-the-art models, we demonstrate that such estimates could be used as a fast and straightforward health check for the models instead of more laborious empirical experiments. Full article

Review
A Survey on the Application of Machine Learning in Turbulent Flow Simulations
Energies 2023, 16(4), 1755; https://doi.org/10.3390/en16041755 - 09 Feb 2023
Viewed by 577
Abstract
As early as the end of the 19th century, shortly after mathematical rules describing fluid flow—such as the Navier–Stokes equations—were developed, the idea of using them for flow simulations emerged. However, it was soon discovered that the computational requirements of problems such as atmospheric phenomena and engineering calculations made hand computation impractical. The dawn of the computer age marked the beginning of computational fluid mechanics, and its subsequent popularization made computational fluid dynamics one of the common tools used in science and engineering. From the beginning, however, the method has faced a trade-off between accuracy and computational requirements. The purpose of this work is to examine how the results of recent advances in machine learning can be applied to further develop this seemingly plateaued method. The paper reviews examples of applying machine learning to improve various types of computational flow simulations, both by increasing the accuracy of the results obtained and by reducing calculation times, and discusses the effectiveness of the presented methods, their chances of acceptance by industry (including possible obstacles), and potential directions for their development. One can observe an evolution of solutions, from the simple determination of closure coefficients, through more advanced attempts to use machine learning as an alternative to the classical methods of solving the differential equations on which computational fluid dynamics is based, up to turbulence models built solely from neural networks. A continuation of these three trends may lead to at least a partial replacement of Navier–Stokes-based computational fluid dynamics by machine-learning-based solutions. Full article

Article
Predicting Terrestrial Heat Flow in North China Using Multiple Geological and Geophysical Datasets Based on Machine Learning Method
Energies 2023, 16(4), 1620; https://doi.org/10.3390/en16041620 - 06 Feb 2023
Viewed by 347
Abstract
Geothermal heat flow is an essential parameter for the exploration of geothermal energy. The cost is often prohibitive if dense heat flow measurements are arranged in the study area. Regardless, an increase in the limited and sparse heat flow observation points is needed to study the regional geothermal setting. This research is significant in order to provide a new reliable map of terrestrial heat flow for the subsequent development of geothermal resources. The Gradient Boosted Regression Tree (GBRT) prediction model used in this paper is devoted to solving the problem of an insufficient number of heat flow observations in North China. It considers the geological and geophysical information in the region by training the sample data using 12 kinds of geological and geophysical features. Finally, a robust GBRT prediction model was obtained. The performance of the GBRT method was evaluated by comparing it with the kriging interpolation, the minimum curvature interpolation, and the 3D interpolation algorithm through the prediction performance analysis. Based on the GBRT prediction model, a new heat flow map with a resolution of 0.25°×0.25° was proposed, which depicted the terrestrial heat flow distribution in the study area in a more detailed and reasonable way than the interpolation results. The high heat flow values were mostly concentrated in the northeastern boundary of the Tibet Plateau, with a few scattered and small-scale high heat flow areas in the southeastern part of the North China Craton (NCC) adjacent to the Pacific Ocean. The low heat flow values were mainly resolved in the northern part of the Trans-North China Orogenic belt (TNCO) and the southmost part of the NCC. 
By comparing the predicted heat flow map with the plate tectonics, the olivine-Mg#, and the hot spring distribution in North China, we found that the GBRT could obtain a reliable result under the constraint of geological and geophysical information in regions with scarce and unevenly distributed heat flow observations. Full article
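A gradient-boosted regression tree fits each new tree to the residuals of the current ensemble, which is what lets the GBRT above interpolate heat flow from geological and geophysical features. A minimal self-contained sketch with depth-one trees (decision stumps) and hypothetical names, rather than the authors' 12-feature model, is:

```python
def fit_stump(X, y):
    # Find the single-feature threshold split minimizing squared error.
    best = None
    for f in range(len(X[0])):
        for t in sorted(set(row[f] for row in X)):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            err = (sum((yi - lm) ** 2 for yi in left)
                   + sum((yi - rm) ** 2 for yi in right))
            if best is None or err < best[0]:
                best = (err, f, t, lm, rm)
    _, f, t, lm, rm = best
    return lambda row: lm if row[f] <= t else rm

def gbrt_fit(X, y, n_trees=50, lr=0.3):
    # Boosting loop: each stump is fit to the residuals of the ensemble so far.
    base = sum(y) / len(y)
    trees = []
    pred = [base] * len(y)
    for _ in range(n_trees):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(X, resid)
        trees.append(stump)
        pred = [pi + lr * stump(row) for pi, row in zip(pred, X)]
    return lambda row: base + lr * sum(t(row) for t in trees)
```

In practice one would use a library implementation with deeper trees, shrinkage tuning, and early stopping; the sketch only shows the residual-fitting mechanism.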

Article
Mobile Application for Tomato Plant Leaf Disease Detection Using a Dense Convolutional Network Architecture
Computation 2023, 11(2), 20; https://doi.org/10.3390/computation11020020 - 31 Jan 2023
Viewed by 456
Abstract
In Indonesia, tomato is one of the horticultural products with the highest economic value. To maintain enhanced tomato plant production, it is necessary to monitor the growth of tomato plants, particularly the leaves. The quality and quantity of tomato plant production can be preserved with the aid of computer technology, which can identify diseases in tomato plant leaves. In this study, a deep learning algorithm with a DenseNet architecture was implemented. Multiple hyperparameter tests were conducted to determine the optimal model. The optimal model was constructed using two hidden layers, a DenseNet trainable layer on dense block 5, and a dropout rate of 0.4. The 10-fold cross-validation evaluation of the model yielded an accuracy of 95.7 percent and an F1-score of 95.4 percent. To recognize tomato plant leaves, the model with the best assessment results was implemented in a mobile application. Full article

Article
Dynamic Multi-Objective Optimization in Brazier-Type Gasification and Carbonization Furnace
Materials 2023, 16(3), 1164; https://doi.org/10.3390/ma16031164 - 30 Jan 2023
Viewed by 489
Abstract
With its special porous structure and long-lasting carbon sequestration characteristic, biochar has shown potential for improving soil fertility, reducing carbon emissions, and increasing soil carbon sequestration. However, biochar technology has not been applied on a large scale due to its complex structure, long raw-material transportation distances, and high cost. To overcome these issues, the brazier-type gasification and carbonization furnace is designed to carry out dry distillation and anaerobic carbonization and to achieve a high carbonization rate under high-temperature conditions. To improve operation and maintenance efficiency, we formulate the operation of the brazier-type gasification and carbonization furnace as a dynamic multi-objective optimization problem (DMOP). Firstly, we analyze the dynamic factors in the working process of the brazier-type gasification and carbonization furnace, such as the equipment capacity, the operating conditions, and the biomass treated by the furnace. Afterward, we select the biochar yield and carbon monoxide emission as the dynamic objectives and model the DMOP. Finally, we apply three dynamic multi-objective evolutionary algorithms to solve the optimization problem so as to verify the effectiveness of the dynamic optimization approach for the gasification and carbonization furnace. Full article

Article
Optimizing Automated Trading Systems with Deep Reinforcement Learning
Algorithms 2023, 16(1), 23; https://doi.org/10.3390/a16010023 - 01 Jan 2023
Cited by 1 | Viewed by 1109
Abstract
In this paper, we propose a novel approach to optimizing parameters for strategies in automated trading systems. Based on the framework of reinforcement learning, our work includes the development of a learning environment, state representation, reward function, and learning algorithm for the cryptocurrency market. Considering two simple objective functions, cumulative return and Sharpe ratio, the results showed that the deep reinforcement learning approach with a Double Deep Q-Network setting and the Bayesian optimization approach can both provide positive average returns. Among the settings studied, the Double Deep Q-Network setting with the Sharpe ratio as the reward function is the best Q-learning trading system. With a daily trading goal, the system outperforms the Bayesian optimization approach in terms of cumulative return, volatility, and execution time, helping traders make quick and efficient decisions with the latest information from the market. In long-term trading, Bayesian optimization is the parameter optimization method that brings higher profits. Deep reinforcement learning offers solutions to the high-dimensionality problem of Bayesian optimization in upcoming studies, such as optimizing portfolios with multiple assets and diverse trading strategies. Full article
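Using the Sharpe ratio as the reward function means the agent is scored on risk-adjusted rather than raw return. A minimal sketch of such a reward over a window of per-step returns (hypothetical names, not the authors' exact environment):

```python
import math

def sharpe_ratio(returns, risk_free=0.0):
    # Mean excess return divided by return volatility; usable as an RL reward.
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / n
    std = math.sqrt(var)
    if std == 0:
        return 0.0  # constant returns: no volatility, define reward as 0
    return (mean - risk_free) / std
```

Compared with a cumulative-return reward, this penalizes volatile strategies even when their average return is the same, which matches the trade-off the abstract reports between the two objective functions.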

Article
Improved Anomaly Detection by Using the Attention-Based Isolation Forest
Algorithms 2023, 16(1), 19; https://doi.org/10.3390/a16010019 - 28 Dec 2022
Viewed by 1014
Abstract
A new modification of the isolation forest called the attention-based isolation forest (ABIForest) is proposed for solving the anomaly detection problem. It incorporates an attention mechanism in the form of Nadaraya–Watson regression into the isolation forest to improve the solution of the anomaly detection problem. The main idea underlying the modification is the assignment of attention weights to each path of trees with learnable parameters depending on the instances and trees themselves. Huber’s contamination model is proposed to be used to define the attention weights and their parameters. As a result, the attention weights are linearly dependent on learnable attention parameters that are trained by solving a standard linear or quadratic optimization problem. ABIForest can be viewed as the first modification of the isolation forest to incorporate an attention mechanism in a simple way without applying gradient-based algorithms. Numerical experiments with synthetic and real datasets illustrate that the results of ABIForest outperform those of other methods. The code of the proposed algorithms has been made available. Full article
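The attention mechanism referenced above is Nadaraya–Watson regression, in which a prediction is a kernel-weighted average of training targets. A minimal one-dimensional sketch with a Gaussian kernel, illustrating the weighting scheme only and not the isolation-forest integration or the learnable Huber-contamination parameters:

```python
import math

def nadaraya_watson(x_query, xs, ys, tau=1.0):
    # Attention scores from a Gaussian kernel; softmax-normalize and average.
    scores = [-((x_query - xi) ** 2) / (2 * tau ** 2) for xi in xs]
    m = max(scores)                      # subtract max for numerical stability
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    return sum(wi * yi for wi, yi in zip(w, ys)) / z
```

The bandwidth tau plays the role of an attention temperature: small tau concentrates the weight on the nearest training point, large tau averages broadly.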

Article
Forecasting for Chaotic Time Series Based on GRP-lstmGAN Model: Application to Temperature Series of Rotary Kiln
Entropy 2023, 25(1), 52; https://doi.org/10.3390/e25010052 - 27 Dec 2022
Viewed by 609
Abstract
Rotary kiln temperature forecasting plays a significant part in the automatic control of the sintering process. However, accurate forecasts are difficult owing to the complex nonlinear characteristics of rotary kiln temperature time series. With the development of chaos theory, prediction accuracy can be improved by analyzing the essential characteristics of the time series. However, existing prediction methods for chaotic time series cannot fully consider the local and global characteristics of the series at the same time. Therefore, in this study, we propose a method combining a global recurrence plot (GRP)-based generative adversarial network (GAN) with long short-term memory (LSTM), named GRP-lstmGAN, which can effectively display important information about time scales. First, the data are subjected to a series of pre-processing operations, including data smoothing. Then, transforming the one-dimensional time series into two-dimensional images by GRP makes full use of the global and local information of the time series. Finally, the combination of LSTM and an improved GAN model is used for temperature time series prediction. The experimental results show that our model outperforms the comparison models. Full article
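A recurrence plot turns a one-dimensional series into a binary two-dimensional image by thresholding pairwise distances between states, which is the time-series-to-image step the abstract describes. A minimal sketch (the paper's specific "global" construction and any unthresholded variants are not reproduced):

```python
def recurrence_plot(series, eps):
    # R[i][j] = 1 when states at times i and j are within eps of each other.
    n = len(series)
    return [[1 if abs(series[i] - series[j]) <= eps else 0 for j in range(n)]
            for i in range(n)]
```

The resulting matrix is symmetric with an all-ones diagonal; recurring patterns in the series appear as texture in the image, which is what makes it a useful input for image-based models such as GANs.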

Article
Cluster-Based Structural Redundancy Identification for Neural Network Compression
Entropy 2023, 25(1), 9; https://doi.org/10.3390/e25010009 - 21 Dec 2022
Viewed by 610
Abstract
The increasingly large structure of neural networks makes them difficult to deploy on edge devices with limited computing resources. Network pruning has become one of the most successful model compression methods in recent years. Existing works typically compress models based on importance, removing unimportant filters. This paper reconsiders model pruning from the perspective of structural redundancy, claiming that identifying functionally similar filters plays a more important role, and proposes a model pruning framework based on clustering-based redundancy identification. First, we perform cluster analysis on the filters of each layer to generate similar sets with different functions. We then propose a criterion for identifying redundant filters within similar sets. Finally, we propose a pruning scheme that automatically determines the pruning rate of each layer. Extensive experiments on various benchmark network architectures and datasets demonstrate the effectiveness of our proposed framework. Full article
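One simple way to operationalize "functionally similar filters" is cosine similarity between flattened filter weights, with pairs above a threshold flagged as pruning candidates. This is a hedged sketch of the general idea under that assumption, not the authors' clustering criterion:

```python
import math

def cosine(u, v):
    # Cosine similarity between two flattened weight vectors.
    num = sum(a * b for a, b in zip(u, v))
    return num / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def redundant_pairs(filters, threshold=0.95):
    # Flag filter pairs whose weights point in nearly the same direction:
    # such filters compute nearly proportional responses and are candidates
    # for removal within a similar set.
    pairs = []
    for i in range(len(filters)):
        for j in range(i + 1, len(filters)):
            if cosine(filters[i], filters[j]) >= threshold:
                pairs.append((i, j))
    return pairs
```

A full pipeline would cluster the filters first and only compare within clusters, then choose which member of each redundant pair to keep.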

Article
A Dual-Population-Based NSGA-III for Constrained Many-Objective Optimization
Entropy 2023, 25(1), 13; https://doi.org/10.3390/e25010013 - 21 Dec 2022
Viewed by 510
Abstract
The main challenge of constrained many-objective optimization problems (CMaOPs) is how to achieve a balance between feasible and infeasible solutions. Most existing constrained many-objective evolutionary algorithms (CMaOEAs) are feasibility-driven, neglecting the maintenance of population convergence and diversity when dealing with conflicting objectives and constraints. This can leave the population stuck in locally optimal or locally feasible regions. To alleviate these challenges, we propose a dual-population-based NSGA-III, named DP-NSGA-III, in which the two populations exchange information through their offspring. The main population, based on NSGA-III, solves the CMaOP, while the auxiliary population, which uses a different environmental selection, ignores the constraints. In addition, we design an ε-constraint handling method combined with NSGA-III, aiming to exploit excellent infeasible solutions in the main population. The proposed DP-NSGA-III is compared with four state-of-the-art CMaOEAs on a series of benchmark problems. The experimental results show that the proposed evolutionary algorithm is highly competitive in solving CMaOPs. Full article
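An ε-constraint handling method treats solutions whose constraint violation is below ε as feasible and only then compares objective values, which is how slightly infeasible but high-quality solutions are exploited. A minimal pairwise comparator sketch (hypothetical names; the integration with NSGA-III non-dominated sorting is not shown):

```python
def eps_better(sol_a, sol_b, eps):
    # sol = (objective, constraint_violation), minimizing the objective.
    fa, va = sol_a
    fb, vb = sol_b
    if va <= eps and vb <= eps:
        return fa < fb      # both "feasible enough": compare objectives
    if va <= eps or vb <= eps:
        return va <= eps    # only one within the epsilon band: it wins
    return va < vb          # both infeasible: smaller violation wins
```

Shrinking eps over generations recovers ordinary feasibility-driven selection, so the parameter controls how long promising infeasible solutions survive.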

Article
Initial Solution Generation and Diversified Variable Picking in Local Search for (Weighted) Partial MaxSAT
Entropy 2022, 24(12), 1846; https://doi.org/10.3390/e24121846 - 18 Dec 2022
Viewed by 549
Abstract
The (weighted) partial maximum satisfiability ((W)PMS) problem is an important generalization of the classic problem of propositional (Boolean) satisfiability with a wide range of real-world applications. In this paper, we propose an initialization and a diversification strategy to improve local search for the (W)PMS problem. Our initialization strategy is based on a novel definition of variables’ structural entropy, and it aims to generate a solution that is close to a high-quality feasible one. Our diversification strategy then picks a variable in one of two ways, depending on a parameter: continuing to pick variables with the best benefits, or focusing on a clause with the greatest penalty and then selecting variables probabilistically. Based on these strategies, we developed a local search solver dubbed ImSATLike, as well as a hybrid solver, ImSATLike-TT. Experimental results on (weighted) partial MaxSAT instances from recent MaxSAT Evaluations show that they generally outperform or match state-of-the-art local search and hybrid competitors, respectively. Furthermore, we carried out experiments to confirm the individual impact of each proposed strategy. Full article
Article
Advanced Spatial and Technological Aggregation Scheme for Energy System Models
Energies 2022, 15(24), 9517; https://doi.org/10.3390/en15249517 - 15 Dec 2022
Viewed by 499
Abstract
Energy system models that consider variable renewable energy sources (VRESs) are computationally complex. The greater spatial scope and level of detail entailed in the models exacerbates complexity. As a complexity-reduction approach, this paper considers the simultaneous spatial and technological aggregation of energy system models. To that end, a novel two-step aggregation scheme is introduced. First, model regions are spatially aggregated to obtain a reduced region set. The aggregation is based on model parameters such as VRES time series, capacities, etc. In addition, spatial contiguity of regions is considered. Next, technological aggregation is performed on each VRES, in each region, based on their time series. The aggregations’ impact on accuracy and complexity of a cost-optimal, European energy system model is analyzed. The model is aggregated to obtain different combinations of numbers of regions and VRES types. Results are benchmarked against an initial resolution of 96 regions, with 68 VRES types in each. System cost deviates significantly when lower numbers of regions and/or VRES types are considered. As spatial and technological resolutions increase, the cost fluctuates initially and stabilizes eventually, approaching the benchmark. Optimal combination is determined based on an acceptable cost deviation of <5% and the point of stabilization. A total of 33 regions with 38 VRES types in each is deemed optimal. Here, the cost is underestimated by 4.42%, but the run time is reduced by 92.95%. Full article
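The technological-aggregation step can be illustrated with a toy sketch: VRES time series are merged pairwise by highest correlation until the target number of groups remains, and merged profiles are summed as capacity-weighted aggregates. The greedy pairwise rule and summation are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def aggregate_vres(ts, n_groups):
    """Greedy technological aggregation of VRES time series (sketch).

    ts: array of shape (n_types, T), one generation profile per VRES type.
    Repeatedly merges the most-correlated pair of profiles until only
    n_groups remain; returns the member groups and the summed profiles.
    """
    groups = [[i] for i in range(len(ts))]
    profiles = [ts[i].astype(float) for i in range(len(ts))]
    while len(groups) > n_groups:
        best, pair = -2.0, None
        for a in range(len(profiles)):
            for b in range(a + 1, len(profiles)):
                c = np.corrcoef(profiles[a], profiles[b])[0, 1]
                if c > best:
                    best, pair = c, (a, b)
        a, b = pair
        groups[a] += groups.pop(b)
        profiles[a] = profiles[a] + profiles.pop(b)   # summed profile
    return groups, profiles
```

The spatial step would work analogously on region-level parameters, with an extra contiguity check before allowing a merge.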

Article
Curriculum Reinforcement Learning Based on K-Fold Cross Validation
Entropy 2022, 24(12), 1787; https://doi.org/10.3390/e24121787 - 06 Dec 2022
Viewed by 814
Abstract
With the continuous development of deep reinforcement learning in intelligent control, combining automatic curriculum learning and deep reinforcement learning can improve the training performance and efficiency of algorithms by proceeding from easy to difficult. Most existing automatic curriculum learning algorithms perform curriculum ranking through expert experience and a single network, which makes curriculum task ranking difficult and convergence slow. In this paper, we propose a curriculum reinforcement learning method based on K-fold cross validation that can estimate the relative difficulty score of curriculum tasks. Drawing on the human concept of learning from easy to difficult, this method divides automatic curriculum learning into a curriculum difficulty assessment stage and a curriculum sorting stage. Through parallel training of the teacher model and cross-evaluation of task sample difficulty, the method can better sequence curriculum learning tasks. Finally, comparative simulation experiments were carried out in two types of multi-agent experimental environments. The experimental results show that the automatic curriculum learning method based on K-fold cross validation can improve the training speed of the MADDPG algorithm and, at the same time, has a certain generality for multi-agent deep reinforcement learning algorithms based on the replay buffer mechanism. Full article
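The assessment-then-sorting pipeline can be sketched generically. This is a hypothetical illustration: `evaluate(task, fold)` stands in for the score a teacher model trained on the other folds assigns to a task, and averaging over folds yields a relative difficulty score used to order the curriculum from easy to hard.

```python
def rank_curriculum(tasks, evaluate, k=5):
    """Rank curriculum tasks from easy to difficult via K-fold cross
    evaluation (illustrative sketch, not the paper's exact procedure).

    evaluate(task, fold) -> success score from the teacher model trained
    on the remaining folds; higher mean score means an easier task.
    """
    score = {t: sum(evaluate(t, f) for f in range(k)) / k for t in tasks}
    # Easy tasks (high mean score) come first, mirroring easy-to-difficult CL.
    return sorted(tasks, key=lambda t: -score[t])
```

The sorted task list would then feed the learner's replay-buffer-based training loop stage by stage.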

Article
Applications of Virtual Machine Using Multi-Objective Optimization Scheduling Algorithm for Improving CPU Utilization and Energy Efficiency in Cloud Computing
Energies 2022, 15(23), 9164; https://doi.org/10.3390/en15239164 - 02 Dec 2022
Cited by 1 | Viewed by 529
Abstract
Financial costs and energy savings are especially critical for computationally intensive workflows, as such workflows generally require extended execution times and thus consume considerable energy and entail high financial costs. Through the effective utilization of scheduled gaps, the total execution time of a workflow can be decreased by placing uncompleted tasks in the gaps through approximate computations. In the current research, a novel approach based on multi-objective optimization is utilized, with CloudSim as the underlying simulator, to evaluate VM (virtual machine) allocation performance. In this study, we determine the energy consumption, CPU utilization, and number of executed instructions in each scheduling interval for complex VM scheduling solutions to improve energy efficiency and reduce execution time. Finally, based on the simulation results and analyses, all of the tested parameters are simulated and evaluated with proper validation in CloudSim. Based on the results, multi-objective PSO (particle swarm optimization) achieves better and more efficient results across the different parameters than multi-objective GA (genetic algorithm) optimization. Full article
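A minimal PSO loop illustrates the optimizer side of such a scheduler. This is a generic sketch, not the paper's CloudSim setup: in a VM-scheduling setting the `fitness` function would scalarize objectives such as energy consumption and execution time; here it is any function of a real-valued position vector, and the inertia/acceleration coefficients are conventional assumed values.

```python
import random

def pso(fitness, dim, bounds, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm optimization minimizing `fitness` (sketch)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive + social velocity update.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

For a multi-objective variant, the scalar weights themselves become tuning parameters, or a Pareto archive replaces the single global best.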

Article
Improved Black Widow Spider Optimization Algorithm Integrating Multiple Strategies
Entropy 2022, 24(11), 1640; https://doi.org/10.3390/e24111640 - 11 Nov 2022
Cited by 1 | Viewed by 656
Abstract
The black widow spider optimization algorithm (BWOA) suffers from slow convergence and a tendency to fall into local optima. To address these problems, this paper proposes a multi-strategy black widow spider optimization algorithm (IBWOA). First, Gauss chaotic mapping is introduced to initialize the population, ensuring the diversity of the algorithm at the initial stage. Then, the sine cosine strategy is introduced to perturb individuals during iteration, improving the global search ability of the algorithm. In addition, an elite opposition-based learning strategy is introduced to improve the convergence speed of the algorithm. Finally, the mutation method of the differential evolution algorithm is integrated to reorganize individuals with poor fitness values. Analysis of the optimization results on 13 benchmark test functions and a subset of the CEC2017 test functions verifies the effectiveness and rationality of each improved strategy, and shows that the proposed algorithm achieves significant improvements in solution accuracy, performance, and convergence speed compared with other algorithms. Furthermore, the IBWOA algorithm is used to solve six practical constrained engineering problems. The results show that the IBWOA has excellent optimization ability and scalability. Full article
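Chaotic-map initialization is easy to sketch. The form below uses the Gauss (Gauss–Kuzmin) map `x ← (1/x) mod 1`, one common choice in the metaheuristics literature; whether the paper uses this exact form, and the seed value `x0`, are assumptions for illustration.

```python
def gauss_chaotic_population(n, dim, lb, ub, x0=0.7):
    """Initialize an n x dim population via Gauss chaotic mapping (sketch).

    The chaotic sequence in [0, 1) is rescaled into [lb, ub], spreading
    initial individuals more evenly than plain pseudo-random sampling.
    """
    pop, x = [], x0
    for _ in range(n):
        row = []
        for _ in range(dim):
            x = 0.0 if x == 0 else (1.0 / x) % 1.0   # Gauss map iteration
            row.append(lb + x * (ub - lb))
        pop.append(row)
    return pop
```

The sine cosine perturbation, opposition-based learning, and DE mutation steps would then operate on this population inside the main IBWOA loop.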

Article
An HGA-LSTM-Based Intelligent Model for Ore Pulp Density in the Hydrometallurgical Process
Materials 2022, 15(21), 7586; https://doi.org/10.3390/ma15217586 - 28 Oct 2022
Viewed by 454
Abstract
This study focused on an intelligent model for ore pulp density in the hydrometallurgical process. Owing to the limitations of existing instruments and devices, the feed ore pulp density of the thickener, a key piece of hydrometallurgical equipment, cannot be accurately measured online. Therefore, aiming at the problem of accurately measuring the feed ore pulp density, we propose a new intelligent model based on long short-term memory (LSTM) and a hybrid genetic algorithm (HGA). Specifically, the HGA is a novel optimization search algorithm that can optimize the hyperparameters and improve the modeling performance of the LSTM. Finally, the proposed intelligent model was successfully applied to an actual thickener case in China. The prediction results demonstrate that the hybrid model outperforms other models and satisfies the factory’s measurement accuracy requirements. Full article

Article
Research on Joint Resource Allocation for Multibeam Satellite Based on Metaheuristic Algorithms
Entropy 2022, 24(11), 1536; https://doi.org/10.3390/e24111536 - 26 Oct 2022
Viewed by 578
Abstract
With the rapid growth of satellite communication demand and the continuous development of high-throughput satellite systems, the satellite resource allocation problem—also called the dynamic resources management (DRM) problem—has become increasingly complex in recent years. The use of metaheuristic algorithms to obtain acceptable optimal solutions has become a hot topic in research and has the potential to be explored further. In particular, the treatment of invalid solutions is the key to algorithm performance. At present, the unused bandwidth allocation (UBA) method is commonly used to address the bandwidth constraint in the DRM problem. However, this method reduces the algorithm’s flexibility in the solution space, diminishes the quality of the optimized solution, and increases the computational complexity. In this paper, we propose a bandwidth constraint handling approach based on the non-dominated beam coding (NDBC) method, which can eliminate the bandwidth overlap constraint in the algorithm’s population evolution and achieve complete bandwidth flexibility in order to increase the quality of the optimal solution while decreasing the computational complexity. We develop a generic application architecture for metaheuristic algorithms using the NDBC method and successfully apply it to four typical algorithms. The results indicate that NDBC can enhance the quality of the optimized solution by 9–33% while simultaneously reducing computational complexity by 9–21%. Full article

Article
Model NOx, SO2 Emissions Concentration and Thermal Efficiency of CFBB Based on a Hyper-Parameter Self-Optimized Broad Learning System
Energies 2022, 15(20), 7700; https://doi.org/10.3390/en15207700 - 18 Oct 2022
Viewed by 563
Abstract
At present, establishing a multidimensional characteristic model of a boiler combustion system plays an important role in realizing its dynamic optimization and real-time control, so as to reduce environmental pollution and save coal resources. However, the complexity of the boiler combustion process makes it difficult to model using traditional mathematical methods. In this paper, a hyper-parameter self-optimizing broad learning system based on a sparrow search algorithm is proposed to model the NOx and SO2 emission concentrations and thermal efficiency of a circulating fluidized bed boiler (CFBB). The broad learning system (BLS) is a novel neural network algorithm that shows good performance in multidimensional feature learning. However, the BLS has several hyper-parameters to be set over wide ranges, making the optimal combination of hyper-parameters difficult to determine. This paper uses a sparrow search algorithm (SSA) to select the optimal hyper-parameter combination of the broad learning system, termed SSA-BLS. To verify the effectiveness of SSA-BLS, ten benchmark regression datasets are applied. Experimental results show that SSA-BLS achieves good regression accuracy and model stability. Additionally, the proposed SSA-BLS is applied to model the combustion process parameters of a 330 MW circulating fluidized bed boiler. Experimental results reveal that SSA-BLS can establish accurate prediction models for thermal efficiency, NOx emission concentration, and SO2 emission concentration, separately. Altogether, SSA-BLS is an effective modelling method. Full article

Article
A Pattern-Recognizer Artificial Neural Network for the Prediction of New Crescent Visibility in Iraq
Computation 2022, 10(10), 186; https://doi.org/10.3390/computation10100186 - 13 Oct 2022
Viewed by 901
Abstract
Various theories have been proposed since the last century to predict the first sighting of a new crescent moon. None of them uses machine or deep learning to process, interpret, and simulate patterns hidden in databases. Many of these theories use interpolation and extrapolation techniques to identify sighting regions through such data. In this study, a pattern-recognizer artificial neural network was trained to distinguish between visibility regions. Essential parameters of crescent moon sighting were collected from moon-sighting datasets and used to build an intelligent pattern recognition system to predict crescent sighting conditions. The proposed ANN learned the datasets with an accuracy of more than 72% compared with the actual observational results. The ANN simulation gives clear insight into three crescent moon visibility regions: invisible (I), probably visible (P), and certainly visible (V). The proposed ANN is suitable for building lunar calendars, so it was used to build a four-year calendar on the horizon of Baghdad. The resulting calendar was compared with the official Hijri calendar in Iraq. Full article

Article
Shear Strength Prediction Model for RC Exterior Joints Using Gene Expression Programming
Materials 2022, 15(20), 7076; https://doi.org/10.3390/ma15207076 - 12 Oct 2022
Viewed by 614
Abstract
Predictive models were developed to effectively estimate the RC exterior joint’s shear strength using gene expression programming (GEP). Two separate models are proposed for the exterior joints: the first with shear reinforcement and the second without shear reinforcement. Experimental results of the relevant input parameters using 253 tests were extracted from the literature to carry out a knowledge analysis of GEP. The database was further divided into two portions: 152 exterior joint experiments with joint transverse reinforcements and 101 unreinforced joint specimens. Moreover, the effects of different material and geometric factors (usually ignored in the available models) were incorporated into the proposed models. These factors are beam and column geometries, concrete and steel material properties, longitudinal and shear reinforcements, and column axial loads. Statistical analysis and comparisons with previously proposed analytical and empirical models indicate a high degree of accuracy of the proposed models, rendering them ideal for practical application. Full article

Article
Analysis of Vulnerability on Weighted Power Networks under Line Breakdowns
Entropy 2022, 24(10), 1449; https://doi.org/10.3390/e24101449 - 11 Oct 2022
Cited by 1 | Viewed by 581
Abstract
Vulnerability is a major concern for power networks. Malicious attacks have the potential to trigger cascading failures and large blackouts. The robustness of power networks against line failure has been of interest in the past several years. However, this scenario cannot cover weighted situations in the real world. This paper investigates the vulnerability of weighted power networks. Firstly, we propose a more practical capacity model to investigate the cascading failure of weighted power networks under different attack strategies. Results show that a smaller threshold of the capacity parameter increases the vulnerability of weighted power networks. Furthermore, a weighted electrical cyber-physical interdependent network is developed to study the vulnerability and failure dynamics of the entire power network. We perform simulations on the IEEE 118-bus case to evaluate the vulnerability under various coupling schemes and different attack strategies. Simulation results show that heavier loads increase the likelihood of blackouts and that different coupling strategies play a crucial role in the cascading failure performance. Full article
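A capacity-model cascade can be sketched in the spirit of the classic Motter–Lai formulation, where each element carries load L_i and capacity (1 + α)L_i. The uniform load redistribution below is a simplifying assumption for illustration; the paper's weighted model redistributes along network lines.

```python
def simulate_cascade(loads, alpha, attacked):
    """Cascading failure under a tolerance-parameter capacity model (sketch).

    loads: initial load of each element; capacity_i = (1 + alpha) * load_i.
    The attacked element fails first; shed load is spread uniformly over
    survivors, and any survivor exceeding its capacity fails in turn.
    Returns the set of failed element indices.
    """
    capacity = {i: (1 + alpha) * l for i, l in enumerate(loads)}
    load = {i: l for i, l in enumerate(loads)}
    failed, frontier = set(), {attacked}
    while frontier:
        shed = sum(load.pop(i) for i in frontier)
        failed |= frontier
        if not load:
            break
        extra = shed / len(load)              # uniform redistribution
        for i in load:
            load[i] += extra
        frontier = {i for i in load if load[i] > capacity[i]}
    return failed
```

The toy run below shows the threshold effect the abstract describes: a larger α (more spare capacity) confines the failure, a smaller α lets it cascade network-wide.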

Article
An Optimized Black-Box Adversarial Simulator Attack Based on Meta-Learning
Entropy 2022, 24(10), 1377; https://doi.org/10.3390/e24101377 - 27 Sep 2022
Cited by 1 | Viewed by 661
Abstract
Much research on adversarial attacks has proved that deep neural networks have certain security vulnerabilities. Among potential attacks, black-box adversarial attacks are considered the most realistic, given the inherently hidden nature of deep neural networks. Such attacks have become a critical academic emphasis in the current security field. However, current black-box attack methods still have shortcomings, resulting in incomplete utilization of query information. Our research, based on the newly proposed Simulator Attack, proves for the first time the correctness and usability of feature-layer information in a simulator model obtained by meta-learning. We then propose an optimized Simulator Attack+ based on this discovery. The optimization methods used in Simulator Attack+ include: (1) a feature attentional boosting module that uses the feature-layer information of the simulator to enhance the attack and accelerate the generation of adversarial examples; (2) a linear self-adaptive simulator-predict interval mechanism that allows the simulator model to be fully fine-tuned in the early stage of the attack and dynamically adjusts the interval for querying the black-box model; and (3) an unsupervised clustering module to provide a warm start for targeted attacks. Results from experiments on the CIFAR-10 and CIFAR-100 datasets clearly show that Simulator Attack+ can further reduce the number of queries consumed, improving query efficiency while maintaining attack performance. Full article

Article
Dynamic Programming BN Structure Learning Algorithm Integrating Double Constraints under Small Sample Condition
Entropy 2022, 24(10), 1354; https://doi.org/10.3390/e24101354 - 24 Sep 2022
Viewed by 611
Abstract
The Bayesian network (BN) structure learning algorithm based on dynamic programming can obtain globally optimal solutions. However, when the sample cannot fully capture the information of the real structure, especially when the sample size is small, the obtained structure is inaccurate. Therefore, this paper studies the planning mode and connotation of dynamic programming, restricts its process with edge and path constraints, and proposes a dynamic programming BN structure learning algorithm with double constraints for small sample conditions. The algorithm uses the double constraints to limit the planning process of dynamic programming and reduce the planning space. It then uses the double constraints to restrict the selection of the optimal parent node, ensuring that the optimal structure conforms to prior knowledge. Finally, the method integrating prior knowledge and the method without it are simulated and compared. The simulation results verify the effectiveness of the proposed method and show that integrating prior knowledge can significantly improve the efficiency and accuracy of BN structure learning. Full article

Review
Optimization-Based High-Frequency Circuit Miniaturization through Implicit and Explicit Constraint Handling: Recent Advances
Energies 2022, 15(19), 6955; https://doi.org/10.3390/en15196955 - 22 Sep 2022
Viewed by 521
Abstract
Miniaturization trends in high-frequency electronics have led to accommodation challenges in the integration of the corresponding components, and size reduction has become a practical necessity. At the same time, the increasing performance demands imposed on electronic systems remain in conflict with component miniaturization. On the practical side, the challenges related to handling design constraints are aggravated by the high cost of system evaluation, which normally requires full-wave electromagnetic (EM) analysis. Some of these issues can be alleviated by implicit constraint handling using the penalty function approach. Yet, its performance depends on the arrangement of the penalty factors, necessitating a costly trial-and-error procedure to identify their optimum setup. A workaround is offered by recently proposed algorithms that automatically adapt the penalty factors using different adjustment schemes. However, these intricate strategies require continuous, problem-dependent adaptation of the penalty function throughout the entire optimization process. Alternative methodologies take an explicit approach to handling the inequality constraints, along with correction-based control over equality conditions, a combination that proves demonstrably competitive for some miniaturization tasks. Nevertheless, optimization-based miniaturization, whether using implicit or explicit constraint handling, remains a computationally expensive task. A reliable way of reducing these costs is the incorporation of multi-resolution EM fidelity models into the miniaturization procedure, whose principal operation is based on the simultaneous monitoring of factors such as the quality of constraint satisfaction and the algorithm's convergence status. This paper provides an overview of the abovementioned size-reduction algorithms, in which theoretical considerations are illustrated using a number of antenna and microwave circuit case studies. Full article

Article
Sensor Fusion for Occupancy Estimation: A Study Using Multiple Lecture Rooms in a Complex Building
Mach. Learn. Knowl. Extr. 2022, 4(3), 803-813; https://doi.org/10.3390/make4030039 - 16 Sep 2022
Viewed by 1084
Abstract
This paper uses various machine learning methods to explore how combining multiple sensors can improve the quality of occupancy estimation. A reliable occupancy estimate can help in many different cases and applications; for the containment of the SARS-CoV-2 virus in particular, room occupancy is a major factor. The estimation can benefit visitor management systems in real time, but can also inform room reservation strategies. Using different terminal and non-terminal sensors in different premises of varying sizes, this paper aims to estimate room occupancy. In the process, the proposed models are trained with different combinations of rooms in the training and testing datasets to examine distinctions in the infrastructure of the considered building. The results indicate that the estimation benefits from a combination of different sensors. Additionally, it is found that a model should be trained with data from every room in a building and cannot be transferred to other rooms. Full article

Article
A Period-Based Neural Network Algorithm for Predicting Building Energy Consumption of District Heating
Energies 2022, 15(17), 6338; https://doi.org/10.3390/en15176338 - 30 Aug 2022
Viewed by 694
Abstract
Northern China is vigorously promoting cogeneration and clean heating technologies. The accurate prediction of building energy consumption is the basis for heating regulation. In this paper, the daily, weekly, and annual periods of building energy consumption are determined by Fourier transformation. Accordingly, a period-based neural network (PBNN) is proposed to predict building energy consumption. The main innovation of PBNN is the introduction of a new data structure, which is a time-discontinuous sliding window. The sliding window consists of the past 24 h, 24 h for the same period last week, and 24 h for the same period the previous year. When predicting the building energy consumption for the next 1 h, 12 h, and 24 h, the prediction errors of the PBNN are 2.30%, 3.47%, and 3.66% lower than those of the traditional sliding window PBNN (TSW-PBNN), respectively. The training time of PBNN is approximately half that of TSW-PBNN. The time-discontinuous sliding window reduces the energy consumption prediction error and neural network model training time. Full article
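The time-discontinuous sliding window is simple to construct. The sketch below follows the abstract's description (past 24 h, the same 24 h last week, and the same 24 h the previous year); the concatenation order and the 8760 h year length are assumptions for illustration.

```python
def discontinuous_window(series, t, day=24, week=168, year=8760):
    """Build PBNN's time-discontinuous input window at hour t (sketch).

    series: hourly energy-consumption readings. Returns 72 values:
    the matching 24 h from last year, last week, and the immediate past.
    """
    assert t >= year + day, "need at least a year of history plus one day"
    last_year = series[t - year - day:t - year]
    last_week = series[t - week - day:t - week]
    past = series[t - day:t]
    return last_year + last_week + past
```

Compared with a contiguous 72 h window, this feeds the network the daily, weekly, and annual periodicities directly, which is what the Fourier analysis in the paper motivates.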

Article
Improving Network Representation Learning via Dynamic Random Walk, Self-Attention and Vertex Attributes-Driven Laplacian Space Optimization
Entropy 2022, 24(9), 1213; https://doi.org/10.3390/e24091213 - 30 Aug 2022
Viewed by 696
Abstract
Network data analysis is a crucial method for mining complicated object interactions. In recent years, random walk and neural-language-model-based network representation learning (NRL) approaches have been widely used for network data analysis. However, these NRL approaches suffer from the following deficiencies: firstly, because the random walk procedure is based on symmetric node similarity and fixed probability distribution, the sampled vertices’ sequences may lose local community structure information; secondly, because the feature extraction capacity of the shallow neural language model is limited, they can only extract the local structural features of networks; and thirdly, these approaches require specially designed mechanisms for different downstream tasks to integrate vertex attributes of various types. We conducted an in-depth investigation to address the aforementioned issues and propose a novel general NRL framework called dynamic structure and vertex attribute fusion network embedding, which firstly defines an asymmetric similarity and h-hop dynamic random walk strategy to guide the random walk process to preserve the network’s local community structure in walked vertex sequences. Next, we train a self-attention-based sequence prediction model on the walked vertex sequences to simultaneously learn the vertices’ local and global structural features. Finally, we introduce an attributes-driven Laplacian space optimization to converge the process of structural feature extraction and attribute feature extraction. The proposed approach is exhaustively evaluated by means of node visualization and classification on multiple benchmark datasets, and achieves superior results compared to baseline approaches. Full article

Article
Multi-Stream Convolution-Recurrent Neural Networks Based on Attention Mechanism Fusion for Speech Emotion Recognition
Entropy 2022, 24(8), 1025; https://doi.org/10.3390/e24081025 - 26 Jul 2022
Abstract
The quality of feature extraction plays a significant role in the performance of speech emotion recognition (SER). To extract discriminative, affect-salient features from speech signals and thereby improve SER performance, this paper proposes a multi-stream convolution-recurrent neural network based on an attention mechanism (MSCRNN-A). First, a multi-stream sub-branch fully convolutional network (MSFCN) based on AlexNet is presented to limit the loss of emotional information: sub-branches are added behind each pooling layer to retain features at different resolutions, which are then fused by addition. Second, the MSFCN is combined with a Bi-LSTM network to form a hybrid network that extracts speech emotion features together with their temporal structure. Finally, a feature fusion model based on a multi-head attention mechanism is developed to obtain the best fused features: the attention mechanism computes the contribution of each network's features and fuses them adaptively by weighting. To restrain gradient divergence, the individual network features and the fused features are connected through shortcut connections to obtain the final features for recognition. Experimental results on three conventional SER corpora, CASIA, EMODB, and SAVEE, show that the proposed method significantly improves recognition performance, with a recognition rate superior to most existing state-of-the-art methods.
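A minimal sketch of attention-weighted fusion of several feature streams, the core idea of the fusion model described above. The stream count, dimensions, random query, and single-head form are illustrative assumptions (the paper uses multi-head attention over learned network features):

```python
import numpy as np

rng = np.random.default_rng(0)
# Three stand-in feature streams (e.g. from CNN, BiLSTM, ... branches).
streams = [rng.standard_normal(8) for _ in range(3)]
query = rng.standard_normal(8)   # stand-in for a learned query vector

# Score each stream against the query, then softmax into fusion weights.
scores = np.array([s @ query for s in streams])
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# Fused feature = attention-weighted sum, plus a shortcut connection
# back to one stream, mirroring the shortcut described in the abstract.
fused = sum(w * s for w, s in zip(weights, streams))
fused_with_shortcut = fused + streams[0]
```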

Article
Optimal Performance and Application for Seagull Optimization Algorithm Using a Hybrid Strategy
Entropy 2022, 24(7), 973; https://doi.org/10.3390/e24070973 - 14 Jul 2022
Abstract
This paper presents a novel hybrid algorithm named SPSOA to address the low search capability of the seagull optimization algorithm and its tendency to fall into local optima. First, the Sobol sequence, a low-discrepancy sequence, is used to initialize the seagull population, enhancing the population's diversity and ergodicity. Then, inspired by the sigmoid function, a new control parameter is designed to better balance early exploration and late exploitation. Finally, the particle swarm optimization learning strategy is introduced into the seagull position update to improve the algorithm's ability to escape local optima. Simulation comparisons with other algorithms on 12 benchmark test functions, examined from different angles, show that SPSOA is superior in stability, convergence accuracy, and speed. As an engineering application, SPSOA is applied to the blind source separation of mixed images; the experimental results show that SPSOA successfully separates noisy mixed images and achieves higher separation performance than the compared algorithms.
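A sketch of two of the ingredients above: low-discrepancy initialization and a sigmoid-shaped control parameter. A Halton sequence stands in here for the Sobol sequence used in the paper (both are low-discrepancy generators), and the parameter schedule is illustrative, not the paper's exact formula:

```python
import math

def halton(index, base):
    # Classic radical-inverse (Halton) low-discrepancy value in (0, 1).
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def init_population(n, dim, lo, hi):
    # One prime base per dimension; population spread evenly over the box.
    bases = [2, 3, 5, 7, 11, 13][:dim]
    return [[lo + (hi - lo) * halton(i + 1, b) for b in bases]
            for i in range(n)]

def control_param(t, t_max, a=2.0):
    # Sigmoid-inspired parameter: decays smoothly from ~a to ~0, large
    # early (exploration) and small late (exploitation).
    return a / (1.0 + math.exp(10.0 * (t / t_max - 0.5)))

pop = init_population(20, 2, -10.0, 10.0)
```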

Review
Emergent Intelligence in Generalized Pure Quantum Systems
Computation 2022, 10(6), 88; https://doi.org/10.3390/computation10060088 - 31 May 2022
Abstract
This paper presents a generalized information system theory, extended to pure quantum systems using wave probability functions. The novelty of this approach lies in analogies with electrical circuits and quantum physics. Information power was chosen as the relevant parameter because it balances both components: information flow and information content. Next, the principles of quantum resonance between individual information components, which can lead to emergent behavior, are analyzed. For such a system, adding more and more probabilistic information elements can improve the convergence of the whole to the resulting trajectory thanks to phase parameters. The paper also offers an original interpretation of information "source–recipient" and "resource–demand" models, including the not-yet-implemented "unused resources" and "unmet demands". Finally, possible applications of these principles are illustrated by several examples, from the quantum gyrator to the hypothetical possibility of explaining some properties of consciousness.

Article
Bullet Frangibility Factor Quantification by Using Explicit Dynamic Simulation Method
Computation 2022, 10(6), 79; https://doi.org/10.3390/computation10060079 - 24 May 2022
Abstract
Frangible bullets have the unique property of disintegrating into fragments upon hitting a hard target or obstacle; this ability to fragment after impact is called frangibility. In this study, frangibility testing was carried out theoretically via modeling with the explicit dynamics method, using the ANSYS Autodyn solver integrated into ANSYS Workbench. This paper analyzes frangibility through two main factors: material properties and projectile design. The results show the scattering and the remaining bullet fragments after impact. According to the modeling results, the frangibility factor values for the AMMO 1 and AMMO 2 designs are 9.34 and 10.79, respectively. Comparing the experimental and simulated frangibility factors, the errors for AMMO 1 and AMMO 2 are 10.5% and 1.09%, respectively. The simulated scattering pattern of the AMMO 2 design shows more scattered particles than that of AMMO 1, with the furthest scattering distances of the AMMO 1 and AMMO 2 bullets being 1.01 m and 2.658 m, respectively.

Article
Improved Shear Strength Prediction Model of Steel Fiber Reinforced Concrete Beams by Adopting Gene Expression Programming
Materials 2022, 15(11), 3758; https://doi.org/10.3390/ma15113758 - 24 May 2022
Abstract
In this study, an artificial intelligence tool called gene expression programming (GEP) is applied to develop an empirical model that predicts the shear strength of steel fiber reinforced concrete beams. The proposed genetic model incorporates all the influencing parameters, such as the geometric properties of the beam, the concrete compressive strength, the shear span-to-depth ratio, and the mechanical and material properties of the steel fiber. Existing empirical models ignore the tensile strength of steel fibers, which exerts a strong influence on crack propagation in the concrete matrix and thereby affects the beam shear strength. To overcome this limitation, an improved and robust empirical model is proposed herein that incorporates the fiber tensile strength along with the other influencing factors. For this purpose, an extensive experimental database of specimens subjected to four-point loading is constructed, comprising the results of 488 tests drawn from the literature. The data are divided based on fiber shape (hooked or straight) and the tensile strength of the steel fiber. The empirical model developed from this database is statistically compared with previously established empirical equations. The comparison indicates that the proposed model significantly improves the prediction of the shear strength of steel fiber reinforced concrete beams, substantiating the important role of fiber tensile strength.
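A GEP-fitted model is, at bottom, an expression tree evaluated on the input features. The sketch below shows that mechanism only; the tree, the coefficients, and the feature names (`fc` for compressive strength, `f_fiber` for fiber tensile strength) are hypothetical, not the paper's fitted shear-strength equation:

```python
import math

def evaluate(node, x):
    # Leaves are feature names or constants; internal nodes are
    # (operator, child, ...) tuples, as in a GEP expression tree.
    if isinstance(node, str):
        return x[node]
    if isinstance(node, (int, float)):
        return float(node)
    op, *args = node
    vals = [evaluate(a, x) for a in args]
    if op == "+":
        return vals[0] + vals[1]
    if op == "*":
        return vals[0] * vals[1]
    if op == "sqrt":
        return math.sqrt(abs(vals[0]))
    raise ValueError(op)

# Hypothetical tree: a concrete-strength term plus a fiber-tensile term.
tree = ("+", ("*", 0.17, ("sqrt", "fc")), ("*", 0.08, "f_fiber"))
pred = evaluate(tree, {"fc": 36.0, "f_fiber": 1100.0})
```

GEP's evolutionary search would mutate and recombine such trees against the 488-test database; only the evaluation step is shown here.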

Article
A Tailored Pricing Strategy for Different Types of Users in Hybrid Carsharing Systems
Algorithms 2022, 15(5), 172; https://doi.org/10.3390/a15050172 - 20 May 2022
Abstract
Considering the characteristics of different types of users in hybrid carsharing systems, in which sharing autonomous vehicles (SAVs) and conventional sharing cars (CSCs) coexist, a tailored pricing strategy (TPS) is proposed to maximize the operator's profit and minimize all users' costs. The fleet sizes and the sizes of the SAV stations are determined simultaneously. A bi-objective nonlinear programming model is established, and a genetic algorithm is applied to solve it. Based on operational data from Lanzhou, China, carsharing users are clustered into three types: loyal users, losing users, and potential users. Results show that applying the TPS helps the operator increase profit and attract more users. The loyal users are assigned the highest price, yet they still contribute the most to the operator's profit, making the highest number of carsharing trips. The losing users and potential users are comparable in their number of trips, while the latter generate more profit.
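A minimal k-means sketch of the user-clustering step described above. The usage features (trips per month, days since last trip), the synthetic data, and the deterministic initialization are illustrative assumptions, not the Lanzhou operational data or the paper's clustering setup:

```python
import numpy as np

rng = np.random.default_rng(1)
# Three synthetic user groups: frequent/recent, formerly frequent, rare.
loyal = rng.normal([40.0, 2.0], 2.0, size=(30, 2))
losing = rng.normal([15.0, 30.0], 2.0, size=(30, 2))
potential = rng.normal([3.0, 10.0], 1.0, size=(30, 2))
X = np.vstack([loyal, losing, potential])

def kmeans(X, k, iters=50):
    # Deterministic init for the sketch: one seed point per k-th slice.
    centers = X[:: len(X) // k][:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each user to the nearest centre, then update centres.
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(X, 3)
```

Each cluster would then receive its own tailored price in the bi-objective model.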

Article
Predicting Box-Office Markets with Machine Learning Methods
Entropy 2022, 24(5), 711; https://doi.org/10.3390/e24050711 - 16 May 2022
Abstract
The accurate prediction of gross box-office markets is of great benefit for investment and management in the movie industry. In this work, we propose a machine-learning-based method for predicting a country's movie box-office revenue, based on empirical comparisons of eight methods with diverse combinations of economic factors. In time-series forecasting experiments from 2013 to 2016, we achieved relative root-mean-squared errors of 0.056 in the US and 0.183 in China for the two case-study movie markets. We conclude that the support-vector-machine-based method using gross domestic product reaches the best prediction performance while relying only on easily available economic factors. The computational experiments and comparison studies provide evidence for the effectiveness and advantages of the proposed prediction strategy. In validating the predicted total box-office markets for 2017, the error rates were 0.044 in the US and 0.066 in China. In the consecutive predictions of nationwide box-office markets in 2018 and 2019, the mean relative absolute percentage errors achieved were 0.041 and 0.035 in the US and China, respectively. The precise predictions, on both the training and validation data, demonstrate the efficiency and versatility of the proposed method.
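A sketch of the evaluation metric used above (relative root-mean-squared error) on a toy GDP-to-box-office regression. A least-squares line stands in for the paper's support vector machine, and every number below is synthetic:

```python
import numpy as np

gdp = np.array([16.8, 17.5, 18.2, 18.7])   # hypothetical GDP series
box = np.array([10.9, 10.4, 11.1, 11.4])   # hypothetical box-office totals

# Fit box ~ a * gdp + b by ordinary least squares.
A = np.vstack([gdp, np.ones_like(gdp)]).T
coef, *_ = np.linalg.lstsq(A, box, rcond=None)
pred = A @ coef

# Relative RMSE: RMSE normalised by the mean of the observed values.
rel_rmse = float(np.sqrt(np.mean((pred - box) ** 2)) / box.mean())
```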

Article
PSO Optimized Active Disturbance Rejection Control for Aircraft Anti-Skid Braking System
Algorithms 2022, 15(5), 158; https://doi.org/10.3390/a15050158 - 10 May 2022
Abstract
A high-quality and secure touchdown run is essential for an aircraft for economic, operational, and strategic reasons. The shortest viable touchdown run without any skidding requires variable braking pressure to manage the friction between the runway surface and the braking tire at all times. The anti-skid braking system (ABS) must therefore handle strong nonlinearity and unmeasurable disturbances while regulating the wheel slip ratio so that the braking system operates safely. This work proposes an active disturbance rejection control technique for the anti-skid braking system. The control law ensures bounded, manageable action, and the control algorithm keeps the closed-loop system operating near the peak of the stable region of the friction curve, thereby improving overall braking performance and safety. The stability of the proposed algorithm is proven primarily by means of Lyapunov-based methods, and its effectiveness is assessed through simulations on a semi-physical aircraft brake simulation platform.
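A hedged sketch of the key ADRC ingredient, a linear extended state observer (ESO): z1 tracks the controlled state (standing in for the wheel slip ratio) and z2 estimates the lumped disturbance, which the control law then cancels. The toy first-order plant, gains, and setpoint are illustrative assumptions, not the paper's braking model:

```python
def simulate(steps=2000, dt=0.001):
    b0 = 50.0                     # assumed input gain of the plant
    w_o = 60.0                    # observer bandwidth
    l1, l2 = 2.0 * w_o, w_o ** 2  # ESO gains via pole placement
    kp = 20.0                     # proportional gain of the outer loop
    r = 0.15                      # slip-ratio setpoint (illustrative)
    x, d = 0.0, 5.0               # true state and unknown disturbance
    z1 = z2 = 0.0                 # ESO estimates of x and d
    u = 0.0
    for _ in range(steps):
        x += dt * (b0 * u + d)               # toy first-order plant
        e = x - z1
        z1 += dt * (z2 + b0 * u + l1 * e)    # state estimate update
        z2 += dt * (l2 * e)                  # disturbance estimate update
        u = (kp * (r - z1) - z2) / b0        # cancel estimated disturbance
    return x, z1, z2

x, z1, z2 = simulate()
```

After the transient, z2 converges to the true disturbance and the state settles at the setpoint despite the controller never knowing d directly.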

Article
Evaluation of Various Tree-Based Ensemble Models for Estimating Solar Energy Resource Potential in Different Climatic Zones of China
Energies 2022, 15(9), 3463; https://doi.org/10.3390/en15093463 - 09 May 2022
Abstract
Solar photovoltaic (PV) electricity generation is growing rapidly in China. Accurate estimation of the solar energy resource potential (Rs) is crucial for siting, designing, evaluating, and optimizing PV systems. Seven types of tree-based ensemble models, namely classification and regression trees (CART), extremely randomized trees (ET), random forest (RF), gradient boosting decision tree (GBDT), extreme gradient boosting (XGBoost), gradient boosting with categorical features support (CatBoost), and the light gradient boosting method (LightGBM), as well as the multi-layer perceptron (MLP) and support vector machine (SVM), were applied to estimate Rs using a k-fold cross-validation method. The three newly developed models (CatBoost, LightGBM, XGBoost) and the GBDT model generally outperformed the other five models with satisfactory accuracy (R2 ranging from 0.893 to 0.916, RMSE from 1.943 to 2.195 MJ m−2 d−1, and MAE from 1.457 to 1.646 MJ m−2 d−1 on average) and provided acceptable model stability (the percentage increase in testing RMSE over training RMSE ranged from 8.3% to 31.9%) under seven input combinations. In addition, CatBoost (12.3 s), LightGBM (13.9 s), XGBoost (20.5 s), and GBDT (16.8 s) exhibited satisfactory computational efficiency compared with the MLP (132.1 s) and SVM (256.8 s). Comprehensively considering model accuracy, stability, and computational time, the newly developed tree-based models (CatBoost, LightGBM, XGBoost) and the commonly used GBDT model are recommended for modeling Rs in the contrasting climates of China and possibly in similar climatic zones elsewhere in the world. By evaluating the newly developed tree-based ensemble models in terms of accuracy, stability, and computational efficiency across various climates of China, this study also provides a new look at the indicators used to evaluate machine learning methods.

Article
Investigating Multi-Level Semantic Extraction with Squash Capsules for Short Text Classification
Entropy 2022, 24(5), 590; https://doi.org/10.3390/e24050590 - 23 Apr 2022
Abstract
Short text classification is currently a hot topic in natural language processing. Due to the sparseness and irregularity of short text, the task still faces great challenges. In this paper, we propose a new classification model built around short text representation, global feature extraction, and local feature extraction. We use convolutional networks to extract shallow features from the short text vectorization and introduce a multi-level semantic extraction framework, which uses BiLSTM as the encoding layer and an attention mechanism with normalization as the interaction layer. We then concatenate the convolutional feature vector with the output of the semantic framework; after several rounds of feature integration, the framework improves the quality of the feature representation. Combined with a capsule network, we obtain high-level local information by dynamic routing and then squash it. In addition, we explore the optimal depth of semantic feature extraction for short text based on the multi-level semantic framework. Experiments on four benchmark datasets show that our model provides competitive results: the accuracies on SUBJ, TREC, MR, and ProcCons are 93.8%, 91.94%, 82.81%, and 98.43%, respectively, verifying that our model greatly improves classification accuracy and robustness.
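The "squash" operation mentioned above is the standard capsule-network non-linearity: it shrinks a capsule vector's length into [0, 1) while preserving its direction, so the length can act as an existence probability. A minimal version:

```python
import numpy as np

def squash(s, eps=1e-9):
    # v = (|s|^2 / (1 + |s|^2)) * s / |s|
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

v = squash(np.array([3.0, 4.0]))   # length 5 -> length 25/26
```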

Article
Optimizing Finite-Difference Operator in Seismic Wave Numerical Modeling
Algorithms 2022, 15(4), 132; https://doi.org/10.3390/a15040132 - 18 Apr 2022
Abstract
The finite-difference method is widely used in seismic wave numerical simulation, imaging, and waveform inversion. In this method, the differential operator is replaced approximately by a finite-difference operator, which can be obtained by truncating the spatial convolution series. The properties of the truncating window function, such as the main and side lobes of its amplitude response, determine the accuracy of the finite difference, which in turn significantly affects the seismic imaging and inversion results. Although numerical dispersion is inevitable in this process, it can be suppressed more effectively by using higher-precision finite-difference operators. In this paper, we use the krill herd algorithm, in contrast with standard PSO and CDPSO (a variant of PSO), to optimize the finite-difference operator. Numerical simulation results verify that the krill herd algorithm performs well in improving the precision of the differential operator.
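For context, a sketch of the conventional Taylor-series finite-difference operator that such optimisation schemes start from: coefficients c_k with f'(x) ≈ (1/h) Σ_k c_k [f(x+kh) − f(x−kh)] on a symmetric stencil. The krill herd / PSO step in the paper would then perturb these coefficients to reduce numerical dispersion; only the baseline construction is shown:

```python
import numpy as np

def fd_coefficients(m):
    # Match the first m odd Taylor terms: sum_k c_k * k = 1/2,
    # sum_k c_k * k^3 = 0, sum_k c_k * k^5 = 0, ...
    k = np.arange(1, m + 1, dtype=float)
    A = np.array([k ** (2 * j + 1) for j in range(m)])
    b = np.zeros(m)
    b[0] = 0.5
    return np.linalg.solve(A, b)

c = fd_coefficients(2)   # classic 4th-order stencil: [2/3, -1/12]

# Numerical check on f = sin, whose derivative at 0 is 1.
h = 0.1
est = sum(c[i] * (np.sin((i + 1) * h) - np.sin(-(i + 1) * h))
          for i in range(len(c))) / h
```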

Article
Hybrid-Flash Butterfly Optimization Algorithm with Logistic Mapping for Solving the Engineering Constrained Optimization Problems
Entropy 2022, 24(4), 525; https://doi.org/10.3390/e24040525 - 08 Apr 2022
Abstract
Only the smell perception rule is considered in the butterfly optimization algorithm (BOA), which is therefore prone to falling into local optima. Compared with the original BOA, an extra operator, a color perception rule, is incorporated into the proposed hybrid-flash butterfly optimization algorithm (HFBOA), making it more consistent with the actual foraging characteristics of butterflies in nature. In addition, a strategy that updates the control parameters via a logistic mapping is used in the HFBOA to enhance its global search ability. The performance of the proposed method was verified on twelve benchmark functions, where the comparison results show that the HFBOA converges more quickly and is more stable for numerical optimization problems than six state-of-the-art optimization methods. Additionally, the HFBOA is successfully applied to six engineering constrained optimization problems (tubular column design, tension/compression spring design, cantilever beam design, etc.). The simulation results reveal that the proposed approach demonstrates superior performance in solving complex real-world engineering constrained tasks.
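The logistic-mapping idea above can be sketched in a few lines: the chaotic sequence x_{n+1} = μ·x_n·(1 − x_n) with μ = 4 keeps a control parameter varying irregularly over (0, 1) instead of decaying on a fixed schedule (how the HFBOA maps these values onto its specific parameters is not shown here):

```python
def logistic_sequence(x0, n, mu=4.0):
    # Chaotic logistic-map iterates in [0, 1]; x0 must avoid the fixed
    # points (e.g. 0, 0.25, 0.5, 0.75, 1) for a non-degenerate orbit.
    seq = [x0]
    for _ in range(n - 1):
        seq.append(mu * seq[-1] * (1.0 - seq[-1]))
    return seq

seq = logistic_sequence(0.63, 50)
```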

Article
Insight into the Exemplary Physical Properties of Zn-Based Fluoroperovskite Compounds XZnF3 (X = Al, Cs, Ga, In) Employing Accurate GGA Approach: A First-Principles Study
Materials 2022, 15(7), 2669; https://doi.org/10.3390/ma15072669 - 05 Apr 2022
Abstract
Using the full-potential linearized augmented plane wave (FP-LAPW) method, based on density functional theory, the structural, elastic, electronic, and optical properties of the simple cubic ternary fluoroperovskite compounds XZnF3 (X = Al, Cs, Ga, In) are calculated. To include the effect of the exchange and correlation potentials, the generalized gradient approximation is applied in the optimization. When the metallic cation "X" is changed from Cs to Al, the value of the bulk modulus increases, indicating greater rigidity: based on its bulk modulus, AlZnF3 is harder and less compressible than the other three compounds, which have lower bulk moduli. The compounds under study are also found to be mechanically stable and anisotropic. The determined values of the Poisson ratio, Cauchy pressure, and Pugh ratio show that the compounds are ductile. From the band structure computations, CsZnF3 has an indirect band gap of 3.434 eV (M–Γ), while AlZnF3, GaZnF3, and InZnF3 have indirect band gaps of 2.425 eV, 3.665 eV, and 2.875 eV (M–X), respectively. The optical properties are investigated for radiation up to 40 eV, and the main peaks of the optical spectra are described in terms of the calculated electronic structure. These findings provide comprehensive insight into the physical properties of Zn-based fluoroperovskites.

Article
Continuous Simulation of the Power Flow in AC–DC Hybrid Microgrids Using Simplified Modelling
Computation 2022, 10(4), 52; https://doi.org/10.3390/computation10040052 - 29 Mar 2022
Abstract
This paper reports the development of a model for the continuous simulation of power flow in AC–DC hybrid microgrids operating under different generation–consumption scenarios. The proposed application was assembled as a multiple-input multiple-output model built from blocks containing simplified models of photovoltaic (PV) modules, wind turbines (WT), battery arrays (energy storage units, ESU), and power loads. Average power was used as the input/output variable of the blocks, allowing easy reconfiguration of the microgrid and its control. To define a generation profile, the PV and WT were modeled considering environmental conditions and the efficiency profiles of the maximum power point tracking (MPPT) algorithms. The ESUs were modeled from intrinsic characteristics of the batteries, considering a constant-power charge regime and using the State of Energy (SoE) approach to compute autonomy. To define a consumption profile, DC and AC loads were modeled as constant real power. As an innovative characteristic, unidirectional and bidirectional power conversion stages were modeled using efficiency profiles, which can be obtained from experiments on the real converters. The outputs of the generation, consumption, and storage models were integrated as inputs to the mathematical expressions computing the power balance at the buses of the microgrid. The proposed model is suitable for analyzing the efficiency of different configurations of the same microgrid architecture and can be extended by integrating additional elements. The model was implemented in LabVIEW, and three examples were developed to test its correct operation.
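A hedged sketch of a per-bus power balance with a converter efficiency profile, in the spirit of the block model described above. The topology (one DC bus exporting surplus through a DC/AC stage), the quadratic efficiency curve, and all numbers are illustrative assumptions, not the paper's LabVIEW model:

```python
def converter_eff(p_out, p_rated=5000.0):
    # Illustrative efficiency profile: poorer at light load, clamped to
    # the 5%-100% loading range; real profiles come from measurements.
    load = min(max(p_out / p_rated, 0.05), 1.0)
    return 0.90 + 0.07 * load - 0.02 * load ** 2

def dc_bus_balance(p_pv, p_dc_load, p_battery):
    # Positive battery power means discharging into the bus.
    return p_pv + p_battery - p_dc_load

surplus = dc_bus_balance(p_pv=3200.0, p_dc_load=1500.0, p_battery=0.0)
p_to_ac = surplus * converter_eff(surplus)   # export through DC/AC stage
```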

Article
Forecasting Network Interface Flow Using a Broad Learning System Based on the Sparrow Search Algorithm
Entropy 2022, 24(4), 478; https://doi.org/10.3390/e24040478 - 29 Mar 2022
Abstract
In this paper, we propose a broad learning system based on the sparrow search algorithm. First, to avoid a complicated manual parameter-tuning process and obtain the best combination of hyperparameters, the sparrow search algorithm is used to optimize the shrinkage coefficient (r) and the regularization coefficient (λ) of the broad learning system, improving the prediction accuracy of the model. Second, the broad learning system is used to build a network interface flow forecasting model: the flow values in the time period [T−11, T] serve as the features for the traffic at moment T+1, and the hyperparameters output in the previous step are fed into the network to train the traffic prediction model. Finally, to verify model performance, we train the prediction model on two public network flow datasets and on real traffic data from an enterprise cloud platform switch interface, and compare the proposed model with the plain broad learning system, long short-term memory, and other methods. The experiments show that the prediction accuracy of this method is higher than that of the other methods, and the moving average reaches 97%, 98%, and 99% on the three datasets, respectively.
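A sketch of the sliding-window feature construction described above: the flows observed over [T−11, T] become the feature vector for predicting the flow at T+1. The window length of 12 is an assumption read off the abstract's notation, and the ramp series is a placeholder for real traffic:

```python
import numpy as np

def make_windows(series, width=12):
    # Each row of X holds `width` consecutive flow values; the target y
    # is the value immediately after each window.
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    y = np.array(series[width:])
    return X, y

flow = list(range(100))   # placeholder traffic series
X, y = make_windows(flow)
```

The resulting (X, y) pairs are what the tuned broad learning system would be trained on.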

Article
Computer Simulations of Injection Process of Elements Used in Electromechanical Devices
Materials 2022, 15(7), 2511; https://doi.org/10.3390/ma15072511 - 29 Mar 2022
Abstract
This paper presents computer simulations of the injection process for elements used in electromechanical devices and an analysis of the impact of the injection molding process parameters on the quality of the moldings. The process was studied in Autodesk Simulation Moldflow Insight 2021. The setup of the injection process for a given part must be based on the material and process data from the technological card and on knowledge of the injection molding machine's operation. In the case of injection moldings, the supervision of production quality is based on the information and requirements received from the customer. The main goal of the analysis is to answer the question of how to properly set up the filling of the mold cavities in order to meet the quality requirements for the presented molding. In this paper, the simulation was compared with the real process. It is extremely important to optimize the injection, including synchronizing all process parameters: incorrectly selected parameter values may lead to product defects, losses and waste of raw materials, and unnecessary energy consumption.

Article
A Hybrid Method Using HAVOK Analysis and Machine Learning for Predicting Chaotic Time Series
Entropy 2022, 24(3), 408; https://doi.org/10.3390/e24030408 - 15 Mar 2022
Abstract
The prediction of chaotic time series has remained a challenging problem in recent decades. A hybrid method using Hankel Alternative View Of Koopman (HAVOK) analysis and machine learning (HAVOK-ML) is developed to predict chaotic time series. HAVOK-ML simulates the time series by reconstructing a closed linear model so as to achieve the purpose of prediction: it decomposes the chaotic dynamics into intermittently forced linear systems by HAVOK analysis and estimates the external intermittent forcing term using machine learning. Prediction performance evaluations confirm that the proposed method has superior forecasting skill compared with existing prediction methods.
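A sketch of the HAVOK front end only: delayed copies of the series are stacked into a Hankel matrix whose SVD yields eigen-time-delay coordinates, in which a nearly linear model (plus the intermittent forcing term) is then fitted. The two-sine test signal, delay depth, and rank threshold are placeholders, not the paper's setup:

```python
import numpy as np

def hankel(series, rows):
    # Row i holds the series delayed by i samples.
    cols = len(series) - rows + 1
    return np.array([series[i:i + cols] for i in range(rows)])

t = np.linspace(0.0, 20.0, 400)
x = np.sin(t) + 0.5 * np.sin(2.3 * t)   # placeholder quasiperiodic signal
H = hankel(x, 40)
U, s, Vt = np.linalg.svd(H, full_matrices=False)
rank = int(np.sum(s > 0.01 * s[0]))     # effective rank of the dynamics
```

The leading rows of `Vt` are the coordinates HAVOK-ML would regress a linear model on; the machine-learning estimate of the forcing term is not shown.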

Article
Ensemble Learning-Based Reactive Power Optimization for Distribution Networks
Energies 2022, 15(6), 1966; https://doi.org/10.3390/en15061966 - 08 Mar 2022
Abstract
Reactive power optimization of distribution networks is of great significance for improving power quality and reducing power loss. However, traditional methods for reactive power optimization of distribution networks either consume a lot of calculation time or have limited accuracy. In this paper, a novel data-driven approach based on ensemble learning is proposed to simultaneously improve accuracy and reduce calculation time. Specifically, k-fold cross-validation is used to train multiple sub-models, which are merged through the proposed ensemble framework to obtain high-quality optimization results. The simulation results show that the proposed approach outperforms popular baselines such as the light gradient boosting machine, convolutional neural networks, case-based reasoning, and multi-layer perceptrons. Moreover, its calculation time is much lower than that of traditional heuristic methods such as the genetic algorithm.
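A sketch of the ensemble idea above: each of k sub-models is trained on a different k-fold split, and their outputs are merged (simple averaging here; the paper's merging framework may differ). Linear least squares stands in for the actual sub-models, and the data is synthetic rather than distribution-network measurements:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(size=(60, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0.0, 0.05, 60)

# Train one sub-model per k-fold training split (k = 5).
folds = np.array_split(rng.permutation(60), 5)
models = []
for i in range(5):
    train = np.concatenate([folds[j] for j in range(5) if j != i])
    A = np.c_[X[train], np.ones(len(train))]
    w, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    models.append(w)

def ensemble_predict(x):
    # Merge the sub-models by averaging their predictions.
    a = np.append(x, 1.0)
    return float(np.mean([a @ w for w in models]))

pred = ensemble_predict(np.array([0.5, 0.5, 0.5]))  # true value ≈ 0.75
```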