Obtaining Bricks Using Silicon-Based Materials: Experiments, Modeling and Optimization with Artificial Intelligence Tools

Abstract: In the brick manufacturing industry, there is growing concern among researchers to find solutions to reduce energy consumption. An industrial process for obtaining bricks was approached, with the manufacturing mix modified via the introduction of sunflower seed husks and sawdust. The process was analyzed with artificial intelligence tools, with the goal of minimizing the exhaust emissions of CO and CH4. Optimization algorithms inspired by human and virus behaviors, associated with neural network models, were applied in this approach. A series of feed-forward neural networks were developed, with 6 inputs corresponding to the working conditions, one or two intermediate layers and one output (CO or CH4, respectively). The results for ten biologically inspired algorithms and a grid search method were compared within a single-objective optimization procedure. It was established that by introducing 1.9% sunflower seed husks and 0.8% sawdust into the brick manufacturing mix, a minimum quantity of CH4 emissions was obtained, while 0% sunflower seed husks and 0.5% sawdust were the corresponding quantities for minimum CO emissions.


Introduction
The process of obtaining burnt bricks involves a significant consumption of energy. Given the current context in which the cost of energy has become a critical issue, there is growing concern among researchers to find solutions to reduce energy consumption [1]. Most of the existing studies in the literature are carried out at the laboratory scale and involve the inclusion in the brick manufacturing mix of waste products, such as agricultural waste [2] and textile sludge [3], as well as wastes resulting from the steel industry, containing mainly slag, dust and mud [4], cotton micro-waste [5], sawdust [6], coffee grounds [7], brick powders and residual ceramics [8-10], ash [11] and others. After the finished products are obtained at the laboratory scale, they are evaluated in terms of their mechanical properties, thermal conductivity and water absorption. Beshah and others [3] obtained very good compression resilience results (approximately 30 MPa) and energy savings of 26 and 50% when 10% clay and 20% textile sludge were used in combination to obtain burned bricks. At the same laboratory scale, Cultrone and others [6] established that the addition of sawdust to the composition of bricks in proportions from 2.5 to 10% does not cause significant changes in the color of the bricks or in their mineralogy. Manni and others [7] also found that bricks do not change color even when coffee grounds are added.

The specific algorithms based on human competition are sports-inspired algorithms, e.g., the Football Game Algorithm (FGA) [23,24] and Volleyball Premier League (VPL) algorithm [25], as well as the Imperialist Competitive Algorithm (ICA) [26], which was applied for a robust PID controller [27] and elsewhere in the engineering domain [28]. So far, FGA and VPL have not been the subject of a great number of studies. Among the proposed improvements, one can mention a modified VPL approach that uses the sine cosine algorithm [29] and the Multi-Objective Volleyball Premier League algorithm [30]. On the other hand, the ICA has been extensively studied, and many variants have been published, e.g., ICA combined with different chaotic maps [31], k-means clustering [32] and neural networks [33,34].
Optimization algorithms inspired by virus behavior. Some examples that belong to this class are the Viral System (VS) [35], Virulence Optimization Algorithm (VOA1) [36] and Virus Colony Search (VCS) [37] approaches. VCS was applied to find the optimal placement of distributed generators with regard to a reliability assessment [38], and for unit commitment in smart grids with wind farms [39]. Another algorithm is the Virus Optimization Algorithm (VOA2) [40-42], for which only one modified variant [43] is available in the literature. In general, one cannot find many reported applications for these algorithms.
In most studies, these optimization algorithms have been tested on benchmark problems, with several applications in the field, especially for the TLBO algorithm [18]. After an in-depth literature study, no applications were found in the fields addressed in this paper or in related fields in which real industrial processes are considered.
In this approach, optimization algorithms are applied to minimize the gas emissions in an industrial process whereby burnt bricks are obtained through the introduction of sunflower seed husks and sawdust in the manufacturing mix. The novelty of this research is the simulation-based study of the industrial process using optimization procedures, including neural networks and biologically inspired algorithms, to provide the percentage compositions of seed husks and sawdust, the dry product mass, and the amounts of clay, ash and organic raw materials to be used in the manufacturing mix, so that the amounts of CO and CH4 discharged to the furnace chimney are minimal. The combination of neural networks with algorithms inspired by human and virus behaviors has not been studied in the literature, which is another contribution of the present article.

Experimental Determinations
The experimental tests were performed in an industrial system used for obtaining bricks. Here, 100 batches of bricks were evaluated, for which the manufacturing mix was modified via the introduction of sunflower seed husks and sawdust in proportions ranging between 0 and 3.5%. The impact of the addition of these auxiliary materials on the exhaust emissions resulting from the manufacturing process was assessed by analyzing the exhaust gases in the furnace chimney with a Testo 350 flue gas analyzer. This was equipped with detection and measurement cells specific to the gases of interest (CO, NO and CH4) and metrologically calibrated. The analyzer provided a measurement resolution of 0.1 ppm for CO and 1 ppm for NO and CH4.
The experimental determinations allowed the construction of a database containing information on the mass percentage composition of sunflower seed husks (SSH) and sawdust (S), dry product mass (DPM), the amount of clay (C), the amount of ash (A) and the amount of organic raw material (ORM) used, as well as the amounts of CO, NO and CH4 discharged to the furnace chimney in the manufacturing process of 100 loads of bricks.

Modeling Methodology
Given the current energy crisis, process engineers are trying to maintain and improve productivity in industrial brick-making plants, which are designed and put into operation for a certain type of manufacturing mix and a certain type of combustion process, by adding auxiliary materials such as sawdust and sunflower seed husks. The manufacturing mix must follow the same combustion curves due to the constructive characteristics of the installation, and the quantity of exhaust emissions in the furnace chimney must also be carefully monitored.
In this study, it is proposed to build neural models that make predictions about the changes in the quantity of exhaust emissions when different percentages of auxiliary materials are introduced into the manufacturing mix, thereby helping to reduce the number of experimental tests, which have a significant economic impact because they are carried out on large-capacity industrial plants (approximately 90,000 bricks per day) and involve significant consumption of materials. The predictions based on the constructed neural models have the advantage of reducing the consumption of materials, energy and time. Furthermore, the neural models are then introduced into an optimization procedure with the goal of minimizing the quantities of exhaust emissions.
The input data used for the neural models were: the mass percentage compositions of sawdust (S) and sunflower seed husks (SSH), the dry product mass (DPM) expressed in kg, and the amounts of clay (C), ash (A) and organic raw material (ORM) expressed in tons. The output parameters considered were the amounts of CO and CH4 present in the flue gases in the furnace chimney. Feed-forward neural networks with 6 inputs, one or two hidden layers with 6 to 30 neurons and one output were developed. The neural models with the best results in the training stage were tested in the validation stage on new series of experimental data (unseen data).
In the modeling stage, feed-forward neural networks with one or two hidden layers were tested, because in many applications it has been shown that this type of neural model provides satisfactory results as a universal approximator. The networks were developed via trial and error: in order to obtain the best topology, many variants were tested (neural networks with one or two hidden layers and different numbers of intermediate neurons), tracking the errors at the end of a relatively large number of training epochs.
The feed-forward multilayer perceptron neural networks were built using the NeuroSolutions commercial simulator produced by the company NeuroDimension. The backpropagation training algorithm was applied, since it is the most commonly used algorithm for training neural networks, and it aims to minimize the error via gradient descent. The transfer function used was the hyperbolic tangent. The experimental data were randomly ordered and divided into 87 instances used for training and 13 used for validation.
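Although the models in this study were built with the NeuroSolutions simulator, the computation performed by such a network is easy to illustrate. The following C# sketch shows the forward pass of an ANN(6:20:16:1) model with hyperbolic tangent activations; the weights are random placeholders standing in for the values fitted by backpropagation, and the input scaling is an assumption.

```csharp
using System;

class MlpSketch
{
    static readonly Random Rng = new Random(0);

    // One dense layer: y = tanh(W x + b).
    static double[] Layer(double[] x, double[,] w, double[] b)
    {
        var y = new double[b.Length];
        for (int i = 0; i < b.Length; i++)
        {
            double s = b[i];
            for (int j = 0; j < x.Length; j++) s += w[i, j] * x[j];
            y[i] = Math.Tanh(s);
        }
        return y;
    }

    // Placeholder weights; in the study these were fitted by backpropagation.
    static double[,] RandomWeights(int rows, int cols)
    {
        var w = new double[rows, cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                w[i, j] = Rng.NextDouble() - 0.5;
        return w;
    }

    static void Main()
    {
        // Six scaled inputs: SSH, S, DPM, C, A, ORM.
        double[] x = { 0.0, 0.5, 0.4, 0.7, 0.3, 0.1 };
        var h1 = Layer(x, RandomWeights(20, 6), new double[20]);   // hidden layer 1
        var h2 = Layer(h1, RandomWeights(16, 20), new double[16]); // hidden layer 2
        var y = Layer(h2, RandomWeights(1, 16), new double[1]);    // output (scaled CO)
        Console.WriteLine($"Predicted (scaled) CO output: {y[0]:F4}");
    }
}
```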

Optimization Methodology
The optimization problem aims to determine the percentage compositions of seed husks and sawdust and the amounts of clay, ash and organic raw materials to be used in the manufacturing mix so that the amounts of CO and CH4 discharged to the furnace chimney are minimal. The previously determined neural models were included in the optimization procedure.
The methodology implemented for modeling and optimizing the industrial process for obtaining bricks is presented in Figure 1.
For single-objective optimization, considering separate problems for minimizing CO and CH4, the following algorithms inspired by human and virus behaviors were tested: the Simple Human Learning Optimization Algorithm, Teaching-Learning-Based Optimization Algorithm, Social Learning Optimization, Football Game Algorithm, Volleyball Premier League Algorithm, Imperialist Competitive Algorithm, Viral System, Virulence Optimization Algorithm, Virus Colony Search and Virus Optimization Algorithm.
All the algorithms have original implementations in C#. They were included in a unified optimization framework with a flexible architecture that allows the loose coupling of the modules implementing various algorithms and optimization problems [43]. New algorithms and new optimization problems can easily be added because they are based on predefined interfaces and the main program only uses IProblem and IAlgorithm objects, respectively. This is one of the few optimization frameworks available for .NET (more specifically, .NET framework 4.7.2). The resulting software is a Windows-based, self-contained application where the user can experiment with different combinations of algorithms, problems and parameter values.
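The paper only names the IProblem and IAlgorithm contracts; a plausible C# sketch of what such interfaces might look like is given below, with all member signatures being assumptions for illustration, not the framework's actual API.

```csharp
// Plausible sketch of the framework contracts; only the names IProblem and
// IAlgorithm come from the text, the members are illustrative assumptions.
public interface IProblem
{
    int Dimensions { get; }
    double LowerBound(int i);
    double UpperBound(int i);
    double Evaluate(double[] candidate);   // objective value to minimize
}

public interface IAlgorithm
{
    string Name { get; }
    // Runs the search on the given problem and returns the best candidate found.
    double[] Optimize(IProblem problem, int maxEvaluations, int seed);
}
```

With such contracts, the main program can pair any algorithm with any problem without knowing their concrete types, which is the loose coupling described above.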
Because these algorithms are less widely known and applied, a short description of each is given below. We applied all of them and compared their performances.
The Simple Human Learning Optimization (SHLO) algorithm [12] is inspired by a human learning model in which three learning mechanisms, similar to those of humans, are addressed: random learning, individual learning and social learning. The random learning operator is used to mimic random events that may occur during learning. At the beginning of the learning process, one can expect the process to be random because people do not have previous experience of the problem. Moreover, during learning, people cannot fully replicate previous experiences due to forgetting or partial knowledge of information. The individual learning operator is based on the principle that people avoid mistakes by using their own experiences. Therefore, this type of learning can improve an individual's performance. The SHLO algorithm mimics individual learning through the use of an individual knowledge database (IKD). Each individual i in the population has an IKDi knowledge database that other individuals cannot access. However, in a social environment, people can effectively learn from shared experience [44,45]. The social learning operator attempts to eliminate the possible disadvantage of a slow and inefficient learning process of individual learning, especially for complicated problems. In the SHLO algorithm, a social knowledge database (SKD) is used to mimic learning from collective experience. The SKD database is used to store the best experience of the population. Although in SHLO there is no direct interaction between individuals, cooperative learning is indirectly achieved because (1) the SKD can be updated with better knowledge from better individuals and (2) each individual in the population can access the SKD database. An updating operation is also used in SHLO to update the IKD and SKD during the search.
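For illustration, the following minimal C# sketch shows the bit-update rule of SHLO with binary encoding, as in [12]: each bit of a new candidate comes from random learning, from the IKD or from the SKD. The probability thresholds Pr and Pi are illustrative assumptions, not the values used in this study.

```csharp
using System;

class ShloSketch
{
    const double Pr = 0.1;  // probability of random learning (assumed value)
    const double Pi = 0.85; // individual learning chosen for u in (Pr, Pi] (assumed)

    static int NextBit(Random rng, int ikdBit, int skdBit)
    {
        double u = rng.NextDouble();
        if (u <= Pr) return rng.Next(2); // random learning: a random bit
        if (u <= Pi) return ikdBit;      // individual learning from the IKD
        return skdBit;                   // social learning from the SKD
    }

    static void Main()
    {
        var rng = new Random(42);
        int[] ikd = { 1, 0, 1, 1, 0, 0 }; // individual's best knowledge
        int[] skd = { 1, 1, 0, 1, 0, 1 }; // population's best knowledge
        var candidate = new int[ikd.Length];
        for (int d = 0; d < candidate.Length; d++)
            candidate[d] = NextBit(rng, ikd[d], skd[d]);
        Console.WriteLine(string.Join("", candidate));
    }
}
```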
The Teaching-Learning-Based Optimization (TLBO) algorithm proposed by Rao et al. [14,46-48] is inspired by the process of teaching and learning. The basic principle of this process is the effect that a teacher has on the results obtained by a class of learners. Learning based on cooperation between learners is also considered. Therefore, the algorithm has two learning phases known as the teacher phase and the learner phase. TLBO is a population-based algorithm with the following analogies: the class of students represents the population, the marks of students for different subjects taught are correlated with the values of the variables of the solution in the optimization problem and the result of a student represents the fitness of the individual. In the teacher phase, the best student in the class is chosen by the algorithm to have the role of teacher. The goal of the teacher is to influence the marks of the other learners in each subject. However, the teacher has a certain level of knowledge for each subject taught. The teacher's level of knowledge in a particular subject is given by the difference between the teacher's mark and the average mark of the class in that subject. When the teacher has a better level of knowledge, he or she has a more significant influence on the marks of each learner. The teacher always influences the marks of all learners in the class, but a learner accepts the new knowledge only if it gives a better result than the previous one. In the second phase of TLBO, the learners attempt to improve their knowledge by interacting with one another. When a learner has superior knowledge, this can help a peer to improve their own knowledge. The interactions between learners are random, and each learner in the class can interact with any other learner [49]. When two learners interact, their results identify which one is better, while the difference between their marks influences the amount of knowledge transferred.
The TLBO algorithm has been tested on many constrained and unconstrained optimization problems in various fields of engineering [50].
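As an illustration of the two TLBO phases, the following minimal C# sketch applies the standard teacher-phase and learner-phase update rules with greedy acceptance to a toy objective; the sphere function and the specific values follow the common formulation of TLBO [14], not the settings used in this paper.

```csharp
using System;

class TlboSketch
{
    static readonly Random Rng = new Random(1);

    // Toy objective (sphere function), to be minimized.
    static double F(double[] x) { double s = 0; foreach (var v in x) s += v * v; return s; }

    // Teacher phase: move a learner toward the teacher and away from the class mean.
    static double[] TeacherPhase(double[] x, double[] teacher, double[] mean)
    {
        int tf = 1 + Rng.Next(2);              // teaching factor, 1 or 2
        var y = (double[])x.Clone();
        for (int d = 0; d < x.Length; d++)
            y[d] = x[d] + Rng.NextDouble() * (teacher[d] - tf * mean[d]);
        return F(y) < F(x) ? y : x;            // greedy acceptance
    }

    // Learner phase: learn from a randomly chosen peer xj.
    static double[] LearnerPhase(double[] xi, double[] xj)
    {
        var y = (double[])xi.Clone();
        double sign = F(xi) < F(xj) ? 1.0 : -1.0;  // move toward the better of the two
        for (int d = 0; d < xi.Length; d++)
            y[d] = xi[d] + sign * Rng.NextDouble() * (xi[d] - xj[d]);
        return F(y) < F(xi) ? y : xi;
    }

    static void Main()
    {
        double[] x = { 2.0, -1.0 }, teacher = { 0.1, 0.0 }, mean = { 1.0, -0.5 };
        x = TeacherPhase(x, teacher, mean);
        x = LearnerPhase(x, new double[] { 0.5, 0.5 });
        Console.WriteLine($"f(x) = {F(x):F4}");
    }
}
```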
The Social Learning Optimization (SLO) algorithm [17] is based on the idea that intelligence in humans is determined both by genes and by social and cultural influence. It can be improved by learning and by acquiring new knowledge from participating in collective actions. When enough knowledge has been accumulated, a form of culture is established, and this can accelerate the development of intelligence. SLO includes three co-evolution spaces, organized by layers. The first (bottom) layer is the micro-space that supports individual evolution, which is in fact based on genetic evolution. SLO does not impose any constraints on the type of evolution and does not suggest a particular algorithm; for example, differential evolution can be used here. The next layer is the learning space that fosters imitation learning and observational learning by individuals, with the goal of improving their intelligence by learning from peers. The third (top) layer is the belief space, where knowledge extracted from the middle layer is conveyed. In this space, the accumulation of knowledge creates culture, which is further used to guide the individual genetic evolution in the micro-space. The purpose of the top layer is to simulate the phenomenon by which culture can accelerate the speed of evolution of human intelligence. In the upper co-evolution spaces, the SLO algorithm uses a library of knowledge points. These knowledge points are potential solutions, and better solutions can be found in their vicinity. The best individuals in the learning space replace the poor individuals in the belief space, and those in the belief space replace the poor individuals in the micro-space.
The Football Game Algorithm (FGA) [23] is based on an idealized version of the football game. The initial population defines the initial team of players on the field. Then, each player moves around their last position with a random walk procedure combined with a motion towards the ball. The ball is passed between players, and the players in better positions, i.e., with lower values of the objective function, are more likely to receive the ball. The part related to the coach represents the local search aspect of the algorithm. In order to increase the solution quality, a hypersphere, whose radius decreases as the algorithm evolves, is considered around the nearest best position in the vicinity of a player. Members farther from good solutions are pushed toward the closest good positions. The coach may also employ the change option to replace weaker players with other players around the closest good position, depending on the coach's memory.
The Volleyball Premier League (VPL) algorithm [25] is inspired by the interactions between volleyball teams during a competition season, along with the coach's decisions during a match. The members of a team are the players, the reserves and the coach. The representation of the solution has two segments called the active and passive parts. The active part, represented by the actual players of the team, includes six active players. The fitness of each solution is calculated based on this part. The passive part contains certain variables used in special rules such as the strategy for changing players, where a replacement can occupy the position of a player who was removed at the decision of the coach. In the VPL, the term "league" is used to represent the concept of a population. The teams play against each other and the winning team in each match is determined based on a power index. Furthermore, several strategies are employed for knowledge sharing, similar to crossovers, and for learning by exploiting the best solutions from the team population. Changes may also occur between active formations and passive reserves. Additionally, there is a mechanism for exchanging players between teams, as well as a mechanism for promoting and relegating teams from the league.
The Imperialist Competitive Algorithm (ICA) [26] is a metaheuristic inspired by sociopolitical behaviors, based on the phenomena of imperialism and colonialism [28]. An empire consists of an imperialist and one or more colonies. An imperialist is a developed country that attempts to expand its power by spreading its cultural values to other less developed countries called colonies. As the colonies adopt the cultural values of an imperialist, they are said to be assimilated or gradually absorbed by that imperialist. The existence of several empires implies the existence of competition for power between these empires. The ICA is a population-based algorithm with the following analogies: a country represents an individual, the socio-political elements of a country are the variables of the solution and the cost of a country represents the fitness of the individual. The set of countries, i.e., the population of individuals, is divided into several empires that compete for power. The power of an empire is determined using the cost of its imperialist and a fraction of the cost of its colonies. The following main steps can describe the ICA algorithm: assimilation, revolution, intra-empire competition and inter-empire competition. The assimilation operation is applied within each empire and imitates the effect of imperialist influence on the colonies. When assimilation is completed, the colonies of an empire will be closer to their imperialist. The revolution operation imitates the resistance of some colonies to be absorbed by their imperialists. Revolution in ICA is similar to the mutation operation of a genetic algorithm, and it randomly changes the positions of certain colonies. After the operations of assimilation and revolution, intra-empire competition follows. In this type of competition, the imperialist of an empire exchanges positions with one of the colonies if that colony has a better cost than that of the imperialist. Thus, a colony can become the new imperialist. In the last step, inter-empire competition imitates the competition between empires. Weaker empires will gradually lose their colonies to stronger empires until they collapse, and the ideal convergence of the search process is identified when only one empire remains.
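To make the assimilation and intra-empire competition steps concrete, here is a minimal C# sketch under stated assumptions: beta = 2 is a commonly used assimilation coefficient (not specified in this paper), and the objective function is a toy placeholder.

```csharp
using System;
using System.Linq;

class IcaSketch
{
    static readonly Random Rng = new Random(7);

    // Toy cost function for a country (lower is better).
    static double F(double[] x) => x.Sum(v => v * v);

    // Assimilation: a colony drifts toward its imperialist.
    static void Assimilate(double[] colony, double[] imperialist, double beta = 2.0)
    {
        for (int d = 0; d < colony.Length; d++)
            colony[d] += beta * Rng.NextDouble() * (imperialist[d] - colony[d]);
    }

    static void Main()
    {
        var imperialist = new double[] { 0.2, -0.1 };
        var colonies = new[] { new double[] { 1.5, 2.0 }, new double[] { -0.05, 0.0 } };

        foreach (var c in colonies) Assimilate(c, imperialist);

        // Intra-empire competition: swap roles if a colony beats the imperialist.
        var best = colonies.OrderBy(F).First();
        if (F(best) < F(imperialist))
            (imperialist, best) = (best, imperialist);

        Console.WriteLine($"Imperialist cost: {F(imperialist):F4}");
    }
}
```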
The Viral System (VS) [35] is an optimization method based on an analogy of how a biological system reacts to a viral infection. The population of potential solutions is considered to be an organism composed of multiple cells. When a cell is infected, the virus starts replicating nucleus capsids inside the cell body. Once a threshold is reached and sufficient capsids have been generated, the virus is able to use the cell's DNA to replicate itself, at which point the cell is destroyed and the resulting viruses can spread to other cells in its neighborhood. The number of nucleus capsids increases over multiple iterations to a binomially distributed amount. Fitter solutions have higher upper thresholds for the number of capsids required for the infection to manifest. Furthermore, the organism may develop an antigenic response, in which case fitter cells may resist the infection altogether. The virus replication process may be either lytic, when the infection manifests suddenly, or lysogenic, when the virus lies dormant until triggered. The duration of the lysogenic cycle is determined from binomially distributed variables, with healthier cells having longer thresholds in terms of the accumulated delay until the infection starts to manifest. Consequently, fitter cells have a higher chance to resist the infection and longer delays until the manifestation or replication of the virus, while less fit cells are removed from the population more easily and contribute more to the spread of the infection.
The Virulence Optimization Algorithm (VOA1) [36] is based on the behavior of viruses when attempting to spread through a host organism. The solution population is composed of both cell and virus instances, and the approach is based on the tendency of the viruses to seek regions of the problem space where there are more resources, i.e., fitter cells, which offer more potential for replication. For each generation of potential solutions, the cell and virus populations are clustered together via k-means. After a mutation and crossover phase, the viruses migrate through the problem space by moving toward the best member of the best cluster. In an attempt to better generalize the algorithm, the viruses are translated toward the best solution only part-way, along a percentage of the total distance, while the translation direction is deviated by a certain angle. Of the resulting virus population, the best viruses are cloned and the least fit ones are removed. Repeated clustering, migration and selection of the viruses eventually cause the population to converge towards a large, dense cluster containing the fittest solution.
Similar to the VOA1 algorithm, the Virus Colony Search (VCS) algorithm [37,43] simulates how viruses survive and propagate by attacking living cells. Although the terminologies used by the authors of VOA1 and VCS are different, the general principles are the same. The major difference between the two algorithms is given by an additional step present in the VCS (virus diffusion) and by the way in which the common steps are implemented. The implementation of the VCS algorithm is based on a set of five rules: (a) two groups are simulated, the colony of viruses and the colony of host cells; (b) during the diffusion process, each virus randomly generates a new individual; (c) each virus infects a single cell; (d) the reproduction of each virus is based on the destruction of the host cell; (e) after the application of the immune response process, only some of the best individuals remain in the population. Thus, the steps taken by the VCS algorithm are:

1. Initialization. Similar to VOA1, this step is designed to generate the initial population. In the VCS, this is achieved through a random sampling of the search space;
2. Virus diffusion. This step simulates the process by which a virus searches for a host cell. The mechanism used in the VCS is based on a Gaussian random walk: V_pop,i = Gaussian(G_gbest, τ) + (r1 · G_gbest − r2 · V_pop,i), where i is the index of the current individual, V_pop,i is the newly created individual, G_gbest is the best individual of generation g, and r1 and r2 are random values generated in the [0, 1] interval. The standard deviation τ of the Gaussian distribution is computed using the relation τ = (log(g)/g) · (V_pop,i − G_gbest);
3. Host cell infection. After the cell has been attacked, the viruses begin to multiply using its resources. This process is simulated using the CMA-ES algorithm and consists of three sub-stages: the host cell colony changes relative to the individual represented by the arithmetic mean of the virus population; the best individuals in the virus population are identified and their center is calculated; the parameters for the calculation of the center and the applied covariance matrix are updated;
4. Immune response. At this stage, viruses evolve and the best-performing ones are selected for the next generation. Evolution is carried out on the basis of a performance ranking.
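A minimal C# sketch of the virus diffusion step is given below, assuming the Gaussian random walk relation reconstructed above; the Box-Muller transform is used only because System.Random does not provide Gaussian samples, and the test values are illustrative.

```csharp
using System;

class VcsDiffusionSketch
{
    static readonly Random Rng = new Random(3);

    // Standard normal sample via the Box-Muller transform.
    static double Gaussian(double mean, double stdDev)
    {
        double u1 = 1.0 - Rng.NextDouble(), u2 = Rng.NextDouble();
        double z = Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Cos(2.0 * Math.PI * u2);
        return mean + stdDev * z;
    }

    // Virus diffusion: a Gaussian random walk around the generation best,
    // with a step size that shrinks as log(g)/g over the generations g.
    static double[] Diffuse(double[] vPop, double[] gBest, int g)
    {
        var result = new double[vPop.Length];
        for (int d = 0; d < vPop.Length; d++)
        {
            double tau = Math.Abs(Math.Log(g) / g * (vPop[d] - gBest[d]));
            double r1 = Rng.NextDouble(), r2 = Rng.NextDouble();
            result[d] = Gaussian(gBest[d], tau) + (r1 * gBest[d] - r2 * vPop[d]);
        }
        return result;
    }

    static void Main()
    {
        var x = Diffuse(new double[] { 1.0, 2.0 }, new double[] { 0.5, 0.5 }, g: 10);
        Console.WriteLine($"{x[0]:F3}, {x[1]:F3}");
    }
}
```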
The Virus Optimization Algorithm (VOA2) is also a population-based metaheuristic algorithm that simulates the attacking behavior of viruses [41]. The algorithm has three main steps: (1) initialization; (2) replication; (3) updating and selection.

1. Initialization. Like other metaheuristics, the initialization consists of generating the initial population of individuals. In this step, the control parameters are also set; in the initial version of the algorithm, a three-level factorial design is used to determine them [41];
2. Replication. In this step, new individuals are created. This is performed using the common and strong members of the population and is based on two sub-steps, classification and replication. In the classification sub-step, the best individuals (which correspond to the strong members group) and the rest (which correspond to the common members group) are identified. In the replication sub-step, new strong and common individuals, determined through Equations (2) and (3), are added to the population, where nv indicates the new virus, sv refers to a strong virus and cv to a common virus. In the case of strong viruses, the replication is directed by an intensity parameter that is modified in the updating and selection step of the algorithm if the right conditions occur;
3. Updating and selection. Two sub-steps are encountered here: (i) updating of the exploitation mechanisms; (ii) population maintenance. In the first sub-step, the population convergence and the algorithm evolution are checked; if the average performance did not improve, the exploitation is intensified. In population maintenance, the population undergoes a reduction phase: if the number of viruses is higher than 1000 (a value based on a relation encountered in nature, where the average size of a virus is usually around 1/1000 that of a cell), then the worst individuals are eliminated.
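The population-maintenance logic lends itself to a short sketch. The following C# fragment assumes a toy objective and a simple list-based population; only the 1000-virus cap and the intensification-on-stagnation rule come from the description above.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class VoaMaintenanceSketch
{
    const int MaxPopulation = 1000;

    static double F(double[] x) => x.Sum(v => v * v); // toy objective, minimized

    static void UpdateAndSelect(List<double[]> viruses,
                                ref double intensity, ref double lastAvg)
    {
        double avg = viruses.Average(F);
        if (avg >= lastAvg) intensity += 1;  // no improvement: intensify exploitation
        lastAvg = avg;

        if (viruses.Count > MaxPopulation)   // keep only the best 1000 viruses
        {
            viruses.Sort((a, b) => F(a).CompareTo(F(b)));
            viruses.RemoveRange(MaxPopulation, viruses.Count - MaxPopulation);
        }
    }

    static void Main()
    {
        var rng = new Random(5);
        var pop = Enumerable.Range(0, 1200)
            .Select(_ => new double[] { rng.NextDouble(), rng.NextDouble() })
            .ToList();
        double intensity = 1, lastAvg = double.MaxValue;
        UpdateAndSelect(pop, ref intensity, ref lastAvg);
        Console.WriteLine($"Population: {pop.Count}, intensity: {intensity}");
    }
}
```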

Neural Network Modeling
The evaluation of the topology of the artificial neural networks (ANN) was performed by testing the performances of several networks with 6 inputs (the percentage compositions of sawdust and sunflower seed husks, the dry product mass, and the amounts of clay, ash and organic raw material), one or two intermediate layers with 6 to 30 hidden neurons and one output predicting the amount of CO or CH4, respectively, discharged to the furnace chimney. The selection criteria for the best topology were the mean square error (MSE), the coefficient of determination (r2) and the percent error Ep (%). The topology of the neural networks is coded as ANN(m:n:p) or ANN(m:n1:n2:p), where m represents the number of nodes in the input layer, n (or n1 and n2) is the number of neurons in the hidden layer(s) and p is the number of neurons in the output layer.
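For clarity, the following C# sketch computes the three selection criteria on a few illustrative values; the percent-error formula shown (mean absolute percent error) is an assumption, since the text does not spell out its exact definition.

```csharp
using System;
using System.Linq;

class CriteriaSketch
{
    static void Main()
    {
        double[] y    = { 410, 395, 430, 402 };   // measured CO, mg/m3 (illustrative)
        double[] yHat = { 405, 401, 424, 399 };   // network predictions (illustrative)
        int n = y.Length;

        // Mean square error.
        double mse = y.Zip(yHat, (a, b) => (a - b) * (a - b)).Sum() / n;

        // Coefficient of determination r2 = 1 - SSres / SStot.
        double mean = y.Average();
        double ssRes = y.Zip(yHat, (a, b) => (a - b) * (a - b)).Sum();
        double ssTot = y.Sum(a => (a - mean) * (a - mean));
        double r2 = 1.0 - ssRes / ssTot;

        // Percent error (mean absolute percent error, assumed definition).
        double ep = 100.0 / n * y.Zip(yHat, (a, b) => Math.Abs(a - b) / a).Sum();

        Console.WriteLine($"MSE = {mse:F2}, r2 = {r2:F3}, Ep = {ep:F2}%");
    }
}
```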
In order to ensure the good generalization capability of the networks, they were trained while following the evolution of the MSE error on the validation set. Although the exact values differed from network to network, it was empirically found that after about 80,000 epochs for CO and about 50,000 epochs for CH4 the performance no longer improved, and the network training was stopped at that point. Table 1 presents the topologies of the neural models constructed for predicting the amount of CO, together with the selection criteria and the time required to obtain them. The best performance in the training stage was obtained with the ANN(6:20:16:1) model, for which the mean square deviation calculated for the training stage was ±34.4 mg/m3. The results obtained with the ANN(6:20:16:1) model in the validation stage are shown in Figure 2.
Neural models constructed to predict the amount of CH4 present in the flue gases in the furnace chimney, together with their performance in the training stage, are presented in Table 2. The results indicate that the best performance for CH4 is obtained with the ANN(6:30:18:1) model. The mean square deviation calculated for CH4 is ±43.8 mg/m3, slightly higher than that obtained for CO.

Neural models built to assess the impacts of the addition of auxiliary materials (sawdust and sunflower seed husks) on the amounts of exhaust gases in the furnace chimney of an industrial brick-making plant offer the possibility of making predictions that can help to reduce the number of test batches, with significant savings in time and money. Even if the mean square deviations are slightly larger than what other authors report in the literature [51-56], it must be taken into account that the modeled data are obtained experimentally in an industrial installation where the flow of gases discharged to the furnace chimney is about 40,000 m3/h. In order to evaluate the influence of composition and density on the compressive strength of AAC lightweight brick, Zulkifli et al. [51] obtained a correlation coefficient of 0.984 in the training stage for neural models with a feed-forward backpropagation architecture and the Levenberg-Marquardt training algorithm. Recently, Shaban et al. [54] proposed an adaptive neuro-fuzzy inference system (ANFIS) with particle swarm optimization (PSO) to predict the compressive strength of concrete aggregate bricks (BACs). The correlation coefficient obtained in the training stage was 0.955. The importance of the study by Shaban et al. [54] derives from the fact that the built models can help reduce demolition and construction (D&C) wastes and support efficient construction management.
Members of our research group obtained correlation coefficients higher than 0.999 in the training stage for the neural models with feed-forward backpropagation architecture when modeling the thermal stability of some materials [56].

Single-Objective Optimization
Two optimization problems were formulated to determine the percentage compositions of seed husks and sawdust and the amounts of clay, ash and organic raw materials to be used in the manufacturing mix so that the amounts of CO and CH4 discharged to the furnace chimney are minimized. Optimization algorithms inspired by human and virus behavior were used.
Thus, the optimization procedure has the following characteristic elements (a sketch of how such a problem can be posed follows the list):
• The objective function is represented by the amount of CO or CH4 discharged to the furnace chimney (these are distinct problems, meaning the optimization process has a single objective);
• The decision variables are the inputs of the neural network, i.e., the percentage compositions of seed husks and sawdust, the dry product mass, as well as the amounts of clay, ash and organic raw materials, respectively;
• The optimization process included the best models that had been previously determined, i.e., ANN(6:30:18:1) for CH4 and ANN(6:20:16:1) for CO;
• The purpose of the optimization process was to determine the working conditions (the values of the six inputs of the neural networks) that lead to the minimum amount of exhaust gas.
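To show how such a single-objective problem can be expressed in the framework sketched earlier, the following C# fragment poses CO minimization over the six decision variables. PredictCO is a placeholder for the trained ANN(6:20:16:1) forward pass, and the variable bounds are illustrative assumptions, not the exact experimental ranges.

```csharp
// Sketch of the CO minimization problem, mirroring the IProblem contract
// sketched earlier; bounds and the prediction function are placeholders.
public class CoEmissionProblem
{
    // x = { SSH %, S %, DPM kg, C t, A t, ORM t }
    static readonly double[] Lower = { 0.0, 0.0, 10.0, 400.0, 0.0, 0.0 };
    static readonly double[] Upper = { 3.5, 3.5, 20.0, 800.0, 200.0, 30.0 };

    public int Dimensions => 6;
    public double LowerBound(int i) => Lower[i];
    public double UpperBound(int i) => Upper[i];

    // Objective: predicted CO in the flue gas, to be minimized.
    public double Evaluate(double[] x) => PredictCO(x);

    // Stand-in for the trained neural model's forward pass.
    static double PredictCO(double[] x) => 400.0 + 10.0 * x[0] - 5.0 * x[1];
}
```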
The results obtained for optimizing the amount of CO discharged to the furnace chimney are presented in Table 3; the runtime is expressed in milliseconds (ms). In this table, for each algorithm (column 1), the best solution is specified (column 2), together with the corresponding values of the 6 decision variables and the value of the objective function, i.e., the amount of CO that was subjected to minimization. Additionally, columns 4 and 5 present the performance recorded by each algorithm over 100 simulations.

Table 3. Results obtained with biologically inspired optimization algorithms for CO (mg/m3).

Excluding the algorithms whose input values are rather far from those provided by the majority, and taking the average of the input values provided by the majority of algorithms, it follows that a manufacturing mix composed of 0% sunflower seed husks, 0.5% sawdust, 14.4 kg of dry product, 729.6 tons of clay, 133.4 tons of ash and 8.9 tons of organic raw materials leads to the minimum amount of CO being discharged into the furnace chimney following the combustion process.
The results obtained for the second optimization problem, i.e., the input data necessary so that the flow of CH4 discharged to the furnace chimney is minimal, are presented in Table 4. The best results were obtained with the Simple Human Learning Optimization Algorithm, the Teaching-Learning-Based Optimization Algorithm and the Imperialist Competitive Algorithm. The result that does not contain inputs close to their extreme values indicates that a minimum amount of CH4 can be obtained if 1.9% sunflower seed husks, 0.8% sawdust, 14.9 kg of dry product, 510.8 tons of clay, 18.2 tons of ash and 26.2 tons of organic raw materials are used in the manufacturing mix.

The results in Tables 3 and 4 depend on the values of the parameters of each algorithm, which are included in Appendix A (Table A1). Different values would naturally lead to different results.

Figure 4 presents an overview of the performance of the optimization algorithms in terms of best solution quality and execution time. However, the scales of these two aspects are very different: the best solutions are very close for all algorithms, while the runtimes vary greatly. Therefore, some artificial performance indicators were computed independently for each component. For solution quality, these are based on the ratio between the solution provided by an algorithm and the minimum (best) solution found by all algorithms. The runtime indicators are based on the ratios between the logarithms of the actual execution times. Thus, the graph gives an intuitive view of solution quality and runtime: the best values are those close to 1, and smaller values indicate lower quality.

The following explanations will clarify the significance of Figure 4, highlighting the accuracy and usefulness of the results obtained, as well as some of the advantages and disadvantages of the applied methods. Four rectangles appear next to each algorithm: the first two reflect the quality of the results (CO and CH4) and the next two reflect the execution time. The ideal case corresponds to the situation in which very good results are obtained over a short execution time. Figure 4 shows that satisfactory results are accompanied by relatively long execution times. The best situation corresponds to the SLO algorithm, for which the result indicators equal 1, while those for the running time are at values of 0.6-0.7. This is an acceptable compromise: very good results are obtained at the cost of a relatively long running time.
We also have good results, reflected by values of 1 for CO and CH4, provided by SHLO, TLBO and ICA, but with longer execution times (lower values for the following two bars). The figure also shows unacceptable results for CO and CH4 given by VPL, VS and VOA1, and short execution times (values of 1 for bars 3 and 4) corresponding to the VOA2 and VCS algorithms.
By analyzing Figure 4, the main conclusion is that most optimization results are satisfactory from a technological point of view.
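A short C# sketch of how such indicators can be computed from the raw results is given below; the input values are illustrative, and the exact normalization used for Figure 4 may differ from this reading of the description.

```csharp
using System;
using System.Linq;

class IndicatorSketch
{
    static void Main()
    {
        double[] solutions = { 398.2, 398.2, 405.7 };  // illustrative CO minima
        double[] runtimesMs = { 85000, 120000, 900 };  // illustrative runtimes

        double bestSol = solutions.Min();
        double minLogT = runtimesMs.Select(t => Math.Log(t)).Min();

        for (int i = 0; i < solutions.Length; i++)
        {
            // Solution quality: ratio to the best solution (1 = best algorithm).
            double quality = bestSol / solutions[i];
            // Runtime indicator: ratio between logarithms of times (1 = fastest).
            double speed = minLogT / Math.Log(runtimesMs[i]);
            Console.WriteLine($"algorithm {i}: quality {quality:F3}, speed {speed:F3}");
        }
    }
}
```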

Comparison with Classic Population-Based Algorithms
Although promising, the use of the algorithms applied so far is not yet widespread in the optimization community. This is why we also include a comparison with well-established algorithms such as the classic real-valued Genetic Algorithm (GA), Differential Evolution (DE) and Particle Swarm Optimization (PSO). The results obtained for different configurations for the two optimization problems addressed in our work are given in Tables 5 and 6.

The main parameters that influence the execution speed and solution quality are the population size and the number of generations or iterations. Other parameters have their own influence, but in order to keep the number of experiments manageable, they were given typical values. Moreover, in our case the difference in solution quality was not large, while the runtime mainly depends on the number of individuals generated throughout the execution of the algorithm. Thus, for the GA we used tournament selection with two individuals (chromosomes), elitism with one individual, arithmetic crossover with 0.9 probability and mutation by gene resetting with 0.1 probability. For DE, the amplification factor was 0.8 and the crossover rate was 0.9. For PSO, we used the global best method, whereby the inertia weight was set to 0.729 and the cognitive and social coefficients were both set to 1.494.
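For reference, here is a minimal C# sketch of the global-best PSO update with the coefficients listed above; the starting point, personal best and global best values are illustrative.

```csharp
using System;

class PsoSketch
{
    // Coefficients as stated above: inertia 0.729, cognitive/social 1.494.
    const double W = 0.729, C1 = 1.494, C2 = 1.494;
    static readonly Random Rng = new Random(11);

    // One velocity-and-position update for a single particle.
    static void Step(double[] x, double[] v, double[] pBest, double[] gBest)
    {
        for (int d = 0; d < x.Length; d++)
        {
            v[d] = W * v[d]
                 + C1 * Rng.NextDouble() * (pBest[d] - x[d])
                 + C2 * Rng.NextDouble() * (gBest[d] - x[d]);
            x[d] += v[d];
        }
    }

    static void Main()
    {
        double[] x = { 1.0, -2.0 }, v = { 0.0, 0.0 };
        double[] pBest = { 0.5, -1.0 }, gBest = { 0.0, 0.0 };
        Step(x, v, pBest, gBest);
        Console.WriteLine($"x = ({x[0]:F3}, {x[1]:F3})");
    }
}
```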
The results show that the runtime greatly depends on the combination of parameters, although the algorithms obtain the optimal solutions most of the time. When the parameters considered have lower values, thereby decreasing the runtime, the solution may be suboptimal but still good.
However, some biologically inspired algorithms, e.g., Virus Colony Search or Virus Optimization Algorithm, are much faster, even by two orders of magnitude, and the quality of their results is still very good.
It must also be mentioned that the execution time of the algorithms can be further reduced by parallelizing some operations. Since some algorithms are easier to parallelize than others, in order to allow a fair comparison all of the implementations were sequential.

Discussion
The main goal of this approach was to develop a complex, efficient methodology based on artificial intelligence tools for modeling and simulation. The methodology was applied to an industrial process, i.e., obtaining bricks from materials with different added ingredients, with the aim of streamlining the process. In this respect, artificial neural networks proved to be good models. Thus, for the prediction of CO emissions, the best model was ANN(6:20:16:1), with MSE = 0.0128, r2 = 0.959 and Ep = 2.19%, while for CH4, the ANN(6:30:18:1) model gave MSE = 0.0085, r2 = 0.973 and Ep = 9.59%. The neural models were integrated into an optimization procedure solved with different algorithms inspired by human behavior (learning, cooperation and competition) and virus behavior. For the same value of the objective function, the algorithms give different results for the 6 decision variables, which is a real advantage in practice, allowing the user to choose the most convenient solution.
A comparison between the performances of the optimization algorithms, graphically illustrated in Figure 4, highlights the algorithms that obtained the best results regarding the CO and CH4 values (SLO, SHLO, TLBO, ICA) or the shortest execution times (VOA2, VCS).
Most solutions are of the same quality as the solutions found by other commonly used population-based algorithms such as GA, DE and PSO.
Regarding the advantages and disadvantages of the applied algorithms, it must be emphasized that an optimization algorithm depends very much on the specifics of the approached problem. This was also the reason why in this approach we used 10 algorithms, tracking the quality of the results and execution times. Obviously, the accessibility of the method can be added to the aspects discussed. However, once the implementation is complete, in the version with the user-friendly interface, the handling of the program is no longer a disadvantage. In addition, it can be easily adapted to other processes and systems.
The optimization problems solved with algorithms inspired by the behavior of viruses indicate that the addition of sunflower seed husks and sawdust to the manufacturing mix contributes to increasing the amount of CO in the exhaust gases discharged to the furnace chimney. However, the heat generated during the burning of sunflower seed husks and sawdust can supply the heat required during the manufacturing process, thereby reducing energy consumption. In a laboratory-scale study, Ibrahim and others [57] established that the use of sawdust as an alternative to clay in producing bricks helps to reduce energy consumption. This aspect was also highlighted by Kurmus and Mohajerani [58] for bricks incorporating 1% waste (cigarette butts), whereby energy savings of at least 8% can be achieved. Sani and Nzihou established that the introduction of 4% olive core flour in the brick manufacturing mix leads to a 36% reduction in energy consumption [1]. Other recent studies [59,60] have shown the efficiency of using waste in the manufacture of bricks.

Conclusions
The industrial process of obtaining bricks was studied here, first experimentally by evaluating the influence of adding sunflower seed husks and sawdust on the exhaust emissions resulting from the manufacturing process. Then, using experimental data sets, artificial intelligence tools were developed and applied for modeling and optimization actions.
In the modeling step, neural network models were determined and used to make predictions about the changes in the quantity of exhaust emissions when different percentages of auxiliary materials were introduced into the manufacturing mix, thereby helping to reduce the number of experimental tests, which has a significant economic impact.
Feed-forward neural networks were developed in various configurations, with 6 input variables (the percentage compositions of sawdust and sunflower seed husks, the dry product mass, and the amounts of clay, ash and organic raw materials), one or two intermediate layers and a single output variable (the amount of CO or CH4, respectively, present in the flue gases in the furnace chimney). The best models, selected using the mean square error, coefficient of determination and percent error, were ANN(6:20:16:1) for CO prediction and ANN(6:30:18:1) for CH4 prediction.
The best neural networks were included in an optimization procedure designed to minimize the gas emissions. Algorithms from three categories, inspired by human learning and cooperation behaviors, human competitive behavior and virus behavior, were applied comparatively to provide the working conditions associated with the minimum exhaust emissions.
An overview related to the performance of the optimization algorithms in terms of the best solution quality and execution time was provided based on certain artificial performance indicators computed independently for each component. The main conclusion was that most optimization results were acceptable from a technological point of view.
The optimization results indicated that the addition of sunflower seed husks and sawdust to the manufacturing mix contributes to increasing the amount of CO in the exhaust gases in the furnace chimney. However, the heat generated during the burning of sunflower seed husks and sawdust can supply the heat required during the manufacturing process, thereby reducing the energy consumption.
In addition to these good results, an important outcome of this approach is the simulation methodology itself, including the neural networks and the biologically inspired optimization algorithms, which provide satisfactory results. Furthermore, the methodology developed here is generalizable and flexible, meaning it can be easily adapted to other processes, in association with different types of models.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A
In this section, the parameter values of the algorithms whose results are included in Tables 3 and 4 are succinctly presented in Table A1.

Table A1. The parameter values of the algorithms considered for the optimization of CO and CH4.

Optimization Algorithm | Parameters and Values
Virus Optimization Algorithm | number of iterations = 10; population length = 20; strong members = 5; strong growth rate = 10; common growth rate = 2; intensity = 1