An Optimized Neural Network Acoustic Model for Porous Hemp Plastic Composite Sound-Absorbing Board

Abstract: Current acoustic modeling methods suffer from problems such as complex processes or inaccurate sound absorption coefficients. Therefore, this paper addresses these issues. Firstly, material samples were prepared and standing wave tube experiments were conducted to obtain material acoustic data, and a model combining an improved genetic algorithm with a neural network was proposed. Secondly, the acoustic data obtained from the experiment were analyzed; a neural network structure was designed; and the training, verification, and test data were divided. In order to facilitate data processing, a symmetric method was used to normalize all the data. Thirdly, through the design of a real coding scheme, fitness function, and crossover and mutation operators, an improved genetic algorithm was proposed to obtain the optimal solution, used as the initial weights and thresholds, which were then input into the neural network along with the training and verification data. Finally, the test data were input into the trained neural network in order to test the model. The test results and statistical analysis showed that, compared with other algorithms, the proposed model has the lowest root mean squared error (RMSE), the highest coefficient of determination (R2), and the shortest convergence time.


Introduction
Noise pollution has always been an issue [1][2][3][4] in fields such as architecture, transportation, and construction machinery, and many people have now recognized it as a serious problem. Directly using a sound-absorbing medium is an effective method of controlling an increasing variety and number of noise sources. In this method, noise waves pass through a sound absorber containing a series of energy conversion technologies, and any sound energy is subsequently transferred into other forms. Usually, sound absorbers used in engineering are categorized as either resonant or porous sound-absorption structures. Porous sound-absorption structures absorb sound energy through the internal structure of the material itself. These include porous metal materials, glass wool, wood fiber board, and foamed plastics, all of which are effective sound-absorption porous materials [5][6][7]. These are the main source of sound-absorbing materials currently used in noise control.
As a porous fiber material [8,9], hemp has good properties, such as sound wave dissipation, moisture absorption and permeability, fineness and softness, and corrosion resistance, but its strength is low, so it cannot be applied in harsh environments such as high temperature and strong airflow. In order to improve the strength of hemp, hemp-based composites have been widely studied [10][11][12][13][14], but most of these studies focus on the preparation methods of the materials rather than on their acoustic modeling.

Literature Review
Recently, some studies have been undertaken on porous material acoustic modeling. Using the Delany-Bazley-Miki (DBM) and Johnson-Champoux-Allard (JCA) models, Ramamoorthy et al. [9] analyzed the sound absorption performance of a double-layer, porous, shielded sound-absorption system supported by a rigid cut-off wall. Subsequently, they produced a program for multilayer systems based on a steady-state genetic algorithm and calculated the relevant acoustic indexes with transfer matrix method (TMM) and finite transfer matrix method (FTMM) solvers. Based on Biot theory and different interfacial layered media boundary conditions, Bai et al. [10] established a theoretical multi-layer composite media acoustic model supporting a rigid wall structure, whose calculated normal incidence absorption coefficient was in good agreement with the measured results. Mareze et al. [11] proposed a cellular porous medium model based on microscopic images, consisting of a two-dimensional network of pipes comprised of simple one-dimensional acoustic cylinders or pores. Sun et al. [12] reconstructed the spatial topological structure of porous materials and consequently proposed an equivalent mass-spring-damping vibration system model, which was verified through an impedance tube test. Gao et al. [13] improved the average sound absorption coefficient of a porous metamaterial structure based on a mixture of composites within the frequency range of 0 to 10 kHz by creating a teaching-based optimization algorithm that used the layer spacing and side plate length as the optimization variables. Pereira et al. [15] compared the 2D finite element method (FEM) and 3D boundary element method (BEM) [16] against analytical equations and experimental data from a reverberation chamber and investigated how the size of the porous material plate influenced the sound absorption coefficient in a diffusion field. Gao et al. [17] proposed a composite porous metamaterial (CPM) and characterized the CPM sound-absorption coefficient using the acoustic parameters of the JCA model. Hassani et al. [18] used recycled denim bonded with phenolic resin as a composite sound-absorption material and prepared corresponding specimens to determine the sound-absorption coefficient. Furthermore, to verify the relevant test data, they modeled the relationship between the sound-absorption coefficient and frequency through logarithmic regression and the JCA model. Semeniuk et al. [19] proposed the design and analysis of a foam pool model and studied the methods used for typical melamine foam pool acoustic performance prediction, obtaining a highly satisfactory effect. The cellular automata (CA) method is suitable for describing complex dynamic systems and has been widely used in engineering acoustics modeling [20][21][22][23][24].
A review of the above studies demonstrates that describing soundwave propagation in materials is highly important in material acoustic modeling. Currently, several relevant methods exist for this purpose, including the Biot equation, the DBM model, the JCA model, and one-dimensional CA. The Biot equation considers the coupled propagation of elastic waves in the elastic porous frame and pressure waves in the saturated pore fluid, while incorporating the damping effect of the pore fluid. The Biot equation is elaborate in its description but complicated to solve, while the DBM and JCA models treat the porous material and saturated pore fluid as a homogeneous equivalent fluid to model pressure waves and describe soundwave loss. When the DBM or JCA model is used, many acoustic parameters, such as flow resistivity, porosity, tortuosity factor, thermal characteristic length, etc., need to be measured using complex methods or special equipment. For example, the porosity needs to be measured by a scanning electron microscope, and the effective density of porous materials needs to be measured via an impedance tube to obtain the flow resistivity; in addition, optimization methods are used for inverse calculation. The DBM model is applied to porous materials with low porosity, and the resulting sound-absorption coefficient carries error. One-dimensional CA usually deduces the local evolution rules between cells through the difference calculation of a one-dimensional acoustic wave equation in a standing wave tube model to obtain the sound pressure distribution of plane acoustic waves. Because of limitations on the cell size design, the minimum sound pressure is inaccurate, which affects the accuracy of the sound absorption coefficient value. One-dimensional CA modeling is, therefore, not sufficiently accurate.
Neural network modeling can approximate any complex nonlinear relationship, but it is rarely used in porous material acoustic modeling, which inspired us to study the acoustic modeling of materials based on the widely used BP neural network. However, the BP algorithm suffers from some major drawbacks, such as the likelihood of being trapped in a local minimum, slow convergence, and dependency on the initial starting point of the search. Therefore, in recent years, Prof. Mirjalili [25,26] and other scholars [27][28][29][30][31][32][33] have proposed many neural network optimization methods using traditional metaheuristics. Recently, novel metaheuristics, such as the red deer algorithm and the social engineering optimizer [34][35][36][37][38][39], have been proposed. Compared with traditional metaheuristics, they show advantages in some fields, but they are rarely combined with neural networks. Additionally, GA has the characteristics of group search, and it searches based on probability rules using the value of the objective function, which makes it more flexible and scalable. Inspired by these works, and from the perspective of higher accuracy and lower computational complexity, we propose an improved-GA-optimized neural network acoustic model. We further analyze the performance of the model and compare it with other optimization algorithms in depth in the following sections.

Obtaining Data
In this paper, a standing wave method was used to evaluate a porous hemp plastic composite sound-absorbing board and obtain the relevant material acoustic data. The procedure was as follows: a porous hemp plastic composite sound-absorbing board was prepared and cut into appropriately sized samples (low frequency diameter: 100 mm, medium frequency diameter: 50 mm, as shown in Figure 1). The experimental setup used in this paper was a JTZB sound-absorption coefficient test system. Figures 2 and 3 show the experimental setup and its block diagram. The experimental setup measures the acoustic data of samples based on the principle of the standing wave tube; the key steps of the experiment are as follows: Step 1. Connect the hardware devices according to the block diagram. A sample is mounted at one end of the standing wave tube, with the other end connected to a loudspeaker.
Step 2. Turn on the power for all hardware devices.
Step 3. Install JTZB special software on the computer and set the loudspeaker output frequency as the center frequency.
Step 4. Move the test car to the loudspeaker side, and then slowly slide to the right to find a maximum sound pressure value and record it. Then slowly move the test car, find the first minimum sound pressure value adjacent to the maximum sound pressure, and record it.
Step 5. Adjust the loudspeaker output frequency and repeat steps 3-5.
Step 6. The absorption coefficient of the sample is calculated according to Equation (1).
where α is the sound absorption coefficient, P_max is the maximum sound pressure value, and P_min is the minimum sound pressure value. Finally, acoustic data of the sample were obtained at center frequencies between 200 Hz and 4000 Hz. A total of 50 samples were measured, and Table 1 presents one of them; in total, there are 700 pieces of data. The experimental principle is as follows: because the diameter of the tube is small relative to the wavelength of the audio sound wave, the acoustic wave front can be regarded as a plane incident wave propagating linearly in the tube, which is reflected after reaching the sample. As the reflected and incident waves travel in opposite directions and phases, the sound pressures superpose and interfere to form standing waves, with maximum and minimum sound pressure values at two positions within the tube, the distance between which is 1/4 wavelength.
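Assuming Equation (1) is the standard standing-wave-ratio relation α = 4n/(n + 1)^2 with n = P_max/P_min (the usual form for this measurement; the paper's exact equation is not reproduced in this extract), Step 6 can be sketched as:

```python
def absorption_coefficient(p_max, p_min):
    """Sound absorption coefficient from the standing wave ratio.

    Assumes the standard standing-wave-tube relation:
    with n = P_max / P_min, alpha = 4n / (n + 1)**2.
    """
    n = p_max / p_min
    return 4.0 * n / (n + 1.0) ** 2

# A perfectly absorbing sample reflects nothing, so P_max == P_min:
print(absorption_coefficient(1.0, 1.0))  # 1.0
```

A sample that reflects part of the wave gives P_max > P_min and hence α < 1.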


Proposed Model
The principle of the proposed model, shown in Figure 4, consists of designing the neural network structure, data processing, training the neural network, and result analysis. In the model design, in order to process the data, a symmetric method is used to normalize all the data. The following describes the design process of the model in detail.

Design of the Neural Network Structure
Back propagation (BP) neural networks possess a high nonlinear mapping capability. A BP neural network of three or more layers can approximate a nonlinear function with arbitrary precision if there are sufficient hidden layer neurons. Consequently, a BP neural network was adopted for this study. The designed neural network structure is displayed in Figure 5. For the given center frequency, maximum and minimum sound pressures, and the material sound absorption coefficient, the network structure consisted of three layers: a three-neuron input layer, a one-neuron output layer, and an eight-neuron hidden layer. I_1, I_2, I_3 indicate the input; a^1_1, a^1_2, ..., a^1_8, the hidden layer output; a^2_1, the network final output; f^1, the tansig function, as shown in Equation (2); f^2, the linear function, as shown in Equation (3); iw^1, the weight from the input layer to the hidden layer, as shown in Equation (4); and lw^2, the weight from the hidden layer to the output layer, as shown in Equation (5). Consequently, there were 32 weights and 9 threshold values (41 parameters in total) to be learned by the neural network. A genetic algorithm was used to optimize the initial parameter values.
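The 3-8-1 structure can be sketched as follows (a minimal NumPy illustration; the variable names mirror the paper's notation, the random initialization is only a stand-in for the GA-optimized values, and tansig is mathematically identical to tanh):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-8-1 network matching the structure above:
# iw1: input-to-hidden weights, lw2: hidden-to-output weights,
# b1, b2: hidden and output thresholds (biases).
iw1 = rng.uniform(-3, 3, size=(8, 3))
b1 = rng.uniform(-3, 3, size=(8, 1))
lw2 = rng.uniform(-3, 3, size=(1, 8))
b2 = rng.uniform(-3, 3, size=(1, 1))

def forward(x):
    """x: column vector (3, 1) -> output (1, 1). f1 = tansig, f2 = linear."""
    a1 = np.tanh(iw1 @ x + b1)   # tansig(n) = 2/(1 + exp(-2n)) - 1 = tanh(n)
    return lw2 @ a1 + b2         # linear output layer

n_params = iw1.size + lw2.size + b1.size + b2.size
print(n_params)  # 24 + 8 + 8 + 1 = 41
```

Counting the entries confirms the 41 learnable parameters stated above.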

Data Processing
Data were divided into training, verification, and test data, comprising 70%, 15%, and 15% of the total sample data, respectively. The data needed to be normalized due to differences in meaning and range. The processing method is illustrated with a row of data in Table 1 as an example. Data to be processed are represented by X, as shown in Equation (6), and data already processed by Y. The data processing was performed according to Equation (7).

where y_max = 1 and y_min = −1, and x_max, x_min are the maximum and minimum X values. Thus, the normalized data were as displayed in Equation (8).
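With y_max = 1 and y_min = −1, Equation (7) is a linear map onto [−1, 1]; a minimal sketch (the frequency values are illustrative):

```python
import numpy as np

def normalize(x, y_min=-1.0, y_max=1.0):
    """Map x linearly so that min(x) -> y_min and max(x) -> y_max (Equation (7))."""
    x = np.asarray(x, dtype=float)
    return (y_max - y_min) * (x - x.min()) / (x.max() - x.min()) + y_min

freq = np.array([200, 250, 315, 400, 500, 4000])
print(normalize(freq))  # maps 200 -> -1.0 and 4000 -> 1.0
```

The same map is applied per row (frequency, P_max, P_min, α), since each quantity has its own range.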

Improved Genetic Algorithm
Essentially, training the BP neural network is an optimization process, and whether the optimal solution is reached is closely related to the initial value selection; that is, if the initial value is fairly close to a local optimal point, the resulting solution will be trapped there and thus incorrect. Consequently, an improved genetic algorithm was designed, and the optimal parameters learned by it were input into the BP neural network as the initial values in order to obtain the optimal solution of the BP neural network. The flow chart of the improved genetic algorithm is shown in Figure 6.

As shown in Figure 6, a real-coded strategy, the fitness function, and crossover and mutation operators were designed in the improved genetic algorithm. In order to improve individual diversity, crossover and mutation individuals and operation positions are randomly generated, and an individual coding effectiveness verification method is proposed to verify the individual coding after crossover and mutation operations.
The main process of the improved genetic algorithm for learning the optimal parameters is as follows. Firstly, the population size, number of evolutionary generations, and crossover and mutation probabilities are defined, and all individuals of the population are real coded; these form the initial solution search space. Secondly, the fitness of all individuals is calculated and the best individual is selected. The roulette method is repeatedly used to select individuals until the number of individuals reaches the population size. Pairs of individuals are then selected for the crossover operation, and individuals are selected for the mutation operation, until the population size is reached; these individuals form the new solution search space. The fitness function values are calculated again, the best individual is updated, and this is repeated until the evolutionary generation limit is reached. The best individual then yields the learned optimal parameters.
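The loop above can be sketched as follows (a skeleton only; the `select`, `crossover`, and `mutate` operators are plugged in, and all names are illustrative rather than the paper's implementation):

```python
import random

def genetic_algorithm(fitness, select, crossover, mutate,
                      pop_size=30, n_genes=41, epochs=50, lo=-3.0, hi=3.0):
    """Skeleton of the main GA loop: real-coded initialization, then
    select -> crossover -> mutate each generation, tracking the best individual."""
    pop = [[random.uniform(lo, hi) for _ in range(n_genes)] for _ in range(pop_size)]
    best = min(pop, key=fitness)            # fitness is an error: lower is better
    for _ in range(epochs):
        pop = select(pop, fitness)          # roulette selection up to pop_size
        pop = crossover(pop)                # random pairs, random positions
        pop = mutate(pop)                   # random individuals, random positions
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand
    return best
```

The returned `best` individual is then unpacked into the 32 weights and 9 thresholds of the network.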
(1) Initialization and encoding
The search space is from −3 to 3; namely, each chromosome was encoded as a real number in the range [−3, 3], and each individual was stored in a 1 × 41 row vector. The other parameters were as shown in Table 2.
(2) Fitness function design
The genetic algorithm searches for the optimal solution according to the fitness function value of each individual, which then determines whether it is eliminated or inherited in the next generation. Consequently, the fitness function was designed with the aim that the individual's actual output is closer to the expected output. With i = 1, 2, 3, ..., 30, the fitness function of individual i is represented by fit(i). With the model actual output matrix set as TO and the expected output matrix set as EQ, the fitness function is as shown in Equation (9). More precisely, the training data are input into the neural network to calculate the difference between the actual and expected outputs for each specimen datum. The sum of the absolute values of these differences is then taken as the fitness function value.
fit(i) = sum(sum(abs(TO − EQ)))    (9)
(3) Selection operator
Experiments show that the roulette wheel method has a good optimization effect. As a result, this method was used for individual selection; the pseudo-code is shown in Figure 7.
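A minimal roulette-wheel sketch follows. Since fit(i) is an error to be minimised, each individual's slice of the wheel is taken here as proportional to 1/fit(i); this transformation is an assumption, as the paper's exact scheme is given in its Figure 7 pseudo-code:

```python
import random

def roulette_select(population, fitnesses):
    """Roulette-wheel selection for a minimisation problem: weight each
    individual by the inverse of its error so lower fit(i) means a bigger slice."""
    weights = [1.0 / (f + 1e-12) for f in fitnesses]  # guard against fit == 0
    total = sum(weights)
    selected = []
    for _ in range(len(population)):      # draw until population size is reached
        r = random.uniform(0.0, total)
        acc = 0.0
        for ind, w in zip(population, weights):
            acc += w
            if acc >= r:
                selected.append(ind[:])   # copy, so later operators don't alias
                break
    return selected
```

Selection is with replacement, so fit individuals can appear multiple times in the new population.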

Table 2. Evolutionary epochs, population size, crossover probability, and mutation probability.
(4) Crossover operator
When two individuals are randomly selected, the crossover probability determines whether to perform the crossover: a random number between 0 and 1 is generated. If the number is greater than the crossover probability, no crossover is performed for either individual; if it is lower, a crossover operation is conducted. Specifically, a random crossover position is selected for the operation, and the individuals are checked for the effective range after the crossover. If ineffective, the crossover operation is repeated until an effective individual is produced. The pseudo-code is shown in Figure 8.
(5) Mutation operator
The mutation method is similar to the crossover method, and whether the mutation is performed depends on the mutation probability; that is, a random number between 0 and 1 is generated. If the number is higher than the mutation probability, no mutation is performed, and if it is lower, a mutation operation is performed. Specifically, an individual is randomly selected and a random mutation position is generated for the operation, and the individual is checked for the effective range following the mutation. If ineffective, the mutation operation is repeated until an effective individual is produced. The pseudo-code is shown in Figure 9.
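The two operators can be sketched as follows (illustrative names; a single-point swap is assumed for the crossover, under which the validity check is trivially satisfied for genes drawn from [−3, 3], but the check is kept to mirror the verification method described above):

```python
import random

LO, HI = -3.0, 3.0  # effective gene range

def valid(ind):
    return all(LO <= g <= HI for g in ind)

def crossover(a, b, pc=0.8):
    """Single-point crossover at a random position, re-drawn until both
    offspring pass the effective-range check."""
    if random.random() > pc:
        return a[:], b[:]                      # no crossover for this pair
    while True:
        pos = random.randrange(1, len(a))      # random crossover position
        c1, c2 = a[:pos] + b[pos:], b[:pos] + a[pos:]
        if valid(c1) and valid(c2):
            return c1, c2

def mutate(ind, pm=0.1):
    """Random-position mutation; the new gene is drawn from the valid range."""
    if random.random() > pm:
        return ind[:]                          # no mutation for this individual
    child = ind[:]
    child[random.randrange(len(child))] = random.uniform(LO, HI)
    return child
```

With a blend- or arithmetic-style crossover on real genes, offspring could leave the range, which is when the re-draw loop matters.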

Training of the Neural Network Model
The optimized weights and thresholds obtained after running the genetic algorithm, as shown in Equations (10)-(13), were then input into the neural network for model training. The Levenberg-Marquardt (LM) algorithm is suitable for nonlinear optimization since it has both the local convergence of the Gauss-Newton method and the global property of the gradient descent method. Consequently, the algorithm was used to train the network, and the specific procedure was as follows: (1) Set the allowable training error ε = 10^−5, damping constant µ0 = 0.001, number of iterations k = 1, initial value β = (0, 1), and let µ = µ0. Then input the weights and thresholds obtained by the genetic algorithm into the network, together with the training data, as the initialized weight and threshold vector. (2) Calculate the network output and error. The error function is shown in Equation (14), where the mean squared error (MSE) is adopted. In the formula, Y_i represents the ideal network output; Ŷ_i, the actual network output; p, the number of specimens; w_k, the vector composed of the weights and thresholds at iteration k; and e_i(w), the error generated by specimen i. (3) Calculate the Jacobian matrix J(w) according to Equation (15).
(4) Calculate the increment ∆w of weight and threshold using the method shown in Equation (16), where I represents the unit matrix, e(w) is the error matrix of all the specimens, and J(w) is the Jacobian matrix.
(5) If E(w_k) is smaller than ε, the set error value, end the training; if not, continue to step (6). (6) Update the weight and threshold vector according to Equation (17) and calculate the error E(w_{k+1}) according to Equation (14). If E(w_{k+1}) < E(w_k), update k and µ according to Equations (18) and (19), respectively, and continue to step (2); otherwise, update k and µ according to Equations (18) and (20), and then return to step (4).
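Steps (1)-(6) can be sketched as a generic LM loop (an illustration, not the paper's exact implementation: `residual` and `jacobian` stand for the network error and Equation (15), and the conventional damping factors of 10 are assumed for Equations (19) and (20)):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, w0, mu=1e-3, eps=1e-5, max_iter=100):
    """Generic LM loop: try the damped Gauss-Newton step
    dw = -(J^T J + mu*I)^(-1) J^T e; shrink mu on success, grow it on failure."""
    w = np.asarray(w0, dtype=float)
    for _ in range(max_iter):
        e = residual(w)
        E = 0.5 * float(e @ e)               # squared-error objective
        if E < eps:                          # step (5): converged
            break
        J = jacobian(w)
        while True:                          # step (4)/(6): adapt damping
            dw = np.linalg.solve(J.T @ J + mu * np.eye(w.size), -J.T @ e)
            e_new = residual(w + dw)
            if 0.5 * float(e_new @ e_new) < E:
                w, mu = w + dw, mu * 0.1     # accept step, reduce damping
                break
            mu *= 10.0                       # reject step, increase damping
            if mu > 1e12:
                return w
    return w
```

Small µ makes the step Gauss-Newton-like; large µ makes it a short gradient-descent step, which is the blend described above.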

Model Testing
The function expression corresponding to the trained model is shown in Equation (21), where w1, w2, b1, and b2 are the final learned weights and thresholds, as shown in Equations (22)-(25). The test data are then input into the network, and the expected and actual outputs are compared, as discussed in Section 3.

Experiment Results and Discussion
This section discusses the experimental results of the proposed model, including four sub-sections. Figure 10 shows the expected and actual output curves for two test data. To further reflect the relationship between noise frequency and the material sound absorption coefficient, the x-axis represents the frequency and the y-axis the sound absorption coefficient. As shown in Figure 10, when the sound frequency is below 2000 Hz, the sound absorption coefficient increases with the frequency, and when the sound frequency is above 2500 Hz, the sound absorption coefficient is generally between 0.9 and 1. This indicates that the model fits the relationship between the frequency and sound absorption coefficient well. The model's actual output on the test data agrees with the expected output at a level of 99.57%.
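The two headline metrics reported for the model, RMSE and the coefficient of determination R^2, can be computed from the expected and actual outputs as follows (standard definitions; a minimal sketch):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between expected and actual outputs."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

Lower RMSE and R^2 closer to 1 indicate a better fit, which is the comparison criterion used against the other algorithms.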

Comparison of Output Results
In order to evaluate the proposed model, the unoptimized neural network, the cellular automata approach, and regression models based on support vector machine, random forest, and XGBoost are compared below. Figure 11 shows the actual and expected outputs of three of these methods on a test dataset. The light blue upper-triangle curve represents the expected output, while the green dotted curve, the pink lower-triangle curve, and the blue star curve represent the actual outputs of the one-dimensional cellular automata (1D CA), the proposed model, and the unoptimized neural network model, respectively. The proposed model is the most accurate, outperforming the CA-based model of reference [40]. The reasons are as follows: to design a CA model based on an impedance tube acoustic experimental test, the cell size must be designed, and it is usually limited for modeling convenience, leading to a dense nodal distribution at the sound pressure antinode throughout the cellular space. Consequently, for high-frequency acoustic excitation, the exact position of the node is obscure, and the minimum sound pressure is not precise enough to accurately calculate the sound absorption coefficient; the relationship between frequency and sound absorption coefficient therefore cannot be modeled precisely.
Figure 12 shows the expected and actual outputs of our model, our model with 10-fold cross-validation, and the support vector machine, random forest, and XGBoost regression models. The light blue upper-triangle curve represents the expected output, and the pink lower-triangle curve represents the actual output of our model. As shown in Figure 12, our model is the best, and our model with 10-fold cross-validation also outperforms the comparison models. In order to evaluate our model, a statistical performance analysis is conducted below.

Validating Efficacy
Root mean squared error (RMSE) and the coefficient of determination (R²) [41,42] were used to evaluate the efficacy of the models, including the neurogenetic model. The mathematical expressions of RMSE and R² are shown in Equations (26) and (27), where n is the number of samples; y_i is the actual output of the i-th sample during the experiment; ŷ_i is the expected output of the i-th sample; and ȳ is the mean expected output over the n samples.
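The two metrics can be computed as follows; these are the standard definitions, which we assume match Equations (26) and (27).

```python
import numpy as np

def rmse(y_actual, y_expected):
    """Eq. (26): root mean squared error over n samples."""
    return np.sqrt(np.mean((np.asarray(y_actual) - np.asarray(y_expected)) ** 2))

def r_squared(y_actual, y_expected):
    """Eq. (27): coefficient of determination, 1 - SS_res / SS_tot."""
    y_a, y_e = np.asarray(y_actual), np.asarray(y_expected)
    ss_res = np.sum((y_e - y_a) ** 2)             # residual sum of squares
    ss_tot = np.sum((y_e - y_e.mean()) ** 2)      # total sum of squares
    return 1.0 - ss_res / ss_tot
```

A perfect model gives RMSE = 0 and R² = 1; larger prediction errors raise RMSE and lower R².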
RMSE is a crucial indicator for checking the effectiveness of any regression model: the lower the RMSE value, the better the model. R² reflects the relationship between the actual and expected outputs: the larger the R² value, the better the model. Through experimental analysis, the RMSE and R² values of our model, our model with 10-fold cross-validation (our model +10), 1D CA, the unoptimized neural network model (UNN), and the support vector machine (SVM), random forest (RF), and XGBoost regression models were calculated. Because almost all these models involve random search mechanisms, repeated experiments combined with statistical analysis were used to verify the accuracy and robustness of our model: the value ranges of RMSE, R², and convergence time are shown in Table 3, and the means and intervals are plotted in Figure 13, where blue stars represent the means. Our model has the maximum R² value, a lower RMSE value, and a shorter convergence time. Our model +10-fold cross-validation has a longer convergence time. XGBoost has a shorter convergence time but poor RMSE and R² values.
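A means-and-intervals summary over repeated stochastic runs can be sketched as follows. The normal-approximation 95% interval and the sample values are our assumptions; the paper does not state which interval construction Figure 13 uses.

```python
import numpy as np

def mean_interval(values, z=1.96):
    """Mean and approximate 95% interval for a repeated-run metric,
    using the normal approximation mean ± z * s / sqrt(n)."""
    v = np.asarray(values, dtype=float)
    m = v.mean()
    half = z * v.std(ddof=1) / np.sqrt(len(v))   # half-width of the interval
    return m, m - half, m + half

# Hypothetical RMSE values from five repeated runs of a stochastic model:
runs = [0.012, 0.011, 0.013, 0.012, 0.010]
m, lo, hi = mean_interval(runs)
```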

Performance Analysis of the Proposed Model
In order to further understand the model mechanism, the following is a performance analysis of the proposed model.

Enhanced Solution Generation
An enhanced solution was obtained by the genetic algorithm and used as the initial model weights and thresholds. Figure 14 shows the average fitness curve produced by running the genetic algorithm. The average fitness is computed over all individual fitness values, and the number of evolutionary generations is 10. By Generation 9, the average fitness value reached its minimum and remained essentially unchanged until the end of the evolution, when the optimal weights and thresholds were obtained, as shown in Equations (10)-(13). These optimal weights and thresholds were input into the neural network model as the initial parameters for model training.
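A real-coded GA of this kind can be sketched as follows, using the reported settings (population 30, search space −3 to 3, 10 generations). The fitness function, truncation selection, arithmetic crossover, and Gaussian mutation here are simple stand-ins for the paper's specific operator designs, and the dimension DIM (total number of weights and thresholds) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, POP, GENS, LO, HI = 16, 30, 10, -3.0, 3.0   # DIM is a hypothetical count

def fitness(ind):
    # Placeholder: in the paper this is the network error for the candidate
    # weight/threshold vector; here a quadratic bowl keeps the sketch runnable.
    return np.sum(ind ** 2)

pop = rng.uniform(LO, HI, size=(POP, DIM))        # real-coded initial population
for _ in range(GENS):
    f = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(f)[:POP // 2]]       # lower error = fitter (elitist)
    # Arithmetic crossover between random parent pairs
    i, j = rng.integers(len(parents), size=(2, POP - len(parents)))
    alpha = rng.random((POP - len(parents), 1))
    children = alpha * parents[i] + (1 - alpha) * parents[j]
    # Sparse Gaussian mutation, clipped back into the search space
    children += rng.normal(0, 0.1, children.shape) * (rng.random(children.shape) < 0.1)
    children = np.clip(children, LO, HI)
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(ind) for ind in pop])]  # initial weights/thresholds
```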

Performance Analysis of Trained Model
The model performance was subsequently analyzed. As shown in Figure 15, the three curves represent the MSE changes with the iteration number as the training, validation, and test data were input into the model. There were 13 iterations in total. From iteration 6, the MSE curves decrease slowly, and the MSE of the validation data reaches its minimum value of 5.7468 × 10⁻⁵ by iteration 7. Another 6 iterations were performed, during which the MSE remained essentially unchanged, i.e., it no longer decreased, and thus the model training was complete. The model was then evaluated. Figure 16 shows the model conditions at the end of training, i.e., the changes of the gradient, damping factor, and generalization ability with the iteration number. Initially, the gradient was at its highest, and it tended to stabilize after iteration 7. The damping factor changed to 1 × 10⁻⁶ from iteration 7 and then remained constant, indicating model convergence. The generalization ability curve stayed at 0 for the first 7 iterations, indicating that the MSE decreased continuously during this stage. In the subsequent 6 iterations, the MSE was unchanged overall, and thus the iteration was complete.
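The stopping behavior described above, where training halts once the validation MSE stops improving for several consecutive iterations, can be sketched as follows. The patience of 6 matches the run shown; the function and sequence of errors are illustrative, not the paper's code.

```python
def train_with_early_stopping(step, val_mse, max_fail=6, max_iter=100):
    """Validation-based early stopping: halt after the validation MSE fails
    to improve for max_fail consecutive iterations. `step` performs one
    training iteration; `val_mse` returns the current validation error."""
    best, fails, history = float("inf"), 0, []
    for _ in range(max_iter):
        step()
        mse = val_mse()
        history.append(mse)
        if mse < best:
            best, fails = mse, 0     # improvement: reset the failure counter
        else:
            fails += 1               # no improvement this iteration
            if fails >= max_fail:
                break
    return best, history

# Dummy illustration: validation MSE reaches its minimum and then plateaus.
errors = iter([0.5, 0.3, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1])
best, history = train_with_early_stopping(lambda: None, lambda: next(errors))
```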
To reflect the model effect more intuitively, the model regression ability was visualized, as shown in Figure 17, which covers all the model data, including the training, validation, and test data. The line formed by circles reflects the relationship between the input and the expected output for each data type. The colored line in each panel is the regression line, which reflects the relationship between the input data and the model's actual output for that data type. The figure illustrates that the regression line is essentially consistent with the actual data line; in other words, the model regression effect is good. For the test data, R = 0.99994, and over all data, R = 0.99992.

Conclusions
In this work, a new acoustic model for a porous hemp plastic composite is developed. Samples were prepared, and the acoustic data of the materials were obtained by standing wave tube method experiments, while an acoustic model using an improved genetic algorithm and a neural network was subsequently proposed. The experimental results show that when the sound frequency is above 2500 Hz, the sound absorption coefficient is generally between 0.9 and 1, and the model's actual outputs on the test data agree with the expected outputs to approximately 99.57%. To prove the efficacy of the proposed model, experiments and statistical analyses were conducted. Compared with the one-dimensional cellular automata and unoptimized neural network modeling methods and the support vector machine, random forest, and XGBoost regression models, the modeling method proposed in this paper has the best RMSE and R² values, higher accuracy, a shorter convergence time, and a smaller requirement for acoustic data.
The proposed model was applied to predict the sound absorption coefficient of the material, and the results showed that it performs well. In regard to the experiments, it was observed that for the designed three-layer neural network, a population size of 30 and a search space from −3 to 3 are appropriate: the improved genetic algorithm can probe this search space and find an optimal solution, which is used as the initial weights and thresholds of the neural network for training. A larger population size or search space is not recommended because the time to find the optimal solution would increase. Moreover, the material sound absorption coefficient at frequencies exceeding 4000 Hz still needs further study. Based on this, subsequent work will focus on how to improve the material sound absorption coefficient for enhanced noise control.

Data Availability Statement: The data presented in this study are available in the article.