The Prediction of Pervious Concrete Compressive Strength Based on a Convolutional Neural Network

Abstract: To overcome limitations inherent in existing mechanical performance prediction models for pervious concrete, including material constraints, limited applicability, and inadequate accuracy, this study employs a deep learning approach to construct a Convolutional Neural Network (CNN) model with three convolutional modules. The primary objective of the model is to precisely predict the 28-day compressive strength of pervious concrete. Eight input variables, encompassing coarse and fine aggregate content, water content, admixture content, cement content, fly ash content, and silica fume content, were selected for the model. The dataset used for model training and testing consists of 111 sample sets. To ensure the model covers the practical range of pervious concrete strength and to enhance its robustness in real-world applications, an additional 12 sets of experimental data were incorporated for training and testing. The findings indicate that, in comparison with the conventional machine learning method of Backpropagation (BP) neural networks, the CNN prediction model developed in this paper achieves a higher coefficient of determination, reaching 0.938 on the test dataset. The mean absolute percentage error is 9.13%, signifying that the proposed model exhibits notable accuracy and universality in predicting the 28-day compressive strength of pervious concrete, regardless of the materials used in its preparation.


Introduction
Pervious concrete, acknowledged as an innovative and environmentally friendly construction material, features remarkable attributes such as excellent permeability, anti-slip properties, corrosion resistance, and durability [1,2]. With applications ranging from urban development to environmental protection, pervious concrete offers a wide range of potential uses [3]. Among its key performance indicators, compressive strength is pivotal. Precise prediction of pervious concrete's compressive strength is therefore of paramount importance in enhancing the design and construction quality of structures employing this material.
In recent years, extensive research has been conducted on the performance indicators of pervious concrete. Many studies have used diverse experimental materials and comparative experiments with varying mix proportions to investigate how different materials affect the compressive strength and other performance indicators of pervious concrete under different design conditions. These investigations have covered various aspects, including different fly ash substitution rates [4][5][6], various aggregate types [7][8][9], and different cement varieties [10,11], aiming to understand the impact of these factors on pervious concrete performance. With the increasing application of pervious concrete in urban construction, researchers have begun to explore the effects of novel materials on its performance, including different types of fibers [12][13][14] and the use of construction waste to prepare pervious concrete that meets specific performance requirements [15,16]. Consideration has also been given to factors such as curing conditions [17] and porosity [18,19], analyzing the variation in compressive strength from multiple perspectives. In addition to macroscopic studies of strength variation patterns, some researchers have employed techniques such as Scanning Electron Microscopy (SEM) to investigate factors influencing performance at the microscopic level. For example, Kelly Patrícia Torres Vieira et al. [20] observed recycled aggregate pervious concrete samples using SEM, discovering a significant decrease in compressive strength as the proportion of recycled aggregate increased. Xiaoyan Zheng et al. [21] utilized field emission SEM and X-ray diffraction to study the mechanism by which alkali-activated materials affect pervious concrete performance. These microscopic studies not only analyze the variation mechanism of compressive strength through extensive experimental data but also reveal key factors at the microscopic level, offering valuable guidance for future pervious concrete design and construction.
While previous studies have derived variation patterns of compressive strength for pervious concrete [22] with specific material configurations and proposed empirical formulas, Table 1 illustrates some of these formulas and their predictive effectiveness. However, empirical formulas may not precisely predict the compressive strength of pervious concrete with different materials and mix proportions as the scope of application widens and new materials are developed. Given the rigor of previous research and the resource consumption of comparative experiments, comprehensive consideration of pervious concrete compressive strength may be limited by the unique geological conditions and technological capabilities of different regions. Different regions possess distinct construction experience and approaches to pervious concrete preparation, making it challenging to obtain a prediction method applicable to the compressive strength of all pervious concrete using traditional empirical formulas. Therefore, it is of practical significance to fully utilize the results and experimental data of previous research to accurately predict the 28-day compressive strength of commonly used formulations of pervious concrete. Table 1 illustrates that with the introduction of more complex forms such as logarithms, the predictive accuracy of traditional empirical formulas continues to improve. However, it is crucial to note that the accuracy of traditional empirical formulas often relies on specific mix proportions. Consequently, empirical formulas that perform well in certain studies may not be applicable to research based on different mix proportions. This limitation arises because mix proportions and material types are typically not considered when constructing these formulas. Incorporating mix proportion information significantly increases the data requirements for model construction, and traditional regression methods may struggle to handle such large datasets effectively.
In recent years, research in machine learning and deep learning has made it feasible to predict sample indicators by integrating large historical datasets with existing sample features [28,29]. Existing studies indicate that models based on machine learning and deep learning can predict the compressive strength of concrete with relatively high accuracy [30][31][32]. For instance, predictive models constructed using BP neural networks have shown good performance for the compressive strength of different types of concrete [33][34][35][36][37]. Other machine learning methods besides BP neural networks have also demonstrated a favorable trend in predicting the 28-day compressive strength of concrete [38][39][40][41][42]. With the continuous advancement of deep learning, models for predicting the compressive strength of concrete established using CNNs and improved convolutional neural networks exhibit superior predictive performance compared to traditional machine learning methods [43][44][45][46]. This provides novel insights and methods for predicting the compressive strength of pervious concrete. For example, Ziyue Zeng et al. [47] analyzed the effectiveness of various deep learning and machine learning methods in predicting the compressive strength of concrete and developed a CNN-based predictive model, trained on compressive strength data for concrete prepared with different material types and mix proportions. Testing revealed an R² of 0.967 on the test set, confirming that the CNN-based predictive method can be applied to concrete prepared from different materials with better applicability than traditional empirical formulas.
This study presents a predictive model for the 28-day compressive strength of pervious concrete utilizing a CNN. The methodology integrates various material characteristics of pervious concrete, effectively merging existing research data and practical engineering experience to yield reliable strength predictions. By employing a deep neural network and using the content of each component as input for training and analysis, this approach is not only operationally straightforward but also adeptly characterizes key features influencing concrete strength, including the water-to-cement ratio, sand-to-aggregate ratio, and fly ash substitution ratio.
The main contributions of this study are as follows: (i) Development of a CNN model to predict the 28-day compressive strength of pervious concrete. Comparative assessments based on goodness of fit, mean absolute percentage error, root mean square error, and mean absolute error demonstrate the superior performance of the proposed model. (ii) Integration of existing mix proportion information into the model, primarily obtained from previous studies on the mechanical performance of pervious concrete. Using component contents as input simplifies the model's operation, alleviating additional workload for engineers and enhancing its practicality. (iii) The proposed model achieves a goodness of fit greater than 0.9 on the test set, indicating its effectiveness in predicting the 28-day compressive strength of pervious concrete with different materials. The mean absolute percentage error on the test set is less than 10%, suggesting that the model's prediction errors for pervious concrete strength under the influence of different materials fall within an acceptable range, affirming the applicability of the proposed method.
This research presents a novel approach, providing theoretical support for predicting the strength of pervious concrete.The findings offer valuable insights for future in-depth studies in related fields.

Data Source and Model Testing
To ensure experimental reproducibility, this section will introduce the data sources used to train the CNN, the methods employed for acquiring experimental data, as well as the structural information of the CNN and the specifics of the training and testing processes.

Data Source
To validate the universality of the predictive model established in this study, we employed data from experiments on pervious concrete with different materials reported in the literature for training and testing the convolutional neural network. Table 2 provides the sources and relevant component information for these 111 sets of data. The compressive strength of pervious concrete is typically lower than that of conventional concrete, generally varying within the range of 2-28 MPa [22]; although advances in research have improved the compressive strength of pervious concrete, it still remains below that of conventional concrete. Accordingly, we searched for pervious concrete samples within the compressive strength range of 2 to 40 MPa for model training and prediction. Additionally, because the mix compositions of pervious concrete differ across studies, adding samples beyond a certain range of material types may hinder model convergence during training. We therefore selected samples with conventional mix compositions to ensure better applicability of the model at the current stage. Specifically, during data collection we gathered, for each sample group, the coarse and fine aggregate types and contents, water content, admixture content, cement types and contents, fly ash content, silica fume content, and 28-day compressive strength. Furthermore, to mitigate the potential impact of minor variations in the training or testing sets on model training or prediction errors, we collected an additional 12 sets of data through experiments (as described in Section 3.2) and incorporated them into the dataset. Each sample in the overall dataset was then randomly assigned to the training set or the testing set, ensuring that a different training and testing split was used each time the model was run, to enhance its robustness. Given that this study aims to establish a method for predicting the 28-day compressive strength of pervious concrete, it is noted that existing publicly available datasets do not cover the mix proportion information of pervious concrete prepared from different types of materials. In such a scenario, including material types as input parameters could hinder model convergence or lead to poor predictive performance. Therefore, during model development, this study disregards restrictions on aggregate and cement types and seeks pervious concrete data with similar material preparation for model training.
The dataset obtained from the literature was split into two subsets, with 70% designated as the training set and the remaining 30% as the testing set, used for training and evaluating the CNN models. Each sample in the dataset includes nine pieces of information: the eight input variables (coarse and fine aggregate content, water content, admixture content, cement content, fly ash content, and silica fume content) and the corresponding compressive strength after 28 days of curing.
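The random 70/30 split described above can be sketched as follows. This is an illustrative NumPy sketch (the study itself used MATLAB); the 123-sample count is taken from the combined dataset of 111 literature sets plus 12 experimental sets.

```python
import numpy as np

rng = np.random.default_rng()            # a fresh shuffle each run, as in the paper's repeated trials
n_samples = 123                          # 111 literature sets + 12 experimental sets

indices = rng.permutation(n_samples)     # shuffle sample indices
n_train = int(0.7 * n_samples)           # 70% for training -> 86 samples
train_idx = indices[:n_train]            # training set indices
test_idx = indices[n_train:]             # remaining 30% -> 37 test samples
```

Because the permutation is redrawn on every run, each training experiment sees a different split, which is the robustness mechanism the text describes.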

Model Information

Background
CNN is a standard neural network structure in deep learning, originating from computer vision and finding widespread application. It typically consists of convolutional layers, pooling layers, batch normalization layers, activation functions, and more. Among these, the convolutional layer performs convolution operations on the output of the preceding layer to extract diverse features. This process can be represented as:

$$Y^{k}_{i,j} = \sum_{m=1}^{H}\sum_{n=1}^{W}\sum_{l=1}^{C} X(i \cdot S + m,\ j \cdot S + n,\ l)\,\omega(m, n, l, k) + b(k)$$

In the equation, $Y^{k}_{i,j}$ represents the value at position (i, j) after the original data undergo processing with the k-th convolutional kernel; H, W, and C represent the height, width, and number of channels of the input; X(i, j, l) represents the value at position (i, j) on channel l of the input data; S denotes the stride of the convolutional kernel, set to 1 in this study; ω(m, n, l, k) signifies the value of the k-th convolutional kernel at position (m, n) on channel l; and b(k) denotes the bias of the k-th convolutional kernel. The convolutional layer extracts features from the output of the preceding layer following these rules, as illustrated in Figure 1.

The pooling layer filters redundant components from the output of the preceding layer, reducing the computed data volume, enhancing the model's noise resistance, preventing overfitting, and simultaneously preserving the original data features. Common pooling methods include max pooling and average pooling; this study uses max pooling to filter the data processed by the convolutional layer. Specifically, within each data region of the pooling kernel size, the maximum value is selected to form the new output:

$$Y^{l}_{i,j} = \max_{0 \le m,\, n < P} X(i \cdot S + m,\ j \cdot S + n,\ l)$$

In the equation, $Y^{l}_{i,j}$ represents the data value at position (i, j) for the l-th channel of the pooling layer's output, and X(i, j, l) represents the value at position (i, j) for channel l of the input data. S denotes the stride of the pooling kernel, set to 1 in this study, and P represents the size of the pooling kernel. The pooling layer's treatment of the output from the preceding layer (with a stride of 2 for the pooling kernel) is illustrated in Figure 2.
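The convolution and max-pooling rules above can be sketched for the one-dimensional case, which is the form relevant to the feature vectors used in this study; the stride-2 pooling mirrors the Figure 2 illustration. This is an illustrative NumPy sketch, not the paper's implementation; the kernel values are placeholders.

```python
import numpy as np

def conv1d(x, w, b, stride=1):
    """Valid convolution of a 1-D input x with kernel w and bias b."""
    k = len(w)
    n_out = (len(x) - k) // stride + 1
    return np.array([x[i*stride : i*stride + k] @ w + b for i in range(n_out)])

def max_pool1d(x, p=2, stride=2):
    """Max pooling: keep the maximum within each window of size p."""
    n_out = (len(x) - p) // stride + 1
    return np.array([x[i*stride : i*stride + p].max() for i in range(n_out)])

x = np.arange(8.0)                                  # a toy 8-feature input
y = conv1d(x, np.array([1.0, 0.0, -1.0]), b=0.0)    # each output is x[i] - x[i+2] = -2
z = max_pool1d(y)                                   # stride-2 pooling, as in Figure 2
```

With a kernel of length 3 and stride 1, the 8-element input yields 6 convolution outputs, which the stride-2 pooling reduces to 3.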
The batch normalization layer can be viewed as a preprocessing step involving data standardization and regularization. It is frequently applied in convolutional neural networks to normalize the output of each convolutional layer, ensuring that the outputs adhere to a Gaussian distribution with consistent mean and variance. This helps prevent continuous shifts in the distribution of input data across layers, thereby enhancing the stability and efficiency of training. The batch normalization process involves the following steps. Firstly, calculate the mean and variance of each data batch:

$$\mu = \frac{1}{m}\sum_{i=1}^{m} X_i, \qquad \sigma^2 = \frac{1}{m}\sum_{i=1}^{m} (X_i - \mu)^2$$

In the equation, m represents the size of each batch, and X_i represents the i-th sample within each batch.
Using the mean and variance of each batch, the normalization process is applied to the batch data:

$$\hat{X}_i = \frac{X_i - \mu}{\sqrt{\sigma^2 + \varepsilon}}$$

Here, ε is a small positive constant introduced to prevent division by zero in the denominator.
Finally, the normalized data are shifted and scaled to accelerate training while ensuring the stability of the model:

$$Y_i = \gamma \hat{X}_i + \beta$$

Here, γ represents the scaling parameter, and β is the translation parameter.
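The three batch normalization steps (batch statistics, normalization, scale-and-shift) can be combined into one sketch. The γ, β, and ε values below are illustrative defaults, not the trained parameters of the paper's model.

```python
import numpy as np

def batch_norm(X, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization over axis 0 (the batch dimension)."""
    mu = X.mean(axis=0)                      # per-feature batch mean
    var = X.var(axis=0)                      # per-feature batch variance
    X_hat = (X - mu) / np.sqrt(var + eps)    # normalize to zero mean, unit variance
    return gamma * X_hat + beta              # scale and shift

X = np.array([[1.0, 10.0],
              [3.0, 30.0],
              [5.0, 50.0]])                  # a batch of 3 samples with 2 features
Y = batch_norm(X)
```

After normalization each feature column has (approximately) zero mean and unit variance, which is the distribution-stabilizing effect described above.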
The activation function introduces non-linearity into the model, thereby enhancing the expressive power of the neural network. Common activation functions include the Rectified Linear Unit (ReLU) function and the Sigmoid function. In this study, the ReLU function is used as the activation function of the convolutional neural network. For each input value x, the ReLU function computes:

$$f(x) = \max(0, x)$$

The fully connected layer forms a linear combination of the data and weights, with non-linearity introduced through the activation function. This allows the model to extract more complex features from the data and undergo non-linear transformations, thereby enhancing its flexibility. The computation process is as follows:

$$Y = f(WX + b)$$

In the equation, X represents the input data vector, W is the weight matrix, b is the bias vector, and f(x) denotes the ReLU function.
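The ReLU activation and the fully connected layer above can be sketched directly; the weight matrix and bias here are illustrative placeholders.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)      # f(x) = max(0, x), applied element-wise

def fully_connected(X, W, b):
    return relu(W @ X + b)         # Y = f(WX + b)

X = np.array([1.0, -2.0])          # input data vector
W = np.array([[2.0, 1.0],
              [0.0, 3.0]])         # weight matrix
b = np.array([0.5, 0.0])           # bias vector
out = fully_connected(X, W, b)     # W @ X + b = [0.5, -6.0] -> ReLU -> [0.5, 0.0]
```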

CNN Structure
CNNs have been extensively explored for predicting concrete compressive strength. For instance, Deng et al. [54] developed a neural network with a single convolutional kernel and a hidden layer containing four neurons, using four input features to predict the compressive strength of recycled aggregate concrete. Zeng et al. [47] argued that as the number of input indicators increases, the CNN's structure should be adjusted accordingly; they therefore expanded the number of convolutional kernels and searched for the optimal number of neurons in the fully connected layer within the range of 4 to 128.
Considering the significant variations in raw materials among the samples in this study, the convolutional structure is enhanced accordingly. Each convolutional module comprises a convolutional layer (with a kernel size of 3 × 1 × 1), a pooling layer (with a pooling kernel size of 1 × 1), a batch normalization layer, and a ReLU activation function layer. The collected data enter the model through the input layer. After passing through three such convolutional modules, redundant data are eliminated by dropout layers. Data fusion is then accomplished through fully connected layers. Finally, the model is trained, and prediction is performed using a regression layer. The structure of the CNN is illustrated in Figure 3.
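The forward pass through this architecture can be sketched as follows: three convolutional modules (kernel size 3, with the 1 × 1 pooling acting as an identity) followed by a fully connected regression output. Batch normalization and dropout are omitted from this single-sample sketch for brevity, and the random weights are placeholders, not trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_module(x, w, b):
    # conv (kernel 3, stride 1) -> 1x1 max pool (identity) -> ReLU; BN omitted here
    y = np.array([x[i:i+3] @ w + b for i in range(len(x) - 2)])
    return np.maximum(0.0, y)

x = rng.normal(size=8)                    # the eight mix-proportion inputs
for _ in range(3):                        # three convolutional modules: 8 -> 6 -> 4 -> 2
    x = conv_module(x, rng.normal(size=3), 0.0)

W_fc, b_fc = rng.normal(size=x.size), 0.0
strength = float(W_fc @ x + b_fc)         # regression output: predicted 28-day strength
```

Each kernel-3 convolution shortens the feature vector by two, so the eight inputs contract to a length-2 representation before the regression layer combines them into a single strength prediction.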

Model Training and Testing
The model is trained using a loss function, which quantifies the disparity between predicted and actual values; the CNN is iteratively optimized during training to minimize this loss. Common loss functions include mean square error, mean absolute error, and cross-entropy. In this study, the root mean square error (RMSE) is chosen as the loss function. The error after a training iteration is computed as:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2}$$

Here, N represents the total number of samples, y_i denotes the actual value of the i-th sample, and ŷ_i is the predicted value for the i-th sample.
The optimizer fine-tunes the model parameters of the CNN according to predefined criteria to minimize the loss function. In this study, stochastic gradient descent (SGD) is employed as the optimizer. For each training iteration, SGD randomly selects a subset of the samples to calculate the gradient and then updates the model parameters:

$$\theta_{t+1} = \theta_t - \eta \nabla L(\theta_t; x_i, y_i)$$

In the equation, θ_t represents the values of the model parameters at the t-th training iteration, η is the learning rate, L(θ_t; x_i, y_i) denotes the loss function of the sample (x_i, y_i) at the t-th training iteration, and ∇L(θ_t; x_i, y_i) is the partial derivative of the loss function with respect to the model parameters.
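A single SGD update of this form can be worked through on a toy one-parameter model; the sample values and learning rate below are illustrative.

```python
# One SGD step for a one-parameter model y = theta * x with squared-error loss
theta = 0.0                              # initial parameter value theta_t
eta = 0.1                                # learning rate
x_i, y_i = 2.0, 4.0                      # the randomly drawn sample (x_i, y_i)

grad = 2.0 * (theta * x_i - y_i) * x_i   # dL/dtheta for L = (theta*x_i - y_i)^2
theta = theta - eta * grad               # theta_{t+1} = theta_t - eta * grad
```

Here the gradient is 2(0 - 4)(2) = -16, so the update moves theta from 0 to 0 - 0.1(-16) = 1.6, a step toward the least-squares solution theta = 2.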
During the initial training phase of convolutional networks, a higher learning rate aids rapid convergence. However, as the model approaches the optimal point, a high learning rate may lead to oscillations during training. Therefore, this study adopts a dynamic learning rate adjustment strategy. We set the initial learning rate to 0.01, with a learning rate decay factor of 0.5; after 500 training iterations, the learning rate is reduced to 0.005. During training, each batch consists of 30 samples, with a maximum of 2000 training iterations. The dataset is divided into 70% for training and 30% for testing. Given that this study aims to validate and test the predictive performance of CNNs for pervious concrete compressive strength, MATLAB 2021b is used for CNN model construction, training, and testing. Due to cost considerations, a shared data platform has not been established at present.
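The dynamic learning rate strategy above can be sketched as a step-decay function. Note that the paper states only the first reduction (to 0.005 at iteration 500); extending the same halving every 500 iterations, as below, is an assumption of this sketch.

```python
def learning_rate(iteration, lr0=0.01, decay=0.5, step=500):
    """Step-decay schedule: multiply the initial rate by `decay` every `step` iterations.

    Only the first drop (0.01 -> 0.005 at iteration 500) is stated in the text;
    further halvings are an assumption of this sketch.
    """
    return lr0 * decay ** (iteration // step)
```

Under this schedule the rate stays at 0.01 for iterations 0-499, drops to 0.005 at iteration 500, and (under the assumed continuation) halves again every 500 iterations up to the 2000-iteration cap.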

CNN Model Predictive Performance
To visually demonstrate the predictive capability of the CNN model for the compressive strength of pervious concrete, this study selects the results of a specific experiment, as illustrated in Figure 4. The data points in the graph cluster primarily around the diagonal line, indicating strong agreement between the model's predictions and the observed results. This proximity of the data points to the diagonal underscores the CNN model's ability to provide accurate predictions of compressive strength.

Enhancements for Improved Robustness of the Model
Figure 4 illustrates the favorable predictive performance of the CNN model trained in this study for various types of pervious concrete.However, there is a lack of training data in the range of 10 to 20 MPa, which could lead to inadequacies in the model's predictions for the compressive strength of pervious concrete within this range.To enhance the robustness of the model, ensuring that subtle variations in the material composition of pervious concrete do not significantly impact the prediction results, experimental data on the measured 28-day compressive strength in the range of 10 to 20 MPa will be added.This additional data aims to augment the training set of the CNN predictive model, improving its applicability to different types of pervious concrete.

Method for Enhancing Model Robustness

(1) Experimental Materials
This study employed various materials for the pervious concrete experiments, including Ordinary Portland Cement (OPC) of grade 42.5. The OPC has a standard consistency of 27.1%, a specific surface area of 357 m²/kg, an initial setting time of 203 min, and a final setting time of 250 min. For coarse aggregates, 5-20 mm aggregates supplied by the Jinying Hardware Business Department in Jiangning District, Nanjing, were chosen. These aggregates exhibit an apparent density of 3.0149 g/cm³, a bulk density of 3.0045 g/cm³, a compacted bulk density of 2.9246 g/cm³, and a crushing value of 3.04%.
Additionally, low-calcium Class I fly ash produced by the Nanjing Thermal Power Plant was employed, featuring a density of 2.04 g/cm³, a water demand ratio of 0.95, and a fineness (residue on the 45 µm sieve) of 6%. To enhance concrete performance, a high-performance polycarboxylate superplasticizer from Wuhan Greelan Building Material Technology Co., Ltd. (Wuhan City, Hubei Province, China) was introduced. This superplasticizer, a powder with a gray-white appearance, has a bulk density of 350 to 450 kg/m³ and achieves a 25% to 30% reduction in mortar water content. The water used for concrete mixing adhered to the standards for concrete mixing water in JGJ63-2006 [55].
(2) Experimental Procedure

In this study, twelve sets of pervious concrete were prepared with different mix proportions; their specific compositions are detailed in Table 3. The pervious concrete was fabricated using the slurry coating method, following a specific procedure. Initially, the coarse aggregates were mixed with approximately 3% water for 30 s in a mixer to ensure thorough pre-wetting of the aggregate surfaces, enhancing their adhesion to cement. Subsequently, all of the cement, the water, and the corresponding additives were added, and the mixture was stirred for 180 s to form a highly flowable slurry, significantly reducing friction between the aggregates. This process effectively prevented crushing of the coarse aggregates under excessive resistance while facilitating uniform coating of the aggregate surfaces by the slurry, promoting the formation of a spherical structure and enhancing the porosity of the pervious concrete. The freshly mixed concrete was then poured into cubic molds measuring 100 mm × 100 mm × 100 mm and compacted by vibration. After standing for 24 h, the specimens were demolded and placed in a standard curing chamber. After 28 days, compressive strength tests were conducted on the specimens according to the "Standard for Test Method of Mechanical Properties of Ordinary Concrete" (GB/T 50081-2002) [56]. The porosity and compressive strength test results of the pervious concrete specimens are listed in Table 4. Since the specimens were prepared to fill the data gap in the 10-20 MPa range, the compressive strength results are relatively close, with a standard deviation of approximately 4.63 MPa.

Predictive Performance after Model Training Enhancement
To visually showcase the predictive performance of the CNN developed in this study for estimating the compressive strength of pervious concrete with different material compositions, actual data from various sources were compared with the corresponding predicted values generated by the model. The comprehensive predictive performance is illustrated in Figure 5. Notably, the additional data incorporated in this study successfully filled the data gap within the 10-20 MPa range. Following retraining, the CNN model demonstrated favorable predictive accuracy across all sample data, with data points clustered closely around the diagonal line.
After incorporating the additional training data, the prediction performance of the CNN model on both the training and test sets is illustrated in Figure 6. The model's predicted values closely match the actual values in both sets, with minimal absolute errors. This indicates that the model, retrained with the new data, exhibits excellent predictive capability without underfitting or overfitting, and consistently achieves high accuracy in predicting the 28-day compressive strength of diverse types of pervious concrete.
To comprehensively demonstrate the predictive performance of the model, this paper employs the following metrics to further evaluate the CNN model. The coefficient of determination R², characterizing the prediction effect, is calculated by Formula (11). The coefficient of determination is a commonly used evaluation index for assessing the prediction and fitting performance of a model; its value falls between [0, 1], with a value closer to 1 indicating that the model's predicted values are closer to the actual values.

$$R^2 = 1 - \frac{\sum_{i=1}^{N}\left(\mathrm{Predicted}_i - \mathrm{Actual}_i\right)^2}{\sum_{i=1}^{N}\left(\mathrm{Actual}_i - \overline{\mathrm{Actual}}\right)^2} \quad (11)$$
In Formula (11), Predicted i represents the predicted strength of the i-th sample in the model, Actual i represents the measured strength of the i-th sample, and Actual denotes the average measured strength of all samples.
In addition to the coefficient of determination, this paper assesses the predictive performance of the CNN model on pervious concrete using Root Mean Square Error, Mean Absolute Percentage Error (MAPE), and Mean Absolute Error (MAE).Formulas ( 12)-( 14) present the expressions for these indicators, and Table 5 provides the values of these evaluation metrics in a single model training experiment.In addition to the aforementioned indicators, this paper also visually presents the distribution of relative errors between predicted and actual values in both the training and test sets through histograms, as illustrated in Figure 7.According to the calculations, the minimum relative error in the training set can reach 0.03%, and in the test set, it can reach 0.08%.Moreover, in the training set, over 60% of the relative errors are less than 10%, and in the test set, a similar proportion of over 60% of the relative errors fall below 10%.The percentage of relative errors exceeding 20% is only 9.30% in the training set and 8.11% in the test set.These findings indicate the CNN model's ability to provide reliable and accurate estimates of compressive strength for pervious concrete with varying material compositions.
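As a concrete illustration, the four evaluation metrics above can be computed directly from paired arrays of measured and predicted strengths. The sketch below assumes NumPy and uses hypothetical sample values; it is not the paper's original evaluation code, and the formula numbering in the comments simply follows the order listed in the text.

```python
import numpy as np

def r2(actual, predicted):
    """Coefficient of determination, Formula (11)."""
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(actual, predicted):
    """Root Mean Square Error, Formula (12)."""
    return np.sqrt(np.mean((actual - predicted) ** 2))

def mape(actual, predicted):
    """Mean Absolute Percentage Error in percent, Formula (13)."""
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def mae(actual, predicted):
    """Mean Absolute Error, Formula (14)."""
    return np.mean(np.abs(actual - predicted))

# Hypothetical measured vs. predicted 28-day strengths (MPa), for illustration.
actual = np.array([10.0, 20.0])
predicted = np.array([12.0, 18.0])
```

On these toy arrays the functions give MAE = 2 MPa, RMSE = 2 MPa, MAPE = 15%, and R^2 = 0.84, illustrating how each metric penalizes the same errors differently.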
Buildings 2024, 14, x FOR PEER REVIEW 13 of 18
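The relative-error shares discussed above (the minimum error and the fractions below 10% and above 20%) can likewise be tallied before plotting a histogram. This is an illustrative sketch with hypothetical values, not the study's actual data.

```python
import numpy as np

def relative_error_stats(actual, predicted):
    """Summarize the relative-error distribution between predictions and
    measurements, mirroring the shares reported for Figure 7."""
    rel = 100.0 * np.abs(predicted - actual) / actual  # relative error in %
    return {
        "min_%": rel.min(),
        "share_below_10%": np.mean(rel < 10.0),
        "share_above_20%": np.mean(rel > 20.0),
    }

# Hypothetical measured vs. predicted strengths (MPa), for illustration only.
stats = relative_error_stats(
    np.array([10.0, 10.0, 10.0, 10.0]),
    np.array([10.5, 11.0, 13.0, 10.01]),
)
```

For these toy values, half of the samples have relative errors below 10% and one quarter exceed 20%, the same style of summary the paper reports for its training and test sets.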

Comparative Analysis between CNN and Other Prediction Methods
To further highlight the superiority of CNN, this study conducted a comparative analysis of the predictive performance between CNN and another widely used machine learning model, the BP neural network. Both models were trained and tested using the same dataset. Figure 8 visually illustrates the predictive capabilities of the BP neural network regarding the compressive strength of pervious concrete from various sources. It is evident that the data points are generally distributed in proximity to the diagonal line, with some points exhibiting a certain distance from the diagonal but lacking clear outliers. This observation suggests that the BP neural network also demonstrates acceptable predictive performance for the 28-day compressive strength of pervious concrete with diverse material compositions.
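For readers unfamiliar with the baseline, a BP neural network is simply a feedforward network trained by backpropagation of the error gradient. The minimal NumPy sketch below trains a one-hidden-layer network on synthetic stand-ins for the eight mix-proportion inputs; the layer size, activation, learning rate, and data are all illustrative assumptions, not the configuration used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for 8 mix-proportion inputs and 28-day strengths.
X = rng.uniform(0.0, 1.0, size=(64, 8))
y = (X @ rng.uniform(-1.0, 1.0, size=(8, 1))) + 0.1 * rng.normal(size=(64, 1))

# One hidden layer with tanh activation: the classic BP architecture.
W1 = rng.normal(scale=0.5, size=(8, 16)); b1 = np.zeros((1, 16))
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros((1, 1))
lr = 0.05  # assumed learning rate

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, out0 = forward(X)
loss0 = np.mean((out0 - y) ** 2)  # MSE before training

for _ in range(500):
    h, out = forward(X)
    err = out - y                          # dLoss/dout (constant factor folded into lr)
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0, keepdims=True)
    dh = (err @ W2.T) * (1.0 - h ** 2)     # backpropagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0, keepdims=True)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, out1 = forward(X)
loss1 = np.mean((out1 - y) ** 2)  # MSE after training
```

The training loss falls steadily over the 500 gradient steps, which is all the sketch is meant to demonstrate: unlike a CNN, the BP network applies no convolution or weight sharing over the input features.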

To visually compare the efficacy of CNN and the BP neural network in predicting pervious concrete compressive strength, this paper generated a comparative graph illustrating their predicted values against the actual values, as shown in Figure 9. To provide a more explicit comparison of the predictive capabilities for the 28-day compressive strength of pervious concrete between the CNN and BP models, this study contrasts the predicted values of CNN and BP with the actual values individually, as depicted in Figure 10. The results in the figure clearly demonstrate that both BP and CNN exhibit satisfactory predictive performance across the entire dataset. However, the data points in the CNN model are more densely clustered around the diagonal, indicating closer proximity between CNN predictions and the measured compressive strength. Additionally, the overall R^2 for the sample predictions in the CNN model is 0.931, while for the BP neural network it is 0.893. This implies that the overall predictive performance of CNN surpasses that of the BP neural network.

To comprehensively compare the predictive performance of CNN and BP, this study evaluated the RMSE, MAE, and MAPE metrics of the two models. Figure 11 illustrates the comparative results of CNN and BP based on these metrics. The findings reveal that CNN exhibits smaller RMSE, MAE, and MAPE in comparison to BP, indicating that the average differences between predicted and actual values are reduced in the CNN model. Moreover, the MAPE for the BP test set is 14.40%, surpassing the desirable threshold of 10%. This suggests that while the BP neural network demonstrates reasonable predictive performance for most pervious concrete samples, it may exhibit notable deviations from actual values for specific mix ratios, presenting challenges in practical applications. In contrast, CNN demonstrates smaller error metrics, with all MAPE values falling below the 10% threshold, signifying that its predictive performance is within an optimal range. Therefore, CNN provides reliable predictions for the 28-day compressive strength of pervious concrete with various material compositions.

The results of this study enable the prediction of the 28-day compressive strength of various types of pervious concrete using existing pervious concrete preparation experience, better meeting the needs of practical
construction. However, due to the limited nature of the dataset, factors such as aggregate size and type, cement grade, and curing conditions were not included as input parameters. The diversity of pervious concrete types may therefore result in suboptimal performance of the predictive model constructed in this study. In future work, to build CNN predictive models better suited to different materials, it is essential to fully utilize existing experimental data, incorporate the material information and preparation conditions of pervious concrete into the model's input parameters, and collect sufficient data to ensure model convergence. Additionally, as the number of input variables and the size of the dataset grow significantly, determining specific values for hyperparameters such as the learning rate and learning-rate decay factor will become a complex issue. It will be necessary to develop appropriate algorithms that partition a portion of the overall dataset for estimating these hyperparameter values, enabling the model to converge more quickly and thereby improving prediction accuracy and applicability.
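One simple realization of the suggested approach is a grid search over the learning rate and decay factor, evaluated on a partition of the data held out purely for hyperparameter estimation. The sketch below uses a linear surrogate model and synthetic data; the 80/20 split ratio, candidate values, and step count are all illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))                      # synthetic 8-feature inputs
y = X @ rng.normal(size=8) + 0.05 * rng.normal(size=100)

# Partition part of the data purely for hyperparameter estimation.
X_tr, X_val = X[:80], X[80:]
y_tr, y_val = y[:80], y[80:]

def val_loss(lr, decay, steps=200):
    """Train a linear model with a decaying learning rate; return validation MSE."""
    w = np.zeros(8)
    for t in range(steps):
        grad = 2.0 * X_tr.T @ (X_tr @ w - y_tr) / len(X_tr)
        w -= lr * (decay ** t) * grad              # lr * decay^t schedule
    return np.mean((X_val @ w - y_val) ** 2)

# Candidate (learning rate, decay factor) pairs; pick the best on validation.
grid = [(lr, d) for lr in (0.001, 0.01, 0.1) for d in (0.99, 0.999)]
best = min(grid, key=lambda p: val_loss(*p))
```

The same pattern scales to the CNN case by swapping the linear surrogate for the network's training loop; the essential idea is only that the hyperparameters are scored on data the model never trains on.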

Figure 1. Schematic Diagram of the Convolution Process.

Figure 2. Schematic Diagram of the Pooling Process.

Figure 3. The Structure of the CNN.
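As a shape-level illustration of such an architecture, the following NumPy sketch passes the eight mix-proportion inputs through three "valid" 1-D convolution modules (kernel size 3 with ReLU) and a dense output layer. All weights and kernel sizes are hypothetical, and pooling is omitted for brevity; this is a simplified sketch, not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, k):
    """'Valid' 1-D convolution (cross-correlation, as used in deep learning)."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

def relu(v):
    return np.maximum(v, 0.0)

# Hypothetical weights: three conv kernels of size 3, then a dense layer.
# The sequence length shrinks 8 -> 6 -> 4 -> 2 across the three modules.
k1, k2, k3 = (rng.normal(size=3) for _ in range(3))
w_out = rng.normal(size=2)
b_out = 0.0

def predict_strength(x):
    """Forward pass: three conv+ReLU modules, then a dense scalar output."""
    h = relu(conv1d(x, k1))   # module 1
    h = relu(conv1d(h, k2))   # module 2
    h = relu(conv1d(h, k3))   # module 3
    return float(h @ w_out + b_out)
```

Calling `predict_strength` on a length-8 feature vector yields a single scalar, mirroring how the model maps eight mix-proportion inputs to one predicted 28-day strength.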

The vertical axis represents the 28-day compressive strength predicted by the CNN model, while the horizontal axis represents the actual compressive strength data obtained through literature review and experimental testing. This figure enables a direct visual comparison of the CNN model's predictions on the training and testing sets. The data points in the graph are primarily clustered around the diagonal line, indicating a strong alignment between the model's predictions and the observed results. This clustering pattern suggests a high degree of concordance between the predicted and actual compressive strength values. The model demonstrates notable accuracy in forecasting compressive strength for pervious concrete, as evidenced by the proximity of the data points to the diagonal line. This visual analysis underscores the CNN model's ability to provide accurate predictions for compressive strength.

Figure 4. Relationship between Predicted and Actual Values in CNN Training and Testing Sets.

Figure 4 illustrates the favorable predictive performance of the CNN model trained in this study for various types of pervious concrete. However, there is a lack of training data in the range of 10 to 20 MPa, which could lead to inadequacies in the model's predictions for the compressive strength of pervious concrete within this range. To enhance the robustness of the model, ensuring that subtle variations in the material composition of pervious concrete do not significantly impact the prediction results, experimental data on the measured 28-day compressive strength in the range of 10 to 20 MPa were added to augment the training set of the CNN predictive model.
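A data-coverage check of this kind can be automated by binning the measured strengths and flagging empty ranges. The sketch below uses hypothetical strength values chosen to illustrate detecting a 10-20 MPa gap; the bin edges are an assumption.

```python
import numpy as np

def coverage_gaps(strengths, bin_edges):
    """Count samples per strength bin; zero counts reveal coverage gaps."""
    counts, _ = np.histogram(strengths, bins=bin_edges)
    return {(bin_edges[i], bin_edges[i + 1]): int(c)
            for i, c in enumerate(counts)}

# Hypothetical 28-day strengths (MPa) with no samples between 10 and 20 MPa.
strengths = np.array([5.0, 8.0, 9.5, 22.0, 25.0, 31.0, 38.0])
gaps = coverage_gaps(strengths, bin_edges=[0, 10, 20, 30, 40])
# gaps[(10, 20)] is 0 here, signaling that extra data should be collected
# in that range before retraining.
```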


Figure 5. Prediction Performance of Samples from Different Sources in the Model: (a) Training Set; (b) Test Set.


Figure 6. Comparison between Predicted and Actual Values of the Improved CNN Model: (a) Training Set; (b) Test Set.


Figure 7. Histogram of Relative Errors in the Training Set and Test Set.


Figure 8. Predictive Performance of BP Neural Network for Data from Different Sources.

Figure 9 provides an intuitive representation of the variance in predictive performance between the BP neural network model and CNN for the 28-day compressive strength of pervious concrete. Notably, the predicted values of both CNN and BP cluster around the actual values. However, the predicted values of the CNN model are in closer proximity to the real values, visually indicating superior predictive performance. This visual analysis underscores that the CNN model offers greater accuracy and reliability in predicting the compressive strength of pervious concrete compared to the BP neural network.

Figure 9. Comparison between Predicted Values of CNN and BP and Actual Values.


Figure 10. Predictive Performance of BP Neural Network and CNN.

Figure 11. Comprehensive Comparison of CNN and BP Metrics.



This paper introduces a CNN model designed for predicting the 28-day compressive strength of pervious concrete, utilizing eight mix proportion parameters as input variables. The model undergoes training and testing on a dataset comprising 123 samples from the literature and experiments. The key findings of this study are as follows:

(I) The proposed CNN model showcased remarkable accuracy. Through multiple experiments, the CNN model achieved an R^2 of 0.938 and a MAPE of 9.13% on the test set, indicating acceptable prediction errors and robust model stability. This underscores the CNN model's capability to precisely predict the 28-day compressive strength of pervious concrete, making it adaptable to diverse material compositions.

(II) The predictive model presented in this paper demonstrates enhanced stability and outperforms traditional methods. In comparison to the BP neural network trained and tested on the same dataset, the CNN model exhibits considerably lower prediction error metrics (RMSE, MAE, and MAPE) and a notably higher R^2, signifying superior predictive performance and stability compared to traditional approaches.

(III) This study supplemented the model training with experimental data covering the 10-20 MPa compressive strength range of pervious concrete, ensuring coverage of the common compressive strength spectrum. The test set results indicate that the model augmented with experimental data performs well in predicting data obtained from different literature sources as well as data acquired through experiments.

Table 1. Partial Predictive Models for Compressive Strength of Pervious Concrete and Their Performance.

Table 2. Source and Information of the Dataset.

Table 4. Pervious Concrete Porosity and Compressive Strength Test Results.

Table 5. Values of Evaluation Indicators in a Single Model Training Experiment.
