Article

Behavior Prediction of Connections in Eco-Designed Thin-Walled Steel–Ply–Bamboo Structures Based on Machine Learning for Mechanical Properties

by Wanwan Xia 1, Yujie Gao 1,2,3,*, Zhenkai Zhang 1, Yuhan Jie 2,4, Jingwen Zhang 5, Yueying Cao 6, Qiuyue Wu 2,7, Tao Li 2,7,*, Wentao Ji 7 and Yaoyuan Gao 7

1 School of Physical and Mathematical Sciences, Nanjing Tech University, Nanjing 210037, China
2 College of Art and Design, Nanjing Tech University, Nanjing 210037, China
3 School of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing 210037, China
4 Institute of Urban and Rural Environment, Nanjing Tech University, Nanjing 210037, China
5 College of Energy Science and Engineering, Nanjing Tech University, Nanjing 210037, China
6 Arizona College of Technology, Hebei University of Technology, Tianjin 300130, China
7 College of Civil Engineering, Nanjing Tech University, Nanjing 210037, China
* Authors to whom correspondence should be addressed.
Sustainability 2025, 17(15), 6753; https://doi.org/10.3390/su17156753
Submission received: 14 May 2025 / Revised: 10 July 2025 / Accepted: 21 July 2025 / Published: 24 July 2025

Abstract

This study employed multiple machine learning and hyperparameter optimization techniques to analyze and predict the mechanical properties of self-drilling screw connections in thin-walled steel–ply–bamboo shear walls, leveraging the renewable and eco-friendly nature of bamboo to enhance structural sustainability and reduce environmental impact. The dataset, which included 249 sets of measurement data, was derived from 51 disparate connection specimens fabricated with engineered bamboo—a renewable and low-carbon construction material. Utilizing factor analysis, a ranking table recording the comprehensive score of each connection specimen was established to select the optimal connection type. Eight machine learning models were employed to analyze and predict the mechanical performance of these connection specimens. Through comparison, the most efficient model was selected, and five hyperparameter optimization algorithms were implemented to further enhance its prediction accuracy. The analysis results revealed that the Random Forest (RF) model demonstrated superior classification performance, prediction accuracy, and generalization ability, achieving approximately 61% accuracy on the test set (the highest among all models). In hyperparameter optimization, the RF model processed through Bayesian Optimization (BO) further improved its predictive accuracy to about 67%, outperforming both its non-optimized version and models optimized using the other algorithms. Considering the mechanical performance of connections within TWS composite structures, applying the BO algorithm to the RF model significantly improved the predictive accuracy. This approach enables the identification of the most suitable specimen type based on newly provided mechanical performance parameter sets, providing a data-driven pathway for sustainable bamboo–steel composite structure design.

1. Introduction

Sustainable construction is a crucial topic in modern civil engineering, aiming to reduce environmental impacts and promote the use of renewable resources. Bamboo, as a renewable and low-carbon construction material, has excellent mechanical properties and ecological benefits, such as carbon sequestration potential. It is a rapidly renewable resource that provides substantial sustainability benefits due to its short growth cycle and minimal resource needs. In sustainable construction, combining high-strength thin-walled steel (TWS) with renewable materials such as bamboo offers an eco-friendly approach without compromising structural performance.
Machine learning, an interdisciplinary field integrating computer science, mathematics, statistics, and engineering, is committed to developing algorithmic models for learning and training on extensive datasets [1,2]. This enables machines to autonomously discern patterns and rules from data to address various problems [3]. As machine learning models (such as K-Nearest Neighbors and Random Forest) become increasingly intricate, hyperparameter optimization (such as Grid Search, Bayesian Optimization, and Particle Swarm Optimization) can markedly improve the efficiency of model operations [4,5,6,7,8,9,10]. Civil engineering data, characterized by nonlinearity and high dimensionality, present challenges related to managing massive, real-time, and uncertain data in traditional ways, and this demands the maintenance of high precision in the context of material selection and application. Machine learning offers a powerful data-driven approach for addressing this challenge, enabling the simulation and prediction of bamboo’s mechanical behavior in composite structures. Through leveraging experimental data and advanced algorithms, machine learning can uncover patterns and relationships that are difficult to capture with conventional methods, thereby optimizing the design of sustainable structures. Consequently, machine learning and hyperparameter optimization have emerged as effective tools for tackling complex engineering challenges. They can efficiently process large-scale civil engineering data and enhance the generalization ability of a model.
Thin-walled steel (TWS) structures [11], composed of thin steel plates or strips, offer numerous benefits including light weight, high strength, seismic resistance, corrosion and fire resistance, and flexibility in design. These attributes make them a popular choice for constructing houses and bridges [12]. However, traditional TWS structures are susceptible to local distortion and global instability due to the thinness of the plates [13]. This study presents a novel structural form—namely, the thin-walled steel–ply–bamboo composite structure [14]—which leverages the superior mechanical properties of bamboo to significantly enhance the overall performance and sustainability of TWS structures. At present, there is little literature on the combination of TWS structures with ply–bamboo panels to construct connections. Therefore, it is imperative to conduct comprehensive research to augment the relevant theory and facilitate its application in practical engineering contexts. For the sake of convenience in expression, this is referred to as a TWS composite structure connection in the subsequent text. This novel composite not only addresses the mechanical limitations of traditional TWS structures but also aligns with sustainable construction goals by reducing reliance on steel and incorporating bamboo, thereby lowering the structure’s environmental footprint.
Bamboo, as a renewable natural material, is gaining increasing attention in structural engineering [15]. It not only has superior mechanical properties, such as high strength, good toughness, and durability, but also excels in thermal and acoustic performance [16]. Therefore, bamboo was selected as the sheathing material in this study to enhance the mechanical performance of a TWS composite structure connection. Self-drilling screw connections are widely used in thin-walled steel structures as an efficient method for joining steel plates and composite materials [17,18]. In this study, self-drilling screw connections were employed to explore their application in the combination of thin-walled steel and bamboo [19].
Factor analysis [20]—a multivariate statistical method—is employed to assess the impacts of various materials and structural parameters on the mechanical properties of TWS composite structures [21]. Through the utilization of factor analysis, the critical parameters influencing structural performance can be discerned, allowing for the selection of the optimal TWS composite structure connection type [22].
In 2017, the machining problem of thin-walled flexible workpieces was explored and a machining platform was established [23]. In 2021, Runqiong focused on data-driven chatter detection technology and first used the structural function method to extract the fractal features of signals [24]. In 2022, Hao proposed a sparse Bayesian learning method based on engineering knowledge for the in situ prediction of machining errors in thin-walled blades [25]. In 2023, Chenyu proposed a novel impact load identification and localization method using three machine learning models [26]; in the same year, Xu et al. studied the use of convolutional neural networks to evaluate the mechanical properties of thin-film elasto-plastic materials [27]. Kyosuke et al. described an inverse analysis method based on neural network spectroscopy and applied it to the quantitative evaluation of optical constants [28,29]. These methods effectively handle complex nonlinear relationships and provide more accurate predictions of material characteristics.
However, previous studies have shown limitations, such as a narrow focus on specific machining techniques, limited machine learning model diversity, and a lack of innovative material integration, which often failed to address the comprehensive mechanical performance and generalization of thin-walled structures. Therefore, this study proposes a novel approach to overcome these shortcomings. In this study, in contrast to the existing schemes, the contributions of this design are as follows:
(1)
Eight new machine learning models were applied to tackle the problem of connection classification within TWS structures, and the best model was selected by comparing their accuracy.
(2)
Five hyperparameter optimization algorithms, including grid search and simulated annealing, were employed to fine-tune the performance of existing machine learning models, effectively avoiding overfitting or underfitting and enhancing generalization ability.
(3)
A thin-walled steel–ply–bamboo composite structure was introduced, combining the strengths of TWS structures with those of bamboo panels. Its performance under different conditions was studied, demonstrating its advantages.
(4)
Factor analysis and data processing were integrated to assess the optimal mechanical properties of the connections within TWS structures.
(5)
The combination of this innovative material with intelligent predictive models has the potential to facilitate more efficient material selection, supporting the promotion of thin-walled steel–ply–bamboo composites and meeting a broader range of applications.
The study focused initially on the performance evaluation of TWS composite structure connections, as depicted in Figure 1. It involved gathering performance parameters, load-displacement behaviors, and both monotonic and cyclic loading data from experiments conducted on TWS composite structure connections. Figure 1 illustrates the overall research workflow, encompassing stages from experimental data collection to factor analysis, model selection, and hyperparameter optimization. Following this, factor analysis was utilized to elucidate the principal performance indicators of various connection types. This process facilitated the selection of the structural type with the best overall performance through a meticulous data-driven evaluation. Then, eight different machine learning models were utilized for data analysis and prediction. Through a comparative analysis of these models, the optimal analytical model was identified. Finally, five hyperparameter optimization algorithms were applied to fine-tune the performance of the optimal model, further enhancing its accuracy and generalization ability. This process provided a reliable and robust tool for predicting the optimal types and classifications of future TWS composite structure connections.

2. Notation

All of the symbols used in this study are presented in Table 1.

3. Materials

The inner construction of the TWS composite structure connections is illustrated in Figure 2.

3.1. Ply–Bamboo Panels

Two types of 8 mm thick ply–bamboo panels were used as sheathing materials for TWS structures: double-directional laminated bamboo panels and unidirectional laminated flat-pressed bamboo panels. The tensile properties of the double-directional ply–bamboo were lower than those of the unidirectional flat-pressed ply–bamboo [30,31].

3.2. Thin-Walled Steel Studs

The thin-walled steel studs were fabricated using an 89 × 38 × 12 × 0.6 mm C-shaped section. The material properties of the stud members were determined in accordance with the AISI (American Iron and Steel Institute) S400–15 specification. The average yield strength, tensile strength, and elongation ratio of stainless steel (SS) were 180 MPa, 485 MPa, and 40%, respectively. For cold-formed steel (CFS), these values were 286.50 MPa, 370.30 MPa, and 33.50%, respectively [30,31,32].

3.3. Self-Drilling Screws

Self-drilling screws were used in conjunction with the sheathing panels on steel–frame shear walls. Three distinct types of self-drilling screws were employed for the sheathing-to-stud connections, comprising the PTS 3.5 flathead phosphating self-drilling screw and the STS 3.5 and STS 4.2 flathead stainless steel self-drilling screws [30,31,33].

4. Manufacture of Specimens and the Experimental Setup

To evaluate the mechanical performance of thin-walled steel (TWS) composite structure connections, 51 connection specimens were tested under various conditions, yielding a total of 249 sets of experimental data [34,35,36]. These tests incorporated two types of 8 mm thick ply–bamboo panels (double-directional laminated bamboo panels and unidirectional flat-pressed bamboo panels), two types of thin-walled steel studs (stainless steel and cold-formed steel), three types of self-drilling screws (PTS 3.5, STS 3.5, and STS 4.2), different end distances (e.g., 15 mm, 20 mm), and two loading protocols (monotonic and cyclic). The experimental design aimed to systematically assess the influence of these parameters on the connection performance.
The specimens were constructed by connecting ply–bamboo panels to thin-walled steel studs using self-drilling screws. The bamboo panels were prepared as per the specifications in Section 3.1, with double-directional laminated panels exhibiting lower tensile strength than that of unidirectional flat-pressed panels [30,31]. The steel studs, with a C-shaped section of 89 × 38 × 12 × 0.6 mm, were fabricated according to the AISI S400–15 standard [33]. Self-drilling screws were used to secure the sheathing panels to the steel frame, ensuring consistent connection configurations across specimens.
The experimental program was designed to ensure comprehensive coverage of key parameters affecting TWS composite structure connections. A total of 51 test series were developed, systematically combining the following variables: (1) bamboo panel type (double-directional laminated or unidirectional flat-pressed), (2) steel stud type (stainless steel or cold-formed steel), (3) screw type (PTS 3.5, STS 3.5, or STS 4.2), (4) end distance (e.g., 15 mm, 20 mm), and (5) loading protocol (monotonic or cyclic). These parameters were selected based on their established influence on connection performance, as reported in prior studies [30,31,33]. An orthogonal experimental design was employed to ensure the balanced representation of variable combinations, minimizing bias and enabling the evaluation of both main effects and interactions. Table 2 summarizes the experimental design, detailing the distribution of the test series and sample sizes.
The experiments were conducted vertically, using a 2-ton MTS universal testing machine that was controlled by a computer with a preset loading program. Displacement was measured using a linear variable differential transformer (LVDT) sensor positioned on the extraction platform, aligned with the screw to minimize external deformation. To reduce eccentricity in the lap joint, clamps were adjusted to align the sliding surface between the sheathing panel and steel frame with the centerline of the actuator and load cell. Before loading, clamps were fine-tuned to ensure linear force application at a constant rate. Each monotonic test group consisted of three specimens (e.g., CD-3.5P15-M15-1 to CD-3.5P15-M15-3), while each cyclic test group included five specimens (e.g., CU-4.2S30-C15-1 to CU-4.2S30-C15-5) to account for variability under repeated loading. The cyclic loading protocol involved the following: (1) a five-stage cyclic loading up to the yield displacement; (2) increasing the load increment by 0.5 times the yield displacement, repeated three times until failure. In total, 51 connection specimens were tested, yielding 249 sets of measurement data.
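The cyclic protocol above can be sketched as an amplitude schedule. Note that this is only one plausible reading: the paper does not specify how the five pre-yield stages are spaced, so the equal-fraction stages, the post-yield repeat count, and the failure cut-off used below are all assumptions for illustration.

```python
def cyclic_amplitudes(delta_y, pre_yield_stages=5, post_yield_reps=3, max_amplitude=None):
    """Sketch of the cyclic loading amplitude schedule described in the text.

    Assumptions (not specified in the paper): the five pre-yield stages are
    equal fractions of the yield displacement delta_y, and loading stops at
    max_amplitude, a stand-in for specimen failure.
    """
    if max_amplitude is None:
        max_amplitude = 4.0 * delta_y  # hypothetical failure cut-off
    # Stage 1: five cycles stepping up to the yield displacement.
    amps = [delta_y * (i + 1) / pre_yield_stages for i in range(pre_yield_stages)]
    # Stage 2: grow the target by 0.5 * delta_y, repeating each level three times
    # until the assumed failure amplitude is reached.
    target = delta_y
    while target + 0.5 * delta_y <= max_amplitude:
        target += 0.5 * delta_y
        amps.extend([target] * post_yield_reps)
    return amps

schedule = cyclic_amplitudes(delta_y=2.0)
```

For a yield displacement of 2.0 mm this yields five ramp-up amplitudes followed by repeated cycles at 3.0, 4.0, ... mm, mirroring the 0.5-times-yield increments described above.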
To ensure statistical robustness, the sample sizes (3 for monotonic tests and 5 for cyclic tests) were determined based on prior studies [34,35,36], which demonstrated that these quantities adequately capture performance variability. For example, the test series CD-3.5P15-M15, with specimens including CD-3.5P15-M15-3, provided reliable data for elastic stiffness (Ke) and yield displacement (δy), as shown in Table 1. Preliminary statistical analysis confirmed the balance of the test groups, with the coefficient of variation (CV) for key mechanical performance indicators (e.g., Ke, δy) across test series being less than 20%, indicating consistent and representative data within each group. Furthermore, the orthogonal design ensured an even distribution of test conditions, preventing the overrepresentation of any single variable. This balanced design supports the reliability of the experimental results and their suitability for subsequent analyses, including machine learning applications.

5. Experimental Results

The experiment yielded 249 sets of measurement data from the 51 tested specimens, which were compiled into a table. This table presents the seven mechanical performance indicators of the TWS composite structure connections across the different experimental groups. Specifically, the measured indicators include the elastic stiffness (Ke), yield displacement (δy), maximum displacement (δm), maximum force (Fm), ultimate displacement (δu), ultimate force (Fu), and ductility factor (μ).
The selection of these seven mechanical performance indicators enables a comprehensive characterization of the overall performance of the TWS composite structure connections. Elastic stiffness (Ke) represents a connection’s resistance to deformation in the initial loading stage, reflecting its contribution to structural stability. Yield displacement (δy) marks the transition from elastic to plastic behavior, influenced by material and geometric parameters. Maximum displacement (δm) measures the connection’s deformation capacity before reaching the peak load, which is critical for seismic energy dissipation. Maximum force (Fm) indicates the connection’s peak load-carrying capacity, and it is closely related to the screw type and steel stud strength. Ultimate displacement (δu) reflects the total deformation capacity at failure, determining the ductility and failure modes. Ultimate force (Fu) represents the residual strength at failure, revealing material degradation characteristics. Ductility factor (μ) quantifies the connection’s plastic deformation capacity, a key indicator for seismic performance. The selection of these features is based on their established importance in TWS and similar composite connection studies [30,31,33,34,35,36]. Together, they describe the connection’s stiffness, strength, deformation capacity, and ductility, serving as critical indicators for structural design and performance evaluation. Other potential features, such as energy dissipation and initial slip, were considered but excluded due to their limited additional explanatory power and increased analytical complexity. A portion of the results are displayed in Table 3, while the complete set is provided in Appendix A.2.
Table 1 presents the statistical characteristics of the dataset, including the mean, standard deviation, and range for each mechanical performance parameter. Annotations indicate significant variability in parameters such as the ductility ratio (μ) and energy dissipation (E), reflecting the influence of the bamboo type and screw size on the connection behavior. These statistics provide critical insights into the dataset’s heterogeneity, guiding the selection of appropriate machine learning models and justifying the use of factor analysis to reduce dimensionality.

6. Correlation Analysis and Normality Test

6.1. Normality Test

Normality analysis is a statistical method employed to ascertain whether data adhere to a normal distribution [37]. Many statistical methods exhibit superior performance and reliability when the data closely align with a normal distribution. In this study, the Shapiro–Wilk test, which is appropriate for datasets with fewer than 5000 observations, was used to check whether the data followed a normal distribution [38]; the results are shown in Table 4. Table 4 also summarizes the Spearman correlation coefficients between the input and output parameters, with annotations highlighting statistically significant correlations (p < 0.05). For instance, the strong correlation between screw diameter and maximum force (Fm) underscores the importance of fastener selection in optimizing connection performance. These findings informed the factor analysis by identifying key variables influencing mechanical behavior, ensuring robust feature selection for the machine learning models.
In a hypothesis test, if the p-value is less than 0.05, the null hypothesis is rejected and the data are judged not to follow a normal distribution; if the p-value is greater than or equal to 0.05, the null hypothesis cannot be rejected, and the data are treated as consistent with a normal distribution [39]. According to Table 2, the maximum p-value among these variables was 0.007, which fell below the 0.05 significance threshold. Consequently, the null hypothesis was rejected for every variable, indicating that the measurement data did not adhere to a normal distribution.
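This decision rule is straightforward to apply with SciPy's Shapiro–Wilk implementation. The sketch below uses synthetic data (the skewed sample and the `follows_normal` helper are illustrative, not part of the study):

```python
import numpy as np
from scipy.stats import shapiro

def follows_normal(sample, alpha=0.05):
    """Return True when the Shapiro-Wilk test cannot reject normality.

    The test is recommended only for samples smaller than 5000 observations,
    which comfortably covers the 249 records in this study.
    """
    stat, p_value = shapiro(sample)
    return p_value >= alpha

rng = np.random.default_rng(42)
skewed = rng.exponential(scale=1.0, size=200)  # clearly non-normal sample
print(follows_normal(skewed))
```

For the skewed exponential sample the p-value falls far below 0.05, so normality is rejected, just as it was for every variable in the experimental dataset.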

6.2. Correlation Coefficient

The correlation coefficient (r) is a fundamental statistical metric used to quantify the strength and direction of linear relationships between variables [40]. Given that the data did not follow a normal distribution, the Spearman correlation coefficient was used to analyze the relationships between mechanical variables, revealing their impact on the mechanical properties of TWS composite structure connections [41]. A heatmap of the Spearman correlation coefficients is presented in Figure 3.
Figure 3 displays a correlation matrix heatmap that visualizes the correlation coefficients (r), which range from −1 to 1 and measure the strength and direction of the relationships between variables. An r-value below 0 indicates a negative correlation (as one variable increases, the other decreases), a value above 0 indicates a positive correlation (both variables increase together), and an r-value of 0 suggests no correlation [42]. Given the limited sample size, an r-value with an absolute value exceeding 0.3 was considered to reflect a strong correlation.
Figure 3 presents 100 correlation coefficients, illustrating the relationships between the seven mechanical performance indicators (Ke, δy, δm, Fm, Fu, δu, μ) and three additional variables (S.M., S.S., S.Y.S.). Most pairs exhibit weak (|r| < 0.3) or negative correlations, such as Ke with δy (−0.501), Ke with δm (−0.405), and μ with Ke (0.276), reflecting potential differences among the mechanical performance metrics. Some pairs show moderate correlations (|r| of 0.3 to 0.8), for example, Ke with Fm (−0.319), Ke with Fu (−0.319), δu with μ (0.715), and Fu with δu (0.766), providing a solid foundation for factor analysis to extract latent factors. A few pairs display high correlations, such as S.S. with S.M. (1.000) and δm with δu (0.835), but these occur primarily between the additional variables or between additional variables and mechanical performance indicators; among the 49 correlation coefficients between mechanical performance indicators, 47 are of moderate to low strength (|r| < 0.8), strongly indicating a high degree of independence among these indicators. Furthermore, with a KMO test value of 0.78 (well above the 0.6 threshold) and a significant Bartlett's sphericity test (p < 0.05), the data structure is further validated as suitable for factor analysis.
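The rank-based coefficient underlying Figure 3 can be computed with `scipy.stats.spearmanr`. The two arrays below are toy stand-ins for a pair of indicators such as δu and μ, not values from the actual dataset:

```python
import numpy as np
from scipy.stats import spearmanr

# Toy stand-ins for two mechanical indicators; the real study computes the
# coefficient over the 249-record dataset of Ke, δy, δm, Fm, δu, Fu and μ.
delta_u = np.array([3.1, 4.0, 5.2, 6.8, 7.5, 9.1])
mu = np.array([1.2, 1.5, 1.9, 2.4, 2.8, 3.3])  # monotonically related values

rho, p = spearmanr(delta_u, mu)
print(rho)  # 1.0 for a perfectly monotonic relationship
```

Because Spearman's rho works on ranks rather than raw values, it remains valid for the non-normally distributed measurements identified by the Shapiro–Wilk test.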

7. Factor Analysis

The overall performance of each connection must be evaluated on the basis of the seven mechanical properties, and factor analysis was adopted for this purpose. Factor analysis reduces dimensionality by consolidating numerous complex variables into a few independent common factors while minimizing the loss of information. Typically, factor analysis involves dimensionality reduction, calculating factor weights, and computing weighted results. The minimum residual method was used for factor extraction, followed by factor rotation. The process is shown in Figure 4.

7.1. Determining the Number of Factors

We employed the scree test method to determine the optimal number of factors. The scree test method identifies the most suitable number of factors by observing the inflection point in the curve of eigenvalues corresponding to different factor counts. This inflection point indicates the optimal number of factors, as shown in Figure 5.
Typically, factors with eigenvalues exceeding 1 are considered principal components. In Figure 5, the x-axis represents the number of factors from 1 to 7, and the y-axis shows the eigenvalues ranging from 0 to 4, with the curve dropping sharply from approximately 3.5 at factor 1 to about 1.5 at factor 2, then leveling off from factor 3 onward, reaching 1 at factor 4, and gradually approaching 0 thereafter. Since the eigenvalues for F1 to F4 all exceed 1, the number of principal components is determined to be 4.
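The eigenvalue-greater-than-1 rule (the Kaiser criterion) used alongside the scree plot can be checked directly from the correlation matrix. The sketch below applies it to synthetic data with a group structure loosely mimicking correlated mechanical indicators; the dataset and the latent structure are illustrative assumptions, not the study's data:

```python
import numpy as np

def kaiser_factor_count(data):
    """Count eigenvalues of the correlation matrix that exceed 1,
    the usual cut-off for retaining principal factors."""
    corr = np.corrcoef(data, rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # descending, as on a scree plot
    return int(np.sum(eigenvalues > 1.0)), eigenvalues

rng = np.random.default_rng(0)
latent = rng.normal(size=(249, 3))  # hypothetical latent drivers
# Build 7 observed columns where groups share a latent driver.
data = np.column_stack(
    [latent[:, i // 3] + 0.3 * rng.normal(size=249) for i in range(7)]
)
n_factors, eigs = kaiser_factor_count(data)
```

Plotting `eigs` against the factor index reproduces the scree curve of Figure 5; the inflection point and the eigenvalue-greater-than-1 cut-off together determine how many factors to retain.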

7.2. Principal Factors’ Weights

Table 5 lists the contributions of six mechanical parameters to the four common factors (F1 to F4). Meanwhile, these values are also used to calculate the weight of the main factors. F1 shows a strong positive correlation with δu (0.893) and δm (0.463), while its correlations with Ke, Fm, Fu, and δy are relatively weak; F2 exhibits a significant positive correlation with Fm (0.530) and Fu (0.529), with lower correlations with other parameters; F3 is predominantly and highly positively correlated with Ke (1.062), with minimal influence from the other parameters; F4 has the strongest positive correlation with δy (1.661), while showing negative correlations with δu (−0.633) and δm (−0.422) and a minimal impact from the other parameters.
Table 6 summarizes the contributions of four common factors (F1 to F4) derived from factor analysis, along with their variance contribution rates, cumulative variance, and corresponding weights [43]. Specifically, F1 contributes 34.402% to the total variance with a weight of 0.344, F2 adds 33.938%, bringing the cumulative variance to 68.340% with a weight of 0.339, F3 contributes 17.388%, increasing the cumulative variance to 85.728% with a weight of 0.174, and F4 adds 11.21%, resulting in a cumulative variance of 96.944% with a weight of 0.112. This table indicates how much each factor explains the variability in the data, with the cumulative variance showing that, together, they account for nearly 97% of the total variance, reflecting their combined importance in the analysis.
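The weighting scheme of Table 6 can be reproduced arithmetically. Note that the reported weights (0.344, 0.339, 0.174, 0.112) correspond to dividing each variance contribution rate by 100 rather than by the 96.944% cumulative total; the factor scores in the example are hypothetical:

```python
# Variance contribution rates (%) for F1-F4 as reported in Table 6.
# 11.216 recovers the stated 96.944% cumulative total (quoted as 11.21% in the text).
contributions = [34.402, 33.938, 17.388, 11.216]

# Reported weights divide the contribution rates by 100.
weights = [c / 100.0 for c in contributions]

def composite_score(factor_scores, weights=weights):
    """Weighted sum of the four factor scores, used to rank specimens."""
    return sum(w * f for w, f in zip(weights, factor_scores))

# Hypothetical factor scores for one specimen (illustrative only).
score = composite_score([1.2, 0.8, -0.3, 0.5])
```

The comprehensive score F for each specimen is this weighted sum of its four factor scores, and ranking specimens by F produces the ordering in Table 7.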

7.3. Analysis Results

Table 7 presents the top 20 data points in terms of comprehensive score, arranged in descending order by F-value. The highest score was attained by SS-DL-4.2STS30-MT15-1 (1.416897), closely followed by SS-DL-4.2STS30-MT15-2 (1.388279). Given the potential biases inherent in each measurement, the average comprehensive score (AVG) served as an effective tool for mitigating the effects of measurement errors and noise. Thus, based on the AVG, the optimal structure was SS-DL-4.2STS30-MT15. This composite system used stainless steel with double-directional laminated ply–bamboo as its sheathing material. STS 4.2 mm flathead stainless steel self-drilling screws were employed for the sheathing-to-stud connections, with an end distance of 30 mm. The loading protocol followed a monotonic test with a loading rate of 15 mm/min. The second-best structure was SS-DL-4.2STS30-CL15, followed by SS-DL-4.2STS15-MT15 in third place. Overall, in the thin-walled steel–ply–bamboo system, connection specimens using stainless steel and double-directional laminated ply–bamboo (SS-DL type) generally demonstrated superior comprehensive mechanical properties in the experiment.
Experimental equipment can be used to ascertain the specific values of the material’s parameters. Factor analysis is then performed to integrate these values and provide a comprehensive evaluation of the material’s overall performance. This will help in identifying the structural models with high mechanical performance, providing rankings to guide the development and design of superior structures.

8. Strength Prediction Model Based on Machine Learning

8.1. Data Preprocessing

During the preprocessing phase, the dataset was randomly split into a 70% training set for model learning and optimization and a 30% test set to evaluate generalization. In this study, this corresponds to 174 samples used for training and 75 for testing. To improve model evaluation, k-fold cross-validation was employed on the training set: the training data were divided into k equal subsets, and in each iteration, k − 1 subsets were used for training while the remaining subset was designated for validation. This procedure was executed k times, with the average performance determining the final evaluation. In this research, k was set to 5 to strike a balance between accuracy and efficiency.
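The 70/30 split and five-fold partition can be sketched without any libraries. The seed and the fold-assignment scheme below are illustrative choices, not the study's exact procedure:

```python
import random

def split_and_folds(n_samples=249, train_frac=0.7, k=5, seed=7):
    """Random 70/30 split plus a k-fold partition of the training indices."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    n_train = int(train_frac * n_samples)  # 174 of the 249 records
    train, test = idx[:n_train], idx[n_train:]
    # Distribute training indices round-robin into k nearly equal folds;
    # each fold serves once as the validation set.
    folds = [train[i::k] for i in range(k)]
    return train, test, folds

train, test, folds = split_and_folds()
```

In each of the five cross-validation iterations, one fold is held out for validation while the other four are used for fitting, and the five validation scores are averaged.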

8.2. Machine Learning Models

This study evaluates eight machine learning models—Extreme Gradient Boosting (XGBoost), K-Nearest Neighbors (KNN), Random Forest (RF), Logistic Regression (LR), Support Vector Machine (SVM), Naive Bayes (NB), Backpropagation Neural Network (BPNN), and Extremely Randomized Trees (ExTrees)—to predict the mechanical properties of thin-walled steel–ply–bamboo shear wall connections. These models were selected based on their established performance in regression and classification tasks, particularly in handling the complex, nonlinear relationships that are typical of civil engineering datasets. RF and NB were chosen for their robustness in managing high-dimensional data and capturing variable interactions, which are critical for modeling bamboo’s variable mechanical properties. XGBoost was included for its efficiency in gradient boosting and ability to handle sparse data. KNN and SVM were selected for their flexibility in non-parametric modeling, making them suitable for datasets with non-normal distributions, as confirmed by the Shapiro–Wilk test. ExTrees was incorporated for its randomized node-splitting strategy, which enhances model diversity and reduces variance, further contributing to the ensemble’s robustness. BPNN was included to explore deep learning capabilities for capturing intricate patterns, while LR serves as a baseline for comparison [44,45,46,47,48,49,50]. These models, grounded in diverse principles and methods, each offer unique data processing strengths and are effectively applied to predict civil engineering data. Together, they balance prediction accuracy, interpretability, and computational efficiency, supporting the optimization of sustainable structural design. The flowcharts of XGBoost and RF are shown in Figure 6 and Figure 7.
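A minimal Scikit-learn sketch of this model comparison is shown below on synthetic data. The hyperparameters are library defaults rather than the study's tuned values, and XGBoost (available as `xgboost.XGBClassifier` in the separate `xgboost` package) is omitted so the example runs with scikit-learn alone:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Seven of the eight models, instantiated with default settings.
models = {
    "KNN": KNeighborsClassifier(),
    "RF": RandomForestClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "NB": GaussianNB(),
    "BPNN": MLPClassifier(max_iter=2000, random_state=0),
    "ExTrees": ExtraTreesClassifier(random_state=0),
}

# Synthetic stand-in for the 249-record, 7-feature connection dataset.
X, y = make_classification(n_samples=249, n_features=7, n_informative=5,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit each model and record its test-set accuracy for comparison.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```

Comparing the entries of `scores` mirrors the model-selection step of the study, where Random Forest emerged as the strongest classifier on the real dataset.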

8.3. Model Training

The machine learning package Scikit-learn was employed to develop a Python (3.10.12) script for model training. The training time and some of the hyperparameters for each model are presented in Table 8. The remaining hyperparameters for each model are provided in Appendix A.3, such as the leaf size in the KNN, the node-splitting evaluation criterion in the RF, and the error convergence tolerance in the LR.

8.4. Performance Evaluation of the Machine Learning Models

After the model was trained on the training set, its effectiveness was validated using the test set. The evaluation metrics for the classification algorithm included the accuracy (AC), recall (RE), precision (PR), and F1-score (F1), as defined in Equations (1)–(4).
AC = (TP + TN) / (TP + TN + FP + FN)
RE = TP / (TP + FN)
PR = TP / (TP + FP)
F1 = (2 × PR × RE) / (PR + RE)
A true positive (TP) refers to the number of samples correctly classified as positive, while a true negative (TN) refers to those correctly classified as negative. A false positive (FP) indicates negative samples misclassified as positive, while a false negative (FN) represents positive samples misclassified as negative.
Accuracy measures the proportion of correctly predicted samples, reflecting the model’s overall performance. Precision indicates the proportion of actual positives among predicted positives, showing the model’s ability to avoid false positives. Recall measures the model’s ability to identify actual positive samples. The F1 score, as the harmonic mean of precision and recall, balances both metrics, making it suitable for evaluating models on imbalanced data. The closer these values are to 1, the better the model’s performance.
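The four metrics can be verified by hand on a toy label set; the counts below are constructed for illustration, and scikit-learn's metric functions reproduce the hand computation of Equations (1)–(4).

```python
# Worked example of the four classification metrics on a small binary set
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]

# Counts: TP = 3, FN = 1, FP = 2, TN = 4
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
ac = (tp + tn) / (tp + tn + fp + fn)           # accuracy
re = tp / (tp + fn)                            # recall
pr = tp / (tp + fp)                            # precision
f1 = 2 * pr * re / (pr + re)                   # harmonic mean of PR and RE

assert ac == accuracy_score(y_true, y_pred)    # 0.7
assert re == recall_score(y_true, y_pred)      # 0.75
assert pr == precision_score(y_true, y_pred)   # 0.6
assert abs(f1 - f1_score(y_true, y_pred)) < 1e-12
print(ac, re, pr, f1)
```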
This study utilized 249 data samples obtained from experiments on 51 connection specimens. For model development, 174 samples (~70%) were used for training (with fivefold cross-validation applied), and 75 samples (~30%) were used for testing. The evaluation metrics (AC, RE, PR, and F1) were computed for the training, cross-validation, and test sets based on different classification algorithms, as shown in Table 9 and Figure 8.
To demonstrate the prediction performance of these classification models on the dataset, Figure 9 presents a confusion matrix. The horizontal and vertical axes, respectively, represent the actual and predicted groups, forming a 51 × 51 grid to reflect all possible prediction outcomes. The color intensity of each grid indicates the accuracy of the predictions. Darker colors represent a closer alignment between the predicted and actual results. A higher number of dark-colored grids along the diagonal, with deeper colors, signifies greater accuracy in the model’s predictions.
As shown in Table 9 and Figure 8 and Figure 9, the BPNN classification model shows fewer and lighter-colored dark grids along the diagonal of the confusion matrix compared with the other algorithms, with only 8 dark grids among the 51 diagonal cells. Its AC, RE, PR, and F1 values—all of which were below 0.3—were also lower than those of the other algorithms in the dataset, as shown in Figure 8. This indicated that the BPNN model fit the data poorly in this experiment and was not suitable for classifying the experimental samples.
In contrast, RF and ExTrees have 30 and 28 dark grids, respectively, along the diagonal in their separate confusion matrices in Figure 9, indicating that they correctly classified over half (at least 26) of the 51 specimen types with an accuracy above 50%. This shows that RF and ExTrees were more effective in predicting classifications than the other algorithms were. As seen in Table 9 and Figure 8, RF and ExTrees performed similarly on the training set with high evaluation metrics, demonstrating that both algorithms learned effectively from the training data.
In the cross-validation set, the four evaluation metrics of ExTrees were 0.608, 0.608, 0.602, and 0.586, marginally higher than those of RF, indicating that ExTrees had slightly better generalization ability during the validation process. This can be attributed to the additional randomness introduced in ExTrees during tree construction, which reduces the risk of overfitting. However, in the test set, RF's four metrics were 0.613, 0.613, 0.684, and 0.602, all of which were higher than those of ExTrees (0.507, 0.507, 0.573, 0.509), indicating that RF performed better on unseen data and exhibited stronger generalization ability. As a result, among the eight algorithms, RF provided the best prediction performance for the mechanical properties of the thin-walled steel–ply–bamboo connections. The pseudo-code of the RF is shown in Algorithm 1.
Therefore, in the subsequent hyperparameter optimization stage, the optimal RF model was used for further refinement.
Algorithm 1: Pseudo-code of RF (with Simulated Annealing Optimization)
Input: x, y, initial_params
Output: best_params, accuracy
1.  import RandomForestClassifier from sklearn.ensemble
2.  import train_test_split, cross_val_score from sklearn.model_selection
3.  import dual_annealing from scipy.optimize
4.  import numpy as np
5.  # Generate simulation data
6.  Set random seed: np.random.seed(42)
7.  X = Generate random data with shape (1000, 7)
8.  y = Generate random binary labels with shape (1000,)
9.  # Partition Training and Test Sets
10. X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 42)
11. # Define Random Forest Classifier function
12. Define function random_forest_classifier(params):
13.    Extract n_estimators, max_depth, min_samples_split, min_samples_leaf from params (cast each to int)
14.    Initialize RandomForestClassifier with given parameters and random_state = 42
15.    Perform cross-validation and store the scores
16.    Return negative mean score from cross-validation
17. # Initial parameters for optimization
18. Set initial_params = [100, 10, 2, 1]
19. # Bounds for hyperparameters
20. Set bounds = [(10, 500), (1, 50), (2, 50), (1, 50)]
21. # Optimize hyperparameters using Simulated Annealing
22. result = dual_annealing(random_forest_classifier, bounds = bounds, maxiter = 100, x0 = initial_params, seed = 42)
23. # Extract best parameters from the optimization result
24. best_params = result.x
25. # Print best parameters found
26. Print "Best parameters found: ", best_params
27. # Train the model with best parameters and evaluate performance
28. Initialize best_clf = RandomForestClassifier with best_params and random_state = 42
29. Fit best_clf to X_train and y_train
30. Calculate accuracy on the test set: accuracy = best_clf.score(X_test, y_test)
31. # Print accuracy on the test set
32. Print "Accuracy on test set: ", accuracy

9. Hyperparameter Optimization Experiment

This study investigates five hyperparameter optimization algorithms: Grid Search (GS), Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Simulated Annealing (SA), and Bayesian Optimization (BO). These algorithms efficiently identify hyperparameters that best match the optimal model, each offering unique advantages in data processing and optimization. Grid Search systematically explores the parameter space, ensuring comprehensive coverage and stable optimization results, which makes it suitable for small-scale parameter searches. Genetic Algorithm mimics biological evolution, providing robust global search capabilities for complex nonlinear problems. Particle Swarm Optimization, based on group collaboration, offers fast convergence, making it ideal for rapid optimization in high-dimensional parameter spaces. Simulated Annealing, inspired by the physical annealing process, effectively escapes local optima, making it suitable for complex optimization scenarios. Bayesian Optimization leverages probabilistic models to predict parameter performance, significantly reducing computational costs, particularly for expensive evaluation functions. The integrated application of these algorithms provides efficient and reliable solutions for hyperparameter optimization in machine learning models for civil engineering, enhancing the precision and optimization of sustainable structural design.

9.1. Hyperparameter Optimization Algorithm

Grid Search (GS) is a hyperparameter optimization technique that exhaustively searches a predefined set of hyperparameters to find the optimal combination for a model. It trains the model with each possible combination and evaluates the performance, often using cross-validation, while selecting the best hyperparameters based on the evaluation metric.
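A minimal illustration of Grid Search with scikit-learn's `GridSearchCV`; the grid and data below are illustrative placeholders, not the study's actual search space.

```python
# Grid Search over two RF hyperparameters with five-fold cross-validation
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 7))
y = rng.integers(0, 2, size=150)

param_grid = {"n_estimators": [50, 100], "max_depth": [5, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5)   # evaluates all 4 combinations
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```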
The Genetic Algorithm (GA) is an optimization technique inspired by natural selection, where potential solutions evolve over generations. Solutions are encoded as chromosomes, which undergo selection, crossover, and mutation to explore the solution space [51]. The fittest individuals are selected based on a fitness function, and the process repeats until an optimal or near-optimal solution is found.
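The selection–crossover–mutation loop can be sketched in a few lines; the fitness function below is a hypothetical surrogate with a known optimum, standing in for an expensive cross-validation score.

```python
# Bare-bones genetic algorithm over two integer RF-style hyperparameters
import random

random.seed(42)
BOUNDS = [(10, 200), (1, 50)]            # (n_estimators, max_depth) ranges

def fitness(ind):
    # Surrogate objective: peak at n_estimators = 100, max_depth = 10
    return -abs(ind[0] - 100) - 5 * abs(ind[1] - 10)

def mutate(ind):
    # Perturb each gene, clipped to its bounds
    return [min(hi, max(lo, g + random.randint(-10, 10)))
            for g, (lo, hi) in zip(ind, BOUNDS)]

def crossover(a, b):
    # Uniform crossover: each gene taken from either parent
    return [random.choice(pair) for pair in zip(a, b)]

pop = [[random.randint(lo, hi) for lo, hi in BOUNDS] for _ in range(20)]
for _ in range(50):                      # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                   # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

best = max(pop, key=fitness)
print(best, fitness(best))
```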
Particle Swarm Optimization (PSO) is a population-based optimization technique inspired by the behavior of birds flocking or fish schooling. Particles move through the solution space, adjusting their positions based on their own experience and that of their neighbors. By balancing exploration and exploitation, particles iteratively update their velocity and position until they converge on the optimal solution or meet a stopping criterion.
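A minimal PSO loop with the standard inertia/cognitive/social velocity update; again, the objective is a stand-in surrogate rather than a real model evaluation.

```python
# Particle swarm over two continuous hyperparameter axes
import numpy as np

rng = np.random.default_rng(0)
lo, hi = np.array([10.0, 1.0]), np.array([200.0, 50.0])

def objective(p):                     # surrogate: minimum at (100, 10)
    return (p[0] - 100) ** 2 + ((p[1] - 10) * 4) ** 2

n, w, c1, c2 = 15, 0.7, 1.5, 1.5      # swarm size, inertia, cognitive, social
pos = rng.uniform(lo, hi, size=(n, 2))
vel = np.zeros((n, 2))
pbest = pos.copy()                    # each particle's best-known position
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest.round(2))                 # should approach (100, 10)
```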
Simulated Annealing (SA) is a probabilistic optimization method inspired by metallurgical annealing, and it is designed to find approximate global optima in large search spaces. By allowing both downhill and occasional uphill moves, SA avoids local minima. As the “temperature” decreases over time, the algorithm focuses on fine-tuning, mimicking the slow cooling process in metals to converge toward an optimal solution [52,53].
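The accept-uphill-with-probability exp(−Δ/T) rule and the cooling schedule can be sketched as follows; the cost function is a surrogate with a known minimum, not an actual cross-validation score.

```python
# Compact simulated-annealing loop over two hyperparameter axes
import math
import random

random.seed(1)
BOUNDS = [(10, 200), (1, 50)]

def cost(p):                               # surrogate: minimum at (100, 10)
    return abs(p[0] - 100) + 5 * abs(p[1] - 10)

current = [random.randint(lo, hi) for lo, hi in BOUNDS]
best, T = current[:], 100.0
for step in range(2000):
    candidate = [min(hi, max(lo, g + random.randint(-5, 5)))
                 for g, (lo, hi) in zip(current, BOUNDS)]
    delta = cost(candidate) - cost(current)
    if delta < 0 or random.random() < math.exp(-delta / T):
        current = candidate                # accept (possibly uphill) move
    if cost(current) < cost(best):
        best = current[:]                  # track the best state seen so far
    T *= 0.995                             # geometric cooling schedule

print(best, cost(best))
```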
Bayesian Optimization (BO) is a sequential optimization technique that uses a probabilistic model, often a Gaussian Process, to efficiently find the global optimum with minimal evaluations. It balances exploration and exploitation by selecting the next point where the model predicts the most improvement. This iterative process allows BO to explore complex, expensive-to-evaluate search spaces effectively.
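A compact sketch of the BO loop, built here from scikit-learn's `GaussianProcessRegressor` with an expected-improvement acquisition over a candidate grid; in practice, a dedicated library such as scikit-optimize would typically be used, and the 1-D objective is a cheap stand-in for an expensive cross-validation score.

```python
# Bayesian optimization: GP surrogate + expected-improvement acquisition
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                          # pretend this is costly to evaluate
    return -(x - 0.6) ** 2                 # maximum at x = 0.6

candidates = np.linspace(0, 1, 201).reshape(-1, 1)
X_obs = np.array([[0.0], [0.5], [1.0]])    # initial design points
y_obs = objective(X_obs).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                              alpha=1e-6, normalize_y=True)
for _ in range(10):                        # 10 sequential evaluations
    gp.fit(X_obs, y_obs)
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    improvement = mu - y_obs.max()
    z = improvement / sigma
    ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = candidates[ei.argmax()]       # point with most predicted gain
    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.append(y_obs, objective(x_next[0]))

print(X_obs[y_obs.argmax()][0])            # should be near 0.6
```

Each iteration refits the surrogate and evaluates the true objective only once, which is what makes BO economical when each evaluation is a full cross-validated model fit.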

9.2. Performance Evaluation of the Hyperparameter Optimization Algorithms

The metrics employed for evaluating hyperparameter optimization mirrored those utilized in the machine learning model assessment, encompassing AC, RE, PR, and F1. The performance on the dataset is detailed in Table 10.
As shown in Table 10 and Figure 10, Bayesian Optimization (BO) performed exceptionally well on the cross-validation set, achieving the highest accuracy (0.599), recall (0.651), and F1 score (0.553), with a precision (0.572) that also exceeded that of Grid Search (GS, 0.562). Particle Swarm Optimization (PSO) recorded slightly higher accuracy (0.572) and recall (0.566) than Simulated Annealing (SA) on the cross-validation set, but its F1 score (0.539) was lower than those of BO and GS, indicating BO's superior overall performance.
On the test set, Bayesian Optimization (BO) demonstrated outstanding performance, achieving the highest accuracy (0.717), precision (0.826), and F1 score (0.735), despite a slightly lower recall (0.629) compared to Grid Search (GS) at 0.724. Grid Search recorded a precision of 0.684 but a lower F1 score (0.602) than BO. Simulated Annealing (SA), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO) performed less effectively, with accuracy of 0.627, 0.547, and 0.555, and F1 scores of 0.601, 0.548, and 0.545, respectively, indicating weaker generalization capabilities compared to BO. Therefore, BO is the optimal hyperparameter optimization method for Random Forest (RF) in predicting TWS structural classification, significantly reducing computational costs while maintaining high predictive accuracy (71.7% on the test set).
The superior performance of BO can be attributed to its sample-efficient search strategy. Unlike GS (which exhaustively evaluates all combinations, often impractical for many parameters) or population-based methods such as GA and PSO (which may require many iterations and risk becoming trapped in suboptimal regions), BO uses a probabilistic model to explore the hyperparameter space intelligently. Consequently, BO found near-optimal parameters within relatively few iterations, offering an excellent trade-off between accuracy and computational time. In our case, BO achieved high accuracy with around 50 evaluations, whereas the other methods required substantially more iterations or did not reach the same level of performance within the iteration limit. These advantages explain why BO outperformed the other algorithms and justify its selection as the preferred hyperparameter tuning method. The pseudo-code of the BO-optimized RF model is shown in Algorithm 2. Next, the BO-optimized RF is used to predict classifications for the entire dataset.
Table 11 presents part of the prediction classification results, while the complete set is provided in Appendix A.1. The analysis revealed that, among the 75 samples, 54 were predicted correctly on all factors, while 12 had only a single incorrectly predicted factor. The exact accuracy is therefore 72% (54/75), and the approximate accuracy is 88% (66/75), which aligns with expectations. These data indicate that the RF model demonstrated high prediction accuracy and strong fitting capability after BO hyperparameter optimization.
This result not only helped build high-performance machine learning models but also significantly enhanced model scores through resource-efficient hyperparameter optimization, ultimately leading to the development of an optimal clustering model for connections within TWS structures with improved predictive accuracy and sustainability-driven design insights.
Algorithm 2: Pseudo-code of RF (with Bayesian Optimization of Hyperparameters).
Input: x, y, bounds, initial_guess
Output: best_params, best_score
1. import numpy as np
2. from sklearn.ensemble import RandomForestClassifier
3. from sklearn.model_selection import train_test_split, cross_val_score
4. from skopt import gp_minimize  # Gaussian-Process-based Bayesian optimizer
5. # Data shuffling and partitioning ratio
6. shuffle_data = True
7. train_ratio = 0.7
8. cross_validation = 5
9. # Data loading (placeholder; replace with the experimental dataset)
10. X, y = np.array([[1, 2, 3, 4, 5, 6, 7], [2, 3, 4, 5, 6, 7, 8]]), np.array([0, 1])
11. if shuffle_data:
12.    indices = np.arange(len(X))
13.    np.random.shuffle(indices)
14.    X, y = X[indices], y[indices]
15. X_train, X_test, y_train, y_test = train_test_split(X, y, train_size = train_ratio)
16. # Random forest classifier
17. rf = RandomForestClassifier()
18. # Cross-validation score function (negated for minimization)
19. def score(params):
20.    n_estimators, max_depth = int(params[0]), int(params[1])
21.    rf.set_params(n_estimators = n_estimators, max_depth = max_depth)
22.    return -np.mean(cross_val_score(rf, X_train, y_train, cv = cross_validation))
23. # Bayesian optimization of the hyperparameters
24. bounds = [(10, 200), (1, 50)]
25. initial_guess = [100, 10]
26. result = gp_minimize(score, bounds, x0 = initial_guess, n_calls = 50)
27. best_params = result.x
28. best_score = -result.fun
29. print("Optimal hyperparameters: ", best_params)
30. print("Best cross-validation score: ", best_score)

10. Discussion

Based on Figure 11, which illustrates the feature importance derived from machine learning, the ranking of key features for material structural component selection is as follows: Ke (16.00%) emerges as the most critical factor, followed by Fm/N (15.60%), δu/mm (15.20%), Fu/N (14.60%), δm/mm (14.50%), δy/mm (12.50%), and μ (11.40%). This ranking suggests that Ke, representing a measure of elastic stiffness or structural resilience, plays the most significant role in determining material selection, likely due to its direct influence on the component’s ability to withstand deformation under load. Following Ke, Fm/N and δu/mm indicate their importance in assessing load-bearing capacity and ductility, respectively, which are critical for tensile strength and structural integrity in real-world applications.
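For context, feature importances of this kind are read directly from a fitted Random Forest. The snippet below uses synthetic data with the paper's seven parameter names, so the resulting percentages are illustrative only and will not reproduce the ranking in Figure 11.

```python
# Extracting and ranking RF feature importances (synthetic data)
import numpy as np
from sklearn.ensemble import RandomForestClassifier

features = ["Ke", "dy", "dm", "Fm", "du", "Fu", "mu"]
rng = np.random.default_rng(0)
X = rng.normal(size=(249, 7))
# Make the label depend mostly on the first column so "Ke" ranks high
y = (X[:, 0] + 0.3 * rng.normal(size=249) > 0).astype(int)

rf = RandomForestClassifier(random_state=0).fit(X, y)
for name, imp in sorted(zip(features, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.1%}")        # importances sum to 100%
```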
These feature importance rankings highlight key mechanical properties that govern the performance of materials in civil engineering contexts. For instance, Ke is closely tied to a material’s resistance to elastic deformation, Fm/N reflects its tensile or compressive strength, and μ, the ductility coefficient, indicates its ability to absorb energy before failure. By focusing on these properties—particularly elastic stiffness, load capacity, and ductility—engineers can enhance the durability and lifespan of structural components. This insight underscores the need to prioritize the optimization of these mechanical characteristics during material selection and design to improve the sustainability and reliability of structures.
This study advances the theoretical framework of structural engineering by integrating factor analysis and machine learning to predict the mechanical performance of thin-walled steel–ply–bamboo (TWS) composite structures. Factor analysis effectively reduced the dimensionality of complex mechanical datasets, identifying critical parameters such as elastic stiffness (Ke), maximum force (Fm), and ductility ratio (μ) that govern the behaviors of connections. The superior performance of the Random Forest (RF) model, achieving 61% test set accuracy and improving to 67% with Bayesian Optimization, underscores the efficacy of ensemble learning and hyperparameter optimization in handling nonlinear, high-dimensional engineering data. This work extends prior research on data-driven methods [23,24,25,26,27,28,29], contributing to the theoretical understanding of sustainable composite structures, demonstrating the potential of machine learning to uncover intricate material–structure relationships and, thus, enriching predictive modeling in civil engineering.
The practical implications of this study are significant for sustainable construction. Leveraging bamboo’s renewable and low-carbon properties, the research promotes eco-friendly alternatives to traditional materials, reducing the environmental footprint of steel-based structures. The identification of optimal connection types, such as SS-DL-4.2STS30-MT15, provides engineers with data-driven guidance for designing robust shear walls for applications such as housing and bridges. The RF model’s high predictive accuracy, enhanced by Bayesian Optimization, streamlines structural design by minimizing reliance on costly experimental testing. Furthermore, the methodology supports the integration of bamboo into building standards such as AISI S400-15 [32], encouraging policy shifts toward sustainable materials. This approach not only enhances construction efficiency but also aligns with global sustainability goals, offering a scalable solution for green infrastructure development.

11. Conclusions

(1)
This study proposed a new theoretical framework for connection design based on machine learning (ML) technology, addressing the shortcomings of structural type prediction in matching the performance of thin-walled steel–ply–bamboo structures. The prediction results were explained and analyzed with emphasis on feature importance, as shown in Figure 11. In combination with hyperparameter optimization, a second comparison was then carried out to determine the main influencing parameters and optimize the model to obtain the optimal selection.
(2)
The design stage provided a feasible method for the mechanical property evaluation of thin-walled steel–ply–bamboo composite walls based on factor analysis and data analysis. The performance of TWS composite structure connections was most affected by Ke, Fm, and Fu, and least by μ. According to the comprehensive scoring model of the factor analysis, the optimal structure was SS-DL-4.2STS30-MT15 (the reader is referred to Section 7.3 (Analysis Results) for the specific designation).
(3)
A dataset of 249 data samples reflecting the mechanical performance of TWS composite structure connections was compiled from tests on 51 distinct connection types (designed using an orthogonal experimental approach). Statistical analysis and machine learning methods were employed to investigate how key parameters (such as initial stiffness, yield load and its corresponding displacement, peak load and its corresponding displacement, failure load, and the ductility coefficient) affect the connection performance. The applicability of eight machine learning classification models (XGBoost, SVM, KNN, RF, NB, BPNN, ExTrees, and LR) was compared. The results showed that the prediction accuracy of BPNN was relatively poor, while RF and ExTrees models achieved much higher predictive performance, with the RF model slightly outperforming ExTrees on the test data.
(4)
Hyperparameter optimization is a crucial process for ensuring the optimal performance of machine learning algorithms. The optimization effects of five hyperparameter optimization algorithms—namely, GS, GA, PSO, SA, and BO—were compared. After comparison, the Bayesian algorithm was found to be superior in terms of accuracy, recall, precision, and F1 score within 50 iterations.
(5)
The integration of bamboo with thin-walled steel in connection design provides a clear sustainability benefit. Bamboo’s rapid renewability and carbon sequestration capacity allow these “eco-designed” composite connections to significantly reduce the overall environmental footprint compared with conventional all-steel connections. Leveraging machine learning to optimize the connection configurations, this study demonstrated the utility of a data-driven approach to green structural design, identifying high-performance connection types that make use of sustainable materials and potentially reducing the need for resource-intensive trial-and-error testing [54].
(6)
Limitations and future work: The classification accuracy of the best model (RF with BO) reached about 67%, which—while the highest in our study—leaves room for improvement. The relatively small dataset (249 samples) and the complexity of distinguishing 51 connection types may have constrained the achievable accuracy. In future research, larger datasets and additional input features (e.g., geometric details or long-term performance factors) could be utilized to improve model generalization. Moreover, exploring advanced or hybrid models (such as deep learning approaches) might further enhance the prediction performance. Finally, beyond static tests, future studies should investigate the dynamic behaviors and long-term durability of TWS–ply–bamboo connections, in addition to conducting life-cycle assessments to quantify the environmental advantages of these eco-designed structures.

Author Contributions

W.X. was responsible for funding acquisition, supervision, project administration, and methodology. Y.G. (Yujie Gao) contributed to software development, formal analysis, methodology, validation, and writing the original draft. Z.Z. handled software development, validation, writing the original draft, and writing, reviewing, and editing the manuscript. Y.J. was in charge of visualization and data curation. J.Z. contributed to writing the original draft and formal analysis. Y.C. conducted investigations. Q.W. managed data curation and resources. T.L. was responsible for data curation and resources. W.J. and Y.G. (Yaoyuan Gao) participated in the investigation process. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Nanjing Tech University. The projects supporting this work include the Key Teaching Reform Project “Reform and Practice of AI-Enhanced Teaching Models for Probability and Statistics” (No. 20250009), the Party Building and Ideological and Political Education Research Project “Research on Working Path to Enhance the Vitality of One-Stop Student Community under the Framework of ‘Grand Ideological and Political Education’” (No. SZ20250340), and the 2024 Undergraduate Curriculum Ideological and Political Demonstration Course Construction Project (No. 20240004). The APC was funded by Nanjing Tech University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data supporting the findings of this study are contained within the article and its Appendix A.

Acknowledgments

The authors would like to express their sincere appreciation to colleagues and students at Nanjing Tech University who contributed to the discussions and provided support throughout the research process. All individuals acknowledged have given their consent to be included in this section.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Supplementary Data

Data that are supplementary to this article are shown in Table A1, Table A2 and Table A3.

Appendix A.1

Table A1. The optimal model’s prediction classification results.
Predicted Results | Test Series
CD-3.5S15-M15 | CD-3.5S15-M15
SS-UF-4.2STS10-CL15 | SS-UF-4.2STS10-CL15
SS-UF-4.2STS15-CL15 | CD-4.2S30-M15
CU-4.2S30-M15 | CU-4.2S30-M15
SS-UF-4.2STS30-CL15 | SS-UF-4.2STS30-CL15
CU-4.2S15-M15 | CU-4.2S15-M15
SS-DL-4.2STS10-MT15 | SS-DL-4.2STS10-MT15
CD-4.2S10-C15 | CD-4.2S10-C15
CD-4.2S30-C15 | CD-4.2S30-C15
SS-DL-4.2STS10-CL15 | SS-UF-4.2STS10-CL15
SS-DL-4.2STS15-CL15 | SS-DL-4.2STS15-CL15
SS-UF-3.5STS15-CL15 | SS-DL-4.2STS15-MT2.5
SS-UF-4.2STS15-CL15 | CD-4.2S30-M2.5
SS-DL-4.2STS10-CL15 | SS-DL-4.2STS10-CL15
SS-DL-3.5PTS15-CL15 | SS-UF-4.2STS30-CL15
SS-UF-4.2STS10-CL15 | CD-4.2S15-M2.5
CU-4.2S10-C15 | CD-4.2S10-M15
SS-UF-4.2STS30-CL15 | SS-UF-4.2STS30-CL15
SS-DL-3.5STS15-CL15 | SS-DL-3.5STS15-CL15
CD-3.5S15-C15 | CD-3.5P15-M15
CD-4.2S10-C15 | CD-4.2S10-C15
SS-UF-4.2STS15-MT30 | SS-UF-4.2STS15-MT30
CU-4.2S15-C15 | CD-4.2S30-C2.5
SS-DL-3.5PTS15-MT15 | SS-DL-3.5PTS15-MT15
SS-DL-4.2STS10-CL15 | SS-DL-4.2STS10-CL15
CD-4.2S15-C15 | CD-4.2S15-C15
SS-UF-4.2STS15-CL15 | SS-UF-4.2STS15-CL15
CD-4.2S15-M2.5 | CD-4.2S15-M2.5
CU-3.5S15-C15 | CU-3.5S15-C15
CU-4.2S10-M15 | CU-4.2S10-M15
CU-4.2S30-C15 | CD-4.2S30-C30
SS-UF-3.5PTS15-CL15 | SS-UF-3.5PTS15-CL15
SS-UF-4.2STS30-MT15 | SS-UF-4.2STS30-MT15
SS-DL-3.5PTS15-CL15 | SS-DL-3.5PTS15-CL15
CD-4.2S30-C2.5 | CD-4.2S30-C2.5
SS-DL-4.2STS30-CL15 | SS-DL-4.2STS30-CL15
SS-DL-4.2STS15-CL15 | SS-DL-4.2STS15-MT2.5
SS-DL-4.2STS10-CL15 | SS-DL-4.2STS10-CL15
SS-DL-4.2STS15-MT30 | SS-DL-4.2STS15-MT30
SS-UF-4.2STS15-CL15 | CD-4.2S30-C15
CD-4.2S15-M2.5 | CU-4.2S15-M15
CU-4.2S40-M15 | CU-4.2S15-M2.5
CD-4.2S30-M15 | SS-UF-4.2STS30-CL15
CU-3.5P15-C15 | CD-3.5P15-M15
SS-UF-4.2STS15-MT2.5 | SS-UF-4.2STS15-MT2.5
SS-UF-3.5PTS15-CL15 | SS-UF-3.5PTS15-CL15
SS-DL-3.5STS15-CL15 | SS-DL-4.2STS15-MT15
CD-4.2S15-M2.5 | CU-3.5P15-M15
CD-4.2S15-M15 | CD-4.2S15-M15
CD-4.2S30-C15 | CU-3.5P15-C15
SS-UF-4.2STS10-CL15 | CD-4.2S15-M15
SS-UF-4.2STS15-MT2.5 | SS-UF-4.2STS15-MT2.5
SS-DL-3.5STS15-MT15 | SS-DL-3.5STS15-MT15
SS-DL-4.2STS30-CL15 | SS-DL-4.2STS30-CL15
CD-4.2S30-C30 | CD-4.2S30-C30
SS-UF-4.2STS10-MT15 | SS-UF-4.2STS10-MT15
CU-4.2S40-M15 | CU-4.2S40-M15
CU-3.5P15-M15 | CU-3.5P15-M15
CU-3.5P15-C15 | CU-3.5P15-C15
CD-4.2S30-C2.5 | CU-4.2S15-C15
CD-4.2S10-C15 | CU-4.2S15-M2.5
CU-3.5S15-C15 | CU-3.5P15-C15
SS-DL-4.2STS15-MT2.5 | SS-DL-4.2STS15-MT30
CU-3.5P15-C15 | CD-3.5P15-M15
SS-DL-4.2STS15-CL15 | SS-DL-4.2STS15-CL15
SS-DL-4.2STS30-CL15 | SS-DL-4.2STS30-CL15
CD-4.2S15-C15 | CD-4.2S15-C15
SS-UF-4.2STS15-MT15 | SS-UF-4.2STS15-MT15
SS-DL-3.5STS15-CL15 | SS-DL-3.5STS15-CL15
CU-4.2S30-C15 | CU-4.2S30-C15
CD-3.5S15-C15 | CD-3.5S15-C15
SS-UF-3.5STS15-CL15 | SS-UF-3.5STS15-CL15
SS-UF-4.2STS15-CL15 | SS-UF-4.2STS15-CL15
SS-DL-4.2STS10-MT15 | SS-DL-4.2STS10-MT15
SS-UF-4.2STS10-CL15 | SS-UF-4.2STS10-CL15

Appendix A.2

Table A2. Mechanical performance data of the thin-walled steel–ply–bamboo connections.
Test Series | No. | Ke | δy/mm | δm/mm | Fm/N | δu/mm | Fu/N | μ
CD-3.5P15-M15-1 | 1 | 590.94 | 3.14 | 5.13 | 1402.00 | 6.27 | 1191.70 | 2.00
CD-3.5P15-M15-2 | 2 | 706.42 | 2.83 | 4.17 | 1321.00 | 4.55 | 1122.85 | 1.61
CD-3.5P15-M15-3 | 3 | 447.93 | 3.63 | 4.89 | 1327.00 | 4.89 | 1127.95 | 1.35
CD-3.5P15-M15-AVG | AVG | 581.76 | 3.20 | 4.73 | 1350.00 | 5.24 | 1147.50 | 1.65
CD-3.5S15-M15-1 | 1 | 205.41 | 4.24 | 5.85 | 1195.00 | 6.34 | 1015.75 | 1.50
CD-3.5S15-M15-2 | 2 | 217.53 | 4.33 | 5.90 | 1253.00 | 6.42 | 1065.05 | 1.48
CD-3.5S15-M15-3 | 3 | 360.38 | 4.38 | 5.50 | 1228.00 | 6.40 | 1043.80 | 1.46
CD-3.5S15-M15-AVG | AVG | 261.11 | 4.32 | 5.75 | 1225.33 | 6.39 | 1041.53 | 1.48
CD-4.2S10-M15-1 | 1 | 756.77 | 2.09 | 3.99 | 1523.00 | 4.61 | 1294.55 | 2.20
CD-4.2S10-M15-2 | 2 | 1194.73 | 4.41 | 4.41 | 1359.00 | 5.56 | 1155.15 | 1.26
CD-4.2S10-M15-3 | 3 | 963.14 | 2.91 | 2.80 | 1411.00 | 3.78 | 1199.35 | 1.30
CD-4.2S10-M15-AVG | AVG | 971.55 | 3.14 | 3.73 | 1431.00 | 4.65 | 1216.35 | 1.59
CD-4.2S15-M2.5-1 | 1 | 1102.48 | 1.61 | 5.56 | 1447.00 | 5.77 | 1229.95 | 3.59
CD-4.2S15-M2.5-2 | 2 | 638.97 | 2.95 | 4.35 | 1492.00 | 5.52 | 1268.20 | 1.87
CD-4.2S15-M2.5-3 | 3 | 1119.55 | 2.09 | 3.20 | 1503.00 | 5.52 | 1277.55 | 2.65
CD-4.2S15-M2.5-AVG | AVG | 953.67 | 2.21 | 4.37 | 1480.67 | 5.60 | 1258.57 | 2.70
CD-4.2S15-M15-1 | 1 | 371.23 | 2.35 | 3.69 | 1484.00 | 4.90 | 1261.40 | 2.09
CD-4.2S15-M15-2 | 2 | 529.78 | 3.50 | 4.45 | 1592.00 | 6.28 | 1353.20 | 1.79
CD-4.2S15-M15-3 | 3 | 373.48 | 3.35 | 4.71 | 1563.00 | 5.95 | 1328.55 | 1.78
CD-4.2S15-M15-AVG | AVG | 424.83 | 3.07 | 4.28 | 1546.33 | 5.71 | 1314.38 | 1.89
CD-4.2S30-M2.5-1 | 1 | 484.00 | 3.28 | 4.57 | 1838.00 | 5.65 | 1562.30 | 1.72
CD-4.2S30-M2.5-2 | 2 | 735.17 | 2.85 | 4.71 | 1735.00 | 7.42 | 1474.75 | 2.60
CD-4.2S30-M2.5-3 | 3 | 695.46 | 2.81 | 4.68 | 1723.00 | 5.66 | 1464.55 | 2.01
CD-4.2S30-M2.5-AVG | AVG | 638.21 | 2.98 | 4.65 | 1765.33 | 6.24 | 1500.53 | 2.11
CD-4.2S30-M15-1 | 1 | 355.63 | 3.91 | 6.05 | 1699.00 | 6.88 | 1444.15 | 1.76
CD-4.2S30-M15-2 | 2 | 362.75 | 4.20 | 5.60 | 1646.00 | 5.59 | 1399.10 | 1.33
CD-4.2S30-M15-3 | 3 | 498.47 | 2.73 | 5.46 | 1706.00 | 6.93 | 1450.10 | 2.54
CD-4.2S30-M15-AVG | AVG | 405.62 | 3.61 | 5.70 | 1683.67 | 6.47 | 1431.12 | 1.88
CD-4.2S30-M30-1 | 1 | 502.79 | 3.88 | 6.10 | 1668.00 | 6.78 | 1417.80 | 1.75
CD-4.2S30-M30-2 | 2 | 433.64 | 3.48 | 5.31 | 1521.00 | 6.45 | 1292.85 | 1.86
CD-4.2S30-M30-3 | 3 | 347.84 | 4.11 | 6.08 | 1627.00 | 7.31 | 1382.95 | 1.78
CD-4.2S30-M30-AVG | AVG | 428.09 | 3.82 | 5.83 | 1605.33 | 6.85 | 1364.53 | 1.79
CD-3.5P15-C15-1 | 1 | 224.26 | 3.84 | 4.13 | 1026.00 | 8.28 | 872.10 | 2.16
CD-3.5P15-C15-2 | 2 | 416.00 | 4.03 | 5.44 | 1092.00 | 8.63 | 928.20 | 2.14
CD-3.5P15-C15-3 | 3 | 373.51 | 4.20 | 4.97 | 1128.00 | 8.30 | 958.80 | 1.98
CD-3.5P15-C15-4 | 4 | 394.43 | 4.19 | 4.04 | 1134.00 | 7.35 | 963.90 | 1.75
CD-3.5P15-C15-5 | 5 | 583.26 | 4.76 | 6.21 | 1254.00 | 9.08 | 1065.90 | 1.91
CD-3.5P15-C15-AVG | AVG | 398.29 | 4.20 | 4.96 | 1126.80 | 8.33 | 957.78 | 1.99
CD-3.5S15-C15-1 | 1 | 401.55 | 4.20 | 4.96 | 1295.00 | 6.83 | 1100.75 | 1.63
CD-3.5S15-C15-2 | 2 | 379.85 | 3.20 | 4.89 | 1263.00 | 6.53 | 1073.55 | 2.04
CD-3.5S15-C15-3 | 3 | 411.93 | 4.04 | 4.99 | 1278.00 | 7.30 | 1086.30 | 1.81
CD-3.5S15-C15-4 | 4 | 361.19 | 4.53 | 6.62 | 1219.00 | 8.59 | 1036.15 | 1.90
CD-3.5S15-C15-5 | 5 | 503.96 | 4.93 | 4.02 | 1241.00 | 6.31 | 1054.85 | 1.28
CD-3.5S15-C15-AVG | AVG | 411.69 | 4.18 | 5.10 | 1259.20 | 7.11 | 1070.32 | 1.73
CD-4.2S10-C15-1 | 1 | 1807.50 | 4.41 | 4.96 | 1446.00 | 7.01 | 1229.10 | 1.59
CD-4.2S10-C15-2 | 2 | 2074.07 | 3.31 | 3.95 | 1400.00 | 5.91 | 1190.00 | 1.79
CD-4.2S10-C15-3 | 3 | 1720.35 | 3.73 | 3.28 | 1458.00 | 7.19 | 1239.30 | 1.93
CD-4.2S10-C15-4 | 4 | 1397.65 | 3.80 | 0.47 | 1485.00 | 8.76 | 1262.25 | 2.31
CD-4.2S10-C15-5 | 5 | 2358.40 | 2.95 | 3.99 | 1474.00 | 6.51 | 1252.90 | 2.21
CD-4.2S10-C15-AVG | AVG | 1871.60 | 3.64 | 3.33 | 1452.60 | 7.08 | 1234.71 | 1.96
CD-4.2S15-C15-1 | 1 | 2454.40 | 2.42 | 4.96 | 1534.00 | 7.44 | 1303.90 | 3.07
CD-4.2S15-C15-2 | 2 | 541.79 | 2.77 | 4.42 | 1517.00 | 6.34 | 1289.45 | 2.29
CD-4.2S15-C15-3 | 3 | 1065.74 | 2.94 | 5.00 | 1532.00 | 7.75 | 1302.20 | 2.64
CD-4.2S15-C15-4 | 4 | 410.75 | 2.93 | 4.56 | 1376.00 | 6.18 | 1169.60 | 2.11
CD-4.2S15-C15-5 | 5 | 592.45 | 2.87 | 4.09 | 1570.00 | 7.71 | 1334.50 | 2.69
CD-4.2S15-C15-AVG | AVG | 1013.02 | 2.79 | 4.61 | 1505.80 | 7.08 | 1279.93 | 2.56
CD-4.2S30-C2.5-1 | 1 | 892.50 | 3.40 | 4.96 | 1785.00 | 6.78 | 1517.25 | 1.99
CD-4.2S30-C2.5-2 | 2 | 481.90 | 3.21 | 5.17 | 1771.00 | 5.17 | 1505.35 | 1.61
CD-4.2S30-C2.5-3 | 3 | 894.65 | 2.46 | 5.00 | 1881.00 | 6.57 | 1598.85 | 2.67
CD-4.2S30-C2.5-4 | 4 | 1325.52 | 2.06 | 4.56 | 1922.00 | 8.23 | 1633.70 | 4.00
CD-4.2S30-C2.5-5 | 5 | 759.61 | 1.72 | 4.09 | 1937.00 | 4.64 | 1646.45 | 2.70
CD-4.2S30-C2.5-AVG | AVG | 870.84 | 2.57 | 4.76 | 1859.20 | 6.28 | 1580.32 | 2.59
CD-4.2S30-C15-1 | 1 | 731.11 | 3.40 | 4.96 | 1645.00 | 7.41 | 1398.25 | 2.18
CD-4.2S30-C15-2 | 2 | 915.36 | 3.12 | 7.46 | 1579.00 | 9.97 | 1342.15 | 3.20
CD-4.2S30-C15-3 | 3 | 788.16 | 3.41 | 5.33 | 1598.00 | 8.73 | 1358.30 | 2.56
CD-4.2S30-C15-4 | 4 | 614.05 | 3.14 | 4.95 | 1704.00 | 9.05 | 1448.40 | 2.88
CD-4.2S30-C15-5 | 5 | 766.75 | 3.36 | 6.41 | 1591.00 | 9.08 | 1352.35 | 2.70
CD-4.2S30-C15-AVG | AVG | 763.09 | 3.29 | 5.82 | 1623.40 | 8.85 | 1379.89 | 2.70
CD-4.2S30-C30-1 | 1 | 441.29 | 5.23 | 7.49 | 1710.00 | 11.62 | 1453.50 | 2.22
CD-4.2S30-C30-2 | 2 | 683.56 | 4.43 | 7.46 | 1726.00 | 9.23 | 1467.10 | 2.08
CD-4.2S30-C30-3 | 3 | 505.62 | 4.23 | 7.48 | 1709.00 | 9.96 | 1452.65 | 2.35
CD-4.2S30-C30-4 | 4 | 272.91 | 5.44 | 7.47 | 1385.00 | 9.89 | 1177.25 | 1.82
CD-4.2S30-C30-5 | 5 | 517.70 | 6.21 | 7.46 | 1579.00 | 13.97 | 1342.15 | 2.25
CD-4.2S30-C30-AVG | AVG | 484.22 | 5.11 | 7.47 | 1621.80 | 10.93 | 1378.53 | 2.15
CU-3.5P15-M15-1 | 1 | 703.47 | 2.69 | 3.57 | 1421.00 | 4.80 | 1207.85 | 1.79
CU-3.5P15-M15-2 | 2 | 578.67 | 2.70 | 2.91 | 1519.00 | 3.98 | 1291.15 | 1.48
CU-3.5P15-M15-3 | 3 | 870.21 | 1.36 | 3.80 | 1351.00 | 4.96 | 1148.35 | 3.63
CU-3.5P15-M15-AVG | AVG | 717.45 | 2.25 | 3.43 | 1430.33 | 4.58 | 1215.78 | 2.30
CU-3.5S15-M15-1 | 1 | 881.26 | 4.62 | 6.26 | 1258.00 | 7.68 | 1069.30 | 1.66
CU-3.5S15-M15-2 | 2 | 328.64 | 4.66 | 7.12 | 1483.00 | 8.50 | 1260.55 | 1.82
CU-3.5S15-M15-3 | 3 | 364.15 | 4.38 | 6.95 | 1341.00 | 9.39 | 1139.85 | 2.14
CU-3.5S15-M15-AVG | AVG | 524.69 | 4.55 | 6.78 | 1360.67 | 8.52 | 1156.57 | 1.88
CU-4.2S10-M15-1 | 1 | 846.11 | 1.54 | 2.71 | 1413.00 | 2.71 | 1201.05 | 1.76
CU-4.2S10-M15-2 | 2 | 1111.11 | 2.40 | 3.25 | 1575.00 | 3.25 | 1338.75 | 1.35
CU-4.2S10-M15-3 | 3 | 1243.82 | 2.00 | 3.02 | 1561.00 | 3.02 | 1326.85 | 1.51
CU-4.2S10-M15-AVG | AVG | 1067.01 | 1.98 | 2.99 | 1516.33 | 2.99 | 1288.88 | 1.54
CU-4.2S15-M2.5-1 | 1 | 1046.74 | 2.11 | 4.31 | 1646.00 | 4.55 | 1399.10 | 2.16
CU-4.2S15-M2.5-2 | 2 | 1875.86 | 3.13 | 4.65 | 1768.00 | 4.68 | 1502.80 | 1.50
CU-4.2S15-M2.5-3 | 3 | 1417.71 | 2.64 | 3.94 | 1641.00 | 4.80 | 1394.85 | 1.82
CU-4.2S15-M2.5-AVG | AVG | 1446.77 | 2.62 | 4.30 | 1685.00 | 4.68 | 1432.25 | 1.83
CU-4.2S15-M15-1 | 1 | 1027.60 | 2.29 | 4.57 | 1657.00 | 6.47 | 1408.45 | 2.82
CU-4.2S15-M15-2 | 2 | 1129.02 | 2.81 | 4.62 | 1702.00 | 5.53 | 1446.70 | 1.96
CU-4.2S15-M15-3 | 3 | 1382.47 | 2.38 | 4.14 | 1735.00 | 3.76 | 1474.75 | 1.58
CU-4.2S15-M15-AVG | AVG | 1179.70 | 2.49 | 4.44 | 1698.00 | 5.25 | 1443.30 | 2.12
CU-4.2S30-M15-1 | 1 | 884.33 | 4.49 | 8.12 | 1919.00 | 9.06 | 1631.15 | 2.02
CU-4.2S30-M15-2 | 2 | 1242.83 | 3.42 | 6.59 | 1712.00 | 8.31 | 1455.20 | 2.43
CU-4.2S30-M15-3 | 3 | 929.76 | 4.41 | 9.06 | 1734.00 | 10.90 | 1473.90 | 2.47
CU-4.2S30-M15-AVGAVG1018.97 4.10 7.92 1788.33 9.42 1520.08 2.31
CU-4.2S40-M15-11 1337.97 2.23 4.40 1682.00 5.21 1430.00 2.34
CU-4.2S40-M15-22 1072.90 2.53 3.73 1692.00 4.13 1438.00 1.63
CU-4.2S40-M15-33 1409.56 1.32 3.00 1694.00 4.32 1440.00 3.28
CU-4.2S40-M15-AVGAVG1273.48 2.03 3.71 1689.33 4.55 1436.00 2.42
CU-3.5P15-C15-11 761.71 3.35 7.46 1333.00 12.45 1133.05 3.72
CU-3.5P15-C15-22 300.43 3.97 6.53 1397.00 8.52 1187.45 2.15
CU-3.5P15-C15-33 488.00 2.89 4.80 1373.00 7.48 1167.05 2.59
CU-3.5P15-C15-44 977.45 3.36 5.27 1335.00 6.34 1134.75 1.89
CU-3.5P15-C15-55 199.86 5.41 3.99 1424.00 7.45 1210.40 1.38
CU-3.5P15-C15-AVGAVG545.49 3.80 5.61 1372.40 8.45 1166.54 2.34
CU-3.5S15-C15-11 277.81 4.26 6.29 1271.00 7.97 1080.35 1.87
CU-3.5S15-C15-22 273.20 4.12 6.75 1325.00 7.90 1126.25 1.92
CU-3.5S15-C15-33 277.40 4.16 5.00 1311.00 7.39 1114.00 1.77
CU-3.5S15-C15-44 348.82 3.48 6.46 1279.00 6.52 1260.75 1.87
CU-3.5S15-C15-55 248.61 5.22 7.92 1386.00 7.80 1178.10 1.49
CU-3.5S15-C15-AVGAVG285.17 4.25 6.48 1314.40 7.52 1151.89 1.79
CU-4.2S10-C15-11 928.39 3.88 4.97 1439.00 5.97 1223.15 1.54
CU-4.2S10-C15-22 727.41 3.09 3.99 1473.00 5.01 1252.05 1.62
CU-4.2S10-C15-33 869.41 3.89 5.00 1428.00 6.00 1213.80 1.54
CU-4.2S10-C15-44 805.16 4.79 7.46 1248.00 8.57 1060.80 1.79
CU-4.2S10-C15-55 912.24 5.91 7.46 1528.00 9.10 1298.80 1.54
CU-4.2S10-C15-AVGAVG848.52 4.31 5.78 1423.20 6.93 1209.72 1.61
CU-4.2S15-C15-11 1086.06 3.31 4.96 1792.00 6.45 1523.20 1.95
CU-4.2S15-C15-22 1204.83 2.14 6.61 1747.00 8.47 1484.95 3.96
CU-4.2S15-C15-33 1165.61 2.52 6.38 1661.00 7.32 1411.85 2.90
CU-4.2S15-C15-44 968.00 3.49 8.02 1573.00 8.10 1337.05 2.32
CU-4.2S15-C15-55 951.64 5.95 8.45 1594.00 9.94 1354.90 1.67
CU-4.2S15-C15-AVGAVG1075.23 3.48 6.88 1673.40 8.06 1422.39 2.56
CU-4.2S30-C15-11 838.89 5.16 7.46 1510.00 10.01 1283.50 1.94
CU-4.2S30-C15-22 812.35 6.45 9.94 1381.00 10.09 1173.85 1.56
CU-4.2S30-C15-33 980.00 5.37 7.50 1519.00 9.92 1291.15 1.85
CU-4.2S30-C15-44 1295.17 6.04 8.58 1878.00 11.01 1596.30 1.82
CU-4.2S30-C15-55 707.69 8.86 10.56 1610.00 12.56 1368.50 1.42
CU-4.2S30-C15-AVGAVG926.82 6.38 8.81 1579.60 10.72 1342.66 1.72
SS-DL-3.5PTS15-MT15-11 836.26 1.60 3.53 1992.69 3.98 1693.79 2.49
SS-DL-3.5PTS15-MT15-22 733.95 1.99 3.90 1983.00 4.58 1685.55 2.31
SS-DL-3.5PTS15-MT15-33 623.71 2.16 4.27 2006.00 4.30 1705.10 1.99
SS-DL-3.5PTS15-MT15-AVGAVG731.31 1.92 3.90 1993.90 4.29 1694.81 2.26
SS-DL-3.5PTS15-CL15-11 454.17 3.96 5.00 1971.00 7.57 1675.35 1.91
SS-DL-3.5PTS15-CL15-22 647.34 3.81 4.32 1992.00 7.33 1693.20 1.92
SS-DL-3.5PTS15-CL15-33 497.45 4.06 4.56 2012.00 8.23 1710.20 2.03
SS-DL-3.5PTS15-CL15-44 559.61 3.72 5.09 2025.00 7.64 1721.25 2.05
SS-DL-3.5PTS15-CL15-55 477.45 4.04 6.28 1968.00 8.01 1672.80 1.98
SS-DL-3.5PTS15-CL15-AVGAVG527.20 3.92 5.05 1993.60 7.76 1694.56 1.98
SS-UF-3.5PTS15-MT15-11 740.97 4.02 5.34 2211.00 6.07 1879.35 1.51
SS-UF-3.5PTS15-MT15-22 590.31 4.15 5.51 2213.00 6.75 1881.05 1.63
SS-UF-3.5PTS15-MT15-33 692.45 3.57 4.59 2200.00 6.31 1870.00 1.77
SS-UF-3.5PTS15-MT15-AVGAVG674.58 3.91 5.15 2208.00 6.38 1876.80 1.64
SS-UF-3.5PTS15-CL15-11 511.35 4.49 4.63 2101.90 8.27 1786.62 1.84
SS-UF-3.5PTS15-CL15-22 547.72 3.58 5.10 2102.70 7.27 1787.30 2.03
SS-UF-3.5PTS15-CL15-33 535.27 4.03 5.08 2103.60 7.43 1788.06 1.84
SS-UF-3.5PTS15-CL15-44 497.01 4.18 5.16 2110.50 7.89 1793.93 1.89
SS-UF-3.5PTS15-CL15-55 577.00 4.32 5.01 2121.30 7.89 1803.11 1.83
SS-UF-3.5PTS15-CL15-AVGAVG533.67 4.12 5.00 2108.00 7.75 1791.80 1.89
SS-DL-3.5STS15-MT15-11 337.40 4.46 6.30 2093.00 7.09 1779.05 1.59
SS-DL-3.5STS15-MT15-22 375.88 4.83 6.37 2103.00 5.65 1787.55 1.17
SS-DL-3.5STS15-MT15-33 354.91 4.72 6.75 2107.00 6.90 1790.95 1.46
SS-DL-3.5STS15-MT15-AVGAVG356.06 4.67 6.47 2101.00 6.55 1785.85 1.41
SS-DL-3.5STS15-CL15-11 377.45 5.79 7.46 2078.80 9.57 1766.98 1.65
SS-DL-3.5STS15-CL15-22 317.45 6.03 7.90 2083.80 10.20 1771.23 1.69
SS-DL-3.5STS15-CL15-33 287.59 6.36 7.31 2084.80 9.88 1772.08 1.55
SS-DL-3.5STS15-CL15-44 380.91 5.62 6.96 2088.80 10.28 1775.48 1.83
SS-DL-3.5STS15-CL15-55 376.07 5.80 7.89 2093.80 10.53 1779.73 1.82
SS-DL-3.5STS15-CL15-AVGAVG347.89 5.92 7.50 2086.00 10.09 1773.10 1.71
SS-DL-4.2STS10-MT15-11 820.40 2.23 4.00 1774.00 4.59 1507.90 2.06
SS-DL-4.2STS10-MT15-22 858.88 2.42 4.07 1784.00 4.55 1516.40 1.88
SS-DL-4.2STS10-MT15-33 837.91 2.36 4.45 1788.00 4.80 1519.80 2.03
SS-DL-4.2STS10-MT15-AVGAVG839.06 2.34 4.17 1782.00 4.65 1514.70 1.99
SS-DL-4.2STS10-CL15-11 592.45 2.35 3.59 1690.00 4.11 1436.50 1.75
SS-DL-4.2STS10-CL15-22 559.61 2.20 3.59 1557.00 5.04 1323.45 2.29
SS-DL-4.2STS10-CL15-33 544.26 2.32 3.63 1646.00 5.68 1399.10 2.45
SS-DL-4.2STS10-CL15-44 537.86 2.21 3.66 1626.00 5.33 1382.10 2.41
SS-DL-4.2STS10-CL15-55 605.88 2.31 3.67 1641.00 4.95 1394.85 2.14
SS-DL-4.2STS10-CL15-AVGAVG568.01 2.28 3.63 1632.00 5.02 1387.20 2.21
SS-DL-4.2STS15-MT15-11 288.34 5.81 7.82 2497.00 9.33 2122.45 1.60
SS-DL-4.2STS15-MT15-22 295.72 5.26 7.83 2193.00 9.73 1864.05 1.85
SS-DL-4.2STS15-MT15-33 336.19 5.95 7.85 2087.00 8.52 1773.95 1.43
SS-DL-4.2STS15-MT15-AVGAVG306.75 5.67 7.83 2259.00 9.19 1920.15 1.63
SS-DL-4.2STS15-CL15-11 335.76 5.98 7.10 1945.60 10.74 1653.76 1.80
SS-DL-4.2STS15-CL15-22 291.74 5.52 7.39 1908.60 9.65 1622.31 1.75
SS-DL-4.2STS15-CL15-33 311.10 6.06 7.55 2179.60 9.68 1852.66 1.60
SS-DL-4.2STS15-CL15-44 298.24 4.89 7.63 1838.60 9.81 1562.81 2.00
SS-DL-4.2STS15-CL15-55 342.37 5.91 7.67 2222.60 10.11 1889.21 1.71
SS-DL-4.2STS15-CL15-AVGAVG315.84 5.67 7.47 2019.00 10.00 1716.15 1.77
SS-DL-4.2STS30-MT15-11 324.50 4.00 11.22 2518.00 11.92 2140.30 2.98
SS-DL-4.2STS30-MT15-22 317.38 3.91 11.22 2525.00 11.59 2146.25 2.96
SS-DL-4.2STS30-MT15-33 303.80 4.48 11.23 2172.00 12.11 1846.20 2.70
SS-DL-4.2STS30-MT15-AVGAVG315.23 4.13 11.22 2405.00 11.87 2044.25 2.88
SS-DL-4.2STS30-CL15-11 405.17 6.32 9.90 2333.00 11.04 1983.05 1.75
SS-DL-4.2STS30-CL15-22 492.10 6.38 9.94 2397.00 10.60 2037.45 1.66
SS-DL-4.2STS30-CL15-33 458.00 6.36 9.99 2071.00 10.80 1760.35 1.70
SS-DL-4.2STS30-CL15-44 418.10 7.10 10.08 2025.00 10.61 1721.25 1.49
SS-DL-4.2STS30-CL15-55 446.07 7.27 10.08 2061.00 10.85 1751.85 1.49
SS-DL-4.2STS30-CL15-AVGAVG443.89 6.68 10.00 2177.40 10.78 1850.79 1.62
SS-DL-4.2STS15-MT2.5-11 418.95 5.99 6.52 2300.00 9.79 1955.00 1.63
SS-DL-4.2STS15-MT2.5-22 436.95 6.05 6.59 1891.00 8.67 1607.35 1.43
SS-DL-4.2STS15-MT2.5-33 400.90 6.27 6.61 2130.00 8.28 1810.50 1.32
SS-DL-4.2STS15-MT2.5-AVGAVG418.93 6.10 6.57 2107.00 8.91 1790.95 1.46
SS-DL-4.2STS15-MT30-11 331.84 6.19 6.55 2162.00 8.41 1837.70 1.36
SS-DL-4.2STS15-MT30-22 333.22 5.87 6.95 2211.00 8.97 1879.35 1.53
SS-DL-4.2STS15-MT30-33 326.70 6.73 6.88 2212.00 9.47 1880.20 1.41
SS-DL-4.2STS15-MT30-AVGAVG330.59 6.26 6.79 2195.00 8.95 1865.75 1.43
SS-UF-3.5STS15-MT15-11 403.01 4.78 6.42 2021.00 7.05 1717.85 1.48
SS-UF-3.5STS15-MT15-22 470.17 4.63 6.42 2088.00 6.98 1774.80 1.51
SS-UF-3.5STS15-MT15-33 434.82 4.75 6.46 1777.00 7.62 1510.45 1.60
SS-UF-3.5STS15-MT15-AVGAVG436.00 4.72 6.43 1962.00 7.22 1667.70 1.53
SS-UF-3.5STS15-CL15-11 540.26 5.88 6.13 1810.20 10.34 1538.67 1.76
SS-UF-3.5STS15-CL15-22 492.96 5.69 5.95 1799.20 9.66 1529.32 1.70
SS-UF-3.5STS15-CL15-33 523.05 5.69 5.96 1798.80 10.62 1528.98 1.86
SS-UF-3.5STS15-CL15-44 544.17 6.30 6.57 2055.20 9.40 1746.92 1.49
SS-UF-3.5STS15-CL15-55 552.86 5.74 6.18 1751.20 9.80 1488.52 1.71
SS-UF-3.5STS15-CL15-AVGAVG530.66 5.86 6.16 1842.92 9.96 1566.48 1.70
SS-UF-4.2STS10-MT15-11 748.35 2.35 3.04 1548.70 3.24 1316.40 1.38
SS-UF-4.2STS10-MT15-22 839.81 2.23 3.12 1478.60 3.31 1256.81 1.49
SS-UF-4.2STS10-MT15-33 759.73 1.91 3.15 1433.70 3.29 1218.65 1.72
SS-UF-4.2STS10-MT15-AVGAVG782.63 2.16 3.10 1487.00 3.28 1263.95 1.53
SS-UF-4.2STS10-CL15-11 377.44 2.16 3.99 1460.60 4.98 1241.51 2.31
SS-UF-4.2STS10-CL15-22 362.44 2.06 4.32 1386.60 5.00 1178.61 2.42
SS-UF-4.2STS10-CL15-33 401.77 2.46 4.43 1494.60 5.50 1270.41 2.24
SS-UF-4.2STS10-CL15-44 354.67 2.48 3.52 1669.60 4.54 1419.16 1.83
SS-UF-4.2STS10-CL15-55 364.29 2.04 3.72 1478.60 4.73 1256.81 2.32
SS-UF-4.2STS10-CL15-AVGAVG372.12 2.24 4.00 1498.00 4.95 1273.30 2.22
SS-UF-4.2STS15-MT15-11 549.47 4.02 4.71 2098.66 4.88 1783.86 1.21
SS-UF-4.2STS15-MT15-22 593.57 3.56 4.74 1816.66 5.58 1544.16 1.57
SS-UF-4.2STS15-MT15-33 487.38 3.82 4.76 1709.66 5.62 1453.21 1.47
SS-UF-4.2STS15-MT15-AVGAVG543.48 3.80 4.74 1874.99 5.36 1593.74 1.42
SS-UF-4.2STS15-CL15-11 456.28 3.01 4.88 2053.20 7.80 1745.22 2.59
SS-UF-4.2STS15-CL15-22 463.06 3.34 4.95 1712.20 7.92 1455.37 2.37
SS-UF-4.2STS15-CL15-33 507.19 3.36 4.99 2096.20 7.23 1781.77 2.15
SS-UF-4.2STS15-CL15-44 497.67 3.12 5.09 2002.20 7.47 1701.87 2.40
SS-UF-4.2STS15-CL15-55 454.89 3.29 5.10 1761.20 7.58 1497.02 2.30
SS-UF-4.2STS15-CL15-AVGAVG475.82 3.22 5.00 1925.00 7.60 1636.25 2.36
SS-UF-4.2STS30-MT15-11 560.10 3.80 5.18 1877.00 5.59 1595.45 1.47
SS-UF-4.2STS30-MT15-22 558.12 3.48 5.23 1913.00 5.95 1626.05 1.71
SS-UF-4.2STS30-MT15-33 525.27 3.33 5.23 1880.00 5.88 1598.00 1.76
SS-UF-4.2STS30-MT15-AVGAVG547.83 3.54 5.21 1890.00 5.81 1606.50 1.65
SS-UF-4.2STS30-CL15-11 411.25 4.03 4.88 1651.30 6.60 1403.61 1.63
SS-UF-4.2STS30-CL15-22 355.37 4.05 4.92 2035.20 6.91 1729.92 1.71
SS-UF-4.2STS30-CL15-33 345.85 3.31 5.02 1941.40 6.65 1650.19 2.01
SS-UF-4.2STS30-CL15-44 403.08 3.98 5.03 1700.70 6.75 1445.60 1.70
SS-UF-4.2STS30-CL15-55 336.50 4.08 5.04 1746.40 7.35 1484.44 1.80
SS-UF-4.2STS30-CL15-AVGAVG370.41 3.89 4.98 1815.00 6.85 1542.75 1.77
SS-UF-4.2STS15-MT2.5-11 503.82 3.22 4.27 2065.00 4.29 1755.25 1.33
SS-UF-4.2STS15-MT2.5-22 547.92 3.26 4.30 1783.40 4.33 1515.89 1.33
SS-UF-4.2STS15-MT2.5-33 441.73 2.82 4.32 1676.67 4.35 1425.17 1.54
SS-UF-4.2STS15-MT2.5-AVGAVG497.83 3.10 4.30 1841.69 4.32 1565.44 1.40
SS-UF-4.2STS15-MT30-11 397.69 4.12 5.00 1647.00 5.85 1399.95 1.42
SS-UF-4.2STS15-MT30-22 341.81 4.14 5.04 2031.40 5.15 1726.69 1.25
SS-UF-4.2STS15-MT30-33 332.29 4.40 5.14 1937.00 5.39 1646.45 1.23
SS-UF-4.2STS15-MT30-AVGAVG357.26 4.22 5.06 1871.80 5.46 1591.03 1.30
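Each AVG row above is the arithmetic mean of the individual measurements in its series. As a minimal sketch, the elastic-stiffness entry of the CD-3.5P15-C15-AVG row can be reproduced from the five Ke measurements listed for that series:

```python
# Reproduce the Ke entry of the CD-3.5P15-C15-AVG row from the five
# individual Ke measurements listed in the table above.
from statistics import mean

ke = [224.26, 416.00, 373.51, 394.43, 583.26]  # Ke of tests 1-5
ke_avg = round(mean(ke), 2)
print(ke_avg)  # 398.29, matching the AVG row
```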

Appendix A.3

Table A3. Total parameters and hyperparameters of each model.
XGBoost — Training time: 51.167 s; Data slice: 0.7; Data shuffle: Yes; Cross-validation: 5; Base learner (BLD): gbtree; Number of base learners: 100; Learning rate: 0.1; L1 regularization term: 0; L2 regularization term: 1; Sample rate: 1; Tree feature sampling rate: 1; Node feature sampling rate: 1; Minimum weight of samples in leaf nodes: 0; Maximum tree depth: 10; Node-splitting impurity threshold: 0.
KNN — Training time: 0.029 s; Data slice: 0.7; Data shuffle: Yes; Cross-validation: 5; Search algorithm: Auto; Number of leaves: 30; Number of nearest neighbors: 5; Nearest-neighbor sample weight function: Uniform; Vector distance metric: Euclidean.
RF — Training time: 0.804 s; Data slice: 0.7; Data shuffle: Yes; Cross-validation: 5; Node-splitting criterion: Gini; Number of decision trees: 100; Sampling with replacement: True; Out-of-bag testing: False; Maximum proportion of features considered per split: Auto; Minimum samples for an internal node split: 2; Minimum samples per leaf node: 1; Minimum weight of samples in leaf nodes: 0; Maximum tree depth: 10; Maximum number of leaf nodes: 50.
LR — Training time: 1.778 s; Data slice: 0.7; Data shuffle: Yes; Cross-validation: 5; Regularization: None; Constant (intercept) term: True; Error convergence tolerance: 0.001; Maximum number of iterations: 1000.
SVM — Training time: 0.143 s; Data slice: 0.7; Data shuffle: Yes; Cross-validation: 5; Penalty coefficient: 1; Kernel function: Linear; Kernel coefficient: Scale; Kernel constant: 0; Highest polynomial degree of the kernel: 3; Error convergence tolerance: 0.001; Maximum number of iterations: 1000; Multi-class fusion strategy: one-vs-rest (ovr).
NB — Training time: 0.041 s; Data slice: 0.7; Data shuffle: Yes; Cross-validation: 5; Prior distribution: Gaussian; Alpha: 1; Binarization threshold: 0.
BPNN — Training time: 45.828 s; Data slice: 0.7; Data shuffle: Yes; Cross-validation: 5; Activation function: Identity; Solver: lbfgs; Learning rate: 0.1; L2 regularization term: 1; Number of iterations: 1000; Number of neurons in hidden layer 1: 100.
ExTrees — Training time: 0.671 s; Data slice: 0.7; Data shuffle: Yes; Cross-validation: 5; Maximum proportion of features considered per split: None; Minimum samples for an internal node split: 2; Minimum samples per leaf node: 1; Minimum weight of samples in leaf nodes: 0; Maximum tree depth: 10; Maximum number of leaf nodes: 50; Node-splitting impurity threshold: 0.
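For concreteness, the RF settings listed in Table A3 can be expressed as a model configuration. The sketch below assumes a scikit-learn `RandomForestClassifier`; the paper does not state which implementation was used, and the table's "Auto" for the per-split feature proportion is mapped here to `"sqrt"`, the closest current scikit-learn option.

```python
# Hedged sketch: the RF hyperparameters of Table A3 written as a
# scikit-learn RandomForestClassifier (the actual library used in the
# paper is not stated; "Auto" is mapped to "sqrt").
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(
    n_estimators=100,             # number of decision trees
    criterion="gini",             # node-splitting criterion
    bootstrap=True,               # sampling with replacement
    oob_score=False,              # out-of-bag testing disabled
    max_features="sqrt",          # "Auto" in Table A3
    min_samples_split=2,          # minimum samples for an internal split
    min_samples_leaf=1,           # minimum samples per leaf node
    min_weight_fraction_leaf=0.0, # minimum weight of samples in leaves
    max_depth=10,
    max_leaf_nodes=50,
    random_state=0,               # not listed in the table; fixed for reproducibility
)
```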

Appendix A.4

Table A4. Frequency distribution and descriptive statistics of raw data.
Variable Name | Sample Size | Maximum | Minimum | Average | Standard Deviation | Median | Variance | Kurtosis | Skewness | Coefficient of Variation
Ke | 248 | 2454.4 | 199.86 | 645.156 | 370.795 | 528.49 | 137,488.846 | 4.812 | 1.901 | 0.575
δy/mm | 248 | 8.86 | 1.32 | 3.881 | 1.349 | 3.82 | 1.819 | −0.071 | 0.531 | 0.348
δm/mm | 248 | 11.23 | 0.47 | 5.583 | 1.78 | 5.055 | 3.167 | 1.097 | 0.911 | 0.319
Fm/N | 248 | 2525 | 1026 | 1719.308 | 309.151 | 1696 | 95,574.245 | −0.609 | 0.222 | 0.18
δu/mm | 248 | 13.97 | 2.71 | 7.202 | 2.19 | 7.065 | 4.797 | −0.431 | 0.327 | 0.304
Fu/N | 248 | 2146.25 | 872.1 | 1462.252 | 261.834 | 1441.651 | 68,556.849 | −0.597 | 0.227 | 0.179
μ | 248 | 4 | 1.17 | 1.942 | 0.494 | 1.825 | 0.244 | 2.727 | 1.401 | 0.254
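The fused cells in this table can be cross-checked arithmetically: the variance column should equal the squared standard deviation, and the coefficient of variation should equal the standard deviation divided by the mean. A small Python check for the Ke row (reading its fused median/variance cell as 528.49 and 137,488.846, the only split consistent with SD²):

```python
# Consistency check on the Ke row of Table A4: variance ≈ SD², and the
# coefficient of variation ≈ SD / mean.
ke_mean, ke_sd = 645.156, 370.795
ke_variance, ke_cv = 137488.846, 0.575

assert abs(ke_sd ** 2 - ke_variance) < 1.0   # 370.795² ≈ 137,488.9
assert abs(ke_sd / ke_mean - ke_cv) < 1e-3   # 370.795 / 645.156 ≈ 0.5747
print("Ke row is internally consistent")
```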

Appendix B. Supplementary Figures

The supplementary figures of this article are shown in Figure A1, Figure A2 and Figure A3.
Figure A1. Schematic diagram of the KNN algorithm.
Figure A2. Schematic diagram of the Random Forest algorithm.
Figure A3. Schematic diagram of the SVM algorithm.

References

  1. McGibbon, R.T.; Hernández, C.X.; Harrigan, M.P.; Kearnes, S.; Sultan, M.M.; Jastrzebski, S.; Husic, B.E.; Pande, V.S. Osprey: Hyperparameter Optimization for Machine Learning. J. Open Source Softw. 2016, 1, 34. [Google Scholar] [CrossRef]
  2. Quinlan, J.R. C4.5: Programs for Machine Learning; Morgan Kaufmann: San Mateo, CA, USA, 1993. [Google Scholar]
  3. Dietterich, T.G. Ensemble Methods in Machine Learning. In Proceedings of the International Workshop on Multiple Classifier Systems, Cagliari, Italy, 21–23 June 2000. [Google Scholar] [CrossRef]
  4. Zuo, W.; Wang, K.; Zhang, H.; Zhang, D. Kernel Difference-Weighted k-Nearest Neighbors Classification. In Advanced Intelligent Computing Theories and Applications, Proceedings of the International Conference on Intelligent Computing, Qingdao, China, 21–24 August 2007; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar] [CrossRef]
  5. Liaw, A.; Wiener, M. Classification and Regression by randomForest. R News 2002, 2, 18–22. [Google Scholar]
  6. Thornton, C.; Hutter, F.; Hoos, H.H.; Leyton-Brown, K. Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, IL, USA, 11–14 August 2012. [Google Scholar] [CrossRef]
  7. Strijov, V.; Weber, G.W. Nonlinear regression model generation using hyperparameter optimization. Comput. Math. Appl. 2010, 60, 981–988. [Google Scholar] [CrossRef]
  8. Syarif, I.; Prugel-Bennett, A.; Wills, G. SVM Parameter Optimization using Grid Search and Genetic Algorithm to Improve Classification Performance. Telkomnika 2016, 14, 1502–1509. [Google Scholar] [CrossRef]
  9. Feurer, M.; Springenberg, J.T.; Hutter, F. Using meta-learning to initialize Bayesian optimization of hyperparameters. In Proceedings of the 2014 International Conference on Meta-Learning and Algorithm Selection (MLAS’14), Prague, Czech Republic, 19 September 2014; Volume 1201. [Google Scholar]
  10. Jiang, M.; Luo, Y.P.; Yang, S.Y. Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm. Inf. Process. Lett. 2007, 102, 8–16. [Google Scholar] [CrossRef]
  11. Feng, M. Numerical and experimental studies of cold-formed thin-walled steel studs in fire. Neuro Endocrinol. Lett. 2004, 26, 25–28. [Google Scholar] [CrossRef]
  12. Dubina, D. Thin-walled structures. Steel Constr. 2011, 4, 213–214. [Google Scholar] [CrossRef]
  13. Li, Y.; Shen, H.; Shan, W.; Han, T. Flexural behavior of lightweight bamboo-steel composite slabs. Thin-Walled Struct. 2012, 53, 83–90. [Google Scholar] [CrossRef]
  14. Jiang, T.; Li, Y.; Shan, W.; Zhang, W. Seismic Behavior of Thin-Walled C Steel-Bamboo Plywood Composite Column. J. Northeast For. Univ. 2011, 39, 82–85. [Google Scholar] [CrossRef]
  15. Okubo, K.; Fujii, T.; Yamamoto, Y. Development of bamboo-based polymer composites and their mechanical properties. Compos. Part A Appl. Sci. Manuf. 2004, 35, 377–383. [Google Scholar] [CrossRef]
  16. Chaowana, P. Bamboo: An Alternative Raw Material for Wood and Wood-Based Composites. J. Mater. Sci. Res. 2013, 2, 90–102. [Google Scholar] [CrossRef]
  17. Li, Y.; Shuai, Y.; Shen, Z.; Qin, Y.; Chen, Y. Experimental study on tension behavior of self-drilling screw connections for cold-formed thin-walled steel structures. J. Build. Struct. 2015, 36, 143–152. [Google Scholar] [CrossRef]
  18. Lu, L.; Zhang, Y.; Fang, W.; Yang, D. Experimental investigation on shear-bearing capacity for self-drilling screw connections of cold-formed thin-walled steel. J. Cent. South Univ. (Sci. Technol.) 2013, 44, 2997–3005. [Google Scholar]
  19. Cai, K.; Yuan, H. Testing, numerical and analytical modelling of self-drilling screw connections between thin steel sheets in shear. Thin-Walled Struct. 2023, 182, 1–19. [Google Scholar] [CrossRef]
  20. Dolan, C.V. Factor analysis of variables with 2, 3, 5 and 7 response categories: A comparison of categorical variable estimators using simulated data. Br. J. Math. Stat. Psychol. 2011, 47, 309–326. [Google Scholar] [CrossRef]
  21. DiStefano, C. The Impact of Categorization With Confirmatory Factor Analysis. Struct. Equ. Model. 2002, 9, 327–346. [Google Scholar] [CrossRef]
  22. Lin, H.; Zhang, W. The Relationship between Principal Component Analysis and Factor Analysis and SPSS Software—To Discuss with Comrade Liu Yumei, Lu Wendai etc. Stat. Res. 2005, 3, 65–68. [Google Scholar]
  23. Yuan, Y.; Zhang, H.T.; Wu, Y.; Zhu, T.; Ding, H. Bayesian Learning-Based Model-Predictive Vibration Control for Thin-Walled Workpiece Machining Processes. IEEE/ASME Trans. Mechatron. 2017, 22, 509–520. [Google Scholar] [CrossRef]
  24. Wang, R.; Song, Q.; Liu, Z.; Ma, H.; Gupta, M.K.; Liu, Z. A Novel Unsupervised Machine Learning-Based Method for Chatter Detection in the Milling of Thin-Walled Parts. Sensors 2021, 21, 5779. [Google Scholar] [CrossRef] [PubMed]
  25. Sun, H.; Zhao, S.; Peng, F.; Yan, R.; Zhou, L.; Zhang, T.; Zhang, C. In-situ prediction of machining errors of thin-walled parts: An engineering knowledge based sparse Bayesian learning approach. J. Intell. Manuf. 2022, 35, 387–411. [Google Scholar] [CrossRef]
  26. Guo, C.; Jiang, L.; Yang, F.; Yang, Z.; Zhang, X. Impact load identification and localization method on thin-walled cylinders using machine learning. Smart Mater. Struct. 2023, 32, 065018. [Google Scholar] [CrossRef]
  27. Long, X.; Lu, C.; Shen, Z.; Su, Y. Identification of Mechanical Properties of Thin-Film Elastoplastic Materials by Machine Learning. Acta Mech. Solida Sin. 2023, 36, 13–21. [Google Scholar] [CrossRef]
  28. Saeki, K.; Makino, T. Evaluation of optical constants in oxide thin films using machine learning. Jpn. J. Appl. Phys. 2023, 62, 081002. [Google Scholar] [CrossRef]
  29. Mojtabaei, S.M.; Becque, J.; Hajirasouliha, I.; Khandan, R. Predicting the buckling behaviour of thin-walled structural elements using machine learning methods. Thin-Walled Struct. 2023, 184, 110518. [Google Scholar] [CrossRef]
  30. Wu, Q.; Li, T.; Yue, Z.; Cao, Y.; Jie, Y.; Xiao, Y.; Li, Z.; Wang, Z.; Wang, L. Connections in stainless steel frame shear walls sheathed with ply-bamboo panels. J. Constr. Steel Res. 2024, 223, 109055. [Google Scholar] [CrossRef]
  31. Li, Z.; Li, T.; Xiao, Y. Connections used for cold-formed steel frame shear walls sheathed with engineered bamboo panels. J. Constr. Steel Res. 2020, 164, 105787. [Google Scholar] [CrossRef]
  32. AISI S400-15; North American Standard for Seismic Design of Cold-Formed Steel Structural Systems. American Iron and Steel Institute: Washington, DC, USA, 2015.
  33. Dong, J.; Peng, Y.; Jin, X.; Chen, Z. Experimental Study and Design Recommendations on Stainless Steel Screw Connections. Ind. Archit. 2024, 45, 8. [Google Scholar] [CrossRef]
  34. Alsiwat, J.M.; Saatcioglu, M. Reinforcement Anchorage Slip under Monotonic Loading. J. Struct. Eng. 1992, 118, 2421–2438. [Google Scholar] [CrossRef]
  35. Lowes, L.N.; Altoontash, A. Modeling Reinforced-Concrete Beam-Column Joints Subjected to Cyclic Loading. J. Struct. Eng. 2003, 12, 1686–1697. [Google Scholar] [CrossRef]
  36. Hainsworth, S.V.; Chandler, H.W.; Page, T.F. Analysis of nanoindentation load-displacement loading curves. J. Mater. Res. 1996, 11, 1987–1995. [Google Scholar] [CrossRef]
  38. Shapiro, S.S.; Wilk, M.B. An analysis of variance test for normality (complete samples). Biometrika 1965, 52, 591–611. [Google Scholar] [CrossRef]
  38. Mudholkar, G.S.; Srivastava, D.K.; Lin, C.T. Some p-variate adaptations of the Shapiro–Wilk test of normality. Commun. Stat. Theory Methods 1995, 24, 953–985. [Google Scholar] [CrossRef]
  39. Lin, L.I.; Torbeck, L.D. Coefficient of accuracy and concordance correlation coefficient: New statistics for methods comparison. PDA J. Pharm. Sci. Technol. 1998, 52, 55. [Google Scholar] [CrossRef] [PubMed]
  40. Hauke, J.; Kossowski, T. Comparison of values of Pearson’s and Spearman’s correlation coefficients on the same sets of data. Quaest. Geogr. 2011, 30, 87–93. [Google Scholar] [CrossRef]
  41. Roweis, S.; Saul, L. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science 2000, 290, 2323–2326. [Google Scholar] [CrossRef] [PubMed]
  42. Mnassri, B.; Adel, E.M.E.; Ouladsine, M. Fault localization using principal component analysis based on a new contribution to the squared prediction error. In Proceedings of the Conference on Control & Automation, Seoul, Republic of Korea, 14–17 October 2008. [Google Scholar] [CrossRef]
  43. Pizarro, C.; Esteban-Díez, I.; Nistal, A.J.; González-Sáiz, J.M. Influence of data pre-processing on the quantitative determination of the ash content and lipids in roasted coffee by near infrared spectroscopy. Anal. Chim. Acta 2004, 509, 217–227. [Google Scholar] [CrossRef]
  44. Chen, T.; He, T.; Benesty, M.; Khotilovich, V.; Tang, Y.; Cho, H.; Chen, K.; Mitchell, R.; Cano, I.; Zhou, T. xgboost: Extreme Gradient Boosting. R Package Version 0.4-2, 2015; 1, 1–4. [Google Scholar]
  45. Chen, H.; Gilad-Bachrach, R.; Han, K.; Huang, Z.; Jalali, A.; Laine, K.; Lauter, K. Logistic regression over encrypted data from fully homomorphic encryption. BMC Med. Genom. 2018, 11, 81. [Google Scholar] [CrossRef] [PubMed]
  46. Joachims, T. Making large-Scale SVM Learning Practical. In Advances in Kernel Methods—Support Vector Learning; MIT Press: Cambridge, MA, USA, 1999. [Google Scholar]
  47. Rish, I. An empirical study of the naive Bayes classifier. J. Univers. Comput. Sci. 2001, 1, 127. [Google Scholar] [CrossRef]
  48. Hecht-Nielsen, R. Theory of the Backpropagation Neural Network. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), Washington, DC, USA, 18–22 June 1989. [Google Scholar] [CrossRef]
  49. Geurts, P.; Ernst, D.; Wehenkel, L. Extremely randomized trees. Mach. Learn. 2006, 63, 3–42. [Google Scholar] [CrossRef]
  50. Townsend, J.T. Theoretical analysis of an alphabetic confusion matrix. Atten. Percept. Psychophys. 1971, 9, 40–50. [Google Scholar] [CrossRef]
  51. Goldberg, D.E. Genetic Algorithm in Search, Optimization, and Machine Learning; Addison-Wesley Pub. Co.: Boston, MA, USA, 1989. [Google Scholar]
  52. Bangert, P. Optimization: Simulated Annealing. In Optimization for Industrial Problems; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar] [CrossRef]
  53. Gao, Y.; Li, Z.; Wang, H.; Hu, Y.; Jiang, H.; Jiang, X.; Chen, D. An Improved Spider-Wasp Optimizer for Obstacle Avoidance Path Planning in Mobile Robots. Mathematics 2024, 12, 2604. [Google Scholar] [CrossRef]
  54. Zhang, L.; Liu, T.; Xu, C.; Zhang, J.; Gao, Y.; Li, X.C.; Wang, H.; Li, X.; Mir, M. A risk-averse cooperative framework for neighboring energy hubs under joint carbon, heat and electricity trading market with P2G and renewables. Renew. Energy 2025, 250, 123241. [Google Scholar] [CrossRef]
Figure 1. Diagram of this study’s architecture and logic.
Figure 2. Physical picture of the TWS composite structure connections.
Figure 3. Heatmap of the Spearman correlation coefficients for the input and output parameters. Note: S.M., S.S. and S.Y.S. represent Sheathing modulus (%), Sheathing strength (MPa) and Screw yield strength (MPa) respectively.
Figure 4. Flowchart of the factor analysis.
Figure 5. Scatterplot of the factors.
Figure 6. Schematic of the XGBoost algorithm.
Figure 7. Diagram of the process and formula of the RF model.
Figure 8. Radar charts of evaluation parameters for the eight machine learning models on the dataset.
Figure 9. Prediction results of the classification models trained using different machine learning algorithms.
Figure 10. Evaluation parameter radar charts for the hyperparameter optimization algorithms.
Figure 11. Feature importance of each parameter.
Table 1. Symbol descriptions.
Symbol | Description
TWS | Thin-walled Steel–Ply–Bamboo Shear Walls
SS | Stainless Steel
CD | Cold-formed Steel Double-Directional Ply–Bamboo
CU | Cold-formed Steel Double-Unidirectional Flat-Pressing Ply–Bamboo
DL | Double-directional Laminated Ply–Bamboo
UF | Unidirectional Flat-Pressing Ply–Bamboo
PTS (or P) | Phosphating Steel Screws
STS (or S) | Stainless Steel Screws
MT (or M) | Monotonic Test (Loading Protocol)
CL (or C) | Cyclic Test (Loading Protocol)
AVG | Average of the Repeated Measurements (3 or 5 per Series)
Table 2. Overview of the TWS composite structure connections and experimental design.
Test Series | Bamboo Panel Type | Steel Stud Type | Screw Type | End Distance (mm) | Loading Type | Sample Batch
SS-DL-4.2STS30-MT15-1 | Double-directional | Stainless Steel | STS 4.2 | 30 | Monotonic | 1
CU-4.2S30-C15-4 | Unidirectional | Cold-formed Steel | STS 4.2 | 30 | Cyclic | 4
CD-3.5P15-M15-3 | Double-directional | Stainless Steel | PTS 3.5 | 15 | Monotonic | 3
CD-3.5P15-M15-AVG | Double-directional | Stainless Steel | PTS 3.5 | 15 | Monotonic | AVG
CD-3.5S15-M15-1 | Double-directional | Stainless Steel | STS 3.5 | 15 | Monotonic | 1
CU-4.2P20-M20-2 | Unidirectional | Cold-formed Steel | PTS 4.2 | 20 | Monotonic | 2
… (Additional combinations)
Table 3. A portion of the mechanical performance data of the TWS composite structure connections [31].
Test Series | No. | Ke | δy | δm | Fm | δu | Fu | μ
CD-3.5P15-M15-1 | 1 | 590.94 | 3.14 | 5.13 | 1402.00 | 6.27 | 1191.70 | 2.00
CD-3.5P15-M15-2 | 2 | 706.42 | 2.83 | 4.17 | 1321.00 | 4.55 | 1122.85 | 1.61
CD-3.5P15-M15-3 | 3 | 447.93 | 3.63 | 4.89 | 1327.00 | 4.89 | 1127.95 | 1.35
CD-3.5P15-M15-AVG | AVG | 581.76 | 3.20 | 4.73 | 1350.00 | 5.24 | 1147.50 | 1.65
CD-3.5S15-M15-1 | 1 | 205.41 | 4.24 | 5.85 | 1195.00 | 6.34 | 1015.75 | 1.50
CD-3.5S15-M15-2 | 2 | 217.53 | 4.33 | 5.90 | 1253.00 | 6.42 | 1065.05 | 1.48
CD-3.5S15-M15-3 | 3 | 360.38 | 4.38 | 5.50 | 1228.00 | 6.40 | 1043.80 | 1.46
Note: Ke, δy, δm, Fm, δu, Fu, and μ represent the elastic stiffness, yield displacement, maximum displacement, maximum force, ultimate displacement, ultimate force, and ductility factor, respectively.
Table 4. Descriptive statistics and normality test results.
Var. | S | Med. | Avg. | SD | Skew. | Kurt. | SW Test | p-Value
δy | 248 | 3.820 | 3.881 | 1.349 | 0.531 | −0.071 | 0.967 | 0.000 ***
Ke | 248 | 528.491 | 645.156 | 370.795 | 1.901 | 4.812 | 0.825 | 0.000 ***
δm | 248 | 5.055 | 5.583 | 1.780 | 0.911 | 1.097 | 0.937 | 0.000 ***
Fm | 248 | 1696 | 1719.308 | 309.151 | 0.222 | −0.609 | 0.984 | 0.006 ***
δu | 248 | 7.065 | 7.202 | 2.191 | 0.327 | −0.431 | 0.984 | 0.007 ***
Fu | 248 | 1441.651 | 1462.252 | 261.834 | 0.227 | −0.597 | 0.984 | 0.006 ***
μ | 248 | 1.825 | 1.942 | 0.494 | 1.401 | 2.727 | 0.901 | 0.000 ***
Note: *** denotes significance at the 1% level. Var., S, Med., Avg., SD, Skew., Kurt., and SW Test denote the variable name, sample size, median, average, standard deviation, skewness, kurtosis, and Shapiro–Wilk test statistic, respectively.
Table 5. Component matrix table.
Name | F1 | F2 | F3 | F4
Ke | 0.063 | 0.056 | 1.062 | 0.220
Fm | −0.079 | 0.530 | 0.038 | −0.039
Fu | −0.080 | 0.529 | 0.039 | −0.038
δu | 0.893 | −0.043 | 0.018 | −0.633
δm | 0.463 | −0.076 | 0.039 | −0.422
δy | −0.351 | −0.073 | 0.131 | 1.661
Table 6. Factor weight table.
Contribution Factor | Variance Contribution Rate | Cumulative Variance (%) | Weight (%)
F1 | 0.344 | 34.402 | 34.402
F2 | 0.339 | 68.340 | 33.938
F3 | 0.174 | 85.728 | 17.388
F4 | 0.112 | 96.944 | 11.217
Table 7. Statistics of the top 20 data points in the comprehensive ranking (excerpt).
Rank | Connection Specimen | Comprehensive Score | Ke | Fm | Fu | δu | δm | δy
1 | SS-DL-4.2STS30-MT15-1 | 1.416897 | 324.50 | 2518 | 2140.30 | 11.92 | 11.22 | 4.03
2 | SS-DL-4.2STS30-MT15-2 | 1.388279 | 317.38 | 2525 | 2146.25 | 11.59 | 11.22 | 3.91
3 | SS-DL-4.2STS30-CL15-2 | 1.317037 | 492.10 | 2397 | 2037.45 | 10.62 | 9.94 | 6.38
4 | SS-DL-4.2STS30-MT15-AVG | 1.301942 | 315.23 | 2405 | 2044.25 | 11.87 | 11.22 | 4.13
5 | CU-4.2S30-C15-4 | 1.258773 | 1295.17 | 1878 | 1596.30 | 11.01 | 8.58 | 6.04
6 | SS-DL-4.2STS30-CL15-1 | 1.223402 | 405.17 | 2333 | 1983.05 | 11.04 | 9.90 | 6.32
7 | SS-DL-4.2STS30-CL15-AVG | 1.102345 | 443.89 | 2177 | 1850.79 | 10.78 | 10.00 | 6.68
8 | SS-DL-4.2STS30-MT15-3 | 1.102275 | 303.80 | 2172 | 1846.20 | 12.11 | 11.23 | 4.48
9 | SS-DL-4.2STS30-CL15-5 | 1.030593 | 446.07 | 2061 | 1751.85 | 10.85 | 10.08 | 7.27
10 | CU-4.2S30-CL15-5 | 1.020289 | 707.69 | 1610 | 1368.50 | 12.56 | 10.56 | 8.86
11 | SS-DL-4.2STS30-CL15-3 | 0.992601 | 458.01 | 2071 | 1760.35 | 10.82 | 9.99 | 6.36
12 | SS-DL-4.2STS15-MT15-1 | 0.969794 | 288.34 | 2497 | 2122.45 | 9.33 | 7.82 | 5.81
13 | CD-4.2S15-CL15-1 | 0.950410 | 2454.40 | 1534 | 1303.90 | 7.44 | 4.96 | 2.42
14 | SS-DL-4.2STS30-CL15-4 | 0.948717 | 418.14 | 2025 | 1721.25 | 10.61 | 10.08 | 7.11
15 | CU-4.2S30-M15-3 | 0.819136 | 929.76 | 1734 | 1473.90 | 10.90 | 9.06 | 4.41
16 | SS-DL-4.2STS15-MT2.5-1 | 0.802252 | 418.95 | 2300 | 1955.00 | 9.79 | 6.52 | 5.99
17 | SS-DL-4.2STS15-CL15-5 | 0.791552 | 342.37 | 2223 | 1889.21 | 10.11 | 7.67 | 5.91
18 | CU-4.2S30-M15-1 | 0.748842 | 884.33 | 1919 | 1631.15 | 9.06 | 8.12 | 4.49
19 | SS-DL-3.5STS15-CL15-5 | 0.734490 | 376.07 | 2094 | 1779.73 | 10.53 | 7.89 | 5.80
20 | SS-DL-4.2STS15-MT15-AVG | 0.731481 | 306.75 | 2259 | 1920.15 | 9.19 | 7.83 | 5.67
Note: In each specimen label, the first number denotes the nominal diameter, the second the end distance, the third the loading rate, and the fourth the measurement number.
Table 8. Partial parameters of model training speed and training settings.
Setting | XGBoost | KNN | RF | LR | SVM | NB | BPNN | ExTrees
Training time (s) | 51.167 | 0.029 | 0.804 | 1.778 | 0.143 | 0.041 | 45.828 | 0.671
Data partitioning | 0.7 | 0.7 | 0.7 | 0.7 | 0.7 | 0.7 | 0.7 | 0.7
Data shuffling | yes | yes | yes | yes | yes | yes | yes | yes
Cross-validation | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5
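The training settings in Table 8 (a 0.7 train partition, shuffling, and 5-fold cross-validation on the training portion) can be sketched as below; the function name and the fixed seed are illustrative choices, not from the paper. With n = 249 samples this yields 75 held-out test points, consistent with the 75 rows of Table 11.

```python
import random

def split_and_folds(n, train_frac=0.7, k=5, seed=0):
    """Shuffle indices, hold out (1 - train_frac) for testing,
    then split the training portion into k cross-validation folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # data shuffling (Table 8)
    n_train = int(round(n * train_frac))      # 0.7 partition (Table 8)
    train, test = idx[:n_train], idx[n_train:]
    folds = [train[i::k] for i in range(k)]   # 5-fold CV (Table 8)
    return train, test, folds

train, test, folds = split_and_folds(249)
print(len(train), len(test), [len(f) for f in folds])  # 174 75 [...]
```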
Table 9. Performance indicators of the eight different machine learning models.
Model | Train AC | Train RE | Train PR | Train F1 | CV AC | CV RE | CV PR | CV F1 | Test AC | Test RE | Test PR | Test F1
XGBoost | 1.000 | 1.000 | 1.000 | 1.000 | 0.399 | 0.399 | 0.406 | 0.385 | 0.400 | 0.400 | 0.456 | 0.390
KNN | 0.578 | 0.578 | 0.542 | 0.528 | 0.272 | 0.272 | 0.223 | 0.233 | 0.280 | 0.280 | 0.197 | 0.216
RF | 0.994 | 0.994 | 0.995 | 0.994 | 0.538 | 0.538 | 0.557 | 0.531 | 0.613 | 0.613 | 0.684 | 0.602
LR | 0.439 | 0.439 | 0.344 | 0.363 | 0.202 | 0.202 | 0.179 | 0.174 | 0.200 | 0.200 | 0.172 | 0.172
SVM | 0.821 | 0.821 | 0.839 | 0.814 | 0.376 | 0.376 | 0.353 | 0.351 | 0.427 | 0.427 | 0.498 | 0.411
NB | 0.861 | 0.861 | 0.934 | 0.867 | 0.353 | 0.353 | 0.397 | 0.353 | 0.373 | 0.373 | 0.318 | 0.311
BPNN | 0.295 | 0.295 | 0.245 | 0.255 | 0.058 | 0.058 | 0.016 | 0.021 | 0.107 | 0.107 | 0.061 | 0.072
ExTrees | 0.994 | 0.994 | 0.995 | 0.994 | 0.608 | 0.608 | 0.602 | 0.586 | 0.507 | 0.507 | 0.573 | 0.509
Note: AC, RE, PR, and F1 denote accuracy, recall, precision, and F1-score on the training, cross-validation, and test sets.
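Model selection from Table 9 amounts to taking the highest test-set accuracy; a one-liner over the transcribed AC column reproduces the paper's choice.

```python
# Test-set accuracy (AC) per model, transcribed from Table 9.
test_ac = {"XGBoost": 0.400, "KNN": 0.280, "RF": 0.613, "LR": 0.200,
           "SVM": 0.427, "NB": 0.373, "BPNN": 0.107, "ExTrees": 0.507}

best = max(test_ac, key=test_ac.get)
print(best)  # RF, the model carried forward to hyperparameter optimization
```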
Table 10. Performance indicators of the five different hyperparameter optimization algorithms.
Model | Train AC | Train RE | Train PR | Train F1 | CV AC | CV RE | CV PR | CV F1 | Test AC | Test RE | Test PR | Test F1
GS | 0.836 | 0.792 | 0.715 | 0.856 | 0.538 | 0.533 | 0.562 | 0.624 | 0.613 | 0.724 | 0.684 | 0.602
GA | 0.825 | 0.731 | 0.894 | 0.741 | 0.544 | 0.544 | 0.569 | 0.536 | 0.547 | 0.641 | 0.606 | 0.548
PSO | 0.728 | 0.785 | 0.839 | 0.827 | 0.572 | 0.566 | 0.549 | 0.539 | 0.555 | 0.501 | 0.617 | 0.545
SA | 0.831 | 0.736 | 0.781 | 0.766 | 0.601 | 0.538 | 0.529 | 0.556 | 0.627 | 0.648 | 0.614 | 0.601
BO | 0.802 | 0.814 | 0.833 | 0.789 | 0.599 | 0.651 | 0.572 | 0.553 | 0.717 | 0.629 | 0.826 | 0.735
Note: AC, RE, PR, and F1 denote accuracy, recall, precision, and F1-score on the training, cross-validation, and test sets.
Table 11. Part of the total optimal model prediction classification results.
No. | Predicted Results | Test Series
1 | CD-3.5S15-M15 | CD-3.5S15-M15
2 | SS-UF-4.2STS10-CL15 | SS-UF-4.2STS10-CL15
3 | SS-UF-4.2STS15-CL15 | CD-4.2S30-M15
4 | CU-4.2S30-M15 | CU-4.2S30-M15
5 | SS-UF-4.2STS30-CL15 | SS-UF-4.2STS30-CL15
… | … | …
71 | CD-3.5S15-C15 | CD-3.5S15-C15
72 | SS-UF-3.5STS15-CL15 | SS-UF-3.5STS15-CL15
73 | SS-UF-4.2STS15-CL15 | SS-UF-4.2STS15-CL15
74 | SS-DL-4.2STS10-MT15 | SS-DL-4.2STS10-MT15
75 | SS-UF-4.2STS10-CL15 | SS-UF-4.2STS10-CL15
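Classification metrics like those in Tables 9 and 10 can be computed directly from (predicted, actual) label pairs such as Table 11's two columns. Below is a minimal pure-Python sketch of plain accuracy plus macro-averaged precision, recall, and F1; library implementations (e.g. scikit-learn) handle zero-division and averaging options differently, so treat this as illustrative.

```python
from collections import Counter

def macro_scores(y_true, y_pred):
    """Accuracy and macro-averaged precision / recall / F1
    from paired label lists (e.g. the two columns of Table 11)."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, but was not p
            fn[t] += 1  # was t, but not predicted t
    precs, recs, f1s = [], [], []
    for c in labels:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precs.append(prec); recs.append(rec); f1s.append(f1)
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    n = len(labels)
    return acc, sum(precs) / n, sum(recs) / n, sum(f1s) / n
```

For instance, `macro_scores(["a", "a", "b", "b"], ["a", "b", "b", "b"])` returns an accuracy of 0.75 with macro precision 0.833, recall 0.75, and F1 0.733.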

Share and Cite

Xia, W.; Gao, Y.; Zhang, Z.; Jie, Y.; Zhang, J.; Cao, Y.; Wu, Q.; Li, T.; Ji, W.; Gao, Y. Behavior Prediction of Connections in Eco-Designed Thin-Walled Steel–Ply–Bamboo Structures Based on Machine Learning for Mechanical Properties. Sustainability 2025, 17, 6753. https://doi.org/10.3390/su17156753

