Technologies | Article | Open Access | 12 June 2025

Green Ground: Construction and Demolition Waste Prediction Using a Deep Learning Algorithm

1 Department of Computer Science, College of Sciences and Humanities in Jubail, Imam Abdulrahman Bin Faisal University, Dammam 34212, Saudi Arabia
2 Department of Physics, College of Sciences and Humanities in Jubail, Imam Abdulrahman Bin Faisal University, Dammam 34212, Saudi Arabia
3 Department of Urban and Regional Planning, College of Architecture and Planning, Imam Abdulrahman Bin Faisal University, Dammam 31451, Saudi Arabia
* Author to whom correspondence should be addressed.
This article belongs to the Section Environmental Technology

Abstract

The waste management and recycling industry in Saudi Arabia faces ongoing challenges in reducing the negative impacts of the recycling process. Many waste streams lack an efficient and accurate classification method, especially in cases that require the rapid processing of materials. A deep learning prediction model based on a convolutional neural network algorithm was developed to classify and predict the types of construction and demolition waste (CDW). The CDW image dataset used contained 9273 images, including concrete, asphalt, ceramics, and autoclaved aerated concrete. The model obtained an overall accuracy of 97.12%. The Green Ground image prediction model can support the construction and demolition industry by automating sorting processes. By accurately identifying different types of materials in CDW images, the model helps ensure that materials are sorted correctly, improving recycling rates and reducing the waste sent to landfills. As part of Saudi Arabia’s 2030 sustainability objectives, these steps contribute to achieving a greener future, complying with environmental regulations, and promoting sustainability.

1. Introduction

According to the World Bank, global waste generation is expected to increase from 2.01 billion tons in 2018 to approximately 3.40 billion tons by 2050 [1]. Improper waste disposal remains a principal contributor to pollution and environmental degradation, significantly affecting public health. The construction industry is among the leading generators of solid waste. While rapid urbanization has driven the development of modern infrastructure and raised living standards, it has also resulted in an alarming increase in both municipal solid waste (MSW) and construction and demolition waste (CDW). CDW refers to debris produced during building construction, renovation, and demolition [2,3]. With global population growth driving demand for new developments and the replacement of existing structures, CDW volume continues to rise [4].
This growing challenge underscores the importance of implementing efficient, intelligent CDW management systems that identify and process waste quickly and accurately. Globally, MSW volumes are increasing as a result of economic development, population growth, and changes in consumption patterns [5,6]. Each year, CDW alone accounts for over 10 billion tons of waste, representing 35% to 40% of the overall amount of waste generated. In the European Union, it accounts for approximately 36% of the total waste, while in the United States, it is responsible for 67% of the total waste [7]. Demolition waste accounts for 70–90% of the global CDW stream, posing significant environmental challenges related to land use, climate resilience, and natural resource depletion [8].
Insufficient management of MSW and CDW inhibits sustainable development and negatively impacts economic growth, the environment, and the public health of urban areas [9]. Research, government agencies, and industry stakeholders recognize the importance of CDW management for sustainability. The CDW management programs developed by countries such as Japan and Germany have demonstrated effective recycling and reuse of CDW [10]. Technological advancements, particularly in artificial intelligence and machine learning, offer promising opportunities to improve CDW sorting and prediction. It is anticipated that this will reduce the sector’s environmental footprint.
The increasing urbanization in the 21st century, particularly in densely populated urban centers, has significantly contributed to the generation of CDW. Replacement of old, low-rise structures with high-density structures generates considerable waste [11]. A CDW management strategy typically includes source reduction, reuse, and recycling, with reuse being the most widely adopted of these methods. The United States generated 600 million tons of CDW in 2018—more than double the volume of MSW—at a 75% recycling rate [12]. The EU also achieved a 75% recovery rate for CDW by 2020 [13]. In contrast, China recycled only 10% of its CDW, despite setting a national target of 13%. Every year, India recycles only 1.3% of the 150 million tons of waste produced. The UAE’s CDW accounts for 30% of its total waste, much of which is disposed of in landfills [14].
Saudi Arabia, despite generating 53 million tons of MSW annually, lacks adequate infrastructure for recycling and reuse. An estimated 30% to 40% of this waste is CDW, which results in environmental losses of approximately $1.3 billion per year [15]. Unmanaged CDW impacts include resource depletion, increased pollution, and heightened greenhouse gas emissions. These consequences hinder development efforts, damage tourism, and compromise urban health, safety, and aesthetics [2,16]. However, successful models such as Japan’s integrated waste management system and Germany’s high recycling efficiency highlight the potential of technology-driven CDW solutions [17].
CDW has a large environmental impact, contributing to pollution, resource depletion, and landfill overuse; recycling, by contrast, promotes greenhouse gas reduction, economic development, and job creation, so CDW needs to be managed sustainably to protect the environment [18,20]. The construction industry is responsible for a significant share of global environmental degradation, including 23% of air pollution, 40% of drinking water contamination, and 50% of landfill waste and global warming contributions [19]. Although up to 90% of CDW materials are technically reusable, poor management practices often result in unnecessary landfill disposal.
Several key barriers to recycling exist, including low landfill disposal fees, readily available inexpensive aggregates, poor-quality recycled materials, and inadequate source separation. Ineffective CDW management contributes to environmental, economic, and social issues, such as habitat destruction, emissions, and a decline in public safety [21]. By integrating smart technologies such as automated sorting, artificial intelligence-based recycling, and incentive-driven policies, CDW management can be significantly enhanced [22]. Educational initiatives can further support public awareness and participation. Without proper intervention, CDW may hinder progress toward the UN’s 2030 Sustainable Development Goals.
Aligned with Saudi Arabia’s Vision 2030, major infrastructure and real estate developments across cities like Riyadh, Jeddah, and Dammam are accelerating CDW generation [23]. Recent research in Saudi Arabia reports CDW generation rates of 50–60 tons per 1000 m2 for existing construction and up to 1200 tons per 1000 m2 for demolition [24]. Mixed soil, concrete blocks, gravel, asphalt, glazed tiles, and various metals and composites are among the components of Saudi Arabia’s CDW [25]. While many of these materials are non-hazardous, their high generation volume, storage limitations, and logistical challenges make efficient management difficult. Saudi Arabia has responded by establishing recycling plants, expanding green building incentives, and utilizing robotic demolition technologies to improve the handling of CDW [26]. Accurate classification and prediction of CDW are essential to optimize recycling processes, conserve natural resources, and minimize environmental harm, thus supporting a circular economy [27].
Despite these efforts, there remains a notable research gap in the development of advanced CDW prediction models, especially in developing nations. To address this problem, the present study introduces “Green Ground”, a convolutional neural network (CNN)-based deep learning framework developed for classifying and predicting different types of CDW, including materials such as autoclaved aerated concrete (AAC), asphalt, ceramics, and concrete. The term Green Ground was selected by the researchers to emphasize the model’s alignment with environmental objectives and its contribution to advancing sustainable waste management practices. Green Ground provides a robust and scalable framework for CDW management by leveraging image-based learning to enhance material classification and prediction accuracy [28,29]. This approach holds significant potential to transform recycling operations and substantially reduce waste generation.
As Saudi Arabia continues its transformation under Vision 2030, advanced technologies such as Green Ground can play a pivotal role in environmental sustainability. By automating waste sorting and improving classification accuracy, the model addresses key challenges in CDW recycling. These challenges include data scarcity, processing limitations, and concerns about scalability.
This study aimed to answer the following research questions:
(A) How can deep learning models enhance sorting and recycling efficiency in the CDW sector?
(B) How can these technologies support Saudi Arabia’s sustainability goals?
(C) How can model limitations, such as data availability and training time, be mitigated?
(D) What improvements can be made to broaden the model’s applicability across diverse CDW materials?
This research aimed to develop a CNN-based model capable of identifying and predicting CDW types from construction site images. Beyond developing a high-performing model, this study proposes strategies to improve prediction accuracy and operational integration. Ultimately, the Green Ground model represents a significant step toward sustainable waste management by enabling more efficient recycling and supporting national and global environmental objectives.
Key aspects of the Green Ground model include its ability to solve waste management and recycling problems. As CDW continues to increase, it is necessary to develop an adaptable model that can categorize and predict its types. Government and industry can optimize resources through Green Ground by identifying materials and predicting future requirements. Consequently, they will be able to adopt precise strategies to minimize the environmental impact caused by improper waste disposal, thus reducing the need for raw materials. Green Ground’s ability to predict various types of materials will contribute to scientific and technological advancements in this field, leading to the development of recycling systems.
As part of the sustainable waste management field, several key contributions are made. By analyzing images, Green Ground classifies CDW materials, improving recycling efficiency and speed by automating material recognition and substantially reducing manual sorting. To ensure robust model training, well-structured datasets were gathered, labeled, and augmented. The model was deployed through a practical web interface built with Gradio, providing a scalable and adaptable platform that supports more comprehensive recycling systems and environmental initiatives. Finally, a wide range of performance metrics was used to verify the system’s effectiveness and reliability in categorizing waste materials.
Essentially, this paper is organized as follows: Section 2 presents a comprehensive literature review of CDW prediction approaches and their limitations. Section 3 outlines the methodology, including data collection and model architecture. Section 4 presents the experimental results. A detailed discussion of the findings is provided in Section 5, and this study is concluded in Section 6.

3. Materials and Methods

Recycling is a basic practice in today’s society: it minimizes waste, protects valuable resources, and reduces the impact on the environment. Materials are gathered, processed, and reused to keep them out of landfills and incinerators. By reprocessing these materials, recycling reduces the need to extract and manufacture new resources, which reduces pollution and energy consumption. Waste management through this approach is both economically viable and aligned with the larger goal of creating a more sustainable and environmentally conscious society. With a worldwide push to improve resource efficiency and preserve the environment, recycling initiatives are becoming increasingly important. The term “industrial waste” refers to abandoned or residual materials from industrial activities; CDW is one major category of such waste. The process of recycling industrial waste involves extracting and repurposing materials to produce new products or resources. These recycling techniques improve efficiency while reducing environmental impact by minimizing the amount of industrial waste disposed of in landfills.

3.1. Data Acquisition

The dataset used in this research was derived from an open-source repository (Zenodo) focused on CDW research [52]. The dataset originally contained 2664 images of material fragments (AAC, asphalt, concrete, and ceramics). The images were captured at a CDW collection and sorting yard using a Canon DSLR camera, positioned at a distance of approximately 70 cm, producing images at a resolution of 1920 × 1280 px. To ensure consistent image quality and minimize variations in illumination, the researchers captured images in shade. CDW fragments were used without any cleaning to reflect actual real-world conditions. The authors captured the images by placing fragments directly on the CDW piles or on the ground. Table 5 summarizes the material types.
Table 5. CDW material types: (A) AAC, (B) asphalt, (C) ceramics, and (D) concrete.

3.2. Data Preprocessing

To prepare the dataset for the DL model, data augmentation techniques were applied to increase the dataset size, reduce the risk of overfitting, and enhance the model’s generalization. The parameters used were as follows: rotation range, zoom range, horizontal and vertical flips, width shift range and height shift range, shear range, brightness range, and fill mode. The parameters in Table 6 allow a wide range of transformations to be applied to images during data augmentation, with each generated image drawing its transformation values from within the specified ranges. Table 7 summarizes the results of data augmentation applied to AAC, ceramics, concrete, and asphalt images in the dataset. Four examples of augmented images are shown in Figure 1. Figure 2 illustrates an image belonging to the AAC class of the dataset, which produced four images after applying the augmentation operations. While excessive augmentation may lead to overfitting, the current results suggest that it is beneficial in this context, with no adverse effects on classification. In addition, the improved classification performance also demonstrated that the augmentation strategies are effective in enhancing model learning when compared to the original Zenodo dataset source.
Table 6. Data augmentation functions.
Table 7. Augmentation applied to the dataset.
Figure 1. Data augmentation applied to the materials.
Figure 2. Images after applying the augmentation operations.
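To make the augmentation pipeline concrete, the sketch below shows how the listed transformations could be configured with Keras’ ImageDataGenerator. The parameter values and file paths are illustrative assumptions; the exact ranges used in this study are those listed in Table 6.

```python
# A minimal augmentation sketch using Keras' ImageDataGenerator. The parameter
# values and file paths below are illustrative assumptions; the exact ranges
# used in this study are listed in Table 6.
import numpy as np
from tensorflow.keras.preprocessing.image import (ImageDataGenerator,
                                                  load_img, img_to_array)

augmenter = ImageDataGenerator(
    rotation_range=30,            # random rotations (degrees)
    zoom_range=0.2,               # random zoom in/out
    horizontal_flip=True,         # mirror left-right
    vertical_flip=True,           # mirror top-bottom
    width_shift_range=0.1,        # horizontal translation
    height_shift_range=0.1,       # vertical translation
    shear_range=0.15,             # shear transformation
    brightness_range=(0.7, 1.3),  # brightness variation
    fill_mode="nearest",          # fill pixels exposed by shifts/rotations
)

# Produce four augmented variants of a single source image (as in Figure 2).
img = img_to_array(load_img("aac_fragment.jpg", target_size=(200, 200)))
flow = augmenter.flow(np.expand_dims(img, axis=0), batch_size=1,
                      save_to_dir="augmented", save_format="jpg")
for _ in range(4):
    next(flow)  # each call writes one augmented image to the output folder
```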

3.3. Model Building and Training

A DL architecture based on CNNs is widely used in computer vision tasks such as image detection and classification. CNNs are a type of neural network (NN) architecture that identifies and classifies images by extracting relevant features from the input. Convolutions are performed using learnable filters (kernels), producing feature maps that encode the detected features.
CNNs are artificial neural networks designed specifically for recognizing images. An NN mimics the activity of neurons in the brain through a patterned hardware and/or software system. CNNs are also defined as multilayer neural networks consisting of several layers, where each layer transforms one set of activations into another through a function. This architecture is often used in DL to recognize scenes and objects and to detect, extract, and segment images.
Many researchers have shown impressive accuracy when using the CNN architecture for classification and identification [53]. Therefore, it was chosen for building the model, employed using Python language and its libraries (TensorFlow and Keras), and deployed.

3.4. Model Components/Architectures

The model leverages transfer learning (TL) and DL techniques, using CNNs to learn hierarchical features from raw pixel data for image classification and prediction. The model architecture consists of several components. The raw image data are processed by an initial image input layer, followed by a series of feature extraction layers. The first convolutional layer applies 32 filters (kernel size 3 × 3, ReLU activation) to the input image to produce an output of 222 × 222, which is then down-sampled using a max-pooling layer (2 × 2) to 111 × 111. A second convolutional layer with 64 filters follows, producing a 109 × 109 output, which is again reduced using max pooling to 54 × 54. The third convolutional layer applies 128 filters, yielding a 52 × 52 output, followed by a final max-pooling layer reducing it to 26 × 26. These feature maps are flattened into a one-dimensional vector, which is passed through a fully connected dense layer with 128 hidden units and ReLU activation. To classify the images into four categories, SoftMax activation is used as the final output layer. Overall, the model includes 3 convolutional layers, 3 max-pooling layers, 1 flatten layer, and 2 dense layers, with approximately 11.17 M trainable parameters. Figure 3 illustrates the sequential flow of these layers from input to output.
Figure 3. Deep learning model architecture.
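The following is a minimal Keras sketch of the layer sequence described above. The 224 × 224 × 3 input shape is an assumption inferred from the stated 222 × 222 first-layer output, and the compile settings (Adam optimizer, categorical cross-entropy) are assumptions rather than details reported for the actual training run.

```python
# A minimal Keras sketch of the architecture described above (3 conv blocks,
# flatten, dense-128, softmax over 4 classes). Input shape and compile
# settings are assumptions, not the authors' reported configuration.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),   # -> 222 x 222 x 32
    layers.MaxPooling2D((2, 2)),                    # -> 111 x 111 x 32
    layers.Conv2D(64, (3, 3), activation="relu"),   # -> 109 x 109 x 64
    layers.MaxPooling2D((2, 2)),                    # ->  54 x  54 x 64
    layers.Conv2D(128, (3, 3), activation="relu"),  # ->  52 x  52 x 128
    layers.MaxPooling2D((2, 2)),                    # ->  26 x  26 x 128
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),          # AAC, asphalt, ceramics, concrete
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # roughly 11.17 M trainable parameters
```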

Why CNN?

CNNs were chosen as the core architecture for the proposed model due to their efficiency in image classification tasks, especially with texture-rich visual data such as CDW. The datasets used in this study contained images of different types of materials exhibiting distinct patterns, textures, and edges. Due to their convolutional layers, CNNs are specifically designed to detect spatial hierarchies, as they progressively learn local and global features.
The use of a traditional CNN offers a substantial advantage over more complex architectures such as ResNet or EfficientNet in terms of computational efficiency and ease of implementation, particularly given the hardware limitations encountered in this study. The model was developed and trained on a standard laptop without a GPU, making lightweight architectures more feasible. Despite these limitations, the model achieved strong performance metrics, suggesting that the chosen CNN was capable of generalizing effectively across the data.

3.5. Theory/Calculation

  • Convolutional layer: $Z^{(\ell)} = W^{(\ell)} * X^{(\ell-1)} + b^{(\ell)}$, where $X^{(\ell-1)}$ is the input from the previous layer (or the input image for the first layer), $W^{(\ell)}$ is the filter for the layer, $b^{(\ell)}$ is the bias for layer $\ell$, and $*$ denotes the convolution operation. $Z^{(\ell)}$ is the output before applying the activation function.
  • ReLU activation function: $A^{(\ell)} = \mathrm{ReLU}(Z^{(\ell)}) = \max(0, Z^{(\ell)})$, where $A^{(\ell)}$ is the activation output for layer $\ell$.
  • Max-pooling layer: $P^{(\ell)} = \mathrm{MaxPool}(A^{(\ell)}, f, s)$, where $P^{(\ell)}$ is the output of the max-pooling layer, $f$ is the size of the pooling filter, and $s$ is the stride of the pooling operation.
  • Flattening converts the 2D feature maps into a 1D vector: $F = \mathrm{Flatten}(P^{(n)})$, where $P^{(n)}$ is the output of the last max-pooling layer and $F$ is the flattened vector.
  • Fully connected (dense) layer: $Z^{(fc)} = W^{(fc)} F + b^{(fc)}$, where $W^{(fc)}$ is the weight matrix, $b^{(fc)}$ is the bias vector, and $Z^{(fc)}$ is the output before applying the activation function.
  • Output layer with SoftMax activation: $\hat{y}_i = \mathrm{Softmax}(Z^{(fc)})_i = \exp(Z^{(fc)}_i) / \sum_j \exp(Z^{(fc)}_j)$, where $\hat{y}$ is the output prediction. The SoftMax function transforms the output into a probability distribution over all possible classes. Each of these operations transforms the input data step by step, which allows the model to learn and make predictions from the input.
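As a small illustration of these operations, the NumPy sketch below implements the ReLU, dense-layer, and SoftMax equations on toy arrays; the shapes and values are arbitrary and unrelated to the actual Green Ground model.

```python
# A tiny NumPy illustration of the layer equations above (ReLU, dense layer,
# and SoftMax). Array shapes and values are arbitrary toy choices.
import numpy as np

def relu(z):
    return np.maximum(0, z)                      # A = max(0, Z)

def dense(x, W, b):
    return W @ x + b                             # Z_fc = W_fc . F + b_fc

def softmax(z):
    e = np.exp(z - z.max())                      # subtract max for numerical stability
    return e / e.sum()                           # probability distribution over classes

F = np.random.rand(6)                            # a toy flattened feature vector
W = np.random.rand(4, 6)                         # weights of a 4-class output layer
b = np.zeros(4)                                  # biases
y_hat = softmax(dense(relu(F), W, b))
print(y_hat, y_hat.sum())                        # probabilities summing to 1.0
```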
An illustration of the Green Ground model’s website interface is shown in Figure 4 to provide an understanding of its structure and design. The user can upload or drag an image into the “Uploaded Image” box to test the model’s ability to predict and classify CDW materials. Once the user clicks the “Submit” button, the image is sent to the model, and the predicted class is displayed in the “Predicted Class” box. If the “Flag” button is clicked, a folder entitled “Flagged” is created on the user’s computer, storing the images the user has flagged. The “Clear” button removes the current image so that the user can test new images.
Figure 4. CDW prediction model interface.
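A minimal Gradio sketch of such an interface is shown below, assuming a saved Keras model file and the four class names used in this study; the saved-model filename and input size are assumptions, and the authors’ exact code may differ.

```python
# A minimal Gradio sketch of the interface in Figure 4: an image upload box,
# a "Predicted Class" output, and the built-in Submit/Clear/Flag controls.
# The saved-model filename, input size, and class order are assumptions.
import numpy as np
import gradio as gr
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import img_to_array

CLASSES = ["AAC", "Asphalt", "Ceramics", "Concrete"]
model = load_model("green_ground_cnn.h5")        # hypothetical saved model file

def predict(image):
    # Resize to the model's input size, scale to [0, 1], and add a batch axis.
    x = img_to_array(image.resize((224, 224))) / 255.0
    probs = model.predict(np.expand_dims(x, axis=0))[0]
    return {cls: float(p) for cls, p in zip(CLASSES, probs)}

demo = gr.Interface(
    fn=predict,
    inputs=gr.Image(type="pil", label="Uploaded Image"),
    outputs=gr.Label(num_top_classes=1, label="Predicted Class"),
    allow_flagging="manual",                     # flagged samples are saved locally
)
demo.launch()
```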

3.6. Model Implementation

This section outlines the implementation steps undertaken by the authors in the development of the model. Figure 5 illustrates the implementation phase, which includes the following steps.
Figure 5. Implementation phase.
Installation involved setting up all necessary software tools, including Anaconda version 2024.02, Jupyter Notebook version 6.5.4, Python version 3, TensorFlow version 2.12.0, Keras version 2.12.0, and Gradio version 4.32.2. The dataset was acquired from the Zenodo website, where several images depicting CDW were available. For preprocessing, the images were resized to a uniform size of 200 × 200 pixels, normalized, and augmented to increase their diversity, robustness, and number. The dataset was then divided into training and testing sets, with 70% allocated for training and 30% for testing, ensuring that the model could be trained on a substantial amount of data while allowing for robust evaluation on unseen samples.
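For illustration, the sketch below shows one way the resizing, normalization, and 70/30 split could be performed with TensorFlow utilities, assuming the augmented images are stored in one folder per class; it is an illustrative pipeline, not the authors’ exact code.

```python
# A sketch of the resizing, normalization, and 70/30 split described above,
# assuming the (augmented) images are stored in one folder per class, e.g.
# dataset/aac, dataset/asphalt, dataset/ceramics, dataset/concrete.
import tensorflow as tf

load_args = dict(directory="dataset", validation_split=0.30, seed=42,
                 image_size=(200, 200), batch_size=32, label_mode="categorical")
train_ds = tf.keras.utils.image_dataset_from_directory(subset="training", **load_args)
test_ds = tf.keras.utils.image_dataset_from_directory(subset="validation", **load_args)

# Normalize pixel values from [0, 255] to [0, 1].
rescale = tf.keras.layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (rescale(x), y))
test_ds = test_ds.map(lambda x, y: (rescale(x), y))
```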
After extensive research, a CNN architecture was selected due to its capability to learn hierarchical patterns from data, facilitating image classification and prediction. The TensorFlow, Keras, and Gradio libraries were used to implement CNNs for image prediction tasks using Python code. By building upon existing frameworks and methodologies, time and effort were saved. The codebase was tailored to the dataset and classification task, involving parameter adjustments, iterative error resolution, and issue handling. Following model training, various performance metrics, such as accuracy metrics and confusion matrices, were analyzed, leading to the development of a modified model. Ultimately, the Green Ground image prediction model was successfully deployed.
Iterative development was used during the implementation phase of the project to continuously improve the model until the desired outcome was achieved. Although the model was successfully developed and tested on standard laptop hardware in a Jupyter Notebook environment, it has not been optimized for real-time industrial deployment. Future research may enable integration with real-time systems and edge devices through lightweight architectures or inference optimization techniques.

Description of the Characteristics of the Machine Used

The specifications of the hardware environment used in the model development are listed in Table 8 to increase reproducibility and provide context for the reported training times and performance. Limited computational resources, however, made the training process significantly time-consuming. As the model was trained on a non-GPU laptop, the training time extended to several days. Consequently, these constraints influenced the scope of experimentation and the selection of models. Specifically, more advanced and computationally intensive architectures such as ResNet and EfficientNet were not explored, since their training requirements exceeded the hardware capabilities. The chosen model architecture, therefore, reflects a balance between feasibility and performance within the available infrastructure.
Table 8. Characteristics of the machine used.

3.7. Model Evaluation

To obtain the best performance from the DL model, two splits were compared: a 70% training / 30% testing split, which yielded an overall accuracy of 96%, and an 80% training / 20% testing split, which achieved 95.8%. The 70/30 split was therefore selected for further development of the model. Several metrics were employed to assess prediction performance: recall, precision, F1-score, confusion matrices, and the ROC curve.
  • Accuracy measures the proportion of correctly predicted instances, including both true positives and true negatives, relative to the total number of instances. It provides an overall percentage of correct predictions.
    $\mathrm{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN}$
  • Precision is the ratio of correctly predicted positive instances to the total number of positive instances predicted.
    $\mathrm{Precision} = \dfrac{TP}{TP + FP}$
  • Recall, on the other hand, is the ratio of correctly predicted positive instances to the total number of actual positive instances.
    $\mathrm{Recall} = \dfrac{TP}{TP + FN}$
  • F1-score integrates both precision and recall into a single metric, measured by a harmonic mean, which provides an objective measure that balances both concerns.
    $F_1\ \mathrm{Score} = \dfrac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$
  • F-beta score extends the F1-score by allowing the precision and recall weights to be altered. β is a parameter that determines the relative weight of recall and precision.
    $F_\beta\ \mathrm{Score} = \dfrac{(1 + \beta^2) \times \mathrm{Precision} \times \mathrm{Recall}}{\beta^2 \times \mathrm{Precision} + \mathrm{Recall}}$
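For reference, the sketch below shows how these metrics can be computed with scikit-learn; the label arrays are toy placeholders standing in for the test-set ground truth and the model’s predicted classes.

```python
# A sketch of how the above metrics can be computed with scikit-learn.
# y_true and y_pred are toy placeholders for test-set labels and predictions.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_recall_fscore_support,
                             fbeta_score, confusion_matrix, classification_report)

y_true = np.array([0, 1, 2, 3, 0, 1, 2, 3])      # toy ground-truth class indices
y_pred = np.array([0, 1, 2, 3, 0, 1, 3, 2])      # toy predicted class indices

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, support = precision_recall_fscore_support(
    y_true, y_pred, average=None)                # per-class values
f2 = fbeta_score(y_true, y_pred, beta=2, average="macro")  # recall-weighted F-beta
cm = confusion_matrix(y_true, y_pred)

print(classification_report(
    y_true, y_pred, target_names=["AAC", "Asphalt", "Ceramics", "Concrete"]))
```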

4. Results

The prediction model was developed and trained using Jupyter Notebooks on an augmented dataset. The dataset comprised 9273 images, subdivided into a 70–30% training–testing split, yielding 6491 images for training and 2782 for testing. Given that the model was trained on high-resolution images, the following results were derived.
The first model was trained for 50 epochs, with an accuracy of 96.69%. Although this seems impressive, it should be noted that not all materials were accurately predicted by the model. This discrepancy indicates that, while the model had a high overall accuracy, certain classes were more challenging to predict. This can be attributed to factors such as class imbalance, image quality, or inherent similarities between certain material types.
The second model, which was trained for 100 epochs, showed an improvement in accuracy to 97.12%. With this marginal increase in accuracy, the model produced more accurate predictions for almost all material classes. This difference emphasizes the importance of sufficiently training a model so that it can recognize more complex patterns within a dataset. At the same time, longer training raises concerns regarding overfitting, where a model becomes too tailored to the training data and performs poorly when faced with unknown data.
To further evaluate the models, an external image of (VIII) concrete [54], which was not part of the dataset, as illustrated in Figure 6, was used as a test case. The model trained for 50 epochs misclassified the image as asphalt, demonstrating a specific case in which short training periods led to incorrect classifications. On the contrary, the model trained for 100 epochs correctly identified the image as concrete. This result suggests that increasing the number of epochs increases the model’s ability to generalize to new unseen data, thereby increasing its predictive capability. It also underscores the importance of balancing training duration with model performance.
Figure 6. CDW images tested for prediction.
Additionally, the model was evaluated using several CDW images to assess its prediction capabilities after processing 100 epochs. The images depicted in Figure 6 show a sample of public images of (V) AAC [55], (VI) asphalt [56], (VII) ceramics [57], and (VIII) concrete [54] that were obtained outside of the dataset. Meanwhile, images (I), (II), (III), and (IV) show images that were part of our dataset. These images were evaluated by the model, achieving accurate predictions for all labels. The model’s ability to accurately predict these labels demonstrates its potential applicability in real-world scenarios. However, it is important to consider the possibility of bias in the dataset, as the model’s success with these specific images may not fully reflect its performance on a broader range of CDW materials. Nevertheless, despite the lack of an explicit background complexity analysis, most images in the dataset contained visible target materials with little background interference. Therefore, the model appeared to naturally focus on the dominant material, which is in line with the dataset’s visual clarity and structure.
An evaluation method combining numerical and visual evaluation was used after 100 epochs to provide a comprehensive assessment of the model’s classification capability. Figure 7 and Figure 8 present the confusion matrix and its normalized counterpart, offering a detailed view of the model’s class-wise prediction behavior. The normalized confusion matrices revealed a relatively balanced performance across the four classes, with the proportion of correctly classified samples ranging between 21% and 30%. Approximately 24% of AAC samples were accurately predicted, whereas 30% were misclassified as asphalt. Similarly, the concrete class showed a 25% correct classification rate with a moderate degree of misclassification in the other classes. The patterns suggest that certain material types have overlapping visual characteristics, which may have contributed to the uncertainty associated with prediction.
Figure 7. Confusion matrix.
Figure 8. Normalized confusion matrix.
Additionally, Table 9 presents precision, recall, F1-score, and support for each class. Across all categories, the F1-scores ranged from 0.22 to 0.28, showing a consistent distribution of performance. Accordingly, the concrete class achieved a precision of 0.29, a recall of 0.27, and an F1-score of 0.28, indicating moderate yet stable classification performance. According to these metrics, no single class disproportionately influenced overall accuracy, and the model maintained similar behavior across all material types.
Table 9. Classification report I.
Table 10 presents the macro and weighted averages, both of which registered at 0.25. The closeness of these values confirms the validity of the model, while also acknowledging the influence of class imbalance. The classification report and confusion matrix provide an in-depth evaluation of the model, demonstrating its generalization potential while highlighting areas for further refinement, primarily in improving interclass separability and reducing misclassification among visually similar categories. There are inherent challenges associated with distinguishing materials with overlapping textures, such as concrete and asphalt, that contribute to the observation of misclassification between visually similar classes. Future work must incorporate advanced feature extraction methods and refine the dataset balance further.
Table 10. Classification report II.
Additionally, detailed counts of true positives (TPs) and false negatives (FNs) are provided in Table 11, offering a closer look at where the model succeeded or failed in its predictions. For instance, the model identified 210 true positives for concrete but also had 577 false negatives, indicating a significant number of instances where the model failed to correctly classify concrete images. This analysis is critical for understanding the specific challenges faced by the model and guiding future improvements, such as incorporating more diverse training data or refining the model’s architecture.
Table 11. TP and FN results.
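As an illustration of how the per-class TP and FN counts in Table 11 can be derived, the sketch below extracts them from a confusion matrix; the matrix values are placeholders, not the study’s actual results.

```python
# Deriving per-class TP and FN counts (as in Table 11) from a confusion
# matrix: diagonal entries are true positives, and the remainder of each row
# are false negatives for that class. Matrix values below are placeholders.
import numpy as np

cm = np.array([[520,  60,  40,  30],   # rows: true class, columns: predicted class
               [ 55, 480,  35,  25],
               [ 45,  30, 510,  20],
               [ 50,  40,  45, 490]])

tp = np.diag(cm)                        # correctly classified samples per class
fn = cm.sum(axis=1) - tp                # samples of the class predicted as another class
fp = cm.sum(axis=0) - tp                # other classes mistakenly predicted as this class

for name, t, f in zip(["AAC", "Asphalt", "Ceramics", "Concrete"], tp, fn):
    print(f"{name}: TP={t}, FN={f}")
```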
Figure 9 presents the multiclass precision–recall (PR) curve, which further assesses the model’s precision and recall across all classes. This curve is particularly useful for visualizing the trade-offs between precision and recall, helping to identify classes where the model performs well and others where it struggles. The curve also provides insights into the model’s behavior under different thresholds, offering guidance on how to adjust the model to improve its performance for specific applications.
Figure 9. Multiclass PR curve.
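A one-vs-rest multiclass PR curve of this kind can be produced as sketched below; the labels and scores are randomly generated placeholders, with y_score standing in for the model’s SoftMax outputs.

```python
# A sketch of a one-vs-rest multiclass precision-recall curve. Labels and
# scores are random placeholders; y_score stands in for SoftMax outputs
# (one probability column per class).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve
from sklearn.preprocessing import label_binarize

classes = ["AAC", "Asphalt", "Ceramics", "Concrete"]
rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, size=200)            # toy ground-truth labels
y_score = rng.dirichlet(np.ones(4), size=200)    # toy SoftMax-like scores

y_bin = label_binarize(y_true, classes=[0, 1, 2, 3])   # one-hot ground truth
for i, name in enumerate(classes):
    precision, recall, _ = precision_recall_curve(y_bin[:, i], y_score[:, i])
    plt.plot(recall, precision, label=name)

plt.xlabel("Recall")
plt.ylabel("Precision")
plt.legend()
plt.title("One-vs-rest PR curves")
plt.show()
```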
In conclusion, the results indicate that increasing the number of training epochs generally results in an improvement in prediction accuracy. In addition, they emphasize the need for a balanced approach to model training, where the risk of overfitting must be carefully managed. While the model’s performance on external images suggests that it might be applicable to real-world scenarios, further refinement and validation on a wider range of materials are required to fully realize its potential.
This study presents an analysis of three deep learning models: the 15-layer CNN for plastic waste classification [58], the NoWa model for real-time waste classification [59], and the proposed Green Ground model. A summary of the key attributes, including accuracy, dataset size, image resolution, training configuration, and dataset composition, is presented in Table 12.
Table 12. Comparison of deep learning models for waste classification.
A comparative assessment of the models remains useful in understanding their relative performance and practical potential despite their differences in structure, application, and dataset characteristics. Although the data types and model depths vary, the comparisons provided above help contextualize the effectiveness of the proposed Green Ground model within the broader context of waste classification efforts.

5. Discussion

Significant insights were gained from developing and training a prediction model using an augmented dataset of 9273 images, split into 70% for training and 30% for testing. The first model, trained for 50 epochs, attained an accuracy of 96.69%. This early result, while strong, revealed difficulties in precisely predicting all materials, pointing to potential issues such as class imbalances, image quality variance, or intrinsic similarities between material types. This emphasizes the complexity of the categorization problem and the necessity for additional improvement.
The model’s accuracy improved to 97.12% after 100 epochs of training. This incremental improvement demonstrates the relevance of longer training times in allowing the model to grasp more intricate patterns in the dataset. However, the risk of overfitting, in which the model becomes unduly suited to the training data and thus less successful when faced with fresh data, must be carefully examined and managed. To further analyze the models, an external image of concrete that was not included in the original dataset was used as a test case. The model trained for 50 epochs misidentified the concrete image as asphalt, demonstrating the influence of training length on classification accuracy. In contrast, the model trained over 100 epochs correctly classified the image as concrete. This result emphasizes the importance of sufficient training epochs to improve the model’s capacity to generalize to new data, thereby improving its prediction ability.
Following 100 epochs of training, the model was evaluated using a variety of CDW images. These photos featured AAC, asphalt, ceramics, and concrete from both the dataset and other sources. The algorithm correctly predicted all labels for the dataset photos, suggesting its potential use in real-world applications. It is critical to recognize the possibility of dataset bias influencing the model’s success on individual photos, and to exercise caution when generalizing its performance with a broader variety of CDW materials.
Key performance indicators such as precision, recall, F1-score, and confusion matrix were used to evaluate the model after 100 epochs. Precision, which measures the accuracy of positive predictions, and recall, which reflects the model’s capacity to identify all relevant instances of a class, were critical indicators of the model’s success.
In the classification report post-100 epochs, varying levels of precision, recall, and F1-score were observed across different classes. For instance, the model achieved a precision of 0.29 and a recall of 0.27 for the concrete class, resulting in an F1-score of 0.28. While the model performed reasonably well overall, there is still room for improvement, particularly in enhancing recall to ensure accurate identification of all instances within a class.
In-depth analysis of true positives (TPs) and false negatives (FNs) provided further insights into the model’s performance strengths and weaknesses. For instance, while the model correctly identified 210 true positives for concrete, it also had 577 false negatives, indicating areas for improvement. The multiclass precision–recall (PR) curve analysis offered a comprehensive view of the model’s precision and recall dynamics across all classes, aiding in visualizing performance discrepancies and guiding adjustments for enhanced predictive accuracy.
To conclude, the results underscore the positive impact of increasing training epochs for prediction accuracy and the importance of a balanced approach to model training to effectively mitigate overfitting risks. While the model’s performance on external images indicates its potential applicability in practical scenarios, further refinement and validation across a broader material spectrum are essential to fully unlock its capabilities for CDW material classification.

6. Conclusions

In recent years, Saudi Arabia has faced a growing challenge due to the increasing size of construction and demolition waste (CDW). This challenge highlights the need for effective and sustainable solutions. This study proposed a deep learning model based on convolutional neural networks (CNNs), developed using Python and integrated with a user-friendly interface through the Gradio library. The model was trained on an augmented dataset covering four types of CDW materials—asphalt, ceramics, AAC, and concrete—achieving an overall accuracy of 97.12%.
While the results are promising, there are several limitations to consider. The model’s precision and recall values for certain classes, such as ceramics, suggest that further work is needed on class-balancing techniques and on refining the evaluation, including the F1-score and confusion matrix. Additionally, future work could use techniques such as early stopping or cross-validation to improve model robustness.
It is important to acknowledge that, while the model shows promise, the results achieved so far are preliminary. The potential benefits of improved recycling rates and waste management efficiency must be seen as aspirational until further validation and deployment studies can confirm their real-world effectiveness. Future work should focus on (1) extending the dataset to include a wider range of CDW materials, (2) ensuring proper training on unseen data, and (3) improving the model’s performance in diverse environments to ensure its practical applicability. In conclusion, continued refinement and broader testing are essential to transform this promising model into a reliable tool for sustainable waste management.

Author Contributions

Project administration, N.M.A.; writing—review and editing, W.N.A., A.Z.A., S.E.A., M.F.A. and A.I.A.; data collection, S.E.A.; data augmentation, S.E.A.; conceptualization, N.M.A. and M.M.A.; implementation, W.N.A., S.E.A., A.Z.A. and M.M.A.; model training, A.Z.A., S.E.A. and W.N.A.; visualization, W.N.A. and A.Z.A.; investigation, M.F.A.; formal analysis, S.E.A., A.Z.A., W.N.A. and M.F.A.; writing—original draft preparation, W.N.A., A.Z.A., S.E.A., M.F.A. and A.I.A.; review, N.K.A.-S., A.I.A., M.M.A. and N.M.A.; reply to reviewer comments, W.N.A., A.Z.A., S.E.A. and M.F.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original data presented in the study are openly available from [50]. The augmented data are available upon request from the corresponding author due to storage limitations.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript.
AAC	Autoclaved aerated concrete
CD	Construction and demolition
CDW	Construction and demolition waste
CDWM	Construction and demolition waste management
CNNs	Convolutional neural networks
CRD	Construction, renovation, and demolition
DL	Deep learning
DOWE	Dry organic waste’s weight estimation
DWG	Demolition waste glass
FN	False negative
FP	False positive
GUI	Graphical user interface
LCA	Life cycle assessment
LSTM-SRNN	Long short-term memory–simple recurrent neural network
ML	Machine learning
NA	Natural aggregate
NN	Neural network
PR	Precision–recall
RA	Recycled aggregate
TL	Transfer learning
TN	True negative
TP	True positive
WOWE	Wet organic waste’s weight estimation

References

  1. Akanbi, L.A.; Oyedele, A.O.; Oyedele, L.O.; Salami, R.O. Deep learning model for demolition waste prediction in a circular economy. J. Clean. Prod. 2020, 274, 122843. [Google Scholar] [CrossRef]
  2. Almulhim, A.I.; Cobbinah, P.B. Urbanization-environment conundrum: An invitation to sustainable development in Saudi Arabian cities. Int. J. Sustain. Dev. World Ecol. 2023, 30, 359–373. [Google Scholar] [CrossRef]
  3. Oluleye, B.I.; Chan, D.W.; Saka, A.B.; Olawumi, T.O. Circular economy research on building construction and demolition waste: A review of current trends and future research directions. J. Clean. Prod. 2022, 357, 131927. [Google Scholar] [CrossRef]
  4. Akinade, O.O.; Oyedele, L.O.; Ajayi, S.O.; Bilal, M.; Alaka, H.A.; Owolabi, H.A.; Arawomo, O.O. Designing out construction waste using BIM technology: Stakeholders’ expectations for industry deployment. J. Clean. Prod. 2018, 180, 375–385. [Google Scholar] [CrossRef]
  5. Almulhim, A.I.; Cobbinah, P.B. Can rapid urbanization be sustainable? The case of Saudi Arabian cities. Habitat Int. 2023, 139, 102884. [Google Scholar] [CrossRef]
  6. De Andrade Salgado, F.; de Andrade Silva, F. Recycled aggregates from construction and demolition waste towards an application on structural concrete: A review. J. Build. Eng. 2022, 52, 104452. [Google Scholar] [CrossRef]
  7. Wang, Z.; Li, H.; Yang, X. Vision-based robotic system for on-site construction and demolition waste sorting and recycling. J. Build. Eng. 2020, 32, 101769. [Google Scholar] [CrossRef]
  8. Almulhim, A.I.; Cobbinah, P.B. Framing resilience in Saudi Arabian cities: On climate change and urban policy. Sustain. Cities Soc. 2024, 101, 105172. [Google Scholar] [CrossRef]
  9. Sáez, P.V.; Astorqui, J.S.C.; del Río Merino, M.; Moyano, M.D.P.M.; Sánchez, A.R. Estimation of construction and demolition waste in building energy efficiency retrofitting works of the vertical envelope. J. Clean. Prod. 2018, 172, 2978–2985. [Google Scholar] [CrossRef]
  10. Sáez, P.V.; Porras-Amores, C.; del Río Merino, M. New quantification proposal for construction waste generation in new residential constructions. J. Clean. Prod. 2015, 102, 58–65. [Google Scholar] [CrossRef]
  11. Masseck, T.; París-Viviana, O.; Habibi, S.; Pons-Valladares, O. Integrated sustainability assessment of construction waste-based shading devices for the refurbishment of obsolete educational public building stock. J. Build. Eng. 2024, 87, 109024. [Google Scholar] [CrossRef]
  12. Regona, M.; Yigitcanlar, T.; Hon, C.; Teo, M. Artificial intelligence and sustainable development goals: Systematic literature review of the construction industry. Sustain. Cities Soc. 2024, 108, 105499. [Google Scholar] [CrossRef]
  13. Li, J.; Wu, Q.; Wang, C.C.; Du, H.; Sun, J. Triggering factors of construction waste reduction behavior: Evidence from contractors in Wuhan, China. J. Clean. Prod. 2022, 337, 130396. [Google Scholar] [CrossRef]
  14. Lin, K.; Zhou, T.; Gao, X.; Li, Z.; Duan, H.; Wu, H.; Zhao, Y. Deep convolutional neural networks for construction and demolition waste classification: VGGNet structures, cyclical learning rate, and knowledge transfer. J. Environ. Manag. 2022, 318, 115501. [Google Scholar] [CrossRef] [PubMed]
  15. Dong, Z.; Chen, J.; Lu, W. Computer vision to recognize construction waste compositions: A novel boundary-aware transformer (BAT) model. J. Environ. Manag. 2022, 305, 114405. [Google Scholar] [CrossRef] [PubMed]
  16. Mirshekarlou, B.R.; Budayan, C.; Dikmen, I.; Birgonul, M.T. Development of a knowledge-based tool for waste management of prefabricated steel structure projects. J. Clean. Prod. 2021, 323, 129140. [Google Scholar] [CrossRef]
  17. Chen, J.; Lu, W.; Xue, F. “Looking beneath the surface”: A visual-physical feature hybrid approach for unattended gauging of construction waste composition. J. Environ. Manag. 2021, 286, 112233. [Google Scholar] [CrossRef]
  18. Zhang, H.; Shi, S.; Zhao, F.; Hu, M.; Fu, X. Integrated benefits of sustainable utilization of construction and demolition waste in a Pressure-State-Response framework. Sustainability 2024, 16, 8459. [Google Scholar] [CrossRef]
  19. Yuan, L.; Lu, W.; Xue, F. Estimation of construction waste composition based on bulk density: A big data-probability (BD-P) model. J. Environ. Manag. 2021, 292, 112822. [Google Scholar] [CrossRef]
  20. Shen, J.; Li, Y.; Lin, H.; Li, H.; Lv, J.; Feng, S.; Ci, J. Prediction of compressive strength of alkali-activated construction demolition waste geopolymers using ensemble machine learning. Constr. Build. Mater. 2022, 360, 129600. [Google Scholar] [CrossRef]
  21. Gao, Q.; Li, X.G.; Jiang, S.Q.; Lyu, X.J.; Gao, X.; Zhu, X.N.; Zhang, Y.Q. Review on zero waste strategy for urban construction and demolition waste: Full component resource utilization approach for sustainable and low-carbon. Constr. Build. Mater. 2023, 395, 132354. [Google Scholar] [CrossRef]
  22. Deng, F.; He, Y.; Zhou, S.; Yu, Y.; Cheng, H.; Wu, X. Compressive strength prediction of recycled concrete based on deep learning. Constr. Build. Mater. 2018, 175, 562–569. [Google Scholar] [CrossRef]
  23. Maghsoudi, M.; Shokouhyar, S.; Khanizadeh, S.; Shokoohyar, S. Towards a taxonomy of waste management research: An application of community detection in keyword network. J. Clean. Prod. 2023, 401, 136587. [Google Scholar] [CrossRef]
  24. Wu, X.; Kroell, N.; Greiff, K. Deep learning-based instance segmentation on 3D laser triangulation data for inline monitoring of particle size distributions in construction and demolition waste recycling. Resour. Conserv. Recycl. 2024, 205, 107541. [Google Scholar] [CrossRef]
  25. Sirimewan, D.; Bazli, M.; Raman, S.; Mohandes, S.R.; Kineber, A.F.; Arashpour, M. Deep learning-based models for environmental management: Recognizing construction, renovation, and demolition waste in the wild. J. Environ. Manag. 2024, 351, 119908. [Google Scholar] [CrossRef] [PubMed]
  26. Dodampegama, S.; Hou, L.; Asadi, E.; Zhang, G.; Setunge, S. Revolutionizing construction and demolition waste sorting: Insights from artificial intelligence and robotic applications. Resour. Conserv. Recycl. 2024, 202, 107375. [Google Scholar] [CrossRef]
  27. Sepasgozar, S.M.; Mair, D.F.; Tahmasebinia, F.; Shirowzhan, S.; Li, H.; Richter, A.; Xu, S. Waste management and possible directions of utilising digital technologies in the construction context. J. Clean. Prod. 2021, 324, 129095. [Google Scholar] [CrossRef]
  28. Lu, W.; Long, W.; Yuan, L. A machine learning regression approach for pre-renovation construction waste auditing. J. Clean. Prod. 2023, 397, 136596. [Google Scholar] [CrossRef]
  29. Xu, J.; Lu, W.; Ye, M.; Xue, F.; Zhang, X.; Lee, B.F.P. Is the private sector more efficient? Big data analytics of construction waste management sectoral efficiency. Resour. Conserv. Recycl. 2020, 155, 104674. [Google Scholar] [CrossRef]
  30. Majumder, A.; Canale, L.; Mastino, C.C.; Pacitto, A.; Frattolillo, A.; Dell’Isola, M. Thermal characterization of recycled materials for building insulation. Energies 2021, 14, 3564. [Google Scholar] [CrossRef]
  31. Atta, I.; Bakhoum, E.S. Environmental feasibility of recycling construction and demolition waste. Int. J. Environ. Sci. Technol. 2023, 21, 2675–2694. [Google Scholar] [CrossRef]
  32. ISO 14040:2006; Environmental Management—Life Cycle Assessment—Principles and Framework. International Organization for Standardization: Geneva, Switzerland, 2006.
  33. ISO 14044:2006; Environmental Management—Life Cycle Assessment—Requirements and Guidelines. International Organization for Standardization: Geneva, Switzerland, 2006.
  34. Devaki, H.; Shanmugapriya, S. LCA on construction and demolition waste management approaches: A review. Mater. Today Proc. 2022, 65, 764–770. [Google Scholar] [CrossRef]
  35. Peng, X.; Jiang, Y.; Chen, Z.; Osman, A.I.; Farghali, M.; Rooney, D.W.; Yap, P.-S. Recycling municipal, agricultural, and industrial waste into energy, fertilizers, food, and construction materials, and economic feasibility: A review. Environ. Chem. Lett. 2023, 21, 765–801. [Google Scholar] [CrossRef]
  36. Marinho, A.J.C.; Couto, J.; Camões, A. Current state, comprehensive analysis, and proposals on the practice of construction and demolition waste reuse and recycling in Portugal. J. Civ. Eng. Manag. 2022, 28, 232–246. [Google Scholar] [CrossRef]
  37. Tihomirovs, P.; De Maeijer, P.K.; Korjakins, A. Demolition waste glass usage in the construction industry. Infrastructures 2023, 8, 182. [Google Scholar] [CrossRef]
  38. Shamsaei, M.; Carter, A.; Vaillancourt, M. Using construction and demolition waste materials to develop chip seals for pavements. Infrastructures 2023, 8, 95. [Google Scholar] [CrossRef]
  39. Sirimewan, D.; Harandi, M.; Peiris, H.; Arashpour, M. Semi-supervised segmentation for construction and demolition waste recognition in-the-wild: Adversarial dual-view networks. Resour. Conserv. Recycl. 2024, 202, 107399. [Google Scholar] [CrossRef]
  40. Zhang, Q.; Yang, Q.; Zhang, X.; Bao, Q.; Su, J.; Liu, X. Waste image classification based on transfer learning and convolutional neural network. Waste Manag. 2021, 135, 150–157. [Google Scholar] [CrossRef] [PubMed]
  41. Wang, Y.; Zhao, W.J.; Xu, J.; Hong, R. Recyclable waste identification using CNN image recognition and Gaussian clustering. arXiv 2020, arXiv:2011.01353. [Google Scholar]
  42. Ku, Y.; Yang, J.; Fang, H.; Xiao, W.; Zhuang, J. Deep learning of grasping detection for a robot used in sorting construction and demolition waste. J. Mater. Cycles Waste Manag. 2020, 23, 84–95. [Google Scholar] [CrossRef]
  43. Zhao, X.; Yang, Y.; Duan, F.; Zhang, M.; Jiang, G.; Yan, X.; Cao, S.; Zhao, W. Identification of construction and demolition waste based on change detection and deep learning. Int. J. Remote Sens. 2022, 43, 2012–2028. [Google Scholar] [CrossRef]
  44. Torky, M.; Dahy, G.; Hassanien, A.E. GH2_MobileNet: Deep learning approach for predicting green hydrogen production from organic waste mixtures. Appl. Soft Comput. 2023, 138, 110215. [Google Scholar] [CrossRef]
  45. Verma, T.; Dubey, S. Prediction of diseased rice plant using video processing and LSTM-simple recurrent neural network with comparative study. Multimed. Tools Appl. 2021, 80, 29267–29298. [Google Scholar] [CrossRef]
  46. Utku, A.; Kaya, S.K. Deep learning-based comprehensive analysis for waste prediction. Oper. Res. Eng. Sci. Theory Appl. 2022, 5, 176–189. [Google Scholar] [CrossRef]
  47. Cha, G.-W.; Choi, S.-H.; Hong, W.-H.; Park, C.-W. Developing a prediction model of demolition-waste generation-rate via principal component analysis. Int. J. Environ. Res. Public Health 2023, 20, 3159. [Google Scholar] [CrossRef]
  48. Blaisi, N.I. Construction and demolition waste management in Saudi Arabia: Current practice and roadmap for sustainable management. J. Clean. Prod. 2019, 221, 167–175. [Google Scholar] [CrossRef]
  49. Ouda, O.K.M.; Peterson, H.P.; Rehan, M.; Sadef, Y.; Alghazo, J.M.; Nizami, A.S. A case study of sustainable construction waste management in Saudi Arabia. Waste Biomass Valorization 2017, 9, 2541–2555. [Google Scholar] [CrossRef]
  50. Haider, H.; AlMarshod, S.Y.; AlSaleem, S.S.; Ali, A.A.M.; Alinizzi, M.; Alresheedi, M.T.; Shafiquzzaman, M. Life cycle assessment of construction and demolition waste management in Riyadh, Saudi Arabia. Int. J. Environ. Res. Public Health 2022, 19, 7382. [Google Scholar] [CrossRef]
  51. Al-Ghamdi, O.; Makhdom, B.; Al-Faraj, M.; Al-Akhras, N.; Abdel-Magid, H.; Abdel-Magid, I. Management and recycling of construction and demolition waste in Kingdom of Saudi Arabia. Int. J. Innov. Res. Sci. Eng. Technol. 2016, 5, 1111. [Google Scholar] [CrossRef]
  52. Nežerka, V.; Zbíral, T.; Trejbal, J. Machine-learning-assisted classification of construction and demolition waste fragments using computer vision: Convolution versus extraction of selected features. Expert Syst. Appl. Data Repos. 2024, 238, 121568. [Google Scholar] [CrossRef]
  53. Dhanya, V.G.; Subeesh, A.; Kushwaha, N.L.; Vishwakarma, D.K.; Kumar, T.N.; Ritika, G.; Singh, A.N. Deep learning-based computer vision approaches for smart agricultural applications. Artif. Intell. Agric. 2022, 6, 211–229. [Google Scholar] [CrossRef]
  54. Blain, L. Revolutionary ‘True Zero Carbon’ Cement Uses Electrolysis, Not Furnaces. New Atlas. 19 September 2023. Available online: https://newatlas.com/materials/sublime-zerocarbon-concrete/ (accessed on 10 June 2024).
  55. Photovs, Removal of Debris Construction Waste Building Demolition with Rock and Concrete Rubble on Portable Bio-Toilets [Image]. Dreamstime. 26 November 2019. Available online: https://www.dreamstime.com/photos-images/construction-waste.html (accessed on 24 January 2024).
  56. Alamy Limited. Broken Pieces of Asphalt at a Construction Site. Recycling and Reuse Crushed Concrete Rubble, Asphalt, Building Material, Blocks. Crushed Concrete Bac. Alamy.com. 2025. Available online: https://www.alamy.com/brokenpieces-of-asphalt-at-a-construction-site-recycling-and-reuse-crushed-concreterubble-asphalt-building-material-blocks-crushed-oncrete-bacimage433825575.html (accessed on 12 May 2024).
  57. Cichocki, K.; Domski, J.; Katzer, J.; Ruchwa, M. Impact Resistant Concrete Elements with Non-Conventional Reinforcement [Image]. ResearchGate. January 2014. Available online: https://www.researchgate.net/publication/271014105_Impact_resistant_concrete_elements_with_nonconventional_reinforcement (accessed on 17 February 2024).
  58. Bobulski, J.; Kubanek, M. Deep learning for plastic Waste Classification system. Appl. Comput. Intell. Soft Comput. 2021, 2021, 6626948. [Google Scholar] [CrossRef]
  59. Shrivastava, A. Real time waste classification using deep learning and AV: Deep learning and implementation in the frontend. J. Stud. Res. 2023, 12, 1–9. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
