Article

Proposal for Two-Stage Machine Learning-Based Algorithm for Dried Moringa Leaves Quality Classification

by Putu Sugiartawan 1,2, Nobuo Funabiki 1,*, I Nyoman Darma Kotama 1,2, Amma Liesvarastranta Haz 1, Komang Candra Brata 1,3 and Ni Wayan Wardani 1,2
1 Department of Information and Communication Systems, Okayama University, Okayama 700-8530, Japan
2 Department of Informatics, Institut Bisnis dan Teknologi Indonesia, Denpasar 80225, Indonesia
3 Department of Informatics Engineering, Universitas Brawijaya, Malang 65145, Indonesia
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(1), 239; https://doi.org/10.3390/app16010239
Submission received: 2 November 2025 / Revised: 3 December 2025 / Accepted: 19 December 2025 / Published: 25 December 2025
(This article belongs to the Special Issue Latest Research on Computer Vision and Image Processing)

Abstract

Nowadays, dried Moringa leaves (M. oleifera) are increasingly in demand due to their health benefits. High-quality ones have shown remarkable positive effects as antioxidants, antidiabetics, and anti-inflammatory agents. However, in the industry, the quality classification process into six categories is performed manually by farmers, which is time-consuming and error-prone. In particular, the two highest categories, Class A and Class B, are hard to distinguish, since they are visually similar. In this paper, to automate the classification process, we introduce a new high-resolution dataset, extract color features and texture features based on the Gray-Level Co-occurrence Matrix (GLCM) method, and present a two-stage classification method using the Light Gradient Boosting Machine (LightGBM) algorithm on these features. The experimental results show that the proposal improved classification accuracy from 82% by the baseline algorithm to 90% while maintaining high processing efficiency, demonstrating its potential for real-time and scalable industrial applications in dried Moringa leaf quality grading.

1. Introduction

Moringa (M. oleifera) is a valuable plant widely recognized for its rich nutritional content and diverse health benefits [1]. The highest-quality dried Moringa leaves in Class A are commonly processed into fine powder and used in medicine, known for their antioxidant [2], antidiabetic [3], and anti-inflammatory [4] properties. In contrast, lower-quality dried Moringa leaves in Classes B, C, D, and E are typically used for producing herbal tea, animal feed, and organic fertilizer, depending on the purity and leaf condition.
In recent years, exports of dried Moringa leaves from Indonesia have increased notably [5]. This growth is mainly driven by global demands and wider applications in the cosmetic [6,7] and wellness industries [8,9]. According to PT. Bali Agro Investama, a leading industrial producer of dried Moringa leaves in Bali, Indonesia, the existing quality classification system has been refined to better suit product-specific applications. The previous top grade, labeled Class A, has now been divided into two groups: Class A, which is used for pharmaceutical and medicinal purposes, and Class B, which is mainly intended for cosmetic formulations. The remaining grades, Class C, D, E, and F, are used for food, tea blends, animal feed, and other commercial products. This class refinement helps improve the product traceability and consistency but also increases the complexity of the classification process.
Currently, the classification of dried Moringa leaves is carried out manually, relying on human visual inspection to evaluate color and texture [10,11]. This manual process is labor-intensive, time-consuming, and prone to subjective errors, especially in distinguishing high-quality leaves between Class A and Class B, which often have nearly identical visual features. Therefore, an automated and objective classification approach is needed to support large-scale production of dried Moringa leaves while maintaining consistent quality standards.
Previously, several attempts have been made to classify the quality of dried Moringa leaves using machine learning methods [12,13]. These studies relied on a single type of feature, color, so their models were trained to distinguish leaves based only on color differences. However, several classes, such as Class A and Class B, share very similar color characteristics, making this approach ineffective for fine-grained quality classification. Good-quality leaves were often misclassified as lower-quality ones, because color alone could not represent the subtle variations in texture and surface uniformity that distinguish quality levels. Our own earlier study of dried Moringa leaves quality classification using color features [14] also lacked reliability in distinguishing Class A from Class B.
In this paper, to address this limitation, we present an improved classification method that uses multiple image features and a structured two-stage classification framework.
Previous studies have attempted to classify dried Moringa leaves using machine learning techniques; however, most approaches relied primarily on color information. These color-based methods demonstrated limited ability to distinguish high-quality leaves, particularly between Class A and Class B, which often share highly similar visual characteristics. As a result, misclassification frequently occurred when subtle variations in texture or surface uniformity were present. Our earlier work, which also adopted color features, showed similar limitations and indicated the need for a more comprehensive feature representation [15,16].
To address these challenges, the present study aims to develop a more robust and reliable image-based classification framework for dried Moringa leaves. The proposed approach emphasizes integrating richer visual information, including color and texture characteristics, and restructuring the classification process to better manage the subtle similarities in premium-grade leaves. By leveraging these enhancements, the study seeks to overcome the limitations reported in prior work and to provide a more accurate foundation for automated quality assessment.
This work offers three main contributions. First, it introduces a new high-resolution image dataset that captures detailed visual properties of dried Moringa leaves. Second, it proposes a refined classification framework designed to improve the separation of closely related quality classes. Finally, we conducted a comprehensive comparison of several machine learning models to identify an effective classifier for the assessment of dried Moringa leaf quality.
Our experimental results show that LightGBM achieved the best performance, reaching an overall accuracy of 91% with efficient processing time. This demonstrates that the proposed two-stage approach successfully improves classification accuracy compared with 82% by the one-stage approach, while maintaining high computational efficiency. The combination of high-resolution data, comprehensive feature extraction, and a hierarchical classification design establishes a reliable framework for large-scale and real-time quality assessment of dried Moringa leaves.
The remainder of this paper is organized as follows. Section 2 reviews previous studies on image-based quality assessment, feature extraction, and machine learning classification methods. Section 3 explains the dataset of dried Moringa leaves, including the image collection process, preprocessing, and feature extraction procedures. Section 4 presents the two-stage classification approach. Section 5 describes the eight machine learning-based classification algorithms used in this study. Section 6 shows experimental results for performance evaluations. Section 7 discusses the findings and implications of this study. Finally, Section 8 concludes this paper with future work.

2. Related Works

In this section, we introduce relevant works in the literature.

2.1. Image Quality Dependency

Previous studies have reported that machine learning models for leaf classification are highly sensitive to image quality, where variations in lighting, resolution, focus, occlusion, or background conditions can introduce inconsistencies and degrade model performance.
In [17], Chithra et al. emphasized that classification accuracy strongly depends on consistent lighting and background conditions, noting that illumination changes and image noise can significantly reduce feature clarity.
In [18], Hussain et al. highlighted that variations in focus, brightness, and background artifacts often introduce visual noise that negatively affects model robustness.
In [19], Hu et al. reported that shadows, occlusions, and image blur may obscure key leaf structures such as veins or edges, leading to misclassification.
In [20], Kumar et al. found that changes in lighting conditions, occlusions, and inconsistent backgrounds can distort critical visual features, further illustrating the challenges posed by image quality dependency.

2.2. Challenges in Feature Extraction

Feature extraction in leaf classification presents several challenges, including reliance on manually engineered features, sensitivity to variations in image acquisition, and difficulty in distinguishing morphologically similar leaf categories.
In [21], Gbadebo et al. highlighted that feature extraction techniques are highly sensitive to image quality variations, and subtle morphological similarities across species often complicate reliable feature representation.
In [22], Key et al. reported that lighting inconsistencies, occlusions, and variations in image sharpness can distort extracted features, reducing model robustness in classification tasks.
In [23], Donesh et al. discussed additional challenges such as lighting conditions, scanning or photography noise, and background interference, all of which can obscure leaf contours and hinder the extraction of discriminative structural features.

2.3. Machine Learning Models for Classification

Previous research has shown that traditional machine learning models often face challenges in classification tasks, including reduced accuracy when dealing with imbalanced datasets, sensitivity to parameter settings, and difficulties in generalizing across diverse data conditions.
In [24], Zachariah et al. reported that common evaluation metrics such as accuracy and ROC curves may fail to reliably assess model generalization, particularly in imbalanced datasets, leading to suboptimal model selection.
In [25], Jaiswal et al. highlighted that machine learning models must address issues such as adapting to incremental data updates, avoiding catastrophic forgetting, and maintaining stable performance under dynamic training conditions.
In [26], Wang et al. explained that optimizing SVM parameters and combining them with topic modeling techniques can improve model robustness, especially when dealing with high-dimensional data.
In [27], Widyananda et al. emphasized that the performance of SVM classifiers relies heavily on proper hyperparameter tuning, such as Gamma and C in the RBF kernel, to achieve improved accuracy.
In [28], Zhou et al. proposed an improved SVM model using an enhanced Dung Beetle Optimizer (HDBO) algorithm. It addresses issues like parameter dependency, computational complexity, and local optima in traditional SVM classification by incorporating chaotic mapping, dynamic weighting, and Cauchy mutation strategies.

3. Image Dataset of Dried Moringa Leaves

In this section, we introduce dried Moringa leaves and discuss their image dataset used in this study.

3.1. Moringa Leaf

Moringa (M. oleifera) is a fast-growing tropical tree native to India, and is now cultivated worldwide. Its leaves are widely used as food and in traditional medicine due to their high nutritional content. Dried Moringa leaves contain approximately 25–30% plant protein, along with essential vitamins (A, C, E, and B complex) and minerals such as calcium, potassium, magnesium, and iron.
Figure 1 shows samples of dried Moringa leaves. During the drying process, Moringa leaves undergo a significant reduction in weight due to the loss of moisture. Fresh leaves contain about 75–80% water, which decreases drastically upon drying, leading to a noticeable shrinkage in mass.
Dried Moringa leaves also contain bioactive compounds including flavonoids, polyphenols, and saponins. They act as antioxidants and exhibit anti-inflammatory properties. Typically, dried Moringa leaves are ground into powder and are used in medicines, cosmetics, and dietary supplements. Scientific studies have shown that Moringa leaf proteins can reduce blood glucose levels in diabetic mice and provide strong antioxidant activity [29]. Other reviews have reported that its bioactive compounds contribute to the prevention of chronic diseases and inflammation [30].

3.2. Overview of Dataset Creation

The dataset creation process for the dried Moringa leaves images consisted of two stages: data collection and preprocessing. They were designed to ensure consistent image quality and to extract meaningful features for machine learning-based classification. Figure 2 illustrates the overall workflow for creating the dried Moringa leaves image dataset.
In the data collection stage, high-resolution images of dried Moringa leaves were captured, representing all of the six quality classes (A, B, C, D, E, F) with a total of 600 images, 100 images for each class. The image normalization was performed using histogram equalization to balance brightness and contrast among them, followed by the background removal to isolate the leaf objects.
In the preprocessing stage, two main features were extracted from each image. First, color features were computed using the average RGB values, grayscale histograms, and RGB histograms. Second, texture features were extracted using the Gray-Level Co-occurrence Matrix (GLCM), which calculated four key statistical properties: Contrast (pixel intensity variation), Correlation (pixel linearity), Energy (texture uniformity), and Homogeneity (gray-level smoothness).
All of the extracted features were compiled into a CSV dataset along with the manually assigned class labels for the 600 images. These structured data were then used as the input to the eight machine learning algorithms to identify the most accurate and efficient model for Moringa leaves quality classification.

3.3. Data Collection Stage

In this stage, first, dried Moringa leaves images are collected from the six classes, ensuring consistent lighting and camera angles for each. Next, brightness is normalized using the histogram equalization method to maintain uniform lighting throughout the dataset. Finally, backgrounds are removed, resulting in a set of Moringa images suitable for extracting color, texture, and high-resolution features.

3.3.1. Image Data Collection

The dataset images were captured under standardized conditions to ensure uniformity and high visual quality. Figure 3 shows the image acquisition setup. Each image was taken against a white background using a fixed camera at a distance of 1 m, with consistent illumination provided by a 400 W studio lamp positioned 1.5 m away. The camera was mounted on a tripod to maintain a stable angle and perspective. All images were saved in JPEG format.
To achieve consistent lighting, focus, and color accuracy across all samples, the camera parameters were standardized as shown in Table 1. These parameters ensured that the visual characteristics of the dried Moringa leaves were captured clearly and comparably in all images. Table 1 details the specific settings used during image acquisition.
Using the above settings, a total of 600 dried Moringa leaves images were collected, 100 images per class. These settings ensured consistent brightness, sharpness, and color representation, reducing variations caused by lighting or camera configuration. The controlled conditions improved dataset reliability for machine learning-based classification.

3.3.2. Labeling Moringa Image

Next, experts from PT. Bali Agro Investama labeled the collected dried Moringa leaves images with six quality classes: A, B, C, D, E, and F. This classification was based on visual features such as color uniformity, stem content, texture, and overall leaf condition. Figure 4 illustrates example images from each class after preprocessing. Table 2 summarizes the class labels and their descriptions. The six labels define the visual quality levels used for dataset annotation.

3.3.3. Brightness Normalization Using CLAHE

To ensure uniform brightness across all Moringa leaves images before feature extraction, we applied the Contrast Limited Adaptive Histogram Equalization (CLAHE) method [31,32,33]. This method enhances local contrast while minimizing noise amplification. The brightness normalization process is expressed using the cumulative distribution function (CDF) shown in Equation (1).
I'(x, y) = \frac{\mathrm{CDF}(I(x, y)) - \mathrm{CDF}_{\min}}{\mathrm{CDF}_{\max} - \mathrm{CDF}_{\min}} \, (I_{\max} - I_{\min}) + I_{\min}    (1)
Here, I'(x, y) represents the pixel intensity after normalization at coordinates (x, y), whereas I(x, y) denotes the original pixel intensity. The parameters I_{\min} and I_{\max} correspond to the minimum and maximum intensity values, respectively, which are typically set to 0 and 255 for 8-bit images.
The CLAHE method divides the image into small contextual regions called tiles, applies local histogram equalization to each tile, and limits contrast to prevent over-amplification of noise. This approach produces a balanced intensity distribution and enhances feature visibility. Figure 5 illustrates the CLAHE-based brightness normalization process. After its application, the normalized images achieved consistent brightness and stable feature contrast across all classes, improving the quality of input data for machine learning-based classification.
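To make this step concrete, the following is a minimal sketch of how CLAHE could be applied with OpenCV. The clip limit, tile grid size, the choice of equalizing the lightness channel of the LAB color space, and the file name are illustrative assumptions; the paper does not report these parameters.

```python
# Sketch of CLAHE brightness normalization with OpenCV (assumed
# parameters; the paper does not state clip limit or tile size).
import cv2

def normalize_brightness(image_bgr, clip_limit=2.0, tile_grid=(8, 8)):
    # Equalize only the lightness channel so leaf colors are preserved.
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l_eq = clahe.apply(l)  # local histogram equalization per tile
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

# Hypothetical file name for illustration.
normalized = normalize_brightness(cv2.imread("moringa_class_A_001.jpg"))
```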

3.3.4. Image Segmentation

Image segmentation is applied to each dried Moringa leaves image to isolate the leaf region by removing the irrelevant background area. This process begins with cropping to focus on the leaf region, followed by background removal to eliminate non-leaf areas. Figure 6 shows the image segmentation sequence, including the original image (3936 × 2624 pixels), the cropped image (1117 × 1085 pixels), and the final background-free image.
These cropping and background removal processes did not significantly reduce image resolution, and the resulting segmented images retained sufficient detail for machine learning feature extraction.
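As an illustration of this step, the sketch below crops to the leaf bounding box and masks out the white background with a simple intensity threshold. The threshold value and the thresholding strategy are assumptions for illustration only; the paper does not specify the exact segmentation algorithm it used.

```python
# Illustrative leaf segmentation against a bright white background
# (assumed thresholding approach; not the paper's exact method).
import cv2
import numpy as np

def segment_leaves(image_bgr, white_thresh=200):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Pixels darker than the white background are treated as leaf matter.
    mask = (gray < white_thresh).astype(np.uint8) * 255
    ys, xs = np.nonzero(mask)
    # Crop to the bounding box of the detected leaf region.
    cropped = image_bgr[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    mask_c = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Zero out the remaining background pixels inside the crop.
    return cv2.bitwise_and(cropped, cropped, mask=mask_c), mask_c
```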

3.4. Feature Extraction Stage

The feature extraction stage extracts numerical features for machine learning analysis from the preprocessed dried Moringa leaves image. Two types of features are extracted: color and texture. Color features are obtained from RGB averages, RGB histograms, and grayscale histograms. Texture features are extracted using the Gray-Level Co-occurrence Matrix (GLCM) method [34,35]. These features describe the color distribution, intensity, and surface texture of each image. All extracted values are stored in a CSV file that will be used for classification.

Color Feature Extraction

The color feature extraction process calculates the visual color properties of a dried Moringa leaves image, which will be used in machine learning algorithms. We use two types of color features: average RGB value and color histogram.
The average RGB value represents the mean pixel intensity for each color channel, and is calculated by:
\bar{R} = \frac{1}{N} \sum_{(x, y)} R(x, y), \quad \bar{G} = \frac{1}{N} \sum_{(x, y)} G(x, y), \quad \bar{B} = \frac{1}{N} \sum_{(x, y)} B(x, y)    (2)
where N is the total number of pixels in the segmented region of the image.
To capture the color intensity distribution, the color intensity of each image in the Red, Green, Blue, and Grayscale channels is discretized into 10 intensity ranges, or bins. Then, the color histogram h_k for each bin k is computed as:
h_k = \frac{n_k}{N}, \quad k = 1, 2, \ldots, 10    (3)
where n k is the number of pixels whose intensity falls within bin k. The histogram is then normalized to ensure scale independence.
Then, the complete color feature vector is obtained by:
f_{\mathrm{color}} = [\, \bar{R}, \bar{G}, \bar{B}, \; h_1^{\mathrm{gray}}, \ldots, h_{10}^{\mathrm{gray}}, \; h_1^{R}, \ldots, h_{10}^{R}, \; h_1^{G}, \ldots, h_{10}^{G}, \; h_1^{B}, \ldots, h_{10}^{B} \,]    (4)
where f color denotes the color feature vector.
The extracted color features used in this study are summarized in Table 3. All of the extracted features are normalized and saved in a CSV file along with their class labels. The color-based features serve as the primary inputs to the machine learning classification of dried Moringa leaves quality.
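A minimal sketch of Equations (2)-(4) is given below: it computes the per-channel RGB means over the segmented leaf pixels and the four 10-bin normalized histograms, yielding the 43 color features of Table 3. The function signature and mask convention are our assumptions.

```python
# Sketch of the 43-dimensional color feature vector of Equation (4):
# 3 RGB means plus 10-bin histograms for grayscale, R, G, and B.
import cv2
import numpy as np

def color_features(image_bgr, mask):
    pixels = image_bgr[mask > 0]                # N x 3 array (B, G, R)
    b, g, r = pixels[:, 0], pixels[:, 1], pixels[:, 2]
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)[mask > 0]
    feats = [r.mean(), g.mean(), b.mean()]      # average RGB, Equation (2)
    for channel in (gray, r, g, b):
        hist, _ = np.histogram(channel, bins=10, range=(0, 256))
        feats.extend(hist / hist.sum())         # h_k = n_k / N, Equation (3)
    return np.array(feats)                      # 3 + 4 * 10 = 43 features
```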

3.5. Texture Feature Extraction Using GLCM

The texture feature extraction stage analyzes the spatial distribution of pixel intensities in the dried Moringa leaves images using the Gray-Level Co-occurrence Matrix (GLCM) method. This method calculates the matrix representing how often pairs of pixels with specific gray-level values occur at a defined spatial distance and direction. This matrix provides valuable information about the image texture, which helps in distinguishing leaf surface patterns across different quality classes.
From the GLCM method, the following four statistical texture properties are extracted: Contrast, Correlation, Energy, and Homogeneity. These parameters describe different aspects of image texture and are computed by
\mathrm{Contrast} = \sum_{i,j} (i - j)^2 \, P(i, j)    (5)
\mathrm{Correlation} = \sum_{i,j} \frac{(i - \mu_i)(j - \mu_j) \, P(i, j)}{\sigma_i \sigma_j}    (6)
\mathrm{Energy} = \sum_{i,j} P(i, j)^2    (7)
\mathrm{Homogeneity} = \sum_{i,j} \frac{P(i, j)}{1 + |i - j|}    (8)
Here, P(i, j) represents the normalized GLCM value at coordinates (i, j), and \mu_i, \mu_j, \sigma_i, and \sigma_j denote the means and standard deviations of the gray levels in the horizontal and vertical directions, respectively. Table 4 summarizes the extracted texture features and their descriptions.
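For illustration, these texture properties can be computed with scikit-image as sketched below. The pixel distance of 1, the 0-degree direction, and the 256 gray levels are assumptions, since the paper does not report its GLCM offsets; Dissimilarity is included to match the five texture columns described in Section 3.6.

```python
# Sketch of GLCM texture extraction with scikit-image (assumed
# distance/angle settings; input must be an 8-bit grayscale image).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_uint8):
    glcm = graycomatrix(gray_uint8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "correlation", "energy",
             "homogeneity", "dissimilarity")
    # Properties correspond to Equations (5)-(8) plus Dissimilarity.
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])
```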

3.6. Final Feature Dataset

The final feature dataset consists of 600 rows or images and 50 columns in total: filename, label, 43 color features, and 5 texture features. The color features include three average RGB values of Avg R, Avg G, Avg B, and 40 histogram bins with 10 bins each for Grayscale, Red, Green, and Blue. The texture features are extracted using the Gray-Level Co-occurrence Matrix (GLCM) method, and consist of five statistical measures: Contrast, Correlation, Energy, Homogeneity, and Dissimilarity.
Table 5 shows representative samples from the final feature dataset, one image per class (A–F). Only selected features are shown for brevity; the full CSV includes all 43 color and 5 texture features.

3.7. Software and Hardware Configuration

The experiments in this study were conducted on a Lenovo ThinkPad with the following hardware specifications: an 11th Gen Intel® Core™ i5-1135G7 processor @ 2.40 GHz (8 CPUs), 8 GB of RAM, and Windows 11 Pro 64-bit (Build 26200) as the operating system. No GPU acceleration was used in this work; all computations, including image preprocessing, feature extraction, and model training (LightGBM), were performed entirely on CPU resources. This demonstrates that the proposed approach can operate efficiently even on modest computing hardware typical of small-scale industrial environments. Regarding software, the experiments were implemented in Python (for preprocessing, feature extraction, and model training), with the LightGBM library for classification, OpenCV for image handling, and NumPy and SciPy for numerical processing and statistical calculations. This configuration is reported to enhance reproducibility and to allow other researchers to evaluate the practical computational requirements of the proposed system.

4. Two-Stage Classification Method

In this section, we present the two-stage classification method for quality grading of dried Moringa leaves. This method is designed to address the limitations observed during the baseline classification, where visually similar classes (Class A and Class B) were difficult to separate using a single-stage approach.

4.1. Overview of Proposal

In the baseline classification, eight machine learning models were compared to identify the most accurate and efficient algorithm for classifying dried Moringa leaves quality. Among them, the Light Gradient Boosting Machine (LightGBM) achieved the best balance between accuracy and processing speed. However, its precision was low for Class A and Class B, since they exhibit nearly identical color and texture characteristics.
To improve the classification accuracy, a two-stage classification method was designed. The first stage restructures the dataset by merging Class A and Class B into a single category labeled Class AB, while retaining Class C, Class D, Class E, and Class F. The model trained on these five classes (AB, C, D, E, F) aims to reduce confusion between the two highly similar categories. The workflow of this two-stage classification method is shown in Figure 7.
The second stage focuses on classifying the Class AB samples into Class A and Class B. In this stage, another model is trained using only Class A and Class B, enabling fine-grained differentiations between visually similar leaves. This hierarchical process reduces misclassifications between overlapping visual features and increases overall classification reliability. The complete evaluation of this method is presented in Section 6.

4.2. Stage-Specific Training Process (A–B Refinement)

In the second stage, all samples in Class AB classified in the first stage are reclassified using a dedicated model for this stage. This model is trained exclusively with datasets in Class A and Class B, consisting of 200 images (100 per class). The same preprocessing pipeline and feature extraction methods are applied to ensure consistency across both stages.
The LightGBM algorithm is used again due to its strong performance in handling nonlinear feature interactions and imbalanced data distributions. As illustrated in Figure 8, the second-stage model can effectively separate samples assigned to Class AB in the first stage into Class A and Class B. It improves the detection of subtle visual differences between the two classes, such as leaf hue, stem proportion, and surface texture, which the first-stage model found difficult to capture.
Through this two-stage process, the classification accuracy for Class A and Class B can be improved significantly while maintaining the high accuracy for the other classes.
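A minimal sketch of this two-stage process is shown below, assuming LightGBM's scikit-learn interface (LGBMClassifier) with default settings; the variable names and the string label scheme (A–F, merged label "AB") are illustrative.

```python
# Sketch of two-stage training and inference with LightGBM
# (illustrative label handling; hyperparameters left at defaults).
import numpy as np
from lightgbm import LGBMClassifier

def train_two_stage(X, y):
    # Stage 1: merge the two premium grades into a single class "AB".
    y_merged = np.where(np.isin(y, ["A", "B"]), "AB", np.asarray(y))
    stage1 = LGBMClassifier().fit(X, y_merged)
    # Stage 2: fine-grained A-vs-B model trained on those samples only.
    ab = np.isin(y, ["A", "B"])
    stage2 = LGBMClassifier().fit(X[ab], np.asarray(y)[ab])
    return stage1, stage2

def predict_two_stage(stage1, stage2, X):
    pred = np.asarray(stage1.predict(X), dtype=object)
    ab = pred == "AB"
    if ab.any():
        pred[ab] = stage2.predict(X[ab])  # refine AB into A or B
    return pred
```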

5. Machine Learning-Based Classification Methods

In this section, we describe the machine learning classification methods used in this study. All models were implemented using Python 3.11. Support Vector Machine (SVM), Decision Tree, Random Forest, Gradient Boosting Decision Tree (GBDT), Naive Bayes, and K-Nearest Neighbor (KNN) were implemented with Scikit-learn (sklearn), XGBoost with the XGBoost library (xgboost), and LightGBM with the LightGBM library (lightgbm). These libraries were used with their default configurations unless otherwise specified. All preprocessing steps, feature extraction, and evaluation procedures were also executed using Python scientific computing libraries, including NumPy, Pandas, and SciPy.

5.1. Overview of Machine Learning-Based Classification

Eight machine learning algorithms were selected based on their proven use in agricultural image classification and quality assessment [36,37,38]. They include Support Vector Machine (SVM), Decision Tree, Gradient Boosting Decision Tree (GBDT), eXtreme Gradient Boosting (XGBoost), K-Nearest Neighbors (K-NN), Naive Bayes, Random Forest, and Light Gradient Boosting Machine (LightGBM). These eight algorithms were selected to provide a representative and comprehensive comparison across the model families commonly used for structured feature datasets. This diversity allows for evaluating which learning paradigm best fits the combined color-texture feature space used in this study. These models are also computationally efficient and practical for industrial deployment without requiring GPU resources. Therefore, the selection of these eight algorithms ensures a fair, lightweight, and application-oriented comparison aligned with real-world constraints.
SVM handles high-dimensional and nonlinear data effectively [39]. Decision Tree is simple and interpretable. GBDT and XGBoost improve weak learners through iterative boosting. K-NN classifies data by distance similarity. Naive Bayes applies a fast probabilistic approach. Random Forest reduces overfitting through ensemble learning. LightGBM is optimized for speed and large datasets. These algorithms were used to compare classification behaviors across linear, probabilistic, distance-based, and ensemble approaches, forming the basis for developing the proposed two-stage method.
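To make the comparison procedure concrete, a sketch of the baseline evaluation loop is given below, using the libraries listed above with their default configurations. The CSV file name is hypothetical, and the label encoding step is our addition so that all eight classifiers accept the class labels.

```python
# Sketch of the baseline eight-model comparison on the feature CSV
# of Section 3.6 (hypothetical file name; default model settings).
import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

df = pd.read_csv("moringa_features.csv")
X = df.drop(columns=["filename", "label"])
y = LabelEncoder().fit_transform(df["label"])   # classes A-F -> 0-5

# 8:2 train/test split, as described in Section 6.1.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=42)
models = {
    "SVM": SVC(), "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "GBDT": GradientBoostingClassifier(), "Naive Bayes": GaussianNB(),
    "K-NN": KNeighborsClassifier(), "XGBoost": XGBClassifier(),
    "LightGBM": LGBMClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy={model.score(X_te, y_te):.3f}, "
          f"mean CV={cross_val_score(model, X, y, cv=5).mean():.3f}")
```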

5.2. Support Vector Machine (SVM)

Support Vector Machine (SVM) is a supervised learning algorithm commonly applied to classification and regression problems. Its objective is to find the optimal hyperplane that maximizes the margin between data points of different classes. This geometric principle allows SVM to generalize classification well, even with limited training data. Several studies confirmed its reliability for image-based classification, showing strong performances across agricultural and visual recognition tasks [40,41].
SVM determines the separating hyperplane by maximizing the margin as expressed in Equation (9) [42,43]:
\max_{\mathbf{w}, b} \; \frac{1}{\|\mathbf{w}\|}, \quad \text{subject to} \quad y_i (\mathbf{w} \cdot \mathbf{x}_i + b) \geq 1, \;\; \forall i,    (9)
where \mathbf{w} is the weight vector, b is the bias, \mathbf{x}_i is the input feature vector, and y_i \in \{-1, 1\} is the class label. By optimizing this margin, SVM achieves strong generalization and robustness in high-dimensional image feature spaces.

5.3. Decision Tree

Decision Tree is a supervised learning algorithm that classifies data by recursively splitting it based on the most informative features [44]. It builds a hierarchical tree structure of decision rules that guide the prediction process. When the data distribution among different classes is uniform, the model’s entropy is high, indicating greater uncertainty [45]. The prediction function of Decision Tree can be expressed as Equation (10):
\hat{y}(\mathbf{x}) = \sum_{m=1}^{M} c_m \, \mathbf{1}\{\mathbf{x} \in R_m\}    (10)
where M represents the number of leaf nodes, R_m denotes the region associated with leaf m, c_m the predicted value for that region, and \mathbf{1}\{\mathbf{x} \in R_m\} an indicator function equal to 1 if \mathbf{x} lies in region R_m and 0 otherwise. This structure allows the model to perform intuitive and interpretable classification based on feature thresholds.

5.4. Random Forest

Random Forest is an ensemble learning algorithm that constructs multiple Decision Trees using random subsets of data and features and aggregates their predictions to improve accuracy and reduce overfitting [46,47]. Each Decision Tree contributes an independent prediction, and the final result is determined by majority voting for classification or by averaging for regression. The prediction function can be expressed as Equation (11):
\hat{y}(\mathbf{x}) = \begin{cases} \operatorname{mode}\{ T_b(\mathbf{x}) \}_{b=1}^{B}, & \text{for classification} \\ \frac{1}{B} \sum_{b=1}^{B} T_b(\mathbf{x}), & \text{for regression} \end{cases}    (11)
where B is the total number of trees, and T b ( x ) denotes the prediction of the b-th tree for input x . For classification tasks, the final class is selected by the most frequent label among all trees, while for regression, the output becomes the average of all predictions. By combining many weak learners, Random Forest reduces variance, handles large and complex datasets, and identifies important features effectively [48].

5.5. Gradient Boosting Decision Trees (GBDT)

Gradient Boosting Decision Trees (GBDT) is an ensemble learning algorithm that builds multiple Decision Trees sequentially, where each tree is trained to correct the residual errors of the previous one [49]. The prediction from each new tree is added to the model in small steps, improving accuracy through iterative optimization [50]. The model update at each iteration is defined in Equation (12):
F_m(\mathbf{x}) = F_{m-1}(\mathbf{x}) + \nu \, h_m(\mathbf{x}),    (12)
where F_m(\mathbf{x}) represents the model after the m-th iteration, F_{m-1}(\mathbf{x}) the previous model, h_m(\mathbf{x}) the new weak learner (tree) trained to predict residuals, and \nu the learning rate controlling the contribution of each tree.
At each step, h m ( x ) minimizes the negative gradient of the loss function, gradually improving the model’s predictive performance. This process continues until a defined stopping criterion is met, producing a robust model with high accuracy and low bias.

5.6. Extreme Gradient Boosting (XGBoost)

Extreme Gradient Boosting (XGBoost) is an advanced implementation of the Gradient Boosting framework that builds Decision Trees sequentially, where each tree corrects the residual errors of the previous ones [51,52]. XGBoost improves the model performance through optimized loss functions, regularization, and shrinkage (learning rate), effectively reducing overfitting while maintaining high accuracy. The objective function is expressed in Equation (13):
\mathcal{L}(\phi) = \sum_{i=1}^{n} l\left( y_i, \; \hat{y}_i^{(t-1)} + f_t(\mathbf{x}_i) \right) + \Omega(f_t),    (13)
where l(\cdot) represents the differentiable loss function measuring the difference between the predicted value \hat{y}_i and the true label y_i, and \Omega(f_t) the regularization term controlling model complexity. The function f_t(\mathbf{x}_i) represents the newly added tree at iteration t, which learns to fit the residuals from the previous prediction \hat{y}_i^{(t-1)}. By combining gradient optimization and regularization, XGBoost achieves both computational efficiency and strong generalization performance.

5.7. Light Gradient Boosting Machine (LightGBM)

Light Gradient Boosting Machine (LightGBM) is a fast, efficient, and scalable implementation of Gradient Boosting Decision Trees (GBDT) developed by Microsoft [53]. It enhances the traditional GBDT framework by introducing histogram-based Decision Tree learning and leaf-wise tree growth, which significantly reduces training time and memory usage [54,55]. The objective function optimized at each iteration is defined in Equation (14):
\mathcal{L}^{(t)} = \sum_{i=1}^{n} l\left( y_i, \; \hat{y}_i^{(t-1)} + f_t(\mathbf{x}_i) \right) + \Omega(f_t),    (14)
where \mathcal{L}^{(t)} represents the total objective function combining loss and regularization, l(\cdot) the individual loss function measuring the error between the true label y_i and the predicted value \hat{y}_i^{(t-1)}, f_t(\mathbf{x}_i) the newly added Decision Tree trained to fit the residuals, and \Omega(f_t) the regularization term controlling model complexity.
Unlike the standard GBDT, LightGBM employs histogram binning to accelerate split finding and uses a leaf-wise growth strategy that allows deeper, more accurate trees for complex data patterns. These techniques make LightGBM highly suitable for large-scale and high-dimensional datasets, offering strong performance with lower computational cost.

5.8. Naive Bayes

Naive Bayes is a probabilistic classification algorithm based on Bayes’ theorem, which assumes that all features are conditionally independent given the class label [56]. Despite this “naive” assumption, it often performs effectively, especially in text and image classification tasks [57,58]. The model predicts the probability that a feature vector x belongs to a class C k using Equation (15):
P(C_k \mid \mathbf{x}) = \frac{P(C_k) \prod_{i=1}^{n} P(x_i \mid C_k)}{P(\mathbf{x})},    (15)
where P(C_k \mid \mathbf{x}) represents the posterior probability of class C_k given features \mathbf{x} = (x_1, x_2, \ldots, x_n), P(C_k) denotes the prior probability of class C_k, and P(x_i \mid C_k) the likelihood of observing feature x_i given that the sample belongs to class C_k. The denominator P(\mathbf{x}) is the evidence term, which is constant across all classes and typically omitted during classification.
Naive Bayes selects the class with the highest posterior probability:
\hat{C} = \arg\max_{C_k} \; P(C_k) \prod_{i=1}^{n} P(x_i \mid C_k).
Its simplicity, low computational cost, and robustness to small datasets make Naive Bayes a practical baseline for probabilistic classification tasks [59].

5.9. K-Nearest Neighbor (K-NN)

K-Nearest Neighbor (K-NN) is a simple, non-parametric algorithm used for both classification and regression tasks [60,61]. It classifies a new data point by examining the k nearest neighbors in the training set based on a distance metric, typically the Euclidean distance. The predicted class is determined by the majority label among these neighbors [62]. The distance between a query point x and a training point x i is defined in Equation (16):
d(\mathbf{x}, \mathbf{x}_i) = \sqrt{ \sum_{j=1}^{m} \left( x_j - x_{ij} \right)^2 },    (16)
where d(\mathbf{x}, \mathbf{x}_i) represents the Euclidean distance between the two data points, m denotes the total number of features, x_j the j-th feature value of the query point \mathbf{x}, and x_{ij} the corresponding feature value of the training point \mathbf{x}_i.
After computing the distances, K-NN identifies the k nearest samples N k ( x ) and assigns the class that appears most frequently among them. Despite its simplicity, K-NN performs well on small to moderate datasets and serves as a strong baseline for pattern recognition and image classification.

5.10. Evaluating Model Performance in Classification

Evaluating model performance in classification is essential to know how accurately a machine learning model can estimate the correct class label [25]. Several standard metrics are used to quantify classification effectiveness, each focusing on different aspects of prediction quality.
Accuracy is the most common measure, which represents the proportion of correctly predicted samples among all predictions, as defined in Equation (17):
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN},    (17)
where T P , T N , F P , and F N denote true positives, true negatives, false positives, and false negatives, respectively.
Precision measures how many of the samples predicted as positive are truly positive, as shown in Equation (18):
\text{Precision} = \frac{TP}{TP + FP}.    (18)
Recall, also known as sensitivity, quantifies the ability of the model to identify all positive samples, as expressed in Equation (19):
\text{Recall} = \frac{TP}{TP + FN}.    (19)
F1-Score is the harmonic mean of precision and recall, providing a single metric that balances both, as given in Equation (20):
\text{F1-Score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}.    (20)
The Confusion Matrix visualizes model predictions, summarizing correct and incorrect classifications across all classes; the importance of using standard evaluation metrics such as accuracy, precision, recall, and F1-Score in assessing classification performance has been emphasized in [63]. The structure of the confusion matrix used for evaluating classification performance is presented in Table 6. In summary, Accuracy evaluates overall correctness, Precision focuses on prediction reliability, Recall measures detection completeness, and F1-Score balances both aspects. These metrics collectively provide a comprehensive evaluation of model performance in classification tasks.
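As a brief illustration, all four metrics and the confusion matrix can be obtained with scikit-learn as sketched below; y_true and y_pred are placeholder labels standing in for the test-set ground truth and the model predictions.

```python
# Sketch of computing Equations (17)-(20) and the confusion matrix
# with scikit-learn (placeholder labels for illustration).
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix)

y_true = ["A", "A", "B", "C", "F", "B"]   # placeholder ground truth
y_pred = ["A", "B", "B", "C", "F", "B"]   # placeholder predictions

print("Accuracy:", accuracy_score(y_true, y_pred))     # Equation (17)
# Per-class Precision, Recall, and F1-Score, Equations (18)-(20),
# plus the macro and weighted averages reported in Section 6.
print(classification_report(y_true, y_pred))
# Rows are true classes, columns are predicted classes (cf. Table 6).
print(confusion_matrix(y_true, y_pred, labels=["A", "B", "C", "F"]))
```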

6. Experimental Results

In this section, we present experimental results of classifying dried Moringa leaves using the eight machine learning algorithms and the two-stage classification method.

6.1. Baseline Classification Results

The classification of dried Moringa leaves was performed using eight machine learning algorithms on the dataset containing six classes: A, B, C, D, E, and F. Each image was represented by 43 color features and 5 texture features. The dataset was randomly divided into training and testing sets using an 8:2 ratio. Table 7 presents the overall performance comparison, including accuracy, execution time, and mean cross-validation score.
Among the eight models, LightGBM and XGBoost achieved high accuracy, with LightGBM showing the best balance between accuracy (0.820) and speed (0.64 s). This result suggests LightGBM is the most efficient algorithm for classification of dried Moringa leaves.

6.2. Accuracy Result for LightGBM Class A–F

The performance of LightGBM on the six-class dataset is shown in Table 8. This model achieved an accuracy of 0.82, with balanced precision, recall, and F1-Scores across all classes. The macro and weighted averages were both 0.82, indicating consistent performance across different categories.
Figure 9 shows the Confusion Matrix for LightGBM. The model performs well in most categories, with the highest accuracy for Class F. However, misclassifications remain between Class A and Class B, and between Class D and Class E, due to their high visual similarity. Overall, the LightGBM classifier demonstrates reliable performance for multi-class quality grading.

6.3. First Stage Result by LightGBM

In the first stage of the proposed two-stage method, Class A and Class B were merged into a single Class AB, forming five categories: AB, C, D, E, F. Table 9 presents the performance comparison across machine learning algorithms using the dataset, and Figure 10 shows the Confusion Matrix for LightGBM.
LightGBM achieved the best performance with an accuracy of 0.911 and a mean cross-validation score of 0.92, confirming its stability and efficiency. The results indicate excellent recognition for Class AB and Class F, with minimal errors for the other categories. This stage validates merging Class A and Class B as an effective strategy to improve initial classification accuracy.

6.4. Second Stage Result by LightGBM

In the second stage, the classification model focused on separating the merged Class AB into the original Class A and Class B. Using LightGBM, the classifier achieved an accuracy of 0.90, demonstrating improved distinction between these visually similar categories. Table 10 summarizes the results, and Figure 11 presents the corresponding confusion matrix.
The model correctly classified 17 images in Class A and 19 images in Class B, confirming a strong discriminative capability. The two-stage LightGBM algorithm effectively reduces confusion between Class A and Class B while maintaining high overall accuracy, reaching 91% in the final classification stage.

6.5. Statistical Significance Analysis

To evaluate whether the performance improvement between the baseline one-stage classifier and the proposed two-stage classification method is statistically significant, a statistical hypothesis test was conducted. The accuracy values from multiple experimental runs ( n = 10 ) were compared using a Student’s t-test for independent samples, which is appropriate for assessing differences in mean performance between two classification approaches. The t-test was applied under the assumption of normally distributed accuracy values, and both the t-statistic and p-value were computed. A significance threshold of p < 0.05 was used to determine whether the improvement achieved by the two-stage classification method is statistically significant. All statistical calculations were performed using Python (version 3.12) and the SciPy statistical library (version 1.26.4).
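A minimal sketch of this test with SciPy is shown below; the two accuracy lists are placeholders for the per-run accuracies of the two approaches (n = 10 each), not the measured values from our experiments.

```python
# Sketch of the independent-samples Student's t-test described above
# (placeholder accuracy values; SciPy's stats.ttest_ind).
from scipy import stats

acc_one_stage = [0.82, 0.81, 0.83, 0.82, 0.80,
                 0.82, 0.81, 0.83, 0.82, 0.81]   # placeholder runs
acc_two_stage = [0.90, 0.91, 0.89, 0.90, 0.91,
                 0.90, 0.89, 0.91, 0.90, 0.90]   # placeholder runs

t_stat, p_value = stats.ttest_ind(acc_one_stage, acc_two_stage)
print(f"t = {t_stat:.3f}, p = {p_value:.3g}")
if p_value < 0.05:
    print("The improvement is statistically significant at p < 0.05.")
```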
The results in Table 11 show that the Student's t-test confirms the improvement in accuracy from the baseline one-stage classifier (82%) to the proposed two-stage classifier (90%) is statistically significant, with the p-values for accuracy, precision, recall, and F1-Score all well below the threshold of 0.05. The performance gain is therefore not due to random variation but reflects a genuine improvement achieved by the two-stage classification design.

7. Discussion

In this section, we discuss our findings from the experiments on classifying dried Moringa leaves using the eight machine learning algorithms and the two-stage classification method.

7.1. Baseline vs. Two-Stage Performance

The baseline classification achieved an overall accuracy of 0.82 using LightGBM on six classes (A–F). While the model performed well for most categories, confusion remained between visually similar classes, particularly Class A and Class B, due to their overlapping color and texture patterns. The proposed two-stage classification method addressed this limitation by introducing a hierarchical refinement process. The accuracy was improved to 0.91 in the first stage and 0.90 in the second stage, confirming the effectiveness of this method for fine-grained classification of similar categories.

7.2. Effectiveness of First Stage

In the first stage, Class A and Class B were merged into a single Class AB, forming five classes (AB, C, D, E, F). This restructuring simplified the classification space and reduced inter-class confusion. The LightGBM classifier achieved an accuracy of 0.911 with a mean cross-validation score of 0.92, outperforming all other algorithms tested. The model showed particularly strong results for AB and F, demonstrating that merging highly correlated classes improves initial classification reliability.

7.3. Impact of the Second Stage

The second stage focused on differentiating Class A and Class B within Class AB. Using LightGBM, the classifier achieved 0.90 accuracy, with precision, recall, and F1-Scores all around 0.90. The Confusion Matrix showed only minor misclassification between Class A and Class B, involving four images, indicating that the refinement successfully captured subtle textural and color differences. This confirms the advantage of the hierarchical approach in improving discrimination between nearly identical samples.

7.4. Comparison Across Algorithms

Among all eight algorithms evaluated, LightGBM and XGBoost showed comparable accuracy (0.82 each) in the baseline experiment. However, LightGBM trained about 16 times faster (0.64 s versus 10.40 s), owing to its histogram-based Decision Tree learning and leaf-wise growth strategy. Random Forest also performed well (0.81 accuracy) but required longer training time. These findings align with previous research indicating that LightGBM offers an optimal balance of accuracy and computational efficiency for structured feature datasets.

7.5. Feature Contribution and Image Resolution

The use of 43 color and 5 texture features in this paper contributed significantly to classification performance. Color features captured hue and intensity differences, while texture features extracted using GLCM quantified surface uniformity and contrast. High-resolution images (1117 × 1085 pixels) provided richer feature details, improving the detection of fine visual differences. Compared with prior studies using low-resolution images (128 × 12 pixels) [14], the higher resolution increased feature diversity and model precision, especially for Class A and Class B. Previous related studies did not compare Class A and Class B separately, as these two categories were typically merged into a single group due to their highly similar color and morphological characteristics.

7.6. Manual Feature Extraction over End-to-End Deep Learning

We used manual feature extraction instead of an end-to-end deep learning model because our dataset is relatively small (600 images). Deep learning methods such as CNNs usually require large datasets to avoid overfitting and often do not perform well with limited data [64,65]. In contrast, traditional machine learning models with handcrafted features are more stable and reliable in small-data conditions [66]. The most difficult part of this study is distinguishing Class A and Class B, which look very similar in color and shape. Texture features extracted using GLCM can capture subtle differences—such as stem content and surface smoothness—that deep learning may fail to learn with limited samples [67,68].
Finally, the industrial partner requires a lightweight model that can run on standard CPU-based computers without a GPU. Manual feature extraction combined with traditional ML provides fast and practical performance for real-world deployment [69].

7.7. Practical Implications and Limitations

The two-stage LightGBM-based algorithm demonstrates high reliability and speed, making it suitable for automated grading in agricultural industries where real-time quality assessment is required. This hierarchical design can also be adapted to other agricultural products with overlapping visual features. However, the dataset size of 600 images remains limited. Thus, future studies should incorporate larger datasets under varying lighting and background conditions. Additionally, integrating deep learning feature extraction may further enhance robustness and reduce dependence on handcrafted features. The evaluation was conducted in a controlled research environment. The system has not yet been tested under real production-line conditions where noise, dust, motion, and strict real-time constraints are present.

8. Conclusions

This study presented a two-stage classification method using the LightGBM algorithm to automatically classify the quality of dried Moringa leaves into six categories. The method combines color and texture features to accurately classify them while addressing the difficulty of distinguishing visually similar classes. Experimental results show that this two-stage approach improved classification performance, increasing accuracy from 82% in the baseline model to 91% in the first stage and 90% in the second stage for fine-grained classes. LightGBM provided the best trade-off between accuracy and computational efficiency compared with other algorithms. Future work will focus on expanding the dataset under varied lighting and environmental conditions and integrating deep learning models, such as convolutional neural networks (CNNs), to further enhance robustness and reduce reliance on handcrafted features for real-time industrial applications.
As further future work, we plan to incorporate additional texture descriptors such as Local Binary Patterns (LBP) to improve the model's ability to capture fine morphological differences between closely related classes. We will also incorporate additional color spaces such as HSV and Lab to enrich the feature representation and improve fine-grained class separation, especially for visually similar classes like A and B. In addition, we will apply hyperparameter optimization to the LightGBM model, using techniques such as grid search, random search, or Bayesian optimization to tune parameters like learning_rate, num_leaves, n_estimators, feature_fraction, and bagging_fraction, with the aim of improving classification accuracy and reducing processing time. Larger-scale benchmarking and real-time testing will also be conducted to more accurately assess the model's suitability for industrial implementation. Finally, to address the limited dataset, we will expand it with additional samples collected from actual production lines under different environmental settings and explore data augmentation techniques to increase sample diversity.

Author Contributions

Conceptualization, P.S. and N.F.; methodology, P.S. and N.F.; software, P.S. and A.L.H.; visualization, P.S., A.L.H., K.C.B., and N.W.W.; investigation, A.L.H. and K.C.B.; writing—original draft, P.S.; writing—review and editing, N.F. and I.N.D.K.; supervision, N.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors thank the reviewers for their thorough reading and helpful comments and all their colleagues at the Distributed System Laboratory, Okayama University, who were involved in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Boopathi, N.M.; Raveendran, M. Moringa and Its Importance. In The Moringa Genome; Boopathi, N.M., Raveendran, M., Kole, C., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 1–9. [Google Scholar] [CrossRef]
  2. Shahidi, F.; Danielski, R. Review on the Role of Polyphenols in Preventing and Treating Type 2 Diabetes: Evidence from In Vitro and In Vivo Studies. Nutrients 2024, 16, 3159. [Google Scholar] [CrossRef] [PubMed]
  3. Aljazzaf, B.; Regeai, S.; Elghmasi, S.; Alghazir, N.; Balgasim, A.; Hdud Ismail, I.M.; Eskandrani, A.A.; Shamlan, G.; Alansari, W.S.; Al-Farga, A.; et al. Evaluation of Antidiabetic Effect of Combined Leaf and Seed Extracts of Moringa oleifera (Moringaceae) on Alloxan-Induced Diabetes in Mice: A Biochemical and Histological Study. Oxidative Med. Cell. Longev. 2023, 2023, 9136217. [Google Scholar] [CrossRef]
  4. Chiş, A.; Noubissi, P.A.; Pop, O.-L.; Mureşan, C.I. Bioactive Compounds in Moringa oleifera: Mechanisms of Action Focus on Their Anti-Inflammatory Properties. Plants 2023, 13, 20. [Google Scholar] [CrossRef]
  5. Badan Pusat Statistik (BPS). Statistik Ekspor Impor Indonesia: HS 121190—Tanaman dan Bagian Tanaman Untuk Keperluan Farmasi dan Parfum. 2024. Available online: https://www.bps.go.id/id/statistics-table/1/MjAxOSMx/ekspor-tanaman-obat–aromatik–dan-rempah-rempah-menurut-negara-tujuan-utama–2012-2023.html (accessed on 18 December 2025).
  6. Gamage, N.D.G.; Dharmadasa, R.M.; Abeysinghe, D.C.; Wijesekara, R.G.S.; Prathapasinghe, G.A.; Someya, T. Global Perspective of Plant-Based Cosmetic Industry and Possible Contribution of Sri Lanka to the Development of Herbal Cosmetics. Evid.-Based Complement. Altern. Med. eCAM 2022, 2022, 9940548. [Google Scholar] [CrossRef]
  7. Acerbi, F.; Rocca, R.; Fumagalli, L.; Taisch, M. Enhancing the cosmetics industry sustainability through a renewed sustainable supplier selection model. Prod. Manuf. Res. 2023, 11, 2161021. [Google Scholar] [CrossRef]
  8. Rathore, J.; Das, C. Moringa oleifera: A review of phytochemicals constituents and medicinal properties as a future source of new drugs. Int. J. Health Sci. 2022, 6, 6952–6976. [Google Scholar] [CrossRef]
  9. Xu, Y.; Chen, G.; Guo, M. Potential Anti-aging Components From Moringa oleifera Leaves Explored by Affinity Ultrafiltration with Multiple Drug Targets. Front. Nutr. 2022, 9, 854882. [Google Scholar] [CrossRef]
  10. Farm Africa. Reports on Advancing Youth Trainers Guide for Moringa Value Chain in Tanzania. 2024. Available online: https://www.farmafrica.org/wp-content/uploads/2024/07/advancing-youth-trainers-guide-for-moringa-value-chain-in-tanzaniav2.pdf (accessed on 18 December 2025).
  11. Tanjung, L.S.; Silma, M.; Yusmita, Y.; Sari, R.K. Process of Processing Moringa Leaves into Chocolate Moringa at PT. Mond Nature Sustainable. J. Eng. Sci. Technol. Manag. (JES-TM) 2024, 4, 91–97. [Google Scholar] [CrossRef]
  12. Kumar, E.S.; Talasila, V. Leaf features based approach for automated identification of medicinal plants. In Proceedings of the 2014 International Conference on Communication and Signal Processing, Melmaruvathur, India, 3–5 April 2014; pp. 210–214. [Google Scholar] [CrossRef]
  13. Meshram, R.S.; Patil, N. Classification of Medicinal Plants Using Machine Learning. In Intelligent Systems and Sustainable Computing; Reddy, V.S., Prasad, V.K., Mallikarjuna Rao, D.N., Satapathy, S.C., Eds.; Springer Nature: Singapore, 2022; pp. 255–267. [Google Scholar]
  14. Sugiartawan, P.; Funabiki, N.; Haz, A.L.; Wardani, N.W. Supervised Machine Learning Methods for Moringa Dry Leaf Image Recognition. In Proceedings of the 2024 IEEE International Symposium on Consumer Technology (ISCT), Kuta, Bali, Indonesia, 13–16 August 2024; pp. 74–80. [Google Scholar] [CrossRef]
  15. Kazakbaev, V.; Prakht, V.; Dmitrievskii, V.; Golovanov, D. Feasibility Study of Pump Units with Various Direct-On-Line Electric Motors Considering Cable and Transformer Losses. Appl. Sci. 2020, 10, 8120. [Google Scholar] [CrossRef]
  16. Kabir, M.; Unal, F.; Akinci, T.C.; Martinez-Morales, A.A.; Ekici, S. Revealing GLCM Metric Variations across a Plant Disease Dataset: A Comprehensive Examination and Future Prospects for Enhanced Deep Learning Applications. Electronics 2024, 13, 2299. [Google Scholar] [CrossRef]
  17. Chithra, R.S.; Jagatheeswari, P. Severity detection and infection level identification of tuberculosis using deep learning. Int. J. Imaging Syst. Technol. 2020, 30, 994–1011. [Google Scholar] [CrossRef]
  18. Hussain, D.; Hyeon Gu, Y. Exploring the Impact of Noise and Image Quality on Deep Learning Performance in DXA Images. Diagnostics 2024, 14, 1328. [Google Scholar] [CrossRef]
  19. Hu, C.; Sapkota, B.B.; Thomasson, J.A.; Bagavathiannan, M.V. Influence of Image Quality and Light Consistency on the Performance of Convolutional Neural Networks for Weed Mapping. Remote Sens. 2021, 13, 2140. [Google Scholar] [CrossRef]
  20. Kumar, S.; Singh, S. Plant Leaf Disease Detection: A Deep Hybrid Learning Approach. In Proceedings of the 2023 International Conference on Modeling, Simulation & Intelligent Computing (MoSICom), Dubai, United Arab Emirates, 7–9 December 2023; pp. 625–630. [Google Scholar] [CrossRef]
  21. Gbadebo, G.O.; Alhassan, J.K.; Ojerinde, O.A. Detection of Onion Leaf Disease Using Hybridized Feature Extraction and Feature Selection Approach. In Proceedings of the 2022 5th Information Technology for Education and Development (ITED), Abuja, Nigeria, 1–3 November 2022; pp. 1–6. [Google Scholar] [CrossRef]
  22. Pankaja, K.; Suma, V. Leaf Recognition and Classification Using GLCM and Hierarchical Centroid Based Technique. In Proceedings of the 2018 International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India, 11–12 July 2018; pp. 1190–1194. [Google Scholar] [CrossRef]
  23. Donesh, S.; Piumi Ishanka, U. Plant Leaf Recognition: Comparing Contour-Based and Region-Based Feature Extraction. In Proceedings of the 2020 2nd International Conference on Advancements in Computing (ICAC), Malabe, Sri Lanka, 10–11 December 2020; Volume 1, pp. 369–373. [Google Scholar] [CrossRef]
  24. Zachariah, N.; Kothari, S.; Ramamurthy, S.; Osunkoya, A.O.; Wang, M.D. Evaluation of performance metrics for histopathological image classifier optimization. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Chicago, IL, USA, 26–30 August 2014; pp. 1933–1936. [Google Scholar] [CrossRef]
  25. Jaiswal, G. Performance Analysis of Incremental Learning Strategy in Image Classification. In Proceedings of the 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, 28–29 January 2021; pp. 427–432. [Google Scholar] [CrossRef]
26. Wang, Q.; Peng, R.; Wang, J.; Xie, Y.; Zhou, Y. Research on Text Classification Method of LDA-SVM Based on PSO Optimization. In Proceedings of the 2019 Chinese Automation Congress (CAC), Hangzhou, China, 22–24 November 2019; pp. 1974–1978. [Google Scholar] [CrossRef]
  27. Widyananda, M.A.; Palupi, I. Implementation of the Spiral Optimization Algorithm in the Support Vector Machine (SVM) Classification Method (Case Study: Diabetes Prediction). In Proceedings of the 2021 International Conference Advancement in Data Science, E-learning and Information Systems (ICADEIS), Bali, Indonesia, 13–14 October 2021; pp. 1–6. [Google Scholar] [CrossRef]
  28. Zhou, Y.; Liu, T.; Yang, E. Machine Learning Model Based on Improved DBO Algorithm Optimized SVM. In Proceedings of the 2024 6th International Conference on Natural Language Processing, Xi’an, China, 22–24 March 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 162–168. [Google Scholar]
  29. Paula, P.C.; Kabeya, L.M.; Corrêa, A.D.; Pilau, E.J.; Winter, C.M.; Sampaio, M.F.M.; Magalhães, P.O. A Protein Isolate from Moringa oleifera Leaves Has Hypoglycemic and Antioxidant Effects in Alloxan-Induced Diabetic Mice. Molecules 2017, 22, 271. [Google Scholar] [CrossRef]
  30. Vergara-Jimenez, M.; Almatrafi, M.M.; Fernandez, M.L. Bioactive Components in Moringa oleifera Leaves Protect against Chronic Disease. Antioxidants 2017, 6, 91. [Google Scholar] [CrossRef]
  31. Zuiderveld, K. Contrast Limited Adaptive Histogram Equalization. In Graphics Gems IV; Heckbert, P.S., Ed.; Academic Press Professional, Inc.: San Diego, CA, USA, 1994; pp. 474–485. [Google Scholar]
  32. Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; Romeny, B.M.t.H.; Zimmerman, J.B.; Zuiderveld, K.J. Adaptive Histogram Equalization and Its Variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368. [Google Scholar] [CrossRef]
  33. Mishra, A. Contrast Limited Adaptive Histogram Equalization (CLAHE) Approach for Enhancement of the Microstructures of Friction Stir Welded Joints. arXiv 2021, arXiv:2109.00886. [Google Scholar] [CrossRef]
  34. Liu, J.; Zhu, Y.; Song, L.; Su, X.; Li, J.; Zheng, J.; Zhu, X.; Ren, L.; Wang, W.; Li, X. Optimizing window size and directional parameters of GLCM texture features for estimating rice AGB based on UAVs multispectral imagery. Front. Plant Sci. 2023, 14, 1284235. [Google Scholar] [CrossRef] [PubMed]
  35. Zhang, X.; Cui, J.; Wang, W.; Lin, C. A study for texture feature extraction of high-resolution satellite images based on a direction measure and gray level co-occurrence matrix fusion algorithm. Sensors 2017, 17, 1474. [Google Scholar] [CrossRef]
  36. Jishnu Sai, M.; Chettri, P.; Panigrahi, R.; Garg, A.; Bhoi, A.K.; Barsocchi, P. An Ensemble of Light Gradient Boosting Machine and Adaptive Boosting for Prediction of Type-2 Diabetes. Int. J. Comput. Intell. Syst. 2023, 16, 14. [Google Scholar] [CrossRef]
  37. Anghel, A.; Papandreou, N.; Parnell, T.; De Palma, A.; Pozidis, H. Benchmarking and Optimization of Gradient Boosting Decision Tree Algorithms. arXiv 2019, arXiv:1809.04559. [Google Scholar] [CrossRef]
  38. Ahammad, I.; Moni, M.A.; Lio, P. AITeQ: A Machine Learning Framework for Alzheimer’s Prediction Using a Gene Expression Signature. Brief. Bioinform. 2023, 25, bbae291. [Google Scholar] [CrossRef]
  39. Bentéjac, C.; Csörgő, A.; Martínez-Muñoz, G. A Comparative Analysis of Gradient Boosting Algorithms. Artif. Intell. Rev. 2021, 54, 1937–1967. [Google Scholar] [CrossRef]
  40. Kapoor, V.; Mahajan, M.; Batra, S. Comparative Study of Deep and Machine Learning Models in Knee Osteoarthritis Detection. In Proceedings of the 2025 International Conference on Automation and Computation (AUTOCOM), Dehradun, India, 4–6 March 2025; pp. 79–83. [Google Scholar] [CrossRef]
  41. Rahman, M.; Hasan, T.; Ahmed, K. Application of SVM in Agricultural Image Analysis. Sensors 2022, 22, 5342. [Google Scholar] [CrossRef]
  42. Tymoshchuk, D.; Yasniy, O.; Maruschak, P.; Iasnii, V.; Didych, I. Loading Frequency Classification in Shape Memory Alloys: A Machine Learning Approach. Computers 2024, 13, 339. [Google Scholar] [CrossRef]
43. Chang, C.C.; Lin, C.J. LIBSVM: A Library for Support Vector Machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27. [Google Scholar] [CrossRef]
  44. Corso, M.P.; Perez, F.L.; Stefenon, S.F.; Yow, K.C.; Ovejero, R.G.; Leithardt, V.R.Q. Classification of Contaminated Insulators Using k-Nearest Neighbors Based on Computer Vision. Computers 2021, 10, 112. [Google Scholar] [CrossRef]
  45. Brunello, A.; Marzano, E.; Montanari, A.; Sciavicco, G. J48SS: A Novel Decision Tree Approach for the Handling of Sequential and Time Series Data. Computers 2019, 8, 21. [Google Scholar] [CrossRef]
  46. Bokonda, P.L.; Sidibe, M.; Souissi, N.; Ouazzani-Touhami, K. Machine Learning Model For Predicting Epidemics. Computers 2023, 12, 54. [Google Scholar] [CrossRef]
  47. Al-Abadi, A.A.J.; Mohamed, M.B.; Fakhfakh, A. Enhanced Random Forest Classifier with K-Means Clustering (ERF-KMC) for Detecting and Preventing Distributed-Denial-of-Service and Man-in-the-Middle Attacks in Internet-of-Medical-Things Networks. Computers 2023, 12, 262. [Google Scholar] [CrossRef]
  48. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  49. Sreeja, Y.; Jeya Sheela, J.J. Dietary Analysis among Diabetic Patients using Gradient Boosting over Decision Tree. In Proceedings of the 2024 15th International Conference on Communications (COMM), Bucharest, Romania, 3–4 October 2024; pp. 1–4. [Google Scholar] [CrossRef]
  50. Shimamura, K.; Takamaeda-Yamazaki, S. FS-Boost: Communication-Efficient Federated Subtree-Based Gradient Boosting Decision Trees. In Proceedings of the 2024 IEEE 21st Consumer Communications and Networking Conference (CCNC), Las Vegas, NV, USA, 6–9 January 2024; pp. 839–842. [Google Scholar] [CrossRef]
  51. Yan, Z.F.; Shen, Y.L.; Liu, W.J.; Long, J.M.; Wei, Q. An E-Commerce Coupon Target Population Positioning Model Based on Random Forest and eXtreme Gradient Boosting. In Proceedings of the 2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Beijing, China, 13–15 October 2018; pp. 1–5. [Google Scholar] [CrossRef]
  52. Dutta, J.; Kim, Y.W.; Dominic, D. Comparison of Gradient Boosting and Extreme Boosting Ensemble Methods for Webpage Classification. In Proceedings of the 2020 Fifth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN), Bangalore, India, 26–27 November 2020; pp. 77–82. [Google Scholar] [CrossRef]
  53. Wang, Y. Evaluation of Tourist Satisfaction Based Light Gradient Boosting Machine Technique. In Proceedings of the 2024 International Conference on Intelligent Algorithms for Computational Intelligence Systems (IACIS), Hassan, India, 23–24 August 2024; pp. 1–5. [Google Scholar] [CrossRef]
  54. Yin, H. Android Malware Detection Using Convolutional Neural Networks and Light Gradient Boosting Machine: A Hybrid Method. In Proceedings of the 2024 6th International Conference on Internet of Things, Automation and Artificial Intelligence (IoTAAI), Guangzhou, China, 26–28 July 2024; pp. 75–79. [Google Scholar] [CrossRef]
  55. Yin, H. Enhancing Ionospheric Radar Returns Classification with Feature Engineering-Based Light Gradient Boosting Machine Algorithm. In Proceedings of the 2023 3rd International Conference on Computer Science, Electronic Information Engineering and Intelligent Control Technology (CEI), Wuhan, China, 15–17 December 2023; pp. 528–532. [Google Scholar] [CrossRef]
  56. Xiaofang, W.; Lan, L.; Qianyin, Z.; Fengyu, L.; Jiawei, L.; Di, H. Constructing Naive Bayesian Classification Model by Spark for Big Data. In Proceedings of the 2020 17th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, China, 18–20 December 2020; pp. 306–309. [Google Scholar] [CrossRef]
  57. Kalcheva, N.; Nikolov, N. Laplace Naive Bayes classifier in the classification of text in machine learning. In Proceedings of the 2020 International Conference on Biomedical Innovations and Applications (BIA), Varna, Bulgaria, 24–27 September 2020; pp. 17–19. [Google Scholar] [CrossRef]
  58. Shi, Y.; Yang, Y. An Algorithm for Incremental Tree-augmented Naive Bayesian Classifier Learning. In Proceedings of the 2010 International Conference on Artificial Intelligence and Computational Intelligence, Sanya, China, 23–24 October 2010; Volume 1, pp. 524–527. [Google Scholar] [CrossRef]
  59. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006; Available online: https://link.springer.com/book/9780387310732 (accessed on 18 December 2025).
60. Mao, Y.; Hui, Y. Indoor and Outdoor Scene Classification Method Based on Improved Shared K-Nearest Neighbor. In Proceedings of the 2024 20th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Guangzhou, China, 27–29 July 2024; pp. 1–6. [Google Scholar] [CrossRef]
  61. Charan, S.; M.S, S.; R, S. Prediction of Insufficient Accuracy for Human Activity Recognition with Limited Range of Age using K-Nearest Neighbor. In Proceedings of the 2023 Second International Conference on Electronics and Renewable Systems (ICEARS), Tuticorin, India, 2–4 March 2023; pp. 926–930. [Google Scholar] [CrossRef]
  62. Senasli, L.; Chitioui, M.; Damou, M.; Boudkhil, A.; Fatima, B.; Gounni, S. Applying the K Nearest Neighbor algorithm (KNN) in a microwave filter. In Proceedings of the 2024 International Conference on Advances in Electrical and Communication Technologies (ICAECOT), Setif, Algeria, 1–3 October 2024; pp. 1–5. [Google Scholar] [CrossRef]
  63. Bono, F.; Radicioni, L.; Cinquemani, S. A novel approach for quality control of automated production lines working under highly inconsistent conditions. Eng. Appl. Artif. Intell. 2023, 122, 106149. [Google Scholar] [CrossRef]
  64. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  65. Shorten, C.; Khoshgoftaar, T.M. A Survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
  66. Domingos, P. A Few Useful Things to Know about Machine Learning. Commun. ACM 2012, 55, 78–87. [Google Scholar] [CrossRef]
  67. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef]
  68. Phadikar, S.; Sil, J.; Das, A.K. Rice diseases classification using feature selection and rule generation techniques. Comput. Electron. Agric. 2013, 90, 76–85. [Google Scholar] [CrossRef]
69. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar] [CrossRef]
Figure 1. Samples of Moringa leaves: (left) fresh, (right) dried.
Figure 2. Overview of dataset creation process for dried Moringa leaves quality classification.
Figure 3. Standard setup for image acquisition.
Figure 4. Example images of six Moringa leaves quality classes after preprocessing.
Figure 5. Brightness normalization of dried Moringa leaves images by CLAHE.
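For reproducibility, the following minimal sketch illustrates the kind of CLAHE-based brightness normalization shown in Figure 5, using OpenCV; the clip limit, tile grid size, and file names here are illustrative assumptions rather than the exact settings of our pipeline.

    # Minimal sketch of CLAHE brightness normalization with OpenCV.
    # The clip limit, tile grid size, and file names are assumptions.
    import cv2

    img = cv2.imread("leaf_sample.jpg")                   # BGR image
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)            # equalize lightness only
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)
    normalized = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
    cv2.imwrite("leaf_sample_clahe.jpg", normalized)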
Figure 6. Stages of image segmentation for dried Moringa leaves: (a) original image, (b) cropped image, and (c) segmented image with background removed.
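Similarly, the background removal in Figure 6c can be sketched with a simple threshold against the white backdrop; Otsu's method is used below as a plausible default, not necessarily the segmentation rule of the actual pipeline.

    # Sketch of background removal against the white backdrop (Figure 6c).
    # Otsu thresholding is an assumed default, not the exact method used.
    import cv2

    img = cv2.imread("leaf_sample_clahe.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Leaves are darker than the white background, so invert the binary mask.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    segmented = cv2.bitwise_and(img, img, mask=mask)      # keep leaf pixels only
    cv2.imwrite("leaf_sample_segmented.jpg", segmented)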
Figure 7. Workflow of two-stage classification method.
Figure 8. Overview of two-stage classification.
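As a companion to Figures 7 and 8, the following is a hedged sketch of the two-stage scheme, assuming the first stage merges Class A and Class B into a single label and the second stage then separates them; the function names and LightGBM defaults are ours, not the exact configuration reported in the main text.

    # Hedged sketch of the two-stage LightGBM classification (Figures 7 and 8).
    # Assumption: stage 1 merges Classes A and B; stage 2 separates them.
    import lightgbm as lgb
    import numpy as np

    def train_two_stage(X, y):
        """X: NumPy feature matrix; y: NumPy array of labels 'A'..'F'."""
        y1 = np.where(np.isin(y, ["A", "B"]), "AB", y)    # merged stage-1 labels
        stage1 = lgb.LGBMClassifier().fit(X, y1)
        ab = np.isin(y, ["A", "B"])
        stage2 = lgb.LGBMClassifier().fit(X[ab], y[ab])   # binary A-vs-B model
        return stage1, stage2

    def predict_two_stage(stage1, stage2, X):
        pred = stage1.predict(X).astype("<U2")
        ab = pred == "AB"
        if ab.any():                                      # refine merged labels
            pred[ab] = stage2.predict(X[ab])
        return pred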
Figure 9. Confusion matrix for LightGBM.
Figure 10. Confusion matrix of the first-stage classification with LightGBM.
Figure 11. Confusion matrix of the two-stage classification by LightGBM.
Table 1. Image acquisition parameters for dataset collection.
Parameter | Specification
Camera model | Nikon D5200 DSLR (24.1-megapixel sensor; Nikon Corporation, Tokyo, Japan)
Image format | JPEG
Image resolution | 4000 × 3000 pixels
Illumination level | 500–550 lux
ISO sensitivity | 150 (to reduce noise and maintain sharp details)
Aperture | f/8 (to achieve uniform depth of field)
Shutter speed | 1/125 s (to prevent motion blur)
White balance | Daylight (5500 K)
Color calibration | X-Rite ColorChecker used for profile consistency
Background | White
Table 2. Descriptions of dried Moringa leaves quality classes.
Class Label | Description
A | Dried Moringa leaves with a bright green color, minimal stems, and uniform texture, representing the highest quality.
B | Bright green dried leaves with more visible stems and some pale or yellowish parts.
C | Darker green leaves with higher stem content and slightly uneven color and texture.
D | Very dark green leaves containing many stems and coarse surface texture.
E | Dull-colored dried leaves with numerous stems and damaged portions, indicating low quality.
F | Severely discolored and damaged leaves with poor structure, representing the lowest quality.
Table 3. Extracted color features from dried Moringa leaves image.
Feature Name | Description
average color red | The average intensity value of the red channel.
average color green | The average intensity value of the green channel.
average color blue | The average intensity value of the blue channel.
gray histogram (1–10) | The distribution of gray intensity values, divided into 10 bins.
red histogram (1–10) | The distribution of red intensity values, divided into 10 bins.
green histogram (1–10) | The distribution of green intensity values, divided into 10 bins.
blue histogram (1–10) | The distribution of blue intensity values, divided into 10 bins.
Table 4. Extracted texture features from dried Moringa leaves image using GLCM.
Feature Name | Description
Contrast | Measures the difference between the highest and lowest gray-level values, indicating texture roughness.
Correlation | Represents the linear dependency between neighboring pixel intensities.
Energy | Indicates textural uniformity; higher values show smoother texture patterns.
Homogeneity | Reflects image smoothness and the closeness of gray levels in the texture.
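To make Tables 3 and 4 concrete, the sketch below computes the 47 listed features (3 channel averages, 40 histogram bins, and 4 GLCM statistics) with NumPy and scikit-image; the GLCM distance, angle, and 256-level quantization shown here are assumptions, not necessarily the parameters of our pipeline.

    # Sketch of the color (Table 3) and GLCM texture (Table 4) features.
    # The GLCM distance, angle, and 256-level quantization are assumptions.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def extract_features(img):                            # img: H x W x 3 RGB uint8
        feats = {f"avg_{c}": img[..., i].mean() for i, c in enumerate("rgb")}
        gray = (0.299 * img[..., 0] + 0.587 * img[..., 1]
                + 0.114 * img[..., 2]).astype(np.uint8)
        # 10-bin histograms of gray and RGB intensities, normalized to frequencies.
        channels = [("gray", gray), ("red", img[..., 0]),
                    ("green", img[..., 1]), ("blue", img[..., 2])]
        for name, ch in channels:
            hist, _ = np.histogram(ch, bins=10, range=(0, 256))
            for k, v in enumerate(hist / hist.sum(), start=1):
                feats[f"{name}_{k}"] = v
        glcm = graycomatrix(gray, distances=[1], angles=[0],
                            levels=256, symmetric=True, normed=True)
        for prop in ["contrast", "correlation", "energy", "homogeneity"]:
            feats[f"glcm_{prop}"] = graycoprops(glcm, prop)[0, 0]
        return feats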
Table 5. Sample extracted features from dried Moringa leaves dataset.
Filename | Label | Avg R | Avg G | Avg B | Gray_1 | Red_1 | GLCM_Contrast
leaf_A1.jpg | A | 105.6 | 142.3 | 87.2 | 0.031 | 0.043 | 1.28
leaf_B1.jpg | B | 98.4 | 133.1 | 91.8 | 0.029 | 0.038 | 1.14
leaf_C1.jpg | C | 102.5 | 138.6 | 90.4 | 0.027 | 0.045 | 1.22
leaf_D1.jpg | D | 110.1 | 144.8 | 92.6 | 0.033 | 0.047 | 1.30
leaf_E1.jpg | E | 120.4 | 145.3 | 93.4 | 0.031 | 0.058 | 1.45
leaf_F96.jpg | F | 6.74 | 6.56 | 5.48 | 0.091 | 0.040 | 0.93
leaf_F97.jpg | F | 6.78 | 7.61 | 6.81 | 0.097 | 0.050 | 0.94
leaf_F98.jpg | F | 6.82 | 7.86 | 6.42 | 0.095 | 0.050 | 0.94
leaf_F99.jpg | F | 6.84 | 7.52 | 7.12 | 0.0973 | 0.050 | 0.95
leaf_F100.jpg | F | 6.87 | 6.51 | 7.3 | 0.086 | 0.050 | 0.94
Table 6. Structure of Confusion Matrix for binary classification.
 | Predicted Positive | Predicted Negative
Actual Positive | TP | FN
Actual Negative | FP | TN
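In terms of the entries in Table 6, the per-class metrics reported in Tables 8 and 10 follow the standard definitions:

\[
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\mathrm{Precision} = \frac{TP}{TP + FP},
\]
\[
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}.
\]

The macro average weights each class equally, while the weighted average weights each class by its support.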
Table 7. Performance of machine learning algorithms in six classes.
No | Model | Accuracy | Execution Time (s) | Mean Cross-Val
1 | Support Vector Machine (SVM) | 0.694 | 0.09 | 0.708
2 | Decision Tree | 0.711 | 0.28 | 0.736
3 | Random Forest | 0.811 | 2.42 | 0.795
4 | Gradient Boosting (GBDT) | 0.794 | 35.56 | 0.831
5 | XGBoost | 0.822 | 10.40 | 0.848
6 | LightGBM | 0.820 | 0.64 | 0.836
7 | Naive Bayes | 0.567 | 0.02 | 0.629
8 | K-Nearest Neighbor (K-NN) | 0.738 | 0.20 | 0.717
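The comparison in Table 7 can be reproduced in outline with scikit-learn, as in the hedged sketch below; the synthetic data, the 80/20 split, the 5-fold cross-validation, and the two listed models stand in for the full feature table and the eight algorithms actually benchmarked.

    # Hedged sketch of the benchmark behind Table 7; the synthetic data,
    # 80/20 split, and 5-fold CV are assumptions standing in for the real setup.
    import time
    from lightgbm import LGBMClassifier
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.svm import SVC

    # 47 features mirror Tables 3 and 4 (3 averages + 40 bins + 4 GLCM stats).
    X, y = make_classification(n_samples=600, n_features=47, n_informative=20,
                               n_classes=6, random_state=42)
    models = {"SVM": SVC(), "LightGBM": LGBMClassifier()}  # other models omitted
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
    for name, model in models.items():
        start = time.perf_counter()
        acc = model.fit(X_tr, y_tr).score(X_te, y_te)      # held-out accuracy
        elapsed = time.perf_counter() - start
        cv_mean = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name}: acc={acc:.3f}  time={elapsed:.2f}s  cv={cv_mean:.3f}")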
Table 8. Performance of LightGBM.
Class | Precision | Recall | F1-Score | Support
A | 0.71 | 0.74 | 0.79 | 23
B | 0.74 | 0.75 | 0.79 | 20
C | 0.85 | 0.89 | 0.79 | 19
D | 0.85 | 0.65 | 0.73 | 17
E | 0.81 | 0.76 | 0.79 | 17
F | 0.96 | 0.96 | 0.96 | 24
Accuracy |  |  | 0.82 | 120
Macro Avg | 0.82 | 0.81 | 0.81 | 120
Weighted Avg | 0.82 | 0.82 | 0.82 | 120
Table 9. Comparison of performance with LightGBM.
No | Model | Accuracy | Time (s) | Mean Cross-Val
1 | Support Vector Machine (SVM) | 0.810 | 0.04 | 0.811
2 | Decision Tree | 0.821 | 0.17 | 0.838
3 | Random Forest | 0.867 | 2.78 | 0.859
4 | Gradient Boosting (GBDT) | 0.872 | 19.32 | 0.897
5 | XGBoost | 0.894 | 3.97 | 0.902
6 | LightGBM | 0.911 | 0.21 | 0.920
7 | Naive Bayes | 0.688 | 0.02 | 0.707
8 | K-Nearest Neighbor (K-NN) | 0.827 | 0.33 | 0.791
Table 10. Classification performance for Class A and Class B by LightGBM.
Class | Precision | Recall | F1-Score | Support
A | 1.00 | 0.81 | 0.89 | 21
B | 0.83 | 1.00 | 0.90 | 19
Accuracy |  |  | 0.90 | 40
Macro Avg | 0.91 | 0.90 | 0.90 | 40
Weighted Avg | 0.92 | 0.90 | 0.90 | 40
Table 11. Statistical comparison between baseline and two-stage classification accuracy.
Model | Mean Acc (%) | SD | t-Statistic | p-Value
Baseline (One-Stage) | 82.0 | 2.15 | – | –
Proposed (Two-Stage) | 90.0 | 1.62 | 5.87 | 0.0003
Note: The t-statistic and the corresponding p-value indicate a statistically significant difference (p < 0.05), where Mean Acc denotes the mean accuracy, SD represents the standard deviation, and the t-statistic refers to the value obtained from the t-test.
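The significance test summarized in Table 11 can be sketched with SciPy as below; the per-run accuracy lists are illustrative placeholders, and the paired variant of the t-test is an assumption, since the table does not state whether the runs were paired.

    # Hedged sketch of the t-test in Table 11; the accuracy lists are
    # illustrative placeholders, and the paired test is an assumption.
    from scipy import stats

    baseline = [0.80, 0.83, 0.81, 0.84, 0.82]             # one-stage runs
    two_stage = [0.89, 0.91, 0.90, 0.92, 0.88]            # two-stage runs
    t_stat, p_value = stats.ttest_rel(two_stage, baseline)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")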
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
