Article

Evaluation of Multilevel Thresholding in Differentiating Various Small-Scale Crops Based on UAV Multispectral Imagery

Department of Chemical and Earth Sciences, University of Fort Hare, Private Bag X1314, Alice 5700, South Africa
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(18), 10056; https://doi.org/10.3390/app151810056
Submission received: 11 June 2025 / Revised: 5 September 2025 / Accepted: 7 September 2025 / Published: 15 September 2025
(This article belongs to the Section Applied Physics General)

Abstract

Differentiation of various crops in small-scale farms is important for food security and economic development in many rural communities. Despite being the oldest and simplest classification technique, thresholding continues to gain popularity for classifying complex images. This study aimed to evaluate the effectiveness of a multilevel thresholding (MLT) technique in differentiating various crop types in small-scale farms. Three crop types were identified in the study area: cabbage, maize, and sugar bean. Analytical Spectral Devices (ASD) spectral reflectance data were used to detect subtle differences in the spectral reflectance of crops. Analysis of the ASD reflectance data revealed reflectance disparities among the surveyed crops in the green, red, near-infrared (NIR), and shortwave infrared (SWIR) wavelengths. The ASD reflectance data in the green, red, and NIR wavelengths were then used to define thresholds for the different crop types. The MLT technique was used to classify the surveyed crops on unmanned aerial vehicle (UAV) imagery, using the defined thresholds as input. Three other machine learning classification techniques were used to provide a baseline for evaluating the performance of the MLT approach: the multilayer perceptron (MLP) neural network, the radial basis function neural network (RBFNN), and Kohonen’s self-organizing map (SOM). An analysis of crop cover patterns revealed variations in crop area cover as predicted by the MLT and the selected machine learning techniques. The area covered by cabbage was 7.46%, 6.01%, 10.33%, 7.05%, 9.48%, and 7.04% as predicted by the MLT on the Blue band, MLT on the Green band, MLT on the NIR band, MLP, RBFNN, and SOM, respectively. The corresponding areas covered by maize were 13.62%, 26.41%, 12.12%, 11.03%, 12.19%, and 15.11%, and those occupied by sugar bean were 57.51%, 43.72%, 26.77%, 27.44%, 24.15%, and 16.33%. Accuracy assessment generally showed poor crop pattern prediction with all tested classifiers, with kappa index of agreement (KIA) values of 0.372, 0.307, 0.488, 0.531, 0.616, and 0.659 for the MLT on the Blue band, MLT on the Green band, MLT on the NIR band, MLP, RBFNN, and Kohonen’s SOM, respectively. Despite recommendations by recent studies, the MLT was found to be unsuitable for classifying complex features such as spectrally overlapping crops.

1. Introduction

The spatial configuration of different types of crops in small-scale farms is a prerequisite for establishing an efficient approach to sustainable crop production [1,2]. Remote sensing technology offers the prospect of spatially characterizing different crops [3,4]. From a remote sensing perspective, crops are characterized based on their spectral reflectance behavior across several channels of the electromagnetic spectrum. From an optical remote sensing perspective, changes in pixel values across different image bands become the basis for distinguishing a particular crop type from others in the imagery [5]. Therefore, satellite sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS) [6], the Landsat series [6,7], the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) [8], Satellite Pour l’Observation de la Terre (SPOT) [9], and Sentinel-2 [10] have become the basis for identifying various crop types due to their ease of access. However, accurately identifying various crop types in small-scale farms, whose plots are characteristically less than 10 m wide, remains a challenge with these sensors, because their spatial resolutions are coarser than individual small-scale farm plots [11].
The advent of unmanned aerial vehicle (UAV) technology has marked an enticing dawn for characterizing small-scale crops due to the ultra-high spatial resolution of UAV sensors [12,13,14,15]. Different sensors, such as RGB, multispectral, hyperspectral, or thermal cameras, can be mounted on UAV platforms to match the specifications of the crop variables under investigation. Furthermore, UAV data are less susceptible to atmospheric errors [16], since they record reflected radiation from below most of the atmosphere. However, these ultra-high spatial resolution images are often subject to noise due to the increased number of discernible features [17]. Though many UAV systems have a high radiometric resolution (e.g., 32-bit), their coarse spectral resolution makes it difficult to detect crop variations outside the recorded bands. Moreover, the intricacy of data in UAV imagery increases as sensor radiometric resolution is enhanced [18]. This further complicates the differentiation of crops, as variations in leaf positions may create overlapping reflectance intensities across different crops. This spectral entropy is even more challenging when a sensor with few spectral bands is used. For this reason, accurately classifying crops under these conditions becomes a challenge. Although classification algorithms such as maximum likelihood [19,20], minimum distance [21], support vector machine [22], K-Nearest Neighbor [23], and Random Forest [24] have been noted to be effective in characterizing various crops from UAV imagery, these approaches operate under the notion of crisp clustering, where each pixel’s spectral property belongs to a single crop type [25]. Moreover, these classifiers, which often require training data to learn patterns, generally classify crops based on maximum accuracy, which often results in the biased categorization of similar pixels towards the majority class and misclassification of the minority class.
Several studies have also employed the spectral unmixing approach to improve crop classification accuracy [26,27,28,29,30]. In this approach, the probability that an endmember belongs to a given pixel is constructed and subsequently fused with a linear spectral model and a classifier to compute crop cover [31]. This is performed under the notion that heterogeneous land features in the imagery become more apparent as the spatial resolution of the image improves [32]. Although this approach was devised to overcome the shortfalls of empirical methods for characterizing crops [33], it works under the supposition that the spectral reflectance of a particular pixel is the sum of the spectral reflectance of the numerous discernible features within that pixel [34]. Overall, this approach only aims to resolve classification problems related to different land cover types occupying the same pixel, not linearly inseparable spectral features. The spectral similarity and difference metric (SSDM) technique has also been recommended to minimize land cover misclassification problems [35]. This approach is often used to characterize land features based on interclass spectral separability or intraclass spectral variability [36]. Moreover, the advent of advanced pattern recognition techniques, such as deep learning, has improved crop characterization. In comparison with the conventional classification techniques mentioned above, deep learning techniques have been shown to be capable of learning linearly inseparable data [37]. Techniques such as the multilayer perceptron (MLP) [38], radial basis function (RBF) [39], and Kohonen’s self-organizing map (SOM) [40] have evidently been applied in pattern recognition.
Despite thresholding being the oldest image classification technique [41], and despite recent developments in pattern recognition through machine learning and deep learning [42,43], recent studies still emphasize the significance of thresholding (a rule-based classifier) for segmenting complex features [44,45,46,47]. Multilevel thresholding (MLT), in particular, was proposed to partition images into multiple classes [44]. Despite its simplicity, Kumar et al. [48] noted that the MLT technique can segment a complex crop image. The advantage of this approach lies in its ability to use dynamic rules and in the fact that it relies on neither human supervision nor training data [49]. The technique partitions images by employing either the Otsu class variance method [49] or the Kapur maximum entropy method [50]. Whereas the Otsu thresholding approach uses the maximum variance between classes to partition the image [51], the Kapur approach maximizes the entropy of the image histogram to evaluate the uniformity among various classes and determine the best threshold values [52]. When other classification approaches are used, Akgün et al. [53] noted that all pixels in the image tend to be allocated to one of the existing classes, unless a thresholding technique is used or, in the case of crop type classification, an additional class denoting non-crop features is added. Given its simplicity, flexibility, and global search capabilities [54], studies have also revealed that the MLT classifier can achieve accurate classification of complex features [55,56,57]. This underscores the necessity of evaluating the efficacy of this approach in spectrally differentiating various crop types. Moreover, we have not found studies that evaluated the efficacy of the MLT classifier in differentiating various crop types. While hyperspectral remote sensing offers a great deal of spectral information pertaining to crop types, the need to spatially partition the surveyed crops in the study area prompted the integration of UAV multispectral imagery and non-imaging field spectrometric data.
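To make the Otsu-based partitioning strategy concrete, the sketch below applies multilevel Otsu thresholding to a single image band. This is an illustrative example only, not the workflow used in this study; it assumes a NumPy array `band` standing in for a real UAV band and uses scikit-image’s `threshold_multiotsu`, which returns the cut points that maximize the between-class variance of the histogram.

```python
# Minimal sketch of multilevel Otsu thresholding on one synthetic band.
import numpy as np
from skimage.filters import threshold_multiotsu

rng = np.random.default_rng(0)
band = rng.random((100, 100))  # placeholder for a real UAV spectral band

# Three cut points partition the band into four classes by maximizing
# the between-class variance of the image histogram (Otsu's criterion).
thresholds = threshold_multiotsu(band, classes=4)
labels = np.digitize(band, bins=thresholds)  # per-pixel class labels 0..3
print(thresholds, np.bincount(labels.ravel()))
```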

2. Materials and Methods

2.1. Study Area

The small-scale farms in which the current study was conducted are found in the Mutale River catchment, situated in the Vhembe District of the Limpopo Province of South Africa. The area is dominated by small-scale farmers who practice subsistence farming. The study area extends between 22°47′41.95″ S, 30°29′08.21″ E and 22°47′58.61″ S, 30°29′19.83″ E, at elevations ranging between 620 m and 660 m above sea level. The presence of various small-scale crop types cultivated under irrigation prompted the selection of the study area. In this area, farmers tend to cultivate a variety of crops irrespective of the season. Crops cultivated in the study area include maize, cabbage, sweet potatoes, and sugar bean [11]. Figure 1 presents the location of the study site in South Africa.

2.2. Data Acquisition

The following types of data were collected and utilized to attain the main purpose of the current study.

2.2.1. UAV Multispectral Imagery

The UAV multispectral imagery used in this study was acquired using a MicaSense RedEdge-MX imaging sensor mounted on a DJI Matrice 600 Pro supplied by SZ DJI Technology Co., Ltd., Shenzhen, China. The focal length of the RedEdge sensor was 5.5 mm. With the DJI Matrice 600 Pro operating at an altitude of 120 m, the MicaSense RedEdge-MX sensor captured multispectral imagery of flat cropland at a spatial resolution of 8.7 cm. This UAV imaging system enabled the rapid acquisition of crop data, offering aerial access to inaccessible areas while covering a large area. A side lap and end lap of 75% were used during landscape imaging. The maximum area covered was approximately 15 hectares. A clear sky, a wind speed of at most 10 m/s, and midday acquisition (i.e., between 12:00 and 14:00) were considered viable environmental conditions for imagery acquisition. Prior to image acquisition, the MicaSense RedEdge sensor was calibrated using the Calibrated Reflectance Panel (CRP) placed on flat ground in an open area and imaged from a height of about 1 m. Table 1 provides the spectral wavelength center and range for each band of the UAV multispectral sensor used in this study.

2.2.2. Field Spectrometry Data

An Analytical Spectral Devices (ASD) FieldSpec 4 Hi-Res spectroradiometer was used to obtain hyperspectral reflectance data for the crops under investigation in the study area. This non-imaging system acquired spectral reflectance data for crops across the visible, NIR, and SWIR spectral wavelengths. Prior to data collection, the ASD FieldSpec 4 spectroradiometer was warmed up for 30 min. Moreover, the ASD FieldSpec 4 device was recalibrated after each crop sampling to ensure data quality. A distance of 50 cm was maintained between the light source and the spectral gun during spectral reflectance data collection. A total of five spectral reflectance samples were collected from each sampled leaf for each crop type. A centimeter-level precision Ashtech® ProMark2™ Global Positioning System (GPS) device was used to record the absolute locations where the spectral reflectance samples were collected.

2.3. UAV Imagery Preparation

Prior to image analysis, we ortho-mosaicked the UAV image tiles to cover the study area. This was guided by six ground control points (GCPs) established prior to flying the UAV. We attached visible white papers to the survey poles and recorded their locations using the GPS. Geometric correction was carried out to assign spatial reference properties to the UAV imagery. In this case, the imagery was spatially referenced to the Universal Transverse Mercator (UTM) Zone 36S, based on the World Geodetic System 1984 (WGS84) spheroid. This was carried out using the ArcGIS Drone2Map program supplied by ESRI South Africa, Midrand, South Africa. The UAV multispectral bands were also radiometrically corrected using the “Radiance” tool in the TerrSet 18.31 Geospatial Monitoring and Modeling program supplied by Clark Labs, Clark University, Worcester, MA, USA. In this study, the $L_{min}/L_{max}$ option was used to convert digital numbers ($D_n$) to radiance using Equation (1):
$$\text{Radiance} = L_{min} + \frac{D_n}{D_{n,max}} \left( L_{max} - L_{min} \right)$$
where $L_{min}$ and $L_{max}$ denote the radiance values corresponding to the minimum and maximum $D_n$ values, respectively, specified in milliwatts per square centimeter per steradian per micron ($mW\,cm^{-2}\,sr^{-1}\,\mu m^{-1}$), and $D_{n,max}$ denotes the maximum $D_n$ value in the band.
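As an illustration, a minimal sketch of the conversion in Equation (1) is given below. It assumes `dn` is a NumPy array of digital numbers for one band, and the `lmin` and `lmax` values are placeholders rather than the sensor’s actual calibration limits.

```python
# Minimal sketch of the Dn-to-radiance conversion in Equation (1).
import numpy as np

def dn_to_radiance(dn: np.ndarray, lmin: float, lmax: float) -> np.ndarray:
    """Linearly scale digital numbers between Lmin and Lmax (Equation (1))."""
    return lmin + (dn / dn.max()) * (lmax - lmin)

dn = np.array([[0, 512, 1023], [256, 768, 1023]], dtype=float)
radiance = dn_to_radiance(dn, lmin=0.2, lmax=15.3)  # mW cm^-2 sr^-1 um^-1
print(radiance)
```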

2.4. Assessing Variability in the UAV Spectral Radiance Properties of Crops

A total of three point-based GIS vector layers were created. Subsequently, 200 points were randomly digitized in the vicinity of each crop type, guided by the RGB image and field knowledge. The digitized points were overlaid on each UAV spectral band, and the underlying pixel values were extracted using the “Extract Multi Values to Points” tool in the Spatial Analyst Tools of the ArcMap 10.8.2 software package. The point attribute table containing the pixel values for each crop type was exported to Microsoft Excel format for the analysis of crop spectral radiance and spectral vegetation index values. Variability in the response of crops to radiation in different spectra was assessed using Equation (2), adopted from Levene [58]:
$$W = \frac{N - k}{k - 1} \cdot \frac{\sum_{i=1}^{k} N_i \left( \bar{Z}_i - \bar{Z} \right)^2}{\sum_{i=1}^{k} \sum_{j=1}^{N_i} \left( Z_{ij} - \bar{Z}_i \right)^2}$$
where $N$ represents the total sample size; $k$ denotes the number of groups; $Z_{ij} = \left| Y_{ij} - \bar{Y}_i \right|$, where $Y_{ij}$ and $\bar{Y}_i$ denote the $j$th observation and the mean of the $i$th subgroup, respectively; $\bar{Z}_i$ represents the mean of the $Z_{ij}$ for group $i$; and $\bar{Z}$ represents the overall mean of the $Z_{ij}$.
The computed p-value was then evaluated to explain the variations in the extracted UAV spectral radiance properties across crop types.
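The sketch below illustrates this variability test, assuming three synthetic arrays of pixel values standing in for the cabbage, maize, and sugar bean samples; `scipy.stats.levene` with `center='mean'` computes the classic Levene W statistic of Equation (2).

```python
# Minimal sketch of Levene's k-comparison of equal variance (Equation (2)).
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(1)
cabbage = rng.normal(0.40, 0.05, 200)     # synthetic radiance samples
maize = rng.normal(0.30, 0.08, 200)
sugar_bean = rng.normal(0.25, 0.03, 200)

# center='mean' gives Levene's original statistic; a small p-value
# indicates unequal variances across the crop groups.
w_stat, p_value = levene(cabbage, maize, sugar_bean, center='mean')
print(f"W = {w_stat:.3f}, p = {p_value:.4g}")
```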

2.5. Spectral Profiling of UAV Spectral Radiance Properties of Crops

The spectral radiance properties of crops were plotted with a view to identifying the UAV spectral band most suitable for differentiating crops. In this case, the mean radiance values computed alongside Levene’s k-comparison of equal variance statistics were used. A line graph was plotted to show the spectral radiance behavior of the crops.

2.6. Crop Spectral Reflectance Thresholding Selection

In our study, we proposed a thresholding optimization approach that relies on ground spectral reflectance data for adjacent crop types. This threshold optimization approach considers descriptive statistics, i.e., the minimum and maximum reflectance of each crop type. We proposed Equation (3) to optimize the upper and lower thresholds between crop types, such that
$$T_o = \frac{T_{i,u} + T_{i,l}}{2}$$
where $T_o$ denotes the optimized spectral band threshold value between adjacent crop types; $T_{i,l}$ denotes the lower threshold value of the upper class; and $T_{i,u}$ denotes the upper threshold value of the lower class.
Crop reflectance data for each wavelength are first arranged in ascending order ($x_1, x_2, x_3, \ldots, x_n$). In our study, we adopted the interquartile range threshold selection approach proposed by Tukey [59]:
$$T_{i,l} = Q_{i,l} - c \times IQR_i$$
$$T_{i,u} = Q_{i,u} + c \times IQR_i$$
where $IQR_i$ denotes the interquartile distance, computed using Equation (6), and $c$ is a control parameter decided by the user. Bardet and Dimby [60] recommended the use of a factor of 1.5 to ensure that the probability of a Gaussian random variable being classified as an outlier is approximately 0.007:
$$IQR_i = Q_{i,u} - Q_{i,l}$$
where $Q_{i,u}$ denotes the upper quartile value of each crop’s reflectance data, and $Q_{i,l}$ denotes the lower quartile value of each crop’s reflectance data. $Q_{i,u}$ and $Q_{i,l}$ were computed using Equations (7) and (8):
$$Q_{i,u} = x_i\left(p_{i,u}\right)$$
$$Q_{i,l} = x_i\left(p_{i,l}\right)$$
where $p_{i,u}$ denotes the position of the upper quartile, computed using Equation (9), and $p_{i,l}$ denotes the position of the lower quartile, computed using Equation (10):
$$p_{i,u} = \frac{3 \times \left(n_i + 1\right)}{4}$$
$$p_{i,l} = \frac{n_i + 1}{4}$$
where $n_i$ denotes the number of reflectance samples for crop $i$.
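A minimal sketch of the threshold optimization in Equations (3)–(10) is given below, assuming hypothetical reflectance samples for two spectrally adjacent crops; note that `np.percentile` interpolates quartile positions slightly differently from the $(n_i+1)/4$ rule in Equations (9) and (10).

```python
# Minimal sketch of IQR-based threshold selection (Equations (3)-(10)).
import numpy as np

def tukey_fences(x, c=1.5):
    """Return (T_l, T_u) = (Q1 - c*IQR, Q3 + c*IQR), Equations (4)-(6)."""
    q_l, q_u = np.percentile(x, [25, 75])
    iqr = q_u - q_l
    return q_l - c * iqr, q_u + c * iqr

lower_crop = np.array([0.48, 0.50, 0.52, 0.55, 0.58])  # hypothetical crop A
upper_crop = np.array([0.60, 0.63, 0.66, 0.70, 0.72])  # hypothetical crop B

_, t_u_lower = tukey_fences(lower_crop)   # upper fence of the lower class
t_l_upper, _ = tukey_fences(upper_crop)   # lower fence of the upper class
t_o = (t_u_lower + t_l_upper) / 2         # optimized threshold, Equation (3)
print(round(t_o, 3))
```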

2.7. Multilevel Thresholding of Crop Types

The optimized minimum and maximum thresholds were used as input data for the multilevel thresholding to differentiate crop types. The multilevel thresholding was then applied to the corresponding UAV spectral bands using Equation (11), adopted from Jiang et al. [61]:
$$I_T(x, y) = \begin{cases} C_1, & \text{if } T_o \leq I(x, y) < T_{o1} \\ C_2, & \text{if } T_{o1} \leq I(x, y) < T_{o2} \\ C_3, & \text{if } T_{o2} \leq I(x, y) < T_{o3} \\ C_4, & \text{if } T_{o3} \leq I(x, y) < T_{o4} \\ 0, & \text{otherwise} \end{cases}$$
where $I_T(x, y)$ denotes the classified pixel at location $(x, y)$; $I(x, y)$ denotes the input pixel value; $C_1, \ldots, C_4$ denote the crop classes; and $T_o, T_{o1}, \ldots, T_{o4}$ denote the minimum and maximum threshold values for each crop type.
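The sketch below illustrates the class assignment rule in Equation (11), assuming `band` is a NumPy array of reflectance values and `cuts` holds illustrative threshold values in ascending order; pixels falling outside all intervals are labeled 0 (non-crop).

```python
# Minimal sketch of multilevel threshold classification (Equation (11)).
import numpy as np

def multilevel_threshold(band, cuts):
    labels = np.zeros(band.shape, dtype=np.uint8)       # 0 = non-crop
    for class_id, (lo, hi) in enumerate(zip(cuts[:-1], cuts[1:]), start=1):
        labels[(band >= lo) & (band < hi)] = class_id   # C1, C2, ...
    return labels

band = np.array([[0.20, 0.30, 0.55], [0.65, 0.80, 0.95]])
cuts = [0.24, 0.35, 0.62, 0.75, 0.90]   # illustrative thresholds only
print(multilevel_threshold(band, cuts))
```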

2.8. Classification of Crops Using Machine Learning Algorithms

In our study, we also selected three machine learning algorithms, viz., the MLP, RBFNN, and SOM, to differentiate crop types, with a view to evaluating the performance of the MLT technique against them. Prior to model training, training sites were created for the algorithms to analyze in order to learn patterns and make predictions. Table 2 explains the sequences employed to achieve crop type differentiation using the selected machine learning algorithms; a generic sketch of one such baseline follows.
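The sketch below trains a multilayer perceptron on synthetic pixel samples via scikit-learn as a stand-in for the neural classifiers named above; the actual architectures and training sequences used in this study are those given in Table 2, not the placeholder settings shown here.

```python
# Minimal sketch of a supervised neural baseline on synthetic pixel samples.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.random((600, 5))           # 600 training pixels x 5 UAV bands
y = np.repeat([1, 2, 3], 200)      # cabbage, maize, sugar bean labels

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
mlp.fit(X, y)                      # learn spectral patterns per crop class
predicted = mlp.predict(rng.random((4, 5)))   # classify new pixels
print(predicted)
```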

2.9. Classification Accuracy Assessment

The performance of thresholding, along with selected machine learning classifiers, was evaluated. In this case, the ground truth image, classified based on GPS points collected in the vicinity of each crop type, was compared with spectrally classified crops. This was achieved using the overall kappa index of agreement (KIA), computed using Equation (29) and adopted from Cohen [67]:
$$k = \frac{p_0 - p_e}{1 - p_e}$$
where $p_0$ denotes the proportion of pixels assigned to the correct class, and $p_e$ denotes the proportion of pixels expected to be assigned to the correct class by chance. KIA values range from 0 to 1, with values closer to 0 indicating poor classification performance and values closer to 1 indicating accurate classification results.
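The sketch below illustrates the KIA computation of Equation (29) on synthetic label arrays, assuming `reference` and `predicted` are flattened per-pixel class labels from the ground truth and classified images; `cohen_kappa_score` from scikit-learn implements the same formula.

```python
# Minimal sketch of the kappa index of agreement (Equation (29)).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
reference = rng.integers(0, 4, 1000)    # 4 classes incl. non-crop
predicted = np.where(rng.random(1000) < 0.7, reference,
                     rng.integers(0, 4, 1000))   # ~70% raw agreement

# Equivalent by hand: p0 = observed agreement, pe = chance agreement
# from the confusion-matrix marginals, k = (p0 - pe) / (1 - pe).
kia = cohen_kappa_score(reference, predicted)
print(f"KIA = {kia:.3f}")
```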

3. Results

3.1. Identification of Crops Cultivated in the Study Site

During field observation, a total of three crop types were identified in the study area: cabbage, maize, and sugar bean. Figure 2 shows the crop types identified in the study area, as imaged by the UAV system.

3.2. Spectral Characterization of Crops from UAV Imagery

Levene’s k-comparison of equal variance statistics for analyzing variability in crop patterns imaged using the UAV multispectral sensor are provided in Table 3.
From Table 3, the UAV spectral radiance properties were noted to vary across the surveyed crops. At N = 200 and α = 0.05, Levene’s k-comparison test produced a p-value < 0.001, which was less than the 0.05 significance level, signifying variability in the UAV spectral radiance properties across the surveyed crops.

3.3. Hyperspectral Reflectance Patterns of the Identified Crops

In this study, field spectrometric data were used to analyze spectral distinction across the surveyed crop types in the study area. Figure 3 presents the reflectance spectrum of the surveyed crops, i.e., cabbage, maize, and sugar bean.
From Figure 3a, cabbage showed a higher reflectance in the visible–NIR spectral region than the other surveyed crops, while sugar bean showed a higher reflectance than the other crops in the SWIR spectral region. From Figure 3b, clear reflectance disparities were noted in the labeled spectral regions (i.e., a–c in Figure 3b). By implication, reflectance data generated from these spectral regions could be ideal for thresholding the surveyed crops.

3.4. Reflectance Behavior of Maize Cultivars in ASD Spectral Regions Corresponding to the UAV Spectral Wavelengths

The hyperspectral reflectance curves for the surveyed crops were analyzed, with specific focus on the wavelengths corresponding to the wavelengths of UAV spectral bands. Figure 4 provides the reflectance patterns of the surveyed crops in the spectral regions corresponding to wavelengths of the UAV spectral bands.
Analysis of the spectral reflectance properties of the surveyed crops in the spectral wavelengths corresponding to the UAV spectral bands showed that these crops can be spectrally differentiated in the Green and NIR wavelengths. In the Blue spectral band, only cabbage could be clearly identified, owing to its higher reflectance compared with the other surveyed crops, while maize had a reflectance similar to that of sugar bean (Figure 4).
Moreover, Levene’s k-comparison of equal variance was computed for crop reflectance data acquired from the ASD spectral wavelengths corresponding to UAV spectral bands (see Table 4).
From Table 4, the ASD spectral reflectance properties were noted to vary across the surveyed crops. At N = 50 and α = 0.05, Levene’s k-comparison test produced a p-value < 0.001, which was less than the 0.05 significance level, signifying variability in the ASD spectral reflectance properties across the surveyed crops. Therefore, the computed mean reflectance values were used to spectrally profile the various crops in the study area. The mean spectral reflectance values obtained from the ASD spectrometer were compared with the mean spectral radiance values acquired from the UAV imagery to assess the degree of agreement between crop reflectance and radiance properties (Figure 5).
From Figure 5, both the UAV mean radiance and ASD mean reflectance properties of the surveyed crops were noted to follow similar patterns across the specified spectral wavelengths. However, the UAV mean radiance showed a slight deviation from the ASD mean reflectance of cabbage in the 0.533 µm–0.587 µm and 0.785 µm–0.899 µm spectral wavelengths (Figure 5a). In Figure 5b,c, a discrepancy between the UAV mean radiance and ASD mean reflectance for maize was noted in the 0.785 µm–0.899 µm spectral wavelength.

3.5. Reflectance $Q_1$, $Q_3$, and $IQR$ for the Surveyed Crops

The $Q_1$, $Q_3$, and $IQR$ were computed for each surveyed crop, based on the spectral wavelengths in which a reflectance discrepancy among the crops was observed. Table 5 shows the $Q_1$, $Q_3$, and $IQR$ values computed for each surveyed crop type.
Based on the computed $Q_1$, $Q_3$, and $IQR$ values, the upper and lower thresholds for each surveyed crop type were computed and are presented in Table 6.
Upon the computation of the $T_{i,l}$, $T_{i,u}$, and $IQR_i$ values, the thresholds separating the surveyed crop types were computed. Table 7 provides the computed thresholds used to classify crops in the study area.
From Table 7, the reflectance thresholds between cabbage and maize and between maize and sugar bean, as computed from the 0.443 µm–0.507 µm spectral wavelength, were found to be 0.53 and 0.67, respectively. The corresponding thresholds computed from the 0.533 µm–0.587 µm spectral wavelength were 0.24 and 0.35, and those computed from the 0.785 µm–0.899 µm spectral wavelength were 0.62 and 0.75.

3.6. Optimized Threshold Values Selected from 0.443 µm–0.507 µm and 0.533 µm–0.587 µm Wavelengths

The computed thresholds were applied to differentiate the surveyed crops. Machine learning techniques were also applied to classify the crops, enabling the performance evaluation of the MLT against the machine learning algorithms. Figure 6 presents the surveyed crops differentiated using the MLT and the selected machine learning algorithms.
The accuracy assessment results of the classified crop types using thresholding, along with selected machine learning techniques, are presented in Table 8.
From Table 8, analysis of the KIA generally showed the inefficiency of all the image partitioning techniques in differentiating the surveyed crops. The MLT approach was generally ineffective in categorizing the surveyed crops, as is apparent from the KIA values of 0.372, 0.307, and 0.488 for the MLT on the Blue, Green, and NIR bands, respectively. The MLP, RBFNN, and Kohonen’s SOM produced KIA values of 0.531, 0.616, and 0.659, respectively, which were below the acceptable KIA value of 0.7. This generally confirms a significant proportion of misclassified crops. However, Kohonen’s SOM produced promising results compared with the other evaluated techniques.

3.7. Area Determination of the Surveyed Crops

The area covered by each surveyed crop type, as predicted by the MLT and the selected machine learning techniques, is presented in Figure 7.
From Figure 7, the area covered by cabbage, as predicted by the MLT on the Blue band, MLT on the Green band, MLT on the NIR band, MLP, RBFNN, and SOM, was noted to be 7.46%, 6.01%, 10.33%, 7.05%, 9.48%, and 7.04%, respectively. The corresponding areas covered by maize were 13.62%, 26.41%, 12.12%, 11.03%, 12.19%, and 15.11%, and those occupied by sugar bean were 57.51%, 43.72%, 26.77%, 27.44%, 24.15%, and 16.33%. Generally, misclassification was noted in all crop classes predicted by all the deployed classification techniques. In Figure 6a, the MLT on the Blue spectral band misclassified areas of bare surface as maize. In Figure 6c, the MLP predicted cabbage in areas covered by sugar bean.

4. Discussion

The ability of multispectral remote sensing technology to characterize crops has not gone unnoticed [68,69]. As such, crop phenotyping offers traits that are critical for distinguishing various crops by analyzing the response of their traits to radiation received in different electromagnetic spectral wavelengths [70]. Ndou et al. [71] stated that UAV imaging systems can provide the successful acquisition of high spatial resolution imagery with detailed information regarding crop characteristics. However, information provided by UAV imaging systems is very complex, because these systems are also capable of detecting even the smallest undesired features [72]. UAV imagery often employs discrete spectral bands, which may lack specific spectral information outside the specified bandwidths [73]. Torres et al. [74] noted that spectral-based classification yields accurate results when utilizing sensors with many spectral bands. Nevertheless, non-imaging hyperspectral sensors offer continuous spectral information in numerous narrow bands (350–2500 nm), which allows for the detailed analysis of different crop properties [75]. They are employed in real-time, in situ leaf or canopy reflectance measurements, thus reducing the effect of background noise [76]. However, Potgieter et al. [77] noted that distinguishing spectrally overlapping crops remains a challenge. Therefore, the differentiation of crop types by thresholding UAV imagery based on ASD reflectance data was tested in this study.
In this study, true color composite (TCC) imagery, coupled with field knowledge, served as the base from which the crops in the study area were identified. Durañona Sosa et al. [78] noted that using TCC imagery for crop identification is relevant, because working with other imagery, such as single bands or spectral vegetation indices, is not easy, owing to their inability to facilitate image morphological filtering. Levene’s k-comparison of equal variance facilitated the analysis of variability in spectral radiance/reflectance across the surveyed crops. Levene’s results produced p-values that were <0.01, signifying variability in spectral radiance/reflectance across the surveyed crops. This test is usually applied to overcome challenges associated with the analysis of variance (ANOVA), which determines whether k populations have a common mean μ [79], and it facilitates the diagnosis of the normality of data [80]. The ANOVA test would otherwise have been used had the computed p-values been >0.05. Although Serbin et al. [8] noted that maize and various other crops exhibit similar reflectance signatures in the visible (400–700 nm) and near-infrared (700–1100 nm) spectral wavelengths, the current study found that the maize cultivar can be spectrally distinguished from other crops in the Blue, Green, and NIR spectrometric wavelengths. Angulo and Pamboukian [81] noted that maize cultivars can be distinguished from other crop types, such as soy, oats, and rice, in the NIR and Green regions of the spectrum. Maize, known for its broad, long leaves, has a unique NIR reflectance profile that distinguishes it from other crops [82]. Through the combination of UAV multispectral imagery and non-imaging hyperspectral remote sensing techniques, Sudu et al. [83] noted that crops can be effectively and quantitatively analyzed.
The MLT was selected for classifying crops in this study; its selection was prompted by several recent studies that noted its reliability in classifying complex image features [84,85]. Kumar et al. [48] noted that multilevel thresholding showed the ability to characterize crops and minimize non-crop background. Houssein et al. [86] also noted that the thresholding technique has the advantages of efficiency and versatility. In this study, we evaluated the MLT performance against selected machine learning techniques, viz., the MLP, RBFNN, and SOM. Mndela et al. [11] noted that various crop types in small-scale farms imaged using a UAV multispectral system can be distinguished by mean spectral profiling of sampled crop leaves. The advantage of using mean-based thresholding is that its structure relies entirely on the normal distribution [87]. However, using mean spectral reflectance values as the threshold for categorizing crop types undermines crop reflectance variability under varying illumination intensity. In the current study, we therefore used minimum and maximum thresholds to categorize crop types. Pixel values falling outside the defined thresholds were classified as non-crop features.
The comparison of classification results revealed shortcomings of the MLT against the selected machine learning algorithms (Table 8). Specifically, the MLT failed to produce the desired classification results, despite the statement by Mahmoud et al. [85] that thresholding based on statistical values can provide accurate image segmentation results. This challenge was not unique to the MLT; the selected machine learning approaches also failed to yield satisfactory classification results. Faridatul and Wu [88] noted that optimizing the threshold for a specific land feature class is important when classifying based on spectral vegetation indices. Despite the deployment of the IQR optimization technique on the MLT in the current study, as recommended by Zhao et al. [89], the results achieved were still poor. Despite the existence of numerous innovative and efficient image partitioning approaches in recent studies [90,91,92], a well-adapted classification technique applicable to areas with diverse crops remains elusive, owing to the challenges posed by multiple crop types, complex background information, and heterogeneous and altered leaf morphology due to crop infection.
The study did not account for crop water content, as variations in water content may also cause variations in crop chlorophyll content, which may lead to spectral overlap. Genc et al. [93] noted that the reflectance of water-stressed sweet corn decreases in the red and increases in the NIR spectral regions. It is worth noting that the surveyed crops in this study were also subjected to stress from pests and pathogens, which might have altered their natural spectral reflectance properties and created spectral overlap with one another. During the collection of hyperspectral reflectance data, crop leaves that were fully exposed to solar energy were targeted. However, some crop leaves were partially occluded by the top-most leaves of the same plant. As such, partially occluded cabbage leaves had a reflectance similar to maize when observed from the UAV imagery. Zhou et al. [90] noted that aerial imagery comes with a variety of information, such as shapes and hues, which may create variations in the brightness levels of the same crop. Therefore, it remains intriguing whether actual patterns can be achieved under an occlusion-free scenario. Although mature maize cultivars and other crop types were considered, there was no clear scrutiny of variations in crop growth stages. Therefore, future studies should attempt to spectrally differentiate maize cultivars from other crops under different growth stages. Owing to their similar spectral properties, weeds were also classified as crops. This was also noted by Hlaing and Khaing [91], who observed that weeds tend to be included as part of crops due to the similarity in spectral reflectance properties.
Tufail et al. [92] noted that the spectral radiance/reflectance properties of the same crop type can differ substantially, owing to variations in phenology and environmental conditions. These issues remain unresolved despite advances in pattern recognition. It is therefore intriguing to consider how these factors influenced the differentiation of crop types in the study area. Moreover, the study did not consider the variation in crop reflectance resulting from variations in crop growth stages. As such, future studies should attempt to determine the effect of reflectance variations due to crop growth stage on crop classification using thresholding.
The main issue with thresholding classification is the selection of parameters that correctly define the thresholds. This includes accounting for variables that create a complex image, such as noise, heterogeneity in intensity, and texture. These issues make the application of the MLT approach across different seasons and geographical settings a challenging task, because crops often exhibit variability in these parameters in response to seasonal and environmental settings. Moreover, threshold selection must also take into account the influence of the polarimetric scattering of different crops to attain optimum thresholds. Aitkenhead and Dyer [94] revealed that a potential issue regarding the generalizability of the MLT lies in the lack of transferability of optimal thresholds from one image to another. While Sun et al. [95] recommended the inclusion of a synthetic aperture radar (SAR) sensor to improve crop differentiation using multispectral imagery, owing to the ability of SAR to account for polarimetric scattering traits [96], the medium spatial resolution of these sensors deems them unviable for application in small-scale farms of less than 2 hectares in size. This problem is not inherent to the MLT alone but also to machine learning algorithms. These existing challenges therefore underscore the necessity of further searching for alternative methods for optimizing threshold selection.

5. Conclusions

The current study aimed to spatially partition different crop types in small-scale farms by thresholding UAV imagery based on ASD hyperspectral reflectance data. This study noted that UAV imagery can help in the identification of plots occupied by crops. However, the UAV imaging system proved ineffective in distinguishing different crops. The field spectrometric reflectance analysis also revealed that the crops can be spectrally differentiated in the red, NIR, and SWIR spectral wavelengths. The thresholding of UAV spectral bands showed an inability to classify multiple crops, and the results were worse than those produced by the machine learning algorithms. Sugar bean had a distinct reflectance from the other crops in the SWIR; this spectral wavelength could be ideal for demarcating this crop alone. The thresholding of UAV imagery based on optimized minimum and maximum thresholds did not yield accurate results. The study revealed several spectral challenges that inhibit the accurate characterization of crops through the integration of UAV imagery and non-imaging field spectrometric data. Despite recommendations by recent studies, the MLT was found to be unsuitable for classifying complex features such as spectrally overlapping crops. As long as various crops exhibit spectral overlaps, their classification by thresholding will remain a challenge unless new, robust thresholding optimization techniques are proposed. As such, a framework facilitating both the selection of the most suitable thresholding method for a particular domain and the application of thresholds for classifying spectrally overlapping crops is needed. Overall, the results of this study underscore the ongoing contribution of remote sensing technology to agricultural sustainability and food security goals. Future studies should explore the prospect of calibrating images to thresholds for individual crops to evaluate their separability. Hyperspectral reflectance analysis of the surveyed crops also revealed reflectance disparities in the shortwave infrared wavelength; the inclusion of this spectral band on UAV platforms, or the simulation of synthetic shortwave infrared bands from the visible–NIR bands of the UAV, is suggested as a possible future research direction.

Author Contributions

Conceptualization, S.M. and N.N.; Methodology, S.M.; Investigation, S.M. and N.N.; Writing—original draft, S.M. and N.N.; Writing—review & editing, N.N.; Supervision, N.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kwak, G.H.; Park, N.W. Impact of texture information on crop classification with machine learning and UAV images. Appl. Sci. 2019, 9, 643. [Google Scholar] [CrossRef]
  2. Zhao, H.; Meng, J.; Shi, T.; Zhang, X.; Wang, Y.; Luo, X.; Lin, Z.; You, X. Validating the Crop Identification Capability of the Spectral Variance at Key Stages (SVKS) Computed via an Object Self-Reference Combined Algorithm. Remote Sens. 2022, 14, 6390. [Google Scholar] [CrossRef]
  3. Delrue, J.; Bydekerke, L.; Eerens, H.; Gilliams, S.; Piccard, I.; Swinnen, E. Crop mapping in countries with small-scale farming: A case study for West Shewa, Ethiopia. Int. J. Remote Sens. 2013, 34, 2566–2582. [Google Scholar] [CrossRef]
  4. Omia, E.; Bae, H.; Park, E.; Kim, M.S.; Baek, I.; Kabenge, I.; Cho, B.K. Remote sensing in field crop monitoring: A comprehensive review of Sensor Systems, Data Analyses and Recent Advances. Remote Sens. 2023, 15, 354. [Google Scholar] [CrossRef]
  5. Meng, X.; Li, C.; Li, J.; Li, X.; Guo, F.; Xiao, Z. Yolov7-ma: Improved yolov7-based wheat head detection and counting. Remote Sens. 2023, 15, 3770. [Google Scholar] [CrossRef]
  6. Pareeth, S.; Karimi, P.; Shafiei, M.; De Fraiture, C. Mapping Agricultural Landuse Patterns from Time Series of Landsat 8 Using Random Forest Based Hierarchial Approach. Remote Sens. 2019, 11, 601. [Google Scholar] [CrossRef]
  7. Fang, H.; Wu, B.; Liu, H.; Huang, X. Using NOAA AVHRR and Landsat TM to estimate rice area year-by-year. Remote Sens. Tech. 1997, 12, 23–26. [Google Scholar]
  8. Serbin, G.; Hunt, E.R., Jr.; Daughtry, C.S.T.; McCarty, G.W.; Doraiswamy, P.C. An Improved ASTER Index for Remote Sensing of Crop Residue. Remote Sens. 2009, 1, 971–991. [Google Scholar] [CrossRef]
  9. Navarro, A.; Rolim, J.; Miguel, I.; Catalão, J.; Silva, J.; Painho, M.; Vekerdy, Z. Crop Monitoring Based on SPOT-5 Take-5 and Sentinel-1A Data for the Estimation of Crop Water Requirements. Remote Sens. 2016, 8, 525. [Google Scholar] [CrossRef]
  10. Bautista, S.A.; Fita, D.; Franch, B.; Castiñeira-Ibáñez, S.; Arizo, P.; Sánchez-Torres, M.J.; Becker-Reshef, I.; Uris, A.; Rubio, C. Crop Monitoring Strategy Based on Remote Sensing Data (Sentinel-2 and Planet), Study Case in a Rice Field after Applying Glycinebetaine. Agronomy 2022, 12, 708. [Google Scholar] [CrossRef]
  11. Mndela, Y.; Ndou, N.; Nyamugama, A. Irrigation Scheduling for Small-Scale Crops Based on Crop Water Content Patterns Derived from UAV Multispectral Imagery. Sustainability 2023, 15, 12034. [Google Scholar] [CrossRef]
  12. Delavarpour, N.; Koparan, C.; Nowatzki, J.; Bajwa, S.; Sun, X. A technical study on UAV characteristics for precision agriculture applications and associated practical challenges. Remote Sens. 2021, 13, 1204. [Google Scholar] [CrossRef]
  13. Devia, C.A.; Rojas, J.P.; Petro, E.; Martinez, C.; Mondragon, I.F.; Patiño, D.; Rebolledo, M.C.; Colorado, J. High-throughput biomass estimation in rice crops using UAV multispectral imagery. J. Intell. Robot. Syst. 2019, 96, 573–589. [Google Scholar] [CrossRef]
  14. Liu, L.; Liu, M.; Guo, Q.; Liu, D.; Peng, Y. MEMS Sensor Data Anomaly Detection for the UAV Flight Control Subsystem. In Proceedings of the 2018 IEEE SENSORS, New Delhi, India, 28–31 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–4. [Google Scholar] [CrossRef]
  15. Sona, G.; Passoni, D.; Pinto, L.; Pagliari, D.; Masseroni, D.; Ortuani, B.; Facchi, A. UAV multispectral survey to map soil and crop for precision farming applications. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 1023–1029. [Google Scholar] [CrossRef]
  16. Shin, J.; Cho, Y.; Lee, H.; Yoon, S.; Ahn, H.; Park, C.; Kim, T. An optimal image selection method to improve quality of relative radiometric calibration for UAV multispectral images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 493–498. [Google Scholar] [CrossRef]
  17. Li, M.; Zang, S.; Zhang, B.; Li, S.; Wu, C. A review of remote sensing image classification techniques: The role of spatio-contextual information. Eur. J. Remote Sens. 2014, 47, 389–411. [Google Scholar] [CrossRef]
  18. Verde, N.; Mallinis, G.; Tsakiri-Strati, M.; Georgiadis, C.; Patias, P. Assessment of Radiometric Resolution Impact on Remote Sensing Data Classification Accuracy. Remote Sens. 2018, 10, 1267. [Google Scholar] [CrossRef]
  19. Sisodia, P.S.; Tiwari, V.; Kumar, A. Analysis of supervised maximum likelihood classification for remote sensing image. In Proceedings of the International Conference on Recent Advances and Innovations in Engineering (ICRAIE-2014), Jaipur, India, 9–11 May 2014; IEEE: Piscataway, NJ, USA; pp. 1–4. [Google Scholar]
  20. Song, D.; Liu, B.; Li, X.; Chen, S.; Li, L.; Ma, M.; Zhang, Y. Hyperspectral data spectrum and texture band selection based on the subspace-rough set method. Int. J. Remote Sens. 2015, 36, 2113–2128. [Google Scholar] [CrossRef]
  21. Snevajs, H.; Charvat, K.; Onckelet, V.; Kvapil, J.; Zadrazil, F.; Kubickova, H.; Seidlova, J.; Batrlova, I. Crop detection using time series of sentinel-2 and sentinel-1 and existing land parcel information systems. Remote Sens. 2022, 14, 1095. [Google Scholar] [CrossRef]
  22. Suykens, J.A.K.; Vandewalle, J. Least squares support vector machine classifiers. Neural Process. Lett. 1999, 9, 293–300. [Google Scholar] [CrossRef]
  23. Kerner, H.; Nakalembe, C.; Becker-Reshef, I. Field-level crop type classification with k nearest neighbors: A baseline for a new Kenya smallholder dataset. arXiv 2020, arXiv:2004.03023. [Google Scholar] [CrossRef]
  24. Htitiou, A.; Boudhar, A.; Lebrini, Y.; Hadria, R.; Lionboui, H.; Elmansouri, L.; Tychon, B.; Benabdelouahab, T. The performance of random forest classification based on phenological metrics derived from Sentinel-2 and Landsat 8 to map crop cover in an irrigated semi-arid region. Remote Sens. Earth Syst. Sci. 2019, 2, 208–224. [Google Scholar] [CrossRef]
  25. Lago-Fernández, L.; Corbacho, F. Normality-based validation for crisp clustering. Pattern Recognit. 2010, 43, 782–795. [Google Scholar] [CrossRef]
  26. Cunnick, H.; Ramage, J.M.; Magness, D.; Peters, S.C. Mapping Fractional Vegetation Coverage across Wetland Classes of Sub-Arctic Peatlands Using Combined Partial Least Squares Regression and Multiple Endmember Spectral Unmixing. Remote Sens. 2023, 15, 1440. [Google Scholar] [CrossRef]
  27. Li, Z.; Chen, J.; Rahardja, S. Kernel-Based Nonlinear Spectral Unmixing with Dictionary Pruning. Remote Sens. 2019, 11, 529. [Google Scholar] [CrossRef]
  28. Bangira, T.; Alfieri, S.M.; Menenti, M.; van Niekerk, A.; Vekerdy, Z. A Spectral Unmixing Method with Ensemble Estimation of Endmembers: Application to Flood Mapping in the Caprivi Floodplain. Remote Sens. 2017, 9, 1013. [Google Scholar] [CrossRef]
  29. Ahmad, U.; Nasirahmadi, A.; Hensel, O.; Marino, S. Technology and data fusion methods to enhance site-specific crop monitoring. Agronomy 2022, 3, 555. [Google Scholar] [CrossRef]
  30. Shao, Y.; Lan, J. A Spectral Unmixing Method by Maximum Margin Criterion and Derivative Weights to Address Spectral Variability in Hyperspectral Imagery. Remote Sens. 2019, 11, 1045. [Google Scholar] [CrossRef]
  31. Heinz, D.C.; Chang, C. Fully constrained least squares linear spectral mixture analysis method for material quantification in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2002, 39, 529–545. [Google Scholar] [CrossRef]
  32. Zhan, Y.; Meng, Q.; Wang, C.; Li, J.; Li, D. Fractional vegetation cover estimation over large regions using GF-1 satellite data. Proc. SPIE Int. Soc. Opt. Eng. 2014, 9260, 819–826. [Google Scholar]
  33. Chen, F.; Wang, K.; Tang, T.F. Spectral Unmixing Using a Sparse Multiple-Endmember Spectral Mixture Model. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5846–5861. [Google Scholar] [CrossRef]
  34. Averbuch, A.; Zheludev, M. Two linear unmixing algorithms to recognize targets using supervised classification and orthogonal rotation in airborne hyperspectral images. Remote Sens. 2012, 4, 532–560. [Google Scholar] [CrossRef]
  35. Zhao, D.; Eyre, J.X.; Wilkus, E.; de Voil, P.; Broad, I.; Rodriguez, D. 3D characterization of crop water use and the rooting system in field agronomic research. Comput. Electron. Agric. 2022, 202, 107409. [Google Scholar] [CrossRef]
  36. Li, S.; Li, F.; Gao, M.; Li, Z.; Leng, P.; Duan, S.; Ren, J. A new method for winter wheat mapping based on spectral reconstruction technology. Remote Sens. 2021, 13, 1810. [Google Scholar] [CrossRef]
  37. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  38. He, X.; Chen, Y. Modifications of the Multi-Layer Perceptron for Hyperspectral Image Classification. Remote Sens. 2021, 13, 3547. [Google Scholar] [CrossRef]
  39. Tsoulos, I.G.; Varvaras, I.; Charilogis, V. RbfCon: Construct Radial Basis Function Neural Networks with Grammatical Evolution. Software 2024, 3, 549–568. [Google Scholar] [CrossRef]
  40. Fan, X.; Zhang, S.; Xue, X.; Jiang, R.; Fan, S.; Kou, H. An Improved Self-Organizing Map (SOM) Based on Virtual Winning Neurons. Symmetry 2025, 17, 449. [Google Scholar] [CrossRef]
  41. Jardim, S.; António, J.; Mora, C. Image thresholding approaches for medical image segmentation-short literature review. Procedia Comput. Sci. 2023, 219, 1485–1492. [Google Scholar] [CrossRef]
  42. Lu, T.; Wan, L.; Wang, L. Fine crop classification in high resolution remote sensing based on deep learning. Front. Environ. Sci. 2022, 10, 991173. [Google Scholar] [CrossRef]
  43. Zhao, H.; Chen, Z.; Jiang, H.; Jing, W.; Sun, L.; Feng, M. Evaluation of Three Deep Learning Models for Early Crop Classification Using Sentinel-1A Imagery Time Series—A Case Study in Zhanjiang, China. Remote Sens. 2019, 11, 2673. [Google Scholar] [CrossRef]
  44. Ewees, A.A.; Abualigah, L.; Yousri, D.; Sahlol, A.T.; Al-qaness, M.A.A.; Alshathri, S.; Elaziz, A. Modified Artificial Ecosystem-Based Optimization for Multilevel Thresholding Image Segmentation. Mathematics 2021, 9, 2363. [Google Scholar] [CrossRef]
  45. Hernández Molina, D.D.; Gulfo-Galaraga, J.M.; López-López, A.M.; Serpa-Imbett, C.M. Methods for estimating agricultural cropland yield based on the comparison of NDVI images analyzed by means of Image segmentation algorithms: A tool for spatial planning decisions. Ingeniare Rev. Chil. De Ing. 2023, 31, 24. Available online: https://www.scielo.cl/pdf/ingeniare/v31/0718-3305-ingeniare-31-24.pdf (accessed on 20 July 2025). [CrossRef]
  46. Hosny, K.M.; Khalid, A.M.; Hamza, H.M.; Mirjalili, S. Multilevel thresholding satellite image segmentation using chaotic coronavirus optimization algorithm with hybrid fitness function. Neural Comput. Appl. 2023, 35, 855–886. [Google Scholar] [CrossRef] [PubMed]
  47. Sharp, K.G.; Bell, J.R.; Pankratz, H.G.; Schultz, L.A.; Lucey, R.; Meyer, F.J.; Molthan, A.L. Modifying NISAR’s Cropland Area Algorithm to Map Cropland Extent Globally. Remote Sens. 2025, 17, 1094. [Google Scholar] [CrossRef]
  48. Kumar, A.; Kumar, A.; Vishwakarma, A. Multilevel Thresholding of Grayscale Complex Crop Images using Minimum Cross Entropy. In Proceedings of the 2023 10th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 23–24 March 2023; pp. 806–810. [Google Scholar] [CrossRef]
  49. Bangira, T.; Alfieri, S.M.; Menenti, M.; van Niekerk, A. Comparing Thresholding with Machine Learning Classifiers for Mapping Complex Water. Remote Sens. 2019, 11, 1351. [Google Scholar] [CrossRef]
  50. Kapur, J.N.; Sahoo, P.K.; Wong, A.K.C. A new method for gray-level picture thresholding using the entropy of the histogram. Comput. Vis. Graph. Image Process. 1985, 29, 273–285. [Google Scholar] [CrossRef]
  51. Wu, B.; Zhou, J.; Ji, X.; Yin, Y.; Shen, X. An ameliorated teaching–learning-based optimization algorithm-based study of image segmentation for multilevel thresholding using Kapur’s entropy and Otsu’s between class variance. Inf. Sci. 2020, 533, 72–107. [Google Scholar] [CrossRef]
  52. Eisham, Z.K.; Haque, M.M.; Rahman, M.S.; Nishat, M.M.; Faisal, F.; Islam, M.R. Chimp optimization algorithm in multilevel image thresholding and image clustering. Evol. Syst. 2023, 14, 605–648. [Google Scholar] [CrossRef]
  53. Akgün, A.; Eronat, A.H.; Türk, N. Comparing Different Satellite Image Classification Methods: An Application in Ayvalik District, Western Turkey. In Proceedings of the 20th ISPRS Congress Technical Commission IV, Istanbul, Turkey, 12–23 July 2004; pp. 1091–1097. Available online: http://www.isprs.org/proceedings/xxxv/congress/comm4/papers/505.pdf (accessed on 10 July 2025).
  54. Li, X.; Zou, Y. Multi-Level Thresholding Based on Composite Local Contour Shannon Entropy Under Multiscale Multiplication Transform. Entropy 2025, 27, 544. [Google Scholar] [CrossRef]
  55. Kang, X.; Hua, C. Multilevel thresholding image segmentation algorithm based on Mumford–Shah model. J. Intell. Syst. 2023, 32, 20220290. [Google Scholar] [CrossRef]
  56. Abualigah, L.; Diabat, A.; Sumari, P.; Gandomi, A.H. A Novel Evolutionary Arithmetic Optimization Algorithm for Multilevel Thresholding Segmentation of COVID-19 CT Images. Processes 2021, 9, 1155. [Google Scholar] [CrossRef]
  57. Bao, X.; Jia, H.; Lang, C. Dragonfly Algorithm with Opposition-Based Learning for Multilevel Thresholding Color Image Segmentation. Symmetry 2019, 11, 716. [Google Scholar] [CrossRef]
  58. Levene, H. Robust tests for equality of variances. In Contributions to Probability and Statistics; Olkin, I., Ed.; MR0120709; Stanford University Press: Palo Alto, CA, USA, 1960; pp. 278–292. [Google Scholar]
  59. Tukey, J.W. Exploratory Data Analysis; Addison-Wesley: Reading, MA, USA, 1977; Volume 2. [Google Scholar]
  60. Bardet, J.-M.; Dimby, S.-F. A new non-parametric detector of univariate outliers for distributions with unbounded support. Extremes 2017, 20, 751–775. [Google Scholar] [CrossRef]
  61. Jiang, Y.; Zhang, D.; Zhu, W.; Wang, L. Multi-Level Thresholding Image Segmentation Based on Improved Slime Mould Algorithm and Symmetric Cross-Entropy. Entropy 2023, 25, 178. [Google Scholar] [CrossRef] [PubMed]
  62. Almeida, L.B. Multilayer perceptrons. In Handbook of Neural Computation; IOP Publishing Ltd.: Bristol, UK; Oxford University Press: Oxford, UK, 1997. [Google Scholar]
  63. Powell, M.J.D. Radial basis functions for multivariate interpolation: A review. In IMA Conference on Algorithms for the Approximation of Functions and Data; RMCS: Shrivenham, UK, 1985; pp. 143–167. [Google Scholar]
  64. Duda, R.O.; Hart, P.E. Pattern Classification; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  65. Kohonen, T. The Self-Organizing Map; IEEE: Piscataway, NJ, USA, 1990; Volume 78, pp. 1464–1480. [Google Scholar] [CrossRef]
  66. Kangas, J.; Kohonen, T.; Laaksonen, J. Variants of self-organizing maps. IEEE Trans. Neural Netw. 1990, 1, 93–99. [Google Scholar] [CrossRef] [PubMed]
  67. Cohen, J. A coefficient of agreement for nominal scales. Educ. Psychol. Meas. 1960, 20, 37–46. [Google Scholar] [CrossRef]
  68. Li, M.; Shamshiri, R.R.; Weltzien, C.; Schirrmann, M. Crop Monitoring Using Sentinel-2 and UAV Multispectral Imagery: A Comparison Case Study in Northeastern Germany. Remote Sens. 2022, 14, 4426. [Google Scholar] [CrossRef]
  69. Vidican, R.; Mălinaș, A.; Ranta, O.; Moldovan, C.; Marian, O.; Ghețe, A.; Ghișe, C.R.; Popovici, F.; Cătunescu, G.M. Using Remote Sensing Vegetation Indices for the Discrimination and Monitoring of Agricultural Crops: A Critical Review. Agronomy 2023, 13, 3040. [Google Scholar] [CrossRef]
  70. Lin, Y.C.; Habib, A. Quality control and crop characterization framework for multi-temporal UAV LiDAR data over mechanized agricultural fields. Remote Sens. Environ. 2021, 256, 112299. [Google Scholar] [CrossRef]
  71. Ndou, N.; Thamaga, K.H.; Mndela, Y.; Nyamugama, A. Radiometric Compensation for Occluded Crops Imaged Using High-Spatial-Resolution Unmanned Aerial Vehicle System. Agriculture 2023, 13, 1598. [Google Scholar] [CrossRef]
  72. Baio, F.H.; Santana, D.C.; Teodoro, L.P.; Oliveira, I.C.; Gava, R.; de Oliveira, J.L.; Silva Junior, C.A.; Teodoro, P.E.; Shiratsuchi, L.S. Maize yield prediction with machine learning, spectral variables and irrigation management. Remote Sens. 2022, 15, 79. [Google Scholar] [CrossRef]
  73. Xue, J.; Su, B. Significant remote sensing vegetation indices: A review of developments and applications. J. Sens. 2017, 2017, 1353691. [Google Scholar] [CrossRef]
  74. Torres, R.M.; Yuen, P.W.T.; Yuan, C.; Piper, J.; McCullough, C.; Godfree, P. Spatial Spectral Band Selection for Enhanced Hyperspectral Remote Sensing Classification Applications. J. Imaging 2020, 6, 87. [Google Scholar] [CrossRef]
  75. Curran, P.J. Remote sensing of foliar chemistry. Remote Sens. Environ. 1989, 30, 271–278. [Google Scholar] [CrossRef]
  76. Thenkabail, P.S.; Mariotto, I.; Gumma, M.K.; Middleton, E.M.; Landis, D.R.; Huemmrich, K.F. Selection of hyperspectral narrowbands (HNBs) and composition of hyperspectral twoband vegetation indices (HVIs) for biophysical characterization and discrimination of crop types using field reflectance and Hyperion/EO-1 data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 427–439. [Google Scholar] [CrossRef]
  77. Potgieter, A.B.; Zhao, Y.; Zarco-Tejada, P.J.; Chenu, K.; Zhang, Y.; Porker, K.; Biddulph, B.; Dang, Y.P.; Neale, T.; Roosta, F.; et al. Evolution and application of digital technologies to predict crop type and crop phenology in agriculture. In Silico Plants 2021, 3, diab017. [Google Scholar] [CrossRef]
  78. Durañona Sosa, N.L.; Vázquez Noguera, J.L.; Cáceres Silva, J.J.; García Torres, M.; Legal-Ayala, H. RGB Inter-Channel Measures for Morphological Color Texture Characterization. Symmetry 2019, 11, 1190. [Google Scholar] [CrossRef]
79. Miller, R.G., Jr. Jackknifing variances. Ann. Math. Stat. 1968, 39, 567–582. [Google Scholar] [CrossRef]
  80. Gastwirth, J.L.; Gel, Y.R.; Miao, W. The Impact of Levene’s Test of Equality of Variances on Statistical Theory and Practice. Stat. Sci. 2009, 24, 343–360. [Google Scholar] [CrossRef]
  81. Angulo, L.; Pamboukian, S. Spectral Behavior of Maize, Rice, Soy, and Oat Crops Using Multi-Spectral Images from Sentinel-2. In Proceedings of the 5th Brazilian Technology Symposium: Emerging Trends, Issues, and Challenges in the Brazilian Technology, Campinas, Brazil, 23–25 October 2018; Springer International Publishing: Cham, Switzerland, 2021; Volume 2, pp. 327–336. [Google Scholar]
  82. Zhao, D.; Raja Reddy, K.; Kakani, V.G.; Read, J.J.; Carter, G.A. Corn (Zea mays L.) growth, leaf pigment concentration, photosynthesis and leaf hyperspectral reflectance properties as affected by nitrogen supply. Plant Soil 2003, 257, 205–218. [Google Scholar] [CrossRef]
  83. Sudu, B.; Rong, G.; Guga, S.; Li, K.; Zhi, F.; Guo, Y.; Zhang, J.; Bao, Y. Retrieving SPAD values of summer maize using UAV hyperspectral data based on multiple machine learning algorithm. Remote Sens. 2022, 14, 5407. [Google Scholar] [CrossRef]
  84. Jumiawi, W.A.; El-Zaart, A. Otsu Thresholding model using heterogeneous mean filters for precise images segmentation. In Proceedings of the 2022 International Conference of Advanced Technology in Electronic and Electrical Engineering (ICATEEE), M’sila, Algeria, 26–27 November 2022; IEEE: Piscataway, NJ, USA; pp. 1–6. [Google Scholar]
  85. Mahmoud, E.; Alkhalaf, S.; Senjyu, T.; Furukakoi, M.; Hemeida, A.; Abozaid, A. GAAOA-Lévy: A hybrid metaheuristic for optimized multilevel thresholding in image segmentation. Sci. Rep. 2025, 15, 27232. [Google Scholar] [CrossRef] [PubMed]
  86. Houssein, E.H.; Mohamed, G.M.; Ibrahim, I.A.; Wazery, Y.M. An efficient multilevel image thresholding method based on improved heap-based optimizer. Sci. Rep. 2023, 13, 9094. [Google Scholar] [CrossRef] [PubMed]
87. Komadina, A.; Martinić, M.; Groš, S.; Mihajlović, Z. Comparing Threshold Selection Methods for Network Anomaly Detection. IEEE Access 2024, 12, 124943–124973. [Google Scholar] [CrossRef]
  88. Rodríguez-Esparza, E.; Zanella-Calzada, L.A.; Oliva, D.; Heidari, A.A.; Zaldivar, D.; Pérez-Cisneros, M.; Foong, L.K. An efficient Harris hawks-inspired image segmentation method. Expert Syst. Appl. 2020, 155, 113428. [Google Scholar] [CrossRef]
  89. Zhao, D.; Liu, L.; Yu, F.; Heidari, A.A.; Wang, M.; Oliva, D.; Muhammad, K.; Chen, H. Ant colony optimization with horizontal and vertical crossover search: Fundamental visions for multi-threshold image segmentation. Expert Syst. Appl. 2020, 167, 114122. [Google Scholar] [CrossRef]
  90. Zhou, T.; Fu, H.; Sun, C.; Wang, S. Shadow Detection and Compensation from Remote Sensing Images under Complex Urban Conditions. Remote Sens. 2021, 13, 699. [Google Scholar] [CrossRef]
  91. Hlaing, S.H.; Khaing, A.S. Weed and crop segmentation and classification using area thresholding. IJRET Int. J. Res. Eng. Technol. 2014, 3, 375–382. Available online: https://scispace.com/pdf/weed-and-crop-segmentation-and-classification-using-area-2rnuyikbqa.pdf (accessed on 20 June 2025).
  92. Tufail, R.; Tassinari, P.; Torreggiani, D. Assessing feature extraction, selection, and classification combinations for crop mapping using Sentinel-2 time series: A case study in northern Italy. Remote Sens. Appl. Soc. Environ. 2025, 38, 101525. [Google Scholar] [CrossRef]
  93. Genc, L.; Inalpulat, M.; Kizil, U.; Mirik, M.; Smith, S.E.; Mendes, E. Determination of water stress with spectral reflectance on sweet corn (Zea mays L.) using classification tree (CT) analysis. Zemdirb.-Agric. 2013, 100, 81–90. [Google Scholar] [CrossRef]
  94. Aitkenhead, M.J.; Dyer, R. Improving Land-cover Classification Using Recognition Threshold Neural Networks. Photogramm. Eng. Remote Sens. 2007, 73, 413–421. [Google Scholar] [CrossRef]
95. Sun, Z.; Wang, D.; Zhong, G. A Review of Crop Classification Using Satellite-Based Polarimetric SAR Imagery. In Proceedings of the 7th International Conference on Agro-geoinformatics (Agro-geoinformatics), Hangzhou, China, 6–9 August 2018; pp. 1–5. [Google Scholar] [CrossRef]
  96. Ma, X.; Li, L.; Wu, Y. Deep-Learning-Based Method for the Identification of Typical Crops Using Dual-Polarimetric Synthetic Aperture Radar and High-Resolution Optical Images. Remote Sens. 2025, 17, 148. [Google Scholar] [CrossRef]
Figure 1. The location of the study area.
Figure 2. Various types of crops cultivated in the study area.
Figure 3. Hyperspectral reflectance curves for surveyed crops: (a) complete curves and (b) spectral regions where a clear reflectance distinction across the surveyed crops was observed.
Figure 4. Reflectance behavior of crops in the wavelengths corresponding to the UAV spectral bands.
Figure 5. Mean profile of crop spectral radiance and reflectance for (a) cabbage, (b) maize, and (c) sugar bean.
Figure 6. Differentiation of crop types using (a) MLT on Blue band, (b) MLT on Green band, (c) MLT on NIR, (d) MLP, (e) RBFNN, and (f) SOM.
Figure 7. Area occupied by different crops as predicted by MLT on Blue band, MLT on Green band, MLT on NIR, MLP, RBFNN, and SOM.
Table 1. The UAV spectral band details used to select spectral wavelengths from spectrometric data.
Band Name | Center (µm) | Wavelength Range (µm)
Blue | 0.475 | 0.443–0.507
Green | 0.560 | 0.533–0.587
Red | 0.668 | 0.654–0.682
Red Edge | 0.717 | 0.705–0.729
Near-IR | 0.842 | 0.785–0.899
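For reference, the ranges in Table 1 can be used to aggregate the 1 nm ASD spectra into UAV-equivalent band values. The sketch below assumes a simple within-band mean; this is our illustrative assumption, not necessarily the study's documented resampling scheme.

```python
import numpy as np

# UAV band edges in micrometres, taken from Table 1.
BANDS = {"Blue": (0.443, 0.507), "Green": (0.533, 0.587),
         "Red": (0.654, 0.682), "Red Edge": (0.705, 0.729),
         "Near-IR": (0.785, 0.899)}

def band_means(wavelengths_um, reflectance):
    """Average ASD reflectance within each UAV band range (within-band mean)."""
    return {name: reflectance[(wavelengths_um >= lo) & (wavelengths_um <= hi)].mean()
            for name, (lo, hi) in BANDS.items()}
```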
Table 2. Sequences followed by MLP, RBFNN, and SOM in differentiating crop types.
MLP:
We applied the MLP to differentiate crop types as follows.
Network topology: We trained the MLP for crop type characterization using five (5) UAV spectral bands as input layer nodes and two (2) hidden layers, each of which had five (5) nodes, to improve the learning process.
Training parameters: We used both automatic training and dynamic learning rates to train the models. The learning rate was set to 0.01, with a 0.5 momentum factor and a sigmoid constant of 1.
Backpropagation training: We trained the MLP for crop type differentiation using Equation (12), adopted from Almeida [62]:

$net_j = \sum_{i=1}^{m} w_{ij} O_i$ (12)

where $net_j$ denotes the overall weighted input to the $j$-th neuron; $w_{ij}$, updated using Equation (15), denotes the weight between the $i$-th node of the input layer and the $j$-th node of the hidden layer; and $O_j$ denotes the output from neuron $j$, computed using Equation (18):

$w_{ij}(t+1) = \eta \delta_j O_i$ (15)

$O_j = f(net_j)$ (18)

where $net_j$ was computed in Equation (12). The number of hidden layer nodes used in this study was estimated using Equation (21):

$N_h = \mathrm{INT}\left(\sqrt{N_i \times N_o}\right)$ (21)

where $N_h$ denotes the number of hidden layer nodes; $N_i$ denotes the number of input layer nodes; and $N_o$ denotes the number of output layer nodes.
Learning error evaluation: To determine whether the learning process was successfully executed, the error associated with the learning of the network was evaluated using the coefficient of determination ($R^2$), such that

$R^2 = \frac{\sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}$ (24)

where $y_i$ denotes the measured value; $\hat{y}_i$ denotes the predicted value; and $\bar{y}$ denotes the mean of the measured values.

RBFNN:
We applied the RBFNN algorithm to differentiate crop types as follows.
Basis functions: We used the set of basis functions (Equation (13)) proposed by Powell [63]:

$\varphi\left(\lVert x - x_n \rVert\right)$ (13)

where $\varphi(\cdot)$ denotes a nonlinear function of the distance, typically taken to be Euclidean, between $x$ and $x_n$.
RBFNN training: The training of the RBFNN for classifying crops involved two steps. First, the number of hidden layer nodes was determined through the deployment of an unsupervised k-means classifier, using Equation (16) proposed by Duda and Hart [64]:

$E_c = \sum_{i} \tfrac{1}{2} \min_{k} \lVert x_i - c_k \rVert^2$ (16)

Then, the centers of the RBFs were aligned with the centers of the clusters from the k-means results, using Equations (20) and (23):

$\varphi^2 = \frac{1}{p} \sum_{k=1}^{p} \lVert c_j - c_k \rVert^2$ (20)

where $p$ denotes the number of radial basis functions, and

$f_n(x) = \sum_{i=1}^{K} w_{ni}\, \varphi_i(x) + w_{n0}$ (23)

where $x$ denotes the input vector; $w_{ni}$ denotes the connection between the basis function and the output layer; and $\varphi_i$ is the radial basis function, computed using Equation (26):

$\varphi(x) = \exp\left(-\frac{\lVert x - c \rVert^2}{2\varphi^2}\right)$ (26)

where $x$ is the $d$-dimensional input vector with variable $x_i$, and $m_j$ denotes the vector which determines the center of the basis function $w_j$ and has variable $m_{ji}$.

SOM:
We applied the SOM algorithm to differentiate crop types as follows.
Coarse tuning: We started the SOM by performing coarse tuning, an unsupervised classification, to achieve competitive learning and lateral interaction, with neuron weights representing the underlying clusters and sub-clusters in the input neurons. Supposing $x = (x_1, x_2, \ldots, x_n)$ is the $n$-dimensional input feature vector of the SOM, the neuron in the output layer with the minimum distance to the input feature vector (known as the winner) is determined as follows:

$\mathrm{Winner} = \arg\min_j \sum_{i=1}^{n} \left(x_i(t) - w_{ji}(t)\right)^2$ (14)

where $x_i(t)$ denotes the input to neuron $i$ at iteration $t$, and $w_{ji}(t)$ denotes the weight between input neuron $i$ and output neuron $j$. Whereas the weights outside the winner's neighborhood were kept unaltered, the weights of the winner and its neighbors within radius $\gamma$ were altered in accordance with the learning rate $\alpha(t)$, according to Equations (17) and (19), such that

$w_{ji}(t+1) = w_{ji}(t) + \alpha(t)\left[x_i(t) - w_{ji}(t)\right], \quad d_{wj} \le \gamma(t)$ (17)

$w_{ji}(t+1) = w_{ji}(t), \quad d_{wj} > \gamma(t)$ (19)

where $\alpha(t)$ denotes the learning rate at iteration $t$, obtained by deploying Equation (22), adopted from Kohonen [65]:

$\alpha(t) = \alpha_{max} - \frac{\alpha_{max} - \alpha_{min}}{\alpha_{max} - 1}\, t$ (22)

Fine tuning: We applied fine tuning to optimize the decision boundaries between crop classes based on the training data, using the learning vector quantization (LVQ) proposed by Kangas et al. [66]. If $x$ is correctly classified, then

$w_c(t+1) = w_c(t) + \delta(t)\left[x_i - w_c(t)\right]$ (25)

If $x$ is incorrectly classified, then

$w_c(t+1) = w_c(t) - \delta(t)\left[x_i - w_c(t)\right]$ (27)

Otherwise,

$w_i(t+1) = w_i(t), \quad i \ne c$ (28)

where $w_c$ denotes the weight vector of the winner, and $\delta(t)$ denotes a gain term that decreases over time.
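To make the computational sequences in Table 2 concrete, the following sketch implements the core step of each classifier in NumPy: the MLP forward pass (Equations (12) and (18)), the RBFNN output with Gaussian bases (Equations (23) and (26)), and the SOM winner search with neighborhood update (Equations (14), (17) and (19)). It is an illustrative sketch rather than the implementation used in the study; the array shapes, the sigmoid activation, and the one-dimensional SOM neighborhood are our assumptions.

```python
import numpy as np

def sigmoid(net):
    # Activation f(.) of Equation (18); the study used a sigmoid constant of 1.
    return 1.0 / (1.0 + np.exp(-net))

def mlp_forward(x, weight_matrices):
    """Layer-by-layer MLP forward pass: net_j = sum_i w_ij * O_i (Eq. 12),
    O_j = f(net_j) (Eq. 18)."""
    out = x
    for W in weight_matrices:
        out = sigmoid(W @ out)
    return out

def rbf_output(x, centers, width_sq, w, w0):
    """RBFNN output f_n(x) = sum_i w_ni * phi_i(x) + w_n0 (Eq. 23), with
    Gaussian bases phi(x) = exp(-||x - c||^2 / (2 phi^2)) (Eq. 26)."""
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * width_sq))
    return w @ phi + w0

def som_step(x, W, alpha, radius):
    """One coarse-tuning step: find the winner (Eq. 14), then pull the winner
    and its neighbours within `radius` towards x (Eq. 17); all other weights
    stay unaltered (Eq. 19). W has one row of weights per output neuron."""
    winner = int(np.argmin(np.sum((W - x) ** 2, axis=1)))  # Eq. (14)
    for j in range(W.shape[0]):
        if abs(j - winner) <= radius:                      # d_wj <= gamma(t)
            W[j] += alpha * (x - W[j])                     # Eq. (17)
    return winner, W
```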
Table 3. Levene’s k-comparison of equal variance statistics for crop radiance variability obtained from UAV imagery.
Values in each band column are given as Cabbage / Maize / Sug. bean.

Statistic | Blue | Green | Red
N | 200 / 200 / 200 | 200 / 200 / 200 | 200 / 200 / 200
α | 0.05 / 0.05 / 0.05 | 0.05 / 0.05 / 0.05 | 0.05 / 0.05 / 0.05
μ | 0.34 / 0.45 / 0.36 | 0.42 / 0.62 / 0.49 | 0.25 / 0.33 / 0.25
σ | 0.11 / 0.14 / 0.09 | 0.12 / 0.11 / 0.07 | 0.09 / 0.06 / 0.08
Var. | 0.01 / 0.02 / 0.01 | 0.01 / 0.01 / 0.01 | 0.02 / 0.02 / 0.01
Min. | 0.230 / 0.089 / 0.247 | 0.209 / 0.164 / 0.319 | 0.166 / 0.141 / 0.113
Max. | 0.855 / 0.376 / 0.413 | 0.966 / 0.833 / 0.811 | 0.617 / 0.792 / 0.327
p-value | <0.001 | <0.001 | <0.001

Statistic | Red edge | NIR
N | 200 / 200 / 200 | 200 / 200 / 200
α | 0.05 / 0.05 / 0.05 | 0.05 / 0.05 / 0.05
μ | 0.55 / 0.63 / 0.67 | 0.72 / 0.70 / 0.68
σ | 0.11 / 0.09 / 0.12 | 0.06 / 0.11 / 0.11
Var. | 0.01 / 0.01 / 0.01 | 0.01 / 0.01 / 0.01
Min. | 0.37 / 0.15 / 0.46 | 0.41 / 0.341 / 0.46
Max. | 0.94 / 0.88 / 0.89 | 0.93 / 0.91 / 0.92
p-value | <0.001 | <0.001
Table 4. Levene’s k-comparison of equal variance results for ASD spectral variance across crops.
Values in each wavelength column are given as Cabbage / Maize / Sug. bean.

Statistic | 0.443–0.507 µm | 0.533–0.587 µm | 0.654–0.682 µm
N | 100 / 100 / 100 | 100 / 100 / 100 | 100 / 100 / 100
α | 0.05 / 0.05 / 0.05 | 0.05 / 0.05 / 0.05 | 0.05 / 0.05 / 0.05
μ | 0.38 / 0.48 / 0.34 | 0.39 / 0.59 / 0.55 | 0.22 / 0.37 / 0.20
σ | 0.12 / 0.12 / 0.05 | 0.09 / 0.12 / 0.09 | 0.06 / 0.12 / 0.03
Var. | 0.01 / 0.01 / 0.00 | 0.01 / 0.01 / 0.01 | 0.00 / 0.02 / 0.00
Min. | 0.20 / 0.21 / 0.23 | 0.22 / 0.36 / 0.36 | 0.12 / 0.16 / 0.12
Max. | 0.85 / 0.83 / 0.47 | 0.98 / 0.87 / 0.85 | 0.58 / 0.80 / 0.34
p-value | <0.001 | <0.001 | <0.001

Statistic | 0.705–0.729 µm | 0.785–0.899 µm
N | 100 / 100 / 100 | 100 / 100 / 100
α | 0.05 / 0.05 / 0.05 | 0.05 / 0.05 / 0.05
μ | 0.68 / 0.67 / 0.71 | 0.68 / 0.67 / 0.71
σ | 0.09 / 0.07 / 0.08 | 0.09 / 0.07 / 0.08
Var. | 0.01 / 0.01 / 0.01 | 0.01 / 0.01 / 0.01
Min. | 0.41 / 0.45 / 0.46 | 0.41 / 0.45 / 0.46
Max. | 0.97 / 0.85 / 0.95 | 0.97 / 0.85 / 0.95
p-value | <0.001 | <0.001
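Tables 3 and 4 report Levene's k-sample comparison of equal variances [58]. For readers wishing to reproduce such a test, the sketch below uses SciPy's implementation on synthetic reflectance samples; the means and standard deviations merely echo the 0.443–0.507 µm column of Table 4 and do not stand for the actual field data.

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(42)
# Synthetic stand-ins for the n = 100 ASD reflectance samples per crop.
cabbage = rng.normal(0.38, 0.12, 100)
maize = rng.normal(0.48, 0.12, 100)
sugar_bean = rng.normal(0.34, 0.05, 100)

# center="mean" gives Levene's original (1960) statistic.
stat, p = levene(cabbage, maize, sugar_bean, center="mean")
print(f"W = {stat:.3f}, p = {p:.4g}")  # p < 0.05 -> variances differ across crops
```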
Table 5. $Q_1$, $Q_3$, and $IQR$ values for each crop type computed from selected spectral wavelengths.

Surveyed crops | 0.443–0.507 µm ($Q_{i,l}$ / $Q_{i,u}$ / $IQR_i$) | 0.533–0.587 µm ($Q_{i,l}$ / $Q_{i,u}$ / $IQR_i$) | 0.785–0.899 µm ($Q_{i,l}$ / $Q_{i,u}$ / $IQR_i$)
Cabbage | 0.33 / 0.43 / 0.10 | 0.18 / 0.25 / 0.07 | 0.62 / 0.75 / 0.13
Maize | 0.50 / 0.68 / 0.18 | 0.29 / 0.44 / 0.17 | 0.63 / 0.72 / 0.09
Sugar bean | 0.49 / 0.61 / 0.12 | 0.18 / 0.22 / 0.04 | 0.66 / 0.77 / 0.11
Table 6. Upper and lower thresholds computed for characterizing crops.
Surveyed crops | 0.443–0.507 µm ($T_{i,l}$ / $T_{i,u}$) | 0.533–0.587 µm ($T_{i,l}$ / $T_{i,u}$) | 0.785–0.899 µm ($T_{i,l}$ / $T_{i,u}$)
Cabbage | 0.18 / 0.58 | 0.075 / 0.36 | 0.425 / 0.745
Maize | 0.48 / 0.79 | 0.335 / 0.695 | 0.495 / 0.855
Sugar bean | 0.55 / 0.95 | 0.12 / 0.28 | 0.645 / 0.935
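The thresholds in Table 6 widen the quartile ranges of Table 5. Several entries (e.g., the cabbage pair in the 0.443–0.507 µm band, 0.18/0.58) coincide exactly with Tukey-style fences at 1.5 × IQR (cf. Tukey [59]), while a few others appear to have been adjusted further. A minimal sketch of the fence computation, assuming the 1.5 multiplier:

```python
def iqr_fences(q1, q3, k=1.5):
    """Lower/upper thresholds as fences around the interquartile range:
    T_l = Q1 - k*IQR, T_u = Q3 + k*IQR."""
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

# Worked check against Tables 5 and 6 (cabbage, 0.443-0.507 um):
print(iqr_fences(0.33, 0.43))  # ~(0.18, 0.58), matching the Table 6 entry
```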
Table 7. Optimized thresholds for the surveyed crop types.
Crop pair | 0.443–0.507 µm | 0.533–0.587 µm | 0.785–0.899 µm
Cabbage–Maize | 0.53 | 0.348 | 0.62
Maize–Sugar bean | 0.67 | 0.24 | 0.75
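For illustration, the sketch below shows how the optimized boundaries in Table 7 drive the per-pixel multilevel thresholding decision on a single band. The band choice (Blue), the label ordering, and the use of np.digitize are our assumptions for the sketch, not the study's implementation.

```python
import numpy as np

# Optimized class boundaries for the Blue band (0.443-0.507 um), from Table 7.
BOUNDARIES = [0.53, 0.67]  # cabbage-maize, maize-sugar bean
LABELS = np.array(["cabbage", "maize", "sugar bean"])

def mlt_classify(reflectance):
    """Assign each pixel a crop class by comparing its band reflectance
    against the two optimized thresholds (multilevel thresholding)."""
    return LABELS[np.digitize(reflectance, BOUNDARIES)]

print(mlt_classify(np.array([0.40, 0.60, 0.80])))  # ['cabbage' 'maize' 'sugar bean']
```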
Table 8. Accuracy assessment summary.
Classifier | Overall Accuracy | KIA
MLT on Blue band | 0.435 | 0.372
MLT on Green band | 0.333 | 0.307
MLT on NIR | 0.496 | 0.488
MLP | 0.594 | 0.531
RBFNN | 0.662 | 0.616
SOM | 0.643 | 0.659
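The values in Table 8 follow the standard error-matrix definitions, with the KIA computed as Cohen's kappa [67]. A minimal sketch, assuming a confusion matrix with reference classes in rows and predicted classes in columns (the example matrix is hypothetical, not the study's data):

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa (KIA) from a confusion matrix
    with reference classes in rows and predicted classes in columns."""
    n = confusion.sum()
    p_observed = np.trace(confusion) / n                           # overall accuracy
    p_chance = (confusion.sum(axis=0) @ confusion.sum(axis=1)) / n**2
    kappa = (p_observed - p_chance) / (1.0 - p_chance)
    return p_observed, kappa

# Hypothetical 3-class (cabbage, maize, sugar bean) confusion matrix:
cm = np.array([[50, 10, 5], [12, 40, 13], [8, 15, 47]])
print(accuracy_and_kappa(cm))  # (0.685, ~0.528)
```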