Article

Computational Intelligence Approach for Fall Armyworm Control in Maize Crop

by Alex B. Bertolla 1,2,* and Paulo E. Cruvinel 1,2,*
1 Embrapa Instrumentation, São Carlos 13561-206, SP, Brazil
2 Post Graduation Program in Computer Science, Federal University of São Carlos, São Carlos 13565-905, SP, Brazil
* Authors to whom correspondence should be addressed.
Electronics 2025, 14(7), 1449; https://doi.org/10.3390/electronics14071449
Submission received: 14 January 2025 / Revised: 27 March 2025 / Accepted: 29 March 2025 / Published: 3 April 2025

Abstract

This paper presents a method for dynamic pattern recognition and classification of a dangerous caterpillar species to allow for its control in maize crops. Dynamic pattern recognition supports the identification of patterns in digital image data that change over time. Identifying fall armyworms (Spodoptera frugiperda) is critical in maize production across all of its growth stages. For such pest control, traditional agricultural practices still depend on human visual effort, resulting in significant losses and negative impacts on maize production, food security, and the economy. The developed method is based on the integration of digital image processing, multivariate statistics, and machine learning techniques. We used a supervised machine learning algorithm that classifies data by finding an optimal hyperplane that maximizes the separation between classes of caterpillars of different lengths in an N-dimensional space. The results show the method's efficiency, effectiveness, and suitability to support decision making in this customized control context.

1. Introduction

Maize (Zea mays L.) is one of the most important agricultural crops and one of the most widely cultivated cereals in the world. There are signs of maize cultivation dating back approximately seven thousand years in the region where present-day Mexico is located, where it was the main food source for the peoples of that period [1,2].
According to the United States Department of Agriculture (USDA) [3], Brazil is the third largest maize producer in the world, with a production of 122 million tons over an area of 21.5 million hectares in its 2023/2024 harvest and a production forecast of 127 million tons over an area of 22.30 million hectares for its 2024/2025 harvest. For comparison, the United States and China are ahead of Brazil in this regard, with forecast productions for their 2023/2024 harvests of 389.67 and 288.84 million tons, respectively.
A large range of pests and diseases attacks the maize crop during its different stages of plant development, severely affecting its productive potential [4]. Among other caterpillar pests, such as Helicoverpa armigera, Helicoverpa zea, and Elasmopalpus lignosellus Zeller [5], the fall armyworm (Spodoptera frugiperda) (FAW) is one of the most notorious and voracious, able to cause losses that can reach approximately 70% of production, as it attacks the plant while still in its formation stage [6]. The extensive losses caused by this pest can heavily affect the economy, as it is also considered a serious pest of other important crops in the world, such as soybean, cotton, and potato [7]. Table 1 presents the specific patterns of the above-cited caterpillars and their particularities.
The first reports of FAW come from regions of North and South America, where it has been considered a constant pest [8]; however, in recent years, the presence of this pest has also been reported in different regions around the globe, including Asia [9], Africa [7], and Oceania [10]. In 2020, Kinkar et al. prepared a technical report for the European Food Safety Authority (EFSA), noting that emergency measures were in place to prevent the introduction and spread of FAW within the European Union (EU). Due to the high spread capacity of adults, detection of moths at low population levels is crucial to avoid further spread of this pest [11].
The developmental sequence of the FAW can be seen in Figure 1, which characterizes its different stages of growth, also called instars [5,12]. It is essential to emphasize that at about the neonate stage (newborn), given the size of the pest, it is impractical to attempt its detection by imaging; rather, it is more prudent to detect colonies (eggs).
However, according to agricultural pest experts, instars 5 and 6 can be regarded as a single instar based on their characteristics, method of control, and damage to the maize crop. Thus, the main focus of FAW pattern classification should be instars 1 to 5.
The current method of monitoring for FAW in maize includes trapping males using a pheromone odor similar to that of females. Once the pest is confirmed to be present in crop areas, insecticides are used for control.
However, such a technique can capture only the specimens that are already about to transform into moths or those that are already in that form, at which point they no longer cause significant damage to production [13]. Furthermore, traditional methods executed by humans are also labor-intensive and subjective, as they depend on human efforts [14].
This discrepancy between the current pest detection method and the intended result (to detect pests while they are still at their harmful stages) motivated this research study to find other methods for the early detection of this pest in cultured areas. Specifically, the focus of this study is on the development of a method for dynamic pattern recognition and classification for FAW based on the integration of digital image and signal processing, multivariate statistics, and machine learning (ML) techniques to favor the productivity of the maize crop.
The concept of computer vision seeks, through computer models, to reproduce the ability and functions of human vision, that is, the ability to see and interpret a scene. The ability to see can be implemented through the use of image acquisition devices and suitable methods for pattern recognition.
Nowadays, intelligent systems in agriculture that offer support for decisions in productive areas are equipped with capabilities based on machine learning processes. ML describes the capacity of systems to learn from customized problem-training data to automate and solve associated tasks [15,16,17]. In conjunction with such a concept, deep learning (DL) is also a machine learning process, but one based on artificial neural networks with a set of specific-purpose layers [18,19]. The first layer of a neural network is the input layer, where the model receives input data. Such networks also include convolutional layers, which use filters to detect features in the input data. In addition, pooling layers are used to reduce the spatial dimensions of the data, which decreases the computational load. Finally, to produce the output, a fully connected layer is included, in which each neuron is connected to every neuron in the previous and subsequent layers, allowing for different analysis approaches [20].
Furthermore, ML and artificial intelligence (AI) can help in the decision-making process to establish a diagnosis with the aim of controlling this pest in a maize crop area. Currently, image and signal processing techniques are being utilized in several domains, most prominently in medicine [21], industry, security, and agriculture [22], among others.
Image acquisition has been verified to be a promising approach to the detection and identification of insect pests and plant diseases. In 2010, Sankaran et al. sought to identify diseased plants with greening or with nutritional deficiency through a method based on mid-infrared spectroscopy, where the samples are analyzed by a spectrometer [23]. In 2014, Miranda et al. proposed the study of different digital image processing techniques for the detection of pests in rice paddies by capturing images in the visible spectrum (RGB). They proposed a methodology in which images are scanned pixel by pixel both horizontally and vertically. This process is conducted in such a way that it is possible to detect and calculate the size (in pixels) of detected pests [24].
In 2015, Buades et al. presented a method for the filtering of digital images based on non-local means [25]. This method was compared with traditional methods of digital image filtering, such as Gaussian filtering, anisotropic diffusion filtering, and neighborhood-based filtering, for white noise reduction. In 2011, Mythil and Kavitha compared the efficiencies of applying different types of filters to reduce noise in color digital images [26]. Mishra et al. compared the Wiener, Lucy–Richardson, and regularized filters for noise reduction in digital images [27].
In 2021, Bertolla and Cruvinel presented a method for the filtering of digital images degraded by non-stationary noise. In their research study, these authors added Gaussian-type noise with different levels of intensity to images of agricultural pests. Their approach allowed for the observation and processing of images of maize pests affected by random noise signals [28].
In 2013, He et al. highlighted methodologies based on image segmentation for the identification of pests and diseases in crops. With these methods, pests and diseases of cotton crops could be identified through segmentation techniques based on pseudo-colors (HSI and YCbCr), as well as in the visible spectrum (RGB) [29]. In 2015, Xia et al. detected small insect pests in low-resolution images that were segmented using the watershed method [30]. In 2017, Kumar et al. used the image segmentation technique known as adaptive thresholding for the detection and counting of insect pests. Such a method consists of computing the threshold of each pixel of the image by interpolating the results of the sub-images [31]. In 2018, Sriwastwa et al. compared color-based segmentation with Otsu segmentation and edge detection methods. Their experiments were initially performed with the Pyrilla pest (Pyrilla), found in sugarcane cultivation (Saccharum officinarum). Subsequently, the same methods were applied to images of termites (Isoptera) found in maize cultivation. For color-based segmentation, images were converted to the CIE L*a*b* color space [32].
For the feature extraction and pattern recognition of agricultural pests and diseases, in 2007, Huang applied an artificial neural network with the backpropagation algorithm for the classification of bacterial soft rot (BSR), bacterial brown spot (BBS), and Phytophthora black rot (PBR) in orchid leaves. The color and texture characteristics of the lesion area caused by the diseases were extracted using a co-occurrence matrix [33].
In 2011, Sette and Mailard proposed a method based on texture analysis of georeferenced images for the monitoring of a certain region of the Atlantic Forest based on images analyzed in the visible spectrum. Metrics such as contrast, entropy, correlation, inverse difference moment, and angular second moment were also extracted from the co-occurrence matrix [34].
On the other hand, machine learning-based approaches were discussed in 2008 by Ahmed et al., who then developed a real-time machine vision-based methodology for the recognition and control of invasive plants. The proposed system also had the objective of recognizing and classifying invasive plants into broad-leaf and narrow-leaf classes based on the measurement of plant density through masking operations [35].
In 2012, Guerrero et al. proposed a method based on a support vector machine (SVM) classifier for identification of weeds in maize plantations. For the classification process, they used SVM classifiers with a polynomial kernel, radial basis function (RBF), and sigmoid function [36].
In 2015, for the classification of different leaves, Lee et al. used a Convolutional Neural Network (CNN) based on the AlexNet neural network and a deconvolutional network to observe the transformation of leaf characteristics. For the detection of pests and diseases in tomato cultivation, a methodology based on machine learning techniques was proposed by Fuentes et al. [37]. Three models of neural networks were used to perform this task. To recognize pests and diseases (objects of interest) and their locations in the plant, faster region-based convolutional neural networks (F-CNNs) and region-based fully convolutional neural networks (R-FCNs) have been used [38].
In 2017, Thenmozhi and Reddy presented digital image processing techniques for insect detection in the early stage of sugar cane crops based on the extraction of nine geometrical features. The authors used the Bugwood image database for image sample composition [39]. In 2023, the same image database was used by Tian et al. The authors proposed a model based on the deep learning architecture to identify nine kinds of tomato diseases [40]. In 2019, Evangelista proposed the classification of flies and mosquitoes based on the frequency of their wing beats. Fast Fourier transform and a Bayesian classifier were used [41].
In 2018, Nanda et al. proposed a method for detecting termites based on SVM classifiers, using pieces of wood previously divided into two classes: infested and not infested by termites. SVM classifiers with linear kernel, RBF, polynomial, and sigmoid functions were used on datasets obtained by microphones, which captured sound signals from termites [42]. In 2019, Lui et al. presented a methodology for extracting data from intermediate layers of CNNs with the purpose of using these data to train a classifier and, thus, make it more robust. Features extracted from the intermediate layers of a CNN were representative and could significantly improve the accuracy of the classifier. To test the effectiveness of that method, CNNs such as AlexNet, VggNet, and ResNet were used for feature extraction. The extracted features were used to train classifiers based on SVM, naive Bayes, linear discriminant analysis (LDA), and decision trees [43].
In 2019, Li et al. trained a maximum likelihood estimator for the classification of maize grains. “Normal” and “damaged” classes were defined, the latter having seven subclassifications [44]. In 2019, Abdelghafour et al. presented a framework for classifying the covers of vines in their different phenological stages, that is, foliage, peduncle, and fruit. For this task, a Bayesian classifier and a probabilistic maximum a posteriori (MAP) estimator were used [45].
In 2022, Moreno and Cruvinel presented results related to the control of weed species with instrumental improvements based on a computer vision system for direct precision spray control in agricultural crops for the identification of invasive plant families and their quantities [46].
In 2024, Wang and Luo proposed a method to identify specific pests that occur in maize crops. This method is based on a YOLOv7 network, with the SPD-Conv module replacing the convolutional layer to better capture small-target features and location information. According to the authors, the experimental results showed that the improved YOLOv7 model was more efficient for such control [47].
In 2024, Liu et al. proposed a model to detect maize leaf diseases and pests. These authors presented a multi-scale inverted residual convolutional block to improve models’ ability to locate the desired characteristics and to reduce interference from complex backgrounds. In addition, they used a multi-hop local-feature architecture to address problems regarding the extraction of features from images [48].
In 2025, Valderrama et al. presented an ML method to detect Aleurothrixus floccosus in citrus crops. This method is based on random sampling image acquisition, i.e., alternating the extraction of leaves from different trees. Techniques of imaging processing, including noise reduction, edge smoothing, and segmentation, were also applied. The final results were acceptable, and the authors used a dataset of 1200 digital images for validation [17].
Also in 2025, Zhong et al. proposed a flax pest and disease detection method for different crops based on an improved YOLOv8 model. The authors employed the Albumentations library for data augmentation and a Bidirectional Feature Pyramid Network (BiFPN) module. This arrangement was organized to replace the original feature extraction network, and the experimental results demonstrated that the improved model achieved significant detection performance on the flax pest and disease dataset [49].
This paper presents the integration of digital image processing, multivariate statistics, and computational intelligence techniques, focusing on a method for dynamic pattern recognition and classification of FAW caterpillars. Customized context analysis is also performed, taking into account ML and DL based on SVM classifiers and an AlexNet CNN (A-CNN) [50] through the use of the Tensorflow framework.
After this Introduction, the remainder of this paper is organized as follows: Section 2 introduces the materials and methods, Section 3 presents the results and respective discussions, and Section 4 provides conclusions and includes suggestions for continuity and future research.

2. Materials and Methods

All experiments were performed in Python (version 3.11), using the image processing and ML libraries available in OpenCV, as well as the scikit-image and scikit-learn algorithms. We also considered an operating platform with a 64-bit Intel(R) Core(TM) i7-970 CPU, 16 GB of RAM, and the Microsoft Windows 11 operating system. Figure 2 shows a block diagram of the method for classifying the patterns of FAW caterpillars.
Concerning the dataset used for validation, the choice of images was based mainly on their quality and diversity. For this study, the Insect Images dataset, which is a subset of the Bugwood Image Database System, was selected. Currently, the Bugwood Image Database System is composed of more than 300 thousand images divided into more than 27 thousand subgroups. Another important factor for the use of this dataset is that most of the images were captured in the field, that is, they were influenced by lighting and have variations in scale and size, among other characteristics resulting from acquisition in a real environment. Therefore, in order to minimize the effects of lighting on the images, a set of digital images from leaves and cobs with the presence of FAW was taken into account, acquired under close lighting intensities, in addition to the inclusion of geometric feature extraction, together with color and texture information, for pattern recognition. Table 2 outlines the characteristics of the images used to validate the developed method.
The restoration process considers the use of a degradation function (H) acting on an original image (f(x,y)), producing a degraded observation. When the degradation is exclusively due to noise, the H function is applied with a value equal to 1. A restoration filter (g(x,y)) then enables an estimate of the original image to be obtained with a better Signal-to-Noise Ratio (SNR). The process of restoring noisy images can be applied in both the spatial and frequency domains.
For noise filtering of the acquired images, the use of digital filters and the presence of noise, mainly random Gaussian and impulsive noises, were considered [51]. In digital image processing, noise can be defined as any change in the signal that causes degradation or loss of information from the original signal, which can be caused by lighting conditions of the scene or object, the temperature of the signal capture sensor during the acquisition of the image, or transmission of the image, among other factors [28].
For the problem regarding FAW, it has been observed that only the noises present in the considered images need to be treated. Thus, H equals 1, and additive noises resulting from temperature variation of the image capture sensor and influenced by lighting conditions can be represented as follows:

$\hat{f}(x,y) = \eta(x,y) + f(x,y),$

where $f(x,y)$ represents the original image, $\eta(x,y)$ is the noise added to the original image, and $\hat{f}(x,y)$ is the noisy image [26,52].
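As a minimal illustration of this additive model (the file name and noise level below are hypothetical, not values from this study), the degraded observation can be simulated as follows:

```python
import cv2
import numpy as np

# Hypothetical input file; any grayscale image of a maize leaf would do.
f = cv2.imread("faw_leaf.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Additive zero-mean Gaussian noise eta(x, y); a standard deviation of 15 gray levels is assumed.
eta = np.random.normal(loc=0.0, scale=15.0, size=f.shape).astype(np.float32)

# Noisy observation f_hat(x, y) = f(x, y) + eta(x, y), clipped back to the 8-bit range.
f_hat = np.clip(f + eta, 0, 255).astype(np.uint8)
```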
In this work, image restoration was performed in the spatial domain. The use of Gaussian filters [53] and non-local means [25] was evaluated. The application of a Gaussian filter has the effect of smoothing an image; the degree of smoothing is controlled by the standard deviation ($\sigma$). Its kernel follows the mathematical model expressed as follows:

$G(x,y) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{x^2+y^2}{2\sigma^2}\right),$

where x and y represent the kernel coordinates of the filter and $\sigma$ is the standard deviation of the Gaussian function, with $\sigma > 0$.
The application of a non-local means (NLM) filter is based, as the name suggests, on non-local mean measures. The NLM filter estimates the intensity of each pixel (i) of a noisy image (g) as the weighted average of the pixels (j) in a search region (f) of the image [54], as follows:

$NLM[g](i) = \frac{1}{C(i)} \sum_{j \in f} \omega(i,j)\, g(j),$

where $C(i)$ is a normalizing factor, i.e., $C(i) \neq 0$, and $\omega(i,j)$ represents the similarity weight between pixels i and j, satisfying $0 \leq \omega(i,j) \leq 1$ and $\sum_{j \in f} \omega(i,j) = 1$. In addition, the weight $\omega(i,j)$ [55] is calculated as follows:

$\omega(i,j) = \frac{1}{C(i)} \exp\left(-\frac{\left\| N_i - N_j \right\|^2}{\sigma^2}\right),$

where $N_i$ and $N_j$ are vectors of pixel intensities in square neighborhoods centered at positions i and j, respectively, and the similarity is measured by the Gaussian-weighted Euclidean distance between the gray levels of these neighborhoods.
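A minimal sketch of the two filters evaluated here, using OpenCV's implementations (the kernel size, sigma, and NLM strength h shown below are placeholders, not the values tuned in this study):

```python
import cv2

noisy = cv2.imread("faw_leaf_noisy.png")  # hypothetical noisy RGB image

# Gaussian smoothing: the degree of smoothing is controlled by sigma.
gaussian = cv2.GaussianBlur(noisy, ksize=(7, 7), sigmaX=1.5)

# Non-local means: each pixel is replaced by a weighted average of pixels
# whose surrounding patches are similar (weights as in the equations above).
nlm = cv2.fastNlMeansDenoisingColored(noisy, None, h=10, hColor=10,
                                      templateWindowSize=7, searchWindowSize=21)
```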
For color-space operations, the use of the HSV and CIE L*a*b* color spaces was evaluated based on images acquired in the RGB color space [56].
In reference to digital image processing, the RGB color space applies mainly to the image acquisition and result visualization stages. However, because of its low capacity to capture intensity variation in color components [57], its use is not recommended in the other stages of the process.
To convert an RGB image to the HSV color space, which provides a more natural representation of colors for human perception, it is necessary to normalize the values of the R, G, and B components, compute their respective maximum ($C_{max}$) and minimum ($C_{min}$) values and their difference ($\Delta = C_{max} - C_{min}$), and then convert them to the HSV color space as follows:

$H = \begin{cases} 60° \times \left(\frac{G - B}{\Delta} \bmod 6\right) & \text{if } C_{max} = R \\ 60° \times \left(\frac{B - R}{\Delta} + 2\right) & \text{if } C_{max} = G \\ 60° \times \left(\frac{R - G}{\Delta} + 4\right) & \text{if } C_{max} = B \end{cases},$

$S = \begin{cases} 0 & \text{if } \Delta = 0 \\ \frac{\Delta}{C_{max}} & \text{if } \Delta \neq 0 \end{cases},$

$V = C_{max},$

where $C_{max}$ is the maximum value of the R, G, and B components; $C_{min}$ is the minimum value of the R, G, and B components; and H, S, and V are the components of the HSV color space.
It is also possible to convert images from the HSV to the RGB color space, considering the following:

$(R_1, G_1, B_1) = \begin{cases} (C, X, 0) & \text{if } 0° \leq H < 60° \\ (X, C, 0) & \text{if } 60° \leq H < 120° \\ (0, C, X) & \text{if } 120° \leq H < 180° \\ (0, X, C) & \text{if } 180° \leq H < 240° \\ (X, 0, C) & \text{if } 240° \leq H < 300° \\ (C, 0, X) & \text{if } 300° \leq H < 360° \end{cases},$

where $R_1$, $G_1$, and $B_1$ represent points on the faces of the RGB cube, whereas C represents the chroma component.
On the other hand, the conversion from the RGB to the CIE L*a*b* color space follows the method described below. The Commission Internationale de l'Eclairage (CIE), or the International Commission on Illumination, defines the sensation of color based on the elements of luminosity, hue, and chromaticity. Thus, the condition of existence of color is based on three elements: illuminant, object, and observer [58]. Accordingly, the color space known as CIE Lab considers the "L" component as a representation of luminosity, ranging from 0 to 100; the "a" component as a representation of chromaticity, ranging from green (negative values) to red or magenta (positive values); and the "b" component also as a representation of chromaticity, varying from blue (negative values) to yellow (positive values) [59]. In 1976, the CIE L*a*b* standard was created based on improvements to the CIE Lab standard created in 1964. The new standard provides more accurate color differentiation with respect to human perception [60]. However, given that the old standard is still widely used, asterisks (*) were adopted in the nomenclature of the new standard.
Because the CIE L*a*b* color space standard, like its predecessor, is based on the CIE XYZ color standard, conversion from RGB images occurs in two steps [61]. First, the RGB image is converted to the CIE XYZ standard, that is,

$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4125 & 0.3576 & 0.1804 \\ 0.2127 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9502 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}.$
Once the image has been converted to the CIE XYZ standard, it can be converted to the CIE L*a*b* standard, considering the following:
$L^* = \begin{cases} 116\left(\frac{Y}{Y_n}\right)^{1/3} - 16 & \text{if } \frac{Y}{Y_n} > 0.008856 \\ 903.3\,\frac{Y}{Y_n} & \text{if } \frac{Y}{Y_n} \leq 0.008856 \end{cases},$

$a^* = 500\left[\left(\frac{X}{X_n}\right)^{1/3} - \left(\frac{Y}{Y_n}\right)^{1/3}\right],$

$b^* = 200\left[\left(\frac{Y}{Y_n}\right)^{1/3} - \left(\frac{Z}{Z_n}\right)^{1/3}\right],$

where $X_n$, $Y_n$, and $Z_n$ are the tristimulus values of the reference white.
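In practice, both conversions can be performed with OpenCV, as in the short sketch below (note that OpenCV loads images in BGR order, so the conversion codes start from BGR; the file name is hypothetical):

```python
import cv2

bgr = cv2.imread("faw_leaf_denoised.png")   # OpenCV reads images in BGR order

hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)  # H, S, V components
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)  # CIE L*a*b* via the CIE XYZ standard

H, S, V = cv2.split(hsv)
L, a, b = cv2.split(lab)                    # a* and b* maps later used for segmentation
```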
The segmentation step aims to divide, or isolate, regions of an image, which could be labeled as foreground and background objects. Thus, foreground objects are called regions of interest (ROIs) of the image, that is, regions where patterns related to the end objective are sought for the identification of FAW. Background objects are any other objects that are not of interest [62].
Segmentation algorithms are generally based on two basic properties: discontinuity, such as edge detection and the identification of borders between regions, and similarity, which is the case of pixel allocation in a given region [63].
In binary images, the representation of pixels with values of 0 (black) normally indicates the background of the image, whereas that of pixels with values of 1 (white) indicates the object(s) of interest [64].
A binary image ($b(x,y)$) is generated by applying a threshold (T) to the histogram of the original image ($f(x,y)$), considering the following:

$b(x,y) = \begin{cases} 0, & \text{if } f(x,y) < T \\ 1, & \text{if } f(x,y) \geq T \end{cases}.$
The threshold value can be chosen through a manual analysis from the histogram of an image or the use of an automatic threshold selection algorithm. In this case, seed pixels were used. Additionally, the use of Otsu’s method was considered. This method performs non-parametric and unsupervised discriminant analysis and automatically selects the optimal threshold based on the intensity values of the pixels of a digital image, allowing for a better separation of classes [65].
In a 2D digital image with dimensions of $M \times N$ and L intensity levels, where $n_i$ denotes the number of pixels with intensity i and $MN$ is the total number of pixels, the histogram is normalized considering the following [66]:

$p_i = \frac{n_i}{MN},$

where $p_i \geq 0$ and $\sum_{i=0}^{L-1} p_i = 1$.
The operation of a threshold ($T(k) = k$, where $0 < k < L-1$) divides the L intensity levels of an image into two classes ($C_1$ and $C_2$, representing the object of interest and the background of the image, respectively, where $C_1$ consists of all pixels in the range $[0, k]$ and $C_2$ covers the range $[k+1, L-1]$). This operation is defined as follows:

$P_1(k) = \sum_{i=0}^{k} p_i,$

where $P_1(k)$ is the probability of a pixel being assigned to class $C_1$.
Likewise, the probability of occurrence of class $C_2$ is expressed as follows:

$P_2(k) = \sum_{i=k+1}^{L-1} p_i = 1 - P_1(k).$

The values of the average intensities of classes $C_1$ and $C_2$ are given by the following equations:

$m_1(k) = \sum_{i=0}^{k} i\, P(i \mid C_1) = \frac{1}{P_1(k)} \sum_{i=0}^{k} i\, p_i,$

$m_2(k) = \sum_{i=k+1}^{L-1} i\, P(i \mid C_2) = \frac{1}{P_2(k)} \sum_{i=k+1}^{L-1} i\, p_i.$

The average cumulative intensity ($m(k)$) is expressed as follows:

$m(k) = \sum_{i=0}^{k} i\, p_i.$
The optimal threshold can be obtained via the maximization of one of the discriminant functions, described as follows:

$\lambda = \frac{\sigma_B^2}{\sigma_W^2}, \quad \eta = \frac{\sigma_B^2}{\sigma_G^2}, \quad \kappa = \frac{\sigma_G^2}{\sigma_W^2},$

where $\sigma_W^2$ is the within-class variance, $\sigma_B^2$ is the between-class variance, and $\sigma_G^2$ is the global variance, respectively expressed as follows:

$\sigma_B^2 = P_1 P_2 (m_1 - m_2)^2 = \frac{(m_G P_1 - m)^2}{P_1(1 - P_1)},$

$\sigma_G^2 = \sum_{i=0}^{L-1} (i - m_G)^2 p_i,$

where $m_G$ is the global mean intensity.
The greater the difference between the average values $m_1$ and $m_2$, the greater the between-class variance ($\sigma_B^2$), confirming it to be a separability measure [66]. Likewise, since $\sigma_G^2$ is a constant, $\eta$ is also a measure of separability, and maximizing this metric is equivalent to maximizing $\sigma_B^2$. Thus, the objective is to determine the value of k that maximizes the variance between classes. Therefore, the threshold (k) that maximizes the $\eta$ function is selected based on the following:

$\eta(k) = \frac{\sigma_B^2(k)}{\sigma_G^2}.$

Once the optimal threshold ($k^*$) is obtained, the original image ($f(x,y)$) is segmented considering the following:

$b(x,y) = \begin{cases} 0, & \text{if } f(x,y) \leq k^* \\ 1, & \text{if } f(x,y) > k^* \end{cases}.$
The use of Otsu’s method automates the process of segmenting images containing objects that represent FAW both in maize leaves and cobs. Additionally, after the segmentation method was applied, the extraction of features from these patterns, through the use of the methods of the histogram of oriented gradient (HOG) [67] and invariant moments of Hu [68], was considered.
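A minimal sketch of this segmentation step, assuming the a* and b* maps obtained above and the mask polarities discussed in the Results section (pest pixels above the a* threshold and below the b* threshold for leaf images); the file name is hypothetical:

```python
import cv2

lab = cv2.cvtColor(cv2.imread("faw_leaf_denoised.png"), cv2.COLOR_BGR2LAB)
_, a_star, b_star = cv2.split(lab)

# Otsu's method selects the threshold k* automatically by maximizing the between-class variance.
_, mask_a = cv2.threshold(a_star, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)      # pest: values above k*
_, mask_b = cv2.threshold(b_star, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # pest: values below k*

# For leaf images, the final region of interest is the intersection of the two masks.
roi_mask = cv2.bitwise_and(mask_a, mask_b)
segmented = cv2.bitwise_and(lab, lab, mask=roi_mask)
```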
Herein, the HOG descriptor is applied in five stages [69]. The first involves transforming the segmented image in the CIE L*a*b* color space to grayscale (in 8-bit or 256-tone conversion), whereas the other steps involve calculating the intensities of the gradients; grouping the pixels of the image into cells; grouping these cells into blocks; and, finally, extracting characteristics of the magnitude of the gradient, according to the following equation:
$m(x,y) = \sqrt{[f_u(x,y)]^2 + [f_v(x,y)]^2},$

where m is the magnitude of the gradient vector at point $(x,y)$, and $f_u(x,y)$ and $f_v(x,y)$ are its components in the u and v directions, respectively. The direction of the gradient vector ($\theta(x,y)$) is obtained in the following form:

$\theta(x,y) = \tan^{-1}\left(\frac{f_v(x,y)}{f_u(x,y)}\right).$
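A sketch of this extraction step with scikit-image is given below. The parameter values shown (9 orientations, 16 × 16 cells, 2 × 2 blocks) are illustrative assumptions that happen to yield an 8100-element vector for a 256 × 256 image, consistent with the vector length reported later; the values actually used are those listed in Table 5.

```python
import cv2
from skimage.feature import hog

# Segmented image resized to 256 x 256 pixels and converted to 8-bit grayscale.
gray = cv2.cvtColor(cv2.imread("faw_segmented.png"), cv2.COLOR_BGR2GRAY)
gray = cv2.resize(gray, (256, 256))

# Gradients are computed, pixels are grouped into cells, cells into blocks,
# and the gradient magnitudes/orientations are accumulated into a feature vector.
v_hog = hog(gray, orientations=9, pixels_per_cell=(16, 16),
            cells_per_block=(2, 2), block_norm="L2-Hys")
print(v_hog.shape)  # (8100,) for these illustrative parameters
```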
In addition, for geometrical feature extraction purposes, the Hu invariant moments descriptor was considered [70]. First, it is necessary to calculate the two-dimensional moments. They can be defined as polynomial functions projected in a 2D image ( f ( x , y ) ) with dimensions of M × N and order ( p + q ) .
The normalized central moments make the central moments invariant to scale transformations and are defined as follows:

$\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\gamma}},$

where $\gamma$ is defined as

$\gamma = \frac{p+q}{2} + 1,$

for $p + q = 2, 3, \ldots$, i.e., positive integers.
Then, the seven invariant moments can be calculated as follows:

$\phi_1 = \eta_{20} + \eta_{02},$

$\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2,$

$\phi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2,$

$\phi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2,$

$\phi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2],$

$\phi_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03}),$

$\phi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{12} - \eta_{30})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2].$
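These seven moments can be obtained directly from the segmented mask with OpenCV, which computes the same normalized central moments; the log scaling shown at the end is a common practice assumed here, not a step stated in the text:

```python
import cv2
import numpy as np

mask = cv2.imread("faw_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical binary mask

m = cv2.moments(mask)                # raw, central, and normalized central moments (eta_pq)
v_hu = cv2.HuMoments(m).flatten()    # phi_1 ... phi_7

# Optional log scaling to bring the seven moments to comparable magnitudes.
v_hu_log = -np.sign(v_hu) * np.log10(np.abs(v_hu) + 1e-30)
```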
Furthermore, in this work, we used Principal Component Analysis (PCA) [71] to reduce the dimensionality of the feature vector. We consider a data matrix (X) with n observations and m independent variables:

$X = \begin{bmatrix} x_{11} & \cdots & x_{1m} \\ \vdots & \ddots & \vdots \\ x_{n1} & \cdots & x_{nm} \end{bmatrix}.$

The principal components can be computed from a set of m variables ($X_1, X_2, \ldots, X_m$) with means ($\mu_1, \mu_2, \ldots, \mu_m$) and variances ($\sigma_1^2, \sigma_2^2, \ldots, \sigma_m^2$), in which the covariances between the variables are collected as follows:

$\Sigma = \begin{bmatrix} \sigma_{11}^2 & \cdots & \sigma_{1m}^2 \\ \vdots & \ddots & \vdots \\ \sigma_{n1}^2 & \cdots & \sigma_{nm}^2 \end{bmatrix},$

where $\Sigma$ is the covariance matrix. The eigenvalues and eigenvectors (($\lambda_1, e_1$), ($\lambda_2, e_2$), …, ($\lambda_m, e_m$), where $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_m$) associated with $\Sigma$ are then computed, and the i-th principal component is defined as follows:

$Z_i = e_{i1} X_1 + e_{i2} X_2 + \cdots + e_{im} X_m,$

where $Z_i$ is the i-th principal component. The objective is to maximize the variance of $Z_i$ as follows:

$Var(Z_i) = Var(e_i^{\top} X) = e_i^{\top} Var(X)\, e_i = e_i^{\top} \Sigma\, e_i,$

where i = 1, …, m. Thus, the spectral decomposition of the matrix $\Sigma$ is expressed as $\Sigma = P \Lambda P^{\top}$, where P is the matrix whose columns are the eigenvectors of $\Sigma$ and $\Lambda$ is the diagonal matrix of eigenvalues of $\Sigma$. Thus,

$\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_m \end{bmatrix}.$
The principal component of greatest importance is defined as the one with the greatest variance, i.e., the one that explains the maximum variability in the data; the component with the second highest variance is the second most important, and so on, down to the least important component.
The vector of reduced dimensionality is composed of the normalized eigenvectors, representing the descriptors of the FAW in the images. This feature vector comprises the input data for pattern recognition involving ML.
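A minimal scikit-learn sketch of this dimensionality reduction step, assuming a hypothetical file holding the concatenated HOG + Hu feature matrix; the retained-variance target of about 98% matches the value reported in the Results section:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# X: one row per image, columns = concatenated HOG (8100) + Hu (7) features.
X = np.load("faw_features.npy")          # hypothetical file with shape (n_images, 8107)

# Features are on different scales, so they are normalized before PCA.
X_std = StandardScaler().fit_transform(X)

# Keep enough principal components to explain ~98% of the variance.
pca = PCA(n_components=0.98)
Z = pca.fit_transform(X_std)
print(Z.shape, pca.explained_variance_ratio_.sum())
```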
As mentioned previously, in other words, ML may also be understood as the ability of a computational system to improve performance in a task based on experience [72]. In this work, the classification technique is related to ML—specifically, supervised learning [73]. Supervised learning is based on existent and classified patterns serving as training examples that enable a classifier to be efficiently generalized to new datasets [18]. In this context, the feature vector, with reduced dimensionality, is used for classification according to its position in the feature space.
In this work, after having defined the feature vector, as a next step, the application of computational intelligence is considered, i.e., based not only on SVM classifiers [74] but also taking into account the A-CNN through the use of the Tensorflow framework [19,75] for evaluation. In such a way, it is possible to carry out a context evaluation to choose between the use of ML or DL models in such a customized pest control problem.
SVM classifiers were selected for use in this study, since they have been widely used for the classification of agricultural data, as reported in [76]. SVMs can exhibit either linear or non-linear behavior. In this work, two types of kernel associated with these functionalities were evaluated to determine which one leads to the best operability, accuracy, precision, and other parameters associated with the analysis of the behavior of these classifiers [77].
Classifiers with linear behavior use a hyperplane that maximizes the separation between two classes from a training dataset (x) with n objects ($x_i \in X$) and their respective labels ($y_i \in Y$), such that X represents the input dataset and $Y = \{-1, +1\}$ represents the possible classes [72]. In this case, the hyperplane is defined as follows:

$y_i (w \cdot x_i + b) - 1 \geq 0, \quad \forall (x_i, y_i) \in X,$

where w is the normal vector to the hyperplane, $w \cdot x$ is the dot product of vectors w and x, and b is a bias term.
The maximization of the data separation margin with respect to $w \cdot x + b = 0$ can be obtained via the minimization of $\|w\|$ [78]. The minimization problem is quadratic because the objective function is quadratic and the constraints are linear. This problem can be solved via the introduction of the Lagrange function [79]. The Lagrange function must be minimized with respect to w and b, which implies the maximization of the Lagrange multipliers ($\alpha_i$); its derivatives are therefore taken with respect to w and b.
This formulation is referred to as the dual form, whereas the original problem is referred to as the primal form. The dual form presents simpler restrictions and allows for the representation of the optimization problem in terms of inner products between data, which are useful for the nonlinearization of SVMs. It is also worth noting that the dual problem uses only training data and their labels [72].
Linearly separable datasets are classified efficiently by linear SVMs, with some error tolerance in the case of a linear SVM with smooth margins. However, in several cases, it is not possible to efficiently classify training data using this modality of a hyperplane [72], requiring the use of interpolation functions that allow for operation in larger spaces, that is, using non-linear SVM classifiers.
In that manner, it is possible for SVMs to deal with non-linear problems through a mapping function ($\Phi$) that takes the dataset from its original space (input space) to a higher-dimensional space (feature space) [80], characterizing a non-linear SVM classifier.
Thus, based on the choice of $\Phi$, a training example (x) in its input space ($\mathbb{R}^2$) is mapped to the feature space ($\mathbb{R}^3$) as follows:

$\Phi(x) = \Phi(x_1, x_2) = (x_1^2, \sqrt{2}\, x_1 x_2, x_2^2),$

$h(x) = w \cdot \Phi(x) + b = w_1 x_1^2 + w_2 \sqrt{2}\, x_1 x_2 + w_3 x_2^2 + b = 0.$
In this way, the data are initially mapped to a larger space; then, a linear SVM is applied over the new space. A hyperplane is then found with a greater margin of separation, ensuring better generalization [81].
Given that the feature space can be in a very high dimension, the calculation of Φ might be extremely costly or even unfeasible. However, the only necessary information about the mapping is the calculation of the scalar products between the data in the feature space obtained through function kernels [72]. Table 3 presents the kernels selected to validate the developed method.
For each type of kernel, one should define a set of parameters, which must be customized as a function of the problem to be solved.
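A sketch of this classification stage with scikit-learn is given below, showing the two kernels evaluated in this work and one of the train/test proportions used; the file names, regularization parameter C, and gamma setting are placeholders, not the tuned values.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score

Z = np.load("faw_pca_features.npy")   # hypothetical PCA-reduced features, shape (n_images, 128)
y = np.load("faw_instar_labels.npy")  # hypothetical instar labels, values 1..5

# One of the proportions evaluated: 70% training, 30% testing.
Z_train, Z_test, y_train, y_test = train_test_split(
    Z, y, test_size=0.30, stratify=y, random_state=0)

for kernel in ("linear", "sigmoid"):
    clf = SVC(kernel=kernel, C=1.0, gamma="scale")   # C and gamma are placeholder settings
    clf.fit(Z_train, y_train)
    y_pred = clf.predict(Z_test)
    print(kernel,
          round(accuracy_score(y_test, y_pred), 3),
          round(precision_score(y_test, y_pred, average="macro"), 3))
```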
In addition, the use of a CNN based on tensors is evaluated. A tensor can be defined as a mathematical entity used for the multidimensional representation of data; its order determines the number of indices required to access its components. In fact, tensors can be scalars, vectors, matrices, or higher-order entities with N dimensions [82], as follows:

$\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N},$

where $I_k$, for $1 \leq k \leq N$, is the dimension of the k-th mode of the tensor and $x_{i_1, i_2, \ldots, i_N}$ denotes each element of the tensor ($\mathcal{X}$).
A Tensor Network (TN) is defined as a collection of tensors that can be multiplied or compacted according to a predefined topology. In such an arrangement, the configuration can be obtained by taking into account two types of indices. One of them considers a linked index, which can connect tensors two at a time to structure the network arrangement. The other uses an open index, which makes it possible to connect tensors directly to the network being structured. Larger scale dimensions can also be obtained by using addition and multiplication operations and considering linked and open operators, respectively.
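As a small illustration of tensor order and mode dimensions in the TensorFlow framework used here (the shapes below are arbitrary examples, not the network's actual dimensions):

```python
import tensorflow as tf

scalar = tf.constant(3.0)                   # order-0 tensor
vector = tf.constant([1.0, 2.0, 3.0])       # order-1 tensor, I1 = 3
image_batch = tf.zeros([32, 227, 227, 3])   # order-4 tensor: batch x height x width x channels

print(scalar.ndim, vector.ndim, image_batch.ndim)  # 0 1 4
```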
In this work, all parameters are considered, not only for the SVM classifiers but also for the A-CNN, to allow for control of FAW caterpillars in maize crops.

3. Results and Discussions

For this study, an image dataset of the FAW in maize crop was organized, with a total of 2280 images representing its five stages of development, that is, 456 images generated for each stage of development. Figure 3 shows the results of the image acquisition step.
Based on the mean squared errors (MSEs) and peak signal-to-noise ratios (PSNRs) of the images restored by the spatial filtering process, it was observed that the NLM filter yielded a better result than the Gaussian filter, as shown in Figure 4.
Table 4 presents the parameters used for the application of the NLM filter. These parameters were obtained after several tests performed with this filter based on the available literature.
The use of a kernel with dimensions of 7 × 7 pixels and a height and distance patch of 11 pixels allowed for the maximum attenuation of medium- and low-frequency noise effects, as well as the maintenance of the textural characteristics of the images.
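The MSE/PSNR comparison between the two restoration results can be reproduced with scikit-image along the following lines; the file names are hypothetical, and mapping the 7-pixel kernel and 11-pixel patch distance of Table 4 onto OpenCV's templateWindowSize and searchWindowSize parameters is an assumption made only for illustration.

```python
import cv2
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio

reference = cv2.imread("faw_leaf.png", cv2.IMREAD_GRAYSCALE)        # hypothetical clean reference
noisy = cv2.imread("faw_leaf_noisy.png", cv2.IMREAD_GRAYSCALE)

gaussian = cv2.GaussianBlur(noisy, (7, 7), 1.5)
# Assumed mapping of the Table 4 settings onto OpenCV's NLM parameters.
nlm = cv2.fastNlMeansDenoising(noisy, None, h=10,
                               templateWindowSize=7, searchWindowSize=11)

for name, restored in (("Gaussian", gaussian), ("NLM", nlm)):
    print(name,
          "MSE:", mean_squared_error(reference, restored),
          "PSNR:", peak_signal_noise_ratio(reference, restored))
```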
For the image segmentation stage, tests were conducted using Otsu’s method based on the conversion of the HSV and CIE L*a*b* color spaces. However, because of the verified restrictions for the H map of the HSV color space, it was decided that the CIE L*a*b* would be used instead to obtain an ideal segmentation process.
Figure 5, Figure 6 and Figure 7 illustrate the image segmentation process using Otsu’s method based on components a* and b*, where each image shows only one FAW found on the considered region of maize leaves.
Figure 8, Figure 9 and Figure 10 illustrate the image segmentation process using Otsu’s method and based on components a* and b*, where each image showed two FAWs found on the considered region of maize leaves.
When the histograms of the a* components of Figure 5 and Figure 8 are analyzed, it can be seen that the pixels with the lowest values, represented by the blue color, refer to the pixels of maize plant leaves. Conversely, the highest-value pixels, in red, represent pest pixels and other anomalies present on the leaf. Therefore, to segment the pest from the rest of the image, only pixels with values above the threshold obtained by Otsu’s method are considered.
However, based on the tests conducted on the segmentation of images of FAWs on leaves using Otsu’s method and only the a* component of the CIE L*a*b* color space, it was determined that, despite the proven efficiency of this segmentation method, there is an evident need for a second segmentation stage. Assays were then performed using the b* component.
In histogram analysis of Figure 6 and Figure 9, it can be seen that the pixels with the lowest values represent the pixels of the pest and some parts of the leaf. Therefore, to segment the pest from the rest of the image, only pixels with values below the threshold obtained by Otsu’s method were considered.
As with the segmentation process using only the a* component, segmentation using the b* component also resulted in an image with parts of the leaf still present. However, it was possible to verify that, in the spatial domain of the image, the non-segmented parts of the leaf remaining after segmentation with the a* component did not occupy the same spatial locations as those remaining after segmentation with the b* component. Therefore, based on the results obtained via the segmentation of the pest from the image of a leaf by the a* and b* components of the CIE L*a*b* color space, a new segmentation step was performed, this time in the form of an intersection. It can be observed from the results that segmentation by intersection proved to be efficient in the segmentation of images of pests on leaves.
Figure 11 and Figure 12 illustrate the image segmentation process using Otsu’s method and based on the b* components of FAWs found in the considered region of maize cobs. In Figure 11, only one FAW caterpillar is shown. In Figure 12, two FAW caterpillars in different stages are shown.
For the segmentation of the FAW on maize cobs using Otsu’s method, the results of the tests performed on the a* component of the CIE L*a*b* color space are shown to be inefficient, given the conversion process from the RGB color space. Thus, for the segmentation of pests on maize cobs, only the b* component was used.
The histograms of the map of the b* component (Figure 11 and Figure 12) show that the highest-value pixels represent the maize cob pixels in the b* component. Thus, to segment the FAW caterpillars from all the collected images, pixels with values below the threshold obtained by Otsu’s method were considered.
The results of the segmentation process showed that parts of the cob were not completely segmented. These results were expected for the segmentation of both images containing cobs and images containing leaves, because during the tests performed to validate the segmentation step of the method, the complexity of image formation was verified in terms of the pixel values that constitute both the FAW and the background of the image.
Thus, it could be verified that for images of FAW caterpillars found on leaves, the segmentation process using Otsu’s method and the a* and b* maps achieved a result considered ideal, whereas for images of FAW caterpillars found on cobs, only the segmentation using the map of the b* component was sufficient.
In relation to image descriptors, in this work, we considered the use of the HOG and Hu methodologies.
The HOG descriptor was used to extract texture features of the FAW. Table 5 displays the parameterization of the HOG descriptor.
For the execution of the HOG descriptor, previously segmented images were resized to obtain a spatial resolution of 256 × 256 pixels. Once the parameters of the HOG descriptor were applied to the resized images, it was possible to generate a feature vector of 8100 positions in the form of V [ H O G ] for each image of the FAW, as illustrated in Figure 13.
Once the feature vector ( V [ H O G ] ) was obtained through the use of the HOG descriptor, the Hu invariant moment descriptor was then applied to the FAW images, as demonstrated in Figure 14.
Thus, for each image of the FAW, a feature vector ( V [ H u ] ) was generated, containing the seven invariant moments of Hu, that is, the shape and size features of the pests: M1, M2, M3, M4, M5, M6, and M7.
Considering the obtained descriptors, one referring to texture characteristics (HOG) and another referring to geometric characteristics (Hu), it was possible to classify the patterns of the FAW in its different stages of development after applying PCA to reduce the feature vector.
In fact, in feature extraction using the HOG descriptor and the Hu invariant moment descriptor, the feature vectors ( V [ H O G ] and V [ H u ] ) were concatenated to generate a single vector of features ( V [ H O G , H u ] ) with 8107 positions, as illustrated in Figure 15.
Therefore, because the values referring to texture, shape, and size characteristics are in different scales, it was necessary to normalize them before the generation of a database with the characteristic features of the patterns presented in each analyzed image.
Furthermore, by using PCA, it was possible to achieve a dimensionality reduction from 2280 to 128 principal components, maintaining approximately 98% of the variability of the original data, as shown in Figure 16.
In the tests on the classification and ML stage, SVM classifiers with a linear kernel function and a sigmoid kernel function were considered. For the validation of each SVM classifier, both the accuracy and precision in classifying the FAW target stage were taken into account.
For the training and testing stages of the SVM classifiers and the CNN, dataset proportions of 50%:50%, 70%:30%, and 80%:20% were evaluated for training and testing, respectively. Table 6, Table 7, and Table 8 present the results of SVM classifiers with a linear kernel function and dataset proportions of 50% for training and testing, 70% for training and 30% for testing, and 80% and 20% for training and testing, respectively.
Table 9, Table 10, and Table 11 present the results of the SVM classifiers with sigmoidal kernel functions and dataset proportions of 50%:50%, 70%:30%, and 80%:20% for training and testing, respectively.
Taking into account the classifiers having linear and sigmoidal function kernels, the assessed results revealed their efficiency in the dynamic classification of the FAW. It was possible to observe that the best results were obtained by using the sigmoidal function kernel. Therefore, a deeper evaluation of the SVM classifier based on such a function kernel was considered as follows.
For stage 1, the SVM classifier with a proportion 50%:50% of the dataset for training and testing presented the best result, with an accuracy rate of 72% and a precision rate of 80%. For stage 2, the SVM classifier with a proportion of 80%:20% of the dataset for training and testing, respectively, showed the best results based on precision and accuracy of 80% and 69%, respectively. For stage 3, the SVM classifier with a proportion of 50% of the dataset for training and testing showed the best result, with 80% accuracy and precision. For stage 4, the best result was demonstrated by the SVM classifier with a proportion of 50% of the dataset for training and testing. Finally, for stage 5, the SVM classifier that presented the best result was also the classifier with a proportion of 50% of the dataset for training and testing, resulting in 71% accuracy and 80% precision.
Figure 17, Figure 18 and Figure 19 illustrate the confusion matrices and ROC curves for the performance of Classifier # 1 .
Based on the measurements of the false-positive rate and the true-positive rate, the AUC measures resulting from each version of Classifier #1 could be analyzed. In this way, it could be verified that the proportion of 50% of the dataset used for training and testing led to the best result among the configurations evaluated in this work, i.e., AUC = 52%, followed by the classification with a proportion of 70%:30% of the dataset used for training and testing, respectively, with AUC = 48%. The classification with a proportion of 80%:20% of the dataset used for training and testing, respectively, led to a result of AUC = 45%.
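These per-classifier evaluations (confusion matrix, ROC curve, and AUC) can be reproduced with scikit-learn along the following lines; this is a sketch that continues the SVM example of Section 2 (Z_train, Z_test, y_train, and y_test are the arrays produced there), and the one-vs-rest binarization of the target instar is an assumption made for illustration.

```python
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, roc_curve, auc

# One-vs-rest evaluation of a single target instar (here instar 1).
clf = SVC(kernel="sigmoid", C=1.0, gamma="scale")
clf.fit(Z_train, (y_train == 1).astype(int))

scores = clf.decision_function(Z_test)   # continuous scores for the ROC curve
y_pred = clf.predict(Z_test)
y_true = (y_test == 1).astype(int)

cm = confusion_matrix(y_true, y_pred)
fpr, tpr, _ = roc_curve(y_true, scores)
print("Confusion matrix:\n", cm, "\nAUC:", auc(fpr, tpr))
```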
Figure 20, Figure 21 and Figure 22 illustrate the confusion matrices and ROC curves for Classifier # 2 .
For Classifier #2, regarding the produced AUC measures, it was observed that the classifier with a proportion of 50% of the dataset used for training and testing obtained the best result among the configurations evaluated in this work, i.e., AUC = 50%, followed by the classification with a proportion of 80%:20% of the dataset used for training and testing, respectively, with AUC = 49%. The classification with a proportion of 70%:30% of the dataset used for training and testing, respectively, obtained a result of AUC = 45%.
Figure 23, Figure 24 and Figure 25 illustrate the confusion matrices and ROC curves for Classifier # 3 .
For Classifier #3, for the produced AUC measures, it was observed that the classifier with a proportion of 50% of the dataset used for training and testing presented a result of AUC = 48%, and the same result was achieved for classification with a proportion of 70%:30% of the dataset used for training and testing, respectively. The classification with a proportion of 80%:20% of the dataset used for training and testing, respectively, achieved a result of AUC = 44%.
Figure 26, Figure 27 and Figure 28 illustrate the confusion matrices and ROC curves for Classifier # 4 .
With regard to Classifier #4, the classifier with a proportion of 50% of the dataset used for training and testing presented the best result among the configurations evaluated in this work, i.e., AUC = 49%, followed by the classification with a proportion of 70%:30% of the dataset used for training and testing, respectively, with AUC = 43%. The classification with a proportion of 80%:20% of the dataset used for training and testing, respectively, achieved a result of AUC = 35%.
Figure 29, Figure 30 and Figure 31 illustrate the confusion matrices and ROC curves for Classifier # 5 .
In relation to Classifier #5, regarding the produced AUC measures, it was observed that the classification with a proportion of 50% of the dataset used for training and testing presented the best result among the configurations evaluated in this work, i.e., AUC = 45%, followed by the classification with a proportion of 80%:20% of the dataset used for training and testing, respectively, with AUC = 42%. The classification with a proportion of 70%:30% of the dataset used for training and testing, respectively, achieved a result of AUC = 34%.
It was also observed that, even in some cases where the metrics of the confusion matrix and the ROC curve showed considerable rates of false positives and false negatives, the classification rate of true-positive values was significantly more accurate. Such behavior can be explained by the fact that all images included the pest and that, even at different stages of development, its shape, size, and texture characteristics are similar.
Since the use of SVM classifiers was useful in the methodology for FAW recognition and classification, a deep analysis was carried out in order to select the best parameters. In terms of time consumption for training and testing, the best proportion among the presented results is 70% for training and 30% for testing, as illustrated in Table 12.
Additionally, analyses with the same proportions of data split were carried out on the CNN for FAW classification. In this scenario, the ReLU activation function was selected for the hidden layers, and the Softmax function was applied as the final activation function of the output layer. The considered number of epochs was equal to twelve.
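A compact Keras sketch of an AlexNet-style CNN with the activations mentioned above (ReLU in the hidden layers, Softmax at the output), trained for twelve epochs; the layer sizes, input resolution, and optimizer shown are illustrative assumptions, not the exact A-CNN configuration reported in Table 19.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_acnn(num_classes=5, input_shape=(227, 227, 3)):
    # AlexNet-style stack: convolution + pooling blocks followed by dense layers.
    model = models.Sequential([
        layers.Conv2D(96, 11, strides=4, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(256, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),
        layers.Dense(4096, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # Softmax output over the five instars
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage, with instar labels encoded as 0..4:
# model = build_acnn()
# model.fit(x_train, y_train, epochs=12, validation_data=(x_test, y_test))
```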
Table 13, Table 14, and Table 15 present the results obtained using the A-CNN with dataset proportions of 50%:50%, 70%:30%, and 80%:20% for training and testing, respectively.
Figure 32, Figure 33 and Figure 34 illustrate the confusion matrices and ROC curves for the results obtained using the A-CNN.
Table 16, Table 17, and Table 18 present comparisons of the SVM classifiers and the A-CNN based on precision and accuracy, with dataset proportions of 50%:50%, 70%:30%, and 80%:20% for training and testing, respectively.
For the configurations presented as options for the computational intelligence stage, both for the use of SVM classifiers (with an ML focus) and for the use of the A-CNN (with a DL focus), proportions of 50%:50%, 70%:30%, and 80%:20% were considered for training and testing, respectively. For such a context, both the confusion matrix and the respective ROC curves were observed in order to evaluate the information regarding precision and accuracy for all different instars of the FAW caterpillar.
Taking into account these results, it was possible to observe the following. For instar #1, the best configuration was obtained using the A-CNN with a proportion of 50%:50% for training and testing, respectively, leading to an accuracy of 90% and a precision of 84%. For instar #2, the best configuration was obtained using the A-CNN with a proportion of 50%:50% for training and testing, leading to an accuracy of 90% and a precision of 96%. For instar #3, the best configuration was obtained using the A-CNN with a proportion of 50%:50% for training and testing, leading to an accuracy of 90% and a precision of 80%. It is important to observe that, for instar #3, the precision value obtained with the SVM classifier was equal to that achieved by the A-CNN; however, the accuracy value was smaller. For instar #4, the best configuration was obtained using the A-CNN with a proportion of 50%:50% for training and testing, leading to an accuracy of 90% and a precision of 95%. For instar #5, the best configuration was obtained using the A-CNN with a proportion of 50%:50% for training and testing, leading to an accuracy of 90% and a precision of 100%. Table 19 presents the final parametrization of the A-CNN used to classify the different instars of FAW caterpillars.
Furthermore, Figure 35 illustrates the resulting context analysis considering the use of both ML and DL for FAW caterpillar classification, focusing on its control in a maize crop area. It was possible to observe that the structure based on ML with SVM classifiers solved the problem satisfactorily, including gains in performance; however, the A-CNN showed much better results.
The use of DL has been increasing in recent years, allowing for a multiplicity of data analyses from different angles. Therefore, DL algorithms are recommended for problems that require multiple solutions, or for situations that depend more heavily on leveraging technologies to make decisions based on unstructured or unlabeled data.
Although the use of ML based on structured data enabled a solution, including facilities for interoperability, the use of an A-CNN allows for a robust decision support system for FAW caterpillar classification. Additionally, such a result can be coupled with an agricultural sprayer to control varying dose rates as a function of FAW instars, i.e., enabling pest control in maize plants.
Another relevant aspect observed in this contextual analysis is that, for the same percentage of samples, ML required less training time than DL, which is of interest for pattern recognition and dynamic classification of FAW caterpillars in maize crops because it allows less expensive hardware to be used. However, advanced hardware such as Field-Programmable Gate Arrays (FPGAs) or Graphics Processing Units (GPUs) can now be used for acceleration, bringing significant reductions in processing time, i.e., making the use of DL feasible. In fact, the experiments conducted to validate the proposed method show its capacity to classify the patterns presented by FAWs in maize crops, which involves observing their different color, shape, size, and texture characteristics. The spatial location on the maize plant, i.e., whether the caterpillar is on the leaves or on the cobs, should also be considered.
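For completeness, a simple wall-clock timing harness of the kind used for such comparisons is sketched below; the models are stand-ins (a scikit-learn SVM and a small multilayer perceptron in place of the A-CNN), so the absolute times are only illustrative and depend on the hardware available.

```python
# Timing sketch: measures training time for two model-fitting callables on the
# same synthetic data (stand-ins for the SVM and A-CNN options of the paper).
import time
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=50, n_informative=20,
                           n_classes=5, random_state=0)

def timed_fit(model):
    t0 = time.perf_counter()
    model.fit(X, y)
    return time.perf_counter() - t0

print("SVM (sigmoid kernel):", round(timed_fit(SVC(kernel="sigmoid")), 3), "s")
print("neural stand-in     :", round(timed_fit(MLPClassifier(max_iter=200)), 3), "s")
```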

4. Conclusions

An innovative method for in situ FAW recognition and classification is presented. The proposed method is able to evaluate the growth stages of this caterpillar species directly on maize plants. For validation, digital images of different growth stages and different quantities of caterpillars were evaluated. For image processing, techniques such as filtering, segmentation by color scales, feature extraction based on the HOG and Hu algorithms, and multivariate statistics with PCA were considered. Additionally, a contextual analysis for the computational intelligence stage was conducted, taking into account not only an ML structure based on a set of SVM classifiers but also a DL structure supported by an A-CNN. Results based on the evaluation of accuracy, precision, processing time, and hardware availability confirmed the DL structure with the A-CNN model as the final choice for the presented method, instead of the ML structure based on SVM classifiers. The developed method has also proven useful for FAW pest control, as this is considered the most dangerous pest for maize production, i.e., its use can decrease losses and minimize the economic damage experienced by farmers and producers. In future work, the developed method may be extended with other computational intelligence techniques for pattern recognition and classification in an unsupervised manner, as well as with a multi-spectral camera embedded in an Unmanned Aerial Vehicle (UAV) for real-time operation.

Author Contributions

This work was conducted collaboratively by both authors. Conceptualization, A.B.B. and P.E.C.; Formal analysis, A.B.B. and P.E.C.; Writing—original draft preparation, A.B.B.; Writing—review and editing, P.E.C.; Supervision, P.E.C. All authors have read and agreed to the submitted version of the manuscript.

Funding

This work was supported by Embrapa Instrumentation and Fapesp (project number 17/19350-2).

Data Availability Statement

The original data presented in this study are openly available in the repository 3453148_ABB_PEC on GitHub® at https://github.com/alexbertolla/3453148_ABB_PEC (accessible since 15 January 2025).

Acknowledgments

The authors thank the Brazilian Corporation for Agricultural Research (Embrapa) and the Post-Graduation Program in Computer Science of the Federal University of São Carlos (UFSCar).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
A-CNN    AlexNet Convolutional Neural Network
AI    Artificial Intelligence
BBS    Bacterial Brown Spot
BSR    Bacterial Soft Rot
CIE    Commission Internationale de l’Eclairage
CNN    Convolutional Neural Network
DL    Deep Learning
F-CNN    Faster Region-based Convolutional Neural Network
HOG    Histogram of Oriented Gradient
LDA    Linear Discriminant Analysis
MAP    Maximum A Posteriori
ML    Machine Learning
MSE    Mean Squared Error
NLM    Non-Local Mean
PBR    Phytophthora Black Rot
PCA    Principal Component Analysis
PSNR    Peak Signal-to-Noise Ratio
R-FCN    Region-based Fully Convolutional Neural Network
RBF    Radial Basis Function
ROI    Region Of Interest
SNR    Signal-to-Noise Ratio
SVM    Support Vector Machine
TN    Tensor Network
USDA    United States Department of Agriculture

References

  1. Kennett, D.J.; Prufer, K.M.; Culleton, B.J.; George, R.J.; Robinson, M.; Trask, W.R.; Buckley, G.M.; Moes, E.; Kate, E.J.; Harper, T.K.; et al. Early isotopic evidence for maize as a staple grain in the Americas. Sci. Adv. 2020, 6, eaba3245. [Google Scholar] [CrossRef] [PubMed]
  2. Erenstein, O.; Chamberlin, J.; Sonder, K. Estimating the global number and distribution of maize and wheat farms. Glob. Food Sec. 2021, 30, 100558. [Google Scholar] [CrossRef]
  3. USDA. World Agricultural Production; USDA: Washington, DC, USA, 2022.
  4. Haque, M.A.; Marwaha, S.; Deb, C.K.; Nigam, S.; Arora, A.; Hooda, K.S.; Soujanya, P.L.; Aggarwal, S.K.; Lall, B.; Kumar, M.; et al. Deep learning-based approach for identification of diseases of maize crop. Sci. Rep. 2022, 12, 6334. [Google Scholar] [CrossRef]
  5. Viana, P.A.; Cruz, I.; Waquil, J.M. Árvore do Conhecimento do Milho: Pragas da Fase Inicial. 2011. Available online: https://www.embrapa.br/en/agencia-de-informacao-tecnologica/cultivos/milho/producao/pragas-e-doencas/pragas (accessed on 5 January 2025).
  6. Mutyambai, D.M.; Niassy, S.; Calatayud, P.A.; Subramanian, S. Agronomic factors influencing fall armyworm (Spodoptera frugiperda) infestation and damage and its co-occurrence with stemborers in maize cropping systems in Kenya. Insects 2022, 13, 266. [Google Scholar] [CrossRef] [PubMed]
  7. Makgoba, M.C.; Tshikhudo, P.P.; Nnzeru, L.R.; Makhado, R.A. Impact of fall armyworm (Spodoptera frugiperda)(JE Smith) on small-scale maize farmers and its control strategies in the Limpopo province, South Africa. Jàmbá J. Disaster Risk Stud. 2021, 13, 1016. [Google Scholar] [CrossRef]
  8. Horikoshi, R.J.; Vertuan, H.; de Castro, A.A.; Morrell, K.; Griffith, C.; Evans, A.; Tan, J.; Asiimwe, P.; Anderson, H.; José, M.O.; et al. A new generation of Bt maize for control of fall armyworm (Spodoptera frugiperda). Pest Manag. Sci. 2021, 77, 3727–3736. [Google Scholar] [CrossRef]
  9. Divya, J.; Kalleshwaraswamy, C.; Mallikarjuna, H.; Deshmukh, S. Does recently invaded fall armyworm, Spodoptera frugiperda displace native lepidopteran pests of maize in India. Curr. Sci. 2021, 120, 1358–1367. [Google Scholar] [CrossRef]
  10. Maino, J.L.; Schouten, R.; Overton, K.; Day, R.; Ekesi, S.; Bett, B.; Barton, M.; Gregg, P.C.; Umina, P.A.; Reynolds, O.L. Regional and seasonal activity predictions for fall armyworm in Australia. Curr. Res. Insect Sci. 2021, 1, 100010. [Google Scholar] [CrossRef]
  11. European Food Safety Authority (EFSA); Kinkar, M.; Delbianco, A.; Vos, S. Pest survey card on Spodoptera frugiperda. EFSA Support. Publ. 2020, 17, 1895E. [Google Scholar]
  12. Quick Guide-Fall Armyworm. Available online: https://www.mpi.govt.nz/dmsdocument/53053-Fall-Army-Work-Quick-Growers-Guide (accessed on 5 January 2025).
  13. Cruz-Esteban, S.; Valencia-Botín, A.J.; Virgen, A.; Santiesteban, A.; Mérida-Torres, N.M.; Rojas, J.C. Performance and efficiency of trap designs baited with sex pheromone for monitoring Spodoptera frugiperda males in corn crops. Int. J. Trop. Insect Sci. 2021, 42, 715–722. [Google Scholar] [CrossRef]
  14. Pfordt, A.; Paulus, S. A review on detection and differentiation of maize diseases and pests by imaging sensors. J. Plant Dis. Prot. 2025, 132, 40. [Google Scholar]
  15. Sharma, N.; Sharma, R.; Jindal, N. Machine learning and deep learning applications-a vision. Glob. Transitions Proc. 2021, 2, 24–28. [Google Scholar]
  16. Shinde, P.P.; Shah, S. A review of machine learning and deep learning applications. In Proceedings of the 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 16–18 August 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–6. [Google Scholar]
  17. Valderrama Solis, M.A.; Valenzuela Nina, J.; Echaiz Espinoza, G.A.; Yanyachi Aco Cardenas, D.D.; Villanueva, J.M.M.; Salazar, A.O.; Villarreal, E.R.L. Innovative Machine Learning and Image Processing Methodology for Enhanced Detection of Aleurothrixus Floccosus. Electronics 2025, 14, 358. [Google Scholar] [CrossRef]
  18. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  19. Patel, D.J.; Bhatt, N. Insect identification among deep learning’s meta-architectures using TensorFlow. Int. J. Eng. Adv. Technol. 2019, 9, 1910–1914. [Google Scholar]
  20. Janiesch, C.; Zschech, P.; Heinrich, K. Machine learning and deep learning. Electron. Mark. 2021, 31, 685–695. [Google Scholar] [CrossRef]
  21. Xavier, A.d.C.; Sato, J.R.; Giraldi, G.A.; Rodrigues, P.S.; Thomaz, C.E. Classificação e extração de características discriminantes de imagens 2D de ultrassonografia mamária. In Avanços em Visão Computacional; Curitiba, P.R., Neves, L.A.P., Neto, H.V., Gonzaga, A., Eds.; Omnipax: Curitiba, Brazil, 2012; Chapter 4; pp. 65–87. [Google Scholar]
  22. Menke, A.B.; Junior, O.A.d.C.; Gomes, R.A.T.; Martins, É.d.S.; de Oliveira, S.N. Análise de mudanças do uso agrícola da terra a partir de dados de sensoriamento remoto multitemporal no município de Luis Eduardo Magalhães (BA-Brazil). Rev. Bras. Ensino Fis. 2017, 39, 315–326. [Google Scholar] [CrossRef]
  23. Sankaran, S.; Ehsani, R.; Etxeberria, E. Mid-infrared spectroscopy for detection of Huanglongbing (greening) in citrus leaves. Talanta 2010, 83, 574–581. [Google Scholar] [CrossRef]
  24. Miranda, J.L.; Gerardo, B.D.; Tanguilig, B.T., III. Pest detection and extraction using Image processing techniques. Int. J. Comput. Commun. Eng. 2014, 3, 189–192. [Google Scholar] [CrossRef]
  25. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, San Diego, CA, USA, 20–25 June 2005; Volume II, pp. 60–65. [Google Scholar] [CrossRef]
  26. Mythili, C.; Kavitha, V. Efficient Technique for Color Image Noise Reduction. Res. Bull. Jordan ACM 2011, 2, 41–44. [Google Scholar]
  27. Mishra, R.; Mittal, N.; Khatri, S.K. Digital Image Restoration using Image Filtering Techniques. In Proceedings of the International Conference on Automation, Computational and Technology Management, London, UK, 24–26 April 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 268–272. [Google Scholar] [CrossRef]
  28. Bertolla, A.B.; Cruvinel, P.E. Band-Pass Filtering for Non-Stationary Noise in Agricultural Images to Pest Control Based on Adaptive Semantic Modeling. In Proceedings of the 2021 IEEE 15th International Conference on Semantic Computing (ICSC), Laguna Hills, CA, USA, 27–29 January 2021; pp. 398–403. [Google Scholar] [CrossRef]
  29. He, Q.; Ma, B.; Qu, D.; Zhang, Q.; Hou, X.; Zhao, J. Cotton pests and diseases detection based on image processing. TELKOMNIKA Indones. J. Electr. Eng. 2013, 11, 3445–3450. [Google Scholar] [CrossRef]
  30. Xia, C.; Chon, T.S.; Ren, Z.; Lee, J.M. Automatic identification and counting of small size pests in greenhouse conditions with low computational cost. Ecol. Inform. 2015, 29, 139–146. [Google Scholar] [CrossRef]
  31. Kumar, Y.; Dubey, A.K.; Jothi, A. Pest detection using adaptative thresholding. In Proceedings of the International Conference on Computing, Communication and Automation (ICCCA2017), Greater Noida, India, 5–6 May 2017; pp. 42–46. [Google Scholar]
  32. Sriwastwa, A.; Prakash, S.; Mrinalini; Swarit, S.; Kumari, K.; Sahu, S.S. Detection of Pests Using Color Based Image Segmentation. In Proceedings of the International Conference on Inventive Communication and Computational Technologies, ICICCT 2018, Coimbatore, India, 20–21 April 2018; pp. 1393–1396. [Google Scholar] [CrossRef]
  33. Huang, K.Y. Application of artificial neural network for detecting Phalaenopsis seedling diseases using color and texture features. Comput. Electron. Agric. 2007, 57, 3–11. [Google Scholar] [CrossRef]
  34. Sette, P.G.C.; Maillard, P. Análise de textura de imagem de alta resolução para aprimorar a acurácia da classificação da mata atlântica no sul da Bahia. In Proceedings of the XV Simpósio Brasileiro de Sensoriamento Remoto, Curitiba, Brazil, 30 April–5 May 2011; p. 2020. [Google Scholar]
  35. Elleithy, K. Innovations and Advanced Techniques in Systems, Computing Sciences and Software Engineering; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  36. Guerrero, J.M.; Pajares, G.; Montalvo, M.; Romeo, J.; Guijarro, M. Support Vector Machines for crop/weeds identification in maize fields. Expert Syst. Appl. 2012, 39, 11149–11155. [Google Scholar] [CrossRef]
  37. Fuentes, A.; Yoon, S.; Kim, S.C.; Park, D.S. A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition. Sensors 2017, 17, 2022. [Google Scholar] [CrossRef] [PubMed]
  38. Lee, S.H.; Chan, C.S.; Wilkin, P.; Remagnino, P. Deep-plant: Plant identification with convolutional neural networks. In Proceedings of the International Conference on Image Processing, ICIP, Quebec City, QC, Canada, 27–30 September 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 452–456. [Google Scholar] [CrossRef]
  39. Thenmozhi, K.; Reddy, U.S. Image processing techniques for insect shape detection in field crops. In Proceedings of the 2017 International Conference on Inventive Computing and Informatics (ICICI), Coimbatore, India, 23–24 November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 699–704. [Google Scholar]
  40. Tian, K.; Zeng, J.; Song, T.; Li, Z.; Evans, A.; Li, J. Tomato leaf diseases recognition based on deep convolutional neural networks. J. Agric. Eng. 2023, 54, 1432. [Google Scholar] [CrossRef]
  41. Evangelista, I.R.S. Bayesian wingbeat frequency classification and monitoring of flying insects using wireless sensor networks. In Proceedings of the IEEE Region 10 Annual International Conference, Proceedings/TENCON, Jeju, Republic of Korea, 28–31 October 2018; pp. 2403–2407. [Google Scholar] [CrossRef]
  42. Nanda, M.A.; Seminar, K.B.; Nandika, D.; Maddu, A. A comparison study of kernel functions in the support vector machine and its application for termite detection. Information 2018, 9, 5. [Google Scholar] [CrossRef]
  43. Liu, X.; Zhang, R.; Meng, Z.; Hong, R.; Liu, G. On fusing the latent deep CNN feature for image classification. World Wide Web 2019, 22, 423–436. [Google Scholar] [CrossRef]
  44. Li, X.; Dai, B.; Sun, H.; Li, W. Corn classification system based on computer vision. Symmetry 2019, 11, 591. [Google Scholar] [CrossRef]
  45. Abdelghafour, F.; Rosu, R.; Keresztes, B.; Germain, C.; da Costa, J.P. A Bayesian framework for joint structure and colour based pixel-wise classification of grapevine proximal images. Comput. Electron. Agric. 2019, 158, 345–357. [Google Scholar] [CrossRef]
  46. Moreno, B.M.; Cruvinel, P.E. Computer vision system for identifying on farming weed species. In Proceedings of the 2022 IEEE 16th International Conference on Semantic Computing (ICSC), Laguna Hills, CA, USA, 26–28 January 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 287–292. [Google Scholar]
  47. Wang, F.; Luo, Y. A Study on Corn Pest Detection Based on Improved YOLOv 7. In Proceedings of the 2024 7th International Conference on Computer Information Science and Application Technology (CISAT), Hangzhou, China, 12–14 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1039–1047. [Google Scholar]
  48. Liu, J.; He, C.; Jiang, Y.; Wang, M.; Ye, Z.; He, M. A High-Precision Identification Method for Maize Leaf Diseases and Pests Based on LFMNet under Complex Backgrounds. Plants 2024, 13, 1827. [Google Scholar] [CrossRef]
  49. Zhong, M.; Li, Y.; Gao, Y. Research on Small-Target Detection of Flax Pests and Diseases in Natural Environment by Integrating Similarity-Aware Activation Module and Bidirectional Feature Pyramid Network Module Features. Agronomy 2025, 15, 187. [Google Scholar] [CrossRef]
  50. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; Pereira, F., Burges, C., Bottou, L., Weinberger, K., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2012; Volume 25. [Google Scholar]
  51. Abdurrazzaq, A.; Junoh, A.K.; Muhamad, W.Z.A.W.; Yahya, Z.; Mohd, I. An overview of multi-filters for eliminating impulse noise for digital images. TELKOMNIKA (Telecommun. Comput. Electron. Control) 2020, 18, 385–393. [Google Scholar] [CrossRef]
  52. Jain, P.; Tyagi, V. Spatial and Frequency Domain Filters for Restoration of Noisy Images. IETE J. Educ. 2013, 54, 108–116. [Google Scholar] [CrossRef]
  53. Solomon, C.; Breckon, T. Fundamentos de Processamento Digital de Imagens: Uma Abordagem Prática Com Exemplos em Matlab; Grupo Gen-LTC: Rio de Janeiro, Brazil, 2000. [Google Scholar]
  54. Said, A.B.; Hadjidj, R.; Eddine Melkemi, K.; Foufou, S. Multispectral image denoising with optimized vector non-local mean filter. Digit. Signal Process. Rev. J. 2016, 58, 115–126. [Google Scholar] [CrossRef]
  55. de Brito, A.R. Método para Classificação de Sementes Agrícolas em Imagens Obtidas por Tomografia de Raios-X em Alta Resolução. Ph.D. Thesis, Universidade Federal de São Carlos, São Carlos, Brazil, 2020. [Google Scholar]
  56. Ibraheem, N.A.; Hasan, M.M.; Khan, R.Z.; Mishra, P.K. Understanding color models: A review. ARPN J. Sci. Technol. 2012, 2, 265–275. [Google Scholar]
  57. Saravanan, G.; Yamuna, G.; Nandhini, S. Real time implementation of RGB to HSV/HSI/HSL and its reverse color space models. In Proceedings of the International Conference on Communication and Signal Processing, ICCSP 2016, Melmaruvathur, India, 6–8 April 2016; pp. 462–466. [Google Scholar] [CrossRef]
  58. Sangwine, S.J.; Horne, R.E. The Colour Image Processing Handbook; Chapman & Hall: London, UK, 1998. [Google Scholar]
  59. Bansal, S.; Aggarwal, D. Color Image Segmentation using CIELab Color Space using Ant Colony Optimization. Int. J. Comput. Appl. 2011, 29, 28–34. [Google Scholar] [CrossRef]
  60. Durmus, D. CIELAB Color space boundaries under theoretical spectra and 99 test color samples. Color Res. Appl. 2020, 45, 796–802. [Google Scholar] [CrossRef]
  61. Kaur, A.; Kranthi, B.V. Comparison between YCbCr Color Space and CIELab Color Space for Skin Color Segmentation. Int. J. Appl. Inf. Syst. 2012, 3, 30–33. [Google Scholar]
  62. Baxes, G.A. Digital Image Processing: Principles and Applications; Wiley New York: New York, NY, USA, 1994. [Google Scholar]
  63. Gonzalez, R.C.; Woods, R.C. Processamento Digital de Imagens; Pearson Educación: London, UK, 2009. [Google Scholar]
  64. Kulkarni, N. Color thresholding method for image segmentation of natural images. Int. J. Image Graph. Signal Process. 2012, 4, 28–34. [Google Scholar] [CrossRef]
  65. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  66. Gonzalez, R.C.; Woods, R.E. Processamento de Imagens Digitais, 3rd ed.; Editora Blucher: São Paulo, Brazil, 2010. [Google Scholar]
  67. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume I, pp. 886–893. [Google Scholar] [CrossRef]
  68. Hu, M.K. Visual pattern recognition by moment invariants. IRE Trans. Inf. Theory 1962, 8, 179–187. [Google Scholar]
  69. Chu, H.; Zhang, D.; Shao, Y.; Chang, Z.; Guo, Y.; Zhang, N. Using HOG Descriptors and UAV for Crop Pest Monitoring; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2019; Volume 1, pp. 1516–1519. [Google Scholar] [CrossRef]
  70. Zhao, W.; Wang, J. Study of feature extraction based visual invariance and species identification of weed seeds. In Proceedings of the 2010 Sixth International Conference on Natural Computation, Yantai, China, 10–12 August 2010; Volume 2, pp. 631–635. [Google Scholar] [CrossRef]
  71. Bertolla, A.B.; Cruvinel, P.E. Dimensionality Reduction for CCD Sensor-Based Image to Control Fall Armyworm in Agriculture. In Proceedings of the 2024 ALLSENSORS 9th International Conference on Advances in Sensors, Actuators, Metering and Sensing, Barcelona, Spain, 26–30 May 2024; IARIA: Bucharest, Romania, 2024; pp. 7–12. [Google Scholar]
  72. Faceli, K.; Lorena, A.C.; Gama, J.; Carvalho, A.C.P.L.F. Inteligência Artificial: Uma Abordagem de Aprendizado de Máquina; LTC: Rio de Janeiro, Brazil, 2011. [Google Scholar]
  73. Mitchell, T.M. Machine Learning; McGraw-Hill: New York, NY, USA, 1997; p. 432. [Google Scholar]
  74. Haykin, S. Neural Networks: A Comprehensive Foundation; Prentice Hall PTR: Hoboken, NJ, USA, 1994. [Google Scholar]
  75. Rimal, K.; Shah, K.; Jha, A. Advanced multi-class deep learning convolution neural network approach for insect pest classification using TensorFlow. Int. J. Environ. Sci. Technol. 2023, 20, 4003–4016. [Google Scholar]
  76. Zekiwos, M.; Bruck, A. Deep Learning-Based Image Processing for Cotton Leaf Disease and Pest Diagnosis. J. Electr. Comput. Eng. 2021, 2021, 9981437. [Google Scholar]
  77. Burges, C.J.C. A Tutorial on support vector machines for pattern recognition. In Data Mining and Knowledge Discovery; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1998; Volume 2, pp. 121–167. [Google Scholar]
  78. Campbell, C. An introduction to kernel methods. In Studies in Fuzziness and Soft Computing; PHYSICA-VERLAG: Heidelberg, Germany, 2001; Chapter 7; pp. 155–192. [Google Scholar]
  79. Cortes, C.; Vapnik, V. Support-Vector Networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  80. Hearst, M.A.; Dumais, S.T.; Osuna, E.; Platt, J.; Scholkopf, B. Support vector machines. IEEE Intell. Syst. Appl. 1998, 13, 18–28. [Google Scholar]
  81. Lorena, A.C.; Carvalho, A.C.P.L.F.D. Uma introdução às support vector machines. RITA 2007, 14, 43–67. [Google Scholar]
  82. Ouamane, A.; Chouchane, A.; Himeur, Y.; Debilou, A.; Nadji, S.; Boubakeur, N.; Amira, A. Enhancing plant disease detection: A novel CNN-based approach with tensor subspace learning and HOWSVD-MDA. Neural Comput. Appl. 2024, 36, 22957–22981. [Google Scholar]
Figure 1. Fall armyworm (Spodoptera frugiperda) stages of growth [12].
Figure 2. Block diagram for the classification of dynamic patterns of fall armyworm (Spodoptera frugiperda) in maize crop.
Figure 3. Different stages of the fall armyworm (Spodoptera frugiperda) found on maize leaves (a,b) and cobs (c,d), i.e., instar 1, instar 2, instar 3, and instar 4.
Figure 4. Evaluation of the (a) Mean Squared Error (MSE) and (b) Peak Signal-to-Noise Ratio (PSNR) for the non-local mean filter validation.
Figure 5. Segmentation process using Otsu’s method. (a) Original image, (b) channel a*, (c) channel a* histogram, and (d) segmented image result.
Figure 6. Segmentation process using Otsu’s method. (a) Original image, (b) channel b*, (c) channel b* histogram, and (d) segmented image result.
Figure 7. Segmentation process using Otsu’s method. (a) Original image, (b) segmented image by channel a*, (c) segmented image by channel b*, and (d) segmented image result.
Figure 8. Segmentation process using Otsu’s method. (a) Original image, (b) channel a*, (c) channel a* histogram, and (d) segmented image result.
Figure 9. Segmentation process using Otsu’s method. (a) Original image, (b) channel b*, (c) channel b* histogram, and (d) segmented image result.
Figure 10. Segmentation process using Otsu’s method. (a) Original image, (b) segmented image by channel a*, (c) segmented image by channel b*, and (d) segmented image result.
Figure 11. Segmentation process using Otsu’s method. (a) Original image, (b) channel b*, (c) channel b* histogram, and (d) segmented image result.
Figure 12. Segmentation process using Otsu’s method. (a) Original image, (b) channel b*, (c) channel b* histogram, and (d) segmented image result.
Figure 13. HOG feature descriptor. (a) Segmented fall armyworm image; (b) HOG image.
Figure 14. Hu invariant moment results. (a) Fall armyworm, stage 1 (M1 = [2.581], M2 = [5.231], M3 = [9.430], M4 = [10.169], M5 = [20.381], M6 = [13.881], M7 = [20.003]). (b) Fall armyworm, stage 3 (M1 = [2.4329], M2 = [4.9603], M3 = [8.2253], M4 = [9.0384], M5 = [18.623], M6 = [11.739], M7 = [17.673]).
Figure 15. Feature vector example (Vector[HOG, Hu]).
Figure 16. Feature vector example (Vector[HOG, Hu]) obtained via PCA.
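To make the construction of the Vector[HOG, Hu] descriptor and its projection with PCA concrete, the sketch below builds the vector for synthetic binary masks; the log-scaling of the Hu moments and the number of retained principal components are our assumptions, not values taken from the paper.

```python
# Sketch: concatenate HOG (Table 5 parameters) and Hu invariant moments for a
# segmented binary mask, then reduce the feature matrix with PCA.
import numpy as np
from skimage.feature import hog
from skimage.measure import moments_central, moments_normalized, moments_hu
from sklearn.decomposition import PCA

def feature_vector(mask):
    hog_vec = hog(mask, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2), transform_sqrt=False,
                  feature_vector=True)
    hu = moments_hu(moments_normalized(moments_central(mask)))
    hu_log = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # common log scaling (assumption)
    return np.concatenate([hog_vec, hu_log])

# Synthetic rectangular masks stand in for segmented caterpillar images.
masks = []
for i in range(5):
    m = np.zeros((128, 128))
    m[30:60 + 10 * i, 20:80 + 5 * i] = 1.0
    masks.append(m)

features = np.stack([feature_vector(m) for m in masks])
reduced = PCA(n_components=3).fit_transform(features)      # component count is illustrative
print(features.shape, "->", reduced.shape)
```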
Figure 17. Confusion matrix (a) and ROC curve (b) of Classifier #1 with sigmoidal function kernel and dataset proportion of 50% for training and 50% for testing.
Figure 18. Confusion matrix (a) and ROC curve (b) of Classifier #1 with sigmoidal function kernel and dataset proportion of 70% for training and 30% for testing.
Figure 19. Confusion matrix (a) and ROC curve (b) of Classifier #1 with sigmoidal function kernel and dataset proportion of 80% for training and 20% for testing.
Figure 20. Confusion matrix (a) and ROC curve (b) of Classifier #2 with sigmoidal function kernel and dataset proportion of 50% for training and 50% for testing.
Figure 21. Confusion matrix (a) and ROC curve (b) of Classifier #2 with sigmoidal function kernel and dataset proportion of 70% for training and 30% for testing.
Figure 22. Confusion matrix (a) and ROC curve (b) of Classifier #2 with sigmoidal function kernel and dataset proportion of 80% for training and 20% for testing.
Figure 23. Confusion matrix (a) and ROC curve (b) of Classifier #3 with sigmoidal function kernel and dataset proportion of 50% for training and 50% for testing.
Figure 24. Confusion matrix (a) and ROC curve (b) of Classifier #3 with sigmoidal function kernel and dataset proportion of 70% for training and 30% for testing.
Figure 25. Confusion matrix (a) and ROC curve (b) of Classifier #3 with sigmoidal function kernel and dataset proportion of 80% for training and 20% for testing.
Figure 26. Confusion matrix (a) and ROC curve (b) of Classifier #4 with sigmoidal function kernel and dataset proportion of 50% for training and 50% for testing.
Figure 27. Confusion matrix (a) and ROC curve (b) of Classifier #4 with sigmoidal function kernel and dataset proportion of 70% for training and 30% for testing.
Figure 28. Confusion matrix (a) and ROC curve (b) of Classifier #4 with sigmoidal function kernel and dataset proportion of 80% for training and 20% for testing.
Figure 29. Confusion matrix (a) and ROC curve (b) of Classifier #5 with sigmoidal function kernel and dataset proportion of 50% for training and 50% for testing.
Figure 30. Confusion matrix (a) and ROC curve (b) of Classifier #5 with sigmoidal function kernel and dataset proportion of 70% for training and 30% for testing.
Figure 31. Confusion matrix (a) and ROC curve (b) of Classifier #5 with sigmoidal function kernel and dataset proportion of 80% for training and 20% for testing.
Figure 32. Confusion matrix (a) and ROC curve (b) of the A-CNN classifier with dataset proportions of 50% for training and 50% for testing.
Figure 33. Confusion matrix (a) and ROC curve (b) of the A-CNN classifier with dataset proportions of 70% for training and 30% for testing.
Figure 34. Confusion matrix (a) and ROC curve (b) of the A-CNN classifier with dataset proportions of 80% for training and 20% for testing.
Figure 35. Comparative time-consumption analysis between ML and CNN training and testing for FAW stage identification.
Table 1. Characteristics of caterpillars.
Type of Caterpillar | Specific Patterns | Color Patterns | Maximum Size (mm)
Spodoptera frugiperda | Inverted “Y” marking in the head area and four smaller dorsal spots in a trapeze arrangement on most segments and in a square arrangement on the last segment | Greenish to dark brown | 40
Helicoverpa armigera | Smooth saddle-shaped tubercles with hairs at the apex | Yellowish-white to reddish-brown or green | 40
Helicoverpa zea | Stripes of other colors arranged on the sides of the body | White and yellow, turning to brown | 35
Elasmopalpus lignosellus Zeller | Brown, purple, or dark-brown transverse streaks | Yellowish with red stripes; before hatching, it takes on a black coloration | 22
Table 2. Characteristics of images of fall armyworm (Spodoptera frugiperda) in the Insect Image dataset.
Type of archive | JPG/JPEG
Color space | RGB
Image width | 3072 pixels
Image height | 2048 pixels
Image resolution | 72 ppi (pixels per inch)
Pixel size | ≈0.35 mm
Table 3. Selected kernels to validate the developed method.
Kernel | Function K(x_i, x_j) | Parameters
Polynomial | (δ(x_i · x_j) + κ)^d | δ, κ, and d
Radial basis function (RBF) | exp(−σ‖x_i − x_j‖²) | σ
Sigmoidal | tanh(δ(x_i · x_j) + κ) | δ and κ
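For clarity, the three kernel functions of Table 3 can be written out as below (a sketch; in scikit-learn they correspond to kernel="poly", "rbf", and "sigmoid", with δ mapping to gamma and κ to coef0, which is our reading of the parameter names).

```python
# Explicit forms of the Table 3 kernels for two feature vectors xi and xj.
import numpy as np

def polynomial_kernel(xi, xj, delta, kappa, d):
    return (delta * np.dot(xi, xj) + kappa) ** d

def rbf_kernel(xi, xj, sigma):
    return np.exp(-sigma * np.linalg.norm(xi - xj) ** 2)

def sigmoidal_kernel(xi, xj, delta, kappa):
    return np.tanh(delta * np.dot(xi, xj) + kappa)

# Example evaluation on two random vectors.
xi, xj = np.random.rand(10), np.random.rand(10)
print(sigmoidal_kernel(xi, xj, delta=1.0, kappa=0.0))
```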
Table 4. Non-local mean filter parameters and values.
Parameter | Value
Kernel | 7 × 7
Patch width | 11
Patch height | 11
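A possible realization of this filtering step with scikit-image is sketched below; mapping the 7 × 7 kernel to patch_size=7 and the patch width/height of 11 to patch_distance=11 is our interpretation of Table 4, and the test image is a stand-in.

```python
# Non-local means denoising sketch (parameter mapping is an assumption).
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma

image = img_as_float(data.astronaut())                     # stand-in RGB image
sigma_est = float(np.mean(estimate_sigma(image, channel_axis=-1)))
denoised = denoise_nl_means(image, patch_size=7, patch_distance=11,
                            h=0.8 * sigma_est, fast_mode=True, channel_axis=-1)
print(denoised.shape)
```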
Table 5. HOG descriptor parameters.
Parameter | Value
Pixels per cell | 16 × 16
Cells per block | 2 × 2
Number of orientations | 9
Feature vector | TRUE
Transform SQRT | FALSE
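The HOG configuration of Table 5 translates directly to scikit-image as sketched below; the grayscale test image is a stand-in for a segmented caterpillar image.

```python
# HOG descriptor with the Table 5 parameters.
from skimage import data, color
from skimage.feature import hog

gray = color.rgb2gray(data.astronaut())                    # stand-in grayscale image
features, hog_image = hog(gray,
                          orientations=9,
                          pixels_per_cell=(16, 16),
                          cells_per_block=(2, 2),
                          transform_sqrt=False,
                          feature_vector=True,
                          visualize=True)
print(features.shape)
```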
Table 6. Fall armyworm stage classification with SVM linear function kernel and dataset proportions of 50% for training and 50% for testing.
SVM Classifier | Precision | Recall | F1 Score | Support Vectors | Accuracy
# 1 | 0.81 | 0.63 | 0.71 | 183 | 0.59
# 2 | 0.83 | 0.69 | 0.75 | 183 | 0.64
# 3 | 0.73 | 0.79 | 0.83 | 183 | 0.73
# 4 | 0.85 | 0.79 | 0.82 | 183 | 0.72
# 5 | 0.85 | 0.72 | 0.78 | 183 | 0.68
Table 7. Fall armyworm stage classification with SVM linear function kernel and dataset proportions of 70% for training and 30% for testing.
SVM Classifier | Precision | Recall | F1 Score | Support Vectors | Accuracy
# 1 | 0.74 | 0.68 | 0.71 | 106 | 0.56
# 2 | 0.78 | 0.65 | 0.71 | 106 | 0.57
# 3 | 0.82 | 0.79 | 0.80 | 106 | 0.70
# 4 | 0.81 | 0.78 | 0.80 | 106 | 0.69
# 5 | 0.86 | 0.75 | 0.80 | 106 | 0.71
Table 8. Fall armyworm stage classification with SVM linear function kernel and dataset proportions of 80% for training and 20% for testing.
SVM Classifier | Precision | Recall | F1 Score | Support Vectors | Accuracy
# 1 | 0.71 | 0.61 | 0.66 | 72 | 0.50
# 2 | 0.72 | 0.65 | 0.69 | 72 | 0.53
# 3 | 0.79 | 0.81 | 0.80 | 72 | 0.68
# 4 | 0.81 | 0.78 | 0.79 | 72 | 0.68
# 5 | 0.82 | 0.75 | 0.78 | 72 | 0.67
Table 9. Fall armyworm stage classification with SVM sigmoidal function kernel and dataset proportions of 50% for training and 50% for testing.
SVM Classifier | Precision | Recall | F1 Score | Support Vectors | Accuracy
# 1 | 0.80 | 0.87 | 0.83 | 183 | 0.72
# 2 | 0.80 | 0.81 | 0.81 | 183 | 0.68
# 3 | 0.80 | 1.00 | 0.89 | 183 | 0.80
# 4 | 0.81 | 0.99 | 0.89 | 183 | 0.80
# 5 | 0.80 | 0.85 | 0.82 | 183 | 0.71
Table 10. Fall armyworm stage classification with SVM sigmoidal function kernel and dataset proportions of 70% for training and 30% for testing.
SVM Classifier | Precision | Recall | F1 Score | Support Vectors | Accuracy
# 1 | 0.75 | 0.86 | 0.86 | 106 | 0.67
# 2 | 0.77 | 0.83 | 0.80 | 106 | 0.68
# 3 | 0.77 | 1.00 | 0.87 | 106 | 0.77
# 4 | 0.77 | 0.99 | 0.87 | 106 | 0.77
# 5 | 0.75 | 0.87 | 0.81 | 106 | 0.89
Table 11. Fall armyworm stage classification with SVM sigmoidal function kernel and dataset proportions of 80% for training and 20% for testing.
SVM Classifier | Precision | Recall | F1 Score | Support Vectors | Accuracy
# 1 | 0.77 | 0.88 | 0.82 | 72 | 0.70
# 2 | 0.80 | 0.86 | 0.81 | 72 | 0.69
# 3 | 0.78 | 1.00 | 0.88 | 72 | 0.78
# 4 | 0.78 | 0.97 | 0.86 | 72 | 0.76
# 5 | 0.77 | 0.86 | 0.81 | 72 | 0.68
Table 12. Selected SVM classifiers for fall armyworm (Spodoptera frugiperda) image pattern classification.
SVM Classifier | Kernel Function | Parameter C | Parameter Delta (δ)
# 1 | Sigmoidal | 10 | 1.0
# 2 | Sigmoidal | 100 | 1.0
# 3 | Sigmoidal | 0.1 | 0.1
# 4 | Sigmoidal | 0.1 | 10
# 5 | Sigmoidal | 1 | 1
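Instantiating the five Table 12 classifiers in scikit-learn could look like the sketch below; reading δ as the sigmoid kernel's gamma is our assumption, and coef0 is left at its default because κ is not listed in the table.

```python
# The five sigmoidal-kernel SVM configurations of Table 12 (delta read as gamma).
from sklearn.svm import SVC

TABLE_12 = {1: (10, 1.0), 2: (100, 1.0), 3: (0.1, 0.1), 4: (0.1, 10), 5: (1, 1)}
classifiers = {k: SVC(kernel="sigmoid", C=C, gamma=delta, probability=True)
               for k, (C, delta) in TABLE_12.items()}
print(classifiers[1])
```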
Table 13. Fall armyworm stage classification with the A-CNN and dataset proportions of 50% for training and 50% for testing.
Instar | Precision | Recall | F1 Score | Accuracy
# 1 | 0.84 | 0.83 | 0.83 | 0.90
# 2 | 0.96 | 0.69 | 0.80 | 0.90
# 3 | 0.80 | 1.00 | 0.89 | 0.90
# 4 | 0.95 | 0.95 | 0.95 | 0.90
# 5 | 1.00 | 1.00 | 1.00 | 0.90
Table 14. Fall armyworm stage classification with the A-CNN and dataset proportions of 70% for training and 30% for testing.
Instar | Precision | Recall | F1 Score | Accuracy
# 1 | 0.55 | 0.97 | 0.70 | 0.83
# 2 | 1.00 | 0.13 | 0.23 | 0.83
# 3 | 1.00 | 0.99 | 0.99 | 0.83
# 4 | 0.94 | 1.00 | 0.97 | 0.83
# 5 | 1.00 | 1.00 | 1.00 | 0.83
Table 15. Fall armyworm stage classification with the A-CNN and dataset proportions of 80% for training and 20% for testing.
Instar | Precision | Recall | F1 Score | Accuracy
# 1 | 0.88 | 0.94 | 0.91 | 0.84
# 2 | 1.00 | 0.80 | 0.89 | 0.84
# 3 | 0.60 | 1.00 | 0.85 | 0.84
# 4 | 0.98 | 0.51 | 0.67 | 0.84
# 5 | 1.00 | 0.97 | 0.98 | 0.84
Table 16. Comparison of fall armyworm classification with SVM and the A-CNN using dataset proportions of 50% for training and 50% for testing.
Instar | SVM Accuracy | CNN Accuracy | SVM Precision | CNN Precision
# 1 | 0.72 | 0.90 | 0.80 | 0.84
# 2 | 0.68 | 0.90 | 0.80 | 0.96
# 3 | 0.80 | 0.90 | 0.80 | 0.80
# 4 | 0.80 | 0.90 | 0.91 | 0.95
# 5 | 0.71 | 0.90 | 0.80 | 1.00
Table 17. Comparison of fall armyworm classification with SVM and the A-CNN using dataset proportions of 70% for training and 30% for testing.
Instar | SVM Accuracy | CNN Accuracy | SVM Precision | CNN Precision
# 1 | 0.67 | 0.83 | 0.75 | 0.55
# 2 | 0.68 | 0.83 | 0.77 | 1.00
# 3 | 0.77 | 0.83 | 0.77 | 1.00
# 4 | 0.77 | 0.83 | 0.77 | 0.94
# 5 | 0.89 | 0.83 | 0.75 | 1.00
Table 18. Comparison of fall armyworm classification with SVM and the A-CNN using dataset proportions of 80% for training and 20% for testing.
Instar | SVM Accuracy | CNN Accuracy | SVM Precision | CNN Precision
# 1 | 0.70 | 0.84 | 0.77 | 0.88
# 2 | 0.69 | 0.84 | 0.80 | 1.00
# 3 | 0.78 | 0.84 | 0.78 | 0.60
# 4 | 0.76 | 0.84 | 0.78 | 0.98
# 5 | 0.68 | 0.84 | 0.77 | 1.00
Table 19. A-CNN parametrization for fall armyworm (Spodoptera frugiperda) image pattern classification.
Instar | Hidden-Layer Activation Function | Output-Layer Activation Function | Epochs
# 1 | ReLU | Softmax | 10
# 2 | ReLU | Softmax | 10
# 3 | ReLU | Softmax | 10
# 4 | ReLU | Softmax | 10
# 5 | ReLU | Softmax | 10
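A minimal A-CNN sketch consistent with the Table 19 parametrization (ReLU hidden activations, a softmax output over the five instars, and 10 epochs) is given below; it adapts torchvision's AlexNet, and the data loader, optimizer, and learning rate are assumptions rather than the authors' settings.

```python
# AlexNet-based classifier for the five FAW instars (a sketch, not the original code).
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.alexnet(weights=None)            # AlexNet uses ReLU activations internally
model.classifier[6] = nn.Linear(4096, 5)        # five output classes (instars 1-5)
model.to(device)

criterion = nn.CrossEntropyLoss()               # applies log-softmax over the five outputs
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Training loop (train_loader is assumed to yield image batches and instar labels):
# for epoch in range(10):                       # 10 epochs, as in Table 19
#     for images, labels in train_loader:
#         images, labels = images.to(device), labels.to(device)
#         optimizer.zero_grad()
#         loss = criterion(model(images), labels)
#         loss.backward()
#         optimizer.step()
```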
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
