Article

Enhancement for Greenhouse Sustainability Using Tomato Disease Image Classification System Based on Intelligent Complex Controller

Taehyun Kim, Hansol Park, Jeonghyun Baek, Manjung Kim, Donghyeok Im, Hyoseong Park, Dongil Shin and Dongkyoo Shin

1 Department of Agriculture Engineering, National Institute of Agricultural Sciences, Wanju County 63240, Republic of Korea
2 Department of Computer Engineering, Sejong University, Seoul 05006, Republic of Korea
3 Department of Convergence Engineering for Intelligent Drones, Sejong University, Seoul 05006, Republic of Korea
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(23), 16220; https://doi.org/10.3390/su152316220
Submission received: 17 October 2023 / Revised: 20 November 2023 / Accepted: 21 November 2023 / Published: 22 November 2023
(This article belongs to the Special Issue Intelligent Agricultural Technologies and Corresponding Equipment)

Abstract:
Monitoring the occurrence of plant diseases and pests such as fungi, viruses, nematodes, and insects, and collecting environmental information such as temperature, humidity, and light levels, is crucial for sustainable greenhouse management. Controlling the environment through measures such as adjusting vents, shade nets, and screens is essential to achieve optimal growing conditions and ensure the sustainability of the greenhouse. In this paper, an artificial intelligence-based integrated environmental control system was developed to enhance greenhouse sustainability. The system automatically acquires images of crop diseases and augments the disease image information according to environmental data, utilizing deep-learning models for classification and feedback. Specifically, the data are augmented by measuring scattered light within the greenhouse, compensating for potential losses in the images due to variations in light intensity; this augmentation also addresses recognition issues stemming from data imbalance. The data are classified using the Faster R-CNN model, and the accuracy results are compared. This comparison enables feedback for accurate image loss correction based on reflectance, ultimately improving recognition rates. The empirical experimental results demonstrated 94% accuracy in classifying diseases, a high level of accuracy under real greenhouse conditions, indicating the potential utility of employing optimal pest control strategies for greenhouse management. In contrast to the predominant direction of existing research, which focuses on enhancing networks and optimizing loss functions through extensive training and resources, this study demonstrated performance improvements by analyzing image preprocessing and data augmented on the basis of environmental information. Such efforts direct attention toward quality improvement using information rather than massive data collection and training. This approach allows the optimal pest control timing and methods for different plant diseases and pests to be obtained with minimal resources, even in underdeveloped greenhouse environments, without the assistance of greenhouse experts. Implementing such a system reduces the labor required for greenhouse management, decreases pesticide usage, and improves productivity.

1. Introduction

Recent advancements in artificial intelligence models have enabled the integration of biological and environmental information within greenhouses. This has led to extensive research on disease diagnosis and crop management, aiming to enhance the efficiency of greenhouse operations and crop production. In greenhouse crop production, machine learning techniques are used for tasks such as disease detection and classification [1], analysis of crop phenotypes to identify optimal environmental conditions [2], and generation of environmental information metadata for cultivation status analysis [3]. These tasks pursue increased productivity and profit through real-time feedback and compensation mechanisms based on the interaction between the environment and crops. To automate operations that minimize crop damage caused by diseases, a diagnostic model is needed that can automatically acquire crop images and environmental information and perform classification. The CNN (Convolutional Neural Network) model has demonstrated excellent performance and is widely used for image classification tasks [4]. Just as humans classify crop diseases more accurately when they know where symptoms appear and under what environmental conditions, utilizing the location of disease symptoms and the associated environmental information can provide greater clarity and aid classification. For example, when a CNN analyzes an image, Global Average Pooling (GAP) computes the average value of each feature map. Because GAP averages spatial information, detection performance depends strongly on how clearly the boundaries of the symptoms are distinguished in the original (RAW) image [5]. To address this, a diverse set of original (RAW) images of the target disease symptoms is acquired for training from the outset, and data augmentation techniques expose the model to varied forms of the data, enhancing its detection and classification accuracy.
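To make the GAP step concrete, the following is a minimal sketch (assuming PyTorch; the tensor shapes and class count are illustrative, not taken from the paper's implementation):

```python
import torch
import torch.nn as nn

# Toy feature map: batch of 1, 512 channels, 14x14 spatial grid,
# roughly what a VGG-style backbone produces for a small input.
feature_map = torch.randn(1, 512, 14, 14)

# Global Average Pooling collapses each channel to its spatial mean;
# blurry or low-contrast symptom boundaries therefore dilute the
# pooled signal, which is the sensitivity noted above.
pooled = nn.AdaptiveAvgPool2d(1)(feature_map).flatten(1)  # shape (1, 512)

# A linear classifier over the pooled vector yields class scores.
scores = nn.Linear(512, 5)(pooled)  # 5 illustrative disease classes
print(scores.shape)                 # torch.Size([1, 5])
```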
In particular, recent classification studies aim to design and optimize network structures to enhance feature extraction capabilities. They conduct research under the assumption that the dataset’s quality remains consistent and immutable. Although these endeavors place greater emphasis on optimizing network structures, they overlook the impact of data quality improvement through preprocessing and augmentation techniques on the detection model.
However, empirical observations reveal significant variations in detection accuracy for the same disease, depending on whether the model is applied in the greenhouse environment in which it was trained or in a different one. The study in [6] illustrates that variations in data quality lead to differences in accuracy when classifying plant diseases and pests. The automation techniques in this paper follow a data-centric machine learning approach, seeking efficient methods to automatically generate suitable datasets and thereby enhance the performance of artificial intelligence models. Specifically, this research aims to establish an industrial foundation for collaborative and sustainable agriculture, incorporating pest control robots and improving analytical performance through the automation of data collection and preprocessing in smart agriculture. Inspired by these challenges, we designed and implemented a system that enhances the classification accuracy of crop disease images in real time, using an AI-based integrated environmental control system that integrates images acquired at regular intervals through a portable imaging device. The system transforms, augments, and provides feedback on crop images in real time, overcoming the differences between images caused by varying environmental conditions.
The structure of this paper is as follows: In Section 2, detection techniques of deep-learning models are discussed for disease data recognition and classification. Section 3 elaborates on the plant diseases and pests monitoring devices for securing experimental and validation data, as well as the description of experimental data collection and tomato disease diagnosis model utilizing the mentioned devices. Section 4 tests the performance of the disease classification system using the collected empirical data and describes the results. Finally, in Section 5, we summarize the achievements of this paper and provide insights into the expected outcomes and future research directions.

2. Related Work

In this study, the data classification technique for the five types of diseases affecting tomatoes (blight, powdery mildew, gray mold, leaf mold, and tomato yellow leaf curl virus) is divided into two modes: object detection mode and Region of Interest (ROI) mode. The object detection mode aims to detect both disease class and bounding box information from the presented images based on research in plant disease recognition [7,8,9,10,11]. In this scenario, the system can detect multiple categories corresponding to various diseases from the same sample image.
As depicted in Figure 1, the Control Class is not a detection target for the system. During training, however, it provides features and information about potential anomalies. The Target Class, on the other hand, comprises the classes that are part of the object detection objective. The key approach of this study is to progressively improve performance on the main (target) categories by training the model on the entire (control and target) training dataset [12].
Furthermore, an imbalance in data quantities (imbalanced data) generally has a detrimental effect on training [13]. Hence, when composing the dataset, data augmentation should be applied at a different rate to each class so that every class reaches approximately the amount of data held by the largest class. As shown in Table 1, machine learning has been utilized to detect anomalies in various types of datasets. Logistic Regression (LR) is primarily used for binary classification problems, classifying data as 'normal' or 'anomalous'. It models the relationship between the data and the outcome by feeding a dataset labeled in advance as normal or anomalous into the logistic function. If the output surpasses a specific threshold, the sample is classified as anomalous; otherwise, it is classified as normal [13]. However, LR assumes that the boundary separating the data is linear, so it suffers performance degradation when classifying multidimensional data such as images [14].
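As a minimal illustration of this thresholding scheme (scikit-learn, with synthetic data; the threshold value is an assumption, not one used in the cited studies):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic labeled data: 0 = normal, 1 = anomalous.
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)

model = LogisticRegression().fit(X, y)

# The logistic function outputs a probability; a sample is flagged
# as anomalous when the probability exceeds the chosen threshold.
threshold = 0.5
is_anomalous = model.predict_proba(X)[:, 1] > threshold
```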
Random Forest (RF) is widely used for classification and regression problems, where it independently learns multiple decision trees and combines their results to make the final predictions [15,16]. Although it demonstrates high accuracy for various data types and mitigates overfitting, it tends to have slower prediction speeds compared to other models when trained on very large datasets due to the extended training time [17].
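A corresponding sketch of the ensemble-of-trees idea (again scikit-learn with synthetic data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.random((200, 4))
y = (X[:, 0] * X[:, 1] > 0.4).astype(int)

# 100 decision trees trained independently on bootstrap samples;
# the final class combines the per-tree predictions.
rf = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X, y)
pred = rf.predict(X)
```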
Support Vector Machine (SVM) is a supervised learning algorithm used for data classification. It employs a kernel function to map the data into a higher-dimensional space, specifying the optimal location of the decision boundary that separates ‘normal’ and ‘anomalous’ data [18]. SVM exhibits excellent performance for both linear and non-linear data, preventing overfitting and enhancing generalization performance. However, it tends to be time-consuming for large datasets and encounters challenges in multi-class classification [18,19,20].
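A minimal SVM sketch showing the kernel-based boundary (synthetic data; hyperparameters are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic two-class data standing in for normal/anomalous samples.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# The RBF kernel implicitly maps samples into a higher-dimensional
# space where the maximum-margin decision boundary is located.
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
signed_dist = svm.decision_function(X)  # signed distance to the boundary
```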
Variational AutoEncoder (VAE) is a type of generative model used for detecting anomalies in image data and generating data. It learns the probability distribution of the data to generate new data. In the encoder part, it takes image data from the UCSD dataset as input and maps it to the probability distribution of the latent space. Mean and variance are learned in this process, and data are generated by sampling from the latent space. In the decoder, the generation function is restored, resulting in data being generated in a form similar to the input data [21]. However, VAE assumes a simple parametric distribution in the latent space and may struggle to model highly complex, multidimensional data distributions [22,23].
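The encoder/decoder structure described above can be sketched as follows (PyTorch, flattened 28x28 inputs as in MNIST-style data; layer sizes are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim=784, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)       # latent mean
        self.logvar = nn.Linear(256, latent_dim)   # latent log-variance
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Sample from the learned latent distribution (reparameterization).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction term + KL divergence to the unit Gaussian prior.
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

x = torch.rand(8, 784)                 # stand-in image batch in [0, 1]
recon, mu, logvar = VAE()(x)
loss = vae_loss(x, recon, mu, logvar)
```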
Generative Adversarial Networks (GANs) are models capable of generating data highly similar to the input data, even though the generated samples do not actually exist. The discriminator learns the distribution of the input data, and once this understanding is achieved, the generative model creates data closely matching that distribution [24]. However, the generative model often struggles to create diverse data and encounters the mode collapse problem, producing only similar samples rather than a variety of data [25,26].
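A compact sketch of the adversarial training loop (PyTorch; network sizes are arbitrary). The generator step at the bottom is where mode collapse can arise:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                  nn.Linear(256, 784), nn.Sigmoid())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())
bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(32, 784)  # stand-in for a real image batch

# Discriminator step: learn to tell real samples from generated ones.
fake = G(torch.randn(32, 64)).detach()
loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator. If G settles on a narrow set
# of outputs that reliably fool D, it produces only similar samples
# (mode collapse) instead of covering the full data distribution.
fake = G(torch.randn(32, 64))
loss_g = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```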
Faster Regions with Convolutional Neural Networks (Faster R-CNN) is a deep-learning-based model known for accurately and swiftly performing object localization and classification. Faster R-CNN sequentially trains a Region Proposal Network (RPN) and Region of Interest (RoI) stages. The RPN is trained on the images and the corresponding objects in the training dataset; it generates candidate regions using anchor boxes, classifies each candidate region as object or background, and refines the object's position. RoIs extracted by the RPN are transformed into fixed-size feature maps through RoI Pooling, after which fully connected layers classify normal and anomalous data [27]. However, when data imbalance occurs, training becomes difficult and the model learns a bias, resulting in lower accuracy in detecting anomalous data.
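For orientation, the detection pipeline can be exercised with a pre-built model (recent torchvision API; note this stock model uses a ResNet-50 FPN backbone, whereas the model in this paper uses VGG-16):

```python
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 600, 800)      # stand-in for an RGB crop image
with torch.no_grad():
    # Internally: backbone features -> RPN proposals -> RoI pooling ->
    # fully connected heads for classification and box regression.
    out = model([image])[0]
print(out["boxes"].shape, out["labels"].shape, out["scores"].shape)
```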
Studies on anomaly detection in various types of datasets are listed in Table 1.
Table 1. Studies on Anomaly Detection in Various Types of Datasets.

| Detection Technique | Study | Dataset | Performance |
|---|---|---|---|
| LR | Wright, R.E., et al. [13]; Wang, X., et al. [14] | Not defined | Not defined |
| RF | Breiman, L., et al. [15]; Kim, K., et al. [16]; Park, H., et al. [17] | BGP [6,16]; Sensor [17] | Acc: 0.9959 [16]; Acc: 0.99 [17] |
| SVM | Noble, W.S., et al. [18]; Wei, D., et al. [19]; García, S., et al. [20] | UCI [19]; CUT-13 [20] | Acc: 0.9937 [19]; Acc: 0.5 [20] |
| VAE | An, J., et al. [21]; Ghosh, P., et al. [22]; Xu, J., et al. [23] | UCSD [21]; MNIST [22,28]; CIFAR-10 [22,29]; CELEBA [22,30]; PTB [23,31] | Acc: 0.99 [22] |
| GAN | Goodfellow, I.J., et al. [24]; Park, S.W., et al. [25]; Pei, S., et al. [26] | MNIST [24,28]; KDD 99 [24]; CIFAR-10 [26,29] | Acc: 0.9975 [26] |
| Faster R-CNN | Benjdira, B., et al. [27] | Not defined | Not defined |
This paper not only emphasizes the data-centric approaches discussed previously but also describes devices operating as part of various useful solutions, such as real-time monitoring and production system support, farm management systems, farm monitoring systems, geographic information systems, and decision support systems for weather conditions. As discussed in Latino's research [28], drones, robots, and UAVs can be used not only for data collection but also for automating activities such as material management and the associated cost savings; they are also employed in discovering crop diseases and evaluating food quality through image recognition. Farmers can achieve more efficient production and improved environmental monitoring through digital technology, big data, and analytical applications. Radogna's research [29] developed low-cost embedded devices to automatically detect food contamination, using Molecularly Imprinted Polymer (MIP) detection technology to continuously monitor the environment and identify problems at the pesticide treatment stage. Although the targets and detection methods differ from those covered in this paper, that study likewise addresses technology for early response through monitoring, reducing costs and preventing damage, and contributes to sustainable agriculture by increasing production efficiency. The analytical technology covered in this paper is built into a system that supports manual shooting and input using general cameras, enabling the use of artificial intelligence models at low cost in conjunction with robot automation. According to Ghobakhloo's research [30] and Ejsmont's research [31], the introduction of Fourth Industrial Revolution technologies for efficiency and sustainability, including the artificial intelligence technology discussed in this paper, is predicted to become possible in all fields in the future.

3. Design and Implementation of Tomato Disease Classification Using Real-Time Augmented Data

The two main methods proposed in this paper are an automation technology for plant disease and pest monitoring and a system of preprocessing and augmentation transformations, combined with that automation technology, for improving the analysis performance on plant disease and pest data, as depicted in Figure 2. The integrated platform for plant disease and pest diagnosis discussed in this study has the following structure. In a greenhouse with a favorable image-capturing environment, image-based analysis is performed in real time in normal mode. If data suspected to be disease symptoms are persistently detected at specific locations (not identified as normal leaves) in normal mode, augmented data are generated in conjunction with environmental information, and analysis is conducted at the locations showing abnormal signs.
The process is demonstrated wherein analysis is conducted when abnormal signs are detected, even for diseases not well known to the user, as depicted in Figure 3.
The disease image collection system used in this study utilized a crop image acquisition device developed by the Rural Development Administration’s Smart Farm Development Division. The basic structure is depicted in Figure 4. The image acquisition device, as shown in Figure 5, consists of a robot arm-mounted PTZ-supported RGB camera used for disease recognition in crops, an adjustable lift, a light measurement sensor, temperature and humidity sensors, RTK-GPS for autonomous movement within the greenhouse, a linear motor, and a line-scan barcode scanner integrated into a mobile platform.
The basic deep-learning architecture used in this study is the Faster R-CNN structure with a VGG-16 feature extractor, as illustrated in Figure 6. This Faster R-CNN consists of a CNN backbone, an RoI Pooling layer, and a fully connected layer with two branches for classification and bounding box regression. The RPN operates on the feature map output by the backbone convolutional neural network: for every point on the feature map, the network learns whether an object is present at the corresponding location in the input image and estimates its size. The bounding box proposals of the Region Proposal Network (RPN) are then used by the RoI (Region of Interest) pooling layer to pool features from the backbone feature map. The RoI Pooling layer operates as follows (a minimal sketch is given after the list):
(a) selecting the regions corresponding to the proposals on the backbone feature map;
(b) dividing these selected regions into a fixed number of sub-windows;
(c) performing max pooling over the sub-windows to achieve a fixed-size output.
The currently implemented model can detect various tomato diseases, including blight, leaf mold, gray mold, white powdery mildew, and yellow leaf curl virus [32].
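The following sketch illustrates steps (a)-(c) using torchvision's roi_pool (tensor sizes and the 7x7 grid are illustrative assumptions):

```python
import torch
from torchvision.ops import roi_pool

# Backbone feature map: batch 1, 512 channels, 50x50 spatial grid,
# assumed to come from an 800x800 input image (stride 16).
features = torch.randn(1, 512, 50, 50)

# Two RPN proposals as (batch_index, x1, y1, x2, y2) in image coordinates.
rois = torch.tensor([[0., 0., 0., 320., 320.],
                     [0., 160., 160., 640., 640.]])

# (a) select each proposal's region on the feature map, (b) divide it
# into a fixed 7x7 grid of sub-windows, (c) max-pool each sub-window,
# yielding a fixed-size output regardless of the proposal's size.
pooled = roi_pool(features, rois, output_size=(7, 7), spatial_scale=50 / 800)
print(pooled.shape)  # torch.Size([2, 512, 7, 7])
```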
The Faster R-CNN possesses analyzable features independent of camera types and image sizes, enhancing object recognition performance by extracting regions where objects are likely to be through region proposal. The proposed fully convolutional network has a structure, as shown in Figure 7.
The training parameters used are summarized below (a configuration sketch follows the list):
  • Approximately 100,000 iterations over 50 h
  • Utilization of the VGG16 network architecture
  • Cross-validation using 80% of the data for training, 10% for testing, and 10% for validation (excluding 308 unseen data used in the final experiment from the set)
  • Fine-tuning a pre-trained model with the ImageNet dataset
  • Implementation of Data Augmentation
  • Application of Batch Normalization
  • Use of ReLU as the activation function
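A hedged configuration sketch matching the list above (PyTorch/torchvision; the dataset object is a stand-in, and the optimizer settings are assumptions not stated in the paper):

```python
import torch
import torchvision
from torch.utils.data import random_split
from torchvision import transforms

# Stand-in dataset; in practice this wraps the labeled disease images.
dataset = torchvision.datasets.FakeData(size=1000,
                                        transform=transforms.ToTensor())

# 80/10/10 train/test/validation split, as listed above.
n = len(dataset)
n_train, n_test = int(0.8 * n), int(0.1 * n)
train_set, test_set, val_set = random_split(
    dataset, [n_train, n_test, n - n_train - n_test])

# VGG-16 pre-trained on ImageNet (batch-normalization variant; ReLU
# activations are built in), final layer replaced for 5 disease classes.
vgg = torchvision.models.vgg16_bn(weights="IMAGENET1K_V1")
vgg.classifier[6] = torch.nn.Linear(4096, 5)
optimizer = torch.optim.SGD(vgg.parameters(), lr=1e-3, momentum=0.9)
```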
In this study, we set Regions of Interest (ROI) and performed object detection within disease images through the process shown in Figure 8. A significant consideration in this paper was how to preserve detection performance as farms and environments change. To train a deep-learning model robust to diverse environments, the user selects areas of interest, focusing on image regions less affected by environmental changes. These selected areas were designated as unknown regions, and an ensemble technique was applied through iterative processes. Consequently, for the five disease classes, we achieved a classification accuracy of at least 88% per class and an average of 94%.
In this study, experiments on object detection mode were conducted as follows. Initially, the bounding boxes and labels of the existing dataset were modified and used as a baseline dataset. The entire model was trained using this dataset, and the performance was evaluated. Figure 9 illustrates the detection results of the baseline dataset.
The performance of a learning model is directly associated with high-quality datasets in both qualitative and quantitative terms. However, given the characteristics of disease targets, it is challenging to collect a large amount of real data from varied environments that captures the changes within and between classes. During experiments in a real environment, the system therefore typically encounters data it has not been trained on, since it is exposed to unseen data in real-world applications, and constructing a dataset that covers all possible scenarios is impractical. To enable the system to adapt, the learning model needs to be trained on new information. Furthermore, to address situations where the system encounters new diseases or patterns it has not learned, techniques such as generating augmented data with methods like CycleGAN, or minimizing image distortion through physical means (reflectors) or software-based optical reflectance corrections, should be employed. This is crucial for handling suspected regions representing new (unknown) diseases in real-world scenarios.
The strategies for handling these diseases are as follows:
(1) First, create new classes to handle the information on the novel diseases.
(2) Separate classes for the background area around the crops, physically undamaged healthy leaves, and cases showing distinct forms based on the specific disease.
(3) Design units of learning feature responses before the final classifier. This approach helps prevent misclassification by the system until it obtains responses for the new diseases.
In this paper, we propose a system that collects disease data along with greenhouse information during disease data collection. Based on the greenhouse information, the disease data in the images are transformed and augmented to aid in classification. The proposed design is a system that utilizes a standard-based integrated environmental control system with an AI model integrated with a disease image automatic acquisition device. This system acquires and analyzes images and environmental data in real time by being connected to the disease image automatic acquisition device. To achieve this, we constructed a pilot device incorporating a Faster R-CNN-based disease image classifier, crop image acquisition device, and JETSON NX Board, including sensor nodes and an AI-based integrated environmental control system. Furthermore, in this study, we conducted design modifications to the entity-relationship modeling of the Smart Farm system’s DB for integrating a cloud-based Smart Farm system for preprocessing images and an integrated DB for disease diagnosis services. This modification was aimed at enabling seamless integration of Smart Farm system data, preprocessing data for disease diagnosis, and new disease classes for future Smart Farm system data and disease preprocessing data, allowing for an organic service configuration. The integration of disease diagnosis-related information into the database involved modifications and development of entity-relationship modeling, focusing on information closely related to disease occurrence, such as cultivation, environmental, management, and facility-related data in the Smart Farm. The entity-relationship diagram (ERD) for storing disease diagnosis information and results was designed to consider farm facilities and the environment, as shown in Figure 10.
To recognize disease imaging devices, we devised a system that integrates current artificial intelligence models into a standards-based complex environmental control system. This system processes greenhouse environmental information and disease image data, enhancing the detection of previously undetected data and the accuracy of disease diagnosis classification through data transformation, augmentation, and feedback. The complex environmental control system considered in this research adheres to the KS X 3267 [33] and TTAK.KO-10.1172 [34] standards-based interface for device compatibility. It employs a Plug and Play (PnP) approach to recognize imaging devices and integrates a CNN model implemented in Python, enabling environmental data collection and image analysis. Additionally, the system is based on open-source technologies and incorporates a 4-channel relay module and sensor nodes within the Arduino environment. For image processing, a classification system was established to handle unknown new diseases: images falling under the unknown classification, i.e., unclassified image data, were augmented to create an enhanced dataset.
As depicted in Figure 11, the detection and classification stages of this study can be described as follows [35]:
- This study focuses on recognizing 5 diseases (baseline dataset).
- The deep-learning model is designed to recognize these 5 diseases. However, during testing, if the input image does not match the features of the developed model, it is recognized as "unknown".
- When data are identified as "unknown" by the model, the system can adapt to new diseases with the support of domain experts. Subsequently, more data corresponding to the new disease needs to be collected.
- To build an expanded dataset, new classes can be added to the baseline dataset, and the deep-learning model is trained using this expanded dataset.
- This process involves fine-tuning the hyperparameters using the existing deep-learning model to learn the parameters.
- Whenever new unknown data are inputted, the above procedure is repeated to extend the baseline model. The system recognizes diseases from the baseline dataset, and new diseases are incrementally added (a minimal sketch of this class-extension step follows the list).
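The class-extension step referenced above can be sketched as follows (PyTorch; `extend_classifier` is a hypothetical helper, assuming a VGG-16-style head as used in this paper):

```python
import torch
import torch.nn as nn

def extend_classifier(model: nn.Module, num_new: int) -> nn.Module:
    """Hypothetical helper: widen the final classification layer with
    units for newly confirmed diseases, keeping existing weights."""
    old = model.classifier[6]             # VGG-16 final linear layer
    new = nn.Linear(old.in_features, old.out_features + num_new)
    with torch.no_grad():
        new.weight[:old.out_features] = old.weight  # keep old classes
        new.bias[:old.out_features] = old.bias
    model.classifier[6] = new
    return model  # fine-tune on the expanded dataset afterwards
```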
The light data for correcting the images acquired with the image analysis device shown in Figure 12 were measured as solar radiation (W/m²), and a solar radiation estimation formula was used to analyze the lighting conditions according to the weather. The solar radiation incident on the surface can be derived from the extraterrestrial solar radiation, denoted I0. In the following equations, Equation (1) expresses I0, the solar radiation before passing through the atmosphere, as a function of latitude, solar declination, and hour angle. Equation (2) defines the clearness coefficient KT, related to cloud cover, as the ratio of the solar radiation reaching the surface, I, to the extraterrestrial radiation I0 [32].
$$I_0 = \frac{12 \times 3600}{\pi} G_{sc} \left( 1 + 0.033 \cos\frac{360 n}{365} \right) \left[ \cos\phi \cos\sigma \left( \sin\omega_2 - \sin\omega_1 \right) + \frac{\pi \left( \omega_2 - \omega_1 \right)}{180} \sin\phi \sin\sigma \right] \tag{1}$$

- $\phi$: latitude of the area (degrees)
- $\sigma$: solar declination, $\sigma = 23.45 \sin\left( 360 \times \frac{284 + n}{365} \right)$
- $n$: Julian date
- $\omega_1, \omega_2$: hour angles (15°/h)
- $G_{sc}$: solar constant, 1367 W/m²
- $I_0$: insolation before passing through the atmosphere (MJ/m²)

Equation (1) estimates the insolation before passing through the atmosphere as a function of latitude, solar declination, and hour angle.
$$K_T = \frac{I}{I_0} \tag{2}$$

$K_T$ expresses the clearness of the sky as the ratio of $I$ to $I_0$.

$$\frac{I_d}{I} = \begin{cases} 1.0 - 0.09\,K_T & 0 < K_T \le 0.22 \\ 0.9511 - 0.1604\,K_T + 4.388\,K_T^2 - 16.638\,K_T^3 + 12.336\,K_T^4 & 0.22 < K_T \le 0.8 \\ 0.165 & K_T > 0.8 \end{cases} \tag{3}$$

- $I$: measured insolation reaching the surface in the area (MJ/m²), measured by means of a sensor
- $I_d$: sky (diffuse) insolation (MJ/m²)
- $K_T$: clearness
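Equations (1)-(3) can be evaluated directly; the following sketch (plain Python, units noted in comments) is illustrative rather than the paper's implementation:

```python
import math

GSC = 1367.0  # solar constant, W/m^2

def extraterrestrial_insolation(lat_deg, n, w1_deg, w2_deg):
    """Equation (1): extraterrestrial insolation I0 over the hour-angle
    interval [w1, w2], in J/m^2 (divide by 1e6 for MJ/m^2)."""
    sigma = 23.45 * math.sin(math.radians(360 * (284 + n) / 365))
    phi, sig = math.radians(lat_deg), math.radians(sigma)
    w1, w2 = math.radians(w1_deg), math.radians(w2_deg)
    # The pi*(w2 - w1)/180 term in degrees equals (w2 - w1) in radians.
    return (12 * 3600 / math.pi) * GSC \
        * (1 + 0.033 * math.cos(math.radians(360 * n / 365))) \
        * (math.cos(phi) * math.cos(sig) * (math.sin(w2) - math.sin(w1))
           + (w2 - w1) * math.sin(phi) * math.sin(sig))

def diffuse_fraction(i_measured, i0):
    """Equations (2)-(3): clearness index KT and diffuse fraction Id/I."""
    kt = i_measured / i0
    if kt <= 0.22:
        return kt, 1.0 - 0.09 * kt
    if kt <= 0.8:
        return kt, (0.9511 - 0.1604 * kt + 4.388 * kt**2
                    - 16.638 * kt**3 + 12.336 * kt**4)
    return kt, 0.165
```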
Figure 12. Image Analysis Device.
The image data are initially analyzed based on the RAW format. If detection does not occur within 10 s or if the similarity of the suspected region in the feedback of detection results is below 50%, adjustments to the image brightness are made in three stages (clear sky, partly cloudy, overcast) based on the solar radiation estimation formula according to the light conditions. Additionally, up to eight augmented data per image, including 90, 180, and 270-degree rotations, as well as vertical and horizontal flips, are utilized. This allows for verification of whether there is a 10% or more improvement in similarity based on the detection or feedback data [36].
The light intensity measurements collected in the greenhouse, the ratio of direct to scattered light, and the light saturation point for each season and crop were combined with the insolation estimates obtained from the formulas above, as shown in Table 2. In addition, we attempted to improve the recognition rate by recognizing environments in which disease occurs easily and deciding whether and how to apply image augmentation accordingly.
The light environment is classified into three stages (clear, intermediate, and overcast sky) based on the solar radiation estimation formula and environmental information. If no detection occurs within 10 s of analysis on the RAW data, or if the similarity in the detection-result feedback is below 50%, data augmentation is triggered using the light environment information and the disease occurrence probability. Augmentation generates up to nine variants per image, such as brightness adjustment, 90, 180, and 270-degree rotations, and vertical/horizontal flips, as shown in Figure 13. The system was designed to verify whether detection with the augmented data improves similarity by 10% or more based on the feedback data.
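A sketch of this augmentation step (Pillow; the KT thresholds and brightness factors are illustrative assumptions, not the paper's calibrated values):

```python
from PIL import Image, ImageEnhance

def brightness_factor(kt):
    # Sky condition from the clearness index KT (illustrative cutoffs).
    if kt > 0.65:
        return 1.0   # clear sky: leave brightness unchanged
    if kt > 0.30:
        return 1.2   # intermediate sky
    return 1.4       # overcast sky: brighten to compensate

def augment(img, kt):
    """Brightness-adjusted original plus rotations and flips, a subset
    of the up-to-nine variants described above."""
    base = ImageEnhance.Brightness(img).enhance(brightness_factor(kt))
    variants = [base]
    variants += [base.rotate(a, expand=True) for a in (90, 180, 270)]
    variants += [base.transpose(Image.Transpose.FLIP_TOP_BOTTOM),
                 base.transpose(Image.Transpose.FLIP_LEFT_RIGHT)]
    return variants
```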
As shown in Figure 14, the AI-based integrated environmental control system automatically adjusts the environment (temperature, humidity, moisture, light, etc.) according to the optimal environmental settings for crop cultivation after verifying disease diagnosis.

4. Experiment

4.1. Collecting Tomato Disease Image Data

In this study, we collected over 3000 expert-verified tomato disease images for AI training. The training process used a total of over 8000 RAW images, including approximately 5000 images from previous disease diagnosis research and 3000 new images. This extensive dataset enabled the advancement and refinement of the AI model for disease classification and diagnosis. An additional 300 unlabeled images were collected and validated separately for verification. Figure 15 and Table 3 show the tomato blight images collected and provided by the National Institute of Horticultural Specialty Sciences.

4.2. Validation of Tomato Blight Image Classification System Using Field Data

Collecting Validation Data

To demonstrate the trained AI disease diagnosis inference engine, a new dataset was constructed with the goal of verifying the system's performance in new environments and conditions. Tests were conducted using data obtained from seven new farms to validate the learning model in an empirical environment. The number of collected data is shown in Table 4.
Only four classes of images (yellow curl, leaf mold, canker, and powdery mildew) were available for the field test. In the case of canker, we used part of the unlabeled dataset that had not been used for the original training, because the disease is rare in general farmers' fields and infected plants are usually removed immediately after an outbreak, making it difficult to verify in the field.

4.3. Performing Validation Tests and Results

As mentioned above, to verify robustness to changed environments through empirical testing, we conducted experiments using data from locations other than the original collection sites or data not used for training. Figure 16 shows an example of the experimental results on empirical data.
The empirical test achieved a satisfactory accuracy of 95.2%, exceeding the 92.5% obtained when discriminating data before preprocessing. The existing Keras R-CNN model used for comparison barely exceeded 90% classification accuracy, as shown in Figure 17, and the improved model using augmented data reached 92.5%. However, the model with transformation, augmentation, and parameters adjusted using solar radiation and greenhouse environment data showed an average classification accuracy of 95.2%. This was determined using image data collected in an ordinary greenhouse rather than at the laboratory level; considering that it is classification accuracy in an empirical environment, it can be considered very high. The confusion matrix of the final empirical test is shown in Figure 18.
Among the parameters identified in this study, the factor with the most significant impact on discriminative ability was the variation in illuminance values with the shooting direction and solar incidence angle during acquisition of tomato plant disease and pest images. In addition, we observed cases where analysis results persisted for tomato images without plant diseases and pests, and cases where images were detected as categories beyond the classification classes. To eliminate such errors, outliers in color codes were removed during image analysis, effectively excluding areas unrelated to plant diseases and pests and rectifying errors and missing data. Regarding illuminance, color codes were referenced during shooting, and a standard value was calculated to exclude backlighting, considering seasonal and temporal factors as well as diffused light based on the greenhouse material. Only images with values within the normal discriminant range were used, and images belonging to classes beyond the analysis category were automatically re-captured. To address misdiagnoses in which the background of an acquired image was classified as powdery mildew, the analysis model was adjusted to exclude the related color codes, preventing classification into that class. The meta-architecture used in this study, VGG-16, while not markedly superior in feature extraction to deeper networks such as ResNet, demonstrated satisfactory performance for real-time analysis with augmented transformations, proving suitable for fast multi-detection.

5. Conclusions

This study raised the disease diagnosis accuracy to 92.5% through gradual (incremental) learning on the existing disease diagnosis engine. In addition, the preprocessing technology that combines external light information with environmental prediction information for disease occurrence in the greenhouse achieved 95.2% accuracy in the demonstration stage on disease symptoms occurring in an actual greenhouse environment. In other words, crop disease diagnosis technology that had remained at the laboratory level was shown to discriminate with high accuracy in the field through real-time preprocessing. When the surrounding environment sensitively affects identification, as in greenhouse disease diagnosis, there is a limit to how much gradual learning alone can increase identification accuracy; moreover, learning a specific class may affect the existing disease diagnosis engine and reduce identification accuracy.
In addition, this paper configured a complex environment control system, including a reference sensor node, that can acquire images using a mobile imaging device and process image and environmental information jointly. An artificial intelligence classification model within the control system classifies and feeds back image data augmented according to environmental changes in real time, preventing diseases from going unclassified due to image loss caused by the light environment. By utilizing a standards-based complex environment control system equipped with an artificial intelligence model linked to an automatic image acquisition device, the system acquires and analyzes images and environmental data in real time, which will contribute to improving the sustainability of the greenhouse.
From a scientific perspective, examining the plant disease diagnosis service reveals that by applying artificial intelligence in agricultural fields instead of human cognition, time and costs can be reduced. Additionally, the future integration of agricultural robot technology could lead to the development of intelligent robots that autonomously recognize crop diseases and perform pest control.
In terms of pure technological contributions, overcoming the limitations of existing research that require substantial resources and data was achieved through performance improvement via preprocessing combined with environmental information.
From a societal standpoint, farmers with limited farming experience or those venturing into new crops due to climate change can make swift decisions for greenhouse pest management, enabling stable smart farming.
In future research, we would like to build a system that can analyze environmental information such as temperature and humidity, as well as literature information on signs of disease so that disease classification and prevention can be performed.

Author Contributions

Conceptualization, T.K. and D.S. (Dongkyoo Shin); funding acquisition, T.K.; methodology, T.K., J.B., M.K. and D.I.; design of machine learning algorithm, T.K., H.P. (Hansol Park) and D.S. (Dongkyoo Shin); supervision, D.S. (Dongkyoo Shin); validation, H.P. (Hyoseong Park) and D.S. (Dongil Shin); writing—original draft preparation, T.K. and H.P. (Hansol Park); writing—review and editing, D.S. (Dongkyoo Shin). All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture and Forestry (IPET) and Korea Smart Farm R&D Foundation (KosFarm) through Smart Farm Innovation Technology Development Program, funded by Ministry of Agriculture, Food and Rural Affairs (MAFRA) and Ministry of Science and ICT (MSIT), Rural Development Administration (RDA) (grant number: 421005-04(=PJ016443)).

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fuentes, A.F.; Yoon, S.; Lee, J.; Park, D.S. High-Performance Deep Neural Network-Based Tomato Plant Diseases and Pests Diagnosis System with Refinement Filter Bank. Front. Plant Sci. 2018, 9, 1162. [Google Scholar] [CrossRef] [PubMed]
  2. Fiorani, F.; Schurr, U. Future Scenarios for Plant Phenotyping. Annu. Rev. Plant Biol. 2013, 64, 267–291. [Google Scholar] [CrossRef] [PubMed]
  3. Suarez, P.L.; Angel, D.S.; Boris, X.V. Leaning image vegetation index through conditional generative adversarial network. In Proceedings of the 2017 IEEE Second Ecuador Technical Chapters Meeting (ETCM), Salinas, Ecuador, 16–20 October 2017; pp. 1–6. [Google Scholar]
  4. Szegedy, C. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. arXiv 2016, arXiv:1602.07261v2. [Google Scholar] [CrossRef]
  5. Lin, M.; Chen, Q.; Yan, S. Network In Network. arXiv 2013, arXiv:1312.4400. [Google Scholar]
  6. Li, Y.; Chao, X. Toward sustainability: Trade-off between data quality and quantity in crop pest recognition. Front. Plant Sci. 2021, 12, 811241. [Google Scholar] [CrossRef]
  7. Xu, Y.; Wu, L.; Xie, Z.; Chen, Z. Building Extraction in Very High Resolution Remote Sensing Imagery Using Deep Learning and Guided Filters. Remote Sens. 2018, 10, 144. [Google Scholar] [CrossRef]
  8. Fuentes, A.; Yoon, S.; Kim, S.C.; Park, D.S. A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition. Sensors 2017, 17, 2022. [Google Scholar] [CrossRef]
  9. Zhao, Z.; Zheng, P.; Xu, S.; Wu, X. Object Detection with Deep Learning: A Review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef]
  10. Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
  11. Heuvelink, E. Tomatoes; CABI: Wallingford, UK, 2018; Volume 27. [Google Scholar]
  12. Fuentes, A.F.; Yoon, S.; Park, D.S. Deep learning-based phenotyping system with glocal description of plant anomalies and symptoms. Front. Plant Sci. 2019, 10, 1321. [Google Scholar] [CrossRef] [PubMed]
  13. Wright, R.E. Logistic regression. In Reading and Understanding Multivariate Statistics; American Psychological Association: Washington, DC, USA, 1995. [Google Scholar]
  14. Wang, X.; Wang, X.; Sun, Z.N. Comparison on Confidence Bands of Decision Boundary between SVM and Logistic Regression. In Proceedings of the 2009 Fifth International Joint Conference on INC, IMS and IDC, Seoul, Republic of Korea, 25–27 August 2009; IEEE (CS): Piscataway, NJ, USA, 2009; pp. 272–277, ISBN 978-0-7695-3769-6. [Google Scholar]
  15. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  16. Kim, K.; Jang, J.; Park, H.; Jeong, J.; Shin, D.; Shin, D. Detecting Abnormal Behaviors in Dementia Patients Using Lifelog Data: A Machine Learning Approach. Information 2023, 14, 433. [Google Scholar] [CrossRef]
  17. Park, H.; Kim, K.; Shin, D.; Shin, D. BGP Dataset-Based Malicious User Activity Detection Using Machine Learning. Information 2023, 14, 501. [Google Scholar] [CrossRef]
  18. Noble, W.S. What is a support vector machine? Nat. Biotechnol. 2006, 24, 1565–1567. [Google Scholar] [CrossRef]
  19. Wei, D. Anomaly detection for blueberry data using sparse autoencoder-support vector machine. PeerJ Comput. Sci. 2023, 9, e1214. [Google Scholar] [CrossRef]
  20. García, S.; Grill, M.; Stiborek, J.; Zunino, A. An empirical comparison of botnet detection methods. Comput. Secur. 2014, 45, 100–123. [Google Scholar] [CrossRef]
  21. An, J.; Cho, S. Variational Autoencoder Based Anomaly Detection Using Reconstruction Probability; Technical Report; SNU Data Mining Center: Seoul, Republic of Korea, 2015. [Google Scholar]
  22. Ghosh, P.; Sajjadi, M.S.M.; Vergari, A.; Black, M.; Schölkopf, B. From Variational to Deterministic Autoencoders. arXiv 2020, arXiv:1903.12436. [Google Scholar]
  23. Xu, J.; Durrett, G. Spherical Latent Spaces for Stable Variational Autoencoders. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; pp. 4503–4513. [Google Scholar]
  24. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. arXiv 2014, arXiv:1406.2661. [Google Scholar] [CrossRef]
  25. Park, S.W.; Huh, J.H.; Kim, J.C. BEGAN v3: Avoiding Mode Collapse in GANs Using Variational Inference. Electronics 2020, 9, 688. [Google Scholar] [CrossRef]
  26. Pei, S.; Xu, R.Y.D.; Xiang, S.; Meng, G. Alleviating Mode Collapse in GAN via Pluggable Diversity Penalty Module. arXiv 2021, arXiv:2108.02353v4. [Google Scholar]
  27. Benjdira, B.; Khursheed, T.; Koubaa, A.; Ammar, A.; Ouni, K. Car detection using unmanned aerial vehicles: Comparison between faster r-cnn and yolov3. In Proceedings of the 2019 1st International Conference on Unmanned Vehicle Systems-Oman (UVS), Muscat, Oman, 5–7 February 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [Google Scholar]
  28. Latino, M.E.; Menegoli, M.; Corallo, A. Agriculture Digitalization: A Global Examination Based on Bibliometric Analysis. IEEE Trans. Eng. Manag. 2022, 1–16. [Google Scholar] [CrossRef]
  29. Radogna, A.V.; Latino, M.E.; Menegoli, M.; Prontera, C.T.; Morgante, G.; Mongelli, D.; Giampetruzzi, L.; Corallo, A.; Bondavalli, A.; Francioso, L. A Monitoring Framework with Integrated Sensing Technologies for Enhanced Food Safety and Traceability. Sensors 2022, 22, 6509. [Google Scholar] [CrossRef] [PubMed]
  30. Ghobakhloo, M. Industry 4.0, digitization, and opportunities for sustainability. J. Clean. Prod. 2020, 252, 119869. [Google Scholar] [CrossRef]
  31. Ejsmont, K.; Gladysz, B.; Kluczek, A. Impact of Industry 4.0 on Sustainability—Bibliometric Literature Review. Sustainability 2020, 12, 5650. [Google Scholar] [CrossRef]
  32. Gonzalez-huitron, V.; Le, A.; Amabilis-sosa, L.E.; Ramírez-pereda, B.; Rodriguez, H. Disease detection in tomato leaves via CNN with lightweight architectures implemented in Raspberry Pi 4. Comput. Electron. Agric. 2021, 181, 105951. [Google Scholar] [CrossRef]
  33. KS X 3267; RS485 MODBUS Interface between Sensor/Actuator Node and Greenhouse Controller in Smart Greenhouse. National Radio Research Agency: Naju-si, Republic of Korea, 2022.
  34. TTAK.KO-10.1172; Modbus/RS485-Based Smart Greenhouse Node/Device Registration Procedures and Description Specification. Telecommunications Technology Association: Seongnam, Republic of Korea, 2019.
  35. Geng, C.; Huang, S.J.; Chen, S. Recent advances in open set recognition: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 3614–3631. [Google Scholar] [CrossRef]
  36. Jang, S.T.; Chang, S.J. Exploration of a light shelf system for multi-layered vegetable cultivation. KIEAE J. 2013, 13, 61–66. [Google Scholar]
Figure 1. Recognition Problem in Known, Unknown, and Control Classes.
Figure 2. The Integrated Platform for Plant Diseases and Pests Diagnosis.
Figure 3. The Process of Conducting Analysis When Abnormal Signs Are Detected.
Figure 4. Design of the Image Acquisition Device and Data Collection Study.
Figure 5. Description of the Image Acquisition Device.
Figure 6. Faster R-CNN Architecture.
Figure 7. The proposed fully convolutional network structure.
Figure 8. Tomato Disease Diagnosis Model.
Figure 9. Example of the Baseline Dataset.
Figure 10. ERD Configuration for Integrated Disease-Related Information.
Figure 11. Diagram of Strategies for Handling New Diseases.
Figure 12. Image Analysis Device.
Figure 13. Data Augmentation Techniques.
Figure 14. Design for Disease Diagnosis Verification.
Figure 15. Photos of new tomato blight (partial).
Figure 16. Example of experimental results on empirical data (*: Canker uses an existing training dataset).
Figure 17. Keras R-CNN model classification accuracy.
Figure 18. Confusion matrix for the final test.
Table 2. Complex environment information for data preprocessing.

| Environmental Information | Standard | Discrimination |
|---|---|---|
| Greenhouse light transmittance | 70% | ±10% |
| Crop light saturation point | Tomato: 1400 μmol·m⁻²·s⁻¹ | Transmittance × Insolation |
| Season | Summer, Winter | Whether the light saturation point is met + seasonal information |
| Direct sunlight to scattered light ratio | Cloth material, 50% | Considering season, insolation, light transmittance + direct sunlight, scattering ratio |
Table 3. New tomato blight image history.

| Disease Name | Counts | Proportion |
|---|---|---|
| Canker | 507 | 16.0% |
| Leaf blight | 921 | 29.1% |
| Ashy mold | 865 | 27.3% |
| Tomato chlorotic leaf curl virus | 409 | 12.9% |
| Powdery mildew | 468 | 14.8% |
Table 4. Number of newly acquired empirical data (7 farms in Jeonbuk and Chungbuk Province).

| Type of Disease/File | Number of Images |
|---|---|
| Tomato TYLCV4_Jeonbuk Jangsu-gun_Wanju-gun | 62 |
| Tomato leaf mold disease7_Chungbuk Cheongju-si | 46 |
| Tomato ashy mold disease 8_Chungbuk Cheongju-si | 59 |
| Tomato ashy mold disease7_Chungbuk Cheongju-si | 75 |
| Tomato Powdery Mildew7_Chungbuk Cheongju-si | 48 |
| Tomato Powdery Mildew 8_Jeonbuk Iksan-si | 18 |
| Total | 308 |