Sensors
  • Review
  • Open Access

14 September 2023

Role of Internet of Things and Deep Learning Techniques in Plant Disease Detection and Classification: A Focused Review

1 Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur 303007, India
2 Sri Karan Narendra Agriculture University, Jobner 303328, India
3 Department of Informatics, Modeling, Electronics and Systems (DIMES), University of Calabria, Arcavacata di Rende, 87036 Rende, Italy
4 National Research Council-Institute of Nanotechnology, Piazzale Aldo Moro, 33C, Arcavacata, 87036 Rende, Italy
This article belongs to the Special Issue Deep Learning for Environmental Remote Sensing

Abstract

The automatic detection, visualization, and classification of plant diseases through image datasets are key challenges for precision and smart farming. The technological solutions proposed so far highlight the supremacy of the Internet of Things in data collection, storage, and communication, and deep learning models in automatic feature extraction and feature selection. Therefore, the integration of these technologies is emerging as a key tool for the monitoring, data capturing, prediction, detection, visualization, and classification of plant diseases from crop images. This manuscript presents a rigorous review of the Internet of Things and deep learning models employed for plant disease monitoring and classification. The review encompasses the unique strengths and limitations of different architectures. It highlights the research gaps identified from the related works proposed in the literature. It also presents a comparison of the performance of different deep learning models on publicly available datasets. The comparison gives insights into the selection of the optimum deep learning models according to the size of the dataset, expected response time, and resources available for computation and storage. This review is important in terms of developing optimized and hybrid models for plant disease classification.

1. Introduction

With the increase in population, there has been a rise in the demand for agricultural products. Approximately 75% of farmers follow the traditional techniques of farming []. These techniques fail to meet the demands of the increasing population worldwide. The variations in climate and soil types in different regions affect crop productivity []. The traditional approaches do not provide any system to monitor the effects of climate and soil types. Additionally, there is no automatic mechanism for the calculation of the amount of fertilizer or pesticide required in a particular crop. This may lead to the excessive use of chemical fertilizers and pesticides. It increases the costs of agriculture, and the chemicals harm the soil as well as human health. Moreover, there is no automatic mechanism available to predict and classify plant diseases at an early stage []. The traditional approaches need human experts for disease detection in crops.
The above clearly shows that the high costs, need for human intervention, low yields, poor crop quality, and adverse effects of the excessive use of fertilizers and pesticides are the major challenges in traditional agriculture. There is a strong requirement to address the above-mentioned challenges. This motivated the present authors to thoroughly review the available technological solutions proposed for agriculture.
In recent years, great improvements have been observed in agriculture due to the massive enthusiasm regarding the Internet of Things (IoT) and deep learning (DL) [,,,]. IoT is useful in gathering real-time information. It helps in the judicious utilization of water, electricity, and fertilizer []. Further, IoT devices are efficient in monitoring visual and non-visual symptoms of disease at an early stage [], the requirements for herbicides or pesticides, weed detection [], and pest detection. IoT provides a fusion of imagery and parametric and genomic datasets []. Meanwhile, DL techniques are effective in image recognition, object detection, pattern matching, and classification [,]. Deep convolutional neural networks (DCNNs) are effective in automated feature extraction and feature selection. These are useful to extend the applications of DL techniques for plant disease detection [], weed detection [], fruit counting [], yield prediction [,], and the visualization of the detected fruit, disease, or weed []. These techniques improve the precision and reduce the time consumed in manual feature extraction and image recognition [,].
The general architecture of the IoT-enabled convolutional neural network (CNN) architecture (IoTCNN) applied for the multi-class classification of plant diseases is shown in Figure 1. We considered the pearl millet plant to demonstrate its architecture [,]. The architecture collects the data using cameras and sensors mounted in the field. The collected data are stored on a Raspberry Pi and a cloud server. The IoTCNN helps in data analysis and decision making. Moreover, it is suitable for embedded systems and smartphones for disease detection.
Figure 1. Architecture of IoT-enabled convolutional neural network.
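To make the data-collection path in Figure 1 concrete, the following minimal Python sketch shows how a Raspberry Pi field node might upload one camera frame together with sensor readings to a cloud endpoint. The URL, field names, and the publish_reading helper are hypothetical; the sketch only assumes the widely used requests library.

```python
import time

import requests  # third-party HTTP library; the endpoint below is a placeholder

CLOUD_URL = "https://example.com/api/field-data"  # hypothetical cloud endpoint

def publish_reading(image_path: str, temperature_c: float, humidity_pct: float) -> None:
    """Upload one camera frame plus sensor readings from the field node."""
    with open(image_path, "rb") as f:
        requests.post(
            CLOUD_URL,
            files={"image": f},                  # leaf image from the mounted camera
            data={"temperature": temperature_c,  # parametric sensor data
                  "humidity": humidity_pct,
                  "timestamp": time.time()},
            timeout=10,
        )

# e.g., called once per sampling interval on the Raspberry Pi:
# publish_reading("frame_0001.jpg", 31.2, 64.0)
```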
The IoTCNN comprises convolution, pooling, and fully connected layers. The working mechanism of these layers is explained below.
i. Convolutional Layer
This layer receives an input image in the form of a matrix. It includes a small matrix ‘kernel’ that strides over an input image to extract features from the image without destructing the spatial relationships between the pixels. The convolution operation g ( x , y ) , as defined in Equation (1), is the dot product of two functions, h ( x , y ) and f ( x , y ) . This operation is demonstrated in Figure 2.
g(x, y) = h(x, y) · f(x, y)  (1)
Figure 2. Convolutional operation process: (a) input matrix, (b) kernel, (c) feature map.
A part of an input image, as shown in Figure 2 with a square box under the brown boundary shown from rows 1 to 3 and columns 1 to 3 of the input matrix, is connected to a convolutional layer to perform the convolution operation. The dot product of this part of the input image and the filter shown in Figure 2 gives a single integer of the output volume, as shown in cell (1,1) of the matrix in Figure 2. Then, the filter is moved over the next receptive field, as shown with the blue boundary from rows 1 to 3 and columns 2 to 4 in Figure 2, and performs the convolution operation again. This procedure is repeated until the filter moves over the whole image and gives a feature map, as shown in Figure 2. Different filters generate different feature maps. Therefore, the convolution layer acts as a feature detector.
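As an illustration of the sliding-window operation described above, the following NumPy sketch computes a feature map exactly as in Figure 2: the kernel strides over the input, and each output cell is the dot product of the kernel with the current receptive field. The convolve2d helper and the toy input are illustrative choices, not code from the reviewed works.

```python
import numpy as np

def convolve2d(image, kernel, stride=1):
    """Slide `kernel` over `image` and return the feature map (valid padding)."""
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    fmap = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            fmap[i, j] = np.sum(patch * kernel)  # dot product g(x, y) = h(x, y) · f(x, y)
    return fmap

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 input matrix
kernel = np.array([[1., 0., -1.]] * 3)             # 3x3 edge-like filter
print(convolve2d(image, kernel))                   # 3x3 feature map
```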
ii. Pooling Layer
This layer receives the convolved image as an input and applies a non-linear down-sampling function. In max pooling, the maximum value of each local patch is extracted into the feature map, as shown in Figure 3. Here, 6, 7, 3, and 8 are the maximum values of the features shown in the first, second, third, and fourth quadrants, respectively, of the matrix in Figure 3. Only these four numbers are carried into the feature map for the next step of processing. In average pooling, the average value of each local patch is taken instead, as shown in Figure 4. Both operations extract the relevant features from the image. Therefore, the pooling operation reduces the number of parameters and computations in a CNN model without losing vital information [].
Figure 3. Max pooling operation.
Figure 4. Average pooling operation.
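Both pooling variants can be expressed in a few lines of NumPy, as in the illustrative pool2d helper below. The 4 × 4 input is chosen so that max pooling returns the quadrant maxima 6, 7, 3, and 8, mirroring Figure 3, while mode="mean" corresponds to the average pooling of Figure 4.

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Non-overlapping pooling over `size` x `size` patches (stride = size)."""
    h, w = x.shape[0] // size, x.shape[1] // size
    patches = x[:h*size, :w*size].reshape(h, size, w, size).swapaxes(1, 2)
    return patches.max(axis=(2, 3)) if mode == "max" else patches.mean(axis=(2, 3))

x = np.array([[1., 6., 2., 1.],
              [5., 3., 7., 0.],
              [2., 1., 8., 4.],
              [0., 3., 5., 2.]])
print(pool2d(x, 2, "max"))   # [[6. 7.] [3. 8.]] -- the quadrant maxima of Figure 3
print(pool2d(x, 2, "mean"))  # average pooling, as in Figure 4
```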
iii. Fully Connected Layer
This layer follows many convolution and pooling layers. It contains connections to all activations in the previous layer and enables the network to learn about the non-linear combinations of features for classification. It calculates the value of the gradient of the loss function and back-propagates it to the previous layers. Thus, there is a continuous update in the parameters of the model. It minimizes the value of the loss function and improves the classification accuracy.
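Putting the three layer types together, the PyTorch sketch below stacks convolution, pooling, and a fully connected classifier, and shows the loss gradient being back-propagated through all layers, as described above. The layer sizes and the four-class head are illustrative assumptions, not the IoTCNN configuration of Figure 1; nn.CrossEntropyLoss applies the softmax internally.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Conv -> pool -> FC stack; all widths are illustrative."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # for 224x224 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))  # logits; softmax is applied in the loss

model = TinyCNN(num_classes=4)               # e.g., four hypothetical disease classes
x = torch.randn(8, 3, 224, 224)              # a batch of field images
loss = nn.CrossEntropyLoss()(model(x), torch.randint(0, 4, (8,)))
loss.backward()                              # gradients back-propagate to all layers
```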
IoT devices capture climate conditions such as cloud cover, rain, sunshine, temperature, and humidity. DL techniques can work on such real-time datasets captured by IoT devices, while field monitoring and datasets collected by other devices, such as drones, cameras, etc., are used to determine the health of a crop []. Recording and analyzing these conditions at an early stage is significant in preventing crop disease. The smart integration of DL techniques with IoT is effective in automating disease detection, the prediction of fertilizer requirements and water requirements, crop yield prediction, etc. In this research, we provide a rigorous review of the state-of-the-art DL and IoT techniques applied in plant disease detection and classification. We also compare the performance of different DL models on the same dataset. Further, we highlight the importance of employing transfer learning and optimization techniques to achieve better performance in DL models. We also give insights for the development of an automatic tool that encompasses IoT and DL techniques to assist farmers in smart farming, as shown in Figure 5.
Figure 5. Flow of the proposed system.
The rest of the paper is organized as follows. Section 2 illustrates the state-of-the-art of various deep learning models. Section 3 presents the comparative analysis of disease detection using IoT and DL. Section 4 includes the discussion. Section 5 concludes the findings and gives scope for future work. The major contributions of this manuscript are as follows:
  • A rigorous review of the deep learning techniques used for the detection and classification of diseases in plants;
  • The optimization of the DL models according to the response time, size of the dataset, and type of dataset;
  • The determination of the optimum models for early disease prediction, detection, and classification;
  • The integration of IoT and hybrid DL models for plant disease detection and classification.

2. State-of-the-Art Deep Learning Models

In this section, we present the deep learning models proposed in the literature for various applications.

2.1. AlexNet

The AlexNet model was proposed in 2012 [] with 5-Conv and 3-FC layers. The convolution layers Conv-1 and Conv-2 are followed by normalization and a pooling layer. The last convolution layer, Conv-5, is followed by a single pooling layer, as shown in Figure 6. It introduced the use of the ReLU activation function, which improved the performance and allowed for generalization in the DL model.
Figure 6. AlexNet architecture.
The authors applied a variant of AlexNet on a maize dataset and achieved a top-five test error rate of 15.3% and accuracy of 93.8% using data augmentation and regularization methods. The model can be applied to various fields, such as plant disease detection, NLP, and medical image processing [].
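Since transfer learning is a recurring theme in the works reviewed here, the sketch below shows one common way to reuse an ImageNet-pretrained AlexNet for a plant-disease task: freeze the convolutional features and replace the final fully connected layer. It assumes a recent torchvision (the weights enum API); the four-class head is a hypothetical choice, not the configuration of the maize study cited above.

```python
import torch.nn as nn
from torchvision import models

# Load AlexNet with ImageNet weights and adapt it to a new classification task.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                   # freeze the convolutional feature extractor
model.classifier[6] = nn.Linear(4096, 4)      # new head for 4 (hypothetical) disease classes
```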

2.2. GoogleNet/Inception

Researchers addressed the issues of poor resource utilization and the lack of parallel processing in the AlexNet model and proposed the GoogleNet architecture in 2014 []. The architecture, as shown in Figure 7, has 22 layers (21-Conv and 1-FC). It has four million trainable parameters and performs batch normalization to resolve vanishing gradients. It employs the softmax activation function σ(z)_j, as defined in Equation (2), where z is the vector of inputs given to the output layer; for example, with 10 output units, z contains 10 elements. The index of each output unit is represented by j, which takes values from 1 to K, the number of output units. This function is useful for multiclass classification.
σ(z)_j = e^{z_j} / Σ_{k=1}^{K} e^{z_k}  (2)
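A small numerical example of Equation (2): for three output units with logits z = (2.0, 1.0, 0.1), the softmax probabilities are roughly 0.66, 0.24, and 0.10, and they sum to one. The NumPy sketch below also subtracts the maximum logit first, a standard numerical-stability step that is not part of Equation (2) itself.

```python
import numpy as np

def softmax(z):
    """Equation (2): sigma(z)_j = exp(z_j) / sum_k exp(z_k), numerically stabilized."""
    e = np.exp(z - z.max())      # subtracting the max does not change the result
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1])    # logits from K = 3 output units
print(softmax(z))                # [0.659 0.242 0.099], sums to 1
```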
Figure 7. GoogleNet architecture.
GoogleNet improved the utilization of computing resources and showed a top-five test error rate of 6.67% in the ILSVRC-2014 competition, 8.63% lower than AlexNet. The model uses global average pooling instead of fully connected layers and its performance can be improved by increasing its limit of divergence [].

2.3. VGGNet-16 and VGGNet-19

Although GoogleNet reduced the error rate of AlexNet, there was scope to improve the accuracy of disease detection and classification. Researchers proposed the VGGNet-16 and VGGNet-19 models for this purpose in 2014 []. The basic architecture of the VGG model is shown in Figure 8. These models were submitted to the ILSVRC-2014 competition and achieved a top-five error rate of 7.5%, which is 7.8% lower than that of the AlexNet model. The VGG models were trained in stages to overcome vanishing gradients, and the VGG-16 model achieved 7.86% higher accuracy than AlexNet, with the highest accuracy of 99.53%. The authors [] also claimed that VGG-19 is the most efficient classifier among AlexNet, Inception-v1, and Inception-v3, with accuracy of 99.67%. The VGG-19 model uses max-pooling layers to reduce the volume of the network. However, it has high computational complexity due to its 130 million trainable parameters.
Figure 8. VGGNet architecture.

2.4. Inception-v3

For further improvements in the efficiency of CNN models, the authors of [] used the idea of factorization and submitted the Inception-v3 model to the ILSVRC-2015 competition. This model outperformed the benchmark classification models submitted to the ILSVRC-2012 competition. Inception-v3 contains 23 million trainable parameters, fewer than VGG-16 and VGG-19; therefore, it requires less storage space. It showed a top-five error rate of 3.58% on the validation set, which is 11.72%, 3.09%, and 3.92% lower than the top-five error rates of AlexNet (15.3%), GoogleNet (6.67%), and VGG-19 (7.5%), respectively. However, while Inception-v3 achieves 2.5 times higher accuracy than GoogleNet, it is computationally expensive due to the 42 layers in the network, as shown in Figure 9.
Figure 9. Inception-v3 architecture.
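The factorization idea behind Inception-v3 can be illustrated with a parameter count: two stacked 3 × 3 convolutions cover the same receptive field as one 5 × 5 convolution with fewer weights. The channel widths in the PyTorch sketch below are arbitrary illustrative values.

```python
import torch.nn as nn

# One 5x5 convolution vs. its factorized equivalent: two stacked 3x3 convolutions.
conv5x5 = nn.Conv2d(64, 64, kernel_size=5, padding=2)
factorized = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
)
n = lambda m: sum(p.numel() for p in m.parameters())
print(n(conv5x5), n(factorized))  # 102464 vs. 73856 parameters (biases included)
```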

2.5. ResNet

Deeper CNNs require a huge dataset for training and are difficult to train; moreover, they require a long training time. To address these issues, researchers [] developed residual networks in 2015. ResNet is eight times deeper than VGG-19 and contains 50, 101, or 152 layers. The basic architecture of ResNet is shown in Figure 10. ResNet-50 contains 26 million, ResNet-101 contains 60 million, and ResNet-152 contains 90 million trainable parameters. Based on these parameter counts, ResNet-50 has the lowest space and time complexity, whereas ResNet-152 has the highest. Residual networks learn residual functions rather than unreferenced mappings. Like the ReLU activation function, the residual connections provide a solution to the vanishing gradient problem. The authors employed a softmax layer in ResNet to extend its applications to plant disease identification. This model reported a low error rate of 3.57%, which is 0.01% lower than that of Inception-v3. The authors submitted this CNN model to the ILSVRC-2015 and COCO 2015 challenges and won first place. However, ResNet models require a long training time, which makes them difficult to apply to real-time problems where quick decisions are required.
Figure 10. ResNet architecture.
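The residual learning described above amounts to computing y = F(x) + x, so each block only has to learn the residual F with respect to its input, and the identity shortcut lets gradients flow past the stacked layers. A minimal PyTorch residual block is sketched below; the channel width is an illustrative assumption.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """y = F(x) + x: the shortcut eases the vanishing-gradient problem."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.body(x) + x)  # residual (identity) connection

print(ResidualBlock()(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```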

2.6. Inception-v4 and Inception-ResNet

To further minimize the error rate and accelerate the training of the network, the authors of [] developed a hybrid of residual and inception networks in 2016. This model has 43 million trainable parameters, as shown in Figure 11. The hybrid developed by ensembling three residuals and one Inception-v4 network gave the top-five error rate of 3.08%, which is 0.49% lower than the error rate reported for ResNet. Inception-v4 has higher space and time complexity than the ResNet-50, Inception-v3, and AlexNet models.
Figure 11. Inception-v4 and Inception-ResNet architecture.

2.7. Xception

The authors of [] replaced the inception modules of the Inception model with depth-wise separable convolutions and developed the Xception architecture in 2016. It differs from the Inception model in the sequence in which the convolution operations are performed. Moreover, it does not employ the ReLU function for non-linearity between the depth-wise and point-wise operations. This model contains 23 million trainable parameters, as shown in Figure 12. The authors [] reported that VGG-16 [], ResNet-152 [], Inception-v3 [], and Xception had top-five accuracies of 90.1%, 93.35%, 94.1%, and 94.5%, respectively. This shows that the Xception model outperforms the VGG-16, ResNet-152, and Inception-v3 models.
Figure 12. Xception architecture.
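The depth-wise separable convolution on which Xception is built can be written in PyTorch as a grouped (per-channel) convolution followed by a 1 × 1 point-wise convolution, as sketched below with illustrative channel counts; the comment estimates the parameter saving relative to a standard convolution.

```python
import torch.nn as nn

# Depthwise separable convolution: a per-channel 3x3 convolution followed by a
# 1x1 pointwise convolution that mixes channels. Channel counts are illustrative.
depthwise_separable = nn.Sequential(
    nn.Conv2d(32, 32, kernel_size=3, padding=1, groups=32),  # depthwise: one filter per channel
    nn.Conv2d(32, 64, kernel_size=1),                        # pointwise: 1x1 cross-channel mix
)
# Weight count: 32*3*3 + 32*64 = 2,336 vs. 32*64*3*3 = 18,432 for a standard
# 3x3 convolution with the same input/output channels (roughly 8x fewer).
```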

2.8. SqueezeNet

Developing a CNN model with low computational complexity and high accuracy has become a priority among researchers. To contribute in this direction, the authors of [] proposed the SqueezeNet model in 2017. Its architecture is shown in Figure 13. This model uses 1 × 1 filters rather than 3 × 3 filters. The SqueezeNet model consists of a stack of fire modules and pooling layers. Each fire module contains a squeeze layer and an expand layer, both producing feature maps of the same size. The squeeze layer reduces the depth of the network, whereas the expand layer increases it. This model has 1.25 million trainable parameters. The authors claimed that the SqueezeNet model has 50 times fewer parameters than AlexNet, yet it reports top-five accuracy of 80.3%, which is equivalent to the top-five accuracy of AlexNet. Further modifications in the SqueezeNet model improved its performance: the addition of one bypass connection to the network increased its top-one accuracy by 2.9% and its top-five accuracy by 2.2% [].
Figure 13. SqueezeNet architecture.
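A fire module can be sketched in a few lines of PyTorch: a 1 × 1 squeeze convolution reduces the depth, and parallel 1 × 1 and 3 × 3 expand convolutions restore it. The channel widths below follow the fire2 pattern reported in the SqueezeNet paper, but the class itself is an illustrative reconstruction, not the reference implementation.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """SqueezeNet fire module: squeeze (1x1), then parallel 1x1/3x3 expand paths."""
    def __init__(self, in_ch=96, squeeze=16, expand=64):
        super().__init__()
        self.squeeze = nn.Sequential(nn.Conv2d(in_ch, squeeze, 1), nn.ReLU())
        self.expand1x1 = nn.Sequential(nn.Conv2d(squeeze, expand, 1), nn.ReLU())
        self.expand3x3 = nn.Sequential(nn.Conv2d(squeeze, expand, 3, padding=1), nn.ReLU())

    def forward(self, x):
        s = self.squeeze(x)  # reduce depth
        return torch.cat([self.expand1x1(s), self.expand3x3(s)], dim=1)  # increase depth

print(Fire()(torch.randn(1, 96, 55, 55)).shape)  # torch.Size([1, 128, 55, 55])
```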

2.9. DenseNet

In the ResNet model, each layer passes its information only to the next layer, which increases the computational cost of the network. To address this challenge, researchers [] developed the DenseNet model in 2017. Its architecture is shown in Figure 14. DenseNet architectures with 190, 201, and 250 layers contain 40 million, 20 million, and 15.3 million trainable parameters, respectively. In the DenseNet model, the collective information of all previous layers is transferred to each subsequent layer. This creates direct connections among the intermediate layers and reduces the width of the network, making DenseNet more efficient than ResNet in terms of computational complexity and memory utilization. A comparative study presented in [] showed that DenseNet performed better than the ResNet and VGG models. The authors of [] claimed that the use of a customized softmax layer makes DenseNet efficient for plant disease identification. It contains a smaller number of parameters and reports high accuracy, with error rates of 3.46% on C10+ and 17.18% on C100+, the augmented forms of the CIFAR-10 and CIFAR-100 [] datasets. These error rates are significantly lower than those achieved by ResNet architectures [].
Figure 14. DenseNet architecture.
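The dense connectivity described above, in which each layer receives the concatenated feature maps of all preceding layers, can be sketched in PyTorch as follows; the growth rate and layer count are illustrative, not those of the 190-, 201-, or 250-layer variants.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer consumes the concatenation of all earlier feature maps,
    so information is passed collectively rather than layer-by-layer."""
    def __init__(self, in_ch=16, growth=8, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(nn.BatchNorm2d(in_ch + i * growth), nn.ReLU(),
                          nn.Conv2d(in_ch + i * growth, growth, 3, padding=1))
            for i in range(n_layers)
        )

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))  # direct links to all earlier layers
        return torch.cat(feats, dim=1)

print(DenseBlock()(torch.randn(1, 16, 32, 32)).shape)  # torch.Size([1, 40, 32, 32])
```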
The above discussion shows that researchers are modifying the architectures of CNN models to improve the performance, minimize the error rate, and reduce the computation time. They are also working towards extending the applicability of CNN models in different areas. They have made changes in the number of layers in the networks, activation functions, normalization strategies, and types of connections between different layers of a network. They are developing customized CNN models according to the type of dataset, size of the dataset, and nature of the problem to be solved.
Based on the discussion, it is clear that the DL models VGG [], ResNet-101 [], AlexNet [], ResNet [], and DenseNet [] are effective in the binary as well as multiclass classification of diseases, objects, digits, etc. However, the accuracy of multiclass classification is lower than that achieved for binary classification. Further, it is observed that GoogleNet is efficient in parallel computation and is therefore useful for making predictions at a lower computational cost than AlexNet. XceptionNet is useful when higher computational costs can be tolerated in exchange for higher accuracy, whereas SqueezeNet is beneficial where computational and storage resources are limited. A brief summary of the advantages and disadvantages of various DL models is given in Table 1.
Table 1. Advantages and disadvantages of various DL models.

4. Discussion

In this section, we elaborate on the analysis of the IoT and DL models applied for plant disease detection, visualization, and classification. We observe that factors such as the crop age, climate, and location affect the quality of the dataset and hence the accuracy of detection, visualization, and classification.
Based on the review of related works, we observe that IoT systems are important for the collection of real-time imagery as well as parametric datasets from fields. However, merely using IoT systems is not sufficient to obtain intelligent predictions about the nutrient, fertilizer, and pesticide requirements and disease detection and classification. Therefore, the datasets collected from IoT systems must be fed to DL models for analysis and decision making and to detect climate changes.
However, these integrated systems require improvements in terms of the linking of drones with GPS field sensors. GPS requires a continuous power supply, which increases the cost for farmers.
Moreover, it is observed that disruptions in the connectivity between sensors, the cloud, and mobile applications decrease the reliability of the integrated systems, since analysis results are typically disseminated to farmers as alerts or notifications on their mobile phones. Data storage and transmission via the cloud also raise privacy and security issues.
Further, deficiencies in nutrients and diseases may leave similar visual symptoms on a crop plant. Thus, there is a need to design IoT systems that can capture even minor differences in texture, color, etc., in a cropped image, and the DL model should be optimized for the precise recognition of diseases and nutrient deficiencies. The symptoms are highly dependent on the season of growing, atmospheric conditions, and fertilization strategies. Thus, there is a need to design intelligent systems that can accurately predict diseases even for crops grown in variable environments.
Based on the review of DL models, we conclude that AlexNet minimizes the problem of vanishing gradients by using the ReLu activation function. Therefore, it can be applied for plant disease detection and classification. However, this model is inefficient in resource utilization, and it lacks parallel processing.
The model GoogleNet overcomes this drawback and employs parallel processing for the better utilization of resources. Moreover, it reports higher accuracy than AlexNet in the detection and classification of plant diseases. The DL models VGG-19, ResNet, and DenseNet report higher accuracy than GoogleNet on huge training datasets. However, these models require large storage space for the layers of the network and trainable parameters. Among all the models, DenseNet reports the highest accuracy in plant disease detection and classification, but it requires a long time for training.
We also observe that the models Inception, SqueezeNet, MobileNet, and modified or reduced MobileNet contain a smaller number of trainable parameters. These models require less storage space and shorter computational times. Therefore, these models may prove useful in providing mobile-based applications to assist farmers in predicting, detecting, and classifying plant diseases.
We also conclude that the hybrid models developed by employing VGG and Inception models or DenseNet and Inception models are storage-efficient and report high accuracy in disease detection, even in the case of complex backgrounds in images. A complex background refers to the presence of multiple objects, different colors, shadows, etc.
Further, we conclude that image segmentation and saliency mapping are useful for the visualization of plant diseases. However, challenges such as high log-loss, low accuracy in mobile applications, and imprecision in distinguishing diseases with similar symptoms need to be addressed.
The selection of the model and the fine-tuning of its hyperparameters according to the type of dataset are the key requirements for achieving the optimum performance of the model. Employing the ReLU or softmax activation function, adding a suitable number of dropout layers, and changing the number of layers in the neural network according to the size and type of the dataset are important in this regard. Applying pre-processing techniques such as rotation, histogram equalization, and changes in image contrast improves the robustness of the model and makes it effective for datasets collected from different sources.
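As an example of such pre-processing, the torchvision sketch below composes rotation, contrast jitter, and histogram equalization into a single augmentation pipeline; the specific transforms and parameter values are illustrative choices, not the settings used in the reviewed studies.

```python
from torchvision import transforms

# Illustrative augmentation pipeline for leaf images (PIL inputs).
augment = transforms.Compose([
    transforms.RandomRotation(degrees=30),   # random rotation up to +/-30 degrees
    transforms.ColorJitter(contrast=0.3),    # random contrast changes
    transforms.RandomEqualize(p=0.5),        # histogram equalization, half the time
    transforms.ToTensor(),                   # convert to a CHW float tensor
])
```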
IoT holds the potential to revolutionize farming practices, increase productivity, enhance sustainability, and contribute to global food security. However, the successful implementation of IoT in agriculture requires that data privacy and security concerns be addressed. Moreover, we must consider the digital divide in rural areas and promote farmer education and awareness about the benefits of IoT adoption. Nonetheless, with ongoing technological advancements and industry collaborations, IoT is expected to play a crucial role in shaping the future of agriculture.

5. Conclusions

In this manuscript, we have presented a systematic review of the literature to showcase the role of integrating cutting-edge technologies such as IoT and DL in automating agricultural processes. Based on the review, we conclude that the DL models VGG-19, ResNet, DenseNet, AlexNet, and GoogleNet report higher accuracy but are storage-inefficient; thus, they cannot be integrated with mobile applications. Other DL models, such as Inception, SqueezeNet, MobileNet, and the hybrids of VGG and Inception or of DenseNet and Inception, require less storage without compromising accuracy. Therefore, these can be integrated with mobile applications for disease detection and classification. The hybrid models are also efficient in disease recognition from images with complex backgrounds and would thus be useful in real-life implementations. Moreover, integrating IoT and DL models may prove useful in developing tools that assist farmers in improving the productivity and quality of crop products, thereby reducing the costs of agriculture and revolutionizing plant disease detection, visualization, and classification. However, while DL- and IoT-based systems are available, there is huge scope to design power-efficient IoT devices for agriculture. Moreover, a great deal of research is required to improve the privacy, security, and uninterrupted communication of data between DL and IoT systems.

Author Contributions

Conceptualization, N.K., V.S.D., G.R., E.Z. and E.V.; Data curation, N.K., G.R., E.Z. and E.V.; Formal analysis, G.R., E.Z. and E.V.; Funding acquisition, E.Z.; Investigation, V.S.D., G.R., E.Z. and E.V.; Methodology, N.K., V.S.D., G.R., E.Z. and E.V.; Project administration, V.S.D., G.R., E.Z. and E.V.; Resources, E.Z.; Software, N.K. and G.R.; Supervision, V.S.D., G.R. and E.V.; Validation, G.R., E.Z. and E.V.; Writing—original draft, N.K. and G.R.; Writing—review and editing, V.S.D., G.R., E.Z. and E.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Department of Informatics, Modeling, Electronics and Systems (DIMES), University of Calabria, and the APC was funded by SIMPATICO_ZUMPANO.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

This is a review article. Data sources used in the literature are mentioned in the text. No other data are associated with this article.

Acknowledgments

We acknowledge the support provided by Manipal University Jaipur and the University of Calabria to carry out this research.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DL   Deep Learning
IoT  Internet of Things
CNN  Convolutional Neural Network
FC   Fully Connected

References

  1. Singh, R.; Singh, G.S. Traditional agriculture: A climate-smart approach for sustainable food production. Energy Ecol. Environ. 2017, 2, 296–316. [Google Scholar] [CrossRef]
  2. Nowak, B. Precision agriculture: Where do we stand? A review of the adoption of precision agriculture technologies on field crops farms in developed countries. Agric. Res. 2021, 10, 515–522. [Google Scholar]
  3. Dhaka, V.S.; Meena, S.V.; Rani, G.; Sinwar, D.; Kavita; Ijaz, M.F.; Woźniak, M. A survey of deep convolutional neural networks applied for prediction of plant leaf diseases. Sensors 2021, 21, 4749. [Google Scholar] [CrossRef]
  4. Kundu, N.; Rani, G.; Dhaka, V.S. A comparative analysis of deep learning models applied for disease classification in bell pepper. In Proceedings of the 2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC), Waknaghat, India, 6–8 November 2020; pp. 243–247. [Google Scholar]
  5. Gangwar, A.; Rani, G.; Dhaka, V.P.S.; Sonam. Detecting Tomato Crop Diseases with AI: Leaf Segmentation and Analysis. In Proceedings of the 2023 7th International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 11–13 April 2023; pp. 902–907. [Google Scholar]
  6. Savla, D.; Dhaka, V.S.; Rani, G.; Oza, M. Apple Leaf Disease Detection and Classification Using CNN Models. In Proceedings of the International Conference on Computing in Engineering & Technology; Springer: Berlin/Heidelberg, Germany, 2022; pp. 277–290. [Google Scholar]
  7. García, L.; Parra, L.; Jimenez, J.M.; Lloret, J.; Lorenz, P. IoT-based smart irrigation systems: An overview on the recent trends on sensors and iot systems for irrigation in precision agriculture. Sensors 2020, 20, 1042. [Google Scholar] [CrossRef]
  8. Chen, C.J.; Huang, Y.Y.; Li, Y.S.; Chang, C.Y.; Huang, Y.M. An AIoT Based Smart Agricultural System for Pests Detection. IEEE Access 2020, 8, 180750–180761. [Google Scholar] [CrossRef]
  9. Dankhara, F.; Patel, K.; Doshi, N. Analysis of robust weed detection techniques based on the internet of things (iot). Procedia Comput. Sci. 2019, 160, 696–701. [Google Scholar] [CrossRef]
  10. Canalle, G.K.; Salgado, A.C.; Loscio, B.F. A survey on data fusion: What for? in what form? what is next? J. Intell. Inf. Syst. 2021, 57, 25–50. [Google Scholar]
  11. Kundu, N.; Rani, G.; Dhaka, V.S.; Gupta, K.; Nayak, S.C.; Verma, S.; Ijaz, M.F.; Woźniak, M. Iot and interpretable machine learning based framework for disease prediction in pearl millet. Sensors 2021, 21, 5386. [Google Scholar] [CrossRef]
  12. Zhang, Z.; Liu, H.; Meng, Z.; Chen, J. Deep learning-based automatic recognition network of agricultural machinery images. Comput. Electron. Agric. 2019, 166, 104978. [Google Scholar] [CrossRef]
  13. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 2016, 7, 1–10. [Google Scholar] [CrossRef]
  14. Fawakherji, M.; Youssef, A.; Bloisi, D.; Pretto, A.; Nardi, D. Crop and Weeds Classification for Precision Agriculture Using Context-Independent Pixel-Wise Segmentation. In Proceedings of the 3rd IEEE International Conference on Robotic Computing, IRC 2019, Naples, Italy, 25–27 February 2019; pp. 146–152. [Google Scholar] [CrossRef]
  15. Chen, S.W.; Shivakumar, S.S.; Dcunha, S.; Das, J.; Okon, E.; Qu, C.; Taylor, C.J.; Kumar, V. Counting Apples and Oranges with Deep Learning: A Data-Driven Approach. IEEE Robot. Autom. Lett. 2017, 2, 781–788. [Google Scholar] [CrossRef]
  16. Nevavuori, P.; Narra, N.; Lipping, T. Crop yield prediction with deep convolutional neural networks. Comput. Electron. Agric. 2019, 163, 104859. [Google Scholar] [CrossRef]
  17. Sinwar, D.; Dhaka, V.S.; Sharma, M.K.; Rani, G. AI-based yield prediction and smart irrigation. Internet Things Anal. Agric. 2020, 2, 155–180. [Google Scholar]
  18. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef]
  19. Dara, S.; Tumma, P. Feature Extraction By Using Deep Learning: A Survey. In Proceedings of the 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 29–31 March 2018; pp. 1795–1801. [Google Scholar]
  20. Hubel, D.H.; Wiesel, T.N. Receptive Fields, Binocular Interaction and Functional Architecture in the Cat’s Visual Cortex. J. Physiol. 1962, 160, 106–154. [Google Scholar] [CrossRef]
  21. Traore, B.B.; Kamsu-Foguem, B.; Tangara, F. Deep convolution neural network for image recognition. Ecol. Inform. 2018, 48, 257–268. [Google Scholar] [CrossRef]
  22. Hijazi, S.; Kumar, R.; Rowen, C. Using Convolutional Neural Networks for Image Recognition; Cadence Design Systems Inc.: San Jose, CA, USA, 2015; Volume 9, p. 1. [Google Scholar]
  23. Patil, B.V.; Patil, P.S. Computational Method for Cotton Plant Disease Detection of Crop Management Using Deep Learning and Internet of Things Platforms. In Proceedings of the Evolutionary Computing and Mobile Sustainable Networks; Suma, V., Bouhmala, N., Wang, H., Eds.; Springer Singapore: Singapore, 2021; pp. 875–885. [Google Scholar]
  24. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Adv. Neural Inf. Process. Syst. 2012. [Google Scholar] [CrossRef]
  25. Andrea, C.C.; Daniel, B.M.; Misael, J.B.J. Precise weed and maize classification through convolutional neuronal networks. In Proceedings of the 2017 IEEE 2nd Ecuador Technical Chapters Meeting, ETCM 2017, Salinas, Ecuador, 16–20 October 2017; pp. 1–6. [Google Scholar] [CrossRef]
  26. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  27. OpenGenus Tech Review Team. GoogleNet. 2021. Available online: https://iq.opengenus.org/googlenet/ (accessed on 13 December 2021).
  28. Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018, 145, 311–318. [Google Scholar] [CrossRef]
  29. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2015, arXiv:1409.1556. [Google Scholar]
  30. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2818–2826. [Google Scholar] [CrossRef]
  31. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar] [CrossRef]
  32. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. arXiv 2016, arXiv:1602.07261. [Google Scholar] [CrossRef]
  33. Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar] [CrossRef]
  34. Sik-Ho-Tsang. Review: Inception-v3—1st Runner Up (Image Classification) in ILSVRC 2015, 2018.
  35. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
  36. Huang, G.; Liu, Z.; Maaten, L.V.D.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar] [CrossRef]
  37. Brahimi, M. Deep learning for plants diseases; Springer International Publishing: Cham, Switzerland, 2018; pp. 159–175. [Google Scholar] [CrossRef]
  38. Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images. Technical Report; University of Toronto: Toronto, ON, Canada, 2009. [Google Scholar]
  39. Zagoruyko, S.; Komodakis, N. Wide Residual Networks. In Proceedings of the British Machine Vision Conference 2016, BMVC 2016, York, UK, 19–22 September 2016; pp. 87.1–87.12. [Google Scholar] [CrossRef]
  40. Jha, K.; Doshi, A.; Patel, P.; Shah, M. A comprehensive review on automation in agriculture using artificial intelligence. Artif. Intell. Agric. 2019, 2, 1–12. [Google Scholar] [CrossRef]
  41. Shafi, U.; Mumtaz, R.; Shafaq, Z.; Zaidi, S.M.H.; Kaifi, M.O.; Mahmood, Z.; Zaidi, S.A.R. Wheat rust disease detection techniques: A technical perspective. J. Plant Dis. Prot. 2022, 129, 489–504. [Google Scholar] [CrossRef]
  42. Ayaz, M.; Ammad-Uddin, M.; Sharif, Z.; Mansour, A.; Aggoune, E.H.M. Internet-of-Things (IoT)-based smart agriculture: Toward making the fields talk. IEEE Access 2019, 7, 129551–129583. [Google Scholar] [CrossRef]
  43. Hu, W.J.; Fan, J.; Du, Y.X.; Li, B.S.; Xiong, N.; Bekkering, E. MDFC-ResNet: An Agricultural IoT System to Accurately Recognize Crop Diseases. IEEE Access 2020, 8, 115287–115298. [Google Scholar] [CrossRef]
  44. Garg, D.; Alam, M. Deep learning and IoT for agricultural applications. In Internet of Things (IoT): Concepts and Applications; Springer: Berlin/Heidelberg, Germany, 2020; pp. 273–284. [Google Scholar] [CrossRef]
  45. Zhao, Y.; Liu, L.; Xie, C.; Wang, R.; Wang, F.; Bu, Y.; Zhang, S. An effective automatic system deployed in agricultural Internet of Things using Multi-Context Fusion Network towards crop disease recognition in the wild. Appl. Soft Comput. J. 2020, 89. [Google Scholar] [CrossRef]
  46. Zhang, J.; Pu, R.; Yuan, L.; Huang, W.; Nie, C.; Yang, G. Integrating remotely sensed and meteorological observations to forecast wheat powdery mildew at a regional scale. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4328–4339. [Google Scholar] [CrossRef]
  47. Orchi, H.; Sadik, M.; Khaldoun, M. On using artificial intelligence and the internet of things for crop disease detection: A contemporary survey. Agriculture 2022, 12, 9. [Google Scholar] [CrossRef]
  48. Mishra, M.; Choudhury, P.; Pati, B. Modified ride-NN optimizer for the IoT based plant disease detection. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 691–703. [Google Scholar] [CrossRef]
  49. PlantVillage Dataset. 2018.
  50. Kitpo, N.; Kugai, Y.; Inoue, M.; Yokemura, T.; Satomura, S. Internet of Things for Greenhouse Monitoring System Using Deep Learning and Bot Notification Services. In Proceedings of the 2019 IEEE International Conference on Consumer Electronics, ICCE 2019, Las Vegas, NV, USA, 11–13 January 2019; pp. 1–4. [Google Scholar] [CrossRef]
  51. Saranya, T.; Deisy, C.; Sridevi, S.; Anbananthen, K.S.M. A comparative study of deep learning and Internet of Things for precision agriculture. Eng. Appl. Artif. Intell. 2023, 122, 106034. [Google Scholar] [CrossRef]
  52. Ngugi, L.C.; Abelwahab, M.; Abo-Zahhad, M. Recent advances in image processing techniques for automated leaf pest and disease recognition—A review. Inf. Process. Agric. 2020. [Google Scholar] [CrossRef]
  53. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  54. Zhang, D.; Meng, D.; Han, J. Co-Saliency Detection via a Self-Paced Multiple-Instance Learning Framework. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 865–878. [Google Scholar] [CrossRef]
  55. Nagasubramanian, K.; Jones, S.; Singh, A.K.; Sarkar, S.; Singh, A.; Ganapathysubramanian, B. Plant disease identification using explainable 3D deep learning on hyperspectral images. Plant Methods 2019, 15, 1–10. [Google Scholar] [CrossRef]
  56. Wang, Q.; Qi, F. Tomato diseases recognition based on faster RCNN. In Proceedings of the 10th International Conference on Information Technology in Medicine and Education, ITME 2019, Qingdao, China, 23–25 August 2019; pp. 772–776. [Google Scholar] [CrossRef]
  57. Sharma, P.; Berwal, Y.P.S.; Ghai, W. Performance analysis of deep learning CNN models for disease detection in plants using image segmentation. Inf. Process. Agric. 2019. [Google Scholar] [CrossRef]
  58. Cao, X.; Tao, Z.; Zhang, B.; Fu, H.; Feng, W. Self-adaptively weighted co-saliency detection via rank constraint. IEEE Trans. Image Process. 2014, 23, 4175–4186. [Google Scholar] [CrossRef]
  59. Mikołajczyk, A.; Grochowski, M. Data augmentation for improving deep learning in image classification problem. In Proceedings of the 2018 International Interdisciplinary PhD Workshop, IIPhDW 2018, Swinoujscie, Poland, 9–12 May 2018; pp. 117–122. [Google Scholar] [CrossRef]
  60. Kaya, A.; Seydi, A.; Catal, C.; Yalin, H.; Temucin, H. Analysis of transfer learning for deep neural network based plant classification models. Comput. Electron. Agric. 2019, 158, 20–29. [Google Scholar] [CrossRef]
  61. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
  62. Coulibaly, S.; Kamsu-Foguem, B.; Kamissoko, D.; Traore, D. Deep neural networks with transfer learning in millet crop images. Comput. Ind. 2019, 108, 115–120. [Google Scholar] [CrossRef]
  63. Wu, S.G.; Bao, F.S.; Xu, E.Y.; Wang, Y.X.; Chang, Y.F.; Xiang, Q.L. A leaf recognition algorithm for plant classification using probabilistic neural network. In Proceedings of the ISSPIT 2007—2007 IEEE International Symposium on Signal Processing and Information Technology, Giza, Egypt, 15–18 December 2007; pp. 11–16. [Google Scholar] [CrossRef]
  64. Söderkvist, O. Computer Vision Classification of Leaves from Swedish Trees. Master’s Thesis, Linköping University, Linköping, Sweden, 2001. [Google Scholar]
  65. Silva, P.F.; Marcal, A.R.; Silva, R.M.D. Evaluation of features for leaf discrimination. Lect. Notes Comput. Sci. 2013, 7950 LNCS, 197–204. [Google Scholar] [CrossRef]
  66. Fu, H.; Cao, X.; Tu, Z. Cluster-Based Co-Saliency Detection. IEEE Trans. Image Process. 2013, 22, 3766–3778. [Google Scholar] [CrossRef]
  67. Too, E.C.; Yujian, L.; Njuki, S.; Yingchun, L. A comparative study of fine-tuning deep learning models for plant disease identification. Comput. Electron. Agric. 2019, 161, 272–279. [Google Scholar] [CrossRef]
  68. Li, Y.; Fu, K.; Liu, Z.; Yang, J. Efficient Saliency-Model-Guided Visual Co-Saliency Detection. IEEE Signal Process. Lett. 2015, 22, 588–592. [Google Scholar]
  69. Zhang, X.; Qiao, Y.; Meng, F.; Fan, C.; Zhang, M. Identification of maize leaf diseases using improved deep convolutional neural networks. IEEE Access 2018, 6, 30370–30377. [Google Scholar] [CrossRef]
  70. Liu, B.; Zhang, Y.; He, D.J.; Li, Y. Identification of apple leaf diseases based on deep convolutional neural networks. Symmetry 2018, 10, 11. [Google Scholar] [CrossRef]
  71. Jiang, P.; Chen, Y.; Liu, B.; He, D.; Liang, C. Real-Time Detection of Apple Leaf Diseases Using Deep Learning Approach Based on Improved Convolutional Neural Networks. IEEE Access 2019, 7, 59069–59080. [Google Scholar] [CrossRef]
  72. KC, K.; Yin, Z.; Wu, M.; Wu, Z. Depthwise separable convolution architectures for plant disease classification. Comput. Electron. Agric. 2019, 165, 104948. [Google Scholar] [CrossRef]
  73. Aghamaleki, J.A.; Baharlou, S.M. Transfer learning approach for classification and noise reduction on noisy web data. Expert Syst. Appl. 2018, 105, 221–232. [Google Scholar] [CrossRef]
  74. Stanford Vision Lab; Princeton University. ImageNet Dataset. 2017.
  75. Shah, J.P.; Prajapati, H.B.; Dabhi, V.K. A survey on detection and classification of rice plant diseases. In Proceedings of the 2016 IEEE International Conference on Current Trends in Advanced Computing, ICCTAC 2016, Bangalore, India, 10–11 March 2016. [Google Scholar] [CrossRef]
  76. Thorat, A.; Kumari, S.; Valakunde, N.D. An IoT based smart solution for leaf disease detection. In Proceedings of the 2017 International Conference on Big Data, IoT and Data Science, BID 2017, Pune, India, 20–22 December 2017; pp. 193–198. [Google Scholar] [CrossRef]
  77. Kundu, N.; Rani, G.; Dhaka, V.S.; Gupta, K.; Nayaka, S.C.; Vocaturo, E.; Zumpano, E. Disease detection, severity prediction, and crop loss estimation in MaizeCrop using deep learning. Artif. Intell. Agric. 2022, 6, 276–291. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
