AgriEngineering
  • Article
  • Open Access

18 March 2024

A Performance Comparison of CNN Models for Bean Phenology Classification Using Transfer Learning Techniques

1 Instituto Politécnico Nacional, Unidad Profesional Interdisciplinaria de Ingeniería Campus Zacatecas (UPIIZ), Zacatecas 98160, Mexico
2 Laboratorio de Inteligencia Artificial Avanzada (LIAA), Universidad Autónoma de Zacatecas, Zacatecas 98000, Mexico
3 Posgrado en Ingeniería y Tecnología Aplicada, Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Zacatecas 98000, Mexico
4 Instituto Nacional de Investigaciones Forestales, Agrícolas y Pecuarias, Campo Experimental Zacatecas (INIFAP), Zacatecas 98500, Mexico
This article belongs to the Special Issue Application of Artificial Neural Network in Agriculture

Abstract

The early and precise identification of the different phenological stages of the bean (Phaseolus vulgaris L.) makes it possible to determine critical and timely moments for carrying out agricultural activities that contribute significantly to the yield and quality of the harvest, as well as the actions needed to prevent and control damage caused by pests and diseases. In standard practice, phenological identification is performed visually by the farmer, which can lead to important changes in the plant's phenological development being overlooked and, consequently, to the appearance of pests and diseases. In recent years, deep learning (DL) methods have been used to analyze crop behavior and minimize risk in agricultural decision making. One of the most widely used DL methods in image processing is the convolutional neural network (CNN), due to its high capacity for learning relevant features and recognizing objects in images. In this article, a transfer learning approach and a data augmentation method were applied. A station equipped with RGB cameras was used to capture images throughout the complete phenological cycle of the bean, and the images gathered were used to build a data set for evaluating the performance of the four proposed network models: AlexNet, VGG19, SqueezeNet, and GoogleNet. The metrics used were accuracy, precision, sensitivity, specificity, and F1-Score. The best-performing architecture in validation was GoogleNet, which obtained 96.71% accuracy, 96.81% precision, 95.77% sensitivity, 98.73% specificity, and 96.25% F1-Score.

1. Introduction

According to the Food and Agriculture Organization of the United Nations (FAO), plant pests are among the main causes of the loss of up to 40 percent of food crops worldwide, with losses exceeding USD 220 billion each year [1]. In Mexico, bean production contributes MXN 5927 million to annual income. However, in 2021, registered losses exceeded MXN 222 million, mainly due to diseases caused by viruses transmitted through seeds, aphids, whiteflies, and other similar insects [2]. Several factors interfere with food security, such as climate change [3,4,5], the lack of pollinators [6,7], pests and plant diseases [8], the effects of the COVID-19 pandemic, and the ongoing war between Russia and Ukraine [9], among others.
Insect pests, diseases, and other harmful organisms significantly affect the quality and yield of crops. These organisms feed on plants and transmit diseases that can severely disrupt plant growth and development, causing a major impact on food security, the economy, and the environment by decreasing the availability of food, increasing production costs, and hampering the growth of rural areas and developing countries [10].
It is important to mention the strategies used around the world to mitigate the effects of pests and diseases, such as the selection of resistant varieties, crop rotation, the use of the natural enemies of pests, and the rational use of chemical products, among others. Further efforts are needed to implement innovative mechanisms and strategies that reduce food crop losses and contribute sustainably to food security [9,11,12].
In recent years, the use of artificial intelligence (AI) in applications has increased exponentially. Proof of this is the growing number of works on image recognition, especially in agriculture, where various approaches using deep learning (DL) methods to classify the phenology of different food crops around the world have been presented. This makes it possible to record the critical moments in the life cycle of the plant, to schedule treatments, to apply pesticides or fungicides effectively and in a timely manner, and to prevent and control pests and diseases; it offers great advantages for precision agriculture in a nonharmful manner and helps minimize damage to crops [13,14,15,16,17].
DL methods are used to identify the different phenological stages of crops, and there is a diversity of approaches to classification tasks related to agricultural decision making, which mainly influence the estimation of agricultural production. In this regard, the present work proposes a comparative study of the performance of four convolutional neural network (CNN) models, AlexNet, VGG19, SqueezeNet, and GoogleNet, in classifying the phenological stages of bean crops; the performance of each model is compared through the following metrics: accuracy, precision, sensitivity, specificity, and F1-Score. The results are used to choose the architecture that best models the classification problem in bean phenology.
The goal of analyzing the different CNN architectures is to identify the best-performing one and, in the future, to embed networks in compact systems so that farmers can identify the phenological stages of plants, allowing them to take preventive measures.
The present work is structured as follows: Section 2 describes the most relevant works on transfer learning, related concepts, and the CNN models used in this work. Section 3 describes the methodology of the investigation. Section 4 contains the obtained results and their discussion. Section 5 presents the conclusions, and finally, future work is described.

3. Materials and Methods

The methodology used in this study consists of three phases, as shown in Figure 3. The first phase describes the data acquisition procedure and the construction and general features of the obtained images. The second phase describes the transfer learning applied to the CNN architectures used in this study and the configuration of hyperparameters, such as the learning rate, the mini-batch size per iteration, the number of epochs, and the optimizer. The third phase describes the evaluation of the proposed models, measuring their performance with different metrics.
Figure 3. Diagram of proposed methodology.

3.1. Acquisition of Data

The two selected bean parcels are located in the municipality of Calera de Víctor Rosales in the state of Zacatecas, Mexico (22°54′14.6″ N 102°39′32.5″ W). The bean variety used was Pinto Saltillo, and the data were collected between 12 May and 15 August 2023. The camera model used was the HC-801Pro, with 4G technology, a 120° optical field of view, IP65 protection, and a 30-megapixel resolution for acquiring high-quality images.
Two cameras were installed to capture the images, as shown in Figure 4. To determine the number of images for the training and test data sets, the criterion of Taylor and Browning [22] was followed: the average time from emergence to bean harvest is approximately 65 to 85 days, which is why an average of eight to ten images was captured per day from the emergence of the plant onward.
Figure 4. Installation of the GSM camera station in the open field for the capture of images: (a) camera station for the capture of images; (b) schematic diagram for the acquisition of images.
The shooting method captured two samples per sequence at 8:00, 10:00, 12:00, 16:00, and 18:00 h, obtaining a total of 814 images. This allowed the experimental data to cover the bean growing cycle in both the vegetative and reproductive phases, from germination (V0) through emergence (V1), primary leaves (V2), the first trifoliate leaf (V3), the third trifoliate leaf (V4), prefloration (R5), floration (R6), pod formation (R7), pod filling (R8), and maturation (R9).
The bean's phenology is generally classified into ten classes, divided into two main categories: the vegetative phase and the reproductive phase. However, for this investigation, only four classes were selected, those with the largest number of examples per class, since they tend to be the most representative, according to Etemadi et al. [39].
An example of each class in the obtained data set can be observed in Figure 5: the vegetative phase in the stage of primary leaves and first and third trifoliate leaves (V2–V4), the reproductive phase in the stages of prefloration and floration (R5–R6), the reproductive phase in the stages of pod formation and pod filling (R7–R8), and the reproductive phase in the stage of maturation (R9).
Figure 5. Descriptive stages of the phenology of the bean: (a) vegetative phase in primary leaves, first and third trifoliate leaves; (b) reproductive phase in prefloration and floration; (c) reproductive stage in the formation and filling of pods; (d) reproductive phase in maturation.
The training and test images used per class can be observed in Figure 6. Most images have a resolution of 5120 × 3840 pixels; however, the images were resized to match the input specifications of each proposed model [34].
Figure 6. Images for training and tests per class.
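The article does not publish its preprocessing code; the following is a minimal sketch, in Python with Pillow (an assumed toolchain), of how the raw captures could be resized to the input size expected by each pretrained architecture. The sizes in the dictionary are the commonly used defaults for these networks (227 × 227 for AlexNet and SqueezeNet, 224 × 224 for VGG19 and GoogleNet) and are an assumption, not values taken from the article.

```python
# Minimal sketch (not the authors' code): resize the raw 5120x3840 captures to
# the canonical input size of each pretrained CNN. Sizes are assumed defaults.
from pathlib import Path
from PIL import Image

INPUT_SIZES = {
    "alexnet": (227, 227),
    "squeezenet": (227, 227),
    "vgg19": (224, 224),
    "googlenet": (224, 224),
}

def resize_for_model(src_dir: str, dst_dir: str, model: str) -> None:
    """Resize every JPEG in src_dir to the input size of the given model."""
    size = INPUT_SIZES[model]
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for img_path in Path(src_dir).glob("*.jpg"):
        with Image.open(img_path) as img:
            img.resize(size).save(out / img_path.name)

# Example (hypothetical folder names): resize_for_model("dataset/R5_R6", "resized/googlenet/R5_R6", "googlenet")
```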

3.2. Data Augmentation

Data augmentation helps prevent the network from overfitting and memorizing the exact details of the images during training. This is a common problem when a CNN model is exposed to small data sets, where the learned patterns do not generalize to new data [40,41].
A current trend in deep learning is to enlarge the initial data set through data augmentation techniques, which can improve the accuracy of deep learning algorithms [42]. In this work, a series of random transformations was applied to make the most of the limited number of example images and increase the accuracy of the proposed CNN models. The data augmentation strategies used were rotation, translation, reflection, and scaling; examples are shown in Figure 7.
Figure 7. Data augmentation in an image of the phenology of the bean: (a) original image without data augmentation; (b) image after rotation; (c) image after translation; (d) image after reflection; (e) image after scaling.
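As an illustration only, the snippet below sketches the four augmentation strategies named above as random transforms, using torchvision (an assumed framework; the article does not state which library was used). The parameter ranges (rotation angle, translation fraction, scale range) are hypothetical values chosen for the example.

```python
# Minimal sketch (assumed implementation, not the authors' pipeline): the four
# augmentation strategies -- rotation, translation, reflection, and scaling --
# expressed as random torchvision transforms applied to the training images.
import torchvision.transforms as T

train_augment = T.Compose([
    T.RandomRotation(degrees=15),                     # random rotation
    T.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # random translation
    T.RandomHorizontalFlip(p=0.5),                    # reflection
    T.RandomAffine(degrees=0, scale=(0.9, 1.1)),      # random scaling
    T.Resize((224, 224)),                             # model input size
    T.ToTensor(),
])
# Augmentation is applied only to the training set; test images are only
# resized and converted to tensors.
```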

3.3. Training of the Models

For the training and test data sets, the images were divided randomly into a 70% training partition and a 30% test partition. Table 2 shows the configuration of the experimental equipment used in this investigation. The four previously selected models were pre-trained on the ImageNet database, which contains more than 15 million images [26].
Table 2. Configuration of experimental equipment.
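As a rough sketch of this step, the code below shows a random 70/30 split and the loading of the four ImageNet-pretrained backbones with their final classifier replaced by a four-class output layer. It uses PyTorch/torchvision as an assumed framework; the folder layout, weight tags, and helper names are illustrative and not taken from the article.

```python
# Minimal sketch (assumed framework; the paper does not publish code): 70/30
# random split of the labeled images and transfer learning from ImageNet
# weights, replacing each network's output layer with a 4-class classifier.
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import random_split

NUM_CLASSES = 4  # V2-V4, R5-R6, R7-R8, R9

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])
dataset = ImageFolder("dataset", transform=preprocess)   # one sub-folder per class
n_train = int(0.7 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])

def build_model(name: str) -> nn.Module:
    """Load an ImageNet-pretrained backbone and swap its classification head."""
    if name == "alexnet":
        m = models.alexnet(weights="IMAGENET1K_V1")
        m.classifier[6] = nn.Linear(4096, NUM_CLASSES)
    elif name == "vgg19":
        m = models.vgg19(weights="IMAGENET1K_V1")
        m.classifier[6] = nn.Linear(4096, NUM_CLASSES)
    elif name == "squeezenet":
        m = models.squeezenet1_0(weights="IMAGENET1K_V1")
        m.classifier[1] = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)
        m.num_classes = NUM_CLASSES
    else:  # googlenet
        m = models.googlenet(weights="IMAGENET1K_V1")
        m.aux_logits = False          # train with the main head only
        m.fc = nn.Linear(1024, NUM_CLASSES)
    return m
```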
The hyperparameters selected in this study, based on the reviewed literature and the previously described hardware capacity, are listed in Table 3. The selection of hyperparameters significantly affects the performance of CNN models, which is why a good selection is crucial. The hyperparameters were standardized across models so that the performance of the proposed models could be compared [31,43,44].
Table 3. Training hyperparameters of pre-trained models.
The optimizer used was Stochastic Gradient Descent with Momentum (SGDM), which combines stochastic gradient descent with a momentum term. At each iteration, the gradient is calculated using a random mini-batch from the training set; the weights are then updated taking the previous update into account, which accelerates convergence toward a minimum.
Momentum improves the stability and speed of training by adding a fraction of the previous update to the current weight update. This helps overcome shallow local minima and maintains a consistent impulse in the direction of the gradient.
The number of epochs refers to how many complete passes of forward and backward propagation are performed over the training data to reduce the loss. The mini-batch size describes the number of examples used in each iteration of the training algorithm. The learning rate defines the step size with which the optimization algorithm searches for convergence [45].
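To make the training configuration concrete, the sketch below shows an SGDM fine-tuning loop in PyTorch (an assumed framework). The hyperparameter values in the function signature are placeholders for illustration; the values actually used are the standardized ones reported in Table 3.

```python
# Minimal sketch of fine-tuning with SGDM (illustrative hyperparameter values;
# the settings actually used are the standardized ones reported in Table 3).
import torch
from torch.utils.data import DataLoader

def train(model, train_set, epochs=10, batch_size=32, lr=1e-3, momentum=0.9):
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    # SGDM update: v <- momentum * v + grad ;  w <- w - lr * v
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for epoch in range(epochs):
        running_loss = 0.0
        for images, labels in loader:          # one mini-batch per iteration
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()                    # backward propagation
            optimizer.step()                   # weight update with momentum
            running_loss += loss.item()
        print(f"epoch {epoch + 1}: mean loss = {running_loss / len(loader):.4f}")
```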

3.4. Performance Evaluation

A wide variety of metrics is currently used to evaluate the performance of CNN models, each providing information about different aspects and characteristics of model behavior. The numbers of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) are needed to calculate the performance of the models; these cases represent the combinations of true and predicted classes in classification problems. Therefore, TP + TN + FP + FN equals the total number of samples and is summarized in a confusion matrix [46].
When training by transfer learning, biased class distributions appear naturally, producing an intrinsic imbalance. It is therefore necessary to employ metrics that evaluate the global performance of each model. In this regard, the metrics used in [16,20,22,46,47] were adopted to compare the performance of the models while taking into account the characteristics of the training and validation data used in this study.
TP (true positive) denotes a case in which the model predicts the positive class and the actual class is positive. FP (false positive) denotes a case in which the model predicts the positive class but the actual class is negative. FN (false negative) denotes a case in which the model predicts the negative class but the actual class is positive. TN (true negative) denotes a case in which the model predicts the negative class and the actual class is negative.
The confusion matrix is a tool for visualizing a model's classification performance and contains the previously defined elements. The rows of the matrix represent the true class and the columns the predicted class; the cells on the main diagonal describe correctly classified observations, while the off-diagonal cells correspond to incorrectly classified observations.
In this study, five metrics were used to evaluate the performance of the proposed models: accuracy, precision, sensitivity, specificity, and F1-Score [27,48]. Accuracy is the ratio between the number of correct predictions and the total number of predictions made, as calculated by Equation (1).
\[ \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{1} \]
Precision measures the proportion of positive predictions that are correct, in other words, the number of elements correctly classified as positive out of the total number of elements identified as positive. Its mathematical representation is described in Equation (2).
\[ \mathrm{Precision} = \frac{TP}{TP + FP} \tag{2} \]
Sensitivity, also known as recall, calculates the proportion of actual positive cases that are correctly identified as positive, as described in Equation (3).
\[ \mathrm{Sensitivity} = \frac{TP}{TP + FN} \tag{3} \]
Specificity is the counterpart of sensitivity for the negative class; it calculates the proportion of actual negative cases that are correctly identified as negative. It is calculated by Equation (4).
\[ \mathrm{Specificity} = \frac{TN}{TN + FP} \tag{4} \]
The F1-Score combines precision and sensitivity (recall); a value close to one indicates a good balance between precision and sensitivity in the classification model. Its mathematical representation is described in Equation (5).
\[ \mathrm{F1\text{-}score} = 2 \times \frac{\mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} \tag{5} \]
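As a worked illustration of Equations (1)–(5), the following sketch derives the per-class one-vs-rest counts TP, FP, FN, and TN from a multi-class confusion matrix and computes the five metrics. The confusion matrix values in the example are hypothetical, not the ones reported in Section 4.

```python
# Minimal sketch: per-class metrics from a confusion matrix whose rows are the
# true classes and whose columns are the predicted classes (hypothetical data).
import numpy as np

def per_class_metrics(cm: np.ndarray) -> dict:
    """Return accuracy, precision, sensitivity, specificity, and F1 per class."""
    total = cm.sum()
    results = {}
    for k in range(cm.shape[0]):
        tp = cm[k, k]
        fp = cm[:, k].sum() - tp
        fn = cm[k, :].sum() - tp
        tn = total - tp - fp - fn
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)                       # sensitivity, Eq. (3)
        results[k] = {
            "accuracy": (tp + tn) / total,            # Eq. (1)
            "precision": precision,                   # Eq. (2)
            "sensitivity": recall,
            "specificity": tn / (tn + fp),            # Eq. (4)
            "f1": 2 * precision * recall / (precision + recall),  # Eq. (5)
        }
    return results

# Hypothetical 4-class confusion matrix (V2-V4, R5-R6, R7-R8, R9):
cm = np.array([[50, 2, 0, 0],
               [1, 47, 3, 0],
               [0, 2, 45, 1],
               [0, 0, 4, 48]])
print(per_class_metrics(cm)[3])   # metrics for class R9
```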

4. Results and Discussion

Table 4 provides a detailed calculation for each of the different metrics of AlexNet’s architecture. A sensitivity of 100% was obtained for the prediction of classes R9 and V2–V4, which correspond to the reproductive phenological phase in the stage of maturation and the vegetative phase in the stage of primary leaves and first and third trifoliate leaves, respectively.
Table 4. Calculation of metrics for the AlexNet model.
The averages obtained for accuracy, precision, sensitivity, specificity, and F1-Score were 95.8%, 94.1%, 97.2%, 98.6%, and 95.5%, respectively, over the predicted classes during validation.
Table 5 shows the different metrics for the VGG19 model. For classes R5–R6, corresponding to the reproductive phenological phase in the stages of prefloration and floration, a precision of 100% was reached.
Table 5. Calculation of metrics for VGG19 model.
An average precision of 95% across all classes can be observed. Maximum sensitivity was also obtained for classes R9 and V2–V4, corresponding to the reproductive phase in the maturation stage and the vegetative phase of primary leaves and first and third trifoliate leaves, respectively, with an average sensitivity of 97.4% across the classes. In addition, the averages reached are 98.8% for specificity and 96% for F1-Score across all classes, the best scores compared to the other architectures.
A detailed calculation of each metric for the SqueezeNet architecture is shown in Table 6. For classes V2–V4, which correspond to the vegetative phenological phase in the stage of primary leaves and first and third trifoliate leaves, a sensitivity of 100% was obtained. On the other hand, averages in accuracy, precision, sensitivity, specificity, and F1-Score of 95.8%, 93.4%, 95.9%, 98.6%, and 94.4%, respectively, are observed for all predicted classes during validation.
Table 6. Calculation of metrics for SqueezeNet model.
The different metrics of the GoogleNet model can be observed in Table 7, where the average obtained for precision is 96.8% over all predicted classes, and maximum sensitivity is reached in the prediction of classes V2–V4, which correspond to the vegetative phenological phase in the stage of primary leaves and first and third trifoliate leaves. The averages observed for accuracy, precision, sensitivity, specificity, and F1-Score are 96.7%, 96.8%, 95.7%, 98.7%, and 96.2%, respectively, for all predicted classes during validation, coinciding in accuracy with the VGG19 model.
Table 7. Calculation of metrics for GoogleNet model.
Table 8 shows the results obtained for each metric, with the best values obtained during the evaluation of the architectures highlighted in bold. It shows that the VGG19 and GoogleNet architectures obtained the best performance, coinciding in accuracy. On the other hand, the architecture with the lowest performance is SqueezeNet, since its values are generally lower than those obtained by the other architectures. However, SqueezeNet required the least training time.
Table 8. Comparison of results for each of the architectures.
A comparison of the accuracy obtained by each of the models during the training and validation stages is shown in Figure 8. The AlexNet architecture reached the highest accuracy among the models during the training stage, while the GoogleNet model obtained the lowest training accuracy. However, in the validation stage, GoogleNet reached the highest accuracy, as did VGG19; in other words, these models showed a greater generalization capacity.
Figure 8. Accuracy of models during training and validation.
The architecture performance summary, with the measurements obtained for each metric calculated during the validation stage, can be observed in Figure 9; the accuracy obtained during the training stage is also included for each model. It is observed that the GoogleNet architecture maintained the best balance across all metrics. On the other hand, the SqueezeNet architecture obtained the lowest performance compared to the rest of the architectures, except in accuracy, where its result is the same as that of the AlexNet architecture, and in sensitivity, which is higher than that obtained with the GoogleNet architecture.
Figure 9. Summary of model performance: (a) AlexNet model; (b) VGG19 model; (c) SqueezeNet model; (d) GoogleNet model.
Based on the obtained results, the GoogleNet architecture performs better during validation than during training; this behavior could be due to the limited amount of training data, the selection of hyperparameters, and possibly some degree of overfitting. However, the difference between training and validation accuracy is 1.4%, compared to differences of 1.7%, 3.3%, and 1.7% for the AlexNet, VGG19, and SqueezeNet models, respectively. It should be considered that increasing the training data would tend to decrease the validation performance of each architecture.
According to the behavior during validation, the AlexNet and SqueezeNet architectures presented a less balanced tendency across the metrics, obtaining lower results for precision and F1-Score. On the other hand, the VGG19 architecture registered the same level of performance as GoogleNet but with lower precision and F1-Score, which is why the GoogleNet architecture is considered to have the best global performance.
Figure 10 shows the confusion matrices of the four proposed CNN models, providing a detailed breakdown of the number of instances correctly classified by each proposed architecture. Compared to the other architectures, AlexNet presented problems in correctly classifying class R9, which corresponds to the reproductive phenological phase in the maturation stage, classifying only 85.2% of its instances correctly.
Figure 10. Confusion matrix of CNN models: (a) AlexNet architecture; (b) VGG19 architecture; (c) SqueezeNet architecture; (d) GoogleNet architecture.
The confusion matrix of the VGG19 architecture shows its high capacity to classify instances correctly; the diagonal shows the correctly classified instances, with class R9 presenting the most difficulty. The SqueezeNet architecture, like the previous architectures, shows more difficulty in correctly classifying class R9, whereas for the GoogleNet architecture this class presents minimal difficulty.
Table 9 describes a comparison summary of the performance results obtained by other authors concerning the techniques and metrics used for this work. It breaks down the values obtained for accuracy, precision, sensitivity, and F1-Score for each proposed architecture.
Table 9. Comparison of similar work recently published.

5. Conclusions

The proposed methodology shows that the proposed CNN models correctly classify more than 90% of the samples, even when working with an unbalanced and relatively small data set. In addition, each analyzed architecture has different characteristics, such as the number of layers and filters used. However, it is crucial to highlight that a suitable selection of metrics is needed to discriminate between the architectures.
Evaluating different CNN topologies is important for future work, since the architectures can present bias from having been pre-trained on a large number of images, many of which do not belong to the final classification task. In this regard, evaluating the performance obtained by transfer learning on new data lays the foundation for further work, such as identifying nutrient deficiencies or pests in this species. The joint evaluation of the metrics accuracy, precision, specificity, sensitivity, and F1-Score provides a multifaceted analysis, which identifies the GoogleNet architecture as the one with the highest performance. Even though the global performance of each model is acceptable, data augmentation can modify the performance of all the architectures.
One of the main limitations in the implementation of CNN models is the lack of data for certain classes. The main contribution of this study is therefore to characterize the performance that can be obtained with a reduced data set, where the application of data augmentation techniques, besides reducing overfitting during training, helps improve the generalization capacity of the network in comparison with studies in which augmentation techniques were not applied in the same way, as reflected in the performance results of the models in Table 9.
On the other hand, through a methodological analysis, the performance of four CNN models was compared and evaluated using five metrics. The GoogleNet architecture obtained the best performance, showing the best results in most metrics: 96.71% accuracy, 96.81% precision, 95.77% sensitivity, 98.73% specificity, and 96.25% F1-Score.

6. Future Work

This study opens up alternatives that could be pursued with the same transfer learning approach for the prevention and control of pests and diseases in bean crops through timely intervention and automated image classification.

Author Contributions

Conceptualization, T.I.-P. and U.A.H.-G.; methodology, R.J.-M., M.d.R.M.-B. and J.I.C.-F.; software, U.A.H.-G. and H.A.G.-O.; validation, R.R.-M. and T.I.-P.; formal analysis, H.C.C.-A., U.A.H.-G. and R.R.-M.; investigation, C.N., H.C.C.-A. and R.J.-M.; resources, U.A.H.-G. and H.A.G.-O.; data curation, C.N., M.d.R.M.-B. and J.I.C.-F.; writing—original draft preparation, T.I.-P. and J.I.C.-F.; writing—review and editing, T.I.-P. and R.J.-M.; visualization, R.R.-M. and H.A.G.-O.; supervision, C.N. and M.d.R.M.-B.; project administration, T.I.-P. and F.D.M.-D.; funding acquisition, H.C.C.-A., F.D.M.-D. and T.I.-P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Instituto Politécnico Nacional (IPN) under grant number SIP/20230388.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Acknowledgments

We want to deeply thank Instituto Nacional de Investigaciones Forestales, Agrícolas y Pecuarias Campo Experimental Zacatecas (INIFAP) for providing us with the experimental field for this research and the Consejo Zacatecano de Ciencia, Tecnología e Innovación (COZCyT). We sincerely thank the people who provided support and advice for this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. FAO. Food and Agriculture Organization of the United Nations International Year of Plant Health. Available online: https://www.fao.org/plant-health-2020/about/en/ (accessed on 11 December 2023).
  2. Velia, A.; Garay, A.; Alberto, J.; Gallegos, A.; Muro, L.R. El Cultivo Del Frijol Presente y Futuro Para México; INIFAP: Celaya, Gto., México, 2021; Volume 1, ISBN 978-607-37-1318-4. [Google Scholar]
  3. Gregory, P.J.; Ingram, J.S.I.; Brklacich, M. Climate Change and Food Security. Philos. Trans. R. Soc. B Biol. Sci. 2005, 360, 2139–2148. [Google Scholar] [CrossRef]
  4. Chakraborty, S.; Newton, A.C. Climate Change, Plant Diseases and Food Security: An Overview. Plant Pathol. 2011, 60, 2–14. [Google Scholar] [CrossRef]
  5. Mutengwa, C.S.; Mnkeni, P.; Kondwakwenda, A. Climate-Smart Agriculture and Food Security in Southern Africa: A Review of the Vulnerability of Smallholder Agriculture and Food Security to Climate Change. Sustainability 2023, 15, 2882. [Google Scholar] [CrossRef]
  6. Bailes, E.J.; Ollerton, J.; Pattrick, J.G.; Glover, B.J. How Can an Understanding of Plant-Pollinator Interactions Contribute to Global Food Security? Curr. Opin. Plant Biol. 2015, 26, 72–79. [Google Scholar] [CrossRef]
  7. Saha, H.; Chatterjee, S.; Paul, A. Role of Pollinators in Plant Reproduction and Food Security: A Concise Review. Res. J. Agric. Sci. 2023, 14, 72–79. [Google Scholar]
  8. Trebicki, P.; Finlay, K. Pests and Diseases under Climate Change; Its Threat to Food Security; John Wiley & Sons Ltd.: Chichester, UK, 2019; Volume 1, ISBN 9781119180654. [Google Scholar]
  9. Alam, F.B.; Tushar, S.R.; Zaman, S.M.; Gonzalez, E.D.S.; Bari, A.M.; Karmaker, C.L. Analysis of the Drivers of Agriculture 4.0 Implementation in the Emerging Economies: Implications towards Sustainability and Food Security. Green Technol. Sustain. 2023, 1, 100021. [Google Scholar] [CrossRef]
  10. Mcbeath, J.H.; Mcbeath, J. Environmental Change and Food Security in China; Beniston, M., Ed.; Springer: Fairbanks, AK, USA, 2010; Volume 35, ISBN 978-1-4020-9179-7. [Google Scholar]
  11. Calicioglu, O.; Flammini, A.; Bracco, S.; Bellù, L.; Sims, R. The Future Challenges of Food and Agriculture: An Integrated Analysis of Trends and Solutions. Sustainability 2019, 11, 222. [Google Scholar] [CrossRef]
  12. He, J.; Chen, K.; Pan, X.; Zhai, J.; Lin, X. Advanced Biosensing Technologies for Monitoring of Agriculture Pests and Diseases: A Review. J. Semicond. 2023, 44, 23104. [Google Scholar] [CrossRef]
  13. Mallick, M.D.T.; Biswas, S.; Das, A.K.; Saha, H.N.; Chakrabarti, A.; Deb, N. Deep Learning Based Automated Disease Detection and Pest Classification in Indian Mung Bean. Multimed. Tools Appl. 2023, 82, 12017–12041. [Google Scholar] [CrossRef]
  14. Hadipour-Rokni, R.; Asli-Ardeh, E.A.; Jahanbakhshi, A.; Paeen-Afrakoti, I.E.; Sabzi, S. Intelligent Detection of Citrus Fruit Pests Using Machine Vision System and Convolutional Neural Network through Transfer Learning Technique. Comput. Biol. Med. 2023, 155, 106611. [Google Scholar] [CrossRef]
  15. Datt, R.M.; Kukreja, V. Phenological Stage Recognition Model for Apple Crops Using Transfer Learning. In Proceedings of the 2022 2nd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), Greater Noida, India, 28–29 April 2022; pp. 1537–1542. [Google Scholar]
  16. Yalcin, H. Phenology Recognition Using Deep Learning. In Proceedings of the 2018 Electric Electronics, Computer Science, Biomedical Engineerings’ Meeting (EBBT), Istanbul, Turkey, 18–19 April 2018; pp. 1–5. [Google Scholar]
  17. Yang, Q.; Shi, L.; Han, J.; Yu, J.; Huang, K. A near Real-Time Deep Learning Approach for Detecting Rice Phenology Based on UAV Images. Agric. For. Meteorol. 2020, 287, 107938. [Google Scholar] [CrossRef]
  18. Ge, S.; Zhang, J.; Pan, Y.; Yang, Z.; Zhu, S. Transferable Deep Learning Model Based on the Phenological Matching Principle for Mapping Crop Extent. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102451. [Google Scholar] [CrossRef]
  19. Wang, A.X.; Tran, C.; Desai, N.; Lobell, D.; Ermon, S. Deep Transfer Learning for Crop Yield Prediction with Remote Sensing Data. In Proceedings of the 1st ACM SIGCAS Conference on Computing and Sustainable Societies, COMPASS, Menlo Park and San Jose, CA, USA, 20–22 June 2018. [Google Scholar]
  20. Reeb, R.A.; Aziz, N.; Lapp, S.M.; Kitzes, J.; Heberling, J.M.; Kuebbing, S.E. Using Convolutional Neural Networks to Efficiently Extract Immense Phenological Data From Community Science Images. Front. Plant Sci. 2022, 12, 787407. [Google Scholar] [CrossRef]
  21. Zhao, Y.; Han, S.; Meng, Y.; Feng, H.; Li, Z.; Chen, J.; Song, X.; Zhu, Y.; Yang, G. Transfer-Learning-Based Approach for Yield Prediction of Winter Wheat from Planet Data and SAFY Model. Remote Sens. 2022, 14, 5474. [Google Scholar] [CrossRef]
  22. Taylor, S.D.; Browning, D.M. Classification of Daily Crop Phenology in PhenoCams Using Deep Learning and Hidden Markov Models. Remote Sens. 2022, 14, 286. [Google Scholar] [CrossRef]
  23. Bailer, C.; Habtegebrial, T.; Varanasi, K.; Stricker, D. Fast Feature Extraction with CNNs with Pooling Layers. arXiv 2018, arXiv:1805.03096. [Google Scholar] [CrossRef]
  24. Paymode, A.S.; Malode, V.B. Transfer Learning for Multi-Crop Leaf Disease Image Classification Using Convolutional Neural Network VGG. Artif. Intell. Agric. 2022, 6, 23–33. [Google Scholar] [CrossRef]
  25. Bosilj, P.; Aptoula, E.; Duckett, T.; Cielniak, G. Transfer Learning between Crop Types for Semantic Segmentation of Crops versus Weeds in Precision Agriculture. J. Field Robot. 2020, 37, 7–19. [Google Scholar] [CrossRef]
  26. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  27. Rahman, T.; Chowdhury, M.E.H.; Khandakar, A.; Islam, K.R.; Islam, K.F.; Mahbub, Z.B.; Kadir, M.A.; Kashem, S. Transfer Learning with Deep Convolutional Neural Network (CNN) for Pneumonia Detection Using Chest X-ray. Appl. Sci. 2020, 10, 3233. [Google Scholar] [CrossRef]
  28. Barbhuiya, A.A.; Karsh, R.K.; Jain, R. CNN Based Feature Extraction and Classification for Sign Language. Multimed Tools Appl. 2021, 80, 3051–3069. [Google Scholar] [CrossRef]
  29. Salehi, A.W.; Khan, S.; Gupta, G.; Alabduallah, B.I.; Almjally, A.; Alsolai, H.; Siddiqui, T.; Mellit, A. A Study of CNN and Transfer Learning in Medical Imaging: Advantages, Challenges, Future Scope. Sustainability 2023, 15, 5930. [Google Scholar] [CrossRef]
  30. Kim, H.E.; Cosa-Linan, A.; Santhanam, N.; Jannesari, M.; Maros, M.E.; Ganslandt, T. Comparison of Three Dimensional Reconstruction and Conventional Computer Tomography Angiography in Patients Undergoing Zero-Ischemia Laparoscopic Partial Nephrectomy. BMC Med. Imaging 2022, 22, 47. [Google Scholar] [CrossRef]
  31. Narvekar, C.; Rao, M. Flower Classification Using CNN and Transfer Learning in CNN-Agriculture Perspective. In Proceedings of the 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS), Thoothukudi, India, 3–5 December 2020; pp. 660–664. [Google Scholar]
  32. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  33. Jogin, M.; Mohana, M.; Madhulika, M.; Divya, G.; Meghana, R.; Apoorva, S. Feature Extraction Using Convolution Neural Networks (CNN) and Deep Learning. In Proceedings of the 2018 3rd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), Bangalore, India, 18–19 May 2018; pp. 2319–2323. [Google Scholar]
  34. Cuevas-Rodriguez, E.O.; Galvan-Tejada, C.E.; Maeda-Gutiérrez, V.; Moreno-Chávez, G.; Galván-Tejada, J.I.; Gamboa-Rosales, H.; Luna-García, H.; Moreno-Baez, A.; Celaya-Padilla, J.M. Comparative Study of Convolutional Neural Network Architectures for Gastrointestinal Lesions Classification. PeerJ 2023, 11, e14806. [Google Scholar] [CrossRef]
  35. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  36. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
  37. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  38. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-Level Accuracy with 50× Fewer Parameters and <0.5 MB Model Size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
  39. Etemadi, F.; Hashemi, M.; Zandvakili, O.R.; Mangan, F.X. Phenology, Yield and Growth Pattern of Faba Bean Varieties. Int. J. Plant Prod. 2018, 12, 243–250. [Google Scholar] [CrossRef]
  40. Kolar, Z.; Chen, H.; Luo, X. Transfer Learning and Deep Convolutional Neural Networks for Safety Guardrail Detection in 2D Images. Autom. Constr. 2018, 89, 58–70. [Google Scholar] [CrossRef]
  41. Lopez, A.; Giro-I-Nieto, X.; Burdick, J.; Marques, O. Skin Lesion Classification from Dermoscopic Images Using Deep Learning Techniques. In Proceedings of the 2017 13th IASTED International Conference on Biomedical Engineering (BioMed), Innsbruck, Austria, 20–21 February 2017; pp. 49–54. [Google Scholar]
  42. Kurek, J.; Antoniuk, I.; Górski, J.; Jegorowa, A.; Świderski, B.; Kruk, M.; Wieczorek, G.; Pach, J.; Orłowski, A.; Aleksiejuk-Gawron, J. Data Augmentation Techniques for Transfer Learning Improvement in Drill Wear Classification Using Convolutional Neural Network. Mach. Graph. Vis. 2019, 28, 3–12. [Google Scholar] [CrossRef]
  43. Hassan, S.M.; Maji, A.K.; Jasiński, M.; Leonowicz, Z.; Jasińska, E. Identification of Plant-Leaf Diseases Using Cnn and Transfer-Learning Approach. Electronics 2021, 10, 1388. [Google Scholar] [CrossRef]
  44. Thenmozhi, K.; Reddy, U.S. Crop Pest Classification Based on Deep Convolutional Neural Network and Transfer Learning. Comput. Electron. Agric. 2019, 164, 104906. [Google Scholar] [CrossRef]
  45. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of Deep Learning: Concepts, CNN Architectures, Challenges, Applications, Future Directions. J. Big Data 2021, 8, 53. [Google Scholar] [CrossRef] [PubMed]
  46. Qin, J.; Hu, T.; Yuan, J.; Liu, Q.; Wang, W.; Liu, J.; Guo, L.; Song, G. Deep-Learning-Based Rice Phenological Stage Recognition. Remote Sens. 2023, 15, 2891. [Google Scholar] [CrossRef]
  47. Han, J.; Shi, L.; Yang, Q.; Huang, K.; Zha, Y.; Yu, J. Real-Time Detection of Rice Phenology through Convolutional Neural Network Using Handheld Camera Images. Precis. Agric. 2021, 22, 154–178. [Google Scholar] [CrossRef]
  48. Johnson, J.M.; Khoshgoftaar, T.M. Survey on Deep Learning with Class Imbalance. J. Big Data 2019, 6, 27. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
