Application of Vision Technology and Artificial Intelligence in Smart Farming—2nd Edition

A special issue of Agriculture (ISSN 2077-0472). This special issue belongs to the section "Digital Agriculture".

Deadline for manuscript submissions: 5 June 2024 | Viewed by 2897

Special Issue Editors

Guest Editor
College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
Interests: microclimate analytics of poultry houses; intelligent agricultural equipment; smart farming; non-destructive detection of meat quality; agricultural robot

Guest Editor
School of Applied Meteorology, Nanjing University of Information Science and Technology, Nanjing 210044, China
Interests: climate-smart agriculture; AI meteorology

Guest Editor
Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
Interests: prediction model; computer simulation

Guest Editor
College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
Interests: intelligent agricultural equipment; three-dimensional reconstruction

Guest Editor
College of Engineering, Nanjing Agricultural University, Nanjing 210031, China
Interests: intelligent agricultural equipment; disease detection

Special Issue Information

Dear Colleagues,

Computer vision (CV) and artificial intelligence (AI) have been gaining traction in agriculture. From reducing production costs through intelligent automation to boosting productivity, CV and AI have great potential to enhance the overall functioning of smart farming. Monitoring and analyzing the specific behaviors of livestock and poultry in large-scale farms with CV and AI improves our knowledge of how intensively raised animals respond to modern management techniques, allowing for improved health, welfare, and performance. In crop production, CV approaches are required to extract plant phenotypes from images and to automate the detection of plants and plant organs. AI approaches give growers effective tools against pests, and the application of CV and AI helps growers harvest crops at the ideal stage of ripeness. Smart farming does, however, require considerable processing power.

Building on the first volume of this Special Issue on the application of CV and AI in smart farming, we have decided to continue with a second volume. Topics of interest include, but are not limited to, the following:

  • the design and optimization of agricultural sensors;
  • behavior recognition of livestock and poultry based on vision technology and deep learning;
  • automation technology in agricultural equipment based on vision technology;
  • the design and optimization of robots for livestock and poultry breeding based on vision technology and artificial intelligence;
  • the non-destructive detection of meat quality;
  • agricultural big data analytics based on sensor data and deep learning.

Both original research articles and reviews are welcome.

Dr. Xiuguo Zou
Dr. Xiaochen Zhu
Dr. Wentian Zhang
Dr. Yan Qian
Dr. Yuhua Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agriculture is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • agricultural sensors
  • behavior recognition of livestock and poultry
  • agricultural automation equipment
  • agricultural intelligent robot
  • intelligent robotic arm
  • non-destructive detection of meat quality
  • agricultural big data analytics

Published Papers (3 papers)


Research

19 pages, 22454 KiB  
Article
Walnut Recognition Method for UAV Remote Sensing Images
by Mingjie Wu, Lijun Yun, Chen Xue, Zaiqing Chen and Yuelong Xia
Agriculture 2024, 14(4), 646; https://doi.org/10.3390/agriculture14040646 - 22 Apr 2024
Viewed by 502
Abstract
During walnut identification and counting with UAVs in hilly areas, complex lighting conditions on the walnut surface degrade the detection performance of deep learning models. To address this issue, we propose a lightweight walnut small-object recognition method called w-YOLO. We reconstructed the model's feature extraction and feature fusion networks to reduce its size and complexity. Additionally, to improve recognition accuracy under complex lighting conditions, we adopted an attention-mechanism detection layer and redesigned a set of detection heads better suited to small walnut objects. A series of experiments showed that, when identifying walnut objects in UAV remote sensing images, w-YOLO outperforms other mainstream object detection models, achieving a mean average precision (mAP0.5) of 97% and an F1-score of 92%, with 52.3% fewer parameters than the YOLOv8s model. The method effectively addresses the identification of walnut targets in Yunnan, China, under complex lighting conditions.
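As a quick reference for the metrics reported above, F1 combines precision and recall as a harmonic mean, and parameter reduction is measured relative to the baseline model. A minimal sketch in plain Python; the numeric inputs below are illustrative assumptions, not values taken from the paper:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def param_reduction(base_params: int, new_params: int) -> float:
    """Fractional reduction in parameter count relative to a baseline."""
    return (base_params - new_params) / base_params

# Hypothetical precision/recall pair yielding an F1 near 0.92:
print(round(f1_score(0.90, 0.94), 2))                    # 0.92
# Hypothetical parameter counts yielding a ~52.3% reduction:
print(round(param_reduction(11_200_000, 5_340_000), 3))  # 0.523
```

Note that F1 penalizes imbalance: a model with 99% precision but 50% recall scores far below the arithmetic mean of the two.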

16 pages, 12037 KiB  
Article
Optimizing the YOLOv7-Tiny Model with Multiple Strategies for Citrus Fruit Yield Estimation in Complex Scenarios
by Juanli Jing, Menglin Zhai, Shiqing Dou, Lin Wang, Binghai Lou, Jichi Yan and Shixin Yuan
Agriculture 2024, 14(2), 303; https://doi.org/10.3390/agriculture14020303 - 13 Feb 2024
Cited by 1 | Viewed by 856
Abstract
The accurate identification of citrus fruits is important for yield estimation in complex citrus orchards. In this study, the YOLOv7-tiny-BVP network is constructed based on the YOLOv7-tiny network, with citrus fruits as the research object. The network introduces a BiFormer bilevel routing attention mechanism, replaces regular convolution with GSConv, adds the VoVGSCSP module to the neck network, and replaces the simplified efficient layer aggregation network (ELAN) with partial convolution (PConv) in the backbone network. The improved model significantly reduces the number of parameters and the inference time while maintaining a high recognition rate for citrus fruits. On the test dataset, the fruit recognition accuracy of the modified model was 97.9%. Compared with YOLOv7-tiny, the number of parameters and the size of the improved network were reduced by 38.47% and 4.6 MB, respectively, while the recognition accuracy, frames per second (FPS), and F1 score improved by 0.9, 2.02, and 1%, respectively. The proposed model retains 97.9% accuracy even after a 38.47% parameter reduction, and its size is only 7.7 MB, offering a new direction for the development of lightweight target detection models.
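Partial convolution (PConv), which the abstract describes as replacing ELAN in the backbone, saves computation by convolving only a fraction of the input channels and passing the remaining channels through untouched. A minimal NumPy sketch of the idea; the tensor layout, 1x1 kernel, and split ratio here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def partial_conv(x: np.ndarray, weight: np.ndarray, n_div: int = 4) -> np.ndarray:
    """PConv-style layer (sketch): apply a 1x1 convolution to the first
    C/n_div channels of x and pass the remaining channels through unchanged.
    x: (C, H, W) feature map; weight: (C//n_div, C//n_div) 1x1 kernel."""
    c = x.shape[0]
    cp = c // n_div  # number of channels that are actually convolved
    # A 1x1 convolution is a linear map over the channel dimension:
    head = np.einsum('oi,ihw->ohw', weight, x[:cp])
    return np.concatenate([head, x[cp:]], axis=0)

x = np.random.rand(8, 4, 4)
w = np.eye(2)           # identity 1x1 kernel over cp = 8 // 4 = 2 channels
y = partial_conv(x, w)
print(y.shape)          # (8, 4, 4): the output shape matches the input
```

With n_div = 4, only a quarter of the channels incur convolution FLOPs, which is how PConv-style backbones trade a small accuracy cost for a large parameter and latency reduction.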

15 pages, 2808 KiB  
Article
AG-YOLO: A Rapid Citrus Fruit Detection Algorithm with Global Context Fusion
by Yishen Lin, Zifan Huang, Yun Liang, Yunfan Liu and Weipeng Jiang
Agriculture 2024, 14(1), 114; https://doi.org/10.3390/agriculture14010114 - 10 Jan 2024
Cited by 1 | Viewed by 1143
Abstract
Citrus fruits hold a pivotal position within the agricultural sector. Accurate yield estimation for citrus is crucial in orchard management, especially when fruits are occluded by dense foliage or overlap one another. This study addresses the low detection accuracy and frequent missed detections of citrus fruit detection algorithms in occlusion scenarios. It introduces AG-YOLO, an attention-based network designed to fuse contextual information. Leveraging NextViT as its primary architecture, AG-YOLO captures holistic contextual information within nearby scenes. It also introduces a Global Context Fusion Module (GCFM) that uses self-attention to exchange and fuse local and global features, significantly improving the model's occluded-target detection capability. An independent dataset of over 8000 outdoor images was collected to evaluate AG-YOLO; after careful screening, a subset of 957 images meeting the criteria for citrus-fruit occlusion scenarios was obtained, covering occlusion, severe occlusion, overlap, and severe overlap. On this dataset, AG-YOLO achieved a precision (P) of 90.6%, a mean average precision (mAP)@50 of 83.2%, and an mAP@50:95 of 60.3%, surpassing existing mainstream object detection methods, while running at 34.22 frames per second (FPS). Compared with existing models, AG-YOLO offers high localization accuracy, a low missed-detection rate, and fast detection speed, making it an efficient and reliable solution for handling severe occlusions in object detection.
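The mAP@50 and mAP@50:95 figures cited in these abstracts rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes: a prediction counts as a true positive at mAP@50 when IoU ≥ 0.5, while mAP@50:95 averages over thresholds from 0.5 to 0.95 in steps of 0.05. A minimal sketch with illustrative box coordinates:

```python
def iou(a: tuple, b: tuple) -> float:
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes shifted by half their width: overlap 50, union 150.
pred, gt = (0, 0, 10, 10), (5, 0, 15, 10)
score = iou(pred, gt)  # 50 / 150 ≈ 0.333
# This match would be a miss at the mAP@50 threshold:
print(score >= 0.5)    # False
```

This is why occlusion is hard: a partially hidden fruit yields a smaller predicted box, lowering IoU against the full ground-truth box and pushing otherwise correct detections below the matching threshold.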
