Search Results (4)

Search Parameters:
Keywords = sweet pepper harvesting automation

35 pages, 5395 KiB  
Review
An Overview of End Effectors in Agricultural Robotic Harvesting Systems
by Eleni Vrochidou, Viktoria Nikoleta Tsakalidou, Ioannis Kalathas, Theodoros Gkrimpizis, Theodore Pachidis and Vassilis G. Kaburlasos
Agriculture 2022, 12(8), 1240; https://doi.org/10.3390/agriculture12081240 - 17 Aug 2022
Cited by 89 | Viewed by 13631
Abstract
In recent years, the agricultural sector has turned to robotic automation to deal with the growing demand for food. Harvesting fruits and vegetables is the most labor-intensive and time-consuming of the main agricultural tasks, and seasonal shortages of experienced workers result in low harvesting efficiency, food losses, and quality deterioration. Research efforts therefore focus on automating manual harvesting operations. Robotic manipulation of delicate products in unstructured environments is challenging, and suitable end effectors that meet the manipulation requirements must be developed. To that end, this work reviews state-of-the-art robotic end effectors for harvesting applications. Detachment methods, types of end effectors, and additional sensors are discussed, and performance measures are included to evaluate the technologies and determine optimal end effectors for specific crops. Challenges and potential future trends of end effectors in agricultural robotic systems are reported. The review finds that contact-grasping grippers for fruit holding are the most common type of end effector, and that most research concerns tomato, apple, and sweet pepper harvesting applications. This work can serve as an up-to-date guide for selecting suitable end effectors for harvesting robots.

20 pages, 11751 KiB  
Article
Comparing Performances of CNN, BP, and SVM Algorithms for Differentiating Sweet Pepper Parts for Harvest Automation
by Bongki Lee, Donghwan Kam, Yongjin Cho, Dae-Cheol Kim and Dong-Hoon Lee
Appl. Sci. 2021, 11(20), 9583; https://doi.org/10.3390/app11209583 - 14 Oct 2021
Cited by 5 | Viewed by 2363
Abstract
For harvest automation of sweet pepper, image recognition algorithms for differentiating the parts of a sweet pepper plant were developed and their performances were compared. An imaging system consisting of two cameras and six halogen lamps was built for sweet pepper image acquisition. For image analysis using the normalized difference vegetation index (NDVI), a band-pass filter covering 435 to 950 nm, a broad range from visible light to near-infrared, was used. K-means clustering and morphological skeletonization were applied to the NDVI results to classify sweet pepper parts. Scale-invariant feature transform (SIFT) and speeded-up robust features (SURF) were used to extract local features. The classification performance of a support vector machine (SVM) with a radial basis function kernel was compared with that of a backpropagation (BP) algorithm for classifying local SURF features of fruits, nodes, leaves, and suckers; the BP algorithm and the SVM reached accuracies of 95.96% and 63.75%, respectively. When the BP algorithm was used to classify plant parts, the recognition success rate was 94.44% for fruits, 84.73% for nodes, 69.97% for leaves, and 84.34% for suckers. When a convolutional neural network (CNN) was used, the rates were 99.50% for fruits, 87.75% for nodes, 90.50% for leaves, and 87.25% for suckers.
(This article belongs to the Special Issue Engineering of Smart Agriculture)
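As a concrete illustration of the NDVI-plus-clustering stage described in this abstract, the sketch below computes per-pixel NDVI from registered red and near-infrared bands and segments the result with K-means. It is only a minimal sketch: the array shapes, cluster count, and function names are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch: NDVI computation followed by K-means segmentation.
# Assumes two spatially registered arrays (red band and NIR band); shapes,
# cluster count, and variable names are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def compute_ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Per-pixel NDVI = (NIR - RED) / (NIR + RED), with safe division."""
    red = red.astype(np.float32)
    nir = nir.astype(np.float32)
    denom = nir + red
    denom[denom == 0] = 1e-6          # avoid division by zero
    return (nir - red) / denom

def segment_ndvi(ndvi: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    """Cluster NDVI values into regions (e.g. fruit, stem, leaf, background)."""
    flat = ndvi.reshape(-1, 1)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(flat)
    return labels.reshape(ndvi.shape)

# Example with synthetic data standing in for registered camera frames.
red_band = np.random.rand(480, 640).astype(np.float32)
nir_band = np.random.rand(480, 640).astype(np.float32)
label_map = segment_ndvi(compute_ndvi(red_band, nir_band))
print(label_map.shape, np.unique(label_map))
```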

23 pages, 12245 KiB  
Article
A Vision Servo System for Automated Harvest of Sweet Pepper in Korean Greenhouse Environment
by BongKi Lee, DongHwan Kam, ByeongRo Min, JiHo Hwa and SeBu Oh
Appl. Sci. 2019, 9(12), 2395; https://doi.org/10.3390/app9122395 - 12 Jun 2019
Cited by 31 | Viewed by 4613
Abstract
Sweet pepper growers face rising unit production costs driven by rising labor costs, which in turn reduce productivity, erode farming expertise, and degrade product quality. It is therefore necessary to introduce automated robot harvesting into sweet pepper farming. In this study, the authors developed an image-based closed-loop control system (a vision servo system) and an automated sweet pepper harvesting robot, and carried out experiments to verify its efficiency. The system detects fruit through an imaging sensor in the sweet pepper growing environment, decides whether to harvest it, and reports the fruit's location to the control center; the manipulator's working area spans 350~600 mm radially from the center of the system and 1000 mm vertically. To confirm the recognition performance, 269 sweet pepper images were used for fruit extraction, of which 82.16% were recognized successfully. A harvesting experiment was then carried out with 100 sweet peppers: the approach rate to the peduncle was about 86.7%, a maximum harvest rate of 70% was achieved over four repeated harvest sessions, and the average harvest time was 51.1 s.
(This article belongs to the Section Mechanical Engineering)
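The quoted working area (roughly 350~600 mm radially from the system centre and up to 1000 mm vertically) implies a simple reachability check before the manipulator approaches a detected fruit. The sketch below is one illustrative reading of that constraint: the numeric limits come from the abstract, while the coordinate convention and function names are assumptions, not the authors' control code.

```python
# Illustrative reachability check for a detected sweet pepper, based on the
# working area quoted in the abstract (350-600 mm radial, up to 1000 mm vertical).
# Coordinate frame, limits, and function names are assumptions for illustration.
import math
from typing import Tuple

RADIAL_MIN_MM = 350.0
RADIAL_MAX_MM = 600.0
VERTICAL_MAX_MM = 1000.0

def is_harvestable(target_xyz_mm: Tuple[float, float, float]) -> bool:
    """Return True if the detected fruit position (x, y, z) in millimetres,
    expressed relative to the system centre, lies inside the working envelope."""
    x, y, z = target_xyz_mm
    radial = math.hypot(x, y)          # horizontal distance from the system centre
    return RADIAL_MIN_MM <= radial <= RADIAL_MAX_MM and 0.0 <= z <= VERTICAL_MAX_MM

# A fruit detected 480 mm ahead, 120 mm to the side, 750 mm up is reachable.
print(is_harvestable((480.0, 120.0, 750.0)))   # True
print(is_harvestable((200.0, 0.0, 500.0)))     # False: too close radially
```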

23 pages, 26252 KiB  
Article
DeepFruits: A Fruit Detection System Using Deep Neural Networks
by Inkyu Sa, Zongyuan Ge, Feras Dayoub, Ben Upcroft, Tristan Perez and Chris McCool
Sensors 2016, 16(8), 1222; https://doi.org/10.3390/s16081222 - 3 Aug 2016
Cited by 965 | Viewed by 66758
Abstract
This paper presents a novel approach to fruit detection using deep convolutional neural networks. The aim is to build an accurate, fast, and reliable fruit detection system, a vital element of an autonomous agricultural robotic platform and a key component for fruit yield estimation and automated harvesting. Recent work in deep neural networks has led to a state-of-the-art object detector termed Faster Region-based CNN (Faster R-CNN). We adapt this model, through transfer learning, to the task of fruit detection using imagery from two modalities: colour (RGB) and Near-Infrared (NIR). Early and late fusion methods are explored for combining the multi-modal (RGB and NIR) information, leading to a novel multi-modal Faster R-CNN model that achieves state-of-the-art results compared to prior work, with the F1 score (which accounts for both precision and recall) improving from 0.807 to 0.838 for the detection of sweet pepper. In addition to improved accuracy, the approach is also much quicker to deploy for new fruits, as it requires bounding-box rather than pixel-level annotation (annotating bounding boxes is approximately an order of magnitude faster). The model is retrained to detect seven fruits, with the entire annotation and training process taking four hours per fruit.
(This article belongs to the Special Issue Vision-Based Sensors in Field Robotics)
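The early-fusion variant described in this abstract amounts to presenting the detector with a four-channel (RGB + NIR) input. The sketch below shows one way to set this up with a present-day torchvision Faster R-CNN via transfer learning; it is an illustrative reconstruction under assumptions (ResNet-50 backbone, class count, NIR normalization statistics), not the authors' original implementation.

```python
# Illustrative early-fusion Faster R-CNN: extend a pretrained torchvision detector
# to accept a 4-channel (RGB + NIR) input. A sketch under assumptions, not the
# paper's original implementation.
import torch
from torch import nn
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # background + sweet pepper (assumed labelling)

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the first convolution so the backbone accepts RGB + NIR (4 channels),
# copying the pretrained RGB filters and initialising the NIR channel from their mean.
old_conv = model.backbone.body.conv1
new_conv = nn.Conv2d(4, old_conv.out_channels, kernel_size=7, stride=2, padding=3, bias=False)
with torch.no_grad():
    new_conv.weight[:, :3] = old_conv.weight
    new_conv.weight[:, 3:] = old_conv.weight.mean(dim=1, keepdim=True)
model.backbone.body.conv1 = new_conv

# The detection transform normalises 3-channel images by default; extend it to 4 channels.
model.transform.image_mean = [0.485, 0.456, 0.406, 0.5]   # NIR statistics assumed
model.transform.image_std = [0.229, 0.224, 0.225, 0.25]

# Replace the box predictor head for the target classes (transfer learning).
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Forward pass on a dummy 4-channel image to confirm the plumbing.
model.eval()
with torch.no_grad():
    predictions = model([torch.rand(4, 512, 512)])
print(predictions[0]["boxes"].shape)
```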
