Search Results (3)

Search Parameters:
Keywords = vine trunk segmentation

15 pages, 62527 KB  
Article
Towards Intelligent Pruning of Vineyards by Direct Detection of Cutting Areas
by Elia Pacioni, Eugenio Abengózar, Miguel Macías Macías, Carlos J. García-Orellana, Ramón Gallardo and Horacio M. González Velasco
Agriculture 2025, 15(11), 1154; https://doi.org/10.3390/agriculture15111154 - 27 May 2025
Cited by 1 | Viewed by 1219
Abstract
The development of robots for automatic pruning of vineyards using deep learning techniques seems feasible in the medium term. In this context, it is essential to propose and study solutions that can be deployed on portable hardware with artificial intelligence capabilities but reduced computing power. In this paper, we propose a novel approach to vineyard pruning based on direct detection of cutting areas in real time, comparing the performance of Mask R-CNN and YOLOv8. The studied object segmentation architectures segment the image by locating the trunk and the pruned and unpruned vine shoots. Our study analyzes the performance of both frameworks in terms of segmentation efficiency and inference time on a Jetson AGX Orin GPU. To compare segmentation efficiency, we used the mAP50 and per-category AP50 metrics. Our results show that YOLOv8 is superior in both segmentation efficiency and inference time. Specifically, YOLOv8-S exhibits the best tradeoff between efficiency and inference time, with an mAP50 of 0.883, an AP50 of 0.748 for the shoot class, and an inference time of around 55 ms on a Jetson AGX Orin.
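The mAP50/AP50 comparison above can be illustrated with a minimal sketch of how AP at an IoU threshold of 0.5 is computed for a single class. This is not the paper's evaluation code: the greedy first-match assignment, axis-aligned box format, and all-point interpolation are simplifying assumptions.

```python
def iou(a, b):
    """IoU of axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def ap50(preds, gts):
    """AP at IoU 0.5 for one class.

    preds: list of (box, confidence); gts: non-empty list of boxes.
    Greedy matching: each prediction, in order of descending confidence,
    claims the first still-unmatched ground truth with IoU >= 0.5.
    """
    preds = sorted(preds, key=lambda p: -p[1])
    matched, flags = set(), []
    for box, _ in preds:
        hit = None
        for i, g in enumerate(gts):
            if i not in matched and iou(box, g) >= 0.5:
                hit = i
                break
        flags.append(hit is not None)
        if hit is not None:
            matched.add(hit)
    # Area under the precision-recall curve (all-point interpolation).
    ap, tp, prev_recall = 0.0, 0, 0.0
    for rank, is_tp in enumerate(flags, start=1):
        tp += is_tp
        recall = tp / len(gts)
        ap += (recall - prev_recall) * (tp / rank)
        prev_recall = recall
    return ap
```

A perfect single detection yields an AP of 1.0, while detecting only one of two ground-truth objects yields 0.5; mAP50 would then average this quantity over the classes (trunk, pruned shoot, unpruned shoot).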

20 pages, 46648 KB  
Article
Generating a Dataset for Semantic Segmentation of Vine Trunks in Vineyards Using Semi-Supervised Learning and Object Detection
by Petar Slaviček, Ivan Hrabar and Zdenko Kovačić
Robotics 2024, 13(2), 20; https://doi.org/10.3390/robotics13020020 - 23 Jan 2024
Cited by 4 | Viewed by 3698
Abstract
This article describes an experimentally tested approach using semi-supervised learning to generate new datasets for semantic segmentation of vine trunks with very little human-annotated data, resulting in significant savings in time and resources. The creation of such datasets is a crucial step towards the development of autonomous robots for vineyard maintenance. For a mobile robot platform to perform a vineyard maintenance task such as suckering, a semantically segmented view of the vine trunks is required. The robot must recognize the shape and position of the vine trunks and adapt its movements and actions accordingly. Starting with vine trunk recognition and ending with semi-supervised training for semantic segmentation, we have shown that the need for human annotation, usually a time-consuming and expensive process, can be significantly reduced if a dataset for object (vine trunk) detection is available. In this study, we generated about 35,000 images with semantic segmentation of vine trunks using only 300 human-annotated images. This method eliminates about 99% of the time that would be required to manually annotate the entire dataset. Based on the evaluated dataset, we compared different semantic segmentation model architectures to determine the most suitable one for mobile robot applications, balancing accuracy, speed, and memory requirements. The model with the best balance achieved a validation accuracy of 81% and a processing time of only 5 ms. The results of this work, obtained during experiments in a vineyard on karst terrain, show the potential of intelligent data annotation, reducing the time required for labeling and thus paving the way for further innovations in machine learning.
(This article belongs to the Special Issue Robotics and AI for Precision Agriculture)
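The core semi-supervised loop above — train on a few labels, predict on the unlabeled pool, and keep only confident predictions as new training data — can be sketched with a toy one-dimensional classifier. The `fit_threshold` model and the confidence margin are illustrative assumptions, not the authors' detection-to-segmentation pipeline.

```python
def fit_threshold(samples):
    """Toy 1-D classifier: threshold halfway between class means.

    samples: list of (feature, label) with label in {0, 1}.
    """
    pos = [f for f, y in samples if y == 1]
    neg = [f for f, y in samples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def pseudo_label(labeled, unlabeled, margin=1.0):
    """One round of pseudo-labeling.

    Unlabeled samples whose feature lies at least `margin` away from
    the decision threshold are considered confident and are added to
    the training set with the predicted label; the rest are discarded.
    """
    t = fit_threshold(labeled)
    confident = [(f, int(f > t)) for f in unlabeled if abs(f - t) >= margin]
    return labeled + confident  # enlarged training set for the next round

# Hypothetical data: 4 human-labeled samples, 4 unlabeled ones.
labeled = [(0.0, 0), (1.0, 0), (9.0, 1), (10.0, 1)]
unlabeled = [0.5, 4.9, 5.1, 9.5]
grown = pseudo_label(labeled, unlabeled)
```

Here the threshold lands at 5.0, so 0.5 and 9.5 are confidently pseudo-labeled while the ambiguous 4.9 and 5.1 are dropped; in the paper's setting the "model" is a trunk detector, the "confidence" is a detection score, and iterating this loop is what turns 300 annotations into roughly 35,000 segmented images.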

18 pages, 30221 KB  
Article
Automatic Grapevine Trunk Detection on UAV-Based Point Cloud
by Juan M. Jurado, Luís Pádua, Francisco R. Feito and Joaquim J. Sousa
Remote Sens. 2020, 12(18), 3043; https://doi.org/10.3390/rs12183043 - 17 Sep 2020
Cited by 45 | Viewed by 5358
Abstract
The optimisation of vineyard management requires efficient and automated methods able to identify individual plants. In the last few years, Unmanned Aerial Vehicles (UAVs) have become one of the main sources of remote sensing information for Precision Viticulture (PV) applications. In fact, high-resolution UAV-based imagery offers a unique capability for modelling plant structure, making possible the recognition of significant geometrical features in photogrammetric point clouds. Despite the proliferation of innovative technologies in viticulture, the identification of individual grapevines still relies on image-based segmentation techniques, in which grapevine and non-grapevine features are separated and individual plants are estimated, usually assuming a fixed distance between them. In this study, an automatic method for grapevine trunk detection using 3D point cloud data is presented. The proposed method focuses on the recognition of key geometrical parameters to ensure the existence of every plant in the 3D model. The method was tested in different commercial vineyards, and to push it to its limit, a vineyard characterised by several missing plants along the vine rows, irregular distances between plants, and trunks occluded in some areas by dense vegetation was also used. The proposed method represents a departure from the state of the art, identifying individual trunks, posts, and missing plants based on the interpretation and analysis of a 3D point cloud. Moreover, a validation process was carried out, allowing the conclusion that the method performs well, especially when applied to 3D point clouds generated in phases in which the leaves are not yet very dense (January to May). However, with correct flight parametrisation, the method remains effective throughout the entire vegetative cycle.
(This article belongs to the Special Issue UAS-Remote Sensing Methods for Mapping, Monitoring and Modeling Crops)
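The geometric idea described above — finding trunks as compact clusters of points in a near-ground height band of the photogrammetric point cloud — can be sketched as follows. The band limits, cluster radius, minimum point count, and greedy centroid clustering are illustrative assumptions, not the paper's actual parameters or algorithm.

```python
def detect_trunks(points, z_min=0.2, z_max=0.8, radius=0.3, min_points=5):
    """Return (x, y) trunk candidates from a list of (x, y, z) points.

    1. Keep only points in a height band near the ground, where trunks
       dominate and canopy is absent.
    2. Greedily cluster the remaining points in the XY plane: a point
       joins the first cluster whose centroid lies within `radius`.
    3. Report centroids of clusters with at least `min_points` points;
       sparse clusters are treated as noise (e.g. stray vegetation).
    """
    band = [(x, y) for x, y, z in points if z_min <= z <= z_max]
    clusters = []  # each cluster is a list of (x, y)
    for x, y in band:
        for c in clusters:
            cx = sum(p[0] for p in c) / len(c)
            cy = sum(p[1] for p in c) / len(c)
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                c.append((x, y))
                break
        else:
            clusters.append([(x, y)])
    return [
        (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
        for c in clusters
        if len(c) >= min_points
    ]

# Hypothetical cloud: two trunks ~5 m apart plus high canopy points.
trunk1 = [(0.01 * i, 0.01 * i, 0.5) for i in range(10)]
trunk2 = [(5 + 0.01 * i, 0.01 * i, 0.5) for i in range(10)]
canopy = [(2.5, 0.0, 2.0)] * 20
centres = detect_trunks(trunk1 + trunk2 + canopy)
```

In this sketch the canopy points are discarded by the height filter and the two trunks emerge as two cluster centroids, which also hints at why the paper's method degrades when dense foliage occludes the trunk band: fewer band points survive step 1.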
