Article

Vineyard Groundcover Biodiversity: Using Deep Learning to Differentiate Cover Crop Communities from Aerial RGB Imagery

by Isabella Ghiglieno 1, Girma Tariku Woldesemayat 1, Andres Sanchez Morchio 1,*, Celine Birolleau 1, Luca Facciano 1, Fulvio Gentilin 2, Salvatore Mangiapane 2, Anna Simonetto 1 and Gianni Gilioli 1

1 Department of Civil, Environmental, Architectural Engineering, and Mathematics, Agrofood Research Hub, Università di Brescia, Branze 43, 25123 Brescia, Italy
2 GRiD Laboratory, Department of Civil, Environmental, Architectural Engineering, and Mathematics, University of Brescia, Branze 43, 25123 Brescia, Italy
* Author to whom correspondence should be addressed.
AgriEngineering 2025, 7(12), 434; https://doi.org/10.3390/agriengineering7120434
Submission received: 22 September 2025 / Revised: 6 November 2025 / Accepted: 21 November 2025 / Published: 16 December 2025

Abstract

Monitoring groundcover diversity in vineyards is a complex task, often limited by the time and expertise required for accurate botanical identification. Remote sensing technologies and AI-based tools are still underutilized in this context, particularly for classifying herbaceous vegetation in inter-row areas. In this study, we introduce an approach that simplifies this task by classifying groundcover into nine categories. Using UAV images to train a convolutional neural network through a deep learning methodology, we evaluate the effectiveness of different backbone structures applied to a UNet network for the pixel-wise classification of nine groundcover classes: vine canopy, bare soil, and seven distinct cover crop community types. Our results demonstrate that the UNet model, especially when using an EfficientNetB0 backbone, significantly improves classification performance, achieving 85.4% accuracy, 59.8% mean Intersection over Union (IoU), and a Jaccard index of 73.0%. Although this study demonstrates the potential of integrating remote sensing and deep learning for vineyard biodiversity monitoring, its applicability is limited by the small image coverage, as data were collected from a single vineyard during a single drone flight. Future work will focus on expanding the model’s applicability to a broader range of vineyard systems, soil types, and geographic regions, as well as testing its performance on lower-resolution multispectral imagery to reduce data acquisition costs and time, enabling large-scale and cost-effective monitoring.

1. Introduction

Viticulture, one of the most widespread and economically significant perennial fruit crops globally [1], is increasingly the focus of sustainable management strategies aimed at integrating biodiversity into agricultural systems. A central component of these strategies is the management of inter-row groundcover vegetation, which plays a crucial role in enhancing ecological processes and delivering key ecosystem services.
Whether sown or spontaneous, cover crops can contribute to nitrogen fixation, improve soil structure, reduce erosion, attract pollinators, and support the biological control of pests [2,3,4]. Groundcovers also contribute to both planned biodiversity—intentionally introduced by vine-growers—and associated biodiversity, which emerges spontaneously within the vineyard agroecosystem [5].
Understanding and managing groundcover diversity is essential to optimize these benefits. However, significant knowledge gaps remain concerning the relationships between different cover crop communities and the provision of ecosystem services in vineyards. One of the main obstacles is the lack of practical tools for accurately assessing groundcover composition over time and space. In particular, there is limited quantitative understanding of how different groundcover types are distributed spatially within vineyards and what proportion of the inter-row area they occupy. Field-based surveys are time-consuming, require botanical expertise, and lack scalability. These limitations restrict the broader adoption of biodiversity-based practices and the ability to monitor their ecological impacts [6,7].
Remote sensing technologies and artificial intelligence (AI) now offer promising solutions to address these challenges. In particular, the combination of Unmanned Aerial Vehicles (UAVs) and deep learning (DL) models applied to image analysis enables the automated mapping of vegetation and land cover classification. Deep learning networks, such as convolutional neural networks and recurrent neural networks, are widely used in this context, making them particularly effective in image processing [8,9]. Although these technologies are increasingly used to monitor vine vigor, disease presence, and soil characteristics, their application to vineyard groundcover classification remains underdeveloped [10,11]. There is a clear need to design and test DL models specifically adapted to the spatial variability and structural complexity of herbaceous vegetation in inter-row areas.
This study addresses this gap by developing a deep learning-based approach for groundcover classification using UAV-acquired RGB imagery. We implemented a pixel-wise classification system capable of distinguishing nine classes: vine canopy, bare soil, and seven distinct cover crop communities. The first two categories allow for the separate analysis of the seven categories of cover crops that were chosen according to their functional role in the agroecosystem [12].
A key contribution of our work is the comparative evaluation of UNet-based deep learning architectures, each featuring a different encoder backbone—a pre-trained convolutional neural network used as the feature extractor at the core of the segmentation model—namely, ResNet34, EfficientNet, InceptionV3, and DenseNet, alongside a baseline UNet trained without a backbone.
The objective of this study is to address the knowledge gap in applying DL models to the study of groundcovers in vineyards and to compare these DL models to determine which one is best suited to the case study. Future research should aim to expand the model’s applicability across a wider range of vineyard types, soil conditions, and image resolutions. In doing so, this approach could enable large-scale, cost-effective biodiversity monitoring and provide valuable insights into vineyard biodiversity dynamics and their functional implications over time and space.

2. Materials and Methods

2.1. Study Site

This study was conducted in a vineyard belonging to Azienda Agricola Ricci Curbastro in Capriolo, Brescia, Italy (Figure 1). The vineyard is situated in the Franciacorta wine region, known for its high-quality sparkling wine production.

2.2. Cover Crop Communities Classification

In this study, we focused on the inter-row areas of vineyards and categorized them into three main components: vine canopy, bare soil, and groundcover vegetation. Within the groundcover category, we further identified seven distinct types of cover crop communities, grouped according to their dominant botanical families or functional traits: graminoids, legumes (Fabaceae), mustards (Brassicaceae), composites (Asteraceae), Polygonaceae, Plantaginaceae, and other forbs. These cover crop communities play different functional roles and provide various ecosystem services (Table 1).

2.3. Image Acquisition

To acquire aerial imagery, a drone survey was carried out on 21 September 2023, at noon, on a cloudy day, using a DJI M300 RTK UAV (DJI, Shenzhen, China). The flight was autonomously managed via the Litchi application (version 4.25.0, released 4 July 2022, by VC Technology Ltd., London, UK), which enabled the execution of a structured photogrammetry mission. The UAV was flown at an altitude of 8 m above ground level. Four ground control points (GCPs) with known coordinates were used for georeferencing. Images were captured with a previously available DJI Zenmuse P1 camera (Shenzhen, China), which has a 35.9 × 24 mm sensor, produces 45-megapixel images, and has a focal length of 35 mm, yielding a ground pixel size of 0.1 cm. A total of 24 high-resolution RGB images were acquired and curated so that they covered the full study area without overlap between image footprints.
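For readers cross-checking the stated pixel size, the value follows from the standard ground sampling distance (GSD) relation applied to the sensor and flight parameters given above (this worked calculation is ours, not part of the original text):

\[
\mathrm{GSD} = \frac{w_{\mathrm{sensor}}}{n_{\mathrm{px}}} \cdot \frac{H}{f} = \frac{35.9\ \mathrm{mm}}{8192\ \mathrm{px}} \cdot \frac{8\ \mathrm{m}}{35\ \mathrm{mm}} \approx 1.0\ \mathrm{mm/px} \approx 0.1\ \mathrm{cm}.
\]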

2.4. Data Preprocessing for Cover Crops Segmentation

The study utilized 24 RGB images (8192 × 5460 pixels), each representing 40 square meters, for a total analyzed area of 960 square meters. While DL offers advantages over traditional ML for plant species mapping by handling complex data, the large image size presents computational challenges. To address this, the images underwent preprocessing as follows:
  • A plant expert manually annotated masks for each training image using Roboflow, a web-based platform for image dataset creation and management.
  • The high-resolution images and their corresponding masks were partitioned into 256 × 256-pixel patches for training a U-Net segmentation model. This facilitated efficient image segmentation and feature identification. The trained model was then saved.
  • The saved model was used for the prediction and mapping of the full-size images.
Different types of ground cover (cover crops, vines, and soil) were manually labeled by an experienced botanist with in-depth knowledge of vineyard ecosystems. The expert classified each polygon in the mask image (8192 × 5460 px) by assigning labels corresponding to the established reference categories. Following approaches used in vegetation mapping studies [22,23], each polygon was attributed to the category whose vegetative morphological traits were visually dominant, that is, recognizable in at least 60% of the polygon area. Subsequently, the image was divided into patches of 256 × 256 pixels, and the classification was verified by the same expert to ensure accuracy and consistency.
Figure 2 illustrates the core workflow of the semantic segmentation process. Semantic segmentation is a deep learning technique that classifies each pixel in an image into a predefined category, allowing detailed spatial mapping of vineyard groundcover types. A U-Net architecture was employed for model training. Once trained, the model was saved and applied to full-size images for pixel-wise prediction, enabling high-resolution groundcover mapping within vineyard inter-rows.
This study addressed the challenges posed by the large size (8192 × 5460 pixels) of the original images, which resulted in high memory requirements and slow training times in deep learning models. A Python script was developed to efficiently process these images. The script iterated through the image and mask file directories, resizing each image and corresponding mask to the nearest dimensions divisible by 256 pixels (Figure 3). The patchify Python library was then used to segment the resized images and masks into non-overlapping 256 × 256-pixel patches. These patches were saved individually while maintaining their association with the corresponding masks. Finally, the splitfolders Python library was used to divide the patches into training (70%), validation (15%), and testing (15%) datasets.
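As an illustration of the preprocessing pipeline described above, the following Python sketch extracts non-overlapping 256 × 256 patches with patchify and performs the 70/15/15 split with splitfolders. It is a minimal reconstruction, not the authors’ original script: OpenCV is assumed for image I/O, and the folder layout, file naming, and interpolation choices are illustrative.

```python
# Minimal sketch of the patch-extraction and dataset-split steps (assumed paths,
# naming scheme, and OpenCV I/O; patch size of 256 px as described in the text).
import os
import cv2
from patchify import patchify
import splitfolders

PATCH = 256

def to_multiple_of_patch(arr, interp):
    """Resize height/width down to the nearest multiple of the patch size."""
    h, w = arr.shape[:2]
    new_h, new_w = (h // PATCH) * PATCH, (w // PATCH) * PATCH
    return cv2.resize(arr, (new_w, new_h), interpolation=interp)

def save_patches(img_path, mask_path, out_dir):
    img = to_multiple_of_patch(cv2.imread(img_path), cv2.INTER_AREA)
    # nearest-neighbour interpolation keeps class indices intact in the mask
    mask = to_multiple_of_patch(cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE), cv2.INTER_NEAREST)
    img_patches = patchify(img, (PATCH, PATCH, 3), step=PATCH)   # non-overlapping tiles
    mask_patches = patchify(mask, (PATCH, PATCH), step=PATCH)
    stem = os.path.splitext(os.path.basename(img_path))[0]
    os.makedirs(f"{out_dir}/images", exist_ok=True)
    os.makedirs(f"{out_dir}/masks", exist_ok=True)
    for i in range(img_patches.shape[0]):
        for j in range(img_patches.shape[1]):
            cv2.imwrite(f"{out_dir}/images/{stem}_{i}_{j}.png", img_patches[i, j, 0])
            cv2.imwrite(f"{out_dir}/masks/{stem}_{i}_{j}.png", mask_patches[i, j])

# 70/15/15 split; with a fixed seed the images/ and masks/ subfolders receive
# the same partition, keeping image-mask pairs aligned.
splitfolders.ratio("patches", output="dataset", seed=42, ratio=(0.70, 0.15, 0.15))
```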

2.5. Semantic Segmentation Model

A UNet semantic segmentation model [24], implemented in Keras (Python 3.13.7), was designed for pixel-wise classification of UAV RGB imagery, where each pixel is assigned a class label based on local features such as color, texture, or spatial context. This architecture, illustrated in Figure 4, comprises encoder and decoder pathways with skip connections to preserve spatial information and effectively capture both local and global features. The model outputs a segmented mask classifying each pixel into one of nine classes: seven cover crop communities (graminoids, legumes, mustards, composites, Polygonaceae, Plantaginaceae, and other forbs), vine, and soil. The training involved minimizing a loss function (measuring the discrepancy between predicted and ground truth masks), with performance evaluated using Intersection over Union (IoU), accuracy, and mean IoU. We compared semantic segmentation with and without backbone architectures. The “no backbone” approach trained the UNet from scratch, relying solely on its inherent architecture for feature extraction. This was contrasted against models incorporating backbone architectures. The selection of backbone architectures was guided by their proven performance in semantic segmentation tasks, particularly in agricultural and environmental contexts. ResNet34 offers reliable feature extraction through residual learning [25], while InceptionV3 is noted for its computational efficiency and multi-scale feature processing [26]. EfficientNet, with its compound scaling method, optimizes performance relative to computational cost [27], and DenseNet promotes gradient flow and feature reuse via dense connectivity [28]. These architectures have been effectively applied in recent agricultural segmentation studies [29,30,31], justifying their selection for comparative evaluation in this work.
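To make the backbone comparison concrete, the sketch below shows one way to instantiate such models, assuming the segmentation_models Keras package; the paper does not name the implementation used, so the package choice, argument names, and fixed input shape are assumptions rather than the authors’ code.

```python
# Sketch: U-Net with an interchangeable, ImageNet-pretrained encoder backbone
# (assumed segmentation_models package; backbone names are illustrative).
import segmentation_models as sm

N_CLASSES = 9  # seven cover crop communities + vine + soil

def build_unet(backbone_name="efficientnetb0"):
    return sm.Unet(
        backbone_name=backbone_name,   # e.g. "resnet34", "efficientnetb0", "inceptionv3", "densenet121"
        encoder_weights="imagenet",    # transfer learning from ImageNet
        input_shape=(256, 256, 3),
        classes=N_CLASSES,
        activation="softmax",          # per-pixel class probabilities
    )

model = build_unet("efficientnetb0")
model.summary()
```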

2.6. Metrics

In addition to overall pixel accuracy and mean Intersection over Union (mIoU), we report per-class IoU and a multiclass confusion matrix on the held-out test set to provide class-resolved performance. Accuracy measures the proportion of correctly labeled pixels over the entire image; it can be influenced by abundant classes (e.g., soil). Mean IoU averages the IoU across classes and thus better reflects species-level segmentation quality, particularly for minority classes (e.g., Mustards, Polygonaceae, Plantaginaceae).
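For reference, the metrics can be written per class c, where TP_c, FP_c, and FN_c denote pixel-level true positives, false positives, and false negatives for class c, N is the total number of pixels, and C = 9 is the number of classes:

\[
\mathrm{Accuracy} = \frac{\sum_{c=1}^{C} TP_c}{N}, \qquad
\mathrm{IoU}_c = \frac{TP_c}{TP_c + FP_c + FN_c}, \qquad
\mathrm{mIoU} = \frac{1}{C}\sum_{c=1}^{C} \mathrm{IoU}_c.
\]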

2.7. Training Parameters

Model training was performed using the Adam optimizer with a learning rate of 0.001, β1 = 0.9, β2 = 0.999, and ε = 1 × 10−7. The batch size was set to 16, and training ran for 100 epochs with early stopping (patience = 10) based on validation loss. The loss function combined class-weighted Dice Loss and Categorical Focal Loss, and model performance was monitored using accuracy and Intersection-over-Union (IoU) metrics. All images were normalized to a [0, 1] range, and the dataset was split into 70% training, 15% validation, and 15% testing subsets. Batch normalization layers were included throughout the architecture to improve training stability and generalization.
The same configuration was applied to each backbone variant (U-Net without backbone, ResNet34, EfficientNet-B0, Inception-V3, and DenseNet-121): Adam (learning rate 0.001, β1 = 0.9, β2 = 0.999), batch size 16, and a composite loss of class-weighted Dice plus Categorical Focal, with early stopping monitoring the validation loss (patience = 10, restore_best_weights = True). For reporting, the “stopped epoch” is defined as the epoch attaining the minimum validation loss; all validation metrics (accuracy, precision, recall, F1, mean IoU) are computed from that checkpoint. We provide the full learning curves (training/validation loss and mean IoU) for each backbone.
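The following sketch mirrors these training settings in Keras, assuming the segmentation_models loss and metric classes; the class-weight values, the unit weighting of the two loss terms, and the in-memory data arrays are placeholders, since only the hyperparameters listed above are reported.

```python
# Training sketch matching the stated hyperparameters (Adam, lr = 0.001, batch size 16,
# up to 100 epochs, early stopping with patience 10, class-weighted Dice + Categorical Focal loss).
import numpy as np
import tensorflow as tf
import segmentation_models as sm

N_CLASSES = 9
class_weights = np.ones(N_CLASSES)  # placeholder; actual per-class weights are not reported

model = sm.Unet("efficientnetb0", input_shape=(256, 256, 3),
                classes=N_CLASSES, activation="softmax", encoder_weights="imagenet")

total_loss = sm.losses.DiceLoss(class_weights=class_weights) + sm.losses.CategoricalFocalLoss()

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3, beta_1=0.9, beta_2=0.999, epsilon=1e-7),
    loss=total_loss,
    metrics=["accuracy", sm.metrics.IOUScore()],
)

early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                              restore_best_weights=True)

# train_images/val_images are assumed float arrays scaled to [0, 1];
# train_masks/val_masks are one-hot encoded (H, W, 9) ground-truth masks.
history = model.fit(train_images, train_masks,
                    validation_data=(val_images, val_masks),
                    batch_size=16, epochs=100, callbacks=[early_stop])
model.save("unet_efficientnetb0.h5")
```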

3. Results

Our UNet semantic segmentation models were trained on a dataset comprising 15,600 image patches. These patches were derived from 24 high-resolution RGB images (8192 × 5460 pixels) by dividing them into smaller 256 × 256-pixel sections. To ensure rigorous evaluation, the dataset was divided into a training set (70%), a validation set (15%), and a test set (15%). The models were configured with the DenseNet121, ResNet34, InceptionV3, and EfficientNet backbone architectures, with a combination of Dice loss and Categorical Focal loss as the optimization objective. Each model was trained for up to 100 epochs, and training progress was visualized using loss and IoU curves. After training, each model was saved for future use. Performance metrics such as accuracy, precision, recall, F1 score, Jaccard score, and mean IoU were computed with the trained models on the test dataset, and the results are reported below to evaluate segmentation performance.

3.1. Class Imbalance and Data Augmentation

To address the class imbalance evident in Table 2, where certain classes (e.g., mustards, Polygonaceae, and Legumes) have significantly lower coverage areas than others, we implemented a class-balanced data augmentation strategy. Augmentation is strictly label-preserving and applied jointly to the RGB image and its ground-truth mask using only rigid transforms (rotations by 0°/90°/180°/270° and horizontal/vertical flips); no elastic/non-rigid warps or mask repainting are used, so the spatial arrangement of vegetation is preserved. This involved selectively augmenting the training data by creating new image patches that focused on the under-represented classes. Specifically, we modified the existing masks to temporarily ignore highly represented classes, effectively creating new masks where the under-represented classes became the dominant features. These temporary “ignore” masks were used only to guide patch selection (sampling) and were never used as training targets; the model was always supervised with the original, unmodified pixel-wise masks. This process increased the number of training samples for the under-represented classes without altering the original dataset. New image patches were then generated from these modified masks, thus enriching the training data with a more balanced representation of all classes. For clarity, while patch selection could be guided by temporary masks, the labels fed to the network were the original masks, ensuring that no class areas were synthetically inflated or spatially altered. This targeted approach ensured that the model received sufficient training examples for even the least prevalent classes, improving its ability to accurately segment these features.
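A minimal sketch of this label-preserving augmentation is shown below: the same rigid transform (a 90° rotation and optional flips) is applied to the RGB patch and its class-index mask, and patches containing under-represented classes are duplicated. The class indices, the number of copies, and the rare-class selection rule are illustrative assumptions; the temporary mask-guided patch-sampling step is not reproduced here.

```python
# Label-preserving, rigid augmentation applied jointly to image and mask
# (rotations by multiples of 90 degrees and horizontal/vertical flips only).
import numpy as np

RARE_CLASSES = (2, 3, 4)  # hypothetical indices for mustards, legumes, Polygonaceae

def rigid_augment(image, mask, rng):
    """Apply the same random rotation/flip to an image patch and its mask."""
    k = int(rng.integers(0, 4))                        # 0, 90, 180 or 270 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:
        image, mask = np.fliplr(image), np.fliplr(mask)
    if rng.random() < 0.5:
        image, mask = np.flipud(image), np.flipud(mask)
    return image.copy(), mask.copy()

def oversample_rare(images, masks, n_copies=3, seed=0):
    """Append augmented copies of patches that contain any under-represented class."""
    rng = np.random.default_rng(seed)
    extra_imgs, extra_masks = [], []
    for img, msk in zip(images, masks):
        if np.isin(msk, RARE_CLASSES).any():
            for _ in range(n_copies):
                a_img, a_msk = rigid_augment(img, msk, rng)
                extra_imgs.append(a_img)
                extra_masks.append(a_msk)
    if not extra_imgs:
        return images, masks
    return (np.concatenate([images, np.stack(extra_imgs)]),
            np.concatenate([masks, np.stack(extra_masks)]))
```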

3.2. Overfitting Prevention Strategies

To mitigate overfitting, we implemented early stopping. Training was halted when the validation loss stopped decreasing for a specified number of epochs (e.g., 10 epochs), ensuring that training continued only as long as the model was improving its performance on unseen data. Additionally, batch normalization was employed to stabilize and accelerate training, and class weighting in the loss function helped manage class imbalances. These strategies collectively contributed to preventing overfitting and enhancing the model’s generalization ability.

3.3. Model Performance

The comparison of UNet semantic segmentation models using different backbone architectures reveals nuanced differences in performance metrics (Table 3). Across all backbone configurations, including ResNet34, EfficientNet, InceptionV3, DenseNet, and a version without a specified backbone, there is notable consistency in accuracy, precision, recall, and F1 score, with variations typically within 1–2 percentage points. However, when assessing metrics more tailored to semantic segmentation tasks, such as mean IoU and Jaccard score, subtle disparities emerge. EfficientNet and DenseNet exhibit slightly higher mean IoU and Jaccard scores than ResNet34 and InceptionV3, highlighting their marginally superior ability to accurately segment objects in images. For instance, EfficientNet achieves an accuracy of 85.4% and DenseNet 83.6%, with corresponding mean IoU scores of 59.8% and 52.1%, respectively. These results underscore the importance of selecting an appropriate backbone architecture, as models with dedicated backbones designed for image segmentation tasks demonstrate enhanced performance, particularly in terms of mean IoU and Jaccard score, compared to the model without a specified backbone.
The training and validation loss curves for all backbone architectures (Figure 5) show a consistent decrease over epochs, indicating effective convergence without severe overfitting. Models with pretrained backbones, particularly EfficientNet-B0 and ResNet50, achieved faster loss reduction and lower final validation loss than those without a backbone, reflecting more efficient feature learning. DenseNet-121 and Inception-V3 also demonstrated stable convergence, though with slightly higher final losses. The no-backbone model exhibited the slowest convergence and highest validation loss, confirming the benefit of transfer learning in improving model generalization and segmentation performance.
As shown in Table 4, all backbone architectures achieved high accuracy across most vegetation classes, with particularly strong performance for Plantaginaceae, vine, and mustards (IoU > 0.80 for the best-performing models). Among the architectures, EfficientNet-B0 and ResNet50 consistently produced the highest per-class precision and IoU values, indicating superior discrimination of vegetation types. In contrast, classes with limited spatial coverage, such as Polygonaceae and legumes, exhibited lower recall and IoU across all models, reflecting the effect of class imbalance on segmentation accuracy. Overall, backbone-based models outperformed the no-backbone configuration, confirming the benefit of transfer learning for fine-scale vegetation mapping.
The confusion matrices (Figure 6) demonstrate that all backbone-based U-Net models achieved strong class discrimination, with the highest accuracy for Plantaginaceae, vine, and mustards, which showed clear diagonal dominance and minimal misclassification. Misclassifications were more common among visually similar or spatially co-occurring vegetation classes such as graminoids, composite, and other forbs. Among the architectures, EfficientNet-B0 and ResNet50 exhibited the most distinct class separation, while the no-backbone model showed higher confusion across categories. Overall, the results confirm that transfer learning significantly enhances the model’s ability to differentiate vegetation types at fine spatial scales.
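For readers who wish to reproduce the class-resolved scores, the per-class IoU values in Table 4 can be derived directly from such a pixel-level confusion matrix; the helper below is a generic sketch (with rows taken as ground truth and columns as predictions), not the authors’ evaluation code.

```python
# Per-class IoU and mean IoU from a C x C pixel confusion matrix.
import numpy as np

def iou_from_confusion(cm):
    """cm[i, j] = number of pixels with true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp          # predicted as class c but labelled otherwise
    fn = cm.sum(axis=1) - tp          # labelled class c but predicted otherwise
    iou = tp / np.maximum(tp + fp + fn, 1e-9)
    return iou, iou.mean()
```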
After loading the trained model, a batch of test images and masks was generated using the validation data generator. Predictions were then computed for the test images, and the predicted masks were converted from categorical to integer format for visualization and IoU calculation. Finally, a randomly selected test image was visualized along with its corresponding ground truth mask and predicted mask for a qualitative assessment of the model’s performance (Figure 7).
Finally, the trained model was applied to a full-size image: the image was segmented into patches of the appropriate size, predictions were generated for each patch, and the resulting segmented patches were stitched together to reconstruct the predicted mask for the entire image, facilitating the application of the model to images beyond the validation dataset (see Figure 8).
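A compact sketch of this patch-predict-stitch procedure is given below, assuming the patchify/unpatchify utilities and a trained Keras model; the simple crop to a patch-size multiple and the [0, 1] normalization are assumptions consistent with the preprocessing described earlier.

```python
# Tile a full-resolution image, predict each tile, and stitch the class map back together.
import numpy as np
from patchify import patchify, unpatchify

def predict_large_image(model, image, patch=256):
    h, w = image.shape[:2]
    h_c, w_c = (h // patch) * patch, (w // patch) * patch
    image = image[:h_c, :w_c]                                # crop to a multiple of the patch size
    tiles = patchify(image, (patch, patch, 3), step=patch)   # shape (n_h, n_w, 1, patch, patch, 3)
    pred_tiles = np.zeros(tiles.shape[:2] + (patch, patch), dtype=np.uint8)
    for i in range(tiles.shape[0]):
        for j in range(tiles.shape[1]):
            x = tiles[i, j, 0][np.newaxis] / 255.0           # normalize as during training
            probs = model.predict(x, verbose=0)[0]           # (patch, patch, n_classes)
            pred_tiles[i, j] = np.argmax(probs, axis=-1)     # class index per pixel
    return unpatchify(pred_tiles, (h_c, w_c))                # reassembled full-size class map
```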

4. Discussion

Our study aims to establish a method for identifying vineyard inter-row groundcovers, with the ultimate goal of creating a comprehensive groundcover map and supporting decision-making for soil and cover crops management in viticulture. The management of cover crops in vineyards is an increasingly widespread agronomic practice, as it enhances soil fertility, increases biodiversity, and mitigates the effects of climate change, with particular attention to water availability management and erosion control [10]. The adoption of cover crops between vineyard rows helps restore soil ecosystem functions, contributing to a more sustainable and resilient viticulture [5]. However, the specific composition of plant communities significantly influences the benefits obtained. For this reason, it is crucial to develop advanced tools for classifying and monitoring soil biodiversity, enabling agronomic decisions based on precise and georeferenced data.
In this work, the potential of DL techniques applied to RGB images was evaluated in a vineyard located in the Franciacorta wine-growing area.
The use of artificial intelligence for cover crop recognition represents an innovative approach that enables the automation of vegetation analysis, significantly reducing monitoring time and costs compared to traditional methods based on manual sampling and field botanical analysis.
The main focus of the study was to distinguish between nine classes of groundcover. Beyond the vine and bare soil classes, seven different groups of cover crop communities were distinguished: graminoids, composites, mustards, legumes, Polygonaceae, Plantaginaceae, and other forbs. This classification is essential, as the phenology, morphology, and functional traits of plants within each group determine their contribution to ecosystem services in the vineyard. Cover crop communities play a crucial role in maintaining agroecosystem balance. Morphological and functional differences among species directly influence the ecosystem services they provide, including soil fertility regulation, improvement of soil physical structure, and control of interspecies competition. Graminoids, with their fibrous root systems, enhance the structural stability of the upper soil layers, contributing to erosion reduction and increased water infiltration capacity [32]. Brassicaceae, on the other hand, can aid in soil biofumigation, reducing pathogen presence and promoting a better microbial balance [33]. Legumes, through atmospheric nitrogen fixation, improve nutrient availability and reduce dependency on chemical fertilizers, supporting a more sustainable agricultural model [34]. Polygonaceae and Plantaginaceae play a complementary role by enhancing soil structure at greater depths and improving functional biodiversity through their ability to attract pollinators and natural predators of pests. The precise identification and mapping of these plant communities are therefore fundamental for optimizing vineyard management, allowing for the selection of the most suitable species combinations based on local pedoclimatic conditions and specific agronomic objectives.
Using a total of 24 RGB images, taken with a drone at 8 m height, five UNet-based deep learning models with different encoder backbones [24] were trained and tested to assess their accuracy in predicting the observed soil cover. By employing a UNet architecture, which excels in pixel-wise classification tasks through its symmetrical encoder–decoder structure and skip connections, the model effectively segments UAV RGB images into predefined classes such as vine, soil, and various vegetation types. The integration of advanced DL techniques, such as semantic segmentation with backbones, offers a notable improvement in accuracy over methods without backbones, as evidenced by the performance metrics. The use of backbones such as EfficientNet [27], ResNet34 [25], InceptionV3 [26], and DenseNet [28] provided a robust foundation for feature extraction, enhancing the model’s ability to capture hierarchical features and spatial relationships within the images. Even though all the architectures analyzed showed high performance, exceeding an accuracy of 70%, the UNet model with an EfficientNet backbone stood out above the rest, achieving an accuracy of 85.4%, with a precision of 84.97%. Unlike generalized datasets, ours is tailored to the unique ecological conditions and management practices of vineyards, making direct comparisons with other segmentation benchmarks difficult. To assess our UNet model, we therefore compared it to relevant studies in agricultural image segmentation using aerial imagery. While many studies, such as those by Zuo and Li [29] on weed segmentation in corn fields and Shahi et al. [30] on weed detection using UAV images, focus on different crop types and scales, our dataset’s emphasis on vineyard cover crops requires a more granular analysis. Our UNet architecture aligns with common practices in image segmentation, as exemplified by Wang et al. [31] and Zhao et al. [35].
The integration of advanced deep learning and remote sensing techniques opens new possibilities for vineyard management. Automating biodiversity mapping enables real-time monitoring of the evolution of plant communities within the vineyard, facilitating adaptive management strategies by adjusting cover crop selection based on actual agronomic needs. This approach reduces monitoring costs by eliminating the need for extensive manual sampling and optimizes resource management, such as fertilization and irrigation, based on the composition of plant communities and their interactions with the soil. Although the specific case analyzed here concentrates on the inter-row zones, since the under-vine (in-row) areas were tilled and thus classified as soil, the broader goal of this research is to monitor the entire vineyard groundcover, including all cover crop communities visible from aerial imagery. In situations where vegetation is also present beneath the vines, these areas could either be included in the analysis, at least for the portions visible from above and not obscured by the canopy, or excluded during the image pre-processing phase, depending on the study objectives.
The results of this study highlight the potential of DL models, particularly those utilizing advanced backbone architectures, to enhance the precision and efficiency of groundcover classification in vineyards. This advancement in technology provides valuable insights into the spatial distribution and ecological roles of different groundcover types, which can significantly aid vineyard management practices. Specifically, this study represents the first step in applying DL to the classification of cover crop communities in vineyard agroecosystems. The results demonstrate that innovative application of these technologies can address complex, long-standing challenges such as monitoring vineyard biodiversity. Moreover, results obtained in the present study provide the groundwork for future studies on how cover crops may contribute to the ecological balance and sustainability of vineyard ecosystems through ecosystem services provision. However, some limitations have to be underlined. The first key limitation is the potential for bias arising from class imbalance in the dataset. While class-balancing techniques (targeted data augmentation and class weighting) were employed, under-represented classes may still bias predictions, particularly with unseen data. Future research should address this by expanding the dataset to encompass a broader range of vineyard types, soil conditions, and geographic locations. In terms of outputs obtained, although the DL model developed is able to automatically discriminate among nine classes of groundcover, it needs to be evolved in order to generate a real operational tool. Specifically, the results obtained from high-resolution images need to be replicated on lower-resolution images to make the system applicable to much larger areas in hectares, significantly reducing the costs and time required for UAV image acquisition.
This study demonstrated the potential of RGB imagery for identifying the seven types of cover crops, which can be associated with specific botanical families and, when combined with additional data sources such as multispectral or LiDAR imagery, can be further correlated with ecological traits of the vineyard. This information enables a better understanding of the agroecosystem’s health status and biodiversity. From a decision-making perspective, these results can support vineyard managers at both operational and strategic levels. Operationally, they can be used to adjust mowing or tillage schedules based on the spatial distribution and growth dynamics of cover crops and for the planning of sowing operations, supporting the selection of optimal species mixtures. Strategically, this approach can support the evaluation of the relationship between cover crop performance and vine development and encourage the adoption of more sustainable vineyard management practices aimed at improving soil health, biodiversity, and overall vineyard resilience.
This study represents a step forward in the application of artificial intelligence to vineyard management, providing an innovative approach for soil biodiversity monitoring. The integration of deep learning, UAVs, and remote sensing enables the development of operational tools for more sustainable viticulture, based on precise and up-to-date data [36]. Our approach provides a solid scientific foundation for future developments, with the goal of expanding research across different vineyard types and pedoclimatic conditions, optimizing segmentation models for large-scale operational applications, and promoting the adoption of AI in agriculture to enhance biodiversity management and support more sustainable practices.

5. Conclusions

This study validates the potential of applying DL models to soil cover classification, showing that the U-Net model with the EfficientNet backbone is the most appropriate for the case study. Such a tool would simplify the work of winegrowers when monitoring biodiversity in the field and provide valuable information for decision-making related to soil and vegetation management in viticulture.
Future research should focus on extending the model’s capacity to cover larger areas more efficiently, incorporating multispectral images, while optimizing processing time and costs. Additionally, expanding the dataset to include a wider variety of vineyard types, managements, soil conditions, and geographical contexts will be essential to improve the model’s generalizability and applicability across different viticultural environments.

Author Contributions

Conceptualization, I.G., G.T.W. and G.G.; methodology, I.G., G.T.W., F.G. and S.M.; software, G.T.W., A.S.M., L.F., C.B. and A.S.; validation, I.G., A.S.M., L.F. and C.B.; formal analysis, I.G., G.T.W. and A.S.; investigation, I.G., G.T.W., F.G. and S.M.; resources, G.G. and A.S.; data curation, G.T.W., A.S.M., L.F. and C.B.; writing—original draft preparation, I.G., G.T.W., A.S.M., L.F. and C.B.; writing—review and editing, I.G., A.S.M. and A.S.; visualization, I.G. and G.G.; supervision, G.G.; project administration, I.G., A.S. and G.G. All authors have read and agreed to the published version of the manuscript.

Funding

The study has been partially supported by Fondazione Cariplo (Italy) through Project 2023-3340, Data Science-Based Adaptive Solutions and Technologies for Agriculture and Climate (DATA) and by “Regione Lombardia” (Italy) under the project “Biodiversità, suolo e servizi ecosistemici. Strategie, metodi e tecniche per la realizzazione di food system robusti, resilienti e sostenibili”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset presented in this study is publicly available at https://doi.org/10.5281/zenodo.17701564 (accessed on 5 November 2025).

Acknowledgments

The authors would like to thank Azienda Agricola Ricci Curbastro for their support and availability during the development of this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Homet, P.; Gallardo-Reina, M.Á.; Aguiar, J.F.; Liberal, I.M.; Casimiro-Soriguer, R.; Ochoa-Hueso, R. Viticulture and the European Union’s Common Agricultural Policy (CAP): Historical Overview, Current Situation and Future Perspective. J. Sustain. Agric. Environ. 2024, 3, e12099. [Google Scholar] [CrossRef]
  2. Blanco-Canqui, H.; Shaver, T.M.; Lindquist, J.L.; Shapiro, C.A.; Elmore, R.W.; Francis, C.A.; Hergert, G.W. Cover Crops and Ecosystem Services: Insights from Studies in Temperate Soils. Agron. J. 2015, 107, 2449–2474. [Google Scholar] [CrossRef]
  3. Daryanto, S.; Fu, B.; Wang, L.; Jacinthe, P.-A.; Zhao, W. Quantitative Synthesis on the Ecosystem Services of Cover Crops. Earth-Sci. Rev. 2018, 185, 357–373. [Google Scholar] [CrossRef]
  4. Eckert, M.; Mathulwe, L.; Gaigher, R.; Joubert, L.; Pryke, J. Native Cover Crops Enhance Arthropod Diversity in Vineyards of the Cape Floristic Region. J. Insect Conserv. 2020, 24, 133–1479. [Google Scholar] [CrossRef]
  5. Novara, A.; Catania, V.; Tolone, M.; Gristina, L.; Laudicina, V.A.; Quatrini, P. Cover Crop Impact on Soil Organic Carbon, Nitrogen Dynamics and Microbial Diversity in a Mediterranean Semiarid Vineyard. Sustainability 2020, 12, 3256. [Google Scholar] [CrossRef]
  6. Labeyrie, V.; Renard, D.; Aumeeruddy-Thomas, Y.; Benyei, P.; Caillon, S.; Calvet-Mir, L.; Carrière, S.M.; Demongeot, M.; Descamps, E.; Braga Junqueira, A.; et al. The Role of Crop Diversity in Climate Change Adaptation: Insights from Local Observations to Inform Decision Making in Agriculture. Curr. Opin. Environ. Sustain. 2021, 51, 15–23. [Google Scholar] [CrossRef]
  7. Crotty, F.V.; Stoate, C. The Legacy of Cover Crops on the Soil Habitat and Ecosystem Services in a Heavy Clay, Minimum Tillage Rotation. Food Energy Secur. 2019, 8, e00169. [Google Scholar] [CrossRef]
  8. Du, K.-L.; Swamy, M.N.S. Deep Learning. In Neural Networks and Statistical Learning; Du, K.-L., Swamy, M.N.S., Eds.; Springer: London, UK, 2019; pp. 717–736. ISBN 978-1-4471-7452-3. [Google Scholar]
  9. Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Deep Learning Techniques to Classify Agricultural Crops through UAV Imagery: A Review. Neural Comput. Appl. 2022, 34, 9511–9536. [Google Scholar] [CrossRef]
  10. Mărculescu, S.-I.; Badea, A.; Teodorescu, R.I.; Begea, M.; Frîncu, M.; Bărbulescu, I.D. Application of Artificial Intelligence Technologies in Viticulture. Sci. Pap. Ser. Manag. Econ. Eng. Agric. Rural. Dev. 2024, 24, 563–578. [Google Scholar]
  11. Epifani, L.; Caruso, A. A Survey on Deep Learning in UAV Imagery for Precision Agriculture and Wild Flora Monitoring: Datasets, Models and Challenges. Smart Agric. Technol. 2024, 9, 100625. [Google Scholar] [CrossRef]
  12. Abad, J.; Hermoso de Mendoza, I.; Marín, D.; Orcaray, L.; Santesteban, L.G. Cover Crops in Viticulture. A Systematic Review (1): Implications on Soil Characteristics and Biodiversity in Vineyard. OENO One 2021, 55, 295–312. [Google Scholar] [CrossRef]
  13. Van Sundert, K.; Arfin Khan, M.A.S.; Bharath, S.; Buckley, Y.M.; Caldeira, M.C.; Donohue, I.; Dubbert, M.; Ebeling, A.; Eisenhauer, N.; Eskelinen, A.; et al. Fertilized Graminoids Intensify Negative Drought Effects on Grassland Productivity. Glob. Change Biol. 2021, 27, 2441–2457. [Google Scholar] [CrossRef] [PubMed]
  14. Vandvik, V.; Althuizen, I.; Jaroszynska, F.; Krüger, L.; Lee, H.; Goldberg, D.; Klanderud, K.; Olsen, S.; Telford, R.; Östman, S.; et al. The Role of Plant Functional Groups Mediating Climate Impacts on Carbon and Biodiversity of Alpine Grasslands. Sci. Data 2022, 9, 451. [Google Scholar] [CrossRef]
  15. Perrone, S.; Grossman, J.; Liebman, A.; Wells, S.; Sooksa-nguan, T.; Jordan, N. Legume Cover Crop Contributions to Ecological Nutrient Management in Upper Midwest Vegetable Systems. Front. Sustain. Food Syst. 2022, 6, 712152. [Google Scholar] [CrossRef]
  16. Muhammad, I.; Wang, J.; Sainju, U.M.; Zhang, S.; Zhao, F.; Khan, A. Cover Cropping Enhances Soil Microbial Biomass and Affects Microbial Community Structure: A Meta-Analysis. Geoderma 2021, 381, 114696. [Google Scholar] [CrossRef]
  17. Richards, A.; Estaki, M.; Úrbez-Torres, J.R.; Bowen, P.; Lowery, T.; Hart, M. Cover Crop Diversity as a Tool to Mitigate Vine Decline and Reduce Pathogens in Vineyard Soils. Diversity 2020, 12, 128. [Google Scholar] [CrossRef]
  18. Sáenz-Romo, M.G.; Veas-Bernal, A.; Martínez-García, H.; Campos-Herrera, R.; Ibáñez-Pascual, S.; Martínez-Villar, E.; Pérez-Moreno, I.; Marco-Mancebón, V.S. Ground Cover Management in a Mediterranean Vineyard: Impact on Insect Abundance and Diversity. Agric. Ecosyst. Environ. 2019, 283, 106571. [Google Scholar] [CrossRef]
  19. Björkman, T.; Shail, J.W. Using a Buckwheat Cover Crop for Maximum Weed Suppression after Early Vegetables. HortTechnology 2013, 23, 575–580. [Google Scholar] [CrossRef]
  20. Miglécz, T.; Valkó, O.; Török, P.; Deák, B.; Kelemen, A.; Donkó, Á.; Drexler, D.; Tóthmérész, B. Establishment of Three Cover Crop Mixtures in Vineyards. Sci. Hortic. 2015, 197, 117–123. [Google Scholar] [CrossRef]
  21. Lundholm, J.T. Green Roof Plant Species Diversity Improves Ecosystem Multifunctionality. J. Appl. Ecol. 2015, 52, 726–734. [Google Scholar] [CrossRef]
  22. Trisakti, B. Vegetation Type Classification and Vegetation Cover Percentage Estimation in Urban Green Zone Using Pleiades Imagery. IOP Conf. Ser. Earth Environ. Sci. 2017, 54, 012003. [Google Scholar] [CrossRef]
  23. Grassland-Modelling-Report.Pdf. Available online: https://ec.europa.eu/eurostat/documents/205002/9722562/Grassland-Modelling-Report.pdf (accessed on 1 September 2025).
  24. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  25. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  26. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  27. Tan, M.; Le, Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the 36th International Conference on Machine Learning (PMLR), Long Beach, CA, USA, 10–15 June 2019; pp. 6105–6114. [Google Scholar]
  28. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  29. Zuo, Y.; Li, W. An Improved UNet Lightweight Network for Semantic Segmentation of Weed Images in Corn Fields. CMC 2024, 79, 4413–4431. [Google Scholar] [CrossRef]
  30. Shahi, T.B.; Dahal, S.; Sitaula, C.; Neupane, A.; Guo, W. Deep Learning-Based Weed Detection Using UAV Images: A Comparative Study. Drones 2023, 7, 624. [Google Scholar] [CrossRef]
  31. Wang, Y.; Gu, L.; Jiang, T.; Gao, F. MDE-UNet: A Multitask Deformable UNet Combined Enhancement Network for Farmland Boundary Segmentation. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5. [Google Scholar] [CrossRef]
  32. Martin, A.R.; Isaac, M.E. Functional Traits in Agroecology: Advancing Description and Prediction in Agroecosystems. J. Appl. Ecol. 2018, 55, 5–11. [Google Scholar] [CrossRef]
  33. O’Farrell, C.; Forge, T.; Hart, M.M. Using Brassica Cover Crops as Living Mulch in a Vineyard, Changes over One Growing Season. Int. J. Plant Biol. 2023, 14, 1105–1116. [Google Scholar] [CrossRef]
  34. Ghafoor, A.Z.; Javed, H.H.; Karim, H.; Studnicki, M.; Ali, I.; Yue, H.; Xiao, P.; Asghar, M.A.; Brock, C.; Wu, Y. Biological Nitrogen Fixation for Sustainable Agriculture Development Under Climate Change–New Insights From a Meta-Analysis. J. Agron. Crop Sci. 2024, 210, e12754. [Google Scholar] [CrossRef]
  35. Zhao, X.; Yuan, Y.; Song, M.; Ding, Y.; Lin, F.; Liang, D.; Zhang, D. Use of Unmanned Aerial Vehicle Imagery and Deep Learning UNet to Extract Rice Lodging. Sensors 2019, 19, 3859. [Google Scholar] [CrossRef]
  36. Shahi, T.B.; Xu, C.-Y.; Neupane, A.; Guo, W.; Shahi, T.B.; Xu, C.-Y.; Neupane, A.; Guo, W. Machine Learning Methods for Precision Agriculture with UAV Imagery: A Review. Electron. Res. Arch. 2022, 30, 4277–4317. [Google Scholar] [CrossRef]
Figure 1. Progressive zoom into the study site: (a) Italy and the Lombardy region, (b) detail of Lombardy, (c) municipality of Capriolo, and (d) vineyard of interest.
Figure 2. Workflow of the semantic segmentation process applied to high-resolution UAV images for groundcover classification in vineyard inter-rows.
Figure 3. Illustration of four randomly selected image patches (A, B, C, D) of size 256 × 256 pixels, extracted from the original vineyard images along with their corresponding ground truth masks. These patches are used to train and evaluate a deep-learning model. The legend describes the color representing each class (In the case of these images, no patches of the mustard variety are observed).
Figure 4. The architecture of the UNet Semantic Segmentation Model.
Figure 5. Training and validation loss curves for U-Net models with different backbone architectures: (a) ResNet34, (b) InceptionV3, (c) EfficientNet-B0, (d) DenseNet121, and (e) no-backbone configuration. The steady decline in both training and validation losses indicates stable convergence across all models, with pretrained backbones (ResNet34, InceptionV3, EfficientNet-B0, and DenseNet121) achieving faster convergence and lower final validation losses compared to the model without a backbone.
Figure 6. Confusion matrices for U-Net models with different backbone architectures: (a) ResNet34, (b) InceptionV3, (c) EfficientNet-B0, (d) DenseNet121, and (e) no-backbone configuration. The matrices illustrate class-wise prediction performance across nine vegetation and soil categories. High diagonal values indicate accurate classification, with Plantaginaceae, vine, and mustards showing the highest true positive rates, while classes with smaller spatial coverage, such as Polygonaceae and legumes, exhibited higher misclassification rates.
Figure 7. Visualization of a randomly selected test image, ground truth mask, and predicted mask. The figure illustrates the qualitative assessment of the model’s performance, comparing the original test image to its corresponding ground truth and predicted masks. The legend describes the color representing each class.
Figure 8. Original image, ground truth mask, and final segmented mask obtained by applying the trained model. The model segments the large image into patches for processing, generates predictions for each patch, and stitches the resulting segmented patches together to reconstruct the predicted mask for the entire image. The legend describes the color representation of each class.
Table 1. The cover crops identified in the images and their functional roles.
Cover Crop | Functional Role
Graminoids | Combating soil erosion and weed competition [13]
Legumes | Nitrogen fixation and the enhancement of soil health and biological fertility [14,15]
Mustards | Suppression of soil-borne pathogens in vineyards and nurseries [16]
Composites | Supporting beneficial insects [17]
Polygonaceae | Suppress weeds due to rapid growth and allelopathic effects, also hosting many arthropods that contribute to pest control [18]
Plantaginaceae | Significant suppression of weeds [19]
Other forbs | Contribute to soil structure improvement and impact on the dynamics of organic carbon in the soil [20,21]
Table 2. Coverage area distribution for the different classes in each image, measured as a percentage. Each row represents a distinct image, while the columns report the coverage percentage for composites, mustards, legumes, Polygonaceae, Plantaginaceae, other forbs, graminoids, soil, and vine.
Image | Composite (%) | Mustards (%) | Legumes (%) | Polygonaceae (%) | Plantaginaceae (%) | Other Forbs (%) | Graminoids (%) | Soil (%) | Vine (%)
1 | 14.56 | 0.18 | 0.54 | 0.38 | 0.62 | 20.06 | 3.09 | 9.92 | 29.18
2 | 7.12 | 0.11 | 0.27 | 1.50 | 0.37 | 1.03 | 6.44 | 8.85 | 25.54
3 | 7.07 | 0.00 | 7.54 | 1.91 | 18.42 | 7.73 | 15.25 | 16.44 | 25.32
4 | 8.23 | 0.04 | 6.39 | 5.47 | 12.10 | 2.90 | 13.95 | 19.68 | 28.29
5 | 9.70 | 0.12 | 1.07 | 0.76 | 1.28 | 24.27 | 4.48 | 11.29 | 36.70
6 | 1.64 | 0.82 | 1.70 | 0.00 | 2.72 | 8.38 | 4.87 | 25.44 | 44.36
7 | 4.30 | 0.01 | 0.19 | 0.11 | 6.18 | 12.94 | 24.22 | 4.75 | 21.69
8 | 11.32 | 0.02 | 7.09 | 6.90 | 6.86 | 7.84 | 11.97 | 15.91 | 31.06
9 | 7.64 | 0.00 | 3.69 | 0.87 | 11.68 | 15.48 | 19.85 | 16.77 | 23.76
10 | 0.82 | 0.00 | 3.36 | 3.09 | 13.53 | 5.06 | 14.30 | 20.71 | 32.00
11 | 5.60 | 0.01 | 2.51 | 2.81 | 10.70 | 9.98 | 21.69 | 17.73 | 28.39
12 | 5.72 | 0.00 | 3.81 | 3.67 | 10.71 | 4.19 | 16.06 | 21.83 | 33.90
13 | 24.56 | 0.00 | 1.64 | 0.29 | 0.95 | 15.70 | 20.71 | 0.92 | 24.24
14 | 8.14 | 0.00 | 5.25 | 0.30 | 12.59 | 4.98 | 14.86 | 23.11 | 30.42
15 | 12.87 | 0.05 | 1.71 | 1.21 | 1.52 | 23.50 | 3.65 | 7.76 | 19.29
16 | 4.31 | 0.01 | 0.19 | 0.11 | 6.18 | 12.94 | 24.22 | 4.75 | 21.69
17 | 8.98 | 0.26 | 0.25 | 2.45 | 0.04 | 11.97 | 7.64 | 13.33 | 27.78
18 | 0.82 | 0.00 | 3.21 | 0.66 | 17.08 | 4.53 | 15.66 | 22.07 | 28.22
19 | 8.98 | 0.26 | 0.25 | 2.46 | 0.05 | 11.97 | 7.64 | 13.33 | 27.78
20 | 7.07 | 0.00 | 7.54 | 1.91 | 18.42 | 7.73 | 15.25 | 16.44 | 25.32
21 | 8.23 | 0.04 | 6.39 | 5.47 | 12.10 | 2.90 | 13.95 | 19.68 | 28.29
22 | 11.32 | 0.03 | 7.09 | 6.90 | 6.86 | 7.84 | 11.97 | 15.91 | 31.06
23 | 5.60 | 0.02 | 2.51 | 2.81 | 10.70 | 9.98 | 21.69 | 17.73 | 28.39
24 | 3.87 | 0.00 | 0.50 | 2.61 | 0.30 | 17.42 | 10.57 | 21.57 | 34.51
Table 3. Performance Metrics of UNet Semantic Segmentation Models with Different Backbone Architectures and without.
Backbone | Accuracy | Precision | Recall | F1 | Mean IoU | Jaccard Score
ResNet34 | 80.0 | 79.8 | 79.3 | 79.5 | 50.5 | 63.1
EfficientNet B0 | 85.4 | 84.97 | 75.9 | 80.2 | 59.8 | 73.0
Inception V3 | 82.9 | 82.3 | 82.6 | 82.4 | 53.8 | 66.4
DenseNet | 83.6 | 83.9 | 83.4 | 83.6 | 52.1 | 65.1
Without Backbone | 78.0 | 77.9 | 77.8 | 77.8 | 48.9 | 61.2
Accuracy: This metric measures the overall correctness of the segmentation by calculating the ratio of correctly predicted pixels to the total number of pixels. Precision: Precision quantifies the model’s ability to correctly identify positive predictions among all predicted positives. It’s calculated as the ratio of true positives to the sum of true positives and false positives. Recall: Recall, also known as sensitivity, measures the ability of the model to detect all relevant instances of the class in the image. It’s calculated as the ratio of true positives to the sum of true positives and false negatives. F1 Score: The F1 score is the harmonic mean of precision and recall. It provides a balanced measure between precision and recall and is calculated as 2 × (precision × recall)/(precision + recall). Mean IoU: Mean IoU calculates the average IoU across all classes. It’s a popular metric for semantic segmentation tasks as it provides an overall measure of segmentation accuracy across different classes. Jaccard Score (IoU): The Jaccard score, or Intersection over Union (IoU), measures the ratio of the intersection of the predicted and ground truth segmentation masks to their union. It evaluates the overlap between the predicted and ground truth regions.
Table 4. Per-class accuracy, precision, recall, and IoU values for U-Net models with different backbones on the test dataset.
Backbone | Class | Accuracy | Precision | Recall | IoU
EfficientNet-B0 | Plantaginaceae | 0.994 | 0.917 | 0.91 | 0.841
EfficientNet-B0 | Polygonaceae | 0.99 | 0.961 | 0.398 | 0.392
EfficientNet-B0 | composite | 0.872 | 0.561 | 0.648 | 0.43
EfficientNet-B0 | graminoids | 0.942 | 0.847 | 0.461 | 0.426
EfficientNet-B0 | legumes | 0.996 | 0.747 | 0.534 | 0.452
EfficientNet-B0 | mustards | 1.0 | 0.862 | 0.859 | 0.754
EfficientNet-B0 | other forbs | 0.944 | 0.829 | 0.793 | 0.682
EfficientNet-B0 | soil | 0.952 | 0.834 | 0.844 | 0.723
EfficientNet-B0 | vine | 0.929 | 0.877 | 0.956 | 0.843
DenseNet-121 | Plantaginaceae | 0.994 | 0.917 | 0.925 | 0.854
DenseNet-121 | Polygonaceae | 0.991 | 0.943 | 0.473 | 0.46
DenseNet-121 | composite | 0.862 | 0.544 | 0.534 | 0.369
DenseNet-121 | graminoids | 0.929 | 0.632 | 0.551 | 0.417
DenseNet-121 | legumes | 0.996 | 0.766 | 0.464 | 0.407
DenseNet-121 | mustards | 1.0 | 0.831 | 0.825 | 0.706
DenseNet-121 | other forbs | 0.945 | 0.817 | 0.801 | 0.679
DenseNet-121 | soil | 0.943 | 0.805 | 0.817 | 0.682
DenseNet-121 | vine | 0.935 | 0.893 | 0.951 | 0.854
Inception-V3 | Plantaginaceae | 0.994 | 0.937 | 0.911 | 0.858
Inception-V3 | Polygonaceae | 0.983 | 0.84 | 0.184 | 0.178
Inception-V3 | composite | 0.863 | 0.517 | 0.603 | 0.386
Inception-V3 | graminoids | 0.939 | 0.804 | 0.45 | 0.405
Inception-V3 | legumes | 0.996 | 0.767 | 0.494 | 0.429
Inception-V3 | mustards | 1.0 | 0.661 | 0.879 | 0.606
Inception-V3 | other forbs | 0.925 | 0.751 | 0.748 | 0.599
Inception-V3 | soil | 0.938 | 0.777 | 0.819 | 0.663
Inception-V3 | vine | 0.932 | 0.888 | 0.95 | 0.849
No-backbone | Plantaginaceae | 0.992 | 0.919 | 0.854 | 0.794
No-backbone | Polygonaceae | 0.992 | 0.887 | 0.427 | 0.405
No-backbone | composite | 0.875 | 0.58 | 0.475 | 0.354
No-backbone | graminoids | 0.931 | 0.684 | 0.507 | 0.411
No-backbone | legumes | 0.996 | 0.665 | 0.435 | 0.357
No-backbone | mustards | 0.999 | 0.0 | 0.0 | 0.0
No-backbone | other forbs | 0.924 | 0.745 | 0.718 | 0.576
No-backbone | soil | 0.923 | 0.733 | 0.772 | 0.603
No-backbone | vine | 0.894 | 0.823 | 0.943 | 0.784
ResNet50 | Plantaginaceae | 0.994 | 0.906 | 0.933 | 0.851
ResNet50 | Polygonaceae | 0.985 | 0.963 | 0.215 | 0.213
ResNet50 | composite | 0.865 | 0.544 | 0.58 | 0.39
ResNet50 | graminoids | 0.939 | 0.725 | 0.549 | 0.455
ResNet50 | legumes | 0.997 | 0.796 | 0.588 | 0.511
ResNet50 | mustards | 1.0 | 0.874 | 0.941 | 0.829
ResNet50 | other forbs | 0.938 | 0.825 | 0.748 | 0.646
ResNet50 | soil | 0.946 | 0.827 | 0.808 | 0.691
ResNet50 | vine | 0.918 | 0.856 | 0.954 | 0.822
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
