Article

Image Segmentation of Cottony Mass Produced by Euphyllura olivina (Hemiptera: Psyllidae) in Olive Trees Using Deep Learning

by Henry O. Velesaca 1, Francisca Ruano 2, Alice Gomez-Cantos 1 and Juan A. Holgado-Terriza 1,*
1 Software Engineering Department, Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, 18071 Granada, Spain
2 Department of Zoology, Faculty of Sciences, University of Granada, 18071 Granada, Spain
* Author to whom correspondence should be addressed.
Agriculture 2025, 15(23), 2485; https://doi.org/10.3390/agriculture15232485
Submission received: 13 October 2025 / Revised: 15 November 2025 / Accepted: 22 November 2025 / Published: 29 November 2025

Abstract

The olive psyllid (Euphyllura olivina), previously considered a secondary pest in Spain, is becoming more prevalent due to climate change and rising average temperatures. Its cottony wax secretions can cause substantial damage to olive crops under certain climatic conditions. Traditional monitoring methods for this pest are often labor-intensive, subjective, and impractical for large-scale surveillance. This study presents an automatic image segmentation approach based on deep learning to detect and quantify the cottony masses produced by E. olivina in olive trees. A well-annotated image dataset is developed and published, and a thorough evaluation of current camouflaged object detection (COD) methods is carried out for this task. Our results show that deep learning-based segmentation enables accurate and non-invasive assessment of pest symptoms, even in challenging visual conditions. However, further calibration and field validation are required before these methods can be deployed for operational integrated pest management. This work establishes a public dataset and a baseline benchmark, providing a foundation for future research and decision-support tools in precision agriculture.

1. Introduction

The olive psyllid, Euphyllura olivina (Costa, 1839), is a significant pest in Mediterranean olive groves, especially in North African countries, but its impact is increasing in other olive-producing regions such as Spain, where it is known as “Algodoncillo del Olivo”, because of climate change and the increase in mean temperatures in early spring [1,2]. Recent studies have highlighted the effects of climate change on olive production in Spain, further emphasizing the need for improved pest monitoring and management strategies [3]. Under these climatic conditions, this insect causes both direct and indirect damage during the flowering and fruit-setting stages [4,5]. One of the most characteristic symptoms of its presence is the production of conspicuous white waxy secretions by the nymphs, which cover inflorescences and young shoots, leading to flower drop, reduced fruit set, and the development of sooty mold that further affects photosynthesis and yield [6,7]. The extent of these cottony masses is closely related to the population density of the pest, making their detection and quantification a valuable proxy for monitoring E. olivina outbreaks [6].
Traditional monitoring methods rely on manual inspection and counting, which are labor-intensive, subjective, and often impractical for large-scale or high-frequency surveillance. Additionally, the inaccessibility of some olive farms in the mountains poses significant challenges. Recent advances in computer vision and image analysis offer promising alternatives for the automatic detection and quantification of pest symptoms in crops [8]. Although direct detection of this pest is complicated due to its small size (e.g., adults 2 to 3 mm) [9], the segmentation of the white cottony mass secretions present in images of olive trees could enable rapid, non-invasive estimation of E. olivina populations, facilitating timely and targeted pest management interventions [7,10,11].
This work presents a methodology for the image segmentation of the cottony mass produced by E. olivina using camouflaged object detection (COD) techniques. The quantification of the area covered by these secretions is intended to establish a foundation for future studies correlating cottony mass coverage with pest population density. This will ultimately contribute to the development of precision agriculture tools for integrated pest management in olive cultivation.
The key contributions of this work include:
  • The first well-annotated, publicly released benchmark dataset for the segmentation of cottony masses produced by E. olivina in olive trees.
  • Fine-tuning of ten state-of-the-art (SOTA) COD techniques for the segmentation of cottony masses produced by E. olivina.
  • A quantitative and qualitative evaluation and discussion of these ten SOTA COD techniques for the segmentation of cottony masses produced by E. olivina.
The manuscript is organized as follows. Section 2 presents the dataset, the techniques used, and the evaluation metrics. Then, Section 3 shows the experimental results, both quantitative and qualitative, on the dataset. Finally, Section 4 discusses the findings, Section 5 outlines future directions, and Section 6 presents the conclusions.

2. Proposed Methodology

This section presents the proposed methodology used to evaluate the image segmentation of cottony masses produced by Euphyllura olivina in olive trees (see Figure 1). As illustrated in Figure 2, the pipeline consists of six main stages:
  • Dataset Acquisition. A set of images containing cottony masses was collected from different sources.
  • Image Annotation. The collected images were annotated following strict guidelines to determine areas of cottony masses.
  • Training COD Techniques. COD techniques were applied using the annotated dataset for training image-segmentation models.
  • Testing COD Techniques. The trained models were evaluated on a test set to generate segmentation masks that identify the cottony masses.
  • Evaluation. Performance was assessed using both quantitative metrics and qualitative visual comparisons.
  • Result Interpretation. The segmentation outputs were analyzed to determine the effectiveness of each COD technique in detecting and quantifying the cottony masses under challenging visual conditions.

2.1. Dataset Acquisition

The study presents the first well-annotated benchmark dataset for segmenting cottony masses produced by Euphyllura olivina on olive trees. The dataset contains 442 images with a split for training (354 images), validation (66 images), and testing (22 images). Images were obtained from public repositories, including iNaturalist (https://www.inaturalist.org/, accessed on 18 September 2025) and ForestryImages (https://forestryimages.org, accessed on 18 September 2025), and supplemented with specific images taken in the province of Jaén, Spain. The images originate from heterogeneous sources, devices, and environmental conditions, including contributions from different regions (e.g., the USA, Europe, and North Africa; see Figure 3), which naturally introduces variability in viewpoint, optics, and scene composition. This heterogeneity is intentional to improve model generalization to real-world conditions. The test set of 22 images and validation set of 66 images include only original images, with no augmented images, ensuring strict separation between training/validation and testing.
The dataset was carefully constructed to avoid data leakage and ensure that images in the training, validation, and test splits are independent. Images from similar scenes were assigned to a single partition, preventing overlap between splits and supporting a fair evaluation of model performance.
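As an illustration of this leakage-aware partitioning, the sketch below groups images by scene before splitting, so that all images of a scene end up in a single partition. The file name annotations.csv and the scene_id column are hypothetical stand-ins for whatever scene grouping is available; the split ratios are illustrative only.

```python
# Minimal sketch of a group-aware train/validation/test split (assumed CSV with
# hypothetical columns "image" and "scene_id"; ratios are illustrative).
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

frames = pd.read_csv("annotations.csv")  # one row per image

# First split off the test images, keeping every image of a scene in one partition.
gss_test = GroupShuffleSplit(n_splits=1, test_size=0.05, random_state=42)
trainval_idx, test_idx = next(gss_test.split(frames, groups=frames["scene_id"]))
trainval, test = frames.iloc[trainval_idx], frames.iloc[test_idx]

# Then split the remainder into training and validation, again by scene.
gss_val = GroupShuffleSplit(n_splits=1, test_size=0.15, random_state=42)
train_idx, val_idx = next(gss_val.split(trainval, groups=trainval["scene_id"]))
train, val = trainval.iloc[train_idx], trainval.iloc[val_idx]
```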

2.2. Image Annotation

The annotation process was conducted using Roboflow (https://app.roboflow.com/, accessed on 18 September 2025), a web-based platform designed for computer vision tasks. Each image was annotated with meticulous attention to detail, following a set of strict guidelines to ensure consistency and accuracy across the dataset. First, annotators were instructed to: (a) exclusively label cottony mass instances, disregarding all other visual elements, and (b) ensure complete coverage by annotating every visible cottony mass without omission. Second, quality specifications required (a) individual polygon annotations for each cottony mass in multi-instance images, and (b) pixel-precise boundary refinement to accurately capture morphological features.
To improve the generalization of the models to be trained, a data augmentation process was performed, yielding a final count of 655 images split into training (567 images), validation (66 images), and testing (22 images) subsets. The original distribution is maintained in the validation and testing subsets. The following augmentations were applied to the original dataset (a minimal sketch reproducing them offline is shown after this list):
  • Flip: Horizontal, Vertical
  • 90° Rotate: Clockwise, Counter-Clockwise, Upside Down
  • Rotation: Between −15° and +15°
  • Shear: ±10° Horizontal, ±10° Vertical
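The augmentations above were applied through the annotation platform; the following Albumentations sketch is only an equivalent offline example, with file names given as placeholders. The same geometric transform is applied jointly to the image and its mask so annotations stay aligned.

```python
# Minimal sketch reproducing the listed augmentations with Albumentations
# (hypothetical file names; probabilities are illustrative).
import cv2
import albumentations as A

transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomRotate90(p=0.5),                                        # 90° clockwise / counter-clockwise / 180°
    A.Rotate(limit=15, border_mode=cv2.BORDER_CONSTANT, p=0.5),     # rotation between -15° and +15°
    A.Affine(shear={"x": (-10, 10), "y": (-10, 10)}, p=0.5),        # ±10° horizontal and vertical shear
])

image = cv2.imread("olive_tree.jpg")
mask = cv2.imread("olive_tree_mask.png", cv2.IMREAD_GRAYSCALE)
augmented = transform(image=image, mask=mask)   # mask receives the same geometric transform
aug_image, aug_mask = augmented["image"], augmented["mask"]
```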
Figure 4 shows examples of the annotated dataset. The first and third columns show RGB images with the presence of cottony mass, while the second and fourth columns show the binary masks that segment the presence of cottony mass. The resulting dataset is publicly available (see the Data Availability Statement). Additionally, Figure 5 (top) shows the scatter plot that visualizes the spatial distribution of the mask centroids (geometric centers) within the images. Each point represents the centroid of a mask, with its position determined by the X and Y coordinates. Also, Figure 5 (bottom) presents a histogram showing the percentage of each image area covered by masks. Coverage is calculated as the ratio of mask pixels to the total number of pixels in each image.
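The statistics in Figure 5 can be reproduced directly from the binary masks; the sketch below computes per-image mask centroids and coverage, assuming binary PNG masks in which non-zero pixels mark cottony mass (directory layout is hypothetical).

```python
# Minimal sketch of the dataset statistics (centroids and coverage) in Figure 5.
import glob
import cv2
import numpy as np

centroids, coverages = [], []
for path in glob.glob("masks/*.png"):
    mask = cv2.imread(path, cv2.IMREAD_GRAYSCALE) > 0
    ys, xs = np.nonzero(mask)
    if xs.size:                                    # skip images without annotated mass
        centroids.append((xs.mean(), ys.mean()))   # geometric center of the mask pixels
    coverages.append(100.0 * mask.sum() / mask.size)  # % of image area covered by masks
```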

2.3. Training COD Techniques

Camouflaged Object Detection (COD) is a specialized subfield of computer vision that focuses on identifying objects that are visually blended into their surroundings. The choice to focus on COD-based models was motivated by the unique visual challenges presented by the cottony mass, which often exhibits low contrast, irregular shapes, and blends into complex backgrounds. COD models are specifically designed to address these challenges, and have demonstrated superior performance over general-purpose segmentation frameworks such as U-Net++ [12] or DeepLabv3+ [13] in camouflaged object scenarios. This specialization makes them particularly suitable for the segmentation of E. olivina symptoms in olive trees.
In recent years, advances in deep learning have led to the development of state-of-the-art (SOTA) models specifically designed to address these challenges [14]. This section reviews a variety of approaches, highlighting how each method tackles the inherent difficulties of COD through different architectural strategies and loss functions.
For example, BASNet [15] utilizes a predict-and-refine strategy with a hybrid loss function (BCE, SSIM, IoU), where SSIM implicitly enhances edge quality, though it does not explicitly model edges. SINet [16] implements a biologically inspired search-and-identification framework, explicitly using edge cues to locate and segment camouflaged objects. EAMNet [17] features dual parallel branches for segmentation and edge detection, with cross-refinement that explicitly incorporates edge information into the learning process. HitNet [18] employs iterative prediction refinement through feedback loops, which indirectly improves boundary clarity without dedicated edge modeling.
For integrating global and local features, CTF-Net [19] merges CNN-based local features with Transformer-based global context, implicitly enhancing boundary precision. In contrast, BGNet [20] introduces a dedicated boundary-guidance branch to directly model object contours. C2F-Net [21] applies cross-level context fusion, implicitly strengthening edges for better structural coherence. DGNet [22] explicitly captures edge information via gradient flow, enabling the detection of subtle contrast variations in low-texture regions.
Addressing domain-specific challenges, PCNet [23] focuses on plant camouflage using multi-scale refinement, explicitly handling irregular plant edges. Lastly, OCENet [24] models aleatoric uncertainty to dynamically supervise regions with high uncertainty (such as boundaries), thereby implicitly enhancing edge detection. Collectively, these methods advance COD by balancing explicit edge guidance with implicit boundary refinement, adapting to a range of camouflage scenarios. Table 1 summarizes the distinctive characteristics of the evaluated SOTA COD methods.

2.4. Testing COD Techniques

All COD models are evaluated under the same simple test protocol to make results comparable. For each method, the checkpoint with the best validation performance is selected and used for inference on the test set. Inputs are resized to the resolution expected by each model and normalized according to the model’s original preprocessing, with no random transformations applied at test time. Inference is single-scale (no test-time augmentation), and each model produces a probability map that is resized back to the original image size for evaluation and visualization. For the figures, a binary mask obtained with a fixed threshold of 0.25 is also shown. No morphological or other post-processing refinements are applied, keeping the comparison focused on the models themselves. This simple setup is easy to reproduce and ensures a fair, like-for-like comparison across methods.
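A minimal PyTorch sketch of this shared protocol is given below. The checkpoint path, input size, normalization statistics, and the assumption of a single-output model are placeholders, since each COD method keeps its own original preprocessing and output structure.

```python
# Minimal sketch of the test protocol: single scale, no TTA, no post-processing.
import torch
import torch.nn.functional as F
import torchvision.transforms as T
from PIL import Image

# Hypothetical checkpoint storing the full model object selected on the validation set.
model = torch.load("best_val_checkpoint.pth", map_location="cpu").eval()

preprocess = T.Compose([
    T.Resize((352, 352)),                                            # model-specific input size
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("test_image.jpg").convert("RGB")
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))                   # assumes a single-output model
    prob = torch.sigmoid(logits)

# Resize the probability map back to the original resolution for evaluation/visualization.
prob = F.interpolate(prob, size=image.size[::-1], mode="bilinear", align_corners=False)
binary_mask = (prob > 0.25).squeeze().numpy()                        # fixed threshold used for the figures
```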

2.5. Evaluation

This study utilizes five widely adopted evaluation metrics to assess the performance of COD models. These metrics offer a comprehensive framework for analyzing detection accuracy and effectiveness across different approaches: Structure-measure ($S_\alpha$) [29], weighted F-measure ($F_\beta^w$) [30], Mean Absolute Error ($M$) [31], E-measure ($E_\phi$) [32], and F-measure ($F_\beta$) [33]. The metrics are computed at the original image resolution after up-sampling predictions.
The $S_\alpha$ metric is employed to evaluate the structural similarity between predicted and ground-truth maps, indicating the extent to which the overall structural information is preserved. The weighted F-measure ($F_\beta^w$) extends the traditional $F_\beta$ by incorporating spatial weights, thereby providing a more nuanced assessment of segmentation quality with a focus on boundary precision and the spatial significance of detected pixels. The Mean Absolute Error ($M$) measures pixel-level discrepancies between normalized predictions and ground truth, offering a direct indicator of prediction accuracy. The E-measure ($E_\phi$) simultaneously considers both global and local detection accuracy, grounded in human visual perception, to deliver a perceptually meaningful evaluation. Finally, the F-measure ($F_\beta$) combines precision and recall to provide a balanced review of overall detection performance.
For both the F-measure and E-measure, multiple scores are computed based on different precision–recall pairs, resulting in the adaptive F-measure ($F_\beta^{adp}$), mean F-measure ($F_\beta^{mean}$), and maximum F-measure ($F_\beta^{max}$). Similarly, the E-measure includes adaptive, mean, and maximum variants, denoted as $E_\phi^{adp}$, $E_\phi^{mean}$, and $E_\phi^{max}$, which are also used as evaluation metrics.
In addition to the metrics previously reported ($S_\alpha$, $F_\beta^w$, $M$, $E_\phi$, $F_\beta$), traditional metrics such as intersection over union (IoU) and Dice coefficient scores are also included. These metrics provide a direct and interpretable measure of the spatial overlap between predicted and ground-truth masks, which is central to the intended application in pest quantification. Specifically, the Intersection over Union (IoU) quantifies the ratio between the area of overlap and the area of union of the predicted and ground-truth masks, offering a strict assessment of segmentation accuracy. On the other hand, the Dice coefficient, also known as the F1 score for segmentation, measures the harmonic mean of precision and recall at the pixel level, emphasizing the balance between false positives and false negatives. Both metrics are widely used in image segmentation tasks, as they directly reflect how well the predicted regions match the actual annotated regions.
To provide a more comprehensive evaluation, specific IoU thresholds are also reported, including IoU@50 and IoU@75, which represent the average accuracy when the IoU exceeds the thresholds of 0.50 and 0.75, respectively. In addition, IoU@[50–95] is included, calculated using multiple IoU thresholds from 0.50 to 0.95 in increments of 0.05, following the COCO evaluation protocol. These variants provide a more complete view of segmentation performance, showing how well the model works under both easy and demanding conditions.
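The sketch below illustrates how these overlap metrics can be computed per image and aggregated across IoU thresholds. It is a minimal example assuming binary ground-truth masks and predictions binarized beforehand; the exact aggregation used in the paper may differ in detail.

```python
# Minimal sketch of IoU, Dice, and COCO-style thresholded IoU over a test set.
import numpy as np

def iou_dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Per-image IoU and Dice between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + gt.sum() + eps)
    return iou, dice

def iou_at_thresholds(ious, thresholds=np.arange(0.50, 1.0, 0.05)):
    """Fraction of test images whose IoU exceeds each threshold, averaged COCO-style."""
    ious = np.asarray(ious)
    per_threshold = [(ious >= t).mean() for t in thresholds]
    return {
        "IoU@50": per_threshold[0],                      # threshold 0.50
        "IoU@75": per_threshold[5],                      # threshold 0.75
        "IoU@[0.50:0.95]": float(np.mean(per_threshold)),
    }
```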

2.6. Training Details

This subsection summarizes the training configuration used across all COD techniques. Table 2 reports the optimizer, learning rate, batch size, number of epochs, scheduler, and loss functions employed per model. All models were trained under consistent splits and without test-time augmentation, as described in Section 2.4 and Section 2.5, to enable a fair comparison.

3. Results

The performance of the ten SOTA COD techniques is quantitatively evaluated on the test set using the metrics described in Section 2.5. Table 3 summarizes the results for each model, highlighting the top three performers for each metric.
Among all evaluated techniques, CTF-Net achieved the highest Structure-measure ($S_\alpha$ = 0.8101), indicating superior preservation of structural information in the segmented masks. PCNet and BGNet also performed strongly, with $S_\alpha$ scores of 0.8075 and 0.8022, respectively. In terms of the weighted F-measure ($F_\beta^w$), PCNet (0.6787) and HitNet (0.6735) outperformed the rest, reflecting a strong balance between precision and recall. The lowest Mean Absolute Error ($M$) is obtained by HitNet (0.0220), closely followed by PCNet (0.0234), indicating high pixel-level accuracy.
For the E-measure ($E_\phi$), which combines global and local detection accuracy, HitNet (0.9125) and OCENet (0.9099) achieved the best results, while CTF-Net and BGNet also demonstrated robust performance. The adaptive, mean, and maximum variants of the F-measure and E-measure further confirmed the consistency of these top-performing models.
Regarding the traditional metrics, the values in Table 4 indicate that CTF-Net achieves the best overall performance, obtaining the highest values for Dice (0.735), IoU (0.621), IoU@50 (0.562), IoU@75 (0.279), and IoU@[0.50:0.95] (0.302). BGNet and PCNet also demonstrate strong performance, particularly in the Dice and IoU metrics. In contrast, methods such as BASNet and C2F-Net show the lowest values in most metrics, indicating lower overlap and accuracy in segmentation. Overall, the table highlights significant differences in the effectiveness of the various techniques, with CTF-Net standing out as the most effective method for the evaluated segmentation task according to these classical indicators.
Qualitative results are illustrated in Figure 6 and Figure 7, where segmentation outputs from all models are compared against ground truth masks. The best-performing models, such as PCNet, CTF-Net, HitNet, and BGNet, consistently produced accurate segmentation, with minimal over-segmentation (red) and under-segmentation (blue) errors. These models are particularly effective in scenarios with low-contrast boundaries and complex backgrounds, which are characteristic of the cottony masses produced by E. olivina.
To assess the generalization of the four best techniques (as indicated by the quantitative results) in challenging scenarios, Figure 8 shows an olive tree with a high density of cottony mass. In this scenario, the techniques have difficulty reproducing the exact overlap with the GT (white areas), although most of the affected areas are detected to some extent. This image was taken of an olive tree in the province of Jaén, Spain, with a high population of E. olivina and abundant cottony mass.

4. Discussion

Interpretation framework and robustness: We interpret $S_\alpha$ as an indicator of structural preservation, $F_\beta^w$ and $F_\beta$ (and their variants) as indicators of precision–recall balance, and $E_\phi$ as an indicator of perceptual alignment, whereas lower $M$ indicates fewer pixel-wise errors. Given the small test set (22 images), we report 95% confidence intervals to qualify the stability of model rankings and to emphasize practical reliability rather than single-metric peaks.
The results obtained in this study provide valuable insights into the effectiveness of COD techniques for segmenting cottony masses produced by Euphyllura olivina in olive trees. The first point to review is the quality and distribution of the annotated dataset. The centroids are distributed across the entire image space, indicating that the segmented objects are not concentrated in a specific region of the images. This suggests that the dataset captures a wide variety of object placements, which is beneficial for training robust models. However, if certain areas are underrepresented, it might indicate a potential bias in the dataset (see Figure 5 (top)). The majority of images exhibit low to moderate mask coverage, indicating that the segmented objects occupy a relatively small portion of the image. This is commonly observed in datasets where the objects of interest are sparse or small compared to the overall image size. However, a few images have higher coverage, which suggests the presence of larger or more densely packed objects (see Figure 5 (bottom)).
Quantitative results demonstrate that recent advances in COD, particularly those leveraging transformer-based architectures and explicit edge modeling, are highly effective for the segmentation of cottony masses produced by E. olivina in olive trees. Models such as CTF-Net and PCNet stood out for their ability to integrate global context and local features, which is crucial for distinguishing the subtle, low-contrast wax secretions from the surrounding foliage.
The strong performance of HitNet and BGNet further highlights the importance of iterative refinement and boundary guidance in improving segmentation accuracy for camouflaged targets. These findings are consistent with previous studies in COD, where explicit edge information and multi-scale feature fusion have been shown to enhance detection in complex natural scenes.
Despite these achievements, some limitations were observed. Acquisition conditions were not controlled, as images were captured by different people, with different devices, and in different countries; therefore, lighting, lens orientation, and environmental conditions (including temperature) vary between samples. Temperature, in particular, influences the phenology and abundance of E. olivina [1], which can introduce temporal differences in cottony mass expression. Paradoxically, this heterogeneity enriches the dataset and favors generalization, but it can also lead to under-segmentation of small deposits and to false positives in regions with texture or color similar to the cottony mass, such as light-colored stems or background reflections. These limitations motivate future standardized acquisition protocols and sampling stratified by lighting and phenological state.
From a management standpoint, early developmental stages usually cause minor direct damage, whereas higher densities are associated with visible plant stress and reduced fruit set; furthermore, cottony mass can hinder the effectiveness of contact insecticides by acting as a physical barrier, reinforcing the importance of early and objective detection.
The traditional method for controlling the population of this pest is to count the number of shoots or inflorescences with cottony mass. The threshold for taking action against the pest is 60% of shoots or inflorescences affected. This task of collecting shoots or inflorescences and subsequently analyzing them in the laboratory is very time-consuming and may delay the decision-making process. In some areas, access to olive trees is difficult because they are located far from roads and/or on steep slopes, which makes it difficult to collect shoots. These areas can become reservoirs for the pest. In this regard, once the model has been trained, taking photographs with a drone can greatly speed up data collection, analysis of the state of the pest, decision-making on mitigating measures, and assessment of their real effect on the pest [34].
The availability of a well-annotated, public dataset and the benchmarking of multiple SOTA techniques provide a valuable resource for the research community. The results establish a strong baseline for future work, including the integration of temporal information from video, the use of hyperspectral data, or the development of lightweight models for deployment on edge devices in the field.
In summary, this study confirms the feasibility and effectiveness of deep learning-based image segmentation for the automatic detection and quantification of E. olivina infestations. The proposed approach offers a scalable, objective, and non-invasive tool for precision pest monitoring, supporting integrated pest management strategies in olive cultivation.

5. Future Directions

A future study will use drones to monitor orchards at scale in accordance with the scheme proposed in Figure 9. Flights will be planned (altitude, speed, overlap) to capture enough detail to spot cottony masses, build orthomosaics, and measure each tree’s canopy, so COD results can be summed per tree or plot. To move from close-up to aerial views, models will be retrained with multi-scale data, large-mosaic tiling, and domain adaptation.
A key goal is to link image indicators (area of cottony masses) to real field counts of E. olivina. This will use synchronized drone images and standardized sampling (branch checks and/or traps), then fit simple, interpretable models that consider growth stage, cultivar, lighting, and orchard differences. Because segmentation is only a proxy, validation will rely on established entomology methods—especially traps that provide absolute densities with uncertainty and guide sampling effort and action thresholds.
The study will also test extras like multi-spectral imaging to handle tough lighting and backgrounds. Overall, the aim is to gather drone images with field-matched labels, set aerial COD baselines, and train calibrated models that turn segmentation into accurate, low-error population estimates for tree- and block-level decisions.

6. Conclusions

This study establishes a first public, well-annotated dataset for cottony mass segmentation in E. olivina and benchmarks ten COD models under a unified protocol. Models that fuse global context with explicit boundary cues (CTF-Net, PCNet, HitNet, BGNet) delivered the strongest and most balanced performance across multiple metrics and qualitative overlays. Heterogeneous acquisition improved generalization but introduced edge cases (e.g., confusion with light stems, glare, overlapped deposits). The small test set underscores the need for broader validation; we mitigate this by reporting 95% confidence intervals and emphasizing multi-metric consistency.
Looking ahead, calibrating image-derived indicators against standardized field counts will enable actionable density estimates and deployment on edge devices and aerial imagery. These steps will support scalable, objective monitoring in integrated pest management.

Author Contributions

Conceptualization, H.O.V., F.R. and J.A.H.-T.; methodology, H.O.V. and J.A.H.-T.; validation, H.O.V., F.R. and J.A.H.-T.; formal analysis, H.O.V. and J.A.H.-T.; investigation, H.O.V., A.G.-C. and J.A.H.-T.; resources, F.R. and A.G.-C.; data curation, H.O.V. and A.G.-C.; writing—original draft preparation, H.O.V., F.R. and A.G.-C.; writing—review and editing, F.R. and J.A.H.-T.; visualization, J.A.H.-T.; supervision, F.R. and J.A.H.-T. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the University of Granada.

Data Availability Statement

The original data presented in the study are openly available in GitHub at https://github.com/Sistemas-Concurrentes/cottony_mass_cod/, accessed on 21 November 2025.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. El Aalaoui, M.; Sbaghi, M.; Mokrini, F. Effect of temperature on the development and reproduction of olive psyllid Euphyllura olivina Costa (Hemiptera: Psyllidae). Crop Prot. 2025, 190, 107131. [Google Scholar] [CrossRef]
  2. Ksantini, M.; Jardak, T.; Bouain, A. Temperature effect on the biology of Euphyllura olivina Costa. In Proceedings of the IV International Symposium on Olive Growing 586, Valenzano, Italy, 30 October 2002; pp. 827–829. [Google Scholar]
  3. Guise, I.; Silva, B.; Mestre, F.; Muñoz-Rojas, J.; Duarte, M.F.; Herrera, J.M. Climate change is expected to severely impact Protected Designation of Origin olive growing regions over the Iberian Peninsula. Agric. Syst. 2024, 220, 104108. [Google Scholar] [CrossRef]
  4. Mustafa, T. Factors affecting the distribution of Euphyllura olivina Costa (Hom., Psyllidae) on olive. Z. Angew. Entomol. 1984, 97, 371–375. [Google Scholar] [CrossRef]
  5. Hougardy, E.; Wang, X.; Hogg, B.N.; Johnson, M.W.; Daane, K.M.; Pickett, C.H. Current distribution of the olive psyllid, Euphyllura olivina, in California and initial evaluation of the Mediterranean parasitoid Psyllaephagus euphyllurae as a biological control candidate. Insects 2020, 11, 146. [Google Scholar] [CrossRef] [PubMed]
  6. Guessab, A.; Elouissi, M.; Lazreg, F.; Elouissi, A. Population dynamics, seasonal fluctuations and spatial distribution of the olive psyllid Euphyllura olivina Costa (Homoptera, Psyllidae) in Algeria. Arx. Misc. Zool. 2021, 19, 183–196. [Google Scholar] [CrossRef]
  7. Azimi, M.; Marouf, A.; Shafiei, S.E.; Abdollahi, A. The effects of phenolic compounds on the abundance of olive psyllid, Euphyllura straminea Loginova in commercial and promising olive cultivars. BMC Plant Biol. 2025, 25, 798. [Google Scholar] [CrossRef] [PubMed]
  8. Gharbi, N. Effectiveness of inundative releases of Anthocoris nemoralis (Hemiptera: Anthocoridae) in controlling the olive psyllid Euphyllura olivina (Hemiptera: Psyllidae). Eur. J. Entomol. 2021, 118, 135–141. [Google Scholar] [CrossRef]
  9. Barranco Navero, D.; Fernandez Escobar, R.; Rallo Romero, L. El Cultivo del Olivo, 7th ed.; Ediciones Mundi-Prensa: Madrid, Spain, 2017. [Google Scholar]
  10. Onufrieva, K.S.; Onufriev, A.V. How to count bugs: A method to estimate the most probable absolute population density and its statistical bounds from a single trap catch. Insects 2021, 12, 932. [Google Scholar] [CrossRef] [PubMed]
  11. Martínez-Ferrer, M.T.; Ripollés, J.L.; Garcia-Marí, F. Enumerative and binomial sampling plans for citrus mealybug (Homoptera: Pseudococcidae) in citrus groves. J. Econ. Entomol. 2006, 99, 993–1001. [Google Scholar] [CrossRef] [PubMed]
  12. Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. Unet++: A nested u-net architecture for medical image segmentation. In Proceedings of the International Workshop on Deep Learning in Medical Image Analysis, Granada, Spain, 20 September 2018; Springer: Cham, Switzerland, 2018; pp. 3–11. [Google Scholar]
  13. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
  14. Zhong, J.; Wang, A.; Ren, C.; Wu, J. A survey on deep learning-based camouflaged object detection. Multimed. Syst. 2024, 30, 268. [Google Scholar] [CrossRef]
  15. Qin, X.; Zhang, Z.; Huang, C.; Gao, C.; Dehghan, M.; Jagersand, M. BASNet: Boundary-Aware Salient Object Detection. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
  16. Fan, D.P.; Ji, G.P.; Cheng, M.M.; Shao, L. Concealed object detection. Trans. Pattern Anal. Mach. Intell. 2021, 44, 6024–6042. [Google Scholar] [CrossRef] [PubMed]
  17. Sun, D.; Jiang, S.; Qi, L. Edge-aware mirror network for camouflaged object detection. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), Brisbane, Australia, 10–14 July 2023; IEEE: New York, NY, USA, 2023; pp. 2465–2470. [Google Scholar]
  18. Hu, X.; Wang, S.; Qin, X.; Dai, H.; Ren, W.; Luo, D.; Tai, Y.; Shao, L. High-resolution iterative feedback network for camouflaged object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 881–889. [Google Scholar]
  19. Zhang, D.; Wang, C.; Wang, H.; Fu, Q.; Li, Z. An effective CNN and Transformer fusion network for camouflaged object detection. Comput. Vis. Image Underst. 2025, 259, 104431. [Google Scholar] [CrossRef]
  20. Chen, T.; Xiao, J.; Hu, X.; Zhang, G.; Wang, S. Boundary-guided network for camouflaged object detection. Knowl.-Based Syst. 2022, 248, 108901. [Google Scholar] [CrossRef]
  21. Chen, G.; Liu, S.J.; Sun, Y.J.; Ji, G.P.; Wu, Y.F.; Zhou, T. Camouflaged object detection via context-aware cross-level fusion. Trans. Circuits Syst. Video Technol. 2022, 32, 6981–6993. [Google Scholar] [CrossRef]
  22. Ji, G.P.; Fan, D.P.; Chou, Y.C.; Dai, D.; Liniger, A.; Van Gool, L. Deep gradient learning for efficient camouflaged object detection. Mach. Intell. Res. 2023, 20, 92–108. [Google Scholar] [CrossRef]
  23. Yang, J.; Wang, Q.; Zheng, F.; Chen, P.; Leonardis, A.; Fan, D.P. PlantCamo: Plant Camouflage Detection. arXiv 2024, arXiv:2410.17598. [Google Scholar] [CrossRef]
  24. Liu, J.; Zhang, J.; Barnes, N. Modeling aleatoric uncertainty for camouflaged object detection. In Proceedings of the Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 1445–1454. [Google Scholar]
  25. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  26. Gao, S.H.; Cheng, M.M.; Zhao, K.; Zhang, X.Y.; Yang, M.H.; Torr, P. Res2net: A new multi-scale backbone architecture. Trans. Pattern Anal. Mach. Intell. 2019, 43, 652–662. [Google Scholar] [CrossRef] [PubMed]
  27. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
  28. Wang, W.; Xie, E.; Li, X.; Fan, D.P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; Shao, L. Pvt v2: Improved baselines with pyramid vision transformer. Comput. Vis. Media 2022, 8, 415–424. [Google Scholar] [CrossRef]
  29. Fan, D.P.; Cheng, M.M.; Liu, Y.; Li, T.; Borji, A. Structure-measure: A new way to evaluate foreground maps. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; IEEE: New York, NY, USA, 2017; pp. 4548–4557. [Google Scholar]
  30. Margolin, R.; Zelnik-Manor, L.; Tal, A. How to evaluate foreground maps? In Proceedings of the Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; IEEE: New York, NY, USA, 2014; pp. 248–255. [Google Scholar]
  31. Perazzi, F.; Krähenbühl, P.; Pritch, Y.; Hornung, A. Saliency filters: Contrast based filtering for salient region detection. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; IEEE: New York, NY, USA, 2012; pp. 733–740. [Google Scholar]
  32. Fan, D.P.; Gong, C.; Cao, Y.; Ren, B.; Cheng, M.M.; Borji, A. Enhanced-alignment measure for binary foreground map evaluation. arXiv 2018, arXiv:1805.10421. [Google Scholar] [CrossRef]
  33. Achanta, R.; Hemami, S.; Estrada, F.; Susstrunk, S. Frequency-tuned salient region detection. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: New York, NY, USA, 2009; pp. 1597–1604. [Google Scholar]
  34. Torres, M.J.R. Guía de manejo integrado del algodoncillo, “Euphyllura olivina”. Phytoma España Rev. Prof. Sanid. Veg. 2022, 343, 64–66. [Google Scholar]
Figure 1. Example images of E. olivina in olive trees.
Figure 2. Overall pipeline of the proposed methodology.
Figure 3. Geographical location distribution of annotated images around the world, marked with orange squares. Source: iNaturalist.
Figure 4. Example images of the annotated dataset. Images captured under different environmental conditions, lighting, object distance, and devices.
Figure 5. Statistics based on the images in the annotated dataset. (top) The scatter plot visualizes the spatial distribution of the centroids (geometric centers) of the masks within the images. Each point represents the centroid of a mask, with its position determined by the X and Y coordinates. (bottom) The histogram shows the percentage of each image covered by masks. Coverage is calculated as the ratio of mask pixels to the total number of pixels in the image.
Figure 6. Prediction mask results using different SOTA COD techniques. Successful matches between GT and predicted masks (white areas); false positive regions (red areas, over-segmentation); and false negative regions (blue areas, under-segmentation).
Figure 7. Example image segmentation result of the four best techniques according to Table 3. Successful matches between GT and predicted masks (white areas); false positive regions (red areas, over-segmentation); and false negative regions (blue areas, under-segmentation).
Figure 8. Example image segmentation result on a challenging scenario with high density of cottony mass using the four best techniques according to Table 3. Successful matches between GT and predicted masks (white areas); false positive regions (red areas, over-segmentation); and false negative regions (blue areas, under-segmentation).
Figure 9. Schematic representation of a future application for automatic image acquisition using drones.
Table 1. Distinctive characteristics of the evaluated SOTA COD techniques.
| Technique | Source | Year | Source Type | Image Size (px) | Backbone | #Par. (M) |
|---|---|---|---|---|---|---|
| BASNet [15] | CVPR | 2019 | Conference | 256 × 256 | ResNet-34 [25] | 87.06 |
| SINet-v2 [16] | TPAMI | 2021 | Journal | 352 × 352 | Res2Net-50 [26] | 24.93 |
| BGNet [20] | IJCAI | 2022 | Conference | 416 × 416 | Res2Net-50 [26] | 77.80 |
| C2F-Net [21] | TCSVT | 2022 | Conference | 352 × 352 | Res2Net-50 [26] | 26.36 |
| OCENet [24] | WACV | 2022 | Conference | 352 × 352 | ResNet-50 [25] | 58.17 |
| EAMNet [17] | ICME | 2023 | Conference | 384 × 384 | Res2Net-50 [26] | 30.51 |
| DGNet [22] | MIR | 2023 | Journal | 352 × 352 | EfficientNet [27] | 8.30 |
| HitNet [18] | AAAI | 2023 | Conference | 352 × 352 | PVTv2 [28] | 25.73 |
| PCNet [23] | arXiv | 2024 | – | 352 × 352 | PVTv2 [28] | 27.66 |
| CTF-Net [19] | CVIU | 2025 | Journal | 384 × 384 | PVTv2 [28] | 64.48 |
Table 2. Details of the training parameters used in evaluated SOTA COD techniques. Learning rate (LR); Batch size (BS).
| Technique | Optimizer | LR | BS | Epochs | Scheduler | Loss Function |
|---|---|---|---|---|---|---|
| BASNet [15] | Adam | 1 × 10⁻³ | 8 | 1000 | ReduceLROnPlateau | BCE + SSIM + IoU (multi-stage fusion) |
| SINet-v2 [16] | Adam | 1 × 10⁻⁴ | 16 | 150 | Custom (Adjust LR) | Structure loss (weighted BCE + weighted IoU) |
| BGNet [20] | Adam | 1 × 10⁻⁴ | 12 | 100 | Custom (Poly LR) | Structure loss (weighted BCE + weighted IoU) + Dice loss (edge) |
| C2F-Net [21] | AdaXW | 1 × 10⁻⁴ | 32 | 50 | Custom (Poly LR) | Structure loss (weighted BCE + weighted IoU) |
| OCENet [24] | Adam | 1 × 10⁻⁵ | 4 | 50 | StepLR | Uncertainty-aware structure loss (weighted BCE + weighted IoU) |
| EAMNet [17] | AdamW | 5 × 10⁻⁵ | 16 | 150 | Custom (Adjust LR) | Hybrid loss (weighted BCE + weighted IoU) + Edge loss (edge) |
| DGNet [22] | AdamW | 5 × 10⁻⁵ | 16 | 150 | CosineAnnealingLR | Hybrid loss (weighted BCE + weighted IoU) + MSE loss (grad) |
| HitNet [18] | AdamW | 1 × 10⁻⁴ | 8 | 150 | Custom (Adjust LR) | Structure loss (weighted BCE + weighted IoU) |
| PCNet [23] | AdamW | 1 × 10⁻⁴ | 8 | 150 | Custom (Adjust LR) | Structure loss (weighted BCE + weighted IoU) |
| CTF-Net [19] | Adam | 1 × 10⁻⁴ | 12 | 100 | Custom (Poly LR) | Structure loss (weighted BCE + weighted IoU) + Dice loss (edge) |
Table 3. Metric evaluation results for each COD technique. Notation as presented in Section 2.5; ↑/↓ indicates that larger or smaller values are better, respectively. The top three results for each metric are shown in red, blue, and green.
| Technique | $S_\alpha$ ↑ | $F_\beta^w$ ↑ | $M$ ↓ | $E_\phi^{adp}$ ↑ | $E_\phi^{mean}$ ↑ | $E_\phi^{max}$ ↑ | $F_\beta^{adp}$ ↑ | $F_\beta^{mean}$ ↑ | $F_\beta^{max}$ ↑ |
|---|---|---|---|---|---|---|---|---|---|
| BASNet [15] | 0.7556 | 0.5634 | 0.0329 | 0.8144 | 0.8364 | 0.8555 | 0.5744 | 0.6095 | 0.6395 |
| SINet-v2 [16] | 0.7778 | 0.6038 | 0.0285 | 0.8343 | 0.8826 | 0.9160 | 0.5809 | 0.6456 | 0.6873 |
| BGNet [20] | 0.8022 | 0.5978 | 0.0278 | 0.8951 | 0.9054 | 0.9262 | 0.6816 | 0.7190 | 0.7528 |
| C2F-Net [21] | 0.7708 | 0.5248 | 0.0342 | 0.8282 | 0.8807 | 0.9220 | 0.5780 | 0.6386 | 0.6937 |
| OCENet [24] | 0.7854 | 0.6558 | 0.0251 | 0.9099 | 0.8982 | 0.9178 | 0.6726 | 0.7054 | 0.7277 |
| EAMNet [17] | 0.7435 | 0.4125 | 0.0477 | 0.8821 | 0.8839 | 0.9195 | 0.6498 | 0.6738 | 0.7340 |
| DGNet [22] | 0.7833 | 0.6060 | 0.0278 | 0.8362 | 0.8873 | 0.9144 | 0.5833 | 0.6510 | 0.6877 |
| HitNet [18] | 0.7929 | 0.6735 | 0.0220 | 0.9125 | 0.9155 | 0.9195 | 0.6911 | 0.7046 | 0.7206 |
| PCNet [23] | 0.8075 | 0.6787 | 0.0234 | 0.9016 | 0.9144 | 0.9232 | 0.6840 | 0.7101 | 0.7394 |
| CTF-Net [19] | 0.8101 | 0.6224 | 0.0314 | 0.8859 | 0.9013 | 0.9280 | 0.6656 | 0.7041 | 0.7493 |
Table 4. Metric evaluation results for each COD technique using traditional metrics (i.e., IoU and Dice). Notation as presented in Section 2.5. The top three results for each metric are shown in red, blue, and green.
| Technique | Dice | IoU | IoU@50 | IoU@75 | IoU@[0.50:0.95] |
|---|---|---|---|---|---|
| BASNet [15] | 0.629 | 0.500 | 0.396 | 0.145 | 0.167 |
| SINet-v2 [16] | 0.670 | 0.547 | 0.459 | 0.208 | 0.217 |
| BGNet [20] | 0.732 | 0.616 | 0.522 | 0.278 | 0.297 |
| C2F-Net [21] | 0.677 | 0.547 | 0.421 | 0.151 | 0.205 |
| OCENet [24] | 0.695 | 0.568 | 0.505 | 0.180 | 0.221 |
| EAMNet [17] | 0.708 | 0.587 | 0.484 | 0.244 | 0.256 |
| DGNet [22] | 0.675 | 0.544 | 0.443 | 0.151 | 0.194 |
| HitNet [18] | 0.704 | 0.583 | 0.514 | 0.267 | 0.245 |
| PCNet [23] | 0.718 | 0.599 | 0.508 | 0.272 | 0.268 |
| CTF-Net [19] | 0.735 | 0.621 | 0.562 | 0.279 | 0.302 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
