Article

Development of a Computer Vision-Based Method for Sizing and Boat Error Assessment in Olive Pitting Machines

by
Luis Villanueva Gandul
1,*,
Antonio Madueño-Luna
1,
José Miguel Madueño-Luna
2,
Miguel Calixto López-Gordillo
2 and
Manuel Jesús González-Ortega
1
1
Department of Aerospace Engineering and Fluid Mechanics, University of Seville, 41013 Seville, Spain
2
Department of Graphic Engineering, University of Seville, 41013 Seville, Spain
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(12), 6648; https://doi.org/10.3390/app15126648
Submission received: 11 May 2025 / Revised: 27 May 2025 / Accepted: 5 June 2025 / Published: 13 June 2025

Abstract

Table olive pitting machines (DRRs) are essential in the agri-food industry but face significant limitations that constrain their performance and compromise process reliability. The main defect, known as the “boat error”, results from improper olive orientation during pitting, leading to pit fragmentation, pulp damage, and potential risks to consumer safety. Traditional quality control methods, such as the use of flotation tanks and expert sensory evaluation, rely on destructive sampling, are time-consuming, and reduce overall productivity. To address these challenges, this study presents a novel computer vision (CV) system integrated into a commercial DRR machine. The system captures high-speed images of Gordal olives (Olea europaea regalis) just before pitting; these are later analyzed offline using a custom MATLAB application that applies HSV-based segmentation and morphological analysis to quantify the olive size and orientation. The method accurately identifies boat error cases based on angular thresholds, without interrupting the production flow or damaging the product. The results show that 97% of olives were correctly aligned, with only 1.1% presenting critical misorientation. Additionally, for the first time, the system allowed a detailed evaluation of the olive size distribution at the machine inlet, revealing an unexpected proportion of off-caliber olives. This contamination in sizing suggests a possible link between calibration deviations and the occurrence of boat errors, introducing a new hypothesis for future investigation. While the current implementation is limited to offline analysis, it represents a non-destructive, low-cost, and highly precise diagnostic tool. This work lays the foundation for a deeper understanding of DRR machine behavior and provides a framework for future developments aimed at optimizing their performance through targeted correction strategies.

1. Introduction

Table olives (Olea europaea L.) [1,2] are subjected to industrial operations performed by a standardized automatic mechanical system developed 50 years ago [3,4], known as the pitting, slicing, and stuffing machine, or DRR, according to its Spanish acronym [3,4,5]. The system comprises three components (Figure 1) in a sequence that involves fruit loading (olive feeder), serial transport via a compartmentalized bucket chain (olive carrier), and final pitting through a stone detaching device equipped with awls, combining rotary and linear movements. After decades of improvements in pitting capacity, DRR machines can now operate at rates of up to 2500–3000 olives per minute for certain varieties, with optional slicing or stuffing operations. However, they still suffer from persistent random defects, which remain a challenge to overcome [6,7].
Typical errors include paired olives entering the olive carrier, olive fragmentation at the olive feeder intake, empty buckets (less concerning) caused by insertion blockages, and improper pitting that can lead to fruit breakage, pit retention, or pit fragments becoming embedded in the pulp.
This last error, known as “beata” (tilted olive) or “barco” (boat olive) depending on its severity [6,7], is particularly concerning for the pitting industry because it is difficult to identify and damages consumer perception, with potential health risks: biting into a tender-textured fruit and encountering localized hard fragments is both unpleasant and potentially hazardous.
Traditionally, the correction of defects has relied on rigorous localized mechanical adjustments. Lucas Pascual et al. [8] demonstrated that the poor calibration of components to the processed fruit size and misalignments in component coupling could reduce system efficiency by up to 30%. However, experience shows that a strategy based only on fine adjustments is insufficient, as errors persist.
Currently, since defects cannot be fully eliminated, the industry opts to operate at pitting speeds below capacity, accepting this trade-off to keep defect rates within a commercially viable loss threshold of 1–2%.
In this context, computer vision (CV) has emerged as a promising alternative, achieving significant advancements in the agri-food industry over the past two decades. It offers a wide range of possibilities, becoming a new standard for visual inspection processes [9,10,11,12,13,14,15,16,17,18,19,20] with high precision, non-destructive testing, uninterrupted operation, adaptability to high speeds, and cost-effective setup investments [21,22,23,24,25,26,27,28,29,30,31,32]. Specifically for table olives, this technique has enabled the study of the boat error by analyzing the olive positioning just before pitting [6,7]. This error occurs when the awl meets the olive’s major axis eccentrically, producing a longitudinal perforation even though the stone is removed cleanly.
However, the influence of other factors, such as the morphological characteristics of the olive or its behavior during transport, remains unknown. For instance, the caliber of each fruit entering the system is often entrusted to pre-pitting operations.
Beyond product inspection, CV also offers opportunities to streamline the supervision of the mechanical system’s functionality [6,8]. Currently, the monitoring of components’ conditions and performance, and their maintenance, relies on technicians’ skills and verification strategies that may not be as thorough as control sensors allow [7,8], sometimes requiring specialized equipment setups and process interruptions for checks. CV promises to streamline and enhance the consistency and rigor of component maintenance and management, complementing periodic tasks.
In the developmental context of CV, exploratory pathways remain open for the pursuit of more advanced solutions or equally efficient new combinations of physical and software components [6,7,8,33,34,35,36,37,38,39,40,41].
So far, studies have focused on conventional olives [6,7,8,36,37,38,39], overlooking other market-relevant varieties or those that are challenging for conventional CV systems based on OpenCV.
Developing or exploring alternatives remains critical to finding short-term solutions that ensure the full efficiency of DRR machines, enabling increased processing speeds for higher production while simplifying daily control tasks.
The objective of this study was to develop an alternative method for the evaluation of the pitting operations of DRR machines, capable of accurately measuring the boat error and computing the full range of caliber characteristics of the olives entering the system. This is presented as an alternative to the current occasional densimeter flotation phase and expert sensory evaluation, which yield estimative results. To this end, a CV system [25,33] was integrated into a pitting machine processing oxidized black Gordal olives (O. europaea regalis). The CV system enables continuous image capture for each olive (digital sensory evaluation), allowing deferred morphometric–statistical analysis through a parallel application developed in the MATLAB R2022b programming environment.

2. Materials and Methods

2.1. Olive Varieties and Treatment

The experiment was conducted in a facility near Seville (Spain) using Gordal olives, O. europaea regalis, processed in the “confite” or “Californian” style [3]. This variety is notable for its large size and international market significance, with calibers ranging from 70 to 140 olives/kg [5].
Through oxygenation during pickling, the olive turns from its original natural green with whitish speckles (a consequence of early harvesting) to a metallic black color, while also losing its bitterness [5,42]. The treatment primarily involves a series of 2–5 progressive immersions in a sodium hydroxide (NaOH) solution below 4% (w/v), with a 1:1 olive-to-solution volumetric ratio, each lasting 1–4 h. Between immersions, washing cycles remove excess alkali, completing a 24 h cycle [3,5,42,43]. Air exposure, either open-air or via injection, progressively darkens the pulp, with the color stabilized by adding iron salts [3,5,42,43]. Once selected and sorted by caliber, the olives are fed into a DRR machine (Morón de la Frontera, Spain), finely adjusted to a specific caliber for pitting.

2.2. Machine Vision System and DRR

A computer vision (CV) system for image acquisition, based on four main components (Figure 2) [7,8,25,33], was installed on a Sadrym model 130 (Morón de la Frontera, Spain) pitting machine (see Figure 2b) at a control point near the olive–awl contact moment, where the orientation of the incoming olives can be identified. The DRR machine is a short-tail model suitable for calibers ranging from 140 to 80 olives/kg, with a 3/4″ feed chain width, adjusted by technicians in this case to process a nominal caliber of 80 olives/kg.
The setup was implemented using a floating PVC tube housing the camera and LED device (Figure 3), triggered by pulses from a magnetic sensor positioned at track level. This sensor operates at a nominal voltage of 12 V, detecting metal at a maximum distance of 8 mm via a flat head. It adapts to the DRR machine’s operating speed, recognizing the intervals of metallic separators transporting the olives along the feed chain during industrial operation.
All trigger electronics and data transmission responding to the magnetic sensor signal are managed from an electronic control panel (ECP) connected to the power grid and switched to 12 V (Figure 4) [8].
The camera incorporates a 1/3″ CMOS MT9V024 sensor housed in an industrial enclosure (The Imaging Source Europe GmbH, Bremen, Germany; model DFK33GV024; 29 × 29 × 57 mm) [44]. At full color resolution, the sensor streams up to 87 fps with a 1 ms exposure time, and raw data are transferred via USB-2.0 (480 Mbit/s).
A Fujinon HF16HA-1B C-mount lens (FUJIFILM Corporation, Tokyo, Japan; fixed focal length 16 mm, F1.4–F16, manual iris) is used. The lens covers sensors up to 2/3″, so the 1/3″ imager is fully illuminated with negligible vignetting, while the 16 mm focal length provides the required 25 mm × 18 mm field of view at the working distance. Its manual iris allows a fixed f-number matched to the constant LED illumination, ensuring exposure stability and eliminating mechanical wear associated with auto-iris drives.
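For context, the quoted 25 mm × 18 mm field of view can be cross-checked with a simple thin-lens estimate. The sketch below is not taken from the paper; it assumes a 1/3″ sensor with an active area of roughly 4.8 mm × 3.6 mm.

```matlab
% Thin-lens cross-check of the stated optics (illustrative only).
% Assumption: a 1/3" sensor is taken as ~4.8 x 3.6 mm active area.
f_mm      = 16;              % lens focal length quoted in the text
sensor_mm = [4.8 3.6];       % assumed sensor width x height
fov_mm    = [25 18];         % field of view quoted in the text

m  = sensor_mm ./ fov_mm;    % optical magnification, ~0.19-0.20
wd = f_mm .* (1 + 1 ./ m);   % implied working distance, ~96-99 mm
fprintf('Magnification ~%.2f, working distance ~%.0f mm\n', m(1), wd(1));
```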
The lighting unit is a coaxial LED ring comprising 16 unbranded SMD diodes (purchased via AliExpress marketplace, Shenzhen, China; 900 lm, 3000 K), capable of 2 ms strobe pulses, synchronized with the global shutter exposure to maximize contrast and freeze motion.
This design allows the image capture system to avoid overheating, with an ample margin at the olive passage speed, enabling 1:1 photography. For instance, according to the manufacturer, the camera could operate at up to 5220 frames/min—a 48% increase. An example of a capture case during operation is shown in Figure 5.

2.3. Computer Vision Software for Image Processing

2.3.1. Application of Qt-Creator/OpenCV for Color Sampling of Californian Black Olives

Images captured by the camera are streamed to a standard PC and displayed by custom software written in C++/Qt-Creator 4.2.0 (Community) with OpenCV 3.4.3 functions [25,33]. The application allows the operator to define a region of interest (ROI) that excludes problematic areas where olives never appear, and it verifies the correctness of the erosion–dilation sequence—given a user-selected set of parameters—by overlaying the original frame with its binarized version. The same interface controls the electronic subsystem and provides a live visual inspection of the process, distinguishing five scene categories: empty (no olive), normal (properly oriented olive), boat (olive perpendicular to the awl), olive fragment, and double olive.
Frames can also be downloaded to the hard drive for offline analysis. In the present study, a statistically representative set of 1638 raw images was acquired and later reduced to normal and boat cases in order to analyze only the caliber and orientation distributions.
The class distribution obtained during the trial is summarized in Figure 6: normal = 96.46%, tilted = 2.44%, boat = 1.10%, and other cases = 0.61% (1 638 olives in total). Other cases—empty frames, double olives, and loose fragments—occur only when the mechanical alignment of the line is severely out of tolerance; in regular production, they are actively avoided by the maintenance team that continuously monitors the machine. Tilted olives (20° ≤ θ < 80° or 100° ≤ θ < 160°) fall within the manufacturer’s acceptable orientation envelope and therefore do not cause misfeeds. By contrast, boat olives (80° ≤ θ ≤ 100°) appear even with a perfectly adjusted machine and constitute the main source of defective feedings. Consequently, and in order to avoid an extreme class imbalance problem, the subsequent analysis focuses on discriminating the boat class from the rest, as this is the only orientation that degrades product quality under normal operating conditions.
Thus, the final digital inspection set consists of 1628 color images of oxidized black Gordal olives in .png format, sized 320 × 240 pixels (Figure 7), a sample large enough to extrapolate the results to an effectively infinite population over the system’s lifespan with a Type I error below 5%.
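To make the orientation bands defined above concrete, the following hypothetical MATLAB helper encodes the same decision rule; it is illustrative only and is not part of the software described in this paper.

```matlab
% Hypothetical classifier for the angular bands quoted above (not the
% authors' code): normal [0,20) U [160,180], boat [80,100], else tilted.
function c = classifyOrientation(thetaDeg)
    t = mod(thetaDeg, 180);          % fold the angle into [0, 180) degrees
    if (t < 20) || (t >= 160)
        c = "normal";                % major axis aligned with the awl
    elseif (t >= 80) && (t <= 100)
        c = "boat";                  % major axis perpendicular to the awl
    else
        c = "tilted";                % intermediate, recoverable orientation
    end
end
```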

2.3.2. Self-Developed Software to Process Black Gordal Color Images

In parallel, an application was developed in the MATLAB R2022b environment, consisting of two scripts [45]. Figure 8 presents a diagram of the workflow of the first script.
  • Binarization.
A graphical user interface implemented in MATLAB (Figure 9) performs HSV-based segmentation and orientation detection on each olive [46,47]. HSV was selected for its robustness to illumination changes, its intuitive two-parameter tuning, and the negligible software cost of the conversion (a single cv::cvtColor call).
The image processing stage follows the pipeline summarized in Figure 8 and detailed in Algorithm 1. Each RGB frame is first converted to HSV; a binary mask M is obtained by applying fixed thresholds on the three channels (Equation (1)). The mask is refined through an erosion–dilation–erosion sequence with circular structuring elements of radii r1, r2, and r3 (Equation (2)), after which the largest connected component is retained and internal holes are filled. The central moments of this blob are then computed and the olive’s major axis orientation θ is calculated from Equation (3), while the pixel area A is obtained by direct summation. All threshold limits (Hmin, Hmax, Smin, Smax, Vmin, Vmax) and kernel radii are read from a calibration file, so no manual tuning is required at run time. The complete MATLAB implementation of Algorithm 1 is provided in the script binarize2d.m (Code S1 in the Supplementary Materials).
Algorithm 1: HSV segmentation and orientation extraction
Input: RGB frame I; thresholds (Hmin…Vmax); radii (r1, r2, r3)
Output: Clean mask Mclean, projected area A, major-axis angle θ
1   IHSV ← convertRGBtoHSV(I)
2   M0    ← applyThreshold(IHSV)        ▹ binary mask, see Equation (1)
3   M1    ← erode(M0, disk r1)                ▹ Equation (2)
4   M2    ← dilate(M1, disk r2)                ▹ Equation (2)
5   M3    ← erode(M2, disk r3)                ▹ Equation (2)
6   Mclean ← fillHoles( largestComponent(M3) )
7   (μ20, μ02, μ11) ← centralMoments(Mclean)
8   (θ, A) ← orientationAndArea(μ20, μ02, μ11)        ▹ Equation (3)
9   return (Mclean, A, θ)
Equation (1):
$$\mathrm{mask}(x,y)=\begin{cases}1, & H_{\min}\le H(x,y)\le H_{\max}\ \wedge\ S_{\min}\le S(x,y)\le S_{\max}\ \wedge\ V_{\min}\le V(x,y)\le V_{\max}\\ 0, & \text{otherwise}\end{cases}$$
Equation (2):
$$\mathrm{Erode}(M,SE_r)=\{\,p \mid SE_r+p\subseteq M\,\};\qquad \mathrm{Dilate}(M,SE_r)=\{\,p \mid (SE_r^{\vee}+p)\cap M\neq\varnothing\,\}$$
Equation (3):
$$\theta=\tfrac{1}{2}\tan^{-1}\!\left(\frac{2\,J_{xy}}{J_{xx}-J_{yy}}\right);\qquad A=\sum_{x,y}\mathrm{mask}(x,y)$$
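As a reading aid, the following minimal MATLAB sketch reproduces the spirit of Algorithm 1 using Image Processing Toolbox functions. It is not the authors’ binarize2d.m: the threshold values and structuring-element radii shown here are placeholder assumptions, whereas the real script reads them from settings.txt.

```matlab
% Minimal sketch of Algorithm 1 (HSV segmentation + orientation), assuming
% placeholder thresholds; the authors' binarize2d.m reads them from settings.txt.
I    = imread('olive.png');                       % hypothetical input frame
Ihsv = rgb2hsv(I);                                % RGB -> HSV, channels in [0,1]
thr  = struct('H', [0.00 0.20], 'S', [0.10 1.00], 'V', [0.05 0.80]);  % assumed
r    = [2 4 2];                                   % assumed radii r1, r2, r3

% Equation (1): fixed thresholds on the three HSV channels
M = Ihsv(:,:,1) >= thr.H(1) & Ihsv(:,:,1) <= thr.H(2) & ...
    Ihsv(:,:,2) >= thr.S(1) & Ihsv(:,:,2) <= thr.S(2) & ...
    Ihsv(:,:,3) >= thr.V(1) & Ihsv(:,:,3) <= thr.V(2);

% Equation (2): erosion-dilation-erosion with circular structuring elements
M = imerode(M, strel('disk', r(1)));
M = imdilate(M, strel('disk', r(2)));
M = imerode(M, strel('disk', r(3)));

% Keep the largest connected component and fill internal holes
M = imfill(bwareafilt(M, 1), 'holes');

% Equation (3): major-axis orientation and projected area
stats = regionprops(M, 'Orientation', 'Area');
theta = mod(stats.Orientation, 180);              % fold into [0, 180) degrees
A     = stats.Area;                               % white-pixel count
```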
The morphological segmentation separates olive pixels from the conveyor background, exploiting intensity differences caused by highlights and shadows in an otherwise low-contrast scene. The algorithm determines the extreme HSV values—Hmin, Hmax, Smin, Smax, Vmin, and Vmax—within the largest blob (Figure 9b). These limits are then applied to the original frame to obtain an initial mask that outlines the fruit (Figure 9c). The threshold values and the three morphological radii are automatically written to “settings.txt”.
Additionally, the mask is refined with three morphological operations—erosion, dilation, and a second erosion—to remove background noise and to close gaps inside the olive produced by over-illumination or shadow artefacts (Figure 9d). Although this sequence cleans the segmentation, it can introduce a systematic bias in area measurements. To mitigate this risk, the binarization, erosion, and dilation parameters tuned in the MATLAB script are transferred to the OpenCV/Qt-Creator application, where the original frame and its binarized mask are overlaid so that the operator can ensure correct overlap.
  • Morphological analysis.
Using the cleaned mask obtained after segmentation, the routine supplied in Supplementary File S1 computes the second-order central moments and determines the major axis orientation with respect to the horizontal. The calculated angle is visualized by overlaying a line on the olive contour (Figure 9e). The routine also derives the projected area by counting the white pixels. Both the orientation (θ) and area (A) are automatically written to “output.txt”.
  • Statistical analysis.
A second MATLAB routine (Algorithm 2; Supplementary File S2) performs the batch processing of all PNG images contained in non_processed_olives. The batch procedure is implemented in the MATLAB script processed_olives_3.m (provided as Code S2 in the Supplementary Materials). First, the HSV thresholds and morphological kernel radii stored in settings.txt are read. Each image is then segmented by the function updateImage, producing a cleaned mask whose area (A) and major axis orientation (θ) are recorded; the annotated frame is saved in processed_olives. Once the folder has been exhausted, the script compiles the full list of areas and angles, plots their histograms, fits a normal density to the area distribution, and exports the descriptive statistics (mean μ and variance σ2) to normal.txt while appending every individual measurement to statistics.txt. Figure 10 shows the corresponding flow chart, and Algorithm 2 formalizes the steps.
Algorithm 2: Batch processing and statistical analysis
Input: settings.txt, folder non_processed_olives
Output: Folder processed_olives, normal.txt,
statistics.txt, histograms
1   Read HSV thresholds (Hmin…Vmax) and radii (r1, r2, r3) from settings.txt
2   for each PNG file I in non_processed_olives do
3         (Iproc, A, θ) ← updateImage(I, thresholds, radii)
4         Save Iproc into processed_olives
5         Append A to array Areas; θ to array Angles
6         Append <filename, A, θ> to statistics.txt
7   end for
8   μ ← mean(Areas)
9   σ2 ← variance(Areas)
10 Plot histogram of Areas (PDF) and overlay N(μ, σ2)
11 Plot histogram of Angles
12 Write “Mean: μ, Variance: σ2” into normal.txt
Equation (4)—Descriptive statistics:
$$\mu=\frac{1}{N}\sum_{i=1}^{N}A_i;\qquad \sigma^{2}=\frac{1}{N}\sum_{i=1}^{N}\left(A_i-\mu\right)^{2}$$
Equation (5)—Normal probability density function:
$$f(x)=\frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right)$$
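For illustration, a batch-processing loop in the spirit of Algorithm 2 could look as follows. This is a hedged sketch, not the authors’ processed_olives_3.m; segmentOlive stands in for the single-frame routine sketched above, and the output file formats are assumptions.

```matlab
% Sketch of Algorithm 2: batch segmentation plus descriptive statistics.
% segmentOlive is a hypothetical wrapper around the single-frame pipeline above.
files  = dir(fullfile('non_processed_olives', '*.png'));
Areas  = zeros(numel(files), 1);
Angles = zeros(numel(files), 1);

for k = 1:numel(files)
    I = imread(fullfile(files(k).folder, files(k).name));
    [A, theta] = segmentOlive(I);         % area (px) and major-axis angle (deg)
    Areas(k)   = A;
    Angles(k)  = theta;
end

mu     = mean(Areas);                     % Equation (4)
sigma2 = var(Areas, 1);                   % population variance (1/N)

x      = linspace(min(Areas), max(Areas), 200);
pdfFit = exp(-(x - mu).^2 ./ (2 * sigma2)) ./ sqrt(2 * pi * sigma2);  % Equation (5)

figure; histogram(Areas, 'Normalization', 'pdf'); hold on; plot(x, pdfFit);
figure; histogram(Angles, 0:20:180);      % 20-degree bins, as in the text
writematrix([mu sigma2], 'normal.txt');   % mean and variance report
```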

2.3.3. Comparison with Existing CV Approaches

Table 1 benchmarks the proposed system against the three patented CV solutions currently employed in Spanish DRR lines.
Unlike ES-2403580 A2 [48] and ES-2529816 B2 [49]—which, respectively, eject fruit after pitting or simply count occupied buckets—our design measures both the orientation (θ) and projected area for every olive, matching the analytical capabilities of ES-2732765 B2 [50] while omitting its pneumatic re-feed module. The HSV + morphology pipeline executes at 0.3 ms per frame on a low-power PC (e.g., Intel® i3-N305); thus, the real-time requirement (≤0.7 ms) is met with an ample margin and the hardware is limited to a single area camera and LED ring.
All frames are archived, so the data set can later be re-processed with alternative algorithms—including lightweight CNN detectors such as YOLO—whenever application scenarios present poor olive–background contrast. In the present study, however, the olives were well contrasted and the long-term objective was to deliver online analytics; neural approaches were therefore deferred to future work because their current latency (>6 ms frame−1 on embedded GPUs) would have violated the real-time budget. This hybrid strategy combines low-cost inline inspection with the possibility of more sophisticated offline analysis, without further modification of the pitting machine.

3. Results

The application developed in MATLAB, working in combination with the image acquisition system, enabled the extraction of 1628 segmentation masks (see Figure 6), with major axis identification for each processed oxidized black Gordal olive (processed_olives) during the pitting operation (Figure 11).
The overlap accuracy between the original olive image and its mask, evaluated from the contour pixels, yields a general error percentage below 0.1%. This was calculated as the non-overlapping pixels of the mask relative to the original image (Figure 12), using a 320 × 240 pixel space (equivalent to 76,800 pixels), through the OpenCV/Qt-Creator application.
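As an illustration of this metric, the non-overlap percentage can be computed as the fraction of pixels where the computed mask and a reference silhouette disagree. The variable names below are assumptions; the paper performs this check in the OpenCV/Qt-Creator overlay tool rather than in MATLAB.

```matlab
% Hedged sketch of the overlap-error metric: percentage of disagreeing pixels
% between the computed mask and an assumed reference silhouette (320 x 240 frame).
nonOverlap = nnz(xor(maskComputed, maskReference));     % disagreeing pixels
errPct     = 100 * nonOverlap / numel(maskComputed);    % relative to 76,800 px
fprintf('Non-overlap error: %.3f%% of %d pixels\n', errPct, numel(maskComputed));
```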
The segmentation masks obtained during the pitting process serve as a diagnostic tool by providing data on the area and orientation for each olive as it passes through the machine. These values are automatically generated after the morphological analysis and are stored in the file statistics.txt, which contains three columns: the first identifies the image, the second lists the area in pixels, and the third indicates the inclination angle of the olive’s major axis relative to the horizontal.
Based on these data, two output histograms have been generated, showing the distributions of the areas (Figure 13, left) and angles (Figure 13, right). These reflect the overall size distribution processed during the pitting operation of the DRR system, aligned with the nominal calibration settings, as well as the number of olives that exhibited undesirable alignment with the awl tool, potentially resulting in “boat” defects.
The area histogram (Figure 13, left), organized in 1000-pixel segments, shows the highest concentration of olives within the [22,000, 30,000] pixel range, accounting for 1393 cases (85.5%). The bin with the greatest density is [25,000, 26,000], with 249 olives (15.3%), corresponding to caliber 80, which matches the nominal calibration for which the machine was adjusted. To establish a direct relationship between the caliber and pixel value, the pixel value of a model olive (caliber 80) was empirically determined, maintaining a fixed camera-to-track distance.
The angle histogram, segmented every 20°, shows the distribution of the orientation angles detected during the pitting operation. The [80, 100]° range corresponds to vertically oriented olives, representing the “boat error” [4], with a rate of 1.1% (18 cases). Olives in the [0, 20] ∪ [160, 180]° range are aligned parallel to the awl axis—97% of the total, i.e., 1580 cases. The remaining olives fall into an intermediate pivoting position, with potential for reorientation (tilted olives), which may be accepted or discarded depending on the specific criteria set by the pitting system.
Additionally, the mean and variance values of the sizes recorded in the file normal.txt were calculated. A new area distribution histogram by probability was generated, with the corresponding normal curve superimposed (Figure 14).
The contrast between the histogram and the normal distribution curve can indicate anomalies or deviations at both the segment and sample levels. In this operation, a group of 27 olives (1.66%) was detected in the [10,000, 14,000] range, corresponding to a contaminated subset of the original sample (cases 202, 299, 553, 739, 795, 800, 888, 891, 911, 933, 942, 1021, 1027, 1198, 1294, 1372, 1387, 1431, 1513, 1587; Figure 15).
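The flagged subset can be recovered from the per-olive measurements by a simple range filter. The snippet below is illustrative only; the column layout of statistics.txt is assumed from the description given above.

```matlab
% Illustrative filter for the sub-nominal area range discussed above.
% Assumption: statistics.txt holds three delimited columns (file, area, angle).
T = readtable('statistics.txt', 'ReadVariableNames', false);
T.Properties.VariableNames = {'File', 'Area', 'Angle'};
undersized = T(T.Area >= 10000 & T.Area <= 14000, :);   % contaminated subset
fprintf('%d olives (%.2f%%) fall in the 10,000-14,000 px range\n', ...
        height(undersized), 100 * height(undersized) / height(T));
```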

4. Discussion

The development of a MATLAB application for the segmentation and morphological analysis of images from a computer vision system installed on the feeding line of a DRR machine—at a critical control point just before the pitting stage—has enabled a rigorous assessment of the full range of angles at which olives engage with the awl during an entire pitting operation. In addition, the application now illustrates the detailed distribution of the input calibers.
This work therefore presents an alternative method for the accurate diagnosis of these systems. It is applicable to any operation, does not require product disposal for quality control, and eliminates the need for on-site personnel or additional equipment. Currently, DRR machine calibration is checked through evaluations that take place only occasionally. After interrupting the process, a random batch of freshly pitted olives is placed in a flotation tank with an adjusted density so that, in theory, only olives without pits or embedded fragments float due to their lower densities. However, this method is not fully reliable. Some olives may trap air because of pit fragments lodged in the pulp, causing them to float and be mistakenly classified as compliant. To overcome this limitation, producers often rely on professional tasters to estimate the percentage of “boat” errors using subjective methods.
Although this traditional approach yields reasonably accurate results, it is not free from shortcomings. It results in product loss, requires evaluation downtime, and depends on external personnel or complementary equipment to prepare for a new processing cycle. In contrast, the method proposed here allows for the highly accurate identification of boat errors without material loss, applicable to any batch and at a low cost. It is aligned with other efforts aimed at improving maintenance operations, system control, and performance assessment [6,7,8].
In addition, the system was used to evaluate and graph the distribution of olive calibers entering the machine. This parameter is typically assumed to be controlled by previous sorting stages, whose accuracy and impact are often overlooked. In this case, the input caliber distribution was heterogeneous and contaminated, possibly related to the shift in the mean size compared to the nominal caliber (mean: 25,222 pixels < 25,500 pixels; see Figure 14), along with wide variance (10,610,215 pixels², around six to seven times the nominal value). This reveals that upstream sorting stages neither classify sizes precisely nor are free from contamination (Figure 15), compromising the performance of systems that rely on accurate calibration [6,7,8].
Recognizing this raises a hypothesis for future research: there may be a correlation between poor calibration and the occurrence of boat errors. Olives with sizes far from the nominal value might be more likely to misalign with the awl. This could explain the residual error rate, which the industry can currently only reduce to around 1–2% [6,7,8]. The idea is especially relevant because it offers a short-term strategy to improve productivity by increasing system efficiency and possibly raising the speed limits of DRR machines. This could even eliminate the need for complementary systems like the CV unit described in this work.
Moreover, the proposed application provides an alternative approach to interpreting images in the context of machine vision, offering a structured method for the extraction of performance-related information in critical processes. In this case, it succeeds in segmenting complex images that are challenging for conventional OpenCV-based systems, where the lack of contrast, along with reflections and shadows from both the olives and the metallic conveyor, complicates contour detection. However, the processing speed remains a significant limitation, which currently restricts its use to offline analysis without real-time evaluation.
Future developments should focus on creating more advanced models, possibly artificial neural networks (ANNs) [7], inspired by the design proposed in this study. These models could operate at speeds above 40 frames per second, making real-time analysis possible. Such a system would overcome the bottlenecks caused by boat errors and other issues that become more prominent at high processing speeds or with poor calibration [6,7,8]. It is worth noting that both the CV system and the DRR machines used in this work are capable of handling over 2500 olives per minute.

5. Conclusions

The development of a MATLAB-based application for the segmentation and morphological–statistical analysis of images captured by a computer vision (CV) system is a viable alternative for the diagnosis of industrial pitting machines (DRR). The method proposed in this work is more advanced, as it accurately quantifies errors such as the boat error, in contrast to the conventional method based on flotation tanks and expert tasting. The traditional approach produces only estimated results and leads to product loss, and it also needs additional equipment or personnel.
In addition, the system has allowed for the evaluation and graphical representation of the range of calibers processed during the pitting operation. This is a parameter that is typically unknown, as its control is assumed to be managed by upstream sorting processes. However, this study shows that such processes are not entirely effective, being critical for systems whose efficiency depends strictly on precision. Access to this information raises the hypothesis that contamination or heterogeneity in calibers, particularly those far from the nominal size, could be related to the occurrence of the boat error. This is a topic that will be addressed in future work.
The segmentation performed by the application achieved high accuracy in contour detection, with an error below 0.1 percent, even for images that are difficult to interpret using conventional OpenCV-based machine vision systems. These systems struggle due to the lack of contrast and the frequent presence of highlights and shadows. Although this analysis was applied to a specific subset of olives, the model could be adapted for general use. In other words, the methodology used in this extreme case can be extrapolated to more common situations.
This development may serve as the basis for more advanced solutions, including other applications or even artificial neural networks (ANNs) capable of processing images at speeds greater than 40 frames per second. This would enable real-time diagnostics and allow coordination with other automated systems to generate immediate responses.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app15126648/s1. File S1 (settings.txt)—HSV thresholds and morphological-kernel radii used by Algorithms 1 and 2 (location in archive: Configuration files/). File S2 (binarize2d.m)—MATLAB script implementing Algorithm 1 (HSV segmentation, morphological filtering, single-frame orientation extraction) (location in archive: Code/Code S1/). File S3 (processed_olives_3.m)—MATLAB batch-processing script implementing Algorithm 2 (statistical analysis, histogram plotting, report generation) (location in archive: Code/Code S2/). File S4 (non_processed_olives)—Raw RGB images of table-olive samples acquired under controlled lighting (location in archive: Data S1/). File S5 (processed_olives)—Same images after HSV segmentation, orientation classification and bounding-box labelling (location in archive: Data S2/). All items are supplied in their original formats inside a single compressed file.

Author Contributions

Conceptualization, A.M.-L.; methodology, A.M.-L., and L.V.G.; software, A.M.-L. and L.V.G.; validation, A.M.-L., L.V.G. and J.M.M.-L.; formal analysis, A.M.-L. and L.V.G.; investigation, L.V.G. and M.C.L.-G.; data curation, L.V.G.; resources, J.M.M.-L., M.C.L.-G. and M.J.G.-O.; writing—original draft preparation, L.V.G., A.M.-L. and M.J.G.-O.; writing—review and editing, L.V.G., A.M.-L. and M.J.G.-O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data, configuration files and MATLAB source codes developed for this study are available as Supplementary Materials with the present article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
CV: Computer Vision
DFK: Digital Frame Kamera (Imaging Source)
DRR: Deshuesadora–Rodajadora–Rellenadora (Pitting, Slicing, and Stuffing Machine)
GUI: Graphical User Interface
HSV: Hue, Saturation, Value

References

  1. International Olive Council. Trade Standard Applying to Table Olives (COI/OT/NC, 1). 2004. Available online: https://www.internationaloliveoil.org/what-we-do/chemistry-standardisation-unit/standards-and-methods/ (accessed on 6 May 2025).
  2. España. Real Decreto 679/2016, de 16 de Diciembre, por el que se Establece la Norma de Calidad de las Aceitunas de Mesa. Boletín Oficial del Estado, 304. 2016. Available online: https://www.boe.es/buscar/doc.php?id=BOE-A-2016-11953 (accessed on 6 May 2025).
  3. Santos-Siles, F. Las nuevas tecnologías aplicadas al sector de la aceituna manzanilla fina. Grasas Y Aceites 1999, 50, 131–140. [Google Scholar]
  4. Soler Esteban, A.; Van Olmen, S.H. Olive Pitting Machine and Method to Pit Olives Used by Said Machine (PCT No. WO2011/131215A1). 2011. Available online: https://patents.google.com/patent/WO2011131215A1/en (accessed on 6 May 2025).
  5. Estrada-Cabezas, J. La Aceituna de Mesa: Nociones Sobre sus Características, Elaboración y Cualidades; Diputación Provincial de Sevilla & Fundación para el fomento y la Promoción de la Aceituna de Mesa: Seville, Spain, 2011. [Google Scholar]
  6. Lucas-Pascual, A.; Madueño-Luna, A.; Jódar-Lázaro, M.; Molina-Martínez, J.; Ruiz-Canales, A.; Madueño-Luna, J.; et al. Analysis of the Functionality of the Feed Chain in Olive Pitting, Slicing and Stuffing Machines by IoT, Computer Vision and Neural Network Diagnosis. Sensors 2020, 20, 1541. [Google Scholar] [CrossRef] [PubMed]
  7. Jódar-Lázaro, M.; Madueño-Luna, A.; Lucas-Pascual, A.; Molina-Martínez, J.; Ruiz-Canales, A.; Madueño-Luna, J.; et al. Deep Learning in Olive Pitting Machines by Computer Vision. Comput. Electron. Agric. 2020, 171, 105304. [Google Scholar] [CrossRef]
  8. Lucas Pascual, A. Mejoras En El Control de Máquinas Deshuesadoras-Rodajadoras y de Relleno de Aceituna de Mesa. Ph.D. Dissertation, Universidad Politécnica de Cartagena, Cartagena, Spain, 2020. [Google Scholar]
  9. Abdullah, M. Image Acquisition Systems. In Computer Vision Technology for Food Quality Evaluation; Sun, D., Ed.; Elsevier: Dublin, Republic of Ireland, 2016; pp. 3–39. [Google Scholar]
  10. Dhanush, G.; Khatri, N.; Kumar, S.; Shukla, P. A Comprehensive Review of Machine Vision Systems and Artificial Intelligence Algorithms for the Detection and Harvesting of Agricultural Produce. Sci. Afr. 2023, 21, e01798. [Google Scholar] [CrossRef]
  11. Liu, Z.; Wang, S.; Zhang, Y.; Feng, Y.; Liu, J.; Zhu, H. Artificial intelligence in food safety: A decade review and bibliometric analysis. Foods 2023, 12, 1242. [Google Scholar] [CrossRef]
  12. Xiao, Z.; Wang, J.; Han, L.; Guo, S.; Cui, Q. Application of machine vision system in food detection. Front. Nutr. 2022, 9, 888245. [Google Scholar] [CrossRef]
  13. Kondoyanni, M.; Loukatos, D.; Templalexis, C.; Lentzou, D.; Xanthopoulos, G.; Arvanitis, K.G. Computer vision in monitoring fruit browning: Neural networks vs. stochastic modelling. Sensors 2025, 25, 2482. [Google Scholar] [CrossRef]
  14. Vale Filho, E.; Lang, L.; Aguiar, M.L.; Antunes, R.; Pereira, N.; Gaspar, P.D. Computer vision as a tool to support quality control and robotic handling of fruit: A case study. Appl. Sci. 2024, 14, 9727. [Google Scholar] [CrossRef]
  15. Rojas Santelices, I.; Cano, S.; Moreira, F.; Peña Fritz, Á. Artificial vision systems for fruit inspection and classification: A systematic literature review. Sensors 2025, 25, 1524. [Google Scholar] [CrossRef]
  16. Mai, B.; Liu, T.; Liu, Z.; Liang, Z.; Liu, S. A machine vision method for detecting pineapple fruit mechanical damage. Agriculture 2025, 15, 1063. [Google Scholar] [CrossRef]
  17. Lopes, J.F.; Ludwig, L.; Barbin, D.F.; Grossmann, M.V.E.; Barbon, S., Jr. Computer vision classification of barley flour based on spatial pyramid partition ensemble. Sensors 2019, 19, 2953. [Google Scholar] [CrossRef] [PubMed]
  18. Zhu, L.; Spachos, P.; Pensini, E.; Plataniotis, K.N. Deep learning and machine vision for food processing: A survey. Curr. Res. Food Sci. 2021, 4, 233–249. [Google Scholar] [CrossRef]
  19. Aznan, A.; González Viejo, C.; Pang, A.; Fuentes, S. Computer vision and machine learning analysis of commercial rice grains: A potential digital approach for consumer perception studies. Sensors 2021, 21, 6354. [Google Scholar] [CrossRef]
  20. Huang, Y.; Li, Z.; Bian, Z.; Jin, H.; Zheng, G.; Hu, D.; Sun, Y. Overview of deep learning and nondestructive detection technology for quality assessment of tomatoes. Foods 2025, 14, 286. [Google Scholar] [CrossRef] [PubMed]
  21. Zhang, H.; Ji, S.; Shao, M.; Pu, H.; Zhang, L. Non-destructive internal defect detection of in-shell walnuts by X-ray technology based on improved Faster R-CNN. Appl. Sci. 2023, 13, 7311. [Google Scholar] [CrossRef]
  22. Hsiao, W.-T.; Kuo, W.-C.; Lin, H.-H.; Lai, L.-H. Assessment and feasibility study of lemon ripening using X-ray image information visualization. Appl. Sci. 2021, 11, 3261. [Google Scholar] [CrossRef]
  23. Yu, K.; Zhong, M.; Zhu, W.; Rashid, A.; Han, R.; Virk, M.S.; Duan, K.; Zhao, Y.; Ren, X. Advances in computer vision and spectroscopy techniques for non-destructive quality assessment of citrus fruits: A comprehensive review. Foods 2025, 14, 386. [Google Scholar] [CrossRef]
  24. Ma, L.; Sun, K.; Tu, K.; Pan, L.; Zhang, W. Identification of double-yolked duck egg using computer vision. PLoS ONE 2017, 12, e0190054. [Google Scholar] [CrossRef]
  25. Cheng, J.; Sun, D.; Nagata, M.; Tallada, J. Quality Evaluation of Strawberry. In Computer Vision Technology for Food Quality Evaluation; Sun, D., Ed.; Elsevier: Dublin, Republic of Ireland, 2016; pp. 327–349. [Google Scholar]
  26. Menendez, A.; Paillet, G. Fish Inspection System Using a Parallel Neural Network Chip and the Image Knowledge Builder Application. AI Mag. 2008, 29, 21. [Google Scholar] [CrossRef]
  27. Lu, Y.; Lu, R. Quality Evaluation of Apples. In Computer Vision Technology for Food Quality Evaluation; Sun, D., Ed.; Elsevier: Dublin, Republic of Ireland, 2016; pp. 273–304. [Google Scholar]
  28. Blasco, J.; Cubero, S.; Moltó, E. Quality evaluation of citrus fruits. In Computer Vision Technology for Food Quality Evaluation, 2nd ed.; Sun, D., Ed.; Elsevier: Amsterdam, The Netherlands, 2016; pp. 305–325. [Google Scholar]
  29. Gao, X.; Li, S.; Su, X.; Li, Y.; Tang, W.; Zhang, Y.; Dong, M. Application of advanced deep learning models for efficient apple defect detection and quality grading in agricultural production. Agriculture 2024, 14, 1098. [Google Scholar] [CrossRef]
  30. Pedreschi, F.; Mery, D.; Marique, T. Grading of potatoes. In Computer Vision Technology for Food Quality Evaluation, 2nd ed.; Sun, D., Ed.; Elsevier: Amsterdam, The Netherlands, 2016; pp. 369–382. [Google Scholar] [CrossRef]
  31. Manikandan, R.; Rahimi, M.; Gandomi, A.H. Computer vision system for mango fruit defect detection using deep convolutional neural network. Foods 2022, 11, 3483. [Google Scholar] [CrossRef] [PubMed]
  32. Isingizwe Nturambirwe, J.F.; Perold, W.J.; Opara, U.L. Classification learning of latent bruise damage to apples using shortwave infrared hyperspectral imaging. Sensors 2021, 21, 4990. [Google Scholar] [CrossRef]
  33. Díaz, R. Classification and Quality Evaluation of Table Olives. In Computer Vision Technology for Food Quality Evaluation; Sun, D., Ed.; Elsevier: Dublin, Republic of Ireland, 2016; pp. 351–365. [Google Scholar]
  34. Hayajneh, A.M.; Batayneh, S.; Alzoubi, E.; Alwedyan, M. TinyML olive fruit variety classification by means of convolutional neural networks on IoT edge devices. AgriEngineering 2023, 5, 2266–2283. [Google Scholar] [CrossRef]
  35. Navarro Soto, J.; Satorres Martínez, S.; Martínez Gila, D.M.; Gómez Ortega, J.; Gámez García, J. Fast and reliable determination of virgin olive oil quality by fruit inspection using computer vision. Sensors 2018, 18, 3826. [Google Scholar] [CrossRef]
  36. Ponce, J.; Aquino, A.; Millan, B.; Andújar, J. Automatic Counting and Individual Size and Mass Estimation of Olive-Fruits Through Computer Vision Techniques. IEEE Access 2019, 7, 59451–59465. [Google Scholar] [CrossRef]
  37. Ponce, J.M.; Aquino, A.; Millán, B.; Andújar, J.M. Olive-fruit mass and size estimation using image analysis and feature modeling. Sensors 2018, 18, 2930. [Google Scholar] [CrossRef]
  38. Nasr-Esfahani, S.; Muthukumar, V.; Regentova, E.; Taghva, K.; Trabia, M. Detection of Pitts in Olive Using Hyperspectral Imaging Data. IEEE Access 2022, 10, 58525–58536. [Google Scholar] [CrossRef]
  39. Cano-Marchal, P.; Satorres-Martinez, S.; Gómez-Ortega, J.; Gámez-García, J. Automatic system for the detection of defects on olive fruit in an oil mill. Appl. Sci. 2021, 11, 8167. [Google Scholar] [CrossRef]
  40. Figorilli, S.; Violino, S.; Moscovini, L.; Ortenzi, L.; Salvucci, G.; Vasta, S.; Tocci, F. Olive fruit selection through AI algorithms and RGB imaging. Foods 2022, 11, 3391. [Google Scholar] [CrossRef]
  41. Aguilera Puerto, D.; Martínez Gila, D.M.; Gámez García, J.; Gómez Ortega, J. Sorting olive batches for the milling process using image processing. Sensors 2015, 15, 15738–15754. [Google Scholar] [CrossRef]
  42. Gómez, A.; García, P.; Navarro, L. Elaboration of table olives. Grasas y Aceites 2006, 57, 86–94. [Google Scholar] [CrossRef]
  43. González, J.F.; Fernández, A.G.; García, P.G.; Balbuena, M.B.; Quintana, M.C.D. Characteristics of the fermentation process that occurs during the storage in brine of Hojiblanca cultivar, used to elaborate ripe olives. Grasas y Aceites 1992, 43, 212–218. [Google Scholar] [CrossRef]
  44. The Imaging Source Website. The Camera Model DFK 33GV024 2024. Available online: https://www.theimagingsource.com/en-us/product/industrial/33g/dfk33gv024/ (accessed on 6 May 2025).
  45. Villanueva, L.; Madueño-Luna, A.; Madueño-Luna, J.M. Matlab_Experimental_Software; Version 1.0.0; University of Seville: Seville, Spain, 2023. [Google Scholar]
  46. Giuliani, D. Metaheuristic algorithms applied to color image segmentation on HSV space. J. Imaging 2022, 8, 6. [Google Scholar] [CrossRef] [PubMed]
  47. Kang, H.-C.; Han, H.; Bae, H.; Kim, M.; Son, J.; Kim, Y. HSV color-space-based automated object localization for robot grasping without prior knowledge. Appl. Sci. 2021, 11, 7593. [Google Scholar] [CrossRef]
  48. Soler Esteban, A.; Van Olmen, S.H. Máquina Deshuesadora de Aceitunas y Método para Deshuesar Aceitunas que Utiliza Dicha Máquina. Spanish Patent No. ES-2403580 A2, 20 May 2013. [Google Scholar]
  49. Madueño Luna, A.; López Lineros, M.; Madueño Luna, J.M. Sistema y Procedimiento Basado en un Sensor de Sincronismo para la Detección de Fallos de Funcionamiento en Máquinas Deshuesdoras/Rodajadoras y de Relleno, Cuantificación y Optimización del Rendimiento, Señalización, Monitorización y Control Remoto. Spanish Patent No. ES-2529816 B2, 11 November 2016. [Google Scholar]
  50. Madueño Luna, A.; López Lineros, M.; Madueño Luna, J.M. Procedimiento y Sistema para la Reducción Activa de Aceitunas mal Posicionadas en las Máquinas Deshuesadoras, Rellenadoras y Rodajadoras de Aceitunas. Spanish Patent No. ES-2732765 B2, 28 December 2020. [Google Scholar]
Figure 1. DRR machine main components: (1) olive feeder, (2) olive carrier, and (3) stone detaching device.
Figure 2. Diagram of the computer vision (CV) system setup installed on the DRR machine. (a) Basic components of the CV system: (MS) magnetic sensor, (LS) LED ring (illumination), (Cam) optical sensor, electronic control panel (ECP—trigger electronics and LED control), and PC + OpenCV/Qt-Creator. (b) Location point of CV system in carrier chain Sadrym 130.
Figure 3. Computer vision system setup on a Sadrym model 130 DRR machine for olive pitting analysis.
Figure 4. Diagram of the electronic control panel (ECP).
Figure 5. Image of a boat error case in an oxidized black Gordal olive just before pitting.
Figure 6. Counts, percentages, and orientation ranges for each olive category (normal, tilted, boat, and other cases). The subsequent analysis considers only olives in the normal and boat orientations, as these two classes account for the operational performance of the line.
Figure 7. Representative raw images captured on the feed chain immediately upstream of the pitting station. Olives whose major axis is aligned with the awl—i.e., in the correct orientation for pitting—are enclosed by a green frame, whereas fruits whose major axis lies perpendicular to the awl, exhibiting the so-called boat orientation and therefore unsuitable for pitting, are marked with a red frame.
Figure 8. Workflow of the MATLAB application.
Figure 9. Workflow of the MATLAB application: (a) image captured from the processing line, (b) pixel value decomposition commands based on HSV, (c) initial segmentation, (d) erosion and dilation commands, (e) final segmentation with identification of the major axis orientation.
Figure 10. Flow chart of Algorithm 2: batch processing and statistical analysis of the olive data set.
Figure 11. Final segmentation examples with superimposed orientation axis. (a) Normal, (b) misaligned (“boat” case).
Figure 12. Calibration frame executed via OpenCV/Qt-Creator.
Figure 13. Histogram of calibers (left) and orientation angles (right).
Figure 14. Distribution by size proportional to total population, with superimposed normal curve. Mean: 24,154 pixels; variance: 5,783,793 pixels².
Figure 15. Cluster of detected sub-nominal size classes (10,000–14,000 px²). The green bounding box marks a nominal-size olive, whereas the red bounding boxes mark olives detected as undersized.
Table 1. Comparison of industrial approaches to orientation faults in DRR machines.

| Solution | Sensor and Detection Principle | Real-Time Action | Mechanical Modification | Advantages | Limitations |
|---|---|---|---|---|---|
| ES-2403580 A2 | Two area cameras compare the olive geometry during the cutting stroke; abnormal deformation ⇒ defect | Ejects defective fruit with an air pulse | None (retrofit kit) | Removes fruit that retain the stone; no downtime | Performance strongly caliber-dependent; retrofit proved unreliable; does not correct orientation |
| ES-2529816 B2 | Horseshoe magnetic + optical sensor counts bucket occupancy (no imaging) | Alarm/stop only | None | Very low cost; detects empty buckets | Cannot detect mis-orientation; no angle or size data |
| ES-2732765 B2 | Magnetic trigger + area camera over a machined gap in the brush; OpenCV heuristics compute orientation and area | Pneumatic pulse returns mis-oriented olives to feeder | Brush gap + compressed-air manifold | Detects all defect classes and corrects them inline | Intrusive pneumatic system; proprietary software; increased maintenance |
| Proposed method | Same magnetic trigger + area camera over brush gap; HSV segmentation + morphology (offline batch) | Diagnostic report (offline) | Brush gap only (no pneumatics) | Provides θ and area for every olive; runs in 0.3 ms per frame on a low-power PC; open-source; minimal maintenance; low cost | Does not re-feed olives in real time and assumes stable illumination |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
