Systematic Review

Artificial Vision Systems for Fruit Inspection and Classification: Systematic Literature Review

by Ignacio Rojas Santelices 1, Sandra Cano 2, Fernando Moreira 3,4,* and Álvaro Peña Fritz 5
1 Doctorate in Smart Industry, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2141, Valparaiso 2370688, Chile
2 School of Informatics Engineering, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2241, Valparaiso 2370688, Chile
3 REMIT (Research on Economics, Management and Information Technologies), IJP (Instituto Jurídico Portucalense), Universidade Portucalense, Rua Dr. António Bernardino de Almeida, 541-619, 4200-072 Porto, Portugal
4 IEETA (Instituto de Engenharia Electrónica e Telemática de Aveiro), Universidade de Aveiro, 3810-193 Aveiro, Portugal
5 School of Construction and Transportation Engineering, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2147, Valparaiso 2370688, Chile
* Author to whom correspondence should be addressed.
Sensors 2025, 25(5), 1524; https://doi.org/10.3390/s25051524
Submission received: 18 December 2024 / Revised: 19 February 2025 / Accepted: 27 February 2025 / Published: 28 February 2025

Abstract:
Fruit sorting and quality inspection using computer vision is a key tool to ensure quality and safety in the fruit industry. This study presents a systematic literature review, following the PRISMA methodology, with the aim of identifying different fields of application, typical hardware configurations, and the techniques and algorithms used for fruit sorting. In this study, 56 articles published between 2015 and 2024 were analyzed, selected from relevant databases such as Web of Science and Scopus. The results indicate that the main fields of application include orchards, industrial processing lines, and final consumption points, such as supermarkets and homes, each with specific technical requirements. Regarding hardware, RGB cameras and LED lighting systems predominate in controlled applications, although multispectral cameras are also important in complex applications such as foreign material detection. Processing techniques include traditional algorithms such as Otsu and Sobel for segmentation and deep learning models such as ResNet and VGG, often optimized with transfer learning for classification. This systematic review could provide a basic guide for the development of fruit quality inspection and classification systems in different environments.

1. Introduction

The fruit industry is an essential pillar of the global economy and a fundamental component of the world’s food supply [1]. The growing demand for high-quality fresh products has driven the need to implement efficient inspection and control systems that guarantee the safety and absence of contaminants in fruits intended for consumption [2]. Accurate classification and early detection of defects and contaminants not only ensure consumer satisfaction but also align with international food safety regulations [3].
The application of computer vision in fruit inspection and quality control has emerged as an advanced technological solution to address these challenges [4]. In the field of precision agriculture, these systems are used during the harvesting stage to determine the optimal picking time, ensuring that fruits achieve their maximum nutritional and organoleptic quality [5]. Furthermore, computer vision enables the precise recognition and localization of fruits on the tree, facilitating harvest automation through robotic pickers. These systems can identify the maturity, size, and position of the fruit, allowing for selective harvesting that minimizes fruit damage and enhances the efficiency of automated harvesting [6,7]. In the packaging and sorting stages of fruit-exporting companies, artificial vision allows for more efficient and precise selection, reducing the margin for human error and speeding up production processes [8]. In addition, in supermarkets, these technologies can help the consumer by evaluating the freshness and quality of fruits in real time, facilitating more informed purchasing decisions [9]. A study proposed by [10] investigated methods and challenges associated with fruit size measurement through artificial intelligence, addressing detection and segmentation techniques. The study concludes that deep learning methods have significantly outperformed traditional techniques, although challenges remain concerning the variability of lighting conditions and the lack of specialized public datasets. The authors conducted a review specifically focused on systems applied to orchards, highlighting the importance of technologies that facilitate automated fruit harvesting. Their work focuses on developing mechanisms capable of operating in open-field conditions, where factors such as light variability and the presence of vegetation complicate the precise detection of fruits.
Traditionally, manual inspection has been the primary method for assessing fruit quality, but this process is time-consuming, subjective, and prone to human error. To address these limitations, researchers have explored artificial vision systems for automated fruit quality inspection. Powered by advanced image processing and machine learning algorithms, these systems offer a more robust and efficient solution: they can perform real-time defect detection, classify defects, and support automated decision-making, thereby improving the overall quality control process [11]. In [12], the authors examined the impact of deep learning on fruit image analysis, covering various approaches and model architectures such as classical CNNs, R-CNN, and YOLO. That study also highlights the challenges associated with data scarcity, labeling complexity, variations in fruit characteristics, and computational efficiency. The authors further emphasize the importance of multispectral data integration and knowledge transfer between species to improve the robustness of systems in dynamic environments. This work represents a significant contribution by identifying trends and suggesting future lines of research, such as the integration of models into real agricultural systems and the development of advanced data augmentation techniques to overcome the limitations of reduced datasets.
Although there are significant advances in this field and the use of different techniques and approaches for fruit inspection and sorting, there are difficulties in identifying which are the most efficient and reliable hardware and software configurations. The speed and accuracy of the algorithms are critical factors, especially when handling large amounts of images in real time, which highlights the importance of developing systems capable of processing data quickly without compromising accuracy. This capability is fundamental for the successful adoption of these systems in industrial and commercial environments.
The main contribution of this study is to identify key trends, methodologies, and technologies used in this field, such as machine learning, deep learning, and image-processing techniques. In addition, this study provides a detailed understanding of the current state of research in artificial vision systems for inspection and classification. It could help researchers quickly understand which techniques are most suitable for specific tasks (e.g., defect detection, quality grading). This study also highlights the importance of preprocessing techniques, segmentation algorithms, feature extraction, and classification methods in the development of effective computer-vision-based quality inspection systems. This knowledge is crucial in guiding the development of more advanced and robust artificial vision systems for fruit inspection and classification.
This article is structured as follows: Section 2 presents the objectives of this systematic literature review and defines its research questions. Section 3 explains the methodology used for the literature search, which is guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [13]. Section 4 presents the results extracted from the selected articles. Section 5 discusses the reviewed studies. Finally, Section 6 offers the conclusions.

2. Objectives

The objective of this systematic literature review is to examine the research that has been carried out on fruit quality inspection using artificial vision systems.
The aim is to synthesize the current research to increase our understanding of the algorithms and techniques that artificial vision systems must follow for fruit quality inspection. This systematic literature review (SLR) aims to answer three research questions derived from the previously mentioned problems:
RQ1: What are the application fields where it is required to classify and inspect fruit using artificial vision?
RQ2: What are the typical hardware configurations in machine vision systems used for image acquisition in fruit classification and inspection?
RQ3: What are the most used image-processing algorithms and techniques in fruit classification and inspection?

3. Methodology

To address the research questions, a thorough plan for the search and selection of relevant literature was carried out following the PRISMA guidelines [13]. The search for articles was carried out in two of the most important scientific databases: Web of Science and Scopus. Key search terms related to the topic of the review were defined, ensuring adequate coverage of computer vision techniques and applications in fruit classification. The terms used were as follows: “Fruit Classification”, “Fruit defect classification”, “Fruit defect recognition”, “Fruit grading”, “Machine Vision”, “Computer vision”, “Image processing”, “Artificial vision”, “Feature Engineering”, and “Artificial Intelligence”. These terms can be divided into two interconnected groups: the first refers to aspects related to fruit, and the second to computer vision and image processing. The scheme presented in Figure 1 shows the search strategy that links both groups. The identification of these key terms was performed by searching in the titles, abstracts, and keywords of the articles.
The initial search generated a total of 505 articles combining WoS and Scopus. Subsequently, additional database filters were applied to refine the results: articles published between 2015 and 2024 were selected, the document type was restricted to “Article”, the source to “Journal”, the language of publication to “English”, and availability to open-access articles only. This made it possible to focus on the most recent literature and on previous studies relevant to computer vision applied to fruit classification and quality inspection. After applying these filters, 310 documents were removed. Then, 126 duplicate articles were eliminated using the Mendeley tool, leaving 69 articles. In the screening stage, a manual review of titles and abstracts was performed to assess eligibility, excluding those papers that did not meet the inclusion criteria set out in Table 1. One article that could not be retrieved was excluded. Finally, after a full-text review, one article was excluded by applying exclusion criterion EC1 and one by applying exclusion criterion EC2. As a result, 56 articles were selected for the final review.
Figure 2 represents the flow chart of the studies selected according to the PRISMA statements.

4. Results

After applying the inclusion and exclusion criteria, the database search yielded 56 articles published between 2015 and 2024. The selected articles allowed us to answer the research questions.

4.1. Data Synthesis

Table 2 summarizes the selected articles, describing the key aspects found in each study.

4.2. General Articles Characteristics

The general characteristics of the selected studies are presented below, which allows for contextualizing the temporal, geographical, and thematic distribution of research in artificial vision applied to the classification and quality control of fruits.
A. Distribution of studies in the time analyzed: Approximately 70% of the studies were published between 2020 and 2024, reflecting a significant growth in recent research. This increase may be related to advances in computer vision techniques and the growing interest in applying these technologies to agriculture and the food industry to optimize fruit sorting and quality control processes. Figure 3 shows the temporal evolution of research.
B. Productivity by Country: Figure 4 shows the distribution of studies by geographical location. The leading countries in terms of the number of publications in this field are India, with 11 studies, followed by China, with 9 studies, and Spain and Iran, with 4 publications each. This suggests a particular interest in these regions, possibly due to their relevance in fruit production and export, as well as the presence of active research groups in agricultural technology and food processing.
C. Product Type: The relevance of certain types of fruit varies by region, but there is significant interest in classifying multiple types of fruit or varieties. This type of work represents 35% of the studies. In terms of specific fruits, apples stand out, with 10.5% of the studies, followed by dates, with 8.8%. The distribution shown in Figure 5 indicates a trend towards the creation of versatile systems that can be applied to different fruits, as well as a specialization in products of high commercial relevance.
D. Type of Dataset Used: Most studies (74%) used proprietary datasets, created specifically to develop and test their classification models. In total, 24% of the studies used benchmark datasets, while approximately 2% used a combination of benchmark and proprietary datasets. This predominance of custom datasets indicates the need to adapt the models to specific characteristics of each type of fruit or capture condition, although the use of benchmark datasets is beginning to consolidate as a practice to ensure the reproducibility and comparability of the results. Table 3 shows the datasets used in the articles under study, including information on the number of images taken from each one for training the classification algorithms.
In the reviewed studies, a variety of classification objectives are observed, reflecting the specific needs of each application in artificial vision for the fruit industry. The most common objective was classification by fruit type, present in 32% of the studies, highlighting its importance for tasks such as automation in production lines and fruit recognition at points of consumption. The second most frequent objective was the assessment of the state of ripeness, with 23% of the studies focusing on determining levels of ripeness for applications such as automated harvesting, post-harvest quality control, and fruit selection according to commercial standards.
E. Classification by type: Classification by type of variety represented 16% of the studies, a particularly relevant task for products with multiple commercially significant varieties, such as apples, dates, or mangoes, where varietal characteristics directly affect commercial value and consumer preferences. These objectives reflect how classification applications vary by context, from early stages of production to direct interaction with the final consumer. Figure 6 shows the percentage distribution of each type of application.

4.3. Answering the Research Questions

The objective of the data abstraction was to answer the following research questions:
Question 1. What are the application fields where it is required to classify and inspect fruit using artificial vision?
Artificial vision is applied in three main areas within the fruit industry:
A.
Orchard: At this stage, machine vision systems are used to automate the harvesting and sorting of fruit directly in the field. These systems are useful for identifying fruit classes [32,38,43], assessing maturity levels [23,24,28,30,51], monitoring pests [49], and monitoring pesticides [65]. In addition, their implementation makes it possible to address the challenges related to labor shortages in agricultural activities.
B.
Fruit-processing industries: In industrial processing lines, machine vision is used for tasks such as variety detection [14,20,50,54,64,70], fruit class identification [18,19,35], ripeness assessment [21,25,27,55,57,66], size classification [22,26,29,33,36,60,63], and sorting by quality defects [34,42,52,56,62,67]. This use stands out for its ability to reduce human error, increase inspection speed, and improve consistency in product quality.
C.
Retail or Final Consumption Points: In this emerging area, artificial vision systems are designed to assist the consumer or distribution chains in assessing freshness [48,53], identifying varieties [17,58,61], and detecting the type of fruit [15,16,31,37,39,40,41,44,45,46,59,69]. Recent advances have enabled technologies integrated into smartphones to classify fruit in real time, facilitating informed purchasing decisions.
Question 2. What are the typical hardware configurations in machine vision systems used for image acquisition in fruit classification and inspection?
Hardware configurations vary depending on the application context, but some main trends are identified:
A.
Image capture:
The reviewed studies show that RGB (Red, Green, Blue) cameras are the most used sensor type in fruit classification, accounting for 84% of applications (Table 4). RGB cameras enable effective detection of product variety, distinguish between different types of fruit, and identify quality attributes such as ripeness and the presence of rot. This type of sensor is suitable for detecting visual characteristics in the visible-light range, facilitating basic quality analysis and fruit sorting.
On the other hand, a set of studies focus their efforts on the use of hyperspectral cameras [34,49,62], which provide a broader spectrum of information and allow for the detection of insect contamination, the presence of pesticides, and surface damage that are not perceptible with RGB sensors. Hyperspectral technology is especially useful in applications that require a detailed assessment of the surface and composition of the fruit to ensure product safety and quality.
The use of smartphones as sensors for image acquisition has gained importance in recent years, especially in studies aimed at fruit recognition at final consumption points [17,27]. These smartphone-based models allow for the identification of the fruit type, as well as the evaluation of its state of ripeness and detection of possible rot, making this technology accessible to end users.
B.
Acquisition Conditions:
Regarding acquisition conditions, it is observed that 61% of the studies use ambient light to capture images, mainly in sorting applications in orchards or at final consumption points, where light variability is a natural condition. This type of light requires that the artificial vision algorithms adapt to light fluctuations, since ambient lighting can affect the accuracy of detection and sorting.
In studies related to the fruit-processing industry, the use of LED lights predominates, which provide controlled and stable lighting. This type of light allows for greater consistency in images, which is crucial for industrial applications that require high precision and consistency in quality assessment. In addition to the lighting system, in projects related to the food industry, it is important to have a transport system that allows for in-line sorting, managing continuous production volumes for efficient processing (Figure 7).
In addition, the assembly of cameras and lighting systems is carried out within a dome that allows the sorting system to be isolated from external agents. This dome provides uniform lighting, which reduces the appearance of shadows and reflections to a minimum, thus improving the quality of the images obtained and the accuracy of classification.
Studies using hyperspectral cameras predominantly use halogen light. The need to record images in the infrared range, where some quality defects are more visible, makes halogen light a suitable choice, as it provides a broad spectrum that facilitates the detection of quality details that are not evident in the RGB color space.
C.
Capture speed:
Capture speed is not a critical factor in the studies reviewed, as most of them work under static conditions for the generation of datasets or for classification in the laboratory. This means that image capture is carried out slowly, without the need for a high capture speed, which minimizes the influence of this factor on the results obtained.
Question 3. What are the most used image-processing algorithms and techniques in fruit classification and inspection?
The most used techniques range from traditional approaches to modern deep learning models:
A.
Preprocessing:
The preprocessing stage is essential in fruit classification using computer vision, as it allows for images to be optimized before being analyzed by segmentation and classification algorithms. Depending on the type of study and the environment in which the images are captured (such as orchards, processing lines, or supermarkets), significant differences may arise in terms of size, lighting, resolution, and quality of the images. These variations require specific adjustments to ensure that the images are consistent and suitable for analysis.
  • Image filtering and enhancement
Studies show the use of filtering techniques to reduce noise and improve the visibility of objects. In [61], a stylization filter is used (see Figure 8) to draw a black line along the outline of the object, enabling better segmentation later.
Gaussian filtering is one of the most widely applied techniques, allowing images to be smoothed without compromising the relevant details that facilitate classification [21,63,66,68]. Furthermore, image enhancement through contrast and sharpness adjustments is commonly used to highlight distinctive features of fruits, such as defects or variations in texture, which facilitates more accurate and effective feature extraction later on [23].
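To make the smoothing step concrete, the following is a minimal NumPy sketch of Gaussian filtering; it is a toy re-implementation for illustration only (kernel size, sigma, and the spike image are arbitrary assumptions, not values from the reviewed studies):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Build a normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def smooth(image, size=5, sigma=1.0):
    """Blur a grayscale image by direct convolution with zero padding."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image.astype(float), pad)
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out

# A single bright pixel (noise spike) is spread over its neighbourhood,
# while the total intensity is preserved by the normalized kernel.
img = np.zeros((9, 9))
img[4, 4] = 255.0
blurred = smooth(img)
```

In practice, library routines would be used instead of explicit loops; the sketch only shows why smoothing suppresses isolated noise without erasing larger structures.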
  • Color adjustment and lighting correction
The reviewed studies highlight its importance in environments where lighting is variable, such as in orchards or points of consumption. Researchers have used RGB value normalization techniques and transformations to alternative color spaces, such as HSV (Hue, Saturation, Value) or CIE L*a*b*, to compensate for light fluctuations and improve consistency in images [24,25]. Also, for low-light conditions, image fusion methods were applied in [23], combining data from the visible and infrared spectra to achieve greater contrast and visual quality between the fruit and a background composed mainly of vegetation.
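The lighting-robustness argument behind these color-space transformations can be illustrated with Python's standard colorsys module: two versions of the same red that differ only in brightness map to the same hue (the pixel values below are arbitrary illustrative choices):

```python
import colorsys

def rgb_to_hsv_pixel(r, g, b):
    """Convert an 8-bit RGB pixel to (hue in degrees, saturation, value)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v

# The same red under bright and dim lighting: hue and saturation match,
# and only the value (brightness) channel differs, which is why hue-based
# features are more stable under lighting fluctuations than raw RGB.
bright = rgb_to_hsv_pixel(200, 40, 40)
dark = rgb_to_hsv_pixel(100, 20, 20)
```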
  • Geometric transformations
According to the information extracted, the main objective of applying geometric transformations in the reviewed studies is to increase the amount of available data through data augmentation. Studies such as [21,25,41,70] use rotation techniques, saturation adjustment, brightness adjustments, hue modification, as well as reflection and mirroring.
Data augmentation is essential in image analysis because it makes it possible to artificially expand the size of the dataset without collecting new images, which is especially useful when the amount of available data is limited. By introducing variations in the original images (see Figure 9), such as changes in orientation, color, and brightness, the computer vision model is exposed to a more diverse set of data, which improves its generalization ability and reduces the risk of overfitting [21]. This is critical for the model to perform well in unseen conditions and to be more robust against variations in the environment, lighting, and object position in real-world scenarios.
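A minimal sketch of such geometric and photometric augmentation, assuming NumPy and an illustrative choice of transforms (none of this is taken from a specific reviewed study):

```python
import numpy as np

def augment(image):
    """Return simple augmented variants of a grayscale image:
    rotations, a mirror image, and brightness shifts."""
    return [
        np.rot90(image, 1),            # 90-degree rotation
        np.rot90(image, 2),            # 180-degree rotation
        np.fliplr(image),              # horizontal mirror
        np.clip(image * 1.2, 0, 255),  # brighter copy
        np.clip(image * 0.8, 0, 255),  # darker copy
    ]

# One original image yields five extra training samples.
img = np.arange(16, dtype=float).reshape(4, 4)
augmented = augment(img)
```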
B.
Segmentation:
The next step after image capture and preprocessing is segmentation, a key stage whose main objective is to separate the object of interest, such as a fruit, from the image background. This background can vary from uniform colors to complex environments with vegetation, accessories, or other elements. As evidenced in the reviewed studies, the need for segmentation depends on the type of study and its application. For example, in cases such as studies [25,28], segmentation is not carried out; instead, the process goes directly from preprocessing to feature extraction. This occurs especially in works related to classification at points of consumption or retail, where the objective is to identify the type of fruit (for example, distinguishing between an apple and a banana).
On the other hand, in most studies, segmentation is necessary to individualize each fruit, separating it from the background, and thus perform a more accurate classification without influences from other objects. In some cases, such as in studies [17,21,25], uniform black or white backgrounds are used to generate a high contrast, facilitating the application of segmentation algorithms. However, in other studies [59,66] where fruits are found in uncontrolled backgrounds, it is necessary to resort to more advanced techniques, such as adaptive thresholding, a method that dynamically adjusts the threshold value to segment regions of interest in images with lighting variations or complex backgrounds.
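The adaptive-thresholding idea mentioned above can be sketched with a simple local-mean rule; the block size, offset constant, and toy illumination gradient below are illustrative assumptions, not parameters from the cited studies:

```python
import numpy as np

def adaptive_threshold(gray, block=5, c=20.0):
    """Local-mean adaptive threshold: a pixel is foreground when it
    exceeds the mean of its block x block neighbourhood by more than c."""
    pad = block // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    mask = np.zeros(gray.shape, dtype=bool)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            local_mean = padded[i:i + block, j:j + block].mean()
            mask[i, j] = gray[i, j] > local_mean + c
    return mask

# Background brightness rises left to right, so no single global threshold
# would work, yet the local rule isolates bright spots on both sides.
h, w = 8, 8
img = np.tile(np.linspace(0.0, 120.0, w), (h, 1))
img[2, 2] += 80.0   # bright "fruit" pixel on the dark side
img[5, 6] += 80.0   # bright "fruit" pixel on the bright side
mask = adaptive_threshold(img, block=5, c=20.0)
```

Because the threshold follows the local mean, the method tolerates the lighting variations and complex backgrounds described above, at the cost of tuning the neighbourhood size and offset.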
  • Segmentation algorithms
Segmentation is a crucial stage in computer vision systems, and the algorithms applied depend on the characteristics of the images and the objectives of the analysis. In the reviewed studies, multiple techniques are identified to separate objects from the background, many of which generate binarized masks that facilitate subsequent analysis. Some of the most used algorithms and methods, as well as complementary techniques, are described below.
  • Sobel Filter [17,27]: It is an edge detection technique that calculates the derivative of pixel intensity in horizontal and vertical directions, highlighting areas where sharp changes in intensity occur. This method uses two convolutional masks (kernels), one for each direction, and combines the results to obtain a gradient image. It is useful for identifying contours in images where the edges are sharp and well defined, providing key information for segmenting objects such as fruits. Although it is efficient, its performance can be affected in noisy images, so it is often combined with pre-filtering techniques to improve the quality of the results.
  • Canny Filter [22,33,57]: This algorithm is a more advanced technique for edge detection. It works in several stages: First, it applies a Gaussian filter to smooth the image and reduce noise; then, it calculates intensity gradients to identify areas with pronounced changes. Next, it uses a “non-maximum suppression” process to refine the detected edges and remove spurious lines. Finally, it applies a double threshold to identify strong and weak edges, connecting weak ones to strong ones if they are related. The Canny filter is especially effective on complex images, as it generates more accurate edges than other techniques.
  • Otsu Thresholding [42,46,52,55,64]: Otsu is a threshold-based segmentation technique used to binarize images. This algorithm automatically determines the optimal threshold value by minimizing the intra-class variance and maximizing the inter-class variance. In practical terms, it searches for the ideal cut-off point to separate pixels into two groups: background and object. It is especially useful when the image histogram shows a bimodal distribution, meaning there are two distinct classes (for example, a fruit and its background). Otsu is commonly used on images with uniform illumination and is efficient for applications where an automatic and fast segmentation process is required.
  • Mean Shift Clustering [66]: It is a method based on grouping pixels according to their similarity in features such as color or intensity. This algorithm iterates to find the highest densities in the feature space, moving a kernel towards areas with higher density until reaching convergence. It is particularly useful for segmenting images with homogeneous color regions, such as fruits on uniform backgrounds, since it does not require a fixed number of clusters to be specified.
  • Watershed Segmentation [20,66,70]: It is based on interpreting the intensity of pixels as a topography where the lowest values represent valleys and the highest, ridges. This method floods the valleys of the image with “water” from marked points, separating regions based on their natural boundaries. It is ideal for segmenting objects that are superimposed or in contact, such as stacked fruit, and allows for obtaining precise contours in complex images. To avoid over-segmentation, it is often combined with preprocessing techniques, such as smoothing and edge detection.
  • Combined Applications: In the reviewed studies, it was observed that segmentation techniques are often applied in combination to improve accuracy. The Sobel filter is employed to detect initial contours, which are then refined using the Canny algorithm. Mean Shift Clustering is used to cluster pixels before applying Watershed [66], which reduces noise and improves object separation. In complex or noisy images, these techniques are combined with transformations to color spaces, such as CIE L*a*b* or HSV [27], where chromatic differences between the object and the background are more pronounced, facilitating segmentation. The use of binarized masks not only facilitates background removal but also enables the analysis of physical features, such as measuring longitudinal and transverse axes, calculating area, or identifying specific shapes [26].
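As a concrete illustration of Otsu thresholding and of using the resulting binarized mask to measure a physical feature (area), here is a toy NumPy sketch; it is a simplified re-implementation on a synthetic bimodal image, not code from the cited studies:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the 8-bit threshold that maximizes between-class variance
    (equivalently, minimizes intra-class variance)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = float(gray.size)
    sum_all = float(np.dot(np.arange(256), hist))
    best_t, best_var = 0, -1.0
    w0 = 0.0   # cumulative pixel count of the "background" class
    sum0 = 0.0 # cumulative intensity sum of the "background" class
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy image: dark background (20) with a bright "fruit" patch (200).
img = np.full((10, 10), 20, dtype=np.uint8)
img[3:7, 3:7] = 200
t = otsu_threshold(img)
mask = img > t           # binarized mask separating fruit from background
area = int(mask.sum())   # physical feature measured directly from the mask
```

On a clearly bimodal histogram such as this one, the search lands between the two modes, which is exactly the regime where the reviewed studies report Otsu working well.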
In the study [42], two main approaches for segmentation are mentioned:
  • Discontinuity-based: identifies abrupt changes in pixel intensity, such as edges or lines. This approach is useful for detecting contours and separating regions with defined boundaries.
  • Similarity-based: groups regions with homogeneous characteristics, such as color intensity or texture. An example of this method is Otsu Thresholding.
  • Segmentation Objectives
Segmentation objectives vary depending on the application of the study:
  • Object Detection: identify and isolate the fruit from the background to perform a specific analysis.
  • Shape Analysis: extract the geometry and dimensions of the fruit to evaluate its quality or classify it according to specific standards.
  • Color Detection: identify shades that help determine the level of ripeness, freshness, or the presence of defects.
  • Defect Detection: highlight imperfections such as bruises, stains, or physical damage that affect the quality of the fruit.
C.
Feature extraction:
Feature extraction is a fundamental stage in classification systems based on image processing, as it allows visual data to be transformed into quantitative descriptors that facilitate the precise and efficient classification of objects. From the articles analyzed, it was found that this stage focuses on identifying attributes such as size, color, texture, shape, and spectral characteristics, depending on the objective of the analysis and the type of sensor used to capture images.
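The kind of quantitative descriptors discussed here can be sketched as follows, assuming a binary mask produced by segmentation; the feature set (pixel area, bounding-box aspect ratio, mean color) is a minimal illustrative choice, not the descriptor set of any particular reviewed study:

```python
import numpy as np

def region_features(mask, rgb):
    """Hand-crafted descriptors for a segmented fruit region:
    size (pixel area), shape (bounding-box aspect ratio), mean colour."""
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return {
        "area": int(mask.sum()),
        "aspect_ratio": width / height,
        "mean_rgb": tuple(rgb[mask].mean(axis=0)),
    }

# A 4x6 uniformly red "fruit" region inside a 10x10 image.
mask = np.zeros((10, 10), dtype=bool)
mask[3:7, 2:8] = True
rgb = np.zeros((10, 10, 3))
rgb[mask] = (200.0, 30.0, 30.0)
feats = region_features(mask, rgb)
```

Descriptors of this kind feed directly into the classifiers discussed below, whether classical models or deep networks.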
1.
Type of extracted features
The reviewed studies highlight that the most extracted features include color, texture, shape, and size (Table 3). For example, color and texture are key indicators in detecting ripeness and defects in fruits, while size and shape are essential for assessing the quality, grade, or variety of fruits. In specific research, such as in the case of olives and mangoes, spectral and textural features play a crucial role in variety classification and defect analysis.
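A sketch of what such hand-crafted descriptors look like in practice, with invented pixel values rather than data from any reviewed study: color as the mean channel values inside the fruit mask, size as the mask area, and shape as an aspect ratio.

```python
import numpy as np

def describe(rgb, mask):
    """Hand-crafted descriptors of the kinds listed in Table 3:
    color (mean per channel inside the mask), size (area), shape (aspect ratio)."""
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return {
        "mean_color": rgb[mask].mean(axis=0),      # color feature
        "area_px": int(mask.sum()),                # size feature
        "aspect_ratio": round(width / height, 3),  # shape feature
    }

# Toy example: a 4x6 roughly "orange" rectangle inside a 10x10 image.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[3:7, 2:8] = (230, 120, 20)
mask = img.sum(axis=2) > 0

feats = describe(img, mask)
print(feats)
```

Texture descriptors (e.g. gray-level co-occurrence statistics) follow the same pattern but summarize local intensity relationships instead of global mask geometry.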
2. Feature Extraction Methods
Among the methods used for feature extraction, two main approaches stand out: deep learning-based techniques and classical image-processing methods. Both approaches are used to identify and extract key attributes, such as texture, color, shape, and size, depending on the specific objectives of each study.
  • Deep Learning-Based Approaches: Deep learning models, such as VGG-16, ResNet, DenseNet, and YOLO, are widely used for feature extraction. These models can identify complex patterns related to texture, color, and shape. In the context of fruit classification, these features are essential for tasks, such as defect detection, quality assessment, and classification by type or variety.
  • Use of Color Spaces: The reviewed studies reveal the importance of using different color spaces in feature extraction for fruit classification, as these provide more specific and discriminative representations compared to the standard RGB color space.
In the study on starfruit ripeness classification, RGB, HSV, and CIE L*a*b* color spaces were used to extract color-related features at different stages of ripeness (green, ripe, and overripe). The authors found that the CIE L*a*b* space provided better discrimination between classes due to its ability to separate lightness and color information, achieving accuracies of up to 96.2% with linear discriminant analysis (LDA) [27].
Another study on Batuan fruit grading used HSV space to assess ripeness based on the distribution of hues and saturations. This space is particularly useful for identifying changes in hues associated with ripeness or surface defects under different lighting conditions. This approach allowed fruit to be graded accurately, highlighting the utility of HSV space for handling lighting variations in uncontrolled environments [42].
In the analysis of Moroccan Date fruit varieties, 12 color channels were used, including L, a, and b, as well as others such as R, G, B, and saturation. The CIE L*a*b* space stood out for its ability to capture differences in color intensity and hues between varieties. Models based on this color space achieved an accuracy of 98% when combining textures extracted from images processed in different channels [54].
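The color-space conversions these studies rely on are inexpensive to compute. A small sketch using Python's standard colorsys module, with assumed sample colors (not values from the studies), shows how hue alone separates a green sample from a yellow one far more directly than the raw RGB triplets do:

```python
import colorsys

# Assumed sample colors for illustration only.
unripe_rgb = (60, 160, 50)    # greenish
ripe_rgb = (230, 200, 40)     # yellowish

def hue_deg(rgb):
    """Hue angle in degrees from an 8-bit RGB triplet."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return h * 360.0

# Green sits near 120 deg, yellow near 50 deg: one scalar captures the
# ripeness difference that is spread across all three RGB channels.
print(hue_deg(unripe_rgb), hue_deg(ripe_rgb))
```

CIE L*a*b* conversion works analogously but additionally decorrelates lightness from chroma, which is what gave it the edge in the starfruit study [27].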
C. Classification
The classification stage is the final component in machine vision systems. Its main objective is to assign a specific label or category to each analyzed fruit, whether based on its quality, ripeness, type, variety, or presence of defects. This stage builds on the previously extracted and processed features, using algorithms that interpret these features to make accurate and consistent predictions.
In the reviewed studies, classification is carried out using a variety of approaches, including traditional machine learning algorithms and advanced deep learning models (Table 3). These algorithms are evaluated based on metrics such as precision and accuracy, as well as their ability to handle large volumes of data in real-world scenarios. Furthermore, the selection of the classification method depends on factors such as the type of data, system characteristics, and the application context, whether in orchards, industrial processing lines, or end-use points.
1. Classification algorithms used
In the studies analyzed, the use of machine learning techniques stands out, with a particular emphasis on deep learning models using convolutional neural networks (CNNs). These algorithms have proven to be highly effective in addressing the classification stage in computer vision systems, thanks to their ability to learn complex and generalizable representations directly from the input data.
Pre-trained models, such as ResNet, VGG, Inception, and DenseNet, are widely used for the classification of multiple types of fruits. These models, originally designed for general image recognition tasks, are adapted through transfer learning to handle specific categories within fruit classification. Their application is particularly useful in scenarios where category-specific analysis is required, such as differentiation between fruit varieties or analysis of ripeness and defects.
Table 2 shows consolidated information on the classification methods used in each study; the alternative algorithms each study evaluated for efficiency comparisons are also described.
2. Model accuracy and performance
A direct comparison between all models is not possible in the reviewed studies, as each investigation uses specific datasets, capture conditions, and objectives that affect these parameters. Factors such as fruit type, image quality, lighting conditions, and pre-processing methods vary significantly, making it difficult to establish homogeneous comparisons.
Although a direct comparison between all studies is not possible due to their contextual differences, several stand out for evaluating multiple methods under the same conditions, allowing for a more accurate comparison. Some examples are presented below.
In the paper [27], the authors evaluated different classification models to determine the ripeness level of starfruit using images captured with a smartphone. These models included Linear Discriminant Analysis (LDA), Linear Support Vector Machines (SVMs), Quadratic SVM, Fine K-Nearest Neighbor (KNN), and Subspace Discriminant Analysis (SDA). The models were evaluated in terms of accuracy during calibration and validation stages (see Table 5).
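A minimal sketch of one of these classifiers, k-nearest neighbors over extracted feature vectors, with invented feature values and labels rather than the study's data:

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Minimal k-nearest-neighbour classifier over feature vectors
    (a stand-in for the classical classifiers surveyed, not any study's code)."""
    d = np.linalg.norm(train_X - x, axis=1)          # Euclidean distances
    nearest = train_y[np.argsort(d)[:k]]             # labels of the k closest samples
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]                 # majority vote

# Toy features: [mean hue (deg), normalized area] for two ripeness classes.
X = np.array([[115, 0.80], [120, 0.90], [110, 0.85],   # unripe
              [50, 0.95], [55, 1.00], [48, 0.90]])     # ripe
y = np.array(["unripe"] * 3 + ["ripe"] * 3)

print(knn_predict(X, y, np.array([52, 0.93])))
```

In practice the feature scales would be normalized first, since an unscaled hue axis would otherwise dominate the distance computation.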
In [46], the performance of several pre-trained deep learning models, adapted to a specific dataset for fruit classification, is analyzed by applying transfer learning. The evaluated models include VGG19, ResNet50, DenseNet121, DenseNet201, MobileNetV3, InceptionV3, NASNetMobile, and the proposed FruitVision model.
The FruitVision model, optimized using transfer learning on the MobileNetV3 architecture, achieved the highest accuracy on the evaluated datasets, with values between 97.96% and 99.50%, depending on the type of fruit. For example, it achieved 99.50% accuracy on bananas, outperforming the second-best model, DenseNet201 (98.84%). In addition, FruitVision demonstrated remarkable generalization capacity and consistency in metrics such as recall and F1-score, minimizing prediction errors. Other models, such as MobileNetV3 and ResNet101, showed good but inferior results (details in Table 6). These results highlight pre-trained models adapted through transfer learning as an efficient and accurate solution for fruit classification.
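The metrics used in such comparisons (accuracy, precision, recall, F1-score) can be computed directly from predicted and true labels; a small sketch with invented predictions:

```python
import numpy as np

def metrics(y_true, y_pred, positive):
    """Accuracy plus precision, recall and F1 for one positive class,
    the metrics used to compare classifiers in Tables 5 and 6."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    accuracy = np.mean(y_true == y_pred)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Invented predictions for six samples of a binary ripeness task.
y_true = ["ripe", "ripe", "ripe", "unripe", "unripe", "unripe"]
y_pred = ["ripe", "ripe", "unripe", "unripe", "unripe", "ripe"]
print(metrics(y_true, y_pred, positive="ripe"))
```

For multi-class fruit grading, the same per-class values are typically macro-averaged across classes.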
3. Classification objectives
In the reviewed studies, a variety of classification objectives are observed (Table 2), reflecting the specific needs of each application in artificial vision for the fruit industry. The most common objective was classification by fruit type, present in 32% of the studies, highlighting its importance for tasks such as automation in production lines and fruit recognition at points of consumption.
The second-most frequent objective was the assessment of the state of ripeness, with 23% of the studies focusing on determining levels of ripeness for applications such as automated harvesting, post-harvest quality control, and fruit selection according to commercial standards.
Finally, classification by type of variety represented 16% of the studies, a particularly relevant task for products with multiple commercially significant varieties, such as apples, dates, or mangoes, where varietal characteristics directly affect commercial value and consumer preferences.
These objectives reflect how classification applications vary by context, from early stages of production to direct interaction with the final consumer. Figure 10 shows the percentage distribution of each type of application.
The results obtained in this systematic review highlight the diversity of approaches and objectives in the application of computer vision for the classification and evaluation of fruits. Across the categories analyzed, it was evident how acquisition, preprocessing, segmentation, feature extraction, and classification techniques are implemented and adapted according to the specific needs of each study.
4. Techniques and algorithms
In relation to research question RQ3, three key stages of image processing were identified that require advanced algorithms: segmentation, feature extraction, and classification. In segmentation, the standard toolkit includes edge-detection techniques (Sobel, Canny) and thresholding or region-based methods (Otsu, mean shift clustering, watershed segmentation), which help to accurately define regions of interest while ignoring non-relevant elements of the image. It would be valuable to explore future approaches in which algorithms recognize, rather than ignore, all elements present in the scene, especially in applications related to harvesting and classification at points of consumption.
In feature extraction and classification, a clear trend towards the increasing use of machine learning techniques, particularly deep learning, is evident. Pre-trained models, such as VGG-16, ResNet, and DenseNet, are commonly adapted through transfer learning to fit the specific conditions of each study, offering enhanced accuracy and adaptability across various datasets. However, the analysis of the studies highlights a lack of consensus on the optimal configurations and techniques, suggesting the need to establish common standards in the industry to improve reproducibility and performance consistency.
Figure 11 details the evolution of algorithm usage over time, revealing a significant shift in the field. The studies conducted between 2015 and 2024 show a steady increase in the use of deep learning methods, particularly after 2020, with a parallel decline in the reliance on traditional vision-based techniques, such as edge detection and basic histogram-based methods. For clarity, all algorithms were classified into three categories: deep learning, supervised algorithms (such as decision trees and Support Vector Machines), and traditional vision-based techniques. The results emphasize that, while deep learning currently dominates the landscape, supervised learning algorithms remain relevant in certain applications where computational resources are limited or where datasets are smaller.

5. Discussion

This study was designed to address key challenges in fruit sorting and inspection using machine vision. These include identifying application fields, selecting optimal hardware configurations for image capture, and analyzing the most used or best-performing techniques and algorithms for fruit sorting and quality inspection.
As artificial vision systems continue to evolve, their future application in agriculture, particularly in fruit inspection and classification, holds substantial promise alongside notable challenges. One significant advancement lies in the integration of machine learning algorithms that enhance the accuracy of image analysis, allowing real-time assessment of fruit quality and ripeness. However, the variability of fruits, influenced by diverse lighting conditions and occlusions, presents an ongoing challenge that necessitates robust adaptive algorithms. All data in artificial vision systems are generated by sensors or cameras. These data are images that represent visual information in different ways; the image modalities found in the reviewed studies include RGB, RGB-D, HSV, hyperspectral, and multispectral. From these data, the required characteristics must be extracted and processed according to the corresponding task. For instance, mean-image, color, and histogram-of-gradients feature extraction techniques have shown promising results in classifying fruit quality [78].
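A simplified sketch of a gradient-histogram descriptor of the kind cited above: a toy orientation histogram weighted by gradient magnitude, not the full HOG pipeline with cells, blocks, and normalization.

```python
import numpy as np

def gradient_histogram(gray, bins=8):
    """Toy histogram-of-gradients descriptor: an orientation histogram
    weighted by gradient magnitude (a simplified stand-in for full HOG)."""
    g = gray.astype(float)
    gy, gx = np.gradient(g)                       # finite-difference gradients
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi              # unsigned orientation in [0, pi)
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hist = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)
    return hist / (hist.sum() + 1e-12)            # normalize to sum to 1

# Vertical step edge: all gradient energy points horizontally (orientation ~0),
# so the descriptor concentrates in the first bin.
img = np.zeros((16, 16))
img[:, 8:] = 255.0
h = gradient_histogram(img)
print(h.round(3))
```

Real pipelines compute such histograms per local cell and concatenate them, which preserves spatial layout in addition to dominant edge direction.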
In Industry 4.0, integrating AI into quality inspection is crucial, as it automates the process in a more accurate and cost-effective way. Traditionally, manual inspection has been the primary method for assessing fruit quality, but this process is time-consuming, subjective, and prone to human error. Studies have explored the use of artificial vision systems for automated fruit quality inspection. These systems leverage quality attributes such as color, texture, size, shape, and the presence of defects [79].
One of the key challenges in this domain is the development of robust and accurate computer vision algorithms that can reliably detect and classify multiple types of defects within a single fruit species. Some studies have explored the use of multispectral imaging, including near-infrared and ultraviolet wavelengths, to enhance the detection of non-visible defects [80]. Additionally, machine learning algorithms, such as convolutional neural networks, have shown promising results in the classification and grading of fruits based on their visual characteristics. These algorithms can be trained on large datasets of labeled fruit images, enabling them to learn the distinctive features associated with different quality levels and defect types.
The reviewed studies frequently employ AI methods, particularly CNNs, for fruit sorting [17,44,46]. Several studies have explored CNNs for quality control, and their use for fruit classification and quality inspection has increased significantly in recent years, with excellent results in terms of accuracy. The study in [81] provides a comprehensive review of the current state of the art in CNN-based fruit classification, highlighting key challenges: the dataset must be sufficiently large and well labeled to train a CNN, and the search for model parameters and hyperparameters suited to the specific problem remains open, as it is typically solved by trial-and-error adjustment until the best fit is obtained, which can be time-consuming, especially for very deep models. Most of the selected quality control studies were carried out under laboratory conditions, while others were based on datasets such as Fruit 360 [19], Supermarket [15,16], and FruitNet [46]. The most used CNN architectures were identified as ResNet [46], VGG16 [21], VGG19 [30], and AlexNet [41]. MobileNet is also used in studies such as [44,46], where it proved more efficient with comparable effectiveness. MobileNet may therefore be an attractive alternative for quality inspection in fruit processing, which involves detecting defects, assessing ripeness levels, classifying by size, and sorting. Its advantages include the following: (1) it is fast and lightweight, able to run on edge devices such as a Raspberry Pi or a smartphone; (2) despite being lightweight, it achieves high accuracy in image classification; (3) it supports real-time processing, enabling on-the-fly quality assessment; and (4) pre-trained weights are available, so it can be fine-tuned on a fruit dataset using transfer learning.
For deep learning algorithms, two main categories are identified: custom models and pre-trained models adapted through transfer learning. Figure 12 provides a detailed overview of all classification models used in the reviewed studies, considering that several studies evaluate more than one model to achieve the same objective, highlighting the diversity of approaches and the continuous pursuit of optimization in each application. Similarly, Figure 13 provides, for comparison purposes, a detail of the traditional algorithms used in the analyzed studies. It is possible to see that, in this case, the predominant classification models are Support Vector Machines (SVMs) and linear regression methods.
Figure 12 shows that pre-trained models are predominant overall. When analyzing their usage across the three main application areas, it becomes clear that pre-trained models dominate in the retail sector and points of consumption (Figure 14), where environmental conditions, fruit types, and image capture stability are not controlled. The large amount of data incorporated into these models contributes to developing more robust and adaptable systems.

6. Conclusions

The present systematic literature review, guided by the PRISMA methodology, allowed us to analyze the advances in fruit sorting and quality inspection using computer vision. This study identified the main fields of application, hardware configurations, and methodologies adopted for fruit sorting using image-processing techniques. This review provides valuable guidance for future research, offering information on potential applications, the type of sensors required for image capture, the specific objectives of each application, and the techniques used in the five stages of image processing. Furthermore, the study proposes approaches that could be explored in future research, with the aim of optimizing the balance between accuracy, speed, and efficiency in industrial and commercial applications related to fruit sorting and quality inspection.
The results show new areas of application that had not been previously analyzed, such as the retail sector or final consumption points. This study also addresses in detail each of the algorithms used in the different stages of image processing, obtaining fundamental information on the techniques used for pre-processing, image segmentation, feature extraction, and classification. As demonstrated, data augmentation and transfer learning play a fundamental role in expanding the capacity of models to generalize under various conditions, providing robust performance in both industrial processing lines and retail applications.
Overall, the findings not only reinforce the state of the art but also suggest new directions for future research. The proposed strategies contribute to advancing the development of more scalable, accessible, and cost-effective vision systems, ensuring their adaptability across different sectors of the agricultural industry.

Author Contributions

Conceptualization, I.R.S. and S.C.; methodology, I.R.S.; formal analysis, I.R.S.; investigation, I.R.S.; data curation, I.R.S.; writing—original draft preparation, F.M.; writing—review and editing, I.R.S. and S.C.; visualization, I.R.S.; supervision, S.C.; project administration, Á.P.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the FCT—Fundação para a Ciência e a Tecnologia, I.P. [Project UIDB/05105/2020].

Acknowledgments

The authors extend their appreciation to the Doctorate Program in Smart Industry at the Pontifical Catholic University of Valparaiso for supporting this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Economic Research Service U.S. Department of Agriculture. Ag and Food Sectors and the Economy. Available online: https://www.ers.usda.gov/data-products/ag-and-food-statistics-charting-the-essentials/ag-and-food-sectors-and-the-economy/ (accessed on 27 September 2024).
  2. U.S. Food & Drug Administration. Inspections to Protect the Food Supply. Available online: https://www.fda.gov/food/compliance-enforcement-food/inspections-protect-food-supply (accessed on 27 September 2024).
  3. FAO; WHO. Codex Alimentarius Commission Procedural Manual; Twenty-Eighth: Rome, Italy, 2023; Available online: https://openknowledge.fao.org/items/dfc93e42-67f3-4de9-9dad-b33fb1600b32 (accessed on 27 September 2024).
  4. Goyal, K.; Kumar, P.; Verma, K. AI-based fruit identification and quality detection system. Multimed. Tools Appl. 2022, 82, 24573–24604. [Google Scholar] [CrossRef]
  5. Potdar, R.M.; Sahu, D. Defect Identification and Maturity Detection of Mango Fruits Using Image Analysis. Artic. Int. J. Artif. Intell. 2017, 1, 5–14. [Google Scholar] [CrossRef]
  6. Albarrak, K.; Gulzar, Y.; Hamid, Y.; Mehmood, A.; Soomro, A.B. A Deep Learning-Based Model for Date Fruit Classification. Sustainability 2022, 14, 6339. [Google Scholar] [CrossRef]
  7. Tianjing, Y.; Mhamed, M. Developments in Automated Harvesting Equipment for the Apple in the orchard: Review. Smart Agric. Technol. 2024, 9, 100491. [Google Scholar] [CrossRef]
  8. Chhetri, K.B. Applications of Artificial Intelligence and Machine Learning in Food Quality Control and Safety Assessment. Food Eng. Rev. 2023, 16, 1–21. [Google Scholar] [CrossRef]
  9. Yue, Z.; Miao, Z. Research on Fruit and Vegetable Classification and Recognition Method Based on Depth-Wise Separable Convolution. 2023. Available online: https://www.researchsquare.com/article/rs-3326048/v1 (accessed on 12 February 2025). [CrossRef]
  10. Miranda, J.C.; Gené-Mola, J.; Zude-Sasse, M.; Tsoulias, N.; Escolà, A.; Arnó, J.; Rosell-Polo, J.R.; Sanz-Cortiella, R.; Martínez-Casasnovas, J.A.; Gregorio, E. Fruit sizing using AI: A review of methods and challenges. Postharvest Biol. Technol. 2023, 206, 112587. [Google Scholar] [CrossRef]
  11. Sundaram, S.; Zeid, A. Artificial Intelligence-Based Smart Quality Inspection for Manufacturing. Micromachines 2023, 14, 570. [Google Scholar] [CrossRef]
  12. Espinoza, S.; Aguilera, C.; Rojas, L.; Campos, P.G. Analysis of Fruit Images With Deep Learning: A Systematic Literature Review and Future Directions. IEEE Access 2023, 12, 3837–3859. [Google Scholar] [CrossRef]
  13. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, 71. [Google Scholar] [CrossRef]
  14. Sabzi, S.; Abbaspour-Gilandeh, Y.; García-Mateos, G. A new approach for visual identification of orange varieties using neural networks and metaheuristic algorithms. Inf. Process. Agric. 2018, 5, 162–172. [Google Scholar] [CrossRef]
  15. Shankar, K.; Kumar, S.; Dutta, A.K.; Alkhayyat, A.; Jawad, A.J.M.; Abbas, A.H.; Yousif, Y.K. An Automated Hyperparameter Tuning Recurrent Neural Network Model for Fruit Classification. Mathematics 2022, 10, 2358. [Google Scholar] [CrossRef]
  16. Murthy, T.S.; Kumar, K.V.; Alenezi, F.; Lydia, E.L.; Park, G.-C.; Song, H.-K.; Joshi, G.P.; Moon, H. Artificial Humming Bird Optimization with Siamese Convolutional Neural Network Based Fruit Classification Model. Comput. Syst. Sci. Eng. 2023, 47, 1633–1650. [Google Scholar] [CrossRef]
  17. Ratha, A.K.; Barpanda, N.K.; Sethy, P.K.; Behera, S.K. Automated Classification of Indian Mango Varieties Using Machine Learning and MobileNet-v2 Deep Features. Trait. Du Signal 2024, 41, 669–679. [Google Scholar] [CrossRef]
  18. Zhang, W. Automated Fruit Grading in Precise Agriculture using You Only Look Once Algorithm. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 1136–1144. [Google Scholar] [CrossRef]
  19. Adigun, J.O.; Okikiola, F.M.; Aigbokhan, E.E.; Rufai, M.M. Automated System for Grading Apples using Convolutional Neural Network. Int. J. Innov. Technol. Explor. Eng. 2019, 9, 1458–1464. [Google Scholar] [CrossRef]
  20. Ponce, J.M.; Aquino, A.; Millan, B.; Andujar, J.M. Automatic Counting and Individual Size and Mass Estimation of Olive-Fruits Through Computer Vision Techniques. IEEE Access 2019, 7, 59451–59465. [Google Scholar] [CrossRef]
  21. Yang, L.; Cui, B.; Wu, J.; Xiao, X.; Luo, Y.; Peng, Q.; Zhang, Y. Automatic Detection of Banana Maturity—Application of Image Recognition in Agricultural Production. Processes 2024, 12, 799. [Google Scholar] [CrossRef]
  22. Abbaspour-Gilandeh, S.S. Automatic grading of emperor apples based on image processing and ANFIS. J. Agric. Sci. Bilim. Derg. 2015, 21, 326–336. [Google Scholar] [CrossRef]
  23. Morales-Vargas, E.; Fuentes-Aguilar, R.Q.; De-La-Cruz-Espinosa, E.; Hernández-Melgarejo, G. Blackberry Fruit Classification in Underexposed Images Combining Deep Learning and Image Fusion Methods. Sensors 2023, 23, 9543. [Google Scholar] [CrossRef]
  24. Patkar, G.S.; Anjaneyulu, G.S.G.N.; Mouli, P.V.S.S.R.C. Challenging Issues in Automated Oil Palm Fruit Grading. IAES Int. J. Artif. Intell. (IJ-AI) 2018, 7, 111–118. [Google Scholar] [CrossRef]
  25. Hanafi, M.; Shafie, S.M.; Ibrahim, Z. Classification of C. annuum and C. frutescens Ripening Stages: How Well Does Deep Learning Perform? IIUM Eng. J. 2024, 25, 167–178. [Google Scholar] [CrossRef]
  26. Fu, L.; Sun, S.; Li, R.; Wang, S. Classification of Kiwifruit Grades Based on Fruit Shape Using a Single Camera. Sensors 2016, 16, 1012. [Google Scholar] [CrossRef] [PubMed]
  27. Saha, K.K.; Rahman, A.; Moniruzzaman, M.; Syduzzaman, M.; Uddin, Z.; Rahman, M.; Ali, A.; al Riza, D.F.; Oliver, M.H. Classification of starfruit maturity using smartphone-image and multivariate analysis. J. Agric. Food Res. 2022, 11, 100473. [Google Scholar] [CrossRef]
  28. Altaheri, H.; Alsulaiman, M.; Muhammad, G. Date Fruit Classification for Robotic Harvesting in a Natural Environment Using Deep Learning. IEEE Access 2019, 7, 117115–117133. [Google Scholar] [CrossRef]
  29. Alturki, A.S.; Islam, M.; Alsharekh, M.F.; Almanee, M.S.; Ibrahim, A.H. Date Fruits Grading and Sorting Classification Algoritham Using Colors and Shape Features. Int. J. Eng. Res. Technol. 2020, 13, 1917–1920. [Google Scholar] [CrossRef]
  30. Faisal, M.; Albogamy, F.; Elgibreen, H.; Algabri, M.; Alqershi, F.A. Deep Learning and Computer Vision for Estimating Date Fruits Type, Maturity Level, and Weight. IEEE Access 2020, 8, 206770–206782. [Google Scholar] [CrossRef]
  31. Nasir, I.M.; Bibi, A.; Shah, J.H.; Khan, M.A.; Sharif, M.; Iqbal, K.; Nam, Y.; Kadry, S. Deep Learning-based Classification of Fruit Diseases: An Application for Precision Agriculture. Comput. Mater. Contin. 2021, 66, 1949–1962. [Google Scholar] [CrossRef]
  32. Khalifa, N.E.M.; Wang, J.; Taha, M.H.N.; Zhang, Y. DeepDate: A deep fusion model based on whale optimization and artificial neural network for Arabian date classification. PLoS ONE 2024, 19, e0305292. [Google Scholar] [CrossRef]
  33. Örnek, M.N.; Haciseferoğullari, H. Design of Real Time Image Processing Machine for Carrot Classification. Yuz. Yıl Univ. J. Agric. Sci. 2020, 30, 355–366. [Google Scholar] [CrossRef]
  34. Wang, B.; Yang, H.; Zhang, S.; Li, L. Detection of Defective Features in Cerasus Humilis Fruit Based on Hyperspectral Imaging Technology. Appl. Sci. 2023, 13, 3279. [Google Scholar] [CrossRef]
  35. Liu, H.; He, J.; Fan, X.; Liu, B. Detection of variety and wax bloom of Shaanxi plum during post-harvest handling. Chemom. Intell. Lab. Syst. 2024, 246, 105066. [Google Scholar] [CrossRef]
  36. Tai, N.D.; Lin, W.C.; Trieu, N.M.; Thinh, N.T. Development of a Mango-Grading and -Sorting System Based on External Features, Using Machine Learning Algorithms. Agronomy 2024, 14, 831. [Google Scholar] [CrossRef]
  37. Yogesh; Dubey, A.K.; Ratan, R. Development of Feature Based Classification of Fruit using Deep Learning. Int. J. Innov. Technol. Explor. Eng. 2019, 8, 3285–3290. [Google Scholar] [CrossRef]
  38. Gill, H.S.; Khehra, B.S. Efficient image classification technique for weather degraded fruit images. IET Image Process. 2020, 14, 3463–3470. [Google Scholar] [CrossRef]
  39. Wang, S.; Zhang, Y.; Ji, G.; Yang, J.; Wu, J.; Wei, L. Fruit Classification by Wavelet-Entropy and Feedforward Neural Network Trained by Fitness-Scaled Chaotic ABC and Biogeography-Based Optimization. Entropy 2015, 17, 5711–5728. [Google Scholar] [CrossRef]
  40. Chen, Y.; Sun, H.; Zhou, G.; Peng, B. Fruit Classification Model Based on Residual Filtering Network for Smart Community Robot. Wirel. Commun. Mob. Comput. 2021, 2021, 5541665. [Google Scholar] [CrossRef]
  41. Gom-Os, D.F.K. Fruit Classification using Colorized Depth Images. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 01405106. [Google Scholar] [CrossRef]
  42. Militante, S.V. Fruit Grading of Garcinia Binucao (Batuan) using Image Processing. Int. J. Recent Technol. Eng. 2019, 8, 1828–1832. [Google Scholar] [CrossRef]
  43. Gill, H.S.; Khalaf, O.I.; Alotaibi, Y.; Alghamdi, S.; Alassery, F. Fruit Image Classification Using Deep Learning. Comput. Mater. Contin. 2022, 71, 5135–5150. [Google Scholar] [CrossRef]
  44. Siddiqi, R. Fruit-classification model resilience under adversarial attack. SN Appl. Sci. 2022, 4, 1–22. [Google Scholar] [CrossRef]
  45. Mimma, N.-E.; Ahmed, S.; Rahman, T.; Khan, R. Fruits Classification and Detection Application Using Deep Learning. Sci. Program. 2022, 2022, 1–16. [Google Scholar] [CrossRef]
  46. Hayat, A.; Morgado-Dias, F.; Choudhury, T.; Singh, T.P.; Kotecha, K. FruitVision: A deep learning based automatic fruit grading system. Open Agric. 2024, 9, 20220276. [Google Scholar] [CrossRef]
  47. Hartley, Z.K.; Jackson, A.S.; Pound, M.; French, A.P. GANana: Unsupervised Domain Adaptation for Volumetric Regression of Fruit. Plant Phenomics 2021, 2021, 9874597. [Google Scholar] [CrossRef] [PubMed]
  48. Fu, Y.; Nguyen, M.; Yan, W.Q. Grading Methods for Fruit Freshness Based on Deep Learning. SN Comput. Sci. 2022, 3, 1–13. [Google Scholar] [CrossRef]
  49. Hu, Y.; Chang, J.; Li, Y.; Zhang, W.; Lai, X.; Mu, Q. High Zoom Ratio Foveated Snapshot Hyperspectral Imaging for Fruit Pest Monitoring. J. Spectrosc. 2023, 2023, 1–13. [Google Scholar] [CrossRef]
  50. Al-Saif, A.M.; Abdel-Sattar, M.; Aboukarima, A.M.; Eshra, D.H. Identification of Indian jujube varieties cultivated in Saudi Arabia using an artificial neural network. Saudi J. Biol. Sci. 2021, 28, 5765–5772. [Google Scholar] [CrossRef]
  51. Faisal, M.; Alsulaiman, M.; Arafah, M.; Mekhtiche, M.A. IHDS: Intelligent Harvesting Decision System for Date Fruit Based on Maturity Stage Using Deep Learning and Computer Vision. IEEE Access 2020, 8, 167985–167997. [Google Scholar] [CrossRef]
  52. Pushpavalli, M. Image Processing Technique for Fruit Grading. Int. J. Eng. Adv. Technol. 2019, 8, 3894–3897. [Google Scholar] [CrossRef]
  53. Mukhiddinov, M.; Muminov, A.; Cho, J. Improved Classification Approach for Fruits and Vegetables Freshness Based on Deep Learning. Sensors 2022, 22, 8192. [Google Scholar] [CrossRef]
  54. Noutfia, Y.; Ropelewska, E. Innovative Models Built Based on Image Textures Using Traditional Machine Learning Algorithms for Distinguishing Different Varieties of Moroccan Date Palm Fruit (Phoenix dactylifera L.). Agriculture 2022, 13, 26. [Google Scholar] [CrossRef]
  55. Azadnia, R.; Fouladi, S.; Jahanbakhshi, A. Intelligent detection and waste control of hawthorn fruit based on ripening level using machine vision system and deep learning techniques. Results Eng. 2023, 17, 100891. [Google Scholar] [CrossRef]
  56. Rezaei, P.; Hemmat, A.; Shahpari, N.; Mireei, S.A. Machine vision-based algorithms to detect sunburn pomegranate for use in a sorting machine. Measurement 2024, 232, 114682. [Google Scholar] [CrossRef]
  57. Wakchaure, G.; Nikam, S.B.; Barge, K.R.; Kumar, S.; Meena, K.K.; Nagalkar, V.J.; Choudhari, J.; Kad, V.; Reddy, K. Maturity stages detection prototype device for classifying custard apple (Annona squamosa L.) fruit using image processing approach. Smart Agric. Technol. 2023, 7, 100394. [Google Scholar] [CrossRef]
  58. Meshram, V.A.; Patil, K.; Ramteke, S.D. MNet: A Framework to Reduce Fruit Image Misclassification. Ing. Des Syst. D’Inf. 2021, 26, 159–170. [Google Scholar] [CrossRef]
  59. Khan, R.; Debnath, R. Multi Class Fruit Classification Using Efficient Object Detection and Recognition Techniques. Int. J. Image Graph. Signal Process. 2019, 11, 1–18. [Google Scholar] [CrossRef]
  60. Kumar, R.A.; Rajpurohit, V.S.; Bidari, K.Y. Multi Class Grading and Quality Assessment of Pomegranate Fruits Based on Physical and Visual Parameters. Int. J. Fruit Sci. 2018, 19, 372–396. [Google Scholar] [CrossRef]
  61. Aldakhil, L.A.; Almutairi, A.A. Multi-Fruit Classification and Grading Using a Same-Domain Transfer Learning Approach. IEEE Access 2024, 12, 44960–44971. [Google Scholar] [CrossRef]
  62. Shurygin, B.; Smirnov, I.; Chilikin, A.; Khort, D.; Kutyrev, A.; Zhukovskaya, S.; Solovchenko, A. Mutual Augmentation of Spectral Sensing and Machine Learning for Non-Invasive Detection of Apple Fruit Damages. Horticulturae 2022, 8, 1111. [Google Scholar] [CrossRef]
  63. Ponce, J.M.; Aquino, A.; Millán, B.; Andújar, J.M. Olive-Fruit Mass and Size Estimation Using Image Analysis and Feature Modeling. Sensors 2018, 18, 2930. [Google Scholar] [CrossRef]
  64. Ponce, J.M.; Aquino, A.; Andujar, J.M. Olive-Fruit Variety Classification by Means of Image Processing and Convolutional Neural Networks. IEEE Access 2019, 7, 147629–147641. [Google Scholar] [CrossRef]
  65. Lopez-Ruiz, N.; Granados-Ortega, F.; Carvajal, M.A.; Martinez-Olmos, A. Portable multispectral imaging system based on Raspberry Pi. Sens. Rev. 2017, 37, 322–329. [Google Scholar] [CrossRef]
  66. Ismail, N.; Malik, O.A. Real-time Visual Inspection System for Grading Fruits using Computer Vision and Deep Learning Techniques. Inf. Process. Agric. 2021, 9, 24–37. [Google Scholar] [CrossRef]
  67. Sajitha, P.; Andrushia, A.D.; Mostafa, N.; Shdefat, A.Y.; Suni, S.; Anand, N. Smart farming application using knowledge embedded-graph convolutional neural network (KEGCNN) for banana quality detection. J. Agric. Food Res. 2023, 14, 100767. [Google Scholar] [CrossRef]
  68. Ma, H.; Chen, M.; Zhang, J. Study on the Fruit Grading Recognition System Based on Machine Vision. Adv. J. Food Sci. Technol. 2015, 8, 777–780. [Google Scholar] [CrossRef]
  69. Tran, V.L.; Doan, T.N.C.; Ferrero, F.; Le Huy, T.; Le-Thanh, N. The Novel Combination of Nano Vector Network Analyzer and Machine Learning for Fruit Identification and Ripeness Grading. Sensors 2023, 23, 952. [Google Scholar] [CrossRef]
  70. Hayajneh, A.M.; Batayneh, S.; Alzoubi, E.; Alwedyan, M. TinyML Olive Fruit Variety Classification by Means of Convolutional Neural Networks on IoT Edge Devices. Agriengineering 2023, 5, 2266–2283. [Google Scholar] [CrossRef]
  71. Rocha, A.; Hauagge, D.C.; Wainer, J.; Goldenstein, S. Automatic fruit and vegetable classification from images. Comput. Electron. Agric. 2009, 70, 96–104. [Google Scholar] [CrossRef]
  72. Prabira, K.; Sethy, B.S.; Pandey, C. Mango Variety. Mendeley Data, 2023. Available online: https://data.mendeley.com/datasets/tk6d98f87d/2 (accessed on 27 September 2024).
  73. Mureşan, H.; Oltean, M. Fruit recognition from images using deep learning. Acta Univ. Sapientiae Inform. 2018, 10, 26–42. [Google Scholar] [CrossRef]
  74. Zarouit, Y.; Zekkouri, H.; Ouhda, M.; Aksasse, B. Date fruit detection dataset for automatic harvesting. Data Brief 2023, 52, 109876. [Google Scholar] [CrossRef]
  75. Marko, S. Automatic Fruit Recognition Using Computer Vision. Mentor: Matej Kristan. Faculty of Computer and Information Science, University of Ljubljana, 2013. Available online: https://www.vicos.si/resources/fids30/ (accessed on 12 February 2025).
  76. Meshram, V.; Patil, K. FruitNet: Indian fruits image dataset with quality for machine learning applications. Data Brief 2021, 40, 107686. [Google Scholar] [CrossRef]
  77. Li, G.; Holguin, G.; Park, J.; Lehman, B.; Hull, L.; Jones, V. The CASC IFW Database. Available online: https://engineering.purdue.edu/RVL/Database/IFW/index.html (accessed on 10 November 2024).
  78. Priya, P.S.; Jyoshna, N.; Amaraneni, S.; Swamy, J. Real time fruits quality detection with the help of artificial intelligence. Mater. Today Proc. 2020, 33, 4900–4906. [Google Scholar] [CrossRef]
  79. Saldaña, E.; Siche, R.; Luján, M.; Quevedo, R. Review: Computer vision applied to the inspection and quality control of fruits and vegetables. Braz. J. Food Technol. 2013, 16, 254–272. [Google Scholar] [CrossRef]
  80. Zhang, B.; Gu, B.; Tian, G.; Zhou, J.; Huang, J.; Xiong, Y. Challenges and solutions of optical-based nondestructive quality inspection for robotic fruit and vegetable grading systems: A technical review. Trends Food Sci. Technol. 2018, 81, 213–231. [Google Scholar] [CrossRef]
  81. Naranjo-Torres, J.; Mora, M.; Hernández-García, R.; Barrientos, R.J.; Fredes, C.; Valenzuela, A. A Review of Convolutional Neural Network Applied to Fruit Image Processing. Appl. Sci. 2020, 10, 3443. [Google Scholar] [CrossRef]
Figure 1. Search strategy used in databases.
Figure 2. Article selection scheme according to PRISMA [13].
Figure 3. Distribution of studies over the period analyzed.
Figure 4. Distribution of studies by geographical location.
Figure 5. Percentage distribution according to fruit type.
Figure 6. Percentage distribution by classification objective.
Figure 7. Carrot-sorting machine [33].
Figure 8. Stylization filter applied to improve object detection [61].
Figure 9. Different transformations applying data augmentation: (a) original image, (b) rotation, (c) darkening, (d) brightening, (e) pretzel, and (f) blurring [21].
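The augmentation transforms illustrated in Figure 9 can be reproduced with a few lines of array code. The sketch below is illustrative only: the function name `augment` and the exact parameters (rotation angle, brightness factors, kernel size) are our own choices, not taken from [21].

```python
import numpy as np

def augment(img):
    """Illustrative versions of the Figure 9 transforms on an
    HxWx3 float image with values in [0, 1]."""
    h, w = img.shape[:2]
    # (f) blurring: 3x3 mean filter applied per channel, edge-padded
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blurred = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            blurred += padded[dy:dy + h, dx:dx + w, :] / 9.0
    return {
        "rotation": np.rot90(img),                    # (b) 90-degree rotation
        "darkening": np.clip(img * 0.6, 0.0, 1.0),    # (c) scale intensities down
        "brightening": np.clip(img * 1.4, 0.0, 1.0),  # (d) scale intensities up
        "blurring": blurred,                          # (f)
    }
```

In practice, the reviewed studies apply such transforms through library pipelines (e.g., randomized rotation angles and brightness factors) rather than fixed parameters, so each training epoch sees slightly different variants of every image.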
Figure 10. Percentage distribution of each type of application.
Figure 11. Temporal evolution of the use of different classification algorithms.
Figure 12. Deep learning algorithms used in the studies.
Figure 13. Traditional algorithms used in the studies.
Figure 14. Environmental conditions of deep learning algorithms used in the studies.
Table 1. Inclusion and exclusion criteria.
Inclusion criteria:
IC1: Studies that address inspection, classification, or detection of defects in fruits using artificial vision or image processing.
IC2: Open-access articles.
IC3: Articles published between 2015 and 2024.
IC4: Articles written in English.
IC5: Works published as scientific articles in journals ("journal articles").
IC6: Empirical studies presenting algorithms, image-processing techniques, hardware configurations, or evaluations of characteristics relevant to fruit quality.
Exclusion criteria:
EC1: Studies that do not address fruit inspection, grading, or quality, or that focus on other agricultural applications with no direct relation to fruit quality control, even if the search terms appear in the title, abstract, or keywords.
EC2: Literature reviews, conference papers, abstracts, letters to the editor, theses, technical reports, patents, or other documents that are not original scientific articles.
EC3: Articles written in languages other than English.
EC4: Articles published before 2015.
EC5: Papers whose full text is not available for review.
Table 2. Summary of the selected publications.
Article | Year | Fruit Type | Objective | Camera Type/Lighting Source | Feature Type | Classification Method | Algorithms Used and Compared | Target
[14] | 2018 | Oranges | Varieties | GigE industrial camera/RGB/controlled artificial | Texture, Color, Shape | ANN and metaheuristic | Custom ANN | Fruit-processing industries
[15] | 2022 | Multiple fruits | Fruit classes | RGB/ambient lighting | Shape, Texture, Color | RNN | Adam with DenseNet169 | Retail
[16] | 2023 | Multiple fruits | Fruit classes | RGB/ambient lighting | Color, Shape, Texture | CNN | VGG-16 with Spiral Optimization | Retail
[17] | 2024 | Mango | Varieties | Smartphone/ambient lighting | Deep Features | Cubic SVM | MobileNet-v2 | Retail
[18] | 2023 | Multiple fruits | Fruit classes | RGB/ambient lighting | Size, Shape | CNN | YOLOv5 | Fruit-processing industries
[19] | 2019 | Apple | Fruit classes | RGB/ambient lighting | Color, Shape, Size | CNN | ResNet50 | Fruit-processing industries
[20] | 2019 | Olive | Varieties | 24 Mpx CCD/HSV/controlled artificial | Size, Mass | - | Linear Regression methods | Fruit-processing industries
[21] | 2024 | Banana | Ripeness | RGB/ambient lighting | Color, Texture | CNN | ResNet34, ResNet101, VGG16, VGG19 | Fruit-processing industries
[22] | 2015 | Apple | Size classification | RGB/controlled artificial | Mass, Size | Fuzzy neural network | ANFIS + Linear Regression methods | Fruit-processing industries
[23] | 2023 | Blackberry | Ripeness | Multispectral/ambient lighting | Ripeness Stage | CNN | Custom CNN, ResNet50 | Orchard
[24] | 2018 | Oil palm fruit | Ripeness | RGB/ambient lighting | Color, Texture, Size | Fuzzy Logic Method | Fuzzy Logic Method | Orchard
[25] | 2024 | Chili | Ripeness | Smartphone/ambient lighting | Color, Texture, Size | CNN | EfficientNetB0, VGG16, ResNet50 | Fruit-processing industries
[26] | 2016 | Kiwifruit | Shape | RGB/ambient lighting | Shape, Size | - | Linear Regression methods | Fruit-processing industries
[27] | 2023 | Starfruit | Ripeness | Smartphone/controlled artificial | Color Space Model | LDA, KNN, SVM | Linear Discriminant Analysis (LDA), Linear Support Vector Machine, Quadratic SVM, Fine KNN, Subspace Discriminant Analysis | Fruit-processing industries
[28] | 2019 | Date fruit | Ripeness | RGB/ambient lighting | Color, Texture, Size | CNN | AlexNet, VGG16 | Orchard
[29] | 2020 | Date fruit | Shape | RGB/ambient lighting | Color, Shape | Vision-Based Algorithms | - | Fruit-processing industries
[30] | 2020 | Date fruit | Ripeness | RGB/ambient lighting | Texture, Color, Shape | CNN, SVM | ResNet, VGG-19, Inception-V3, NASNet, SVM | Orchard
[31] | 2021 | Multiple fruits | Fruit classes | RGB/ambient lighting | Color, Shape, Texture | CNN | VGG-19 + Pyramid histogram of oriented gradients (PHOG) | Retail
[32] | 2024 | Multiple fruits | Fruit classes | RGB/ambient lighting | Texture, Shape, Color | ANN, CNN | Custom ANN, AlexNet, SqueezeNet, GoogLeNet, ResNet50 | Orchard
[33] | 2020 | Carrot | Shape | RGB/controlled artificial | Length, Diameter, Shape | Vision-Based Algorithms | - | Fruit-processing industries
[34] | 2023 | Cerasus humilis | Quality Defects | Hyperspectral/controlled artificial | Defects | LS-SVM | Least Squares Support Vector Machine | Fruit-processing industries
[35] | 2024 | Shaanxi Plum | Fruit classes | RGB/ambient lighting | Variety, Wax Bloom | CNN | RetinaNet, Faster R-CNN, YOLOv3, YOLOv5, YOLOv7 | Fruit-processing industries
[36] | 2024 | Mango | Shape | HSV/controlled artificial | Shape, Surface Defects | KNN, DT, RF, ADB, XGB, GB, ET, SVM | XGBoost, Random Forest, Extra Tree Classifier, Gradient Boosting, SVM, AdaBoost, Decision Tree, KNN | Fruit-processing industries
[37] | 2019 | Multiple fruits | Fruit classes | RGB/ambient lighting | Shape, Texture | CNN | AlexNet, GoogLeNet | Retail
[38] | 2020 | Multiple fruits | Fruit classes | RGB/ambient lighting | Contrast-Enhanced Features | CNN + RNN | Custom CNN | Orchard
[39] | 2015 | Multiple fruits | Fruit classes | RGB/ambient lighting | Wavelet-Entropy | FNN | Feed-Forward Neural Network | Retail
[40] | 2021 | Multiple fruits | Fruit classes | RGB/ambient lighting | Residual Features | SVM | SVM, DT, Forest, KNN | Retail
[41] | 2023 | Multiple fruits | Fruit classes | RGB/controlled artificial | Adversarial Robust Features | CNN | AlexNet, GoogLeNet, ResNet101, VGG16 | Retail
[42] | 2019 | Batuan | Quality Defects | RGB/ambient lighting | Depth, Shape | SVM | SVM | Fruit-processing industries
[43] | 2022 | Apple | Fruit classes | RGB/ambient lighting | Color, Size | CNN + RNN + LSTM | CNN + RNN + LSTM | Orchard
[44] | 2022 | Multiple fruits | Fruit classes | RGB/ambient lighting | Enhanced Features | CNN | IndusNet, VGG16, MobileNet | Retail
[45] | 2022 | Multiple fruits | Fruit classes | RGB/ambient lighting | Shape, Color | CNN | YOLOv7, ResNet50, VGG16 | Retail
[46] | 2024 | Multiple fruits | Fruit classes | RGB/ambient lighting | Texture, Size, Color | CNN | FruitVision (MobileNetV3, VGG19, ResNet50, ResNet101, DenseNet121, DenseNet201, InceptionV3, NASNetMobile) | Retail
[47] | 2021 | Banana | 3D Reconstruction | RGB/mixed | 3D Volumetric Features | GAN | GAN | Retail
[48] | 2022 | Multiple fruits | Ripeness | RGB/ambient lighting | Color, Texture | CNN | AlexNet, VGG, GoogLeNet, ResNet | Retail
[49] | 2023 | Grapes | Pest monitoring | Hyperspectral/controlled artificial | Hyperspectral Features | Spectral Analysis | Spectral Analysis | Orchard
[50] | 2021 | Indian Jujube | Varieties | RGB/ambient lighting | Morphological, Color | ANN | Custom ANN | Fruit-processing industries
[51] | 2020 | Date fruit | Ripeness | RGB/ambient lighting | Maturity Indicators | CNN | VGG-19, NASNet, Inception-V3 | Orchard
[52] | 2019 | Mango | Quality Defects | RGB/controlled artificial | Color, Size | KNN | KNN | Fruit-processing industries
[53] | 2022 | Multiple fruits | Ripeness | RGB/ambient lighting | Freshness Attributes | CNN | YOLOv4 | Retail
[54] | 2023 | Date fruit | Varieties | RGB/controlled artificial | Texture Features | SMO, Naive Bayes, IBk, LogitBoost, LMT | SMO, Naive Bayes, IBk, LogitBoost, LMT | Fruit-processing industries
[55] | 2023 | Hawthorn Fruit | Ripeness | RGB/controlled artificial | Color, Ripeness | CNN | Custom CNN, Inception-V3, ResNet50 | Fruit-processing industries
[56] | 2024 | Pomegranate | Quality Defects | RGB/controlled artificial | Sunburn Features | ANN, SVM | ANN, SVM | Fruit-processing industries
[57] | 2024 | Custard apples | Ripeness | RGB/ambient lighting | Color, Areole Opening | SVM, KNN | SVM, K-Means | Fruit-processing industries
[58] | 2021 | Multiple fruits | Varieties | RGB/mixed | Enhanced Features | CNN | Inception-V3 | Retail
[59] | 2019 | Multiple fruits | Object detection | RGB/mixed | Shape, Color | CNN | Custom CNN | Retail
[60] | 2019 | Pomegranate | Size classification | RGB/ambient lighting | Weight, Size | ANN | Custom ANN | Fruit-processing industries
[61] | 2024 | Multiple fruits | Varieties | RGB/mixed | Visual and Textural | CNN | EfficientNetV2 | Retail
[62] | 2022 | Apple | Quality Defects | Hyperspectral/controlled artificial | Spectral and Spatial | RF | Random Forest | Fruit-processing industries
[63] | 2018 | Olive | Size classification | RGB/controlled artificial | Size, Mass | - | Linear Regression methods | Fruit-processing industries
[64] | 2019 | Olive | Varieties | RGB/controlled artificial | Variety | CNN | Inception-ResNetV2 (AlexNet, InceptionV1, InceptionV3, ResNet-50, ResNet-101) | Fruit-processing industries
[65] | 2017 | Apple | Pesticide monitoring | Multispectral/controlled artificial | Spectral Features | Vision-Based Algorithms | Threshold + Histogram analysis | Orchard
[66] | 2022 | Apple | Ripeness | RGB/controlled artificial | Appearance, Freshness | CNN | ResNet, DenseNet, MobileNetV2, NASNet, EfficientNet | Fruit-processing industries
[67] | 2023 | Banana | Quality Defects | RGB/ambient lighting | Color, Texture, Size | CNN | KEGCNN (Knowledge Embedded-Graph CNN) | Fruit-processing industries
[68] | 2015 | Multiple fruits | Fruit classes | RGB/ambient lighting | Signal Parameters (S11, S21) | KNN, ANN | KNN, ANN | Fruit-processing industries
[69] | 2023 | Multiple fruits | Fruit classes | - | Variety | CNN | Custom CNN | Retail
[70] | 2023 | Olive | Varieties | RGB/ambient lighting | Variety | CNN | TinyML approach | Fruit-processing industries
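Many of the traditional pipelines summarized in Table 2 reduce each image to a compact color-feature vector and classify it with a distance-based method such as KNN. The sketch below is purely illustrative: the helper names `color_features` and `knn_predict` are our own, not code from any cited study.

```python
import numpy as np

def color_features(img):
    """Per-channel mean and standard deviation of an HxWx3 image:
    a 6-dimensional color descriptor of the kind used by the
    color-based studies in Table 2."""
    return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbours majority vote over feature vectors."""
    distances = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(distances)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

In the reviewed systems, such descriptors are typically computed on segmented fruit regions (e.g., after Otsu thresholding) rather than on whole frames, so background pixels do not dilute the color statistics.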
Table 3. Benchmark datasets used in the articles under study.
Reference | Dataset Name | Images Used | Dataset Reference
[15,16] | Supermarket produce | 2633 | [71]
[17] | Mango Variety | 1853 | [72]
[19] | Fruit 360 | 8271 | [73]
[30] | Date fruit dataset for automated harvesting and visual yield estimation | 8079 | [74]
[31] | Fruit 360 | 65,429 | [73]
[32] | Fruit 360 | 8072 | [73]
[40] | Fruit 360 | 22,688 | [73]
[45] | FIDS-30 | 971 | [75]
[46] | FruitNet | 19,526 | [76]
[51] | Date fruit dataset for automated harvesting and visual yield estimation | 8079 | [74]
[61] | FruitNet | 14,700 | [76]
[66] | Internal feeding-worm database of the Comprehensive Automation for Specialty Crops | 8791 | [77]
Table 4. Percentage distribution by image capture device.
Capture Device | Percentage | References
RGB Camera | 84% | [14,15,16,18,19,20,21,22,24,26,28,29,30,31,32,33,35,36,37,38,39,40,41,42,43,44,45,46,47,48,50,51,52,53,54,55,56,57,58,59,60,61,63,64,66,67,70]
Hyperspectral | 5% | [34,49,62]
Smartphone | 5% | [17,27]
Multispectral | 4% | [23,65]
Radio frequency | 2% | [69]
Table 5. Comparative results between different types of classification methods [27].
Classification Model | Classification Accuracy (%) | Validation Precision (%)
Linear Discriminant Analysis (LDA) | 96.2 | 93.3
Linear SVM | 88.7 | 86.7
Quadratic SVM | 90.3 | 86.7
Fine KNN | 94.3 | 93.3
Subspace Discriminant Analysis (SDA) | 90.4 | 86.7
Table 6. Comparative results between different classification methods and the FruitVision method proposed by [46] for bananas.
Model | Accuracy (%) | Precision (%) | Recall (%) | F1 Score (%) | Specificity (%)
VGG19 | 97.54 ± 0.17 | 97.25 ± 0.34 | 96.85 ± 0.21 | 97.05 ± 0.48 | 96.12 ± 0.21
ResNet50 | 96.21 ± 0.62 | 96.42 ± 0.27 | 96.05 ± 0.68 | 96.23 ± 0.30 | 95.74 ± 0.38
ResNet101 | 98.30 ± 0.04 | 97.66 ± 0.31 | 96.94 ± 0.37 | 97.30 ± 0.49 | 95.76 ± 0.62
DenseNet121 | 98.42 ± 0.33 | 97.21 ± 0.28 | 97.14 ± 0.24 | 97.17 ± 0.33 | 96.27 ± 0.84
DenseNet201 | 98.84 ± 0.28 | 98.35 ± 0.45 | 97.51 ± 0.29 | 97.93 ± 0.67 | 97.10 ± 0.36
MobileNetV3 | 97.24 ± 0.43 | 96.82 ± 0.36 | 96.83 ± 0.47 | 96.82 ± 0.31 | 96.88 ± 0.22
InceptionV3 | 94.11 ± 0.57 | 94.21 ± 0.65 | 93.64 ± 0.62 | 93.92 ± 0.57 | 93.27 ± 0.26
NASNetMobile | 96.74 ± 0.27 | 96.72 ± 0.38 | 96.25 ± 0.34 | 96.48 ± 0.63 | 96.38 ± 0.65
FruitVision (Proposed) | 99.50 ± 0.20 | 99.19 ± 0.28 | 98.88 ± 0.74 | 99.03 ± 0.55 | 98.77 ± 0.34
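The metrics reported in Tables 5 and 6 follow the standard confusion-matrix definitions. A minimal helper (our own, for reference only, not the evaluation code of the cited studies) computing them from one-vs-rest counts:

```python
def classification_metrics(tp, fp, fn, tn):
    """Percentage metrics as reported in Tables 5 and 6, computed
    from one-vs-rest confusion-matrix counts (true positives,
    false positives, false negatives, true negatives)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # also called sensitivity
    return {
        "accuracy": 100 * (tp + tn) / (tp + fp + fn + tn),
        "precision": 100 * precision,
        "recall": 100 * recall,
        "f1": 100 * 2 * precision * recall / (precision + recall),
        "specificity": 100 * tn / (tn + fp),
    }
```

For multi-class grading, these per-class values are usually macro-averaged across classes, which is how the ± deviations across runs in Table 6 are typically summarized.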
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
