Article

YOLO-BLBE: A Novel Model for Identifying Blueberry Fruits with Different Maturities Using the I-MSRCR Method

Chenglin Wang, Qiyu Han, Jianian Li, Chunjiang Li and Xiangjun Zou
1 Faculty of Modern Agricultural Engineering, Kunming University of Science and Technology, Kunming 650504, China
2 Foshan-Zhongke Innovation Research Institute of Intelligent Agriculture and Robotics, Guangzhou 528231, China
3 College of Engineering, South China Agricultural University, Guangzhou 510642, China
* Author to whom correspondence should be addressed.
Agronomy 2024, 14(4), 658; https://doi.org/10.3390/agronomy14040658
Submission received: 27 February 2024 / Revised: 19 March 2024 / Accepted: 21 March 2024 / Published: 24 March 2024

Abstract

Blueberry is among the fruits that bring high economic returns to orchard farmers. Identifying blueberry fruits at different maturities helps orchard farmers plan pesticide application, estimate yield, and conduct harvest operations efficiently. Vision systems for automated orchard yield estimation have received growing attention for identifying fruits at different maturity stages. However, due to interfering factors such as varying outdoor illumination, color similarity with the surrounding canopy, imaging distance, and occlusion in natural environments, developing reliable visual methods for identifying blueberry fruits with different maturities remains a serious challenge. This study constructed a YOLO-BLBE (Blueberry) model combined with an innovative I-MSRCR (Improved MSRCR (Multi-Scale Retinex with Color Restoration)) method to accurately identify blueberry fruits with different maturities. The color features of blueberry fruit in the original image were enhanced by the I-MSRCR algorithm, which improves the traditional MSRCR algorithm by adjusting the proportions of the color restoration factors. A GhostNet model embedded with the CA (coordinate attention) module replaced the original backbone network of the YOLOv5s model to form the backbone of the YOLO-BLBE model. The BIFPN (Bidirectional Feature Pyramid Network) structure was applied in the neck network of the YOLO-BLBE model, and Alpha-EIOU was used as the loss function of the model to determine and filter candidate boxes. The main contributions of this study are as follows: (1) The I-MSRCR algorithm proposed in this paper can effectively amplify the color differences between blueberry fruits of different maturities. (2) Adding synthesized blueberry images processed by the I-MSRCR algorithm to the training set can improve the model’s recognition accuracy for blueberries of different maturity levels. (3) The YOLO-BLBE model achieved an average identification accuracy of 99.58% for mature blueberry fruits, 96.77% for semi-mature blueberry fruits, and 98.07% for immature blueberry fruits. (4) The YOLO-BLBE model had a size of 12.75 MB and an average detection speed of 0.009 s.

1. Introduction

In recent years, global blueberry production has been on the rise, establishing blueberry as the second most economically significant soft fruit in the world [1]. Over the past decade, the United States, the world’s leading blueberry producer, has contributed 50% of the global blueberry supply [2]. Blueberry is a berry with high nutritional and economic value [3]. Maturity level is a crucial phenotypic characteristic related to the ease of harvesting and the overall yield of blueberry fruits, and it can also serve as a valuable indicator for tracking berry growth and enhancing crop management practices [4,5]. By monitoring the ripeness levels of blueberry fruits in orchards, farmers can gain insights into the growth status of blueberries, enabling them to estimate yields and take appropriate agronomic measures. Compared to identifying a single category of fruit, recognizing fruits at different ripeness levels can also guide robot path planning and obstacle avoidance for continuous operations [6,7].
The maturity of blueberry fruit is divided into three stages: immature, semi-mature, and mature [8]. As the fruit grows, chlorophyll degrades and anthocyanin synthesis increases [9], so the fruit color progresses from white-green through pink and red to blue [10]. These growth characteristics create several challenges for identifying blueberry maturity. Blueberry fruit clusters are severely occluded and have a long maturation span, so fruits with different maturities and sizes often appear in the same cluster [11]. In addition, the color of immature blueberry fruits is similar to that of the leaves, and the detection accuracy for immature fruits is usually poor [12]. These issues make identifying blueberry fruits with different maturities challenging.
Machine learning and deep learning can address the aforementioned challenges by relying on color features such as RGB (red, green, blue), HSV (hue, saturation, value), and spectral information to identify fruits [13]. Integrating completed local binary color patterns with color features can effectively classify fruits and vegetables [14]. The identification accuracy of common fruits can be enhanced by combining their HSV and RGB information [15]. Blueberry fruits with different maturities have been identified using classification and recognition methods combining RGB color operators [16]. Grey correlation analysis can be used to extract fruit features and accurately identify spherical fruits [17]. Using support vector data description and K-means clustering to process spectral information enables comprehensive identification of blueberry fruits at the pixel level [8]. The surface quality of fruits can be detected by genetic-algorithm-based extraction of color and texture features [18]. Sobel operators and Gaussian blur can be used to extract features of green oranges [19]. Traditional machine learning algorithms often rely heavily on superficial features, which may not capture complex patterns effectively; this can limit their ability to generalize to new data or adapt to changes in the environment. Compared with classical machine learning methods, deep learning methods can achieve higher accuracy in recognition tasks [20].
The improved YOLOv3 model with a refined loss function has shown promising detection results for densely packed and shadowed tomato fruits [21]. Using the YOLOv4 model for kiwifruit recognition and categorizing fruits by occlusion level helps avoid selecting heavily occluded fruits for harvesting [22]. The I-YOLOv4-tiny model was developed to identify blueberry fruits with different maturities, achieving an average accuracy of 96.24% in complex scenes with uneven occlusion and lighting conditions [12]. An improved YOLOMuskmelon model achieved a good detection speed for fruit detection [23]. The identification accuracy of small target tomatoes can be enhanced with an improved YOLOv5s model by adding mosaic data to the training dataset [24]. An improved YOLOv7 model addressed the low identification accuracy caused by high apple fruit density and severe occlusions and overlaps [25]. An improved YOLOv8 model was used to identify litchi mother branches and calculate picking points to guide the operation of picking robots [26].
Based on the above challenges and inspiration, an improved YOLOv5s model, termed the YOLO-BLBE model, combined with color features was developed to identify blueberry fruits with different maturities in natural environments. The specific research content of this article is as follows:
In Section 2.1, an I-MSRCR algorithm is proposed that improves the color restoration factor and scale decomposition size of the traditional MSRCR (Multi-Scale Retinex with Color Restoration) algorithm. The color features of blueberry fruit images were enhanced using the I-MSRCR algorithm, and the I-MSRCR-processed fruit regions were then combined with the original backgrounds to construct composite images. The training dataset for the deep learning model was constructed by combining natural images with composite images. In Section 2.2, the YOLO-BLBE model is constructed, incorporating the GhostNet architecture embedded with the CA (coordinate attention) mechanism, the BIFPN structure, and Alpha-EIOU to improve the backbone network, neck network, and prediction network of YOLOv5s, respectively. In Section 3, multiple groups of experiments are designed to evaluate the enhancement effect of the proposed I-MSRCR algorithm on images and the performance of the proposed model. In Section 4, the conclusions are drawn.

2. Materials and Methods

2.1. Dataset Construction of Blueberry Images

2.1.1. Dataset Composition

The image dataset of this study included both natural and synthetic images. This study is a follow-up to the work published by Tan et al. in 2018 [8] and used the same natural image dataset; the information and acquisition details of the natural images can be found in that paper.
The process of generating a synthetic blueberry image is shown in Figure 1. The color of the original blueberry image was enhanced using the I-MSRCR algorithm (proposed in Section 2.1.2). The fruit regions were manually extracted from the color-enhanced image and the background from the original image using Photoshop 2018. The synthetic image was obtained by superimposing the color-enhanced blueberry fruits onto the background of the original blueberry image.
The natural and synthetic image dataset was expanded through rotation, translation, flipping, and scaling to a total of 1452 images. The dataset was divided into training, validation, and test sets with proportions of 70%, 15%, and 15%, respectively. The image types included in the dataset and the number of images of each type are listed in Table 1. Some representative images from the dataset are shown in Figure 2. An illustrative sketch of this preparation step is given below.
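As an illustration of this preparation step, the following Python sketch applies the four augmentation operations and the 70%/15%/15% split; the specific augmentation parameters (rotation angle, translation offsets, scaling factor) are assumptions for demonstration rather than the authors’ settings.

```python
import random

import cv2
import numpy as np

def augment(img):
    """Illustrative versions of the four augmentations used in this study:
    rotation, translation, flipping, and scaling (parameters are assumed)."""
    h, w = img.shape[:2]
    ops = [
        lambda x: cv2.rotate(x, cv2.ROTATE_90_CLOCKWISE),                                      # rotation
        lambda x: cv2.warpAffine(x, np.float32([[1, 0, 20], [0, 1, 10]]), (w, h)),             # translation
        lambda x: cv2.flip(x, 1),                                                              # horizontal flip
        lambda x: cv2.warpAffine(x, cv2.getRotationMatrix2D((w / 2, h / 2), 0, 1.2), (w, h)),  # scaling
    ]
    return random.choice(ops)(img)

def split_dataset(image_paths, seed=0):
    """70%/15%/15% train/validation/test split used in this study."""
    paths = sorted(image_paths)
    random.Random(seed).shuffle(paths)
    n = len(paths)
    n_train, n_val = int(0.7 * n), int(0.15 * n)
    return paths[:n_train], paths[n_train:n_train + n_val], paths[n_train + n_val:]
```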

2.1.2. I-MSRCR Algorithm

The MSRCR algorithm is an image-processing technology that can mitigate image distortions such as shadows and highlights while preserving detail [27]. However, the algorithm process is complex, time-consuming, sensitive to noise, and prone to color distortion. This research proposed an I-MSRCR (improved MSRCR) algorithm to enhance color features of blueberry fruits. Five representative images of blueberry fruits at three different growth stages (mature, semi-mature, and immature) were selected. Through various methods such as histogram analysis, color mean calculation, and manual selection, the most representative color values for each growth stage of the blueberry fruit were determined. As shown in Figure 3a, a complete gradient color band was created using a method of five representative color gradient stitching to simulate the primary color changes during the ripening process of blueberry fruit. RGB color curves corresponding to the gradient color band were constructed as shown in Figure 3b.
The length of the gradient color band was set to 100 units, divided evenly into five segments of 20 units, one for each representative color. Because the band is an even gradient, the representative color values fall at the midpoints of these segments; that is, the color values of the five representative blueberry fruits shown in Figure 3a correspond to horizontal coordinates 10, 30, 50, 70, and 90 in Figure 3b, respectively. The horizontal coordinates 20 and 80 in Figure 3b were the key thresholds for dividing blueberry fruits into different maturity levels, where pink began to exceed the proportion of green and blue began to exceed the proportion of purple, respectively. Blueberry fruits within the ranges 0 < x < 20, 20 < x < 80, and 80 < x < 100 were therefore defined as immature, semi-mature, and mature blueberry fruits, respectively. It can be seen from Figure 3b that the color characteristics of blueberry fruits at different growth stages depended mainly on the proportion changes of the R and B channels and showed a relatively weak correlation with changes in the G channel. To further validate this hypothesis, the contribution of each color channel to the color difference between blueberry fruits of different maturities needs to be assessed. As an example, the contribution of the R channel is calculated with Formulas (1) and (2).
$$P_R = \frac{R_2 - R_1}{D}, \quad (1)$$

$$R_{contribute} = \frac{P_R}{P_R + P_G + P_B}, \quad (2)$$
where $D$ represents the color difference calculated as the Euclidean distance; $R_1$ and $R_2$ are the R values of the two colors being compared; $P_R$, $P_G$, and $P_B$ represent the color difference ratios of the R, G, and B channels, respectively; and $R_{contribute}$ represents the contribution of the R channel to the color difference. The color difference values and the contributions of the R, G, and B channels between blueberry fruits with different maturities are shown in Table 2.
It can be confirmed from Table 2 that the R and B channels had a significant contribution to the color difference, while the contribution of the G channel was relatively small. Based on the above analysis, this article proposed an I-MSRCR algorithm by adjusting the proportion of color channels in the traditional MSRCR algorithm’s color recovery factor and setting the proportions of the R, G, and B color channels to 35%, 30%, and 35%, respectively.
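The following Python sketch illustrates how Formulas (1) and (2) can be computed; the two example colors are hypothetical placeholders, not the representative values used in the paper.

```python
import numpy as np

def channel_contributions(c1, c2):
    """Contribution of each RGB channel to the colour difference between
    two representative colours c1 and c2 (Formulas (1) and (2)).

    c1, c2: (R, G, B) tuples in the 0-255 range.
    Returns a dict mapping channel name to its contribution ratio.
    """
    c1 = np.asarray(c1, dtype=float)
    c2 = np.asarray(c2, dtype=float)
    d = np.linalg.norm(c2 - c1)        # Euclidean colour difference D
    p = np.abs(c2 - c1) / d            # per-channel difference ratios P_R, P_G, P_B
    contrib = p / p.sum()              # normalised contribution of each channel
    return dict(zip("RGB", contrib))

# Example with two hypothetical representative colours (immature vs. mature):
print(channel_contributions((180, 200, 160), (60, 70, 150)))
```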
The color enhancement of blueberry images based on the I-MSRCR algorithm is illustrated in Figure 4. In step 1, the input blueberry color image was converted from the RGB color space to the CIE L*a*b* (CIE Luminance Chroma) color space.
In step 2, multi-scale decomposition and Retinex-based enhancement were used to process the CIE L*a*b* color space image of the input image. A Gaussian filter was applied in performing the multi-scale decomposition of the brightness component in the CIE L*a*b* color space. The process is shown in Formulas (3) and (4).
$$I_\sigma(x, y) = I(x, y) \times G_\sigma(x, y), \quad (3)$$

$$G_\sigma(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}, \quad (4)$$
where $\sigma$ is the standard deviation of the Gaussian filter, which controls the degree of filtering; $G_\sigma(x, y)$ is the Gaussian filter at scale $\sigma$; $I(x, y)$ is the input image; and $I_\sigma(x, y)$ is the smoothed image filtered by the Gaussian filter at scale $\sigma$. In this study, $\sigma$ was set to three scales, namely, 3, 1.5, and 0.75.
In step 3, Retinex-based enhancement was performed on each scale image to enhance the brightness and contrast. The enhancement process is shown in Formulas (5) and (6).
$$\mathrm{singleScaleRetinex}(I, \sigma) = \log I - \log\!\left(G_\sigma \times I\right), \quad (5)$$

$$\mathrm{multiScaleRetinex}(I, \sigma_{List}) = \frac{1}{N} \sum_{\sigma \in \sigma_{List}} \mathrm{singleScaleRetinex}(I, \sigma), \quad (6)$$
In Formula (5), $I$ represents the original image, and $\mathrm{singleScaleRetinex}(I, \sigma)$ represents the result of single-scale Retinex enhancement at the current scale $\sigma$. In Formula (6), $\sigma_{List}$ represents the series of scales $\sigma$, $N$ represents the number of scales, and $\mathrm{multiScaleRetinex}(I, \sigma_{List})$ represents the result of the multi-scale decomposition.
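As an illustration, a minimal Python/OpenCV sketch of the single-scale and multi-scale Retinex operations in Formulas (5) and (6) is given below; the small offset added before taking logarithms is an implementation detail assumed here to avoid log(0).

```python
import cv2
import numpy as np

def single_scale_retinex(img, sigma):
    """log(I) - log(G_sigma * I), Formula (5); the +1 offset avoids log(0)."""
    img = img.astype(np.float32) + 1.0
    blur = cv2.GaussianBlur(img, (0, 0), sigma)  # ksize (0, 0): derived from sigma
    return np.log(img) - np.log(blur)

def multi_scale_retinex(img, sigmas=(3.0, 1.5, 0.75)):
    """Equal-weight average of single-scale results over the three scales
    used in this study, Formula (6)."""
    return sum(single_scale_retinex(img, s) for s in sigmas) / len(sigmas)
```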
In step 4, color restoration was performed on all pixels that had been enhanced at different scales, which was able to correct the color shift introduced during the Retinex-based enhancement process. The process of color restoration is shown in Formulas (7)–(9).
$$R_{\mathrm{MSRCR}_i}(x, y) = C_i(x, y) \times R_{\mathrm{MSR}_i}(x, y), \quad (7)$$

$$C_i(x, y) = f\!\left(I_i'(x, y)\right) = f\!\left(\frac{I_i(x, y)}{\sum_{j=1}^{N} I_j(x, y)}\right), \quad (8)$$

$$f\!\left(I_i'(x, y)\right) = \beta \log\!\left(\alpha I_i'(x, y)\right) = \beta \left[\log\!\left(\alpha I_i(x, y)\right) - \log \sum_{j=1}^{N} I_j(x, y)\right], \quad (9)$$
In Formula (7), $R_{\mathrm{MSR}_i}(x, y)$ represents the result of the multi-scale decomposition, $R_{\mathrm{MSRCR}_i}(x, y)$ represents the color-restored result, and $C_i(x, y)$ represents the color recovery factor of the RGB color channels, where $i$ denotes one of the three RGB color channels. In Formula (8), $f$ represents the mapping function of the RGB color space, and $N$ is the number of RGB color channels. In Formula (9), $\beta$ is the gain constant with a value of 46, and $\alpha$ is the controlled nonlinear strength with a value of 125.
In step 5, an I-MSRCR-based blueberry color enhancement image was obtained by synthesizing color restoration images at each scale.
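Putting steps 2–5 together, the following sketch outlines one possible implementation of the I-MSRCR enhancement with the re-weighted color recovery factor (35%, 30%, 35%); the final linear stretch to 8-bit values is an assumption, as the output normalization is not specified in the text.

```python
import cv2
import numpy as np

# Channel weights of the colour recovery factor proposed in this study (R, G, B),
# plus the alpha/beta constants from Formula (9).
CHANNEL_WEIGHTS = np.array([0.35, 0.30, 0.35], dtype=np.float32)
ALPHA, BETA = 125.0, 46.0

def i_msrcr(bgr, sigmas=(3.0, 1.5, 0.75)):
    """A minimal sketch of I-MSRCR colour enhancement (steps 2-5)."""
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) + 1.0  # avoid log(0)

    # Multi-scale Retinex, Formulas (5) and (6).
    msr = np.zeros_like(rgb)
    for s in sigmas:
        msr += np.log(rgb) - np.log(cv2.GaussianBlur(rgb, (0, 0), s))
    msr /= len(sigmas)

    # Colour recovery factor with re-weighted channels, Formulas (7)-(9).
    channel_sum = rgb.sum(axis=2, keepdims=True)
    crf = BETA * (np.log(ALPHA * rgb) - np.log(channel_sum)) * CHANNEL_WEIGHTS
    out = crf * msr

    # Linear stretch to 0-255 for display (assumed normalisation).
    out = (out - out.min()) / (out.max() - out.min() + 1e-6) * 255.0
    return cv2.cvtColor(out.astype(np.uint8), cv2.COLOR_RGB2BGR)
```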

2.2. Construction of the YOLO-BLBE Model

The YOLO-BLBE model proposed in this paper was constructed based on the network architecture of YOLOv5s, which was a lightweight model with fast detection speed for identifying blueberry fruits with different maturities.

2.2.1. Backbone Network of the YOLO-BLBE Model

The combination of the GhostNet model and the CA block replaced the backbone network of the YOLOv5s model to form the backbone network of the YOLO-BLBE model. The structure of the GhostNet model and the processing flow of an image through it are shown in Figure 5. After the image was input into the GhostNet model, preliminary features were extracted by a preliminary convolution using a small number of 3 × 3 convolution kernels. The preliminary convolution results were divided into two parts: one part was identity-mapped, and the other part, containing the preliminary convolutional feature maps α1, α2, …, αn, was processed by cheap depthwise convolutions. The results of the two parts were combined to obtain the output feature map.
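A minimal PyTorch sketch of a Ghost module of the kind described above is shown below; the layer sizes and the ratio between primary and ghost feature maps are illustrative choices, not the exact configuration of the YOLO-BLBE backbone.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost module sketch: a few ordinary convolutions produce primary feature
    maps, cheap depthwise convolutions generate the remaining ('ghost') maps,
    and the two parts are concatenated."""

    def __init__(self, in_ch, out_ch, ratio=2, kernel=3, cheap_kernel=3):
        super().__init__()
        primary_ch = out_ch // ratio
        cheap_ch = out_ch - primary_ch
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, kernel, padding=kernel // 2, bias=False),
            nn.BatchNorm2d(primary_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, cheap_ch, cheap_kernel, padding=cheap_kernel // 2,
                      groups=primary_ch, bias=False),           # depthwise "cheap" operation
            nn.BatchNorm2d(cheap_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)                       # preliminary convolution
        return torch.cat([y, self.cheap(y)], 1)   # identity part + ghost maps

# Quick shape check with a dummy input:
# x = torch.randn(1, 3, 640, 640); print(GhostModule(3, 16)(x).shape)
```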
The YOLO-BLBE model added the CA block at the end of the backbone. As shown in Figure 6, the input feature map was encoded and aggregated onto each channel of the image using two pooling layers, X Avg and Y Avg. Two directional feature maps were concatenated, and two-dimensional convolution was implemented. Then, batch normalization and split were implemented, and the split feature images were convolved using convolution kernels with a size of 1 × 1. Attention vectors were output by using the Sigmoid function to process the split feature images.
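The CA block described above can be sketched in PyTorch as follows; the channel reduction factor is an assumed hyperparameter.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """CA block sketch: pool along H and W separately (X Avg / Y Avg), share a
    1x1 convolution, split, and produce two direction-aware attention maps."""

    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, 1, bias=False)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        xh = self.pool_h(x)                              # directional encoding along H
        xw = self.pool_w(x).permute(0, 1, 3, 2)          # directional encoding along W
        y = self.act(self.bn(self.conv1(torch.cat([xh, xw], dim=2))))  # concat + conv + BN
        yh, yw = torch.split(y, [h, w], dim=2)           # split back into two directions
        ah = torch.sigmoid(self.conv_h(yh))              # attention along H, via 1x1 conv + Sigmoid
        aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))        # attention along W
        return x * ah * aw                               # attention-weighted features
```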

2.2.2. Neck Network of the YOLO-BLBE Model

A BIFPN (Bidirectional Feature Pyramid Network) structure was used as the neck network of the YOLO-BLBE model. The feature map was input into the neck network and processed into multi-level resolution images $P_1$ to $P_7$, as shown in Figure 7. The multi-level resolution images were fused in the BIFPN structure, and the multi-scale feature pyramid was output from the structure. The BIFPN computation consisted of three parts, which can be illustrated by taking the sixth-level resolution image as an example. The weighted fusion $O$ is calculated with Formula (10), where $w_i$ and $I_i$ are the learnable weights within the BIFPN structure and the input feature maps with different resolutions, respectively. The intermediate feature of the sixth-level resolution image on the top-down path is calculated with Formula (11), where $P_6^{in}$ is the input feature of the sixth-level resolution image, $\mathrm{Resize}(P_7^{in})$ is the seventh-level input feature up-sampled or down-sampled to the size of the sixth-level input feature, and $P_6^{td}$ is the intermediate feature. The sixth layer of the multi-scale feature pyramid, $P_6^{out}$, is obtained with Formula (12). The other layers of the multi-scale feature pyramid were constructed in the same way as the sixth layer.
$$O = \sum_i \frac{w_i}{\epsilon + \sum_j w_j} \cdot I_i, \quad (10)$$

$$P_6^{td} = \mathrm{Conv}\!\left(\frac{w_1 \cdot P_6^{in} + w_2 \cdot \mathrm{Resize}(P_7^{in})}{w_1 + w_2 + \epsilon}\right), \quad (11)$$

$$P_6^{out} = \mathrm{Conv}\!\left(\frac{w_1 \cdot P_6^{in} + w_2 \cdot P_6^{td} + w_3 \cdot \mathrm{Resize}(P_5^{out})}{w_1 + w_2 + w_3 + \epsilon}\right), \quad (12)$$
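The weighted fusion performed at each BiFPN node (Formulas (10)–(12)) can be sketched as follows; the ReLU used to keep the weights non-negative and the 3 × 3 fusion convolution are common choices and are assumptions here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusion(nn.Module):
    """Fast normalised fusion at a BiFPN node: learnable non-negative weights,
    normalised by their sum plus a small epsilon."""

    def __init__(self, n_inputs, channels, eps=1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))
        self.eps = eps
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, features):
        # 'features' is a list of equally sized maps (after Resize up/down-sampling).
        w = F.relu(self.w)                       # keep weights non-negative
        w = w / (w.sum() + self.eps)             # normalise, as in Formula (10)
        fused = sum(wi * fi for wi, fi in zip(w, features))
        return self.conv(fused)                  # Conv(...) in Formulas (11) and (12)

# Example: fusing P6_in with an up-sampled P7_in to obtain the intermediate P6_td.
# p6_td = WeightedFusion(2, 256)([p6_in, F.interpolate(p7_in, scale_factor=2)])
```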

2.2.3. Prediction Network of the YOLO-BLBE Model

Alpha-EIOU was used as the loss function of the prediction network of the YOLO-BLBE model. The calculation process of EIOU is shown in Formula (13).
$$L_{EIOU} = L_{IOU} + L_{dis} + L_{asp}, \quad (13)$$
where $L_{IOU}$, $L_{dis}$, and $L_{asp}$ are the overlap loss, center distance loss, and width-height loss of the predicted box and the real box, respectively. The parameter $\alpha$ was added to EIOU for unifying exponentiation, and the calculation of Alpha-EIOU is shown in Formula (14).
$$L_{\alpha\text{-}EIOU} = \left(1 - L_{IOU} + \frac{\rho^2(b, b^{gt})}{c^2} + \frac{\rho^2(w, w^{gt})}{c_w^2} + \frac{\rho^2(h, h^{gt})}{c_h^2}\right)^{\alpha}, \quad (14)$$
In Formula (14), $\rho(\cdot)$ is the Euclidean distance; $c$, $c_w$, and $c_h$ are the diagonal length, width, and height of the minimum enclosing box; $b$, $w$, and $h$ are the center, width, and height of the prediction box, respectively; and $b^{gt}$, $w^{gt}$, and $h^{gt}$ are the center, width, and height of the ground-truth box, respectively. The detection boxes were sorted by confidence score from high to low. If the $L_{\alpha\text{-}EIOU}$ value of a detection box was less than the threshold of 0.5, the detection box was retained.
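For reference, a sketch of an Alpha-EIoU loss for boxes in (x1, y1, x2, y2) format is given below. It assumes the overlap term is 1 − IoU and that the exponent α (set here to 3 as in the Alpha-IoU literature) is applied to the full EIoU expression; the authors’ exact variant may differ.

```python
import torch

def alpha_eiou_loss(pred, target, alpha=3.0, eps=1e-7):
    """Alpha-EIoU sketch for box tensors of shape (N, 4) in (x1, y1, x2, y2) format."""
    # Intersection and IoU.
    x1 = torch.max(pred[:, 0], target[:, 0]); y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2]); y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box: width c_w, height c_h, squared diagonal c^2.
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Squared centre distance and width/height differences (terms of Formula (14)).
    rho2 = (((pred[:, :2] + pred[:, 2:]) - (target[:, :2] + target[:, 2:])) ** 2).sum(1) / 4
    dw = (pred[:, 2] - pred[:, 0]) - (target[:, 2] - target[:, 0])
    dh = (pred[:, 3] - pred[:, 1]) - (target[:, 3] - target[:, 1])

    eiou = 1 - iou + rho2 / c2 + dw ** 2 / (cw ** 2 + eps) + dh ** 2 / (ch ** 2 + eps)
    return eiou.pow(alpha).mean()
```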

2.2.4. YOLO-BLBE Model

The overall network architecture and image processing flow are shown in Figure 8. The input blueberry color image was split into the three RGB color channels. The preliminary feature map of the original image was obtained after the three color channels were processed by the backbone network. The feature pyramid was obtained after the preliminary feature map was processed by the neck network. The original image with confidence boxes was output after the feature pyramid was processed by the prediction network.

2.3. Performance Analysis Experiment

To validate the effectiveness of the proposed model and algorithm in this study, four sets of experiments were conducted sequentially.
The first set of experiments verified the rationality of the color recovery factor ratio used in the proposed I-MSRCR algorithm. This study set the proportions of the R, G, and B color channels to 35%, 30%, and 35%, respectively; the two comparative groups used ratios of 30%, 40%, and 30% and of 40%, 20%, and 40%, respectively. Five pairs of blueberry fruits at different growth stages were extracted from the original images and from the corresponding images color-enhanced with the I-MSRCR algorithm. For each fruit, the single-channel value of each pixel in the original image was plotted on the horizontal axis and the corresponding value in the color-enhanced image on the vertical axis. The single-channel values of each pixel in the five pairs of blueberry fruit images were recorded in this way for each of the three color channel ratios.
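The pixel-value comparison described above can be reproduced with a simple scatter plot; the following sketch assumes the fruit regions of the original and enhanced images have already been extracted as arrays of identical shape with the channel as the last axis.

```python
import matplotlib.pyplot as plt
import numpy as np

def channel_scatter(original, enhanced, channel=0, ax=None):
    """Scatter of per-pixel single-channel values: original image on the
    horizontal axis, I-MSRCR-enhanced image on the vertical axis. Points above
    the diagonal are pixels whose channel value was increased by the enhancement."""
    ax = ax or plt.gca()
    x = original[..., channel].ravel().astype(float)
    y = enhanced[..., channel].ravel().astype(float)
    ax.scatter(x, y, s=1, alpha=0.2)
    ax.plot([0, 255], [0, 255], "k--", linewidth=1)   # y = x diagonal
    ax.set_xlabel("original value")
    ax.set_ylabel("enhanced value")
    return ax

# Hypothetical usage with masked fruit regions from a pair of images:
# channel_scatter(orig_fruit_pixels, enhanced_fruit_pixels, channel=2)  # R channel in BGR order
```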
In the second set of experiments, the proposed YOLO-BLBE model was trained using a dataset containing natural and synthetic blueberry images and a dataset containing only natural blueberry images, separately. The performance during the training process and the detection results of the trained model were recorded and analyzed.
In the third set of experiments, the model proposed in this study, along with several common deep learning network models, such as the YOLOv4 model, YOLOv5s model, YOLOv7 model, YOLOv8s model, and faster-RCNN, were trained using the dataset containing natural and synthetic blueberry images. A comparative performance analysis of the models was conducted.
The fourth set of experiments tested the performance of the proposed YOLO-BLBE model in identifying blueberry fruits with different maturities in different natural environments. The hardware environment was a computer equipped with an Intel i5-13600KF processor, 32 GB of RAM, and a GeForce RTX 4080 GPU, using the CUDA 11.2 parallel computing architecture and the NVIDIA cuDNN 8.0.5 GPU acceleration library. The software environment was the PyTorch deep learning framework (Python 3.10). Data preprocessing was performed using LabelImg 1.8.6, Photoshop 2018, and MATLAB 2020b. Anaconda 2023 was used to configure and manage the virtual environment, and PyCharm was used to compile and run the programs.
Model performance metrics mainly included P (precision), R (recall), F1 (harmonic average), AP (average precision), and mAP (mean average precision), as shown in Formulas (15)–(17).
$$\mathrm{precision} = \frac{T_p}{T_p + F_p}, \quad \mathrm{recall} = \frac{T_p}{T_p + F_N}, \quad F1 = \frac{2 \times \mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}, \quad (15)$$

$$\mathrm{AP} = \frac{\sum \mathrm{precision}}{N}, \quad (16)$$

$$\mathrm{mAP} = \frac{\sum_{i=1}^{K} AP_i}{N_C}, \quad (17)$$
where $T_p$ is the number of correctly identified blueberry fruits, $F_p$ is the number of incorrectly identified blueberry fruits, $F_N$ is the number of missed blueberry fruits, $N$ is the total number of images, and $N_C$ is the number of maturity categories. AP corresponds to the area under the P-R curve (the integral of precision over recall), and mAP is the mean of the AP values over all categories.
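A small sketch of these metrics (Formulas (15)–(17)) is given below; the counts in the usage example are hypothetical.

```python
import numpy as np

def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from detection counts, Formula (15)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def mean_average_precision(ap_per_class):
    """mAP as the mean of per-class AP values, Formula (17)."""
    return float(np.mean(ap_per_class))

# Hypothetical counts for one maturity class:
p, r, f1 = precision_recall_f1(tp=470, fp=30, fn=12)
print(round(p, 3), round(r, 3), round(f1, 3))
print(mean_average_precision([0.9958, 0.9677, 0.9807]))   # three maturity classes
```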
During model training, the number of epochs was set to 200, the batch size to 16, the weight decay to 0.005, and the learning rate to 0.01.
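For reference, these reported settings can be collected in a YOLOv5-style hyperparameter dictionary such as the sketch below; the key names and the input resolution are assumptions following common YOLOv5 conventions, not the authors’ actual configuration files.

```python
# Training settings reported above, expressed as an assumed YOLOv5-style config.
train_cfg = {
    "epochs": 200,          # training epochs
    "batch_size": 16,       # images per batch
    "lr0": 0.01,            # initial learning rate
    "weight_decay": 0.005,  # weight decay
    "img_size": 640,        # assumed input resolution (not stated in the paper)
}
```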

3. Results and Discussion

3.1. Performance Analysis of Blueberry Fruit Color Enhancement

In the first set of experiments, the color enhancement results of blueberry fruits were as shown in Figure 9.
In the results for the first group of color channel ratios (30%, 40%, 30%), the R and B values were densely distributed below the diagonal of each plot, while the G values were densely distributed above the diagonal, indicating that this ratio cannot effectively enhance the surface R and B features of blueberry fruits. For the second group of ratios (35%, 30%, 35%), the R and B values were distributed on both sides of the diagonal, with a denser distribution above it. This implies that after processing with the I-MSRCR algorithm, the R and B values of most pixels on the blueberry fruit surface were increased, which helped enlarge the color difference between blueberry fruits at different growth stages and enhanced the model’s ability to capture their color features. The G values were evenly distributed on both sides of the diagonal, indicating that the overall G values of the image before and after processing were little affected by the proposed I-MSRCR algorithm. For the third group of ratios (40%, 20%, 40%), the R and B values were too densely distributed above the diagonal and the G values were densely distributed below it, indicating that this ratio increased the R and B values excessively and that the original color features may have been lost. Based on these results, the channel ratio of 35%, 30%, and 35% was chosen as the color recovery factor ratio for the I-MSRCR algorithm; with this ratio, the algorithm appropriately increases the weights of the R and B channels while reducing the color shifts generated during the color restoration of blueberry fruit images.
Similar findings have been reported in a published study, in which adjusting the proportion or composition of color channels helped distinguish fruits of different maturities [28].

3.2. Comparison of Identification Performance of the YOLO-BLBE Model Trained with Different Datasets

In the second set of experiments, the AP curves and loss convergence curves of the YOLO-BLBE model trained with different datasets are shown in Figure 10 and Figure 11.
By comparing Figure 10a,b, it can be seen that the AP value for mature blueberry fruit identified by the YOLO-BLBE model trained with both natural and synthetic images was 99.58%, which was 0.08% higher than the AP value of 99.50% obtained by the model trained with only natural images. Notably, the AP value for semi-mature blueberry fruit identified by the model trained with both natural and synthetic images was 96.77%, which was 2.47% higher than the AP value of 94.30% obtained by the model trained with only natural images. The I-MSRCR algorithm enhanced the R and B values of the blueberry fruit surface, which strengthened the distinguishing features between mature and semi-mature blueberry fruits; this explains the increase in their AP values when the YOLO-BLBE model was combined with the I-MSRCR algorithm. However, the AP value for immature blueberry fruit identified by the model trained with both natural and synthetic images was 98.07%, which was 1.46% lower than the AP value of 99.53% obtained by the model trained with only natural images. This is consistent with the fact that the proposed I-MSRCR algorithm increased the R and B values while reducing the G values; because the G channel was weakened, the improvement of surface color features for immature fruits was smaller than that for mature and semi-mature fruits. Nevertheless, the mAP value obtained by training the YOLO-BLBE model on data containing natural and synthetic blueberry fruit images was 98.14%, which was 0.34% higher than the mAP value of 97.80% obtained by training with only natural blueberry fruit images. This indicates that combining the YOLO-BLBE model with the I-MSRCR algorithm improves the overall recognition accuracy for blueberry fruits of the three maturities.
Figure 11a–d shows two sets of loss convergence curves of the YOLO-BLBE model, representing the loss changes on the training set and on the validation set, respectively. In the local zoomed windows in Figure 11, it can be seen that the loss curves of the YOLO-BLBE model trained without the I-MSRCR algorithm exhibited jitter, while the loss curves of the model trained in combination with the I-MSRCR algorithm converged more smoothly. For the train_loss curve, the smooth trend indicates that the features of the training targets were prominent in the training dataset color-enhanced by the I-MSRCR algorithm and that the training dataset and the YOLO-BLBE model matched each other well. For the val_loss curve, the smooth trend indicates that a reasonable enhancement of the surface color of blueberry fruits helped the YOLO-BLBE model identify them effectively.
Table 3 compares the performance parameters of the YOLO-BLBE model trained with different datasets. Compared with training on only natural blueberry fruit images, training the YOLO-BLBE model with natural and synthetic blueberry fruit images improved the precision by 1.58%, the recall by 4.05%, the F1 score by 2.78%, and the AP by 0.32%. This directly demonstrates that training the YOLO-BLBE model with natural and synthetic blueberry fruit images improves model performance and indirectly demonstrates the effectiveness and rationality of the I-MSRCR algorithm in enhancing the color features of blueberry fruits.
Figure 12 shows the identification results of the YOLO-BLBE model trained with different datasets at different shooting distances. The confidence of all identification results was greater than 0.81, indicating no significant difference in the identification of blueberry fruits across shooting distances and implying that the proposed YOLO-BLBE model had stable identification performance at different shooting distances. At the near shooting distance shown in Figure 12a,c, the identification confidence of the YOLO-BLBE model trained with natural and synthetic images was higher than that of the model trained with only natural images. For example, the identification confidences of blueberry fruits A and B were 0.98 and 0.99 in Figure 12c, higher than the confidences of 0.95 and 0.96 for the same fruits in Figure 12a. The same trend can be seen in Figure 12b,d: the identification confidences of blueberry fruits C, D, and E were 0.93, 0.89, and 0.89 in Figure 12d, higher than the confidences of 0.89, 0.81, and 0.83 in Figure 12b. The blueberry fruits A to E in Figure 12 had different maturities and degrees of occlusion, and the comparison shows that training with natural and synthetic images improved the YOLO-BLBE model’s performance in identifying blueberry fruits at different maturity and occlusion levels. It also indirectly verified the effectiveness of the I-MSRCR algorithm in enhancing the surface color features of blueberry fruits.

3.3. Performance Comparison among Various Network Models

In the third set of experiments, some deep learning network models were trained with the same dataset including natural and synthetic blueberry images, and their identification performance was compared. The changing process of mAP curves during different model trainings is shown in Figure 13.
As shown in Figure 13, the mAP curve of the YOLO-BLBE model stabilized after about 150 epochs. Compared with the other models, the mAP curve of the YOLO-BLBE model was smooth, converged quickly, and reached a higher final value, indicating that the YOLO-BLBE model structure matched the training dataset well, that the model was easy to train, and that it had excellent network performance.
Close-up single cluster blueberry images and long-range multiple cluster blueberry images were used to test the identification performance of different deep learning models, and the identification results are presented in Figure 14.
For the close-up single-cluster blueberry images, the YOLO-BLBE model showed higher identification confidence for blueberry fruits with different maturities than the other models. For the long-range multiple-cluster blueberry images, identification was difficult because of the color similarity between immature blueberry fruits and the leaves. The YOLO-BLBE model nevertheless showed higher confidence for immature fruits, which was believed to be due to the BIFPN structure extracting features of immature fruits in both directions.
The comparison of model size and identification speed for different deep learning models is shown in Figure 15, and the performance parameter comparison of different models is shown in Table 4.
As shown in Figure 15, although the size of the YOLO-BLBE model was similar to those of the YOLOv5s and YOLOv8s models, its identification speed was much faster. The size and identification speed of the YOLO-BLBE model were far superior to those of the YOLOv4-tiny, YOLOv7, and Faster RCNN models. These results indicate that the structure and performance of the proposed YOLO-BLBE model were excellent and imply that using the GhostNet model as the backbone network reduced the computational complexity of the model and optimized its structure. The results reported by Cao et al. [29] corroborate these findings.
From Table 4, the mAP value of the YOLO-BLBE model was 98.14%, significantly higher than the 92.54% of the YOLOv5s model, the 92.33% of the YOLOv4-tiny model, and the 91.15% of the Faster RCNN model. The mAP value of the proposed YOLO-BLBE model was also higher than the 95.46% of the YOLOv7 model and the 97.23% of the YOLOv8s model, the latest version of the YOLO series tested here. The P value and R value of the YOLO-BLBE model were 93.72% and 97.56%, respectively, both superior to those of the other models, indicating that the YOLO-BLBE model could comprehensively and correctly identify blueberry fruits with different maturities in natural environments with high accuracy. In addition, the F1 score of the YOLO-BLBE model was higher by 6.02 percentage points than that of YOLOv5s, 4.43 points than that of YOLOv4-tiny, 9.11 points than that of Faster RCNN, 2.08 points than that of YOLOv7, and 2.35 points than that of YOLOv8s. This indicates that the YOLO-BLBE model achieved an excellent balance between precision and recall and had good comprehensive identification performance for blueberry fruits with different maturities.

3.4. YOLO-BLBE Identification Performance in Natural Environments

In the fourth set of experiments, the performance of the YOLO-BLBE model was evaluated for identifying blueberry fruits with different maturities in natural environments. Over-exposure images, backlight images, close-range single-cluster images, remote multi-cluster images, and images of densely packed, severely occluded fruits were used to test the performance of the YOLO-BLBE model. The identification results are shown in Figure 16.
In the over-exposure and backlight environments, the YOLO-BLBE model was able to identify all blueberry fruits with different maturities, indicating that the model adapted to different lighting conditions when combined with the proposed I-MSRCR algorithm. The reason is that the I-MSRCR algorithm weakened the influence of illumination and retained most surface color features of the blueberry fruits during the color restoration process, which improved the identification performance of the YOLO-BLBE model. A similar result, in which identification accuracy was improved by reasonably enhancing the fruit surface color, was reported in a published article [30]. The YOLO-BLBE model also produced accurate identifications for close-range single-cluster images, remote multi-cluster images, and images of densely packed, severely occluded fruits, reflecting its ability to identify occluded and small fruits. The proposed Alpha-EIOU loss function has the advantage of combining three IOU-related loss terms to determine whether a prediction box is retained, which improved the model’s ability to avoid missed detections of small, severely occluded fruits and implies that the proposed model has a sound architecture. A similar result, in which improving the loss function enhanced the identification of severely occluded target fruits, was reported in [31].

4. Conclusions

This study proposed an I-MSRCR algorithm to correct the color recovery factor ratio of the traditional MSRCR algorithm to enhance the color of blueberry images. The YOLO-BLBE model was constructed by improving the YOLOv5s model architecture. The identification performance of the YOLO-BLBE model for the blueberry fruits with different maturities was improved by training with natural and synthetic blueberry images. The main conclusions are as follows:
  • By adjusting the proportion of R, G, and B color channels in the MSRCR algorithm to 35%, 30%, and 35%, respectively, the proposed I-MSRCR algorithm was able to effectively amplify the color differences between blueberry fruits with different maturities.
  • A novel YOLO-BLBE model was constructed by replacing the backbone network of the YOLOv5s model with the combination of the GhostNet model and CA block, as well as applying the BIFPN structure and Alpha-EIOU loss function to the neck and prediction networks of the YOLOv5s model.
  • By training with the dataset including natural and synthetic blueberry images, the YOLO-BLBE model achieved an average identification accuracy of 99.58% for mature blueberry fruits, 96.77% for semi-mature blueberry fruits, and 98.07% for immature blueberry fruits. The overall performance metrics of the model were an mAP of 98.14%, a precision of 93.72%, a recall of 97.56%, and an F1 score of 95.60%. The model had an average detection speed of 0.009 s per image and a size of 12.75 MB.
  • Compared with the YOLOv5s model, the proposed YOLO-BLBE model reduced the size of the model by 9.7%; accelerated the average detection speed by 0.005 s; and improved precision by 8.52%, recall by 3.12%, F1 score by 6.02%, and mAP by 5.6% in terms of identification performance of blueberry fruits with different maturities.
The proposed method can identify blueberry fruits with different maturities in natural environments effectively and accurately. In the future, identification of semi-mature blueberry fruits at different growth stages will continue to be pursued.

Author Contributions

Conceptualization, C.W., Q.H. and C.L.; methodology, C.W. and J.L.; investigation, Q.H. and C.L.; resources, C.W. and X.Z.; writing—original draft preparation, Q.H.; writing—review and editing, C.W. and Q.H.; project administration, J.L.; funding acquisition, C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by grants from the National Natural Science Foundation of China (52005069), the Guangdong Basic and Applied Basic Research Foundation (2022A1515140162), the Yunnan Major Science and Technology Special Plan (202302AE090024) and the Yunnan Fundamental Research Projects (202101AT070113).

Data Availability Statement

The program in this study can be obtained at the following address: https://github.com/abysswatcher-hqy/YOLO-BLBE.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Giongo, L.; Poncetta, P.; Loretti, P.; Costa, F. Texture profiling of blueberries (Vaccinium spp.) during fruit development, ripening and storage. Postharvest Biol. Technol. 2013, 76, 34–39.
  2. Soto-Caro, A.; Wu, F.; Xia, T.; Guan, Z. Demand analysis with structural changes: Model and application to the US blueberry market. Agribusiness 2023, 39, 1100–1116.
  3. Cheng, J.; He, L.; Sun, H.; Pan, Y.; Ma, J. Inhibition of cell wall pectin metabolism by plasma activated water (PAW) to maintain firmness and quality of postharvest blueberry. Plant Physiol. Biochem. 2023, 201, 107803.
  4. Ni, X.; Li, C.; Jiang, H.; Fumiomi, T. Three-dimensional photogrammetry with deep learning instance segmentation to extract berry fruit harvestability traits. ISPRS J. Photogramm. Remote Sens. 2021, 171, 297–309.
  5. Meng, F.; Li, J.; Zhang, Y.; Qi, S.; Tang, Y. Transforming unmanned pineapple picking with spatio-temporal convolutional neural networks. Comput. Electron. Agric. 2023, 214, 108298.
  6. Chen, M.; Chen, Z.; Luo, L.; Tang, Y.; Cheng, J.; Wei, H.; Wang, J. Dynamic visual servo control methods for continuous operation of a fruit harvesting robot working throughout an orchard. Comput. Electron. Agric. 2024, 219, 108774.
  7. Tang, Y.; Qi, S.; Zhu, L.; Zhuo, X.; Zhang, Y.; Meng, F. Obstacle Avoidance Motion in Mobile Robotics. J. Syst. Simul. 2024, 36, 1.
  8. Tan, K.; Lee, W.S.; Gan, H.; Wang, S. Recognising blueberry fruit of different maturity using histogram oriented gradients and colour features in outdoor scenes. Biosyst. Eng. 2018, 176, 59–72.
  9. Routray, W.; Orsat, V. Variation of phenolic profile and antioxidant activity of North American highbush blueberry leaves with variation of time of harvest and cultivar. Ind. Crop. Prod. 2014, 62, 147–155.
  10. Little, C.; Chapman, T.; Hillier, N. Effect of Color and Contrast of Highbush Blueberries to Host-Finding Behavior by Drosophila suzukii (Diptera: Drosophilidae). Environ. Entomol. 2018, 47, 1242–1251.
  11. Zhu, X.; Ma, H.; Ji, J.; Jin, X.; Zhao, K.; Zhang, K. Detecting and identifying blueberry canopy fruits based on Faster R-CNN. J. South Argic. 2020, 51, 1493–1501.
  12. Wang, L.; Qin, M.; Lei, J.; Wang, X.; Tan, K. Blueberry maturity recognition method based on improved YOLOv4-Tiny. Trans. Chin. Soc. Agric. Eng. 2021, 37, 170–178.
  13. Barnea, E.; Mairon, R.; Ben-shahar, O. Colour-agnostic shape-based 3D fruit detection for crop harvesting robots. Biosyst. Eng. 2016, 146, 57–70.
  14. Tao, H.; Zhao, L.; Xi, J.; Yu, L.; Wang, T. Fruits and vegetables recognition based on color and texture features. Trans. Chin. Soc. Agric. Eng. 2014, 30, 305–311.
  15. Farjon, G.; Krikeb, O.; Hillel, A.; Alchanatis, V. Detection and counting of flowers on apple trees for better chemical thinning decisions. Precis. Agric. 2020, 21, 503–521.
  16. Li, H.; Lee, W.S.; Wang, K. Identifying blueberry fruit of different growth stages using natural outdoor color images. Comput. Electron. Agric. 2014, 106, 91–101.
  17. Zhu, J.; Lei, J.; Zhai, D.; Huang, Z. Spherical fruit automatic recognition method based on grey relational analysis and fuzzy membership degree matching. Chin. J. Sci. Instrum. 2012, 33, 1826–1836.
  18. Khoje, S. Appearance and characterization of fruit image textures for quality sorting using wavelet transform and genetic algorithms. J. Texture Stud. 2018, 49, 65–83.
  19. Maldonado, W., Jr.; Barbosa, J.C. Automatic green fruit counting in orange trees using digital images. Comput. Electron. Agric. 2016, 127, 572–581.
  20. Alresheedi, K.; Aladhadh, S.; Khan, R.; Qamar, A. Dates Fruit Recognition: From Classical Fusion to Deep Learning. Comput. Syst. Sci. Eng. 2022, 40, 151–166.
  21. Wang, X.; Zubko, V.; Onychko, V.; Wu, Z.; Zhao, M. Online recognition and yield estimation of tomato in plant factory based on YOLOv3. Sci. Rep. 2022, 12, 8686.
  22. Suo, R.; Gao, F.; Zhou, Z.; Fu, L.; Song, Z.; Dhupia, J.; Li, R.; Cui, Y. Improved multi-classes kiwifruit detection in orchard to avoid collisions during robotic picking. Comput. Electron. Agric. 2021, 182, 106052.
  23. Lawal, O. YOLOMuskmelon: Quest for fruit detection speed and accuracy using deep learning. IEEE Access 2021, 9, 15221–15227.
  24. Li, R.; Ji, Z.; Hu, S.; Huang, X.; Yang, J.; Li, W. Tomato Maturity Recognition Model Based on Improved YOLOv5 in Greenhouse. Agronomy 2023, 13, 603.
  25. Yang, H.; Liu, Y.; Wang, S.; Qu, H.; Li, N. Improved Apple Fruit Target Recognition Method Based on YOLOv7 Model. Agriculture 2023, 13, 1278.
  26. Wang, C.; Li, C.; Han, Q.; Wu, F.; Zou, X. A Performance Analysis of a Litchi Picking Robot System for Actively Removing Obstructions, Using an Artificial Intelligence Algorithm. Agronomy 2023, 13, 2795.
  27. Rahman, Z.; Jobson, D.; Woodell, G. A multiscale retinex for color rendition and dynamic range compression. Int. Soc. Opt. Eng. 1996, 2847, 183–191.
  28. Ropelewska, E.; Rutkowski, K. The Classification of Peaches at Different Ripening Stages Using Machine Learning Models Based on Texture Parameters of Flesh Images. Agriculture 2023, 13, 498.
  29. Cao, M.; Fu, H.; Zhu, J.; Cai, C. Lightweight tea bud recognition network integrating GhostNet and YOLOv5. Math. Biosci. Eng. 2022, 19, 12897–12914.
  30. Sun, G.; Li, Y.; Wang, X.; Hu, G.; Wang, X.; Zhang, Y. Image segmentation algorithm for greenhouse cucumber canopy under various natural lighting conditions. Int. J. Agric. Biol. Eng. 2016, 9, 130–138.
  31. Wang, M.; Fu, B.; Fan, J.; Wang, Y.; Zhang, L.; Xia, C. Sweet potato leaf detection in a natural scene based on faster R-CNN with a visual attention mechanism and DIoU-NMS. Ecol. Inform. 2023, 73, 101931.
Figure 1. The construction process of the synthetic image.
Figure 2. Representative images in the dataset. (ad) Slight occlusion, serious occlusion, backlight, and overexposure in the natural image dataset, respectively; (e,f) slight occlusion and serious occlusion in the synthetic image dataset, respectively; (g) sky; (h) land.
Figure 3. Gradient color bands and RGB color curves of blueberry fruits at different stages: (a) gradient color bands; (b) RGB color curves corresponding to different color bands.
Figure 4. Color enhancement process of blueberry images based on the I-MSRCR algorithm.
Figure 5. The structure and processing flow of GhostNet.
Figure 6. The structure and processing flow of the CA block.
Figure 7. The structure and processing flow of BiFPN.
Figure 8. The structure and processing flow of YOLO-BLBE.
Figure 9. Distribution of pixel color values of the original blueberry fruits and color enhancement blueberry fruits under different color channel ratios.
Figure 10. AP curves of the YOLO-BLBE model trained with different datasets. (a) AP curves obtained by training with natural and synthetic images; (b) AP curves obtained by training with only natural images.
Figure 11. Loss curves of the YOLO-BLBE model trained with different datasets. (a,c) train_loss and val_loss of the YOLO-BLBE model trained with natural images; (b,d) train_loss and val_loss of the YOLO-BLBE model trained with natural and synthetic images.
Figure 12. Maturity identification results and comparisons of the YOLO-BLBE model trained with different datasets at different shooting distances. (a,b) Identification results of the YOLO-BLBE model trained with natural images at near and far shooting distances, respectively; (c,d) identification results of the YOLO-BLBE model trained with natural and synthetic images at near and far shooting distances, respectively. A, B, C, D, and E represent the blueberry fruits involved in the discussion.
Figure 13. The changing process of mAP curves during the training of different models.
Figure 14. The identification results of different deep learning models. (a) YOLO-BLBE-model-based identification results; (b) YOLOv8s-model-based identification results; (c) YOLOv7-model-based identification results; (d) YOLOv5s-model-based identification results; (e) YOLOv4-tiny-model-based identification results; (f) Faster-RCNN-model-based identification results.
Figure 15. The comparison of model size and identification speed for different deep learning models.
Figure 16. Identification results of the YOLO-BLBE model for blueberry fruits with different maturities in natural environments.
Table 1. Specific composition of the dataset.

Blueberry Images     Image Type                            Number of Pictures
Natural images       slight occlusion                      263
                     serious occlusion                     241
                     backlight images                      182
                     overexposure images                   168
                     background images without targets     140
Synthetic images     slight occlusion                      246
                     serious occlusion                     212
Table 2. Color difference values and color channel contributions.

          X = 10                          X = 30                         X = 50                        X = 70                        X = 90
X = 10    -                               -                              -                             -                             -
X = 30    370.08 (33.2%, 33.4%, 33.4%)    -                              -                             -                             -
X = 50    310.27 (46.3%, 46.3%, 7.4%)     179.11 (2.7%, 2%, 95.3%)       -                             -                             -
X = 70    253.45 (33.2%, 58%, 8.8%)       201.30 (32.2%, 1.5%, 66.3%)    93.02 (97.9%, 0%, 2.1%)       -                             -
X = 90    308.41 (7.8%, 46.4%, 45.8%)     250.03 (98%, 1.5%, 0.5%)       312.13 (58.6%, 0%, 41.4%)     243.66 (47.1%, 0%, 52.9%)     -

Note: The values in parentheses give the contributions of R, G, and B in sequence, and X represents the position on the gradient color band.
Table 3. Model performance parameters comparison based on different training datasets.

Model        Training Datasets        P/%      R/%      F1/%     AP/%
YOLO-BLBE    natural and synthetic    93.72    97.56    95.60    98.14
             natural                  92.14    93.51    92.86    97.82
Table 4. Comparison of model parameters between different models.

Model          Model Size (MB)    Detection Speed (s/per Image)    P/%      R/%      F1 Score/%    mAP/%
YOLO-BLBE      12.75              0.009                            93.72    97.56    95.60         98.14
YOLOv7         72.10              0.015                            92.86    94.18    93.52         95.46
YOLOv8s        11.20              0.024                            93.52    92.99    93.25         97.23
YOLOv5s        13.73              0.014                            85.20    94.44    89.58         92.54
YOLOv4-tiny    23.15              0.011                            88.76    93.71    91.17         92.33
Faster RCNN    116.30             0.028                            80.11    93.98    86.49         91.15