Article

A Machine Vision Method for Detecting Pineapple Fruit Mechanical Damage

1 College of Engineering, South China Agricultural University, Guangzhou 510642, China
2 College of Agriculture, South China Agricultural University, Guangzhou 510642, China
* Author to whom correspondence should be addressed.
Agriculture 2025, 15(10), 1063; https://doi.org/10.3390/agriculture15101063
Submission received: 28 March 2025 / Revised: 7 May 2025 / Accepted: 9 May 2025 / Published: 15 May 2025
(This article belongs to the Section Digital Agriculture)

Abstract

In the mechanical harvesting process, pineapple fruits are prone to damage. Traditional detection methods struggle to quantitatively assess pineapple damage and often operate at slow speeds. To address these challenges, this paper proposes a pineapple mechanical damage detection method based on machine vision, which segments the damaged region and calculates its area using multiple image processing algorithms. First, both color and depth images of the damaged pineapple are captured using a RealSense depth camera, and their pixel information is aligned. Subsequently, preprocessing techniques such as grayscale conversion, contrast enhancement, and Gaussian denoising are applied to the color images to generate grayscale images with prominent damage features. Next, an image segmentation method that combines thresholding, edge detection, and morphological processing is employed to process the images and output the damage contour images with smoother boundaries. After contour-filling and isolation of the smaller connected regions, a binary image of the damaged area is generated. Finally, a calibration object with a known surface area is used to derive both the depth values and pixel area. By integrating the depth information with the pixel area of the binary image, the damaged area of the pineapple is calculated. The damage detection system was implemented in MATLAB, and the experimental results showed that compared with the actual measured damaged area, the proposed method achieved an average error of 5.67% and an area calculation accuracy of 94.33%, even under the conditions of minimal skin color differences and low image resolution. Compared to traditional manual detection, this approach increases detection speed by over 30 times.

1. Introduction

Pineapple is one of the four major tropical fruits—alongside banana, mango, and coconut—and is widely favored by consumers. It is also a distinctive and competitive fruit in southern China, playing a significant role in regional economic development and agricultural advancement [1,2]. With annual increases in production and labor costs, mechanical harvesting has become a viable solution for reducing harvesting expenses [3].
Recently, mechanical harvesting has been actively explored by many researchers. For example, Liu et al. [4,5,6] proposed various pineapple harvesting mechanisms, including a multi-flexible fingered roller, a lever-feeding-type roller, and a chain-feeding roller combined with a flexible rod-breaking mechanism, achieving harvesting success rates of 85%, 84%, and 82%, respectively. Their approaches significantly improved harvesting efficiency compared to manual picking. Cross Agricultural Engineering designed a tracked, fully automated pineapple harvester that used a tine-chain rake to take in pineapple fruits, stems, and leaves [7]. Kurbah et al. developed a gripper-cutting-type pineapple harvesting mechanism [8].
Although these methods aimed to minimize fruit and plant damage while maximizing fruit removal, the pineapple is a delicate fruit and is still inevitably injured by mechanical harvesters [9]. For instance, the mechanical damage rate can exceed 20% [4,5,6]. Detecting and accurately assessing such damage is therefore critical for improving pineapple harvester design and performance. Due to the irregular shapes of mechanical damage, manual measurements using tools such as calipers often lead to substantial errors, sometimes reaching up to 50%.
Quantitatively assessing fruit damage remains challenging. To enhance both accuracy and speed in fruit damage detection, many researchers have proposed machine vision-based solutions. For instance, Zhang et al. [10] presented a nondestructive method using hyperspectral spectroscopy based on a stacking model to detect slight mechanical damage in apples, achieving a detection accuracy of 96.67%. Chiu et al. [11] proposed an automated approach that used fluorescence imagery to detect apple bruises. Dubey et al. [12] introduced a system for recognizing surface defects in apples based on local binary patterns, achieving a detection accuracy of 93.14%. Ahmed et al. [13] developed a mango fruit lesion segmentation method that outperformed threshold-based, edge-based, texture-based, and color-based segmentation techniques. Nadarajan et al. [14] also introduced a watershed algorithm-based method for detecting mango defects. Wang et al. [15] applied Fisher's linear discriminant analysis (LDA) to detect orange skin defects and achieved a detection accuracy of 96.70%. Slaughter et al. [16] employed machine vision with ultraviolet fluorescence to detect orange damage, achieving 87.9% accuracy. Fu et al. [17] used fluorescence hyperspectral imaging to detect early bruises in pears, achieving 93.33% accuracy 15 min after bruising had occurred. Okere et al. [18] used Vis-NIR and SWIR hyperspectral imaging to detect pomegranate bruises, achieving an 80–96.7% accuracy range in bruise severity classification.
Research on pineapple fruit detection, particularly damage detection, is still in its infancy. Nonetheless, some researchers have published related results. For example, Chen et al. [19] developed a golden diamond pineapple surface detection system based on CycleGAN and YOLOv4, achieving an average precision of 84.86%. Li et al. [20] proposed a pineapple thorn detection method based on feature recognition and spatial localization. By extracting the characteristics of pineapple thorns through image processing and performing coordinate transformations on surface contour features, their method effectively detected and localized thorns. However, it faced challenges when the color difference between the pineapple’s surface and its thorns was small. Other work in this field has primarily focused on fruit object detection or maturity evaluation [21,22,23,24], but studies specifically addressing pineapple surface damage detection remain scarce and preliminary.
Against this backdrop, the present study proposes a pineapple mechanical damage detection method based on machine vision. The research concentrates on image segmentation algorithms and damage area calculation, aiming to facilitate the accurate and efficient detection of mechanically damaged regions on pineapple fruit surfaces.

2. Materials and Methods

2.1. Structure of the Machine Vision System

An experimental machine vision system was designed and developed to detect mechanical damage in pineapple fruit. Figure 1 shows the overall structure of the system, which consisted of three main components: a camera, a Tianxuan 4 laptop (ASUS Computer Inc., Suzhou, China), and a supporting framework. As shown, the camera was mounted on an L-shaped bracket, which was fixed to a horizontal plate; this plate was secured to an aluminum platform, where the injured pineapple fruit was placed. In the figure, the distance between the camera and the top surface of the pineapple is denoted as D, and the distance between the camera and the platform surface (used to place the fruit) is denoted as H. The camera was oriented perpendicular to the platform to minimize depth-measurement errors associated with lens tilt. To avoid interference from extraneous colors, the platform's tabletop (which held the fruit) was made of white acrylic. The camera was connected to the computer via a USB interface for image-data transmission, and the laptop handled image processing and other computational tasks.

2.2. Pineapple Image Acquisition Equipment

The image acquisition device in this experiment was the Intel RealSense D435i depth camera, which comprised a pair of infrared cameras, an infrared laser projector, and an RGB camera. It could capture both color and depth images and output depth measurements in the range of 0.1–10 m.
The working principle of the D435i depth camera was based on triangulation. An infrared laser projector cast a pattern onto the scene, and the left and right infrared cameras captured the reflected infrared light, producing point-cloud data and corresponding grayscale images. By calculating the disparity between these two infrared images, the camera generated a final depth map. Meanwhile, the onboard RGB camera captured color images. Through data alignment, each RGB pixel corresponded to a specific depth value, resulting in a color image that included depth information.
The depth information acquisition was constrained by factors such as the minimum working distance, the field of view and accuracy of the depth sensor, and the density of the projected infrared pattern. Consequently, the distance between the camera and the object was required to meet the sensor's minimum working-distance requirement to ensure sufficient clearance for accurate depth capture. If the object was placed too close, the camera failed to record valid depth measurements, assigning a depth value of zero (and producing black regions in the software). Figure 2a,b shows examples of the depth images captured at valid and invalid distances, respectively, using Intel's RealSense Viewer software (Intel-RealSense-Calibration-Tool-2.14.2.0).

2.3. Pineapple Detection Workflow

The pineapple detection procedure began with image capture of the pineapple placed on the experimental platform using a D435i depth camera (Intel Corporation, Santa Clara, CA, USA). Next, image-data alignment was performed, and the damage region was extracted and binarized using image-processing algorithms. Depth values were subsequently obtained and, by combining this depth information with the pixel data, the actual surface damage area of the pineapple was calculated. Figure 3 provides an overview of this workflow, illustrating how the system transitioned from raw image capture to final damage-area computation.
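For readers reproducing this pipeline, the sketch below shows one way to capture and align a color/depth pair from MATLAB using Intel's librealsense MATLAB wrapper. This is a minimal sketch under assumptions: the class and method names follow the SDK's bundled MATLAB examples and may differ between SDK versions.

```matlab
% Minimal capture-and-align sketch (librealsense MATLAB wrapper assumed installed).
pipe = realsense.pipeline();
alignToColor = realsense.align(realsense.stream.color);  % align depth onto the RGB stream
pipe.start();
frames = pipe.wait_for_frames();
aligned = alignToColor.process(frames);                  % per-pixel depth-to-RGB alignment
depthFrame = aligned.get_depth_frame();
colorFrame = aligned.get_color_frame();
% Reshape the raw buffers into MATLAB image arrays.
w = depthFrame.get_width(); h = depthFrame.get_height();
depthImg = permute(reshape(depthFrame.get_data(), [w, h]), [2 1]);   % uint16 depth map
colorImg = permute(reshape(colorFrame.get_data(), ...
    [3, colorFrame.get_width(), colorFrame.get_height()]), [3 2 1]); % uint8 RGB image
pipe.stop();
```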

2.4. System Interface Design

MATLAB 2023b software (MathWorks, Inc., Natick, MA, USA) was chosen as the development platform due to its robust image-processing capabilities, providing a wide range of image-processing functions and algorithms and thereby shortening development time. It also enables visualization of image-processing results, facilitating quick data analysis and algorithm optimization. In addition, the official Intel RealSense SDK provides tools for calling the depth camera from MATLAB, enabling operations such as data-stream alignment, image acquisition, and data collection, which further reduces algorithm development time. Therefore, MATLAB was selected as the algorithm development platform, and a visualized experimental software interface was designed.
The software interface included five major modules: video loading, image cropping, region selection, image processing, and area measurement. During the experiments, the camera was first activated for real-time image capture, allowing the user to adjust the pineapple's position immediately. The image was then cropped and displayed on the interface. Because the full 1280 × 720 frame from the depth camera contained far more than the damaged area, the user selected a region of interest to isolate the damage region on the pineapple fruit. After region selection, image-processing algorithms generated a binary image of the damaged area, and finally, the system calculated and output the measured damage area. Figure 4 shows the MATLAB software interface designed to implement these functions.

2.5. Image Processing Algorithms

Image processing applies computer algorithms to modify pixel information and spatial structures within an image, yielding images with specific qualities and features [25]. The proposed algorithm first preprocessed the acquired pineapple damage images via methods such as grayscale conversion, intensity adjustment, and denoising. The aim was to produce grayscale images with minimal noise, sharp contours, and high contrast in the damaged regions. Lastly, an image segmentation algorithm was used to obtain the damage contours.

2.5.1. Grayscale Processing

Grayscale conversion transforms a color image into shades of gray ranging from 0 (black) to 255 (white), thereby enhancing processing speed and facilitating contrast adjustments to highlight important features [26]. Four standard approaches exist for converting RGB images to grayscale: (1) using a single channel; (2) taking the maximum of the R, G, and B channels; (3) taking their average; or (4) using a weighted average of the three channels. Here, the weighted-average method was adopted, as expressed in Equation (1):
gray = 0.2989·R + 0.5870·G + 0.1140·B.
Figure 5a,b shows the RGB image of a pineapple’s damaged area and its corresponding grayscale image.
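As a point of reference, MATLAB's rgb2gray uses exactly the weights of Equation (1); the short sketch below shows both the built-in call and the explicit weighted average (the image file name is illustrative).

```matlab
% Weighted-average grayscale conversion; rgb2gray applies the same coefficients.
I = imread('pineapple_roi.png');                      % hypothetical cropped ROI image
grayImg = rgb2gray(I);
% Explicit form of gray = 0.2989*R + 0.5870*G + 0.1140*B:
R = double(I(:,:,1)); G = double(I(:,:,2)); B = double(I(:,:,3));
grayManual = uint8(0.2989*R + 0.5870*G + 0.1140*B);   % matches grayImg up to rounding
```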

2.5.2. Contrast Enhancement

Since a pineapple’s damage area was identified using color differences, a contrast enhancement algorithm was applied to accentuate the grayscale difference between the damaged and undamaged pixels, thereby improving the recognition rate. A linear grayscale transformation was used to change the grayscale values via a linear function, as described in Equation (2):
S = k·R + b (0 ≤ S ≤ 255),
where R represents the original grayscale value, S represents the transformed grayscale value, k represents the contrast factor, and b represents the brightness factor. When k > 1, the grayscale range was stretched, resulting in higher contrast, and when 0 < k < 1, the contrast decreased. If k < 0, bright pixels became dark while dark pixels became bright. Adjusting b modified the overall brightness of the grayscale image. In this study, k was set to 1.25. Figure 5c shows the result of applying this linear grayscale transformation to Figure 5b.
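In MATLAB, the linear transformation of Equation (2) with the study's k = 1.25 reduces to a few lines; b = 0 is an assumed brightness offset, since the paper does not report its value.

```matlab
% Linear grayscale transformation S = k*R + b, clipped to the valid range [0, 255].
k = 1.25;   % contrast factor used in this study
b = 0;      % brightness offset (assumed)
S = uint8(min(max(k .* double(grayImg) + b, 0), 255));
```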

2.5.3. Image Denoising

The grayscale-transformed pineapple fruit damage image often contained noise, which diminished clarity and may have interfered with subsequent edge detection. Image denoising therefore enhanced clarity and prepared the image for contour extraction. Denoising is primarily achieved using filtering algorithms [27]. Here, Gaussian filtering was first used to remove noise, followed by Laplacian filtering to highlight image details and edges.
Gaussian filtering is a linear smoothing technique that applies a Gaussian kernel with circular symmetry, performing a weighted sum over each pixel's neighborhood. Each pixel's value is updated by the weighted average of its own value and those of adjacent pixels. To ensure a well-defined central point, the Gaussian kernel size is generally odd, and in this study a 3 × 3 kernel was selected for the filtering calculation. The Gaussian filtering operation is expressed in Equations (3) and (4):
g(x, y) = ∑_{s=−a}^{a} ∑_{t=−b}^{b} w(s, t) f(x + s, y + t), and
w(s, t) = G(s, t) = K e^{−(s² + t²)/(2σ²)},
where f represents the original image, w denotes the Gaussian filter kernel, g(x, y) is the filtered image, K is a constant, and σ is the standard deviation of the Gaussian distribution.
As the center of the Gaussian kernel moved across the image, a filtered image was produced. While Gaussian filtering effectively removed noise, it also blurred edges. Therefore, after the Gaussian filtering, Laplacian filtering was employed to sharpen the image and enhance details. The Laplacian algorithm computed the Laplacian of the image and added it back to the original image to produce the sharpened result. It can be expressed as follows:
∇²f = ∂²f/∂x² + ∂²f/∂y², and
G(x, y) = f(x, y) + c[∇²f(x, y)],
where ∇²f represents the Laplacian of the image, f(x, y) and G(x, y) denote the input and output images, respectively, and c is the coefficient corresponding to the second-order derivative of the Laplacian kernel.
For discrete images, the first-order derivatives were expressed as follows:
∂f(x, y)/∂x = f(x, y) − f(x − 1, y), and
∂f(x, y)/∂y = f(x, y) − f(x, y − 1).
Hence,
∂²f(x, y)/∂x² = f(x + 1, y) + f(x − 1, y) − 2f(x, y),
∂²f(x, y)/∂y² = f(x, y + 1) + f(x, y − 1) − 2f(x, y), and
∇²f = f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4f(x, y).
By applying the Laplacian operator, small details were amplified, and the images were sharpened. Figure 6a and Figure 6b show the outcomes of applying Gaussian and Laplacian filtering, respectively, to Figure 5c.
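A minimal MATLAB rendering of this two-step denoising is sketched below; the Gaussian sigma is an assumed value, as the paper reports only the 3 × 3 kernel size.

```matlab
% Gaussian smoothing (Equations (3)-(4)) followed by Laplacian sharpening (Equation (6)).
f = im2double(S);
h = fspecial('gaussian', [3 3], 0.5);        % 3x3 kernel; sigma = 0.5 assumed
smoothed = imfilter(f, h, 'replicate');
lapKernel = [0 1 0; 1 -4 1; 0 1 0];          % discrete Laplacian of Equation (11)
lap = imfilter(smoothed, lapKernel, 'replicate');
sharpened = smoothed - lap;                  % G = f + c*(Laplacian), c = -1 for a negative-center kernel
sharpened = min(max(sharpened, 0), 1);       % clip back to [0, 1]
```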

2.5.4. Image Segmentation

The commonly used image segmentation methods include thresholding, clustering-based segmentation, edge detection, and morphological segmentation [28]. Thresholding is widely used in fruit recognition and detection as an initial step that binarizes an image, thereby simplifying subsequent feature extraction [29]. Clustering-based methods group regions with similar grayscale or color values; however, their accuracy often diminishes under complex conditions. Morphological methods operate on a binary image using specific structural elements.
Since more than 30% of a mature pineapple’s surface is yellow and a damaged area often appears in a similar shade of yellow, their color differences can be minimal. Additionally, the damage boundary tends to be highly irregular, making it challenging to capture a complete and continuous contour using a single segmentation technique. Therefore, multiple segmentation methods were combined here to ensure that the extracted damage contours closely matched the actual pineapple damage region and maintained smooth edges.
1. Threshold Segmentation
The first step in segmenting a pineapple’s damaged region was to binarize the preprocessed grayscale image using Otsu’s method. This method determines a threshold by maximizing between-class variance and then segments an image to produce a binary result. The binary image corresponding to Figure 6b is shown in Figure 7.
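In MATLAB, Otsu's threshold and the resulting binary image are obtained as follows (a minimal sketch operating on the sharpened grayscale image from the previous step):

```matlab
% Otsu's method: graythresh maximizes the between-class variance.
level = graythresh(sharpened);
BW = imbinarize(sharpened, level);
```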
2. Morphological Segmentation
After thresholding, scattered noise appeared in the binary image, caused by factors such as the pineapple’s surface texture or light reflections. A morphological algorithm was used to remove these background noise points and smooth the damage contours by applying structural elements to the binary image through intersection and union operations.
An opening operation followed by a closing operation effectively reduced noise in the image backgrounds and yielded smoother damage boundaries [30,31]. This procedure can be expressed by Equation (12):
(A ∘ B) · B = [(A ∘ B) ⊕ B] ⊖ B,
where ∘ represents the opening operation and · represents the closing operation, which are shown in Equations (13) and (14), respectively:
A ∘ B = (A ⊖ B) ⊕ B, and
A · B = (A ⊕ B) ⊖ B,
where ⊖ represents the erosion operation and ⊕ represents the dilation operation, which are shown in Equations (15) and (16), respectively:
A ⊖ B = {o | (B)ₒ ⊆ A}, and
A ⊕ B = {o | (B̂)ₒ ∩ A ≠ ∅},
where A represents the set of foreground pixels (targets), B and B̂ are structural elements, o is a foreground pixel, A ⊖ B represents the erosion of A by B, and A ⊕ B represents the dilation of A by B.
To ensure an accurate damage-region output, the structural element's radius could not be too large. In this work, a circular structuring element with a radius of 5 pixels was chosen to effectively remove background noise while preserving the actual damage region. The resulting binary image corresponding to Figure 7 is shown in Figure 8.
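The opening-then-closing sequence of Equation (12) with the chosen disk element maps directly onto MATLAB's morphology functions:

```matlab
% Opening followed by closing with a disk-shaped structuring element of radius 5.
se = strel('disk', 5);
BWsmooth = imclose(imopen(BW, se), se);
```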
3. Edge Detection
The final step of the image-segmentation procedure was edge detection. The Sobel, Prewitt, LoG, and Canny operators are commonly used for this purpose. Figure 9 compares the edge images extracted by these operators using identical Gaussian standard-deviation parameters. The results indicated that the Canny operator produced clearer, smoother, and closed contours, demonstrating superior edge-recognition performance over the other three operators. Therefore, the Canny operator was selected for edge detection.
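A sketch of this operator comparison in MATLAB is shown below; the sigma value is illustrative, and the binary image is cast to double because edge expects an intensity image.

```matlab
% Edge extraction with the four candidate operators (same sigma where applicable).
G = double(BWsmooth);
edgeSobel   = edge(G, 'sobel');
edgePrewitt = edge(G, 'prewitt');
edgeLoG     = edge(G, 'log',   [], 2);   % sigma = 2 (illustrative)
edgeCanny   = edge(G, 'canny', [], 2);   % operator selected in this study
```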

2.5.5. Damage Area Filling

To generate a binary image containing only the damage regions, the edge contours obtained through the Canny edge-detection algorithm were filled. This process removed large-scale interference areas that were difficult to eliminate by denoising alone. The filling operation is shown in Equation (17):
Pₖ = (Pₖ₋₁ ⊕ C) ∩ Iᶜ,
where C represents the structuring element, Iᶜ represents the complement of the image, and P₀ is the initial all-zero seed array; when Pₖ = Pₖ₋₁, all contained holes in Pₖ have been filled. Both closed contours and smaller enclosed regions in the pineapple damage image were filled, whereas open regions remained as open-loop contours. The filling result for the binary image in Figure 9d is shown in Figure 10.
The binary image was composed of the pixel values 0 and 1, where 1 indicated a highlighted (filled) region. By summing all non-zero pixel values, the filled area could be computed relative to the total number of pixels. In this example, the filled region reached 66,233 pixels, with less than 20% of that area being erroneous fills. By removing filled regions above a pixel threshold of 12,000, a background-interference-free binary image was obtained, as shown in Figure 11. The rule for removing these large, incorrect fills, given in Equation (18), assigns each filled region either to the damage area or to the background:
region = { damage area, if the diameter of its minimum enclosing circle ≤ threshold; background, otherwise }.
A comparison of Figure 10 and Figure 11 reveals that smaller filled areas in the background and open-loop contours were removed, thereby producing a complete image of the damage region. Furthermore, a comparison of Figure 11 with Figure 5a demonstrates that after a series of image-processing steps, the final damage region closely matched the actual damage. A series of experimental results confirmed that the recognition accuracy exceeded 80%.
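Equation (18) classifies regions by the diameter of their minimum enclosing circle; the sketch below uses component pixel area as a simpler stand-in criterion, matching the 12,000-pixel threshold reported above.

```matlab
% Fill closed contours (Equation (17)), then discard oversized erroneous fills.
filled = imfill(edgeCanny, 'holes');
bigRegions = bwareaopen(filled, 12000);   % keeps only components of >= 12,000 px
damageBW = filled & ~bigRegions;          % remove them, leaving the damage region
pixelArea = nnz(damageBW);                % pixel area of the damaged region
```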

2.6. Damage Area Calculation

The pixel area of the pineapple's damaged region could be obtained by summing all non-zero pixel values in the binary image. However, this pixel area was not equal to the actual damage area and therefore needed to be calibrated. An object with a known actual area and recognizable contour features was used as the calibration object. The depth information was acquired from the depth camera, and a specific point on the calibration object, along with a specific point on the pineapple's damaged surface, was selected to obtain their respective depth values. The depth calculation formula is shown in Equation (19):
D = f·B/d,
where D represents the depth at a certain pixel, f represents the focal length of the camera, B represents the baseline distance between the two infrared cameras, and d represents the disparity; f and B are intrinsic parameters of the camera.
Since the depth image could be interpreted as a two-dimensional array where each pixel’s value corresponded to a depth measurement, the depth value of any pixel could be extracted based on its coordinates. The depth values Ds were then calculated using Equation (20):
Ds = N × D(X − 1,Y − 1),
where X = [x] and Y = [y] denote the rounded values of x and y, the pixel coordinates selected in the depth image, and N represents the scale parameter of the currently extracted depth frame.
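A hedged sketch of reading a single depth value from the aligned depth array (depthImg from the capture sketch above): the raw D435i units default to millimetres, so a scale of 0.001 m per unit is assumed here.

```matlab
% Depth lookup at user-selected coordinates (x, y). MATLAB arrays are 1-based,
% which is why Equation (20) carries the (X - 1, Y - 1) offset for 0-based frames.
depthScale = 0.001;                          % metres per raw depth unit (assumed default)
X = round(x); Y = round(y);
Ds = double(depthImg(Y, X)) * depthScale;    % depth in metres at the selected pixel
```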
After obtaining the depth values of both the calibration object and the pineapple’s damaged area, the same image-processing steps were performed on the calibration object to determine its pixel area in the resulting binary image. A proportional relationship existed between the binary image’s pixel area and the real damaged area. This relationship is defined in Equation (21):
k1 = A1real/A1image,
where A1real represents the actual area of the calibration object and A1image represents its pixel area in the binary image.
Because the depth values of both the pineapple’s damaged surface and the calibration object were known, the pineapple’s actual damaged area could be calculated using Equation (22):
A0real = k1·A0image·(D0/D1)²,
where A0real denotes the pineapple’s actual damaged area, A0image represents the pixel area of the damaged region in the pineapple’s binary image, and D0 and D1 are the depths of the pineapple’s damaged surface and the calibration object, respectively.
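Putting Equations (21) and (22) together with the spacer values from Section 2.7 gives the following worked sketch; D0 is taken from Table 1, while D1 is an assumed calibration depth.

```matlab
% Calibration ratio (Equation (21)) and depth-compensated damage area (Equation (22)).
A1real  = 855.3;                    % spacer surface area, mm^2 (Section 2.7)
A1image = 10542;                    % spacer pixel area from its binary image
k1 = A1real / A1image;              % mm^2 per pixel at the calibration depth D1
D0 = 0.174;                         % depth of the damaged surface, m (sample 1, Table 1)
D1 = 0.270;                         % calibration-object depth, m (assumed)
A0image = pixelArea;                % damaged-region pixel count from the filling step
A0real = k1 * A0image * (D0 / D1)^2 % actual damaged area, mm^2
```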

2.7. Camera Calibration

Camera calibration primarily involves comparing the area computed via the image-processing method with the true area of a known reference object, thereby evaluating the deviation. In this study, a spacer with a 33 mm diameter was selected as the reference object, providing a surface area of 855.3 mm². Image processing yielded a binary-image area of 10,542 pixels, and the final calculation accuracy reached 99% when comparing the measured area to the spacer's actual area.

2.8. Materials

The test samples (Figure 12a) were mechanically damaged pineapples harvested on 10 January 2024 using a pineapple harvester (Figure 12b) in Shenwan Town, Zhongshan, Guangdong, China. All tests were conducted at the laboratory of the South China Agricultural University in Guangzhou, China. Immediately after harvesting, the specimens were wrapped in plastic and transported to the laboratory, and each sample was tested within 24 h of harvesting. Ten pineapples with varying degrees of damage and different maturities were randomly selected for the experiments. The maximum horizontal diameter among these samples was 97 mm.
Because the distance between the camera and the object could not be less than the depth sensor's minimum working distance, sufficient clearance had to be maintained to avoid losing depth information, which would adversely affect infrared imaging. Consequently, the distance H between the testing platform and the camera was fixed at 270 mm.

3. Results and Discussion

3.1. Results

A case study of the interface operation for pineapple fruit mechanical damage detection is shown in Figure 13. The depth value, detected damage area, actual damage area, deviation, and error rates are listed in Table 1. The mechanical damage and corresponding segmentation results for two samples are illustrated in Figure 14a,b. The errors for all 10 samples were less than 10%, with an average error of 5.67%. Accordingly, the mean detection accuracy reached 94.33%. The detection time for each sample was less than 1 s. Compared to manual detection, which takes approximately 30 s per sample, the detection speed was increased by over 30 times.
Figure 13 demonstrates the mechanical damage detection under two lighting conditions—structured light and an LED lamp supplemental light—and the results showed that the method could accurately identify pineapple damage under both lighting conditions. Even when the captured images were relatively dark and the resolution of the cropped damaged area images was under 30 × 30 pixels, the damage region could still be clearly segmented. The shape of the segmented damage area closely resembled the actual damage region, accurately reflecting the pineapple’s damage contours. These results indicated that the proposed method could provide accurate data for the quantitative assessment of mechanical damage in pineapples.

3.2. Discussion

Numerous past studies have addressed fruit damage assessment using image processing, machine learning, deep learning, and non-destructive techniques. For example, Bakar et al. [32] put forward a method for assessing the quality of Harumanis mango based on color feature extraction, achieving an average recognition rate of approximately 90.4%. Velez Rivera et al. [33] put forward a system for evaluating mechanically induced damage in the pericarp of ‘Manila’ mangos at different stages of ripeness based on the analysis of hyperspectral images, achieving an average recognition rate of approximately 97.9%. Hadipour-Rokni et al. [34] used four pre-trained CNN models, namely, ResNet-50, GoogleNet, VGG-16, and AlexNet, together with the SGDm, RMSProp, and Adam optimization algorithms, to identify and classify healthy fruit and fruit infected with the Mediterranean fruit fly, achieving accuracies of 98.33%, 98.36%, 99.33%, and 99.34%, respectively. However, deep-learning models require significant computational resources and time-consuming training procedures, and hyperspectral detection is more often applied to fruits' internal defects, with hyperspectral cameras being far more expensive than RGB cameras. In contrast to these techniques, our method detects pineapple mechanical damage quantitatively.
Although the approach has advantages in cost and computational resources, limitations still exist. For example, the system could only detect damage on the upward-facing side of a pineapple; detecting all surface damage at once would require at least four cameras. Alternatively, a pineapple could be rotated to photograph damage on all sides, and the total damage area could then be accumulated. The method was developed mainly for pineapple mechanical damage detection and would be limited in recognizing damage types such as black rot and fruit collapse due to Chrysosporium.
Since the skin color of the Shenwan pineapple used in the experiments is golden yellow, similar to that of common pineapple cultivars, the method can also be applied to the detection of mechanical damage in other pineapple varieties.

4. Conclusions

A machine vision detection method for pineapple fruit mechanical damage was developed to address the low accuracy and slow speed associated with manual detection. Images were captured using a depth camera, and an image-preprocessing algorithm was proposed to obtain the binary damage-contour image. Additionally, an algorithm for computing the area of mechanical damage was implemented.
Verification experiments showed that this method achieved a high detection success rate under both low-resolution and normal-resolution conditions. The average error between the estimated and actual damaged areas was 5.67%, demonstrating robust performance. Further detection tests, based on images generated by the proposed image-processing algorithm, showed that the area-calculation accuracy for the 10 samples reached 94.33%, indicating that the area-calculation algorithm exhibited high precision. Therefore, the method presented in this paper can accurately detect the area of pineapple mechanical damage and generate a corresponding damage-contour image. Compared to traditional manual detection, this approach increases detection speed by over 30 times. Consequently, it can provide accurate data for quantitatively assessing pineapple mechanical damage.

Author Contributions

Conceptualization, J.L. and B.M.; methodology, J.L. and B.M.; software, J.L. and B.M.; validation, J.L. and B.M.; formal analysis, J.L. and B.M.; investigation, Z.L. (Zhaozheng Liang) and Z.L. (Zicheng Liu); resources, J.L. and B.M.; data curation, J.L., T.L., and B.M.; writing—original draft preparation, J.L.; writing—review and editing, T.L. and Z.L. (Zicheng Liu); visualization, T.L. and S.L.; supervision, T.L., J.L. and B.M.; project administration and funding acquisition, T.L. and B.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant no. 52175229).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article. Further inquiries can be addressed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, C.; Liu, Y. Current status of pineapple production and research in China. Guangdong Agric. Sci. 2010, 37, 65–68, (In Chinese with English Abstract). [Google Scholar]
  2. Liu, Q.; Zhou, S.; Deng, G.; Li, Q.; Huang, Q.; Li, G. The Current Situation and Countermeasures of Industrial Development in The Main Pineapple Producing Areas of China. Trans. Mod. Agric. Equip. (Trans. MAE) 2024, 45, 11–15+20, (In Chinese with English Abstract). [Google Scholar]
  3. Fang, W. Present situation and development suggestion of pineapple industry in Guangdong Province. China Fruits 2023, 06, 123–126, (In Chinese with English Abstract). [Google Scholar]
  4. Liu, T.; Liu, W.; Zeng, T.; Qi, L.; Zhao, W.; Cheng, Y.; Zhang, D. Working principle and design of the multi-flexible fingered roller pineapple harvesting mechanism. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2022, 38, 21–26, (In Chinese with English Abstract). [Google Scholar]
  5. Liu, T.; Cheng, Y.; Li, J.; Chen, S.; Lai, J.; Liu, Y.; Qi, L.; Yang, X. Feeding-type harvesting mechanism with the rotational lever for pineapple fruit. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2023, 39, 27–38, (In Chinese with English Abstract). [Google Scholar]
  6. Liu, T.; Mai, B.; Zhang, J.; Liu, S.; Chen, J.; Sun, W. Design and experiment of a chain feeding combined with roller and flexible rod breaking pineapple picking mechanism. Trans. Chin. Soc. Agric. Mach. 2024, 55, 116–125. [Google Scholar] [CrossRef]
  7. Pineapple Harvesting Machine. [EB/OL]. 1 January 2021. Available online: https://www.youtube.com/user/CrossAgriEngineering/videos (accessed on 5 December 2024).
  8. Kurbah, F.; Marwein, S.; Marngar, T.; Sarkar, B.K. Design and development of the pineapple harvesting robotic gripper. Commun. Control Robot. Syst. Smart Innov. Syst. Technol. 2022, 229, 437–454. [Google Scholar]
  9. Chen, Z.; Wang, H.; Li, H.; Yu, Z.; Wang, C. Research Progress and Development Trend of Pineapple Mechanized Picking Technology and Equipment. J. Agric. Mech. Res. (Trans. JAMR) 2024, 1–8, (In Chinese with English Abstract). [Google Scholar] [CrossRef]
  10. Zhang, Y.; Li, Y.; Song, Y. Nondestructive Detection of Slight Mechanical Damage of Apple by Hyperspectral Spectroscopy Based on Stacking Model. Spectrosc. Spectr. Anal. 2023, 43, 2272–2277. [Google Scholar]
  11. Chiu, Y.C.; Chou, X.L.; Grift, T.E.; Chen, M.T. Automated detection of mechanically induced bruise areas in golden delicious apples using fluorescence imagery. Trans. Asabe 2015, 58, 215–225. [Google Scholar]
  12. Dubey, S.R.; Jalal, A.S. Detection and classification of apple fruit diseases using complete local binary patterns. In Proceedings of the 2012 Third International Conference on Computer and Communication Technology, Allahabad, India, 23–25 November 2012; pp. 346–351. [Google Scholar]
  13. Ahmed, M.; Raghavendra, A.; Rao, D.M. An image segmentation comparison approach for lesion detection and area calculation in mangoes. Int. Res. J. Eng. Technol. (IRJET) 2015, 2, 190–196. [Google Scholar]
  14. Nadarajan, A.S.; Thamizharasi, A. Detection of bacterial canker disease in mango using image processing. IOSR J. Comput. Eng. (IOSR-JCE) 2017, 19, 901–908. [Google Scholar] [CrossRef]
  15. Wang, L.; Li, A.; Tian, X. Detection of fruit skin defects using machine vision system. In Proceedings of the 2013 Sixth International Conference on Business Intelligence and Financial Engineering, Hangzhou, China, 14–16 November 2013; pp. 44–48. [Google Scholar]
  16. Slaughter, D.C.; Obenland, D.M.; Thompson, J.F.; Arpaia, M.L.; Margosan, D.A. Non-destructive freeze damage detection in oranges using machine vision and ultraviolet fluorescence. Postharvest Biol. Technol. 2008, 48, 341–346. [Google Scholar] [CrossRef]
  17. Fu, X.; Wang, M. Detection of early bruises on pears using fluorescence hyperspectral imaging technique. Food Anal. Methods 2022, 15, 115–123. [Google Scholar] [CrossRef]
  18. Okere, E.E.; Ambaw, A.; Perold, W.J.; Opara, U.L. Vis-NIR and SWIR hyperspectral imaging method to detect bruises in pomegranate fruit. Front Plant Sci. 2023, 14, 1151697. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  19. Chen, S.-H.; Lai, Y.-W.; Kuo, C.-L.; Lo, C.-Y.; Lin, Y.-S.; Lin, Y.-R.; Kang, C.-H.; Tsai, C.-C. A surface defect detection system for golden diamond pineapple based on CycleGAN and YOLOv4. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 8041–8053. [Google Scholar] [CrossRef]
  20. Li, Y.; Yuan, H.; Wang, K.; He, Z.; Dong, Y. Detection Technology of Pineapple Thorn Based on Feature Extraction and Space Position. Laser Optoelectron. Prog. (Trans. LOP) 2023, 60, 238–247, (In Chinese with English Abstract). [Google Scholar]
  21. Li, Y.; Ma, X.; Wang, J. Pineapple Maturity Analysis in Natural Environment Based on MobileNet V3-YOLOv4. Smart Agric. (Trans. SA) 2023, 5, 35–44, (In Chinese with English Abstract). [Google Scholar]
  22. Zhou, T.; Wang, J.; Mai, R. Real-time object detection method of pineapple ripeness based on improved YOLOv8. J. Huazhong Agric. Univ. (Trans. JHAU) 2024, 43, 10–20, (In Chinese with English Abstract). [Google Scholar]
  23. Zhang, R.; Huang, Z.; Zhang, Y.; Xue, Z.; Li, X. MSGV-YOLOv7: A Lightweight Pineapple Detection Method. Agriculture 2024, 14, 29. [Google Scholar] [CrossRef]
  24. He, F.; Zhang, Q.; Deng, G.; Li, G.; Yan, B.; Pan, D.; Luo, X.; Li, J. Research Status and Development Trend of Key Technologies for Pineapple Harvesting Equipment: A Review. Agriculture 2024, 14, 975. [Google Scholar] [CrossRef]
  25. Qiu, P.; Su, Z.; Jia, Y. Research on surface damage identification of fragrant pears based on machine vision. Inf. Syst. Eng. 2023, 133–136. [Google Scholar]
  26. Qiu, X.; Shen, F.; Jiao, Y. Research on Moutai-shaped Bottle Crack Defect Recognition Based on MATLAB Image Processing Technology. Mod. Inf. Technol. (Trans. MIT) 2024, 8, 161–166, (In Chinese with English Abstract). [Google Scholar]
  27. Qiu, X.; Sun, Y.; Xu, Y.; Mu, S. Research on defect identification of cigarette holder tips based on MATLAB image processing technology. China Mech. Eng. 2024, 23, 73–77+8. [Google Scholar]
  28. Liu, Q.; Ying, J.; Zhou, L.; Wen, Z. Digital image segmentation techniques and their advancements. J. Tianshui Norm. Univ. 2007, 2007, 35–39. [Google Scholar]
  29. Gou, Y.; Yan, J.; Zhang, F.; Sun, C.; Xu, Y. Research Progress on Vision System and Manipulator of Fruit Picking Robot. Comput. Eng. Appl. 2023, 59, 13–26, (In Chinese with English Abstract). [Google Scholar]
  30. Wang, S.; Yan, C.; Zhang, T.; Zhao, G. Application of Mathematical Morphology in Image Processing. Comput. Eng. Appl. 2004, 32, 89–92, (In Chinese with English Abstract). [Google Scholar]
  31. Liu, T.; Zheng, Y.; Lai, J.; Cheng, Y.; Chen, S.; Mai, B.; Liu, Y.; Li, J.; Xue, Z. Extracting visual navigation line between pineapple field rows based on an enhanced YOLOv5. Comput. Electron. Agric. 2024, 217, 108574. [Google Scholar] [CrossRef]
  32. Bakar, M.A.; Abdullah, A.H.; Rahim, N.A.; Yazid, H.; Zakaria, N.S.; Omar, S.; Nik, W.W.; Bakar, N.A.; Sulaiman, S.F.; Ahmad, M.I.; et al. Defects detection algorithm of harumanis mango for quality assessment using colour features extraction. J. Phys. Conf. Ser. 2021, 2107, 012008. [Google Scholar] [CrossRef]
  33. Rivera, N.V.; Gómez-Sanchis, J.; Chanona-Pérez, J.; Carrasco, J.J.; Millán-Giraldo, M.; Lorente, D.; Cubero, S.; Blasco, J. Early detection of mechanical damage in mango using nir hyperspectral images and machine learning. Biosyst. Eng. 2014, 122, 91–98. [Google Scholar] [CrossRef]
  34. Hadipour-Rokni, R.; Asli-Ardeh, E.A.; Jahanbakhshi, A.; Sabzi, S. Intelligent detection of citrus fruit pests using machine vision system and convolutional neural network through transfer learning technique. Comput. Biol. Med. 2023, 155, 106611. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Pineapple damage detection experimental platform.
Figure 2. Depth images at different distances: (a) a depth image at a normal working distance, and (b) a depth image at a short distance.
Figure 3. Flowchart of the pineapple damage detection process.
Figure 4. The MATLAB software interface.
Figure 5. Images of a pineapple’s damaged area: (a) color image, (b) grayscale, and (c) enhanced.
Figure 6. Image filtering results: (a) Gaussian filter, and (b) Laplacian filter.
Figure 7. Binary image of a pineapple’s damaged area, with noise.
Figure 8. Morphological segmentation image.
Figure 9. Edge detection of the pineapple damage image: (a) Sobel operator, (b) Prewitt operator, (c) LoG operator, and (d) Canny operator.
Figure 10. Pineapple damage contour filling.
Figure 11. Binary image of pineapple damage.
Figure 12. The samples and their harvester: (a) samples, and (b) pineapple harvester.
Figure 13. Detection case: (a) structured light, and (b) LED lamp supplemental light.
Figure 14. Mechanical damages and their segmentation results for the two samples: (a) mechanical damage, and (b) segmentation results.
Table 1. Detection results of the pineapples’ surface damage.

No.   D (m)   A0 (mm²)   A1 (mm²)   ε (mm²)   E (%)
1     0.174   255.4      268.9      13.5      5.3
2     0.179   390.2      415.6      25.4      6.5
3     0.173   311.7      331.0      18.4      5.9
4     0.190   157.0      164.5      7.5       4.8
5     0.176   205.0      215.5      10.5      5.1
6     0.177   252.9      268.1      15.1      6.0
7     0.177   423.1      445.5      22.4      5.3
8     0.175   299.0      316.3      17.3      5.8
9     0.174   625.2      663.3      38.1      6.1
10    0.173   419.7      444.5      24.8      5.9

Note: D represents the depth value (m); A0 and A1 represent the detected area and the actual area (mm²), respectively; ε represents the detection deviation (mm²); and E represents the error (%).
