Article

Computer Vision-Based Multiple-Width Measurements for Agricultural Produce

by Cannayen Igathinathane 1,*, Rangaraju Visvanathan 2, Ganesh Bora 3 and Shafiqur Rahman 4

1 Department of Agricultural and Biosystems Engineering, North Dakota State University, Fargo, ND 58102, USA
2 A.D. Agricultural College and Research Institute, Tamil Nadu Agricultural University, Thiruchirappalli 620009, India
3 Research and Technology Innovation, Fayetteville State University, Fayetteville, NC 28301, USA
4 Agricultural Research and Development Program, Central State University, Wilberforce, OH 45384, USA
* Author to whom correspondence should be addressed.
AgriEngineering 2025, 7(7), 204; https://doi.org/10.3390/agriengineering7070204
Submission received: 5 May 2025 / Revised: 23 June 2025 / Accepted: 26 June 2025 / Published: 1 July 2025

Abstract

The most common size measurements for agricultural produce, including fruits and vegetables, are length and width. While the length of any agricultural produce is unique, the width varies continuously along its length. A single width measurement is insufficient for accurately characterizing a varying width profile, resulting in an inaccurate representation of the shape or mean dimension. Moreover, manually measuring multiple widths is laborious or impractical, and no information in this domain is available in the literature. Therefore, an efficient alternative computer vision measurement tool was developed utilizing ImageJ (Ver. 1.54p). Twenty sample sets of fruits and vegetables, each representing a different shape, were selected and measured for length and multiple widths. A statistically significant minimum number of multiple widths was determined for practical measurements based on an object’s shape. The “aspect ratio” (width/length) was identified as an effective indicator of the minimum number of multiple-width measurements required. In general, 50 multiple-width measurements are recommended; however, even 15 measurements would be satisfactory (1.0% ± 0.6% deviation from 50 widths). The developed plugin was fast (734 ms ± 365 ms CPU time/image), accurate (>99.6%), and cost-effective, and it incorporated several user-friendly and helpful features. This study’s outcomes have practical applications in the characterization, quality control, grading and sorting, and pricing determination of agricultural produce.

Graphical Abstract

1. Introduction

Length and width are the most common size measurements that characterize agricultural and horticultural produce. Size significantly impacts the external appearance of fruits and vegetables, and the price of produce generally correlates well with its size [1]. Agricultural produce, such as fruits and vegetables, has a unique length but several varying widths along that length. A single width measurement therefore cannot provide the best description of the object’s width profile; nevertheless, most reported width dimensions come from a single measurement per object. For example, the computer vision “pixel-march” image analysis method was used to measure the orthogonal length and width of agricultural produce [2]. Multiple widths describe the shape of the object better, and a correct profile representation, or at least a mean width derived from multiple widths, provides a better description.
While dealing with axisymmetrical objects, which have rotational symmetry about their length, the width measurement will also depict its thickness (orthogonal to the width and length). Such measurements will provide the best description of the product’s shape based on simple orthogonal dimensions. Most agricultural produce and products have axisymmetrical (convex) shapes, while non-axisymmetrical (concave or curved) shapes are also found in agricultural produce, but they are less prominent and are not considered in this study.
In the grading and sorting of agricultural produce, size constitutes one of the primary criteria, while other factors such as shape, color, and surface defects are also considered. Manual measurement, which is a known and common method, becomes laborious, tiresome, subjective, and prone to inaccuracy and reproducibility issues.
Several machine-based grading and sorting systems have increasingly been adopted in the industry to address these limitations. These systems utilize computer-based algorithms to analyze digital images and interface with activating mechanisms that perform the actual separation or sorting process.
Computer/machine vision systems have been increasingly used in these industries and food processing plants due to their ability to provide rapid, cost-effective, hygienic, consistent, accurate, and objective assessment; online automatic process control; and real-time quality evaluation [3,4]. In addition, computer vision image analysis applications are well established and proven to be successful for classification; volume and mass estimation; defect detection; size-and-shape feature measurement; quality inspection; and the grading of grains, fruits, and vegetables in agricultural and food process engineering fields [5,6,7,8,9].
Reviews on computer vision or image processing are available and applicable to quality evaluation; size and volume determination; and the shape analysis of fruits and vegetables, agricultural products, and food products [1,4,10,11,12,13], and developments in these fields [14] illustrate several applications to agricultural produce and products. A recent review described a basic method (caliper) compared to a modern method (machine vision and deep learning) for the direct size assessment of fruits on trees in orchards [15]. In addition, some novel applications of image processing include the determination of the volume and surface area of agricultural products [16,17] and the measurement of the major orthogonal dimensions of food grains [2]. The relationships between the volume and mass of axisymmetric fruits like apple, sweet-lime, lemon, and orange were estimated using an imaging technique with five different views of a fruit and geometrical formulas [18]. Machine vision-based systems were employed for in-line sorting and the detection of contaminants or specific chemical compounds on the products’ surface [19]. Object detection and depth maps with a stereo camera for vegetable (cucumber, eggplant, tomato, and pepper) recognition and size estimation using six keypoints [20] were also studied.
Recently, fruit and vegetable disease recognition using convolutional neural networks, YOLO deep learning models [21], and three-dimensional (3D) machine vision techniques have been widely employed in agriculture and food systems. The utilization of modern 3D vision systems and advanced deep learning technologies [22,23]; the application of computer vision-based industrial algorithms for detecting the dimensions and spatial positioning of fruits and vegetables on a conveyor belt, enabling their transfer to a packing machine via a robotic arm [24]; and several similar research outputs have been reported. In a modeling study, researchers developed models for the non-destructive in situ detection of individual fruit mass in diversely shaped tomatoes using their maximum width and length [25].
Computer vision techniques and image analysis make it possible to obtain these lengths and multiple widths automatically. Recognizing the significance of multiple widths in representing actual profiles and deriving the mean width, it is pertinent to assess the variation in width along the length and determine the number of multiple widths required to represent a statistically significant mean width. Although these state-of-the-art systems have made significant advancements, particularly in complex tasks such as identification, classification, object localization, handling occlusions, robotics, and overall shape and dimension estimation, no reports were found in the literature on the multiple-dimension measurement of agricultural produce, the mean width, or the statistically significant minimum number of measurements needed; this gap makes the present research necessary, significant, and unique.
Our goal is to create a method to measure the multiple widths of axisymmetrical objects, such as agricultural produce, and find out statistically how few multiple widths are needed to represent them accurately. Therefore, the objectives of this research are to (i) develop a computer vision analysis tool utilizing the open-source ImageJ analysis platform (ImageJ plugin) to process digital images and validate the algorithm; (ii) apply it to measure the length and orthogonal widths of several axisymmetrical agricultural produce; (iii) determine their mean width and the minimum number of statistically significant widths; and (iv) statistically evaluate the effect of the number of multiple widths on the mean width.

2. Materials and Methods

2.1. Test Material

The test material used in this study was agricultural produce (vegetables and fruits) obtained from local grocery stores. Pasta fettuccine, carrots, celery, green beans, potato, and sweet potato were obtained from Bismarck and Mandan, ND, USA, and the rest of the fresh produce from Coimbatore, TN, India. Original images of the 20 selected test materials used in the study, along with sample multiple-width measurement results, can be found in the “Mendeley Data” repository (https://doi.org/10.17632/jprxshtr4t.1; accessed on 25 June 2025) [26]. A montage of these image samples is presented in Figure 1 for illustration.
The selected agricultural produce samples were axisymmetrical, characterized by uniform widths and thicknesses along their length. Axisymmetrical shapes produce near-circular cross-sections perpendicular to the rotational length axis. In addition, axisymmetrical objects mostly have convex shapes, where the centroid of the projected area lies inside the object itself. In contrast, concave shapes (curved or bent), where the centroid lies outside the object’s projected area and complicates the measurement, represent only a relatively small portion of agricultural produce and are not considered in this study.

2.2. Overall Workflow of the Research Work

The workflow of the computer vision research on developing the user-coded ImageJ plugin for multiple-width measurement for agricultural produce is presented in the form of a flowchart in Figure 2. Some of the processes depicted (Figure 2) should be understood in conjunction with the ImageJ environment and were achieved through the Java source code of the plugin, while others depict general setup, input, and output operations. Details of component processes are discussed as required.

2.3. Image Acquisition of Agricultural Produce Samples

Domestic digital cameras (1: Canon PowerShot, SX100 IS, 8.0 megapixel, 10X optical zoom, Melville, NY, USA; and 2: Canon PowerShot, A3300 IS, 16.0 megapixel, 5X optical zoom, Melville, NY, USA) were utilized to capture the test material images. The image size directly influences the measurement accuracy through the resolution of the images. A large image size usually defines the object with an increased number of pixels, resulting in increased resolution, thereby improving measurement accuracy. The resolution of the images is typically expressed in dots per inch (DPI; Figure 2), and the DPI information will be accessible from the image properties.
The DPI of an image will vary depending on the focused distance (whether manual or autofocus), that is, the distance between the object and the camera. When the camera is positioned closer to the object, both the DPI and the accuracy of measurements increase. However, this proximity also reduces the “field of view”, thereby limiting the number of objects that can be captured within a single image. Conversely, maintaining a constant distance between the object and the camera keeps the DPI of the images constant.
Fixing the camera on a stand in such a way that it aligns perpendicular and focuses on objects is a basic system suitable for laboratory and industrial settings. However, without securing the camera, images for practical use can be obtained by incorporating a reference object of known dimensions within the image frame. This reference object serves as a legend or scale, based on which the dimensions of the objects in the image can be calibrated and measured. There is no restriction on the reference frame dimensions, but the frame should be proportionately larger than the tested objects and contrast in color with them, so that the profiles of the objects are correctly captured. In this study, a “thermocol board” measuring 1.0 m × 0.5 m and a “letter paper (US)” with a drawn rectangle of dimension 242 mm × 178 mm (Figure 3) were utilized as the reference frames.
Images were captured in a manner that the reference frame occupies the major area of the picture at the highest possible resolution. Objects were arranged so that they do not touch one another (singulated arrangement), and the orientation of objects without protrusions (e.g., pedicles) does not matter. Special roller or chain conveyors presenting fruits individually (singulated) for machine inspection or other operations are common in the industry. However, produce with pedicles should be arranged so that all face one direction (Figure 1g,h: eggplant samples), as these components were excluded from measurements using the “end caps” methodology coded in the plugin.
The shadows of the objects should be avoided, as they will be included in the projected object area during preprocessing if the object’s color matches the shadow color. Indirect or diffused lighting, as well as multiple lighting sources, can effectively eliminate shadows. Images captured using natural lighting in shaded areas or with multiple fluorescent lights tend to yield good results. In the case of small objects, such as grains and other particulate materials, a document scanner proved to be an efficient imaging tool [2]. The scanner provides the best lighting without shadows, along with very high resolutions (DPI > 1000). Scanners can be readily utilized for small-sized produce (e.g., berries, grapes, nuts).

2.4. Computer Vision Image Analysis Framework Used

The developed computer vision image analysis plugin utilizes the Java language and Fiji (Ver. ImageJ 1.54p; http://fiji.sc/Fiji; accessed on 25 June 2025) package. Fiji is an image processing package and is a distribution of ImageJ, an open-source, free image analysis program [27,28]. It is quite popular among scientists working with biological imaging [29]. Fiji is a feature-rich integrated development environment of ImageJ that offers various commands for plugin development [30]. Researchers have developed custom-made ImageJ plugins to address diverse computer vision applications. Some of the examples are food grains dimensions [2]; morphological characterization of particles [31]; blood vessel diameter measurement [32]; and several more. These plugins played a role in advancing scientific research in these areas.

2.5. Image Preprocessing

In computer vision image analysis, the fundamental calculations are initially performed on pixel units, and subsequently, these values are converted into practical physical units. The calibration process establishes a correlation between the pixel values and the physical dimensions of the object. In most image processing algorithms (Figure 2), the binary image is handled easily and is used to derive various standard parameters from the image processing programs. Therefore, color images were initially converted into binary images (black and white colors only) using the image processing system. ImageJ’s “8-bit” and “Threshold…” commands created the gray-scale and binary images, respectively. The information displayed in typewriter font (e.g., “8-bit”) refers to actual ImageJ commands.
The DPI information (pixel values of a reference object) of the image can be extracted by tracing the known reference rectangular frame or line using the “Rectangular” or “Straight” selection tool, respectively. From the ‘w’ (width) and ‘h’ (height) or the ‘l’ (length) of the reference frame or line in pixels, the DPI of the image is calculated as follows:
DPI (pixel inch−1) = 25.4 × Reference dimension (pixel) / Reference dimension (mm)        (1)
The measurement resolution (the smallest dimension that can be measured) can be obtained from the DPI as follows:
Measurement resolution (mm pixel−1) = 25.4 / DPI (pixel inch−1)        (2)
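As an illustrative sketch (in Python, not the plugin’s Java code), Equations (1) and (2) amount to the following calibration arithmetic; the reference-edge pixel count used in the example is hypothetical:

```python
def dpi_from_reference(ref_pixels: float, ref_mm: float) -> float:
    """Equation (1): DPI = 25.4 * reference(pixel) / reference(mm)."""
    return 25.4 * ref_pixels / ref_mm

def resolution_mm_per_pixel(dpi: float) -> float:
    """Equation (2): smallest measurable dimension, in mm per pixel."""
    return 25.4 / dpi

# Example: a 242 mm reference edge spanning 1039 pixels (hypothetical count)
dpi = dpi_from_reference(1039, 242)   # ≈ 109 DPI
res = resolution_mm_per_pixel(dpi)    # ≈ 0.233 mm/pixel
```

At 109 DPI this reproduces the coarsest resolution reported later in the study (0.233 mm pixel−1).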
Once the image DPI is measured and recorded (Equation (1)), the image can be cropped, if necessary, to include only the objects while eliminating unnecessary background details, since image cropping preserves the DPI. Fine particles, such as dust or other specks, can be omitted from analysis by setting a minimum cutoff value in the “Size (pixel^2)” area range of “Analyze Particles …”. This minimum cutoff value depends on the projected area of the object and the DPI of the image. For instance, the smallest object, namely, the ivy gourd image (Figure 1j) at 109 DPI, had projected areas ranging from 10,635 to 31,740 pixel^2; thus, a “Size (pixel^2)” input of “10,000-Infinity” will eliminate small and fine particles for the entire set of samples studied. However, if the objective is only to remove dust and small non-objects, a smaller range, such as “1000-Infinity” (≈1/10 of the minimum area), can also be employed to ensure the inclusion of larger objects.
During actual measurement, filtering can be performed, or an image mask can be obtained by eliminating the finer particles beforehand. It is essential to check the “Exclude on edges” and “Include holes” options of “Analyze Particles …” while making the mask to ensure that the entire area of the object is completely filled. Any partially cropped object will be discarded.
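The area-cutoff rule of thumb above (≈1/10 of the smallest expected object area) can be sketched as follows; the fraction and example area are taken from the text, and the helper name is illustrative:

```python
def size_cutoff(min_object_area_px: int, fraction: float = 0.1) -> int:
    """Suggested lower bound for the "Analyze Particles ..." Size range,
    taken as roughly 1/10 of the smallest expected object area."""
    return int(min_object_area_px * fraction)

# Smallest object in the study (ivy gourd at 109 DPI): 10,635 pixel^2
cutoff = size_cutoff(10_635)   # → 1063, consistent with "1000-Infinity"
```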

2.6. Plugin Development and Description of Methodology

A brief description of the developed multiple and mean width measurement plugin (Figure 4) and its methodology is described here. The plugin consisted of about 1250 lines of Java code (including comments) and employed several Java methods to execute its functionalities. These methods include the following activities: (i) initial setup that derives the standard ImageJ output of particle analysis; (ii) determining the length and width limits of objects; (iii) reading user inputs based on object properties; (iv) evaluating multiple and mean widths, as well as lengths, and visualizing these measurements on a binary image; (v) logging the individual multiple-width measurements for each object; (vi) summarizing the overall results for all objects; (vii) creating separate graphs of multiple widths for each object based on user inputs and orientation; and (viii) exporting the output as a consolidated CSV file for all measuring sessions.

2.6.1. Plugin’s User Input Front Panel

The plugin’s front user input panel presents a suite of inputs for performing various activities and achieving the desired results (Figure 4C). The input panel has several numeric field text boxes and Boolean-valued check boxes to facilitate input collection. Several of these choices were coded as default, allowing the plugin to run without users’ intervention if the defaults suit their requirement or for a trial run. In alternative cases, users can provide their specific inputs and obtain the desired outputs.
The calibration (pixel to physical dimensional units conversion) process can be performed either through the DPI value or the reference frame’s dimensions. As previously described (Section 2.5), when the known DPI value of the image (Equation (1)) is combined with a binary mask, the calibration can be performed directly (Figure 4C). Conversely, if the DPI is not known, the number of pixels that constitute the reference frame can be used for calibration (Figure 4C). Both these methods result in the conversion of pixel values into physical dimensions (mm). If the image title contains DPI values, the plugin will process the text, extract the DPI values, and populate them automatically. We followed this approach to obtain a DPI value in the image titles to enable the automatic reading and better documentation of images.
It is common practice in the cultivation of vegetables and fruits to harvest some produce with pedicles intact. These pedicles can interfere with the multiple-width measurement, since any width measured along a pedicle will not represent the economically important portion of the produce. To address the presence of pedicles, the “end cap” chopping methodology was incorporated (Figure 4C). As the images were read from left to right and top to bottom, the end-cap values should be input accordingly: left- or top-oriented pedicles require higher top-end chop percentage values than the bottom, and vice versa. As the end chop values are applied uniformly in a session, it is prudent to orient the samples with the pedicles pointing in one direction so that a single end-cap value can be applied to all the objects in the image (Figure 1h,g: short pink eggplant and long green eggplant). Future developments should aim to automatically identify the pedicle and apply the end-cap values correctly regardless of orientation.
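A minimal sketch (Python, not the plugin’s Java) of the end-cap arithmetic described above; the object length and the default 10% chop values are illustrative:

```python
def measurement_span(length_px: float, top_chop_pct: float,
                     bottom_chop_pct: float) -> float:
    """Object length remaining after removing both end caps (percent of length)."""
    return length_px * (1.0 - (top_chop_pct + bottom_chop_pct) / 100.0)

def step_length(span_px: float, n_widths: int) -> float:
    """Spacing between successive width-measurement stations along the span."""
    return span_px / (n_widths - 1)

# Hypothetical 800 px long object, 10% chopped from each end, 50 widths
span = measurement_span(800, 10.0, 10.0)   # 640 px left to measure
step = step_length(span, 50)               # ≈ 13.06 px between widths
```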
One of the other analysis inputs (Figure 4C) is the desired number of multiple-width measurements. A suggested limit, computed as 80% of the smallest object length, is also indicated, and any value equal to or less than this limit can be used. In this research, we intended to find the optimum number of width measurements beyond which no statistical difference is observed. A default value of 50 was provided for the user. The final input box provides another opportunity to filter out small particles based on their area, expressed in square pixels. A default value of 10 was set, and this can be modified by the user.
The output-related selections can be exercised through the checkboxes: (i) multiple-width drawing output on a binary image of an object to visualize the measured locations of the widths (Figure 4E); (ii) multiple-width results in the form of a “Log” window for immediate consumption (Figure 4G); (iii) multiple-width visualization as a graphical plot for each object in the image (Figure 4D); and (iv) multiple-width graphical plot orientation (the swapping of x and y axes to visualize the plotted width as desired by the user; Figure 4E). As a background task, all consolidated results of all measurement sessions will be appended to a CSV file for recording and documentation.

2.6.2. Methodology of Multiple-Width and -Length Measurements

Utilizing Fiji as the plugin environment (Figure 4A), the “pixel-march” method, as reported elsewhere [2], seeks black pixels while marching along white pixels, following specified straight-line paths, to evaluate both length and widths. For this purpose, it is essential that the interior of the object is continuous and filled with white pixels, which is ensured by the “Include holes” option (Figure 4B). Convex-shaped objects ensure that the centroid (Figure 4E1), which serves as the starting point of the pixel-march, remains within the object’s outline, allowing length and width measurements to proceed uninterrupted. If the centroid falls outside, which is determined based on the background color, only that object is excluded from the measurements. With the desired number of multiple widths, the “step length” of segments for multiple-width measurements is determined.
Based on the end caps’ chopping values, the segment length was calculated from the object length less the end cap’s length (Figure 4C). Different or similar end-cap values can be used depending on the type of agricultural produce (e.g., with or without pedicle). Starting from the centroid, the widths were determined by finding the boundary pixels orthogonal to the length on both sides and using the distance formula. The widths were determined starting from the centroid and moving away towards both ends based on the step length (Figure 4E) for correct evaluation, but they were re-allotted from the bottom/left to the top/right for plotting the widths (Figure 4D). The mean and standard deviation (STD) of multiple-width measurements were calculated for each object. The length of the object was obtained along the major axis and through the centroid (Figure 4E). Textual results were produced for reading (Figure 4F) and for further statistical analysis in the form of a spreadsheet.
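The pixel-march width idea can be sketched on a binary array as follows (a Python illustration, not the plugin’s Java; the actual plugin marches orthogonally to an inclined major axis, whereas this toy example assumes a horizontal axis, with 1 = object and 0 = background):

```python
def width_at_column(mask, row, col):
    """March up and down from (row, col) until background is hit;
    return the number of object pixels crossed (the local width)."""
    if mask[row][col] == 0:
        return 0          # starting point is outside the object
    top = row
    while top > 0 and mask[top - 1][col] == 1:
        top -= 1
    bottom = row
    while bottom < len(mask) - 1 and mask[bottom + 1][col] == 1:
        bottom += 1
    return bottom - top + 1

# A 3-pixel-thick horizontal bar as a toy object
mask = [[0, 0, 0, 0, 0, 0],
        [0, 1, 1, 1, 1, 0],
        [0, 1, 1, 1, 1, 0],
        [0, 1, 1, 1, 1, 0],
        [0, 0, 0, 0, 0, 0]]
w = width_at_column(mask, 2, 2)   # → 3
```

Multiplying such pixel counts by the measurement resolution (Equation (2)) converts them to millimeters.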

2.7. Statistical Analysis of Multiple Widths

It is of practical interest to investigate the effect of the number of widths considered on the mean width and its statistical significance; mean separation analysis provides such insights. Numbers of multiple-width measurements ranging from 1 to 200 were considered, with the tested values spaced more densely at the lower end. The effect of the number of widths on the measured mean width and its statistical significance was evaluated using the mean separation method (pairwise differences among means). An SAS (Ver. 9.3, 2009, SAS Institute, Cary, NC, USA) macro %mmaov that uses PROC MIXED [33] was used to perform the mean separation analysis. This macro converts the mean pairwise differences into letter groups, where means that share a common letter are not statistically different at a specified α level. The macro was run with the mean width as the dependent variable; the produce name and the number of widths measured as classification and fixed variables; a logarithmic data transformation (a common technique to reduce skewness so that the data better meet the normality assumption of the analysis); grouping by produce name; α = 0.05; and the adjustment option set to Tukey.
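As a hedged numerical illustration (separate from the SAS analysis) of why the mean width stabilizes quickly with the number of widths, consider an assumed smooth semi-elliptical width profile with 10% end caps chopped, as in the plugin defaults; the dimensions are made up:

```python
import math

def mean_width(n, a=100.0, b=30.0, chop=0.10):
    """Mean of n equally spaced widths over the central portion (after
    chopping `chop` of the total length from each end) of a semi-elliptical
    profile with half-length a and half-width b (arbitrary units)."""
    lo = -a * (1 - 2 * chop)
    hi = a * (1 - 2 * chop)
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return sum(2 * b * math.sqrt(1 - (x / a) ** 2) for x in xs) / n

m50 = mean_width(50)
m15 = mean_width(15)
dev_pct = abs(m15 - m50) / m50 * 100   # small (under ~2% for this profile)
```

The deviation between 15 and 50 widths stays in the low single-digit percent range for this smooth profile, consistent in spirit with the ~1% deviation reported in the abstract for the measured produce.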

3. Results and Discussion

For the agricultural produce images captured in this study using the digital cameras, the DPI values were within the range of 109 to 246, resulting in a measurement accuracy or resolution (Equation (2)) of 0.103 to 0.233 mm pixel−1. Higher DPI values are feasible with high-resolution cameras. A DPI of 600 (0.042 mm pixel−1) or more is quite common with digital scanners. Therefore, the appropriate image resolution (DPI) should be selected based on the necessary measurement accuracy.

3.1. Features of the Developed Plugin

Several features are built into the developed plugin and are briefly described below. The plugin accommodates two calibration methods, namely, DPI and reference dimension inputs (Figure 4C). However, in a fixed setting with a stationary camera where the distance between the objects and the camera remains constant, the calibration values can be directly coded into the plugin, eliminating the calibration routine. The plugin extracts the object name and DPI value from the file name of the image using Java string processing commands if the information is included in the specified pattern. This extracted DPI value is filled in the “Use DPI” input box of the plugin’s input panel for ready execution (Figure 4C), and the object name is used for the result’s output.
The plugin (Figure 4) is capable of analyzing images containing objects in any orientation. Consequently, the multiple widths measured were oriented orthogonally to the major axis of any inclination. In addition, the top and bottom end caps’ chopping values can be input individually to preferentially address the presence of pedicles and exclude them from the multiple-width measurements (Figure 4E). The plugin identifies concave objects (centroids falling outside of the profile) and outputs a message with the identification label number to facilitate their elimination through further preprocessing and reruns. For example, the fifth object from the left on the color image of green beans (Figure 1i: green bean) has a highly curved shape, indicating that it is a concave object, and it is removed from the binary image for measurement purposes [26].
The plugin generates various textual, data, and graphical forms of outputs (Figure 4). Usually, the results are displayed in the “Log” window, showing the overall results (Figure 4F,H), and individual multiple-width results (Figure 4G) can also be produced. The plugin also directs consolidated results and continuously archives them to an external spreadsheet through a CSV file format. Graphical outputs include visualizations of measured widths as plots (Figure 4C,D), as well as drawings of widths and lengths directly on a binary image (Figure 4C,E).
The developed measurement methodology is cost-effective, as it only requires an inexpensive domestic digital camera and the plugin created using the open-source and free Fiji ImageJ software (Ver. 1.54p). Thus, the investment for the system rests on the digital camera (e.g., a 48-megapixel camera at about $150) and supplementary lighting supplies. However, a practical unit that incorporates this algorithm and associated hardware for grading or sorting may require additional costs for scaled-up operations.

3.2. Plugin Validation

The validation of the plugin can be performed by analyzing an image (without a reference frame) that contains objects of known dimensions. An image depicting directly drawn rectangles and precision-cut paper strips of known dimensions (Figure 5) was analyzed using the plugin, and the results are tabulated (Table 1). The objective is to test the plugin’s performance by conducting multiple measurements with simple reference objects. It is also possible to use several standard precision objects for the plugin performance test as alternatives.
The results demonstrate that the plugin accurately measures drawn blocks on images with precise edges, as evidenced by the high measurement accuracy (99.98%–100%) for length and multiple widths. This demonstrates the plugin’s capability to make exact measurements when images with clean outlines are used. However, the accuracy of measuring the length and multiple widths of the cut paper strips varied, ranging from 95.70% to 99.25%. These results are comparable to the absolute deviations (1.44%–2.15%) observed in a computer vision analysis study using a document scanner (300 DPI) compared with digital caliper measurements of standard nylon spacers for printed circuit boards [2].
In the present study, the accuracy reduces as the strips become thinner (5 mm), as the number of pixels describing such widths diminishes, and individual pixels have a greater influence on the measurements. With a DPI of 256, each pixel represents 0.0992 ≈ 0.1 mm. This means that the exclusion/inclusion of a single pixel on a 5 mm reference width results in a ±2% loss of accuracy. However, on a 20 mm reference, this is linearly reduced to a ±0.5% accuracy loss and further diminishes with increased dimensions.
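The single-pixel quantization argument above can be checked numerically (an illustrative Python sketch of the arithmetic in the text):

```python
def pixel_error_pct(dpi: float, width_mm: float) -> float:
    """Percent error in a width caused by a single-pixel miscount at a given DPI."""
    pixel_size_mm = 25.4 / dpi          # physical size of one pixel
    return pixel_size_mm / width_mm * 100.0

e5 = pixel_error_pct(256, 5.0)    # ≈ 1.98% on a 5 mm width
e20 = pixel_error_pct(256, 20.0)  # ≈ 0.50% on a 20 mm width
```

This reproduces the ±2% and ±0.5% figures quoted above, and shows the error shrinking linearly with the reference width.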
As most of the practical samples are wider than 5 mm, a second experiment included a few wider samples made from letter paper (US; 215.9 mm × 279.4 mm) for validation (Table 1) at two resolutions. These wider strips produced accuracies of ≥99.63% even at 0.154 mm pixel−1 (165 DPI). With the increased resolution of 0.096 mm pixel−1 (265 DPI), the measurement accuracies further improved to ≥99.86%. In addition, a whole letter paper (US), involving no manual cutting, identified as objects numbered 13 and 14 (Table 1), was validated at 165 and 265 DPI, and this showed a slight increase in accuracy with an increase in DPI. Thus, increasing the DPI is a simple method for improving the measurement accuracy. Overall, the measurement accuracy of the plugin varied from 96% to 100%, and it improved with the resolution of the input image.

3.3. Multiple Width Results of Agricultural Produce

An example of multiple and mean dimension measurements, with papaya (Figure 1l) as an illustration from the agricultural produce studied (Figure 1), is presented in Figure 6. The original color image (Figure 6A) was duplicated to create a binary mask during preprocessing; the length and multiple widths were then measured and drawn on it (Figure 6B). The plugin was run with the loaded binary image of the papaya and the default values (DPI from image = 109, both-end chop % = 10.0, and number of multiple widths = 50) of the plugin (Figure 4C), with some user inputs (unchecking the profile rotation option). The length and multiple widths of each sample in the papaya image are plotted and labeled (Figure 6B).
The samples were identified from the top to the bottom and from the left to the right (the third papaya from the left is identified as the first). The angle of inclination of the length direction (along the axis of the papaya) also represents the orientation of the object. The multiple widths were measured and drawn perpendicular to the length axis (Figure 6B). The measured multiple widths can be visualized through unrotated profile plots (width on the x-axis and width number on the y-axis; Figure 6C). The unrotated multiple-width plot also aids in visualizing the profile of the object as measured from bottom to top. The plugin also calculates the minimum, maximum, mean, and STD values of the multiple widths of all objects in the sample image (Table 2).
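The measurement geometry described above (length along the object's axis, widths taken perpendicular to it at evenly spaced stations, with end caps chopped) can be approximated with image moments. The sketch below is a simplified NumPy reimplementation under stated assumptions; the plugin's actual ImageJ/Java code differs, and the slice half-thickness used here is illustrative:

```python
import numpy as np

def multiple_widths(mask: np.ndarray, n_widths: int = 50, chop_pct: float = 10.0):
    """Approximate multiple-width measurement on a binary mask (1 = object).

    The length axis is taken as the principal axis from second-order central
    moments; widths are pixel extents perpendicular to it, with chop_pct of
    each end excluded (end caps). Returns widths in pixels.
    """
    ys, xs = np.nonzero(mask)
    y = ys - ys.mean()
    x = xs - xs.mean()
    # orientation of the principal (length) axis from central moments
    theta = 0.5 * np.arctan2(2.0 * (x * y).mean(), (x * x).mean() - (y * y).mean())
    u = x * np.cos(theta) + y * np.sin(theta)    # coordinate along the length
    v = -x * np.sin(theta) + y * np.cos(theta)   # coordinate across the width
    span = u.max() - u.min()
    lo = u.min() + span * chop_pct / 100.0       # chop bottom end cap
    hi = u.max() - span * chop_pct / 100.0       # chop top end cap
    stations = np.linspace(lo, hi, n_widths)
    half = (hi - lo) / (2.0 * (n_widths - 1)) if n_widths > 1 else (hi - lo) / 2.0
    half = max(half, 0.5)                        # slice half-thickness (pixels)
    widths = np.empty(n_widths)
    for i, s in enumerate(stations):
        sel = np.abs(u - s) <= half              # pixels belonging to this slice
        # +1 converts a coordinate difference to a pixel extent
        widths[i] = (v[sel].max() - v[sel].min() + 1.0) if sel.any() else 0.0
    return widths
```

For a horizontal 200 × 20 pixel rectangle, every station reports a width of 20 pixels; dividing by the pixels-per-mm calibration (from the DPI) converts the result to millimeters.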
The dimension measurement results for pasta fettuccine and the 19 agricultural produce items (Figure 1) are presented in Table 2. Length is a unique measurement for each sample, while 50 multiple-width measurements were conducted for each sample. Consequently, within each sample group, there are distinct minimum and maximum lengths, whereas each individual sample has its own minimum and maximum width (based on the 50 width measurements). The standard deviations apply only to the mean length and all three width values (Table 2).
The details presented in the Supplementary Materials data [26] for the selected produce (Figure 1) show a cropped color image, the measured and labeled binary image, and a multiple-width plot of a selected sample identified in the plot title (e.g., n = 5). The plugin identifies objects by finding their topmost pixel while scanning from left to right and from top to bottom. For example, in the "Bitter gourds" (Figure 1b) sample set (data from [26], page 2), the natural first-left sample is actually identified as "9", as its top tip is lower than those of the other eight samples. This first-left sample was the one selected for multiple-width plot visualization, and its number can also be identified from the plot title "(n = 9)". These plots are for a single sample in the image, and similar plots for the rest of the samples were produced (plots furnished by Dr. C. Igathinathane, the first author). The multiple-width plot is a collection of caliper widths (Figure 4E5) from the bottom/left to the top/right. It clearly shows the width profile of the test samples, a variation that a single value cannot specify. The plugin accurately captures multiple-width profiles and enables the visualization of width variations (data [26]).
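The labeling order described above (the object whose topmost pixel is met first in a top-to-bottom, left-to-right raster scan gets label 1) can be expressed compactly. The sketch below is illustrative and assumes `masks` is a hypothetical list of per-object binary arrays, not the plugin's internal representation:

```python
import numpy as np

def label_order(masks):
    """Indices of `masks` in plugin labeling order: the object whose topmost
    pixel is reached first in a top-to-bottom, left-to-right scan comes first."""
    def first_scanned_pixel(m):
        ys, xs = np.nonzero(m)
        k = np.lexsort((xs, ys))[0]  # primary key: row (y); secondary: column (x)
        return (ys[k], xs[k])
    return sorted(range(len(masks)), key=lambda i: first_scanned_pixel(masks[i]))
```

This explains the bitter gourd example: the leftmost sample is labeled "9" simply because its top tip sits lower in the image than those of the other eight samples.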
The multiple-width profiles closely resemble the tested sample’s natural profile (data from [26]). For instance, pasta fettuccine (Figure 1a), a manufactured food product with uniform dimensions, exhibits a linear profile with minimal variation (mean around 4.7 mm). Bitter gourds (Figure 1b), on the other hand, demonstrate the zigzag profile characteristic of the sample’s rough surface. Carrots (Figure 1d) and celery (Figure 1e) exhibit a linear but inclined profile following their triangular or truncated pyramid shape. Green beans (Figure 1i) display a wavy pattern that accommodates the presence of seeds within the pod. Long green eggplants (Figure 1g) exhibit a typical inclined and curvilinear variation in the hook-shaped sample. Pineapples (Figure 1m) produce a linear profile with minimal variation, as the fruit is cylindrical and rectangular in profile. The remaining produce exhibits a smooth dome-shaped profile, reflecting the spherical or prolate spheroid shape of the samples. Furthermore, any dent or deformation in the profile of the samples (e.g., potato [Figure 1n], snap melons [Figure 1p], sweet potato [Figure 1q]) was also captured. These findings clearly demonstrate the necessity of multiple-width measurements to accurately measure the varying dimensions and visualize the actual profile or shape of the agricultural produce.
The aspect ratio shape factor (W/L) aids in identifying a sample's elongation (smaller values) or roundness (higher values, close to 1.0) (Table 2; data from [26]). Elongated samples, such as pasta fettuccine, green beans, carrots, and celery hearts, exhibited aspect ratios of 0.06, 0.08, 0.11, and 0.12, respectively. Conversely, round samples, including mangoes, light green watermelons, dark green watermelons, and snap melons, had greater values of 0.88, 0.73, 0.70, and 0.62, respectively. It is important to note that these aspect ratio values are derived from multiple-width measurements and cannot be precisely obtained from simple single measurements.

3.4. Effect of Number of Width Measurements and Significance

The presence of multiple mean groups within samples (identified by uppercase letters; Table 3), except for pasta and celery, reinforces the statistical necessity of multiple-width measurements (α = 0.05). The aspect ratio effectively served as an indicator of shape (Table 2) for determining the number of significant groups with respect to multiple widths (Table 3). A smaller W/L value indicates an elongated object (e.g., pasta fettuccine, carrot, celery [Figure 1a,d,e, respectively]; W/L ≤ 0.12), while an increased value suggests a more spherical object (e.g., mangoes, turnips, watermelons [Figure 1k,r,s,t, respectively]; W/L ≥ 0.66). Overall, for 0.06 ≤ W/L ≤ 0.12, the number of distinct mean groups was ≤2, while for increased W/L > 0.2, the number of distinct mean groups was ≥5 for most produce. Therefore, based on W/L, the multiple-width mean will be statistically different from the single width, and this difference diminishes with a reduction in W/L and vice versa.
The minimum number of statistically significant multiple widths and the next lower number representing a significant width are tabulated (Table 3) as "#SigWidths" in the form a ⇔ b, which illustrates the importance of multiple-width measurements for sample shapes that deviate from linear profiles. For example, bitter gourds (Figure 1b) have five letter groups (A to E), but multiple widths of 10 through 200 are not significantly different. The first three widths (1 through 5) are all significantly different; widths from 7 to 25 are not significantly different, but widths 7 and 50 are. Working down from the top, increasing the number of multiple widths beyond 50 does not produce a significant difference in mean widths; below this limit, a width number of seven is the largest significantly different number (a ⇔ b = 50 ⇔ 7 in Table 3). Depending on the shape (W/L), wide variations in the a (1 to 150) and b (1 to 20) values with respect to multiple widths were observed (excluding potatoes [Figure 1n] and sweet potatoes [Figure 1q], which are single-object sources).
Based on the results, although there are variations, it can generally be concluded that, from 50 multiple-width measurements (a) onwards, no clear significant differences were observed among the measurements (Table 3). A closer examination of the results (based on W/L ratios and combined mean groups) reveals that approximately 15 multiple-width measurements, on average, may be required for W/L > 0.2, and as few as five multiple-width measurements for W/L < 0.2. As previously observed, for a straight or inclined profile along the length, a single width measurement across the centroid was sufficient to represent the mean width; however, at least two measurements are necessary to define the profile.
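The rule of thumb above can be condensed into a small helper. The 0.2 threshold and the counts 5/15/50 are the values reported in this study; the function itself is an illustrative convenience, not part of the plugin:

```python
def recommended_width_count(aspect_ratio: float, conservative: bool = False) -> int:
    """Suggested number of multiple-width measurements based on W/L.

    conservative=True returns the general 50-width recommendation; otherwise
    ~15 widths for thick/round shapes (W/L > 0.2) and ~5 for slender shapes,
    per this study's findings (the behavior at exactly W/L = 0.2 is a choice).
    """
    if conservative:
        return 50
    return 15 if aspect_ratio > 0.2 else 5

# Examples from Table 2: snap melon (W/L = 0.62) vs. green beans (W/L = 0.08)
print(recommended_width_count(0.62))  # 15
print(recommended_width_count(0.08))  # 5
```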

3.5. Deviation with Single Dimensions

With 50 multiple-width measurements used as the reference, the deviations of 1, 5, and 15 width measurements in mean width determination were evaluated and plotted in Figure 7. As anticipated, the deviations decreased from 1 to 5 widths and dropped drastically at 15 widths relative to the 50-width reference. On average, these deviations were 7.2% ± 4.7%, 4.7% ± 2.6%, and 1.0% ± 0.6%, respectively. Elongated samples (low W/L ratios), such as pasta fettuccine, carrot, celery, long green eggplants, and green beans, exhibited deviations of <2%. Therefore, based on these results, the general recommendation is to use 50 width measurements for optimal profile representation and mean width determination. Alternatively, approximately 15 width measurements can be used for a satisfactory representation, with a mean width estimation deviation of approximately 1% from the 50-width reference. However, for new untested produce or products, preliminary measurements will reveal the optimum number of multiple-width measurements (a; Table 3) for the most effective representation.
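The deviation computation can be sketched as follows. This is an illustrative recreation of the comparison, assuming evenly spaced subsampling of a measured 50-width profile; the single-width case uses the mid-length station (near the centroid), and the dome-shaped test profile is synthetic:

```python
import numpy as np

def mean_width_deviation_pct(widths: np.ndarray, k: int) -> float:
    """Percent deviation of the mean of k subsampled widths from the mean
    of the full width profile (the 50-width reference here)."""
    if k == 1:
        idx = np.array([len(widths) // 2])  # single width at mid-length
    else:
        idx = np.round(np.linspace(0, len(widths) - 1, k)).astype(int)
    ref = widths.mean()
    return abs(widths[idx].mean() - ref) / ref * 100.0

# Synthetic dome-shaped profile, as seen for spherical/prolate produce
profile = np.sin(np.linspace(0.1, np.pi - 0.1, 50))
```

For such dome profiles, a single mid-length width overestimates the mean, and the deviation shrinks as more stations are averaged, consistent with the trend in Figure 7; linear profiles (elongated samples) show little deviation even for a single width.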

3.6. Computational Speed

On an Apple laptop (MacBook Pro, Cupertino, CA; Mac OS X; Intel Core 2 Duo; processor speed of 2.8 GHz; 8 GB RAM), the CPU time taken to analyze all 1–200 multiple-width measurement runs (Table 3), for all 3–48 objects per image (Table 2), averaged 734 ± 365 ms for a single run. This translates to an analysis speed of 15 ± 10 objects s−1, which is quite fast. The analysis speed can be further enhanced by optimizing the computer configuration or using newer computers with better resources.
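The reported throughput follows from simple arithmetic on the per-image CPU time and object count (the object count used below is illustrative):

```python
def objects_per_second(n_objects: int, cpu_time_ms: float) -> float:
    """Analysis throughput for one image from its CPU time and object count."""
    return n_objects / (cpu_time_ms / 1000.0)

print(round(objects_per_second(11, 734)))  # ~15 objects/s at the mean CPU time
```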

3.7. Limitations and Recommendations for Future Work

Some of the limitations of the developed plugin include the following: (1) Agricultural produce with pedicels laid in random orientations is not measured correctly with fixed top and bottom end-cap values. (2) Touching and overlapping objects interfere with the measurements. (3) Shadows, although discernible in color, are included in the grayscale image and become part of the object. Given the advancements in computer vision and algorithm development, almost all of these limitations can be addressed with elaborate coding and further research. Advanced algorithms can be developed to identify features (pedicel vs. the economic component of the produce), and segmentation techniques for resolving touching objects could address the above-mentioned limitations. Furthermore, improving the physical layout of objects into a singulated arrangement (specialized spreaders and conveyors) and methods as simple as employing better lighting conditions that prevent shadows could enhance performance.
Based on the experience gained, future research should explore the development of advanced algorithms specifically designed to address the identified limitations. Subsequent iterations of the software should seamlessly integrate the diverse preprocessing stages into the plugin's workflow, enabling the software to directly use a color image as its primary input. Curvilinear objects can be effectively managed using "skeletonize" and "curve straightening" operations, enabling the measurement of correct lengths and multiple widths; for example, an advanced active contour algorithm [9] could be employed as a solution. The segmentation of touching objects can be handled by ImageJ's standard "Watershed" command or by other sophisticated techniques, such as Fourier analysis and ellipse fitting [34,35]. Developing a hardware system based on the plugin's algorithm to efficiently grade and sort produce based on multiple or mean widths requires the integration of the necessary hardware components, which are readily available in industrial systems.

4. Conclusions

A computer vision ImageJ plugin was successfully developed for the measurement of the length and multiple widths of agricultural produce; it achieved an accuracy of over 99.6% and demonstrated significant variations in the widths along lengths. The statistically significant minimum number of multiple widths required for accurate measurement and representation varies widely, ranging from 1 to 150. On average, employing 50 multiple-width measurements provides a comprehensive representation of the width profile. However, a reduced number of 15 multiple-width measurements can also yield satisfactory mean width predictions, with a deviation of approximately 1% from the 50 multiple-width measurements.
A single width or a few widths are sufficient for objects with straight profiles (e.g., carrot, celery, pasta fettuccine), but a greater number of multiple-width measurements (15 to 150) is required for spherical objects or those with curved profiles (e.g., mango, potato, watermelon) to effectively represent their varying width profiles and estimate the mean width. The aspect ratio serves as an effective indicator for determining the minimum number of significant multiple widths. For objects with thick or wide shapes (W/L ≥ 0.2), over 15 multiple-width measurements were found sufficient, and for slender objects (W/L < 0.2), 5 or fewer multiple-width measurements were found sufficient. The developed plugin exhibits fast image analysis capabilities, taking an average CPU time of 734 ± 365 ms per image, or 15 ± 10 objects s−1. Based on the findings of this research, the identified future research directions include addressing challenges associated with pedicel orientation, object contact, and shadow formation through advanced programming or alternative techniques.

Supplementary Materials

The following supporting information can be downloaded from Mendeley Data at: https://doi.org/10.17632/jprxshtr4t.1 (accessed on 25 June 2025). This Mendeley dataset contains the original image data used in the manuscript for measurements and analysis. This dataset is cited in the manuscript as [26]. The data are presented in the form of a presentation with the file name "Mutliple-Width-Measurement-SuppMatl-Igathi-etal.pptx" for the 20 items studied (Figure 1; 20 pages). For every image, the total number of samples, mean length ± STD, and multiple-width mean ± STD are also included. The image data present the following: (i) original images of agricultural produce (food product, vegetables, and fruits), (ii) labeled binary images showing the measured multiple widths of the original objects, and (iii) a sample plot of multiple widths showing the profile of the object.

Author Contributions

Conceptualization, C.I. and R.V.; methodology, C.I., R.V., G.B. and S.R.; formal analysis, C.I.; investigation, C.I. and R.V.; resources, C.I., R.V., G.B. and S.R.; data curation, C.I.; writing—original draft preparation, C.I., R.V., G.B. and S.R.; writing—review and editing, C.I., R.V., G.B. and S.R.; visualization, C.I.; supervision, C.I.; project administration, C.I.; funding acquisition, C.I. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the USDA-ARS Northern Great Plains Research Laboratory (NGPRL), Mandan, ND, fund: FAR0036174; and in part by the USDA National Institute of Food and Agriculture, Hatch Project: ND01493.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The image data used in the manuscript is available from Mendeley Data at https://doi.org/10.17632/jprxshtr4t.1 (accessed on 25 June 2025). This dataset is cited in the manuscript as [26]. Details are presented in the “Supplementary Materials” section.

Acknowledgments

The support extended by NGPRL, Mandan, ND is gratefully acknowledged.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Zhang, B.; Huang, W.; Li, J.; Zhao, C.; Fan, S.; Wu, J.; Liu, C. Principles, developments and applications of computer vision for external quality inspection of fruits and vegetables: A review. Food Res. Int. 2014, 62, 326–343.
2. Igathinathane, C.; Pordesimo, L.; Batchelor, W. Major orthogonal dimensions measurement of food grains by machine vision using ImageJ. Food Res. Int. 2009, 42, 76–84.
3. Gunasekaran, S. Computer vision technology for food quality assurance. Trends Food Sci. Technol. 1996, 7, 245–256.
4. Brosnan, T.; Sun, D.W. Inspection and grading of agricultural and food products by computer vision systems—A review. Comput. Electron. Agric. 2002, 36, 193–213.
5. Lorén, N.; Hamberg, L.; Hermansson, A.M. Measuring shapes for application in complex food structures. Food Hydrocoll. 2006, 20, 712–722.
6. Huynh, T.T.; TonThat, L.; Dao, S.V. A vision-based method to estimate volume and mass of fruit/vegetable: Case study of sweet potato. Int. J. Food Prop. 2022, 25, 717–732.
7. Jarimopas, B.; Jaisin, N. An experimental machine vision system for sorting sweet tamarind. J. Food Eng. 2008, 89, 291–297.
8. Ercisli, S.; Sayinci, B.; Kara, M.; Yildiz, C.; Ozturk, I. Determination of size and shape features of walnut (Juglans regia L.) cultivars using image processing. Sci. Hortic. 2012, 133, 47–55.
9. Clement, J.; Novas, N.; Manzano-Agugliaro, F.; Gazquez, J.A. Active contour computer algorithm for the classification of cucumbers. Comput. Electron. Agric. 2013, 92, 75–81.
10. Moreda, G.; Ortiz-Cañavate, J.; García-Ramos, F.J.; Ruiz-Altisent, M. Non-destructive technologies for fruit and vegetable size determination—A review. J. Food Eng. 2009, 92, 119–136.
11. Brosnan, T.; Sun, D.W. Improving quality inspection of food products by computer vision—A review. J. Food Eng. 2004, 61, 3–16.
12. Du, C.J.; Sun, D.W. Learning techniques used in computer vision for food quality evaluation: A review. J. Food Eng. 2006, 72, 39–55.
13. Costa, C.; Antonucci, F.; Pallottino, F.; Aguzzi, J.; Sun, D.W.; Menesatti, P. Shape analysis of agricultural products: A review of recent research advances and potential application to computer vision. Food Bioprocess Technol. 2011, 4, 673–692.
14. Du, C.J.; Sun, D.W. Recent developments in the applications of image processing techniques for food quality evaluation. Trends Food Sci. Technol. 2004, 15, 230–249.
15. Neupane, C.; Pereira, M.; Koirala, A.; Walsh, K.B. Fruit sizing in orchard: A review from caliper to machine vision with deep learning. Sensors 2023, 23, 3868.
16. Sabliov, C.; Boldor, D.; Keener, K.; Farkas, B. Image processing method to determine surface area and volume of axi-symmetric agricultural products. Int. J. Food Prop. 2002, 5, 641–653.
17. Khojastehnazhand, M.; Omid, M.; Tabatabaeefar, A. Determination of orange volume and surface area using image processing technique. Int. Agrophysics 2009, 23, 237–242.
18. Vivek Venkatesh, G.; Iqbal, S.M.; Gopal, A.; Ganesan, D. Estimation of volume and mass of axi-symmetric fruits using image processing technique. Int. J. Food Prop. 2015, 18, 608–626.
19. Blasco, J.; Munera, S.; Aleixos, N.; Cubero, S.; Molto, E. Machine Vision-Based Measurement Systems for Fruit and Vegetable Quality Control in Postharvest. In Measurement, Modeling and Automation in Advanced Food Processing; Hitzmann, B., Ed.; Springer International Publishing: Cham, Switzerland, 2017; pp. 71–91.
20. Zheng, B.; Sun, G.; Meng, Z.; Nan, R. Vegetable size measurement based on stereo camera and keypoints detection. Sensors 2022, 22, 1617.
21. Amin, J.; Anjum, M.A.; Sharif, M.; Kadry, S. Fruits and Vegetable Diseases Recognition Using Convolutional Neural Networks. Comput. Mater. Contin. 2022, 70, 619–635.
22. Xiang, L.; Wang, D. A review of three-dimensional vision techniques in food and agriculture applications. Smart Agric. Technol. 2023, 5, 100259.
23. Le Louëdec, J.; Cielniak, G. 3D shape sensing and deep learning-based segmentation of strawberries. Comput. Electron. Agric. 2021, 190, 106374.
24. Guevara, C.; Rostan, J.; Rodriguez, J.; Gonzalez, S.; Sedano, J. Computer-Vision-Based Industrial Algorithm for Detecting Fruit and Vegetable Dimensions and Positioning. In Proceedings of the International Conference on Soft Computing Models in Industrial and Environmental Applications, Salamanca, Spain, 9–11 October 2024; Springer: Cham, Switzerland, 2024; pp. 93–104.
25. Chen, Z.; Zhou, R.; Jiang, F.; Zhai, Y.; Wu, Z.; Mohammad, S.; Li, Y.; Wu, Z. Development of Interactive Multiple Models for Individual Fruit Mass Estimation of Tomatoes with Diverse Shapes. 2024. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5023176 (accessed on 25 June 2025).
26. Igathinathane, C.; Visvanathan, R. Machine Vision-Based Multiple Width Measurements for Agricultural Produce—Original and Multiple Width Measurements Images. Mendeley Data, V1. 2025. Available online: https://data.mendeley.com/datasets/jprxshtr4t/1 (accessed on 25 June 2025).
27. Rasband, W.S. ImageJ; U.S. National Institutes of Health: Bethesda, MD, USA, 2011. Available online: https://imagej.net/ij/index.html (accessed on 25 June 2025).
28. Rasband, W.S. ImageJ: Image processing and analysis in Java. Astrophys. Source Code Libr. 2012, 1, 06013.
29. Schneider, C.A.; Rasband, W.S.; Eliceiri, K.W. NIH Image to ImageJ: 25 years of image analysis. Nat. Methods 2012, 9, 671–675.
30. Bailer, W. Writing ImageJ Plugins—A Tutorial, Version 1.71; Upper Austria University of Applied Sciences: Wels, Austria, 2006. Available online: https://imagingbook.github.io/imagingbook-doc/imagej-tutorial/tutorial171.pdf (accessed on 25 June 2025).
31. Crawford, E.C.; Mortensen, J.K. An ImageJ plugin for the rapid morphological characterization of separated particles and an initial application to placer gold analysis. Comput. Geosci. 2009, 35, 347–359.
32. Fischer, M.J.; Uchida, S.; Messlinger, K. Measurement of meningeal blood vessel diameter in vivo with a plug-in for ImageJ. Microvasc. Res. 2010, 80, 258–266.
33. Saxton, A. A macro for converting mean separation output to letter groupings in Proc Mixed. In Proceedings of the 23rd SAS Users Group International; SAS Institute: Cary, NC, USA, 1998; pp. 1243–1246.
34. Mebatsion, H.; Paliwal, J. A Fourier analysis based algorithm to separate touching kernels in digital images. Biosyst. Eng. 2011, 108, 66–74.
35. Zhang, G.; Jayas, D.S.; White, N.D. Separation of touching grain kernels in an image by ellipse fitting algorithm. Biosyst. Eng. 2005, 92, 135–142.
Figure 1. A montage of the 20 selected items, comprising axisymmetrical agricultural produce and pasta fettuccine, used in the study. The original images that were used in the measurements are available at Mendeley Data (https://doi.org/10.17632/jprxshtr4t.1; accessed on 25 June 2025) along with selected results [26].
Figure 2. Process flow diagram of multiple and mean dimensions of agricultural produce.
Figure 3. Images captured using a digital camera with good contrast background and reference frame (left: snap melon on thermocol board—1.0 m × 0.5 m; and right: celery on letter paper (US) with a drawn rectangle of 242 mm × 178 mm).
Figure 4. Fiji environment and the developed plugin with its features. (A) Fiji ImageJ image processing software panel, where several standard tools and the status bar are shown; (B) standard "Analyze Particles" dialog box, which scans the various objects in the image and offers filtering options, such as fine-particle removal, mask creation, the exclusion of objects on edges, the inclusion of holes in objects, and more; (C) developed plugin input panel, where calibration through DPI or reference frame dimensions, the end-cap chopping percentage, the number of multiple-width measurements, and different output options can be input; (D) measured multiple widths (50 in number) from both edges of the sample, orthogonal to the length, from the bottom to the top; (E) multiple-width measurement drawing on a bottle gourd sample, with (1) centroid, (2) direction of length, (3) top chopped end cap, (4) bottom chopped end cap, (5) segment of measurement where multiple widths are measured, and (6) pedicel of the sample, which was excluded using end-cap values; (F) standard ImageJ outputs of the object; (G) section of the textual output of individual widths and their locations along the length; and (H) consolidated numerical result of mean dimensions with standard deviations.
Figure 5. Multiple-width-measurement validation using drawn rectangular blocks (1–4) and cut paper strips (5–10) of known dimensions (DPI = 256; 20 measurements; 10% end caps; and labels indicate the object numbers allocated by the plugin scanned from the top to the bottom).
Figure 6. Illustration of multiple-width measurements using papaya sample images. (A) Original color image of the sample; (B) binary mask of the original image used as the plugin input where length and multiple widths were drawn, and the numbers (1–5) indicate the label numbers assigned to the objects by the plugin scanned from the top to the bottom; and (C) multiple-width plot for the visualization of the measured widths and their profile.
Figure 7. Deviation of selected single and multiple widths from 50 multiple-width measurements.
Table 1. Validation results using drawn rectangular blocks, cut paper strips, and whole letter paper (US) of known dimensions (number of width measurements = 20).

| Object | DPI | Actual Length (mm) | Actual Width (mm) | Plugin Length (mm) | Width Min (mm) | Width Max (mm) | Width Mean (mm) | Width STD (mm) | Length Accuracy (%) # | Width Accuracy (%) # |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 * | 256 | 500 | 50 | 500.00 | 50.010 | 50.010 | 50.01 | 0.00 | 100.00 | 99.98 |
| 2 * | 256 | 500 | 100 | 500.00 | 100.005 | 100.005 | 100 | 0.00 | 100.00 | 100.00 |
| 3 * | 256 | 400 | 150 | 400.00 | 150.003 | 150.003 | 150 | 0.00 | 100.00 | 100.00 |
| 4 * | 256 | 400 | 200 | 400.00 | 200.003 | 200.003 | 200 | 0.00 | 100.00 | 100.00 |
| 5 † | 256 | 100 | 20 | 102.50 | 19.74 | 20.04 | 19.85 | 0.09 | 97.50 | 99.25 |
| 6 † | 256 | 100 | 10 | 104.25 | 10.12 | 10.42 | 10.33 | 0.10 | 95.75 | 96.70 |
| 7 † | 256 | 100 | 5 | 102.36 | 4.96 | 5.46 | 5.21 | 0.11 | 97.64 | 95.80 |
| 8 † | 256 | 100 | 20 | 103.23 | 20.00 | 20.57 | 20.33 | 0.17 | 96.77 | 98.35 |
| 9 † | 256 | 100 | 10 | 102.79 | 10.31 | 10.59 | 10.43 | 0.10 | 97.21 | 95.70 |
| 10 † | 256 | 100 | 5 | 102.03 | 4.98 | 5.38 | 5.20 | 0.10 | 97.97 | 96.00 |
| 11 ‡ | 165 | 279.4 | 108.9 | 278.66 | 107.61 | 109.46 | 108.50 | 0.56 | 99.74 | 99.63 |
| 12 ‡ | 165 | 279.4 | 107.4 | 278.94 | 106.53 | 107.76 | 107.18 | 0.41 | 99.84 | 99.80 |
| 13 § | 165 | 279.4 | 215.9 | 278.84 | 214.17 | 216.33 | 215.19 | 0.67 | 99.80 | 99.67 |
| 14 ¶ | 265 | 279.4 | 215.9 | 279.78 | 215.56 | 216.14 | 215.74 | 0.16 | 99.86 | 99.93 |

Min, max, and STD in the width columns represent the minimum, maximum, and standard deviation of the width measurements, respectively. * Blocks drawn using Fiji tools to the exact pixel dimensions shown as actual lengths and widths (Figure 5); measurement resolution = 0.099 mm pixel−1. † Thin paper strips cut manually after drawing them (Figure 5); measurement resolution = 0.099 mm pixel−1. ‡ Two wider strips made by cutting a letter paper (US; 215.9 mm × 279.4 mm) along the length (figure furnished by Dr. C. Igathinathane, the first author); measurement resolution = 0.154 mm pixel−1. § Whole letter paper (US) included in the image of the two wider strips (figure furnished by Dr. C. Igathinathane, the first author); measurement resolution = 0.154 mm pixel−1. ¶ Whole letter paper (US) only, with slightly increased DPI (figure furnished by Dr. C. Igathinathane, the first author); measurement resolution = 0.096 mm pixel−1. # Accuracy (%) = [1 − |Plugin measured − Actual|/Actual] × 100.
Table 2. Results obtained from the plugin showing the measured dimensions of the agricultural produce (number of width measurements = 50).
Table 2. Results obtained from the plugin showing the measured dimensions of the agricultural produce (number of width measurements = 50).
| N | Image File * | Scientific Name | # Objects | Length Min (mm) | Length Max (mm) | Length Mean ± STD (mm) | Width Min ± STD (mm) | Width Max ± STD (mm) | Width Mean ± STD (mm) | W/L † |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | PastaFettuccine_252DPI | — | 48 | 69.95 | 99.04 | 85.10 ± 6.26 | 4.45 ± 0.03 | 4.98 ± 0.08 | 4.71 ± 0.05 | 0.06 |
| 2 | BitterGourds_109DPI | Momordica charantia | 10 | 159.64 | 290.62 | 237.24 ± 41.97 | 45.49 ± 3.87 | 58.06 ± 7.55 | 50.42 ± 5.58 | 0.22 |
| 3 | BottleGourds_109DPI | Lagenaria siceraria | 8 | 265.48 | 449.76 | 349.17 ± 61.44 | 60.41 ± 1.20 | 78.32 ± 12.11 | 67.47 ± 5.07 | 0.20 |
| 4 | Carrots_169DPI | Daucus carota | 9 | 179.76 | 210.27 | 196.98 ± 8.55 | 18.99 ± 3.51 | 28.80 ± 7.17 | 22.56 ± 5.92 | 0.11 |
| 5 | CeleryHearts_236DPI | Apium graveolens var. dulce | 5 | 206.97 | 263.76 | 247.11 ± 21.09 | 26.27 ± 1.49 | 41.21 ± 4.73 | 30.80 ± 2.89 | 0.12 |
| 6 | Cucumbers_109DPI | Cucumis sativus | 8 | 130.43 | 209.34 | 172.43 ± 23.09 | 40.05 ± 3.06 | 48.27 ± 4.99 | 43.79 ± 4.07 | 0.28 |
| 7 | EggplantLongGreen_109DPI | Solanum melongena | 11 | 96.94 | 198.4 | 151.30 ± 27.90 | 25.60 ± 2.65 | 35.26 ± 6.77 | 30.12 ± 4.54 | 0.20 |
| 8 | EggplantShortPink_109DPI | Solanum melongena | 27 | 62.89 | 96.07 | 75.25 ± 9.33 | 30.86 ± 2.75 | 45.42 ± 9.66 | 37.72 ± 4.29 | 0.55 |
| 9 | GreenBeans_244DPI | Phaseolus vulgaris | 7 | 79.36 | 123.09 | 104.79 ± 14.38 | 7.65 ± 0.15 | 10.33 ± 0.77 | 8.79 ± 0.34 | 0.08 |
| 10 | IvyGourds_109DPI | Coccinia indica | 29 | 39.89 | 74.6 | 60.79 ± 7.90 | 16.21 ± 2.08 | 25.15 ± 3.36 | 21.66 ± 2.72 | 0.39 |
| 11 | Mangos_109DPI | Mangifera indica | 7 | 114.7 | 132.57 | 121.79 ± 5.73 | 86.52 ± 11.10 | 101.15 ± 14.03 | 94.79 ± 12.35 | 0.88 |
| 12 | Papayas_109DPI | Carica papaya | 5 | 213.69 | 238.91 | 228.30 ± 9.74 | 98.35 ± 12.70 | 106.75 ± 17.27 | 103.21 ± 15.01 | 0.50 |
| 13 | Pineapple_109DPI | Ananas comosus | 6 | 234.67 | 276.97 | 260.26 ± 15.83 | 100.16 ± 4.24 | 121.59 ± 6.66 | 113.54 ± 5.05 | 0.45 |
| 14 | Potato_193DPI | Solanum tuberosum | 3 ‡ | 177.38 | 177.51 | 177.42 ± 0.06 | 64.80 ± 6.65 | 64.90 ± 6.92 | 64.86 ± 6.74 | 0.40 |
| 15 | SnakeGourdsShort_109DPI | Trichosanthes cucumerina | 10 | 169.71 | 271.48 | 202.59 ± 31.52 | 50.14 ± 1.67 | 65.14 ± 12.28 | 57.18 ± 7.94 | 0.31 |
| 16 | SnapMelon_109DPI | Cucumis melo var. momordica | 5 | 166.64 | 200.2 | 177.82 ± 11.95 | 90.09 ± 10.83 | 103.92 ± 16.47 | 98.87 ± 13.02 | 0.62 |
| 17 | SweetPotato_246DPI | Ipomoea batatas | 3 ‡ | 175.32 | 175.63 | 175.53 ± 0.15 | 60.19 ± 5.73 | 60.34 ± 6.09 | 60.28 ± 5.86 | 0.36 |
| 18 | Turnips_109DPI | Brassica rapa var. rapa | 14 | 95.5 | 150.35 | 117.67 ± 15.07 | 48.37 ± 6.98 | 99.14 ± 13.40 | 68.26 ± 10.28 | 0.66 |
| 19 | WaterMelonDarkGreen_109DPI | Citrullus lanatus | 5 | 201.52 | 232.1 | 214.87 ± 11.63 | 122.87 ± 14.23 | 143.46 ± 18.50 | 134.43 ± 16.07 | 0.70 |
| 20 | WaterMelonLightGreen_109DPI | Citrullus lanatus | 4 | 271.97 | 297.61 | 284.93 ± 9.12 | 171.94 ± 18.65 | 196.43 ± 24.00 | 187.28 ± 21.71 | 0.73 |
N—number sequence. STD—standard deviation obtained from the different objects of the sample in the same image representing a produce. * Image file name depicting the common name and the dots per inch (DPI) information of the captured image. † W/L: width/length ratio (dimensionless), also known as the aspect ratio; the width and length are the means of single orthogonal measurements at the centroid of each sample. ‡ The three objects were derived from a single original image by flipping it vertically and horizontally and combining the copies digitally.
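The per-sample statistics reported in Table 2 are plain aggregates over the objects detected in one image. A minimal sketch of that aggregation, using hypothetical per-object dimensions (illustrative values only, not data from the table; the paper's W/L uses single orthogonal centroid measurements, so the ratio of mean dimensions below is an approximation):

```python
import statistics

# Hypothetical per-object mean lengths and widths (mm) for one image.
lengths = [120.0, 130.0, 125.0]
widths = [40.0, 44.0, 42.0]

summary = {
    "L_min": min(lengths),
    "L_max": max(lengths),
    "L_mean": statistics.mean(lengths),
    "L_std": statistics.stdev(lengths),  # sample standard deviation across objects
    "W_mean": statistics.mean(widths),
    "W_std": statistics.stdev(widths),
    # Approximate aspect ratio from the mean dimensions (dimensionless).
    "W_over_L": statistics.mean(widths) / statistics.mean(lengths),
}
print(summary)
```

For these three hypothetical objects, the sketch yields a mean length of 125.0 ± 5.0 mm and an aspect ratio of about 0.34.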
Table 3. Results obtained from the plugin, showing the plugin’s measured agricultural produce dimensions (values presented are in mm, letters represent mean separation groups, and multiple-width measurements = 50).
| # Widths † | Pasta fettuccine | Bitter gourd | Bottle gourd | Carrot | Celery | Cucumber | Eggplant long green |
|---|---|---|---|---|---|---|---|
| 1 | 4.713 ± 0.00 A | 53.0 ± 0.05 A | 69.3 ± 0.10 C | 22.1 ± 0.02 B | 30.0 ± 0.03 A | 47.8 ± 0.02 B | 30.0 ± 0.04 B |
| 3 | 4.705 ± 0.00 A | 45.7 ± 0.04 B | 62.4 ± 0.10 A | 22.1 ± 0.02 B | 30.8 ± 0.03 A | 38.9 ± 0.02 D | 28.5 ± 0.03 A |
| 5 | 4.709 ± 0.00 A | 47.9 ± 0.05 E | 64.4 ± 0.10 AB | 22.2 ± 0.02 AB | 30.6 ± 0.03 A | 41.4 ± 0.02 E | 29.3 ± 0.04 AB |
| 7 | 4.706 ± 0.00 A | 48.6 ± 0.05 DE | 65.2 ± 0.10 AB | 22.3 ± 0.02 AB | 30.5 ± 0.03 A | 42.2 ± 0.02 C | 29.5 ± 0.04 B |
| 10 | 4.711 ± 0.00 A | 49.4 ± 0.05 CDE | 65.7 ± 0.10 ABC | 22.7 ± 0.02 A | 30.6 ± 0.03 A | 42.4 ± 0.02 C | 29.2 ± 0.04 AB |
| 15 | 4.708 ± 0.00 A | 49.7 ± 0.05 CD | 65.9 ± 0.10 ABC | 22.4 ± 0.02 AB | 30.4 ± 0.03 A | 43.2 ± 0.02 A | 29.8 ± 0.04 B |
| 20 | 4.710 ± 0.00 A | 50.1 ± 0.05 CD | 66.8 ± 0.10 BC | 22.6 ± 0.02 A | 30.4 ± 0.03 A | 43.3 ± 0.02 A | 29.6 ± 0.04 B |
| 25 | 4.711 ± 0.00 A | 50.1 ± 0.05 CD | 66.2 ± 0.10 BC | 22.4 ± 0.02 AB | 30.4 ± 0.03 A | 43.5 ± 0.02 A | 29.9 ± 0.04 B |
| 50 | 4.710 ± 0.00 A | 50.4 ± 0.05 C | 66.5 ± 0.10 BC | 22.5 ± 0.02 AB | 30.4 ± 0.03 A | 43.7 ± 0.02 A | 29.9 ± 0.04 B |
| 75 | 4.709 ± 0.00 A | 50.4 ± 0.05 C | 66.4 ± 0.10 BC | 22.4 ± 0.02 AB | 30.4 ± 0.03 A | 43.8 ± 0.02 A | 30.0 ± 0.04 B |
| 100 | 4.709 ± 0.00 A | 50.4 ± 0.05 C | 66.5 ± 0.10 BC | 22.5 ± 0.02 AB | 30.4 ± 0.03 A | 43.8 ± 0.02 A | 29.9 ± 0.04 B |
| 150 | 4.709 ± 0.00 A | 50.4 ± 0.05 C | 66.5 ± 0.10 BC | 22.5 ± 0.02 AB | 30.4 ± 0.03 A | 43.8 ± 0.02 A | 29.9 ± 0.04 B |
| 200 | 4.709 ± 0.00 A | 50.4 ± 0.05 C | 66.5 ± 0.10 BC | 22.4 ± 0.02 AB | 30.3 ± 0.03 A | 43.9 ± 0.02 A | 30.0 ± 0.04 B |
| # SigWidths ‡ | 1 ⇔ 1 | 50 ⇔ 7 | 20 ⇔ 3 | 10 ⇔ 3 | 1 ⇔ 1 | 15 ⇔ 10 | 7 ⇔ 3 |
| # Widths † | Eggplant short pink | Green bean | Ivy gourd | Mango | Papaya | Pineapple | Potato |
|---|---|---|---|---|---|---|---|
| 1 | 41.2 ± 0.02 F | 8.8 ± 0.01 B | 23.8 ± 0.01 C | 107.6 ± 0.01 G | 114.9 ± 0.03 G | 118.0 ± 0.03 D | 70.3 ± 0.01 J |
| 3 | 33.5 ± 0.01 G | 8.5 ± 0.01 A | 18.2 ± 0.01 F | 81.7 ± 0.02 F | 87.3 ± 0.02 F | 107.7 ± 0.03 E | 56.4 ± 0.01 I |
| 5 | 35.6 ± 0.01 D | 8.6 ± 0.01 AB | 20.0 ± 0.01 D | 88.4 ± 0.02 D | 95.5 ± 0.02 E | 110.3 ± 0.03 F | 60.7 ± 0.01 G |
| 7 | 36.3 ± 0.01 C | 8.7 ± 0.01 AB | 20.6 ± 0.01 B | 90.7 ± 0.02 C | 98.4 ± 0.03 D | 111.7 ± 0.03 CF | 62.3 ± 0.01 H |
| 10 | 36.5 ± 0.01 C | 8.7 ± 0.01 AB | 20.7 ± 0.01 B | 91.5 ± 0.02 C | 99.2 ± 0.03 D | 111.9 ± 0.03 BC | 62.9 ± 0.01 F |
| 15 | 37.1 ± 0.01 A | 8.7 ± 0.01 AB | 21.2 ± 0.01 E | 93.3 ± 0.02 E | 101.6 ± 0.03 C | 112.8 ± 0.03 ABC | 64.0 ± 0.01 E |
| 20 | 37.2 ± 0.01 AE | 8.7 ± 0.01 AB | 21.3 ± 0.01 E | 93.7 ± 0.02 E | 102.0 ± 0.03 BC | 112.7 ± 0.03 ABC | 64.2 ± 0.01 D |
| 25 | 37.4 ± 0.01 AB | 8.7 ± 0.01 B | 21.4 ± 0.01 AE | 94.1 ± 0.02 AE | 102.5 ± 0.03 ABC | 113.0 ± 0.03 ABC | 64.5 ± 0.01 C |
| 50 | 37.5 ± 0.01 AB | 8.7 ± 0.01 B | 21.5 ± 0.01 A | 94.7 ± 0.02 AB | 103.1 ± 0.03 AB | 113.3 ± 0.03 AB | 64.9 ± 0.01 A |
| 75 | 37.6 ± 0.01 BE | 8.8 ± 0.01 B | 21.6 ± 0.01 A | 94.9 ± 0.02 AB | 103.4 ± 0.03 A | 113.4 ± 0.03 AB | 65.0 ± 0.01 AB |
| 100 | 37.6 ± 0.01 BE | 8.8 ± 0.01 B | 21.6 ± 0.01 A | 95.0 ± 0.02 AB | 103.4 ± 0.03 A | 113.4 ± 0.03 AB | 65.0 ± 0.01 AB |
| 150 | 37.6 ± 0.01 B | 8.8 ± 0.01 B | 21.6 ± 0.01 A | 95.1 ± 0.02 B | 103.5 ± 0.03 A | 113.4 ± 0.03 A | 65.1 ± 0.01 B |
| 200 | 37.6 ± 0.01 B | 8.8 ± 0.01 B | 21.6 ± 0.01 A | 95.1 ± 0.02 B | 103.6 ± 0.03 A | 113.5 ± 0.03 A | 65.1 ± 0.01 B |
| # SigWidths ‡ | 75 ⇔ 15 | 25 ⇔ 3 | 50 ⇔ 20 | 50 ⇔ 20 | 75 ⇔ 20 | 150 ⇔ 10 | 150 ⇔ 50 |
| # Widths † | Snake gourd | Snap melon | Sweet potato | Turnip | Watermelon dark green | Watermelon light green |
|---|---|---|---|---|---|---|
| 1 | 62.7 ± 0.06 D | 109.8 ± 0.06 F | 63.2 ± 0.01 A | 76.6 ± 0.05 F | 150.1 ± 0.03 A | 208.5 ± 0.04 F |
| 3 | 47.7 ± 0.05 C | 83.9 ± 0.05 E | 52.7 ± 0.01 H | 57.3 ± 0.05 E | 114.4 ± 0.03 G | 160.6 ± 0.03 C |
| 5 | 52.6 ± 0.05 A | 91.8 ± 0.06 D | 56.8 ± 0.01 G | 62.5 ± 0.05 C | 124.7 ± 0.03 F | 174.4 ± 0.03 A |
| 7 | 54.2 ± 0.05 AB | 94.3 ± 0.06 CD | 58.1 ± 0.01 F | 64.2 ± 0.05 BC | 128.3 ± 0.03 E | 179.2 ± 0.03 G |
| 10 | 54.7 ± 0.05 BE | 95.9 ± 0.06 BC | 58.4 ± 0.01 E | 64.9 ± 0.05 BD | 129.3 ± 0.03 E | 180.6 ± 0.03 G |
| 15 | 56.1 ± 0.05 EF | 97.2 ± 0.06 AB | 59.6 ± 0.01 D | 66.1 ± 0.05 AB | 132.2 ± 0.03 D | 184.4 ± 0.04 E |
| 20 | 56.2 ± 0.05 EF | 98.0 ± 0.06 AB | 59.7 ± 0.01 D | 66.3 ± 0.05 AD | 132.7 ± 0.03 CD | 185.1 ± 0.04 DE |
| 25 | 56.6 ± 0.05 EF | 98.1 ± 0.06 AB | 60.0 ± 0.01 C | 67.7 ± 0.04 A | 133.4 ± 0.03 BCD | 185.9 ± 0.04 BDE |
| 50 | 57.0 ± 0.05 F | 98.8 ± 0.06 A | 60.3 ± 0.01 B | 67.1 ± 0.05 A | 134.2 ± 0.03 BC | 187.0 ± 0.04 BD |
| 75 | 57.1 ± 0.05 F | 98.9 ± 0.06 A | 60.4 ± 0.01 B | 67.2 ± 0.05 A | 134.5 ± 0.03 B | 187.4 ± 0.04 BD |
| 100 | 57.1 ± 0.05 F | 99.1 ± 0.06 A | 60.4 ± 0.01 B | 67.3 ± 0.05 A | 134.7 ± 0.03 B | 187.6 ± 0.04 B |
| 150 | 57.2 ± 0.05 F | 99.2 ± 0.06 A | 60.5 ± 0.01 B | 67.4 ± 0.05 A | 134.8 ± 0.03 B | 187.8 ± 0.04 B |
| 200 | 57.2 ± 0.05 F | 99.2 ± 0.06 A | 60.5 ± 0.01 B | 67.4 ± 0.05 A | 134.9 ± 0.03 B | 187.9 ± 0.04 B |
| # SigWidths ‡ | 50 ⇔ 10 | 50 ⇔ 10 | 50 ⇔ 25 | 25 ⇔ 10 | 75 ⇔ 20 | 100 ⇔ 20 |
† Number of multiple-width measurements considered. Values shown are estimated mean ± standard error in mm; uppercase letter groupings sharing a common letter(s) indicate that the means are not significantly different (α = 0.05). ‡ Maximum number of significant multiple widths shown in the form a ⇔ b, where a represents the minimum number of multiple-width measurements at and above which the means are not significantly different (α = 0.05), and b is the next lower number of multiple-width measurements for which the means are significantly different.
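The abstract's observation that 15 width measurements deviate by only about 1.0% ± 0.6% from the 50-width reference can be reproduced directly from the mean widths in Table 3. This is a consistency check sketched from the tabulated 15- and 50-width means, not the paper's own statistical analysis:

```python
import statistics

# (15-width mean, 50-width mean) in mm for each produce, read from Table 3,
# in the order: pasta fettuccine, bitter gourd, bottle gourd, carrot, celery,
# cucumber, eggplants (long green, short pink), green bean, ivy gourd, mango,
# papaya, pineapple, potato, snake gourd, snap melon, sweet potato, turnip,
# and watermelons (dark green, light green).
pairs = [
    (4.708, 4.710), (49.7, 50.4), (65.9, 66.5), (22.4, 22.5), (30.4, 30.4),
    (43.2, 43.7), (29.8, 29.9), (37.1, 37.5), (8.7, 8.7), (21.2, 21.5),
    (93.3, 94.7), (101.6, 103.1), (112.8, 113.3), (64.0, 64.9), (56.1, 57.0),
    (97.2, 98.8), (59.6, 60.3), (66.1, 67.1), (132.2, 134.2), (184.4, 187.0),
]

# Percent deviation of the 15-width mean from the 50-width reference.
dev = [abs(w15 - w50) / w50 * 100 for w15, w50 in pairs]
print(f"{statistics.mean(dev):.1f}% ± {statistics.stdev(dev):.1f}%")  # 1.0% ± 0.6%
```

Across the 20 produce samples, the mean absolute deviation works out to roughly 1.0% with a standard deviation near 0.6%, matching the figure quoted in the abstract.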
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Igathinathane, C.; Visvanathan, R.; Bora, G.; Rahman, S. Computer Vision-Based Multiple-Width Measurements for Agricultural Produce. AgriEngineering 2025, 7, 204. https://doi.org/10.3390/agriengineering7070204