Article

A Pipeline for Mushroom Mass Estimation Based on Phenotypic Parameters: A Multiple Oudemansiella raphanipies Model

1 School of Computer and Information Engineering, Jiangxi Agricultural University, Nanchang 330045, China
2 School of Software, Jiangxi Agricultural University, Nanchang 330045, China
3 Jiangxi Key Laboratory for Excavation and Utilization of Agricultural Microorganisms, Jiangxi Agricultural University, Nanchang 330045, China
4 Ji’an Agricultural and Rural Development Service Center, Ji’an 343000, China
5 College of Computer and Data Science, Fuzhou University, Fuzhou 350000, China
6 Ministry of Education Key Laboratory of Crop Physiology, Ecology and Genetic Breeding, Jiangxi Agricultural University, Nanchang 330045, China
* Authors to whom correspondence should be addressed.
Agronomy 2026, 16(1), 124; https://doi.org/10.3390/agronomy16010124
Submission received: 20 November 2025 / Revised: 19 December 2025 / Accepted: 30 December 2025 / Published: 4 January 2026
(This article belongs to the Special Issue Novel Studies in High-Throughput Plant Phenomics)

Abstract

Estimating the mass of Oudemansiella raphanipies quickly and accurately is indispensable for optimizing post-harvest packaging processes. Traditional methods typically involve manual grading followed by weighing with a balance, which is inefficient and labor-intensive. To address the challenges encountered in actual production scenarios, in this work we developed a novel pipeline for estimating the mass of multiple Oudemansiella raphanipies. To achieve this goal, an enhanced deep learning (DL) algorithm for instance segmentation and a machine learning (ML) model for mass prediction were introduced. On the one hand, to segment multiple samples in the same image, a novel instance segmentation network named FinePoint-ORSeg was applied to obtain finer sample edges by integrating an edge attention module. On the other hand, for individual samples, a novel cap–stem segmentation approach was applied and 18 phenotypic parameters were obtained. Furthermore, principal component analysis (PCA) was utilized to reduce the redundancy among features. Combining the two aspects mentioned above, the mass was computed by an exponential Gaussian process regression (GPR) model with seven principal components. In terms of segmentation performance, our model outperforms the original Mask R-CNN; AP, AP50, AP75, and APs are improved by 2%, 0.7%, 1.9%, and 0.3%, respectively. Additionally, our model outperforms other networks such as YOLACT, SOLOv2, and Mask R-CNN with Swin. As for mass estimation, the results show that the average coefficient of variation (CV) of a single sample's mass across different orientations is 6.81%. Moreover, the average mean absolute percentage error (MAPE) for multiple samples is 8.53%. Overall, the experimental results indicate that the proposed method is time-saving, non-destructive, and accurate, and can provide a reference for research on post-harvest packaging technology for Oudemansiella raphanipies.

1. Introduction

Oudemansiella raphanipies is a precious edible mushroom that not only contains essential nutrients such as protein, fat, amino acids, and various vitamins but also contains certain amounts of polyphenols and polysaccharides [1]. Additionally, its unique flavor, rich nutritional value, and excellent antioxidant activity have made it increasingly popular among consumers. Thus, Oudemansiella raphanipies has gained popularity as a cultivated crop among mushroom farmers. However, the surge in Oudemansiella raphanipies production has intensified post-harvest packaging and processing difficulties, attributable to its high water content and absence of epidermal tissue. Post-harvest delays in packaging and storage accelerate quality loss in mushrooms, adversely impacting their marketability and farmers’ incomes. This is one of the key factors hindering the rapid development of the Oudemansiella raphanipies industry.
Typically, the packaging of post-harvest agricultural products is based on mass. To improve the weighing efficiency, electronic weighing equipment is used to replace traditional mechanical weighing devices [2]. However, during the operation process, workers may cause physical damage, which further reduces the value of samples. With the rapid development of computer vision technology, non-destructive mass estimation has emerged as the predominant methodology. One method is to determine the mass of the sample by measuring its volume. This approach relies on the principle that the density of a homogeneous material is constant, implying a direct proportional relationship between the volume and mass. Currently, researchers have achieved satisfactory accuracy in the volume measurement of many agricultural products, such as watermelon [3], apple [4], egg [5,6], orange [7], cucumber, and carrot [8]. Another method is to estimate the mass of samples via a regression method based on phenotypic parameters. Similarly, many researchers have successfully developed regression models to estimate the mass of agricultural products, such as apple [9], kiwi fruit [10], tomato [11], and potato [12]. In contrast to other agricultural products, Oudemansiella raphanipies is not symmetrical, and there are undetermined hollowed-out areas at the bottom when they are processed on the conveyor belt. Therefore, it is difficult to measure the volume of Oudemansiella raphanipies directly via image-based methods; thus, the approach of estimating the mass via the volume is not suitable. From another perspective, owing to the distinct morphological characteristics of Oudemansiella raphanipies compared to conventional agricultural commodities, regression models relying exclusively on basic phenotypic parameters (e.g., length and width) exhibit substantial errors in mass estimation. 
Furthermore, Oudemansiella raphanipies is commonly handled in batches rather than processed individually in practical production due to the small sizes of individual units. Consequently, it is necessary to design a new means to obtain the mass of a batch of Oudemansiella raphanipies in a rapid and accurate manner. For this purpose, the task can be divided into three subtasks, namely the segmentation of multiple samples, the extraction of multiple complex phenotypic parameters, and the selection of appropriate parameters and approaches for mass regression.
The separation of samples is an instance segmentation task in the field of computer vision. With the development of computer hardware and deep learning, various instance segmentation networks have rapidly emerged, such as Mask R-CNN [13], YOLACT [14], SOLOV2 [15], and Mask R-CNN with Swin [16]. These approaches have achieved impressive performance in the agriculture field. For instance, Yang et al. [17] and Li et al. [18] applied Mask R-CNN to segment soybean to calculate its phenotypic parameters, and the experimental results showed that the method is robust in segmenting targets, even under densely cluttered environments. Sapkota et al. [19] compared YOLOv8 and Mask R-CNN with immature green fruit and trunk and branch datasets. The experimental results showed that both of them effectively segmented apple tree canopy images from both the dormant and early growing seasons, and YOLOv8 performed slightly better in different environments. Moreover, to solve the problem whereby rice field detection technology cannot be adapted to the complexity of the real world, Chen et al. [20] considered the rice row detection problem as an instance segmentation problem and successfully implemented a two-pathway-based method. However, the application of these advanced methods in the field of edible mushrooms is scarce, especially for Oudemansiella raphanipies.
For the phenotypic parameters subtask, recent advances in computer vision and deep learning techniques have enabled the application of automatic phenotype extraction in agronomy. Yang et al. [17] applied principal component analysis (PCA) to correct the pod’s direction and then calculated the width and length of the pod. The results showed that the average measurement error for pod length was 1.33 mm, with an average relative error of 2.90%, while the pod width had an average measurement error of 1.19 mm and an average relative error of 13.32%. Beyond length and width, He et al. [21] extracted the pod’s area using the minimum circumscribed rectangle method combined with the template calibration method, and the results showed that the accuracy of the pod area was 97.1%. Additionally, Liu et al. [22] proposed a core-diameter Otsu method to judge the posture and then obtained the length, surface area, and volume, calculated from the long and short axes of the elliptical cross-section of the silique. The experimental results reported that the errors of all phenotypic parameters were less than 5.0%. To meet the phenotypic information requirements of Flammulina filiformis breeding, Zhu et al. [23] utilized image recognition technology and deep learning models to automatically calculate phenotypic parameters of Flammulina filiformis fruiting bodies, including cap shape, area, growth position, and color, as well as stem length, width, and color. Furthermore, some studies apply the extracted phenotypic characteristics to other tasks. Kumar et al. [24] extracted the centroid, main axis length, and perimeter of plant leaves and then combined them with multiple classifiers to achieve classification, reaching an accuracy of 95.42%. Moreover, Okinda et al. [25] fitted the egg to an ellipse using the direct least squares method and then extracted 2D features of the ellipse, such as area, eccentricity, and perimeter, to establish the relationship between these parameters and the product’s volume using thirteen regression models, achieving excellent results in volume estimation. However, although previous studies have obtained basic parameters of Oudemansiella raphanipies such as length and width [26,27], these characteristics cannot fully represent its morphology. Thus, to obtain an accurate mass result, more complex phenotypic parameters of Oudemansiella raphanipies need to be calculated.
For the mass estimation subtask, existing research has primarily focused on mathematical model-based methods and regression-based methods. Due to the irregularity of agricultural products, regression-based methods have become increasingly common. For instance, Nyalala et al. [28] directly fed five phenotypic parameters of tomato (area, perimeter, eccentricity, axis length, and radial distance) into a support vector machine (SVM) with different kernels and an artificial neural network (ANN) with different training algorithms to predict its mass. The experimental results showed that the Bayesian regularization ANN outperformed the other models, with a root mean square error (RMSE) of 1.468 g. Nevertheless, this approach neglected feature redundancy and the correlations between predictors and target variables. To make up for this insufficiency, Saikumar et al. [29] first computed the high linear correlations between mass and length, width, perimeter, and projection area (correlation coefficients of 0.96, 0.92, 0.92, and 0.95, respectively) and then built multiple univariate and multivariate regression models for elephant apple (Dillenia indica L.) mass prediction. The results showed that the multivariate rational model performed best, with an RMSE of 18.196 g. However, this simplistic variation in input combinations did not consider potential redundancy among the input parameters, which could lead to suboptimal model performance. Moreover, the performance of regression models varies depending on the target. Consequently, to accurately predict the mass of Oudemansiella raphanipies, it is essential to explore optimal model selection and feature optimization strategies.
In this study, we implemented a machine learning- and deep learning-based framework for estimating the mass of a batch of Oudemansiella raphanipies. The main contributions are as follows:
(1) A dataset including 1201 images was constructed, and a novel instance segmentation network for Oudemansiella raphanipes segmentation (FinePoint-ORSeg) was applied to obtain individual samples;
(2) A novel stem–cap-based segmentation method was proposed for extracting phenotypic parameters robustly, and 18 phenotypic parameters were extracted;
(3) We evaluated the performance of various mass regression methods and the best means of calculating the mass of multiple Oudemansiella raphanipies was determined.

2. Materials and Methods

2.1. Materials

2.1.1. Image Acquisition System

The image acquisition system consists of a camera (RealSense SR305, Intel, Santa Clara, CA, USA), a computer (Intel Core i5-7500, 16 GB memory), two LED lights and a camera holder, as shown in Figure 1. Since the proposed method will be applied to the post-harvest packaging assembly line of Oudemansiella raphanipies, the imaging background is set to green to match the conveyor belt. The distance between the camera and the Oudemansiella raphanipies was maintained at 31.5 cm to ensure a sufficient number of samples in the field of view. The RGB images are transferred to the computer via a USB 3.0 data cable and saved in the “PNG” format (640 × 480 pixels).

2.1.2. Dataset

The samples in our work were collected from the School of Biological Science and Engineering, Jiangxi Agricultural University in May 2024 and the edible mushroom factory in Zhangshu City, Jiangxi Province from January to November 2024. Four sub-datasets (Dataset 1, Dataset 2, Dataset 3, Dataset 4) were established as shown in Figure 2.
Dataset 1 contains 1201 labeled images, and each image contains several (1–10) Oudemansiella raphanipes. Dataset 2 is used to build the mass regression model; each sample was photographed seven times from different angles at the image center. After removing blurry images, a total of 1475 images remained. Additionally, we randomly split Dataset 1 and Dataset 2 into training and validation sets in an 8:2 ratio, respectively.
On the assembly line, it is easy to fix a camera above the conveyor belt, and clearer images can be obtained by controlling the conveyor speed. However, the orientation and position of the Oudemansiella raphanipes in the image cannot be ascertained. Therefore, we additionally set up two datasets, Dataset 3 and Dataset 4, to validate the model’s performance. To explore the robustness of the mass estimation model under different placement states of Oudemansiella raphanipes, Dataset 3 includes 24 Oudemansiella raphanipes and a total of 240 images, where each sample was randomly thrown 10 times. Dataset 4 contains 30 Oudemansiella raphanipes (10 large (L), 10 medium (M), and 10 small (S) samples), classified by experienced farmers; some examples are shown in Figure 3. For each grade, 10 Oudemansiella raphanipes are grouped together and then randomly thrown on the platform 10 times. This dataset can be used to evaluate the accuracy of the mass estimation method proposed in this paper. Additionally, a high-precision densitometer (MDJ-300S, LICHEN technology, Shanghai, China) was applied to acquire the ground truth of mass and volume for Datasets 2, 3, and 4.

2.2. Methods

The main objective of this study is to predict the overall mass of multiple Oudemansiella raphanipes. To achieve this goal, the pipeline is described as follows: First, an instance segmentation network is trained to segment individual samples, with the specific steps shown in the Oudemansiella raphanipes instance segmentation (Figure 4a). Then, based on the Oudemansiella raphanipes mask, a shape prior-based cap–stem segmentation algorithm is presented for building a mass regression model (Figure 4b). Finally, combining the results of the above step, the overall mass of Oudemansiella raphanipes in one image is obtained (Figure 4c). Since the volume of a solid is closely related to its mass, we also conducted a regression analysis on the volume.

2.2.1. FinePoint-ORSeg Model for Sample Segmentation

Mask R-CNN [13] is a classical deep learning model specifically designed for instance segmentation tasks. As a method based on convolutional neural networks (CNNs), Mask R-CNN relies on convolutional layers to extract features during the segmentation process. However, convolution operations have a local receptive field, which may lead to blurring effects when processing image details, especially at the edges, resulting in less smooth or precise boundaries. For general applications, such as animal or vehicle segmentation, this does not have an adverse effect on the results. Nevertheless, in our task, it may result in inaccurate phenotypic parameters. Therefore, we hope to obtain finer edge shapes by introducing the PointRend module [30]. Additionally, we consider that traditional nonlinear activation functions (such as ReLU, Sigmoid/Tanh) have significant drawbacks in information transmission, which severely damage high-frequency details of the image (such as sharp edges) and result in information loss or distortion, causing blurred results, inaccurate boundaries, and loss of details. Therefore, we have also embedded the Nonlinear Activation Free (NAF) block [31] to improve the model’s performance. The structures of the above two modules are shown in Figure 5.
(1)
PointRend module
PointRend (Point-based Rendering) is an image segmentation refinement module based on adaptive point sampling. Its core goal is to improve the segmentation accuracy of object boundaries and detailed regions through iterative refinement, thereby enabling more precise subsequent extraction of phenotypic parameters. The workflow of this module can be divided into three key stages, as shown in Figure 6a. First, the PointRend module uses a mixed strategy of uncertainty sampling and uniform sampling to select a set of candidate points for refinement from the initial coarse-grained segmentation results. For each candidate point, the module extracts corresponding feature vectors from different layers of the backbone network and aligns them to the target point’s spatial coordinates using bilinear interpolation. Then, feature concatenation merges low-level, high-resolution detail information with high-level semantic information, forming a point-level feature vector with rich contextual representation. The fused feature vector is then passed through a small multi-layer perceptron (MLP) to predict the refined segmentation result for each point. The MLP is designed for efficient computation, and its output is the binary classification probability (foreground/background) for each candidate point. Finally, the refined point predictions are interpolated back to the original resolution and merged with the coarse segmentation map to generate a high-precision segmentation mask.
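The point-selection and feature-sampling stages can be sketched in a few lines of numpy. This is an illustrative simplification (the function names and array shapes are ours, not from the original PointRend code), omitting the MLP head and the iterative coarse-to-fine loop:

```python
import numpy as np

def select_uncertain_points(coarse_prob, k_uncertain, k_uniform, rng=None):
    """Mixed sampling strategy: pick the k pixels of a coarse foreground-
    probability map closest to 0.5 (most uncertain), plus k uniform ones."""
    rng = np.random.default_rng(rng)
    h, w = coarse_prob.shape
    flat = coarse_prob.ravel()
    uncertainty = -np.abs(flat - 0.5)          # higher = closer to the boundary
    uncertain_idx = np.argsort(uncertainty)[-k_uncertain:]
    uniform_idx = rng.integers(0, h * w, size=k_uniform)
    idx = np.concatenate([uncertain_idx, uniform_idx])
    return np.stack(np.unravel_index(idx, (h, w)), axis=1)   # (k, 2) row/col

def bilinear_sample(feature_map, points):
    """Sample a (C, H, W) feature map at fractional (row, col) points,
    mimicking the per-point feature alignment step."""
    c, h, w = feature_map.shape
    r = np.clip(points[:, 0], 0, h - 1.001)
    col = np.clip(points[:, 1], 0, w - 1.001)
    r0, c0 = np.floor(r).astype(int), np.floor(col).astype(int)
    dr, dc = r - r0, col - c0
    f = feature_map
    top = f[:, r0, c0] * (1 - dc) + f[:, r0, c0 + 1] * dc
    bot = f[:, r0 + 1, c0] * (1 - dc) + f[:, r0 + 1, c0 + 1] * dc
    return (top * (1 - dr) + bot * dr).T      # (k, C) per-point feature vectors
```

Mixing boundary-focused and uniform points concentrates refinement on edges without starving the interior of training signal.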
(2)
Nonlinear Activation Free block
Traditional activation functions (such as ReLU) introduce information truncation in their nonlinear transformations (e.g., ReLU sets negative values to zero), which may lead to the loss of feature details. In contrast, segmentation tasks rely on pixel-level localization, and NAF may preserve more high-frequency details (such as object edges), improving the boundary accuracy of the mask. The structure of the Nonlinear Activation Free (NAF) block is shown in Figure 6b. Given a feature map X, the gated linear unit can be calculated as described in Equation (1):
Gate(X, f, g, σ) = f(X) ⊙ σ(g(X))    (1)
where f and g are linear transformations, σ is a nonlinear activation function, and ⊙ denotes element-wise multiplication. However, this operation may increase the intra-block complexity; thus, the authors revisit the activation function, as shown in Equation (2):
GELU(x) = x Φ(x)    (2)
where Φ refers to the cumulative distribution function of the standard normal distribution.
Furthermore, comparing Equations (1) and (2), it can be observed that, due to the inherent nonlinearity of the GLU form, the formula still holds even when σ is removed. Therefore, directly splitting the feature map into two parts along the channel dimension and multiplying them can reduce complexity, as shown in Equation (3):
SimpleGate(X, Y) = X ⊙ Y    (3)
where Y is a feature map of the same size as X.
Additionally, the authors compress spatial information into channel information and utilize a multi-layer perceptron to attend to the channels and capture global information, as demonstrated in Equation (4):
CA(X) = X ⊗ σ(W₂ max(0, W₁ pool(X)))    (4)
where W₁ and W₂ are fully connected layers with a ReLU adopted between them, and ⊗ is a channel-wise product operation.
Further, if we denote the channel-attention calculation by a function Ψ, Equation (4) can be rewritten as Equation (5):
CA(X) = X ⊗ Ψ(X)    (5)
Comparing Equations (5) and (2), we find that the two are similar. This inspires us to explore whether Equation (5) can be simplified in the same manner as Equation (2), aggregating global and channel information while retaining channel attention. Thus, the Simplified Channel Attention can be represented by Equation (6):
SCA(X) = X ⊗ W pool(X)    (6)
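As a rough numpy illustration of Equations (3) and (6) (assuming channel-first arrays; the learned weight W is passed in explicitly, and the 1×1 convolutions and normalization of the full NAF block are omitted):

```python
import numpy as np

def simple_gate(x):
    """SimpleGate (Eq. (3)): split the feature map in half along the channel
    axis and multiply the halves element-wise -- no activation function."""
    c = x.shape[0]
    x1, x2 = x[: c // 2], x[c // 2 :]
    return x1 * x2

def simplified_channel_attention(x, w):
    """SCA (Eq. (6)): global average pooling compresses spatial information
    into a channel descriptor, a single linear map W re-weights it, and the
    result scales each channel of X."""
    pooled = x.mean(axis=(1, 2))          # (C,) global average pooling
    weights = w @ pooled                  # (C,) channel attention, no activation
    return x * weights[:, None, None]
```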

2.2.2. Shape Prior-Based Phenotypic Extraction Algorithm

(1)
Cap–Stem segmentation algorithm
Since Oudemansiella raphanipies consists of the cap and stem, it is necessary to segment the stem and cap for the convenience of subsequent phenotypic parameter measurements. Based on the morphological characteristics of the Oudemansiella raphanipies, a novel cap–stem segmentation algorithm is presented in Figure 7. The procedure includes the following steps:
(a) Extract ordered contour points. A key step in obtaining the phenotypic parameters is to calculate the positions of the measurement points. Although the findContours function in the OpenCV library can return sorted contours, they are discontinuous; thus, a new contour-sorting approach needs to be developed. Establish an image coordinate system with the top-left corner as the origin and traverse the image G(x, y) from top to bottom, then left to right. If the pixel value G(x, y) at the current position is 255 and at least one pixel in its 8-connected neighborhood has a value of 0, the current pixel is considered a contour point; otherwise, it is an interior point, as shown in Equation (7).
C = {(x, y) | G(x, y) = 255 and ∃(x′, y′) ∈ [x−1, x+1] × [y−1, y+1] : G(x′, y′) = 0}    (7)
With the contour point set C, randomly select a contour point p ∈ C as the starting point (x₁, y₁) of the ordered contour C′. As shown in Equation (8), calculate the Euclidean distance between (x₁, y₁) and every point Cⱼ that is in C but not yet in C′, and add the point with the shortest distance to C′ as the next point (x₂, y₂). Then calculate the Euclidean distance between (x₂, y₂) and the remaining points Cⱼ in C but not in C′, again adding the nearest point to C′. Repeat this process until all contour points have been traversed, yielding the complete ordered contour C′.
C′ = {(x₁, y₁), (x₂, y₂), …, (xₙ, yₙ)}, where (xᵢ₊₁, yᵢ₊₁) = argmin over Cⱼ ∈ C ∖ C′ of Euclidean((xᵢ, yᵢ), Cⱼ)    (8)
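Step (a) can be sketched as follows — a naive O(n²) numpy illustration (function names are ours); a production version would vectorize the distance computations:

```python
import numpy as np

def contour_points(mask):
    """Eq. (7): a foreground pixel (255) is a contour point when at least
    one of its 8-connected neighbours is background (0)."""
    h, w = mask.shape
    pts = []
    for x in range(h):
        for y in range(w):
            if mask[x, y] != 255:
                continue
            nb = mask[max(x - 1, 0):x + 2, max(y - 1, 0):y + 2]
            if (nb == 0).any():
                pts.append((x, y))
    return pts

def order_contour(points):
    """Eq. (8): greedy nearest-neighbour ordering of unordered contour pixels;
    start from an arbitrary point and repeatedly append the closest unvisited one."""
    pts = list(map(tuple, points))
    ordered = [pts.pop(0)]               # arbitrary starting point
    while pts:
        cur = np.array(ordered[-1])
        d = [np.hypot(*(np.array(p) - cur)) for p in pts]
        ordered.append(pts.pop(int(np.argmin(d))))
    return ordered
```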
(b) Extract the segmentation points. After obtaining the ordered contour points C′, calculate the vector An1ᵢ (Equation (9)) from the current point to the point m positions before it, and the vector An2ᵢ (Equation (10)) from the current point to the point m positions after it. Then compute the angle between An1ᵢ and An2ᵢ. If the angle is smaller than the threshold ε, the current point is considered a segmentation point (Equation (11)). Based on multiple experiments and the morphological structure of Oudemansiella raphanipies, m was set to 7 and ε to 135 degrees.
An1ᵢ = (xᵢ − xᵢ₋ₘ, yᵢ − yᵢ₋ₘ), i ∈ [0, n]    (9)
An2ᵢ = (xᵢ − xᵢ₊ₘ, yᵢ − yᵢ₊ₘ), i ∈ [0, n]    (10)
S = {(xᵢ, yᵢ) | arccos((An1ᵢ · An2ᵢ) / (‖An1ᵢ‖ ‖An2ᵢ‖)) < ε, i ∈ [0, n]}    (11)
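The vector-angle test of Equations (9)–(11) can be illustrated as below (a simplified sketch with our own function name, assuming a closed contour so the index can wrap; the paper's defaults m = 7 and ε = 135° are kept):

```python
import numpy as np

def split_points(contour, m=7, eps_deg=135):
    """At each contour point, form vectors to the points m positions behind
    (An1) and ahead (An2); if the angle between them is below eps_deg, the
    point is a candidate cap/stem segmentation point."""
    pts = np.asarray(contour, dtype=float)
    n = len(pts)
    out = []
    for i in range(n):
        a1 = pts[i] - pts[(i - m) % n]       # closed contour: wrap the index
        a2 = pts[i] - pts[(i + m) % n]
        denom = np.linalg.norm(a1) * np.linalg.norm(a2)
        if denom == 0:
            continue
        ang = np.degrees(np.arccos(np.clip(a1 @ a2 / denom, -1, 1)))
        if ang < eps_deg:
            out.append(tuple(pts[i].astype(int)))
    return out
```

On a straight contour segment the two vectors point in opposite directions (angle 180°), so only sharp concavities such as the cap–stem junction fall below the threshold.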
(c) Filter segmentation points. Owing to the morphological variations of Oudemansiella raphanipies, positions neighboring a segmentation point may also satisfy the threshold, so the candidate segmentation points must be filtered down to two final points according to Equation (12). From the candidate set S, randomly select a point as the first filtered segmentation point S′(x₁, y₁). Calculate the Euclidean distance between S′(x₁, y₁) and each remaining unfiltered point; if the distance is smaller than the threshold γ, remove that point from S and continue with the next unfiltered point. Repeat until a point is found whose distance is greater than γ, and take it as the second segmentation point S′(x₂, y₂). The points S′(x₁, y₁) and S′(x₂, y₂) are then the final segmentation points.
S′ = {(x₁, y₁), (x₂, y₂)} if Euclidean((x₁, y₁), Sᵢ) > γ and Euclidean((x₂, y₂), Sᵢ) > γ, i ∈ [0, n−1]    (12)
where Sᵢ denotes the unfiltered segmentation points other than (x₁, y₁) and (x₂, y₂), and γ was set to 10.
(d) Obtain the cap and stem regions. After obtaining the cap–stem segmentation points (x₁, y₁) and (x₂, y₂), divide the ordered contour C′ into two contours, C1 and C2. Then calculate the centroid (Mx, My) of the ordered contour obtained in Step (a). If (Mx, My) lies within C1, then C1 is the stem region and C2 is the cap region; if (Mx, My) lies within C2, then C2 is the stem region and C1 is the cap region.
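The centroid-containment test in Step (d) can be implemented with a standard ray-casting check (a generic sketch, not necessarily the authors' implementation):

```python
def point_in_contour(point, contour):
    """Ray casting: count crossings of a horizontal ray from `point` with the
    polygon's edges; an odd count means the point lies inside the contour."""
    x, y = point
    inside = False
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge spans the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```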
(2)
Definition of phenotypic parameters
By the cap–stem segmentation algorithm, separate masks of the cap and the stem can be obtained. However, a randomly placed sample may be tilted, making it difficult to measure the parameters directly from the bounding box and introducing systematic error, as shown in Figure 8a. To overcome this drawback, we first used the principal component analysis (PCA) algorithm to acquire the main direction of the cap (stem), as shown in Figure 8b. Then, we rotated the cap (stem) so that the main direction aligned with the image coordinate system, as shown in Figure 8c. Figure 8d illustrates the five primary phenotypic parameters (CD, CH, Angle, SH, SD), and the 18 phenotypic parameters are defined in Table 1.
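The PCA-based de-tilting can be sketched as follows (an illustrative numpy version with our own function name; in practice the mask's foreground pixel coordinates would be passed in):

```python
import numpy as np

def align_to_main_axis(points):
    """Rotate a set of (x, y) mask coordinates so that the first principal
    component lies along the horizontal image axis, removing the tilt before
    measuring length/width from the bounding box."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    main = eigvecs[:, np.argmax(eigvals)]          # main direction of the region
    angle = np.arctan2(main[1], main[0])
    rot = np.array([[np.cos(-angle), -np.sin(-angle)],
                    [np.sin(-angle),  np.cos(-angle)]])
    return centered @ rot.T                        # de-tilted coordinates
```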

2.2.3. Mass Estimation Model

To predict the mass of an Oudemansiella raphanipies based on phenotypic parameters, nine regression models were evaluated in this study (Table 2): support vector machine (SVM) (Linear, Fine Gaussian, Medium Gaussian and Coarse Gaussian), Gaussian Process Regression (GPR) (Rational Quadratic and Exponential), and Artificial Neural Networks (ANNs) (Bayesian Regularization, Levenberg–Marquardt, and Scaled Conjugate Gradient training algorithms).
Support vector machine (SVM) [32] is a powerful supervised learning algorithm primarily used for classification and regression tasks. SVM can be extended into Support Vector Regression (SVR), a regression method that can utilize different kernel functions, such as the Linear and Gaussian kernels, to solve nonlinear problems. For example, Nyalala et al. [28] used SVM with different kernels to predict the volume and mass of tomatoes, and the accuracies were all above 90%.
Gaussian Process Regression (GPR) [33] is a non-parametric regression method based on Bayesian inference, using a Gaussian process (GP) as the prior over the target function. Similarly to SVM, different types of data require distinct kernel functions to characterize their underlying structure; consequently, selecting an appropriate kernel is critical for both the performance and flexibility of GPR models. Okinda et al. [25] used an Exponential-kernel Gaussian process to predict the volume of a single egg, achieving an excellent result with an RMSE of 1.175 cm³ and an R² of 0.984. Moreover, Gonzalez et al. [34] estimated the weight of rice with Exponential-kernel and Rational Quadratic GPR, obtaining RMSEs of 31.081 g and 31.115 g, respectively. This study explores the performance of two kernel functions, Rational Quadratic and Exponential, in estimating the mass and volume of Oudemansiella raphanipies.
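For illustration, a minimal numpy version of exponential-kernel GPR (posterior mean only; the study's actual implementation and hyperparameters are not specified here, so the length scale and noise below are placeholders):

```python
import numpy as np

def exp_kernel(a, b, length_scale=1.0):
    """Exponential kernel: k(x, x') = exp(-||x - x'|| / l)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.exp(-d / length_scale)

def gpr_predict(x_train, y_train, x_test, length_scale=1.0, noise=1e-6):
    """Plain GPR posterior mean: K_*^T (K + noise*I)^-1 y."""
    k = exp_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    k_star = exp_kernel(x_train, x_test, length_scale)
    alpha = np.linalg.solve(k, y_train)
    return k_star.T @ alpha
```

With a near-zero noise term the predictor interpolates the training targets, which is the behavior one trades off against smoothness when tuning the kernel.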
Compared with SVM and GPR, the advantage of Artificial Neural Networks (ANNs) [35] is that there is no need to manually design a kernel function, as they can automatically capture complex deep structures. During learning, the network adjusts its parameters using optimization algorithms to minimize prediction errors. Depending on the optimization algorithm, commonly used ANNs include Bayesian Regularization, Levenberg–Marquardt, and Scaled Conjugate Gradient ANNs [36], all of which have been applied successfully in different scenarios. Kayabaşı et al. [37] obtained five features (length, width, area, perimeter, and fullness) of wheat and then applied a Bayesian Regularization ANN to identify the grain type. Amraei et al. [38] predicted broiler mass with different ANN techniques, among which the Bayesian Regularization ANN was the best network, with an R² value of 0.98. Sofwan et al. [35] utilized a Levenberg–Marquardt ANN to predict the air temperature inside a greenhouse 30 min ahead, which helped to adjust the growing environment of water spinach.

2.2.4. Implementation Details

All DL and ML algorithms were run on the Ubuntu 20.04.6 (GNU/Linux 5.14.0-1051-oem x86_64) operating system (Intel(R) Xeon(R) Silver 4112 CPU @ 2.60 GHz, 64 GB of RAM and NVIDIA GeForce RTX 5000 GPUs), using the PyTorch 1.7.1 framework and Python 3.7. CUDA version 11.7 was used for this experiment. The software used to train the model was PyCharm 2019.
To fine-tune the DL model on our dataset, we initialized it with weights pretrained on the open-source ImageNet dataset. In addition, we used the Adam optimizer with an initial learning rate of 0.0001, a step-decay learning-rate schedule, a weight decay of 0.0001, and a batch size of 4.

2.2.5. Evaluation Metrics

For the Oudemansiella raphanipies instance segmentation subtask, the instance segmentation model was evaluated with the average precision (AP). Depending on the IoU threshold setting, it can be divided into AP, AP50, and AP75. AP represents the average of the AP values calculated at IoU thresholds ranging from 0.5 to 0.95 in steps of 0.05, while AP50 and AP75 are the AP values at IoU thresholds of 0.5 and 0.75, respectively. Additionally, depending on object size, AP includes APs, APm, and APl, where APs refers to the average precision for small objects, typically defined as objects with an area smaller than a specific threshold (less than 32 × 32 pixels). The calculation equations are shown in Table 3.
Given the positive correlation between volume and mass, we also analyzed the volume regression model. The performance of the volume and mass regression models was evaluated with four metrics: adjusted R² (R²adj) [39,40], mean absolute error (MAE), root mean square error (RMSE) [41], and ratio of performance to deviation (RPD) [42]. R²adj is the adjusted coefficient of determination, which accounts for the number of independent variables in the model; it adjusts R² by penalizing the inclusion of unnecessary predictors, providing a more accurate measure of model fit, especially in multiple regression. RMSE is the root mean square of the error between the predicted and reference values; it measures the prediction error in the same units as the original data, making it easy to interpret. RPD is the ratio of the standard deviation (SDref) to the RMSE. Model performance can be judged through RPD: RPD > 2.5 indicates excellent performance; 2.0 < RPD ≤ 2.5 very good performance; 1.8 < RPD ≤ 2.0 good performance; 1.0 < RPD ≤ 1.4 poor performance; and RPD ≤ 1 unsuitability for application. Their definitions are listed in Table 3.
Furthermore, to validate the model’s performance, we also used three metrics: mean absolute percentage error ( M A P E ) [43], A c c u r a c y , and coefficient of variation ( C V ) [44]. M A P E is the mean percentage of the absolute error between the predicted and reference values. A c c u r a c y is a metric defined in this paper as the complement of the M A P E ; it represents how close the predicted values are to the reference values, with a higher A c c u r a c y indicating a smaller error between the predictions and the reference values. C V expresses the dispersion of repeated predictions relative to their mean; a smaller C V indicates less variability in the predicted values for the same sample across multiple measurements. Their definitions are provided in Table 3.
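These three metrics can likewise be sketched in a few lines (again with our own variable names, following the definitions above):

```python
import numpy as np

def mape(y_ref, y_pred):
    """Mean absolute percentage error, in percent."""
    y_ref = np.asarray(y_ref, float)
    y_pred = np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_pred - y_ref) / y_ref)) * 100.0)

def accuracy(y_ref, y_pred):
    """Accuracy as defined in this paper: the complement of MAPE."""
    return 100.0 - mape(y_ref, y_pred)

def cv(repeated_preds):
    """Coefficient of variation (%) of repeated predictions for one sample."""
    p = np.asarray(repeated_preds, float)
    return float(np.std(p, ddof=1) / np.mean(p) * 100.0)
```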

3. Results

3.1. Overall Performance of Our Method

As shown in Figure 9, the data points are densely distributed near the reference line (y = x), and the slope of the fitted line (red) is close to 1 with an extremely small intercept, indicating a slight systematic overestimation (4% on average) while the estimated values remain highly correlated with the manual measurements, with an R 2 of 0.99. Specifically, our method achieved an M A E of 1.75 g, a M A P E of 0.09%, and an R M S E of 0.86 g for mass estimation, and an M A E of 2.75 c m 3 , a M A P E of 0.09%, and an R M S E of 1.36 c m 3 for volume estimation. These results confirm that our method provides accurate and reliable estimations of both mass and volume for Oudemansiella raphanipies.

3.2. The Results of Instance Segmentation

3.2.1. Performance of Model

After training, the loss curves for the training and validation datasets were obtained. As shown in Figure 10, the training loss gradually decreases from its initial value as the number of iterations increases, indicating that the model continuously learns the features of the training data through parameter optimization and that its fitting ability progressively improves. The validation loss decreases rapidly in the early stages of training (the first 6000 iterations), after which the rate of decline slows and gradually stabilizes, suggesting that the model’s generalization performance on the validation set has converged. Partial instance segmentation results are visualized in Figure 11.

3.2.2. Ablation Experiment

Four ablation experiments were conducted to analyze the effect of each module. As shown in Table 4, Row 1 represents the original Mask R-CNN without any improvement, achieving an A P of 0.811, an A P 50 of 0.977, an A P 75 of 0.911, and an A P s of 0.857. With the addition of the PointRend module (Row 2), the A P 50 was 0.975 and the A P s was 0.843, which were 0.002 and 0.014 lower than those in Row 1, respectively; however, Row 2 improved both A P and A P 75 , from 0.811 to 0.813 and from 0.911 to 0.921, respectively. Row 3 introduced the NAF module, resulting in decreases of 0.001 in A P 50 and 0.002 in A P s , while the A P increased by 0.003 and the A P 75 reached 0.935, a significant increase of 0.024. When the PointRend and NAF modules were combined, the A P , A P 50 , A P 75 , and A P s reached 0.831, 0.984, 0.930, and 0.860, improving over the baseline by 2%, 0.7%, 1.9%, and 0.3%, respectively. Meanwhile, the A P , A P 50 , and A P s were the highest among the four experiments. These results demonstrate that the PointRend and NAF modules improved the overall performance of the baseline.

3.2.3. Comparison Results of Different Instance Segmentation Models

To further explore the performance of the FinePoint-ORSeg model, standard COCO evaluation indicators ( A P , A P 50 , A P 75 , and A P s ) were used. We compared our model with Mask R-CNN, SOLOv2, YOLACT, Mask2Former, TensorMask, Mask R-CNN with Swin, and InstaBoost on Dataset 1; the evaluation results are shown in Table 5. The results show that the proposed FinePoint-ORSeg network achieves optimal performance across multiple key metrics: its A P of 0.831 is tied for first place with Mask2Former, and its A P 50 of 0.984 clearly leads, improving by 0.7% over the second-best Mask R-CNN and Mask R-CNN with Swin (both 0.977). Notably, YOLACT performs considerably worse ( A P = 0.655), while Mask2Former (Swin Transformer-based) performs excellently in terms of A P and A P s (small-target precision) but lags slightly behind FinePoint-ORSeg at the higher threshold ( A P 75 ). By using effective channel attention and a simple gating mechanism, the FinePoint-ORSeg model avoids unnecessary complex operations in traditional modules while maintaining high robustness ( A P s = 0.860), providing a data foundation for the subsequent extraction of Oudemansiella raphanipies phenotypic parameters.

3.3. The Result of Phenotypic Parameters Extraction

To verify the accuracy of the extracted phenotypic parameters, we randomly selected 10 samples from Dataset 2 and manually measured their CD, CH, SD, and SH; the other phenotypic parameters cannot be directly measured by hand. Table 6 shows the manual measurement ( m e a s u r e r e f e r e n c e ), the estimated measurement ( m e a s u r e e s t i m a t i o n ), and the MAE and MAPE of the Oudemansiella raphanipies examples. For CD, CH, SD, and SH, the phenotypic extraction method has average MAEs of 1.10 mm, 0.73 mm, 1.30 mm, and 0.58 mm, respectively; the corresponding MAPEs are 5.16%, 4.88%, 3.47%, and 3.50%.

3.4. Correlation Analysis and Best Regression Model Selection

To explore the relationship between Oudemansiella raphanipies parameters and mass while reducing feature redundancy, we constructed a correlation heatmap, as shown in Figure 12. Since volume and mass are closely correlated, the relationship between phenotypic parameters and volume was evaluated as well. From Figure 12, it can be seen that the height, width, perimeter, and area correlate strongly with both mass and volume; however, these four parameters also correlate strongly with one another, indicating some redundancy. Therefore, to explore the impact of nine ML models and the number of principal components from Principal Component Analysis (PCA) on model performance, we applied the phenotypic data obtained in Section 3.3 to evaluate the accuracy of mass and volume estimation; the results are shown in Figure 13. For both mass and volume estimation, MAE and RMSE show a downward trend as the number of principal components increases. When the number of principal components reaches 7, the evaluation metrics begin to converge, which indicates a high level of information redundancy among the 18 features.
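The standardize-then-project step can be sketched as below via the SVD (an illustrative NumPy reduction; the software actually used in this work may differ, and the input is assumed to be a samples × 18 feature matrix):

```python
import numpy as np

def pca_reduce(X, n_components=7):
    """Z-score the features, then project onto the leading principal
    components obtained from the SVD of the standardized matrix."""
    X = np.asarray(X, float)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)    # per-feature standardization
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    scores = Xs @ Vt[:n_components].T            # principal component scores
    explained = (S ** 2) / np.sum(S ** 2)        # explained variance ratios
    return scores, explained[:n_components]
```

The returned scores are mutually uncorrelated, which is what removes the redundancy among the correlated height, width, perimeter, and area features before regression.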
For mass estimation, the Gaussian kernels in SVM outperform the linear kernel, especially the Medium Gaussian SVM and Coarse Gaussian SVM, both of which achieved an R a d j 2 of 0.92. The Medium Gaussian SVM achieved the highest RPD and the lowest RMSE among the SVMs, at 3.61 and 0.70 g, respectively, while the Coarse Gaussian SVM achieved the lowest MAE of 0.35 g. In comparison, the Linear SVM had an R a d j 2 of 0.91, an RMSE of 0.77 g, an RPD of 3.30, and an MAE of 0.50 g. For the GPR models, Rational Quadratic GPR and Exponential GPR had identical R a d j 2 , RMSE, RPD, and MAE values of 0.96, 0.53 g, 4.78, and 0.21 g, respectively; however, Exponential GPR demonstrated more stable performance than Rational Quadratic GPR. For artificial neural networks (ANNs), the Bayesian Regularization ANN performed the best, with an R a d j 2 of 0.97, an RMSE of 0.47 g, an RPD of 5.43, and an MAE of 0.29 g. Among all the models, Exponential GPR had the smallest MAE (0.21 g), while Bayesian Regularization ANN had the highest R a d j 2 and RPD (0.97 and 5.43) and the smallest RMSE (0.47 g). For volume estimation, Exponential GPR performed the best, achieving the highest R a d j 2 and RPD (0.95 and 4.74) and the lowest RMSE and MAE (0.76 c m 3 and 0.29 c m 3 ). Considering both the stability and accuracy of the models for mass and volume estimation, we ultimately selected the Exponential GPR with seven principal components as the model for subsequent mass and volume estimation.
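The posterior mean of an Exponential GPR can be sketched directly from the kernel, since the exponential kernel is the Matérn kernel with ν = 1/2. The hyperparameters below are illustrative, not the fitted ones, and the noise-free posterior variance is omitted for brevity:

```python
import numpy as np

def exp_kernel(A, B, length_scale=1.0, sigma_f=1.0):
    """Exponential (Matern nu = 1/2) kernel: k(r) = sigma_f^2 * exp(-r / l)."""
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    return sigma_f ** 2 * np.exp(-d / length_scale)

def gpr_predict(X_train, y_train, X_test, length_scale=1.0, noise=1e-4):
    """GP posterior mean: k(x*, X) (K + noise*I)^-1 y."""
    K = exp_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_s = exp_kernel(X_test, X_train, length_scale)
    alpha = np.linalg.solve(K, y_train)
    return K_s @ alpha
```

With a small noise term the predictor nearly interpolates the training data, so predictions at the training inputs recover the training targets; in practice the length scale, signal variance, and noise would be fitted by maximizing the marginal likelihood on the principal component scores.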

3.5. The Results Under Different Conditions

3.5.1. Estimation Results of a Single Sample at Random States

Since the phenotypic parameters of the same Oudemansiella raphanipies vary when viewed from different perspectives, we selected 24 Oudemansiella raphanipies instances (Dataset 3), each imaged in 10 randomly generated states, to validate the impact of placement state on the Exponential GPR model obtained in Section 3.4. Table 7 shows the estimated mass and volume of a single sample in random states. The manual measurements of mass and volume had average values ranging from 2.20 g to 3.40 g and from 3.04 c m 3 to 4.70 c m 3 , respectively. Compared with the measured values, the estimated mass and volume had average MAEs of 0.26 g and 0.34 c m 3 , with average STDs of 0.19 g and 0.28 c m 3 , respectively. Moreover, the average CVs of mass and volume were 6.81% and 6.94%, indicating that the Exponential GPR model is robust to Oudemansiella raphanipies in different states.

3.5.2. Mass/Volume Estimation of Multiple Samples in a Single Image with Random Orientations

Although our method demonstrates acceptable performance on images of individual Oudemansiella raphanipies, multiple samples in random states are typically present on real-world assembly lines. Therefore, to validate the practicality of the proposed algorithm in real scenarios, Dataset 4 was selected to simulate the orientations of Oudemansiella raphanipies on the assembly line. The estimated mass and volume are shown in Figure 14 and the total error is shown in Table 8.
For the S samples, our method achieved an accuracy of over 70% for both mass and volume, with RMSEs of 0.590 g and 0.835 c m 3 , respectively. Moreover, the average MAPEs for the mass and volume of the S samples were the largest compared with the M and L samples, at 18.17% and 17.94%, respectively. This is because, although the MAE of the S samples (1.454 g for mass; 2.14 c m 3 for volume) is only modestly higher than that of the M samples, the reference values of the former are much smaller, as the S samples have a smaller reference mass and volume. For the M samples, although the MAPE of mass (volume) is 1.01% (1.11%) higher than that of the L samples, the RMSE between the reference and estimated mass (volume) is the lowest. The performance on the L samples is the best, with a MAPE of 3.21% for mass and 3.66% for volume, because they have the most regular shape and our phenotypic parameter extraction method can therefore obtain more accurate data. Overall, the proposed method showed an MAE of 1.714 g and 2.703 c m 3 when estimating the mean mass and volume of multiple Oudemansiella raphanipies of different grades in one image from a single view, and the mass and volume estimation was robust across the different grades. Additionally, as shown in Figure 14c,d, the violin plots visualize the relative estimation errors of 10 results for 10 Oudemansiella raphanipies samples of different grades. The width of the violin represents the distribution of the data, with a wider section indicating that the estimations are more concentrated at that position; the length represents the degree of dispersion, with a longer violin body signifying greater instability in the measurement results. It can be seen that the relative error of mass can even exceed 4 g, as for No.5 of the S samples and No.7 of the L samples, resulting in low accuracy; the volume estimation also contains instances with large errors. These large errors imply that such instances contribute a greater prediction error to the corresponding images.

3.5.3. Comparison Results of Samples at Different Grades

To further demonstrate the robustness of our method and the consistency of the estimated results, we also statistically analyzed the MAE and CV of the same Oudemansiella raphanipies across different images, as shown in Figure 15 and Table 9. For the S samples, the CV for both mass and volume estimation is within 5%, which implies high robustness.
The L samples have an MAE of 1.045 g and 1.830 c m 3 and a higher overall CV than the S and M samples for both mass and volume, indicating that their estimated results are more volatile. In addition, the MAPE of the S samples shows the opposite trend, reaching 48.62% for mass and 44.89% for volume, while that of the M and L samples is much lower. This is because the mass and volume variations of the M and L samples are more significant, leading to higher variability in their measurements and thus a larger CV. Furthermore, the MAPE is sensitive to the relative impact of errors with respect to the target size: the L samples have a larger overall mass and volume, so the same MAE has a smaller relative impact in percentage terms, leading to a lower MAPE. On the other hand, the shape of the same Oudemansiella raphanipies is not exactly the same on different placement surfaces, leading to variations in the extracted phenotypic parameters, which ultimately affect the estimates of mass and volume.
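The sensitivity of MAPE to target size can be made concrete with a small worked example (the masses and error below are illustrative, not measured values from this study):

```python
# Same absolute error, different relative impact: an error of 1.0 g is a
# large fraction of a small (S-grade) sample's mass but a small fraction
# of a large (L-grade) sample's mass, so the S-grade MAPE is far higher.
mae = 1.0                             # hypothetical absolute error in grams
mass_small, mass_large = 2.5, 12.0    # illustrative reference masses in grams
mape_small = mae / mass_small * 100   # 40.0 %
mape_large = mae / mass_large * 100   # about 8.3 %
```

This is why identical absolute errors across grades translate into a much larger percentage error for the S samples than for the L samples.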

4. Discussion

Regarding the Oudemansiella raphanipies instance segmentation task, we integrated the NAF module and the PointRend module into the FinePoint-ORSeg network to improve segmentation capability and address the issue of rough target boundaries during inference. Specifically, NAF helps the network maintain an awareness of the overall morphology and spatial distribution of the targets while extracting local details, and PointRend significantly improves the geometric accuracy of the segmentation masks, which is particularly important for subsequent geometry-based volume estimation, thereby improving the model’s robustness in complex scenarios. Table 4 shows that adding only the NAF or PointRend module could improve the average precision, which demonstrates their contribution to high-precision mask generation. Table 5 shows that, with an A P of 0.831, our network performs on par with other state-of-the-art DL-based methods.
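The "simple gating mechanism" and channel attention referred to above can be sketched as below. This is an illustrative NumPy reduction of the two core NAFNet operations, omitting the convolutions and normalization of the full block; tensor layout (channels × height × width) and the dropped 1 × 1 convolution in the attention branch are our simplifying assumptions:

```python
import numpy as np

def simple_gate(x):
    """NAFNet-style SimpleGate: split channels in half and multiply
    elementwise, replacing a nonlinear activation (x is C x H x W)."""
    c = x.shape[0] // 2
    return x[:c] * x[c:]

def simplified_channel_attention(x):
    """Simplified channel attention: reweight each channel by a scalar
    derived from its global average (the 1x1 convolution of the original
    block is omitted here for brevity)."""
    w = x.mean(axis=(1, 2), keepdims=True)   # global average pooling per channel
    return x * w
```

Both operations are elementwise multiplications rather than learned nonlinear activations, which is what allows the block to avoid the complex operations of traditional attention modules while retaining channel-wise reweighting.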
The main contribution of this work is the phenotypic parameter extraction algorithms and the use of ML to estimate the mass and volume of Oudemansiella raphanipies, even in the presence of multiple targets. The results (Table 6) show that the phenotypic parameter extraction algorithm was able to robustly obtain features such as CD (MAE = 1.10 mm), CH (MAE = 0.73 mm), SD (MAE = 1.30 mm), and SH (MAE = 0.58 mm). Compared with other state-of-the-art results that measure the CD of Oudemansiella raphanipies [26,27], this study achieved MAE values between 0.35 mm and 2.30 mm, which is considered accurate and acceptable. However, it is notable that the errors (Table 6) of samples 1 (MAPE = 13.26%) and 8 (MAPE = 11.39%) for CD, sample 2 (MAPE = 11.30%) for CH, and sample 7 (MAPE = 10.71%) for SD were significantly higher than those of the other Oudemansiella raphanipies. The main reasons are as follows: (1) Error introduced by manual measurement. Since the Oudemansiella raphanipies is a non-rigid object, compression may occur during measurement, leading to an underestimation of the reference value; additionally, the manual determination of measurement points involves a certain degree of subjectivity. (2) Error introduced by the view angle of measurement. Due to the irregular shape of the Oudemansiella raphanipies and the fact that the manual measurement perspective differs from the camera’s perspective, there is a discrepancy between the manual and system measurements. (3) Error introduced by our extraction approach. Because external environmental factors strongly influence the shape of the Oudemansiella raphanipies, some instances have large caps but narrow stems, while others have small caps but thick stems. As a result, our segmentation method based on the angle between the stem and cap requires different threshold values for the latter case, which also causes errors in the phenotypic parameter extraction.
The correlation analysis results show that RDH, roughness, and ARMB have low correlations with mass and volume. Therefore, the performance of nine machine learning models was compared under different numbers of principal components, and the Exponential GPR was finally selected as the base model. Furthermore, the Exponential GPR was applied to evaluate the samples in different states. The experiments show that the CVs of instances 9 and 18 for single samples (Table 7) and the accuracy of images 5 and 9 for multiple samples (Figure 14b) are unsatisfactory. Figure 16 depicts the CVs of the 18 phenotypic parameters extracted from these samples. It can be seen that the Angle and Total Area of sample No.9 have relatively large CVs of 8.97% and 9.12% compared with the other features; the opening angle, perimeter, and total area also show relatively large variation. For the three samples of different grades, all of the opening angles have a CV exceeding 5%, indicating relatively large volatility, which might be an important reason for the significant difference in the prediction results of the two images. In addition, Table 8 shows that the MAPE of the large samples is 5.32% and 4.80% lower than the average MAPE for mass and volume, respectively, which means that the economic loss can be reduced by 9.77%.
The phenotypic parameters extracted in this work were obtained from 2D images with calibration at a fixed height, which limits both the data precision and the number of available features. The use of 2.5D or 3D data would provide higher precision for subsequent estimation; however, the Oudemansiella raphanipies is relatively small, so higher-precision depth cameras or three-dimensional phenotypic extraction methods would need to be adopted. In addition, the phenotypes of Oudemansiella raphanipies vary greatly across varieties and planting environments. Therefore, it is urgent to develop an adaptive phenotypic parameter extraction method with high robustness.

5. Conclusions

In this research, a computer vision-based method was proposed for estimating the mass and volume of Oudemansiella raphanipies, and it achieved strong performance. This study was motivated by the limitations of existing methods, which are not robust to irregular agricultural products. By establishing a fitting model between phenotypic parameters and mass, we can effectively solve the problem of inaccurate mass estimation caused by the irregular shape of the Oudemansiella raphanipies. The improved instance segmentation model directly obtains finer edges of the segmented regions. In addition, a novel prior-based segmentation method for the Oudemansiella raphanipies cap and stem was proposed to extract 18 phenotypic parameters, as validated on the proposed datasets. Additionally, the analysis of Oudemansiella raphanipies of three different grades demonstrated that the mass and volume estimated by our methods can effectively break through the limitation of an irregular shape. However, although the mass and volume estimation approach has achieved excellent results, it still has limitations. For instance, our method divides the work into three subtasks, and each subtask introduces systematic errors, which reduces the accuracy of the final result. Future work should consider using a single-stage method to reduce these errors.

Author Contributions

H.Y.: Conceptualization, Writing—review and editing, Supervision; D.L.: Data curation, Methodology, Formal analysis, Writing—original draft; A.X.: Conceptualization; L.Y.: Data curation, Methodology; M.C.: Conceptualization, Methodology; Y.X.: Conceptualization, Methodology; H.X.: Conceptualization; Y.W.: Supervision, Project administration, Writing—review and editing; Q.W.: Methodology, Project administration, Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 62362039) and the Jiangxi Provincial Natural Science Foundation (No. 20242BAB25081).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

Special thanks to the reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yin, H.; Yi, W.; Hu, D. Computer vision and machine learning applied in the mushroom industry: A critical review. Comput. Electron. Agric. 2022, 198, 107015. [Google Scholar] [CrossRef]
  2. Yin, H.; Xu, J.; Wang, Y.; Hu, D.; Yi, W. A novel method of situ measurement algorithm for Oudemansiella raphanipies caps based on YOLOv4 and distance filtering. Agronomy 2022, 13, 134. [Google Scholar] [CrossRef]
  3. Koc, A.B. Determination of watermelon volume using ellipsoid approximation and image processing. Postharvest Biol. Technol. 2007, 45, 366–371. [Google Scholar] [CrossRef]
  4. Iqbal, S.M.; Gopal, A.; Sarma, A. Volume estimation of apple fruits using image processing. In 2011 International Conference on Image Information Processing; IEEE: New York, NY, USA, 2011; pp. 1–6. [Google Scholar] [CrossRef]
  5. Siswantoro, J.; Hilman, M.; Widiasri, M. Computer vision system for egg volume prediction using backpropagation neural network. In IOP Conference Series: Materials Science and Engineering; IOP Publishing Ltd.: Bristol, UK, 2017; Volume 273, p. 012002. [Google Scholar]
  6. Widiasri, M.; Santoso, L.P.; Siswantoro, J. Computer Vision System in Measurement of the Volume and Mass of Egg Using the Disc Method. In IOP Conference Series: Materials Science and Engineering; IOP Publishing Ltd.: Bristol, UK, 2019; Volume 703, p. 012050. [Google Scholar]
  7. Siswantoro, J.; Asmawati, E.; Siswantoro, M.Z. A rapid and accurate computer vision system for measuring the volume of axi-Symmetric natural products based on cubic spline interpolation. J. Food Eng. 2022, 333, 111139. [Google Scholar] [CrossRef]
  8. Huynh, T.; Tran, L.; Dao, S. Real-time size and mass estimation of slender axi-Symmetric fruit/vegetable using a single top view image. Sensors 2020, 20, 5406. [Google Scholar] [CrossRef] [PubMed]
  9. Tabatabaeefar, A.; Rajabipour, A. Modeling the mass of apples by geometrical attributes. Sci. Hortic. 2005, 105, 373–382. [Google Scholar] [CrossRef]
  10. Lorestani, A.N.; Tabatabaeefar, A. Modelling the mass of kiwi fruit by geometrical attributes. Sci. Hortic. 2006, 105, 373–382. [Google Scholar]
  11. Lee, J.; Nazki, H.; Baek, J.; Hong, Y.; Lee, M. Artificial intelligence approach for tomato detection and mass estimation in precision agriculture. Sustainability 2020, 12, 9138. [Google Scholar] [CrossRef]
  12. Jang, S.H.; Moon, S.P.; Kim, Y.J.; Lee, S.-H. Development of potato mass estimation system based on deep learning. Appl. Sci. 2023, 13, 2614. [Google Scholar] [CrossRef]
  13. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar] [CrossRef]
  14. Liu, C.; Feng, Q.; Sun, Y.; Li, Y.; Ru, M.; Xu, L. Yolactfusion: An Instance segmentation method for rgb-nir multimodal image fusion based on an attention mechanism. Comput. Electron. Agric. 2023, 213, 108186. [Google Scholar] [CrossRef]
  15. Wang, X.; Zhang, R.; Kong, T.; Li, L.; Shen, C. Solov2: Dynamic and fast Instance segmentation. Adv. Neural Inf. Process. Syst. 2020, 33, 17721–17732. [Google Scholar]
  16. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10012–10022. [Google Scholar] [CrossRef]
  17. Yang, S.; Zheng, L.; Yang, H.; Zhang, M.; Wu, T.; Sun, S.; Tomasetto, F.; Wang, M. A synthetic datasets based instance segmentation network for high-throughput soybean pods phenotype investigation. Expert Syst. Appl. 2022, 192, 116403. [Google Scholar] [CrossRef]
  18. Li, S.; Yan, Z.; Guo, Y.; Su, X.; Cao, Y.; Jiang, B.; Yang, F.; Zhang, Z.; Xin, D.; Chen, Q. Spm-Is: An auto-algorithm to acquire a mature soybean phenotype based on instance segmentation. Crop J. 2022, 10, 1412–1423. [Google Scholar] [CrossRef]
  19. Sapkota, R.; Ahmed, D.; Karkee, M. Comparing YOLOv8 and Mask R-CNN for instance segmentation in complex orchard environments. Artif. Intell. Agric. 2024, 13, 84–99. [Google Scholar] [CrossRef]
  20. Chen, Z.; Cai, Y.; Liu, Y.; Liang, Z.; Chen, H.; Ma, R.; Qi, L. Towards end-to-end rice row detection in paddy fields exploiting two-pathway instance segmentation. Comput. Electron. Agric. 2025, 231, 109963. [Google Scholar] [CrossRef]
  21. He, H.; Ma, X.; Guan, H. A calculation method of phenotypic traits of soybean pods based on image processing technology. Ecol. Inform. 2022, 69, 101676. [Google Scholar] [CrossRef]
  22. Liu, R.; Huang, S.; New, Y.; Xu, S. Automated detection research for number and key phenotypic parameters of rapeseed silique. Chin. J. Oil Crop Sci. 2020, 42, 71–77. [Google Scholar] [CrossRef]
  23. Zhu, Y.; Zhang, X.; Shen, Y.; Gu, Q.; Jin, Q.; Zheng, K. High-throughput phenotyping collection and analysis of Flammulina filiformis based on image recognition technology. Myco 2021, 40, 3. [Google Scholar] [CrossRef]
  24. Kumar, M.; Gupta, S.; Gao, X.Z.; Singh, A. Plant species recognition using morphological features and adaptive boosting methodology. IEEE Access 2019, 7, 163912–163918. [Google Scholar] [CrossRef]
  25. Okinda, C.; Sun, Y.; Nyalala, I.; Korohou, T.; Opiyo, S.; Wang, J.; Shen, M. Egg volume estimation based on image processing and computer vision. J. Food Eng. 2020, 283, 110041. [Google Scholar] [CrossRef]
  26. Wang, Y.; Xiao, H.; Yin, H.; Luo, S.; Le, Y.; Wan, J. Measurement of morphology of Oudemansiella raphanipes based on RGBD camera. Nongye Gongcheng Xuebao/Trans. Chinese Soc. Agric. Eng. 2022, 38, 140–148. [Google Scholar] [CrossRef]
  27. Yin, H.; Wei, Q.; Gao, Y.; Hu, H.; Wang, Y. Moving toward smart breeding: A robust amodal segmentation method for occluded Oudemansiella raphanipes cap size estimation. Comput. Electron. Agric. 2024, 220, 108895. [Google Scholar] [CrossRef]
  28. Nyalala, I.; Okinda, C.; Nyalala, L.; Makange, N.; Chao, Q.; Chao, L.; Yousaf, K.; Chen, K. Tomato volume and mass estimation using computer vision and machine learning algorithms: Cherry Tomato Model. J. Food Eng. 2019, 263, 288–298. [Google Scholar] [CrossRef]
  29. Saikumar, A.; Nickhil, C.; Badwaik, L.S. Physicochemical characterization of elephant apple (Dillenia indica L.) fruit and its mass and volume modeling using computer vision. Sci. Hortic. 2023, 314, 111947. [Google Scholar] [CrossRef]
  30. Kirillov, A.; Wu, Y.; He, K.; Girshick, R. Pointrend: Image segmentation as rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 9799–9808. [Google Scholar]
  31. Chen, L.; Chu, X.; Zhang, X.; Sun, J. Simple baselines for image restoration. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 17–33. [Google Scholar] [CrossRef]
  32. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  33. Quinonero-Candela, J.; Rasmussen, C.E.; Williams, C.K. Approximation methods for gaussian process regression. In Large-Scale Kernel Machines; MIT Press: Cambridge, MA, USA, 2007; pp. 203–223. [Google Scholar]
  34. Gonzalez, B.; Garcia, G.; Velastin, S.A.; GholamHosseini, H.; Tejeda, L.; Farias, G. Automated food weight and content estimation using computer vision and AI algorithms. Sensors 2024, 24, 7660. [Google Scholar] [CrossRef] [PubMed]
  35. Sofwan, A.; Sumardi, S.; Ayun, K.Q.; Budiraharjo, K.; Karno, K. Artificial neural network levenberg-marquardt method for environmental prediction of smart greenhouse. In 2022 9th International Conference on Information Technology, Computer, and Electrical Engineering (ICITACEE); IEEE: New York, NY, USA, 2022; pp. 50–54. [Google Scholar] [CrossRef]
  36. Sivaranjani, T.; Vimal, S. AI method for improving crop yield prediction accuracy using ANN. Comput. Syst. Sci. Eng. 2023, 47, 153–170. [Google Scholar] [CrossRef]
  37. Kayabaşı, A.; Sabancı, K.; Yiğit, E.; Toktaş, A.; Yerlikaya, M.; Yıldız, B. Image processing based ANN with bayesian regularization learning algorithm for classification of wheat grains. In Proceedings of the 2017 10th International Conference on Electrical and Electronics Engineering (ELECO), Bursa, Turkey, 30 November–2 December 2017; pp. 1166–1170. [Google Scholar]
  38. Amraei, S.; Abdanan, M.S.; Salari, S. Broiler weight estimation based on machine vision and artificial neural network. Br. Poult. Sci. 2017, 58, 200–205. [Google Scholar] [CrossRef]
  39. Akbarian, S.; Xu, C.; Wang, W.; Ginns, S.; Lim, S. Sugarcane yields prediction at the row level Using a novel cross-validation approach to multi-Year multispectral images. Comput. Electron. Agric. 2022, 198, 107024. [Google Scholar] [CrossRef]
  40. Yang, H.I.; Min, S.G.; Yang, J.H.; Eun, J.B.; Chung, Y.B. Mass and volume estimation of diverse kimchi cabbage forms using RGB-D vision and machine learning. Postharvest Biol. Technol. 2024, 218, 113130. [Google Scholar] [CrossRef]
  41. Yang, H.I.; Min, S.G.; Yang, J.H.; Lee, M.A.; Park, S.H.; Eun, J.B.; Chung, Y.B. Predictive modeling and mass transfer kinetics of tumbling-assisted dry salting of kimchi cabbage. J. Food Eng. 2024, 361, 111742. [Google Scholar] [CrossRef]
  42. Wang, D.; Feng, Z.; Ji, S.; Cui, D. Simultaneous prediction of peach firmness and weight using vibration spectra combined with one-dimensional convolutional neural network. Comput. Electron. Agric. 2022, 201, 107341. [Google Scholar] [CrossRef]
  43. Xie, W.; Wei, S.; Zheng, Z.; Chang, Z.; Yang, D. Developing a stacked ensemble model for predicting the mass of fresh carrot. Postharvest Biol. Technol. 2022, 186, 111848. [Google Scholar] [CrossRef]
  44. Luo, S.; Tang, J.; Peng, J.; Yin, H. A novel approach for measuring the volume of pleurotus eryngii based on depth camera and improved circular disk method. Sci. Hortic. 2024, 336, 113382. [Google Scholar] [CrossRef]
Figure 1. Diagram of the imaging system.
Figure 2. Four datasets used in our work. (a) Dataset 1 is used to train the DL model for segmenting the multiple Oudemansiella raphanipies samples. (b) Dataset 2 serves to build the ML model for mass estimation of Oudemansiella raphanipies. (c) Dataset 3 is employed to evaluate the performance of individual Oudemansiella raphanipies mass estimation under different states. (d) Dataset 4 is designated to validate the mass estimation of multiple Oudemansiella raphanipies across different quality grades. L, M, and S denote large, medium, and small Oudemansiella raphanipies sizes, respectively.
Figure 3. Examples of different grades of Oudemansiella raphanipes. Columns #1, #2, and #3 show distinct visual examples of the samples within each size category.
Figure 4. Scheme of the method used in this study. (a) Oudemansiella raphanipes instance segmentation for subsequent steps. (b) Regression model construction for volume and mass estimation. (c) Model evaluation.
Figure 5. The schematic diagram of segmentation by the FinePoint-ORSeg module. The red font indicates the modules added in this paper, and the red dashed box marks the backbone network of the proposed FinePoint-ORSeg model.
Figure 6. (a) Example of how the PointRend module works, where various shades of red/pink denote the intermediate feature extraction and grid-based coarse predictions, and the solid red silhouette indicates the final point-based prediction output. (b) The structure of the NAF block, where different colors denote different blocks.
Figure 7. Flow chart of cap–stem segmentation. The arrow indicates the overall workflow of the algorithm.
Figure 8. The revising process of the bounding box of the cap and stem. (a) The original bounding box of the cap (stem) mask. (b) The original main direction calculated via PCA algorithm and the green lines represent the main direction. (c) The cap (stem) mask after rotation with PCA and the red lines represent the corrected main direction. (d) The revised bounding box of the rotated cap (stem) mask.
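The PCA-based orientation correction in Figure 8 can be summarized in a few lines of NumPy. The sketch below is illustrative only (the function names are ours, not the authors' code): it computes the main direction of a 2-D point set, then rotates the points so that direction is vertical before a bounding box is refitted.

```python
import numpy as np

def pca_main_direction(points):
    """Unit vector of the first principal component of a 2-D point set."""
    centered = points - points.mean(axis=0)
    # Eigenvector of the covariance matrix with the largest eigenvalue
    cov = np.cov(centered.T)
    w, v = np.linalg.eigh(cov)
    return v[:, np.argmax(w)]

def rotate_upright(points):
    """Rotate points so the PCA main direction aligns with the vertical axis."""
    d = pca_main_direction(points)
    theta = np.arctan2(d[0], d[1])          # angle between main axis and vertical
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T

# Points along a 45-degree line become (nearly) vertical after rotation
pts = np.array([[i, i] for i in range(5)], dtype=float)
up = rotate_upright(pts)
```

After this rotation, the axis-aligned bounding box of `up` is the "revised" box of Figure 8d.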
Figure 9. Relationships of mass and volume between ground truth and prediction. (a) Correlation between mass estimates and manual measurements. (b) Correlation between volume estimates and manual measurements.
Figure 10. Loss curves.
Figure 11. Segmentation visualization results of the FinePoint-ORSeg model. Different colors represent different Oudemansiella raphanipies instances and are generated randomly. Columns #1, #2, and #3 show distinct visual examples of the samples within each size category.
Figure 12. The correlations of characteristic parameters with mass and volume.
Figure 13. The evaluated results ($R_{adj}^2$, RPD, MAE, and RMSE) on the validation set.
Figure 14. Comparison between estimated results and manual measurements for samples of different grades. (a) Rows 1, 2, and 3 represent the multiple samples of different grades; columns 1, 2, and 3 represent three numbered image examples. (b) The line chart represents the average accuracy of the estimation results over 30 images, and the bar chart represents the average absolute error of the estimation results over 30 images. Blue represents small samples, pink represents medium samples, and yellow represents large samples. (c) The relative errors of mass between estimation and reference measurements. (d) The relative errors of volume between estimation and reference measurements.
Figure 15. Estimated results of mass and volume of each Oudemansiella raphanipies on 10 different images. (a) The estimated results of mass. (b) The estimated results of volume. The line chart represents the coefficient of variation of each Oudemansiella raphanipies of different grades on 10 different images, and the bar chart represents the average absolute error of each Oudemansiella raphanipies of different grades on 10 different images. Blue represents small samples, pink represents medium samples, and yellow represents large samples.
Figure 16. The CV of 18 features extracted by our method. No.9 and No.18 (in Table 7) are the single samples measured 10 times in different states, while the S, M, L samples are the CV of the two images with the largest and the smallest errors.
Table 1. Definition of 18 phenotypic parameters.

| ID | Abbreviation | Phenotypic Parameter | Equation | Units |
|----|--------------|----------------------|----------|-------|
| 1 | CD | Cap Diameter | $\max x_i - \min x_i$ | mm |
| 2 | SD | Stem Diameter | $\max x_i - \min x_i$ | mm |
| 3 | CH | Cap Height | $\max y_i - \min y_i$ | mm |
| 4 | SH | Stem Height | $\max y_i - \min y_i$ | mm |
| 5 | $R_{DH_{cap}}$ | Cap Diameter–Height Ratio | $CD / CH$ | – |
| 6 | $R_{DH_{stem}}$ | Stem Diameter–Height Ratio | $SD / SH$ | – |
| 7 | $P_{cap}$ | Perimeter of Cap | $\sum_{i=1}^{n-1} \sqrt{(x_{i+1}-x_i)^2 + (y_{i+1}-y_i)^2}$ | mm |
| 8 | $P_{stem}$ | Perimeter of Stem | $\sum_{i=1}^{n-1} \sqrt{(x_{i+1}-x_i)^2 + (y_{i+1}-y_i)^2}$ | mm |
| 9 | $Area_{cap}$ | Area of Cap | $\frac{1}{2}\left|\sum_{i=1}^{n-1}(x_i y_{i+1} - y_i x_{i+1}) + (x_n y_1 - y_n x_1)\right|$ | mm² |
| 10 | $Area_{stem}$ | Area of Stem | $\frac{1}{2}\left|\sum_{i=1}^{n-1}(x_i y_{i+1} - y_i x_{i+1}) + (x_n y_1 - y_n x_1)\right|$ | mm² |
| 11 | $R_{cap}$ | Roughness of Cap | $Area_{cap} / ConvexhullArea_{cap}$ | – |
| 12 | $R_{stem}$ | Roughness of Stem | $Area_{stem} / ConvexhullArea_{stem}$ | – |
| 13 | $ARMB_{cap}$ | Ratio of Cap Area to Bounding Box | $(CD \times CH) / Area_{cap}$ | – |
| 14 | $ARMB_{stem}$ | Ratio of Stem Area to Bounding Box | $(SD \times SH) / Area_{stem}$ | – |
| 15 | $Angle$ | Opening Angle of Cap | $Angle = \frac{v_1 \cdot v_2}{\|v_1\| \times \|v_2\|}$ | – |
| 16 | $H_{total}$ | Total Height | $CH + SH$ | mm |
| 17 | $P_{total}$ | Total Perimeter | $\sum_{i=1}^{n-1} \sqrt{(x_{i+1}-x_i)^2 + (y_{i+1}-y_i)^2}$ | mm |
| 18 | $Area_{total}$ | Total Area | $\frac{1}{2}\left|\sum_{i=1}^{n-1}(x_i y_{i+1} - y_i x_{i+1}) + (x_n y_1 - y_n x_1)\right|$ | mm² |

Note: the coordinates $(x_i, y_i)$ are the points of the cap (stem/fruit body) mask contour; the Convexhull area is the area of the convex hull of the cap (stem) mask.
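The perimeter and area formulas in Table 1 (summed segment lengths and the shoelace formula over the mask contour) can be sketched as follows. This is an illustrative NumPy implementation of the two formulas, not the authors' code:

```python
import numpy as np

def perimeter(xy):
    """Sum of Euclidean segment lengths along a closed contour (Table 1, P)."""
    d = np.diff(np.vstack([xy, xy[:1]]), axis=0)   # wrap around to close the polygon
    return float(np.sqrt((d ** 2).sum(axis=1)).sum())

def shoelace_area(xy):
    """Polygon area via the shoelace formula (Table 1, Area)."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(float(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1))))

# A 10 mm x 5 mm rectangle: perimeter 30 mm, area 50 mm^2
rect = np.array([[0, 0], [10, 0], [10, 5], [0, 5]], dtype=float)
print(perimeter(rect))      # 30.0
print(shoelace_area(rect))  # 50.0
```

With a real mask, `xy` would be the contour points of the cap, stem, or whole fruit body, converted from pixels to millimeters.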
Table 2. The property descriptions of the GPR and SVR kernels.

| Model | Kernel Equation |
|-------|-----------------|
| Linear SVM | $K_L(x, x') = x^T x'$ |
| Fine Gaussian SVM / Medium Gaussian SVM / Coarse Gaussian SVM | $K_G(x, x') = \exp\left(-\frac{\|x - x'\|^2}{2\sigma^2}\right)$ |
| Rational Quadratic GPR | $K_{RQ}(x, x') = \left(1 + \frac{\|x - x'\|^2}{2\alpha l^2}\right)^{-\alpha}$ |
| Exponential GPR | $K_E(x, x') = \exp\left(-\frac{\|x - x'\|}{l}\right)$ |
| Bayesian Regularization ANN | n-11-2 layers |
| Levenberg–Marquardt ANN | n-11-2 layers |
| Scaled Conjugate Gradient ANN | n-11-2 layers |

Note: σ is the dimensional feature-space scale, α is the decay exponent, and l is the length scale.
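The kernels in Table 2 are standard closed forms, so they are easy to sanity-check numerically. The snippet below is our own illustrative sketch (helper names are assumptions, not the study's code), treating inputs as 1-D NumPy arrays:

```python
import numpy as np

def k_linear(x, xp):
    """Linear SVM kernel K_L(x, x') = x^T x'."""
    return float(np.dot(x, xp))

def k_gaussian(x, xp, sigma=1.0):
    """Gaussian (RBF) kernel shared by the fine/medium/coarse Gaussian SVMs."""
    return float(np.exp(-np.sum((x - xp) ** 2) / (2.0 * sigma ** 2)))

def k_exponential(x, xp, l=1.0):
    """Exponential GPR kernel K_E(x, x') = exp(-|x - x'| / l)."""
    return float(np.exp(-np.linalg.norm(x - xp) / l))

x = np.array([1.0, 0.0])
print(k_gaussian(x, x))                        # 1.0 at zero distance
print(k_exponential(x, np.array([0.0, 0.0])))  # exp(-1) ≈ 0.3679
```

The fine, medium, and coarse Gaussian SVMs differ only in the value of `sigma`; the kernel form is identical.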
Table 3. The calculation equations of evaluation metrics.

| Task | Evaluation Metric | Equations |
|------|-------------------|-----------|
| Instance segmentation | Average precision ($AP$) | $P = \frac{TP}{TP + FP}$; $R = \frac{TP}{TP + FN}$; $AP = \int_0^1 P(r)\,dr$ |
| | $AP_{50}$, $AP_{75}$, $AP_s$ | $AP$ at IoU thresholds 0.50 and 0.75, and $AP$ for small objects |
| Phenotypic parameter extraction | Mean absolute error ($MAE$) | $MAE = \frac{1}{n}\sum_{i=1}^{n}\lvert y_i - \hat{y}_i \rvert$ |
| | Mean absolute percentage error ($MAPE$) | $MAPE = \frac{1}{n}\sum_{i=1}^{n}\left\lvert \frac{y_i - \hat{y}_i}{y_i} \right\rvert \times 100\%$ |
| Regression model | Adjusted $R^2$ ($R_{adj}^2$) | $R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}$; $R_{adj}^2 = 1 - (1 - R^2)(n-1)/(n-m-1)$ |
| | Mean absolute error ($MAE$) | $MAE = \frac{1}{n}\sum_{i=1}^{n}\lvert y_i - \hat{y}_i \rvert$ |
| | Root mean square error ($RMSE$) | $RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}$ |
| | Ratio of performance to deviation ($RPD$) | $SD_{ref} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \bar{y})^2}$; $RPD = SD_{ref}/RMSE$ |
| Mass evaluation under different conditions | Mean absolute percentage error ($MAPE$) | $MAPE = \frac{1}{n}\sum_{i=1}^{n}\left\lvert \frac{y_i - \hat{y}_i}{y_i} \right\rvert \times 100\%$ |
| | $Accuracy$ | $Accuracy = 1 - MAPE$ |
| | Coefficient of variation ($CV$) | $SD_{pred} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - \mu)^2}$; $CV = \frac{SD_{pred}}{\mu} \times 100\%$ |

Note: $TP$ represents a positive sample predicted as positive, $FP$ represents a negative sample predicted as positive, $FN$ represents a positive sample predicted as negative, $P$ represents precision, and $R$ represents recall. $y_i$ represents the reference value, $\hat{y}_i$ represents the predicted value, $\bar{y}$ represents the mean of the reference values, $n$ represents the number of data, $m$ represents the number of independent variables, $SD_{ref}$ represents the standard deviation of the reference values, $SD_{pred}$ represents the standard deviation of the predicted values, and $\mu$ represents the average of the predicted values.
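The regression metrics in Table 3 are straightforward to compute. The helper below is our own illustrative sketch (not the study's code); it uses the population standard deviation, matching the $\frac{1}{n}$ definitions in the table:

```python
import numpy as np

def regression_report(y, y_hat, m):
    """MAE, RMSE, MAPE, adjusted R^2, and RPD for a regression with m predictors."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = len(y)
    mae = np.mean(np.abs(y - y_hat))
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))
    mape = np.mean(np.abs((y - y_hat) / y)) * 100
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    r2_adj = 1 - (1 - r2) * (n - 1) / (n - m - 1)
    rpd = np.std(y) / rmse   # SD of the reference values over RMSE (population SD)
    return dict(MAE=mae, RMSE=rmse, MAPE=mape, R2_adj=r2_adj, RPD=rpd)
```

For example, `regression_report([1.0, 2.0, 3.0], [1.1, 1.9, 3.0], m=1)` gives an adjusted R² of 0.98 and an RPD of 10.0.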
Table 4. The results of the ablation experiment. Bold represents the best result.

| ID | PointRend | NAF | $AP$ | $AP_{50}$ | $AP_{75}$ | $AP_s$ |
|----|-----------|-----|------|-----------|-----------|--------|
| 1 | – | – | 0.811 | 0.977 | 0.911 | 0.857 |
| 2 | ✓ | – | 0.813 | 0.975 | 0.921 | 0.843 |
| 3 | – | ✓ | 0.814 | 0.976 | **0.935** | 0.855 |
| 4 | ✓ | ✓ | **0.831** | **0.984** | 0.930 | **0.860** |
Table 5. Comparison of different instance segmentation networks on Oudemansiella raphanipies.

| Method | $AP$ | $AP_{50}$ | $AP_{75}$ | $AP_s$ |
|--------|------|-----------|-----------|--------|
| Mask R-CNN | 0.811 | 0.977 | 0.911 | 0.857 |
| SOLOv2 | 0.818 | 0.973 | 0.917 | 0.857 |
| YOLACT | 0.655 | 0.937 | 0.740 | 0.730 |
| Mask2Former | 0.831 | 0.969 | 0.908 | 0.864 |
| TensorMask | 0.798 | 0.966 | 0.908 | 0.830 |
| Mask R-CNN with Swin | 0.829 | 0.977 | 0.960 | 0.857 |
| InstaBoost | 0.760 | 0.965 | 0.874 | 0.792 |
| FinePoint-ORSeg | 0.831 | 0.984 | 0.930 | 0.860 |
Table 6. Examples of the extracted phenotypic parameters.

| Parameter | No. | Reference (mm) | Estimated (mm) | MAE (mm) | MAPE (%) |
|-----------|-----|----------------|----------------|----------|----------|
| CD | 1 | 14.9 | 16.88 | 1.98 | 13.26 |
| CD | 2 | 20.6 | 21.88 | 1.28 | 6.19 |
| CD | 3 | 19.5 | 18.75 | 0.75 | 3.85 |
| CD | 4 | 20.9 | 21.25 | 0.35 | 1.68 |
| CD | 5 | 21.4 | 21.88 | 0.48 | 2.22 |
| CD | 6 | 22.1 | 23.75 | 1.67 | 7.47 |
| CD | 7 | 25.3 | 25.00 | 0.30 | 1.29 |
| CD | 8 | 20.2 | 22.50 | 2.30 | 11.39 |
| CD | 9 | 23.7 | 24.38 | 0.68 | 2.85 |
| CD | 10 | 25.9 | 26.25 | 0.35 | 1.35 |
| CD | Average | 21.45 | 22.25 | 1.01 | 5.16 |
| CH | 1 | 12.7 | 12.50 | 0.20 | 1.58 |
| CH | 2 | 14.6 | 16.25 | 1.65 | 11.30 |
| CH | 3 | 14.0 | 13.75 | 0.25 | 1.79 |
| CH | 4 | 15.8 | 16.88 | 1.08 | 6.80 |
| CH | 5 | 15.5 | 15.63 | 0.13 | 0.80 |
| CH | 6 | 17.8 | 19.38 | 1.58 | 8.85 |
| CH | 7 | 15.4 | 15.63 | 0.23 | 1.46 |
| CH | 8 | 13.1 | 14.38 | 1.28 | 9.73 |
| CH | 9 | 15.2 | 15.63 | 0.43 | 2.80 |
| CH | 10 | 12.1 | 12.50 | 0.44 | 3.65 |
| CH | Average | 14.62 | 15.25 | 0.73 | 4.88 |
| SD | 1 | 39.1 | 39.38 | 0.28 | 0.70 |
| SD | 2 | 30.9 | 30.63 | 0.28 | 0.89 |
| SD | 3 | 36.5 | 38.13 | 1.63 | 4.45 |
| SD | 4 | 41.8 | 43.13 | 1.33 | 3.17 |
| SD | 5 | 58.6 | 61.88 | 3.28 | 5.59 |
| SD | 6 | 33.7 | 33.75 | 0.05 | 1.48 |
| SD | 7 | 33.6 | 30.00 | 3.60 | 10.71 |
| SD | 8 | 33.1 | 31.25 | 1.85 | 5.59 |
| SD | 9 | 32.7 | 33.13 | 0.43 | 1.30 |
| SD | 10 | 37.16 | 36.88 | 0.29 | 0.77 |
| SD | Average | 37.72 | 37.82 | 1.30 | 3.47 |
| SH | 1 | 13.1 | 13.75 | 0.65 | 4.96 |
| SH | 2 | 14.9 | 15.63 | 0.725 | 4.87 |
| SH | 3 | 18.5 | 18.75 | 0.25 | 1.35 |
| SH | 4 | 18.8 | 19.38 | 0.575 | 3.06 |
| SH | 5 | 17.5 | 18.13 | 0.625 | 3.57 |
| SH | 6 | 17.8 | 18.75 | 0.95 | 5.34 |
| SH | 7 | 12.1 | 11.88 | 0.225 | 1.86 |
| SH | 8 | 18.1 | 18.75 | 0.65 | 3.59 |
| SH | 9 | 15.9 | 16.25 | 0.35 | 2.20 |
| SH | 10 | 18.0 | 18.75 | 0.75 | 4.17 |
| SH | Average | 16.47 | 17.00 | 0.58 | 3.50 |
Table 7. Estimated results of single samples in random states. Ref refers to the reference data from manual measurements, APV refers to the average predicted value, and STD represents the standard deviation.

| ID | Mass Ref (g) | Mass APV (g) | Mass MAE (g) | Mass STD (g) | Mass CV (%) | Vol. Ref (cm³) | Vol. APV (cm³) | Vol. MAE (cm³) | Vol. STD (cm³) | Vol. CV (%) |
|----|--------------|--------------|--------------|--------------|-------------|----------------|----------------|----------------|----------------|-------------|
| 1 | 3.23 | 3.10 | 0.19 | 0.17 | 5.42 | 4.28 | 4.32 | 0.26 | 0.32 | 7.31 |
| 2 | 2.20 | 2.15 | 0.14 | 0.17 | 7.92 | 3.04 | 2.93 | 0.20 | 0.22 | 7.62 |
| 3 | 3.21 | 3.20 | 0.22 | 0.25 | 7.92 | 4.25 | 4.46 | 0.32 | 0.30 | 6.81 |
| 4 | 3.36 | 3.10 | 0.29 | 0.28 | 9.13 | 4.29 | 4.43 | 0.21 | 0.23 | 5.25 |
| 5 | 2.37 | 2.45 | 0.17 | 0.19 | 7.59 | 3.55 | 3.46 | 0.28 | 0.28 | 8.16 |
| 6 | 2.53 | 2.40 | 0.23 | 0.20 | 8.27 | 3.34 | 3.46 | 0.23 | 0.25 | 7.26 |
| 7 | 2.95 | 2.40 | 0.55 | 0.10 | 4.12 | 3.96 | 3.51 | 0.45 | 0.11 | 3.09 |
| 8 | 2.72 | 2.70 | 0.08 | 0.10 | 3.84 | 3.45 | 3.65 | 0.25 | 0.26 | 7.02 |
| 9 | 3.10 | 2.93 | 0.40 | 0.39 | 13.34 | 4.38 | 4.66 | 0.64 | 0.58 | 12.37 |
| 10 | 3.31 | 3.43 | 0.15 | 0.12 | 3.37 | 4.35 | 4.81 | 0.46 | 0.11 | 2.21 |
| 11 | 2.92 | 2.57 | 0.35 | 0.14 | 5.40 | 4.20 | 3.70 | 0.49 | 0.15 | 3.95 |
| 12 | 3.11 | 2.95 | 0.27 | 0.28 | 9.57 | 4.25 | 4.24 | 0.28 | 0.37 | 8.84 |
| 13 | 3.25 | 2.74 | 0.51 | 0.16 | 5.86 | 4.25 | 4.13 | 0.29 | 0.29 | 7.13 |
| 14 | 3.08 | 3.08 | 0.14 | 0.15 | 5.03 | 4.24 | 4.29 | 0.22 | 0.29 | 6.67 |
| 15 | 2.17 | 2.27 | 0.21 | 0.22 | 9.69 | 2.98 | 3.31 | 0.33 | 0.26 | 7.99 |
| 16 | 3.06 | 2.96 | 0.15 | 0.13 | 4.46 | 4.51 | 4.56 | 0.20 | 0.24 | 5.22 |
| 17 | 2.35 | 2.12 | 0.19 | 0.12 | 5.71 | 3.37 | 3.29 | 0.22 | 0.23 | 7.01 |
| 18 | 2.98 | 2.70 | 0.19 | 0.32 | 11.88 | 4.01 | 3.85 | 0.44 | 0.51 | 13.21 |
| 19 | 2.25 | 2.34 | 0.19 | 0.08 | 3.45 | 3.16 | 3.34 | 0.37 | 0.33 | 9.79 |
| 20 | 3.34 | 3.12 | 0.32 | 0.29 | 9.17 | 4.25 | 4.51 | 0.34 | 0.30 | 6.61 |
| 21 | 3.40 | 2.54 | 0.86 | 0.18 | 6.99 | 4.70 | 4.03 | 0.67 | 0.20 | 4.98 |
| 22 | 3.35 | 3.29 | 0.10 | 0.11 | 3.47 | 4.72 | 4.93 | 0.29 | 0.24 | 4.94 |
| 23 | 3.17 | 3.05 | 0.16 | 0.15 | 4.85 | 4.64 | 4.36 | 0.29 | 0.22 | 4.94 |
| 24 | 3.29 | 3.29 | 0.21 | 0.23 | 6.90 | 4.46 | 4.46 | 0.33 | 0.36 | 8.12 |
| Average | 2.95 | 2.79 | 0.26 | 0.19 | 6.81 | 4.03 | 4.03 | 0.34 | 0.28 | 6.94 |
Table 8. Average error of the multiple Oudemansiella raphanipies in one image compared with manual measurement.

| | Metric | S | M | L | Total |
|--|--------|---|---|---|-------|
| | Number of Images | 10 | 10 | 10 | 30 |
| Mass | RMSE (g) | 0.590 | 0.493 | 1.323 | 0.802 |
| | MAE (g) | 1.454 | 1.323 | 2.367 | 1.714 |
| | MAPE (%) | 18.17 | 4.22 | 3.21 | 8.53 |
| Volume | RMSE (cm³) | 0.835 | 0.757 | 2.327 | 1.306 |
| | MAE (cm³) | 2.140 | 1.796 | 4.713 | 2.703 |
| | MAPE (%) | 17.94 | 3.77 | 3.66 | 8.46 |
Table 9. Average error of each Oudemansiella raphanipies in different images compared with manual measurement.

| | Metric | S | M | L | Total |
|--|--------|---|---|---|-------|
| | Number of Samples | 10 | 10 | 10 | 30 |
| Mass | RMSE (g) | 0.390 | 0.494 | 1.083 | 0.656 |
| | MAE (g) | 0.422 | 0.421 | 1.045 | 0.629 |
| | MAPE (%) | 48.62 | 12.76 | 13.97 | 25.12 |
| Volume | RMSE (cm³) | 0.556 | 0.752 | 1.884 | 1.064 |
| | MAE (cm³) | 0.601 | 0.634 | 1.830 | 1.022 |
| | MAPE (%) | 44.89 | 12.76 | 15.18 | 24.28 |

Share and Cite

Yin, H.; Lei, D.; Xiong, A.; Yuan, L.; Chen, M.; Xu, Y.; Wang, Y.; Xiao, H.; Wei, Q. A Pipeline for Mushroom Mass Estimation Based on Phenotypic Parameters: A Multiple Oudemansiella raphanipies Model. Agronomy 2026, 16, 124. https://doi.org/10.3390/agronomy16010124