Article

A Semi-Automatic and Visual Leaf Area Measurement System Integrating Hough Transform and Gaussian Level-Set Method

1 Department of Basic Sciences, Shanxi Agricultural University, Jinzhong 030801, China
2 College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China
3 School of Software, Shanxi Agricultural University, Jinzhong 030801, China
4 College of Agriculture, Shanxi Agricultural University, Jinzhong 030801, China
* Author to whom correspondence should be addressed.
Agriculture 2025, 15(19), 2101; https://doi.org/10.3390/agriculture15192101
Submission received: 2 September 2025 / Revised: 30 September 2025 / Accepted: 1 October 2025 / Published: 9 October 2025
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

Abstract

Accurate leaf area measurement is essential for plant growth monitoring and ecological research; however, it is often challenged by perspective distortion and color inconsistencies resulting from variations in shooting conditions and plant status. To address these issues, this study proposes a visual and semi-automatic measurement system. The system utilizes Hough transform-based perspective transformation to correct perspective distortions and incorporates manually sampled points to obtain prior color information, effectively mitigating color inconsistency. Based on this prior knowledge, the level-set function is automatically initialized. The leaf extraction is achieved through level-set curve evolution that minimizes an energy function derived from a multivariate Gaussian distribution model, and the evolution process allows visual monitoring of the leaf extraction progress. Experimental results demonstrate robust performance under diverse conditions: the standard deviation remains below 1 cm2, the relative error is under 1%, the coefficient of variation is less than 3%, and processing time is under 10 s for most images. Compared to the traditional labor-intensive and time-consuming manual photocopy-weighing approach, as well as OpenPheno (which lacks parameter adjustability) and ImageJ 1.54g (whose results are highly operator-dependent), the proposed system provides a more flexible, controllable, and robust semi-automatic solution. It significantly reduces operational barriers while enhancing measurement stability, demonstrating considerable practical application value.

1. Introduction

Leaves serve as the primary organs for plant photosynthesis and transpiration [1,2], with leaf area being a critical agronomic trait intrinsically linked to photosynthetic capacity, yield potential, and plant health. Consequently, the demand for accurate, efficient, and verifiable leaf area measurement methods remains high in fields such as crop breeding and precision cultivation.
Existing methods can be broadly categorized into direct and image-based techniques. Direct methods, such as the photocopy-weighing technique or the use of leaf area meters, often achieve high accuracy but are either destructive and labor-intensive or entail high equipment costs [3,4]. Regression equations offer non-destructive assessment but lack generality, yielding significant errors for irregularly shaped leaves or across different growth stages [4,5]. The proliferation of digital cameras and smartphones has catalyzed the development of image-based methods, which calculate area from the pixel ratio between a leaf and a reference object. These approaches are cost-effective and accessible but fall predominantly into two paradigms, each with inherent limitations for rigorous scientific application: (1) threshold-based methods [5,6,7,8,9], which are fast but fundamentally limited to grayscale intensity, failing to account for critical color variations caused by disease, nutrient status, or species differences; they discard the rich information in color channels and their correlations, making them highly susceptible to subjective threshold selection and to perspective distortion from non-standardized imaging setups; and (2) deep learning-based methods [10,11,12], which, while powerful, require extensive annotated datasets and computationally intensive training, often resulting in models that are species-specific and lack transparency for visual result validation.
This landscape reveals a persistent overall problem: a lack of a transparent, accurate, and practical imaging solution that is both robust to common imaging imperfections (e.g., perspective distortion, color variation) and accessible for research settings where building large datasets for deep learning is impractical. Many current methods prioritize either full automation at the cost of accuracy and verifiability or require extensive manual intervention. Therefore, there is a clear demand for a method that guarantees measurement integrity through geometric accuracy and precise segmentation, while providing a visual workflow for validation—a critical capability for robust agricultural applications.
To address this specific problem, we define the focused research objective of this study: to develop and validate a semi-automated, vision-based measurement system that explicitly prioritizes precision and verifiability over full automation. Our approach is designed for the specific application of laboratory-level phenotyping and plant physiological research, where measurement accuracy and the ability to audit results are paramount. The core of our methodology tackles two key challenges:
1.
Geometric Fidelity: We address perspective distortion algorithmically, not through fixed imaging rigs, by using Hough transform-based detection to correct scaling and angular inconsistencies. This correction establishes an accurate spatial mapping relationship, ensuring that measurements are based on a uniform metric scale and are independent of the camera’s position and angle.
2.
Precision Segmentation with Minimal Interaction: We introduce a novel segmentation workflow that requires minimal user input to guide high-precision automation. Through simple interactive sampling, the user provides prior knowledge by indicating approximate regions of the leaf and reference object. This critical yet minimal interaction enables our system to automatically generate initial contours within these regions, serving as the starting point for the variational level set evolution. The subsequent segmentation is guided by a multivariate Gaussian color model that fully leverages RGB channel information and their correlations, enabling robust performance across natural color variations.
This synthesis of automated geometric correction, minimal interactive guidance, and statistically driven evolution results in a unique system that outputs the extracted leaf and reference contours overlaid on the original image. This provides immediate visual verification of results—a feature often absent in both threshold and deep learning methods. By forgoing the pursuit of brittle full automation in favor of a robust, statistically grounded approach with minimal interaction, we demonstrate a pragmatic and highly accurate alternative that is training-free, species-agnostic, and tailored for researchers who require not just a result, but a trustworthy and auditable measurement process for critical leaf area data.

2. Materials and Methods

2.1. Experimental Materials

Leaf samples were collected from experimental fields at Shanxi Agricultural University (Taigu, Jinzhong, China), comprising green leaves of the same crop exhibiting varying shades due to environmental factors (light exposure duration, drought, pest/disease stress) as shown in Figure 1, along with morphologically diverse leaves from different crop species illustrated in Figure 2.

2.2. Method for Calculating Leaf Area

This system employs a reference marker-based approach for leaf area calculation, which involves positioning a reference marker of known area beside the target leaf. This method is consistent with techniques used in prior studies, such as Li Fangyi’s use of industrial-grade black frosted circular magnetic sheets (30 mm in diameter) to calculate the leaf area of individual rapeseed plants [9], and Guo Xiaojuan’s approach of utilizing a 1 cm black square for measuring wheat leaf area [13]. Figure 3 illustrates the integrated workflow for automated leaf area measurement. The process begins with image acquisition using a smartphone to capture pictures of a custom-designed leaf-bearing platform containing a leaf sample and a reference marker. Perspective distortion correction is then performed through Hough transform-based detection of the platform’s four interior edges, followed by a perspective transformation to rectify geometrical distortions. Prior knowledge is obtained by manually selecting 2–3 pixel points on both the leaf and reference regions in the rectified image, establishing adaptive thresholds, which are then used to generate approximate contours through a thresholding method. The color image segmentation employs a probabilistic model that treats pixel colors as multivariate Gaussian distributions, formulating the segmentation as a Maximum A Posteriori (MAP) optimization problem with an energy function definition. By minimizing an energy functional using variational level set methods—auto-initialized from the approximate contours—the system achieves tri-class segmentation (leaf, reference, and background). Finally, the actual leaf area is calculated by scaling the pixel ratio between the leaf and reference regions, derived from level set evolution results, against the known physical area of the reference marker.

2.2.1. Image Acquisition

Image acquisition employs a custom-designed leaf-bearing platform featuring reference markers (Figure 4), comprising three components: a specimen mounting plate with black rectangular borders, a flexible transparent plastic cover, and a securing clip. The plate’s black borders facilitate perspective distortion correction through automated border detection, while its white rectangular substrate displays a pure red reference marker in the upper-right quadrant—exploiting red-green chromatic contrast to optimize color-based segmentation between predominantly green foliage and reference. Platform dimensions are scaled according to anticipated leaf sizes.
During image capture using digital cameras or smartphones, plant leaves must be evenly positioned within the central zone of the mounting plate under the following constraints: (1) free of foliar overlap, (2) contained entirely within the white detection area, (3) spatially isolated from the red reference marker, and (4) fully covered within the camera’s field of view. Resultant images must exhibit sharp focus without motion artifacts to ensure measurement validity.

2.2.2. Image Rectification

Variable camera height and imaging angle during capture introduce perspective distortion in acquired leaf images, compromising measurement accuracy. To mitigate this effect, our methodology implements a geometric rectification pipeline: (1) binarization and edge detection of raw images; (2) Hough transform-based localization of the four vertices defining the platform’s white detection area; (3) perspective transformation utilizing these vertices to correct spatial distortions, thereby enhancing area measurement precision.
1. Perspective Transformation
Perspective transformation, grounded in optical imaging principles, rectifies geometric distortions through computation of control points to derive a transformation matrix [3,14,15]. This technique fundamentally reprojects images onto a novel viewing plane, with its generalized transformation formula expressed as Equation (1):
\[
(x', y', z') = (u, v, w)\, T, \qquad
T = \begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{pmatrix},
\]
where T denotes the perspective transformation matrix, (u, v) are the original pixel coordinates, w = 1, and a33 = 1. Defining the rectified pixel coordinates as (x, y), where x = x′/z′ and y = y′/z′, the perspective transformation equation is derived as Equation (2):
\[
x = \frac{x'}{z'} = \frac{a_{11} u + a_{21} v + a_{31}}{a_{13} u + a_{23} v + a_{33}}, \qquad
y = \frac{y'}{z'} = \frac{a_{12} u + a_{22} v + a_{32}}{a_{13} u + a_{23} v + a_{33}},
\]
As evident from the derivation, four corresponding point pairs yield eight equations. Solving this system simultaneously determines the eight unknown parameters in transformation matrix T (excluding a 33 ). This computed matrix subsequently enables perspective warping of all pixel coordinates in the source image [14,15].
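To make the parameter recovery concrete, the following pure-Python sketch (function names are ours, not part of the paper's implementation) stacks the two linear equations contributed by each of the four point pairs and solves the resulting 8 × 8 system by Gaussian elimination, following the row-vector convention of Equations (1) and (2):

```python
def solve_homography(src, dst):
    """Recover the 8 unknown entries of T (with a33 = 1) from 4 point pairs.
    Each pair (u, v) -> (x, y) yields two linear equations from Equation (2)."""
    A, b = [], []
    for (u, v), (x, y) in zip(src, dst):
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x]); b.append(x)
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y]); b.append(y)
    n = 8
    # Gaussian elimination with partial pivoting on the augmented matrix
    M = [row + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):
        a[r] = (M[r][n] - sum(M[r][c] * a[c] for c in range(r + 1, n))) / M[r][r]
    a11, a21, a31, a12, a22, a32, a13, a23 = a
    return [[a11, a12, a13], [a21, a22, a23], [a31, a32, 1.0]]

def warp_point(T, u, v):
    """Apply (x', y', z') = (u, v, 1) T, then normalize by z'."""
    x_ = u * T[0][0] + v * T[1][0] + T[2][0]
    y_ = u * T[0][1] + v * T[1][1] + T[2][1]
    z_ = u * T[0][2] + v * T[1][2] + T[2][2]
    return x_ / z_, y_ / z_
```

In practice a library routine such as OpenCV's getPerspectiveTransform performs this same computation; the explicit version is shown only to mirror the derivation.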
2. Hough Transform-Based Perspective Rectification
This work implements perspective correction through Hough transform-based line detection [16,17,18] and vertex computation, experimentally validating enhanced leaf area measurement accuracy. The workflow comprises four stages:
(1)
Preprocessing: Original images (Figure 5a) undergo binarization (Figure 5b) and edge detection (Figure 5c). Comparative evaluation of edge operators demonstrated the Laplacian of Gaussian (LoG) operator’s superior performance in extracting the platform’s rectangular contour.
(2)
Boundary Identification: Hough transform detects all line segments in Figure 5c, with the four longest lines corresponding to the platform’s borders (Figure 5d).
(3)
Geometric Registration:
a.
Source vertices: Intersection coordinates (u1, v1) to (u4, v4) computed from the line equations.
b.
Target rectangle: Constructed using maximum horizontal/vertical spans with vertices (x1, y1) to (x4, y4).
(4)
Spatial Transformation:
a.
Perspective matrix derivation: Plugging source/target coordinates into Equation (2).
b.
Bicubic interpolation: Mitigates pixel loss during warping.
c.
Background pruning: Extraneous regions beyond detection area removed (Figure 5e).
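Stage (3) above reduces to intersecting the detected border lines. If the Hough transform returns each line in normal form x cos θ + y sin θ = ρ, the four platform vertices are the pairwise intersections of adjacent borders, and the target rectangle is built from the maximum spans. A minimal sketch (helper names are ours, not the paper's implementation):

```python
import math

def line_intersection(rho1, theta1, rho2, theta2):
    """Intersect two Hough lines given in normal form x*cos(t) + y*sin(t) = rho."""
    a1, b1 = math.cos(theta1), math.sin(theta1)
    a2, b2 = math.cos(theta2), math.sin(theta2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel lines never intersect
    x = (rho1 * b2 - rho2 * b1) / det
    y = (a1 * rho2 - a2 * rho1) / det
    return x, y

def target_rectangle(corners):
    """Axis-aligned target built from the maximum horizontal/vertical spans."""
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return [(0, 0), (w, 0), (w, h), (0, h)]
```

Pairing the four source vertices with these target vertices supplies the coordinates substituted into Equation (2).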

2.2.3. Color Image Segmentation

This study’s image segmentation aims to partition plant leaf, reference marker, and background into three disjoint regions, calculating actual leaf area through the pixel ratio between leaf and reference regions combined with the reference’s known physical area. We implement a region-based approach that models pixel colors via multivariate Gaussian distributions, formulates segmentation as a Maximum A Posteriori (MAP) optimization problem with an energy function definition, and minimizes this functional using variational level set method to derive curve evolution equations. This methodology fully leverages RGB color information while enabling direct segmentation visualization through level set evolution results and simultaneously extracting both leaf and reference markers.
1. Probabilistic Segmentation Model
This study models image color values as random variables with specific probability distributions, transforming image segmentation into a Maximum A Posteriori (MAP) optimization problem to construct a region-based statistical segmentation model.
Assuming the rectified image G is partitioned into three regions: leaf region M1, reference marker region M2, and background region M3, we define M = {M1, M2, M3}. Image segmentation is formulated as maximizing the posterior probability p(M|G) [19,20]. By Bayesian theory, assuming pixel-wise independence within each region, p(M|G) can be expressed as:
\[
p(M \mid G) = p(\{M_i\} \mid G) = \frac{p(G \mid \{M_i\})\, p(\{M_i\})}{p(G)}
= \frac{\prod_i \prod_{s \in M_i} p_i(G(s))\, p(\{M_i\})}{p(G)}, \qquad i \in \{1, 2, 3\},
\]
Since image G is given, p(G) remains constant. Defining p({Mi}) as the prior probability of pixels assigned to region Mi and assuming uniform priors (p({Mi}) = 1/3), the maximum a posteriori (MAP) estimate simplifies to maximizing the likelihood term. Thus, ignoring constant and uniform factors, the energy functional E is defined as:
\[
E = \max p(M \mid G)
= \max \prod_i \prod_{s \in M_i} p_i(G(s))
= \max \Big( \prod_{s \in M_1} p_1(G(s)) \prod_{s \in M_2} p_2(G(s)) \prod_{s \in M_3} p_3(G(s)) \Big),
\]
To facilitate computation, we take the logarithm of the energy functional, transforming the above expression into:
\[
\begin{aligned}
E &= \max \log \Big( \prod_{s \in M_1} p_1(G(s)) \prod_{s \in M_2} p_2(G(s)) \prod_{s \in M_3} p_3(G(s)) \Big) \\
  &= \max \Big( \sum_{s \in M_1} \log p_1(G(s)) + \sum_{s \in M_2} \log p_2(G(s)) + \sum_{s \in M_3} \log p_3(G(s)) \Big) \\
  &= \max \Big( \sum_{i=1}^{3} \int_{M_i} \log p_i(G)\, dx\, dy \Big)
\end{aligned}
\]
Here, pi(G) denotes the probability density of a pixel value within region Mi, where each pixel contains three color components: red, green, and blue. To characterize color information and inter-channel correlations across the three regions, pixel values are assumed to follow a multivariate Gaussian distribution, whose probability density function is given by Equation (6):
\[
p_i(G) = \frac{1}{(2\pi)^{d/2} |\Sigma_i|^{1/2}} \exp \Big( -\tfrac{1}{2} (G - \mu_i)^{T} \Sigma_i^{-1} (G - \mu_i) \Big), \qquad i \in \{1, 2, 3\},
\]
Here, μi and Σi denote the mean vector and covariance matrix for the i-th region, respectively, with d representing the data dimensionality. For RGB color images, d = 3.
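For illustration, Equation (6) can be evaluated for an RGB pixel (d = 3) with explicit cofactor formulas for the 3 × 3 determinant and inverse; working in log space, as the algorithm does from Equation (5) onward, avoids numerical underflow. This is a sketch, not the paper's code:

```python
import math

def mvn_logpdf(g, mu, cov):
    """Log of the Equation (6) density for a d = 3 pixel g, mean mu,
    covariance cov, using explicit 3x3 cofactor formulas."""
    d = [g[i] - mu[i] for i in range(3)]
    (a, b, c), (p, q, r), (s, t, u) = cov
    det = a * (q * u - r * t) - b * (p * u - r * s) + c * (p * t - q * s)
    # inverse = adjugate / determinant
    inv = [[(q * u - r * t) / det, (c * t - b * u) / det, (b * r - c * q) / det],
           [(r * s - p * u) / det, (a * u - c * s) / det, (c * p - a * r) / det],
           [(p * t - q * s) / det, (b * s - a * t) / det, (a * q - b * p) / det]]
    quad = sum(d[i] * inv[i][j] * d[j] for i in range(3) for j in range(3))
    return -0.5 * quad - 0.5 * math.log(det) - 1.5 * math.log(2 * math.pi)
```

At the mean with an identity covariance the quadratic term vanishes, so the log-density reduces to the normalizing constant −(3/2) log 2π.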
The variational level set method seeks to minimize the energy functional. Consequently, we negate the functional defined in Equation (5), converting it to Equation (7):
\[
\begin{aligned}
E &= -\sum_{i=1}^{3} \int_{M_i} \log p_i(G)\, dx\, dy \\
  &= \sum_{i=1}^{3} \int_{M_i} \Big[ \tfrac{1}{2} (G - \mu_i)^{T} \Sigma_i^{-1} (G - \mu_i) + \tfrac{1}{2} \log |\Sigma_i| + \log (2\pi)^{d/2} \Big] dx\, dy \\
  &= \sum_{i=1}^{3} \int_{M_i} \Big[ \tfrac{1}{2} (G - \mu_i)^{T} \Sigma_i^{-1} (G - \mu_i) + \tfrac{1}{2} \log |\Sigma_i| \Big] dx\, dy + \sum_{i=1}^{3} \int_{M_i} \log (2\pi)^{d/2}\, dx\, dy
\end{aligned}
\]
Multiplying Equation (7) by 2 and discarding constant terms preserves the optimization solution. Thus, Equation (7) is reformulated as:
\[
E = \sum_{i=1}^{3} \int_{M_i} \Big[ (G - \mu_i)^{T} \Sigma_i^{-1} (G - \mu_i) + \log |\Sigma_i| \Big] dx\, dy,
\]
2. Level Set Curve Evolution Implementation
This work minimizes the energy functional in Equation (8) by leveraging variational level set curve evolution [21,22,23,24,25,26,27]. The implementation comprises: (1) representing image regions through level set functions, and (2) deriving the curve evolution equation via variational calculus. This drives the level set function along the steepest energy descent gradient, ultimately yielding planar closed curves that partition the image into non-overlapping regions.
Level Set Representation of Image Regions: Let Ω denote the image domain. Two level set functions Φ 1 and Φ 2 define three mutually exclusive regions:
\[
\text{Leaf region: } M_1 = M_{\Phi_1}, \qquad
\text{Reference region: } M_2 = M_{\Phi_1}^{C} \cap M_{\Phi_2}, \qquad
\text{Background region: } M_3 = M_{\Phi_1}^{C} \cap M_{\Phi_2}^{C},
\]
where the planar curve γi = {x ∈ Ω | Φi(x) = 0}, i ∈ {1, 2}, bounds the region MΦi (interior: {x ∈ Ω | Φi(x) > 0}) and MΦi^C (exterior: {x ∈ Ω | Φi(x) < 0}). Figure 6 illustrates this regional partitioning scheme.
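The partition is easy to express in code: the signs of the two level set functions at a pixel determine its region. A minimal sketch mirroring the definitions above:

```python
def region_label(phi1, phi2):
    """Assign a pixel to leaf (1), reference (2), or background (3)
    from the signs of the two level set functions (interior: phi > 0)."""
    if phi1 > 0:
        return 1  # M1 = {phi1 > 0}
    if phi2 > 0:
        return 2  # M2 = {phi1 <= 0} and {phi2 > 0}
    return 3      # M3 = {phi1 <= 0} and {phi2 <= 0}
```

Because Φ1 is tested first, the three regions are mutually exclusive and jointly cover the image domain, exactly as required by the partition M = {M1, M2, M3}.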
Thus, Equation (8) can be expressed in terms of level set functions as:
\[
E = \int_{M_{\Phi_1}} e_1\, dx\, dy + \int_{M_{\Phi_1}^{C} \cap M_{\Phi_2}} e_2\, dx\, dy + \int_{M_{\Phi_1}^{C} \cap M_{\Phi_2}^{C}} e_3\, dx\, dy,
\]
where e_i = (G − μ_i)^T Σ_i^{−1} (G − μ_i) + log|Σ_i|.
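Minimizing this energy drives each pixel toward the region whose Gaussian model explains it best, i.e., the region with the smallest e_i. The toy example below illustrates this with diagonal covariances (a simplification for brevity; the paper uses full 3 × 3 covariances) and made-up leaf/reference/background statistics:

```python
import math

def e_i(g, mu, var):
    """Region energy e_i for a DIAGONAL covariance (illustrative
    simplification; the paper's model uses full covariance matrices)."""
    quad = sum((g[k] - mu[k]) ** 2 / var[k] for k in range(3))
    return quad + math.log(var[0] * var[1] * var[2])

# Hypothetical RGB statistics for the three regions
leaf = ([60, 140, 50], [400, 400, 400])    # greenish
ref = ([200, 30, 40], [400, 400, 400])     # red marker
bg = ([240, 240, 240], [400, 400, 400])    # white substrate

g = [70, 150, 45]  # a greenish pixel
# The pixel is assigned to the region with minimal energy e_i
label = min((e_i(g, m, v), i) for i, (m, v) in enumerate([leaf, ref, bg], 1))[1]
```

Here the greenish pixel lands in region 1 (leaf), since its quadratic distance to the leaf mean is far smaller than to the reference or background means.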
To obtain smooth contour evolution, we incorporate the regularization term from the Chan–Vese (C-V) model [21], reformulating Equation (9) as Equation (10):
\[
E = \int_{M_{\Phi_1}} e_1\, dx\, dy + \int_{M_{\Phi_1}^{C} \cap M_{\Phi_2}} e_2\, dx\, dy + \int_{M_{\Phi_1}^{C} \cap M_{\Phi_2}^{C}} e_3\, dx\, dy + \lambda \sum_{i=1}^{2} \oint_{\gamma_i} ds,
\]
Applying shape derivative calculus [28] to Equation (10) yields the curve evolution equation through partial differentiation:
\[
\begin{aligned}
\frac{\partial \Phi_1}{\partial t} &= \big( -e_1 + e_2\, \chi_{\{\Phi_2 > 0\}} + e_3\, \chi_{\{\Phi_2 \le 0\}} + \lambda K_1 \big) |\nabla \Phi_1| \\
\frac{\partial \Phi_2}{\partial t} &= \big( \chi_{\{\Phi_1 \le 0\}} (e_3 - e_2) + \lambda K_2 \big) |\nabla \Phi_2|
\end{aligned}
\]
where χ{Φi≤0} = 1 if Φi ≤ 0 and 0 if Φi > 0; χ{Φi>0} = 1 if Φi > 0 and 0 if Φi ≤ 0; and Ki = div(∇Φi/|∇Φi|), i ∈ {1, 2}. Experiments confirm that the curve evolution Equation (11) achieves excellent convergence and high precision but exhibits slow computational efficiency. To enhance evolution speed, we refine this formulation by incorporating the Heaviside function and Dirac function defined in Equation (12).
\[
H(z) = \begin{cases} 1, & z \ge 0 \\ 0, & z < 0 \end{cases}, \qquad
\delta(z) = \frac{d}{dz} H(z),
\]
Furthermore, to simultaneously streamline level set initialization and eliminate re-initialization during evolution, we integrate the internal penalty energy P(Φ) = ∫Ω ½(|∇Φ| − 1)² dx dy proposed by Chunming Li [29]. This dual-purpose approach accelerates curve evolution and enables Φ0 to be initialized as follows:
\[
\Phi_0(x, y) = \begin{cases}
d, & (x, y) \text{ inside the plane curve } \gamma_i \\
0, & (x, y) \text{ on the plane curve } \gamma_i \\
-d, & (x, y) \text{ outside the plane curve } \gamma_i
\end{cases}
\]
Therefore, Equation (10) can be expressed via the Heaviside and Dirac functions as:
\[
\begin{aligned}
E ={} & \int_{\Omega} e_1 H(\Phi_1)\, dx\, dy + \int_{\Omega} e_2 \big(1 - H(\Phi_1)\big) H(\Phi_2)\, dx\, dy \\
 & + \int_{\Omega} e_3 \big(1 - H(\Phi_1)\big) \big(1 - H(\Phi_2)\big)\, dx\, dy \\
 & + \lambda \sum_{i=1}^{2} \int_{\Omega} |\nabla H(\Phi_i)|\, dx\, dy + \mu \sum_{i=1}^{2} \int_{\Omega} \tfrac{1}{2} \big( |\nabla \Phi_i| - 1 \big)^2\, dx\, dy
\end{aligned}
\]
Applying variational calculus [23] to the above formulation, we derive its Euler–Lagrange equation, yielding the evolution equation shown in Equation (15):
\[
\begin{aligned}
\frac{\partial \Phi_1}{\partial t} &= \delta(\Phi_1) \big( -e_1 + e_2 H(\Phi_2) + e_3 (1 - H(\Phi_2)) + \lambda K_1 \big) + \mu \big( \Delta \Phi_1 - K_1 \big) \\
\frac{\partial \Phi_2}{\partial t} &= \delta(\Phi_2) \big( (1 - H(\Phi_1)) (e_3 - e_2) + \lambda K_2 \big) + \mu \big( \Delta \Phi_2 - K_2 \big)
\end{aligned}
\]
where Δ is the Laplacian operator.
Empirical analysis confirms that the evolution equation (Equation (11)) delivers robust convergence and high segmentation accuracy but exhibits slow computational performance. In contrast, Equation (15) achieves accelerated evolution yet suffers from unstable convergence and diminished precision. To resolve this accuracy–speed tradeoff, we implement an integrative framework combining both equations, enhancing computational efficiency while maintaining segmentation fidelity.
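The hybrid schedule can be sketched as a simple two-phase loop; the step functions below are placeholders for the discretized updates of Equations (15) and (11), not actual implementations:

```python
def evolve(phi1, phi2, fast_step, accurate_step, iter1, iter2):
    """Hybrid schedule: iter1 steps of the fast Equation (15) update,
    then iter2 steps of the accurate Equation (11) update.
    fast_step/accurate_step are placeholder callables mapping
    (phi1, phi2) -> (phi1, phi2)."""
    for _ in range(iter1):
        phi1, phi2 = fast_step(phi1, phi2)
    for _ in range(iter2):
        phi1, phi2 = accurate_step(phi1, phi2)
    return phi1, phi2
```

Running the fast update first moves the contours quickly toward the leaf and reference boundaries; the accurate update then refines them, which is how the framework trades speed against segmentation fidelity.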

2.2.4. Algorithm Description and System Development

1. Algorithm Description
The leaf area calculation algorithm presented in this study comprises the following steps:
(1)
Image Rectification: Acquired images undergo rectification using the method detailed in Section 2.2.2, yielding rectified images. Rectification can be performed either manually or automatically.
(2)
Adaptive Thresholding and Rough Segmentation: On the rectified image, manually select n1 sample points on the leaf and n2 points on the reference object to obtain two sets of adaptive thresholds. The RGB color values acquired from these samples are converted to the HSV color space, and the minimum and maximum hue (H) values over the sampled points define a hue range for the leaf and for the reference object, respectively. These hue ranges are used as thresholds to segment the approximate regions of the leaf and the reference object, thereby providing a foundation for the automatic initialization of the level set initial contour.
(3)
Initial Contour Definition: Within the segmented rough regions, automatically and randomly define multiple small circles as initial contours using Equation (13) (where d is the function value). Parameters include scaling factor p, circle radius r, number of initial leaf contours c1, and number of initial reference object contours c2.
(4)
Curve Evolution: Perform curve evolution for iter1 iterations according to Equation (15), followed by iter2 iterations according to Equation (11) (total iterations iter = iter1 + iter2). This process converges with the final contours of the leaf and reference object.
(5)
Area Calculation: Extract the final contours to determine the number of pixels within the leaf region (N_leaf) and the reference object region (N_ref). Given the known true area of the reference object (A_ref), the true leaf area (A_leaf) is calculated as [30]:
\[
A_{\text{leaf}} = \frac{N_{\text{leaf}}}{N_{\text{ref}}} \times A_{\text{ref}}
\]
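Steps (2) and (5) are straightforward to sketch in code: the hue-range computation uses the standard RGB-to-HSV conversion, and the area formula is the pixel-ratio scaling from step (5). The function names are illustrative, not the system's API:

```python
import colorsys

def hue_range(samples_rgb):
    """Hue interval [h_min, h_max] from manually sampled RGB points (step 2).
    Hue is returned in colorsys's [0, 1) convention."""
    hues = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0]
            for r, g, b in samples_rgb]
    return min(hues), max(hues)

def leaf_area(n_leaf, n_ref, a_ref):
    """Step (5): scale the leaf/reference pixel ratio by the
    reference marker's known physical area."""
    return n_leaf / n_ref * a_ref
```

For example, two greenish samples produce a narrow hue band around 1/3 (pure green), which is then used to threshold a rough leaf region; a leaf covering 5000 pixels against a 250-pixel, 4 cm² reference yields 80 cm².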
2. System Development
(1)
Computational Environment
All subsequent processing and analysis were performed on a laptop computer running Windows 11. The development platform was MATLAB R2023b. The hardware configuration included an AMD Ryzen 9 6900HX CPU, 32 GB of RAM, and an NVIDIA GeForce RTX 3060 Laptop GPU.
(2)
Imaging Setup
The sample images were captured using a Huawei P40 Pro smartphone camera manufactured by Huawei Technologies Co., Ltd. in Shenzhen, China. Our methodology is specifically designed to be robust to varying shooting distances and angles through perspective correction. Therefore, the shooting height and angle were unconstrained, but the phone was positioned as close to the subject as possible to fulfill two criteria: (1) to include the entire leaf platform within the field of view, and (2) to ensure high-resolution image quality capable of revealing fine edge details of the leaves.
(3)
App Development
To enhance operational efficiency and practicality, the system provides two deployment options: a standalone EXE application Version 1.0 and a Web-based application Version 1.0. The EXE version requires local installation but operates independently of internet connectivity, making it suitable for environments with limited or unstable network access. However, it consumes more storage space compared to the web version. The web application, accessible via a browser without installation, offers greater accessibility and convenience for multi-user scenarios but relies on server availability and maintenance.
As shown in Figure 7a, the EXE interface enables full parameter customization—including sampling point quantity, initial contour circle settings, image scaling ratio, and iteration counts—within a localized environment. Meanwhile, Figure 7b illustrates the web interface, which provides similar functionality through a browser-based platform. In both versions, the initial contour circles are automatically generated by the algorithm, ensuring consistency and objectivity in the initialization process.

3. Results

To validate the practicality and measurement accuracy of our leaf area extraction algorithm, we designed two complementary experiments. The first experiment evaluated image rectification functionality using geometrically congruent paper specimens with contrasting colors. The second experiment assessed algorithmic performance across diverse crop species by comparing leaf area measurements derived from our method under multiple parameter configurations against ground-truth values obtained via manual photocopy-weighing technique. This comparison encompassed leaves exhibiting substantial variations in morphology and coloration.
To ensure the conciseness and readability of the tables, this study adopts the following measures. First, a summary of evaluation metrics and their mathematical definitions is provided in Table A1 (Appendix A); see the Abbreviations list for acronyms of the metrics. Second, standardized abbreviations are employed consistently across the Results and Discussion section. Table 1 lists these abbreviations alongside their full terms for clarity and reference. These primarily encompass key concepts in image processing terminology, measurement parameters, and method names.

3.1. Image Rectification Functionality Validation

Under controlled laboratory conditions, geometrically congruent red and blue paper specimens were positioned on our reference-free Leaf Platform. Images were acquired from multiple perspectives to induce perspective distortions. The proposed method extracted specimen areas, computing red-to-blue area ratios. Ten replicate measurements for unrectified/rectified states were averaged per condition. Table 2 presents representative pre-/post-rectification ratios for diverse geometries (triangles, quadrilaterals, circles, irregular quadrilaterals).
As evidenced in Table 2, unrectified specimens exhibited substantial relative errors (RE) in area ratios (up to 33.32%). Post-rectification ratios consistently approached 1.0 with RE below 1%, confirming the effectiveness of our perspective rectification method in meeting agricultural measurement precision requirements.

3.2. Leaf Extraction Workflows for Diverse Crop Species

The proposed method extracted leaves from diverse crop species (Figure 1 and Figure 2) exhibiting distinct morphological and chromatic characteristics: tomato leaves with complex marginal structures, grape leaves featuring serrated edges, pear leaves showing variegated pigmentation, and broomcorn millet leaves demonstrating simple, smooth margins. Representative extraction workflows are documented in Table 3.
Table 3 demonstrates robust segmentation performance across diverse crops and leaf morphologies. The method successfully extracts both individual and clustered leaves (Row 6) while generating precise leaf and reference object contours (Column 8), enabling visual quality assessment. Key parameter adaptation principles include:
(1)
Size-dependent scaling: Increase initial circle radius, count, and iterations for larger leaves; decrease for smaller specimens.
(2)
Over-segmentation compensation: Elevate circle count when rough segmentation exceeds target boundaries to ensure adequate on-leaf initialization.
(3)
Algorithmic resilience: Randomized circle placement achieves accurate contours even with suboptimal initialization (e.g., broomcorn millet, Row 5: 2/5 circles overlapping leaf).

3.3. Comprehensive Leaf Area Measurements Across Crops and Parameters

Table 4 presents comprehensive leaf area measurements for pear, tomato, broomcorn millet and grape under varying parameters (imaging angles and heights, sampling points, initial contours, iterations). Results include mean area (derived from ≥100 repetitions; the specific data sources and measurement repetitions for each crop are detailed in Table 5), standard deviation (SD), coefficient of variation (CV), and errors relative to photocopy-weighing (root mean square error RMSE, relative error RE; 10 weighing repetitions). The method demonstrates exceptional stability (SD < 1 cm2) and precision (RMSE < 2 cm2) for most images, though tomato and grape exhibit marginally higher RMSE than millet due to error amplification from complex margins in the photocopy-weighing method. Most measurements satisfy strict accuracy thresholds: absolute error (AE) < 1 cm2, CV < 3%, RE < 1%. Computational efficiency is evidenced by minimum measurement times < 10 s per leaf—representing substantial efficiency gains over manual techniques. Owing to space constraints, only a subset of the measurement data for Pear_5 under different imaging angles, heights, and parameters is presented in Table 6, along with its statistical summary.
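For reference, the evaluation metrics reported above can be computed as follows (a sketch; we assume the sample standard deviation with n − 1 in the denominator, and RE taken between the mean measurement and the photocopy-weighing ground truth):

```python
import math

def evaluation_metrics(measurements, truth):
    """Metrics of the kind reported in Table 4: mean, SD, CV (%), RMSE, RE (%).
    `measurements` are repeated area estimates; `truth` is the reference value."""
    n = len(measurements)
    mean = sum(measurements) / n
    sd = math.sqrt(sum((m - mean) ** 2 for m in measurements) / (n - 1))
    cv = sd / mean * 100                                     # relative spread
    rmse = math.sqrt(sum((m - truth) ** 2 for m in measurements) / n)
    re = abs(mean - truth) / truth * 100                     # relative error
    return mean, sd, cv, rmse, re
```

For instance, two measurements of 9 and 11 cm² against a 10 cm² ground truth give a mean of 10 cm², SD ≈ 1.41 cm², CV ≈ 14.1%, RMSE = 1 cm², and RE = 0%.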

3.4. Comparison with OpenPheno and ImageJ

OpenPheno is an open-access, user-friendly, and smartphone-based platform designed for instant plant phenotyping [31]. Implemented as a WeChat Mini Program, it supports rapid plant trait analysis directly on mobile devices. In this study, leaf area was measured using both OpenPheno and ImageJ—a widely adopted image processing tool—and their performance was compared against the proposed method.
During the experiment, each leaf was placed on the platform described in Section 2.2.1 and remained fixed throughout the process. First, leaf area was measured with the proposed method using images captured via a smartphone. A one-yuan coin was then placed beside the leaf following OpenPheno’s scale calibration protocol, and another image was acquired for analysis with both OpenPheno and ImageJ. Comparative results of the three methods are summarized in Table 7.
The results indicate that all three methods successfully extracted leaf area for single-branch leaves. However, OpenPheno exhibited occasional inaccuracies when segmenting multi-branch leaves, as shown in the multi-branch leaf columns of Table 7. Measured leaf areas also varied across methods.
OpenPheno yielded inconsistent results when applied to images of the same leaf captured at different heights. While it offers fully automated measurement without parameter adjustment—promoting reproducibility under consistent conditions—it lacks the flexibility for parameter tuning when extraction errors are visually detected.
ImageJ also showed variability in measured area when different reference objects were used within the same image. This inconsistency stems from its reliance on manual operations, including scale calibration, threshold adjustment, and/or manual leaf outlining. As a result, measurement accuracy is highly user-dependent, and reproducibility is limited.
In contrast, the proposed method employs a semi-automatic workflow. Although not fully automated, it allows parameter adjustment when erroneous extraction is identified, facilitating accurate leaf area measurement. This approach strikes a balance between automation and manual intervention, delivering robust and accurate results through simple and intuitive parameter settings.
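All three workflows ultimately convert a segmented pixel count into physical area via an object of known size (the platform's reference marker here, a one-yuan coin in OpenPheno, a user-set scale in ImageJ). A minimal sketch of that conversion; the function name and the numbers are illustrative, not taken from the paper:

```python
def leaf_area_cm2(leaf_pixels, ref_pixels, ref_area_cm2):
    """Convert a leaf pixel count to cm^2 using a reference object of
    known physical area imaged at the same scale."""
    return leaf_pixels * ref_area_cm2 / ref_pixels

# e.g., 50,000 leaf pixels when a 9 cm^2 marker covers 10,000 pixels
area = leaf_area_cm2(50_000, 10_000, 9.0)  # 45.0 cm^2
```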

4. Discussion

4.1. Impact of Manual Sampling Points

Manual sampling point selection is sensitive to lighting and viewing angle variations and can generate notably different rough segmentation regions, especially for the leaf, as documented in Table 8. While these variations affect the level-set initialization contours, our method demonstrates remarkable consistency: under heterogeneous illumination, divergent sampling points yield different initial regions but ultimately produce equivalent leaf areas (SD = 0.8376 cm2) and comparable processing times (SD = 0.8674 s). With both standard deviations below 1.0, the results confirm the method's robustness to sampling-point variability.
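The color prior behind the rough segmentation can be sketched as fitting a multivariate Gaussian to the RGB values of the manually sampled pixels and keeping pixels with small Mahalanobis distance. This is an illustrative reconstruction under assumed names and an assumed chi-square cutoff, not the authors' exact formulation:

```python
import numpy as np

def rough_segment(image, seed_points, chi2_thresh=7.81):
    """Rough leaf mask from manually sampled seed pixels.

    Fits a multivariate Gaussian to the RGB values at the seed points and
    keeps pixels whose squared Mahalanobis distance falls below a
    chi-square threshold (7.81 is roughly the 95% quantile for 3 DOF —
    the threshold value is our assumption, not the paper's).
    """
    h, w, _ = image.shape
    samples = np.array([image[r, c] for r, c in seed_points], dtype=float)
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(3)  # regularize
    inv = np.linalg.inv(cov)
    diff = image.reshape(-1, 3).astype(float) - mu
    d2 = np.einsum('ij,jk,ik->i', diff, inv, diff)  # squared Mahalanobis
    return (d2 < chi2_thresh).reshape(h, w)
```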

4.2. Impact of Initial Contour Configuration

A broomcorn millet case study (rough segmentation regions shown in Figure 8) examined stochastic initial contour effects by varying target contour count (c1) under fixed parameters (p = 0.2, r = 50p, iter1 = 4, iter2 = 6, c2 = 1). As Table 9 demonstrates: (1) At c1 = 1, segmentation success depended stochastically on circle-leaf overlap (Column 2: success; Column 1: failure); (2) For c1 ≥ 2, correct segmentation occurred consistently despite partial coverage (Columns 4–6). Method robustness was confirmed through minimal outcome variation (leaf area SD = 0.5526 cm2; processing time SD = 0.8740 s; both <1.0), validating reliability across contour initialization scenarios.
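The stochastic placement of c1 initial circles of radius r (the paper's convention is r = 50p for scaling factor p) can be sketched as a union-of-circles level-set initialization. The function and its signed-distance convention are our assumptions, not the paper's code:

```python
import numpy as np

def init_contours(mask_shape, c1, radius, rng=None):
    """Place c1 random circles as the initial level-set contours.

    Returns phi, negative inside any circle and positive outside
    (a common level-set initialization).
    """
    rng = np.random.default_rng(rng)
    h, w = mask_shape
    yy, xx = np.mgrid[0:h, 0:w]
    phi = np.full(mask_shape, np.inf)
    for _ in range(c1):
        cy = rng.integers(radius, h - radius)
        cx = rng.integers(radius, w - radius)
        d = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2) - radius
        phi = np.minimum(phi, d)  # union of the circles
    return phi
```

With c1 = 1 the single circle may miss the leaf entirely, which matches the stochastic success/failure reported above; larger c1 makes some overlap far more likely.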

4.3. Impact of Scaling Factor Variation

To evaluate scaling factor (p) effects, rectified images of broomcorn millet (3507 × 2358 px; smooth margins), grape (2518 × 3715 px; serrated margins), and tomato (1969 × 2879 px; serrated and lobed) were analyzed under fixed parameters (r = 50p, c1 = 3, c2 = 1, iter1 = 4, iter2 = 6) with all metrics averaged over 10 repetitions. As demonstrated in Figure 9a and Table 10, segmentation time scales near-exponentially with p (Coefficients of Determination R2 > 0.99) and correlates with image dimensions rather than biological leaf area—evidenced by 65.7% faster tomato processing (684 s vs. 1995.62 s at p = 1) despite its 180% larger leaf area (96.61 cm2 vs. 34.46 cm2). Crucially, leaf area measurements exhibit exceptional stability across scaling factors (Figure 9b), with all standard deviations <1 cm2 (max SD = 0.57 cm2 for tomato) and coefficients of variation <1% (Table 11). Observed RMSE variations (0.28–0.57 cm2) primarily reflect reference method limitations: minimal error for smooth-margined millet (0.28 cm2), moderate for serrated-only grape (0.52 cm2), and maximal for serrated and lobed tomato (0.57 cm2), confirming the algorithm’s scaling-invariant precision while attributing discrepancies to manual measurement challenges with complex morphologies.
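The near-exponential time-versus-p relationship can be checked by fitting t = a·exp(b·p) as a linear least-squares problem in log space. The sketch below uses synthetic timings, not the paper's measurements:

```python
import numpy as np

# Fit t = a * exp(b * p) by linear regression on log(t).
p = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
t = 5.0 * np.exp(4.8 * p)                 # synthetic timings (s)
b, log_a = np.polyfit(p, np.log(t), 1)    # slope, intercept in log space
t_hat = np.exp(log_a) * np.exp(b * p)

# Coefficient of determination on the original scale
ss_res = np.sum((t - t_hat) ** 2)
ss_tot = np.sum((t - np.mean(t)) ** 2)
r2 = 1 - ss_res / ss_tot
```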

4.4. Impact of Iteration Counts

This study examines the impact of iteration counts iter1 and iter2 on segmenting grape leaf images using fixed parameters (p = 0.2, r = 50p, c1 = 3, c2 = 1). As summarized in Table 12, when only Equation (11) was applied (e.g., iter1 = 0, iter2 = 100), curve evolution was slow yet stable, requiring 80 iterations to approximate the leaf contour but yielding a leaf area consistent with manual measurements. Using only Equation (15) (e.g., iter1 = 100, iter2 = 0) accelerated initial contour detection, achieving a rough outline within 10 iterations, but at the cost of reduced smoothness and increased area error due to poor convergence. A hybrid approach (e.g., iter1 = 80, iter2 = 20) balanced speed and accuracy, delivering a well-converged contour with higher precision.
Two extended experiments were conducted: (a) iter1 fixed at 10 with iter2 varying from 2 to 100, and (b) iter2 fixed at 10 with iter1 varying similarly. Each configuration was executed 10 times to compute average leaf area and segmentation time. Results indicate that increasing either iter1 or iter2 independently leads to decreased estimated leaf area and linearly increased computation time (Figure 10). Optimal performance requires avoiding extreme values: insufficient iter1 causes incomplete segmentation, while excessive iter1 prolongs runtime without improving accuracy; insufficient iter2 reduces contour smoothness, and too many iter2 iterations waste computation without enhancing convergence. In practice, iter1 should be set to allow the curve to approach the leaf margin, followed by a moderate iter2 to ensure stable and smooth convergence.
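The iter1/iter2 split can be illustrated with a toy two-stage evolution: a fast region-force stage (iter1) drives the curve toward the object, and a smoothing stage (iter2) regularizes it. This sketch uses a Chan–Vese-style region force rather than the paper's Gaussian-model energy, and all names are our assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def evolve(phi, image, iter1, iter2, dt=0.5):
    """Toy two-stage curve evolution mirroring the iter1/iter2 split."""
    for _ in range(iter1):                     # fast approach stage
        inside = phi < 0
        c_in = image[inside].mean() if inside.any() else 0.0
        c_out = image[~inside].mean() if (~inside).any() else 0.0
        force = (image - c_out) ** 2 - (image - c_in) ** 2
        phi = phi - dt * force                 # move toward the object
    for _ in range(iter2):                     # smoothing stage
        phi = gaussian_filter(phi, sigma=1.0)  # curvature-like smoothing
    return phi
```

Too small an iter1 leaves the zero-level set short of the object; too large an iter2 only burns time once the contour has converged, matching the behavior reported above.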

4.5. Impact of Shooting Angle and Height

This study examines the effect of varying shooting angles and heights on tomato leaf area measurements using a fixed parameter set (p = 0.2, r = 50p, c1 = 3, c2 = 1, iter1 = 4, iter2 = 6). The same leaf was photographed from multiple angles and heights, both intact and after being cut. For each image, leaf area was computed 10 times to determine the average value, average segmentation time, and corresponding standard deviations.
As shown in Table 13, the standard deviations for both average leaf area and segmentation time remained below 1 across all shooting conditions, demonstrating high measurement stability and consistency. These findings confirm that the method is robust to variations in both shooting angle and height and effectively measures leaf area for both intact and cut leaves.

4.6. Impact of Leaf Color Variation

This study investigates the impact of leaf color variation on segmentation performance using pear tree leaves with significant color differences as examples. All original images were of equal size, and experiments were conducted with fixed parameters: p = 0.2, r = 50p, c1 = 3, c2 = 1, iter1 = 1, iter2 = 5.
As shown in Table 14, leaves of different colors from the same crop species were successfully extracted with high accuracy. The measured leaf areas showed close agreement with GT areas, with all absolute errors (AE) remaining below 1 cm2. The segmentation time was found to be independent of the original image size but correlated with the size of the corrected image after Hough transformation.

4.7. Performance Evaluation of Different Approaches

In contrast to threshold-based techniques that are sensitive to lighting and color variations, our model makes full use of color information to enhance robustness. Unlike deep learning approaches, our method requires no training, is species-agnostic, and maintains complete operational transparency. Compared to OpenPheno, which offers full automation but lacks parameter adjustability, and ImageJ, which depends heavily on manual intervention and yields operator-dependent results, our solution strikes an optimal balance by combining automation with flexible user control. By integrating automated geometric correction, intuitive interaction, and statistically driven segmentation, our system outputs both leaf and reference contours overlaid on the original image for real-time visual verification—a feature rarely supported in existing solutions.
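The overlay output described above (both contours drawn on the original image) can be sketched in pure NumPy by marking mask pixels whose 4-neighborhood leaves the mask; this is an illustrative stand-in for the system's rendering, not its actual code:

```python
import numpy as np

def overlay_contours(image_rgb, leaf_mask, ref_mask):
    """Overlay leaf (green) and reference (red) outlines on the image
    for visual verification. An outline pixel is a mask pixel with at
    least one 4-neighbor outside the mask."""
    def outline(mask):
        interior = mask.copy()
        interior[1:, :] &= mask[:-1, :]   # require up-neighbor in mask
        interior[:-1, :] &= mask[1:, :]   # require down-neighbor
        interior[:, 1:] &= mask[:, :-1]   # require left-neighbor
        interior[:, :-1] &= mask[:, 1:]   # require right-neighbor
        return mask & ~interior
    out = image_rgb.copy()
    out[outline(leaf_mask)] = (0, 255, 0)
    out[outline(ref_mask)] = (255, 0, 0)
    return out
```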

5. Conclusions

This study presents a general leaf area measurement system and examines the influence of various parameters on measurement results. Experiments show that image rectification improves the accuracy of leaf area measurement and reduces errors. Manual sampling helps acquire prior knowledge about the leaves. The level set-based curve evolution method not only allows visual inspection of leaf extraction but also increases computational efficiency. The standard deviation of leaf area measurements remained below 1 cm2 under different sampling points, initial contours, scaling factors, shooting angles, and heights, indicating high stability. The root mean square error (RMSE), using manual photocopy-weighing as the reference method, was associated with the complexity of the leaf margin—more complex edges led to larger RMSE values. Segmentation time was mainly affected by the size of the rectified image, the scaling factor, and the number of iterations. Specifically, processing time increased approximately exponentially with the scaling factor and linearly with the number of iterations. Too few iterations prevented the curve from reaching the leaf edge, while too many resulted in prolonged computation without noticeable improvement. Automating the selection of iteration counts requires further investigation and remains a limitation of the proposed method. Another drawback is the need to detach leaves for imaging; future research should explore non-destructive image acquisition techniques.

Author Contributions

Conceptualization, L.W.; methodology, L.W., X.Z. and Z.B.; data curation, W.G. and C.H.; software, L.W. and W.G.; validation, C.H., W.G. and L.Z.; formal analysis, L.W. and X.Z.; investigation, Z.L., W.G., Z.B. and C.H.; resources, X.Z., Z.L. and Y.H.; writing—original draft preparation, L.W. and C.H.; writing—review and editing, L.W., C.H., X.Z., W.G., Z.B., Z.L., L.Z. and Y.H.; visualization, L.W. and C.H.; supervision, L.W., X.Z. and C.H.; funding acquisition, L.W., X.Z. and L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundamental Research Program of Shanxi Province (Science and Technology Department of Shanxi Province, China), grant number 202203021212450; the Research Startup Program for Introduced Talents of Shanxi Agricultural University, grant number 2025BQ07; and the Science and Technology Innovation Enhancement Project under the Scientific Research Initiatives of Shanxi Agricultural University, grant number CXGC2025057.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

All authors express their gratitude to Wang Yukun from the College of Agriculture, Shanxi Agricultural University, for providing experimental materials and agricultural technical guidance. The authors would like to thank the editors and anonymous reviewers for their constructive comments and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SD: standard deviation
CV: coefficient of variation
RMSE: root mean square error
AE: absolute error
RE: relative error
R²: coefficient of determination

Appendix A

Table A1. Formulas and Variable Descriptions for Key Statistical Metrics.
Metric (Abbreviation) | Formula | Variable Definitions
AE | $\mathrm{AE} = |y - \hat{y}|$ | $y$: true value; $\hat{y}$: measured value
RE | $\mathrm{RE} = \dfrac{|y - \hat{y}|}{y}$ |
SD | $\mathrm{SD} = \sigma = \sqrt{\dfrac{\sum_{i=1}^{n}(x_i - \mu)^2}{n - 1}}$, $\mu = \dfrac{\sum_{i=1}^{n} x_i}{n}$ | $n$: total number of samples; $x_i$: the $i$-th sample value; $\mu$: sample mean
CV | $\mathrm{CV} = \dfrac{\sigma}{\mu} \times 100\%$ |
RMSE | $\mathrm{RMSE} = \sqrt{\dfrac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{n}}$ | $y_i$: the $i$-th true value; $\hat{y}_i$: the $i$-th measured or predicted value; $n$: total number of samples
R² | $R^2 = 1 - \dfrac{ss_{\mathrm{res}}}{ss_{\mathrm{tot}}}$, $ss_{\mathrm{res}} = \sum_{i=1}^{n}(y_i - \hat{y}_i)^2$, $ss_{\mathrm{tot}} = \sum_{i=1}^{n}(y_i - \bar{y})^2$, $\bar{y} = \dfrac{\sum_{i=1}^{n} y_i}{n}$ |
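The Table A1 metrics translate directly into code; a small sketch (function names are ours):

```python
import numpy as np

def ae(y, y_hat):
    """Absolute error for a single measurement."""
    return abs(y - y_hat)

def re_pct(y, y_hat):
    """Relative error, expressed as a percentage of the true value."""
    return abs(y - y_hat) / y * 100.0

def metrics(y_true, y_meas):
    """SD, CV, RMSE, and R^2 for repeated measurements y_meas against
    reference values y_true (equal-length sequences)."""
    y_true = np.asarray(y_true, float)
    y_meas = np.asarray(y_meas, float)
    n = y_meas.size
    mu = y_meas.mean()
    sd = np.sqrt(((y_meas - mu) ** 2).sum() / (n - 1))  # sample SD
    cv = sd / mu * 100.0                                 # percent
    rmse = np.sqrt(((y_true - y_meas) ** 2).mean())
    ss_res = ((y_true - y_meas) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()       # nonzero if y_true varies
    r2 = 1 - ss_res / ss_tot
    return {'SD': sd, 'CV': cv, 'RMSE': rmse, 'R2': r2}
```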

References

  1. Jiang, N. A Non-Destructive Method for Total Green Leaf Area Estimation of Individual Rice Plants. Ph.D. Dissertation, Huazhong University of Science and Technology, Wuhan, China, 2014. [Google Scholar]
  2. Yin, C.; Shi, Z.; Tian, C.; Chen, C.; Li, D.; Dong, W.; Zhang, Y. Effects of Red and Blue Light on Photosynthetic Carbon Assimilation and Growth Development in Plants: A Review. Chin. J. Appl. Ecol. 2025, 36, 2246–2256. [Google Scholar] [CrossRef]
  3. Shi, Y.; Xia, C. Research on Leaf Area Measurement of Variable Reference Based on Checkerboards. J. Chin. Agric. Mech. 2021, 42, 191–196. [Google Scholar] [CrossRef]
  4. Zhang, F.; Li, X.; Zhao, Q.; Feng, Y.; Wu, Y. Regression Estimation on Leaf Area of Hevea Brasiliensis Clones. Trop. Agric. Sci. Technol. 2024, 47, 63–68. [Google Scholar] [CrossRef]
  5. Zhang, Z.; Jia, S.; Zhao, N.; Liu, S.; Fan, F.; Nie, L. A Measurement Approach of Leaf Area Based on Digital Image Processing and Regression Equation of Leaves Area on Watermelon. China Cucurbits Veg. 2021, 34, 51–54. [Google Scholar] [CrossRef]
  6. Li, Q.; Yang, M.; Yuan, P.; Xue, Y. Image Measurement Method of Leaf Area Based on Saturation Segmentation. J. For. Eng. 2021, 6, 147–152. [Google Scholar] [CrossRef]
  7. Xu, Z.; Sun, L.; Li, B.; Lin, J.; Lan, W.; Zhang, S. Design of Leaf Area Measurement System Based on Android Smartphone. Exp. Technol. Manag. 2023, 40, 89–94. [Google Scholar] [CrossRef]
  8. Tan, X.; Tan, L.; Yang, S. Leaf Area Measurement Method Based on Black-White Scanning Image. J. Guangxi Agric. 2022, 37, 46–52. [Google Scholar]
  9. Li, F.; Li, J.; Huang, H.; Guan, M.; Guan, C. Research on Non-Destructive Measurement Methods and Strategies of Individual Rape Leaf Area in Field. Acta Laser Biol. Sin. 2021, 30, 505–517. [Google Scholar]
  10. Wang, P.; Deng, H.; Guo, J.; Ji, S.; Meng, D.; Bao, J.; Zuo, P. Leaf Segmentation Using Modified YOLOv8-Seg Models. Life 2024, 14, 780. [Google Scholar] [CrossRef]
  11. Huang, F.; Li, Y.; Liu, Z.; Gong, L.; Liu, C. A Method for Calculating the Leaf Area of Pak Choi Based on an Improved Mask R-CNN. Agriculture 2024, 14, 101. [Google Scholar] [CrossRef]
  12. Khan, A.T.; Jensen, S.M. LEAF-Net: A Unified Framework for Leaf Extraction and Analysis in Multi-Crop Phenotyping Using YOLOv11. Agriculture 2025, 15, 196. [Google Scholar] [CrossRef]
  13. Guo, X.; Cheng, J.; Wang, L. Research and Realisation the Statistics Algorithm of Wheat Leaf Area. J. Henan Inst. Sci. Technol. Sci. Ed. 2019, 47, 56–59. [Google Scholar]
  14. Yang, Z.; Yang, P.; Zhang, M.; Xiao, Y.; Liu, H. Study on Correction Method of Local Geometric Distortion in Projection Phantom Imaging. Acta Photonica Sin. 2022, 51, 0311004. [Google Scholar]
  15. Zhou, D.; Yang, Y.; Zhu, J.; Wang, K. Tilt Correction Method of Pointer Meter Based on Deep Learning. J. Comput.-Aided Des. Comput. Graph. 2020, 32, 1976–1984. [Google Scholar] [CrossRef]
  16. Matarneh, S.; Elghaish, F.; Al-Ghraibah, A.; Abdellatef, E.; Edwards, D.J. An Automatic Image Processing Based on Hough Transform Algorithm for Pavement Crack Detection and Classification. Smart Sustain. Built Environ. 2025, 14, 1–22. [Google Scholar] [CrossRef]
  17. Sasmita, W.A.; Mubarok, H.; Widiyasono, N. Agricultural Path Detection Systems Using Canny-Edge Detection and Hough Transform. IAES Int. J. Robot. Autom. 2024, 13, 247. [Google Scholar] [CrossRef]
  18. Wang, P.; Liu, Z.; Ma, C.; Peng, A.; Ren, F.; Cai, M. Investigation of Fast Identification of Joint Traces Information of Rock Mass Based on Hough Detection Method and Its Application. Rock Soil Mech. 2022, 43, 2887–2897. [Google Scholar] [CrossRef]
  19. Cremers, D.; Rousson, M.; Deriche, R. A Review of Statistical Approaches to Level Set Segmentation: Integrating Color, Texture, Motion and Shape. Int. J. Comput. Vis. 2007, 72, 195–215. [Google Scholar] [CrossRef]
  20. Qu, H.-B.; Xiang, L.; Wang, J.-Q.; Li, B.; Tao, H.-J. Variational Bayesian Level Set for Image Segmentation. In Proceedings of the International Conference on Machine Vision, London, UK, 24 December 2013; p. 90670D. [Google Scholar]
  21. Chan, T.F.; Vese, L.A. Active Contours without Edges. IEEE Trans. Image Process. 2001, 10, 266–277. [Google Scholar] [CrossRef] [PubMed]
  22. Ben Ayed, I.; Hennane, N.; Mitiche, A. Unsupervised Variational Image Segmentation/Classification Using a Weibull Observation Model. IEEE Trans. Image Process. 2006, 15, 3431–3439. [Google Scholar] [CrossRef]
  23. Reska, D.; Kretowski, M. GPU-Accelerated Image Segmentation Based on Level Sets and Multiple Texture Features. Multimed. Tools Appl. 2021, 80, 5087–5109. [Google Scholar] [CrossRef]
  24. Han, T.; Cao, H.; Yang, Y. AS2LS: Adaptive Anatomical Structure-Based Two-Layer Level Set Framework for Medical Image Segmentation. IEEE Trans. Image Process. 2024, 33, 6393–6408. [Google Scholar] [CrossRef] [PubMed]
  25. Joshi, A.; Saquib Khan, M.; Choi, K.N. Medical Image Segmentation Using Combined Level Set and Saliency Analysis. IEEE Access 2024, 12, 102016–102026. [Google Scholar] [CrossRef]
  26. Li, W.; Liu, W.; Zhu, J.; Cui, M.; Hua, X.-S.; Zhang, L. Box2Mask: Box-Supervised Instance Segmentation with Level Set Evolution. In Proceedings of the Computer Vision—ECCV 2022, Tel Aviv, Israel, 23–27 October 2022; Springer Nature: Cham, Switzerland, 2022; pp. 1–18. [Google Scholar]
  27. Aghazadeh, N.; Moradi, P.; Castellano, G.; Noras, P. An Automatic MRI Brain Image Segmentation Technique Using Edge–Region-Based Level Set. J. Supercomput. 2023, 79, 7337–7359. [Google Scholar] [CrossRef]
  28. Olfa, B.; Ziad, B.; Nozha, B. Adaptive Satellite Images Segmentation by Level Set Multiregion Competition. Inst. Natl. Rech. Inform. Autom. 1992, 37, 55–57. [Google Scholar] [CrossRef]
  29. Li, C.; Xu, C.; Gui, C.; Fox, M.D. Distance Regularized Level Set Evolution and Its Application to Image Segmentation. IEEE Trans. Image Process. 2010, 19, 3243–3254. [Google Scholar] [CrossRef]
  30. He, A.; Li, B.; Xu, Z.; Yang, Y.; Li, Z. Detection of Bamboo Leaf Area Based on Self-Made Colored Base Plate and Spectral Features. Res. Explor. Lab. 2024, 43, 39–44. [Google Scholar] [CrossRef]
  31. Hu, T.; Shen, P.; Zhang, Y.; Zhang, J.; Li, X.; Xia, C.; Liu, P.; Lu, H.; Wu, T.; Han, Z. OpenPheno: An Open-Access, User-Friendly, and Smartphone-Based Software Platform for Instant Plant Phenotyping. Plant Methods 2025, 21, 76. [Google Scholar] [CrossRef]
Figure 1. Leaves of the same plant species exhibiting varying shades of green (specifically pear tree leaves).
Agriculture 15 02101 g001
Figure 2. Leaves of different crops exhibiting diverse morphologies: (a) Tomato leaf; (b) Broomcorn millet leaf; (c) Grape leaf; (d) Pear tree leaf.
Agriculture 15 02101 g002
Figure 3. Flowchart of leaf area calculation.
Agriculture 15 02101 g003
Figure 4. The leaf-bearing platform with a reference marker.
Agriculture 15 02101 g004
Figure 5. Image rectification process: (a) Original image; (b) Binary image; (c) Edge image; (d) Hough line detection; (e) Rectified image.
Agriculture 15 02101 g005
Figure 6. Representation of a partition; the curves in different colors represent the zero-level contours of two distinct level set functions.
Agriculture 15 02101 g006
Figure 7. Interactive Visual Application Interface: (a) EXE Application Interface; (b) Web Application Interface.
Agriculture 15 02101 g007a; Agriculture 15 02101 g007b
Figure 8. Rough segmentation regions of a broomcorn millet: (a) Original Image; (b) Hough-Detected lines; (c) Manual seed points; (d) Leaf rough segmentation; (e) Reference rough segmentation.
Agriculture 15 02101 g008
Figure 9. Segmentation time and Leaf area with scaling factor (p): (a) Segmentation time with p; (b) Leaf area with p.
Agriculture 15 02101 g009
Figure 10. Leaf area measurements and Segmentation time under different iteration counts (iter): (a) Leaf area measurements with iter; (b) Segmentation time with iter.
Agriculture 15 02101 g010
Table 1. Glossary of Abbreviations Used in the Study.
Abbreviation | Full Term | Abbreviation | Full Term
Orig. | Original | Fin. | Final
Rect. | Rectified | Meas. | Measured
Ref. | Reference | Prop. | Proposed Method
Seg. | Segmentation | OP. | OpenPheno
Auto-Init. | Auto-Initialized | IJ. | ImageJ
Cont. | Contours | Sh. | Shooting
Evol. | Evolution | H1–H4 | Height 1–Height 4
Table 2. Pre-/Post-Rectification Area Ratios.
Orig. Image | Hough Lines | Rect. Image | Ref. Ratio | Ratio (Pre) | Ratio (Post) | RE (Pre) (%) | RE (Post) (%)
Agriculture 15 02101 i001 | Agriculture 15 02101 i002 | Agriculture 15 02101 i003 | 1 | 1.3197 | 1.0008 | 31.97 | 0.08
Agriculture 15 02101 i004 | Agriculture 15 02101 i005 | Agriculture 15 02101 i006 | 1 | 1.0931 | 0.9923 | 9.31 | 0.77
Agriculture 15 02101 i007 | Agriculture 15 02101 i008 | Agriculture 15 02101 i009 | 1 | 0.7118 | 0.9949 | 28.82 | 0.51
Agriculture 15 02101 i010 | Agriculture 15 02101 i011 | Agriculture 15 02101 i012 | 1 | 0.9107 | 1.0016 | 8.93 | 0.16
Agriculture 15 02101 i013 | Agriculture 15 02101 i014 | Agriculture 15 02101 i015 | 1 | 0.8593 | 1.0027 | 14.07 | 0.27
Agriculture 15 02101 i016 | Agriculture 15 02101 i017 | Agriculture 15 02101 i018 | 1 | 1.2966 | 0.9976 | 29.66 | 0.24
Agriculture 15 02101 i019 | Agriculture 15 02101 i020 | Agriculture 15 02101 i021 | 1 | 0.6668 | 1.0091 | 33.32 | 0.91
Agriculture 15 02101 i022 | Agriculture 15 02101 i023 | Agriculture 15 02101 i024 | 1 | 0.9309 | 0.9915 | 6.91 | 0.85
Table 3. Leaf Extraction Workflows for Diverse Crop Species.
Orig. Image | Hough Lines | Manual Seeds | Rough Seg. (Leaf) | Rough Seg. (Ref.) | Auto-Init. Cont. | Evol. Stage | Fin. Cont. | Leaf Fin. Seg. | Ref. Fin. Seg.
Agriculture 15 02101 i025Agriculture 15 02101 i026Agriculture 15 02101 i027Agriculture 15 02101 i028Agriculture 15 02101 i029Agriculture 15 02101 i030Agriculture 15 02101 i031Agriculture 15 02101 i032Agriculture 15 02101 i033Agriculture 15 02101 i034
Agriculture 15 02101 i035Agriculture 15 02101 i036Agriculture 15 02101 i037Agriculture 15 02101 i038Agriculture 15 02101 i039Agriculture 15 02101 i040Agriculture 15 02101 i041Agriculture 15 02101 i042Agriculture 15 02101 i043Agriculture 15 02101 i044
Agriculture 15 02101 i045Agriculture 15 02101 i046Agriculture 15 02101 i047Agriculture 15 02101 i048Agriculture 15 02101 i049Agriculture 15 02101 i050Agriculture 15 02101 i051Agriculture 15 02101 i052Agriculture 15 02101 i053Agriculture 15 02101 i054
Agriculture 15 02101 i055Agriculture 15 02101 i056Agriculture 15 02101 i057Agriculture 15 02101 i058Agriculture 15 02101 i059Agriculture 15 02101 i060Agriculture 15 02101 i061Agriculture 15 02101 i062Agriculture 15 02101 i063Agriculture 15 02101 i064
Agriculture 15 02101 i065Agriculture 15 02101 i066Agriculture 15 02101 i067Agriculture 15 02101 i068Agriculture 15 02101 i069Agriculture 15 02101 i070Agriculture 15 02101 i071Agriculture 15 02101 i072Agriculture 15 02101 i073Agriculture 15 02101 i074
Agriculture 15 02101 i075Agriculture 15 02101 i076Agriculture 15 02101 i077Agriculture 15 02101 i078Agriculture 15 02101 i079Agriculture 15 02101 i080Agriculture 15 02101 i081Agriculture 15 02101 i082Agriculture 15 02101 i083Agriculture 15 02101 i084
Table 4. Comprehensive leaf area measurements.
Leaf Type | GT ¹ Mean Area (cm²) | Meas. Mean Area (cm²) | SD (cm²) | CV (%) | RMSE (cm²) | AE (cm²) | RE (%) | Min Meas. Time (s)
Pear_1 | 54.32 | 53.91 | 0.98 | 1.82 | 1.82 | 0.41 | 0.75 | 5.23
Pear_2 | 79.34 | 79.00 | 0.98 | 1.06 | 1.24 | 0.34 | 0.43 | 5.28
Pear_3 | 37.18 | 36.84 | 0.90 | 2.44 | 0.96 | 0.34 | 0.91 | 3.40
Pear_4 | 26.65 | 26.91 | 0.55 | 2.04 | 0.95 | 0.26 | 0.98 | 3.36
Pear_5 | 43.03 | 43.24 | 0.94 | 2.18 | 0.96 | 0.21 | 0.49 | 5.93
Tomato_1 | 37.71 | 37.34 | 0.69 | 1.84 | 0.783 | 0.37 | 0.98 | 5.24
Tomato_2 | 77.31 | 77.83 | 0.98 | 1.26 | 1.108 | 0.52 | 0.67 | 4.54
Tomato_3 | 91.09 | 90.19 | 0.97 | 1.08 | 1.322 | 0.9 | 0.99 | 5.39
Tomato_4 | 96.42 | 96.40 | 0.99 | 1.03 | 0.991 | 0.02 | 0.02 | 7.47
Tomato_5 | 112.18 | 111.68 | 0.97 | 0.87 | 1.094 | 0.5 | 0.45 | 8.12
Tomato_6 | 136.70 | 137.07 | 1.94 | 1.42 | 1.97 | 0.37 | 0.27 | 9.05
B_millet_1 | 19.36 | 19.47 | 0.54 | 2.77 | 0.549 | 0.11 | 0.57 | 3.86
B_millet_2 | 31.43 | 31.41 | 0.90 | 2.86 | 0.896 | 0.02 | 0.06 | 6.80
B_millet_3 | 33.10 | 33.29 | 0.72 | 2.16 | 0.74 | 0.19 | 0.57 | 4.62
B_millet_4 | 34.56 | 34.85 | 0.71 | 2.05 | 0.768 | 0.29 | 0.84 | 6.98
B_millet_5 | 70.59 | 70.78 | 0.96 | 1.36 | 0.976 | 0.19 | 0.27 | 5.74
Grape_1 | 60.77 | 60.28 | 0.94 | 1.56 | 1.06 | 0.49 | 0.81 | 5.11
Grape_2 | 71.58 | 71.17 | 1.00 | 1.40 | 1.074 | 0.41 | 0.57 | 5.65
Grape_3 | 88.12 | 88.16 | 0.97 | 1.3 | 1.103 | 0.04 | 0.05 | 4.54
Grape_4 | 91.31 | 90.91 | 0.98 | 1.44 | 1.077 | 0.4 | 0.44 | 5.23
Grape_5 | 112.18 | 111.22 | 0.94 | 0.85 | 1.339 | 0.96 | 0.86 | 5.21
¹ GT = Ground Truth, obtained via manual photocopy-weighing technique.
Table 5. Data sources (images under different shooting angles and heights) and measurement repetitions for Table 4.
Leaf Type | Repetitions | Shooting Configurations (one image per configuration; thumbnails omitted)
Pear_1 | 110 | Sh. 1–Sh. 6
Pear_2 | 113 | Sh. 1–Sh. 8
Pear_3 | 195 | Sh. 1–Sh. 5
Pear_4 | 177 | Sh. 1–Sh. 6
Pear_5 | 107 | Sh. 1–Sh. 5
Tomato_1 | 116 | Sh. 1–Sh. 4
Tomato_2 | 156 | Sh. 1–Sh. 2
Tomato_3 | 106 | Sh. 1–Sh. 3
Tomato_4 | 222 | Sh. 1–Sh. 3
Tomato_5 | 200 | Sh. 1–Sh. 2
Tomato_6 | 313 | Sh. 1–Sh. 7
B_millet_1 | 143 | Sh. 1–Sh. 4
B_millet_2 | 151 | Sh. 1–Sh. 6
B_millet_3 | 164 | Sh. 1–Sh. 4
B_millet_4 | 174 | Sh. 1–Sh. 6
B_millet_5 | 221 | Sh. 1–Sh. 7
Grape_1 | 282 | Sh. 1–Sh. 5
Grape_2 | 294 | Sh. 1–Sh. 9
Grape_3 | 229 | Sh. 1–Sh. 8
Grape_4 | 236 | Sh. 1–Sh. 8
Grape_5 | 371 | Sh. 1–Sh. 12
Table 6. Summary of Partial Measurement Data for Pear_5 under Various Imaging Parameters.
Shooting Name | Meas. Leaf Area (cm²) | Meas. Time (s) | Scaling Factor (p) | iter1 | iter2 | c1 | c2 | r
Sh. 1 | 44.3877 | 6.0915 | 0.2 | 1 | 3 | 3 | 1 | 10
Sh. 1 | 44.4377 | 8.6088 | 0.2 | 1 | 5 | 1 | 1 | 10
Sh. 1 | 44.7547 | 8.3736 | 0.2 | 1 | 5 | 3 | 1 | 10
Sh. 1 | 44.6804 | 6.6870 | 0.1 | 2 | 4 | 2 | 1 | 5
Sh. 1 | 44.8210 | 9.3218 | 0.2 | 2 | 4 | 3 | 1 | 10
Sh. 2 | 42.0385 | 9.1475 | 0.2 | 1 | 5 | 1 | 1 | 10
Sh. 2 | 42.1926 | 8.9257 | 0.2 | 1 | 5 | 3 | 1 | 10
Sh. 2 | 42.8226 | 11.5620 | 0.2 | 1 | 7 | 1 | 1 | 10
Sh. 2 | 42.4920 | 9.5679 | 0.2 | 2 | 4 | 3 | 1 | 10
Sh. 2 | 42.2359 | 15.0976 | 0.2 | 2 | 6 | 3 | 1 | 10
Sh. 3 | 42.1047 | 12.1156 | 0.2 | 1 | 5 | 1 | 1 | 10
Sh. 3 | 42.5355 | 12.8461 | 0.2 | 2 | 4 | 1 | 1 | 10
Sh. 3 | 43.8534 | 28.6916 | 0.3 | 2 | 4 | 2 | 1 | 15
Sh. 3 | 43.4826 | 7.3114 | 0.1 | 2 | 4 | 2 | 2 | 5
Sh. 3 | 43.5182 | 8.4654 | 0.1 | 2 | 6 | 2 | 1 | 5
Sh. 4 | 43.9285 | 8.4622 | 0.2 | 2 | 4 | 2 | 1 | 10
Sh. 4 | 43.6243 | 8.5215 | 0.2 | 2 | 4 | 2 | 1 | 10
Sh. 4 | 43.6756 | 8.6725 | 0.2 | 2 | 4 | 2 | 1 | 10
Sh. 4 | 43.9184 | 8.8893 | 0.2 | 2 | 4 | 2 | 1 | 10
Sh. 4 | 42.5147 | 9.1388 | 0.2 | 2 | 4 | 2 | 1 | 10
Sh. 5 | 43.7698 | 5.9310 | 0.1 | 2 | 4 | 2 | 1 | 5
Sh. 5 | 43.8995 | 6.0577 | 0.1 | 2 | 4 | 2 | 1 | 5
Sh. 5 | 43.9360 | 6.4218 | 0.1 | 2 | 4 | 2 | 1 | 5
Sh. 5 | 44.5514 | 6.4735 | 0.1 | 2 | 4 | 2 | 1 | 5
Sh. 5 | 43.7880 | 6.6547 | 0.1 | 2 | 4 | 2 | 1 | 5
Mean Area (cm²): 43.5185 | RMSE (cm²): 1.0053
SD (cm²): 0.8963 | CV (%): 2.06
GT Area (cm²): 43.0292 | Min Meas. Time (s): 5.9310
Table 7. Comparison of Leaf Area Measurement Results Across Different Methods and Conditions.
Description | Multi-Branch Leaf | Single-Branch Leaf | ImageJ: Different References | OpenPheno: Different Heights
(sub-columns) | Prop. | OP. | IJ. | Prop. | OP. | IJ. | Coin | Rectangle | H1 ¹ | H2 | H3 | H4
Original ImageAgriculture 15 02101 i205Agriculture 15 02101 i206Agriculture 15 02101 i207Agriculture 15 02101 i208Agriculture 15 02101 i209Agriculture 15 02101 i210Agriculture 15 02101 i211Agriculture 15 02101 i212Agriculture 15 02101 i213Agriculture 15 02101 i214Agriculture 15 02101 i215Agriculture 15 02101 i216Agriculture 15 02101 i217Agriculture 15 02101 i218
Leaf ExtractionAgriculture 15 02101 i219Agriculture 15 02101 i220Agriculture 15 02101 i221Agriculture 15 02101 i222Agriculture 15 02101 i223Agriculture 15 02101 i224Agriculture 15 02101 i225Agriculture 15 02101 i226Agriculture 15 02101 i227Agriculture 15 02101 i228Agriculture 15 02101 i229Agriculture 15 02101 i230Agriculture 15 02101 i231Agriculture 15 02101 i232
Local ZoomAgriculture 15 02101 i233Agriculture 15 02101 i234Agriculture 15 02101 i235Agriculture 15 02101 i236Agriculture 15 02101 i237Agriculture 15 02101 i238Agriculture 15 02101 i239Agriculture 15 02101 i240Agriculture 15 02101 i241Agriculture 15 02101 i242Agriculture 15 02101 i243Agriculture 15 02101 i244Agriculture 15 02101 i245Agriculture 15 02101 i246
Leaf Area (cm²) | 86.25 | 82.82 | 84.76 | 54.75 | 49.69 | 52.04 | 87.78 | 85.88 | 89.48 | 91.60 | 53.29 | 52.59 | 47.68 | 51.24
Table 8. Experimental Results for Different Manual Sampling Points on Identical Rectified Image.
Table 8. Experimental Results for Different Manual Sampling Points on Identical Rectified Image.
Manual Seeds | Rough Seg. (Leaf) | Auto-Init. Cont. | Evol. Stage | Fin. Cont. | Meas. Area (cm2) | Meas. Time (s)
[images omitted] | 445.21 | 25.64
[images omitted] | 446.12 | 24.66
[images omitted] | 446.01 | 24.67
[images omitted] | 444.17 | 23.91
[images omitted] | 446.49 | 23.31
[images omitted] | 445.91 | 25.33
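The spread of the six areas in Table 8 can be summarized with the same stability metrics (SD and CV) used throughout this section; a minimal sketch in Python using the Table 8 measurements:

```python
from statistics import mean, stdev

# Six leaf-area measurements (cm^2) from Table 8, obtained with
# different manual sampling points on the same rectified image.
areas = [445.21, 446.12, 446.01, 444.17, 446.49, 445.91]

m = mean(areas)        # mean measured area
sd = stdev(areas)      # sample standard deviation
cv = 100 * sd / m      # coefficient of variation, in percent

print(f"mean = {m:.2f} cm^2, SD = {sd:.2f} cm^2, CV = {cv:.2f}%")
```

The resulting SD (below 1 cm2) and CV (well below 3%) are consistent with the stability bounds reported in the abstract, confirming that the choice of manual sampling points has little effect on the final area.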
Table 9. Extraction Results of Broomcorn Millet Under Different Numbers of Initial Leaf Contour Configurations.
Parameter/Metric | c1 = 1 | c1 = 1 | c1 = 2 | c1 = 2 | c1 = 3 | c1 = 4 | c1 = 6 | c1 = 10
Auto-Init. Cont.: [images omitted]
Iter = 2: [images omitted]
Iter = 10: [images omitted]
Meas. Area (cm2) | 71.87 | 81.17 | 81.54 | 81.14 | 80.58 | 82.16 | 81.09 | 81.99
Meas. Time (s) | 20.73 | 21.10 | 19.18 | 20.33 | 18.08 | 19.03 | 18.35 | 18.42
Table 10. Mean Leaf Area (cm2) and Segmentation Time (s) Across Scaling Factors (Values averaged over 10 repetitions).
Species/Morphology | Parameter | p = 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0
Broomcorn millet (Smooth margins) | Area | 34.97 | 34.82 | 34.50 | 34.60 | 34.56 | 34.24 | 34.31 | 34.29 | 34.19 | 34.12
Broomcorn millet (Smooth margins) | Time | 9.58 | 22.61 | 49.51 | 120.12 | 248.08 | 425.27 | 695.68 | 1010.94 | 1458.95 | 1995.62
Grape (Serrated-only margins) | Area | 71.09 | 71.78 | 72.09 | 72.19 | 72.13 | 72.13 | 72.05 | 72.21 | 71.95 | 72.20
Grape (Serrated-only margins) | Time | 9.22 | 19.14 | 40.43 | 81.93 | 157.62 | 310.65 | 533.57 | 796.52 | 1169.33 | 1586.34
Tomato (Serrated + Lobed) | Area | 95.83 | 96.09 | 96.15 | 96.95 | 97.45 | 96.71 | 97.26 | 96.17 | 96.37 | 97.15
Tomato (Serrated + Lobed) | Time | 8.35 | 15.59 | 24.65 | 55.02 | 66.07 | 109.23 | 188.29 | 304.56 | 513.01 | 684
Table 11. Statistical Analysis of Measurement Stability.
Metric | Broomcorn Millet (Smooth) | Grape (Serrated) | Tomato (Serrated + Lobed)
MaxΔArea 1 (cm2) | 0.85 | 1.12 | 1.62
SD (cm2) | 0.28 | 0.34 | 0.57
CV (%) | 0.81 | 0.47 | 0.59
GT Area (cm2) | 34.56 | 71.58 | 96.42
RMSE (cm2) | 0.28 | 0.52 | 0.57
Time Model R2 | 0.9909 | 0.9894 | 0.9915
1 MaxΔArea = Max Area Difference.
Table 12. Experimental results under different iteration counts.
Iteration Counts | Auto-Init. Cont. | iter = iter1 + iter2 (2, 4, 6, 10, 20, 40, 60, 80, 100) | Leaf Fin. Seg. | Meas. Area (cm2) | Meas. Time (s)
iter1 = 0, iter2 = 100 | [images omitted] | 91.41 | 143.17
iter1 = 100, iter2 = 0 | [images omitted] | 86.67 | 136.58
iter1 = 80, iter2 = 20 | [images omitted] | 91.32 | 127.07
Table 13. Experimental results under different shooting angles and heights.
Orig. Image | Hough Lines | Rect. Image | Meas. Mean Area (cm2) | Meas. Mean Time (s)
[images omitted] | 137.37 | 21.16
[images omitted] | 137.61 | 20.94
[images omitted] | 136.26 | 20.32
[images omitted] | 136.10 | 19.89
[images omitted] | 137.50 | 20.79
Table 14. Experimental results for pear leaves with different colors.
Orig. Image | Hough Lines | Rect. Image | Auto-Init. Cont. | Fin. Cont. | Leaf Fin. Seg. | Meas. Area (cm2) | Meas. Time (s) | GT 1 Area (cm2) | AE (cm2)
[images omitted] | 54.75 | 9.98 | 54.72 | 0.03
[images omitted] | 29.87 | 12.56 | 29.55 | 0.32
[images omitted] | 37.19 | 10.30 | 37.33 | 0.14
[images omitted] | 27.26 | 8.65 | 26.65 | 0.61
[images omitted] | 42.22 | 8.80 | 43.03 | 0.81
1 GT = Ground Truth, obtained via manual photocopy-weighing technique.
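The AE column of Table 14 is simply the absolute gap between the measured area and the photocopy-weighing ground truth; the table's values can be reproduced directly:

```python
# (measured, ground-truth) leaf areas in cm^2 for the five pear leaves in Table 14.
pairs = [(54.75, 54.72), (29.87, 29.55), (37.19, 37.33),
         (27.26, 26.65), (42.22, 43.03)]

# Absolute error AE = |measured - GT|, rounded to 2 decimals as reported.
ae = [round(abs(m - g), 2) for m, g in pairs]
print(ae)  # matches the AE column: [0.03, 0.32, 0.14, 0.61, 0.81]
```

All five errors are below 1 cm2, consistent with the sub-centimeter accuracy claimed for the system across leaf colors.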
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Wang, L.; Hao, C.; Zhang, X.; Guo, W.; Bi, Z.; Lan, Z.; Zhang, L.; Han, Y. A Semi-Automatic and Visual Leaf Area Measurement System Integrating Hough Transform and Gaussian Level-Set Method. Agriculture 2025, 15, 2101. https://doi.org/10.3390/agriculture15192101