Article

RipenessGAN: Growth Day Embedding-Enhanced GAN for Stage-Wise Jujube Ripeness Data Generation

AI Robotics R&D Division, Korea Institute of Robotics & Technology Convergence, Seoul 06372, Republic of Korea
* Author to whom correspondence should be addressed.
Agronomy 2025, 15(10), 2409; https://doi.org/10.3390/agronomy15102409
Submission received: 18 September 2025 / Revised: 14 October 2025 / Accepted: 15 October 2025 / Published: 17 October 2025
(This article belongs to the Section Precision and Digital Agriculture)

Abstract

RipenessGAN is a novel Generative Adversarial Network (GAN) designed to generate synthetic images across different ripeness stages of jujubes (green fruit, white ripe fruit, semi-red fruit, and fully red fruit), aiming to provide balanced training data for diverse applications beyond classification accuracy. This study addresses the problem of data imbalance by augmenting each ripeness stage using our proposed Growth Day Embedding mechanism, thereby enhancing the performance of downstream classification models. The core innovation of RipenessGAN lies in its ability to capture continuous temporal transitions among discrete ripeness classes by incorporating fine-grained growth day information (0–56 days) in addition to traditional class labels. The experimental results show that RipenessGAN produces synthetic data with higher visual quality and greater diversity compared to CycleGAN. Furthermore, the classification models trained on the enriched dataset exhibit more consistent and accurate performance. We also conducted comprehensive comparisons of RipenessGAN against CycleGAN and class-conditional diffusion models (DDPM) under strictly controlled and fair experimental settings, carefully matching model architectures, computational resources, training conditions, and evaluation metrics. The results indicate that although diffusion models yield highly realistic images and CycleGAN ensures stable cycle-consistent generation, RipenessGAN provides superior practical benefits in training efficiency, temporal controllability, and adaptability for agricultural applications. This research demonstrates the potential of RipenessGAN to mitigate data imbalance in agriculture and highlights its scalability to other crops.

1. Introduction

The ripeness of fruits is a critical factor that directly impacts consumer satisfaction and market value. In particular, fruits such as jujubes require precise harvest timing to maintain their high nutritional value and ensure optimal quality for various culinary applications. However, existing methods for assessing jujube ripeness are largely manual, requiring significant labor and time while being prone to inaccuracies due to subjective judgment. To address these challenges, artificial intelligence (AI) and machine learning (ML) technologies have been actively adopted. Among these, Generative Adversarial Networks (GANs) [1], and more recently, diffusion models [2,3], have proven to be effective tools for generating synthetic data and addressing data imbalance in image classification tasks.
Figure 1 illustrates the stages of jujube ripeness, from unripe to fully mature. Each stage presents distinct visual characteristics that influence both market value and harvest timing. Accurately categorizing jujube ripeness across these stages requires a balanced dataset that adequately represents each stage, which is often challenging due to the scarcity of data for certain stages.
Previous studies [4,5] have primarily focused on generating data for quality classification of jujubes using traditional GAN [6] approaches, while research aimed at creating datasets for different stages of jujube growth remains limited. CycleGAN [7] has been successfully applied to agricultural image translation tasks, demonstrating effectiveness in cross-domain image generation with cycle consistency constraints. However, CycleGAN lacks fine-grained temporal control and struggles with generating intermediate states between discrete classes. In response, this study proposes “RipenessGAN”, a GAN model with Growth Day Embedding developed to synthesize realistic images for various ripeness stages of jujubes through controlled temporal generation. Additionally, we explore the potential of diffusion models, such as Denoising Diffusion Probabilistic Models (DDPMs) [2], for the same task. RipenessGAN demonstrates faster convergence and lower computational demand compared to both CycleGAN and diffusion models under the tested conditions, and it provides improved temporal control. These results suggest that RipenessGAN may have potential advantages for real-time and resource-constrained agricultural applications.
In practical agricultural settings, more accurate and automated assessment of fruit ripeness directly leads to tangible benefits such as labor savings and improved operational efficiency. By streamlining the classification of harvest-ready fruits, RipenessGAN can help reduce the need for manual inspection, lower labor costs, and minimize human error. Furthermore, consistent and objective grading of ripeness stages can enhance product quality and marketability by standardizing harvesting processes and improving the reputation of produce among buyers and consumers. As the technology is integrated into digital agriculture workflows, these benefits are anticipated to support smarter crop management, optimized harvest scheduling, and increased profitability.

1.1. Research Purpose and Background

Traditional methods for classifying the ripeness of jujubes largely depend on subjective assessments or suffer from insufficient data, limiting model performance. For example, when data for specific growth stages are lacking, classification models fail to learn these stages effectively, resulting in poor accuracy for these classes. This is a common issue in agricultural and forestry datasets, where data imbalance frequently occurs [8,9,10]. Models trained on imbalanced data tend to underestimate or overestimate certain classes (e.g., ripe vs. unripe jujubes), which can significantly impact decisions such as harvest timing.
CycleGAN has been widely adopted for agricultural image enhancement and cross-domain translation tasks. While CycleGAN provides stable training through cycle consistency loss and can generate high-quality images without paired training data, it lacks the ability to model fine-grained temporal progression within the same domain. CycleGAN is primarily designed for style transfer between different domains rather than temporal progression within a single domain [7]. Diffusion models, on the other hand, have demonstrated superior sample quality and diversity in various generative tasks, but their computational intensity and slow inference speed limit their practical deployment in real-world agricultural applications [11].
Recent variants of CycleGAN for agricultural applications have introduced self-attention mechanisms and improved loss functions, improving feature consistency and capturing more temporal patterns in the generated images [12,13]. While these enhancements offer better diversity and temporal interpolation, a gap remains in modeling physiologically accurate, fine-grained stage transitions over extended growth periods. Likewise, recent lightweight diffusion models have achieved faster inference and reduced computational demands [14], expanding their use in resource-constrained settings. However, field deployment still faces a trade-off between speed, scalability, and sample fidelity. Ongoing advances are thus gradually alleviating some prior limitations, but robust solutions for continuous, interpretable temporal modeling of agricultural processes under tight resource budgets remain scarce.
In recent years, agricultural vision research has witnessed substantial advancements driven by the adoption of Transformer architectures [15,16], Vision-Language Models [17], and multimodal generative frameworks [18]. These models have demonstrated improved semantic representation, enhanced trait prediction, and more robust analysis of complex agricultural data. Furthermore, addressing data imbalance in agricultural datasets remains an ongoing challenge. Traditional strategies such as random oversampling [19], class-weighted loss functions [20], and ensemble learning [21] have been extensively studied and applied in crop phenotyping and classification tasks. While these methods help mitigate label distribution issues, they often fall short in effectively modeling the temporal and stage-wise progression inherent to agricultural growth cycles.
To be practically valuable for agricultural vision tasks, a generative model should satisfy several specific requirements. Temporal consistency is critical for modeling smooth developmental transitions, as fruit and crop growth are inherently continuous. Computational efficiency and scalability are equally essential given the resource constraints typical of real-world agricultural environments, where models must produce realistic, stage-aware synthetic data reliably under limited hardware and minimal human supervision. These considerations motivated the design of RipenessGAN, which addresses both temporal progression and computational efficiency, bridging the gap between synthetic data generation and practical deployment in agricultural systems.
Figure 2 illustrates the overall workflow of our approach, where synthetic jujube ripeness stage data are generated using RipenessGAN, and classification accuracy is evaluated with a trained classification model. By modeling the continuous temporal progression of jujube ripening (0–56 days) while maintaining discrete class boundaries, RipenessGAN can generate intermediate states and smooth transitions between ripeness stages, effectively addressing data imbalance while maintaining temporal consistency that neither CycleGAN nor diffusion models can provide.
Figure 3 provides a detailed visualization of jujube ripening at fine-grained temporal intervals. Each column corresponds to a distinct growth day, ranging from day 0 (real reference samples) to day 56. The progression is sampled at key intervals: day 0, 5, 14, 20, 28, 35, 42, and 56. The top-to-bottom arrangement within each column depicts different individual fruit instances, ensuring the robustness of visual trends across samples.
The leftmost column exclusively features real jujube images, which serve as the biological baseline for subsequent transformations. The remaining columns present synthetic samples produced by RipenessGAN, with the model conditioned on target growth days. As growth day increases, the images exhibit a smooth and realistic evolution in color, surface gloss, and textural complexity—precisely capturing the physiological ripening trajectory from green, immature fruit toward fully mature, red jujubes. Notably, subtle transitions such as the onset of reddish patches, changes in surface reflectance, and eventual uniform coloring are faithfully synthesized, demonstrating the model’s capacity for high-fidelity temporal interpolation.
This figure exemplifies how RipenessGAN not only bridges discrete class boundaries but also generates biologically plausible intermediate states. By enabling the visualization of continuous ripening, our method offers new opportunities for automated phenotyping, dataset balancing, and real-world agricultural trait analysis.

1.2. Key Contributions

  • Growth day embedding architecture: We introduce a Growth Day Embedding mechanism, enabling fine-grained temporal control over jujube ripeness generation, modeling continuous growth (0–56 days) within discrete class boundaries (0–3), and exceeding the expressiveness of existing GANs and diffusion model frameworks.
  • RipenessGAN framework: We propose RipenessGAN, a generative model incorporating Growth Day Embedding to synthesize realistic, temporally controlled images, demonstrating superior diversity and consistency across all ripeness stages compared to CycleGAN.
  • Temporal consistency loss: We developed a temporal consistency loss function to ensure smooth transitions between growth days while preserving class-specific characteristics, offering progression modeling that is unavailable with CycleGAN’s cycle consistency or diffusion models’ denoising.
  • Addressing data imbalance through temporal augmentation: RipenessGAN’s synthetic data mitigates class imbalance by generating intermediate temporal states, providing more comprehensive augmentation than domain translation approaches.
  • Comprehensive, three-way comparison: We implemented a controlled comparison methodology, ensuring fair evaluation between RipenessGAN, CycleGAN, and DDPM using an equivalent computational budget, an identical training dataset, and comparable model complexity.

2. Materials and Methods

2.1. Dataset Acquisition and Preprocessing

In this study, we utilized an open-access dataset of jujube images provided by Ban et al. [22]. The dataset comprises high-resolution images of jujubes captured at five distinct maturity stages: green fruit, white ripe fruit, semi-red fruit, fully red fruit, and softened fruit. Each class is evenly represented, with a total of 5000 original images (700 per class for training, 200 per class for validation, and 100 per class for testing), as described in the source publication. All images were annotated by agricultural experts to ensure accurate labeling.
For preprocessing, all images were resized to 192 × 192 pixels to fit the input requirements of deep learning models. Pixel values were normalized to the range [−1, 1]. To increase dataset diversity and improve model generalizability, data augmentation techniques, such as rotation, flipping, and brightness adjustment, were applied. The total number of images was further increased to 20,000 through augmentation, following the approach outlined by Ban et al. [22].
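As an illustration, the preprocessing and augmentation pipeline described above could be expressed with torchvision transforms as in the following sketch; the specific rotation angle and brightness strength are our assumptions, since the paper does not report them.

```python
from torchvision import transforms

# Preprocessing and augmentation pipeline (rotation angle and brightness
# strength are assumed values; the paper does not report them).
preprocess = transforms.Compose([
    transforms.Resize((192, 192)),                    # model input size
    transforms.RandomHorizontalFlip(),                # flipping
    transforms.RandomRotation(degrees=15),            # rotation (assumed angle)
    transforms.ColorJitter(brightness=0.2),           # brightness adjustment (assumed strength)
    transforms.ToTensor(),                            # scales pixels to [0, 1]
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # maps [0, 1] to [-1, 1]
])
```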
The dataset and code are publicly available and can be accessed at https://github.com/kuiersaila/Jujube_classification_model (accessed on 14 October 2025).
The choice of jujube as the case crop was motivated by the availability of a suitable open dataset, the crop’s economic relevance, and its clearly distinguishable visual ripening stages, which make it an optimal subject for temporal modeling.

2.2. RipenessGAN Framework with Growth Day Embedding

In this study, we propose the RipenessGAN framework to effectively generate the growth process of jujubes. Based on existing GAN architectures, we added our Growth Day Embedding mechanism to enhance temporal consistency and transition quality between stages, addressing the limitations present in both CycleGAN and diffusion model approaches.

2.2.1. Growth Day Embedding Architecture

Growth Day Embedding provides more detailed temporal information than simple class labels (0–3) used in traditional approaches or the domain translation capabilities of CycleGAN. Each class represents a specific ripeness stage: class 0 (green fruit), class 1 (white ripe fruit), class 2 (semi-red fruit), and class 3 (fully red fruit). While each class is separated by 14-day (2-week) intervals, the actual growth process is continuous, so we introduced more granular embedding to capture the transitional states between these distinct stages.
Figure 4 presents the overall architecture of RipenessGAN. In this framework, the generator receives an input image along with its class label and Growth Day Embedding and then outputs a ripeness-transformed image that reflects both categorical and temporal information. The PatchGAN discriminator evaluates these outputs by distinguishing real images from fake ones, ensuring the generator produces visually realistic and temporally consistent samples.
Our detailed embedding implementation expands the embedding space to represent growth days up to 56 days, considering that each class (0–3) is separated by 14-day intervals, allowing modeling of daily changes within classes. To convert growth days into discrete embedding indices, real-valued growth days are rounded to the nearest integer to index the embedding table. Growth days are embedded into 16-dimensional vectors to richly represent temporal characteristics, allowing for learning more complex temporal patterns than the simple scalar values used in CycleGAN or the noise-based conditioning used in diffusion models. For data augmentation during training, linear interpolation is applied between adjacent Growth Day Embeddings to simulate continuous growth processes, inspired by continuous conditional generation techniques. Through this detailed embedding approach, the model can learn intra-class changes and inter-class transitions more naturally than CycleGAN’s cycle consistency constraints or the diffusion model’s iterative denoising process.
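To make the mechanism concrete, the following is a minimal PyTorch sketch of a Growth Day Embedding module under the stated design (57 integer-day entries, 16-dimensional vectors, and linear interpolation between adjacent day embeddings); the class name and interface are illustrative rather than the authors' released code.

```python
import torch
import torch.nn as nn

class GrowthDayEmbedding(nn.Module):
    """Maps a growth day in [0, 56] to a 16-dimensional vector.

    Integer days index a learnable table; fractional days are handled by
    linear interpolation between the two nearest integer-day embeddings,
    mirroring the interpolation-based augmentation described above.
    """

    def __init__(self, max_day: int = 56, dim: int = 16):
        super().__init__()
        self.table = nn.Embedding(max_day + 1, dim)  # 57 entries: days 0..56
        self.max_day = max_day

    def forward(self, day: torch.Tensor) -> torch.Tensor:
        day = day.clamp(0, self.max_day)
        lo = day.floor().long()                  # nearest lower integer day
        hi = day.ceil().long()                   # nearest upper integer day
        frac = (day - lo.float()).unsqueeze(-1)  # interpolation weight in [0, 1)
        return (1.0 - frac) * self.table(lo) + frac * self.table(hi)

# Usage: embed a batch of (possibly fractional) growth days.
days = torch.tensor([0.0, 13.5, 28.0, 56.0])
emb = GrowthDayEmbedding()(days)  # shape: (4, 16)
```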
We set the embedding dimension to 16 based on an ablation analysis balancing temporal expressiveness against overfitting risk (Algorithm 1).
Algorithm 1 RipenessGAN Training with Hinge Adversarial and Temporal Consistency Loss
1. Sample a minibatch of $(x, c, d)$ from the data, and noise $z$ from $p_z$.
2. Growth Day Embedding: compute the class embedding $e_c = \mathrm{ClassEmbedding}(c)$ and the growth day embedding $e_d = \mathrm{GrowthEmbedding}(d)$.
3. Generator: generate $\hat{y} = G(z, e_c, e_d)$.
4. Discriminator (hinge loss): update $\theta_D$ by minimizing
$\mathcal{L}_{D}^{\mathrm{adv}} = \mathbb{E}_{x,c,d}[\max(0,\, 1 - D(x, c, d))] + \mathbb{E}_{z,c,d}[\max(0,\, 1 + D(G(z, c, d), c, d))]$
5. Temporal Consistency Loss: for each $(x, c, d_1, d_2)$ in the minibatch, compute
$w(d_1, d_2) = \exp\!\left(-\frac{(d_1 - d_2)^2}{2\sigma^2}\right)$
and evaluate
$\mathcal{L}_{\mathrm{temp}} = \mathbb{E}_{x,c,d_1,d_2}\big[\, w(d_1, d_2)\, \| G(x, c, d_1) - G(x, c, d_2) \|_1 \,\big]$
6. Generator (hinge + temporal loss): update $\theta_G$ by minimizing
$\mathcal{L}_{G} = -\mathbb{E}_{z,c,d}[D(G(z, c, d), c, d)] + \lambda_{tc}\, \mathcal{L}_{\mathrm{temp}}$
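A minimal PyTorch sketch of one training iteration following Algorithm 1 is given below. Note that Algorithm 1 writes the generator as $G(z, e_c, e_d)$, while the temporal term and Section 2.2.2 use an image-conditioned generator $G(x, c, d)$; the sketch follows the image-conditioned form, and the loss weight `lambda_tc` is a placeholder value of our choosing.

```python
import torch

def train_step(G, D, x, c, d, d2, opt_g, opt_d, lambda_tc=1.0, sigma=7.0):
    # --- Discriminator update (hinge loss, step 4) ---
    fake = G(x, c, d).detach()
    loss_d = (torch.relu(1.0 - D(x, c, d)).mean()
              + torch.relu(1.0 + D(fake, c, d)).mean())
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # --- Generator update (hinge + temporal consistency, steps 5-6) ---
    fake = G(x, c, d)
    fake2 = G(x, c, d2)  # same image and class, a second sampled growth day
    w = torch.exp(-((d - d2) ** 2) / (2.0 * sigma ** 2))  # Gaussian day weight
    l_temp = (w * (fake - fake2).abs().flatten(1).mean(dim=1)).mean()  # weighted L1
    loss_g = -D(fake, c, d).mean() + lambda_tc * l_temp
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```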

2.2.2. Generator and Discriminator Structure

The generator adopts a U-Net encoder–decoder architecture that processes an input image $x \in \mathbb{R}^{192 \times 192 \times 3}$ along with a class label $c \in \{0, 1, 2, 3\}$ and a growth day $d \in [0, 56]$. The architecture consists of the following:
  • Encoder: Five convolutional blocks (kernel size 4 × 4 , stride 2) progressively downsampling to 6 × 6 × 512 features.
  • Class/Day embedding: $E_c(c): \{0, 1, 2, 3\} \to \mathbb{R}^{32}$ and $E_d(d): [0, 56] \to \mathbb{R}^{16}$.
  • Decoder: Five transposed convolutional blocks (kernel size 4 × 4 , stride 2) with skip connections, progressively upsampling to 192 × 192 × 3 .
  • Self-attention [23]: A self-attention layer is inserted at a 48 × 48 resolution to capture long-range dependencies.
The discriminator implements a PatchGAN architecture:
  • Input: Image $x \in \mathbb{R}^{192 \times 192 \times 3}$ concatenated with the embeddings $[E_c(c), E_d(d)]$.
  • Backbone: Six convolutional blocks (kernel size 4 × 4 ; stride 2) with spectral normalization.
  • Output: 6 × 6 feature map, where each element represents the real/fake probability for a 70 × 70 patch.
  • Attention mechanism: Self-attention layer at 48 × 48 resolution.
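The following sketch illustrates one plausible realization of the conditioned PatchGAN discriminator described above, with the 32-dimensional class embedding and 16-dimensional day embedding tiled over the spatial grid as extra input channels; the exact injection point of the embeddings and the self-attention layer (omitted here for brevity) are assumptions on our part.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Conditioned PatchGAN: five stride-2 blocks plus a final conv (six conv
    layers in total) with spectral normalization, yielding a 6x6 patch map for
    a 192x192 input. The self-attention layer at 48x48 is omitted for brevity."""

    def __init__(self, img_ch: int = 3, emb_ch: int = 32 + 16, base: int = 64):
        super().__init__()

        def block(cin, cout):
            return nn.Sequential(
                nn.utils.spectral_norm(nn.Conv2d(cin, cout, 4, stride=2, padding=1)),
                nn.LeakyReLU(0.2, inplace=True),
            )

        chs = [img_ch + emb_ch, base, base * 2, base * 4, base * 8, base * 8]
        self.blocks = nn.Sequential(*[block(chs[i], chs[i + 1]) for i in range(5)])
        self.head = nn.Conv2d(base * 8, 1, 3, stride=1, padding=1)  # keeps 6x6 output

    def forward(self, x, e_c, e_d):
        # Tile the joint 48-d embedding over the 192x192 grid as extra channels.
        e = torch.cat([e_c, e_d], dim=1)[:, :, None, None]
        e = e.expand(-1, -1, x.size(2), x.size(3))
        return self.head(self.blocks(torch.cat([x, e], dim=1)))
```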

2.2.3. Growth Day-Based Loss Functions

We expanded the loss functions to utilize Growth Day Embedding by adding terms that strengthen temporal consistency to the existing objectives. The basic adversarial loss is implemented as a hinge loss:
$\mathcal{L}_{\mathrm{adv}}^{D} = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\max(0,\, 1 - D(x, c, d))] + \mathbb{E}_{x \sim p_{\mathrm{data}}}[\max(0,\, 1 + D(G(x, c, d), c, d))]$
$\mathcal{L}_{\mathrm{adv}}^{G} = -\mathbb{E}_{x \sim p_{\mathrm{data}}}[D(G(x, c, d), c, d)]$
where $c$ is the class label, and $d$ is the growth day.
Our temporal consistency loss ensures continuity between growth days, providing capabilities that are unavailable in CycleGAN’s cycle consistency or diffusion models’ denoising objectives:
$\mathcal{L}_{\mathrm{temp}} = \mathbb{E}_{x \sim p_{\mathrm{data}},\, d_1, d_2}\big[\, \| G(x, c, d_1) - G(x, c, d_2) \|_1 \cdot \omega(|d_1 - d_2|) \,\big]$
where $\omega(|d_1 - d_2|)$ is a weight function defined as
$\omega(|d_1 - d_2|) = \exp\!\left(-\frac{|d_1 - d_2|^2}{2\sigma^2}\right)$
with σ = 7 days. This Gaussian-inspired weighting function assigns higher weights to smaller day differences, encouraging smooth transitions between temporally close stages while allowing greater visual changes across larger time intervals; for example, a one-day gap yields ω ≈ 0.99, whereas a full 14-day class interval yields ω = e^{−2} ≈ 0.14. Temporal smoothness in RipenessGAN is thus achieved by combining Growth Day Embedding with a dedicated temporal consistency loss weighted by day intervals (Equations (3) and (4)). The value σ = 7 was determined empirically, providing stable convergence and visually smooth temporal transitions in all trials.

2.3. Fair Comparison Methodology for RipenessGAN vs. CycleGAN vs. DDPM (Diffusion)

To ensure a fair and unbiased comparison between RipenessGAN, CycleGAN, and DDPM, we implemented a controlled experimental methodology that addresses the common issues in generative model comparisons. Recent studies have highlighted that many comparisons between different generative approaches suffer from inconsistent experimental conditions, where certain models often benefit from larger network architectures, longer training times, and greater computational resources, leading to potentially biased conclusions [24,25,26].
Our fair comparison methodology is based on the following principles: First, we ensured architectural consistency [27] by implementing RipenessGAN, CycleGAN, and DDPM using similar backbone architectures and keeping network depth, channel dimensions, and attention mechanisms consistent across all models to eliminate major architectural advantages. For CycleGAN, we used the standard generator and discriminator architectures with ResNet blocks, while for DDPM, we employed a U-Net [28] backbone with equivalent complexity.
Second, we maintained computational budget equality by allocating identical training time and GPU resources to all models. Each model was trained for the same number of gradient updates on the same computational infrastructure (4× NVIDIA RTX 3090 GPUs; NVIDIA, Santa Clara, CA, USA) with equivalent memory usage and processing time.
Third, we applied training condition standardization [29,30,31] by using identical training datasets, batch sizes, and optimization strategies. Data augmentation techniques and preprocessing pipelines were kept consistent, and the learning rate schedules and regularization techniques were adapted appropriately for each model type while maintaining equivalent training complexity.
Fourth, we implemented evaluation consistency by using the same evaluation metrics (FID [32], PSNR [33], SSIM [34], LPIPS [35], and classification accuracy) and test datasets for all models. The inference procedures were standardized to ensure that generation quality comparisons were not affected by different sampling strategies or post-processing techniques. For CycleGAN, we evaluated both forward and backward translation capabilities, while for DDPM, we used 50 denoising steps for inference to balance quality and speed.
This controlled comparison methodology allows us to isolate the fundamental algorithmic differences between RipenessGAN, CycleGAN, and DDPM, providing more reliable insights into their relative performance for agricultural image generation tasks. By eliminating confounding factors, such as architectural advantages or computational resource disparities, our evaluation focuses on the core strengths and limitations of each generative modeling approach in the context of jujube ripeness data augmentation.
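For illustration, the matched-budget protocol can be summarized as a shared experiment configuration of the following form; all concrete values are hypothetical placeholders except those stated in the text (the hardware, the evaluation metrics, and the 50 DDPM inference steps).

```python
# Hypothetical experiment configuration illustrating the matched-budget protocol.
SHARED = dict(
    dataset="jujube_ripeness",    # identical training data for every model
    image_size=192,
    batch_size=32,                # placeholder: held equal across models
    gradient_updates=100_000,     # placeholder: equal update budget per model
    hardware="4x NVIDIA RTX 3090",
    metrics=["FID", "PSNR", "SSIM", "LPIPS", "classification_accuracy"],
)
PER_MODEL = {
    "RipenessGAN": dict(**SHARED, conditioning="class + growth_day"),
    "CycleGAN":    dict(**SHARED, eval_directions=["forward", "backward"]),
    "DDPM":        dict(**SHARED, train_timesteps=1000, inference_steps=50),
}
```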

2.4. CycleGAN Implementation

For comparative purposes, we implemented CycleGAN to match the computational complexity of RipenessGAN. The CycleGAN model uses the standard cycle consistency loss and adversarial loss, with a generator composed of 9 ResNet [36] blocks and PatchGAN [6] discriminators. The cycle consistency loss weight was set to 10.0, and the adversarial loss weight was set to 1.0, following established best practices. The model was trained under the same computational budget, hardware resources, and training duration as RipenessGAN.
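A minimal sketch of the resulting generator objective for one translation direction, with the stated weights (cycle consistency 10.0, adversarial 1.0) and the least-squares adversarial term of the original CycleGAN, is shown below; `G_ab`, `G_ba`, and `D_b` are assumed to follow the standard two-generator, two-discriminator setup.

```python
import torch

def cyclegan_generator_loss(G_ab, G_ba, D_b, real_a,
                            lambda_cyc=10.0, lambda_adv=1.0):
    fake_b = G_ab(real_a)                         # translate a -> b
    rec_a = G_ba(fake_b)                          # reconstruct b -> a
    loss_adv = ((D_b(fake_b) - 1.0) ** 2).mean()  # least-squares adversarial term
    loss_cyc = (rec_a - real_a).abs().mean()      # L1 cycle-consistency term
    return lambda_adv * loss_adv + lambda_cyc * loss_cyc
```

The symmetric b→a direction contributes an analogous term, and both generator–discriminator pairs are updated alternately.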

2.5. DDPM Implementation

For comparison, we implemented a class-conditional diffusion model based on the Denoising Diffusion Probabilistic Model (DDPM) framework, utilizing a UNet2DConditionModel [2] architecture with complexity matched to RipenessGAN. The model incorporates class conditioning via cross-attention mechanisms. Training used the standard DDPM objective with 1000 diffusion timesteps and 50 denoising steps for inference, balancing quality and computational efficiency. The model employs a cosine learning rate schedule and was trained with the same computational budget, hardware resources, and training duration as RipenessGAN. The UNet architecture includes attention layers at multiple resolutions (32 × 32, 16 × 16, and 8 × 8) and employs group normalization and SiLU activations, consistent with modern diffusion model design.
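As a reference point, the standard DDPM training objective used by this baseline can be sketched as follows in plain PyTorch; `unet(x_t, t, class_emb)` stands in for the UNet2DConditionModel-style backbone, and the linear noise schedule shown is a common default rather than a detail reported in the text.

```python
import torch
import torch.nn.functional as F

T = 1000                                        # diffusion timesteps (as stated)
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule (assumed default)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product of (1 - beta_t)

def ddpm_loss(unet, x0, class_emb):
    b = x0.size(0)
    t = torch.randint(0, T, (b,), device=x0.device)   # random timestep per sample
    eps = torch.randn_like(x0)                        # Gaussian noise
    a = alphas_bar.to(x0.device)[t].view(b, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps      # forward diffusion to step t
    return F.mse_loss(unet(x_t, t, class_emb), eps)   # epsilon-prediction objective
```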
To further illustrate the learning dynamics of DDPM, we visualized the model’s inference results for generated images at various training stages, from 0 to 200 epochs. As shown in Figure 5, images produced by the model at early epochs (e.g., epoch 0) are dominated by noise and lack recognizable structure. As training progresses toward 200 epochs, the outputs increasingly exhibit meaningful features, clearer contours, and more realistic jujube representations, indicating successful learning of the data distribution and ripeness progression. While diffusion models generate realistic images using iterative noise removal, their conditioning mechanism is based on random noise vectors rather than explicit temporal or semantic information. In contrast, RipenessGAN leverages Growth Day Embeddings, which provide direct temporal guidance during generation. This embedding structure, coupled with our temporal consistency loss, explicitly enforces smooth transitions and controllable ripening progress, which is not achievable with standard diffusion methods.

2.6. Classifier Training Protocol

To evaluate the impact of synthetic data, a ResNet-18-based classifier was trained under four settings: (1) using only the original jujube dataset; (2) using the original dataset augmented with CycleGAN-generated images; (3) using the original dataset augmented with DDPM-generated images; and (4) using the original dataset augmented with RipenessGAN-generated images. Performance was assessed on both training and test sets. Additionally, stage-wise ripeness transformation experiments were conducted, focusing on transitions from class 0 (green fruits) to classes 1 (white ripe), 2 (semi-red), and 3 (fully red).
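A sketch of the four training settings, assuming torchvision's ResNet-18 and a folder-per-class data layout (paths and hyperparameters are illustrative), is shown below.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, models, transforms

tf = transforms.Compose([transforms.Resize((192, 192)), transforms.ToTensor()])
original = datasets.ImageFolder("data/original", transform=tf)  # path is illustrative

SETTINGS = {
    "original": [original],
    "original+cyclegan": [original, datasets.ImageFolder("data/syn_cyclegan", transform=tf)],
    "original+ddpm": [original, datasets.ImageFolder("data/syn_ddpm", transform=tf)],
    "original+ripenessgan": [original, datasets.ImageFolder("data/syn_ripenessgan", transform=tf)],
}

for name, parts in SETTINGS.items():
    loader = DataLoader(ConcatDataset(parts), batch_size=64, shuffle=True)
    model = models.resnet18(num_classes=4)  # four ripeness classes (0-3)
    # ...train with cross-entropy on `loader`, then evaluate on the shared test set...
```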

3. Results

3.1. Evaluation Metrics

The performance of RipenessGAN, CycleGAN, and DDPM was evaluated using several complementary metrics. Fréchet Inception Distance (FID) evaluates the quality of generated images by measuring the statistical similarity between generated and real image distributions [32]. Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) assess the pixel-level and structural similarity between generated and real images, and Learned Perceptual Image Patch Similarity (LPIPS) measures perceptual similarity in a deep feature space [35]. Classification accuracy measures the impact of data augmentation on the performance of ripeness classification models. To ensure fair comparison, all metrics were computed using identical evaluation protocols and test datasets for RipenessGAN, CycleGAN, and the diffusion model.
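For reference, PSNR follows directly from its definition, as in the sketch below; the FID computation shown in the comments assumes the torchmetrics implementation, which the paper does not specify.

```python
import torch

def psnr(x, y, max_val=1.0):
    # PSNR = 10 * log10(MAX^2 / MSE) for images scaled to [0, max_val]
    mse = torch.mean((x - y) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

# FID via torchmetrics (an assumed implementation choice):
# from torchmetrics.image.fid import FrechetInceptionDistance
# fid = FrechetInceptionDistance(normalize=True)  # accepts float images in [0, 1]
# fid.update(real_images, real=True)
# fid.update(generated_images, real=False)
# print(fid.compute())
```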

3.2. Three-Way Comparison Results: RipenessGAN vs. CycleGAN vs. DDPM (Diffusion)

Table 1 presents the results of our controlled comparison between RipenessGAN, CycleGAN, and DDPM under equivalent computational conditions. The comparison demonstrates that when architectural complexity, training resources, and evaluation conditions are standardized, RipenessGAN achieves superior performance while offering significant advantages in computational efficiency and temporal control.
The results reveal that RipenessGAN demonstrates superior performance in most metrics, particularly excelling in perceptual quality (LPIPS) and classification accuracy while maintaining competitive FID scores compared to other models. Most notably, RipenessGAN achieves these results with approximately 2× faster training time compared to CycleGAN and 4× faster than DDPM, making it highly practical for deployment in resource-constrained agricultural scenarios.

3.3. Computational Efficiency Analysis

The computational efficiency comparison reveals the significant practical advantages of RipenessGAN over both CycleGAN and diffusion models. While diffusion models require iterative denoising processes with multiple forward passes (typically 50–1000 steps), and CycleGAN requires training two generators and two discriminators with cycle consistency constraints, RipenessGAN generates images in a single forward pass with a more efficient architecture. During training, RipenessGAN converges to high-quality results within 2.1 h, compared to 3.5 h for CycleGAN and 8.2 h for the equivalent diffusion model under fair comparison conditions.
The memory efficiency analysis shows that RipenessGAN requires approximately 40% less GPU memory during training compared to CycleGAN due to its single generator-discriminator pair architecture, and 60% less compared to the diffusion model’s need to store intermediate states across multiple timesteps. This efficiency advantage indicates that RipenessGAN may offer potential benefits for deployment in resource-limited agricultural environments where real-time processing and energy efficiency are important.

3.4. Ablation Study: Effect of Synthetic Data Augmentation

To further assess the practical impact of synthetic data from different generative models, we compared the performance of ResNet-18 classifiers trained with and without augmented data from each model. Table 2 presents the results.
Table 2 presents a class-wise comparison of classifier performance across the different synthetic data augmentation methods. For each generative model, we report precision, recall [37], F1-score [38], and overall accuracy (%) for the “White ripe,” “Semi-red,” and “Fully red” jujube classes. As shown in the table, synthetic images generated by RipenessGAN, CycleGAN, or DDPM substantially improve all major metrics compared to training with the original dataset alone.
Critically, augmentation with RipenessGAN-generated data yields the best overall classification performance, achieving the highest accuracy, precision, and F1-scores for all classes. While CycleGAN and DDPM augmentation also outperform the baseline, their improvements are less pronounced—particularly in the more challenging “Semi-red” class. These results highlight RipenessGAN’s effectiveness in generating informative synthetic data for classification tasks in imbalanced jujube datasets.
Figure 6 provides a bar graph illustrating the overall classification accuracy achieved by each data augmentation strategy. This visualization enables a direct comparison of classifier performance across generative approaches and highlights the consistent improvements resulting from synthetic augmentation, with RipenessGAN leading in accuracy.
To further examine classifier behavior, confusion matrices [39] for all four training settings are shown in Figure 7. These matrices visually demonstrate how synthetic data augmentation, especially with RipenessGAN, reduces misclassification between neighboring maturity stages and offers additional insights into model-specific improvements.

3.5. Stage-Wise Ripeness Transformation Experiments

To evaluate the visual transformation capabilities of the models, we conducted stage-wise ripeness transformation experiments across four maturity categories: class 0 (green), class 1 (white ripe), class 2 (semi-red), and class 3 (fully red). This experimental setup allowed a direct comparison of how CycleGAN, DDPM, and RipenessGAN synthesize and manipulate developmental transitions in jujube appearance. Representative qualitative results are shown in Figure 8, illustrating the outcomes of each generative model under identical transformation settings. In the results, CycleGAN demonstrated strong proficiency in converting surface-level attributes. The model successfully altered the color spectrum and gloss intensity to represent each targeted maturity stage while preserving the overall fruit geometry. However, its reliance on pixel-level domain translation frequently produced exaggerated gloss, unnaturally strong reflections, and overexposed patches on the skin surface. Such artifacts reduced the natural realism of its outputs and diminished their suitability for agricultural classification tasks, where subtle textural detail and accurate color gradation are critical [6,34,40,41].
DDPM, in contrast, exhibited clear strengths in generating natural textures and diverse surface attributes due to its iterative noise-driven sampling strategy [42]. The resulting samples often appeared visually realistic with smooth color shading and non-uniform patterns, mimicking real-world ripening progression. Nonetheless, the uncontrolled stochasticity of diffusion frequently impaired structural fidelity. Generated fruits sometimes displayed distorted or inflated shapes, blurred edges, and irregular contours. Moreover, DDPM struggled to reproduce consistent class-specific color profiles, resulting in overlapping or ambiguous boundaries between ripeness stages. These deficiencies in structural preservation and chromatic precision limited its effectiveness as an augmentation tool for improving classification accuracy. Compared to these approaches, RipenessGAN consistently produced superior visual and semantic results.
To provide further clarity, we explicitly summarize here the mechanistic distinctions that give RipenessGAN practical advantages over both CycleGAN and diffusion models. Unlike CycleGAN, which primarily performs domain translation based on coarse class labels and often struggles to generate intermediate or temporally smooth transitions, RipenessGAN incorporates a Growth Day Embedding mechanism that allows for fine-grained control over the temporal progression of ripeness stages. This embedding, combined with a tailored temporal consistency loss, ensures that the synthesis process follows biologically plausible trajectories and produces more realistic and structurally consistent images across all stages. In comparison, diffusion models rely on iterative denoising based on random noise, which, while effective for texture realism, does not provide explicit temporal conditioning and sometimes results in ambiguous stage boundaries or shape distortions. By explicitly modeling both categorical and temporal factors, RipenessGAN achieves superior controllability, clearer stage transitions, and more faithful representations of continuous ripening processes.
By integrating temporal embedding and stage progression control, it was able to synthesize fruits that not only preserved geometric structure and clear object boundaries but also matched the progression of ripeness with accurate and nuanced transitions. Unlike CycleGAN, RipenessGAN avoided excessive exposure artifacts, and unlike DDPM, it faithfully maintained structural integrity. This balance of visual fidelity, semantic clarity, and reduced artifact rate made RipenessGAN the most reliable framework for generating class-distinctive samples with high realism.

4. Discussion

Through systematic comparison, our study highlights both the potential and limitations of three prominent generative paradigms for agricultural image synthesis. CycleGAN excels in preserving global fruit geometry while enabling stable domain-level transformations of gloss and color between maturity stages. Notably, CycleGAN achieves an average FID of 118.96 and LPIPS of 0.4102, reflecting relatively lower perceptual quality and increased artifacts. These superficial lighting effects—due to the model’s pixel-mapping nature—undermine subtle textural cues and contribute to lower classifier performance (accuracy 98.67%) when the model is used for data augmentation.
DDPM contributes enhanced surface realism and variability compared to CycleGAN, supported by a higher mean SSIM (0.4698) and PSNR (16.2858), but it struggles to show clear color differences between ripeness classes. Its average FID (73.24) and the highest LPIPS (0.4463) among the compared models indicate lower structural fidelity and more pronounced boundary ambiguity. As a consequence, while DDPM slightly improves classifier accuracy over the original dataset (98.08%), its performance is lower than when using CycleGAN- or RipenessGAN-augmented data.
RipenessGAN, in contrast, is architected with mechanisms tailored for agricultural stage modeling. Its integration of temporal embedding ensures accurate and controllable synthesis along developmental trajectories, while progression control enables smooth stage-wise transitions. These design elements yielded the best perceptual and semantic results, as demonstrated by the lowest average FID (55.2112), an LPIPS score of 0.3659, and superior structural fidelity. Most importantly, classifiers trained using RipenessGAN-augmented data achieved the highest accuracy (98.75%) across all maturity classes, confirming the effectiveness of RipenessGAN as a reliable tool for dataset balancing and classification improvement in agricultural tasks.
In addition to numerical classification improvements, RipenessGAN’s key advantage is its capacity to generate temporally smooth and biologically plausible ripening transitions, as validated in both the qualitative and quantitative analyses in Section 3.5. Unlike existing GAN-based augmentation, which often introduces abrupt transitions or unrealistic artifacts, our method explicitly models gradual changes in agricultural visual traits such as color progression, surface gloss, and texture. These properties more faithfully reflect physiological maturation, supporting higher model interpretability, improved phenotyping, and more effective mitigation of class imbalance in real-world agricultural datasets.
Importantly, the visual differences generated by the models—including color progression, surface gloss, and texture—closely align with the known physiological changes that occur during jujube ripening. For example, gradual increases in surface gloss and the deepening of color produced by RipenessGAN match the natural maturation stages of jujube fruits, as observed in agricultural phenotyping. These traits, which are distinctly represented in the synthetic images, facilitate more reliable classification and analysis of ripening stages and enable improved downstream phenotyping accuracy relative to the abrupt or artifact-prone transitions generated by baseline GAN models.
Such qualitative enhancements, while not always reflected in large accuracy gains, provide substantive benefits for downstream agricultural tasks requiring realistic and temporally consistent data.
In future work, beyond the ResNet-18 classifier used in our evaluation, it remains an important research direction to assess performance using temporal models such as temporal CNNs or RNNs, which are specifically designed to leverage sequential and stage-wise dependencies in ripeness progression data.

Limitations and Generalizability

The Growth Day Embedding approach is designed to be generalizable to other crops with discernible stage-wise appearance changes—such as tomato, grape, or apple. Given appropriate datasets, RipenessGAN can be readily adapted to different crops. Nevertheless, practical limitations may arise due to crop-specific visual cues and dataset constraints, which are acknowledged as important considerations for broader deployment.
While this study provides comprehensive quantitative and qualitative comparisons among RipenessGAN, CycleGAN, and DDPM, additional analysis is warranted. CycleGAN artifacts—such as exaggerated gloss and overexposed skin—result primarily from its direct pixel-level style mapping, which can amplify minor input noise and lose textural nuance. The structural blurring and indistinct boundaries in DDPM are due to its iterative denoising, which sometimes erases fine edges or class distinctions when noise is poorly modeled. Furthermore, results and claims are limited by the use of a single class-balanced jujube dataset; broader evaluation on more diverse, real-world agricultural data will be necessary to fully assess the model’s generalizability. Finally, the feasibility of practical deployment, including inference efficiency and integration in low-resource agricultural settings, remains to be established through empirical testing. These limitations represent important areas for future research and further expansion of the methodology.

5. Conclusions

This study demonstrates that RipenessGAN offers conceptual and practical advances over prior generative models such as CycleGAN and DDPM within agricultural image augmentation tasks. While CycleGAN and DDPM exhibit strengths in geometric stability and textural realism, both have limitations—such as artifact amplification and boundary instability—that affect fine-grained maturity classification. In contrast, RipenessGAN achieves more precise stage control with reduced artifacts, suggesting potential utility for future agricultural AI systems requiring balanced, temporally resolved synthetic data.
Rather than focusing solely on numerical improvements, these findings indicate broader significance for the modeling of continuous phenotyping processes in agriculture. Nevertheless, this work remains subject to important limitations regarding dataset diversity, experimental scope, and real-world deployability, which should be addressed in subsequent research.
Beyond ripeness detection, the capacity of RipenessGAN for controlled generative modeling could facilitate broader crop phenotyping and digital agriculture tasks—especially where nuanced developmental transitions must be captured. Future extensions will include the application of Growth Day Embedding to diverse crop species, the integration of environmental variables (such as lighting, soil, and climate) into generation, and thorough validation in practical field scenarios. These directions will further clarify the robustness and scalability of RipenessGAN for agricultural vision applications.
While classifier evaluation was conducted using ResNet-18, as noted in the Discussion section, future studies will consider temporal CNNs, RNNs, and other architectures for broader validation. In addition, we recognize that the current augmentation and evaluation settings are relatively simple; consequently, future work will incorporate more realistic environmental variables—including lighting variation, occlusion, and background complexity—and conduct field validations to rigorously assess the robustness and applicability of RipenessGAN in real-world agriculture.

Author Contributions

Conceptualization, J.-S.K.; methodology, J.-S.K.; software, J.-S.K.; validation, J.Y.; formal analysis, B.-J.P.; writing—original draft preparation, J.-S.K., J.K. and H.-Y.S.; writing—review and editing, S.C.J. and H.-J.C.; supervision, H.-J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the R&D Program for Forest Science Technology (Project No. RS-2023-KF002445), provided by the Korea Forest Service (Korea Forestry Promotion Institute), Republic of Korea.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

This study was carried out with the support of the ‘R&D Program for Forest Science Technology (Project No. RS-2023-KF002445)’ provided by Korea Forest Service (Korea Forestry Promotion Institute).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680. [Google Scholar]
  2. Ho, J.; Jain, A.; Abbeel, P. Denoising Diffusion Probabilistic Models. In Proceedings of the Advances in Neural Information Processing Systems, Online, 6–12 December 2020; Volume 33, pp. 6840–6851. [Google Scholar]
  3. Nichol, A.Q.; Dhariwal, P. Improved Denoising Diffusion Probabilistic Models. In Proceedings of the International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 8162–8171. [Google Scholar]
  4. Bird, J.J.; Barnes, C.M.; Manso, L.J.; Ekárt, A.; Faria, D.R. Fruit quality and defect image classification with conditional GAN data augmentation. Sci. Hortic. 2022, 293, 110684. [Google Scholar] [CrossRef]
  5. Cang, H.; Yan, T.; Duan, L.; Yan, J.; Zhang, Y.; Tan, F.; Lv, X.; Gao, P. Jujube quality grading using a generative adversarial network with an imbalanced data set. Biosyst. Eng. 2023, 236, 224–237. [Google Scholar] [CrossRef]
  6. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134. [Google Scholar]
  7. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
  8. Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
  9. Adegbenjo, A.O.; Ngadi, M.O. Handling the imbalanced problem in agri-food data analysis. Foods 2024, 13, 3300. [Google Scholar] [CrossRef]
  10. Waldner, F.; Chen, Y.; Lawes, R.; Hochman, Z. Needle in a haystack: Mapping rare and infrequent crops using satellite imagery and data balancing methods. Remote Sens. Environ. 2019, 233, 111375. [Google Scholar] [CrossRef]
  11. Hu, X.; Chen, H.; Duan, Q.; Ahn, C.K.; Shang, H.; Zhang, D. A Comprehensive Review of Diffusion Models in Smart Agriculture: Progress, Applications, and Challenges. arXiv 2025, arXiv:2507.18376. [Google Scholar] [CrossRef]
  12. Liu, D.; Cao, Y.; Yang, J.; Wei, J.; Zhang, J.; Rao, C.; Wu, B.; Zhang, D. SM-CycleGAN: Crop image data enhancement method based on self-attention mechanism CycleGAN. Sci. Rep. 2024, 14, 9277. [Google Scholar] [CrossRef]
  13. Wang, L.; Wang, L.; Chen, S. ESA-CycleGAN: Edge feature and self-attention based cycle-consistent generative adversarial network for style transfer. IET Image Process. 2022, 16, 176–190. [Google Scholar] [CrossRef]
  14. Muhammad, A.; Salman, Z.; Lee, K.; Han, D. Harnessing the power of diffusion models for plant disease image augmentation. Front. Plant Sci. 2023, 14, 1280496. [Google Scholar] [CrossRef] [PubMed]
  15. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  16. Liu, Y.; Nezami, F.R.; Edelman, E.R. A transformer-based pyramid network for coronary calcified plaque segmentation in intravascular optical coherence tomography images. Comput. Med Imaging Graph. 2024, 113, 102347. [Google Scholar] [CrossRef]
  17. Zhang, J.; Huang, J.; Jin, S.; Lu, S. Vision-language models for vision tasks: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 5625–5644. [Google Scholar] [CrossRef] [PubMed]
  18. Huang, M.; Gu, C.; Bai, Y. Effect of fertilization on methane and nitrous oxide emissions and global warming potential on agricultural land in China: A meta-analysis. Agriculture 2023, 14, 34. [Google Scholar] [CrossRef]
  19. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
  20. Cui, Y.; Jia, M.; Lin, T.Y.; Song, Y.; Belongie, S. Class-balanced loss based on effective number of samples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9268–9277. [Google Scholar]
  21. Shahhosseini, M.M. Optimized Ensemble Learning and Its Application in Agriculture. Ph.D. Thesis, Iowa State University, Ames, IA, USA, 2021. [Google Scholar]
  22. Ban, Z.; Fang, C.; Liu, L.; Wu, Z.; Chen, C.; Zhu, Y. Detection of fundamental quality traits of winter jujube based on computer vision and deep learning. Agronomy 2023, 13, 2095. [Google Scholar] [CrossRef]
  23. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. arXiv 2017, arXiv:1706.03762. [Google Scholar] [CrossRef]
  24. Lucic, M.; Kurach, K.; Michalski, M.; Gelly, S.; Bousquet, O. Are GANs created equal? A large-scale study. arXiv 2018, arXiv:1711.10337. [Google Scholar] [CrossRef]
  25. Maryam, M.; Biagiola, M.; Stocco, A.; Riccio, V. Benchmarking Generative AI Models for Deep Learning Test Input Generation. In Proceedings of the 2025 IEEE Conference on Software Testing, Verification and Validation (ICST), Napoli, Italy, 31 March–4 April 2025; pp. 174–185. [Google Scholar]
  26. Morande, S. Benchmarking Generative AI: A Comparative Evaluation and Practical Guidelines for Responsible Integration into Academic Research. 2023. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4571867 (accessed on 14 October 2025).
  27. Rosenkranz, M.; Kalina, K.A.; Brummund, J.; Kästner, M. A comparative study on different neural network architectures to model inelasticity. Int. J. Numer. Methods Eng. 2023, 124, 4802–4840. [Google Scholar] [CrossRef]
  28. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
  29. Gao, Z.; Sun, Y. Statistical Inference for Generative Model Comparison. arXiv 2025, arXiv:2501.18897. [Google Scholar]
  30. Lesort, T.; Stoian, A.; Goudou, J.F.; Filliat, D. Training discriminative models to evaluate generative ones. In Proceedings of the International Conference on Artificial Neural Networks, Munich, Germany, 17–19 September 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 604–619. [Google Scholar]
  31. Joshi, A.; Guevara, D.; Earles, M. Standardizing and centralizing datasets for efficient training of agricultural deep learning models. Plant Phenomics 2023, 5, 0084. [Google Scholar] [CrossRef]
  32. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; Hochreiter, S. Gans trained by a two time-scale update rule converge to a local nash equilibrium. arXiv 2017, arXiv:1706.08500. [Google Scholar] [CrossRef]
  33. Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
  34. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  35. Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 586–595. [Google Scholar]
  36. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  37. Davis, J.; Goadrich, M. The relationship between Precision-Recall and ROC curves. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA, 25–29 June 2006; pp. 233–240. [Google Scholar]
  38. Buttcher, S.; Clarke, C.L.; Cormack, G.V. Information Retrieval: Implementing and Evaluating Search Engines; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  39. Stehman, S.V. Selecting and interpreting measures of thematic classification accuracy. Remote Sens. Environ. 1997, 62, 77–89. [Google Scholar] [CrossRef]
  40. Yoo, T.K.; Choi, J.Y.; Kim, H.K. CycleGAN-based deep learning technique for artifact reduction in fundus photography. Graefe’s Arch. Clin. Exp. Ophthalmol. 2020, 258, 1631–1637. [Google Scholar] [CrossRef] [PubMed]
  41. Karras, T.; Laine, S.; Aila, T. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4401–4410. [Google Scholar]
  42. Jazbec, M.; Wong-Toi, E.; Xia, G.; Zhang, D.; Nalisnick, E.; Mandt, S. Generative Uncertainty in Diffusion Models. arXiv 2025, arXiv:2502.20946. [Google Scholar] [CrossRef]
Figure 1. Visual representation of the jujube ripeness stages. (a–c): class 0 (green jujube); (d): class 1 (white ripe jujube); (e): class 2 (semi-red jujube); (f): class 3 (fully red jujube).
Figure 2. Workflow for generating synthetic jujube ripeness stage data using RipenessGAN, and evaluating classification accuracy with a trained classification model.
Figure 3. RipenessGAN-generated continuous jujube ripening sequence at growth days 0, 5, 14, 20, 28, 35, 42, and 56.
Figure 4. The overall architecture of RipenessGAN. The generator receives an input image, class label, and Growth Day Embedding, and outputs a ripeness-transformed image. The PatchGAN discriminator distinguishes real from fake images.
Figure 5. Epoch-wise visualization of DDPM-generated images during training.
Figure 6. Overall classification accuracy for each data augmentation strategy.
Figure 7. Confusion matrices for classifier performance under different augmentation settings.
Figure 8. Comparative results of stage-wise jujube ripeness transformation across different generative models.
Table 1. Comparison of model performances. Arrows indicate the preferred direction for each metric: ↑ means higher values are better; ↓ means lower values are better.

| Model | Class | FID (↓) | PSNR (↑) | SSIM (↑) | LPIPS (↓) |
|---|---|---|---|---|---|
| CycleGAN | White ripe | 125.3880 | 15.0891 | 0.1906 | 0.4015 |
| CycleGAN | Semi-red | 139.2082 | 15.5606 | 0.1715 | 0.4293 |
| CycleGAN | Fully red | 92.2703 | 16.7293 | 0.2924 | 0.3997 |
| CycleGAN | Average | 118.9555 | 15.7930 | 0.2182 | 0.4102 |
| DDPM (Diffusion) | White ripe | 60.9517 | 15.6192 | 0.4692 | 0.4210 |
| DDPM (Diffusion) | Semi-red | 58.0850 | 16.0710 | 0.4636 | 0.4423 |
| DDPM (Diffusion) | Fully red | 100.6835 | 17.1673 | 0.4765 | 0.4756 |
| DDPM (Diffusion) | Average | 73.2400 | 16.2858 | 0.4698 | 0.4463 |
| RipenessGAN (Proposed method) | White ripe | 22.9409 | 15.6279 | 0.4413 | 0.3538 |
| RipenessGAN (Proposed method) | Semi-red | 58.2432 | 15.8139 | 0.3576 | 0.3543 |
| RipenessGAN (Proposed method) | Fully red | 84.4496 | 17.0849 | 0.4780 | 0.3897 |
| RipenessGAN (Proposed method) | Average | 55.2112 | 16.1756 | 0.4256 | 0.3659 |
Table 2. Class-wise comparison between RipenessGAN, CycleGAN, and DDPM.

| Model | Class | Precision | Recall | F1 | Acc. (%) |
|---|---|---|---|---|---|
| Original | White ripe | 0.9689 | 0.9777 | 0.9732 | 97.75 |
| Original | Semi-red | 0.9621 | 0.9670 | 0.9646 | |
| Original | Fully red | 0.9723 | 0.9710 | 0.9716 | |
| Original + CycleGAN | White ripe | 0.9900 | 0.9900 | 0.9900 | 98.67 |
| Original + CycleGAN | Semi-red | 0.9900 | 0.9910 | 0.9905 | |
| Original + CycleGAN | Fully red | 0.9920 | 0.9900 | 0.9910 | |
| Original + DDPM (Diffusion) | White ripe | 0.9761 | 0.9810 | 0.9785 | 98.08 |
| Original + DDPM (Diffusion) | Semi-red | 0.9777 | 0.9780 | 0.9778 | |
| Original + DDPM (Diffusion) | Fully red | 0.9784 | 0.9770 | 0.9777 | |
| Original + RipenessGAN (Proposed method) | White ripe | 0.9901 | 0.9910 | 0.9905 | 98.75 |
| Original + RipenessGAN (Proposed method) | Semi-red | 0.9871 | 0.9920 | 0.9896 | |
| Original + RipenessGAN (Proposed method) | Fully red | 0.9949 | 0.9900 | 0.9924 | |
