RGSGAN–MACRNet: A More Accurate Recognition Method for Imperfect Corn Kernels Under Sample-Size-Limited Conditions
Abstract
1. Introduction
- An RGSGAN model is designed by integrating residual structures and a spatial–channel synergistic attention (SCSA) mechanism into the generator, together with a Wasserstein distance loss and gradient penalty. This stabilizes adversarial training and enables the generation of high-quality imperfect corn kernel images under sample-size-limited conditions, thereby enhancing sample realism and diversity.
- A MACRNet model is developed, featuring a multi-branch asymmetric convolutional residual architecture for multi-scale feature fusion. This reduces the parameter count and computational cost while improving the representation of fine-grained textures and global structural information.
- A hybrid data augmentation and recognition framework is proposed that combines generative and non-generative augmentation methods. This effectively alleviates data scarcity and class imbalance, thereby improving recognition accuracy and generalization for imperfect corn kernels.
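The Wasserstein loss with gradient penalty named in the first contribution follows the standard WGAN-GP formulation (the specific penalty coefficient λ used in RGSGAN is given in the paper's Section 2.2, not here). For critic D, real distribution ℙr, and generator distribution ℙg:

```latex
% Critic objective with gradient penalty (standard WGAN-GP form):
\mathcal{L}_{D} =
  \mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}\!\left[D(\tilde{x})\right]
  - \mathbb{E}_{x\sim\mathbb{P}_{r}}\!\left[D(x)\right]
  + \lambda\,\mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}\!\left[
      \bigl(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_{2} - 1\bigr)^{2}
    \right],
\qquad
\mathcal{L}_{G} = -\,\mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}\!\left[D(\tilde{x})\right]
```

Here x̂ is sampled uniformly along straight lines between real and generated sample pairs; the penalty drives the critic toward a 1-Lipschitz function, which is what stabilizes training relative to weight clipping.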
2. Model and Methods
2.1. RGSGAN Model Architecture
2.2. Loss Function Design for RGSGAN
2.3. MACRNet Model Architecture
3. Results and Analysis
3.1. Dataset Construction
3.2. Experimental Environment and Parameter Settings
3.3. Evaluation Metrics
3.4. Evaluation of Data Augmentation Effectiveness
3.5. Comparison of Classification Performance
3.6. Ablation Studies
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References













| Model | Optimizer | Epochs | Batch Size | Loss Function | LR Schedule | Data Processing | Dropout Rate |
|---|---|---|---|---|---|---|---|
| ResNet18 / ResNet50 | SGD; initial LR 1 × 10⁻²; momentum 0.9; weight decay 1 × 10⁻⁴ | 120 | 64 | Cross-entropy | Cosine annealing from the initial LR to 1 × 10⁻⁶ | Resize to 224 × 224; normalize with mean = [0.647, 0.638, 0.448], std = [0.177, 0.192, 0.209]; random resized crop; random horizontal flip | None |
| EfficientNet-v2 | RMSProp; initial LR 4 × 10⁻³; momentum 0.9; weight decay 1 × 10⁻⁴ | 120 | 64 | Cross-entropy | As above | As above | 0.2 |
| ViT | Adam; initial LR 3 × 10⁻³; betas (0.9, 0.999); weight decay 1 × 10⁻⁴ | 120 | 64 | Cross-entropy | As above | As above | 0.2 |
| Swin | AdamW; initial LR 1 × 10⁻³; betas (0.9, 0.999); weight decay 1 × 10⁻⁴ | 120 | 64 | Cross-entropy | As above | As above | 0.2 |
| ConvNeXt-T / InceptionNeXt / ConvNeXt-v2 / RepViT / VAN | AdamW; initial LR 4 × 10⁻³; betas (0.9, 0.999); weight decay 1 × 10⁻⁴ | 120 | 64 | Cross-entropy | As above | As above | None |
| MACRNet | Adam; initial LR 1 × 10⁻³; betas (0.9, 0.999); weight decay 1 × 10⁻⁴ | 120 | 64 | Cross-entropy | As above | As above | 0.2 |
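The cosine-annealing schedule shared by every row of the table can be written in closed form. A minimal sketch (an illustration of the schedule, not the authors' training code; the function name is ours):

```python
import math

def cosine_annealed_lr(epoch, lr0, total_epochs=120, lr_min=1e-6):
    """Cosine annealing from lr0 at epoch 0 down to lr_min at total_epochs."""
    return lr_min + 0.5 * (lr0 - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))

# SGD row of the table: initial LR 1e-2 decaying to 1e-6 over 120 epochs
schedule = [cosine_annealed_lr(e, 1e-2) for e in (0, 60, 120)]  # 1e-2, ~5e-3, 1e-6
```

In PyTorch this corresponds to `torch.optim.lr_scheduler.CosineAnnealingLR` with `T_max=120` and `eta_min=1e-6`.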
| Model | Data Processing | Generator Optimizer | Discriminator Optimizer | Batch Size | Epochs | Loss Function |
|---|---|---|---|---|---|---|
| DCGAN | Resize to 224 × 224; normalize with mean = [0.647, 0.638, 0.448], std = [0.177, 0.192, 0.209] | Adam; LR 1 × 10⁻⁴; betas (0.5, 0.999); weight decay 0 | Adam; LR 2 × 10⁻⁴; betas (0.5, 0.999); weight decay 0 | 64 | 1600 | BCE |
| DCGAN-GP | As above | As above | As above | 64 | 1600 | WGAN-GP |
| WGAN | As above | As above | As above | 64 | 1600 | Wasserstein |
| CGAN | As above | As above | As above | 64 | 1600 | BCE |
| LSGAN | As above | As above | As above | 64 | 1600 | Least squares |
| ACGAN | As above | As above | As above | 64 | 1600 | BCE + auxiliary classifier loss |
| RGSGAN | As above | As above | As above | 64 | 1600 | WGAN-GP |
| Model | FID |
|---|---|
| WGAN | 39.22 |
| DCGAN | 41.89 |
| DCGAN-GP | 38.14 |
| RGSGAN (ours) | 29.37 |
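The FID values above measure the Fréchet distance between Gaussian fits (μ, Σ) of real and generated feature activations, in practice extracted with Inception-v3: FID = ‖μr − μg‖² + Tr(Σr + Σg − 2(ΣrΣg)^(1/2)). A minimal NumPy sketch of the statistic itself, assuming activations have already been extracted (the synthetic arrays below are illustrative, not the paper's features):

```python
import numpy as np

def _sqrtm_psd(m):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    w = np.clip(w, 0.0, None)
    return (v * np.sqrt(w)) @ v.T

def fid(feats_real, feats_fake):
    """Frechet distance between two activation sets of shape (n_samples, dim)."""
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    # Tr(sqrtm(s1 @ s2)) computed via the symmetric form sqrtm(s2^1/2 s1 s2^1/2)
    s2h = _sqrtm_psd(s2)
    covmean = _sqrtm_psd(s2h @ s1 @ s2h)
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2.0 * covmean))

# Sanity check on synthetic activations: identical sets give FID ~ 0,
# and a mean shift of 1 in each of 8 dims gives FID ~ 8.
rng = np.random.default_rng(0)
a = rng.normal(size=(2000, 8))
d_same, d_shift = fid(a, a), fid(a, a + 1.0)
```

Lower FID means the generated distribution is statistically closer to the real one, which is why RGSGAN's 29.37 indicates the most realistic samples in the comparison.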
| Model | IS | FID |
|---|---|---|
| WGAN | 6.0 | 285.5 |
| CGAN | 6.9 | 244.6 |
| LSGAN | 5.7 | 225.0 |
| ACGAN | 7.7 | 288.5 |
| DCGAN | 5.3 | 313.1 |
| DCGAN-GP | 6.3 | 273.6 |
| DCGAN + Residual | 5.6 | 296.5 |
| DCGAN + SCSA | 6.8 | 251.1 |
| DCGAN + Residual + WGAN-GP | 6.9 | 190.2 |
| DCGAN + Residual + SCSA | 6.2 | 230.7 |
| DCGAN + SCSA + WGAN-GP | 7.8 | 175.9 |
| RGSGAN (ours) | 8.5 | 148.1 |
| Model | Macro-P (%) | Macro-R (%) | Macro-F1 (%) | Accuracy (%) |
|---|---|---|---|---|
| ResNet18 | 89.702 | 90.382 | 90.041 | 90.482 |
| ResNet50 | 95.597 | 96.257 | 95.926 | 96.137 |
| EfficientNet-v2 | 91.734 | 92.604 | 92.167 | 92.654 |
| ConvNeXt-T | 95.501 | 96.201 | 95.850 | 95.801 |
| InceptionNeXt | 97.143 | 97.513 | 97.328 | 97.293 |
| ConvNeXt-v2 | 94.115 | 94.815 | 94.464 | 94.715 |
| Swin | 91.588 | 92.268 | 91.927 | 92.468 |
| ViT | 92.579 | 93.179 | 92.878 | 93.029 |
| VAN | 94.632 | 94.982 | 94.807 | 94.882 |
| RepViT | 95.861 | 96.591 | 96.225 | 96.541 |
| MACRNet (ours) | 98.693 | 99.013 | 98.853 | 98.813 |
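The macro metrics in the table are unweighted averages over the defect classes, and the reported Macro-F1 values match the harmonic mean of Macro-P and Macro-R (e.g., 2·89.702·90.382/(89.702+90.382) ≈ 90.041 for ResNet18). A self-contained sketch using a hypothetical confusion matrix (not the paper's data):

```python
import numpy as np

def macro_metrics(cm):
    """Macro-averaged metrics from a confusion matrix cm[true, pred]."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    per_class_p = tp / np.maximum(cm.sum(axis=0), 1e-12)  # divide by predicted counts
    per_class_r = tp / np.maximum(cm.sum(axis=1), 1e-12)  # divide by true counts
    macro_p = per_class_p.mean()
    macro_r = per_class_r.mean()
    # Macro-F1 as the harmonic mean of Macro-P and Macro-R, matching the table
    macro_f1 = 2 * macro_p * macro_r / (macro_p + macro_r)
    accuracy = tp.sum() / cm.sum()
    return macro_p, macro_r, macro_f1, accuracy

# Hypothetical 3-class confusion matrix
macro_p, macro_r, macro_f1, acc = macro_metrics([[50, 2, 0],
                                                 [3, 45, 2],
                                                 [1, 1, 48]])
```

Note that some libraries (e.g., scikit-learn's `f1_score(average="macro")`) instead average the per-class F1 scores; the two conventions differ slightly on imbalanced data.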
| Model | FLOPs (G) | Parameters (M) | Accuracy (%) |
|---|---|---|---|
| ResNet18 | 1.826 | 11.72 | 90.482 |
| ResNet50 | 3.848 | 22.43 | 96.137 |
| EfficientNet-v2 | 2.723 | 19.38 | 92.654 |
| ConvNeXt-T | 4.157 | 26.53 | 95.801 |
| InceptionNeXt | 3.910 | 24.57 | 97.293 |
| ConvNeXt-v2 | 4.149 | 26.51 | 94.715 |
| Swin | 4.147 | 25.71 | 92.468 |
| ViT | 4.359 | 26.98 | 93.029 |
| VAN | 4.859 | 25.76 | 94.882 |
| RepViT | 5.984 | 29.33 | 96.541 |
| MACRNet (ours) | 4.542 | 8.446 | 98.813 |
| Model | Mean Accuracy (%) | Std | Std Error | 95% CI (%) |
|---|---|---|---|---|
| ResNet18 | 90.482 | 0.327 | 0.189 | 90.09–90.87 |
| ResNet50 | 96.137 | 0.249 | 0.144 | 95.52–96.75 |
| EfficientNet-v2 | 92.654 | 0.309 | 0.178 | 92.00–93.31 |
| ConvNeXt-T | 95.801 | 0.153 | 0.088 | 95.42–96.17 |
| InceptionNeXt | 97.293 | 0.045 | 0.026 | 97.18–97.41 |
| ConvNeXt-v2 | 94.715 | 0.097 | 0.056 | 94.48–94.95 |
| Swin | 92.468 | 0.153 | 0.088 | 92.09–92.85 |
| ViT | 93.029 | 0.153 | 0.088 | 92.66–93.40 |
| VAN | 94.882 | 0.076 | 0.044 | 94.69–95.07 |
| RepViT | 96.541 | 0.148 | 0.085 | 96.17–96.91 |
| MACRNet (ours) | 98.813 | 0.012 | 0.007 | 98.79–98.84 |
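Across the table, Std Error ≈ Std/√3, which is consistent with three repeated runs per model; most of the intervals are then consistent with the Student-t critical value 4.303 (df = 2). A sketch of that recipe under those assumptions (the three run values below are hypothetical):

```python
import math

def ci95(runs):
    """Mean, sample std, standard error, and 95% CI for repeated-run accuracies."""
    n = len(runs)
    mean = sum(runs) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in runs) / (n - 1))  # ddof = 1
    se = std / math.sqrt(n)
    # Two-sided 95% Student-t critical values for small samples (df = n - 1)
    t_crit = {2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571}[n - 1]
    return mean, std, se, (mean - t_crit * se, mean + t_crit * se)

# Hypothetical accuracies from three training runs
mean, std, se, (lo, hi) = ci95([96.3, 96.0, 96.1])
```

With so few runs the t interval is much wider than a normal-approximation interval, so MACRNet's very tight CI reflects its near-zero run-to-run variance (std 0.012).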
| Model | FLOPs (G) | Parameters (M) | Accuracy (%) |
|---|---|---|---|
| Remove the 3 × 1/1 × 3 branch | 4.544 | 8.448 | 92.136 |
| Remove the 5 × 1/1 × 5 branch | 4.541 | 8.445 | 94.222 |
| Remove the 7 × 1/1 × 7 branch | 4.537 | 8.441 | 96.421 |
| Remove the 9 × 1/1 × 9 branch | 4.533 | 8.446 | 93.745 |
| MACRNet (ours) | 4.542 | 8.446 | 98.813 |
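Each ablated branch factorizes a k × k convolution into a k × 1 convolution followed by a 1 × k convolution, which is why removing any single branch changes the parameter count so little: the factorized form costs 2k weights per filter pair instead of k². The saving can be checked with simple arithmetic (the 64-channel width is illustrative, not MACRNet's actual configuration):

```python
def conv_params(c_in, c_out, kh, kw, bias=False):
    """Weight count of a single 2-D convolution layer."""
    return c_out * c_in * kh * kw + (c_out if bias else 0)

c = 64
for k in (3, 5, 7, 9):  # the four ablated branch sizes
    full = conv_params(c, c, k, k)
    asym = conv_params(c, c, k, 1) + conv_params(c, c, 1, k)
    print(f"{k}x{k}: full={full}, kx1+1xk={asym}, saving={1 - asym / full:.1%}")
```

The saving grows with k (2/k of the full cost), which is how MACRNet keeps a 9 × 9-scale receptive field while staying at 8.446 M parameters.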
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wan, C.; Li, W.; Zhang, Q.; Xiao, L.; Lv, P.; Zhao, H.; Jing, S. RGSGAN–MACRNet: A More Accurate Recognition Method for Imperfect Corn Kernels Under Sample-Size-Limited Conditions. Foods 2025, 14, 4356. https://doi.org/10.3390/foods14244356

