Hyperspectral Analysis of Apricot Quality Parameters Using Classical Machine Learning and Deep Neural Networks †
Abstract
1. Introduction
2. Materials and Methods
2.1. Sample Preparation and Reference Measurement
- 210 mg/kg DM at the first time point;
- 93 mg/kg DM at the second time point;
- 41 mg/kg DM at the third time point.
- Imaging spectrograph: Imspector V17E (Spectral Imaging Ltd., Oulu, Finland);
- Camera: Goldeye CL-008 SWIR Cool CCD camera (Spectral Imaging Ltd., Oulu, Finland);
- Illumination: Four 50 W halogen lamps (Osram, Ruse, Bulgaria);
- Software: SpectralDAQ-NIR v.1 (Spectral Imaging Ltd., Oulu, Finland);
- Mechanical system: Electronically controlled displacement platform (Ezi-Servo, Ruse, Bulgaria).
2.2. Hyperspectral Image Data Acquisition and Preprocessing
2.3. Predictive Models
- standard machine learning models: PLSR and Stacked Autoencoders (SAEs) in combination with Random Forest (RF), as illustrated in the sketch after this list;
- deep learning models: Convolutional Neural Networks (CNNs) in 1D, 2D, and 3D configurations.
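To make the two classical pipelines concrete, a minimal sketch is given below. It assumes one mean spectrum per sample as input; the placeholder data, the number of PLS components, the autoencoder layer sizes, and the Random Forest settings are illustrative assumptions, not the settings used in this study.

```python
# Minimal sketch of the two classical pipelines (PLSR and SAE-RF).
# All sizes and hyperparameters below are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from tensorflow import keras

# X: one mean spectrum per sample (n_samples x n_bands); y: reference values (mg/kg DM).
X = np.random.rand(200, 256).astype("float32")          # placeholder spectra
y = (np.random.rand(200) * 200.0).astype("float32")     # placeholder reference values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# --- PLSR: regression on a small number of latent variables ---
pls = PLSRegression(n_components=10)                     # component count is assumed
pls.fit(X_tr, y_tr)
print("PLSR R2 (test):", pls.score(X_te, y_te))

# --- SAE-RF: autoencoder features + Random Forest regression ---
# Trained jointly here for brevity; a true stacked autoencoder pretrains layers greedily.
inp = keras.Input(shape=(X.shape[1],))
h = keras.layers.Dense(128, activation="relu")(inp)      # first encoding layer
h = keras.layers.Dense(32, activation="relu")(h)         # bottleneck features
d = keras.layers.Dense(128, activation="relu")(h)
out = keras.layers.Dense(X.shape[1], activation="linear")(d)
sae = keras.Model(inp, out)
sae.compile(optimizer="adam", loss="mse")
sae.fit(X_tr, X_tr, epochs=50, batch_size=16, verbose=0) # unsupervised reconstruction

encoder = keras.Model(inp, h)                            # keep only the encoder
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(encoder.predict(X_tr, verbose=0), y_tr)
print("SAE-RF R2 (test):", rf.score(encoder.predict(X_te, verbose=0), y_te))
```

The design idea is the same in both cases: compress the high-dimensional spectra into a small set of informative features (latent PLS variables or the autoencoder bottleneck) before fitting the regressor.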
2.4. Partial Least Squares Regression (PLSR)
2.5. Stacked Autoencoders (SAEs) with Random Forest (RF)
2.6. Convolutional Neural Networks
- 1D-CNN (One-Dimensional Convolutional Neural Network): 1D-CNNs process one-dimensional sequential data, making them suitable for both time-series analysis and spectral data processing. In hyperspectral analysis, 1D-CNNs use spectral information at the pixel level, ignoring spatial dependencies [22]. They are computationally efficient, making them well suited for real-time applications [23].
- 2D-CNN (Two-Dimensional Convolutional Neural Network): 2D-CNNs operate on two-dimensional image data, using convolutional filters to capture spatial features. They are commonly applied to RGB and hyperspectral images, often after prior reduction of the spectral dimension [24]. In hyperspectral imaging, 2D-CNNs analyze local texture and spatial relationships within a given spectral band [25].
- 3D-CNN (Three-Dimensional Convolutional Neural Network): These networks process spectral and spatial information simultaneously using three-dimensional convolutional filters [20]. This approach preserves relationships across spectral bands as well as spatial features, making 3D-CNNs highly effective for hyperspectral image analysis, although they require substantial computational resources. The choice among these architectures therefore depends on the requirements of the specific task; a sketch of the three input configurations is given after this list.
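To make the three configurations concrete, the sketch below shows how the same hyperspectral data can be presented to 1D-, 2D-, and 3D-CNN regressors. The band count, patch size, kernel sizes, and layer widths are illustrative assumptions and do not reproduce the architectures developed in Sections 2.6.1-2.6.3.

```python
# Minimal sketch of 1D-, 2D- and 3D-CNN regressors for hyperspectral data.
# Band count, patch size, kernels and widths are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

n_bands, patch = 256, 9   # assumed number of spectral bands and spatial patch size

# 1D-CNN: one spectrum per pixel/sample, convolution along the spectral axis only.
cnn1d = keras.Sequential([
    keras.Input(shape=(n_bands, 1)),
    layers.Conv1D(16, 7, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(32, 5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1),                      # predicted quality parameter
])

# 2D-CNN: a spatial patch with the spectral dimension treated as channels
# (often after PCA/band selection), convolution over the two spatial axes.
cnn2d = keras.Sequential([
    keras.Input(shape=(patch, patch, n_bands)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1),
])

# 3D-CNN: joint spectral-spatial convolution over a (patch, patch, bands, 1) cube.
cnn3d = keras.Sequential([
    keras.Input(shape=(patch, patch, n_bands, 1)),
    layers.Conv3D(8, (3, 3, 7), activation="relu"),
    layers.MaxPooling3D((1, 1, 2)),
    layers.Conv3D(16, (3, 3, 5), activation="relu"),
    layers.GlobalAveragePooling3D(),
    layers.Dense(1),
])

for name, m in [("1D", cnn1d), ("2D", cnn2d), ("3D", cnn3d)]:
    m.compile(optimizer="adam", loss="mse", metrics=["mae"])
    print(name, m.count_params(), "parameters")
```

The essential difference is the input tensor: a single spectrum for the 1D case, a spatial patch with the bands as channels for the 2D case, and a full spectral-spatial cube for the 3D case, which is why the 3D model is the most expensive to train.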
2.6.1. Development of 1D-CNN Architecture
2.6.2. Development of 2D-CNN Architecture
2.6.3. Development of 3D-CNN Architecture
3. Results and Discussion
3.1. PLS Model Results
3.2. SAE-RF Model Results
3.3. CNN Model Results
- 1D-CNN: The training and testing processes are given in Figure 11. The 1D-CNN model showed an R2 value of 0.7 for training and 0.65 for testing, indicating moderate but acceptable predictive performance with reasonable generalization ability, though there is room for improvement in capturing the underlying patterns in the data. The corresponding error values were MAEt = 33.7 and RMSEt = 17.56 for training and MAEp = 33.71 and RMSEp = 36.65 for prediction.
- 2D-CNN: The training and testing processes are presented in Figure 12. The 2D-CNN model demonstrated strong performance, achieving an R2 value of 0.95, an MAE of 12.72, and an RMSE of 17.56 on the training set. On the test set, the model achieved an R2 value of 0.75, an MAE of 30.58, and an RMSE of 36.65. These results indicate excellent fit on the training data, but also suggest some degree of overfitting. Nevertheless, the model still explains 75% of the variance in the unseen data, which may be sufficient for the intended application.
- 3D-CNN: The training and testing processes are presented in Figure 13. The 3D-CNN model achieved an R2 value of 0.84 for training and 0.69 for testing, with MAE and RMSE values of 22.07 and 17.56 for training, and 36.11 and 36.65 for testing, respectively. The model performs well on the training set, indicating its ability to capture the underlying patterns in the data, but the drop in performance on the test set suggests potential overfitting: the model may have learned features specific to the training data that do not generalize to unseen samples. Such overfitting could be attributed to the limited size of the dataset or the high complexity of the 3D-CNN model relative to the amount of training data. A short sketch of how these evaluation metrics are computed is given after this list.
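For clarity, the sketch below shows one way the reported metrics (R2, MAE, and RMSE) can be computed with scikit-learn; the numeric arrays are placeholders rather than data from this study.

```python
# Minimal sketch of the evaluation metrics (R2, MAE, RMSE) used above.
# The arrays are placeholder values, not measurements from this study.
import numpy as np
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

def report(y_true, y_pred, label):
    r2 = r2_score(y_true, y_pred)
    mae = mean_absolute_error(y_true, y_pred)
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))   # RMSE = sqrt(MSE)
    print(f"{label}: R2={r2:.2f}  MAE={mae:.2f}  RMSE={rmse:.2f}")

# Example usage with hypothetical reference values and model predictions:
y_true = np.array([180.0, 150.0, 95.0, 60.0, 42.0])
y_pred = np.array([175.0, 158.0, 90.0, 66.0, 40.0])
report(y_true, y_pred, "Test")
```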
3.4. Comparison of Regression Models
4. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| SAE | Stacked Autoencoder |
| DM | Dry Matter |
| BPNN | Backpropagation Neural Network |
| CNN | Convolutional Neural Network |
| SAE-RF | Stacked Autoencoder in Combination with Random Forest |
| PLSR | Partial Least Squares Regression |
| MAE | Mean Absolute Error |
| RMSE | Root Mean Square Error |
| PCA | Principal Component Analysis |
| AE | Autoencoder |
References
1. Elisabeth, J.M.B.; Esther, N.D.K.; Susan, G.B.d.K.; Joyce, S.; Arend, G.J.A.; Niels, F.M.K.; Theo, J.M.R. Hyperspectral imaging for tissue classification, a way toward smart laparoscopic colorectal surgery. J. Biomed. Opt. 2019, 24, 016002.
2. Wang, B.; Sun, J.; Xia, L.; Liu, J.; Wang, Z.; Li, P.; Sun, X. The Applications of Hyperspectral Imaging Technology for Agricultural Products Quality Analysis: A Review. Food Rev. Int. 2023, 39, 1043–1062.
3. Parrag, V.; Gillay, Z.; Kovács, Z.; Zitek, A.; Böhm, K.; Hinterstoisser, B.; Baranyai, L. Application of hyperspectral imaging to detect toxigenic Fusarium infection on cornmeal. Prog. Agric. Eng. Sci. 2020, 16, 51–60.
4. Wang, L.; Jin, J.; Song, Z.; Wang, J.; Zhang, L.; Rehman, T.U.; Tuinstra, M.R. LeafSpec: An accurate and portable hyperspectral corn leaf imager. Comput. Electron. Agric. 2020, 169, 105209.
5. Chen, J.; Chen, S.; Zhou, P.; Qian, Y. Deep neural network based hyperspectral pixel classification with factorized spectral-spatial feature representation. IEEE Access 2019, 7, 81407–81418.
6. Jung, D.H.; Kim, J.D.; Kim, H.Y.; Lee, T.S.; Kim, H.S.; Park, S.H. A hyperspectral data 3D convolutional neural network classification model for diagnosis of gray mold disease in strawberry leaves. Front. Plant Sci. 2022, 13, 837020.
7. Liu, K.H.; Yang, M.H.; Huang, S.T.; Lin, C. Plant species classification based on hyperspectral imaging via a lightweight convolutional neural network model. Front. Plant Sci. 2022, 13, 855660.
8. Nie, P.; Zhang, J.; Feng, X.; Yu, C.; He, Y. Classification of hybrid seeds using near-infrared hyperspectral imaging technology combined with deep learning. Sens. Actuators B Chem. 2019, 296, 126630.
9. Yu, Z.; Fang, H.; Zhangjin, Q.; Mi, C.; Feng, X.; He, Y. Hyperspectral imaging technology combined with deep learning for hybrid okra seed identification. Biosyst. Eng. 2021, 212, 46–61.
10. Simonyan, K.; Vedaldi, A.; Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv 2013, arXiv:1312.6034.
11. Su, Z.; Zhang, C.; Yan, T.; Zhu, J.; Zeng, Y.; Lu, X.; Fan, L. Application of hyperspectral imaging for maturity and soluble solids content determination of strawberry with deep learning approaches. Front. Plant Sci. 2021, 12, 736334.
12. Zhu, H.; Chu, B.; Zhang, C.; Liu, F.; Jiang, L.; He, Y. Hyperspectral Imaging for Presymptomatic Detection of Tobacco Disease with Successive Projections Algorithm and Machine-learning Classifiers. Sci. Rep. 2017, 7, 4125.
13. Dejanov, M. Evaluation of β-Carotene Content in Apricots during the Drying Process Using Visual and Near-Infrared Spectroscopy. In Proceedings of the 8th International Conference on Energy Efficiency and Agricultural Engineering (EE&AE), Ruse, Bulgaria, 30 June 2022.
14. Wold, S.; Sjöström, M.; Eriksson, L. PLS-regression: A basic tool of chemometrics. Chemom. Intell. Lab. Syst. 2001, 58, 109–130.
15. Gowen, A.A.; O’Donnell, C.P.; Cullen, P.J.; Downey, G.; Frias, J.M. Hyperspectral imaging—An emerging process analytical tool for food quality and safety control. Trends Food Sci. Technol. 2007, 18, 590–598.
16. Huang, H.; Yu, H.; Xu, H.; Ying, Y. Near infrared spectroscopy for on/in-line monitoring of quality in foods and beverages: A review. J. Food Eng. 2008, 87, 303–313.
17. Mienye, I.D.; Swart, T.G. Deep Autoencoder Neural Networks: A Comprehensive Review and New Perspectives. Arch. Comput. Methods Eng. 2025, 32, 1–28.
18. Geoffrey, E.H.; Simon, O.; Yee-Whye, T. A Fast Learning Algorithm for Deep Belief Nets. Neural Comput. 2006, 18, 1527–1554.
19. Kawano, Y.; Yanai, K. Food Image Recognition with Deep Convolutional Features. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, Seattle, WA, USA, 13–17 September 2014; pp. 589–593.
20. Li, Y.; Zhang, H.; Shen, Q. Spectral–spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67.
21. Zhang, L.; Wang, Y.; Li, H. A Review of Convolutional Neural Networks in Computer Vision. Artif. Intell. Rev. 2024, 57, 99.
22. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
23. Zhao, W.; Du, S. Spectral–Spatial Feature Extraction for Hyperspectral Image Classification: A Dimension Reduction and Deep Learning Approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554.
24. Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep Supervised Learning for Hyperspectral Data Classification through Convolutional Neural Networks. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 4959–4962.
25. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281.
| Model | Train R2 | Train RMSE | Train MAE | Test R2 | Test RMSE | Test MAE |
|---|---|---|---|---|---|---|
| PLSR | 0.97 | 36.65 | 10.7 | 0.97 | 17.56 | 9.81 |
| SAE-RF | 0.82 | 32.46 | 23.12 | 0.83 | 30.17 | 24.03 |
| 1D-CNN | 0.7 | 17.57 | 33.78 | 0.75 | 36.65 | 30.58 |
| 2D-CNN | 0.95 | 17.56 | 12.72 | 0.75 | 36.65 | 30.58 |
| 3D-CNN | 0.84 | 17.56 | 22.07 | 0.69 | 36.65 | 36.11 |