A Review of High-Throughput Optical Sensors for Food Detection Based on Machine Learning
Abstract
1. Introduction
2. Development of Intelligent Optical Sensors
2.1. Machine Learning in Optical Sensors
2.1.1. Supervised Learning
- Support Vector Machines (SVMs). SVM is a supervised learning algorithm used for binary classification, multi-class classification, and even outlier detection [32]. The core principle behind SVM is to find a hyperplane that maximizes the margin between data points of different classes. The data points closest to this hyperplane are termed support vectors, and they define the decision boundary. For regression tasks, Support Vector Regression (SVR) is employed, a variant of SVM designed for regression problems; its objective is to find a hyperplane that fits the data points as closely as possible while keeping the error within a specified threshold. For samples that are not linearly separable, kernel functions project the data into a high-dimensional space where linear separation becomes possible [33]. SVR is the natural choice when predicting continuous values and finds its primary application in quantitative analysis, where it can significantly outperform other regression models in predictive accuracy. Cao et al. [34] presented a method that combines the fast Fourier transform (FFT) with SVR for fast spectral demodulation of an optical fiber torsion sensor based on a Sagnac interferometer (SI). Their experiments show that, with the aid of the FFT-SVR algorithm, the full torsion range from −360° to 360° can be predicted with a mean absolute error (MAE) of 3.05° and a coefficient of determination of 0.9995. SVMs are therefore well suited to nonlinear classification problems and have the potential to process diverse types of data in array sensing detection [35]. Although individual outliers near the margin can still shift the decision boundary, SVMs generally require fewer training samples than many alternatives to build a usable model because the boundary depends mainly on the support vectors. They have accordingly been combined with non-destructive techniques in numerous applications, such as rapid classification of various foods [36], disease detection [37], and quantitative analysis of chemical components in food materials [38]. Their main limitations are computational: training is expensive on very large datasets or with complex kernel functions, and performance degrades when the data are noisy or the classes overlap. (A minimal SVR code sketch follows this list.)
- k-Nearest Neighbors (KNN). KNN performs classification by measuring distances between samples in feature space. For a sample to be predicted, the algorithm computes its distance (or similarity) to all training samples, selects the K nearest neighbors, and assigns the class by majority vote among them. The classifier therefore relies heavily on those K nearest neighbors, and the parameter K strongly affects recognition accuracy, so it must be optimized before further analysis [39]. The KNN algorithm offers simplicity and high accuracy, and it has been widely applied to food variety classification, quality assessment of aquatic and meat products, and pesticide residue detection in leafy vegetables. Beyond serving as an effective classifier, KNN is also used in hyperspectral imaging detection, and fusing KNN decisions on E-nose data with machine vision images at the decision layer has achieved higher accuracy in tea grade classification. (A sketch of choosing K by cross-validation follows this list.)
- Artificial Neural Networks (ANNs). Artificial neural networks are biologically inspired computational models widely applied to classification and prediction. Each artificial neuron acts as a basic classifier, generating a decision-related signal from the inputs it receives from neurons in the preceding layer. Information is passed between layers through transfer functions such as the sigmoid, linear, hyperbolic tangent, and logistic functions, and a typical network integrates hundreds of these elementary units [30]. Training an ANN involves feeding the network a dataset with known target outputs and reducing the discrepancy between the predicted and desired results. Backpropagation is the most common technique for minimizing the loss function: it propagates errors backward through the layers so that the weights and biases of the neurons can be adjusted continuously [40]. Key merits of ANNs include their ability to learn and model nonlinear, complex systems, to generalize patterns and relationships from limited sample data, and to accept input variables without assumptions about their distribution. This allows ANNs to generalize nonlinear problems and handle noise or drift more effectively than traditional statistical methods. However, ANNs require substantial datasets to establish a robust model; training a high-performance image classification model such as ResNet typically requires annotated datasets of millions of images (the ImageNet dataset contains approximately 1.4 million images). These datasets must also be of high quality and comprehensive in scope: a model designed to detect fruit ripeness with optical sensors should be trained on spectral data or images of fruit across different varieties, growth stages, lighting conditions, and surface states (clean, dewy, or muddy). ANNs have been integrated with many detection technologies, for example with electronic nose systems for precise quantitative analysis of microbial contamination, with hyperspectral systems for detecting food adulteration and classifying peanuts, and with other non-destructive techniques in food quality applications. (A minimal feed-forward ANN sketch follows this list.)
- Random Forest (RF). Random Forest is an ensemble learning model built on decision trees (DTs). As a widely used classification model, the DT has a hierarchical tree structure: each internal node corresponds to a feature, each branch denotes a decision rule or test criterion, and each leaf node represents a class label [41]. In the RF framework, numerous DTs are trained independently, each on a distinct subset of the training data and a distinct selection of features. For classification tasks, the final prediction is determined by majority voting among the outputs of all individual DTs; for regression tasks, it is obtained by averaging the predictions of the constituent trees. By leveraging the collective power of multiple DTs, RF mitigates the overfitting commonly associated with a single DT while achieving higher accuracy and stronger robustness. For example, in a study on Shigella classification, RF distinguished four Shigella species with 87% accuracy by aggregating the predictions of multiple trees through voting [42]. (A minimal RF voting sketch follows this list.)
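To make the SVR workflow concrete, the following is a minimal sketch in Python with scikit-learn that pairs an FFT feature step with SVR on synthetic interference-like signals. The synthetic data, variable names, and hyperparameters are illustrative assumptions and do not reproduce the sensor or settings of Cao et al. [34].

```python
# Minimal sketch: FFT features + SVR for a continuous target (illustrative only).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(0)

# Synthetic "spectra": sinusoids whose frequency encodes a torsion-like angle.
n_samples, n_points = 400, 256
angles = rng.uniform(-360, 360, n_samples)                  # target values (degrees)
t = np.linspace(0, 1, n_points)
spectra = np.sin(2 * np.pi * (3 + angles / 360)[:, None] * t) \
          + 0.05 * rng.standard_normal((n_samples, n_points))

# FFT magnitude as the feature representation (keep low-frequency bins only).
features = np.abs(np.fft.rfft(spectra, axis=1))[:, :20]

X_train, X_test, y_train, y_test = train_test_split(
    features, angles, test_size=0.25, random_state=0)

model = SVR(kernel="rbf", C=10.0, epsilon=0.5)              # RBF kernel handles nonlinearity
model.fit(X_train, y_train)
pred = model.predict(X_test)

print(f"MAE = {mean_absolute_error(y_test, pred):.2f} deg, "
      f"R^2 = {r2_score(y_test, pred):.3f}")
```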
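The next sketch, referenced in the KNN item above, shows how the parameter K can be tuned by cross-validation with scikit-learn; the synthetic sensor-array data and the candidate K values are illustrative assumptions.

```python
# Minimal sketch: choosing K for a KNN classifier by cross-validation (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for optical sensor-array readings (3 food classes, 12 channels).
X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)

# Distance-based methods are scale-sensitive, so standardize before KNN.
scores = {}
for k in (1, 3, 5, 7, 9, 11):
    knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    scores[k] = cross_val_score(knn, X, y, cv=5).mean()

best_k = max(scores, key=scores.get)
print("cross-validated accuracy per K:", {k: round(v, 3) for k, v in scores.items()})
print("selected K =", best_k)
```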
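As referenced in the ANN item above, a minimal feed-forward network trained by backpropagation can be sketched with scikit-learn's MLPClassifier; the synthetic features, layer sizes, and transfer function are assumptions for demonstration rather than settings from the cited studies.

```python
# Minimal sketch: a small feed-forward ANN trained by backpropagation (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for spectral features with a nonlinear class structure.
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=2, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

ann = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16),   # two hidden layers
                  activation="tanh",             # hyperbolic-tangent transfer function
                  solver="adam",                 # gradient-based weight updates
                  max_iter=1000, random_state=1),
)
ann.fit(X_train, y_train)
print("test accuracy:", round(ann.score(X_test, y_test), 3))
```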
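Finally, the sketch below, referenced in the Random Forest item above, illustrates majority voting over an ensemble of trees on synthetic four-class data; the data and hyperparameters are assumptions for demonstration only and are unrelated to the Shigella study [42].

```python
# Minimal sketch: Random Forest majority voting over many decision trees (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic four-class data as a stand-in for, e.g., spectra of four bacterial species.
X, y = make_classification(n_samples=500, n_features=30, n_informative=12,
                           n_classes=4, n_clusters_per_class=1, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

rf = RandomForestClassifier(n_estimators=200,      # number of trees voting
                            max_features="sqrt",   # random feature subset per split
                            oob_score=True,        # out-of-bag estimate of accuracy
                            random_state=2)
rf.fit(X_train, y_train)
print("out-of-bag accuracy:", round(rf.oob_score_, 3))
print("test accuracy:", round(rf.score(X_test, y_test), 3))
```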
2.1.2. Unsupervised Learning
- Principal Component Analysis (PCA). PCA is a widely used unsupervised algorithm in optical sensing, frequently employed to reduce the dimensionality of high-dimensional data by constructing a smaller set of new variables that capture the essential characteristics of the original data [45]. Each principal component is a linear combination of the original variables and captures a distinct pattern of variation in the data [46]. In practical applications, 2D or 3D PCA score plots of array sensor data are used to visualize relationships among similar analytes, and the reduced-dimensional features extracted by PCA greatly benefit subsequent supervised algorithms [47,48]. Relatedly, partial least squares discriminant analysis (PLS-DA), a supervised counterpart, uses predictor variables (X) and categorical response variables (Y) to construct latent variables that best separate the categories [47]. (A minimal PCA sketch follows this list.)
- Convolutional Neural Networks (CNNs). The convolutional neural network is the machine learning algorithm most commonly applied to transformed sensor signals. Digital signals after transformation may contain complex patterns, and CNNs can automatically extract features such as waveform variations and frequency distribution patterns from them. By training on a large number of transformed signal samples, CNNs learn the associations between signal patterns and food quality or composition across different food inspection scenarios [49]. In recent years, CNNs have become one of the most dynamic models in machine learning; they autonomously learn deep features from the input data, which can then be used for subsequent classification or regression tasks [50]. A CNN consists of multiple stages built from three core components: convolutional layers, pooling layers, and fully connected layers [51]. The convolutional layers form the core of the network, generating arrays of features known as feature maps by convolving filter kernels over the input and applying nonlinear transfer functions. Pooling layers typically follow the convolutional layers and reduce the dimensionality of the feature maps, which decreases the number of parameters in the network and shortens computation time. The feature maps produced by the convolutional and pooling stages are finally flattened into a one-dimensional array and passed to a fully connected layer, which uses them to assign class labels, functioning much like a traditional neural network [49,52]. CNNs also have limitations: they typically require large datasets to achieve high classification accuracy, and acquiring such large-scale databases often presents significant challenges across disciplines [51]. (A minimal 1-D CNN sketch follows this list.)
- Autoencoders (AEs). Autoencoders are unsupervised neural networks trained via backpropagation. As powerful feature extraction tools, they map raw input data into feature vectors and reconstruct the original input from those vectors [53]. The encoder compresses the input into a latent representation, which the decoder then uses to reconstruct the input; in deep AEs, the lower hidden layers encode the data while the higher layers decode it, with error backpropagation driving training [30]. Owing to their representational power, AEs can be stacked to form deep networks, and several variants exist, including denoising, sparse, variational, and contractive AEs [54]. They are typically applied to high-dimensional data, where the dimensionality reduction they perform can be viewed as a generalization of PCA. AEs are more flexible than PCA, however: they allow both linear and nonlinear representations when encoding a dataset, whereas PCA is restricted to linear transformations [55]. In non-destructive testing, AEs have been employed for feature extraction to enhance the predictive accuracy of calibration models. Ni et al. [56] developed a variably weighted stacked autoencoder to extract discriminative features from hyperspectral images (HSIs) for the online sorting of film residue on cotton. Similarly, deep autoencoders used as a feature extraction tool with HSIs have been shown to match or even outperform PCA in determining the chemical composition of dried black goji berries [55]. Furthermore, Huang et al. [57] proposed a computer vision system for estimating soluble solids content (SSC) in Fuji apples at different maturity stages, employing stacked autoencoders to extract pixel-level color features. Their results indicate that the stacked autoencoder (SAE)-based backpropagation neural network (BPNN) model using pixel-level color features achieves higher recognition rates than BPNN models based solely on pure color features. (A minimal autoencoder sketch follows this list.)
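As referenced in the PCA item above, the following minimal scikit-learn sketch reduces synthetic 16-channel sensor-array responses to two principal components suitable for a 2D score plot; the data and the number of retained components are illustrative assumptions.

```python
# Minimal sketch: PCA dimensionality reduction of sensor-array data (illustrative only).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic 16-channel array responses for three analyte groups.
X, y = make_blobs(n_samples=150, n_features=16, centers=3, random_state=3)

X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)                 # keep the first two principal components
scores = pca.fit_transform(X_scaled)      # 2-D coordinates for a score plot

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print("score matrix shape:", scores.shape)   # (150, 2), ready for 2-D visualization
```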
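The sketch below, referenced in the CNN item above, defines a small one-dimensional CNN in PyTorch with the convolution, pooling, and fully connected stages described in the text; the architecture, synthetic signals, and toy training loop are assumptions made for illustration, not a published model.

```python
# Minimal sketch: a 1-D CNN for classifying transformed sensor signals (illustrative only).
import torch
from torch import nn

class SpectraCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3),   # convolution: local feature maps
            nn.ReLU(),
            nn.MaxPool1d(2),                             # pooling: reduce dimensionality
            nn.Conv1d(8, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                                # flatten feature maps to a vector
            nn.Linear(16 * 64, n_classes),               # fully connected output layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Synthetic batch: 32 signals of length 256 with random class labels.
x = torch.randn(32, 1, 256)
y = torch.randint(0, 3, (32,))

model = SpectraCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(5):                        # a few training steps on the toy batch
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print("final toy-batch loss:", round(loss.item(), 4))
```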
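As referenced in the autoencoder item above, the following PyTorch sketch shows a dense encoder-decoder trained to minimize reconstruction error, with the latent vector serving as the extracted feature; the layer sizes and the synthetic hyperspectral-like inputs are illustrative assumptions.

```python
# Minimal sketch: a dense autoencoder for nonlinear feature extraction (illustrative only).
import torch
from torch import nn

class AutoEncoder(nn.Module):
    def __init__(self, n_inputs: int = 200, n_latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(               # compress input to a latent vector
            nn.Linear(n_inputs, 64), nn.ReLU(),
            nn.Linear(64, n_latent),
        )
        self.decoder = nn.Sequential(               # reconstruct input from latent vector
            nn.Linear(n_latent, 64), nn.ReLU(),
            nn.Linear(64, n_inputs),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Synthetic stand-in for 200-band hyperspectral pixels.
x = torch.randn(128, 200)
model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(20):                                  # minimize reconstruction error
    optimizer.zero_grad()
    recon, _ = model(x)
    loss = loss_fn(recon, x)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    _, latent = model(x)
print("latent feature shape:", tuple(latent.shape))  # (128, 8): compressed representation
```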
| ML Method | Advantages | Learning Type | Characteristics | Application Example | Ref. |
|---|---|---|---|---|---|
| Support Vector Machine | Good at handling small samples and high-dimensional data; precise classification; strong robustness. | Supervised learning | Classification and regression | Identifying honey from different origins using microscopic images of food structure. | [58] |
| Convolutional Neural Network | Automatically and efficiently extracts image features; excels in image detection tasks. | Supervised learning | Automatic extraction of hierarchical data features | Detecting defects on food surfaces from appearance images. | [59] |
| Random Forest | Tolerant of multi-feature data; less prone to overfitting; strong interpretability. | Supervised learning | Classification and regression | Determining the category of food processing technology from various optical characteristics. | [58] |
| k-Nearest Neighbors | No training required; no assumptions about data distribution; applicable to a wide range of data types. | Supervised learning | High computational complexity at prediction time | Distinguishing food categories from optical sensor data and determining freshness or ripeness from optical characteristics. | [60] |
| Autoencoder | Strong dimensionality reduction; can remove noise and extract useful information. | Unsupervised learning | Nonlinear mapping; can be combined with other networks to improve performance | Extracting key features from food optical sensor data for classification or detection; identifying anomalies or contamination through reconstruction errors. | [61] |
| Principal Component Analysis | Effective dimensionality reduction; eliminates redundant information; simplifies data for subsequent analysis. | Unsupervised learning | Dimensionality reduction and feature extraction | Processing hyperspectral food detection data and extracting key components for quality judgment. | [62] |
| Artificial Neural Networks | Can learn complex nonlinear relationships and adapt to various feature fusion analyses. | Supervised learning | Handling nonlinear relationships | Determining food freshness by analyzing appearance, spectra, and other food characteristics. | [63] |
2.2. Principles and Detection Mechanisms of Intelligent Optical Sensors
2.2.1. Sensors with Strong Integration with Machine Learning
- Fluorescence Sensors. Fluorescence sensors detect substances based on the fluorescence emitted after excitation. Their core advantages are high sensitivity and multi-wavelength responsiveness, making them particularly suitable for detecting low-concentration contaminants in food. Fluorescence microscopy or imaging systems capture images of fluorescently labeled food samples; when detecting microorganisms, fluorescent dyes are typically used as labels, and the labeled microbes emit fluorescence under illumination. By analyzing fluorescence intensity and distribution, the quantity, location, and activity of microorganisms, cells, and other targets can be determined in a high-throughput manner [68]. For example, multispectral fluorescence imaging was employed to detect defective cherry tomatoes. Fluorescence excitation-emission matrices were measured for defective areas, intact surfaces, and stem regions to determine the optimal excitation and emission wavelengths. Two-way ANOVA indicated that 410 nm was the optimal excitation wavelength for detecting defective areas, and principal component analysis (PCA) of the emission spectra at this excitation wavelength identified 688 nm and 506 nm as the primary emission wavelengths for defect detection. Combining fluorescence images at these emission bands demonstrated the feasibility of detecting defective cherry tomatoes with an accuracy exceeding 98% [69]. The integration of machine learning with fluorescence sensors lies chiefly in the deep analysis of complex fluorescence signals. Metabolites or specific markers of foodborne pathogens produce characteristic fluorescence spectra, but in practical detection background fluorescence from the food matrix interferes with the target signals, producing complex, overlapping spectra. Convolutional neural networks (CNNs) automatically extract subtle spectral features through multi-layer convolution operations and can thereby distinguish target pathogens from background interference; Zielinski et al. achieved 97.24% classification accuracy using a deep learning CNN to classify bacterial colony images [70]. (A sketch of emission-wavelength selection via PCA loadings follows this list.)
- Colorimetric Sensors. Colorimetric sensors enable qualitative or quantitative analysis by detecting the color changes produced when target substances react with colorimetric reagents. Their core advantages are visual readout and portability, making them suitable for rapid on-site screening, and machine learning further improves their quantitative accuracy and adaptability to different scenarios. RGB image analysis combined with SVM or RF enables quantitative heavy metal detection: heavy metal ions develop characteristic colors after reacting with specific colorimetric reagents, and RGB images of the reaction system captured with smartphones or cameras convert this color information into digital signals. Algorithms such as support vector machines (SVMs) or random forests (RFs) then learn the mapping between RGB values and heavy metal concentration, eliminating the subjective error of manual visual color comparison. For example, in Cd2+ detection in rice, an RF model analyzing RGB images after color development achieved quantitative errors below 3% across the 0.005–0.5 mg/kg concentration range, meeting national standard requirements. Smartphone apps can also embed trained machine learning models to fully automate the "sampling-coloration-photography-detection" workflow, and smartphones have been integrated with rapid colorimetric sensors for heavy metal ions. Dang et al. [71] combined a biogenic gold nanoparticle (AuNP) sensor with a color-reference lightbox and machine learning to detect Fe3+ ions in water. The lightbox and standardized reference color values improved image quality and significantly enhanced the performance of the machine learning algorithm, yielding an approximately 6.7% improvement in the evaluation metric over the non-standardized method (R2 = 0.8207). (A sketch of an RGB-feature regression pipeline follows this list.)
- Surface-Enhanced Raman Scattering (SERS) Sensors. SERS sensors amplify the Raman scattering signals of target molecules via nano-enhanced substrates. Their fingerprinting capability and single-molecule sensitivity enable precise identification of complex food components, but the signals are susceptible to substrate interference and peak overlap, which makes machine learning crucial for overcoming these challenges. The integration of machine learning with SERS focuses on signal interpretation and multi-target discrimination. PCA-LDA can distinguish similar toxins: the SERS peaks of structurally similar toxins differ only slightly, so traditional methods struggle to separate them. Principal component analysis (PCA) first reduces the dimensionality of the high-dimensional SERS data while retaining the key features, and linear discriminant analysis (LDA) then classifies the samples in the reduced space. For instance, in the simultaneous detection of aflatoxins B1/B2 in peanut oil, a PCA-LDA model achieved 99.2% discrimination accuracy and remained stable even at concentrations as low as 0.1 μg/kg. Deep learning can deconvolute overlapping peaks: when multiple pesticide residues coexist in food, their SERS peaks often overlap into "mixed peaks", causing large errors in traditional peak-area integration. Deep learning models learn the underlying structure of overlapping peaks through multi-layer nonlinear transformations and thereby "deconvolve" them; for instance, in the simultaneous detection of five pesticides including imidacloprid and chlorpyrifos in fruits and vegetables, a deep belief network (DBN) model resolved the mixed peaks with 96% accuracy and detection limits below 0.05 mg/kg, meeting multi-residue testing requirements. (A PCA-LDA pipeline sketch follows this list.)
- Others. Sensors based on absorption/reflection spectroscopy (such as near-infrared and hyperspectral imaging) obtain chemical "fingerprint" information by analyzing a substance's absorption or reflection of light at specific wavelengths. This family of techniques is maturely applied to non-destructive testing of internal quality and safety attributes, for instance predicting moisture, fat content, and freshness in meat, and monitoring quality changes during the storage of items such as eggs [72]. Sensors based on light scattering (e.g., Raman scattering and optical coherence tomography) obtain information about a substance's physical structure, particle distribution, or internal defects by detecting scattered light: Raman scattering probes molecular composition through inelastic wavelength shifts, whereas optical coherence tomography relies on elastically backscattered light that retains the incident wavelength. Machine learning plays a pivotal role in improving the signal-to-noise ratio, feature extraction, and classification of such data; for instance, combining surface-enhanced Raman scattering with models such as support vector machines can improve the sensitivity and specificity of detecting pesticide residues and illicit additives [73].
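As referenced in the fluorescence item above, the sketch below illustrates how PCA loadings on emission spectra can flag informative emission wavelengths. The synthetic "defect" and "intact" spectra, the band positions, and the use of the first principal component are assumptions for illustration and do not reproduce the analysis of [69].

```python
# Minimal sketch: PCA loadings used to flag informative emission wavelengths (illustrative only).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)

# Synthetic emission spectra (500-750 nm) for "defect" and "intact" regions.
wavelengths = np.arange(500, 751)

def band(center, width, height):
    """Gaussian-shaped emission band on the wavelength grid."""
    return height * np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

intact = band(506, 15, 1.0) + band(688, 12, 0.4)
defect = band(506, 15, 0.5) + band(688, 12, 1.0)
spectra = np.vstack(
    [intact + 0.05 * rng.standard_normal(wavelengths.size) for _ in range(40)]
    + [defect + 0.05 * rng.standard_normal(wavelengths.size) for _ in range(40)])

pca = PCA(n_components=2)
pca.fit(spectra - spectra.mean(axis=0))

# Wavelengths with the largest absolute PC1 loadings carry most of the variation.
top = wavelengths[np.argsort(np.abs(pca.components_[0]))[::-1][:5]]
print("candidate emission wavelengths (nm):", sorted(top.tolist()))
```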
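The sketch below, referenced in the colorimetric item above, shows a generic RGB-feature extraction step followed by Random Forest regression of analyte concentration. The simulated images, the assumed color-to-concentration relationship, and the model settings are illustrative assumptions, not the Cd2+ assay described above.

```python
# Minimal sketch: mean-RGB features from colorimetric images + RF regression (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(5)

def mean_rgb(image: np.ndarray) -> np.ndarray:
    """Average each color channel over the reaction-zone pixels."""
    return image.reshape(-1, 3).mean(axis=0)

# Synthetic "photos" (32x32 RGB) whose color shifts with analyte concentration.
concentrations = rng.uniform(0.005, 0.5, 200)                  # e.g. mg/kg (assumed)
images = [np.clip(np.stack([np.full((32, 32), 200 - 250 * c),  # R decreases with c
                            np.full((32, 32), 120.0),          # G roughly constant
                            np.full((32, 32), 60 + 150 * c)],  # B increases with c
                           axis=-1) + rng.normal(0, 5, (32, 32, 3)), 0, 255)
          for c in concentrations]

X = np.array([mean_rgb(img) for img in images])
X_train, X_test, y_train, y_test = train_test_split(X, concentrations, random_state=5)

rf = RandomForestRegressor(n_estimators=200, random_state=5)
rf.fit(X_train, y_train)
print("MAE (same units as concentration):",
      round(mean_absolute_error(y_test, rf.predict(X_test)), 4))
```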
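As referenced in the SERS item above, the following sketch chains PCA dimensionality reduction with LDA classification in a scikit-learn pipeline; the synthetic spectra and the number of retained components are illustrative assumptions rather than the settings of the aflatoxin study.

```python
# Minimal sketch: PCA for dimensionality reduction followed by LDA classification (illustrative only).
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for SERS spectra of two closely related toxins (600 "wavenumbers").
X, y = make_classification(n_samples=240, n_features=600, n_informative=15,
                           n_redundant=100, class_sep=0.8, random_state=6)

pca_lda = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),               # compress spectra to 10 principal components
    LinearDiscriminantAnalysis(),       # classify in the reduced space
)
scores = cross_val_score(pca_lda, X, y, cv=5)
print("cross-validated accuracy:", round(scores.mean(), 3))
```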
2.2.2. Luminescence Mechanisms
2.3. Intelligent Detection Components
2.3.1. Quantum Dots
2.3.2. Metal Nanoparticles
2.3.3. Upconversion Nanoparticles (UCNPs)
2.3.4. MOFs
2.3.5. Other Materials
3. Target Substances for Detection
3.1. Pesticide Residues


3.2. Heavy Metals
3.3. Microorganisms and Foodborne Pathogens
3.4. Other Applications
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Yang, J.; Lin, Y. Study on Evolution of Food Safety Status and Supervision Policy-A System Based on Quantity, Quality, and Development Safety. Sustainability 2019, 11, 6656. [Google Scholar] [CrossRef]
- Pu, Z.K.; Wen, H.Q.; Wang, J.; Liu, J.H.; Wang, S.F.; Qiao, S. A hole-dominated highly sensitive position-sensitive detector based on a MAPbI3 heterojunction. Surf. Interfaces 2025, 59, 105966. [Google Scholar] [CrossRef]
- Cavallaro, A.; Santonocito, R.; Puglisi, R.; Pappalardo, A.; La Spada, F.; Parlascino, R.; Riolo, M.; Cacciola, S.O.; Tuccitto, N.; Sfrazzetto, G.T. Fast detection of penicillium rot and the conservation status of packaged citrus fruit using an optical array sensor. Chem. Commun. 2024, 60, 13702–13705. [Google Scholar] [CrossRef] [PubMed]
- Natarajan, D.; Athinarayanan, B.G.; Annaman, L.S. Optical sensing system for meat freshness analysis. Opt. Eng. 2024, 63, 024108. [Google Scholar] [CrossRef]
- Zhou, S.; Kuang, H.; Liu, L.; Xu, C. Research Progress on Rapid Detection Standard System and Technology for Food Quality and Security. J. Food Sci. Biotechnol. 2024, 43, 1–8. [Google Scholar] [CrossRef]
- Umapathi, R.; Park, B.; Sonwal, S.; Rani, G.M.; Cho, Y.J.; Huh, Y.S. Advances in optical-sensing strategies for the on-site detection of pesticides in agricultural foods. Trends Food Sci. Technol. 2022, 119, 69–89. [Google Scholar] [CrossRef]
- Zhou, J.R.; Qi, Q.Q.; Wang, C.; Qian, Y.F.; Liu, G.M.; Wang, Y.B.; Fu, L.L. Surface plasmon resonance (SPR) biosensors for food allergen detection in food matrices. Biosens. Bioelectron. 2019, 142, 15. [Google Scholar] [CrossRef]
- Liu, J.W.; Wu, D.; Wu, Y.N.; Shi, Y.H.; Liu, W.Q.; Sun, Z.W.; Li, G.L. Recent advances in optical sensors and probes for the detection of freshness in food samples: A comprehensive review (2020-2023). TrAC Trends Anal. Chem. 2024, 177, 25. [Google Scholar] [CrossRef]
- Lin, Y.D.; Ma, J.; Wang, Q.J.; Sun, D.W. Applications of machine learning techniques for enhancing nondestructive food quality and safety detection. Crit. Rev. Food Sci. Nutr. 2023, 63, 1649–1669. [Google Scholar] [CrossRef]
- Goyal, R.; Singha, P.; Singh, S.K. Spectroscopic food adulteration detection using machine learning: Current challenges and future prospects. Trends Food Sci. Technol. 2024, 146, 19. [Google Scholar] [CrossRef]
- Lu, T.; Xinjie, S.; Li, W.U.; Juan, S.U.N.; Yao, Z.; Yuqian, S.H.I.; Yuanfeng, W.U. Progress in Optical and Electrochemical Sensors for Detection of Quinolone Antibiotics in Food. Sci. Technol. Food Ind. 2023, 44, 465–474. [Google Scholar] [CrossRef]
- Adir, O.; Poley, M.; Chen, G.; Froim, S.; Krinsky, N.; Shklover, J.; Shainsky-Roitman, J.; Lammers, T.; Schroeder, A. Integrating Artificial Intelligence and Nanotechnology for Precision Cancer Medicine. Adv. Mater. 2020, 32, 1901989. [Google Scholar] [CrossRef] [PubMed]
- Binson, V.A.; Thomas, S.; Subramoniam, M.; Arun, J.; Naveen, S.; Madhu, S. A Review of Machine Learning Algorithms for Biomedical Applications. Ann. Biomed. Eng. 2024, 52, 1159–1183. [Google Scholar] [CrossRef]
- Oymak, S.; Li, M.C.; Soltanolkotabi, M. Generalization Guarantees for Neural Architecture Search with Train-Validation Split. In Proceedings of the International Conference on Machine Learning (ICML), Online, 18–24 July 2021. [Google Scholar]
- Du, J.H.; Patil, P.; Roeder, K.; Kuchibhotla, A.K. Extrapolated Cross-Validation for Randomized Ensembles. J. Comput. Graph. Stat. 2024, 33, 1061–1072. [Google Scholar] [CrossRef] [PubMed]
- Pavlou, M.; Omar, R.Z.; Ambler, G. Penalized Regression Methods With Modified Cross-Validation and Bootstrap Tuning Produce Better Prediction Models. Biom. J. 2024, 66, e202300245. [Google Scholar] [CrossRef] [PubMed]
- Nturambirwe, J.F.I.; Opara, U.L. Machine learning applications to non-destructive defect detection in horticultural products. Biosyst. Eng. 2020, 189, 60–83. [Google Scholar] [CrossRef]
- Zhang, K.; Wang, J.; Liu, T.; Luo, Y.; Loh, X.J.; Chen, X. Machine Learning-Reinforced Noninvasive Biosensors for Healthcare. Adv. Healthc. Mater. 2021, 10, 2100734. [Google Scholar] [CrossRef]
- Chandrasekaran, I.; Panigrahi, S.S.; Ravikanth, L.; Singh, C.B. Potential of Near-Infrared (NIR) Spectroscopy and Hyperspectral Imaging for Quality and Safety Assessment of Fruits: An Overview. Food Anal. Methods 2019, 12, 2438–2458. [Google Scholar] [CrossRef]
- Yang, C.; Guo, Z.M.; Barbin, D.F.; Dai, Z.Q.; Watson, N.; Povey, M.; Zou, X.B. Hyperspectral Imaging and Deep Learning for Quality and Safety Inspection of Fruits and Vegetables: A Review. J. Agric. Food Chem. 2025, 73, 10019–10035. [Google Scholar] [CrossRef]
- Sun, D.W.; Pu, H.B.; Yu, J.X. Applications of hyperspectral imaging technology in the food industry. Nat. Rev. Electr. Eng. 2024, 1, 251–263. [Google Scholar] [CrossRef]
- Zhou, S.; Deng, X.; Li, C.; Liu, Y.; Jiang, H. Recognition-Oriented Image Compressive Sensing with Deep Learning. IEEE Trans. Multimed. 2023, 25, 2022–2032. [Google Scholar] [CrossRef]
- Amado, T.M.; Burman, M.R.; Chicote, R.F.; Espenida, S.M.C.; Masangcay, H.L.; Ventura, C.H.; Tolentino, L.K.S.; Padilla, M.V.C.; Madrigal, G.A.M.; Enriquez, L.A.C.; et al. Development of Predictive Models using Machine Learning Algorithms for Food Adulterants Bacteria Detection. In Proceedings of the 11th IEEE International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), Laoag, Philippines, 29 November–1 December 2019. [Google Scholar]
- Ogbuju, E.; Ihinkalu, O.; Oladipo, F. An Interface Design Methodology for Serving Machine Learning Models. In Proceedings of the 4th African Conference on Human Computer Interaction (AfriCHI)—Beyond Limits, East London, South Africa, 27 November–1 December 2023; pp. 12–14. [Google Scholar]
- Zhu, X.; Wan, Z.; Tsang, D.C.W.; He, M.; Hou, D.; Su, Z.; Shang, J. Machine learning for the selection of carbon-based materials for tetracycline and sulfamethoxazole adsorption. Chem. Eng. J. 2021, 406, 126782. [Google Scholar] [CrossRef]
- He, Q.; Huang, H.; Wang, Y. Detection technologies, and machine learning in food: Recent advances and future trends. Food Biosci. 2024, 62, 105558. [Google Scholar] [CrossRef]
- Lin, Y.; Cheng, J.H.; Ma, J.; Zhou, C.; Sun, D.W. Elevating nanomaterial optical sensor arrays through the integration of advanced machine learning techniques for enhancing visual inspection of food quality and safety. Crit. Rev. Food Sci. Nutr. 2025, 65, 3628–3649. [Google Scholar] [CrossRef] [PubMed]
- Cavdaroglu, C.; Altug, N.; Serpen, A.; Öztop, M.H.; Ozen, B. Comparative performance of artificial neural networks and support vector Machines in detecting adulteration of apple juice concentrate using spectroscopy and time domain NMR. Food Res. Int. 2025, 201, 115616. [Google Scholar] [CrossRef] [PubMed]
- Zhang, X.H.; Sun, Y.J.; Sun, Y.X. Application of Intelligent Taste Analysis Based on Random Forest Algorithm in Food Quality Inspection. Comput. Intell. Neurosci. 2022, 2022, 6901184. [Google Scholar] [CrossRef] [PubMed]
- Merghadi, A.; Yunus, A.P.; Dou, J.; Whiteley, J.; Binh, T.; Dieu Tien, B.; Avtar, R.; Abderrahmane, B. Machine learning methods for landslide susceptibility studies: A comparative overview of algorithm performance. Earth-Sci. Rev. 2020, 207, 103225. [Google Scholar] [CrossRef]
- Gu, T.; Duan, P.; Wang, M.; Li, J.; Zhang, Y. Effects of non-landslide sampling strategies on machine learning models in landslide susceptibility mapping. Sci. Rep. 2024, 14, 7201. [Google Scholar] [CrossRef]
- Li, S.; Wang, W.; Li, L.; Zhou, J. Survey of Support Vector Machine in the Processing of Remote Sensing Image. In Proceedings of the International Forum on Mechanical and Material Engineering (IFMME 2013), Guangzhou, China, 13–14 June 2013; p. 1567. [Google Scholar]
- Deng, X.; Cao, S.; Horn, A.L. Emerging Applications of Machine Learning in Food Safety. Annu. Rev. Food Sci. Technol. 2021, 12, 513–538. [Google Scholar] [CrossRef]
- Cao, J.Q.; Guo, Y.Y.; Gao, W.; Wang, X.; Lou, S.Q.; Sheng, X.Z. A New Method for Fast Spectral Demodulation of Wide-Measurement Range Optical Fiber Torsion Sensor. IEEE Sens. J. 2025, 25, 19258–19267. [Google Scholar] [CrossRef]
- Xu, Z.; Wang, Z.; Liu, M.; Yan, B.; Ren, X.; Gao, Z. Machine learning assisted dual-channel carbon quantum dots-based fluorescence sensor array for detection of tetracyclines. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2020, 232, 118147. [Google Scholar] [CrossRef]
- Khojastehnazhand, M.; Ramezani, H. Machine vision system for classification of bulk raisins using texture features. J. Food Eng. 2020, 271, 109864. [Google Scholar] [CrossRef]
- Nagasubramanian, K.; Jones, S.; Singh, A.K.; Sarkar, S.; Singh, A.; Ganapathysubramanian, B. Plant disease identification using explainable 3D deep learning on hyperspectral images. Plant Methods 2019, 15, 98. [Google Scholar] [CrossRef] [PubMed]
- Wei, X.; He, J.; Zheng, S.; Ye, D. Modeling for SSC and firmness detection of persimmon based on NIR hyperspectral imaging by sample partitioning and variables selection. Infrared Phys. Technol. 2020, 105, 103099. [Google Scholar] [CrossRef]
- Mirzaee-Ghaleh, E.; Taheri-Garavand, A.; Ayari, F.; Lozano, J. Identification of Fresh-Chilled and Frozen-Thawed Chicken Meat and Estimation of their Shelf Life Using an E-Nose Machine Coupled Fuzzy KNN. Food Anal. Methods 2020, 13, 678–689. [Google Scholar] [CrossRef]
- Han, Z.; Gao, J. Pixel-level aflatoxin detecting based on deep learning and hyperspectral imaging. Comput. Electron. Agric. 2019, 164, 104888. [Google Scholar] [CrossRef]
- Cui, F.; Yue, Y.; Zhang, Y.; Zhang, Z.; Zhou, H.S. Advancing Biosensors with Machine Learning. ACS Sens. 2020, 5, 3346–3364. [Google Scholar] [CrossRef]
- Tang, J.-W.; Lyu, J.-W.; Lai, J.-X.; Zhang, X.-D.; Du, Y.-G.; Zhang, X.-Q.; Zhang, Y.-D.; Gu, B.; Zhang, X.; Gu, B.; et al. Determination of Shigella spp. via label-free SERS spectra coupled with deep learning. Microchem. J. 2023, 189, 108539. [Google Scholar] [CrossRef]
- Kotoku, J.i. An Introduction to Machine Learning. Jpn. J. Med. Phys. 2016, 36, 18–22. [Google Scholar] [CrossRef]
- Zhang, G.; Tang, L.; Liu, Z.; Zhou, L.; Liu, Y.; Jiang, Z. Machine-learning-based damage identification methods with features derived from moving principal component analysis. Mech. Adv. Mater. Struct. 2020, 27, 1789–1802. [Google Scholar] [CrossRef]
- Lin, Y.; Ma, J.; Sun, D.-W.; Cheng, J.-H.; Wang, Q. A pH-Responsive colourimetric sensor array based on machine learning for real-time monitoring of beef freshness. Food Control 2023, 150, 109729. [Google Scholar] [CrossRef]
- Chen, Z.; Lin, H.; Wang, F.; Adade, S.Y.-S.S.; Peng, T.; Chen, Q. Discrimination of toxigenic and non-toxigenic Aspergillus flavus in wheat based on nanocomposite colorimetric sensor array. Food Chem. 2024, 430, 137048. [Google Scholar] [CrossRef]
- Huang, L.; Zhang, Z.; Xing, H.; Sui, X.; Yang, J.; Wang, Y. Quantitative and qualitative analyses of metal ions in food and water by using a multicolor sensor array and chemometrics. Anal. Methods 2023, 15, 906–915. [Google Scholar] [CrossRef]
- Lu, Z.; Lu, N.; Xiao, Y.; Zhang, Y.; Tang, Z.; Zhang, M. Metal-Nanoparticle-Supported Nanozyme-Based Colorimetric Sensor Array for Precise Identification of Proteins and Oral Bacteria. ACS Appl. Mater. Interfaces 2022, 14, 11156–11166. [Google Scholar] [CrossRef]
- Liu, Y.; Pu, H.; Sun, D.-W. Efficient extraction of deep image features using convolutional neural network (CNN) for applications in detecting and analysing complex food matrices. Trends Food Sci. Technol. 2021, 113, 193–204. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Lei, Z.; Chu, Z.; Fei, L.; Zhengjun, Q.; Yong, H. Application of deep learning in food: A review. Compr. Rev. Food Sci. Food Saf. 2019, 18, 1793–1811. [Google Scholar] [CrossRef] [PubMed]
- Deng, Z.; Wang, T.; Zheng, Y.; Zhang, W.; Yun, Y.-H. Deep learning in food authenticity: Recent advances and future trends. Trends Food Sci. Technol. 2024, 144, 104344. [Google Scholar] [CrossRef]
- Zhou, X.; Jun, S.; Yan, T.; Bing, L.; Hang, Y.; Quansheng, C. Hyperspectral technique combined with deep learning algorithm for detection of compound heavy metals in lettuce. Food Chem. 2020, 321, 126503. [Google Scholar] [CrossRef]
- Dargan, S.; Kumar, M.; Ayyagari, M.R.; Kumar, G. A Survey of Deep Learning and Its Applications: A New Paradigm to Machine Learning. Arch. Comput. Methods Eng. 2020, 27, 1071–1092. [Google Scholar] [CrossRef]
- Zhang, C.; Wu, W.; Zhou, L.; Cheng, H.; Ye, X.; He, Y. Developing deep learning based regression approaches for determination of chemical compositions in dry black goji berries (Lycium ruthenicum Murr.) using near-infrared hyperspectral imaging. Food Chem. 2020, 319, 126536. [Google Scholar] [CrossRef]
- Ni, C.; Li, Z.; Zhang, X.; Sun, X.; Huang, Y.; Zhao, L.; Zhu, T.; Wang, D. Online Sorting of the Film on Cotton Based on Deep Learning and Hyperspectral Imaging. IEEE Access 2020, 8, 93028–93038. [Google Scholar] [CrossRef]
- Huang, Y.; Wang, J.; Li, N.; Yang, J.; Ren, Z. Predicting soluble solids content in “Fuji” apples of different ripening stages based on multiple information fusion. Pattern Recognit. Lett. 2021, 151, 76–84. [Google Scholar] [CrossRef]
- Qin, J.W.; Hong, J.; Cho, H.; Van Kessel, J.A.S.; Baek, I.; Chao, K.; Kim, M.S. A MULTIMODAL OPTICAL SENSING SYSTEM FOR AUTOMATED AND INTELLIGENT FOOD SAFETY INSPECTION. J. Asabe 2023, 66, 839–849. [Google Scholar] [CrossRef]
- Wang, A.; Zhai, L.J. GA-Net: Global-aware Attention-guided CNN for Food Image Classification. In Proceedings of the IEEE 5th Annual World AI IoT Congress (AIIoT), Seattle, WA, USA, 29–31 May 2024; pp. 408–413. [Google Scholar]
- Mousavizadegan, M.; Shalileh, F.; Mostajabodavati, S.; Mohammadi, J.; Hosseini, M. Machine learning-assisted image-based optical devices for health monitoring and food safety. TrAC Trends Anal. Chem. 2024, 177, 117794. [Google Scholar] [CrossRef]
- Lee, I.H.; Ma, L.Y. Integrating machine learning, optical sensors, and robotics for advanced food quality assessment and food processing. Food Innov. Adv. 2025, 4, 65–72. [Google Scholar] [CrossRef]
- Morawski, R.Z.; Miekina, A. PCA-based algorithm for calibration of spectrophotometric analysers of food. In Proceedings of the Joint International-Measurement-Confederation TC1-TC7-TC13 Symposium on Measurement Across Physical and Behavioural Sciences (IMEKO), Genoa, Italy, 4–6 September 2013. [Google Scholar]
- Ali, S.S.E.; Dildar, S.A. An Efficient Quality Inspection of Food Products Using Neural Network Classification. J. Intell. Syst. 2020, 29, 1425–1440. [Google Scholar] [CrossRef]
- Li, H.; Song, W.Y.; Zhao, Y.; Cao, Q.; Wen, A.H. Optical Trapping, Sensing, and Imaging by Photonic Nanojets. Photonics 2021, 8, 434. [Google Scholar] [CrossRef]
- Yu, X.; Ji, Y.; Shen, X.Y.; Le, X.Y. Progress in Advanced Infrared Optoelectronic Sensors. Nanomaterials 2024, 14, 845. [Google Scholar] [CrossRef]
- Ran, Y.X.; Yu, J.D.; Cao, F.; Yu, J.F.; Bu, L.J.; Lu, G.H. In situ infrared spectroscopy depth profilometer for organic thin films. Rev. Sci. Instrum. 2022, 93, 113901. [Google Scholar] [CrossRef]
- Rodriguez-Saona, L.; Aykas, D.P.; Borba, K.R.; Urtubia, A. Miniaturization of optical sensors and their potential for high-throughput screening of foods. Curr. Opin. Food Sci. 2020, 31, 136–150. [Google Scholar] [CrossRef]
- Lu, G.; Duan, L.; Meng, S.; Cai, P.; Ding, S.; Wang, X. Development of a colorimetric and turn-on fluorescent probe with large Stokes shift for H2S detection and its multiple applications in environmental, food analysis and biological imaging. Dye. Pigment. 2023, 220, 111687. [Google Scholar] [CrossRef]
- Baek, I.-S.; Kim, M.S.; Lee, H.; Lee, W.-H.; Cho, B.-K. Optimal Fluorescence Waveband Determination for Detecting Defective Cherry Tomatoes Using a Fluorescence Excitation-Emission Matrix. Sensors 2014, 14, 21483–21496. [Google Scholar] [CrossRef] [PubMed]
- Zielinski, B.; Plichta, A.; Misztal, K.; Spurek, P.; Brzychczy-Wloch, M.; Ochonska, D. Deep learning approach to bacterial colony classification. PLoS ONE 2017, 12, e0184554. [Google Scholar] [CrossRef] [PubMed]
- Dang, K.-P.T.; Nguyen, T.T.-G.; Cao, T.-D.; Le, V.-D.; Dang, C.-H.; Duy, N.P.H.; Phuong, P.T.T.; Huy, D.M.; Kim Chi, T.T.; Nguyen, T.-D. Biogenic fabrication of a gold nanoparticle sensor for detection of Fe3+ ions using a smartphone and machine learning. RSC Adv. 2024, 14, 20466–20478. [Google Scholar] [CrossRef]
- Atwa, E.M.; Xu, S.M.; Rashwan, A.K.; Abdelshafy, A.M.; Elmasry, G.; Al-Rejaie, S.; Xu, H.X.; Lin, H.J.; Pan, J.M. Advances in Emerging Non-Destructive Technologies for Detecting Raw Egg Freshness: A Comprehensive Review. Foods 2024, 13, 3563. [Google Scholar] [CrossRef]
- Gul, N.; Muzaffar, K.; Shah, S.Z.A.; Assad, A.; Makroo, H.A.; Dar, B.N. Deep learning hyperspectral imaging: A rapid and reliable alternative to conventional techniques in the testing of food quality and safety. Qual. Assur. Saf. Crops Foods 2024, 16, 78–97. [Google Scholar] [CrossRef]
- Zhang, W.; Lang, X.; Liu, X.; Li, G.; Singh, R.; Zhang, B.; Kumar, S. Advances in Tapered Optical Fiber Sensor Structures: From Conventional to Novel and Emerging. Preprints 2023. [Google Scholar] [CrossRef]
- Sun, Y.; Waterhouse, G.I.N.; Qiao, X.; Xiao, J.; Xu, Z. Determination of chloramphenicol in food using nanomaterial-based electrochemical and optical sensors-A review. Food Chem. 2023, 410, 135434. [Google Scholar] [CrossRef]
- Chen, H.; Zhang, L.; Hu, Y.; Zhou, C.; Lan, W.; Fu, H.; She, Y. Nanomaterials as optical sensors for application in rapid detection of food contaminants, quality and authenticity. Sens. Actuators B—Chem 2021, 329, 129135. [Google Scholar] [CrossRef]
- Lim, S.J.; Ma, L.; Schleife, A.; Smith, A.M. Quantum dot surface engineering: Toward inert fluorophores with compact size and bright, stable emission. Coord. Chem. Rev. 2016, 320, 216–237. [Google Scholar] [CrossRef]
- Zhan, S.J.; Li, B.X.; Chen, T.; Tu, Y.D.; Ji, H.; Othman, D.M.; Xiao, M.F.; Liu, R.J.; Zhang, Z.H.; Tang, Y.; et al. High responsivity colloidal quantum dots phototransistors for low-dose near-infrared photodetection and image communication. Light-Sci. Appl. 2025, 14, 201. [Google Scholar] [CrossRef] [PubMed]
- Cheng, S.; Yang, M.; Hao, M.; Liang, X.; Fang, X.; Xiang, W. An innovative strategy: Ultra-stable alkaline earth modified CsPbBr3 quantum dots glass was prepared by washing-heat cycle for high definition backlight display. Mater. Today Chem. 2023, 31, 101628. [Google Scholar] [CrossRef]
- Ma, P.H.; Jia, X.X.; He, Y.Y.; Tao, J.H.; Wang, Q.; Wei, C. Recent progress of quantum dots for food safety assessment: A review. Trends Food Sci. Technol. 2024, 143, 104310. [Google Scholar] [CrossRef]
- Xia, X.H.; Zhao, P.; Zheng, J.; Li, X.H.; Zeng, X.; Men, D.; Luo, Y.Y.; Hou, C.J.; Huo, D.Q. A novel quantum dot-based ratiometric fluorescence sensor array: For reducing substances detection and Baijiu quality discrimination. Anal. Chim. Acta 2025, 1347, 343785. [Google Scholar] [CrossRef]
- Niu, C.; Yao, Z.; Jiang, S. Synthesis and application of quantum dots in detection of environmental contaminants in food: A comprehensive review. Sci. Total Environ. 2023, 882, 163565. [Google Scholar] [CrossRef]
- Qi, P.; Chen, X.; Sun, Y.; Zhang, D. Multivalent glycosylated Cu:CdS quantum dots as a platform for rapid bacterial discrimination and detection. Sens. Actuators B—Chem 2018, 254, 431–436. [Google Scholar] [CrossRef]
- Yao, F.; Li, L.; Donglei, S.; Hanyue, L.; Haiyan, F.; Tianming, Y.; Yuanbin, S.; Chuang, N. “Turn-off” fluorescent data array sensor based on double quantum dots coupled with chemometrics for highly sensitive and selective detection of multicomponent pesticides. Anal. Chim. Acta 2016, 916, 84–91. [Google Scholar] [CrossRef]
- Liu, Y.; Chen, J.; Xu, Z.; Liu, H.; Yuan, T.; Wang, X.; Wei, J.; Shi, Q. Detection of multiple metal ions in water with a fluorescence sensor based on carbon quantum dots assisted by stepwise prediction and machine learning. Environ. Chem. Lett. 2022, 20, 3415–3420. [Google Scholar] [CrossRef]
- Li, F.; Jiang, J.; Peng, H.; Li, C.; Li, B.; He, J. Platinum nanozyme catalyzed multichannel colorimetric sensor array for identification and detection of pesticides. Sens. Actuators B—Chem 2022, 369, 132334. [Google Scholar] [CrossRef]
- de Arquer, F.P.G.; Talapin, D.V.; Klimov, V.I.; Arakawa, Y.; Bayer, M.; Sargent, E.H. Semiconductor quantum dots: Technological progress and future challenges. Science 2021, 373, eaaz8541. [Google Scholar] [CrossRef]
- He, M.-Q.; Ai, Y.; Hu, W.; Guan, L.; Ding, M.; Liang, Q. Recent Advances of Seed-Mediated Growth of Metal Nanoparticles: From Growth to Applications. Adv. Mater. 2023, 35, 2211915. [Google Scholar] [CrossRef] [PubMed]
- Lunjie, H.; Da-Wen, S.; Zhihui, W.; Hongbin, P.; Qingyi, W. Reproducible, shelf-stable, and bioaffinity SERS nanotags inspired by multivariate polyphenolic chemistry for bacterial identification. Anal. Chim. Acta 2021, 1167, 338570. [Google Scholar] [CrossRef] [PubMed]
- Fahimi-Kashani, N.; Hormozi-Nezhad, M.R. Gold-Nanoparticle-Based Colorimetric Sensor Array for Discrimination of Organophosphate Pesticides. Anal. Chem. 2016, 88, 8099–8106. [Google Scholar] [CrossRef] [PubMed]
- Yang, J.; Lu, S.; Chen, B.; Hu, F.; Li, C.; Guo, C. Machine learning-assisted optical nano-sensor arrays in microorganism analysis. TrAC Trends Anal. Chem. 2023, 159, 116945. [Google Scholar] [CrossRef]
- Koushkestani, M.; Abbasi-Moayed, S.; Ghasemi, F.; Mahdavi, V.; Hormozi-Nezhad, M.R. Simultaneous detection and identification of thiometon, phosalone, and prothioconazole pesticides using a nanoplasmonic sensor array. Food Chem. Toxicol. 2021, 151, 112109. [Google Scholar] [CrossRef]
- Guan, H.; Du, S.; Han, B.; Zhang, Q.; Wang, D. A rapid and sensitive smartphone colorimetric sensor for detection of ascorbic acid in food using the nanozyme paper-based microfluidic chip. LWT-Food Sci. Technol. 2023, 184, 115043. [Google Scholar] [CrossRef]
- Wu, F.; Wang, H.; Lv, J.; Shi, X.; Wu, L.; Niu, X. Colorimetric sensor array based on Au2Pt nanozymes for antioxidant nutrition quality evaluation in food. Biosens. Bioelectron. 2023, 236, 115417. [Google Scholar] [CrossRef]
- Li, Y.; Mu, Z.; Yuan, Y.; Zhou, J.; Bai, L.; Qing, M. An enzymatic activity regulation-based clusterzyme sensor array for high-throughput identification of heavy metal ions. J. Hazard. Mater. 2023, 454, 131501. [Google Scholar] [CrossRef]
- Chen, H.; Zhou, K.; Zhao, G. Gold nanoparticles: From synthesis, properties to their potential application as colorimetric sensors in food safety screening. Trends Food Sci. Technol. 2018, 78, 83–94. [Google Scholar] [CrossRef]
- Liu, S.; An, Z.; Zhou, B. Optical multiplexing of upconversion in nanoparticles towards emerging applications. Chem. Eng. J. 2023, 452, 139649. [Google Scholar] [CrossRef]
- Wen, S.; Zhou, J.; Zheng, K.; Bednarkiewicz, A.; Liu, X.; Jin, D. Advances in highly doped upconversion nanoparticles. Nat. Commun. 2018, 9, 2415. [Google Scholar] [CrossRef]
- Wu, S.; Duan, N.; Shi, Z.; Fang, C.; Wang, Z. Simultaneous Aptasensor for Multiplex Pathogenic Bacteria Detection Based on Multicolor Upconversion Nanoparticles Labels. Anal. Chem. 2014, 86, 3100–3107. [Google Scholar] [CrossRef] [PubMed]
- Wang, Q.-Q.; Fang, Z.-Q.; Wu, Y.-T.; Zhang, M.; Shi, G. A single-component yet multifunctional tongue-mimicking sensor array for upconversion fluorescence biosensing. Analyst 2020, 145, 7191–7196. [Google Scholar] [CrossRef]
- Rong, Y.; Hassan, M.M.; Ouyang, Q.; Chen, Q. Lanthanide ion (Ln3+)-based upconversion sensor for quantification of food contaminants: A review. Compr. Rev. Food Sci. Food Saf. 2021, 20, 3531–3578. [Google Scholar] [CrossRef] [PubMed]
- Sun, D.-W.; Huang, L.; Pu, H.; Ma, J. Introducing reticular chemistry into agrochemistry. Chem. Soc. Rev. 2021, 50, 1070–1110. [Google Scholar] [CrossRef] [PubMed]
- Zhang, C.; Huang, L.; Sun, D.-W.; Pu, H. Interfacing metal-polyphenolic networks upon photothermal gold nanorods for triplex-evolved biocompatible bactericidal activity. J. Hazard. Mater. 2022, 426, 127824. [Google Scholar] [CrossRef]
- Yang, G.-L.; Jiang, X.-L.; Xu, H.; Zhao, B. Applications of MOFs as Luminescent Sensors for Environmental Pollutants. Small 2021, 17, 2005327. [Google Scholar] [CrossRef]
- Guan, L.; Jiang, Z.; Cui, Y.; Yang, Y.; Yang, D.; Qian, G. An MOF-Based Luminescent Sensor Array for Pattern Recognition and Quantification of Metal Ions. Adv. Opt. Mater. 2021, 9, 2002180. [Google Scholar] [CrossRef]
- Liu, X.; Huo, D.; Li, J.; Ma, Y.; Liu, H.; Luo, H.; Zhang, S.; Luo, X.; Hou, C. Pattern-recognizing-assisted detection of mildewed wheat by Dyes/Dyes-Cu-MOF paper-based colorimetric sensor array. Food Chem. 2023, 415, 135525. [Google Scholar] [CrossRef]
- Lu, Y.; Zhang, X.; Huang, Y. Tuning nanozyme property of Co@NC via V doping to construct colorimetric sensor array for quantifying and discriminating antioxidant phenolic compounds. Biosens. Bioelectron. 2022, 215, 114567. [Google Scholar] [CrossRef]
- Zhang, X.; Li, G.; Wu, D.; Li, X.; Hu, N.; Chen, J.; Chen, G.; Wu, Y. Recent progress in the design fabrication of metal-organic frameworks-based nanozymes and their applications to sensing and cancer therapy. Biosens. Bioelectron. 2019, 137, 178–198. [Google Scholar] [CrossRef] [PubMed]
- Zhang, C.; Huang, L.; Pu, H.; Sun, D.-W. Magnetic surface-enhanced Raman scattering (MagSERS) biosensors for microbial food safety: Fundamentals and applications. Trends Food Sci. Technol. 2021, 113, 366–381. [Google Scholar] [CrossRef]
- Wang, W.; Gunasekaran, S. Nanozymes-based biosensors for food quality and safety. TrAC Trends Anal. Chem. 2020, 126, 115841. [Google Scholar] [CrossRef]
- Zeng, L.; Zhu, Z.; Sun, D.-W. Novel graphene oxide/polymer composite membranes for the food industry: Structures, mechanisms and recent applications. Crit. Rev. Food Sci. Nutr. 2022, 62, 3705–3722. [Google Scholar] [CrossRef]
- Ximeng, Y.; Qian, H.; Huixin, Z.; Huiling, J.I.N.; Xinyang, H.; Xingang, M. Research progress on the application of high-resolution mass spectrometry technology in the pesticide residue analysis. J. North. Agric. 2023, 51, 85–92. [Google Scholar] [CrossRef]
- Fang, L.; Liao, X.; Jia, B.; Shi, L.; Kang, L.; Zhou, L.; Kong, W. Recent progress in immunosensors for pesticides. Biosens. Bioelectron. 2020, 164, 112255. [Google Scholar] [CrossRef]
- Li, M.; Pan, Q.; Wang, J.; Wang, Z.; Peng, C. Machine learning-assisted fluorescence sensor array for qualitative and quantitative analysis of pyrethroid pesticides. Food Chem. 2024, 433, 137368. [Google Scholar] [CrossRef]
- Li, M.; Jia, L.; Zhao, X.; Zhang, L.; Zhao, D.; Xu, J.; Zhao, T. Machine learning-assisted ratiometric fluorescence sensor array for recognition of multiple quinolones antibiotics. Food Chem. 2025, 478, 143722. [Google Scholar] [CrossRef]
- Li, H.; Luo, X.; Haruna, S.A.; Zareef, M.; Chen, Q.; Ding, Z.; Yan, Y. Au-Ag OHCs-based SERS sensor coupled with deep learning CNN algorithm to quantify thiram and pymetrozine in tea. Food Chem. 2023, 428, 136798. [Google Scholar] [CrossRef]
- He, Y.; Wang, Y.; Mao, G.; Liang, C.; Fan, M. Ratiometric fluorescent nanoprobes based on carbon dots and multicolor CdTe quantum dots for multiplexed determination of heavy metal ions. Anal. Chim. Acta 2022, 1191, 339251. [Google Scholar] [CrossRef]
- Li, W.; Zu, B.; Yang, Q.; Huang, Y.; Li, J. Adsorption of lead and cadmium by microplastics and their desorption behavior as vectors in the gastrointestinal environment. J. Environ. Chem. Eng. 2022, 10, 107379. [Google Scholar] [CrossRef]
- Pan, H.; Lakshmipriya, T.; Gopinath, S.C.B.; Anbu, P. High-Affinity Detection of Metal-Mediated Nephrotoxicity by Aptamer Nanomaterial Complementation. Curr. Nanosci. 2019, 15, 549–556. [Google Scholar] [CrossRef]
- Liu, F.; Zhang, Z.; Zhang, L.; Meng, R.-N.; Gao, J.; Jin, M.; Li, M.; Wang, X.-P. Effect of metal ions on Alzheimer’s disease. Brain Behav. 2022, 12, e2527. [Google Scholar] [CrossRef] [PubMed]
- Zhou, X.; Zhao, C.; Sun, J.; Cao, Y.; Yao, K.; Xu, M. A deep learning method for predicting lead content in oilseed rape leaves using fluorescence hyperspectral imaging. Food Chem. 2023, 409, 135251. [Google Scholar] [CrossRef] [PubMed]
- Zhou, X.; Zhao, C.; Sun, J.; Cheng, J.; Xu, M. Determination of lead content in oilseed rape leaves in silicon-free and silicon environments based on deep transfer learning and fluorescence hyperspectral imaging. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2024, 311, 123991. [Google Scholar] [CrossRef] [PubMed]
- Chen, M.-M.; Zhang, Y.-Q.; Cheng, L.-C.; Zhao, F.-J.; Wang, P. Machine learning-powered fluorescent sensor arrays for rapid detection of heavy metals and pesticides in complex environments. Biosens. Bioelectron. 2025, 287, 117706. [Google Scholar] [CrossRef]
- Lu, Z.; Chen, M.; Liu, T.; Wu, C.; Sun, M.; Su, G.; Wang, X.; Wang, Y.; Yin, H.; Zhou, X.; et al. Machine Learning System To Monitor Hg2+ and Sulfide Using a Polychromatic Fluorescence-Colorimetric Paper Sensor. ACS Appl. Mater. Interfaces 2023, 15, 9800–9812. [Google Scholar] [CrossRef]
- Chen, X.; Xu, J.; Zhou, H.; Zhao, Y.; Wu, Y.; Zhang, J.; Zhang, S. Tree-based machine learning models assisted fluorescent sensor array for detection of metal ions based on silver nanocluster probe. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2023, 297, 122738. [Google Scholar] [CrossRef]
- Chen, J.; Xiong, X.; Ye, J.; Shuai, X.; Zhou, J.; Liu, Q.; Huang, C.; Liu, H. Machine learning-assisted three-dimensional fluorescence for heavy metal multi-sensing. Sens. Actuators B Chem. 2025, 431, 137385.
- Park, S.; Lee, J.; Khan, S.; Wahab, A.; Kim, M. Machine Learning-Based Heavy Metal Ion Detection Using Surface-Enhanced Raman Spectroscopy. Sensors 2022, 22, 596.
- Han, F.; Huang, X.; Teye, E. Novel prediction of heavy metal residues in fish using a low-cost optical electronic tongue system based on colorimetric sensors array. J. Food Process Eng. 2019, 42, e12983.
- Sajed, S.; Arefi, F.; Kolahdouz, M.; Sadeghi, M.A. Improving sensitivity of mercury detection using learning based smartphone colorimetry. Sens. Actuators B Chem. 2019, 298, 126942.
- Saima, H.; Lijuan, X.; Yibin, Y. Conventional and emerging detection techniques for pathogenic bacteria in food science: A review. Trends Food Sci. Technol. 2018, 81, 61–73.
- Lee, H.; Yoon, Y. Etiological Agents Implicated in Foodborne Illness World Wide. Food Sci. Anim. Resour. 2021, 41, 1–7.
- Wang, F.; Xiao, M.; Qi, J.; Zhu, L. Paper-based fluorescence sensor array with functionalized carbon quantum dots for bacterial discrimination using a machine learning algorithm. Anal. Bioanal. Chem. 2024, 416, 3139–3148.
- Zhu, L.; Mei, L.; Xuan, Y.; Wang, F. Machine learning assisted paper-based fluorescent sensor array with metal-doped multicolor carbon quantum dots for identification and inactivation of bacteria. Talanta 2025, 293, 128035.
- Qin, Y.; Sun, J.; Huang, W.; Yue, H.; Meng, F.; Zhang, M. Colorimetric sensor array for the rapid distinction and detection of various antibiotic-resistant psychrophilic bacteria in raw milk based-on machine learning. Food Chem. X 2024, 22, 101281.
- Li, Y.; Chen, F.; Liu, Y.; Khan, M.A.; Zhao, H.; Cao, H.; Ye, D. Identification of multiple foodborne pathogens using single-atom nanozyme colorimetric sensor arrays and machine learning. Chem. Eng. J. 2025, 511, 162115.
- Zhang, L.; Zhang, M.; Mujumdar, A.S.; Wang, D. Deep Learning Used with a Colorimetric Sensor Array to Detect Indole for Nondestructive Monitoring of Shrimp Freshness. ACS Appl. Mater. Interfaces 2024, 16, 37445–37455.
- Abed, M.M.; Wouters, C.L.; Froehlich, C.E.; Nguyen, T.B.; Caldwell, R.; Riley, K.L.; Roy, P.; Reineke, T.M.; Haynes, C.L. A Machine Learning-Enabled SERS Sensor: Multiplex Detection of Lipopolysaccharides from Foodborne Pathogenic Bacteria. ACS Appl. Mater. Interfaces 2025, 17, 45139–45149.
- Potarniche, I.-A.; Sarosi, C.; Terebes, R.M.; Szolga, L.; Galatus, R. Classification of Food Additives Using UV Spectroscopy and One-Dimensional Convolutional Neural Network. Sensors 2023, 23, 7517.
- Wen, R.; Han, Y.; Tang, X.; Zeng, W.; Zhang, S.; Wu, L. Advancing food safety with SERS: The role of noble metal nanomaterials in detecting food contaminants. Trends Food Sci. Technol. 2025, 160, 104995.
- Feng, Y.; Wang, Y.; Beykal, B.; Xiao, Z.; Luo, Y. Machine learning supported ground beef freshness monitoring based on near-infrared and paper chromogenic array. Food Front. 2024, 5, 2199–2210.
- Lee, J.; Kim, M.; Yoon, J.; Yoo, K.; Byun, S.J. PA2E: Real-Time Anomaly Detection With Hyperspectral Imaging for Food Safety Inspection. IEEE Access 2024, 12, 175535–175549.
- Tahir, M.W.; Zaidi, N.A.; Blank, R.; Vinayaka, P.P.; Vellekoop, M.J.; Lang, W. Fungus Detection Through Optical Sensor System Using Two Different Kinds of Feature Vectors for the Classification. IEEE Sens. J. 2017, 17, 5341–5349.
- Rahman, N.; Arefin, M.; Rahman, S.; Islam, M.S.; Khatun, T.; Akter, U. Enhancing the accuracy of fruit freshness detection by utilizing transfer learning and customizing convolutional neural network (CNN). In Proceedings of the 3rd International Conference on Computing and Machine Intelligence (ICMI), Mt Pleasant, MI, USA, 13–14 April 2024.
- Iqbal, M.; Haider, S.T.; Shoukat, R.S.; Rehman, S.U.; Mahmood, K.; Villar, S.G.; Lopez, L.A.D.; Ashraf, I. Canned Apple Fruit Freshness Detection Using Hybrid Convolutional Neural Network and Transfer Learning. J. Food Qual. 2025, 2025.
- Xu, J.; Luo, X.; Duan, M.; Guo, K.; Shangguan, Y.; Zhao, Q.; Qiu, M.; Wang, F. Machine Learning-Assisted Portable Dual-Readout Biosensor for Visual Detection of Milk Allergen. Nano Lett. 2025, 25, 5477–5484.




| Sample | Detection Method | Core Machine Learning | Effect | Testing Time | LOD | Reference |
|---|---|---|---|---|---|---|
| Pyrethroid pesticides (PPs): Delta, Fenva, Cyflu, Fenpro | Fluorescence | SVM, HCA | Classify PPs of different types and concentration ratios | 30 min | 0.047, 0.06, 0.02, 0.047 μM | [114] |
| QNs: LVLX, GTLX, MXLX | Fluorescence | PCA, DT | Linear transformation of high-dimensional, correlated data into low-dimensional, uncorrelated principal components for model training | 2–12 min | 8.93, 9.51, 4.25 nM | [114] |
| Thiram, Pymetrozine | SERS | PLS, ELM, CNN | Feature extraction, data dimensionality reduction, nonlinear fitting | 40–50 min | 0.286 ppb, 29 ppb | [114] |
| Fungicide: Tebuconazole, trifloxystrobin, procymidone, cymoxanil, cyazofamid | Hyperspectral imaging (HSI) | LR, SVM, RF, CNN, PCA | Data dimensionality reduction and preliminary visualization; basic classification and feature association; high-dimensional feature extraction and optimal classification | 15–20 min | 0.0125–0.0625 g/L | [114] |
| Imidacloprid | Fluorescence | FNN | Capturing nonlinear relationships, high-precision prediction | 1.2 h | 75 nM (19 μg/kg) | [114] |
| TC, OTC, DOX | Fluorescent | SVM | Qualitative classification, linear fitting | 30–70 min | 0.077, 0.075, 0.256 μM | [114] |
| SUs: Met, Nic, Rim, Hal, Sul | Fluorescence, colorimetry | KNN, RF, SVM, DT | Qualitative identification, integrated decision trees, anti-overfitting, kernel function mapping in high-dimensional space, recursive feature partitioning | 40–60 min | 0.1 μg/mL | [114] |
| Chlorothalonil, Carbendazim, Diazinon, Fenvalerate | Fluorescence | HCA, PCA, LDA | Clustering, Dimension Reduction, Classification Prediction | 15 min | <10 ppb | [114] |
| TM | Smartphone-Assisted Visualization, Fluorescence | SVM, CNN | Quantitative, Visualization | 55–90 min | 0.1306 μmol/L (0.045 mg/kg) | [114] |
| Penconazole | Fluorescence | LDA, HCA | | | 8.22 ppb | [114] |
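
Most of the pipelines summarized above pair a dimensionality-reduction step (PCA, LDA) with a classifier or regressor (SVM, KNN, RF, CNN). As a purely illustrative sketch, and not a reproduction of any cited study, the following Python snippet trains a PCA + RBF-kernel SVM classifier on synthetic fluorescence "sensor-array" spectra; the data generator, channel count, and hyperparameters are assumptions chosen only to make the example self-contained.

```python
# Illustrative sketch only: synthetic spectra, not data from the cited studies.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for fluorescence readouts: 4 pesticide classes,
# 60 samples each, 128 spectral channels with class-specific peak positions.
n_classes, n_per_class, n_channels = 4, 60, 128
grid = np.arange(n_channels)
X, y = [], []
for c in range(n_classes):
    centers = rng.uniform(0, n_channels, size=3)  # class-specific peaks
    for _ in range(n_per_class):
        spectrum = sum(np.exp(-0.5 * ((grid - m) / 6.0) ** 2) for m in centers)
        X.append(spectrum + rng.normal(0, 0.05, n_channels))  # add measurement noise
        y.append(c)
X, y = np.array(X), np.array(y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Standardize, compress to a few principal components, classify with an RBF-kernel SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=10.0))
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In practice, the number of retained components and the SVM hyperparameters would be selected by cross-validation on real calibration spectra rather than fixed as above.
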

| Sample | Detection Method | Core Machine Learning | Effect | Testing Time | LOD | Reference |
|---|---|---|---|---|---|---|
| Pb2+ | Fluorescence hyperspectral imaging | SVR (support vector regression), SDAE | Quantitative modeling and analysis; extraction of deep features from the data | 25 min | | [121,122] |
| Cd2+, Hg2+ | Fluorescence | LDA, HCA | Qualitative discrimination, quantitative analysis; interference verification | 15 min | Cd2+: 0.501 nM; Hg2+: 0.535 nM | [123] |
| Hg2+ | Fluorescence | DL, SVM | Quantitative modeling to optimize detection accuracy | 10–35 min | Fluorescence spectroscopy: 0.002 μM, smartphone: 0.834 μM | [124] |
| Cu2+, Co2+, Ni2+, Cr3+, Mn2+, Fe3+ | Fluorescence | LDA, KNN | Dimensionality reduction, classification | 30–40 min | | [125] |
| Ni2+, Cr3+, Mn2+, Co2+, Zn2+, As3+, As5+, Cd2+, Pb2+ | Fluorescence | RF, SVM, ANN, DT | Qualitative Classification, Quantitative Regression, Anti-Interference, Model Comparison and Selection, Image Recognition | 40–45 min | 5–10 μM | [126] |
| Cr6+, Fe3+, Fe2+, Hg2+ | Fluorescence | SX-model (Stepwise Predictive Model), PCA, LDA, SVM, ANN | Prediction Logic, Unified Model Data Fusion, Model Validation and Optimization | 20–25 min | 1–50 μM | [85] |
| Fe3+ | Colorimetry | | | | | [71] |
| Pb2+ | SERS | Radial basis function kernel SVM (RBFSVM), compared with LR, LinSVM, NB, DT, RF, MLP; (BC, PSN, RAW); PCA, D-tSNE | Model comparison, data preprocessing, dimensionality reduction, and visualization | | 0.01–1000 μM (BACC > 80%) | [127] |
| Pb, Cd, Hg | Colorimetry | PLS, ELM | Regression, model building | | | [128] |
| Hg2+ | Colorimetry | MLR | Build models, enhance sensitivity, reduce complexity, validate models | 12 min | 1 nM (0.2 ppb) | [129] |
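
For quantitative heavy-metal determination, the regression models listed above (SVR, PLS, ELM, MLR) act as calibration functions mapping a sensor readout to a concentration. The sketch below, again using synthetic data rather than any cited dataset, fits an RBF-kernel SVR calibration for a single analyte; the Langmuir-like response curve, noise level, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only: simulated calibration data, not the cited work.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Simulated calibration set: concentrations (nM) and a saturating, noisy sensor response.
conc = rng.uniform(0.5, 100.0, size=200)                           # nM
response = conc / (conc + 20.0) + rng.normal(0, 0.02, conc.size)   # Langmuir-like + noise
X = response.reshape(-1, 1)

# An RBF-kernel SVR captures the nonlinear response-concentration relationship.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.5))
r2 = cross_val_score(model, X, conc, cv=5, scoring="r2")
print("cross-validated R^2:", r2.mean().round(3))

# Fit on the full calibration set and predict the concentration for a new reading.
model.fit(X, conc)
print("predicted conc (nM):", model.predict([[0.62]]).round(2))
```
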

| Detection Substance | Detection Method | Core Machine Learning | Effect | Testing Time | LOD | Reference |
|---|---|---|---|---|---|---|
| E. coli, P. aeruginosa, S. typhimurium, S. aureus, L. monocytogenes | Fluorescence | DT, LDA, KNN, SVM | Feature-adaptive selection; high-dimensional data dimensionality reduction; distance-based similarity judgment; nonlinear classification | 12 min | 1.0 × 10³ CFU/mL | [132,133] |
| Pseudomonas endophytica, Klebsiella oxytoca, Acinetobacter johnsonii, Chryseobacterium timonianum | Colorimetry | LDA | Dimensionality reduction, classification, result validation, visualization analysis | 6 h | 10² CFU/mL | [134] |
| S. aureus, SE, V. vulnificus, V. harveyi, L. monocytogenes, V. parahaemolyticus | Colorimetry | LDA, PCA, HCA | Classification, dimensionality reduction | 55 min | 10⁵ CFU/mL | [135] |
| Indole | Colorimetry | DCNN | Core classification model, attention mechanism modeling, residual connections to address vanishing gradients, lightweight feature extraction | | 3 μg/100 g | [136] |
| Salmonella typhimurium, E. coli O26:B6, E. coli O111:B4 | SERS | PCA, SVC, DT, RF | Dimensionality reduction, modeling, filtering, clustering | 3.5 h | 0.7, 1.5, 1.4 μg/mL | [137] |
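
The bacterial-discrimination entries above typically combine LDA (or PCA) score projections with hierarchical cluster analysis (HCA) of sensor-array fingerprints. The following sketch shows this LDA + HCA pattern with scikit-learn and SciPy on randomly generated fingerprints rather than data from the cited works; the array size, species count, and clustering settings are assumptions for illustration.

```python
# Illustrative sketch only: synthetic colorimetric fingerprints, not the cited studies.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)

# Synthetic colorimetric sensor array: 5 bacterial species x 12 replicates,
# 16 dye spots per array, each species shifting the color-change pattern differently.
n_species, n_rep, n_spots = 5, 12, 16
X = np.vstack([rng.normal(loc=rng.uniform(-1, 1, n_spots), scale=0.15,
                          size=(n_rep, n_spots)) for _ in range(n_species)])
y = np.repeat(np.arange(n_species), n_rep)

# LDA projects the 16-spot fingerprints onto two discriminant axes.
lda = LinearDiscriminantAnalysis(n_components=2)
scores = lda.fit_transform(X, y)
print("LDA training accuracy:", lda.score(X, y))

# HCA (Ward linkage) on the LDA scores groups replicates of the same species together.
clusters = fcluster(linkage(scores, method="ward"), t=n_species, criterion="maxclust")
print("cluster labels for the first species' replicates:", clusters[:n_rep])
```
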