# An Intelligent Sorting Method of Film in Cotton Combining Hyperspectral Imaging and the AlexNet-PCA Algorithm


## Abstract


## 1. Introduction

## 2. Materials and Methods

### 2.1. Hyperspectral Sorting System

#### 2.1.1. Experimental Materials

#### 2.1.2. Algorithm Environment

The hardware environment consisted of an Intel® Core™ i7-6700 CPU (Intel Semiconductor Co., Ltd., Dalian, China), 16 GB of RAM, and an NVIDIA GeForce RTX 2080Ti GPU with 11 GB of memory (Taiwan Integrated Circuit Manufacturing Co., Ltd., Taiwan, China). The software environment consisted of tensorflow-gpu 2.0.0, spectral 0.22.1, sklearn 0.23.2, matplotlib 3.2.2, keras 2.3.1, CUDA 10.2.89, and cuDNN 7.6.5, all under the Python 3.6 programming language.

#### 2.1.3. Technical Route

### 2.2. Black and White Correction of Hyperspectral Images
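In standard hyperspectral practice, the black-and-white correction divides the dark-subtracted raw image by the dark-subtracted white reference. A minimal sketch under that assumption (the function and variable names here are illustrative, not the paper's):

```python
import numpy as np

def black_white_correction(raw, white, dark):
    """Flat-field correction for a hyperspectral cube.

    raw, white, dark: arrays of shape (h, w, N) holding the raw scene,
    the white-reference image, and the dark-reference image.
    Returns calibrated reflectance, approximately in [0, 1].
    """
    raw = raw.astype(np.float64)
    dark = dark.astype(np.float64)
    denom = white.astype(np.float64) - dark
    denom[denom == 0] = np.finfo(np.float64).eps  # guard against division by zero
    return (raw - dark) / denom
```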

### 2.3. Dimension Reduction of Hyperspectral Data

#### 2.3.1. Linear Discriminant Analysis

#### 2.3.2. Principal Component Analysis

**Data conversion.** As the hyperspectral image data are read in, each band is converted into a one-dimensional vector. The hyperspectral image data are assumed to have a total of N bands at a resolution of $w\times h$, so they can be represented as a $\left(w\times h\right)\times N$ matrix. Band i can then be expressed as
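The conversion above can be sketched with NumPy; the cube and the values of `w`, `h`, and `N` here are illustrative:

```python
import numpy as np

# Hypothetical hyperspectral cube: resolution w x h with N bands.
w, h, N = 64, 48, 256
cube = np.random.rand(w, h, N)

# Flatten each band into a one-dimensional vector of length w*h,
# giving the (w*h) x N matrix described above.
X = cube.reshape(w * h, N)

# Band i as a one-dimensional vector.
i = 10
x_i = X[:, i]
```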

**Constructing the eigenspace.** The mean vector of all bands is calculated as [24]

**Projection and similarity detection.** The difference vector between each band and the average band is projected into the eigenspace, and eigenvector i is expressed as
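The eigenspace construction and projection steps amount to an eigenface-style decomposition of the band-difference vectors. A minimal sketch, assuming the $(w\times h)\times N$ matrix from the data-conversion step; `project_bands` and its arguments are illustrative names:

```python
import numpy as np

def project_bands(X, k):
    """Project each band vector onto a k-dimensional eigenspace.

    X: (w*h, N) matrix whose columns are band vectors.
    Returns the k x N matrix of projection coefficients.
    """
    mean_band = X.mean(axis=1, keepdims=True)   # mean vector of all bands
    D = X - mean_band                           # difference vectors
    # Eigen-decompose the small N x N Gram matrix instead of the huge
    # (w*h) x (w*h) covariance (the usual eigenface trick).
    vals, vecs = np.linalg.eigh(D.T @ D)
    order = np.argsort(vals)[::-1][:k]          # top-k eigenvalues
    U = D @ vecs[:, order]                      # eigenvectors of the covariance
    U /= np.linalg.norm(U, axis=0)
    return U.T @ D                              # k x N projection coefficients
```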

#### 2.3.3. Independent Component Analysis

**(1) Whitening the data.** We denote the mean of the hyperspectral image data X as $\overline{X}$ and center the data to obtain

**(2) Finding the matrix W.** Let k be the number of iterations; the iterative computation of ${w}^{\left(k\right)}$ can then be expressed as [27]

**(3) Selecting bands.** The matrix W is defined as $\left({w}_{1},{w}_{2},\dots {w}_{j},\dots ,{w}_{N}\right)$, and its column vector ${w}_{j}$ is defined as ${\left({w}_{1j},{w}_{2j},\dots {w}_{ij},\dots ,{w}_{Nj}\right)}^{T}(i,j=1,2,\dots N)$, where ${w}_{ij}$ indicates how much information about independent component i is carried by band j. The average absolute weight factor then measures how much independent-component information each band contains:
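A hedged sketch of this band-selection step using scikit-learn's `FastICA` (the `sklearn` package appears in the paper's environment; the `whiten="unit-variance"` argument assumes scikit-learn ≥ 1.1, and `select_bands` is an illustrative name, not the paper's code):

```python
import numpy as np
from sklearn.decomposition import FastICA

def select_bands(X, n_components, n_bands):
    """Rank bands by average absolute ICA weight and keep the top ones.

    X: (samples, N) matrix whose columns are band vectors.
    Returns the indices of the n_bands highest-scoring bands.
    """
    ica = FastICA(n_components=n_components, whiten="unit-variance",
                  max_iter=1000, random_state=0)
    ica.fit(X)
    W = ica.components_                  # (n_components, N) unmixing weights
    score = np.mean(np.abs(W), axis=0)   # average absolute weight per band
    return np.argsort(score)[::-1][:n_bands]
```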

### 2.4. Construction of the Convolutional Neural Network

**Convolution layer.** The convolution layer applies convolution kernels to transform the input matrix into the unit matrix of the next layer. During forward propagation, each node of the output unit matrix is computed from the nodes of the input matrix via the convolution kernel [29]. Multiple convolution kernels are convolved with the input image data, and a series of feature maps is obtained by applying an activation function after adding a bias [30]. In this paper, the ReLU activation function maps each neuron's input to its output; its nonlinearity allows the network to fit a wide range of nonlinear models. The convolution formula is expressed as follows [29]:

**Pooling layer.** If all the features obtained by convolution were fed directly into the classifier, a significant amount of computation would be required. The pooling function therefore processes the feature maps produced by convolution; max pooling is used in this paper. Pooling reduces the dimension of the feature information from the convolution layer, shrinking the matrix in height and width while preserving the invariance of the feature scale. It also reduces the number of parameters in the whole neural network, improving the generalization ability of the model [31].
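Max pooling takes the maximum over each window of a feature map. A minimal sketch with a 2 × 2 window and stride 2 (the sizes are illustrative, not the paper's 3 × 3 configuration):

```python
import numpy as np

# A single 4 x 4 feature map.
fmap = np.arange(16, dtype=float).reshape(4, 4)

# 2 x 2 max pooling with stride 2: group pixels into non-overlapping
# 2 x 2 blocks and take the maximum of each block.
pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3))
# pooled == [[5., 7.], [13., 15.]]
```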

**Fully connected layer.** Through successive convolution and pooling, images are gradually distilled into higher-level, more abstract feature information, which is classified by the fully connected layers [32]. After the input feature maps are unrolled into a one-dimensional vector, the fully connected layer produces its output via a weighted summation followed by an activation function. The output formula is [29]
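The unroll-then-weighted-sum computation can be sketched in a few lines of NumPy (the layer sizes here are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
feature_maps = rng.random((2, 2, 9))   # pooled feature maps (illustrative)
x = feature_maps.reshape(-1)           # unroll into a one-dimensional vector
W = rng.random((18, x.size))           # weights for 18 output neurons
b = rng.random(18)                     # biases
y = np.maximum(0.0, W @ x + b)         # ReLU(Wx + b): weighted sum + activation
```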

### 2.5. Design of Intelligent Recognition Algorithm for Film in Seed Cotton

## 3. Results and Discussion

### 3.1. Design of Intelligent Recognition Algorithm for Film in Seed Cotton

### 3.2. CNN Model Training

**LeNet model training.** The training and testing accuracy of LeNet versus the number of training epochs, together with the training and testing loss curves, are shown in Figure 5.

**AlexNet model training.** The training and testing accuracy of AlexNet versus the number of training epochs, together with the training and testing loss curves, are shown in Figure 6.

**VGGNet model training.** The training and testing accuracy of VGGNet versus the number of training epochs, together with the training and testing loss curves, are shown in Figure 7.

### 3.3. CNN Model Testing

### 3.4. AlexNet-PCA Multi-Dimensional Algorithm Experiment

#### 3.4.1. AlexNet-PCA Model Training

#### 3.4.2. AlexNet-PCA Model Testing

#### 3.4.3. Practical Application Testing of Model AlexNet-PCA-12

### 3.5. Summary of Discussions and Results

## 4. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References

- Yang, W.; Li, D.; Zhu, L.; Kang, Y.; Li, F. A new approach for image processing in foreign fiber detection. *Comput. Electron. Agric.* **2009**, 2, 68–77.
- Whitelock, D.P.; Armijo, C.B.; Gamble, G.R.; Hughs, S.E. Survey of seed-cotton and lint cleaning equipment in US roller gins. *Eng. Ginning* **2007**, 11, 128–140.
- Zhang, H.; Wang, Q.; Li, Y.; Liu, Y.; Jia, D. Electrostatic separation motion analysis of machine-harvested cotton and residual film based on CFD. *J. Comput. Methods Sci. Eng.* **2020**, 2, 771–783.
- Li, D.; Yang, W.; Wang, S. Classification of foreign fibers in cotton lint using machine vision and multi-class support vector machine. *Comput. Electron. Agric.* **2010**, 2, 274–279.
- Guo, L.; Yu, Y.; Yu, H.; Tang, Y.; Li, J.; Du, Y.; Chu, Y.; Ma, S.; Ma, Y.; Zeng, X. Rapid quantitative analysis of adulterated rice with partial least squares regression using hyperspectral imaging system. *J. Sci. Food Agric.* **2019**, 2, 5558–5564.
- Ma, J.; Sun, D.W. Prediction of monounsaturated and polyunsaturated fatty acids of various processed pork meats using improved hyperspectral imaging technique. *Food Chem.* **2020**, 2, 126695.
- Zhang, M.; Li, C.; Yang, F. Classification of foreign matter embedded inside cotton lint using short wave infrared (SWIR) hyperspectral transmittance imaging. *Comput. Electron. Agric.* **2017**, 2, 75–90.
- Jiang, Y.; Li, C. mRMR-based feature selection for classification of cotton foreign matter using hyperspectral imaging. *Comput. Electron. Agric.* **2015**, 2, 191–200.
- Zhang, R.; Li, C.; Zhang, M.; Rodgers, J. Shortwave infrared hyperspectral reflectance imaging for cotton foreign matter classification. *Comput. Electron. Agric.* **2016**, 2, 260–270.
- Zhao, Z.Q.; Zheng, P.; Xu, S.; Wu, X. Object detection with deep learning: A review. *IEEE Trans. Neural Netw. Learn. Syst.* **2019**, 2, 3212–3232.
- Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep learning for computer vision: A brief review. *Comput. Intell. Neurosci.* **2018**, 2018, 7068349.
- Morales, G.; Sheppard, J.W.; Logan, R.D.; Shaw, J.A. Hyperspectral dimensionality reduction based on inter-band redundancy analysis and greedy spectral selection. *Remote Sens.* **2021**, 2, 3649.
- Jia, S.; Zhao, Q.; Zhuang, J.; Tang, D.; Long, Y.; Xu, M.; Zhou, J.; Li, Q. Flexible Gabor-based superpixel-level unsupervised LDA for hyperspectral image classification. *IEEE Trans. Geosci. Remote Sens.* **2021**, 2, 10394–10409.
- Kang, X.; Xiang, X.; Li, S.; Benediktsson, J.A. PCA-based edge-preserving features for hyperspectral image classification. *IEEE Trans. Geosci. Remote Sens.* **2017**, 2, 7140–7151.
- Lupu, D.; Necoara, I.; Garrett, J.L.; Johansen, T.A. Stochastic higher-order independent component analysis for hyperspectral dimensionality reduction. *IEEE Trans. Comput. Imaging* **2022**, 2, 1184–1194.
- Paoletti, M.E.; Haut, J.M.; Plaza, J.; Plaza, A. Deep learning classifiers for hyperspectral imaging: A review. *ISPRS J. Photogramm. Remote Sens.* **2019**, 2, 279–317.
- Ni, C.; Li, Z.; Zhang, X.; Zhao, L.; Zhu, T.; Wang, D. Online sorting of the film on cotton based on deep learning and hyperspectral imaging. *IEEE Access* **2020**, 2, 93028–93038.
- Fırat, H.; Asker, M.E.; Bayindir, M.İ.; Hanbay, D. Spatial-spectral classification of hyperspectral remote sensing images using 3D CNN based LeNet-5 architecture. *Infrared Phys. Technol.* **2022**, 2, 104470.
- Jiang, B.; He, J.; Yang, S.; Fu, H.; Li, T.; Song, H.; He, D. Fusion of machine vision technology and AlexNet-CNNs deep learning network for the detection of postharvest apple pesticide residues. *Artif. Intell. Agric.* **2019**, 2, 1–8.
- Zhao, J.; Pan, F.; Li, Z.; Lan, Y.; Lu, L.; Yang, D.; Wen, Y. Detection of cotton waterlogging stress based on hyperspectral images and convolutional neural network. *Int. J. Agric. Biol. Eng.* **2021**, 2, 167–174.
- Zebari, R.; Abdulazeez, A.; Zeebaree, D.; Zebari, D.; Saeed, J. A comprehensive review of dimensionality reduction techniques for feature selection and feature extraction. *J. Appl. Sci. Technol. Trends* **2020**, 2, 56–70.
- Qin, X.; Wang, S.; Chen, B.; Zhang, K. Robust Fisher linear discriminant analysis with generalized correntropic loss function. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 7117–7121.
- Wu, S.X.; Wai, H.T.; Li, L.; Scaglione, A. A review of distributed algorithms for principal component analysis. *Proc. IEEE* **2018**, 2, 1321–1340.
- Ye, M.; Ji, C.; Chen, H.; Lu, H.; Qian, Y. Residual deep PCA-based feature extraction for hyperspectral image classification. *Neural Comput. Appl.* **2020**, 2, 14287–14300.
- Ghosh, A.; Barman, S. Application of Euclidean distance measurement and principal component analysis for gene identification. *Gene* **2016**, 2, 112–120.
- Luo, Z. Independent vector analysis: Model, applications, challenges. *Pattern Recognit.* **2023**, 138, 109376.
- Sajjad, M.; Yusoff, M.Z.; Yahya, N.; Haider, A.S. An efficient VLSI architecture for FastICA by using the algebraic Jacobi method for EVD. *IEEE Access* **2021**, 2, 58287–58305.
- Huang, J.T.; Li, J.; Gong, Y. An analysis of convolutional neural networks for speech recognition. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, Australia, 19–24 April 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 4989–4993.
- Chen, Z.Q.; Li, C.; Sanchez, R.V. Gearbox fault identification and classification with convolutional neural networks. *Shock Vib.* **2015**, 2015, 390134.
- Ranjbarzadeh, R.; Jafarzadeh, G.S.; Bendechache, M.; Amirabadi, A.; Ab, R.M.N.; Baseri, S.S.; Aghamohammadi, A.; Kooshki, F.M. Lung infection segmentation for COVID-19 pneumonia based on a cascade convolutional network from CT images. *BioMed Res. Int.* **2021**, 2021, 5544742.
- Sun, M.; Song, Z.; Jiang, X.; Pan, J.; Pang, Y. Learning pooling for convolutional neural network. *Neurocomputing* **2017**, 2, 96–104.
- Basha, S.H.S.; Dubey, S.R.; Pulabaigari, V.; Mukherjee, S. Impact of fully connected layers on performance of convolutional neural networks for image classification. *Neurocomputing* **2020**, 2, 112–119.
- Janocha, K.; Czarnecki, W.M. On loss functions for deep neural networks in classification. *arXiv* **2017**, arXiv:1702.05659.
- Kanezaki, A. Unsupervised image segmentation by backpropagation. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1543–1547.
- Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. *J. Big Data* **2021**, 2, 1–74.

**Figure 1.** Hyperspectral imaging system. (1) Hyperspectral camera, (2) Halogen lamp, (3) Electronic control platform, (4) Distance regulating mechanism, (5) Transfer platform, (6) Industrial computer.

| Type | Variables | Kernel Parameter | Data Output |
|---|---|---|---|
| Input layer | 5 × 5 × D hyperspectral data set | | |
| 1-Conv | 3 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 2-Pool | Max pooling | Size: 3 × 3. All zero-filling. Step: 2 | |
| 3-Conv | 9 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 4-Pool | Max pooling | Size: 3 × 3. All zero-filling. Step: 2 | Dropout drops 25% weight |
| Flatten layer | Convert multi-dimensional input into one dimension | | |
| FC | Input neuron number: 108. Output neuron number: 18 | | |
| Output layer | Softmax loss function outputs the probabilities of four units. | | |
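As a rough illustration, the LeNet-style network in the table above can be written as a Keras `Sequential` model in the paper's tensorflow/keras environment. This is a sketch under assumptions: `D` (the number of retained bands) and the fully connected activation are not specified here, and the flattened size this sketch produces need not match the table's stated 108 inputs.

```python
import numpy as np
import tensorflow as tf

D = 12  # number of spectral bands after dimension reduction (assumed)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(5, 5, D)),                    # 5 x 5 x D patches
    tf.keras.layers.Conv2D(3, 3, strides=1, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(3, strides=2, padding="same"),
    tf.keras.layers.Conv2D(9, 3, strides=1, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(3, strides=2, padding="same"),
    tf.keras.layers.Dropout(0.25),                      # drops 25% of weights
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(18, activation="relu"),       # FC layer
    tf.keras.layers.Dense(4, activation="softmax"),     # four-class output
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```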

| Type | Variables | Kernel Parameter | Data Output |
|---|---|---|---|
| Input layer | 5 × 5 × D hyperspectral data set | | |
| 1-Conv | 3 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 1-Pool | Max pooling | Size: 3 × 3. All zero-filling. Step: 2 | |
| 2-Conv | 9 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 2-Pool | Max pooling | Size: 3 × 3. All zero-filling. Step: 2 | |
| 3-Conv | 12 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 4-Conv | 12 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 5-Conv | 9 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 5-Pool | Max pooling | Size: 3 × 3. All zero-filling. Step: 2 | |
| Flatten layer | Convert multi-dimensional input into one dimension | | |
| FC | Input neuron number: 27. Output neuron number: 60 | | |
| Output layer | Dropout drops 50% weight, Softmax loss function outputs the probabilities of four units. | | |

| Type | Variables | Kernel Parameter | Data Output |
|---|---|---|---|
| Input layer | 5 × 5 × D hyperspectral data set | | |
| 1-Conv | 3 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 1-Conv | 3 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 2-Pool | Max pooling | Size: 2 × 2. All zero-filling. Step: 2 | Dropout drops 20% weight |
| 3-Conv | 6 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 3-Conv | 6 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 4-Pool | Max pooling | Size: 2 × 2. All zero-filling. Step: 2 | Dropout drops 20% weight |
| 5-Conv | 12 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 5-Conv | 12 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 5-Conv | 12 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 6-Pool | Max pooling | Size: 2 × 2. All zero-filling. Step: 2 | Dropout drops 20% weight |
| 7-Conv | 24 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 7-Conv | 24 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 7-Conv | 24 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 8-Pool | Max pooling | Size: 2 × 2. All zero-filling. Step: 2 | Dropout drops 20% weight |
| 9-Conv | 24 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 9-Conv | 24 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 9-Conv | 24 convolution kernels | Size: 3 × 3. All zero-filling. Step: 1 | ReLU activation function |
| 10-Pool | Max pooling | Size: 2 × 2. All zero-filling. Step: 2 | Dropout drops 20% weight |
| Flatten layer | Convert multi-dimensional input into one dimension | | |
| FC | Input neuron number: 72. Output neuron number: 24 | | |
| Output layer | Dropout drops 20% weight, Softmax loss function outputs the probabilities of four units. | | |

| Model | Actual \ Predictive | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| LeNet | 1 | 91.89 | 5.72 | 2.22 | 0.17 |
| | 2 | 4.78 | 93.26 | 0.12 | 1.83 |
| | 3 | 3.99 | 0.05 | 90.49 | 5.47 |
| | 4 | 0.09 | 0.86 | 2.07 | 96.97 |
| AlexNet | 1 | 93.10 | 4.60 | 2.08 | 0.22 |
| | 2 | 5.17 | 92.87 | 0.18 | 1.77 |
| | 3 | 3.60 | 0.03 | 90.03 | 6.34 |
| | 4 | 0.00 | 0.81 | 1.24 | 97.95 |
| VGGNet | 1 | 90.44 | 6.84 | 2.48 | 0.23 |
| | 2 | 4.06 | 95.13 | 0.04 | 0.77 |
| | 3 | 4.40 | 0.07 | 84.55 | 10.98 |
| | 4 | 0.03 | 1.90 | 0.95 | 97.12 |

| Model | Actual \ Predictive | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| LeNet | 1 | 89.13 | 8.86 | 1.57 | 0.44 |
| | 2 | 15.31 | 83.88 | 0.08 | 0.72 |
| | 3 | 3.83 | 0.08 | 91.15 | 4.94 |
| | 4 | 0.23 | 1.33 | 1.82 | 96.63 |
| AlexNet | 1 | 91.80 | 6.72 | 1.18 | 0.30 |
| | 2 | 14.98 | 84.41 | 0.08 | 0.53 |
| | 3 | 3.97 | 0.10 | 87.81 | 8.11 |
| | 4 | 0.09 | 1.35 | 0.17 | 98.39 |
| VGGNet | 1 | 87.95 | 8.75 | 2.11 | 1.19 |
| | 2 | 16.43 | 82.05 | 0.08 | 1.44 |
| | 3 | 4.11 | 0.02 | 72.40 | 23.47 |
| | 4 | 0.40 | 0.81 | 0.00 | 98.79 |

| Model | Actual \ Predictive | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| LeNet | 1 | 75.24 | 0.07 | 23.07 | 1.62 |
| | 2 | 69.72 | 2.80 | 14.28 | 13.20 |
| | 3 | 90.51 | 0.75 | 8.62 | 0.12 |
| | 4 | 74.90 | 23.22 | 0.13 | 1.75 |
| AlexNet | 1 | 70.05 | 5.87 | 16.41 | 7.67 |
| | 2 | 32.78 | 32.72 | 15.84 | 18.65 |
| | 3 | 78.95 | 0.40 | 19.75 | 0.90 |
| | 4 | 65.08 | 24.49 | 5.19 | 5.25 |
| VGGNet | 1 | 48.65 | 18.68 | 21.23 | 11.44 |
| | 2 | 29.46 | 38.76 | 8.64 | 23.14 |
| | 3 | 28.10 | 10.37 | 51.14 | 10.39 |
| | 4 | 18.10 | 58.86 | 9.01 | 14.03 |

| Model | LDA | PCA | ICA |
|---|---|---|---|
| LeNet | 92.14 | 88.61 | 33.41 |
| AlexNet | 92.45 | 89.00 | 42.78 |
| VGGNet | 90.46 | 83.63 | 44.86 |
| Average | 91.68 | 87.08 | 40.35 |

| Model | Actual \ Predictive | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| AlexNet-PCA-6 | 1 | 94.53 | 4.12 | 1.27 | 0.08 |
| | 2 | 3.18 | 96.21 | 0.11 | 0.49 |
| | 3 | 2.30 | 0.01 | 94.44 | 3.25 |
| | 4 | 0.03 | 0.95 | 0.29 | 98.73 |
| AlexNet-PCA-9 | 1 | 96.88 | 1.67 | 1.37 | 0.07 |
| | 2 | 1.95 | 97.71 | 0.05 | 0.29 |
| | 3 | 1.06 | 0.03 | 97.68 | 1.23 |
| | 4 | 0.03 | 0.81 | 0.43 | 98.73 |
| AlexNet-PCA-12 | 1 | 98.04 | 1.20 | 0.73 | 0.04 |
| | 2 | 1.00 | 98.75 | 0.02 | 0.23 |
| | 3 | 1.55 | 0.04 | 97.34 | 1.07 |
| | 4 | 0.03 | 0.92 | 0.43 | 98.62 |
| AlexNet-PCA-15 | 1 | 98.38 | 0.50 | 1.11 | 0.01 |
| | 2 | 1.35 | 98.45 | 0.04 | 0.16 |
| | 3 | 0.96 | 0.01 | 98.38 | 0.66 |
| | 4 | 0.00 | 0.63 | 1.27 | 98.10 |

| Dimension | PCA-3 | PCA-6 | PCA-9 | PCA-12 | PCA-15 |
|---|---|---|---|---|---|
| AlexNet | 89.00 | 95.17 | 97.42 | 98.07 | 98.38 |

| Quantity of Trials | Quantity of Films in Cotton | Quantity of Removed Films | Removal Accuracy |
|---|---|---|---|
| 1 | 132 | 128 | 96.97% |
| 2 | 112 | 109 | 97.32% |
| 3 | 98 | 95 | 96.94% |
| 4 | 146 | 142 | 97.26% |
| 5 | 157 | 152 | 96.82% |
| 6 | 104 | 100 | 96.15% |
| 7 | 128 | 125 | 97.66% |
| 8 | 168 | 163 | 97.02% |
| 9 | 84 | 82 | 97.62% |
| 10 | 113 | 109 | 96.46% |
| Sum | 1242 | 1205 | 97.02% |


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Li, Q.; Zhao, L.; Yu, X.; Liu, Z.; Zhang, Y.
An Intelligent Sorting Method of Film in Cotton Combining Hyperspectral Imaging and the AlexNet-PCA Algorithm. *Sensors* **2023**, *23*, 7041.
https://doi.org/10.3390/s23167041
