Adaptive Pneumatic Separation Based on LGDNet Visual Perception for a Representative Fibrous–Granular Mixture
Abstract
1. Introduction
- A perception-informed regulation framework is proposed for pneumatic separation under non-stationary feeds, where the visually estimated composition ratio r is directly mapped to the inlet airflow velocity v as the final control variable.
- A lightweight visual ratio estimation model (LGDNet) is developed for dense and highly overlapped mixture images, enabling robust in-line composition estimation under high-throughput conditions.
- Extensive evaluations are conducted, including controlled benchmarking under a unified training/inference protocol and mechanism-level validation of regime-dependent airflow policies in a CFD–DEM simulation environment.
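The contributions above can be summarized as one loop: a lightweight classifier estimates the impurity ratio r from an image of the mixture stream, and r is mapped to the inlet air velocity v. A minimal sketch of that loop is given below. The 21-class design (0% to 100% impurity in 5% steps) follows the paper's dataset description; the expected-value decoding, the linear r-to-v mapping, and the 9–14 m/s range (taken from the reported velocity sweep) are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def estimate_impurity_ratio(logits: np.ndarray) -> float:
    """Decode a 21-class output (0%..100% impurity, 5% steps) into r in [0, 1].

    Expected-value decoding over the softmax is an illustrative choice;
    the paper's exact decoder is not specified in this outline.
    """
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    ratios = np.arange(21) * 0.05           # class k -> k * 5% impurity
    return float(probs @ ratios)

def ratio_to_velocity(r: float, v_min: float = 9.0, v_max: float = 14.0) -> float:
    """Hypothetical monotone mapping from impurity ratio r to air velocity v.

    The linear form and its direction are assumptions for illustration;
    the paper validates regime-dependent policies in CFD-DEM instead.
    """
    r = min(max(r, 0.0), 1.0)               # clamp to a valid ratio
    return v_min + (v_max - v_min) * r
```

In practice the commanded velocity would be sent to the blower's VFD via the PLC; only the perception-to-setpoint mapping is sketched here.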
2. Related Work
2.1. Online Composition Sensing and Sensing Modalities
2.2. Vision-Based Perception for Dense Mixture Streams
2.3. Lightweight CNNs for Industrial Edge Deployment
2.4. CFD–DEM Studies and Adaptive Regulation in Pneumatic Separation
2.5. Summary and Positioning
3. System Framework and Data Acquisition
3.1. System Overview
3.2. Data Acquisition
4. Methods
4.1. LGDNet Architecture
4.2. CFD–DEM Simulation Configuration
4.3. Adaptive Control Strategy
5. Experimental Results
5.1. Experimental Setup and Evaluation Metrics
5.2. Performance Benchmark and Error Analysis
5.3. Module Contribution and Architectural Variants
5.4. Control Strategy Validation
6. Discussion
6.1. Challenges of Visual Perception in Dense Heterogeneous Mixtures
6.2. Architectural Efficiency, Redundancy, and Deployability
6.3. Mechanism of Aerodynamic Regulation
6.4. Limitations and Future Work
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Full Term |
|---|---|
| LGDNet | LGCC–Ghost–Dense Network |
| LGCC | Learned Grouped Channel Convolution |
| GhostDWConv | Ghost Depthwise Convolution |
| CNN | Convolutional Neural Network |
| CFD | Computational Fluid Dynamics |
| DEM | Discrete Element Method |
| MAE | Mean Absolute Error |
| PLC | Programmable Logic Controller |
| VFD | Variable Frequency Drive |
| IPC | Industrial Personal Computer |
| LSK | Large Selective Kernel |
| ECA | Efficient Channel Attention |
| ASPP | Atrous Spatial Pyramid Pooling |
References
- Kaas, A.; Mütze, T.; Peuker, U.A. Review on zigzag air classifier. Processes 2022, 10, 764. [Google Scholar] [CrossRef]
- Eswaraiah, C.; Kavitha, T.; Vidyasagar, S.; Narayanan, S.S. Classification of metals and plastics from printed circuit boards (PCB) using air classifier. Chem. Eng. Process. 2008, 47, 565–576. [Google Scholar] [CrossRef]
- Lukas, E.; Roloff, C.; Mann, H.; Kerst, K.; Hagemeier, T.; van Wachem, B.; Thévenin, D.; Tomas, J. Experimental Study and Modelling of Particle Behaviour in a Multi-stage Zigzag Air Classifier. In Dynamic Flowsheet Simulation of Solids Processes; Springer: Berlin/Heidelberg, Germany, 2020; pp. 391–410. [Google Scholar]
- Stepanenko, S.; Kotov, B.; Kuzmych, A.; Kalinichenko, R.; Hryshchenko, V. Research of the process of air separation of grain material in a vertical zigzag channel. J. Cent. Eur. Agric. 2023, 24, 225–235. [Google Scholar] [CrossRef]
- Reguła, T.; Frączek, J.; Fitas, J. A Model of Transport of Particulate Biomass in a Stream of Fluid. Processes 2020, 9, 5. [Google Scholar] [CrossRef]
- Zhou, L.; Elemam, M.A.; Agarwal, R.K.; Shi, W. Aerodynamic Systems. In Discrete Element Method for Multiphase Flows with Biogenic Particles: Agriculture Applications; Springer: Berlin/Heidelberg, Germany, 2024; pp. 5–18. [Google Scholar]
- Kharchenko, S.; Borshch, Y.; Kovalyshyn, S.; Piven, M.; Abduev, M.; Miernik, A.; Popardowski, E.; Kiełbasa, P. Modeling of aerodynamic separation of preliminarily stratified grain mixture in vertical pneumatic separation duct. Appl. Sci. 2021, 11, 4383. [Google Scholar] [CrossRef]
- Choszcz, D.J.; Reszczyński, P.S.; Kolankowska, E.; Konopka, S.; Lipiński, A. The effect of selected factors on separation efficiency in a pneumatic conical separator. Sustainability 2020, 12, 3051. [Google Scholar] [CrossRef]
- Huang, G.; Liu, S.; Van der Maaten, L.; Weinberger, K.Q. CondenseNet: An efficient densenet using learned group convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2752–2761. [Google Scholar]
- Yang, L.; Jiang, H.; Cai, R.; Wang, Y.; Song, S.; Huang, G.; Tian, Q. CondenseNetV2: Sparse feature reactivation for deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 3569–3578. [Google Scholar]
- Han, K.; Wang, Y.; Tian, Q.; Guo, J.; Xu, C.; Xu, C. GhostNet: More features from cheap operations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1580–1589. [Google Scholar]
- El-Emam, M.A.; Zhou, L.; Shi, W.; Han, C.; Bai, L.; Agarwal, R. Theories and applications of CFD–DEM coupling approach for granular flow: A review. Arch. Comput. Methods Eng. 2021, 28, 4979–5020. [Google Scholar] [CrossRef]
- Zhou, C.; Su, J.; Jiang, X.; Shi, Z. Numerical simulation and experimental verification for the sorting behaviors of mixed biomass particles in a novel Z-shaped fluidized bed. Chem. Eng. J. 2022, 441, 136109. [Google Scholar] [CrossRef]
- Mohd Khairi, M.T.; Ibrahim, S.; Md Yunus, M.A.; Faramarzi, M. Noninvasive techniques for detection of foreign bodies in food: A review. J. Food Process Eng. 2018, 41, e12808. [Google Scholar] [CrossRef]
- Lee, D.-H.; Kim, E.-S.; Cho, J.-S.; Ryu, J.-H.; Min, B.-S. A two-stage automatic labeling method for detecting abnormal food items in X-ray images. J. Food Meas. Charact. 2022, 16, 2999–3009. [Google Scholar] [CrossRef]
- Dianyu, E.; Liu, Y.; Xiao, Y.; Fan, Y.; Yin, M.; Liu, S.; Zou, R.; Wang, H. Numerical investigation of vertical pneumatic separation of heterogeneous particle swarms in lithium iron phosphate battery recycling. Sep. Purif. Technol. 2025, 354, 135329. [Google Scholar]
- Zhang, K.; Zhang, J.; Guo, J.; Yao, L. Application of near-infrared spectroscopy and hyperspectral imaging for non-destructive quality evaluation and detection of adulteration in food products: A review. Foods 2022, 11, 440. [Google Scholar]
- Sun, J.; Shen, X.; Sun, Q. Hyperspectral image few-shot classification network based on the earth mover’s distance. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5534114. [Google Scholar] [CrossRef]
- Choi, J.; Lim, B.; Yoo, Y. Advancing plastic waste classification and recycling efficiency: Integrating image sensors and deep learning algorithms. Appl. Sci. 2023, 13, 10224. [Google Scholar] [CrossRef]
- Chakraborty, S.K.; Subeesh, A.; Dubey, K.; Jat, D.; Chandel, N.S.; Potdar, R.; Rao, N.G.; Kumar, D. Development of an optimally designed real-time automatic citrus fruit grading–sorting machine leveraging computer vision-based adaptive deep learning model. Eng. Appl. Artif. Intell. 2023, 120, 105826. [Google Scholar] [CrossRef]
- Ziouzios, D.; Baras, N.; Balafas, V.; Dasygenis, M.; Stimoniaris, A. Intelligent and real-time detection and classification algorithm for recycled materials using convolutional neural networks. Recycling 2022, 7, 9. [Google Scholar] [CrossRef]
- Karim, M.J.; Munir, S.; Khandakar, A.; Ahsan, M.; Haider, J. RTDRNet-lite: A lightweight real-time detection framework for robotic waste sorting. Waste Manag. 2025, 208, 115164. [Google Scholar] [CrossRef]
- Jiang, Z.; Zhao, L.; Li, S.; Jia, Y. Real-time object detection method based on improved YOLOv4-tiny. arXiv 2020, arXiv:2011.04244. [Google Scholar] [CrossRef]
- Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [Google Scholar] [CrossRef]
- Özkan, K.; Ergin, S.; Işık, Ş.; Işıklı, İ. A new classification scheme of plastic wastes based upon recycling labels. Waste Manag. 2015, 35, 29–35. [Google Scholar] [CrossRef]
- Qin, Z.; Zhang, P.; Li, X. Ultra fast deep lane detection with hybrid anchor driven ordinal classification. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 46, 2555–2568. [Google Scholar] [CrossRef] [PubMed]
- Nxumalo, T.L.; Rimiru, R.M.; Magagula, V.M. A deep learning ordinal classifier. Int. J. Adv. Comput. Sci. Appl. 2025, 16, 301–308. [Google Scholar] [CrossRef]
- Barbero-Gómez, J.; Cruz, R.P.M.; Cardoso, J.S.; Gutiérrez, P.A.; Hervás-Martínez, C. CNN explanation methods for ordinal regression tasks. Neurocomputing 2025, 615, 128878. [Google Scholar] [CrossRef]
- Huang, G.; Liu, S.; Van der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
- Ma, N.; Zhang, X.; Zheng, H.-T.; Sun, J. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 116–131. [Google Scholar]
- Howard, A.; Sandler, M.; Chu, G.; Chen, L.-C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar]
- Chang, S.; Zheng, B. A lightweight convolutional neural network for automated crack inspection. Constr. Build. Mater. 2024, 416, 135151. [Google Scholar] [CrossRef]
- Zhang, L.; Kuang, J.; Teng, Y.; Xiang, S.; Li, L.; Zhou, Y. A Lightweight Infrared and Visible Light Multimodal Fusion Method for Object Detection in Power Inspection. Processes 2025, 13, 2720. [Google Scholar] [CrossRef]
- Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11534–11542. [Google Scholar]
- Li, Y.; Hou, Q.; Zheng, Z.; Cheng, M.-M.; Yang, J.; Li, X. Large selective kernel network for remote sensing object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 16794–16805. [Google Scholar]
- Liu, J.; Li, H.; Zuo, F.; Zhao, Z.; Lu, S. Kd-lightnet: A lightweight network based on knowledge distillation for industrial defect detection. IEEE Trans. Instrum. Meas. 2023, 72, 3525713. [Google Scholar] [CrossRef]
- Cui, K.F.E.; Zhou, G.G.D.; Lu, X.; Pasuto, A. Mechanisms of friction segregation in dense granular flows. Powder Technol. 2025, 434, 121292. [Google Scholar] [CrossRef]
- Adamchuk, V.; Bulgakov, V.; Ivanovs, S.; Holovach, I.; Ihnatiev, Y. Theoretical study of pneumatic separation of grain mixtures in vortex flow. Eng. Rural Dev. 2021, 20, 657–664. [Google Scholar]
- Wang, F.; Zhao, Z.; Peng, J.; Fang, Y. The influence of secondary air guide vanes on the flow field and performance of a turbine air classifier. Processes 2025, 13, 2268. [Google Scholar] [CrossRef]
- Chang, M.; Fan, Y.; Lu, C. Filtration of dust particulates with a granular bed filter in a novel coupled separator. Sep. Purif. Technol. 2024, 342, 126502. [Google Scholar] [CrossRef]
- Tang, H.; Li, Y.; Zhou, N.; Cheng, M.; Tao, S.; Xu, B. A real-time perception-motion codesign method for image-based visual servoing in embodied intelligence systems. J. Ind. Inf. Integr. 2025, 48, 100933. [Google Scholar] [CrossRef]
- Chen, H.; Li, Q.; Ye, Z.; Pang, S. Neural Network-Based Parameter Estimation and Compensation Control for Time-Delay Servo System of Aeroengine. Aerospace 2025, 12, 64. [Google Scholar] [CrossRef]
- Si, D.; Yan, Z.; Lu, C. Performance experiments and pressure drop prediction modelling of an energy-saving cyclone separator. Fuel 2024, 372, 132165. [Google Scholar] [CrossRef]
- Mehta, S.; Rastegari, M.; Shapiro, L.; Hajishirzi, H. ESPNetv2: A Light-Weight, Power Efficient, and General Purpose Convolutional Neural Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9190–9200. [Google Scholar]
- Tan, M.; Le, Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
Physical properties of the two particle types in the representative mixture:

| Particle Type | Mass (g) | Density (g/cm³) | Length (mm) | Width (mm) | Height (mm) |
|---|---|---|---|---|---|
| Light fibrous particle (shred) | 0.002 | 0.592 | 17 | 2.0 | 1.4 |
| Heavy stem-like impurity (stem fragment) | 0.020 | 0.908 | 20 | 1.2 | 0.4 |
Dataset composition and augmentation summary:

| Dataset Attribute | Value/Description |
|---|---|
| Total Raw Images | 8400 |
| Number of Classes | 21 (0% to 100% impurity ratio, step = 5%) |
| Images per Class (Raw) | ∼400 |
| Augmentation Techniques | Rotation (), Horizontal Flip, Brightness () |
| Total Augmented Images | ∼21,000 |
| Training Set (70%) | 14,700 |
| Validation Set (20%) | 4200 |
| Test Set (10%) | 2100 |
Target recovery and impurity carryover across the air-velocity sweep:

| Air Velocity v (m/s) | Target Recovery (g) | Impurity Carryover (g) |
|---|---|---|
| 9 | 31.7 | 4.24 |
| 10 | 51.6 | 7.61 |
| 11 | 69.9 | 8.82 |
| 12 | 69.9 | 16.9 |
| 13 | 70.0 | 21.1 |
| 14 | 70.0 | 29.8 |
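The sweep above exposes the core trade-off: recovery saturates near 70 g above 11 m/s while impurity carryover keeps rising. A minimal sketch of a regime-dependent policy over this table is shown below. The scalarized recovery-minus-weighted-carryover objective and the weight values are illustrative assumptions; the paper's policy is validated mechanistically in CFD–DEM rather than derived this way.

```python
# Measured sweep from the table above: v (m/s) -> (recovery g, carryover g).
SWEEP = {9: (31.7, 4.24), 10: (51.6, 7.61), 11: (69.9, 8.82),
         12: (69.9, 16.9), 13: (70.0, 21.1), 14: (70.0, 29.8)}

def best_velocity(weight: float) -> int:
    """Pick the velocity maximizing recovery - weight * carryover.

    A larger weight penalizes impurity carryover more heavily, pushing the
    chosen setpoint toward lower air velocities.
    """
    return max(SWEEP, key=lambda v: SWEEP[v][0] - weight * SWEEP[v][1])
```

With equal weighting the policy lands on 11 m/s, the knee of the curve; a carryover-dominated weighting falls back to 9 m/s.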
Target recovery and impurity carryover across impurity ratios:

| Impurity Ratio r (%) | Target Recovery (g) | Impurity Carryover (g) |
|---|---|---|
| 10 | 89.9 | 2.10 |
| 30 | 69.9 | 8.82 |
| 50 | 49.8 | 30.6 |
| 67 | 33.4 | 37.9 |
| 75 | 24.8 | 44.5 |
Benchmark of LGDNet against lightweight baselines under the unified training/inference protocol:

| Model | Params (M) | Latency (ms) | Top-1 (%) | Top-5 (%) | Interval Acc. (%) | MAE |
|---|---|---|---|---|---|---|
| LGDNet (Proposed) | 0.080 | 28.25 | 66.86 | 97.14 | 74.10 | 4.85 |
| GhostNet-1.0x | 3.928 | 32.66 | 64.38 | 95.24 | 74.48 | 5.19 |
| EfficientNet-B0 | 4.034 | 33.57 | 60.38 | 90.29 | 68.19 | 6.05 |
| CondenseNetV2 | 2.539 | 27.16 | 58.86 | 92.57 | 70.48 | 6.24 |
| ESPNetV2 | 0.667 | 26.80 | 56.95 | 91.24 | 66.67 | 7.09 |
| YOLO11n-cls | 1.558 | 31.43 | 49.91 | 91.05 | 62.10 | 7.44 |
| ResNet18 | 11.187 | 30.80 | 57.14 | 90.67 | 65.52 | 7.57 |
| MobileNetV3-S | 1.539 | 29.47 | 47.62 | 87.24 | 58.10 | 9.29 |
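The benchmark reports four accuracy-style metrics for the 21-class ratio task. A minimal sketch of how they can be computed from predicted and true class indices is given below, assuming interval accuracy counts predictions within one 5% class of the truth and MAE is expressed in percentage points of impurity ratio; both readings are plausible but not spelled out in this outline.

```python
import numpy as np

def ratio_metrics(pred: np.ndarray, true: np.ndarray) -> dict:
    """Top-1, interval accuracy, and MAE for 21-class ratio predictions.

    pred/true hold class indices 0..20; adjacent classes are 5% apart,
    so a class error of 1 corresponds to a 5-percentage-point ratio error.
    """
    err = np.abs(pred - true)
    return {
        "top1": float((err == 0).mean()),          # exact class match
        "interval_acc": float((err <= 1).mean()),  # within +/- one 5% class
        "mae": float(err.mean() * 5.0),            # percentage points
    }
```

Under this reading, an MAE of 4.85 (as reported for LGDNet) corresponds to an average ratio error just under one class width.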
Ablation of the core modules (GhostDWConv and LGCC):

| Model Variant | Params (M) | Top-1 (%) | Top-5 (%) | Interval Acc. (%) | MAE |
|---|---|---|---|---|---|
| Baseline (DenseNet var.) | 0.270 | 72.95 | 98.67 | 79.05 | 3.96 |
| Model A (+GhostDWConv) | 0.150 | 66.29 | 97.90 | 74.86 | 5.02 |
| LGDNet (+LGCC on A) | 0.080 | 66.86 | 97.14 | 74.10 | 4.85 |
Ablation of attention and context modules (LSK, ECA, ASPP):

| Model Variant | Params (M) | Top-1 (%) | Top-5 (%) | Interval Acc. (%) | MAE |
|---|---|---|---|---|---|
| LGDNet (Proposed) | 0.080 | 66.86 | 97.14 | 74.10 | 4.85 |
| Model B (+LSK on LGDNet) | 0.098 | 66.86 | 97.90 | 74.67 | 4.91 |
| Model C (+ECA on B) | 0.098 | 63.81 | 96.95 | 71.62 | 5.63 |
| Model D (+ASPP on B) | 0.208 | 64.00 | 96.95 | 72.76 | 5.78 |
| Model E (+ASPP on C) | 0.208 | 65.52 | 96.95 | 74.48 | 4.85 |
Classification versus regression formulation of the ratio-estimation head:

| Formulation | Params (M) | MAE | Interval Acc. (%) | Latency (ms) |
|---|---|---|---|---|
| Classification (Proposed) | 0.080 | 4.85 | 74.10 | 28.25 |
| Regression (Model F) | 0.074 | 7.52 | 49.33 | 28.49 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Jiang, S.; Wang, R.; Yang, S.; Li, L.; Si, H.; Gao, X.; Chen, X.; Chen, L.; Pan, H. Adaptive Pneumatic Separation Based on LGDNet Visual Perception for a Representative Fibrous–Granular Mixture. Machines 2026, 14, 66. https://doi.org/10.3390/machines14010066
