Detection of Bad Stapled Nails in Wooden Packages
Abstract
1. Introduction
1.1. Motivation
1.2. Related Work
2. Materials and Methods
2.1. Working Principle and Measurement Issues
2.1.1. Using Traditional Computer Vision Techniques
2.1.2. Using Novel Neural Network Architectures
2.2. The Instrument
2.3. Deep Learning
2.4. Training the CNN for Badly Stapled Nail Detection
3. Results
| Confidence matrix | | AlexNet | SSD MobileNet | SSD ResNet | Faster R-CNN ResNet | Faster R-CNN Inception |
|---|---|---|---|---|---|---|
| Changing view angle | C11 | 0.7250 | 0.7050 | 0.6050 | 0.6550 | 0.6800 |
| | C22 | 0.7500 | 0.6800 | 0.5800 | 0.5550 | 0.5250 |
| | C12 | 0.2750 | 0.2950 | 0.3950 | 0.3450 | 0.3200 |
| | C21 | 0.2500 | 0.3200 | 0.4200 | 0.4450 | 0.4750 |
| Changing illumination conditions | C11 | 0.6800 | 0.5850 | 0.6550 | 0.6750 | 0.6800 |
| | C22 | 0.7050 | 0.6100 | 0.5700 | 0.6050 | 0.6100 |
| | C12 | 0.3200 | 0.4150 | 0.3450 | 0.3250 | 0.3900 |
| | C21 | 0.2950 | 0.3900 | 0.4350 | 0.3950 | 0.3200 |
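To make the C11/C12/C21/C22 entries in the table above concrete: they behave as per-class classification rates, each row normalized by the number of ground-truth samples of that class (note that C11 + C12 = 1 and C21 + C22 = 1 in every column). The following is a minimal sketch of that computation; the class-labelling convention (class 1 = properly stapled nail, class 2 = badly stapled nail), the function name, and the variable names are illustrative assumptions, not the authors' code.

```python
# Illustrative only: normalized 2x2 confidence (confusion) matrix as reported
# in the tables. Class 1 = properly stapled nail, class 2 = badly stapled nail
# (labelling convention assumed here, not taken from the paper).
import numpy as np

def confidence_matrix(y_true, y_pred):
    """Return [[C11, C12], [C21, C22]], each row normalized by the
    number of ground-truth samples of that class."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    m = np.zeros((2, 2))
    for true_cls in (1, 2):
        mask = y_true == true_cls
        for pred_cls in (1, 2):
            m[true_cls - 1, pred_cls - 1] = np.mean(y_pred[mask] == pred_cls)
    return m

# e.g. m[0, 0] = 0.7250 means 72.5% of class-1 samples were classified as class 1.
```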
Changes in Image Capturing Conditions
4. Discussion
5. Conclusions
- Traditional computer vision techniques based on structured lighting do not solve the problem of badly stapled nail detection.
- Novel neural network architectures allow badly stapled nails to be detected efficiently.
- The AlexNet architecture achieves an accuracy of over 96% if ROIs with nail presence are predefined in the image.
- The MobileNet model reaches 95% accuracy with no predefined ROIs with nail presence.
- The computing time of MobileNet with no predefined ROIs is twice that of the AlexNet model.
- Reducing the analysed image area to ROIs where nails are present improves accuracy and reduces computing time.
- Indirect lighting allows an image free of shadows and bright spots to be acquired.
- The system can work with up to 60 crates per minute using the AlexNet model (a back-of-the-envelope check follows this list).
- Neural networks represent a step forward in solving many industrial applications that traditional computer vision algorithms could not address.
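The 60-crates-per-minute figure is consistent with the per-image computing time reported for AlexNet (923 ms in the results table). A rough sketch of the calculation, assuming one inspected image per crate and neglecting image-acquisition and crate-handling overhead (assumptions not spelled out in this section):

```python
# Back-of-the-envelope throughput check (assumes one image per crate and
# ignores acquisition/handling overhead, which is why the usable figure
# is quoted as "up to 60 crates per minute" rather than this raw maximum).
alexnet_latency_ms = 923                     # per-image computing time from the results table
crates_per_minute = 60_000 / alexnet_latency_ms
print(f"{crates_per_minute:.0f} crates/min")  # ≈ 65, consistent with "up to 60"
```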
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Confidence matrix | | Traditional CV | AlexNet | SSD MobileNet | SSD ResNet | Faster R-CNN ResNet | Faster R-CNN Inception |
|---|---|---|---|---|---|---|---|
| Training data | C11 | 0.5250 | 1 | 0.9750 | 0.9550 | 0.9450 | 0.9600 |
| | C22 | 0.5300 | 1 | 0.9800 | 0.9300 | 0.9350 | 0.9400 |
| | C12 | 0.4750 | 0 | 0.0250 | 0.0450 | 0.0550 | 0.0400 |
| | C21 | 0.4700 | 0 | 0.0200 | 0.0700 | 0.0650 | 0.0600 |
| Testing data | C11 | 0.4500 | 0.9650 | 0.9500 | 0.9050 | 0.9250 | 0.9150 |
| | C22 | 0.4550 | 0.9850 | 0.9550 | 0.9150 | 0.9200 | 0.9350 |
| | C12 | 0.5500 | 0.0350 | 0.0500 | 0.0950 | 0.0750 | 0.0850 |
| | C21 | 0.5450 | 0.0150 | 0.0450 | 0.0850 | 0.0800 | 0.0650 |
| Computing time (ms) | | 321 | 923 | 1567 | 1675 | 2212 | 2190 |
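The per-architecture training/testing results above are obtained by fine-tuning pretrained backbones on the nail dataset (transfer learning). As a hedged illustration of how such a comparison is typically set up — not the authors' actual training code; the dataset path, folder layout, hyperparameters, and two-class head are all assumptions — a minimal torchvision sketch for the ROI-classifier case (AlexNet) looks like this:

```python
# Illustrative transfer-learning sketch (assumed setup, not the authors' code):
# fine-tune a pretrained AlexNet as a two-class "good/bad staple" ROI classifier.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Hypothetical folder layout: nail_rois/train/{good,bad}/*.png
train_set = datasets.ImageFolder("nail_rois/train", transform=tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)   # replace the 1000-class ImageNet head with 2 classes

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):                    # illustrative epoch count
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The detector variants (SSD and Faster R-CNN with MobileNet, ResNet, or Inception backbones) are fine-tuned analogously, but on full images with bounding-box annotations instead of predefined ROI crops.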