Color Structured Light Stripe Edge Detection Method Based on Generative Adversarial Networks
Abstract
1. Introduction
2. Problem Definition and Our Approach Overview
2.1. Problem Definition
2.2. Approach Overview
- (1) The characteristics of the deformed color stripe pattern images were analyzed;
- (2) The previous methods of edge detection in color images were evaluated;
- (3) A GAN-based method for detecting the color structured light stripe edge was proposed;
- (4) A specific dataset was designed;
- (5) The model was trained;
- (6) The color structured light stripe edges in the test images were detected using the trained model;
- (7) A comparison with other methods was conducted to evaluate the performance and accuracy of the proposed method.
3. Proposed Method
3.1. Network Architecture
3.1.1. Attention Gate
3.1.2. Horizontal Elastomeric Attention Residual Unet-Based GAN (HEAR-GAN)
3.2. Dataset Design
3.3. Color Structured Light Stripe Edge Detection Method
- (1) The GAN-based model for color structured light stripe edge detection is trained with the prepared training set and pre-set parameters. Owing to the properties of GANs, this model can significantly reduce the effect of noise and of object-surface characteristics, including colored textures;
- (2) Test images are preprocessed with a bilateral filter to reduce noise while preserving edges. Like the images in the training set, they are cut sequentially into patches of 128 × 128 pixels, which are then fed into the trained model for prediction;
- (3) The predicted image patches consist mainly of two components: stripes (bright regions) and background (dark regions). They are merged sequentially to obtain images of the same size as the original test images;
- (4) On the resulting clean images, a morphological skeletonization algorithm is applied to detect the centerlines. The positions of these lines correspond to the positions of the left edges of the color stripes in the deformed pattern images (a minimal code sketch of steps (2)–(4) follows this list).
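The following Python sketch illustrates steps (2)–(4) of the inference pipeline under stated assumptions: the trained HEAR-GAN generator is represented by a hypothetical `model.predict()` call, and the bilateral filter parameters are illustrative values rather than the settings used in the paper.

```python
# Minimal sketch of the inference pipeline (steps 2-4), assuming a trained
# generator `model` with a hypothetical predict() method that maps a 128x128
# color patch to a stripe/background probability map of the same size.
import numpy as np
import cv2
from skimage.morphology import skeletonize

PATCH = 128  # patch size used for training and prediction


def detect_stripe_edges(image_bgr, model):
    # Step 2: denoise while preserving edges, then cut into 128x128 patches.
    filtered = cv2.bilateralFilter(image_bgr, d=9, sigmaColor=75, sigmaSpace=75)
    h, w = filtered.shape[:2]
    pad_h, pad_w = (-h) % PATCH, (-w) % PATCH
    padded = cv2.copyMakeBorder(filtered, 0, pad_h, 0, pad_w, cv2.BORDER_REFLECT)

    merged = np.zeros(padded.shape[:2], dtype=np.float32)
    for y in range(0, padded.shape[0], PATCH):
        for x in range(0, padded.shape[1], PATCH):
            patch = padded[y:y + PATCH, x:x + PATCH]
            pred = model.predict(patch)          # hypothetical model interface
            # Step 3: merge predicted patches back into a full-size image.
            merged[y:y + PATCH, x:x + PATCH] = pred

    stripes = merged[:h, :w] > 0.5               # bright regions = stripes
    # Step 4: skeletonize to obtain centerlines, whose positions correspond
    # to the left edges of the color stripes in the deformed pattern image.
    centerlines = skeletonize(stripes)
    return centerlines
```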
4. Experimental Results and Discussion
4.1. Experimental Setup
4.2. Training Process
4.3. Segmentation Performance Evaluation of the Proposed Network
4.4. Pixel Location Accuracy
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Pages, J.; Salvi, J.; Forest, J. A new optimised De Bruijn coding strategy for structured light patterns. In Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 23–26 August 2004; Volume 4, pp. 284–287. [Google Scholar]
- Donlic, M.; Petkovic, T.; Pribanic, T. 3D surface profilometry using phase shifting of De Bruijn pattern. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 963–971. [Google Scholar]
- Petković, T.; Pribanić, T.; Ðonlić, M. Single-shot dense 3D reconstruction using self-equalizing De Bruijn sequence. IEEE Trans. Image Process. 2016, 25, 5131–5144. [Google Scholar] [CrossRef]
- Je, C.; Lee, K.H.; Lee, S.W. Multi-projector color structured-light vision. Signal Process. Image Commun. 2013, 28, 1046–1058. [Google Scholar] [CrossRef] [Green Version]
- Lee, K.H.; Je, C.; Lee, S.W. Color-stripe structured light robust to surface color and discontinuity. In Proceedings of the Asian Conference on Computer Vision, Tokyo, Japan, 18–22 November 2007; pp. 507–516. [Google Scholar]
- Zhang, C.; Xu, J.; Xi, N.; Zhao, J.; Shi, Q. A robust surface coding method for optically challenging objects using structured light. IEEE Trans. Autom. Sci. Eng. 2014, 11, 775–788. [Google Scholar] [CrossRef]
- Rocchini, C.; Cignoni, P.; Montani, C.; Pingi, P.; Scopigno, R. A low cost 3D scanner based on structured light. In Computer Graphics Forum; Wiley-Blackwell: Hoboken, NJ, USA, 2001; Volume 20, pp. 299–308. [Google Scholar]
- Dhankhar, P.; Sahu, N. A review and research of edge detection techniques for image segmentation. Int. J. Comput. Sci. Mob. Comput. 2013, 2, 86–92. [Google Scholar]
- Magnier, B. Edge detection: A review of dissimilarity evaluations and a proposed normalized measure. Multimed. Tools Appl. 2018, 77, 9489–9533. [Google Scholar] [CrossRef] [Green Version]
- Owotogbe, J.S.; Ibiyemi, T.S.; Adu, B.A. Edge detection techniques on digital images-a review. Int. J. Innov. Sci. Res. Technol. 2019, 4, 329–332. [Google Scholar]
- Azeroual, A.; Afdel, K. Fast image edge detection based on Faber Schauder wavelet and Otsu threshold. Heliyon 2017, 3, e00485. [Google Scholar] [CrossRef] [Green Version]
- Lopez-Molina, C.; De Baets, B.; Bustince, H.; Sanz, J.A.; Barrenechea, E. Multiscale edge detection based on Gaussian smoothing and edge tracking. Knowl. Based Syst. 2013, 44, 101–111. [Google Scholar] [CrossRef]
- Ahmad, I.; Moon, I.; Shin, S.J. Color-to-grayscale algorithms effect on edge detection—A comparative study. In Proceedings of the 2018 International Conference on Electronics, Information, and Communication (ICEIC), Honolulu, HI, USA, 24–27 January 2018; IEEE Press: New York, NY, USA, 2018; pp. 1–4. [Google Scholar]
- Cheon, Y.; Lee, C. Color edge detection based on Bhattacharyya distance. In Proceedings of the 14th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Madrid, Spain, 26–28 July 2017; Volume 2, pp. 368–371. [Google Scholar]
- Xin, G.; Ke, C.; Xiaoguang, H. An improved Canny edge detection algorithm for color image. In Proceedings of the IEEE 10th International Conference on Industrial Informatics, Beijing, China, 25–27 July 2012; pp. 113–117. [Google Scholar]
- Qin, X. A modified Canny edge detector based on weighted least squares. Comput. Stat. 2021, 36, 641–659. [Google Scholar] [CrossRef]
- Rashid, H.; Tanveer, M.A.; Khan, H.A. Skin lesion classification using GAN based data augmentation. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; IEEE Press: New York, NY, USA, 2019; pp. 916–919. [Google Scholar]
- Yang, S.; Jiang, L.; Liu, Z.; Loy, C.C. Unsupervised Image-to-Image Translation with Generative Prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; IEEE Press: New York, NY, USA, 2022; pp. 18332–18341. [Google Scholar]
- Guo, Y.; Liu, Y.; Georgiou, T.; Lew, M.S. A review of semantic segmentation using deep neural networks. Int. J. Multimed. Inf. Retr. 2018, 7, 87–93. [Google Scholar] [CrossRef] [Green Version]
- Zhang, Y.; Miao, S.; Mansi, T.; Liao, R. Unsupervised X-ray image segmentation with task driven generative adversarial networks. Med. Image Anal. 2020, 62, 101664. [Google Scholar] [CrossRef]
- Zhang, K.; Zhang, Y.; Cheng, H.D. CrackGAN: Pavement crack detection using partially accurate ground truths based on generative adversarial learning. IEEE Trans. Intell. Transp. Syst. 2020, 22, 1306–1319. [Google Scholar] [CrossRef]
- Yu, Y.; Gong, Z.; Zhong, P.; Shan, J. Unsupervised representation learning with deep convolutional neural network for remote sensing images. In Proceedings of the International Conference on Image and Graphics, Shanghai, China, 13–15 September 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 97–108. [Google Scholar]
- Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
- Zhang, K.; Zhang, Y.; Cheng, H.D. Self-supervised structure learning for crack detection based on cycle-consistent generative adversarial networks. J. Comput. Civ. Eng. 2020, 34, 04020004. [Google Scholar] [CrossRef]
- Nie, X.; Ding, H.; Qi, M.; Wang, Y.; Wong, E.K. Urca-gan: Upsample residual channel-wise attention generative adversarial network for image-to-image translation. Neurocomputing 2021, 443, 75–84. [Google Scholar] [CrossRef]
- Pham, D.; Ha, M.; San, C.; Xiao, C.; Cao, S. Accurate stacked-sheet counting method based on deep learning. J. Opt. Soc. Am. A 2020, 37, 1206–1218. [Google Scholar] [CrossRef]
- Poma, X.S.; Riba, E.; Sappa, A. Dense extreme inception network: Towards a robust CNN model for edge detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA, 2–5 March 2020; pp. 1923–1932. [Google Scholar]
- Dhar, P.; Guha, S.; Biswas, T.; Abedin, M.Z. A system design for license plate recognition by using edge detection and convolution neural network. In Proceedings of the 2018 International Conference on Computer, Communication, Chemical, Material and Electronic Engineering (IC4ME2), Rajshahi, Bangladesh, 8–9 February 2018; IEEE Press: New York, NY, USA, 2018; pp. 1–4. [Google Scholar]
- Li, X.; Jiao, H.; Wang, Y. Edge detection algorithm of cancer image based on deep learning. Bioengineered 2020, 11, 693–707. [Google Scholar] [CrossRef]
- Hou, S.M.; Jia, C.L.; Wang, Y.B.; Brown, M. A review of the edge detection technology. Sparklinglight Trans. Artif. Intell. Quantum Comput. (STAIQC) 2021, 1, 26–37. [Google Scholar] [CrossRef]
- Cai, S.; Wu, Y.; Chen, G. A Novel Elastomeric UNet for Medical Image Segmentation. Front. Aging Neurosci. 2022, 14, 841297. [Google Scholar] [CrossRef]
- Xiao, X.; Lian, S.; Luo, Z.; Li, S. Weighted res-unet for high-quality retina vessel segmentation. In Proceedings of the 2018 9th International Conference on Information Technology in Medicine and Education (ITME), Hangzhou, China, 19–21 October 2018; IEEE Press: New York, NY, USA, 2018; pp. 327–331. [Google Scholar]
- Luo, Z.; Zhang, Y.; Zhou, L.; Zhang, B.; Luo, J.; Wu, H. Micro-vessel image segmentation based on the AD-UNet model. IEEE Access 2019, 7, 143402–143411. [Google Scholar] [CrossRef]
- Wang, Z.; Wang, L.; Duan, S.; Li, Y. An image denoising method based on deep residual GAN. J. Phys. Conf. Ser. 2020, 1550, 032127. [Google Scholar] [CrossRef]
- Wolterink, J.M.; Leiner, T.; Viergever, M.A.; Išgum, I. Generative Adversarial Networks for noise reduction in low-dose CT. IEEE Trans. Med. Imaging 2017, 36, 2536–2545. [Google Scholar] [CrossRef]
- Alsaiari, A.; Rustagi, R.; Thomas, M.M.; Forbes, A.G. Image denoising using a generative adversarial network. In Proceedings of the 2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT), Kahului, HI, USA, 14–17 March 2019; IEEE Press: New York, NY, USA, 2019; pp. 126–132. [Google Scholar]
- Je, C.; Lee, S.W.; Park, R.H. Colour-stripe permutation pattern for rapid structured-light range imaging. Opt. Commun. 2012, 285, 2320–2331. [Google Scholar] [CrossRef]
- Li, Y.; Wang, Z. 3D reconstruction with single-shot structured light RGB line pattern. Sensors 2021, 21, 4819. [Google Scholar] [CrossRef]
- Je, C.; Choi, K.; Lee, S.W. Green-blue stripe pattern for range sensing from a single image. arXiv 2017, arXiv:1701.02123. [Google Scholar]
- Ha, M.; Pham, D.; Xiao, C. Accurate feature point detection method exploiting the line structure of the projection pattern for 3D reconstruction. Appl. Opt. 2021, 60, 2926–2937. [Google Scholar] [CrossRef]
- Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; IEEE Press: New York, NY, USA, 2017; pp. 1125–1134. [Google Scholar]
- Lv, J.; Wang, P.; Tong, X.; Wang, C. Parallel imaging with a combination of sensitivity encoding and generative adversarial networks. Quant. Imaging Med. Surg. 2020, 10, 2260. [Google Scholar] [CrossRef]
- Tian, R.; Sun, G.; Liu, X.; Zheng, B. Sobel edge detection based on weighted nuclear norm minimization image denoising. Electronics 2021, 10, 655. [Google Scholar] [CrossRef]
| Object | TP | FP | FN | TN | Precision (%) | Recall (%) |
|---|---|---|---|---|---|---|
| Colored textured plastic box | 410,466 | 19,199 | 19,001 | 4,138,854 | 95.53 | 95.58 |
| Waved metal sheet | 423,066 | 11,872 | 12,952 | 4,139,630 | 97.27 | 97.03 |
| Porcelain dish | 411,728 | 12,032 | 11,153 | 4,152,607 | 97.16 | 97.36 |
| Colored textured carton box | 388,841 | 20,373 | 28,854 | 4,149,452 | 95.02 | 93.09 |
| Open book | 421,753 | 11,370 | 9,447 | 4,144,950 | 97.37 | 97.81 |
| Method | TP | FP | FN | TN | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) |
|---|---|---|---|---|---|---|---|---|
| Pix2pix | 289,185 | 99,553 | 88,510 | 4,110,272 | 95.90 | 74.39 | 76.57 | 75.46 |
| RU-GAN | 291,195 | 72,268 | 66,500 | 4,157,557 | 96.98 | 80.12 | 81.41 | 80.76 |
| URCA-GAN | 378,496 | 48,021 | 39,199 | 4,121,804 | 98.10 | 88.74 | 90.62 | 89.67 |
| Proposed w/o AG | 383,662 | 59,401 | 24,033 | 4,120,424 | 98.18 | 86.59 | 94.11 | 90.19 |
| Proposed | 388,841 | 20,373 | 28,854 | 4,149,452 | 98.93 | 95.02 | 93.09 | 94.05 |
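For reference, the accuracy, precision, recall, and F1-score reported in the tables follow the standard confusion-matrix definitions. The short sketch below (illustrative only, not the authors' evaluation code) reproduces the "Proposed" row of the comparison table from its TP/FP/FN/TN counts.

```python
def segmentation_metrics(tp: int, fp: int, fn: int, tn: int):
    """Accuracy, precision, recall, and F1-score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1


# "Proposed" row of the comparison table above.
acc, pre, rec, f1 = segmentation_metrics(tp=388_841, fp=20_373, fn=28_854, tn=4_149_452)
print(f"Acc {acc:.2%}, Pre {pre:.2%}, Re {rec:.2%}, F1 {f1:.2%}")
# Prints: Acc 98.93%, Pre 95.02%, Re 93.09%, F1 94.05%
```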
| Object | Method in [16] (C) | Method in [43] (S) | Proposed Method (P) | MSE Difference between C & P | MSE Difference between S & P |
|---|---|---|---|---|---|
| Colored textured plastic box | 0.0383 | 0.0350 | 0.0302 | 0.0081 | 0.0048 |
| Waved metal sheet | 0.0359 | 0.0332 | 0.0286 | 0.0073 | 0.0046 |
| Porcelain dish | 0.0331 | 0.0301 | 0.0275 | 0.0056 | 0.0026 |
| Colored textured carton box | 0.0437 | 0.0412 | 0.0306 | 0.0131 | 0.0106 |
| Open book | 0.0343 | 0.0315 | 0.0271 | 0.0072 | 0.0044 |
| Sample mean | | | | 0.0082 | 0.0054 |
| Sample variance | | | | 0.0028 | 0.0031 |
| Standard error | | | | 0.0012 | 0.0014 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).