Open Access Article

MU R-CNN: A Two-Dimensional Code Instance Segmentation Network Based on Deep Learning

1 School of Information Engineering, Xijing University, Xi’an 710123, China
2 Shaanxi Key Laboratory of Integrated and Intelligent Navigation, Xi’an 710068, China
3 Beijing Jiurui Technology Co., Ltd., Beijing 100107, China
4 Xi’an University of Posts and Telecommunications, Xi’an 710121, China
5 Dongfanghong Middle School, Anding District, Dingxi City 743000, China
6 Unit 95949 of CPLA, Hebei 061736, China
7 Xi’an Haitang Vocational College, Xi’an 710038, China
* Author to whom correspondence should be addressed.
Future Internet 2019, 11(9), 197; https://doi.org/10.3390/fi11090197
Received: 29 July 2019 / Revised: 23 August 2019 / Accepted: 7 September 2019 / Published: 13 September 2019
(This article belongs to the Special Issue Manufacturing Systems and Internet of Things)
In the context of Industry 4.0, the most popular way to identify and track objects is to add tags, and currently most companies still use cheap quick response (QR) tags, which can be located by computer vision (CV) technology. In CV, instance segmentation (IS) can detect the position of tags while also segmenting each instance. Currently, the mask region-based convolutional neural network (Mask R-CNN) method is used to realize IS, but the completeness of the instance mask cannot be guaranteed. Furthermore, because QR tags are richly textured, low-quality images can lower the intersection-over-union (IoU) significantly, preventing it from accurately measuring the completeness of the instance mask. To optimize the IoU of the instance mask, a QR tag IS method named the mask UNet region-based convolutional neural network (MU R-CNN) is proposed. We utilize the UNet branch to reduce the impact of low image quality on IoU through texture segmentation. Because the UNet branch does not depend on the features of the Mask R-CNN branch, it can be trained independently. The pre-trained optimal UNet model ensures that the loss of MU R-CNN is accurate from the beginning of end-to-end training. Experimental results show that the proposed MU R-CNN is applicable to both high- and low-quality images, and is thus more suitable for Industry 4.0.
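The abstract's two central quantities, mask IoU and the Dice loss listed among the keywords, can be sketched for binary masks as follows. This is an illustrative NumPy sketch, not the paper's implementation; the function names and the toy masks are our own.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0

def dice_loss(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6) -> float:
    """Soft Dice loss between a predicted probability map and a binary mask."""
    inter = float((pred * gt).sum())
    return 1.0 - (2.0 * inter + eps) / (float(pred.sum()) + float(gt.sum()) + eps)

# Toy example: a 2x2 predicted patch against a 3x2 ground-truth patch.
pred = np.zeros((4, 4)); pred[:2, :2] = 1.0
gt = np.zeros((4, 4)); gt[:3, :2] = 1.0
print(mask_iou(pred.astype(bool), gt.astype(bool)))  # 4/6 ≈ 0.667
print(dice_loss(pred, gt))                           # 1 - 8/10 = 0.2
```

A partially missing mask (e.g., a QR tag corner lost to blur) shrinks the intersection while the union stays large, which is why a degraded image can drag IoU down even when the detection itself is correct.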
Keywords: quick response (QR); instance segmentation; dice loss; Mask R-CNN; Mask scoring R-CNN; UNet; product traceability system (PTS); visual navigation; automated guided vehicle (AGV); unmanned aerial vehicle (UAV)
Figure 1
MDPI and ACS Style

Yuan, B.; Li, Y.; Jiang, F.; Xu, X.; Guo, Y.; Zhao, J.; Zhang, D.; Guo, J.; Shen, X. MU R-CNN: A Two-Dimensional Code Instance Segmentation Network Based on Deep Learning. Future Internet 2019, 11, 197.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
