A Data Matrix Code Recognition Method Based on L-Shaped Dashed Edge Localization Using Central Prior

The recognition of data matrix (DM) codes plays a crucial role in industrial production, and existing methods have made significant progress. However, for low-quality images with protrusions and interruptions on the L-shaped solid edge (finder pattern) and the dashed edge (timing pattern) of DM codes in industrial production environments, the recognition accuracy of existing methods declines sharply because these interference issues are not taken into account. Ensuring recognition accuracy in the presence of such interference is therefore a highly challenging task. To address these issues, unlike most existing methods, which locate the L-shaped solid edge for DM code recognition, in this paper we propose a novel DM code recognition method based on locating the L-shaped dashed edge by incorporating prior information about the center of the DM code. Specifically, we first use a deep learning-based object detection method to obtain the center of the DM code. Next, to enhance the accuracy of L-shaped dashed edge localization, we design a two-level screening strategy that combines general constraints with central constraints; the central constraints fully exploit the prior information about the center of the DM code. Finally, we employ libdmtx to decode the content from the precise position image of the DM code, which is generated using the L-shaped dashed edge. Experimental results on various types of DM code datasets demonstrate that the proposed method outperforms the compared methods in terms of recognition accuracy and time consumption, and thus holds significant practical value in industrial production environments.


Introduction
Data matrix (DM) codes are widely employed across various scenarios such as industrial automation [1,2], warehouse management [3,4], and smart logistics centers [5,6]. However, limitations in printing technology, insufficient precision in printing material processing, and scratches generated from contact between products often lead to low-quality DM code images in complex industrial production environments. These images are characterized by protrusions and interruptions on the L-shaped solid edge (finder pattern) and dashed edge (timing pattern), as shown in Figure 1. Such low-quality DM codes significantly diminish the recognition rate. Therefore, it is urgent and meaningful to develop a method for recognizing low-quality DM codes with protrusions and interruptions on the L-shaped solid edge and dashed edge in complex industrial production environments.
In the case of limited protrusions and interruptions on the L-shaped solid edge, existing methods [7–13] based on L-shaped solid edge localization have achieved impressive results in DM code localization and recognition. However, when a certain degree of protrusion and interruption occurs on the L-shaped solid edge, the success rate of these methods in DM code localization declines sharply, which poses a significant challenge to DM code recognition. Nevertheless, the dashed edge possesses stronger regularity, robustness, and interference tolerance than the solid edge. Therefore, exploring ways to better utilize the characteristics of the dashed edge for DM code localization is a highly meaningful research direction. Locating the L-shaped dashed edge reliably is not a trivial task: relying solely on simple and general properties, such as the alternation of black and white modules and the length ratio of black and white modules, may lead one to locate fake dashed edges. We observed that the center of the DM code provides valuable prior information that is of great assistance when locating the dashed edge. Consequently, it is worth exploring how to find the center of the DM code and appropriately leverage its properties to better locate the dashed edge.
In order to address the interference issues caused by protrusions and interruptions on L-shaped solid and dashed edges, in this paper we propose a novel method for DM code recognition that utilizes the center of the DM code to locate the L-shaped dashed edge. Specifically, since the center of the DM code lacks distinct features, employing traditional image processing methods to acquire its position is exceedingly challenging. However, object detection methods [14–33] based on deep learning networks can locate the rectangular position of the DM code by identifying the features of the whole target, with the center of the rectangle approximating the center of the DM code. Therefore, we first employ YOLOv5 [34] to obtain the coarse position image of the DM code. Then, unlike existing methods [7–13] that rely on the L-shaped solid edge for locating the DM code, we propose to locate the DM code by first locating the L-shaped dashed edge using a two-level screening strategy. The strategy combines general constraints with central constraints deduced from the center of the coarse position image of the DM code. A precise position image of the DM code can then be obtained from the rectangular region of the DM code, which is determined by the L-shaped dashed edge. The green rectangle in Figure 2 illustrates this rectangular region: protrusions on the L-shaped solid and dashed edges are excluded from the green rectangle in Figure 2b, and interruptions are filled in Figure 2d. Finally, we utilize libdmtx [13] to recognize the DM code in the precise position image. Our proposed method effectively addresses the interference issues caused by protrusions and interruptions when locating the DM code, thereby improving the DM code recognition rate. Overall, our contributions in this paper can be summarized as follows:
1. To effectively address the interference issues posed by protrusions and interruptions on the L-shaped solid and dashed edges, we propose a novel localization framework consisting of coarse localization based on a deep learning network and fine localization based on the L-shaped dashed edge of the DM code.
2. To improve the accuracy of locating the L-shaped dashed edge, we propose a two-level screening strategy which combines general constraints with central constraints deduced from the center of the DM code.
3. Experimental results on various types of datasets demonstrate that the proposed method achieves a higher recognition accuracy rate and lower time consumption than the compared methods.

Related Work
DM code recognition mainly consists of two stages: localization and decoding. The localization stage determines the position of the DM code within an image, which is essential for decoding. Moreover, accurate and efficient localization can reduce the complexity and time consumption of decoding. Therefore, the majority of researchers [7–11,35,36] prioritize efforts to enhance the accuracy and efficiency of localization in order to improve overall DM code recognition performance.
In early attempts, most methods focused on improving the performance of locating the DM code. Huang et al. [7] utilized line segment detection [37] to locate the "L" finder pattern of the DM code. Dai et al. [8] employed the Hough transform to detect the four vertices of the DM code in order to locate incomplete DM codes on metal surfaces. Karrach et al. [9] proposed an efficient method that does not require extensive pre-processing for locating the DM code in production engineering. Pivarciova et al. [10] designed another efficient method, mainly consisting of locating the finder pattern and verifying the timing pattern, to better locate the DM code. Although significant progress has been made with these methods, they consider only ideal-quality DM codes, which results in poor performance when they are applied in low-quality DM code environments. Therefore, utilizing deep learning-based methods with strong learning capability for localization is a new trend.
In recent years, several methods based on deep learning networks have been proposed for the detection and decoding of 2D barcodes. For the detection of quick response (QR) codes, several representative deep learning methods [20,38–41] showed impressive capabilities and gained increased attention. Chou et al. [42] employed a convolutional neural network to detect QR codes with rotation and deformation. Hansen et al. [43] focused on adapting the deep learning-based detector YOLO [20] to detect QR codes in a real-time and reliable way. For the detection and decoding of DM codes, Almeida et al. [35] proposed a new pipeline in which the detector is based on the deep learning Faster R-CNN [18] network and a conventional decoder method is used. However, the detection is time-consuming because of the large number of parameters in Faster R-CNN. To solve this issue, Almeida et al. [36] conducted a comprehensive analysis and comparative experiments on representative deep learning-based object detection methods [18,19,21,25] with different backbones [39–41,44–46], and they proposed to use YOLOv4 [21] as the detector to achieve a balanced trade-off between detection accuracy and latency. To deal with the problem that images captured by mobile cameras are usually of low quality with poor contrast, Liao et al. [47] proposed a deep learning-based method for industrial DM codes that learns the colors of two adjacent modules of a DM symbol; an edge image is generated to reconstruct the final barcode image. While existing methods have made considerable progress, their lack of consideration for the interference issues caused by protrusions and interruptions on L-shaped solid and dashed edges limits their effectiveness when applied in an industrial production environment.
To handle these interference issues, unlike existing methods [7–13] that rely on the L-shaped solid edge for DM code localization, our method relies on the L-shaped dashed edge, which has stronger interference-resisting capabilities. Existing methods [35,36] that simply utilize object detection based on deep learning networks yield results that still contain the interference caused by protrusions and interruptions on the L-shaped solid and dashed edges. In contrast, our approach leverages an object detection method to obtain the center of the DM code, which is then used to build a two-level screening strategy combining general constraints and central constraints for locating the L-shaped dashed edge. This successfully eliminates the interference issues and achieves a higher recognition accuracy rate and lower time consumption than the compared methods.

Overview
We present the overall architecture of our proposed method in Figure 3. This architecture consists of three processing stages: coarse localization, fine localization, and decoding. In the first stage, we take the original DM code image I org as input to coarse localization based on a deep learning network, yielding the coarse position image I coarse of the DM code. In the second stage, I coarse is passed on to fine localization based on the L-shaped dashed edge, generating the precise position image I precise of the DM code. In the third stage, I precise is fed into an existing decoding method to output the data of the DM code, R data . In what follows, we detail the three stages of our method, including the coarse localization stage (Section 3.2), the fine localization stage (Section 3.3), and the decoding stage (Section 3.4).

Coarse Localization
This processing stage aims to obtain the coarse position image I coarse and the center of the DM code. In the first stage in Figure 3, we employ the YOLOv5 [34] method for coarse localization because YOLOv5 offers superior detection accuracy and latency compared to most object detection methods. The original DM code image, shown in Figure 4a, is passed to YOLOv5 as input. The red rectangle in Figure 4b is the result detected by YOLOv5. Thanks to the powerful learning capabilities of deep learning methods, even when the solid and dashed edges of the DM code are affected by protrusions and interruptions, the detected result still contains the entire DM code rather than an incomplete one. The outputs of this stage are the coarse position image I coarse , as shown in Figure 4c, and the center of the DM code, which is the center of I coarse . Importantly, for low-resolution DM codes in high-resolution images, cropping to low-resolution DM code images evidently reduces the computational resources required by the two subsequent processing stages. Meanwhile, low-resolution DM code images also reduce the complexity of dashed edge localization and the time consumption of DM code recognition.
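The crop-and-center step of this stage can be sketched as follows. This is a minimal illustration that assumes an axis-aligned detection box (x1, y1, x2, y2) has already been produced by the detector; `coarse_localize` is an illustrative helper name, not the actual YOLOv5 interface:

```python
import numpy as np

def coarse_localize(image, box):
    """Crop the coarse DM-code region from a detected bounding box.

    box is (x1, y1, x2, y2) in pixel coordinates, e.g. taken from a
    detection result; the rectangle center approximates the DM-code center.
    """
    x1, y1, x2, y2 = box
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    i_coarse = image[y1:y2, x1:x2]  # I_coarse passed to fine localization
    return i_coarse, center

# Toy usage: a 100x100 "image" with a detection at (20, 30)-(60, 80).
img = np.zeros((100, 100), dtype=np.uint8)
crop, c = coarse_localize(img, (20, 30, 60, 80))
```

Both outputs of the stage are produced here: the cropped coarse image and the box center, which later drives the central constraints.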
It is noteworthy that existing DM code recognition methods struggle to effectively locate and recognize DM codes with protrusions and interruptions on the solid and dashed edges, whereas our proposed method handles them well. The next section therefore details how the fine localization stage excludes protrusions and interruptions on the dashed edge.

Fine Localization
The main target of this processing stage is to obtain the precise position image I precise of the DM code without protrusions and interruptions on the L-shaped solid and dashed edges. We present the procedure overview diagram of fine localization based on the L-shaped dashed edge in Figure 5. This procedure consists of three subprocesses: preprocessing, L-shaped dashed edge localization, and generation of the DM code's precise position image.

Preprocessing
The objective of the first subprocess is to obtain an upright DM code image without interruptions on the L-shaped solid and dashed edges. As shown in Figure 5, an upright DM code image I filled means that the solid and dashed edges of the DM code are parallel or perpendicular to the x-axis of the image. This subprocess consists of four steps: resizing, rotating, cropping, and filling.
(1) Resizing. To facilitate subsequent processing, we resize the coarse position image I coarse to obtain the resized image I resized , whose pixel coordinates are calculated as follows:

(x rs , y rs ) = (r · x cs , r · y cs ),

where (x rs , y rs ) is the pixel coordinate of image I resized , and (x cs , y cs ) is the pixel coordinate of image I coarse . r is a scale factor which is calculated as follows:

r = l rs / max(H cs , W cs ),

where H cs and W cs are the height and width of image I coarse , and l rs is the resized length of the long side of image I resized ; we empirically set l rs to 200. The function max selects the maximum of its arguments. The height H rs and width W rs of the resized image I resized are calculated as H rs = r · H cs and W rs = r · W cs .
(2) Rotating. To reduce the complexity of L-shaped dashed edge localization, we rotate the resized image I resized until it is an upright DM code image I rotated (still containing interruptions) through three substeps. First, we employ the Canny method [48] to extract the edge image I edge from I resized . Then, we employ the Hough method [49] to identify white lines in the edge image, where each line must have a point count exceeding a specified threshold value, which we set to 35. The most frequent angle among all lines is taken as the rotation angle θ. Finally, the image I rotated can be expressed as follows:

x rt = (x rs − x c ) cos θ − (y rs − y c ) sin θ + x c ,
y rt = (x rs − x c ) sin θ + (y rs − y c ) cos θ + y c ,

where (x rt , y rt ) is the pixel coordinate of image I rotated , (x rs , y rs ) is the pixel coordinate of image I resized , and (x c , y c ) is the rotation center of image I resized . The width and height of I rotated are the same as those of I resized . We found that the occupied width of the DM code in the image is shorter after rotation than before, which leads to the formation of invalid border regions. Eliminating these invalid border regions improves the effect of the subsequent central constraints on locating the L-shaped dashed edge and avoids the interference of fake dashed edges. Therefore, we crop these regions of the rotated image I rotated in the next processing step.
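The resizing and rotation mappings above can be sketched numerically as follows. `resize_params` and `rotate_point` are illustrative helper names, and the angle-estimation step via Canny/Hough is omitted here:

```python
import numpy as np

L_RS = 200  # empirically chosen long-side length l_rs from the text

def resize_params(h_cs, w_cs, l_rs=L_RS):
    """Scale factor r and the resized height/width of I_resized."""
    r = l_rs / max(h_cs, w_cs)
    return r, round(r * h_cs), round(r * w_cs)

def rotate_point(x_rs, y_rs, x_c, y_c, theta_deg):
    """Map a pixel coordinate of I_resized to I_rotated by rotating
    about the center (x_c, y_c) by theta degrees."""
    t = np.deg2rad(theta_deg)
    x_rt = (x_rs - x_c) * np.cos(t) - (y_rs - y_c) * np.sin(t) + x_c
    y_rt = (x_rs - x_c) * np.sin(t) + (y_rs - y_c) * np.cos(t) + y_c
    return x_rt, y_rt
```

In practice, the whole-image rotation would be done with a library warp (e.g. an affine transform); the point form above just makes the coordinate mapping explicit.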
(3) Cropping. We crop away the width d of the invalid border regions in the rotated image I rotated in order to obtain the cropped image I cropped . We assume that all four vertices of the DM code before rotation are located on the boundary of the image, that the center of the image is the center of the DM code, and that the image is square. Thus, we can draw a schematic diagram as shown in Figure 6. The green square represents the DM code, whose four vertices lie on the red square that stands for the image. The yellow cross represents the common center of the DM code and the image. The yellow circle indicates the vertex trajectory formed by rotating the DM code through 360 degrees. Further, we assume that the black square represents the DM code after counterclockwise rotation of the green DM code by angle θ. Therefore, the region between the black square and the red square is the invalid border region we need to crop. Since vertex A becomes vertex A′ after rotation, the height difference between vertex A and vertex A′ is the desired cropped width d of the invalid border regions. In the coordinate system of the image, let (x A , y A ) be the pixel coordinate of A and (x A′ , y A′ ) be the pixel coordinate of A′. Since point A lies on the x-axis, namely y A equals 0, d is equal to the value of y A′ . Given that point A becomes point A′ after rotating by angle θ about the image center, y A′ can be expressed as follows:

y A′ = (x A − x c ) sin θ + (y A − y c ) cos θ + y c ,

where (x c , y c ) represents the center coordinates of the image. Therefore, y A′ can be calculated from the above equation if x A is known. As shown in Figure 6, let α represent the angle ABO in right triangle ABO. The tangent of angle α can be expressed as follows:

tan α = x A / (W − x A ),

where W represents the width of the image before rotation. After transformation, x A can be expressed as follows:

x A = W tan α / (1 + tan α).

The angle α rotated from side AB to A′B′ is equal to the angle θ rotated from point A to point A′. Finally, with y A = 0 and α = θ, y A′ can be expressed by combining the above equations as follows:

d = y A′ = (W tan θ / (1 + tan θ) − x c ) sin θ + y c (1 − cos θ).

(4) Filling. To remove the interruptions on the L-shaped solid and dashed edges of the DM code in image I cropped , we first utilize the Otsu method [50] to generate the binary image, and then fill the interruptions of the binary image by utilizing morphological erode and dilate operations to obtain the interruption-filled image I filled , as shown in Figure 5. The structuring element of the morphological operations is a 3 × 3 matrix.
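The filling step, a morphological closing with a 3 × 3 structuring element, can be sketched in plain NumPy as follows. A real implementation would use a library routine such as OpenCV's morphology functions; this hand-rolled version only illustrates the operation:

```python
import numpy as np

def dilate3(binary):
    """3x3 dilation: a pixel becomes 1 if any 3x3 neighbour is 1."""
    p = np.pad(binary, 1)  # pad with 0 so the border cannot grow inward
    out = np.zeros_like(binary)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + binary.shape[0],
                     1 + dx:1 + dx + binary.shape[1]]
    return out

def erode3(binary):
    """3x3 erosion: a pixel stays 1 only if all 3x3 neighbours are 1."""
    p = np.pad(binary, 1, constant_values=1)  # pad with 1 to keep the border
    out = np.ones_like(binary)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + binary.shape[0],
                     1 + dx:1 + dx + binary.shape[1]]
    return out

def fill_interruptions(binary):
    """Morphological closing (dilate, then erode) with a 3x3 element."""
    return erode3(dilate3(binary))
```

Closing fills one-pixel-wide interruptions in an edge while leaving the rest of the binary image unchanged, which is exactly what the filling step needs.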

L-Shaped Dashed Edge Localization
The target of the second subprocess is to locate the L-shaped dashed edge as marked in I lv2 of Figure 5.This subprocess involves two steps: counting and screening.
(1) Counting. We count the continuous black and white pixel segments of all rows and columns in image I filled along the horizontal and vertical directions. The segments of a single row and a single column in the horizontal and vertical directions are denoted as S h (r) and S v (c), respectively, where r ∈ [0, H), c ∈ [0, W), and H and W are the height and width of the image.
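The counting step amounts to run-length encoding each row (and, analogously, each column). A minimal sketch, with `count_segments` as an illustrative name:

```python
def count_segments(row):
    """Run-length encode one binary image row into segments.

    Returns a list of dicts with the segment color (0 = black,
    1 = white), its length, and its head/tail column indices.
    """
    segments = []
    start = 0
    for i in range(1, len(row) + 1):
        # close the current run at end-of-row or on a color change
        if i == len(row) or row[i] != row[start]:
            segments.append({"color": row[start], "length": i - start,
                             "head": start, "tail": i - 1})
            start = i
    return segments

# A row alternating black/white modules, as along a timing pattern.
row = [1, 1, 0, 0, 1, 1, 0, 0, 1, 1]
segs = count_segments(row)
```

Applying the same function to image columns yields S v (c).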
(2) Screening. To locate all rows and columns of the L-shaped dashed edge, we specially design a two-level screening strategy which combines general constraints and central constraints by fully utilizing the central properties of the DM code. The single row and column of dashed edges in the horizontal and vertical directions are denoted as D h (r) and D v (c), respectively. The details of the two-level screening strategy are as follows:
Level 1: For the horizontal direction, s h (r, j) represents the j-th segment of the r-th row S h (r), where r ∈ [0, H), j ∈ [0, N h (r)), and N h (r) refers to the number of segments in the r-th row. c h (r, j) represents the color of s h (r, j), where the values 0 and 1 stand for black and white, respectively. l h (r, j) denotes the length of s h (r, j). i st and i ed represent the indices in S h (r) of the starting segment and ending segment of the dashed edge, respectively. A specific row of the dashed edge in the horizontal direction D h (r) should satisfy the following general constraints:
1. The color of the starting segment is black, and the starting segment is not the first segment, i.e., c h (r, i st ) = 0 with i st > 0.
2. The length ratio between the starting segment and each subsequent segment should fall within the range [r min , r max ], i.e., r min ≤ l h (r, i st )/l h (r, j) ≤ r max . Here, we set r min and r max to 0.5 and 2, respectively.
3. The color of the ending segment is black, and the ending segment is not the last segment.
4. There is at least one segment between the starting segment and the ending segment. Together with constraint 3, this can be expressed as c h (r, i ed ) = 0, where i ed ∈ [i st + 2, N h (r) − 1).
For the vertical direction, D v (c) should satisfy the same constraints as D h (r).
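The Level-1 check on one row might be sketched as follows. Here segments are (color, length) tuples as produced by run-length counting; `passes_level1` is an illustrative name, and preferring the longest run when several candidates exist is our own tie-breaking assumption, not specified in the text:

```python
R_MIN, R_MAX = 0.5, 2.0  # length-ratio bounds from the text

def passes_level1(segments):
    """Return (i_st, i_ed) for a candidate dashed-edge row, or None.

    segments is a list of (color, length) tuples, color 0 = black,
    1 = white.
    """
    n = len(segments)
    for i_st in range(1, n - 1):              # start is not the first segment
        if segments[i_st][0] != 0:            # starting segment must be black
            continue
        l_st = segments[i_st][1]
        # prefer the longest run: scan candidate endings right-to-left;
        # i_ed >= i_st + 2 leaves at least one segment in between
        for i_ed in range(n - 2, i_st + 1, -1):
            if segments[i_ed][0] != 0:        # ending segment must be black
                continue
            if all(R_MIN <= l_st / l <= R_MAX
                   for _, l in segments[i_st:i_ed + 1]):
                return i_st, i_ed
    return None

# An alternating run bounded by white quiet zones passes the screen...
dashed = [(1, 5), (0, 2), (1, 2), (0, 2), (1, 2), (0, 2), (1, 5)]
# ...while a row crossing the solid edge (one long black run) does not.
solid = [(0, 5), (1, 20), (0, 5)]
```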
Level 2: For the horizontal direction, h h (r, j) and t h (r, j) represent the head and tail column index of s h (r, j), respectively. n h,l (r) and n h,r (r) stand for the number of segments located to the left and right of the vertical middle line of the image, respectively. y h (r, j) represents the row value of s h (r, j). D h (r) should satisfy the following central constraints:
1. The starting segment is located to the left of the vertical middle line of the image, i.e., t h (r, i st ) < W/2.
2. The ending segment is located to the right of the vertical middle line of the image, i.e., h h (r, i ed ) > W/2.
3. The number of segments located to the left and to the right of the vertical middle line of the image is greater than or equal to 4, i.e., n h,l (r) ≥ 4 and n h,r (r) ≥ 4.
4. The difference in the number of segments between the left and right sides of the vertical middle line of the image is less than or equal to 2, i.e., |n h,l (r) − n h,r (r)| ≤ 2.
5. The distance between the horizontal dashed edge and the horizontal middle line of the image is greater than one third of the image's height, i.e., |y h (r, j) − H/2| > H/3.
For the vertical direction, D v (c) should satisfy the same constraints as D h (r).
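The central constraints might be checked as follows. Segments are (color, head, tail) tuples for one row; `passes_level2` is an illustrative name, and counting n h,l / n h,r over the Level-1 candidate run is our reading of the text, which does not pin this down:

```python
def passes_level2(segments, i_st, i_ed, row, width, height):
    """Apply the central constraints to a Level-1 candidate.

    segments is a list of (color, head, tail) runs for one row; row is
    the row index of the candidate; width/height are the image size.
    """
    run = segments[i_st:i_ed + 1]
    mid = width / 2
    left = sum(1 for _, h, t in run if t < mid)    # n_h,l: fully left of mid
    right = sum(1 for _, h, t in run if h > mid)   # n_h,r: fully right of mid
    return (segments[i_st][2] < mid                # 1: start left of middle
            and segments[i_ed][1] > mid            # 2: end right of middle
            and left >= 4 and right >= 4           # 3: enough on each side
            and abs(left - right) <= 2             # 4: roughly balanced
            and abs(row - height / 2) > height / 3)  # 5: far from middle row

# Ten 10-pixel segments spanning a 100-pixel-wide row.
segs = [(0, i * 10, i * 10 + 9) for i in range(10)]
```

A candidate near the image border (row 5 of a 100-pixel-tall image) passes, while the same run on the middle row fails constraint 5, which is what rules out fake dashed edges crossing the code's interior.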
As shown in Figure 7a, only employing the general constraints is insufficient to eliminate fake dashed edges located in either the left or right half of the image.However, these fake edges can be eliminated as shown in Figure 7b by combining the general constraints and the central constraints, which demonstrates that our two-level screening strategy is beneficial for locating the correct dashed edges of the DM code.

Generation of the Precise Position Image of the DM Code
The third subprocess aims to generate the precise position image of the DM code with the protrusions on the L-shaped solid and dashed edges removed, as shown in Figure 5.
(1) Localization of the data region of the DM code. To determine the data region r data of the DM code, we design special rules for the horizontal and vertical dashed edges located in the second subprocess. We first select the middle row D h (M h ) of the horizontal dashed edge and the middle column D v (M v ) of the vertical dashed edge, with M h and M v being the corresponding row and column indices of the image. Then, we select two horizontal coordinates, x 1 and x 2 , on D h (M h ) and two vertical coordinates, y 1 and y 2 , on D v (M v ). Finally, points (x 1 , y 1 ) and (x 2 , y 2 ) are used as the top-left and bottom-right coordinates of the rectangle, respectively, to form the data region r data of the DM code, as shown in the red box in Figure 8. In detail, x 1 , x 2 , y 1 , and y 2 are obtained as follows:
1. As shown in Figure 9a, when D v (M v ) is located to the left of the vertical middle line of the image, i.e., M v < W/2, x 1 and x 2 are assigned the head index of the starting segment and the head index of the ending segment of D h (M h ), respectively, i.e., x 1 = h h (M h , i st ) and x 2 = h h (M h , i ed ).
2. As shown in Figure 9b, when D v (M v ) is located to the right of the vertical middle line of the image, i.e., W/2 < M v , x 1 and x 2 are assigned the tail index of the starting segment and the tail index of the ending segment of D h (M h ), respectively, i.e., x 1 = t h (M h , i st ) and x 2 = t h (M h , i ed ).
3. As shown in Figure 9c, when D h (M h ) is located above the horizontal middle line of the image, i.e., M h < H/2, y 1 and y 2 are assigned the head index of the starting segment and the head index of the ending segment of D v (M v ), respectively.
4. As shown in Figure 9d, when D h (M h ) is located below the horizontal middle line of the image, i.e., H/2 < M h , y 1 and y 2 are assigned the tail index of the starting segment and the tail index of the ending segment of D v (M v ), respectively.
(2) Localization of the precise position region of the DM code. To determine the precise position region r pp of the DM code, we expand the data region r data of the DM code along the four directions of the rectangular edges. According to our observations, in industrial production environments the dashed edges of the DM code are farther from the sides of the data region than the solid edges are. Directly expanding by the same width for solid and dashed edges may cause the loss of crucial information from the dashed edges, leading to the failure of DM code recognition. Therefore, we use different methods for expanding toward the solid and dashed edges to better adapt to the low-quality DM code images found in industrial production environments. Specifically, we assume that l h and l v represent the average length of the black segments along a horizontal dashed edge and a vertical dashed edge, respectively. For the horizontal and vertical solid edges, we expand by the widths l h and l v from the two sides of r data near the two solid edges. For the horizontal and vertical dashed edges, we expand by the widths l h /2 and l v /2 from D h (M h ) and D v (M v ) instead of from the two sides of r data near the two dashed edges. As shown in Figure 10, the comparison of the two expansion methods for dashed edges indicates that the method based on the middle row or column of the dashed edge in Figure 10b can retain more crucial dashed edge information than
the method based on the sides of r data near the two dashed edges in Figure 10a. The green rectangle in Figure 10b is our desired precise position region r pp of the DM code.
(3) Generation of the precise position image of the DM code. We first extract the image of the rectangular region r pp from image I filled . Then, we add a white border of 5 pixels to the extracted image to generate the precise position image I precise of the DM code. As shown in Figure 11, the protrusions on the L-shaped solid and dashed edges in I precise have been removed.
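The four corner-selection cases of step (1) can be condensed into two side tests. The following sketch uses an illustrative tuple encoding of each located dashed edge (its row/column index plus the head and tail indices of its starting and ending segments); `data_region` is an assumed helper name:

```python
def data_region(h_edge, v_edge, width, height):
    """Top-left (x1, y1) and bottom-right (x2, y2) of the data region.

    h_edge / v_edge describe the middle row/column of the located dashed
    edges as (index, start_head, start_tail, end_head, end_tail), where
    index is M_h (row) or M_v (column) in the image.
    """
    m_h, h_st_head, h_st_tail, h_ed_head, h_ed_tail = h_edge
    m_v, v_st_head, v_st_tail, v_ed_head, v_ed_tail = v_edge
    if m_v < width / 2:    # vertical dashed edge on the left: use heads
        x1, x2 = h_st_head, h_ed_head
    else:                  # on the right: use tails
        x1, x2 = h_st_tail, h_ed_tail
    if m_h < height / 2:   # horizontal dashed edge on top: use heads
        y1, y2 = v_st_head, v_ed_head
    else:                  # on the bottom: use tails
        y1, y2 = v_st_tail, v_ed_tail
    return (x1, y1), (x2, y2)
```

The two `if` branches are exactly the Figure 9a/9b and 9c/9d cases, so one call covers all four orientations of the L-shaped dashed edge.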

Decoding
This processing stage is aimed at obtaining the data of the DM code, R data . Through the preceding two processing stages, a precise position image I precise of the DM code without any protrusions or interruptions on the L-shaped solid and dashed edges has been generated, as shown in Figure 11. Of the current DM code decoding methods, we selected the libdmtx method, which offers optimal performance and speed, to obtain the data of the DM code R data .

Experimental Results and Analyses
In this section, we primarily describe the experimental setup, present the comparative results, and conduct a detailed analysis. First, we describe the implementation details and experiment settings, including the test dataset, comparison methods, and evaluation metrics. Subsequently, to demonstrate the effectiveness and superiority of our proposed method, we present and analyze the comparison results for recognition accuracy and time consumption between our proposed method and the compared methods. Finally, to further explain the details of the proposed method, we present a time consumption analysis and conduct a series of ablation studies.

Implementation Details
We implemented the proposed method with the OpenCV (C++) library on a PC with a 3.2 GHz Intel CPU and an Nvidia 2060s GPU. We selected YOLOv5s-v7.0 for coarse localization. In order to improve the detection accuracy of YOLOv5 [34] for DM codes under the interference of protrusions and interruptions, we added 204 images with and without interference to the training dataset.

Experiment Settings
Test dataset: To test the effectiveness and generalization capacity of the proposed method, we selected a test dataset which includes five types of images: perfect DM codes without interference, real DM codes without interference, DM codes with protrusions, DM codes with interruptions, and rotated DM codes. We denote the five types as Type-1, Type-2, Type-3, Type-4, and Type-5, respectively. The test dataset contains a total of 265 images, and the number of images of each type is shown in Table 1. We present several sample images of each type in Figure 12.
Compared methods: The mainstream methods of DM code recognition are zxing [12] and libdmtx [13]. Unlike our proposed method, which recognizes the DM code based on L-shaped dashed edge localization, these two methods recognize the DM code based on L-shaped solid edge localization. Therefore, we compared our method with these two methods, implemented with the Python packages zxing-cpp 1.4.0 and pylibdmtx 0.1.10, respectively. Additionally, we compared our method with two commercial software products (onbarcode [51] trial version and inlite [52] 12.0.7675).
Evaluation metrics: In industrial production environments, DM codes on products must be recognized with high accuracy and low time consumption. Consequently, we selected these two metrics, recognition accuracy rate and time consumption, to measure the performance of our method against the compared methods.

Recognition Accuracy Rate Comparison
The comparison results for recognition accuracy rate are shown in Figure 13. As can be seen from the results for each type, our method exhibits superior adaptability and achieves the highest recognition accuracy rate, which is credited to the design of the dashed edge localization in our proposed method. The specific analysis results are as follows:
1. Since the DM codes exhibit no interference or module distortion in Type-1, all five methods can recognize all images correctly.
2. Although the DM codes have no interference in Type-2, there are two differences compared to Type-1: (1) the modules on the edge of the DM code are irregular, and the sizes of the modules are inconsistent; (2) there are gaps between rows of different DM code modules. These two differences pose some challenges for recognition methods. According to the comparison results, our proposed method achieved a better recognition accuracy rate than onbarcode, inlite, zxing, and libdmtx.
3. Our proposed method achieved a significantly higher recognition accuracy rate than onbarcode, inlite, zxing, and libdmtx for Type-3. Zxing and libdmtx achieved relatively low recognition rates because they lack sufficient consideration for protrusion interference. Onbarcode and inlite achieved recognition rates of about 50%. However, even in the presence of protrusion interference on the solid and dashed edges of DM codes, the proposed method can still recognize them correctly with high success rates. This success is attributed to the central constraints employed in our proposed method for the precise localization of dashed edges.
4. Our proposed method can recognize most DM codes with interruption interference in Type-4. This is because we conduct dilate and erode operations to fill the instances of interruption interference on the L-shaped solid and dashed edges. Onbarcode and zxing can recognize more than half of the codes, demonstrating some ability to resist interruption interference. However, inlite and libdmtx fail to recognize most of them, indicating little consideration for interruption interference.
5. For the rotated DM code images in Type-5, our proposed method achieved a recognition accuracy of 85%. This indicates that our method not only resists protrusion and interruption interference but also resists 2D rotation, which expands the application scenarios of our proposed method in real production environments.

Time Consumption Comparison
Taking into account the background and quality of the DM code within the image, we classified all of the test images into two groups to compare time consumption. Group-1 consists of images (Type-1) with clean backgrounds and perfect DM codes generated by onbarcode and inlite. Group-2 (Type-2∼5) comprises real-world images with impure backgrounds and interference in the DM code. Table 2 presents the comparison results for time consumption on all recognized DM codes from the two groups. For Group-1, in terms of average time consumption, libdmtx was the fastest, followed by zxing and the proposed method. In terms of stability, all five methods exhibited relatively stable performance. This is because the compared methods can quickly eliminate clean backgrounds and locate the DM code. For Group-2, in terms of average time consumption, the proposed method was the fastest, followed by zxing, onbarcode, inlite, and libdmtx. In terms of stability, the four compared methods had larger standard deviations than our proposed method. This is because the background contains features similar to the localization features of the DM code, which causes the compared methods to waste time locating and recognizing in these areas. In contrast, in our proposed method, the deep learning stage fully utilizes the GPU for rapid coarse localization, significantly reducing the time consumption of subsequent processing stages.
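The per-group statistics reported in Table 2 (average and standard deviation of per-image time, counting only recognized images) can be reproduced with a simple harness. The sketch below is illustrative only: `recognize` is a hypothetical callable standing in for any of the five methods, returning the decoded content on success or `None` on failure.

```python
import statistics
import time

def time_recognitions(recognize, images):
    """Time each recognition attempt in milliseconds.

    Only successfully recognized images are counted, matching the
    convention used in Table 2. Returns (mean_ms, std_ms, n_recognized).
    """
    times_ms = []
    for img in images:
        t0 = time.perf_counter()
        result = recognize(img)  # hypothetical recognizer: content or None
        elapsed_ms = (time.perf_counter() - t0) * 1000.0
        if result is not None:
            times_ms.append(elapsed_ms)
    mean_ms = statistics.mean(times_ms)
    std_ms = statistics.stdev(times_ms) if len(times_ms) > 1 else 0.0
    return mean_ms, std_ms, len(times_ms)
```

The standard deviation is what captures the stability discussed above: a method that occasionally wastes time on background regions resembling a DM code will show a large spread even if its mean stays moderate.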

Time Consumption Analysis
Figure 14 shows the proportion of time consumption for the three stages of the proposed method on all recognized DM codes. The coarse localization based on the YOLOv5 method accounts for the largest proportion of time consumption, reaching 72%, while the fine localization and decoding stages based on traditional image processing methods account for 14% each. This indicates that the majority of the time consumed by our proposed method is spent on the coarse localization stage. The output of coarse localization eliminates a significant amount of ineffective processing regions, thus saving time for fine localization and decoding. Compared to the time-consuming approach of directly using the libdmtx method to traverse the entire image in search of the L-shaped solid edge, our proposed method makes reasonable use of the libdmtx method, resulting in a significant reduction in time consumption.

Ablation Study
We conducted ablation studies to confirm the impact of the core processing modules in our proposed method for recognizing DM codes. The four cases are described as follows: (1) Case 1: the complete method; (2) Case 2: excluding the YOLOv5 method for coarse localization; (3) Case 3: excluding the module for cropping borders after rotation in the preprocessing subprocess of the second stage; (4) Case 4: excluding the central constraints in the L-shaped dashed edge localization subprocess of the second stage.
Based on the results shown in Table 3, we can draw the following conclusions:
1. The complete method achieves the highest recognition accuracy rate, whereas the recognition accuracy rates of the other three cases diminish, with some cases even failing to recognize any images. This demonstrates the effectiveness of the core modules in our proposed method.
2. The processing modules for coarse localization and central constraints are essential for the recognition of DM codes. Without these modules, none of the DM code images can be recognized.
3. Cropping borders has a certain impact on the recognition accuracy rate. For rotated DM codes, the requirement of the fifth point in Level 2 of the central constraints cannot be satisfied if the invalid borders are not cropped, which makes it impossible to locate the dashed edges. For upright DM codes, however, no invalid borders need to be cropped, so excluding the cropping module has no impact on the localization of dashed edges or the recognition of DM codes.
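Cropping invalid borders after rotation (the module removed in Case 3) amounts to finding an axis-aligned rectangle that lies entirely inside the rotated image, so that the blank corner regions introduced by rotation are discarded. The sketch below computes the largest such rectangle with a standard closed-form geometric formula; it is an illustration of the general technique under this assumption, not the authors' exact cropping procedure.

```python
import math

def max_inscribed_rect(w, h, angle_deg):
    """Largest axis-aligned rectangle (width, height) fully contained in a
    w x h image after it has been rotated by angle_deg degrees."""
    sin_a = abs(math.sin(math.radians(angle_deg)))
    cos_a = abs(math.cos(math.radians(angle_deg)))
    width_is_longer = w >= h
    side_long, side_short = (w, h) if width_is_longer else (h, w)

    if side_short <= 2.0 * sin_a * cos_a * side_long or abs(sin_a - cos_a) < 1e-10:
        # Half-constrained case: two opposite corners of the crop touch the
        # long sides of the rotated image.
        x = 0.5 * side_short
        wr, hr = (x / sin_a, x / cos_a) if width_is_longer else (x / cos_a, x / sin_a)
    else:
        # Fully constrained case: the crop touches all four rotated sides.
        cos_2a = cos_a * cos_a - sin_a * sin_a
        wr = (w * cos_a - h * sin_a) / cos_2a
        hr = (h * cos_a - w * sin_a) / cos_2a
    return wr, hr
```

At 0° the crop is the whole image, while at larger angles the usable region shrinks; without such a crop, the blank borders can violate the central constraints for rotated DM codes, as observed in Case 3.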

Conclusions
In this paper, we proposed a novel DM code recognition method based on locating the L-shaped dashed edge by incorporating the center of the DM code. The proposed method can recognize low-quality DM code images with protrusions and interruptions on the L-shaped solid and dashed edges with high accuracy and low time consumption. Based on the experimental results presented in this paper, we can draw the following conclusions: (1) The L-shaped dashed edge provides stronger regularity, robustness, and interference tolerance for DM code localization than the L-shaped solid edge; (2) The center of the DM code provides more useful constraints for L-shaped dashed edge localization than the general constraints; (3) The image with less content and lower resolution generated by coarse localization reduces the complexity and time consumption of subsequent processing stages; (4) For DM codes with protrusions and interruptions on the L-shaped solid and dashed edges, our proposed method is a better solution than the compared methods in terms of recognition accuracy rate and time consumption. Therefore, it holds significant application value in real-world industrial production environments.

Figure 1 .
Figure 1. Interference issues marked by red arrows on the L-shaped solid edge (finder pattern) and dashed edge (timing pattern). (a) Protrusions. (b) Interruptions.

Figure 2 .
Figure 2. Rectangular region of the DM code without interference. (a) The DM code with protrusions marked by red arrows. (b) The DM code without protrusions in the green rectangle. (c) The DM code with interruptions marked by red arrows. (d) The DM code without interruptions in the green rectangle.

Figure 3 .
Figure 3. The overview architecture of our proposed method. This architecture consists of three processing stages: coarse localization, fine localization, and decoding.

Figure 4 .
Figure 4. The input and output of coarse localization. (a) Input. (b) The coarse position of the DM code is in the red rectangle. (c) The coarse position image I coarse .

Figure 5 .
Figure 5. The procedure overview diagram of fine localization based on the L-shaped dashed edge.

Figure 6 .
Figure 6. The schematic diagram used to illustrate the cropping of invalid border regions.

Figure 7 .
Figure 7. The results of the two-level screening strategy. (a) Level 1: general constraints. The red lines represent the horizontal dashed edges screened by Level 1. The green lines represent the vertical dashed edges screened by Level 1. (b) Level 1 + 2: general and central constraints. The red lines represent the horizontal dashed edges screened by Level 1 + 2. The green lines represent the vertical dashed edges screened by Level 1 + 2.

Figure 8 .
Figure 8. The top-left coordinate (x 1 , y 1 ) and bottom-right coordinate (x 2 , y 2 ) of the data region r data , marked by the red rectangle.

Figure 9 .
Figure 9. Cases for determining x 1 , x 2 , y 1 , and y 2 of the data region r data . (a) Determining x 1 and x 2 when the VDE is located to the left of the VML of the image. (b) Determining x 1 and x 2 when the VDE is located to the right of the VML of the image. (c) Determining y 1 and y 2 when the HDE is located above the HML of the image. (d) Determining y 1 and y 2 when the HDE is located below the HML of the image.

Figure 10 .
Figure 10. The results of comparing the two expansion methods for dashed edges. The red and green rectangles represent r data and r pp , respectively. The two red arrows point out the differences between the two results. The horizontal and vertical red dashed lines represent the middle row and column of the dashed edges, respectively. (a) The results of the expansion method based on the sides of r data near the two dashed edges. (b) The results of the expansion method based on the middle row or column of the dashed edges.

Figure 11 .
Figure 11. The precise position image I precise of the DM code.

Figure 12 .
Figure 12. Example images of the five types. The first row presents the original DM code images, and the second row presents zoomed-in images of the DM codes. (a) Type-1. (b) Type-2. (c) Type-3. (d) Type-4. (e) Type-5.

Figure 14 .
Figure 14. The proportion of time consumption for the three stages of the proposed method.

Table 1 .
The number of images showing each of the five types.

Table 2 .
The time consumption (ms) of the five methods. Only the recognized DM code images are counted. The best result is in bold.

Table 3 .
The results of the ablation studies for core processing modules. "✓" means the case includes this module and "✗" means the case excludes this module.