
Automatic Defect Detection for Web Offset Printing Based on Machine Vision

1 Department of Information Science, Xi’an University of Technology, Xi’an 710048, China
2 School of Computer Science and Engineering, Xi’an University of Technology, Xi’an 710048, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(17), 3598; https://doi.org/10.3390/app9173598
Submission received: 27 July 2019 / Revised: 29 August 2019 / Accepted: 29 August 2019 / Published: 2 September 2019

Abstract: In the printing industry, defect detection is of crucial importance for ensuring the quality of printed matter. However, little research has been conducted for web offset printing. In this paper, we propose an automatic defect detection method for web offset printing, which consists of determining the first row of captured images, image registration and defect detection. Determining the first row of captured images is a problem particular to web offset printing that has not been studied before. To solve this problem, a fast computational algorithm based on image projection is given, which converts 2D image searching into 1D feature matching. For image registration, a shape context descriptor is constructed by considering the shape's concave-convex feature, which effectively reduces the feature dimension compared with the traditional image registration method. To tolerate position differences and brightness deviations between the detected image and the reference image, a modified image subtraction is proposed for defect detection. The experimental results demonstrate the effectiveness of the proposed method.

1. Introduction

Automatic surface defect detection based on machine vision has been widely used in many industry fields, such as the semiconductor manufacturing industry [1,2,3], electronic industry [4], textile industry [5] and printing industry [6]. Especially in the printing industry, defect detection is indispensable to ensure the quality of the printed product. The traditional defect detection method still depends mainly on human visual inspection, which has many problems including slow speed, high cost and subjective instability. As a result, there is high demand for developing an automatic defect detection system based on machine vision in the printing industry.
Several defect detection methods have been proposed for web offset printing [7], printed text [8] and variable data prints [9]. Although there are many different kinds of defects in printed products, such as white spots, hickeys, streaks, color defects, etc. [10], from a technical point of view, the existing defect detection methods consist of the following major parts: selection of a reference image, registration of the reference image and the print image, and subtraction of the detected image from the reference image to indicate defects.
For the selection of the reference image, defect-free printing samples are first collected and then averaged to form the reference image. Besides the reference image, two tolerance images are built with respect to the reference image, giving the maximum and minimum deviations of a pixel value [7,11]. However, how to build tolerance images adaptively is still an open research issue.
Due to mechanical vibration and distortion of the material of the printed matter, there is some position offset between the defective image and the reference image. Before performing image subtraction, the defective image has to be registered with the reference image. For image registration, the most common method is to select a special region of interest (ROI) and then use a correlation technique to match the detected image and the reference image [10,12,13]. However, searching the ROI is time-consuming, and matching based on correlation is sensitive to changes of light and noise. To handle limited rotational variance, multiple templates were set up in reference [13] to cover rotational deviation. The drawback of this method is that it cannot adapt to large-angle rotations.
After the defective image and the reference image have been registered, the image subtraction method is the most common method for identifying defect pixels. In reference [14], a method combining grayscale and gradient differences was proposed, where the grayscale difference between the reference image and the detected image was performed to determine the defect in the non-edge region while the gradient difference was used to determine the edge defect. Katsuyuki et al. [10] proposed an index space method to inspect printing defects. To improve the detection precision, a dynamic threshold with a hierarchical detection strategy was proposed in reference [12]. To avoid the contour artifact, a two-way image subtraction algorithm was proposed in reference [15]. In addition to the image subtraction, the incremental principal component analysis was used to detect printing defects for the first time [16].
Currently, deep learning has been successfully used in various fields [17,18]. It is gaining more and more attention of researchers in the field of defect detection [19]. However, labeling defect regions is time-consuming, and there are only a few defect samples in practice [20], which limits the application of the deep learning method in the field of printing defect detection.
For defect detection in web offset printing, little research has been conducted apart from references [6,7], which are ground-breaking work. However, some problems were not investigated in references [6,7], and there is much room for improvement. (1) Due to the continuous movement of the printed paper in web offset printing and the uncertainty of when image collection begins, determining the first row of the captured images is a new problem that has not been studied before. For this issue, a fast computational algorithm is proposed that converts 2D image matching into 1D feature matching. To further speed up the search process, a pyramid algorithm is employed in the proposed algorithm. (2) Most current image registration methods rely on ROI-based or feature point-based matching, which is sensitive to lighting changes and noise. Unlike the existing methods, a shape context descriptor based on the concave-convex feature is presented for image registration, which effectively reduces the feature dimension compared with the traditional image registration method [21]. (3) For defect identification using image subtraction, false defects easily arise from position differences and brightness deviations between the detected image and the reference image. To overcome this problem, the traditional image subtraction method is modified so that it tolerates position and brightness differences within a limited scope, thus reducing the false detection rate.
The paper is organized as follows. Section 2 gives the methodology, which includes the algorithm for determining the first row for web offset images, the proposed image registration method based on the shape concave-convex context descriptor, and the modified subtraction method. The experimental results and discussion are shown in Section 3. The conclusion is drawn in Section 4.

2. Methodology

The proposed defect detection method consists of two parts, as shown in Figure 1: template construction and defect detection for printed images. For the first part, multiple defect-free samples are collected and then aligned to generate template images, which include the reference image, a bright image and a dark image. The reference image is obtained by averaging multiple aligned defect-free images and is used to register the detected image during the process of defect detection. In addition, two tolerance images, named the bright image and the dark image, are built with respect to the reference image. The bright image and the dark image give the maximum and minimum deviations of a pixel value and, in combination with the reference image, serve as template images for further defect detection. For defect detection in the second part, the web offset image is first cut to the same size as the reference image by the algorithm in Section 2.1, then aligned with the reference image and compared with the bright image and the dark image. If a pixel value of the detected image is not within the range defined by the bright image and the dark image, it is identified as a defect pixel. Finally, a Blob analysis algorithm is used to label the defect regions.

2.1. Determining First Row in Web Offset Images

2.1.1. Description of the Problem

For web offset printing, the printed paper roll moves continuously through multiple color printing units, which transfer images from a printing plate cylinder to the print media. Printed images are likewise captured by line-scan CCD cameras as the printed matter moves. However, the height of a reference image is equal to the perimeter of the printing plate cylinder, which raises the problem of determining the first row in the captured images. Starting from the first row, subsequent detected images can be repeatedly obtained by cutting the captured image stream into segments with the same number of rows as the reference image. Normally, the camera works in conjunction with a rotary encoder mounted on the final printing cylinder, as shown in Figure 2a, where the zero position of the rotary encoder determines the first row. The drawbacks of mounting a rotary encoder on the printing cylinder are the inconvenient installation and the instability it can introduce to the printing cylinder. To solve this problem, a new solution is given in Figure 2b, where the rotary encoder is installed at the winding unit. The cylinder-mounted rotary encoder in Figure 2b can be manufactured separately, is easy to install on the winding unit and, in particular, does not affect the final printing unit; it is thus a universal solution. However, since the cylinder-mounted rotary encoder on the winding unit is not synchronized with the printing cylinder, determining the first row of the captured images becomes a new problem, as shown in Figure 3. Figure 3a is a continuously captured printed image, in which the red arrow indicates an image spanning one printing plate cylinder perimeter. Figure 3b is a reference image cropped from the captured image with a length of one printing cylinder perimeter, while Figure 3c is an initial frame image captured at the start time with the same size as the reference image during the detection process.
In Figure 3c, the position marked by the red arrow may be the first row that corresponds to the reference image. Thus, how to quickly find the first row in the initial frame image is addressed in this section.

2.1.2. Search Method Based on Image Projection

Generally, the common approach for determining the first row of web offset images is template matching based on normalized cross-correlation (NCC). For template matching, a sub-image with a discriminative feature is first selected from the reference image, and the corresponding region is then searched for in the initial frame image. However, searching for the position of a sub-image in a large image is computationally expensive [1]. Moreover, manually selecting a sub-image with a discriminative feature is not easy, and in some cases such a sub-image does not exist in the reference image. In this subsection, a fast searching algorithm based on image projection is given, which converts the 2D search into a 1D search using NCC.
Inspired by the idea of searching for the first row only in the row direction, a 2D image can be converted into a 1D feature descriptor by projecting it along the horizontal (row) direction, and the search can then be executed in 1D space. Figure 4 gives the two gray-average projection curves of Figure 3b,c. From Figure 4, we can see that the projection curves of the reference image and the initial frame are similar except for a horizontal shift, which means the first row can be found simply by moving one of the projection curves until the two curves overlap. Thus, determining the first row in web offset images becomes a matching problem in a 1D feature space. Considering the periodicity of the captured image, the projection curve of the initial frame is extended by one period, as shown in Figure 5, where the projection curve of the reference image is matched against the extended projection curve of the initial frame image.
Suppose the reference image and the initial frame are represented as $R(m,n)$ and $F(m,n)$, each of size $w \times h$, where $w$ is the image width and $h$ is the image height. The average gray-value projections of $R(m,n)$ and $F(m,n)$ along the row direction are denoted by $r(n)$ and $f(n)$, given in Equations (1) and (2):

$$r(n) = \frac{1}{w}\sum_{m=1}^{w} R(m,n), \quad n = 1, 2, \ldots, h \qquad (1)$$

$$f(n) = \frac{1}{w}\sum_{m=1}^{w} F(m,n), \quad n = 1, 2, \ldots, h \qquad (2)$$

For convenience, we rewrite $r(n)$ as $t = (t_i)_{i=1}^{h} = (r(1), r(2), \ldots, r(h))$ and extend $f(n)$ periodically as $d = (d_i)_{i=1}^{2h} = (f(1), f(2), \ldots, f(h), f(1), f(2), \ldots, f(h))$. Then, NCC is employed to search for the first row as follows:

$$\mathrm{NCC}(r) = \frac{\sum_{i=1}^{h}(t_i - \bar{t})(d_{i+r} - \bar{d}_r)}{\sqrt{\sum_{i=1}^{h}(t_i - \bar{t})^2 \; \sum_{i=1}^{h}(d_{i+r} - \bar{d}_r)^2}} \qquad (3)$$

where $\bar{t}$ and $\bar{d}_r$ are the means of $t$ and of $d$ over the range $d_{r+1}$ to $d_{r+h}$, respectively:

$$\bar{t} = \frac{1}{h}\sum_{i=1}^{h} t_i \quad \text{and} \quad \bar{d}_r = \frac{1}{h}\sum_{i=1}^{h} d_{i+r}$$
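As a minimal sketch (not the authors' code), the row projection of Equations (1) and (2) and the NCC search of Equation (3) can be written in Python with NumPy; the function names `row_projection` and `find_first_row` are ours, chosen for illustration:

```python
import numpy as np

def row_projection(img):
    """Average gray value of each row: converts a 2D image into a 1D curve."""
    return img.mean(axis=1)

def find_first_row(reference, frame):
    """Locate the first row of `frame` relative to `reference` by matching
    their 1D row-projection curves with NCC (Equation (3)). The frame curve
    is extended by one period to handle the wrap-around."""
    t = row_projection(reference)      # template curve, length h
    f = row_projection(frame)
    d = np.concatenate([f, f])         # periodic extension, length 2h
    h = len(t)
    best_r, best_ncc = 0, -np.inf
    for r in range(h):
        window = d[r:r + h]
        tc = t - t.mean()
        wc = window - window.mean()
        denom = np.sqrt((tc ** 2).sum() * (wc ** 2).sum())
        ncc = (tc * wc).sum() / denom if denom > 0 else 0.0
        if ncc > best_ncc:
            best_ncc, best_r = ncc, r
    return best_r
```

For a frame that is a cyclic shift of the reference, the returned index is the shift itself, since the NCC peaks where the two curves overlap.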
To speed up the search process, two strategies are incorporated into the proposed algorithm. The first is to select some columns with clearly varying data as the projected ROI, as shown in Figure 6 marked in blue, to reduce the projection time. The other is to construct a one-dimensional pyramid, as shown in Figure 7, where the search proceeds from the top layer down to the first layer. The process of constructing the pyramid and searching for the first row is as follows:
(1) Smooth the initial projection curve with a three-point average and take it as the first (finest) layer of the pyramid.
(2) Construct the next layer by down-sampling the current layer with a sampling rate of n. In our experiment, n is set to 2.
(3) Repeat Step 2 until the number of points in the current layer falls below 32, where 32 is an empirical value.
(4) Search from the top layer down to the first layer using Equation (3). If the best position index in the current layer is i, the search range in the next (finer) layer is n·i ± n.
(5) The best position index in the first layer is the first row.
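A minimal Python sketch of the one-dimensional pyramid and the coarse-to-fine search in the steps above, under the assumption that down-sampling simply takes every n-th point (the function names `build_pyramid` and `coarse_to_fine` are ours):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-length 1D curves."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def smooth3(x):
    """Three-point moving average; the two endpoints are kept as-is."""
    y = x.astype(float)
    y[1:-1] = (x[:-2] + x[1:-1] + x[2:]) / 3.0
    return y

def build_pyramid(curve, rate=2, min_len=32):
    """Steps (1)-(3): smooth, then repeatedly down-sample by `rate`
    while the next layer would still have at least `min_len` points."""
    layers = [smooth3(np.asarray(curve, dtype=float))]
    while len(layers[-1]) // rate >= min_len:
        layers.append(layers[-1][::rate])
    return layers                      # layers[0] is the finest

def coarse_to_fine(template, extended, rate=2, min_len=32):
    """Steps (4)-(5): exhaustive NCC search at the coarsest layer,
    then refine the best index within +-rate at each finer layer."""
    tp = build_pyramid(template, rate, min_len)
    dp = build_pyramid(extended, rate, min_len)
    levels = min(len(tp), len(dp))
    top = levels - 1
    h = len(tp[top])
    best = max(range(len(dp[top]) - h + 1),
               key=lambda r: ncc(tp[top], dp[top][r:r + h]))
    for lvl in range(top - 1, -1, -1):
        h = len(tp[lvl])
        center = best * rate
        lo = max(0, center - rate)
        hi = min(len(dp[lvl]) - h, center + rate)
        best = max(range(lo, hi + 1),
                   key=lambda r: ncc(tp[lvl], dp[lvl][r:r + h]))
    return best
```

With a sampling rate of 2, each refinement window of ±2 points is enough to recover the exact position from a coarse estimate that is off by at most one index.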

2.2. Image Registration Based on Shape Context with Concave-Convex Feature

2.2.1. Introduction of Shape Context

Image registration plays an important role in defect detection based on image subtraction, as it brings the detected image into the same position as the reference image for comparison. In the process of image registration, common features shared by the detected image and the reference image, such as feature points, regions or shapes, must be extracted first. Among these features, shape is the most popular for image registration because it is invariant to lighting changes. Thus, a shape descriptor is explored in this paper for image registration.
Since shapes are represented by a series of contour points, a shape description can be expressed by the relative positions of the contour points. In 2002, Belongie et al. [21] proposed the shape context descriptor to describe this relation. The main idea of shape context is to create a feature descriptor for each contour point relative to all the other contour points. As shown in Figure 8, for any point, a series of circles centered on that point is set up and divided into sectors; the feature descriptor of the point is then obtained by counting the number of contour points falling in each sub-region defined by a circle radius and sector. To match two shapes, every point in one shape must be compared with all points in the other shape by computing a cost function, which is computationally expensive.

2.2.2. Shape Context Based on Concave-Convex Feature

Inspired by shape context, a simple yet effective image registration method based on the shape's concave-convex feature is presented. As shown in Figure 9, a rectangular coordinate system centered at the shape center is set up, a descriptor is computed for each contour point, and the shape descriptor is finally obtained by counting the distribution of all point descriptors.
For a point $p_i$, as shown in Figure 9, its descriptor is defined as $\lambda_i = (d_i, q_i, \theta_i)$, where $d_i$ is the distance from $p_i$ to the center $o$, $q_i$ is the quadrant number of $p_i$, and $\theta_i$ represents the concave-convex feature of the shape at $p_i$ relative to $p_{i-1}$ and $p_{i+1}$. It is worth noting that our proposed concave-convex feature relies only on the two adjacent points, not on all points as in the shape context of Figure 8. The distance $d_i$ is quantized into three levels according to the maximum distance $d_{\max}$. The value of $q_i$ ranges from 1 to 4. The range of $\theta_i$ is between $0$ and $\pi$, uniformly quantized into six segments. Considering all combinations of the $d_i$, $q_i$ and $\theta_i$ values, there are 72 ($3 \times 4 \times 6$) different combinations. Thus, we build a histogram with 72 bins by counting all points on the shape, and this histogram is used as the shape descriptor.
Let $f(p_i) = f(d_i, q_i, \theta_i) \in [1, 72]$; then the histogram is constructed as follows:

$$h(k) = \#\{\, p_i \mid f(p_i) = k,\; i = 1, 2, \ldots, N \,\}, \quad k = 1, 2, \ldots, 72$$

where $N$ is the number of points on the shape.
Given two shapes with corresponding histograms $h_p$ and $h_q$, a similarity measure between the two shapes is defined as follows:

$$S = \frac{1}{2}\sum_{k=1}^{72} \frac{\left[h_p(k) - h_q(k)\right]^2}{h_p(k) + h_q(k)}$$
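A sketch of the 72-bin descriptor and the chi-square similarity measure above, under our own quantization conventions (bin boundaries and the handling of the closed contour are assumptions, not taken from the paper):

```python
import numpy as np

def shape_descriptor(points):
    """72-bin histogram descriptor of an ordered, closed contour.
    Each point contributes one bin built from its quantized distance
    to the shape centre (3 levels), quadrant (4) and the concave-convex
    angle at the point (6 levels over [0, pi])."""
    pts = np.asarray(points, dtype=float)
    rel = pts - pts.mean(axis=0)                        # centre the shape
    dist = np.linalg.norm(rel, axis=1)
    d_bin = np.minimum((dist / (dist.max() + 1e-12) * 3).astype(int), 2)
    ang = np.mod(np.arctan2(rel[:, 1], rel[:, 0]), 2 * np.pi)
    q_bin = np.minimum((ang / (np.pi / 2)).astype(int), 3)
    # angle between the two contour edges meeting at each point
    u = np.roll(pts, 1, axis=0) - pts                   # towards p_{i-1}
    v = np.roll(pts, -1, axis=0) - pts                  # towards p_{i+1}
    cosang = (u * v).sum(axis=1) / (
        np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1) + 1e-12)
    theta = np.arccos(np.clip(cosang, -1.0, 1.0))       # in [0, pi]
    t_bin = np.minimum((theta / np.pi * 6).astype(int), 5)
    idx = d_bin * 24 + q_bin * 6 + t_bin                # bin index 0..71
    return np.bincount(idx, minlength=72)

def chi2_similarity(hp, hq):
    """Chi-square measure S; smaller values mean more similar shapes."""
    hp = hp.astype(float)
    hq = hq.astype(float)
    den = hp + hq
    mask = den > 0
    return 0.5 * (((hp - hq) ** 2)[mask] / den[mask]).sum()
```

Because the descriptor is computed relative to the shape centre, a translated copy of a shape yields an identical histogram (S = 0), while shapes with different distance or angle distributions yield S > 0.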
In the process of print production, the distortion of a printed image can be approximately regarded as rigid distortion. Therefore, the transformation parameter can be evaluated by Equation (6).
$$\begin{pmatrix} \hat{x} \\ \hat{y} \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix} \qquad (6)$$

where $\theta$ is the rotation angle, and $t_x$ and $t_y$ are the translations along the X-axis and Y-axis, respectively.
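The paper does not spell out how the parameters of Equation (6) are estimated from the matched shapes; one standard way, shown here purely as an illustrative sketch, is a least-squares (Kabsch-style) fit over matched point pairs (the function name `estimate_rigid` is ours):

```python
import numpy as np

def estimate_rigid(src, dst):
    """Estimate the rotation angle theta and translation (tx, ty) of a
    rigid transform mapping src onto dst, in the least-squares sense.
    src, dst: (N, 2) arrays of matched points."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    theta = np.arctan2(R[1, 0], R[0, 0])
    return theta, t
```

Given exact correspondences, this recovers the rotation angle and translation of Equation (6) exactly.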

2.3. Defect Detection Based on Image Subtraction

As shown in Figure 1, the proposed defect detection algorithm consists of two parts: template construction and defect detection for a printed image. Unlike the traditional method, the constructed template images include a reference image, a bright image and a dark image. The traditional defect detection method is image subtraction, where the defective image is subtracted from the reference image and a fixed threshold is then employed to determine whether a pixel is defective. However, a fixed threshold easily leads to false positive or false negative pixels, since the fluctuation range of intensity values varies with pixel position. To solve this problem, two tolerance images, named the bright image and the dark image, are built with respect to the reference image. The bright image and the dark image give the maximum and minimum deviations of a pixel value and, together with the reference image, serve as template images for further defect detection.
Let $I_i(x, y)$ $(i = 1, 2, \ldots, n)$ denote the defect-free samples, where $n$ is the number of defect-free samples. The reference image $I_a(x, y)$ and the standard deviation $\sigma(x, y)$ of the defect-free samples are expressed as Equations (7) and (8):

$$I_a(x, y) = \frac{1}{n}\sum_{i=1}^{n} I_i(x, y) \qquad (7)$$

$$\sigma(x, y) = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left(I_i(x, y) - I_a(x, y)\right)^2} \qquad (8)$$

The bright image $T_b(x, y)$ and the dark image $T_d(x, y)$ are defined as follows:

$$T_b(x, y) = I_a(x, y) + \max\left(a,\; b \times \sigma(x, y)\right) \qquad (9)$$

$$T_d(x, y) = I_a(x, y) - \max\left(a,\; b \times \sigma(x, y)\right) \qquad (10)$$
where a and b are two empirical values that need to be determined by experimentation.
Besides the deviation of pixel values, there is still a slight distortion of image contours caused by deformation of the printed material and machine vibration, which produces false defects at the edges of patterns. To avoid this problem, gray dilation and gray erosion with a 3 × 3 structuring element are applied to the bright image and the dark image, respectively. Figure 10 gives an illustration of the reference image, bright image and dark image.
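A self-contained sketch of template construction (Equations (7)-(10)) and the tolerance-band defect test, with the 3 × 3 gray dilation/erosion written as a padded sliding max/min so that only NumPy is needed; the function names are ours:

```python
import numpy as np

def _filter3(img, reducer):
    """3x3 sliding max or min over `img`, with edge padding."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    windows = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return reducer(windows, axis=0)

def build_templates(samples, a=15.0, b=5.0):
    """Build reference, bright and dark template images from aligned
    defect-free samples, following Equations (7)-(10)."""
    stack = np.asarray(samples, dtype=float)   # shape (n, H, W)
    ref = stack.mean(axis=0)                   # Eq. (7)
    sigma = stack.std(axis=0)                  # Eq. (8)
    margin = np.maximum(a, b * sigma)
    bright = _filter3(ref + margin, np.max)    # Eq. (9) + gray dilation
    dark = _filter3(ref - margin, np.min)      # Eq. (10) + gray erosion
    return ref, bright, dark

def detect_defects(image, bright, dark):
    """Flag pixels of the aligned detected image that fall outside the
    [dark, bright] tolerance band."""
    return (image > bright) | (image < dark)
```

The dilation of the bright image and erosion of the dark image widen the tolerance band near edges, which is what suppresses the false defects caused by slight contour distortion.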
For defect detection, the continuously collected web offset image is first cut to the same size as the reference image by the algorithm in Section 2.1 and then aligned with the reference image. Next, the detected image is compared with the bright image and the dark image pixel by pixel. If a pixel value of the detected image is not within the range defined by the bright image and the dark image, it is identified as a defect pixel. Finally, a Blob analysis algorithm is used to label the defect regions.

3. Experiment

In this section, the methods proposed in this paper were executed on a personal computer with an Intel Core i3-2100 in VC++ 6.0, and two experiments were presented: (1) verifying the validity of the algorithm for determining the first row in web offset images; (2) evaluating the performance of the proposed defect detection method.

3.1. Performance of Determining the First Row in Web Offset Images

For determining the first row in web offset images, two factors may affect the speed and precision: the number of layers in the pyramid and the size of the ROI. Precision is defined as the deviation, in rows, of the position found by our algorithm from the actual row. In practical applications, a deviation of plus or minus one row is acceptable.
Table 1 shows the experimental results of different pyramid layers for 20 images with a size of 6200 × 8192. As shown in Table 1, the search becomes faster as the number of pyramid layers increases, while the precision is reduced. From the experimental results in Table 1, we conclude that a pyramid with five layers is the best choice, achieving a balance between time and precision; in particular, the five-layer pyramid is approximately 20 times faster than a one-layer pyramid.
To speed up the search process, some columns spanning the image height of the reference image were selected as the ROI; the number of selected columns is called the size of the ROI. Table 2 gives the search times for 20 images with different image heights and three different ROI sizes. In Table 2, all 20 images are correctly matched with the reference image, and the five-layer pyramid is used. From Table 2, we can see that the best ROI size is 20, for which the corresponding time is less than 10 ms, satisfying the real-time requirement of the printing industry.

3.2. Performance of the Proposed Defect Detection Method

In this subsection, two indices, called sensitivity and FPR (False Positive Rate), are employed to evaluate the performance of the defect detection method, which are defined as follows:
$$\mathrm{sensitivity} = \frac{\#TP}{\#TP + \#FN} \times 100\%$$

$$\mathrm{FPR} = \frac{\#FP}{\#TP + \#FN} \times 100\%$$
where TP represents defects that are correctly identified as defects. FN represents defects that are incorrectly identified as non-defects. FP represents non-defects that are incorrectly identified as defects.
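These two metrics are straightforward to compute; the sketch below (illustrative function names) also flags the paper's non-standard normalization of FPR, which divides false positives by the number of true defects rather than by the number of non-defects:

```python
def sensitivity(tp, fn):
    """Fraction of true defects that are detected, in percent."""
    return tp / (tp + fn) * 100.0

def false_positive_rate(fp, tp, fn):
    """FPR as defined in this paper: false positives normalized by the
    total number of true defects (#TP + #FN), in percent."""
    return fp / (tp + fn) * 100.0
```

For example, the Table 3 row with 601 defects, 585 correctly detected and 11 falsely detected gives a sensitivity of about 97.33% and an FPR of about 1.83%, matching the reported values.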
The values of a and b in Equations (9) and (10) are two important parameters. If the parameter values are set too large, some defects will be incorrectly regarded as non-defects. In contrast, if they are set too small, some non-defects will be labeled as defects. In the experiment, 15 types of web offset images with different sizes were analyzed to determine parameters a and b in Equations (9) and (10). From the experiment, the best parameters are a = 15 and b = 5 . Table 3 gives the performance of the proposed method for three different image sizes. As shown in Table 3, the highest sensitivity and the lowest FPR are achieved when the number of training samples is 30, which means using 30 defect-free images is the best choice for creating good template images. Moreover, the average sensitivity and average FPR are 96.03% and 1.54%, respectively, while the detection speed is less than 160 ms for each image, which verifies the effectiveness of the proposed method.

4. Conclusions

In this study, a machine vision-based defect detection method was proposed for web offset printing images. To solve the problem of determining the first row, a fast searching technique based on image projection was proposed. The proposed searching method has been used in practical production and satisfies the speed and accuracy requirements of web offset printing. For image registration, a simple yet effective method based on shape context is given. The distinctive characteristic of the shape descriptor is the concave-convex feature of shapes, which better describes shape contours. For defect detection, the proposed template images can tolerate pixel-value deviations and contour position differences in defect-free samples. The experimental results demonstrate the effectiveness of the proposed method. To keep pace with increasing printing machine speeds, the detection speed has room for further improvement in future work.

Author Contributions

E.Z. conceived this study and wrote the manuscript. Y.C. proposed some valuable suggestions and guided the experiments. M.G. designed the computational algorithms and wrote the program code. J.D. acquired the test images and performed the experiments. C.J. carried out the measurements and analyzed the experimental data.

Funding

This work is supported by the Key Program of Natural Science Foundation of Shaanxi Province of China under Grant No. 2017JZ020, the Project of Science and Technology of Shaanxi Province of China under Grant No. 2019GY-080, and the Project of Xi’an University of Technology of China under Grant No. 108-451418006.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Annaby, M.H.; Fouda, Y.M.; Rushdi, M.A. Improved normalized cross-correlation for defect detection in printed-circuit boards. IEEE Trans. Semicond. Manuf. 2019, 32, 199–211.
2. Vilas, H.G.; Yogesh, V.H.; Vijander, S. An efficient similarity measure approach for PCB surface defect detection. Pattern Anal. Appl. 2018, 21, 277–289.
3. Haddad, B.M.; Lina, S.Y.; Karam, L.J.; Ye, J.; Patel, N.S.; Braun, M.W. Multifeature, sparse-based approach for defects detection and classification in semiconductor units. IEEE Trans. Autom. Sci. Eng. 2018, 15, 145–159.
4. Jian, C.; Gao, J.; Ao, Y. Automatic surface defect detection for mobile phone screen glass based on machine vision. Appl. Soft Comput. 2017, 52, 348–358.
5. Kang, X.; Zhang, E. A universal defect detection approach for various types of fabrics based on the Elo-rating algorithm of the integral image. Text. Res. J. 2019, 1–28.
6. Shankar, N.G.; Ravi, N.; Zhong, Z.W. On-line defect detection in web offset printing. In Proceedings of the Fourth International Conference on Control and Automation (ICCA’03), Montreal, QC, Canada, 10–12 June 2003; pp. 794–798.
7. Shankar, N.G.; Ravi, N.; Zhong, Z.W. A real-time print-defect detection system for web offset printing. Measurement 2009, 42, 645–652.
8. Jesper, B.P.; Kamal, N.; Thomas, B.M. Quality inspection of printed texts. In Proceedings of the 23rd International Conference on Systems, Signals and Image Processing, Bratislava, Slovakia, 23–26 June 2016; pp. 1–4.
9. Vans, M.; Schein, S.; Staelin, C.; Kisilev, P.; Simske, S.; Dagan, R.; Harush, S. Automatic visual inspection and defect detection on variable data prints. J. Electron. Imaging 2011, 20, 1–13.
10. Katsuyuki, T.; Shin’ichi, M.; Akira, I. High-speed defect detection method for color printed matter. In Proceedings of the 16th Annual Conference of IEEE Industrial Electronics Society, Pacific Grove, CA, USA, 27–30 November 1990; pp. 653–658.
11. Yang, O.; Hu, T.; Guo, X.; Guo, B. An automation system for high-speed detection of printed matter and defect recognition. In Proceedings of the 2007 IEEE International Conference on Integration Technology, Shenzhen, China, 20–24 March 2007; pp. 213–217.
12. Zhu, Z.; Guo, Y. On image registration and defect detection techniques in the print quality detection of cigarette wrapper. In Proceedings of the 27th Chinese Control Conference, Kunming, Yunnan, China, 16–18 July 2008; pp. 34–38.
13. Luo, B.; Guo, G. Fast printing defects inspection based on multi-template matching. In Proceedings of the 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery, Changsha, China, 13–15 August 2016; pp. 1492–1496.
14. Wang, Y.; Xu, S.; Zhu, Z.; Sun, Y.; Zhang, Z. Real-time defect detection method for printed images based on grayscale and gradient differences. J. Eng. Sci. Technol. Rev. 2018, 11, 180–188.
15. Yang, X.; Wu, S. A rapid defect detecting algorithm for printed matter on the assembly line. In Proceedings of the 2012 International Conference on Systems and Informatics, Yantai, China, 19–20 May 2012; pp. 1842–1845.
16. Sun, X.; Zhang, L.; Chen, B. On-line print-defect detecting in an incremental subspace learning framework. Sens. Rev. 2011, 31, 138–143.
17. Zhang, Y.; Zhang, E.H.; Chen, W.J. Deep neural network for halftone image classification based on sparse auto-encoder. Eng. Appl. Artif. Intell. 2016, 50, 245–255.
18. Zhang, E.; Zhang, Y.; Duan, J. Color inverse halftoning method with the correlation of multi-color components based on extreme learning machine. Appl. Sci. 2019, 9, 841.
19. Wang, T.; Chen, Y.; Qiao, M.; Snoussi, H. A fast and robust convolutional neural network-based defect detection model in product quality control. Int. J. Adv. Manuf. Technol. 2018, 94, 3465–3471.
20. Haselmann, M.; Gruber, D.P. Pixel-wise defect detection by CNNs without manually labeled training data. Appl. Artif. Intell. 2019, 33, 548–566.
21. Belongie, S.; Malik, J.; Puzicha, J. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 509–522.
Figure 1. Flowchart of the proposed defect detection method.
Figure 2. Illustration of different installation methods of a rotary encoder for image capturing. (a) the usual method; (b) our proposed method.
Figure 3. Illustration of determining the first row in captured web offset images. (a) continuous captured printing image; (b) reference image; (c) an initial frame image captured at the start time.
Figure 4. Gray projection curve.
Figure 5. Extended gray projection curve of the initial frame image.
Figure 6. Selection of projected region of interest (ROI).
Figure 7. One-dimensional pyramid.
Figure 8. Illustration of shape context.
Figure 9. Concave-convex shape feature.
Figure 10. Illustration of template images. (a) reference image; (b) bright image; (c) dark image.
Table 1. Experimental results of different pyramid layers for 20 images.

| Number of Pyramid Layers | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| Precision (rows) | ±0 | ±0 | ±0 | ±0 | ±1 | ±2 | ±3 | ±6 | ±7 | ±10 |
| Time (ms) | 3062.43 | 1293.86 | 591.27 | 318.24 | 185.46 | 126.46 | 106.71 | 85.18 | 80.30 | 80.98 |
| Number of correct images | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 18 | 14 | 12 |
Table 2. Execution time for different sizes of ROI (ms).

| Image Height | ROI Size 20 | ROI Size 100 | ROI Size 1000 |
| 1164 | 1.60 | 2.05 | 7.08 |
| 8192 | 9.85 | 11.85 | 38.01 |
Table 3. Performance of the proposed defect detection method. FPR, false positive rate.

| Image Size | Training Samples | Defects | Correctly Detected Defects | Falsely Detected Defects | Sensitivity (%) | FPR (%) | Time (ms/Image) |
| 4096 × 942 | 10 | 601 | 500 | 29 | 83.19 | 4.83 | 140.13 |
| 4096 × 942 | 20 | 601 | 549 | 18 | 91.35 | 2.30 | 140.13 |
| 4096 × 942 | 30 | 601 | 585 | 11 | 97.33 | 1.83 | 140.13 |
| 4096 × 942 | 35 | 601 | 585 | 11 | 97.33 | 1.83 | 140.13 |
| 4096 × 1164 | 10 | 682 | 568 | 37 | 83.28 | 5.4 | 151.28 |
| 4096 × 1164 | 20 | 682 | 623 | 21 | 91.34 | 3.07 | 151.28 |
| 4096 × 1164 | 30 | 682 | 654 | 9 | 95.89 | 1.31 | 151.28 |
| 4096 × 1248 | 10 | 739 | 627 | 41 | 84.84 | 5.54 | 157.06 |
| 4096 × 1248 | 20 | 739 | 699 | 19 | 94.59 | 2.57 | 157.06 |
| 4096 × 1248 | 30 | 739 | 701 | 11 | 94.86 | 1.49 | 157.06 |

Zhang, E.; Chen, Y.; Gao, M.; Duan, J.; Jing, C. Automatic Defect Detection for Web Offset Printing Based on Machine Vision. Appl. Sci. 2019, 9, 3598. https://doi.org/10.3390/app9173598