Sensors · Article · Open Access · 9 March 2016

Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors

Jonghoon Seo, Seungho Chae, Jinwook Shim, Dongchul Kim, Cheolho Cheong and Tack-Don Han

1 Software Platform R&D Lab., LG Electronics Advanced Research Institute, 19 Yangjae-daero 11-gil, Seocho-gu, Seoul 06772, Korea
2 Department of Computer Science, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Korea
* Author to whom correspondence should be addressed.
This article belongs to the Section Physical Sensors

Abstract

Contour pixels distinguish objects from the background. Tracing and extracting contour pixels are widely used for smart/wearable image sensor devices, because they are simple and useful for detecting objects. In this paper, we present a novel contour-tracing algorithm for fast and accurate contour following. The proposed algorithm classifies the type of each contour pixel based on its local pattern and then traces the next contour pixel using the previous pixel’s type. It can therefore classify contour pixels as straight-line, inner-corner, outer-corner or inner-outer corner pixels, and it can extract pixels of a specific contour type. Moreover, it can trace contour pixels rapidly because it determines the local minimal path from the contour case. In addition, the proposed algorithm can compress the contour-pixel data using representative points and inner-outer corner points, and it can accurately restore the contour image from these data. To compare the performance of the proposed algorithm with that of conventional techniques, we measured their processing times and accuracy. In the experimental results, the proposed algorithm shows better performance than the others. Furthermore, it provides compressed contour-pixel data and restores them accurately, including the inner-outer corners, which cannot be restored by the conventional algorithms.

1. Introduction

A contour is defined as a segment that is one pixel wide and one or more pixels in length, and a boundary is defined as an unbroken contour [1]. Contours and boundaries provide very important information for object representation and image recognition. For example, they are used to separate objects from their backgrounds, to calculate the sizes of objects, to classify shapes and to find the feature points of objects using the length and shape of their contour pixels [2,3]. Moreover, in the field of graphics and vision, it is possible to use the contour information to save the shape of objects and restore them to their original shapes for various applications. Therefore, there have been many studies on contour-tracing algorithms for extracting and tracing the contour of an object. Most of the algorithms are binary image-based contour-tracing algorithms [3,4,5,6,7,8,9], which trace contours on digitized black and white images taken from various image sensors.
In recent years, with the increasing popularity of smart/wearable mobile sensor devices [10], such as smart phones, smart watches and smart glasses, various real-time applications, such as image code recognition, face recognition, optical character recognition (OCR), logo recognition, augmented reality (AR) and mixed reality (MR), have emerged for sensor computing [11,12,13,14,15]. Because smart/wearable mobile sensor devices possess limited hardware resources, such as low-performance processors, small-sized memory, low-resolution displays and low battery capacity, they require simple and fast algorithms for image processing.
Generally, a contour-tracing algorithm can be evaluated based on the following four criteria: (1) the accuracy of contour tracing; (2) the processing time; (3) the data size required to save the traced contour information; and (4) the ability to accurately restore and enlarge the original contour using the saved data. However, few studies on contour-tracing algorithms have sought to satisfy all of these criteria. Some of the conventional algorithms miss contour pixels at specific relative pixel locations, and others require considerable processing time to trace the pixels, because shortcuts through the local patterns are not considered [7,16]. Moreover, most of the algorithms have no data-compression capability for saving the contour information, and some of them cannot restore the contour perfectly from the saved data [17].
In this paper, we propose a novel contour-tracing algorithm based on pixel following that overcomes the above-mentioned problems, i.e., (1) it provides fast and accurate results for contour-pixel tracing; (2) contour information can be compressed to reduce the memory size; and (3) it accurately restores the compressed data to the original contour image. To achieve these objectives, the proposed algorithm initially distinguishes the local patterns made by adjacent contour pixels, and it then finds the next contour pixel to be traced from the pattern.
The paper is organized as follows. In Section 2, we categorize conventional contour-tracing algorithms and introduce their characteristics. Then, in Section 3, we analyze their performance based on the accuracy and speed. Subsequently, in Section 4, the proposed algorithm is described, and its contour tracing procedure, contour data compression technique and restoration technique are presented. In Section 5, we present a comparison of the conventional algorithms and the proposed algorithm on the basis of their performance, along with experimental results that include the number of traced pixels and the processing times for real-time large-sized images. Moreover, we compare the compressed data size and its restored results with the original traced contour pixels. Finally, in Section 6, we summarize the characteristics and experimental results of the proposed algorithm.

3. Analysis of Conventional Contour-Following Algorithms

Figure 8 illustrates the tracing results of the pixel-following methods based on the contour shown in Figure 1. In the figure, the arrow with an anchor is the tracer at the starting pixel, the solid arrow shows the movement operation of the tracer and the dotted line indicates a waypoint (detected pixel) used to determine whether a pixel is a contour pixel during pixel following.
Figure 8. Comparison of conventional contour pixel-following algorithms: (a) SBF; (b) MSBF; (c) ISBF; (d) MNT; (e) RSA; (f) TPA.

3.1. Pixel-Following Cases

As shown in Figure 8, ISBF traces most types of contour pixels, such as the inner-corner, outer-corner, inner-outer corner and straight-line pixels. In the case of SBF, there are inconsistencies in tracing at the inner corner and inner-outer corner. For example, in the figure, the inner-corner pixel (3, 6) and inner-outer corner pixel (7, 8) are traced, but the inner-corner pixel (3, 4) and inner-outer corner pixel (7, 9) are missed. In the case of MSBF, all of the inner-outer corner pixels are traced, but it is inconsistent in tracing the inner-corner pixels. Moreover, MNT, RSA and TPA have no problems with consistency, but they cannot trace the inner corners. Among these algorithms, TPA can be easily changed to trace inner-corner pixels, because it has waypoints on the inner corners, as shown in Figure 8f.

3.2. Start-Up Condition and Stopping Criteria

The pixel-following algorithm requires start and stop criteria to avoid incomplete or infinite contour tracing.

3.2.1. Assumption for Start

Commonly, tracing starts when the tracer enters a black pixel from a white pixel. Therefore, at the start of tracing, the tracer must be placed on a black pixel, and its rear pixel P_Rear should be white. Moreover, in the case of MSBF and ISBF, if the P_LeftRear of the start pixel is an inner-outer corner pixel, they cannot trace all of the contour pixels under their stopping criteria [9,16]. In addition, TPA has to select a start pixel that has white pixels at the tracer positions P_Left, P_LeftRear and P_RightRear [22]; otherwise, it cannot trace the left pixels of the contour.

3.2.2. Stop Criterion

There are three methods for stopping the contour tracing [9,22]. The first method is Jacob’s stopping criterion [22], which terminates the trace when the tracer re-enters the start pixel with the same absolute direction as at the start, i.e., if the current tracer T(P, d) equals the start tracer S(P, d), the pixel following is terminated. SBF, MSBF and ISBF use this criterion, and their tracing terminates at the start pixel, as shown in Figure 8a–c. The second method uses the number of re-entries to the start pixel. In Figure 8d–f, the tracers of MNT and TPA revisit the start pixel (1, 5) with different directional information; therefore, they would not stop but rather go to the next contour pixel if the first method were applied. For this reason, the trace is terminated once a specified number of re-entries, e.g., three or four, is reached [9]. This method is sometimes inefficient, because it performs one or more rounds of unnecessary tracing. The final method checks whether the trace route formed by the previous pixel and the current pixel of the tracer has already been passed. This method is used for RSA [9,22] because its tracer holds only pixel-location information and no directional information. In other words, whenever the tracer enters the i-th contour pixel, the current pixel location P_i is appended sequentially to the traced contour path, and if the traced path (P_{i−1}, P_i) appears twice, the tracing is terminated. This method can be applied to all pixel-following methods, and it is simpler than the second method; however, it requires more operations than Jacob’s stopping criterion.
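To make the criteria concrete, here is a minimal Python sketch of the first and third methods (Jacob’s criterion and the traced-path check); the function names are ours, and the tracer state is modeled as a (pixel, direction) pair:

    def jacobs_stop(tracer, start):
        """Jacob's criterion: stop when the tracer re-enters the start
        pixel with the same absolute direction it started with."""
        return tracer == start          # both are (pixel, direction) pairs

    def path_stop(prev_p, cur_p, seen):
        """Path criterion (as used by RSA): stop when the directed step
        (P_{i-1}, P_i) is traversed a second time."""
        if (prev_p, cur_p) in seen:
            return True
        seen.add((prev_p, cur_p))
        return False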

3.2.3. Limitations of Conventional Pixel-Following Methods

The above-mentioned conventional pixel-following methods have certain limitations. First, some of the algorithms, such as SBF and MSBF, perform unnecessary movement operations on white pixels, as shown in Figure 8a,b. Second, not all of the algorithms can determine the case (local pattern) of a contour pixel; consequently, their traced results cannot serve as a descriptive feature of the object or determine connectivity among objects. For example, in Figure 8b, MSBF detects (2, 7) as an inner-outer corner pixel but does not indicate (8, 7) as an inner-outer corner, because the traced paths on the two pixels are different. Moreover, MSBF also cannot determine the inner-corner, outer-corner and straight-line pixels, as is the case with SBF. ISBF determines the inner-outer corner, front-inner corner and front-straight-line pixels, but cannot determine the left-inner corner, left-straight-line and some of the outer-corner pixels, as shown in Figure 8c. Similarly, MNT and RSA cannot determine or detect the inner corners, and TPA cannot classify contour pixels into the different types at all. Finally, the data size of the traced contour must be considered: the pixel-following methods save all of the pixel points, so their data are larger than those of the RD code method.

4. Proposed Contour-Tracing Algorithm

In this section, we propose a novel pixel-following algorithm that traces contour pixels by considering their local patterns. It can therefore classify each contour pixel as an inner-corner, outer-corner, inner-outer corner or straight-line contour type, and it can easily determine the next contour pixel. Moreover, it can determine and save the representative points of an image, as the RD code method does, but by using pixel following rather than run-data-based following. In addition, the data can be restored to the original contour pixels without the RD code data.

4.1. Contour Following

4.1.1. Assumptions for Start-Up and Stopping Criteria

The proposed algorithm runs under two assumptions for starting. One is the general condition for pixel following, where the rear pixel of the tracer on the start pixel is white. The other is that there is no left-rear inner-outer corner for the tracer at the start position, i.e., if the rear and left pixels are white and the left-rear pixel is black, the start pixel must be changed. These are the same starting conditions as those used for MSBF and ISBF. Moreover, the stopping criterion of the proposed algorithm is Jacob’s stopping criterion, and the tracer is always on a contour pixel.
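As an illustration, the two assumptions can be checked as follows. This is a minimal sketch under our own conventions, not code from the paper: the image is a 2-D array with 0 = white and 1 = black, and directions are encoded 0–7 clockwise from north.

    # eight directions, clockwise, as (dy, dx): 0 = N, 1 = NE, ..., 6 = W, 7 = NW
    OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

    def neighbor(img, p, d, turn):
        """Intensity of the pixel one step from p in direction (d + turn) mod 8."""
        dy, dx = OFFSETS[(d + turn) % 8]
        return img[p[0] + dy][p[1] + dx]

    def valid_start(img, p, d):
        if neighbor(img, p, d, 4) != 0:    # assumption 1: rear pixel must be white
            return False
        if neighbor(img, p, d, 6) == 0 and neighbor(img, p, d, 5) == 1:
            return False                   # assumption 2: left-rear inner-outer corner
        return True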

4.1.2. Procedures

The proposed algorithm has two stages. First, the tracer follows the contour pixel based on the intensities of the left-rear and left pixels. After that, the tracer follows the contour pixels according to the intensities of the front and front-left pixels. Figure 9 shows the contour-tracing cases for the proposed contour-following algorithm. In the figure, the tracer is first on N_0 and queries the states of N_1 and N_2, as shown in Figure 9a; these states determine the corresponding path to be traced from among Cases (1)–(4), as shown in Figure 9b. After Stage 1, the moved tracer queries the states N_3 and N_4, and it then traces contour pixels along the corresponding path from among Cases (5)–(8).
Figure 9. Contour tracing cases for the proposed contour-following algorithm. (a) Adjacent pixels; (b) contour cases.
Hence, using the proposed algorithm, inner corners are traced as Case (1) or (6); inner-outer corners as Case (2) or (5); outer corners as Case (4) or (8); and straight-line pixels as the remaining cases. Therefore, all of the cases are easily classified by the algorithm.
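Put compactly, each stage reduces to a two-bit lookup on the queried neighbours. The table below is our own paraphrase of Figure 9b, assuming that N_1/N_2 correspond to the left-rear/left pixels and N_3/N_4 to the front-left/front pixels, as in the pseudocode of Algorithm 3 (True means black):

    STAGE1_CASE = {         # (N1 = left-rear, N2 = left)
        (True,  True):  1,  # inner corner
        (True,  False): 2,  # inner-outer corner
        (False, True):  3,  # straight line
        (False, False): 4,  # outer corner
    }
    STAGE2_CASE = {         # (N3 = front-left, N4 = front)
        (True,  True):  6,  # inner corner
        (True,  False): 5,  # inner-outer corner
        (False, True):  7,  # straight line
        (False, False): 8,  # outer corner
    }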

4.1.3. States

Figure 10 describes the state transitions of the automaton for the proposed algorithm. In the figure, the start and termination occur at State 0. The first stage runs using States 1–4 and then transits to State 5. The second stage continues from State 5 using States 6–11 and then transits to State 0. For example, Case (1) in Figure 9b can be executed using the transit sequence State 1, State 3 and State 5.
Figure 10. State transitions of the automaton for the proposed algorithm.
In Figure 10, [N_1 N_2 N_3 N_4] represents the intensity vector of the tracer’s four adjacent pixels shown in Figure 9a. In the vector, d implies “do not care”; B represents a contour pixel (black) and W represents a background pixel (white). Moreover, update(P, d) refers to the movement operation, where P is the new contour-pixel location and d is the new directional information for the tracer. In States 1 and 3, the updates occur based on the tracer of State 0, and the other updates are based on the tracer information in State 5.

4.1.4. Characteristics of the Proposed Algorithm

We designed the proposed algorithm around its two stages with two major goals. First, the check operation for stopping occurs only at State 0; therefore, the number of checking operations on black pixels is reduced. In other words, the proposed algorithm performs the stop check only when the tracer has a white rear pixel. This is more efficient than performing the check at every contour pixel, because the start condition and the stop criterion both require that the rear pixel of the tracer on the start pixel be white. Therefore, the transitions of Stage 2 through States 6–11 for processing Cases (5)–(8) are designed to return to State 0 only when the tracer has a white rear pixel, which eliminates unnecessary stop checks. The tracers of Cases (2) and (4) also have a white rear pixel after their movement, but their end conditions need not be considered: in Case (2), the start-up assumption excludes inner-outer corner pixels as the start (and thus end) pixel, and in Case (4), the tracer performs no update, so it is unnecessary to perform the check operation twice.
Second, the proposed algorithm eliminates some of the redundant operations used to detect white pixels. The conventional algorithms do not account for white pixels on the previous path; therefore, they sometimes re-detect white pixels from a previous tracing step during the current one. For example, in Figure 8, the white pixel at (4, 2) is detected twice while determining contour pixels such as (4, 3) and (5, 3). Moreover, RSA detects (4, 2) three times while determining (4, 3), (5, 3) and (6, 3). In contrast, the proposed algorithm has two stages, and its second stage avoids the previous path. Figure 11 shows the contour-tracing results obtained with the proposed algorithm; it detects (4, 2) only once during contour tracing. Moreover, the figure shows that the proposed algorithm performs fewer operations on white pixels than the conventional pixel-following methods shown in Figure 8, while tracing all types of contours.
Figure 11. Result of contour tracing using the proposed algorithm.
The pseudocode of the proposed algorithm is shown in Algorithm 3.
Table 2 describes the tracing results that were obtained by using the proposed algorithm up to the stage at which the tracer enters the 11th pixel. The code represents the contour pixel type, and it is classified automatically during tracing.
Table 2. Result table of the proposed contour tracing.
Algorithm 3 Procedure of the proposed tracer.
procedure Proposed_Tracer
    T(P, d) ← S(P, d), where P is on black, P_Rear is on white, and i ← 1
    // Whenever T(P, d) is updated, i increases by 1 and T(P, d) is saved automatically
    do
        // Stage 1
        if P_LeftRear = black then
            if P_Left = black then
                // Case 1
                T(P, d) ← T(P_Left, d_Left) and Code(i) ← "Inner"
                T(P, d) ← T(P_Left, d_Left)
            else
                // Case 2
                Code(i) ← "Inner-outer"
                T(P, d) ← T(P_LeftRear, d_Rear) and Code(i) ← "Inner-outer"
        else
            if P_Left = black then
                // Case 3
                T(P, d) ← T(P_Left, d_Left) and Code(i) ← "Straight"
            else
                // Case 4
                Code(i) ← "Outer"
        // Stage 2
        if P_FrontLeft = black then
            if P_Front = black then
                // Case 6
                T(P, d) ← T(P_Front, d_Left) and Code(i) ← "Inner"
                T(P, d) ← T(P_Front, d_Right)
            else
                // Case 5
                Code(i) ← "Inner-outer"
                T(P, d) ← T(P_FrontLeft, d) and Code(i) ← "Inner-outer"
        else if P_Front = black then
            // Case 7
            T(P, d) ← T(P_Front, d_Right)
        else
            // Case 8
            T(P, d) ← T(P, d_Rear) and i ← i − 1 and Code(i) ← "Outer"
    while T(P, d) ≠ S(P, d)
In the table, Code(i) shows only one code (one contour-pixel type) per contour pixel, but a pixel can carry several codes; for example, a pixel can be both an outer-corner and an inner-outer corner pixel.
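For readers who want to run the procedure, the following is a minimal Python transcription of Algorithm 3 under our own conventions (it is not the authors’ code): the image is a 2-D array with 0 = white and 1 = black and at least a one-pixel white border, directions are encoded 0–7 clockwise from north, and the start state is assumed to satisfy the start-up conditions of Section 4.1.1.

    OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    # 0 = N, 1 = NE, ..., 6 = W, 7 = NW (y grows downward); relative turns:
    # front +0, right +2, rear +4, left-rear +5, left +6, front-left +7

    def rel(p, d, turn):
        """Pixel one step from p in direction (d + turn) mod 8."""
        dy, dx = OFFSETS[(d + turn) % 8]
        return (p[0] + dy, p[1] + dx)

    def black(img, p, d, turn):
        y, x = rel(p, d, turn)
        return img[y][x] == 1

    def proposed_tracer(img, start_p, start_d):
        """Trace one contour; returns the visited pixels and the codes per pixel."""
        p, d = start_p, start_d
        pixels, codes = [p], {}

        def mark(code):                         # Code(i) for the current pixel
            codes.setdefault(p, []).append(code)

        def move(q, nd):                        # the update T(P, d) <- T(q, nd)
            nonlocal p, d
            p, d = q, nd
            pixels.append(p)

        while True:
            # Stage 1: left-rear and left neighbours
            if black(img, p, d, 5):
                if black(img, p, d, 6):                 # Case 1: inner corner
                    move(rel(p, d, 6), (d + 6) % 8)     # to left, turn left
                    mark("inner")
                    move(rel(p, d, 6), (d + 6) % 8)     # left again (always black here)
                else:                                   # Case 2: inner-outer corner
                    mark("inner-outer")
                    move(rel(p, d, 5), (d + 4) % 8)     # diagonal to left-rear, reverse
                    mark("inner-outer")
            elif black(img, p, d, 6):                   # Case 3: straight
                move(rel(p, d, 6), (d + 6) % 8)
                mark("straight")
            else:                                       # Case 4: outer corner, no move
                mark("outer")
            # Stage 2: front-left and front neighbours
            if black(img, p, d, 7):
                if black(img, p, d, 0):                 # Case 6: inner corner
                    move(rel(p, d, 0), (d + 6) % 8)     # to front, turn left
                    mark("inner")
                    move(rel(p, d, 0), (d + 2) % 8)     # to front, turn right
                else:                                   # Case 5: inner-outer corner
                    mark("inner-outer")
                    move(rel(p, d, 7), d)               # diagonal to front-left
                    mark("inner-outer")
            elif black(img, p, d, 0):                   # Case 7: straight ahead
                move(rel(p, d, 0), (d + 2) % 8)
            else:                                       # Case 8: outer corner
                d = (d + 4) % 8                         # rotate in place toward rear;
                mark("outer")                           # i <- i - 1 tags the same pixel
            if (p, d) == (start_p, start_d):            # Jacob's stopping criterion
                return pixels, codes

For instance, started at the top-left pixel of a lone 2 × 2 black block with direction east, this transcription visits the four pixels clockwise, labels each of them an outer corner and terminates by Jacob’s criterion.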

4.2. Data Compression and Restoration

The proposed algorithm saves the representative points and the inner-outer corner points as compressed data in order to reduce the data size. The representative points are feature points used for storing and restoring contour pixels, while the inner-outer corner points are used for accurately restoring the inner-outer corner pixels.

4.2.1. Data Structure

The representative points and inner-outer corner points are represented as vertices of contour pixels. Figure 12a shows the point types of the proposed algorithm. There are five types of representative points: two outer corners, two inner corners and one inner-outer corner. In addition, there are two types of inner-outer corner points, giving seven point types in total. Figure 12b illustrates all of the contour-tracing cases of the proposed algorithm and their corresponding representative points and inner-outer corner points.
Figure 12. Contour pixel reconstruction. (a) Representative points and inner-outer corner points; (b) cases of the proposed algorithm.
These points are saved as a sequence during contour tracing. If the i-th representative point is R_i = (r_{i,x}, r_{i,y}), then the set of representative points is R = {R_0, R_1, R_2, …, R_{n−1}}, with R_0 = R_n because the starting and ending points are the same. Similarly, if C_j is the j-th inner-outer corner point, it is represented by its coordinates and its type; the type C_{j,T} is assigned as NWSE or NESW, as shown in Figure 12a. Table 3 shows the data structure for data compression and the restoration of the contour pixels using the proposed algorithm.
Table 3. Data structure of the proposed contour tracer.
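Read this way, the record of Table 3 maps naturally onto a small structure; the following dataclass is our own rendering, with hypothetical field names.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class CompressedContour:
        # representative points R_0 ... R_{n-1}; R_n = R_0 is implied
        reps: List[Tuple[float, float]] = field(default_factory=list)
        # inner-outer corner points C_j as (c_x, c_y, type), type in {"NWSE", "NESW"}
        corners: List[Tuple[float, float, str]] = field(default_factory=list)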

4.2.2. Contour Pixel Restoration

For the restoration, we propose a restoration algorithm comprising two stages, namely contour restoration and inner-outer corner restoration.

Contour Restoration with Representative Points

The sequence of representative points is important for the reconstruction of the contour pixels, because it represents the contour-tracing sequence. Hence, by using the sequence of representative points and the relative location between adjacent representative points in the representative point table, the contour can be restored easily. If there are two sequential representative points R_i and R_{i+1}, the relative-position classifier Λ(R_i, R_{i+1}) from R_i to R_{i+1} can be described as:

$$\Lambda(R_i, R_{i+1}) = \begin{cases} NE, & \text{where } r_{i,x} < r_{i+1,x} \text{ and } r_{i,y} > r_{i+1,y} \\ SE, & \text{where } r_{i,x} < r_{i+1,x} \text{ and } r_{i,y} < r_{i+1,y} \\ NW, & \text{where } r_{i,x} > r_{i+1,x} \text{ and } r_{i,y} > r_{i+1,y} \\ SW, & \text{where } r_{i,x} > r_{i+1,x} \text{ and } r_{i,y} < r_{i+1,y} \end{cases}$$
Figure 13 shows the contour-pixel reconstruction methods using the relative positions from R_i to R_{i+1}. In the case of SE or NW, the contour pixels are filled in a clockwise manner, while in the other cases, they are filled in a counterclockwise manner.
Figure 13. Contour pixel reconstruction. (a) SE; (b) NW; (c) SW; (d) NE.
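The classifier Λ translates directly into code. The sketch below assumes image coordinates with x growing rightward and y growing downward (so a smaller y is “north”), and that consecutive representative points differ in both coordinates, as in the four cases above; the function name is ours.

    def classify(r_i, r_next):
        """Relative position of r_next with respect to r_i (the classifier Λ)."""
        (x0, y0), (x1, y1) = r_i, r_next
        if x0 < x1:
            return "NE" if y0 > y1 else "SE"
        return "NW" if y0 > y1 else "SW"

The result then selects the fill direction of Figure 13: SE and NW segments are filled clockwise, NE and SW segments counterclockwise.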
The methods in Figure 13 are the basic methods employed to restore the contour, but they are problematic when used to reconstruct the inner-corner pixel from three or more representative points. Table 4 and Figure 14 show cases of missing inner-corner pixels for sequential representative points R_i, R_{i+1} and R_{i+2}.
Table 4. Examples of missing inner corners.
Figure 14. Restoration cases for different sequences of representative points. (a–h) Cases 1–8.
As shown in Figure 14, the three representative points alone cannot restore the inner-corner pixel P_m. Therefore, if three sequential points form one of the cases in Table 4, the pixel P_m of the middle representative point R_{i+1} must be filled with a dark color.
Figure 15. Problem with TPA. TPA must start with white left (L), left-rear (W) and right-rear (R) pixels. However, generic inner contours cannot satisfy these criteria.
This reconstruction method differs from the restoration approach of the RD code method because the proposed method does not need the RD code; it requires only the representative points, whose saved sequence naturally replaces the RD code. Therefore, it requires less memory than the RD code method.

Inner-Outer Corner Restoration

The contour restored using only the representative points has no inner-outer corners, because the inner-outer corner is not considered. For this reason, if there are inner-outer corner points in the data table (Table 3), the inner-outer corners are generated using the data C_j, i.e., their point coordinates and types. If o(x, y) is a pixel restored using the representative points and O is the restored contour, the inner-outer corner restoration R_IO can be obtained as:
$$R_{IO} = \begin{cases} O \cup o(c_{j,x} - 0.5,\ c_{j,y} - 0.5) \cup o(c_{j,x} + 0.5,\ c_{j,y} + 0.5), & \text{where } c_{j,T} = NWSE \\ O \cup o(c_{j,x} - 0.5,\ c_{j,y} + 0.5) \cup o(c_{j,x} + 0.5,\ c_{j,y} - 0.5), & \text{where } c_{j,T} = NESW \end{cases}$$
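Under our reading of the formula, each saved corner point is a pixel vertex, and the two diagonally opposite pixels around it are added to the restored set O. A hypothetical sketch, with pixel centers at half-integer offsets from the vertex:

    def restore_inner_outer(O, corners):
        """Apply R_IO: add the two diagonal pixels of every inner-outer corner."""
        O = set(O)                      # restored contour pixels o(x, y)
        for cx, cy, ctype in corners:
            if ctype == "NWSE":
                O |= {(cx - 0.5, cy - 0.5), (cx + 0.5, cy + 0.5)}
            else:                       # "NESW"
                O |= {(cx - 0.5, cy + 0.5), (cx + 0.5, cy - 0.5)}
        return O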

5. Experimental Result

To compare the proposed algorithm to conventional algorithms, we perform an experiment to determine their accuracy, speed and stored data size. Table 5 shows the experimental environment.
Table 5. Experimental environment.
We experimented on nine CCITT (Consultative Committee for International Telephony and Telegraphy) standard fax images with 200 dots per inch (dpi) [17]. All of these images have 1728 × 2339 pixels and a file size of 11,842 KB. Table 6 shows the document type of these images and the total number of contour pixels. We used these large-sized images because they have various types of contours, which is useful when comparing the efficiencies with regard to parameters, such as processing time and the accuracy of the trace results of the contour-tracing algorithms.
Table 6. CCITT (Consultative Committee for International Telephony and Telegraphy) fax standard images.
In order to compare the proposed algorithm to the conventional algorithms, we used the experimental method described in [18] to determine the start pixels of the outer and inner contours of the images. In other words, whenever an untraced contour pixel is found by a raster scan from the top left to the bottom right of an image, this pixel is regarded as the start pixel, and the tracer starts contour tracing. If the contour is an outer contour, the tracer’s initial direction is assigned as east (“E”). Conversely, in the case of the inner contour of an object, e.g., “e,” “p,” “q,” “R” and “o,” the tracer’s initial direction is assigned as west (“W”).
In the experiments, we did not consider the TPA because the initial condition of the TPA is that it must start with white left (“L”), left-rear (“W”) and right-rear (“R”) pixels [22], which is not satisfied in some of the inner contours. Figure 15 shows an example of this violation. In our experiments, there were many cases for which these conditions were not satisfied. Therefore, we could not perform identical experiments, and meaningful data were not obtained for comparing the trace results with those of the other methods.

5.1. Accuracy

The accuracy of contour tracing indicates how completely the tracing algorithm traces, and we measure it by counting the number of traced pixels. First, we apply each algorithm to the test images and mark the traced pixels on the images. Then, we count all of the marked contour pixels in the images; therefore, even if a pixel is traced several times, it is counted only once. Table 7 shows the results of the comparison between the proposed algorithm and the conventional ones. In the table, “total number” denotes the total number of ground-truth contour pixels, including the inner-corner, outer-corner, inner-outer corner and straight-line pixels; the ground-truth pixels are the black pixels adjacent to background pixels. In this result, MNT and RSA traced the fewest pixels as contours, because they cannot trace the inner-corner pixels. SBF has inconsistencies with regard to the inner-outer corner and inner-corner types; therefore, it traced fewer pixels than ISBF and the proposed algorithm. Further, MSBF has inner-corner inconsistencies similar to those of SBF, and it also traced fewer pixels than the proposed algorithm and ISBF. The proposed algorithm traced 99.5% of the total contour pixels, the same proportion as ISBF, and it has the maximum total number of traced contour pixels. In conclusion, the proposed algorithm produced the best results with regard to tracing accuracy.
Table 7. Comparison of traced contour pixels.
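The counting rule above (“even if a pixel is traced several times, it is counted only once”) amounts to collecting traced pixels into a set; a trivial sketch with our own names:

    def count_traced(contours):
        """contours: iterable of pixel lists, one list per traced contour."""
        marked = set()
        for contour in contours:
            marked.update(contour)      # duplicate visits collapse automatically
        return len(marked)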
Figure 16 shows the traced images resulting from MSBF and the proposed algorithm. As shown in Figure 16a, MSBF was not able to trace some of the inner-corner pixels, but the proposed method (Figure 16b) traced all of the corner-pixel types without any inconsistency. Moreover, because the proposed algorithm can classify each corner type, it can trace only selected types of contour pixels by omitting some of the cases shown in Figure 9b. For example, if we remove tracing Cases (1) and (6), we obtain a result without inner-corner tracing, which is the same as the result of MNT and RSA. Figure 17b shows an image traced using the proposed algorithm without inner corners; the image is consistently traced without inner corners.
Figure 16. Visual comparison of two contour-tracing methods. (a) MSBF; (b) the proposed method.
Figure 17. Visual comparison of two contour-tracing methods. (a) MNT; (b) the proposed method (without inner corners).

5.2. Speed

In order to measure the tracing time of each algorithm, we ran each algorithm 20 times per image and calculated the average time, using the cv2.getTickCount() function provided by OpenCV 3.0.0 to measure the processing time. Table 8 shows the average processing time of each algorithm for tracing the images, together with a linear model, fitted by least-squares estimation (LSE), for estimating the processing time as the number of traced pixels increases. In the table, the average processing time per traced contour pixel is obtained by dividing the total processing time by the total number of traced contour pixels.
Table 8. Speed experimental result (units: seconds).
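OpenCV pairs cv2.getTickCount() with cv2.getTickFrequency() to convert ticks into seconds. A sketch of the measurement loop as we understand it, where trace_all_contours is a placeholder for the algorithm under test:

    import cv2

    def average_time(trace_all_contours, img, runs=20):
        """Average wall-clock time over `runs` executions, in seconds."""
        total = 0.0
        for _ in range(runs):
            t0 = cv2.getTickCount()
            trace_all_contours(img)
            total += (cv2.getTickCount() - t0) / cv2.getTickFrequency()
        return total / runs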
Figure 18 illustrates a graph that uses the data from Table 7. As shown in Figure 18, the proposed algorithm has the best performance, i.e., it had the lowest average processing time and the smallest slope of processing time versus the number of traced contour pixels in the LSE fit. In particular, although the proposed algorithm traced the most contour pixels in each image, its performance was the best among, or at least comparable to, the conventional algorithms. Furthermore, the proposed algorithm shows a smaller standard deviation than the conventional algorithms. Therefore, the proposed algorithm outperforms the other algorithms with regard to both the number of traced contour pixels and the processing time.
Figure 18. Comparison of tracing times of the contour-tracing algorithms. (a) Processing time vs. the number of contour pixels; (b) average contour tracing time vs. images.

5.3. Reduced Memory

The proposed algorithm does not save all of the contour pixels; it saves only the representative points and the inner-outer corner points. Table 9 shows the data sizes acquired from the above experiments on the CCITT standard fax images: the data sizes of the traced contour pixels and of their compressed data. The number of traced contour pixels (A) repeats the results of Table 7, while C and D indicate the numbers of representative points and inner-outer corner points of the traced contour pixels, respectively. A and C count (x, y) coordinates, and D counts inner-outer corner points, each comprising (x, y) coordinates and an inner-outer corner type. The benefit of storing only the representative points, which are based on the vertices of the contour pixels, is that the data size is significantly reduced: as shown in Table 9, the proposed algorithm reduced the data size to 19%–60% of the memory used when all of the contour pixels are stored.
Table 9. Comparison between total contour pixels and representative points.

5.4. Restoration

Figure 19 shows an example of the saved points and the restoration result obtained using the proposed restoration algorithm. Figure 19a is an example image that contains all of the contour-pixel types; it depicts the representative points and inner-outer corner points used for contour description and restoration. This image has two contours, namely an outer contour and an inner contour that includes two inner-outer corners. Table 10 lists these data, and Figure 19b shows the restoration result retrieved from the data in Table 10. In the figure, the restored contour accurately reproduces the original contour pixels.
Figure 19. Example of the restoration of contour pixels. (a) The original image and its saved points for restoration; (b) restoration by the saved data.
Table 10. Saved data.
Figure 20 shows the result obtained for the CCITT #1 image using the proposed restoration algorithm. Figure 20a represents the contour-tracing result, and Figure 20b depicts the result of the restoration from the compressed contour data. To verify their identity, we compared the contour pixels of the two images and found them to be identical in both the number of contour pixels and the pixel coordinates, i.e., the contour pixels in the restoration result are the same as the original contour pixels. As shown in Figure 19 and Figure 20, these experiments prove that the proposed algorithm can trace the inner and outer contours, store the results using less memory by saving only the representative points and inner-outer corner points instead of all of the contour pixels, and correctly restore all of the contour pixels from the compressed data. Moreover, as shown in [17], compressed data based on vertex contours guarantee precise enlarging.
Figure 20. Result of the experiment for CCITT #1. Contour pixels are shown in black, original image pixels in grey. (a) Result of contour tracing; (b) result of contour restoration.

5.5. Limitations

In the experiments, there were missing contour pixels that did not satisfy the experimental conditions described in [17]. In Figure 21b,c, there are eight untraced contour pixels, because the horizontal scan line cannot find a starting pixel for the contour under certain conditions. In other words, because the scan seeks an untraced black pixel with an adjacent white pixel on the horizontal line, an untraced contour pixel that lies between two black pixels in the horizontal direction, with its adjacent white pixel only in the vertical and/or diagonal direction, can never be selected as a starting pixel. Therefore, as shown in Table 7, some missing contour pixels remain after running the proposed algorithm, and the untraced contour pixels of the other algorithms are included among the missing contour pixels for the same reason. In particular, Image #9 has the largest number of missing contour pixels, because it contains many one-pixel-sized chessboard patterns consisting of inner-outer corner pixels, which tend to cause this missing start-pixel problem.
Figure 21. Example of untraced contour pixels caused by a missing starting pixel, CCITT Image #9, from (1093, 1766) to (1108, 1780). (a) Original image; (b) traced by ISBF; (c) traced by the proposed algorithm.
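A sketch of the unidirectional start-pixel search described above (our own code, with 1 = black and 0 = white); the left-neighbour test is exactly what makes a contour pixel flanked horizontally by black pixels invisible to the scan:

    def start_pixels(img, traced):
        """Yield untraced black pixels whose left horizontal neighbour is white."""
        for y, row in enumerate(img):
            for x in range(1, len(row)):
                if row[x] == 1 and row[x - 1] == 0 and (y, x) not in traced:
                    yield (y, x)

A bidirectional variant adds a second, vertical pass with the analogous upper-neighbour test; this is the configuration measured in Table 11 and Table 12 below.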
To overcome this problem, we first applied an eight-connection mask to the images to obtain the starting pixels, but the mask required many operations. We therefore measured the performance of multi-directional scanning, which uses vertical and horizontal scans instead of the eight-connection mask operation, to eliminate the missing contour-pixel problem. Table 11 shows the increase in the number of pixels traced using bidirectional scanning, and Table 12 describes the processing time of this method. Moreover, Figure 22 shows the tracing result obtained using the proposed algorithm with bidirectional scanning: seven of the missing pixels are traced, but one diagonally connected contour pixel remains untraced.
Table 11. Increase in the number of pixels traced using bidirectional scanning compared with unidirectional scanning.
Table 12. Computational time with bidirectional scanning.
Figure 22. Result of the proposed algorithm by using bidirectional scanning.
As the above tables show, bidirectional scanning slightly increases the number of traced contour pixels, but the processing time increases significantly. Moreover, the proposed algorithm shows acceptable accuracy (99.5%) even with only unidirectional scanning. Hence, unidirectional scanning with the proposed algorithm is sufficient for contour-tracing applications in which relatively few objects are present and real-time tracing is required, such as AR, MR and image-code recognition on small-scale images in a mobile computing environment.

6. Conclusions

In this paper, we proposed a contour-tracing algorithm to trace contours on low-performance devices, such as mobile phones, PDAs and embedded devices, which have processors with limited computational capability and small memory capacities. The proposed algorithm traces contour pixels based on the pixel-following method, and it can also convert the contour information into compressed data and accurately restore the original contour using the vertex-following method. The proposed algorithm repeatedly executes two tracing stages: first, the tracer moves to the next pixel based on its left and left-rear pixels; next, it moves based on the front and front-left pixels. With these two stages, the proposed algorithm extracts two contiguous contour pixels together. Because it traces contiguous pixel pairs in a single step, there are more possible local patterns that the contour can form, and the classification of the contour is therefore more complicated than in the conventional algorithms. On the other hand, this classification actually takes less time to compute, because it reduces the duplicated detection of background pixels. Moreover, based on the classified cases, we can determine the representative points and the inner-outer corner points, which are based on the coordinates of vertices, and we can store the contour data as points in order to reduce the data size. In addition, we proposed a restoration algorithm to retrieve all of the contour pixels from the representative points and the inner-outer corner points. The proposed algorithm performs accurate restoration, and it can restore the inner-outer corners that are not considered in conventional algorithms, such as the RD code method and the PXY method. Another characteristic of the proposed algorithm is that it can trace a desired type of connectivity, because it is able to distinguish between the different types of connections of the contour pixels; for example, it can trace without inner corners, similar to MNT and RSA.
We performed experiments with regard to three aspects: accuracy, speed and saved data size. In the experimental results, the proposed algorithm had the best accuracy of contour tracing, i.e., of all the algorithms, it traced the largest number of contour pixels. Moreover, it had the smallest average processing time per contour pixel and good performance with respect to the processing time of each image and the LSE fit; based on its accuracy and processing time together, it is regarded as the best of the compared algorithms. In addition to accuracy and speed, the proposed algorithm exhibited good performance with regard to memory consumption: it stores only the representative points and inner-outer corner points, thus reducing the memory required. Besides, the proposed restoration algorithm successfully retrieved all of the contour pixels from the compressed data. Therefore, the proposed algorithm offers improved accuracy, fast contour tracing, low memory consumption for saving the contour and good restoration ability.

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. NRF-2012R1A2A2A01014499 and NRF-2015R1A2A1A10055673).

Author Contributions

Jonghoon Seo contributed to the design of the contour following algorithm and the implementation of the proposed algorithm. Seungho Chae contributed to the design of the contour restoration method and was in charge of the experiment process. Jinwook Shim and Dongchul Kim contributed to the writing and revising of the paper. Cheolho Cheong guided the conventional contour following algorithms and helped to design the proposed algorithm. Tack-Don Han guided the research direction and verified the research results. All authors made substantial contributions to the writing and revision of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. McQueen, W.A. Contour tracing and boundary detection for object identification in a digital image. U.S. Patent 6,674,904, 6 January 2004. [Google Scholar]
  2. Pratt, W. Digital Image Processing; Wiley & Sons: Hoboken, NJ, USA, 1978. [Google Scholar]
  3. Gose, E.; Johnsonbaugh, R.; Jost, S. Pattern Recognition and Image Analysis; Prentice-Hall, Inc.: Upper Saddle River, NJ, USA, 1996. [Google Scholar]
  4. Pitas, I. Digital Image Processing Algorithms and Applications; Wiley & Sons: Hoboken, NJ, USA, 2000. [Google Scholar]
  5. Das, M.; Paulik, M.; Loh, N. A bivariate autoregressive technique for analysis and classification of planar shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 97–103. [Google Scholar] [CrossRef]
  6. Papert, S. Uses of Technology to Enhance Education. Available online: http://hdl.handle.net/1721.1/6213 (accessed on 16 August 2015).
  7. Cheong, C.; Han, T.D. Improved simple boundary following algorithm. J. Korea Inf. Sci. Soc. Softw. Appl. 2006, 33, 427–439. [Google Scholar]
  8. Pavlidis, T. Algorithms for Graphics and Image Processing; Springer-Verlag: Berlin, Germany, 2012. [Google Scholar]
  9. Reddy, P.R.; Amarnadh, V.; Bhaskar, M. Evaluation of stopping criterion in contour tracing algorithms. Int. J. Comput. Sci. Inf. Technol. 2012, 3, 3888–3894. [Google Scholar]
  10. Aroca, R.V.; Gomes, R.B.; Dantas, R.R.; Calbo, A.G.; Gonçalves, L.M.G. A Wearable Mobile Sensor Platform to Assist Fruit Grading. Sensors 2013, 13, 6109–6140. [Google Scholar] [CrossRef] [PubMed]
  11. Wakaumi, H.; Nagasawa, C. A 2D Ternary Barcode Detection System with a Dual Threshold. In Proceedings of the 5th IEEE Conference on Sensors, Daegu, Korea, 22–25 October 2006; pp. 1511–1514.
  12. Brodić, D.; Milivojević, D.R.; Milivojević, Z. Basic Test Framework for the Evaluation of Text Line Segmentation and Text Parameter Extraction. Sensors 2010, 10, 5263–5279. [Google Scholar] [CrossRef] [PubMed]
  13. Kim, J.H.; Seo, K.S.; Wang, J. Rapid diagnostic barcode system for codetection of multiple protein markers. IEEE Sens. J. 2006, 6, 248–253. [Google Scholar]
  14. Tian, Y.; Guan, T.; Wang, C. Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach. Sensors 2010, 10, 2885–2900. [Google Scholar] [CrossRef] [PubMed]
  15. Zhang, S.; Zhao, X.; Lei, B. Robust Facial Expression Recognition via Compressive Sensing. Sensors 2012, 12, 3747–3761. [Google Scholar] [CrossRef] [PubMed]
  16. Cheong, C.; Seo, J.; Han, T.D. Advanced contour tracing algorithms based on analysis of tracing conditions. In Proceedings of the 33rd KISS Fall Conference, Seoul, Korea, 20–21 October 2006; Volume 33, pp. 431–436.
  17. Miyatake, T.; Matsushima, H.; Ejiri, M. Contour representation of binary images using run-type direction codes. Mach. Vis. Appl. 1997, 9, 193–200. [Google Scholar] [CrossRef]
  18. Danielsson, P.E. An improvement of Kruse’s segmentation algorithm. Comput. Graph. Image Proc. 1981, 17, 394–396. [Google Scholar] [CrossRef]
  19. Shoji, K.; Miyamichi, J.; Hirano, K. Contour following and reconstruction of binary images stored in run format. Syst. Comput. Jpn. 1999, 30, 1–11. [Google Scholar] [CrossRef]
  20. Toussaint, G. Grids, Connectivity and Contour Tracing. Available online: http://www-cgrl.cs.mcgill.ca/~godfried/teaching/pr-notes/contour.ps (accessed on 26 August 2015).
  21. Suzuki, S.; Abe, K. Topological structural analysis of digitized binary images by border following. Comput. Vis. Graph. Image Proc. 1985, 30, 32–46. [Google Scholar] [CrossRef]
  22. Ghuneim, A. Contour Tracing. 2015. Available online: http://www.imageprocessingplace.com/downloads_V3/root_downloads/tutorials/contour_tracing_Abeer_George_Ghuneim/index.html (accessed on 28 August 2015).
