Subpixel Localization of Isolated Edges and Streaks in Digital Images

Many modern sensing systems rely on the accurate extraction of measurement data from digital images. The localization of edges and streaks in digital images is an important example of this type of measurement, with these techniques appearing in many image processing pipelines. Several approaches attempt to solve this problem at both the pixel level and the subpixel level. While subpixel methods are often necessary for applications requiring the best possible accuracy, they tend to be susceptible to noise, rely on iterative schemes, or require pre-processing. This work investigates a unified framework for subpixel edge and streak localization using Zernike moments with ramp-based and wedge-based signal models. The method described here is found to outperform the current state-of-the-art for digital images with common signal-to-noise ratios. Performance is demonstrated on both synthetic and real images.


Introduction
Digital images frequently contain valuable information about the real-world objects observed by a camera, telescope, or other optical system. This information may be used by a sensing system to understand, interpret, monitor, or analyze the properties of objects contained within the scene. Oftentimes, such systems attempt to distill the dense and complex information content of an image into a sparse set of simple and descriptive primitives, with edges and streaks being especially common examples. Making use of these primitives requires knowledge of their location in the image. Although pixel-level edge/streak localization is often adequate, some applications demand higher accuracy and motivate the need for subpixel localization.
Pixel-level edge localization is a ubiquitous image processing task, with a variety of techniques that can be found in almost every introductory text on image processing. Some popular classical methods are those of Sobel [1], Prewitt [2], Marr-Hildreth [3], and Canny [4], although there are many more. Motivated largely by problems in image segmentation, there has also been recent interest in edge detection and localization using deep learning [5], with notable contemporary examples including DeepEdge [6], DeepContour [7], holistic edge detector (HED) [8], and crisp edge detection (CED) [9].
There are also several methods available for subpixel edge localization. Many of these subpixel methods operate by refining a pixel-level edge guess into a subpixel-level estimate. The approaches for achieving such a subpixel correction vary, but generally belong to one of four different categories: moment-based [10,11], least-squares fitting [12], partial area effect [13], and interpolation [14].
In this work, we treat both edges and streaks as a local (and not global) concept, with each being identifiable by the 2D intensity pattern within a small image patch around a particular image point. Edge points are generally identified by finding pixels possessing a large intensity gradient (attempting to describe image points where there is thought to be an intensity discontinuity). A streak point is identified by finding pixels belonging to a bright (or dark) 1D path against a dark (or bright) background. In either case (edges or streaks), we seek only to localize isolated edge/streak points in this work.
This work presents a localization framework that is equally suitable to finding the subpixel location of edge and streak points within a digital image. Our method belongs to the moment-based category of techniques. We refine recent work on improved edge localization with Zernike moments [15] and then extend this approach to the closely related problem of streak localization. Unlike conventional subpixel edge localization methods using Zernike moments that assume an intensity step function [11], we model the underlying edge intensity function as a linear ramp. We use the same approach to model the underlying streak intensity function as a triangular wedge. This is the first application of Zernike moments (that the authors know of) to the subpixel localization of streaks in a digital image. The framework presented here is computationally efficient, non-iterative, and can be used within most imaging pipelines.
The remainder of this work is organized as follows. Section 2 introduces the coordinate frames and scaling conventions that are used in Section 3 to construct Zernike moments on local image patches. Section 4 describes how to use these Zernike moments for the subpixel localization of both edges and streaks. Performance of this approach is then demonstrated quantitatively on synthetic images (Section 5) and qualitatively on real images (Section 6).

Coordinate Frames and Conventions
Suppose that we have a digital image with N rows and M columns, with pixel intensity values stored in an N × M array (for a monochrome image). Define the u−v coordinate system with its origin in the upper left-hand corner such that pixel centers occur at integer values of u and v. The u-direction is to the right (corresponding to the column number) and the v-direction is down (corresponding to the row number). We presume in this work that a different algorithm (e.g., Sobel [1], Canny [4]) has already produced pixel-level estimates for either an edge or streak location. Assuming such an algorithm has detected m such pixel locations, we denote the set of pixel-level guesses as {ũ_i, ṽ_i}_{i=1}^m ⊂ (Z*)², where Z* is the set of non-negative integers.
The algorithms presented in this work use a small image patch (e.g., 5 × 5 or 7 × 7) centered about a pixel-level estimate of an edge or streak location to compute a small correction to that feature's location. The result is subpixel-level localization of a point belonging to an edge or streak. Furthermore, the moment-based methods to be discussed in Section 3 require the signal to be contained within the unit circle. Thus, data within each small image patch must be scaled to lie within the unit circle. For the i-th patch,

ū = 2(u − ũ_i)/N_p  and  v̄ = 2(v − ṽ_i)/N_p   (1)

where N_p is the size of the image patch (e.g., N_p = 5 is a 5 × 5 patch). We generally constrain N_p to be an odd integer, such that the pixel-level guess occurs at the center of the patch. This scaling ensures that |ū| ≤ 1 and |v̄| ≤ 1 for every point within the square patch, and that ū² + v̄² ≤ 1 within the inscribed circle.

We also find it convenient to define a rotated version of the ū−v̄ coordinate frame with an orientation dictated by the local normal of the edge or streak. Define a frame with coordinate axes ū′ and v̄′ that are rotated by an angle ψ relative to the unprimed frame (Figure 1) such that the ū′ direction is parallel with the local edge/streak normal and v̄′ is parallel to the edge/streak tangent. The direction ū′ is chosen to be positive in the direction from dark to bright for an edge. Alternatively, for streaks, the positive ū′ direction is chosen to be from the patch center towards the streak's center. Thus, by construction, the correction from the pixel-level guess to the subpixel streak location is a small positive update ℓ along the ū′ direction. The subpixel update along the ū′ direction may be either positive or negative for an edge.

Figure 1. Example geometry of a square image patch (N_p = 5, shown in dark red) centered about a pixel-level edge guess {ũ_i, ṽ_i} shown in bright red. The edge has a blur width of 2w and is offset from the pixel-level guess by a distance ℓ.
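As a brief sketch, the patch scaling can be implemented as below. We assume here that Equation (1) takes the form ū = 2(u − ũ_i)/N_p, v̄ = 2(v − ṽ_i)/N_p, which places the whole square patch inside [−1, 1] in each axis; the function name and this exact constant are our illustrative assumptions.

```python
import numpy as np

def scaled_patch_coords(n_p):
    """Scaled (u_bar, v_bar) coordinates of the pixel centers in an
    N_p x N_p patch, assuming u_bar = 2*(u - u_tilde)/N_p."""
    assert n_p % 2 == 1, "odd patch size keeps the guess at the center"
    offsets = np.arange(n_p) - (n_p - 1) / 2.0   # pixel offsets from center
    u_bar, v_bar = np.meshgrid(2.0 * offsets / n_p, 2.0 * offsets / n_p)
    return u_bar, v_bar

u_bar, v_bar = scaled_patch_coords(5)
inside = u_bar**2 + v_bar**2 <= 1.0   # pixel centers inside the unit disk
```

For N_p = 5 the pixel centers land at ±0.8, ±0.4, and 0 in scaled coordinates, so all pixel centers lie strictly inside the unit square.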
The primed frame (rotated by an angle ψ relative to the unprimed image frame) is aligned with the edge, with v̄′ being parallel to the edge and ū′ being normal to the edge. Although this figure shows only an edge, these coordinate frame conventions are the same for both edges and streaks.

Computation of Zernike Moments in Digital Images
It is well-established that image moments are a useful tool for compactly describing the shape of the 2D intensity pattern within an image patch using only a small number of parameters. In general, 2D moments are a weighted average of the 2D signal value, with the weights for a particular moment coming from its corresponding basis function. That is, given a basis function P_nm(u, v), the corresponding moment of the arbitrary 2D signal f(u, v) is computed as

A_nm = ∫∫ P*_nm(u, v) f(u, v) du dv   (2)

where P*_nm is the complex conjugate of P_nm. Since we will be computing moments within small image patches, we have chosen to express all functions in terms of the scaled pixel coordinates {ū, v̄} as defined in Equation (1).
The choice of basis functions P nm is somewhat arbitrary, although it is desirable that the chosen set is both complete and orthogonal. In the case of edge or streak localization, we are looking for basis function sets defined within the unit disk. If P nm is chosen to be a polynomial in two variables, there are an infinite number of complete orthogonal sets [16], with the Zernike polynomials being the most commonly used.

Zernike Polynomials
Zernike polynomials, originally developed to aid in the study of spherical aberrations in optical lenses [17], have since found uses in a broad array of applications [11,15,18-22]. The Zernike polynomials may be written in either Cartesian or polar coordinates, with the polar form being the most commonly used [11],

P_nm(ρ, θ) = R_nm(ρ) exp(jmθ)   (3)

where j = √−1, ρ = √(ū² + v̄²), and θ = atan2(v̄, ū). These polynomials form a complete set over a continuous space contained within the unit circle. The 1D radial polynomials, R_nm(ρ), and their corresponding 2D Zernike polynomials, P_nm(ū, v̄), may be computed for a few common combinations of n and m (for example, R_11(ρ) = ρ and R_20(ρ) = 2ρ² − 1, giving P_11 = ū + jv̄ and P_20 = 2(ū² + v̄²) − 1), where the order n and repetition m [15] (or angular dependence [23]) can assume any values that satisfy |m| ≤ n with n − |m| even.
It is straightforward to show that Zernike polynomials are orthogonal under an L2 inner product,

∫∫ P^(α)(ū, v̄) P^(β)*(ū, v̄) dū dv̄ = Q^(α) δ_αβ   (12)

where P^(α) and P^(β) are two arbitrary polynomials of the set, P^(β)* is the complex conjugate of P^(β), δ_αβ is the Kronecker delta function, and the integral is taken over the unit disk. Additionally, Q^(α) is the normalization coefficient and may be computed as [23]

Q_nm = π / (n + 1)   (13)
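For reference, the radial polynomials admit the standard factorial-sum form, which can be evaluated directly. The sketch below uses that well-known formula (the function names are ours); R_11(ρ) = ρ and R_20(ρ) = 2ρ² − 1 fall out as special cases.

```python
import math
import numpy as np

def zernike_radial(n, m, rho):
    """Radial polynomial R_nm(rho) via the standard factorial sum."""
    m = abs(m)
    assert m <= n and (n - m) % 2 == 0, "require |m| <= n and n - |m| even"
    rho = np.asarray(rho, dtype=float)
    out = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * math.factorial(n - s)
             / (math.factorial(s)
                * math.factorial((n + m) // 2 - s)
                * math.factorial((n - m) // 2 - s)))
        out += c * rho ** (n - 2 * s)
    return out

def zernike_poly(n, m, u, v):
    """2D polynomial P_nm = R_nm(rho) * exp(j*m*theta) on the unit disk."""
    rho = np.hypot(u, v)
    theta = np.arctan2(v, u)
    return zernike_radial(n, m, rho) * np.exp(1j * m * theta)
```

As a sanity check, P_11 evaluated at (u, v) = (0.3, 0.4) returns u + jv, the Cartesian form noted above.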

Zernike Moments for a Continuous 2D Signal
Zernike moments are formed by using the Zernike polynomials from Equation (3) as the basis functions in the 2D moment equation (Equation (2)). We express such a moment as

Â_nm = (1/Q_nm) ∫∫ f(ū, v̄) P*_nm(ū, v̄) dū dv̄   (14)

although we often find that scaling with the normalization coefficient is not required,

A_nm = ∫∫ f(ū, v̄) P*_nm(ū, v̄) dū dv̄   (15)

This, of course, leads to the simple scaling relation

Â_nm = A_nm / Q_nm   (16)

Rotational Properties of Zernike Moments
Zernike moments of repetition m = 0 are rotationally invariant, as the value of the moment A_n0 is unaffected by the orientation of the underlying signal relative to the ū−v̄ coordinate system. For other values of m (i.e., for m ≠ 0), we find that the moment A_nm changes as the orientation of the underlying signal changes.
Consider, for example, the moment A_nm for a particular image patch as computed in the ū−v̄ frame. Now consider the moment A′_nm for this same image patch as computed in the ū′−v̄′ frame that has been rotated by an angle ψ relative to the unprimed frame (see Figure 1). Noting that θ′ = θ − ψ, it is clear from Equations (3) and (15) that

A_nm = A′_nm exp(−jmψ)   (17)

It is this relation that will ultimately allow us to determine the orientation of an edge or streak from the moment A_11.

Zernike Moments for a Digital Image
A digital image, I(u, v), is a quantized representation of the continuous signal f (u, v). The image I(u, v) is presumed to be an array of digital numbers, with integer intensity values (e.g., 0-65,535 for a 16-bit image) occurring at integer values of u and v.
In this case, we approximate the Zernike moment integral from Equation (15) with a double summation. Therefore, assuming a local image patch of size N_p × N_p centered at a pixel-level edge/streak guess of {ũ_i, ṽ_i}, one may compute the moment as

A_nm ≈ Σ_{r=−p}^{p} Σ_{c=−p}^{p} [M_nm]_{r,c} I(ũ_i + c, ṽ_i + r)   (18)

where p = (N_p − 1)/2 is a non-negative integer (since N_p is an odd integer greater than one). The mask M_nm is an N_p × N_p matrix of values found by the integration of P*_nm over the corresponding pixel and within the patch's inscribed circle. Values of M_11 and M_20 are shown for a 5 × 5 and 7 × 7 mask in [15]. It is observed that Equation (18) is simply an image correlation, such that one may compute the moment everywhere in the image according to

A_nm = M_nm ∗ I   (19)

where ∗ is the 2D correlation operator. The edge and streak localization methods presented here will ultimately only use the moments A_11 and A_20. Of note is that M_20 is real valued such that

A_20 = Re[M_20] ∗ I   (20)

We observe, however, that M_11 is complex valued,

A_11 = Re[M_11] ∗ I + j (Im[M_11] ∗ I)   (21)

Fortunately, given the structure of M_11, one only needs to keep track of the real component in practice since [15]

Re[A_11] = Re[M_11] ∗ I   (22)

Im[A_11] = (Re[M_11])^T ∗ I   (23)

Thus, we may compute all the necessary moments through three simple image correlations (which, in practice, only need to be computed at the pixel-level edge or streak locations and not at every point in the image).
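A numerical sketch of the mask construction and the per-patch moment computation follows. The exact 5 × 5 and 7 × 7 mask values are tabulated in [15]; here we approximate them by oversampled integration as a stand-in, and we assume the conjugated Cartesian forms P*_11 = ū − jv̄ and P_20 = 2(ū² + v̄²) − 1 together with a patch scaled onto [−1, 1]².

```python
import numpy as np

def zernike_masks(n_p, oversample=100):
    """Approximate N_p x N_p masks M_11 and M_20 by numerically integrating
    conj(P_11) = u - j*v and P_20 = 2*(u^2 + v^2) - 1 over each pixel's
    footprint, restricted to the patch's inscribed unit circle."""
    n = n_p * oversample
    t = (np.arange(n) + 0.5) / n * 2.0 - 1.0      # sample centers in (-1, 1)
    u, v = np.meshgrid(t, t)
    inside = (u**2 + v**2 <= 1.0).astype(float)   # inscribed circle support
    cell = (2.0 / n) ** 2                          # area of one sample cell
    p11_conj = (u - 1j * v) * inside
    p20 = (2.0 * (u**2 + v**2) - 1.0) * inside
    # collapse the fine grid pixel-by-pixel into the N_p x N_p masks
    shape = (n_p, oversample, n_p, oversample)
    m11 = p11_conj.reshape(shape).sum(axis=(1, 3)) * cell
    m20 = p20.reshape(shape).sum(axis=(1, 3)) * cell
    return m11, m20

def patch_moments(patch, m11, m20):
    """A_11 and A_20 of one patch: Equation (18) as a multiply-and-sum,
    i.e., a 2D correlation collapsed to a single point."""
    patch = np.asarray(patch, dtype=float)
    return np.sum(patch * m11), np.sum(patch * m20)
```

Because the disk integrals of both P_11 and P_20 vanish, a perfectly flat patch produces A_11 ≈ A_20 ≈ 0, which is a useful sanity check on any mask implementation.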

Moment-Based Edge and Streak Localization
The same procedure may be used for both edge and streak localization. In both cases, the image data in a small N_p × N_p image patch around a pixel-level edge/streak guess is scaled according to Equation (1) and the Zernike moments A_11 and A_20 are computed (Equations (20), (22) and (23)). These moments are used to compute the edge/streak orientation (ψ) and the distance (ℓ) along this direction by which the pixel-level edge/streak guess should be adjusted. Consequently, both the edge and the streak are corrected to subpixel accuracy by

ū_i = ℓ cos ψ,  v̄_i = ℓ sin ψ   (24)

which, after rearranging Equation (1), yields the correction we seek in practice

u_i = ũ_i + (N_p/2) ℓ cos ψ,  v_i = ṽ_i + (N_p/2) ℓ sin ψ   (25)

The orientation ψ of both edges and streaks is found in the same way and using the same equation. The difference between the edge and streak corrections is simply how the Zernike moments are used to compute ℓ.
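The correction step itself is a one-liner once ℓ and ψ are in hand. A sketch, assuming the patch scaling ū = 2(u − ũ_i)/N_p so that the N_p/2 factor converts the unit-disk correction back to pixels (adjust the factor if a different scaling convention is used):

```python
import numpy as np

def apply_subpixel_correction(u_tilde, v_tilde, ell, psi, n_p):
    """Move a pixel-level guess by ell (unit-disk units) along the
    edge/streak normal direction psi, converting back to pixel units."""
    scale = n_p / 2.0
    u = u_tilde + scale * ell * np.cos(psi)
    v = v_tilde + scale * ell * np.sin(psi)
    return float(u), float(v)
```

For example, a guess at (10, 20) with ℓ = 0.2, ψ = 0, and a 5 × 5 patch moves half a pixel in the u-direction.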

Computing Edge or Streak Orientation
Determining the normal direction to an edge or streak is achieved in the exact same manner, with the final equation being equivalent for both. By construction, and as can be seen from Figure 1, the intensity value is only a function of ū′ (i.e., not a function of v̄′) for both the edge and the streak. We see immediately from the form of P_11 in Equation (7) that the integrand of the imaginary part of A′_11 is odd in v̄′, such that

Im[A′_11] = 0   (27)

Thus, recalling that exp(−jmψ) = cos(mψ) − j sin(mψ), we may rewrite Equation (17) as (for m = n = 1)

A_11 = A′_11 (cos ψ − j sin ψ)   (28)

such that

Re[A_11] = A′_11 cos ψ  and  Im[A_11] = −A′_11 sin ψ   (29)

The streak orientation may be found using the equation for the imaginary component of A_11. Observing that

tan ψ = −Im[A_11] / Re[A_11]   (30)

we find that the orientation of the streak is computed in terms of the moment A_11 (computed using Equations (22) and (23)) as

ψ = atan2(−Im[A_11], Re[A_11])   (31)

This relation has been known for some time for edges [11,15]. Although obvious within the present framework, this represents the first extension of Equation (31) to streaks (of which the authors are aware).
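In code, the orientation recovery is a single quadrant-aware arctangent. A sketch, with the caveat that the signs depend on the moment and image-axis conventions (with a conjugated-polynomial moment definition and a real, positive edge/streak-aligned moment A′_11, the form below applies; other conventions flip the signs):

```python
import numpy as np

def edge_streak_orientation(a11):
    """Normal direction psi from the complex moment A_11, assuming the
    rotated moment A'_11 = A_11 * exp(j*psi) is real and positive."""
    a11 = complex(a11)
    return np.arctan2(-a11.imag, a11.real)
```

For instance, if A′_11 = 2 and ψ = 0.3, then A_11 = 2(cos 0.3 − j sin 0.3) and the function recovers ψ = 0.3.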

Computing for Edges
An edge is generally understood to describe a discontinuity in image intensity in one direction, with little intensity change in the direction orthogonal to this discontinuity. Real grayscale images, however, rarely exhibit a true intensity discontinuity. Instead, image blur and pixel quantization cause the intensity to change rapidly over a small distance (a few pixels). Thus, we seek areas of high intensity gradient rather than true discontinuities. It has long been known [10] that using a step function for the edge model within a moment-based subpixel edge localization algorithm produces a biased edge update if the image is blurred. This was one of the motivations for introducing a ramp edge model in [15].
In many practical image processing problems, the point spread function (PSF) due to camera defocus and other optical effects is well modeled as a 2D Gaussian [24]. Consequently, the image containing a crisp edge (a true discontinuity) may be blurred according to

I_blur = K_G ∗ I   (32)

where I is the perfectly crisp image, K_G is the Gaussian kernel, I_blur is the blurred image, and ∗ denotes 2D convolution. The one-dimensional intensity profile taken perpendicular to the edge is sometimes referred to as the edge spread function (ESF), which will generally take the shape of a sigmoid function. To avoid the mathematical complexities of the sigmoid function within the Zernike moment integrals, it was observed in [15] that a linear ramp provides an adequate engineering approximation for most practical cases. The objective, therefore, is to relate the half-width of the linear ramp (w; the full width is 2w, see Figure 2) to the width of the Gaussian kernel approximating the camera PSF (σ). We do this using the linear relationship

w ≈ k_edge σ   (33)

where k_edge is the scaling we seek. In [15], it was suggested to select k_edge = 1.66. We performed a more comprehensive study and found that choosing k_edge = 1.80 produced superior performance, especially as the SNR became very large. In general, we found reduced sensitivity to the choice of k_edge as the images became noisier (lower SNR).

Therefore, we choose to model an edge as a ramp whose intensity changes linearly between a background intensity (h) and a foreground intensity (h + k). The midpoint of this transition is defined to occur at a distance ℓ from the image patch center, and the transition has a full width of 2w. Since we are using Zernike moments, we define all of these quantities within the unit disk (Figure 2). By choosing to define the edge in the ū′−v̄′ frame, it is straightforward to write the intensity as a function of ū′ only,

f(ū′) = h  for  ū′ < ℓ − w
f(ū′) = h + k(ū′ − ℓ + w)/(2w)  for  |ū′ − ℓ| ≤ w   (34)
f(ū′) = h + k  for  ū′ > ℓ + w

Using this ramp edge model, it is possible to analytically solve the double integral in the moment equation from Equation (15) in the edge-aligned (i.e., primed) frame.
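The ramp model is simple enough to state directly in code. A sketch, assuming the piecewise-linear profile described above (background h, foreground h + k, full width 2w, midpoint at ℓ), expressed as a single clamp:

```python
import numpy as np

def ramp_edge(u_prime, h, k, w, ell):
    """Ramp edge profile: h below the ramp, h + k above it, and linear
    over the full width 2*w centered at the midpoint ell."""
    u = np.asarray(u_prime, dtype=float)
    ramp = h + k * (u - (ell - w)) / (2.0 * w)
    return np.clip(ramp, h, h + k)
```

For h = 10, k = 40, w = 0.2, and ℓ = 0.1, the profile passes through the midpoint value h + k/2 = 30 at ū′ = ℓ.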
We do this for the moments A′_11 and A′_20, leading to analytic expressions for each moment in terms of h, k, w, and ℓ.

Figure 2. Ramp edge model (Equation (34)) within the unit circle, including background intensity h, peak intensity of edge k, edge width w, and distance ℓ from the origin to the midpoint of the edge.
Looking at the expressions for A′_11 and A′_20, it is immediately evident that the intensity-dependent variable k (which describes the magnitude of the intensity change across the edge) cancels out if one considers the ratio Q_E = A′_20/A′_11. In many cases, the edge width w is known (e.g., from the imaging system point spread function), such that Q_E is a function of only ℓ. Although the analytic expression for Q_E is rather cumbersome, it was found in [15] that it is well approximated by a simpler expression, which may be rearranged to solve for the unknown ℓ̂_E in Equation (41). Note that the ratio Q_E is easy to compute in practice from the raw image moments found in a digital image,

Q_E = A_20 / (A_11 exp(jψ))   (42)

where A_20 is computed from Equation (20), A_11 from Equations (22) and (23), and ψ from Equation (31). Here we have used the facts that A′_20 = A_20 (rotational invariance for m = 0) and A′_11 = A_11 exp(jψ) (Equation (17)).

Computing for Streaks
As a natural extension to the ideal step-function ESF, we model the ideal line spread function (LSF) as an impulse (where the LSF is defined as the 1D intensity profile perpendicular to the streak). As before, the perfectly crisp image is blurred with a Gaussian kernel, thus spreading out the line intensity, with the resulting LSF being a Gaussian PDF. Rather than deal with the mathematical complexities of the Gaussian PDF, we choose to model the streak LSF as a wedge. To make practical use of the wedge model, it is necessary to determine the relationship between the wedge half-width (w, see Figure 3) and the Gaussian kernel width (σ),

w ≈ k_streak σ   (43)

where k_streak is the parameter we seek. We found that choosing k_streak = 0.90 provided the best results, with low SNR images exhibiting less sensitivity to the exact choice of this parameter.

The small image patch centered about the pixel-level guess is assumed to have a constant background intensity of h and to contain a streak of peak intensity h + k. The wedge has a full width of 2w, with its peak intensity occurring at a distance ℓ from the image patch (or disk) center. The sides of the wedge are linear ramps transitioning between the background and the streak's ridgeline. This is shown pictorially on the unit disk in Figure 3. As with the edge, we choose to define the streak model in the ū′−v̄′ frame such that the intensity is a function of ū′ only (and not a function of v̄′),

f(ū′) = h + k(1 − |ū′ − ℓ|/w)  for  |ū′ − ℓ| ≤ w,  and  f(ū′) = h  otherwise   (44)

The analytical values of A′_11 and A′_20 may be found by evaluating the double integral from Equation (15) in terms of h, k, w, and ℓ, with intermediate terms B_1 and B_2 as defined in Equation (38). The ratio of A′_20 to A′_11 eliminates k, thus providing a quantity Q_S that is a function of only ℓ and w. Assuming the streak width w is known, we seek to rearrange Q_S to solve for the unknown ℓ. The complicated form of Q_S after substitution of Equations (45) and (46) makes finding an analytic solution difficult for arbitrary values of w and ℓ.
Fortunately, it is straightforward to find an approximation that is good enough for most practical image processing applications.
We know that streaks are thin, so it is instructive to explore what happens to Q_S as w → 0. We find that this limit, which we denote Q_S0, does permit a simple analytic solution: the limiting relation is quadratic in ℓ and may be solved for ℓ̂_S0. To choose the correct root, observe that ℓ̂_S ≥ 0 by construction. Thus we seek the root that is approximately zero when Q_S0 is a large negative number, which only happens when the plus sign is chosen in Equation (50). Therefore, the limiting solution ℓ̂_S0 takes the form of Equation (52).

This analytic result may be generalized to the situation where w > 0, which does not appear to permit an exact analytic solution. Therefore, we write a parameterized expression for ℓ̂_S that simplifies exactly to the form of Equation (52) when w = 0 and fit the parameters in a least squares sense. Using this approach, consider a model of the form

ℓ̂_S ≈ √(a_1² Q_S² + a_2 Q_S w + a_3 w² + a_4 w + a_5) + a_6 Q_S + a_7 w + a_8   (53)

We found the terms associated with a_1, a_3, a_5, and a_6 to dominate the estimate of ℓ̂_S, with the remaining terms contributing relatively little. Furthermore, it was found that a_1 ≈ a_6 regardless of the test set-up. Therefore, discarding the unimportant terms and letting a_1 = a_6, we performed a three-parameter fit for the streak correction of the form

ℓ̂_S ≈ √(a_1² Q_S² + a_3 w² + a_5) + a_1 Q_S   (54)

The least squares fit found the values of a_1 and a_5 to exactly match the analytically derived coefficients for ℓ̂_S0 in Equation (52), and the value of a_3 was found empirically. Substituting these coefficients yields the empirically derived expression for the streak update for arbitrary w, given as Equation (57). As with the edge update, note that the streak ratio Q_S is easy to compute in practice from the raw image moments found in a digital image,

Q_S = A_20 / (A_11 exp(jψ))   (58)

where A_20 is computed from Equation (20), A_11 from Equations (22) and (23), and ψ from Equation (31). Thus, with w and Q_S known, Equation (57) may be used to solve for ℓ̂_S for a given image patch.
Observe that Q_E and Q_S are the same moment ratio, A_20/A_11; hence, the equations to compute these ratios from the raw image moments are the same (compare Equation (42) and Equation (58)). What differs is the assumption of the underlying signal (a ramp or a wedge), leading to a different relationship (Equation (41) or Equation (57)) between the moment ratio and the subpixel location of the edge or streak.

Numerical Validation on Synthetic Images
The performance of the edge and streak localization methods presented in this work was quantitatively evaluated using synthetic images. We find synthetic images to be especially useful in this context since the true continuous location of every image feature is known. The perfectly known continuous underlying signal may be blurred to simulate camera defocus and quantized (both spatially and in intensity) to simulate differing image resolutions. Further, noise may be added with a prescribed intensity, allowing the unambiguous evaluation of performance as a function of signal-to-noise ratio (SNR). This is important, as the localization of edges and streaks is known to become more challenging as SNR decreases [25,26]. Of particular note is that our new streak localization method works for 1D paths of arbitrary shape, whereas most existing streak detection algorithms, especially for faint (low SNR) streaks, presume the streaks are straight lines.
For the examples presented here, perfect images were blurred by using a Gaussian point spread function (PSF). After blurring, zero-mean Gaussian noise was added to achieve the specified SNR.
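A minimal sketch of this synthetic-image recipe is given below. The erf profile is exactly what convolving an ideal step with a Gaussian PSF of standard deviation σ produces in closed form; the edge geometry (a line through the image center at angle ψ) and the SNR definition (step height divided by noise standard deviation) are our illustrative assumptions.

```python
import math
import numpy as np

def synthetic_edge_image(size=65, psi=0.3, sigma=1.0, snr=20.0,
                         h=20.0, k=100.0, seed=0):
    """Blurred step edge through the image center at angle psi, plus
    zero-mean Gaussian noise scaled to the requested SNR."""
    rng = np.random.default_rng(seed)
    u, v = np.meshgrid(np.arange(size, dtype=float),
                       np.arange(size, dtype=float))
    c = (size - 1) / 2.0
    # signed distance from each pixel center to the edge line
    d = (u - c) * math.cos(psi) + (v - c) * math.sin(psi)
    # Gaussian-blurred step = normal CDF of the signed distance
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(d / (sigma * math.sqrt(2.0))))
    clean = h + k * cdf
    noisy = clean + rng.normal(0.0, k / snr, size=clean.shape)
    return clean, noisy
```

The true edge location is known analytically (the zero-crossing of d), which is what makes the Monte Carlo error statistics in this section unambiguous.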

Ideal Edge Localization Performance
It is important to quantify the error associated with the approximations used to arrive at the analytic edge update given in Equation (41). Therefore, as a bounding case, suppose that we perfectly compute the Zernike moments for a noise-free continuous signal. In this situation, the error in ℓ̂_E is given by the contours in Figure 4 for different situations. These contours visually demonstrate the performance improvement afforded by switching from the step-function edge model (red contours) to the ramp edge model (black contours). The results shown here are identical to the observations of Christian in [15].

Digital Image Edge Localization Performance
Our method compared favorably with other existing techniques when processing synthetic digital imagery. This was assessed through a Monte Carlo analysis in which we evaluated the performance of different algorithms for images having varying amounts of blur and noise. Figure 5 shows edge localization error for our technique (black contours) compared against the moment-based solution with a step-function edge model [11] and the partial area effect (PAE) method [13]. Results for both of the moment-based methods shown here assume a 5 × 5 pixel mask.
Note that the PAE method from [13] was chosen as one of the two comparison methods in Figure 5 since this represents the current state-of-the-art. Indeed, this method has recently been used for the subpixel localization of edges in a wide variety of applications [27][28][29].
We observe that the PAE algorithm produced nearly perfect edge localization in cases with no noise (infinite SNR; off the right-hand side of Figure 5). The Zernike moment methods tended to perform better than the PAE method as noise increased (as SNR decreased; towards the left-hand side of Figure 5). The method presented in this work outperforms the PAE method for most real-life SNR values.
Example performance of our subpixel edge localization algorithm in different noise/blur cases is shown in Figure 6. This example shows localization of the edge of a circle. Clear improvement is evident in all cases, as the algorithm moves the pixel-level edge guess (red ×) towards the true edge location (black line). We know the true edge location since these are synthetic images. Figure 5. Contours of edge localization error (in pixels, assuming a 5 × 5 mask) in a digital image for our method from Equation (41) (black), the step function approximation using Zernike moments (red) [11], and the partial area effect (blue) [13] as a function of SNR and blur. Error statistics are computed from a Monte Carlo analysis consisting of 5000 randomized images at each SNR and blur combination.

Ideal Streak Localization Performance
As with the case of edges, we begin the numerical assessment of our subpixel streak localization method by considering the case of a continuous signal. This allows us to directly quantify the error associated with the approximations used to arrive at the analytic expression in Equation (57). We considered all reasonably plausible combinations of streak location (ℓ) and streak width (w) and produced contours of errors in the estimate ℓ̂_S, as shown in Figure 7. These errors are low enough to be negligible when applied to a pixelated image.

Digital Image Streak Localization Performance
Our Zernike moment method also performed well in the subpixel localization of streaks. We performed a Monte Carlo analysis where streak localization error was recorded for varying amounts of image blur and noise. The results are shown as contours in Figure 8. As expected, localization performance decreases with increased noise and blur. Example performance of our subpixel streak localization algorithm in different noise/blur cases is shown in Figure 9. This example shows localization of a circular streak. Clear improvement is evident in all cases, as the algorithm moves the pixel-level streak guess (red ×) towards the true streak location (black line). We know the true streak location since these are synthetic images. Figure 9. Qualitative visualization of subpixel streak localization performance at varying levels of blur and SNR. The left column shows the full synthetically generated image and the right column shows a small area within that image. The rows represent different noise and blur levels (top: no noise or blur; middle: noise only (approximately 28.5 peak signal-to-noise ratio); bottom: noise and blur (2D Gaussian kernel with standard deviation of 0.3 pixels)). The black line is the exact location of the true streak center.

Validation on Real Data
After confirming that estimated edge and streak locations agree with the truth in simulated images, we apply our method to real digital images. As these real-world images do not provide perfect subpixel knowledge of the edge or streak location, verification of results is by visual inspection and is largely qualitative.
It is important to remember that the algorithm presented here only performs the subpixel localization (i.e., correction) on pixel-level location guesses (e.g., using Sobel [1], Canny [4], or other method); any edges or streaks that the higher-level algorithm fails to identify will not contribute to the final result. Note that these pixel-level guesses may be found automatically or manually. Regardless of how they are found, the subpixel correction discussed in this manuscript is automatic.
This section includes a number of example images with the accompanying results from the methods proposed in this paper. These examples show the raw image on the left-most frame, followed by two sections of the image in grayscale containing streaks or edges of interest (center and right frame). We highlight performance by progressively zooming in on a specific portion of the image (moving left to right), with blue boxes indicating the region-of-interest for the subsequent frame.
The middle frame of each example only shows the subpixel estimate overlay (green dots). The right frame of each example shows both the pixel-level guess overlay (red ×) and the subpixel estimate overlay (green dots). The right frame also shows the edge or streak estimates connected by a line to help illustrate the improvement in smoothness naturally produced by the subpixel correction.

Figure 10 shows an application to natural disaster management that illustrates the difference in the shores of the Mississippi River in the aftermath of a flood (bottom) and its normal banks (top). Figures 11 and 12 show the application of the proposed technique for the subpixel localization of common road surface markings (e.g., pedestrian crosswalk markings, lane markings). Figures 13-15 show various applications to space exploration. Finally, Figures 16 and 17 highlight the potential use of this method in medical imaging (e.g., tracing the routes of blood vessels in a retinal scan, microscope imaging of tumors). The diversity of example images is intended to emphasize that the techniques presented in this manuscript are application agnostic and can be applied to a wide variety of image processing tasks.

Figure 10. Images of the Mississippi River taken by the Landsat-5 spacecraft, where we seek to localize the river banks. The top image (LM05_L1TP_025032_20120830_20180521_01_T2) was collected on 30 August 2012 by the Multispectral Scanner System (MSS) and shows the river during normal conditions. The bottom image (LT05_L1TP_025032_20110508_20160902_01_T1) was collected on 8 May 2011 by the Thematic Mapper (TM) and shows the river after a major flooding event. The red × symbols denote pixel-level edge estimates and green dots denote the refined subpixel localization estimates. Image data are available from the U.S. Geological Survey (USGS) [30].

Figure 16. Image of a retinal scan for a healthy eye, where we seek to localize blood vessels.
The red × symbols denote pixel-level streak estimates and green dots denote the refined subpixel localization estimates. The original image is im00032 from the STARE database [32,33]. Figure 17. Microscope image from an in vitro tumor model embedded in a hydrogel. We seek to localize the edges of tumors to measure their growth over time [34,35]. The red × symbols denote pixel-level edge estimates and green dots denote the refined subpixel localization estimates. The original image is courtesy of Dr. Kristen Mills of Rensselaer Polytechnic Institute.

Conclusions
Many modern sensing systems rely on the accurate extraction of measurement data from digital images. The localization of edges and streaks in digital images is an important example of this type of measurement, with these techniques appearing in many image processing pipelines.
Zernike moments are powerful tools in image processing and have been used for subpixel edge localization for over 25 years. In this manuscript, we describe a new way to exploit Zernike moment data to produce subpixel edge estimates, resulting in improved localization performance relative to earlier techniques using Zernike moments to achieve this same task. We also show how this same framework can be extended to the task of subpixel localization of streaks. As far as the authors know, this represents the first application of Zernike moments to subpixel streak localization.
Correcting a pixel-level guess of either an edge or a streak requires use of only two Zernike moments (A 11 and A 20 ), with both of these moments being computed over a small image patch centered about the pixel-level guess. One of the principal innovations of this work is the use of a linear ramp (for an edge) or triangular wedge (for a streak) signal model. These simplified models make it possible to refine the pixel-level guess to subpixel accuracy using an analytic function of these two moments and knowledge of the edge/streak width. Furthermore, we show this new method to be tolerant to noise and to outperform many existing methods. Performance was quantitatively evaluated on synthetic images (localization error less than 0.1 pixel for both edges and streaks) and qualitatively evaluated on real images. Applications were shown for remote sensing, localization of road markings, space exploration, and medical imaging.