Article

The Impact of Curviness on Four Different Image Sensor Forms and Structures

Department of Technology and Aesthetics, Blekinge Tekniska Högskola, 37141 Karlskrona, Sweden
*
Author to whom correspondence should be addressed.
Sensors 2018, 18(2), 429; https://doi.org/10.3390/s18020429
Submission received: 15 December 2017 / Revised: 12 January 2018 / Accepted: 29 January 2018 / Published: 1 February 2018
(This article belongs to the Section Physical Sensors)

Abstract:
The arrangement and form of the image sensor have a fundamental effect on any further image processing operation and image visualization. In this paper, we present a software-based method to change the arrangement and form of pixel sensors that generates hexagonal pixel forms on a hexagonal grid. We evaluate four different image sensor forms and structures, including the proposed method. A set of 23 pairs of images, randomly chosen from a database of 280 pairs, is used in the evaluation. Each pair of images has the same semantic meaning and general appearance; the major difference between them lies in the sharp transitions of their contours. The curviness variation is estimated by the effect of first and second order gradient operations, Hessian matrix analysis and critical point detection on the generated images, which differ in grid structure, pixel form and virtually increased fill factor, the three major properties of the sensor characteristics. The results show that the grid structure and the pixel form are the first and second most important properties. Several dissimilarity parameters are presented for curviness quantification, among which the use of extremum points achieved the most distinctive results. The results also show that the hexagonal image is the best image type for distinguishing the contours in the images.

1. Introduction

The arrangement and form of photoreceptors vary from the fovea to the periphery of the retina. This consequence of evolution suggests that the arrangement and form of camera pixel sensors should be variable as well. However, practical issues and the history of camera development have led us to use fixed arrangements and forms of pixel sensors. Our previous works [1,2] showed that, despite hardware limitations, it is possible to implement a software method to change the size of pixel sensors. In this paper, we present a software-based method to change the arrangement and form of pixel sensors.
The pixel sensor arrangement is often referred to as the grid structure. Most available cameras have rectangular grid structures. Previous works [3,4] have shown the feasibility of converting the rectangular to the hexagonal grid structure by a half pixel shifting method (i.e., a software-based approach). Generation of the hexagonal pixel form is generally achieved by interpolation of the intensity values of the rectangular pixel form. In this paper, we present a method, based on our previous works, that maximizes the size of the rectangular pixel form and generates a hexagonal pixel form on a hexagonal grid. Each original rectangular pixel form is deformed into a hexagonal one by modelling the photons incident onto the sensor surface. To the best of our knowledge, there is no previous method which can offer hexagonal deformation of the pixel form on a hexagonal grid.
The comparison of different grid structures or different pixel forms is a challenging task and should be directed toward a more specific task. Since human vision is highly evolved to detect objects in dynamic natural scenes, gradient computation, an elementary operation in object detection, becomes an interesting and appropriate candidate for this specific task. We have focused our investigation on the effect of the sharp transitions in the contours of objects, which is estimated by first and second order gradient computation on the images generated by the different grid structures and different pixel forms. Two categories of images, having curved versus linear edges of the same object in a pair of images, are used to estimate the detectability of curviness for each of the four considered sensor structures.
This paper is organized as follows: in Section 2, related research on hexagonal grid resampling and form is explained. The generation of the four image types is then explained in Section 3. Section 4 and Section 5 present the methodology of curviness quantification and the experimental setup, respectively, and the results are shown and discussed in Section 6. Finally, Section 7 summarizes the work described in this paper.

2. Background

Due to their higher sampling efficiency, consistent connectivity and higher angular resolution [3] in comparison to square grids, hexagonal grids have been investigated over the last four decades in numerous methods and applications [3,5,6,7], including image reconstruction [5], edge detection [7,8], image segmentation [9], and motion estimation [10]. Different algorithms and mathematical models have emerged in recent years to acquire hexagonal grids. For example, the rectangular grid can be suppressed in rows and columns alternately and be sub-sampled, i.e., by a half-pixel shifting method [11]. In this way, a bigger hexagonal pixel is generated at the cost of obtaining a lower resolution than the original rectangular grid. In this method, the distance between rows is changed by a factor of $\sqrt{3}/2$, and the pixel shifting can be achieved, e.g., by implementing normalized convolution [12]. The significance of such a structure lies in the equidistant sampling points and their 60 degree intersections. In Yabushita et al. [5], the pseudohexagonal elements are composed of small square pixels with an aspect ratio of 12:14, which was later implemented by Jeevan et al. with a different ratio of 9:8 [13]. In the spiral architecture of He et al. [14], four square pixels are averaged to generate a hexagonal pixel. Based on the spiral architecture, a design procedure for the development of hexagonal tri-directional derivative operators is presented in [15]; these operators can be applied directly to hexagonal images and can improve both the efficiency and accuracy of feature extraction on conventional intensity images. Although the architecture preserves the main properties of objects, it loses some degree of resolution, which has an impact especially on the result of edge detection applications [8]. Later this architecture was improved by Wu et al. [16], by mapping the rectangular grids to hexagonal ones, processing the images on hexagonal grids, and remapping the results to the square grids. By processing images on a hexagonal grid, less distortion was observed [3]. All the above software-based methods have one major property in common: they convert the rectangular grid to the hexagonal one using a linear combination of rectangular pixels. A technique for resampling digital images on this pseudohexagonal grid using three interpolation kernels and one blurring kernel is proposed in [17]. A new spline based on a least-squares approach, used for converting to a hexagonal lattice, is presented in [18] and has been demonstrated to achieve better quality than traditional interpolative resampling. In the most recent research, Ref. [19] introduced a method to convert images from square to hexagonal lattices in the frequency domain using the Fourier transform. In our approach, however, the rectangular pixels initiate a non-linear learning model based on the photons incident onto the sensor surface. The approach is based on our earlier works [20,21], where the fill factor of an arbitrary image was estimated and used to obtain an enhanced image, as captured by a 100% fill factor sensor.
Recently, as image quality and the preservation of quality during operations such as translation, rotation, and super resolution have become important issues to the research community, implementing a hexagonal grid has been shown to be one of the alternative solutions [12,22]. The hexagonal grid is our visual system's solution for observing our complex environmental scenes. If such natural scenes have had a great impact on the evolution of our visual system, i.e., on the creation of the hexagonal grid in the fovea of the retina, then the justified question is which features in a natural scene, and in its dynamical alteration, cause such an impact. The pioneering work was done by the Gestalt psychologists and, in more detail, by Rubin [23], who first demonstrated that contours contain most of the information related to object perception, such as shape, color and depth. In fact, by investigating simple conditions like those used by the Gestalt psychologists, mostly consisting of contours only, Pinna et al. [24] demonstrated that the phenomenal complexity of material attributes emerges through appropriate manipulation of the contours. Bar et al. [16] showed in their psychological study that our visual system prefers curved visual objects; i.e., the physical property of objects in a scene which is manifested in sharp transitions in their contours has a critical influence on our perception of those objects. Other studies, such as [25], show the capacity of the hexagonal grid for detecting sharp transitions in the contours of objects. In this paper, we implement the curviness, i.e., the sharp transitions in the contours of objects, as a comparison feature to evaluate four different grid structures.

3. Image Generation

In this section, we explain the generation of the hexagonal enriched, square enriched, half pixel shift enriched, and half pixel shift images from an original image which has a square pixel form on a square grid.

3.1. Generation of the Hexagonal Enriched Image (Hex_E)

The hexagonal enriched image has a hexagonal pixel form on a hexagonal grid. The generation process is similar to the resampling process in [1], which has three steps: projecting the original image pixel intensities onto a grid of sub-pixels; estimating the values of subpixels at the resampling positions; estimating each new hexagonal pixel intensity in a new hexagonal arrangement. The three steps are elaborated in the following.

3.1.1. A Grid of Virtual Image Sensor Pixels Is Constructed

Each pixel is projected onto a grid of L × L square sub-pixels. Using the fill factor value FF, the size of the active area is defined as S × S, where $S = L \times \sqrt{FF}$. The intensity value of every pixel in the image sensor array is assigned to the virtual active area in the new grid, and the intensities of the sub-pixels in the non-sensitive areas are set to zero. An example of such a sensor rearrangement at the sub-pixel level is presented on the left in Figure 1, which shows a 3 × 3 pixel grid in which the light and dark grey areas represent the active and non-active areas of each pixel. Assuming L = 30 and an active area composed of 18 × 18 sub-pixels, the fill factor becomes 36% according to the above equation; the intensities of the active areas are represented by different grey level values. The size of the square sub-pixel grid per pixel was examined from 20 × 20 to 40 × 40; the intensities in the generated images show no further significant changes beyond a size of 30 × 30. Thus, in the experiment, L is set to 30.
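As an illustration, this projection step can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation; the function name is ours, and it assumes the active area is centred within each pixel (the text does not specify the placement):

```python
import numpy as np

def project_to_subpixel_grid(image, L=30, fill_factor=0.36):
    """Project each pixel onto an L x L grid of square sub-pixels.

    The active area covers S x S sub-pixels with S = L * sqrt(FF),
    here assumed centred in the pixel; sub-pixels in the non-sensitive
    area are set to zero.
    """
    S = int(round(L * np.sqrt(fill_factor)))   # e.g. 30 * 0.6 = 18
    h, w = image.shape
    grid = np.zeros((h * L, w * L), dtype=float)
    off = (L - S) // 2                         # assumed centring offset
    for i in range(h):
        for j in range(w):
            grid[i*L + off : i*L + off + S,
                 j*L + off : j*L + off + S] = image[i, j]
    return grid
```

With L = 30 and a 36% fill factor this reproduces the 18 × 18 active area of the example above.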

3.1.2. The Second Step Is to Estimate the Values of Subpixels in the New Grid of Subpixels

Considering the statistical fluctuation of the incident photons and their conversion to electrons on the sensor, a local Gaussian model is estimated by the maximum likelihood method from each certain neighbourhood of pixels. Using each local model, a local noise source is generated and introduced into the corresponding neighbourhood. Then, inspired by Monte Carlo simulation, all sub-pixels in each neighbourhood are estimated in an iterative process using the known pixel values (for sub-pixels in the active area) or by linear polynomial reconstruction (for sub-pixels in the non-sensitive area). In each iteration step, the number of sub-pixels of the active area in the actual pixel is varied from zero to the total number of sub-pixels of the active area (i.e., the total defined by the fill factor). By estimating the intensity values of the sub-pixels during the iteration process, a vector of intensity values is created for each sub-pixel, from which the final sub-pixel value is optimally predicted using the Bayesian inference method and the maximum likelihood of a Gaussian distribution.
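The flavour of this step can be conveyed by a drastically simplified toy sketch. This is our own stand-in, not the authors' full Monte Carlo/Bayesian procedure: fit a local Gaussian by maximum likelihood, draw noise iterates from it, and take the mean of the collected iterates as the point estimate:

```python
import numpy as np

def estimate_subpixel(neighborhood, n_iter=100, seed=0):
    """Toy stand-in for the iterative sub-pixel estimation: the local
    Gaussian model is fitted by maximum likelihood (sample mean and
    standard deviation), noise iterates are drawn from it, and the
    final value is the mean of the iterate vector, i.e. the maximum
    likelihood point estimate under a Gaussian model."""
    rng = np.random.default_rng(seed)
    mu = neighborhood.mean()       # ML mean of the local Gaussian model
    sigma = neighborhood.std()     # ML standard deviation
    iterates = mu + sigma * rng.standard_normal(n_iter)
    return iterates.mean()
```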

3.1.3. In the Third Step, the Subpixels Are Projected back to a Hexagonal Grid Shown as Red Grids on the Right of Figure 1, Where the Distance between Each Two Hexagonal Pixels Is the Same

Then the sub-pixels in each hexagonal area are estimated with respect to the virtual increase of the fill factor. The intensity value of a hexagonal pixel in the grid is the intensity value which has the strongest contribution in the histogram of its belonging sub-pixels. The corresponding intensity is divided by the fill factor, removing the fill factor effect, to obtain the hexagonal pixel intensity.
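The histogram-mode step with the fill factor correction can be sketched as follows (a minimal illustration with a function name of our own; intensities are assumed to be binned by rounding):

```python
import numpy as np

def pixel_from_subpixels(subpixels, fill_factor=0.36):
    """Pick the intensity with the strongest contribution in the
    histogram of the sub-pixels (the mode), then divide by the fill
    factor to remove the fill factor effect."""
    values, counts = np.unique(np.round(subpixels), return_counts=True)
    mode = values[counts.argmax()]
    return mode / fill_factor
```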

3.2. Generation of the Square Enriched Image (SQ_E)

The estimated square images are generated in three steps, where the two steps explained in Section 3.1.1 and Section 3.1.2 are followed by a third step as follows. The sub-pixels are projected back onto the original square grid, shown as red grids on the left of Figure 1. The intensity value of each pixel in the square grid is the intensity value which has the strongest contribution in the histogram of its belonging sub-pixels. The corresponding intensity is then divided by the fill factor to obtain the square pixel intensity, with the fill factor virtually increased to 100% as in the work in [1].

3.3. Generation of the Half Pixel Shift Image (HS) and Half Pixel Shift Enriched Image (HS_E)

The hexagonal grid in the previous works [3,4] is mimicked by a half pixel shift, which is derived by delaying the sampling by half a pixel in the horizontal direction. The red grid presented in the middle of Figure 1 is the new pseudohexagonal sampling structure, whose pixel form is still square. The new pseudohexagonal grid is derived from a usual 2-D grid by shifting each even row half a pixel to the right and leaving the odd rows untouched, or of course by any similar translation. The half pixel shift image (HS) and the half pixel shift enriched image (HS_E) are generated from the original image (SQ) and the enriched image (SQ_E) on the square grid, respectively.
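A minimal sketch of such a half pixel shift (our own illustration, not the normalized-convolution implementation of [12]): even rows are shifted by linear interpolation between horizontal neighbours, and the last column of a shifted row is left unchanged:

```python
import numpy as np

def half_pixel_shift(image):
    """Shift every even row half a pixel to the right by averaging each
    pixel with its right neighbour; odd rows are left untouched. The
    result samples the image on a pseudohexagonal grid."""
    out = image.astype(float).copy()
    # linear interpolation: each shifted sample lies between two pixels
    out[0::2, :-1] = 0.5 * (image[0::2, :-1] + image[0::2, 1:])
    return out
```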

4. Curviness Quantification

The curviness is quantified by comparing the sharp transitions in the contours of all corresponding objects in a pair of images which have exactly the same content but two different contours, namely a straight or a curved contour. We define an image which has only straight or only curved contours as an SC or CC image, respectively. First and second order gradient operations are used in the quantification of each original image (i.e., the SQ image type) and its set of generated images (i.e., the Hex_E, SQ_E, HS_E, and HS image types). We elaborate these operations in the following.

4.1. Implementing a First Order Gradient Operation

The familiar first order gradient $\nabla J$ is defined as:

$$\nabla J(z) = J(z) - J(z'),$$

where $J$ represents the image, and $z$ and $z'$ represent the positions of two adjacent pixels which have a common border, i.e., a common side or corner. The angle between the orientation of the adjacent pixels and the horizontal axis represents the direction of the gradient. The first order gradient values of a pair of images (i.e., even with different grids) can be compared by computing and analyzing the eigenvalues and eigenvectors obtained by solving the expression $(A - \lambda_j I)e_j = 0$, where $A$ is the $2 \times n$ matrix formed by the $n$ first order gradient values of each of the two images, $\lambda_j$ and $e_j$ are the eigenvalues and eigenvectors, respectively, $j$ is the index number with value 1 or 2, and $I$ is the identity matrix. In the comparison, when two images have the same content but different grids, the range of eigenvalues from small to large values indicates the degree of similarity to dissimilarity between the grid structures. When two images have the same content but different contours, the curviness can be quantified by comparing the eigenvalues related to the images.
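One concrete way to carry out this eigenvalue comparison is our own sketch below: the two images' sorted gradient values are stacked as a 2 × n matrix, summarized by their 2 × 2 covariance, and the eigenvalues are taken via SVD (valid because the covariance matrix is symmetric positive semidefinite):

```python
import numpy as np

def gradient_eigenvalues(grad1, grad2):
    """Stack the sorted first order gradient values of two images as a
    2 x n matrix, form its 2 x 2 covariance, and return the eigenvalues
    via SVD in descending order. One dominant eigenvalue indicates
    similar grids; two comparable eigenvalues indicate dissimilar ones."""
    A = np.vstack([np.sort(grad1.ravel()), np.sort(grad2.ravel())])
    cov = np.cov(A)
    return np.linalg.svd(cov, compute_uv=False)
```

For perfectly correlated gradient sets the second eigenvalue vanishes, which is the "linear correlation" case discussed for same-grid images in Section 6.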

4.2. Implementing Hessian Matrix on SQ, and SQ_E Images

Analyzing the second order gradient operation in the form of the Hessian matrix has an intuitive justification in the context of curvature quantification. The eigenvalue analysis of the Hessian extracts the principal gradient directions along which the local second order structure of an image is decomposed. This directly gives the direction of smallest curvature (along the contour) [26,27]. The Hessian matrix

$$H = \begin{bmatrix} J_{xx} & J_{xy} \\ J_{yx} & J_{yy} \end{bmatrix}$$

is computed by convolution of the image $J$ with the gradients of the Gaussian kernel $G = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}$ as follows:

$$J_{xx} = G_{xx} * J, \qquad J_{yy} = G_{yy} * J, \qquad J_{xy} = J_{yx} = G_{xy} * J,$$

where $G_{xx}$, $G_{yy}$ and $G_{xy}$ represent the gradient kernels in the horizontal, vertical and diagonal directions, respectively.
The eigenvalues and eigenvectors of the Hessian matrix $H$ are obtained by solving the expression $(H - \lambda_j^{hs} I)e_j^{hs} = 0$, where $\lambda_j^{hs}$ and $e_j^{hs}$ are the eigenvalue and the eigenvector, respectively, and $j$ is the index number with value 1 or 2. The first eigenvector (the one whose corresponding eigenvalue has the largest absolute value) is the direction of greatest curvature. The other eigenvector (always orthogonal to the first one) is the direction of least curvature. The corresponding eigenvalues are the respective amounts of these curvatures. Inspired by the earlier work of Frangi et al. [22], three measurement parameters are derived from the eigenvalues and eigenvectors of the Hessian matrix, which are used in the comparison of a pair of images when they have the same content but different contours. The parameters are:
$$P_1 = \arctan\!\left(\frac{e_{2x}^{hs}}{e_{2y}^{hs}}\right),$$

$$P_2 = \log\!\left(1 + \left(\frac{\lambda_2^{hs}}{\lambda_1^{hs}}\right)^2\right),$$

$$P_3 = \log\!\left(1 + (\lambda_1^{hs})^2 + (\lambda_2^{hs})^2\right),$$

where $\lambda_1^{hs}$ (the one with the larger absolute value) and $\lambda_2^{hs}$ are the eigenvalues of the Hessian matrix, and $e_1^{hs} = [e_{1x}^{hs}, e_{1y}^{hs}]$ and $e_2^{hs} = [e_{2x}^{hs}, e_{2y}^{hs}]$ are the related eigenvectors. $P_1$, $P_2$ and $P_3$ measure the main orientation, the relation of the two principal curvatures (each of which measures the amount of curvature bending in a different direction), and the second order structureness, respectively. The dissimilarity of each pair of SC and CC images (i.e., images having a square grid and the same content but different contours) is measured by:

$$D_{P_j}(SC, CC) = \frac{\operatorname{var}(P_j^{SC} \cdot P_j^{CC})}{\operatorname{var}(P_j^{SC})\operatorname{var}(P_j^{CC})},$$

where $SC$ and $CC$ are a pair of images which have the same content but different contours (the pair can be of the SQ or SQ_E image type), $j$ is the index number (with value 1, 2, or 3), $P_j$ is one of the three parameters according to $j$, and $P_j^{SC}$ and $P_j^{CC}$ are the measurement parameter $P_j$ applied to the images $SC$ and $CC$, respectively.
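A sketch of the three Hessian-based parameters above, in our own NumPy/SciPy version: the Gaussian scale sigma is a free parameter not fixed in the text, a small epsilon guards the eigenvalue ratio, and the per-pixel eigenvalues are obtained in closed form for the 2 × 2 symmetric Hessian:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_parameters(J, sigma=1.0, eps=1e-12):
    """Per-pixel P1, P2, P3 from a Hessian built with Gaussian
    derivative filters (a sketch; sigma is an assumed scale)."""
    Jxx = gaussian_filter(J, sigma, order=(0, 2))   # second derivative in x
    Jyy = gaussian_filter(J, sigma, order=(2, 0))   # second derivative in y
    Jxy = gaussian_filter(J, sigma, order=(1, 1))   # mixed derivative
    # closed-form eigenvalues of the 2x2 symmetric Hessian
    half_tr = 0.5 * (Jxx + Jyy)
    disc = np.sqrt(0.25 * (Jxx - Jyy) ** 2 + Jxy ** 2)
    la, lb = half_tr + disc, half_tr - disc
    big = np.abs(la) >= np.abs(lb)
    l1 = np.where(big, la, lb)       # eigenvalue with larger magnitude
    l2 = np.where(big, lb, la)
    # an eigenvector for l2 is proportional to (Jxy, l2 - Jxx)
    P1 = np.arctan2(Jxy, l2 - Jxx)   # main orientation
    P2 = np.log1p((l2 / (l1 + eps)) ** 2)       # curvature relation
    P3 = np.log1p(l1 ** 2 + l2 ** 2)            # structureness
    return P1, P2, P3
```

For a quadratic ramp the interior structureness approaches $\log(1 + 4)$, since the second derivative of $x^2$ is 2 in one direction and 0 in the other.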

4.3. Implementing Second Order Operation to Detect Saddle and Extremum Points

The second order gradient operation has been used to detect spatial critical points, i.e., saddle and extremum points. The number of critical points in an image depends on the contour shape. Thus, a pair of images with the same content but different contours can be compared using the critical points detected in each image. Generally, the critical points are detected where the gradient is zero, which means the Hessian matrix can be used here; the critical points are found by using the eigenvalues of the Hessian matrix on a square grid. However, on a square grid the zeros of the gradient will in general not coincide with the grid points, but lie somewhere in between them. Kuijper [28] showed that by converting the square grid to a hexagonal grid, implementing a half pixel shifting method, it is possible to detect a more accurate number of critical points in a square-grid-based image. In his detection process, based on hexagonal grids, each point has six neighbours. The sign of the intensity difference of each of these neighbours with respect to the point itself is determined, which results in the classification of the point into one of four types: regular, minimum or maximum (extremum), saddle, or degenerate saddle point. We used this detection process not only on square-grid-based images, but also on hexagonal-grid-based images, to detect saddle and extremum points.
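The six-neighbour classification can be sketched by counting the sign changes while walking around the neighbour ring. This is our own formulation of the scheme, and it assumes strict intensity differences (no ties between the point and its neighbours):

```python
import numpy as np

def classify_point(value, neighbours):
    """Classify a grid point from the signs of the intensity differences
    to its six hexagonal neighbours: zero sign changes around the ring
    give an extremum, two a regular point, four a saddle, and six a
    degenerate saddle."""
    diffs = np.sign(np.asarray(neighbours, dtype=float) - value)
    changes = int(np.count_nonzero(diffs != np.roll(diffs, 1)))
    if changes == 0:
        return "extremum"   # all neighbours above (minimum) or below (maximum)
    if changes == 2:
        return "regular"
    if changes == 4:
        return "saddle"
    return "degenerate saddle"
```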
In relation to curvature quantification, pairs of SQ, SQ_E, and Hex_E image types are compared with respect to the detected critical points. The comparison measurement related to the saddle points is defined as:

$$R_{saddle} = 1 - \frac{c_{sd}}{a_{sd}} \times \frac{c_{sd}}{|A_{sd} \cup B_{sd}|} \times \frac{b_{sd}}{a_{sd}}, \qquad A_{sd} \supseteq B_{sd},$$

where $A_{sd}$ and $B_{sd}$ are two sets of $a_{sd}$ and $b_{sd}$ saddle points in images A and B, respectively (where A and B are two types of images), $C_{sd} = A_{sd} \cap B_{sd}$ is the set of saddle points that are at the same position in the two images, with $c_{sd} = |C_{sd}| = |A_{sd} \cap B_{sd}|$ points, and $|A_{sd} \cup B_{sd}|$ is the number of all points which are in either $A_{sd}$ or $B_{sd}$. Equation (7) is a normalized nonlinear dissimilarity measurement function based on the commonly detected saddle points in the two images; a typical characteristic of such a function is shown at the top left of Figure 2. The comparison measurement related to the extremum points is defined as:
$$R_{extremum} = 1 - \frac{a_{ex}}{b_{ex}} \times \frac{c_{ex}}{|A_{ex} \cup B_{ex}|}, \qquad B_{ex} \supseteq A_{ex},$$

where $A_{ex}$ and $B_{ex}$ are two sets of $a_{ex}$ and $b_{ex}$ extremum points in images A and B, respectively (where A and B are two types of images), $C_{ex} = A_{ex} \cap B_{ex}$ is the set of extremum points that are at the same position in the two images, with $c_{ex} = |C_{ex}| = |A_{ex} \cap B_{ex}|$ points, and $|A_{ex} \cup B_{ex}|$ is the number of all points which are in either $A_{ex}$ or $B_{ex}$. Equation (8) is a normalized nonlinear dissimilarity measurement function based on the commonly detected extremum points; a typical characteristic of such a function is shown at the top right of Figure 2. The comparison measurement related to the saddle points between two pairs of images is defined as:
$$R_{P_{saddle}} = 1 - \frac{c_{sd}}{|A_{sd} \cup B_{sd}|} \times \frac{f_{sd}}{|D_{sd} \cup E_{sd}|} \times \frac{\max(a_{sd}, b_{sd})}{\min(a_{sd}, b_{sd})} \times \frac{\max(d_{sd}, e_{sd})}{\min(d_{sd}, e_{sd})},$$

where $a_{sd}$, $b_{sd}$, $d_{sd}$ and $e_{sd}$ are the numbers of saddle points in the sets $A_{sd}$, $B_{sd}$, $D_{sd}$ and $E_{sd}$ of images A, B, D and E, respectively. The two pairs of images are (A, D) and (B, E); the images in each pair are of the same type, which differs from the type of the other pair. Images A and B are the SC images, while images D and E are the CC images. $C_{sd} = A_{sd} \cap B_{sd}$ and $F_{sd} = D_{sd} \cap E_{sd}$ are the sets of saddle points that are at the same position in each related pair of images, with $c_{sd} = |C_{sd}|$ and $f_{sd} = |F_{sd}|$ points, respectively. Equation (9) is a normalized 2D nonlinear dissimilarity measurement function based on the commonly detected saddle points in each pair of images; a typical characteristic of such a function is shown at the bottom left of Figure 2. The comparison measurement related to the extremum points between two pairs of images is defined as:
$$R_{P_{extremum}} = 1 - \frac{c_{ex}}{|A_{ex} \cup B_{ex}|} \times \frac{f_{ex}}{|D_{ex} \cup E_{ex}|} \times \frac{\max(a_{ex}, b_{ex})}{\min(a_{ex}, b_{ex})} \times \frac{\max(d_{ex}, e_{ex})}{\min(d_{ex}, e_{ex})},$$

where $a_{ex}$, $b_{ex}$, $d_{ex}$ and $e_{ex}$ are the numbers of extremum points in the sets $A_{ex}$, $B_{ex}$, $D_{ex}$ and $E_{ex}$ of images A, B, D and E, respectively. The two pairs of images are (A, D) and (B, E); the images in each pair are of the same type, which differs from the type of the other pair. Images A and B are the SC images, while images D and E are the CC images. $C_{ex} = A_{ex} \cap B_{ex}$ and $F_{ex} = D_{ex} \cap E_{ex}$ are the sets of extremum points that are at the same position in each related pair of images, with $c_{ex} = |C_{ex}|$ and $f_{ex} = |F_{ex}|$ points. Equation (10) is a normalized 2D nonlinear dissimilarity measurement function based on the commonly detected extremum points in each pair of images; a typical characteristic of such a function is shown at the bottom right of Figure 2.
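Under this set notation, Equations (8) and (10) can be sketched directly with Python sets of point coordinates. The helper names are our own, and points are assumed to be (row, column) tuples:

```python
def r_extremum(A_ex, B_ex):
    """Dissimilarity of Eq. (8) between two sets of extremum points:
    0 for identical sets, 1 for disjoint ones."""
    a, b = len(A_ex), len(B_ex)
    c = len(A_ex & B_ex)                 # c_ex: common positions
    return 1 - (a / b) * (c / len(A_ex | B_ex))

def rp_extremum(A_ex, B_ex, D_ex, E_ex):
    """2D dissimilarity of Eq. (10) over the image pairs (A, D) and (B, E)."""
    a, b, d, e = len(A_ex), len(B_ex), len(D_ex), len(E_ex)
    c, f = len(A_ex & B_ex), len(D_ex & E_ex)
    return 1 - (c / len(A_ex | B_ex)) * (f / len(D_ex | E_ex)) \
             * (max(a, b) / min(a, b)) * (max(d, e) / min(d, e))
```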

5. Experimental Setup

An image dataset from [29] is used for our experiments. The database was used earlier to investigate the human visual preference for curved versus linear edges, in order to find the physical elements in a visual stimulus which cause liking or disliking of objects. The database is composed of 280 pairs of images; each pair of images has the same semantic meaning and general appearance. We found that 51 image pairs had contour curvature differences in addition to the shared semantic meaning, i.e., straight versus curved lines. Each image of a pair with the same semantic content is accordingly defined as a straight contour (SC) or curved contour (CC) image. Each of the images has the same resolution, 256 × 256, in uint8 format. Figure 3 shows the twenty-three pairs of images which were randomly selected from the 51 pairs and used for the experiment. The images of the database were generated by computer graphics tools (i.e., without natural noise). However, they are compressed in JPEG format, which introduces noise accordingly, as shown in Figure 3. This noise certainly affects the first and second order derivative operations of Equations (1) and (2). Smoothing filters could be used to reduce the noise, but they significantly change the content of the objects (straight or curved lines) in the images, which is undesirable given the goal of the task (i.e., to compare straight lines versus curved ones). Instead of smoothing the whole image, a template mask for each image is used to exclude the background, which is irrelevant to the object content, as shown in Figure 4. Each mask is generated automatically using morphological operations. When images with different pixel structures and forms are compared, they are contaminated with the same amount of noise; thus the effect of the noise on the results is considered insignificant, and the comparison results are obtained only from the masked area of the respective images.
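A rough sketch of such an automatic mask, in our own SciPy-based version (the exact morphological pipeline is not specified in the text, and the bright-background threshold of 250 is an assumption for white-background renderings):

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_fill_holes

def object_mask(gray, background_threshold=250):
    """Build a binary mask of the object region: threshold away the
    bright background, close small gaps, and fill interior holes so
    the gradient comparison is restricted to the object."""
    obj = gray < background_threshold            # object = darker pixels
    obj = binary_closing(obj, structure=np.ones((3, 3)))
    return binary_fill_holes(obj)
```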
The images are converted to grayscale, and the fill factor of the images is estimated by the method explained in [20]; the fill factor value is estimated to be 36%. The impact of curved versus straight edges on the enriched hexagonal (Hex_E), enriched estimated square (SQ_E), enriched half pixel shifted (HS_E), and original (SQ) images is evaluated by computing the first and second order gradient operations (see Section 4.1 and Section 4.2) on the images. All images have the same resolution to ensure that the resolution does not affect the number of gradients. All processing is programmed and implemented in MATLAB R2017a on a stationary computer with an Intel i7-6850K CPU (Intel Corporation, California, USA, https://ark.intel.com/products/94188/) and 32 GB of RAM to keep the process stable and fast.

6. Results and Discussion

One of the original images and its set of related enriched generated images (hexagonal, estimated square, and half-pixel shifted), whose generation was explained in Section 3, are shown in Figure 5. The images from left to right in the first row are the original image and the related generated images. The images in the second row of Figure 5 are the zoomed regions of the images (marked by red squares). The generated images show a better dynamic range in comparison to the original images, as was shown in [21].

6.1. First Order Gradient Operation

The sharp transitions in the contours of the objects in the images (the curviness) are quantified by implementing the first and second order gradient operations on each pair of original images and their sets of generated images; each operation is explained in more detail in Section 4. For the images on the hexagonal grid (the hexagonal and half-pixel shift images), the first order gradients are computed in six directions: 0, 60, 120, 180, 240, and 300 degrees. Due to the resolution similarity of the generated images on the hexagonal grid, their numbers of pixels and of computed gradient elements are the same. The top and middle rows of Figure 6 show the sorted first order gradient values of the generated Hex_E image (i.e., the image shown in Figure 5) in comparison to the generated HS and HS_E images at 0, 60 and 120 degrees, from left to right, respectively. The amount of spreading of the gradient values reveals the correlation between the grid structures of the images: the more similar the image grids are, the less the spreading. The more densely the points are distributed, the less variation is expected in the gradient results. Due to the grid similarity of the original images and the SQ_E images, the correlations of their sorted gradient values at 0, 45 and 90 degrees are linear, as shown in the bottom row of Figure 6. However, the correlations of the sorted gradient values at 0, 60 and 120 degrees between the pseudohexagonal grid structure and the hexagonal grid structure are nonlinear and dissimilar, as shown in the top and middle rows of Figure 6.
The same holds for the correlations of the sorted gradient values at 0 degrees between the SQ image grid structure and both the pseudohexagonal grid structure and the hexagonal grid structure, which are shown in Figure 7 in the first and third columns from the left, respectively; i.e., the correlation is in each case nonlinear and dissimilar.
Figure 7 shows that the gradient results of the four types of generated images in comparison to the original SQ image differ from each other; see especially the second plot from the left. This is because the grids of the HS, HS_E and Hex_E images are more alike to each other and more different from the square grid (i.e., the grid of the SQ and SQ_E images). The similarity or dissimilarity of each two grid structures can be visualized, as shown in Figure 6 and Figure 7. However, to quantify such similarity or dissimilarity, the first order gradient operation can be used as described in Section 4.1. Accordingly, the covariance of the gradient values of each two images is computed, where the two compared images have the same content but different grid structures. Then the eigenvalues and eigenvectors of each covariance matrix are computed using the singular value decomposition (SVD) method.
The first eigenvalues, which are also the largest ones, of the covariance matrix between each pair of the original images and its set of generated images are shown in top of Figure 8, and the second eigenvalues are shown in the bottom of Figure 8. The blue, red, green and black lines represent the first or second eigenvalues of the covariance matrixes with respect to the four types of images presented in Section 3, respectively. The continuous lines and dash lines represent the computed first and second eigenvalues with respect to the original CC and SC images, respectively. Table 1 shows the summary of the comparison results of the two figures in Figure 8. The different properties among the types of images are caused by the diversity of their grid structure, pixel form or fill factor value. ‘Yes’ and ‘No’ in the table represent similarity and dissimilarity of such a property in relation between each generated image to the SQ image. The values in the last four columns of Table 1 are the sums of the first and second eigenvalues of the respected image type shown in Figure 8. In the table, the increase of the first or second eigenvalue indicates the increase of similarity or dissimilarity between the generated image and SQ image respectively. The SQ_E and HS images in relation to the SQ image show higher similarity than the other image types; see the first eigenvalue results in Table 1 and top figure in Figure 8. The comparison of these two types of images show that the grid structure is more important than pixel form and fill factor value to cause differences between them. The Hex_E and HS_E images in relation to the SQ image show higher dissimilarity, respectively. The comparison of these two types of images show that when the grid structures are the same the pixel form is more important than fill factor value to cause differences between them. 
The results show that the grid structure, pixel form and fill factor value are, in that order of importance, the properties to consider when generating a new type of image. Note that these three properties are not fully independent of each other. In Table 1, the results related to SC and CC images are clearly distinctive for all four image types. However, the detailed comparison of SC and CC in Figure 8 shows that no clear conclusion about SC versus CC can be drawn from the first-order operation alone, due to the variation of the results.

6.2. Hessian Matrix on SQ and SQ_E Images

Table 2 and Table 3 show the P1, P2 and P3 parameters (Equations (3)–(5)) measured between SC and CC images of the SQ and SQ_E image types using the Hessian matrix; for more detail see Section 4.2. For each pair of SC and CC images, the higher of the two values is the one of interest in the comparison. In Table 2, the results for the CC images are higher than those for the SC images; DP_j(SC, CC) (j = 1, 2, 3) indicates that the contours in the CC images are distinctively different from (have more curviness than) those in the SC images. Table 3 shows similar results: the values are higher for the majority of the CC images with respect to the SC results (94% of the measured values). Table 4 shows the dissimilarity measurement DP_j(SC, CC) (j = 1, 2, 3) of Equation (7). The comparison between SQ and SQ_E images using the DP_1(SC, CC) and DP_2(SC, CC) values is inconclusive, because the values are not distinctive. Using the DP_3(SC, CC) values, however, the comparison is possible thanks to clearly distinctive results; according to it, the SQ_E images perform on average 62.5% better than the SQ images. Since the dissimilarity of DP_3(SC, CC) measures the second-order structureness, we conclude that the SQ_E image type performs far better than the SQ image type in detecting curviness.
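The raw second-order quantities behind these parameters can be sketched as below. Equations (3)–(5) are not reproduced in this section, so the exact mapping to P1–P3 is omitted; the sketch only computes the per-pixel Hessian eigenvalues and a Frangi-style "second-order structureness" (the Frobenius norm of the Hessian), which is assumed here to underlie P3.

```python
import numpy as np

def hessian_eigenvalues(img):
    """Per-pixel Hessian of a 2-D image and its eigenvalues, plus the
    Frangi-style structureness (Frobenius norm of the Hessian).
    The combination of these quantities into the paper's P1-P3
    parameters (Equations (3)-(5)) is not reproduced here."""
    img = img.astype(float)
    gy, gx = np.gradient(img)          # first-order derivatives
    hyy, _ = np.gradient(gy)           # d2/dy2
    hxy, hxx = np.gradient(gx)         # d2/dxdy, d2/dx2
    # Closed-form eigenvalues of the symmetric matrix [[hxx, hxy], [hxy, hyy]]
    tr = hxx + hyy
    det = hxx * hyy - hxy * hxy
    disc = np.sqrt(np.maximum((tr / 2.0) ** 2 - det, 0.0))
    l1, l2 = tr / 2.0 - disc, tr / 2.0 + disc
    structureness = np.sqrt(l1 ** 2 + l2 ** 2)
    return l1, l2, structureness
```

For a quadratic surface such as x² + y², both eigenvalues equal 2 in the interior, where central differences are exact.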

6.3. Saddle and Extremum Points

Figure 9 shows the saddle and extremum points detected by the second-order gradient operation on the HS, HS_E and Hex_E images of Figure 5. The HS and HS_E images are generated from the corresponding SQ and SQ_E images by converting the square grid to a hexagonal grid with the half-pixel shifting method proposed in Section 3.3. As discussed in Section 4.3, the HS and HS_E images are better at detecting the critical points than the SQ and SQ_E images. Figure 10 shows the number of saddle and extremum points in each SC or CC image of the Hex_E, SQ_E and SQ image types; in 74% of the CC and SC image pairs of each image type, more critical points are detected in the CC image than in the SC image. In the figure, however, the comparison among the Hex_E, SQ and SQ_E image types remains indistinctive. The top and middle plots of Figure 11 show the numbers of common saddle and extremum points, respectively, and the bottom plot shows the total number of critical points. The common points are those located at the same position in both images of an SC and CC pair. The results indicate that the SQ image type detects more common saddle points (in 87% of the SC and CC image pairs) and the Hex_E image type more common extremum points (in 74% of the pairs). Because the critical points of the SQ, SQ_E and Hex_E images are all detected on the hexagonal grid (see Section 4.3), the differences between the image types are caused by their different pixel forms and fill factor values. The values for the SQ_E and Hex_E image types in Figure 11 are close to each other, indicating that the pixel form has more effect on the second-order gradient than the fill factor.
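The classification of critical points from second-order information can be sketched as follows. This is an illustrative sketch on a square grid (the paper performs the detection on the hexagonal grid after half-pixel shifting); points with a near-zero first-order gradient are labelled saddles when the Hessian determinant is negative and extrema (maxima or minima) when it is positive.

```python
import numpy as np

def classify_critical_points(img, grad_tol=1e-3):
    """Label saddle and extremum points of a 2-D image on a square grid.
    A critical point has a (near-)zero gradient; the sign of the Hessian
    determinant separates saddles from extrema."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    hyy, _ = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    det = hxx * hyy - hxy * hxy              # Hessian determinant
    critical = np.hypot(gx, gy) < grad_tol   # near-zero first-order gradient
    saddle = critical & (det < 0)            # opposite-sign principal curvatures
    extremum = critical & (det > 0)          # same-sign principal curvatures
    return saddle, extremum
```

On the surface x² − y² the centre is a saddle, while on x² + y² it is an extremum, which can serve as a quick sanity check.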
The normalized nonlinear dissimilarity values R_saddle and R_extremum for SC and CC images, from the three pairwise comparisons among the SQ_E, SQ and Hex_E image types, are computed by Equations (7) and (8) and shown in Table 5, where a higher value represents a larger dissimilarity of the contours between the respective image types. The correlations of the three comparisons in Table 5 are shown in Figure 12, where the blue, red, green and black points represent the corresponding R_saddle and R_extremum values for the SC and CC images, respectively, and the three axes represent the three pairwise comparisons between image types. The results in Table 5 show that the R_saddle values for SC and CC images are close to each other, which is also visible in Figure 12: the blue points are mixed with the red points. In contrast, the R_extremum values for SC and CC images in Table 5 are distinctive, as shown in Figure 12 by the green and black points. From these results we conclude that only the extremum points can be used for quantifying curviness, i.e., for distinguishing a CC image from an SC image. Considering the two comparisons SQ_E & SQ and Hex_E & SQ_E, and the three properties of the image types (see Table 1), the dissimilarity measured in Table 5 for these comparisons is caused by the fill factor and the pixel form, respectively. This is because the SQ_E and SQ images are converted to the hexagonal grid by half-pixel shifting before the critical points are detected, so the grid structures of all three image types in the two comparisons are the same.
According to Table 5, the dissimilarity values of the Hex_E & SQ_E comparison are higher than those of the SQ_E & SQ comparison, which shows that the pixel form is a more important property than the fill factor in causing dissimilarity between images. This is consistent with the results presented in Table 1. From these results we conclude that the order of importance of the three properties, from high to low, is grid structure, pixel form and fill factor.
Table 6 shows the RP_saddle and RP_extremum values of the three comparisons computed by Equations (9) and (10). The values in each column are dissimilarity values based on the common saddle or extremum points detected in each pair of SC and CC images, used to compare two image types. The dissimilarity values in the table are not consistent with the previous results for each pair of image types, indicating that processing SC and CC images together is not an adequate way to quantify the difference between two image types. Combining the results of Table 5 and Table 6, we conclude that the Hex_E image type has the largest dissimilarity in comparison to the other image types according to all measurements, which means that Hex_E is the best of the tested image types for quantifying the curviness of a contour.

7. Conclusions

In this paper, we present a software-based method to generate images with a hexagonal pixel form on a hexagonal sensor grid. Each original rectangular pixel is deformed to a hexagonal one by modelling the incident photons onto the sensor surface. Four different image sensor forms and structures, including the proposed one, are evaluated by measuring their ability to detect curviness. We introduce a method for curviness quantification that compares the sharp transitions in the contours of all corresponding objects in pairs of images that have exactly the same content but two different contour styles. The quantification is achieved by applying first- and second-order gradient operations in the form of several introduced and defined dissimilarity parameters. We show how the first- and second-order gradient operations, the Hessian matrix computation, and the measurement of the dissimilarity parameters can be implemented on both square and hexagonal grid structures. We pay special attention to the detection of critical points (i.e., saddle and extremum points) in the different image types.
The grid structure, pixel form and fill factor are proposed as the three major properties of the sensor characteristics, and the results indicate that the grid structure is the most important property distinguishing the image types, with the pixel form the second most important. The curviness quantification results indicate that the detection of extremum points can be used to clearly distinguish CC from SC images. We show that the enriched hexagonal image (i.e., Hex_E) is the best of the tested image types at detecting curviness, according to its curviness measurement results. In future work, we intend to study other grid structures and pixel forms.

Author Contributions

Wei Wen and Siamak Khatibi equally contributed to the textual content of this article. Wei Wen and Siamak Khatibi conceived and designed the methodology and experiments. Wei Wen performed the experiments. Wei Wen and Siamak Khatibi analyzed the data. Wei Wen and Siamak Khatibi wrote the paper together.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. From left to right: the sensor rearrangement onto the subpixel, the projection of the square pixels onto the hexagonal grid by half pixel shifting method and the projection of the square pixels onto the hexagonal grid in generation of hexagonal image.
Figure 2. Typical characteristics of R_sd(C_sd) in Equation (7), R_ex(C_ex) in Equation (8), RP_sd(C_sd, F_sd) in Equation (9) and RP_ex(C_ex, F_ex) in Equation (10), shown in the top left, top right, bottom left and bottom right, respectively. C_sd, C_ex, F_sd and F_ex are the sets of possible values of c_sd, c_ex, f_sd and f_ex, respectively.
Figure 3. Twenty-three pairs of images from the database, where the images in first and third rows have sharp contours and the images in the second and fourth rows have the curved contours.
Figure 4. One of the database images and its template mask for detecting the background.
Figure 5. One of the original images and its set of generated images.
Figure 6. The gradient correlation between the Hex_E image (second column) and the HS_E image (fourth column) of Figure 5 at directions of 0, 60 and 120 degrees (top row), and between the SQ image (first column) and the SQ_E image (third column) of Figure 5 at directions of 0, 45 and 90 degrees (bottom row).
Figure 7. The gradient cross comparison at 0 degree between one of the original images and its generated images: Hex_E image (first), SQ_E image (second), HS_E image (third), and HS image (fourth).
Figure 8. The first (top) and second (bottom) eigenvalues of the gradient values in a cross comparison between the original image and the four types of generated images. The blue lines represent the Hex_E images, the red lines the SQ_E images, the green lines the HS_E images and the black lines the HS images. The continuous and dashed lines represent the CC and SC images, respectively.
Figure 9. The detected saddle and extremum points on HS, HS_E and Hex_E images.
Figure 10. The number of the saddle and extremum points in each SC and CC image having Hex_E, SQ_E and SQ image types.
Figure 11. The number of the common saddle (Top), extremum points (Middle) and the critical points (Bottom) between SC and CC image pairs having Hex_E, SQ_E and SQ image types.
Figure 12. The correlation of the R_saddle and R_extremum values for SC and CC images from Table 5.
Table 1. The summary of the comparison results of the two plots in Figure 8.

| Image Type | Grid Structure | Pixel Form | Fill Factor | First Eigenvalue (SC) | First Eigenvalue (CC) | Second Eigenvalue (SC) | Second Eigenvalue (CC) |
|---|---|---|---|---|---|---|---|
| SQ_E | Yes | Yes | No | 272.98 | 277.86 | 6.43 | 8.51 |
| HS | No | Yes | Yes | 260.66 | 264.52 | 60.12 | 61.29 |
| HS_E | No | Yes | No | 210.19 | 212.98 | 106.33 | 107.89 |
| Hex_E | No | No | No | 187.28 | 190.26 | 112.32 | 114.14 |
Table 2. The measured P1, P2 and P3 parameters of SQ images using the Hessian matrix.

| Image Index | P1 (SC) | P1 (CC) | P2 (SC) | P2 (CC) | P3 (SC) | P3 (CC) |
|---|---|---|---|---|---|---|
| 1 | 49.047 | 50.953 | 44.121 | 55.879 | 45.163 | 54.837 |
| 2 | 48.41 | 51.59 | 48.407 | 51.593 | 48.601 | 51.399 |
| 3 | 49.468 | 50.532 | 23.679 | 76.321 | 28.143 | 71.857 |
| 4 | 49.42 | 50.58 | 32.887 | 67.113 | 46.829 | 53.171 |
| 5 | 47.206 | 52.794 | 38.418 | 61.582 | 37.502 | 62.498 |
| 6 | 49.914 | 50.086 | 46.71 | 53.29 | 47.856 | 52.144 |
| 7 | 47.701 | 52.299 | 36.29 | 63.71 | 46.749 | 53.251 |
| 8 | 49.791 | 50.209 | 14.97 | 85.03 | 46.549 | 53.451 |
| 9 | 49.575 | 50.425 | 15.392 | 84.608 | 48.163 | 51.837 |
| 10 | 49.408 | 50.592 | 16.171 | 83.829 | 48.056 | 51.944 |
| 11 | 49.011 | 50.989 | 36.452 | 63.548 | 49.222 | 50.778 |
| 12 | 49.075 | 50.925 | 14.809 | 85.191 | 43.015 | 56.985 |
| 13 | 48.724 | 51.276 | 16.613 | 83.387 | 45.124 | 54.876 |
| 14 | 48.163 | 51.837 | 26.797 | 73.203 | 44.71 | 55.29 |
| 15 | 48.899 | 51.101 | 16.959 | 83.041 | 46.383 | 53.617 |
| 16 | 49.167 | 50.833 | 18.203 | 81.797 | 46.13 | 53.87 |
| 17 | 49.568 | 50.432 | 25.92 | 74.08 | 45.673 | 54.327 |
| 18 | 48.741 | 51.259 | 37.231 | 62.769 | 45.062 | 54.938 |
| 19 | 49.474 | 50.526 | 28.151 | 71.849 | 48.687 | 51.313 |
| 20 | 49.651 | 50.349 | 14.972 | 85.028 | 44.699 | 55.301 |
| 21 | 48.758 | 51.242 | 44.467 | 55.533 | 42.291 | 57.709 |
| 22 | 48.834 | 51.166 | 46.471 | 53.529 | 45.996 | 54.004 |
| 23 | 48.612 | 51.388 | 17.293 | 82.707 | 46.214 | 53.786 |
Table 3. The measured P1, P2 and P3 values of SQ_E images using the Hessian matrix.

| Image Index | P1 (SC) | P1 (CC) | P2 (SC) | P2 (CC) | P3 (SC) | P3 (CC) |
|---|---|---|---|---|---|---|
| 1 | 49.353 | 50.647 | 44.349 | 55.651 | 44.976 | 55.024 |
| 2 | 49.337 | 50.663 | 48.76 | 51.24 | 48.343 | 51.657 |
| 3 | 51.625 | 48.375 | 66.592 | 33.408 | 30.413 | 69.587 |
| 4 | 50.057 | 49.943 | 30.517 | 69.483 | 43.874 | 56.126 |
| 5 | 47.565 | 52.435 | 37.025 | 62.975 | 35.033 | 64.967 |
| 6 | 50.474 | 49.526 | 48.374 | 51.626 | 47.124 | 52.876 |
| 7 | 48.212 | 51.788 | 36.412 | 63.588 | 44.62 | 55.38 |
| 8 | 49.627 | 50.373 | 15.485 | 84.515 | 41.143 | 58.857 |
| 9 | 49.341 | 50.659 | 15.857 | 84.143 | 47.783 | 52.217 |
| 10 | 49.185 | 50.815 | 16.628 | 83.372 | 47.98 | 52.02 |
| 11 | 49.821 | 50.179 | 34.757 | 65.243 | 41.404 | 58.596 |
| 12 | 48.961 | 51.039 | 15.126 | 84.874 | 43.718 | 56.282 |
| 13 | 48.858 | 51.142 | 19.546 | 80.454 | 45.803 | 54.197 |
| 14 | 48.583 | 51.417 | 32.959 | 67.041 | 45.682 | 54.318 |
| 15 | 49.016 | 50.984 | 17.626 | 82.374 | 46.96 | 53.04 |
| 16 | 49.262 | 50.738 | 23.977 | 76.023 | 46.319 | 53.681 |
| 17 | 49.564 | 50.436 | 25.45 | 74.55 | 44.955 | 55.045 |
| 18 | 48.858 | 51.142 | 35.824 | 64.176 | 44.511 | 55.489 |
| 19 | 49.473 | 50.527 | 33.351 | 66.649 | 47.252 | 52.748 |
| 20 | 49.399 | 50.601 | 19.883 | 80.117 | 44.592 | 55.408 |
| 21 | 48.437 | 51.563 | 43.321 | 56.679 | 41.176 | 58.824 |
| 22 | 49.108 | 50.892 | 46.233 | 53.767 | 44.769 | 55.231 |
| 23 | 48.605 | 51.395 | 17.534 | 82.466 | 46.788 | 53.212 |
Table 4. The dissimilarity measurement DP_j(SC, CC) (j = 1, 2, 3) for the SQ and SQ_E image types using the Hessian matrix.

| Image Index | SQ DP_1 | SQ DP_2 | SQ DP_3 | SQ_E DP_1 | SQ_E DP_2 | SQ_E DP_3 |
|---|---|---|---|---|---|---|
| 1 | 9.475 | 12.085 | 19.567 | 8.504 | 11.644 | 31.7 |
| 2 | 5.942 | 10.548 | 10.423 | 5.117 | 10.378 | 19.319 |
| 3 | 11.581 | 13.929 | 18.852 | 10.244 | 13.112 | 28.209 |
| 4 | 13.036 | 11.043 | 16.944 | 12.121 | 11.14 | 27.646 |
| 5 | 6.784 | 8.598 | 7.894 | 6.535 | 8.884 | 14.992 |
| 6 | 8.807 | 11.69 | 14.253 | 7.345 | 11.121 | 23.045 |
| 7 | 9.523 | 9.535 | 12.147 | 8.68 | 9.189 | 20.22 |
| 8 | 5.967 | 9.278 | 9.236 | 5.789 | 9.207 | 17.652 |
| 9 | 6.01 | 10.419 | 8.565 | 6.001 | 10.462 | 15.91 |
| 10 | 5.019 | 9.401 | 9.651 | 4.979 | 9.293 | 18.443 |
| 11 | 12.695 | 9.951 | 11.685 | 11.886 | 9.932 | 19.111 |
| 12 | 7.973 | 10.289 | 11.85 | 7.81 | 10.494 | 18.112 |
| 13 | 6.344 | 13.968 | 13.87 | 6.164 | 12.58 | 23.444 |
| 14 | 3.615 | 11.937 | 12.011 | 3.496 | 9.546 | 19.285 |
| 15 | 10.371 | 11.426 | 11.641 | 9.43 | 11.762 | 15.419 |
| 16 | 9.994 | 15.302 | 17.45 | 9.42 | 14.25 | 25.318 |
| 17 | 8.455 | 14.049 | 17.837 | 8.149 | 14.028 | 28.25 |
| 18 | 6.252 | 12.369 | 14.174 | 6.28 | 12.709 | 19.231 |
| 19 | 7.297 | 11.696 | 11.081 | 6.889 | 9.639 | 15.6 |
| 20 | 8.232 | 15.834 | 17.783 | 7.755 | 14.707 | 29.075 |
| 21 | 6.396 | 8.167 | 7.962 | 6.236 | 8.013 | 15.206 |
| 22 | 11.36 | 10.291 | 16.176 | 10.966 | 10.115 | 28.117 |
| 23 | 5.759 | 10.059 | 9.716 | 5.709 | 10.14 | 17.785 |
Table 5. The R_saddle and R_extremum values for SC and CC between the SQ_E, SQ and Hex_E image types. Columns 2–5 refer to the SQ_E & SQ comparison, columns 6–9 to Hex_E & SQ_E, and columns 10–13 to Hex_E & SQ; within each group the columns are R_saddle (SC), R_saddle (CC), R_extremum (SC) and R_extremum (CC).

| Image Index | R_sd SC | R_sd CC | R_ex SC | R_ex CC | R_sd SC | R_sd CC | R_ex SC | R_ex CC | R_sd SC | R_sd CC | R_ex SC | R_ex CC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.865 | 0.754 | 0.299 | 0.812 | 0.940 | 0.934 | 0.769 | 0.814 | 0.946 | 0.916 | 0.596 | 0.922 |
| 2 | 0.700 | 0.690 | 0.394 | 0.499 | 0.912 | 0.922 | 0.710 | 0.75 | 0.909 | 0.918 | 0.688 | 0.781 |
| 3 | 0.866 | 0.836 | 0.343 | 0.868 | 0.941 | 0.928 | 0.756 | 0.866 | 0.957 | 0.954 | 0.610 | 0.954 |
| 4 | 0.857 | 0.820 | 0.329 | 0.828 | 0.948 | 0.940 | 0.739 | 0.811 | 0.953 | 0.938 | 0.620 | 0.926 |
| 5 | 0.690 | 0.538 | 0.328 | 0.385 | 0.896 | 0.890 | 0.701 | 0.778 | 0.923 | 0.910 | 0.668 | 0.801 |
| 6 | 0.786 | 0.756 | 0.277 | 0.915 | 0.929 | 0.919 | 0.767 | 0.841 | 0.892 | 0.893 | 0.538 | 0.967 |
| 7 | 0.836 | 0.731 | 0.340 | 0.792 | 0.931 | 0.930 | 0.735 | 0.802 | 0.933 | 0.899 | 0.638 | 0.909 |
| 8 | 0.528 | 0.542 | 0.342 | 0.510 | 0.876 | 0.867 | 0.680 | 0.747 | 0.871 | 0.896 | 0.649 | 0.797 |
| 9 | 0.886 | 0.769 | 0.515 | 0.792 | 0.934 | 0.933 | 0.763 | 0.821 | 0.943 | 0.902 | 0.698 | 0.896 |
| 10 | 0.631 | 0.524 | 0.340 | 0.865 | 0.894 | 0.919 | 0.662 | 0.786 | 0.908 | 0.770 | 0.613 | 0.926 |
| 11 | 0.807 | 0.824 | 0.292 | 0.729 | 0.922 | 0.932 | 0.707 | 0.804 | 0.948 | 0.939 | 0.623 | 0.878 |
| 12 | 0.877 | 0.687 | 0.273 | 0.898 | 0.926 | 0.940 | 0.734 | 0.786 | 0.94 | 0.805 | 0.577 | 0.942 |
| 13 | 0.782 | 0.757 | 0.260 | 0.749 | 0.928 | 0.939 | 0.739 | 0.798 | 0.921 | 0.907 | 0.608 | 0.886 |
| 14 | 0.855 | 0.828 | 0.324 | 0.871 | 0.940 | 0.949 | 0.741 | 0.831 | 0.943 | 0.917 | 0.602 | 0.928 |
| 15 | 0.893 | 0.877 | 0.321 | 0.830 | 0.930 | 0.947 | 0.725 | 0.809 | 0.950 | 0.944 | 0.583 | 0.929 |
| 16 | 0.790 | 0.744 | 0.114 | 0.940 | 0.926 | 0.938 | 0.751 | 0.798 | 0.866 | 0.833 | 0.478 | 0.964 |
| 17 | 0.757 | 0.698 | 0.149 | 0.900 | 0.933 | 0.942 | 0.737 | 0.783 | 0.865 | 0.818 | 0.530 | 0.944 |
| 18 | 0.877 | 0.654 | 0.280 | 0.921 | 0.934 | 0.938 | 0.729 | 0.774 | 0.947 | 0.784 | 0.583 | 0.954 |
| 19 | 0.765 | 0.671 | 0.193 | 0.890 | 0.925 | 0.940 | 0.731 | 0.772 | 0.861 | 0.810 | 0.522 | 0.937 |
| 20 | 0.833 | 0.803 | 0.187 | 0.906 | 0.941 | 0.948 | 0.761 | 0.807 | 0.908 | 0.889 | 0.537 | 0.952 |
| 21 | 0.683 | 0.709 | 0.308 | 0.704 | 0.879 | 0.910 | 0.692 | 0.868 | 0.920 | 0.932 | 0.620 | 0.915 |
| 22 | 0.813 | 0.784 | 0.349 | 0.828 | 0.941 | 0.951 | 0.775 | 0.822 | 0.937 | 0.899 | 0.673 | 0.905 |
| 23 | 0.656 | 0.759 | 0.319 | 0.671 | 0.898 | 0.932 | 0.685 | 0.796 | 0.913 | 0.931 | 0.620 | 0.859 |
Table 6. The RP_saddle and RP_extremum values of the three comparisons computed by Equations (9) and (10). Columns 2–3 refer to the SQ_E vs. SQ comparison, columns 4–5 to Hex_E vs. SQ_E, and columns 6–7 to Hex_E vs. SQ.

| Image Index | RP_saddle | RP_extremum | RP_saddle | RP_extremum | RP_saddle | RP_extremum |
|---|---|---|---|---|---|---|
| 1 | 0.885 | 0.505 | 0.963 | 0.945 | 0.959 | 0.850 |
| 2 | 0.776 | 0.630 | 0.949 | 0.919 | 0.947 | 0.907 |
| 3 | 0.907 | 0.567 | 0.961 | 0.942 | 0.974 | 0.843 |
| 4 | 0.899 | 0.518 | 0.967 | 0.934 | 0.969 | 0.841 |
| 5 | 0.714 | 0.525 | 0.933 | 0.901 | 0.950 | 0.886 |
| 6 | 0.850 | 0.516 | 0.954 | 0.945 | 0.932 | 0.819 |
| 7 | 0.868 | 0.555 | 0.958 | 0.932 | 0.951 | 0.863 |
| 8 | 0.638 | 0.567 | 0.919 | 0.894 | 0.926 | 0.875 |
| 9 | 0.893 | 0.727 | 0.960 | 0.948 | 0.954 | 0.897 |
| 10 | 0.708 | 0.555 | 0.942 | 0.927 | 0.913 | 0.856 |
| 11 | 0.878 | 0.538 | 0.955 | 0.920 | 0.967 | 0.845 |
| 12 | 0.880 | 0.459 | 0.960 | 0.942 | 0.939 | 0.825 |
| 13 | 0.845 | 0.492 | 0.960 | 0.939 | 0.948 | 0.858 |
| 14 | 0.900 | 0.608 | 0.967 | 0.953 | 0.959 | 0.862 |
| 15 | 0.929 | 0.512 | 0.963 | 0.927 | 0.969 | 0.826 |
| 16 | 0.847 | 0.260 | 0.960 | 0.950 | 0.906 | 0.738 |
| 17 | 0.821 | 0.323 | 0.963 | 0.943 | 0.902 | 0.791 |
| 18 | 0.875 | 0.407 | 0.961 | 0.938 | 0.940 | 0.802 |
| 19 | 0.818 | 0.379 | 0.960 | 0.936 | 0.898 | 0.781 |
| 20 | 0.882 | 0.359 | 0.967 | 0.950 | 0.938 | 0.794 |
| 21 | 0.785 | 0.560 | 0.935 | 0.920 | 0.955 | 0.864 |
| 22 | 0.868 | 0.623 | 0.967 | 0.954 | 0.951 | 0.880 |
| 23 | 0.794 | 0.569 | 0.948 | 0.913 | 0.953 | 0.860 |
