The Impact of Curviness on Four Different Image Sensor Forms and Structures

The arrangement and form of the image sensor have a fundamental effect on any further image processing operation and image visualization. In this paper, we present a software-based method to change the arrangement and form of pixel sensors that generates hexagonal pixel forms on a hexagonal grid. We evaluate four different image sensor forms and structures, including the proposed method. A set of 23 pairs of images, randomly chosen from a database of 280 pairs of images, is used in the evaluation. Each pair of images has the same semantic meaning and general appearance, the major difference between them being the sharp transitions in their contours. The curviness variation is estimated by the effect of first and second order gradient operations, Hessian matrix analysis and critical point detection on the generated images, which have different grid structures, different pixel forms and a virtually increased fill factor as the three major sensor characteristics. The results show that the grid structure and pixel form are the first and second most important properties. Several dissimilarity parameters are presented for curviness quantification, among which the use of extremum points achieves the most distinctive results. The results also show that the hexagonal image is the best image type for distinguishing the contours in the images.


Introduction
The arrangement and form of photoreceptors vary from the fovea to the periphery of the retina. This is a consequence of evolution, which suggests that the arrangement and form of camera pixel sensors should be variable as well. However, practical issues and the history of camera development have led us to use fixed arrangements and forms of pixel sensors. Our previous works [1,2] showed that, despite the limitations of hardware, it is possible to implement a software method to change the size of pixel sensors. In this paper, we present a software-based method to change the arrangement and form of pixel sensors.
The pixel sensor arrangement is often referred to as the grid structure. Most available cameras have rectangular grid structures. Previous works [3,4] have shown the feasibility of converting the rectangular to the hexagonal grid structure by a half pixel shifting method (i.e., a software-based approach). Generation of the hexagonal pixel form is generally achieved by interpolation of the intensity values of the rectangular pixel form. In this paper, we present a method, based on our previous works, for maximizing the size of rectangular pixel forms, that generates a hexagonal pixel form on a hexagonal grid. Each original rectangular pixel form is deformed to a hexagonal one by modelling the incident photons onto the sensor surface. To the best of our knowledge, there is no previous method which offers hexagonal deformation of the pixel form on a hexagonal grid.
The comparison of different grid structures or different pixel forms is a challenging task and should be directed to a more specific task. Inasmuch as human vision is highly evolved to detect objects in a dynamic natural scene, gradient computation, as an elementary operation in object detection, becomes an interesting and appropriate candidate for this specific task. We have focused our investigation on the effect of the sharp transitions in the contours of objects, which is estimated by first and second order gradient computation on the images generated by different grid structures and different pixel forms. Two categories of images, having curved versus linear edges of the same object in a pair of images, are used to estimate the detectability of curviness for each of the four considered sensor structures. This paper is organized as follows: in Section 2, related research on hexagonal grid resampling and form is explained. Then the four types of image generation are explained in Section 3. Sections 4 and 5 present the methodology of curviness quantification and the experimental setup, respectively; the results are shown and discussed in Section 6. Finally, we summarize the work described in this paper in Section 7.

Background
Due to the higher sampling efficiency, consistent connectivity and higher angular resolution of hexagonal grids [3] in comparison to square grids, hexagonal grids have been investigated in numerous methods and applications over the last four decades [3,5-7], including image reconstruction [5], edge detection [7,8], image segmentation [9], and motion estimation [10]. Different algorithms and mathematical models have emerged in recent years to acquire hexagonal grids. For example, the rectangular grid can be suppressed in alternate rows and columns and sub-sampled, i.e., by a half pixel shifting method [11]. In this way, a bigger hexagonal pixel is generated at the cost of lower resolution in comparison to the original rectangular grid. In this method, the distance between rows is changed by a factor of √3/2, and the pixel shifting can be achieved, e.g., by implementing normalized convolution [12]. The significant features of such a structure are the equidistant sampling points and their 60-degree intersections. In Yabushita et al. [5], the pseudohexagonal elements are composed of small square pixels with an aspect ratio of 12:14, which was later implemented by Jeevan et al. with a different ratio of 9:8 [13]. In the spiral architecture of He et al. [14], four square pixels are averaged to generate a hexagonal pixel. Based on the spiral architecture, a design procedure for the development of hexagonal tri-directional derivative operators is presented in [15]; these operators can be applied directly to hexagonal images and can be used to improve both the efficiency and accuracy of feature extraction on conventional intensity images. Although the architecture preserves the main properties of objects, it loses some degree of resolution, which has an impact especially on the result of edge detection applications [8]. Later this architecture was improved by Wu et al.
[16], by mapping the rectangular grids to hexagonal ones, processing images on hexagonal grids, and remapping the results to the square grids. By processing images on a hexagonal grid, less distortion was observed [3]. All the above software-based methods have one major common property: they convert the rectangular grid to the hexagonal one using a linear combination of rectangular pixels. A technique for resampling digital images on this pseudohexagonal grid using three interpolation kernels and one blurring kernel is demonstrated in [17]. A new spline based on a least-squares approach is presented in [18], used for converting to a hexagonal lattice, and has been demonstrated to achieve better quality than traditional interpolative resampling. In the most recent research, Ref. [19] introduced a method to convert images from square to hexagonal lattices in the frequency domain using the Fourier transform. However, in our approach, the rectangular pixels initiate a non-linear learning model based on the photons incident onto the sensor surface. The approach is based on our earlier works [20,21], where the fill factor of an arbitrary image was estimated and used to obtain an enhanced image, as if captured by a 100% fill factor sensor.
Recently, as image quality and the preservation of quality during operations such as translation, rotation, and super resolution have become important issues to the research community, implementing a hexagonal grid has been shown to be one of the alternative solutions [12,22]. The hexagonal grid is our visual system's solution for observing complex environmental scenes. Believing that such natural scenes have had a great impact on the evolution of our visual system, i.e., on the creation of the hexagonal grid in the fovea of the retina, the justified question is which features in a natural scene and in its dynamical alteration cause such an impact. The pioneering work was done by Gestalt psychologists and, in more detail, by Rubin [23], who first demonstrated that contours contain most of the information related to object perception, such as shape, color and depth. In fact, by investigating simple conditions like those used by Gestalt psychologists, mostly consisting of contours only, Pinna et al. [24] demonstrated that the phenomenal complexity of material attributes emerges through appropriate manipulation of the contours. Bar et al. [16] showed in their psychological study that our visual system prefers curved visual objects; i.e., the physical property of objects in a scene which is manifested in sharp transitions in the contours of objects has a critical influence on our perception of those objects. Other studies, such as [25], show the capacity of the hexagonal grid for detection of sharp transitions in the contours of objects. In this paper, we implement the curviness, i.e., the sharp transitions in the contours of objects, as a comparison feature to evaluate four different grid structures.

Image Generation
In this section, we explain the generation of hexagonal enriched, square enriched, half pixel shift enriched, and half pixel shift images from an original image which has a square pixel form on a square grid.

Generation of the Hexagonal Enriched Image (Hex_E)
The hexagonal enriched image has a hexagonal pixel form on a hexagonal grid. The generation process is similar to the resampling process in [1], which has three steps: projecting the original image pixel intensities onto a grid of subpixels; estimating the values of the subpixels at the resampling positions; and estimating each new hexagonal pixel intensity in a new hexagonal arrangement. The three steps are elaborated in the following.

A Grid of Virtual Image Sensor Pixels Is Constructed
Each pixel is projected onto a grid of L × L square subpixels. Using the fill factor value FF, the size of the active area is defined as S × S, where S = L × √FF. The intensity value of every pixel in the image sensor array is assigned to the virtual active area in the new grid. The intensities of subpixels in the non-sensitive areas are set to zero. An example of such a sensor rearrangement at the subpixel level is presented on the left in Figure 1, where there is a 3 × 3 pixel grid, and the light and dark grey areas represent the active and non-active areas in each pixel. Assuming L = 30 and an active area composed of 18 × 18 subpixels, the fill factor becomes 36% according to the above equation; the intensities of the active areas are represented by different grey level values. The size of the square subpixel grid for one pixel was examined from 20 × 20 to 40 × 40; the intensities in the generated images show no further significant changes once the size reaches 30 × 30. Thus, in the experiment, L is set to 30.
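This projection step can be sketched as follows; the function name and the centring of the active area inside each subpixel cell are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def project_to_subpixels(image, L=30, fill_factor=0.36):
    """Project every sensor pixel onto an L x L grid of square subpixels.

    The active (light-sensitive) area is S x S subpixels with
    S = L * sqrt(FF); each pixel's intensity is assigned to its active
    area and the surrounding non-sensitive subpixels are set to zero.
    """
    S = int(round(L * np.sqrt(fill_factor)))
    h, w = image.shape
    grid = np.zeros((h * L, w * L), dtype=float)
    off = (L - S) // 2  # centre the active area in each cell (an assumption)
    for i in range(h):
        for j in range(w):
            grid[i * L + off:i * L + off + S,
                 j * L + off:j * L + off + S] = image[i, j]
    return grid
```

With L = 30 and FF = 0.36, each pixel becomes a 30 × 30 cell whose 18 × 18 centre carries the pixel intensity, matching the example in Figure 1.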

The Second Step Is to Estimate the Values of Subpixels in the New Grid of Subpixels
Considering the statistical fluctuation of incident photons and their conversion to electrons on the sensor, a local Gaussian model is estimated by the maximum likelihood method from each certain neighborhood of pixels. Using each local model, a local noise source is generated and introduced into each such neighborhood. Then, inspired by Monte Carlo simulation, all subpixels in each neighborhood are estimated in an iterative process using the known pixel values (for subpixels in the active area) or by linear polynomial reconstruction (for subpixels in the non-sensitive area). In each iteration step, the number of subpixels of the active area in the actual pixel is varied from zero to the total number of subpixels of the active area (i.e., the total subpixel number defined by the fill factor). By estimating the intensity values of the subpixels during the iteration process, a vector of intensity values for each subpixel is created, from which the final subpixel value is optimally predicted using the Bayesian inference method and the maximum likelihood of a Gaussian distribution.
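The core idea of this step, a local Gaussian model fitted by maximum likelihood that drives a Monte-Carlo-style estimate, can be sketched for a single unknown subpixel; this is a strong simplification of the full iterative procedure, and the function name and NaN convention are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_gap_subpixel(neighborhood, n_iter=200):
    """Monte-Carlo-style sketch of filling one non-sensitive subpixel.

    A local Gaussian model is fitted by maximum likelihood (sample mean
    and standard deviation of the known neighbouring subpixels); the
    model then acts as a local noise source, and repeated draws yield a
    vector of candidate intensities whose Gaussian maximum-likelihood
    estimate (the mean) is taken as the final prediction.
    """
    known = neighborhood[~np.isnan(neighborhood)]  # NaN marks unknown subpixels
    mu = known.mean()
    sigma = known.std(ddof=0)  # MLE of the Gaussian parameters
    draws = mu + sigma * rng.standard_normal(n_iter)
    return draws.mean()
```

The paper's method additionally varies the number of contributing active-area subpixels per iteration and uses linear polynomial reconstruction in the non-sensitive areas, which this sketch omits.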

The Third Step Is to Project the Subpixels Back to a Hexagonal Grid
The hexagonal grid is shown as red grids on the right of Figure 1, where the distance between each two hexagonal pixels is the same. The subpixels in each hexagonal area are estimated with respect to the virtual increase of the fill factor. The intensity value of a hexagonal pixel in the grid is the intensity value which has the strongest contribution in the histogram of its belonging subpixels. The corresponding intensity is divided by the fill factor, removing the fill factor effect, to obtain the hexagonal pixel intensity.
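For one hexagonal cell, the histogram-mode selection and fill factor compensation described above can be sketched as follows; the function name and the clipping to the 8-bit range are assumptions:

```python
import numpy as np

def cell_intensity(subpixels, fill_factor=0.36):
    """Sketch of the third step for one hexagonal cell.

    Takes the modal intensity of the cell's subpixels (the strongest
    contribution in their histogram), then divides by the fill factor
    to remove its effect.
    """
    values, counts = np.unique(np.asarray(subpixels, dtype=int),
                               return_counts=True)
    mode = values[np.argmax(counts)]
    return min(mode / fill_factor, 255.0)  # clip to 8-bit range (assumption)
```

The same selection-and-compensation rule is reused when projecting back to the square grid in Section 3.2.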

Generation of the Square Enriched Image (SQ_E)
The estimated square images are generated by three steps, where the two steps explained in Sections 3.1.1 and 3.1.2 are followed by a third step as follows. The subpixels are projected back to the original square grid, shown as red grids on the left of Figure 1. The intensity value of each pixel in the square grid is the intensity value which has the strongest contribution in the histogram of its belonging subpixels. Then the corresponding intensity is divided by the fill factor to obtain the square pixel intensity by a virtual increase of the fill factor to 100%, as in [1].

Generation of the Half Pixel Shift Image (HS) and Half Pixel Shift Enriched Image (HS_E)
The hexagonal grid in the previous works [3,4] is mimicked by a half pixel shift, which is derived by delaying sampling by half a pixel in the horizontal direction. The red grid presented in the middle of Figure 1 is the new pseudohexagonal sampling structure, whose pixel form is still square. The new pseudohexagonal grid is derived from a usual 2-D grid by shifting each even row half a pixel to the right and leaving odd rows untouched, or by any similar translation. The half pixel shift image (HS) and half pixel shift enriched image (HS_E) are generated from the original image (SQ) and the enriched image (SQ_E) on the square grid, respectively.
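The half pixel shift can be sketched as a one-line interpolation per even row; the linear interpolation with edge replication is an assumption (the paper cites normalized convolution [12] as one realization):

```python
import numpy as np

def half_pixel_shift(image):
    """Pseudohexagonal resampling sketch: every even row is shifted half
    a pixel to the right by linear interpolation (averaging each pixel
    with its right-hand neighbour, edge replicated); odd rows are left
    unchanged."""
    out = np.asarray(image, dtype=float).copy()
    for i in range(0, out.shape[0], 2):
        row = out[i]
        out[i] = 0.5 * (row + np.r_[row[1:], row[-1]])
    return out
```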

Curviness Quantification
The curviness is quantified by comparing the sharp transitions in the contours of all corresponding objects in a pair of images which have exactly similar contents but two different contour types, namely straight or curved. We define an image which has only straight or only curved contours as an SC or CC image, respectively. First and second order gradient operations are used in the quantification on each original image (i.e., the SQ image type) and its set of generated images (i.e., the Hex_E, SQ_E, HS_E, and HS image types). We elaborate these operations in the following.


Implementing a First Order Gradient Operation
The familiar first order gradient ∇J is defined as:

∇J = J(z_1) − J(z_2) (1)

where J represents the image, and z_1 and z_2 represent the positions of two adjacent pixels which have a common border, i.e., a common side or corner border. The angle between the orientation of the adjacent pixels and the horizontal axis represents the direction of the gradient. The first order gradient values of a pair of images (i.e., even with different grids) can be compared by computing and analyzing the eigenvalues and eigenvectors obtained by solving the expression (A − λ_j I)e_j = 0, where A is the 2 × 2 covariance matrix of the 2 × n matrix whose rows are the n first order gradient values of each of the two images, λ_j and e_j are the eigenvalue and the eigenvector, respectively, j is the index number with value 1 or 2, and I is the identity matrix.
In the comparison, when two images have the same contents but different grids, the range of eigenvalues from small to large values indicates the degree of similarity to dissimilarity between the grid structures. When two images have the same content but different contours, the curviness can be quantified by comparing the eigenvalues related to the images.
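This eigenvalue comparison can be sketched as follows, under the assumption (consistent with Section 6) that the eigenvalues are taken from the covariance of the two sorted gradient sequences via SVD; the function name is ours:

```python
import numpy as np

def grid_eigenvalues(grads_a, grads_b):
    """Compare two grid structures through their first-order gradients.

    The sorted gradient values of the two images form a 2 x n matrix;
    the eigenvalues of its 2 x 2 covariance matrix (obtained here via
    SVD, which for a symmetric positive-semidefinite matrix coincides
    with the eigendecomposition) range from small to large as the grids
    go from similar to dissimilar."""
    A = np.vstack([np.sort(np.asarray(grads_a, dtype=float)),
                   np.sort(np.asarray(grads_b, dtype=float))])
    C = np.cov(A)
    return np.linalg.svd(C, compute_uv=False)
```

For identical gradient sequences the covariance matrix has rank one, so the second eigenvalue vanishes; the more the grids differ, the larger it grows.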

Implementing the Hessian Matrix on SQ and SQ_E Images
Analyzing the second order gradient operation in the form of the Hessian matrix has an intuitive justification in the context of curvature quantification. The eigenvalue analysis of the Hessian extracts the principal gradient directions in which the local second order structure of an image is decomposed. This directly gives the direction of smallest curvature (along the contour) [26,27]. The Hessian matrix of the image J is computed from the convolution of the image with the second order gradients of the Gaussian kernel G = (1/(2πσ²)) e^(−(x² + y²)/(2σ²)) as follows:

H = [ G_xx ∗ J   G_xy ∗ J ; G_xy ∗ J   G_yy ∗ J ] (2)

where G_xx, G_xy and G_yy represent the gradient kernels in the horizontal, diagonal and vertical directions, respectively. The eigenvalues and eigenvectors of the Hessian matrix H are obtained by solving the expression (H − λ_j^hs I)e_j^hs = 0, where λ_j^hs and e_j^hs are the eigenvalue and the eigenvector, respectively, and j is the index number with value 1 or 2. The first eigenvector (the one whose corresponding eigenvalue has the largest absolute value) is the direction of greatest curvature. The other eigenvector (always orthogonal to the first one) is the direction of least curvature. The corresponding eigenvalues are the respective amounts of these curvatures. Inspired by the earlier work of Frangi et al. [22], three measurement parameters are derived from the eigenvalues and eigenvectors of the Hessian matrix and used in the comparison of pairs of images with the same content but different contours. The parameters are:

P_1 = arctan(e_1y^hs / e_1x^hs) (3)
P_2 = λ_2^hs / λ_1^hs (4)
P_3 = √((λ_1^hs)² + (λ_2^hs)²) (5)

where λ_1^hs (the one with the largest absolute value) and λ_2^hs are the eigenvalues of the Hessian matrix and e_1^hs = (e_1x^hs, e_1y^hs), e_2^hs = (e_2x^hs, e_2y^hs) are the related eigenvectors. P_1, P_2 and P_3 measure the main orientation, the relation between the two principal curvatures (i.e., each of which measures the amount of curvature bending in a different direction), and the second order structureness, respectively.
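The three Hessian-based measures can be sketched as follows; as an assumption, the Hessian is approximated here by finite differences rather than Gaussian derivative kernels, and the orientation is taken from the standard closed form for a symmetric 2 × 2 matrix:

```python
import numpy as np

def hessian_measures(J):
    """Per-pixel Hessian of image J via finite differences, and three
    Frangi-inspired measures: principal orientation, eigenvalue ratio,
    and second order structureness sqrt(l1^2 + l2^2)."""
    Jy, Jx = np.gradient(np.asarray(J, dtype=float))
    Jxy, Jxx = np.gradient(Jx)   # derivatives of Jx along y then x
    Jyy, _ = np.gradient(Jy)
    half_trace = 0.5 * (Jxx + Jyy)
    root = np.sqrt(0.25 * (Jxx - Jyy) ** 2 + Jxy ** 2)
    la, lb = half_trace + root, half_trace - root
    swap = np.abs(lb) > np.abs(la)
    l1 = np.where(swap, lb, la)        # |l1| >= |l2|
    l2 = np.where(swap, la, lb)
    P1 = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)                 # orientation
    P2 = np.divide(l2, l1, out=np.zeros_like(l1), where=l1 != 0)  # ratio
    P3 = np.sqrt(l1 ** 2 + l2 ** 2)                             # structureness
    return P1, P2, P3
```

For the paraboloid J(x, y) = x² + y², the interior Hessian is diag(2, 2), so P2 = 1 and P3 = √8, which the sketch reproduces.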
The dissimilarity of each pair of SC and CC images (i.e., which have a square grid and the same content but different contours) is measured by:

D_j = | P_j^SC − P_j^CC | (6)

where SC and CC are a pair of images which have the same contents but different contours (the pair can be of the SQ or SQ_E image type), j is the index number (with values 1, 2, or 3), P_j is one of the three parameters according to j, and P_j^SC and P_j^CC are the measurement parameter P_j applied to the SC and CC image of the pair, respectively.

Implementing Second Order Operation to Detect Saddle and Extremum Points
The second order gradient operation has been used to detect spatial critical points, i.e., saddle and extremum points. The number of critical points in an image depends on the contour shape. Thus, a pair of images with the same content but different contours can be compared using the critical points detected in each image. Generally, critical points are detected where the gradient is zero, which means the Hessian matrix can be used for this purpose; the critical points are found by using the eigenvalues of the Hessian matrix on a square grid. However, on a square grid the zeros of the gradient will in general not coincide with the grid points, but lie somewhere between them. Kuijper [28] showed that by converting the square grid to a hexagonal grid, implementing a half pixel shifting method, it is possible to detect a more accurate number of critical points in a square grid-based image. In his detection process, based on a hexagonal grid, each point has six neighbours. The sign of the intensity difference of each of these neighbours with respect to the point itself is determined, which results in the classification of the point into one of four types: regular, minimum or maximum (extremum), saddle, and degenerate saddle point. We used this detection process not only on square grid-based images, but also on hexagonal grid-based images to detect saddle and extremum points.
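The neighbour-sign classification described above can be sketched as follows; the function name is ours, and the rule, counting cyclic sign changes among the six neighbour differences (0 changes: extremum, 2: regular, 4: saddle, 6: degenerate saddle), is a common reading of Kuijper's method:

```python
def classify_point(center, neighbours):
    """Classify a hexagonal grid point from the signs of the intensity
    differences to its six neighbours, listed in cyclic order."""
    signs = [1 if n > center else -1 for n in neighbours]
    changes = sum(signs[i] != signs[(i + 1) % 6] for i in range(6))
    if changes == 0:
        return "extremum"          # all neighbours above or all below
    if changes == 2:
        return "regular"
    if changes == 4:
        return "saddle"
    return "degenerate saddle"     # 6 sign changes
```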
In relation to curvature quantification, pairs of SQ, SQ_E, and Hex_E image types are compared with respect to the detected critical points. The comparison measurement related to the saddle points is defined as:

R_sd(C_sd) = 1 − c_sd / |A_sd ∪ B_sd| = 1 − c_sd / (a_sd + b_sd − c_sd) (7)

where A_sd and B_sd are two sets of a_sd and b_sd saddle points in images A and B, respectively (where A and B are two types of images), C_sd = A_sd ∩ B_sd is the set of saddle points that are at the same position in the two images with c_sd = |C_sd| = |A_sd ∩ B_sd| points, and |A_sd ∪ B_sd| is the number of all points which are in the sets A_sd and B_sd. Equation (7) is a normalized nonlinear dissimilarity measurement function based on the commonly detected saddle points in the two images; a typical characteristic of such a function is shown at the top left of Figure 2. The comparison measurement related to the extremum points is defined as:

R_ex(C_ex) = 1 − c_ex / |A_ex ∪ B_ex| = 1 − c_ex / (a_ex + b_ex − c_ex) (8)

where A_ex and B_ex are two sets of a_ex and b_ex extremum points in images A and B, respectively, C_ex = A_ex ∩ B_ex is the set of extremum points that are at the same position in the two images with c_ex = |C_ex| = |A_ex ∩ B_ex| points, and |A_ex ∪ B_ex| is the number of all points which are in the sets A_ex and B_ex. Equation (8) is a normalized nonlinear dissimilarity measurement function based on the commonly detected extremum points; a typical characteristic of such a function is shown at the top right of Figure 2. The comparison measurement related to the saddle points between two pairs of images is defined as:

RP_sd(C_sd, F_sd) = | c_sd / (a_sd + b_sd − c_sd) − f_sd / (d_sd + e_sd − f_sd) | (9)

where a_sd, b_sd, d_sd and e_sd are the numbers of saddle points in the sets A_sd, B_sd, D_sd and E_sd of images A, B, D and E, respectively. The two pairs of images are (A, D) and (B, E); the images in each pair are of the same type, and each pair is of a different type from the other pair. The images A and B are the SC images, whereas the images D and E are the CC images.
The sets C_sd = A_sd ∩ B_sd and F_sd = D_sd ∩ E_sd contain the saddle points that are at the same position in each related pair of images, with c_sd = |C_sd| and f_sd = |F_sd| points, respectively. Equation (9) is a normalized 2D nonlinear dissimilarity measurement function based on the commonly detected saddle points in each pair of images; a typical characteristic of such a function is shown at the bottom left of Figure 2. The comparison measurement related to the extremum points between two pairs of images is defined as:

RP_ex(C_ex, F_ex) = | c_ex / (a_ex + b_ex − c_ex) − f_ex / (d_ex + e_ex − f_ex) | (10)

where c_ex = |C_ex| = |A_ex ∩ B_ex| and f_ex = |F_ex| = |D_ex ∩ E_ex| are defined analogously for the extremum points. Equation (10) is a normalized 2D nonlinear dissimilarity measurement function based on the commonly detected extremum points in each pair of images; a typical characteristic of such a function is shown at the bottom right of Figure 2.
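The single-pair measures can be read as set-overlap dissimilarities over critical-point positions; a minimal sketch under the assumption that Equations (7) and (8) are Jaccard-type distances (one minus the shared fraction of the union):

```python
def point_set_dissimilarity(points_a, points_b):
    """Normalized dissimilarity between two sets of critical-point
    positions: 1 - |A intersect B| / |A union B| (a Jaccard-distance
    reading of the single-pair measures)."""
    A, B = set(points_a), set(points_b)
    union = A | B
    if not union:
        return 0.0          # no critical points in either image
    return 1.0 - len(A & B) / len(union)
```

The value is 0 when the two images share all critical-point positions and approaches 1 as the overlap shrinks, matching the normalized, nonlinear behaviour described in the text.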

Figure 2. Typical characteristics of R_sd(C_sd) in Equation (7), R_ex(C_ex) in Equation (8), RP_sd(C_sd, F_sd) in Equation (9), and RP_ex(C_ex, F_ex) in Equation (10).

Experimental Setup
An image dataset from [29] is used for our experiments. The database was used earlier to investigate the human visual preference for curved versus linear edges, to find the physical elements in a visual stimulus which cause liking or disliking of objects. The database is composed of 280 pairs of images, where each pair of images has the same semantic meaning and general appearance. We found 51 image pairs that, in addition to sharing semantic meaning, differed in contour curvature, i.e., straight versus curved lines. Each pair of images with the same semantic content is then divided into a straight contour (SC) and a curved contour (CC) image. Each of the images has the same resolution, 256 × 256, in uint8 format. Figure 3 shows the twenty-three pairs of images which were randomly selected from the 51 pairs and used for the experiment. The images of the database were generated by computer graphic tools (i.e., without natural noise). However, they are compressed in jpg format, which introduces noise accordingly, as shown in Figure 3. It is certain that this noise affects the first and second order derivative operations of Equations (1) and (2). Some smoothing filters were used to reduce the noise, but they significantly changed the content of the objects (straight or curved lines) in the images, which is undesirable given the goal of the study (i.e., comparing straight versus curved lines). Instead of smoothing the whole image, a template mask for each of the images is used to detect the background, which is irrelevant to the object content, as shown in Figure 4. Each mask is generated automatically using morphological operations. When the images with different pixel structures and forms are compared, they are contaminated with the same amount of noise; thus the effect of the noise on the results is considered insignificant, and the comparison results are obtained only from the masked area of the respective images.
The images are converted to grayscale, and then the fill factor of the images is estimated by the method explained in [20]; the fill factor is estimated to be 36%. The impact of curved versus straight edges on the enriched hexagonal (Hex_E), enriched estimated square (SQ_E), enriched half pixel shifted (HS_E), and original (SQ) images is evaluated by computing the first and second order gradient operations (see Sections 4.1 and 4.2) on the images. All images have the same resolution to ensure that the resolution does not affect the number of gradients. All processing is programmed and implemented in Matlab 2017a on a stationary computer with an Intel i7-6850K CPU (Intel Corporation, California, USA, https://ark.intel.com/products/94188/) and 32 GB of RAM to keep the process stable and fast.


Results and Discussion
One of the original images and its set of related enriched generated images (hexagonal, estimated square, and half pixel shifted), whose generation was explained in Section 3, are shown in Figure 5. The images from left to right in the first row are the original image and the related generated images. The images in the second row of Figure 5 are the zoomed regions of the images (marked by a red square). The generated images show a better dynamic range in comparison to the original images, as was shown in [21].


First Order Gradient Operation
The sharp transitions in the contours of each object in the images, i.e., the curviness, are quantified by implementing the first and second order gradient operations on the pairs of original images and their sets of generated images; each operation is explained in more detail in Section 4. For the images on the hexagonal grid (the hexagonal and half pixel shift images), the first order gradients are computed in six directions: 0, 60, 120, 180, 240, and 300 degrees. Due to the resolution similarity of the generated images on the hexagonal grid, their numbers of pixels and computed gradient elements are the same. The top and middle rows of Figure 6 show the sorted first order gradient values from the generated Hex_E image (i.e., the image shown in Figure 5) in comparison to the generated HS and HS_E images at 0, 60 and 120 degrees, from left to right, respectively. The amount of spreading of the gradient values reveals the correlation between the grid structures of the images: the more similar the image grids are, the less the spreading. The more densely the points are distributed, the less variation in the gradient results is expected. Due to the grid similarity of the original images and the SQ_E images, the correlations of sorted gradient values at 0, 45 and 90 degrees between them are linear, as shown in the bottom row of Figure 6. However, the correlations of sorted gradient values at 0, 60 and 120 degrees between the pseudohexagonal grid structure and the hexagonal grid structure are nonlinear and dissimilar, as shown in the top and middle rows of Figure 6.
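The six-direction gradient computation can be sketched on an axial-coordinate representation of a hexagonal image; this representation (a dictionary keyed by axial coordinates) is an illustrative assumption, since the paper stores its images as offset arrays:

```python
# Axial-coordinate neighbour offsets for the six hexagonal directions
# (0, 60, 120, 180, 240 and 300 degrees).
HEX_DIRS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]

def hex_gradients(img):
    """First order gradients of a hexagonal image in all six directions.

    `img` maps axial coordinates (q, r) to intensities. For each
    direction, the intensity difference to the neighbour in that
    direction is collected wherever that neighbour exists."""
    grads = []
    for dq, dr in HEX_DIRS:
        g = [img[(q + dq, r + dr)] - v
             for (q, r), v in img.items() if (q + dq, r + dr) in img]
        grads.append(g)
    return grads
```

Since every interior hexagonal pixel has exactly six equidistant neighbours, each direction yields the same number of gradient elements, which is the property exploited when comparing the sorted gradient sequences.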

Similarly, the correlations of the sorted gradient values at 0 degrees between the SQ image grid structure and both the pseudo hexagonal and hexagonal grid structures, shown in the first and third columns from the left of Figure 7, are in each case nonlinear and dissimilar. Figure 7 shows that the gradient results of the four types of generated images differ from each other in comparison to the original SQ image, especially in the second plot from the left. This is because the grids of the HS, HS_E and Hex_E images are more alike to each other and more different from the square grid (i.e., the grid of the SQ and SQ_E images). The similarity or dissimilarity of any two grid structures can be visualized, as shown in Figures 6 and 7. However, to quantify such a similarity or dissimilarity the first order gradient operation can be used, as described in Section 4.1. Accordingly, the covariance of the gradient values of each two images is computed, where the two compared images have the same contents but different grid structures. Then the eigenvalues and eigenvectors of each covariance matrix are computed using the singular value decomposition (SVD) method.
The first eigenvalues, which are also the largest ones, of the covariance matrix between each pair of the original images and its set of generated images are shown at the top of Figure 8, and the second eigenvalues at the bottom of Figure 8. The blue, red, green and black lines represent the first or second eigenvalues of the covariance matrices with respect to the four types of images presented in Section 3, respectively. The continuous and dashed lines represent the eigenvalues computed with respect to the original CC and SC images, respectively.
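The covariance-and-SVD step can be sketched in Python as follows; this is an illustrative version under the assumption that the gradient values of the two images are sorted and paired by rank, not the authors' exact code:

```python
import numpy as np

def grid_similarity_eigenvalues(grads_a, grads_b):
    """Pair the sorted gradient values of two images (truncating to the
    shorter list), compute their 2x2 covariance matrix, and return its
    eigenvalues via SVD, largest first."""
    a = np.sort(np.asarray(grads_a, dtype=float))
    b = np.sort(np.asarray(grads_b, dtype=float))
    n = min(a.size, b.size)
    cov = np.cov(np.vstack([a[:n], b[:n]]))   # 2x2 covariance matrix
    # For a symmetric positive semi-definite matrix the singular values
    # coincide with the eigenvalues, so SVD yields them in sorted order.
    _, s, _ = np.linalg.svd(cov)
    return s                                   # s[0] >= s[1]
```

For identical (perfectly linearly correlated) gradient lists the second eigenvalue vanishes; the more dissimilar the two grids, the more the point cloud spreads and the larger the second eigenvalue becomes.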
Table 1 shows the summary of the comparison results of the two plots in Figure 8. The differing properties among the types of images are caused by the diversity of their grid structures, pixel forms or fill factor values. 'Yes' and 'No' in the table represent similarity and dissimilarity, respectively, of such a property between each generated image and the SQ image. The values in the last four columns of Table 1 are the sums of the first and second eigenvalues of the respective image types shown in Figure 8. In the table, an increase of the first or second eigenvalue indicates an increase of similarity or dissimilarity, respectively, between the generated image and the SQ image. The SQ_E and HS images show higher similarity to the SQ image than the other image types; see the first eigenvalue results in Table 1 and the top plot in Figure 8. The comparison of these two image types shows that the grid structure is more important than the pixel form and fill factor value in causing differences between them. The Hex_E and HS_E images show the highest dissimilarity to the SQ image, in that order. The comparison of these two image types shows that when the grid structures are the same, the pixel form is more important than the fill factor value in causing differences between them. The results show that the grid structure, pixel form, and fill factor value are, in that order, important in the generation of a new type of image. Here we should note that these three properties are not entirely independent of each other.
In Table 1, the results related to SC and CC for all four types of images are clearly distinctive. However, the detailed comparison of SC and CC in Figure 8 shows that it is not possible to draw a clear conclusion between SC and CC from the first order operation, due to the variation of the results. Tables 2 and 3 show the measured P_1, P_2, and P_3 parameters (Equations (3)-(5)) between SC and CC images of the SQ or SQ_E image type using the Hessian matrix; for more detail see Section 4.2. The bold values in the tables show the higher one in each comparison of a pair of SC and CC images. In Table 2, the results related to the CC images have higher values than the SC results. D_(SC,CC)^(P_j) (j = 1, 2, 3) indicates that the contours in the CC images are distinctively different from (have more curviness than) those in the SC images. Table 3 shows similar results: the values are higher for the majority of CC images with respect to the SC results (94% of the measured values). Table 4 shows the dissimilarity measurement D_(SC,CC)^(P_j) (j = 1, 2, 3) of Equation (7). Since the comparison between the SQ and SQ_E images using D measures the second order structureness, we can conclude that the SQ_E image type performs far better than the SQ image type in the detection of curviness.
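The Hessian computation underlying Tables 2-4 can be illustrated with finite differences on a square grid. The measure below is a generic "second order structureness" (the Frobenius norm of the Hessian) used only as an illustrative stand-in; the paper's exact P_1-P_3 parameters are those defined by Equations (3)-(5) in Section 4.2:

```python
import numpy as np

def hessian_2d(img):
    """Finite-difference Hessian components of a 2-D image on a square
    grid (a sketch; the paper also evaluates hexagonal variants)."""
    gy, gx = np.gradient(img)        # first derivatives along y and x
    gyy, gyx = np.gradient(gy)       # second derivatives of gy
    gxy, gxx = np.gradient(gx)       # second derivatives of gx
    return gxx, gxy, gyy

def structureness(img):
    """Frobenius norm of the Hessian at every pixel; a common second
    order structureness measure, not the paper's P_1-P_3."""
    gxx, gxy, gyy = hessian_2d(img)
    return np.sqrt(gxx**2 + 2 * gxy**2 + gyy**2)
```

On flat or linearly ramped regions the measure is zero; it responds only where the intensity surface bends, which is what makes Hessian-based parameters suitable for quantifying curviness.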

Saddle and Extremum Points
The saddle and extremum points detected by the second order gradient operation on the same HS, HS_E and Hex_E images shown in Figure 5 are shown in Figure 9, respectively. The HS and HS_E images are generated from their corresponding SQ and SQ_E images by converting the square grid to a hexagonal grid, implementing the half-pixel shifting method proposed in Section 3.3. As discussed in Section 4.3, the HS and HS_E images are better at detecting the critical points than the SQ and SQ_E images. The numbers of saddle and extremum points in each SC or CC image of the Hex_E, SQ_E and SQ image types are shown in Figure 10. In 74% of the pairs of CC and SC images of each image type, more critical points are detected in the CC image than in the SC image. However, in the figure the comparison among the Hex_E, SQ and SQ_E image types is still indistinct. The top and middle plots in Figure 11 show the numbers of common saddle and extremum points, respectively, and the bottom plot shows the total number of critical points. The common points are those points which are at the same position in each pair of SC and CC images. The results indicate that the SQ image type detects more common saddle points (in 87% of the SC and CC image pairs), and the Hex_E image type detects more extremum points (in 74% of the image pairs). Because the critical points in the SQ, SQ_E and Hex_E images are all detected on the hexagonal grid (see Section 4.3), the difference in results between the image types is caused by their different pixel forms and fill factor values. The result values for the SQ_E and Hex_E image types in Figure 11 are close to each other, indicating that the pixel form has more effect on the second order gradient than the fill factor.
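Critical-point classification from second order information can be sketched as follows: pixels with a (nearly) vanishing first order gradient are classified by the sign of the Hessian determinant. This square-grid Python sketch is illustrative only; the paper performs the detection on the hexagonal grid (Section 4.3):

```python
import numpy as np

def classify_critical_points(img, grad_tol=1e-6):
    """Locate pixels where the first order gradient (nearly) vanishes
    and classify them by the sign of the Hessian determinant:
    det > 0 -> extremum, det < 0 -> saddle."""
    gy, gx = np.gradient(img)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    det = gxx * gyy - gxy * gxy          # Hessian determinant per pixel
    flat = (np.abs(gx) < grad_tol) & (np.abs(gy) < grad_tol)
    extrema = flat & (det > 0)
    saddles = flat & (det < 0)
    return extrema, saddles
```

For example, a paraboloid-like patch yields a single extremum at its centre, while a hyperbolic patch yields a single saddle there.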
The normalized nonlinear dissimilarity measurement values of R_saddle and R_extremum for SC and CC from the three pairwise comparisons between the SQ_E, SQ and Hex_E images are computed by Equations (7) and (8) and shown in Table 5, where a higher value represents a larger dissimilarity of the contours between the respective image types. For each pair of SC and CC, the larger value of R_saddle or R_extremum is shown in bold. The correlations of the three pairwise comparisons in Table 5 are shown in Figure 12, where the points in the four colors blue, red, green and black represent the corresponding values of R_saddle and R_extremum for each pair of SC and CC images, respectively. In the figure, the three axes represent the three pairwise comparisons between image types. The results in Table 5 show that the R_saddle values for SC and CC images are close to each other, which is also verified in Figure 12; i.e., the blue points are mixed with the red points. In contrast, the R_extremum values for SC and CC images in Table 5 are distinctive, which is shown in Figure 12 by the green and black points. According to these results, our conclusion is that only the extremum points can be used for quantifying the curviness, i.e., to distinguish a CC image from an SC image. Comparing the two comparisons SQ_E & SQ and Hex_E & SQ_E, and considering the three properties among the image types (see Table 1), the measured dissimilarity in Table 5 is caused by the fill factor and the pixel form, respectively. This is because the SQ_E and SQ images are converted to the hexagonal grid by the half-pixel shift for the detection of critical points; thus the grid structures of all three image types in the two comparisons are the same.
According to Table 5, the dissimilarity values of the Hex_E & SQ_E comparison are higher than those of the SQ_E & SQ comparison, which shows that the pixel form is a more important property than the fill factor in causing dissimilarity between images. This is consistent with the results presented in Table 1. Thus, from the results we conclude that the importance of the three properties from high to low is the grid structure, the pixel form and the fill factor, respectively.
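For illustration only, a normalized count-based dissimilarity between two image types might look like the following sketch; the paper's R_saddle and R_extremum are defined by Equations (7) and (8) in Section 4.3, and this simple ratio does not claim to reproduce them:

```python
def normalized_dissimilarity(n_type_a, n_type_b):
    """A hedged sketch of a normalized dissimilarity between two image
    types: the absolute difference of detected critical-point counts
    divided by their sum, yielding a value in [0, 1]. Illustrative
    stand-in only, not the paper's Equations (7) and (8)."""
    total = n_type_a + n_type_b
    if total == 0:
        return 0.0
    return abs(n_type_a - n_type_b) / total
```

Equal counts give 0 (no dissimilarity), and the value grows toward 1 as one image type detects far more critical points than the other.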

Figure 12. The correlation of R_saddle and R_extremum values for SC and CC images from Table 5.
Table 6 shows the results of the RP_saddle and RP_extremum values of the three comparisons, computed by Equations (9) and (10). The values in each column represent dissimilarity values based on the common detected saddle or extremum points in each pair of SC and CC images, used to compare two different image types. The dissimilarity values in the table are not consistent for each pair of image types in comparison to the previous results, indicating that processing SC and CC together is not an adequate way to quantify the difference between two image types. By combining the results in Tables 5 and 6, we conclude that according to all the measurements the Hex_E image type has the largest dissimilarity in comparison to the other image types, which means that Hex_E is the best image type among the tested image types for quantifying the curviness of a contour.

Conclusions
In this paper, we present a software-based method to generate images with a hexagonal pixel form on a hexagonal sensor grid. Each original rectangular pixel form is deformed into a hexagonal one by modelling the incident photons onto the sensor surface. Four different image sensor forms and structures, including the proposed method, are evaluated by measuring their ability to detect curviness. We introduce a method for curviness quantification by comparing the sharp transitions in the contours of all corresponding objects in pairs of images which have the same contents but two different contours. The quantification measurements are achieved by implementing first and second order gradient operations in the form of several introduced and defined dissimilarity parameters. We show how the first and second order gradient operations, the Hessian matrix computation, and the measurement of the dissimilarity parameters can be implemented on both square and hexagonal grid structures. We pay special attention to the detection of critical points (i.e., saddle and extremum points) using different image types.
The grid structure, pixel form and fill factor are proposed as the three major properties of the sensor characteristics, and the results indicate that the grid structure is the most important one in differentiating the image types, with the pixel form the second most important. The results of the curviness quantification indicate that the detection of extremum points can be used to clearly distinguish CC from SC images. We show that the enriched hexagonal image (i.e., Hex_E) is the best at detecting curviness, according to its curviness measurement results, in comparison to the other tested image types. In the future, we intend to study other grid structures and pixel forms.