Edge-Based and Prediction-Based Transformations for Lossless Image Compression

Pixelated images are used to transmit data between computing devices that have cameras and screens. Significant compression of pixelated images has been achieved by an “edge-based transformation and entropy coding” (ETEC) algorithm recently proposed by the authors of this paper. The study of ETEC is extended in this paper with a comprehensive performance evaluation. Furthermore, a novel algorithm termed “prediction-based transformation and entropy coding” (PTEC) is proposed in this paper for pixelated images. In the first stage of the PTEC method, the image is divided hierarchically to predict the current pixel using neighboring pixels. In the second stage, the prediction errors are used to form two matrices, where one matrix contains the absolute error value and the other contains the polarity of the prediction error. Finally, entropy coding is applied to the generated matrices. This paper also compares the novel ETEC and PTEC schemes with the existing lossless compression techniques: “joint photographic experts group lossless” (JPEG-LS), “set partitioning in hierarchical trees” (SPIHT) and “differential pulse code modulation” (DPCM). Our results show that, for pixelated images, the new ETEC and PTEC algorithms provide better compression than other schemes. Results also show that PTEC has a lower compression ratio but better computation time than ETEC. Furthermore, when both compression ratio and computation time are taken into consideration, PTEC is more suitable than ETEC for compressing pixelated as well as non-pixelated images.


Introduction
In today's information age, the world is overwhelmed with a huge amount of data. With the increasing use of computers, laptops, smartphones, and other computing devices, the amount of multimedia data in the form of text, audio, video, images, etc. is growing at an enormous speed. Storage of large volumes of data has already become an important concern for social media, email providers, medical institutes, universities, banks, and many other offices. In digital media such as digital cameras, digital cinemas, and films, high-resolution images are needed. In addition to storage, data are often required to be transmitted over the Internet at the highest possible speed. Due to constraints in storage facilities and limitations in transmission bandwidth, compression of data is vital [1][2][3][4][5][6][7][8].
The basic idea of compressing images lies in the fact that many image pixels are correlated, and this correlation can be exploited to remove redundant information [9]. The removal of redundancy and irrelevancy leads to a reduction in image size. There are two major types of image compression: lossy and lossless [10][11][12]. In the case of lossless compression, the reconstruction process can recover the original image exactly from the compressed image. On the other hand, images that go through a lossy compression process cannot be precisely recovered to their original form. Examples of lossy compression include wavelet-based schemes such as embedded zerotrees of wavelet transforms (EZW), as well as joint photographic experts group (JPEG) and moving picture experts group (MPEG) compression.
A large number of research papers report image compression algorithms. For example, one study [13] describes discrete cosine transform (DCT)-based lossless image compression where the higher-energy coefficients in each block are quantized. Next, an inverse DCT is performed only on the quantized coefficients. The resultant pixel values are in the 2-D spatial domain. The pixel values of two neighboring regions are then subtracted to obtain a residual error sequence. The error sequence is encoded by an entropy coder such as Arithmetic or Huffman coding [13]. Image compression in the frequency domain using wavelets is reported in several studies [12,[14][15][16][17]]. In the method described in [14], a lifting-based bi-orthogonal wavelet transform is used, which produces coefficients that can be rounded without any loss of data. In the work of [18], the wavelet transform limits the image energy to fewer coefficients, which are encoded by the "set partitioning in hierarchical trees" (SPIHT) algorithm.
In [19], JPEG lossless (JPEG-LS), a prediction-based lossless scheme, is proposed for continuous-tone images. In [14], the embedded zerotree wavelet (EZW) coding method is proposed based on the zerotree hypothesis. The study in [12] proposes a compression algorithm based on a combination of the discrete wavelet transform (DWT) and intensity-based adaptive quantization coding (AQC). In this AQC method, the image is divided into sub-blocks. Next, the quantizer step in each sub-block is computed by subtracting the minimum value of the block from the maximum and then dividing the result by the quantization level. In the case of intensity-based adaptive quantizer coding (IBAQC) reported in [12], each image sub-block is classified into a low- or high-intensity block based on the intensity variation of the block. To encode a high-intensity block, a large quantization level is required, depending on the desired peak signal-to-noise ratio (PSNR). On the other hand, if a pixel value in a low-intensity block is less than a threshold, the value is encoded without quantization; otherwise, it is quantized with a smaller quantization level. In the composite DWT-IBAQC method, IBAQC is applied to the DWT coefficients of the image. Since the whole energy of the image is carried by only a few wavelet (DWT) coefficients, IBAQC is used to encode only the coarse (low-pass) wavelet coefficients [12].
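The quantizer-step computation described above for AQC can be sketched as follows; this is a minimal illustration (the block contents, block size, and level count are made-up examples, not the authors' implementation):

```python
import numpy as np

def aqc_quantizer_step(block, levels):
    """Quantizer step for one sub-block, as described for AQC:
    (max - min) of the block, divided by the quantization level."""
    span = int(block.max()) - int(block.min())
    return span / levels

# Illustrative 4x4 sub-block of 8-bit intensities
block = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 208],
                  [10, 14, 201, 207],
                  [12, 15, 204, 209]], dtype=np.uint8)
step = aqc_quantizer_step(block, levels=8)  # (210 - 10) / 8 = 25.0
```

A block with large intensity variation (as here) yields a coarse step, which is why IBAQC treats high- and low-intensity blocks differently.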
Some researchers describe prediction-based lossless compression [1,3,[19][20][21][22][23][24]]. Moreover, the combination of wavelet transform and the concept of prediction is presented in some studies [25,26]. In [25], the image is pre-processed by DPCM and then the wavelet transform is applied to the output of the DPCM. In [26], the image pixels are predicted by a hierarchical prediction scheme and then the wavelet transform is applied to the prediction error. Some works [5,9,11,[27][28][29][30]] apply various types of image transformation, pixel differencing, or simple entropy coding. An image transformation scheme known as "J-bit encoding" (JBE) has been proposed in [11]. It can be noted that image transformation means rearranging the positions of the image components or pixels to make the image suitable for high compression. In the work of [11], the original data are divided into two matrices, where one matrix holds the original nonzero data bytes, while the other defines the positions of the zero/nonzero bytes.
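The JBE-style split into a nonzero-byte array and a zero/nonzero position map can be sketched as follows; the function names and the flattened output layout are illustrative assumptions, not the scheme of [11] verbatim:

```python
import numpy as np

def jbe_split(data):
    """JBE-style transform: one array keeps the nonzero bytes in scan
    order, the other is a binary map of zero/nonzero positions."""
    flat = data.ravel()
    mask = (flat != 0).astype(np.uint8)   # 1 where the byte is nonzero
    nonzero = flat[flat != 0]             # only the nonzero data bytes
    return nonzero, mask

def jbe_merge(nonzero, mask, shape):
    """Inverse transform: scatter the nonzero bytes back by the map."""
    flat = np.zeros(mask.size, dtype=nonzero.dtype)
    flat[mask == 1] = nonzero
    return flat.reshape(shape)

data = np.array([[0, 7, 0], [3, 0, 0]], dtype=np.uint8)
nz, m = jbe_split(data)          # nz = [7, 3], m = [0, 1, 0, 1, 0, 0]
restored = jbe_merge(nz, m, data.shape)
assert np.array_equal(restored, data)
```

The position map is highly compressible by run-length or entropy coding when zeros dominate, which is the point of the transformation.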
A number of research papers use the high efficiency video coding (HEVC) standard for image compression [31][32][33][34]. The work in [31] describes a lossless scheme that carries out sample-based prediction in the spatial domain. The work in [33] provides an overview of the intra coding techniques in HEVC. The authors of [32] present a collection of DPCM-based intra-prediction methods that are effective for predicting strong edges and discontinuities. The work in [34] proposes piecewise mapping functions on residual blocks computed after DPCM-based prediction for lossless coding. Besides compression using HEVC, JPEG2000 [35,36] and graph-based transforms [37] are also reported. Moreover, the work in [5] presents a combination of fixed-size codebook and row-column reduction coding for lossless compression of discrete-color images. Table 1 provides a comparative study of different image compression algorithms reported in the literature.
One special type of image is the pixelated image, which is used to carry data between optical modulators and optical detectors. This is known in the literature as a pixelated optical wireless communication system. Figure 1 illustrates one example of a pixelated system [38]. In such systems, a sequence of image frames is transmitted by a liquid crystal display (LCD) or light emitting diode (LED) array. A smartphone with a camera or an array of photodiodes with an imaging lens can be used as the optical receiver [6][7][8]. Such systems have the potential for huge data rates, as there are millions of pixels on the transmitter screens. The images created on the optical transmitter are required to be within the field of view (FOV) of the receiver imaging lens. Pixelated links can be used for secure data communication in banking and military applications. For instance, pixelated systems can be useful at gatherings such as shopping malls, retail stores, trade shows, galleries, conferences, etc., where business cards, product videos, brochures, and photos can be exchanged without an Internet connection. The storage of pixelated images may be vital for offline processing. Since data are embedded within the image pixels, pixelated images must be processed by lossless compression methods. Any loss of image entropy may lead to loss of the embedded data. A very important feature of pixelated images is that each pixel block carries a single intensity value representing a single piece of data, and this intensity value changes abruptly at the transitions between pixel blocks. This feature is not particularly exploited in existing image compression techniques. Hence, none of the above-mentioned research reports is optimal for pixelated images, as the special features of these images are yet to be exploited for compression. In fact, a new compression algorithm for pixelated images has been proposed by the authors of this paper in a very recent study [39]. This new algorithm, termed edge-based transformation and entropy coding (ETEC), has a high compression ratio at moderate computation time. In that previous study [39], the ETEC method was evaluated for only four pixelated images. This paper extends the study of the ETEC method to fifty (50) different pixelated images. Moreover, a new algorithm termed prediction-based transformation and entropy coding (PTEC) is proposed to overcome the computation-time limitations of ETEC. The main contributions of this paper can be summarized as follows: (1) Providing a framework for the ETEC method as a combination of JBE and entropy coding, and then evaluating its effectiveness for compressing a wide range of pixelated images. (2) Developing a new algorithm termed PTEC by combining aspects of the hierarchical prediction approach, the JBE method, and entropy coding. (3) Comparing the proposed ETEC and PTEC schemes with existing compression techniques for a number of pixelated and non-pixelated standard images.

Existing Image Compression Techniques
The JPEG-LS compression algorithm is suited for continuous-tone images. The algorithm consists of four main parts: a fixed predictor, a bias canceller or adaptive corrector, a context modeler, and an entropy coder [19]. In JPEG-LS, edge detection is performed by the "median edge detection" (MED) process [19]. JPEG-LS uses context modeling to measure the quantized gradients of the surrounding image pixels. This context modeling of the prediction error gives good results for images with texture patterns. Next, correction values are added to the prediction error, and the remaining or residual error is encoded by the Golomb coding [40] scheme. SPIHT [18,41] is an advanced encoding technique based on progressive image coding. SPIHT uses a threshold and encodes the most significant bits of the transformed image, followed by the application of increasing refinement. This paper considers the SPIHT algorithm with a lifting-based wavelet transform using the 5/3 Le Gall wavelet filter.
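The MED predictor of JPEG-LS is well known and can be sketched directly, with a, b, and c denoting the left, above, and upper-left causal neighbors of the current pixel:

```python
def med_predict(a, b, c):
    """Median edge detection (MED) predictor used by JPEG-LS:
    a = left neighbor, b = above neighbor, c = upper-left neighbor."""
    if c >= max(a, b):
        return min(a, b)    # an edge is detected: pick the smaller neighbor
    if c <= min(a, b):
        return max(a, b)    # an edge is detected: pick the larger neighbor
    return a + b - c        # smooth region: planar prediction

# c matches b, so the predictor follows the left neighbor a
print(med_predict(a=100, b=50, c=50))  # -> 100
```

The predictor switches between three simple modes depending on whether the upper-left neighbor suggests a horizontal edge, a vertical edge, or a smooth region.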
The differential pulse code modulation (DPCM) [42] predictor predicts the current pixel from its neighboring pixels, as in the JPEG-LS predictor. Subtracting the predictor output from the current pixel intensity gives the prediction error e. The quantizer quantizes the error value using a suitable quantization level. In the case of lossless compression, the quantization step is unity. Next, entropy coding is performed to obtain the final bit stream. The predictor can be expressed as

x_s = a·I(x − 1, y) + b·I(x, y − 1) + c·I(x − 1, y − 1) + d·I(x + 1, y − 1),

where x_s is the predictor output, the terms a, b, c and d are constants, I is the intensity value, and (x, y) represent the spatial indices of the pixels.
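A minimal sketch of lossless DPCM along one scan line, assuming the simplest previous-pixel predictor (i.e., constant a set to 1 and the remaining predictor constants set to zero); with a unit quantization step the residuals are stored exactly and decoding is lossless:

```python
import numpy as np

def dpcm_encode(row):
    """Lossless DPCM on one scan line with a previous-pixel predictor.
    With a unit quantization step, the residuals are exact."""
    row = row.astype(np.int16)            # widen so differences fit
    residuals = np.empty_like(row)
    residuals[0] = row[0]                 # first sample sent verbatim
    residuals[1:] = row[1:] - row[:-1]    # e = current - prediction
    return residuals

def dpcm_decode(residuals):
    """Exact inverse: a cumulative sum rebuilds the original samples."""
    return np.cumsum(residuals).astype(np.int16)

line = np.array([50, 52, 51, 200, 201], dtype=np.uint8)
e = dpcm_encode(line)                     # [50, 2, -1, 149, 1]
assert np.array_equal(dpcm_decode(e), line.astype(np.int16))
```

The residuals cluster near zero in smooth regions (with a large spike at the edge), which is what makes the subsequent entropy coding effective.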
Arithmetic coding is an entropy coding technique used for lossless compression [43]. In this method, infrequently occurring symbols/characters are encoded with a greater number of bits than frequently occurring ones. An important feature of Arithmetic coding is that it encodes the full information into a single long number, representing the current information as a range. Huffman coding [44] is basically a prefix coding method which assigns variable-length codes to input characters/symbols. In this scheme, the most frequently occurring character is assigned the shortest code in the code table.
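Huffman code construction can be sketched with a min-heap of symbol weights; this is a generic illustration of the prefix-code property, not the specific coder used in the paper's experiments:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman prefix code: frequent symbols get short codes,
    and the least frequent symbols get the longest codes."""
    freq = Counter(symbols)
    # Each heap entry: (weight, tie_breaker, {symbol: code_so_far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                     # degenerate one-symbol input
        return {s: "0" for s in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)    # two lightest subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")           # 'a' is the most frequent
assert len(codes["a"]) < len(codes["c"])
```

For the input "aaaabbc", 'a' receives a one-bit code while 'b' and 'c' receive two-bit codes, mirroring the variable-length assignment described above.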

Proposed Algorithms
This section describes the recently proposed ETEC method and then proposes the PTEC method.

ETEC
The study of ETEC is extended in this paper with a detailed analysis of the ETEC algorithm. It has already been mentioned in Section 1 that each pixel block of a pixelated image carries a single intensity value, i.e., a single piece of data. The pixel blocks have abrupt transitions and thus many directional edges. The ETEC method can be described in three steps. In the first step, the special feature of pixelated images is used to calculate a residual error ε from the intensity gradient:

ε(x, y) = ∂I/∂x (horizontal scanning) or ε(x, y) = ∂I/∂y (vertical scanning),

where ∂I/∂x is the derivative with respect to the x direction, ∂I/∂y is the derivative with respect to the y direction, I is the intensity value, and (x, y) represent the spatial indices of the pixels. The maximum change of gradient between two coordinates represents the presence of an edge in either the vertical or the horizontal direction. The edge pixels are responsible for the increase in the level of the residual error ε. It can be noted that in the presence of vertical edges, the value of ε can be reduced by taking the vertical intensity gradient. Similarly, in the presence of horizontal edges, the value of ε can be reduced by taking the horizontal intensity gradient. In order to detect a strong edge, a threshold T_h is applied to the residual error between the previous neighbors. If the previous residual error is greater than the threshold T_h, then the present pixel I(x, y) is considered to be on an edge, and the direction of the gradient is changed. As long as the previous residual error is less than the threshold, i.e., ε < T_h, the scanning direction remains the same. After the whole scan, the term ε has lower entropy than the original image.
In the second step of the ETEC method, two matrices A and B are generated to encode ε. The dimensions of matrix A are X × Y. The possible values of matrix A are 0, 1, or 2, depending on the value of ε. Matrix A is assigned a value of 0 where ε(x, y) is 0. Moreover, matrix A is assigned values of 1 and 2 where ε(x, y) is greater than and less than 0, respectively. On the other hand, matrix B is assigned the absolute value of ε(x, y), except where ε(x, y) = 0. After assigning the values of the two matrices, run-length coding [45] is applied to A. This coding is applied to the values whose corresponding runs are longer than those of other values. This method manipulates the bits of data to reduce the size and optimize the input of the next stage. Figure 2 shows the block diagram of step 2 of the ETEC method.
In the third step of ETEC, Huffman or Arithmetic coding is applied to matrices A and B. Like other image compression methods, in ETEC the general process of image decompression is just the opposite of compression. Figure 3 shows the flowchart of the proposed ETEC algorithm.
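Step 2 of ETEC, forming the sign matrix A and the magnitude matrix B from the residual matrix ε and then run-length coding A, can be sketched as follows; the residual matrix here is a made-up example, and the (value, run) pair format is an assumption for illustration:

```python
import numpy as np

def etec_step2(eps):
    """ETEC step 2: encode the residual matrix eps into
    A (0 where eps == 0, 1 where eps > 0, 2 where eps < 0) and
    B (absolute values of the nonzero residuals, in scan order)."""
    A = np.zeros(eps.shape, dtype=np.uint8)
    A[eps > 0] = 1
    A[eps < 0] = 2
    B = np.abs(eps[eps != 0])
    return A, B

def run_length(seq):
    """Simple run-length coding of A as (value, run) pairs."""
    runs, prev, count = [], seq[0], 1
    for v in seq[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

eps = np.array([[0, 0, 5], [0, -3, 0]], dtype=np.int16)
A, B = etec_step2(eps)                 # A marks sign, B keeps magnitudes
assert A.tolist() == [[0, 0, 1], [0, 2, 0]]
assert B.tolist() == [5, 3]
```

Because pixelated images produce long runs of zero residuals inside uniform pixel blocks, A is dominated by zero runs and compresses well before the final entropy-coding step.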

PTEC
The main purpose of the proposed PTEC algorithm is to optimize the compression ratio and computation time for pixelated images as well as for other continuous-tone images. In the case of a gray-scale image, the signal variation is generally much smaller than that of a color image, but the intensity variation is still large near the edges of a gray-scale image. For more accurate prediction of these signals and for accurate modeling of the prediction error, a hierarchical prediction scheme is used in PTEC. This method is described for the case where the image is divided into four subimages. At first, the gray-scale image is decomposed into two subimages, i.e., a set of even-numbered rows and a set of odd-numbered rows. Figures 4 and 5 show the hierarchical decomposition of the input image X0. The input image is separated into two subimages: an even subimage Xe and an odd subimage Xo. Here, the even subimage Xe is formed by gathering all even rows of the input image, and the odd subimage is formed from the collection of all odd rows. Each subimage is further divided into two subimages based on the even columns and the odd columns. Then Xee is encoded and is used to predict the pixels in Xeo. In addition, Xee is also used to estimate the statistics of the prediction errors of Xeo. After encoding Xee and Xeo, these are used to predict the pixels in Xoe. Furthermore, the three subimages Xee, Xeo, and Xoe are used to predict the remaining subimage Xoo. With an increase in the number of subimages used to predict a given subimage, the probability of large prediction errors may be decreased. To predict the pixels of the last subimage Xoo, a maximum of eight (8) adjacent neighbors are used. This is evident from Figure 5. It can be noted that if the original image is divided into eight or more subimages instead of only four, the complexity and computation time will increase. Suppose the image is scanned in a raster-scanning order; then the predictor is
always based on its past causal neighbors (the "context"). Figure 6 shows the order of the causal neighbors. The current pixels of the subimage Xee are predicted based on the causal neighbors. A reasonable assumption made for this subimage source is the Nth-order Markovian property, which means that in order to predict a pixel, the N nearest causal neighbors are required. The prediction of the current pixel X(n) is then

X̂(n) = ∑_{k=1}^{N} a(k) X(n − k),

where a(k) is the prediction coefficient and X(n − k) are the neighbors of X(n). For the prediction of Xeo pixels using Xee, directional prediction is attached to avoid large prediction errors near edges.
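The even/odd row-and-column decomposition into the four subimages Xee, Xeo, Xoe, and Xoo can be sketched as follows; 0-based slicing (rows 0, 2, ... treated as the "even" rows) is an implementation choice, not necessarily the paper's indexing:

```python
import numpy as np

def decompose(x):
    """Split an image into the four subimages used by PTEC's
    hierarchical prediction: even/odd rows, then even/odd columns."""
    xe, xo = x[0::2, :], x[1::2, :]      # even rows, odd rows
    return xe[:, 0::2], xe[:, 1::2], xo[:, 0::2], xo[:, 1::2]

x0 = np.arange(16).reshape(4, 4)         # toy 4x4 "image"
xee, xeo, xoe, xoo = decompose(x0)
assert xee.tolist() == [[0, 2], [8, 10]]
assert xoo.tolist() == [[5, 7], [13, 15]]
```

Each subimage is a quarter of the original, and every Xoo pixel is surrounded by already-decoded pixels from the other three subimages, which is what allows up to eight adjacent neighbors to be used for its prediction.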
For each pixel Xeo(i, j) in Xeo, the horizontal predictor X̂h(i, j) and the vertical predictor X̂v(i, j) are defined as shown in the following. Both X̂h(i, j) and X̂v(i, j) are determined by calculating the average of two different predictions. First, consider the case of X̂h(i, j): its two prediction values are X̂h1(i, j) and X̂h2(i, j), and X̂h(i, j) is determined using their average,

X̂h(i, j) = round((X̂h1(i, j) + X̂h2(i, j))/2).

Similarly, the term X̂v(i, j) is obtained as the average of X̂v1(i, j) and X̂v2(i, j). One of these two is selected as the predictor for Xeo(i, j) from Equations (10) and (11). With these two possible predictors, the most common approach to encoding is mode selection, where the better predictor for each pixel is selected; the mode selection depends on the vertical and horizontal edges. If |Xeo(i, j) − X̂h(i, j)| is smaller than |Xeo(i, j) − X̂v(i, j)|, the horizontal edge is stronger than the vertical edge. Otherwise, the vertical edge is stronger than the horizontal edge. For the prediction of Xoe using Xee and Xeo, the vertical and horizontal edges as well as the diagonal edges can be suitably predicted. For each pixel Xoe(i, j) in Xoe, the horizontal predictor X̂h(i, j), the vertical predictor X̂v(i, j), and the diagonal predictors X̂dl(i, j) (left) and X̂dr(i, j) (right) are defined in the following. Again, X̂v(i, j), X̂h(i, j), X̂dl(i, j) and X̂dr(i, j) are determined by taking the average of two different predictions. The term X̂h(i, j) is determined as

X̂h(i, j) = Xoe(i, j − 1) + round(((Xee(i, j − 1) − Xee(i, j)) + (Xee(i + 1, j − 1) − Xee(i + 1, j)))/4).

Now, consider the case of X̂v(i, j): its two prediction values are X̂v1(i, j) and X̂v2(i, j), and X̂v(i, j) is determined using their average,

X̂v(i, j) = round((X̂v1(i, j) + X̂v2(i, j))/2).
Now, consider the case of X̂dr(i, j). The first prediction value, X̂dr1(i, j), is expressed as

X̂dr1(i, j) = Xeo(i, j) + round(((Xee(i, j) − Xee(i + 1, j − 1)) + (Xee(i, j + 1) − Xee(i + 1, j)))/4). (16)

The second prediction value is X̂dr2(i, j), and the term X̂dr(i, j) is determined using the average of X̂dr1(i, j) and X̂dr2(i, j),

X̂dr(i, j) = round((X̂dr1(i, j) + X̂dr2(i, j))/2).

Now, consider the case of X̂dl(i, j). Its two prediction values are X̂dl1(i, j) and X̂dl2(i, j), and the term X̂dl(i, j) is determined using their average. Moreover, the selection of the predictor depends on the directivity of the strong edges. By using Equations (12) and (21), it is possible to find an edge with a specified direction. Next, the residual error is encoded using modified J-bit encoding. At the final stage, entropy coding is applied to the J-bit encoded data.
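The mode selection between the horizontal and vertical predictors described above can be sketched as a simple comparison of absolute prediction errors; this is only an illustration of the selection rule (in a full codec the chosen mode must also be recoverable at the decoder), with made-up pixel values:

```python
def select_mode(x, xh, xv):
    """Pick the predictor with the smaller absolute prediction error:
    |x - xh| < |x - xv| means the horizontal edge dominates."""
    if abs(x - xh) < abs(x - xv):
        return ("horizontal", xh)
    return ("vertical", xv)

# The horizontal predictor is much closer to the true pixel value here
mode, pred = select_mode(x=120, xh=118, xv=90)
assert mode == "horizontal" and pred == 118
```

The same comparison generalizes to four candidates (horizontal, vertical, and the two diagonals) for the Xoe subimage.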

Results and Discussion
This section evaluates the performance of the ETEC and PTEC schemes for various types of images. The evaluation is done using the MATLAB tool on a computer with an Intel Core i3-3110M 2.4 GHz processor (Intel, Shanghai, China), 4 GB of RAM (Kingston, Shanghai, China), a 1 GB VGA graphics card (Intel, Shanghai, China), and the Windows 7 (32-bit) operating system (Microsoft, Shanghai, China). The intensity levels of the images range from 0 to 255, and the threshold term T h is assumed to have a value of 20. This value of T h has been selected as a near-optimal value: a high value of T h may fail to recognize some edges in the images, whereas a low value of T h may unnecessarily treat any small transition as an edge. Figure 7 shows 50 different types of pixelated images used for evaluating the compression algorithms. Some of these images were created using the MATLAB tool, and the remaining ones are available in [46][47][48][49]. Figure 7a,b contain 25 images each. These images are made of pixel blocks of different sizes, where each pixel block has a uniform intensity level. In some cases, a pixelated image may have very small pixel blocks or no blocks at all (each pixel block consisting of one pixel only). A number of metrics such as compression ratio, bits per pixel, saving percentage [28], and computation time are considered for comparing the algorithms. Note that in this study the compression ratio is defined as the ratio of the size of the original image to that of the compressed image, and the saving percentage is the difference between the sizes of the original and compressed images as a percentage of the original size. Mathematically, the compression ratio is C 1 /C 2 and the saving percentage is (1 − C 2 /C 1 ), where C 1 and C 2 are the sizes of the original and compressed images, respectively. The bits per pixel parameter is obtained by dividing the compressed image size (in bits) by the number of pixels in the image. The computation time is the total time required to perform the image compression using the MATLAB tool on the computer specified earlier in this section.
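The three metrics above follow directly from the original and compressed sizes C 1 and C 2. As a minimal sketch (written in Python rather than the MATLAB environment used for the evaluation), the following computes them, using as a check the JPEG-LS figures for the 512 × 512 Lena image quoted later in this section:

```python
def compression_metrics(original_bits, compressed_bits, num_pixels):
    """Compression ratio C1/C2, saving percentage 100*(1 - C2/C1),
    and bits per pixel (compressed size in bits / number of pixels)."""
    cr = original_bits / compressed_bits
    saving = (1 - compressed_bits / original_bits) * 100
    bpp = compressed_bits / num_pixels
    return cr, saving, bpp

# JPEG-LS on the 512 x 512 Lena image (sizes in bits, from this section):
cr, saving, bpp = compression_metrics(2_097_152, 1_063_464, 512 * 512)
print(f"CR = {cr:.3f}, saving = {saving:.2f}%, bits/pixel = {bpp:.4f}")
```

The computed compression ratio (1.972) and bits/pixel (≈4.057) match the values reported for this image in this section.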
First, consider the compression ratio (denoted as CR) and bits/pixel parameters. In Table 2, the compression ratio and bits/pixel metrics of the proposed ETEC and PTEC techniques are compared with those of the existing JPEG-LS, SPIHT, and DPCM methods. The comparison is done for the 50 pixelated images illustrated in Figure 7. The bits per pixel values of the first 15 images are plotted in Figure 8 for the proposed and existing compression algorithms. Now consider the saving percentage and computation time. Table 3 presents the saving percentage and computation time for the same 50 images. The computation time in seconds is also plotted for the first 15 images in Figure 9. It can be seen from Table 2 that, for the pixelated images, the average bits per pixel of ETEC (0.299) and PTEC (0.592) are lower (better) than those of the existing JPEG-LS (0.836), SPIHT (2.105), and DPCM (2.17). Table 2 also shows that the average compression ratios of ETEC (29.39) and PTEC (10.28) are better than those of JPEG-LS (9.264), SPIHT (3.178), and DPCM (3.09). However, the compression ratio of PTEC is not better than that of JPEG-LS for all 50 pixelated images: PTEC compresses better than JPEG-LS for images having large pixel blocks, but worse for images with small pixel blocks. This is a consequence of the hierarchical prediction in PTEC. For small pixel block images, the prediction error for the first subimage is very high due to the high randomness of pixel intensity; for large pixel block images, this problem is significantly reduced. Table 3 indicates that the computation time of ETEC (62.58 s) is worse than that of SPIHT (13.9 s), but better than those of JPEG-LS (526 s) and DPCM (17.48 s). Furthermore, the PTEC method has a computation time of 18.406 s, which is much better than ETEC (62.58 s) and comparable to SPIHT (13.9 s). So, for pixelated images, and when both compression and computation time are important, PTEC may be more
suitable than ETEC, SPIHT, JPEG-LS, and DPCM.

In the following, the different compression algorithms are evaluated on standard, non-pixelated images. Figure 10 illustrates eight standard test images available in [50][51][52][53][54]. These images have a resolution of 512 × 512 pixels, and all eight are used to test the compression ratio of the different algorithms. For example, the Lena image occupies 2,097,152 bits, which reduces to 1,063,464, 1,399,968, 1,170,220, 1,145,985, and 1,263,345 bits under the compression schemes of JPEG-LS, SPIHT, ETEC, PTEC, and DPCM, respectively. Therefore, for JPEG-LS on the Lena image, the compression ratio is 1.972 (2,097,152/1,063,464) and the bits/pixel value is 4.0567 (1,063,464/(512 × 512)). The compression ratio and bits/pixel values for the other algorithms and images can be obtained in the same way. These values are summarized in Table 4, which compares the compression ratio and bits per pixel of the ETEC and PTEC techniques with those of the existing JPEG-LS, SPIHT, and DPCM for non-pixelated images. Figures 11 and 12 are the corresponding visual representations of Table 4 for bits per pixel and compression ratio, respectively. When the average compression ratio is considered, PTEC (2.06) is better than SPIHT (1.76), ETEC (1.93), and DPCM (1.72), but worse than JPEG-LS (2.16). Similarly, PTEC is better than SPIHT, ETEC, and DPCM but worse than JPEG-LS in terms of the average bits/pixel metric. Table 5 presents the saving percentage and computation time for the compression algorithms. It can be seen from Table 5 that PTEC is better than SPIHT, ETEC, and DPCM but worse than JPEG-LS in terms of the saving percentage metric. Table 5 also shows that, for the non-pixelated images, the average computation time of PTEC (74.50 s) is comparable to those of SPIHT (43.45 s) and DPCM (43.48 s), but better than those of ETEC (347.44 s) and JPEG-LS (2279.36 s). Note that PTEC has a much better computation time than ETEC. This is because of the hierarchical approach used in PTEC, in which the computational data matrix is reduced to 1/4 of the original data matrix at each stage; handling a smaller matrix requires less time than handling a larger one.
So, for non-pixelated images and for the case where both compression and computation time are important, PTEC, SPIHT and DPCM may be more suitable than ETEC and JPEG-LS.
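The 1/4 matrix reduction discussed above, and the split of prediction errors into magnitude and polarity matrices that PTEC applies before entropy coding, can be illustrated with a small sketch. This is not the paper's implementation: it assumes a simple 2 × 2 polyphase-style split into four subimages and a toy prediction-error matrix purely for illustration.

```python
import numpy as np

def quarter_split(img):
    """Split an image into four subimages so that each stage handles a
    matrix 1/4 the size of its input (illustrative 2x2 polyphase split;
    the exact decomposition used by PTEC may differ)."""
    return (img[0::2, 0::2], img[0::2, 1::2],
            img[1::2, 0::2], img[1::2, 1::2])

img = np.arange(64, dtype=np.int32).reshape(8, 8)
subs = quarter_split(img)
print([s.shape for s in subs])  # four 4x4 subimages from one 8x8 input

# PTEC encodes prediction errors as two matrices: absolute values and polarities.
err = np.array([[3, -1], [0, -7]])       # toy prediction-error matrix
magnitude = np.abs(err)                  # absolute prediction error
polarity = (err < 0).astype(np.uint8)    # 1 where the error is negative
print(magnitude.tolist(), polarity.tolist())
```

Each call to `quarter_split` quarters the matrix that subsequent processing must handle, which is the source of PTEC's computation-time advantage over ETEC noted above.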

Conclusions
This work describes two algorithms for the compression of images, particularly pixelated images. One algorithm, termed ETEC, has recently been conceptualized by the authors of this paper. The other is a new prediction-based algorithm termed PTEC. The ETEC and PTEC techniques are compared with the existing JPEG-LS, SPIHT, and DPCM methods in terms of compression ratio and computation time. For the case of pixelated images, the compression ratio of PTEC is around 10.28, which is worse than that of ETEC (29.39) but better than those of JPEG-LS (9.264), SPIHT (3.178), and DPCM (3.09). In particular, for images having large pixel blocks, the PTEC method provides better compression than JPEG-LS.


Figure 1. Illustration of (a) a pixelated optical wireless communication system [38]; (b) a transmitted pixelated image.

Figure 5. Input image and its decomposition image.

Figure 6. Ordering of the causal neighbors.

Figure 8. Comparison of bits per pixel for pixelated images.

Figure 9. Comparison of computation time for pixelated images.

Figure 11. Comparison of bits per pixel for non-pixelated images.

Figure 12. Comparison of compression ratio for non-pixelated images.

Table 2. Comparison of bits per pixel and compression ratio for pixelated images.

Table 3. Comparison of saving percentage and computation time for pixelated images.

Table 4. Comparison of bits per pixel and compression ratio for non-pixelated images.