Article

Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design

1 Center of Science and Technology, Catholic University of Pernambuco (UNICAP), Recife 50050-900, Brazil
2 Centro de Informática, Universidade Federal de Pernambuco (UFPE), Recife 50740-560, Brazil
3 Department of Electrical Engineering, Center of Alternative and Renewable Energy, Federal University of Paraíba (UFPB), João Pessoa 58038-130, Brazil
* Author to whom correspondence should be addressed.
Sensors 2016, 16(11), 1963; https://doi.org/10.3390/s16111963
Submission received: 24 August 2016 / Revised: 11 November 2016 / Accepted: 15 November 2016 / Published: 23 November 2016
(This article belongs to the Section Physical Sensors)

Abstract

The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms.

1. Introduction

Signal compression techniques aim at decreasing the number of bits needed to represent a signal (such as speech, image, audio and video), enhancing the efficiency of both transmission and storage. Compression techniques are widely used in applications with storage and bandwidth constraints, such as storage of medical images, satellite transmissions, voice communication in mobile telephony and videoconferencing. One of the many techniques used to achieve signal compression is vector quantization (VQ), in which a codebook is used for signal reconstruction.
Vector quantization [1,2] is a lossy compression technique, which uses a mapping $Q$ of a vector $x$, belonging to the $K$-dimensional Euclidean space $\mathbb{R}^K$, into another vector belonging to a finite subset $W$ of $\mathbb{R}^K$:
$$Q: \mathbb{R}^K \rightarrow W. \qquad (1)$$
The finite subset $W$ is called a codebook. Each codebook element $w_j$, $1 \leq j \leq N$, is called a codevector. The number of components in the codevectors is the dimension ($K$). The size of the codebook is the number of codevectors, denoted by $N$. In several speech coding [3,4,5] and image coding [6,7,8,9] systems, VQ has been used successfully, leading to high compression rates. VQ has also been used in other applications, such as speaker identification [10,11], information security such as steganography and digital watermarking [12,13,14,15,16,17,18], and classification of pathological voice signals [19].
Vector quantization is an extension of scalar quantization in a multidimensional space. The performance of VQ depends on the designed codebooks. The prevailing algorithm for codebook design is Linde-Buzo-Gray (LBG) [20], also known as Generalized Lloyd Algorithm (GLA) or K-means. Other examples of codebook design algorithms are: fuzzy [7,21,22], competitive learning [23], memetic [24], genetic [25], firefly [26] and honey bee mating optimization [27].
In vector quantization of a digital image, a codebook of size N, consisting of K-dimensional vectors, is used. The process replaces each block of pixels of the image with the most similar block of pixels (codevector) in the codebook. Hence, the better the codebook, the higher the quality of the quantized image.
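A minimal sketch (assuming NumPy, since the paper does not specify an implementation) of how a grayscale image is split into non-overlapping 4 × 4 blocks that become K = 16 dimensional training vectors:

```python
import numpy as np

def image_to_training_vectors(image, block_size=4):
    """Split a grayscale image into non-overlapping blocks and flatten each
    block into a K-dimensional training vector (K = block_size ** 2)."""
    h, w = image.shape
    vectors = []
    for r in range(0, h - h % block_size, block_size):
        for c in range(0, w - w % block_size, block_size):
            vectors.append(image[r:r + block_size, c:c + block_size].ravel())
    return np.asarray(vectors, dtype=np.float64)   # shape (M, K)

# A 256 x 256 image yields M = 4096 training vectors of dimension K = 16.
```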
Typical grouping approaches used in VQ fall into two categories: crisp and fuzzy clustering. Traditionally, crisp clustering is performed by the K-means algorithm. Due to its dependence on initialization, K-means can get stuck in undesired local minima. Fuzzy clustering, in turn, is usually performed by the fuzzy K-means (FKM) algorithm [28]. FKM assigns each training pattern to every cluster with a different pertinence degree [29]. Therefore, FKM is able to reduce the dependence on random initialization [7,29,30,31], at a high computational cost.
The K-means (KM) and fuzzy clustering algorithms, e.g., fuzzy K-means (FKM), have been used in a wide range of scenarios and applications, such as: digital soil pattern recognition [32], archaeology [33], indoor localization [34], discrimination of cabernet sauvignon grapevine elements [35], white blood cell segmentation [36], abnormal lung sounds diagnosis [37], intelligent sensor networks in agriculture [38], magnetic resonance image (MRI) segmentation [39,40], speaker recognition [41] and image compression by VQ [29,42,43].
The aforementioned works show that clustering algorithm applications include image coding, biometric authentication, pattern recognition, among others. The performance evaluation of the clustering algorithms depends on the application. In signal compression, an important aspect is the quality of the reconstructed signal. In pattern recognition systems, an important figure of merit is the recognition rate. The processing time of the clustering algorithms is also a relevant aspect. In this paper, techniques are presented for accelerating families of fuzzy K-means algorithms applied to VQ codebook design for image compression. Simulations show that the presented techniques lead to a decrease in processing time for codebook design, while preserving its overall quality.
One of the techniques used in this work is the Equal-average Nearest Neighbor Search (ENNS) [44,45], which is usually employed in the minimum distance coding phase of VQ. In this paper, however, ENNS is used in some of the fuzzy K-means families, precisely in the partitioning of the training set. The acceleration of FKM algorithms is also obtained by the use of a lookahead approach in the crisp phase of such algorithms, leading to a decrease in the number of iterations.
The remaining sections are organized as follows: Section 2 covers the K-means algorithm and the fuzzy K-means families. Section 3 presents modified versions of the fuzzy K-means families. In Section 4, nearest neighbor search techniques are introduced, with focus on the scenario of accelerating codebook design. The results and final considerations are presented in Section 5 and Section 6, respectively.

2. Codebook Design Techniques

Vector quantization performance is highly dependent on codebook quality. The codebook is a set of reference patterns or templates. In digital image coding, the codebook corresponds to a set of reference blocks of pixels. In this paper, K-means algorithm and fuzzy K-means families are the techniques under consideration for codebook design.
The main difference between K-means and fuzzy K-means algorithms is that, in the former, each training vector belongs to one quantization cell. In the latter, each training vector can be associated to more than one quantization cell, with some degree of pertinence to each cell.
The K-means algorithm partitions the vector space $\mathbb{R}^K$ by associating each training vector to a single cluster using nearest neighbor search. Therefore, an input vector $x_i$ belongs to the cluster (cell or Voronoi region) $V(w_j)$ if
$$d(x_i, w_j) < d(x_i, w_a) \quad \forall\, a \neq j, \qquad (2)$$
where $d(x_i, w_j)$ is a distance measure. The squared Euclidean distance between $x_i$ and $w_j$ is widely used in digital image vector quantization. In this case, $w_j$ is the nearest neighbor (NN) of $x_i$, that is, $w_j$ is the quantized version of $x_i$. This is equivalent to $w_j = Q(x_i)$. The nearest neighbor search can be associated to a pertinence function:
$$\mu_j(x_i) = \begin{cases} 1, & \text{if } w_j = \mathrm{NN}(x_i) \\ 0, & \text{otherwise.} \end{cases} \qquad (3)$$
The distortion, obtained by representing the training vectors by their corresponding nearest neighbors, is:
$$J_1 = \sum_{j=1}^{N} \sum_{i=1}^{M} \mu_j(x_i)\, d(x_i, w_j), \qquad (4)$$
in which $x_i$ is the $i$-th training vector, $1 \leq i \leq M$. As $J_1$ is a function of $w_j$, in order to minimize the distortion, the vectors $w_j$ are updated according to:
$$w_j = \frac{\sum_{i=1}^{M} \mu_j(x_i)\, x_i}{\sum_{i=1}^{M} \mu_j(x_i)}, \quad j = 1, 2, \ldots, N. \qquad (5)$$
Equations (2) and (5) are related to the partitioning of the training set and to the codebook update, respectively. The algorithm stops at the end of the $n$-th iteration if
$$\frac{J_1(n-1) - J_1(n)}{J_1(n)} \leq \varepsilon. \qquad (6)$$
The input parameters of the K-means algorithm are: the codebook size ($N$), the codevector dimension ($K$) and a distortion threshold $\varepsilon$ used as the stopping criterion.
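A minimal sketch of the K-means codebook design loop described by Equations (2)–(6) is shown below; NumPy and the random initialization of the codebook are illustrative choices, not details taken from the paper:

```python
import numpy as np

def kmeans_codebook(X, N, eps=0.001, max_iter=100, seed=None):
    """K-means (LBG) codebook design sketch. X: (M, K) training vectors."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), size=N, replace=False)].copy()   # initial codebook
    J_prev = np.inf
    for _ in range(max_iter):
        # Partitioning: nearest neighbor under squared Euclidean distance, Equation (2)
        d = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)   # (M, N)
        nearest = d.argmin(axis=1)
        J = d[np.arange(len(X)), nearest].sum()                  # distortion, Equation (4)
        # Codebook update: centroid of each Voronoi region, Equation (5)
        for j in range(N):
            members = X[nearest == j]
            if len(members) > 0:
                W[j] = members.mean(axis=0)
        # Stopping criterion, Equation (6)
        if (J_prev - J) / J <= eps:
            break
        J_prev = J
    return W
```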
The fuzzy K-means algorithm aims at minimizing the distortion between the training vectors $x_i$ and the codevectors $w_j$ which compose the codebook. Unlike the K-means algorithm, fuzzy K-means measures the distortion by [29]:
$$J_m = \sum_{j=1}^{N} \sum_{i=1}^{M} \mu_j(x_i)^m\, d(x_i, w_j), \quad 1 < m < \infty, \qquad (7)$$
subject to the following conditions:
$$\begin{cases} \mu_j(x_i) \in [0, 1], & \forall\, i, j, \\ 0 < \sum_{i=1}^{M} \mu_j(x_i) < M, & \forall\, j, \\ \sum_{j=1}^{N} \mu_j(x_i) = 1, & i = 1, 2, \ldots, M. \end{cases} \qquad (8)$$
As stated in [29], minimization of the $J_m$ function results in:
$$\mu_j(x_i) = \frac{1}{\sum_{l=1}^{N} \left( \dfrac{d(x_i, w_j)}{d(x_i, w_l)} \right)^{\frac{1}{m-1}}}. \qquad (9)$$
Therefore, for a given set of pertinence functions, the codevectors evolve at each iteration so as to minimize $J_m$, according to [29]:
$$w_j = \frac{\sum_{i=1}^{M} \mu_j(x_i)^m\, x_i}{\sum_{i=1}^{M} \mu_j(x_i)^m}, \quad j = 1, 2, \ldots, N. \qquad (10)$$
The nebulosity (fuzziness) at the cluster transitions is controlled by the parameter $m$ and increases with it.
The input parameters of the FKM algorithm are: the codebook size ($N$), the codevector dimension ($K$), the nebulosity control parameter $m \in (1, \infty)$, and the distortion threshold $\varepsilon$.
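A compact sketch of one FKM iteration, implementing Equations (9) and (10), is given below; Equation (9) is used in the algebraically equivalent form $\mu_j(x_i) = d(x_i, w_j)^{-1/(m-1)} / \sum_l d(x_i, w_l)^{-1/(m-1)}$, and NumPy and the small constant added to the distances to avoid division by zero are implementation choices:

```python
import numpy as np

def fkm_iteration(X, W, m=2.0, tiny=1e-12):
    """One fuzzy K-means iteration: membership update, then codevector update."""
    # Squared Euclidean distances between training vectors and codevectors
    d = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2) + tiny   # (M, N)
    # Membership degrees mu_j(x_i), Equation (9)
    inv = d ** (-1.0 / (m - 1.0))
    mu = inv / inv.sum(axis=1, keepdims=True)
    # Codevector update, Equation (10)
    mu_m = mu ** m
    W_new = (mu_m.T @ X) / mu_m.sum(axis=0)[:, None]
    return W_new, mu
```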
This work uses two fuzzy K-means families, as proposed in [29]. The development of those algorithms is based on a transition from fuzzy to crisp mode, the latter being equivalent to the K-means strategy. The fuzzy 1 algorithm (FKM1) presents three modifications in its construction when compared to FKM. The first is how the pertinence function is calculated:
$$\mu_j(x_i) = f\big(d(x_i, w_j),\, d_{\max}(x_i)\big) = \left(1 - \frac{d(x_i, w_j)}{d_{\max}(x_i)}\right)^{u}, \qquad (11)$$
in which $d_{\max}(x_i)$ is the maximum of the distances between the training vector $x_i$ and the codevectors, and $u$ is a positive integer. The second modification concerns the codebook update, which follows Equation (5). The last modification is the transition from fuzzy to crisp mode. For that purpose, a distortion threshold $\varepsilon'$ is defined, with $\varepsilon' > \varepsilon$. Therefore, the FKM1 algorithm has the following input parameters: $N$, $K$, $u$ and two distortion thresholds, where $\varepsilon'$ is the fuzzy-to-crisp mode transition threshold and $\varepsilon$ is the stopping criterion.
The fuzzy 2 family (FKM2) uses the same pertinence function and codebook update calculations as the fuzzy K-means algorithm, that is, Equations (9) and (10), respectively. The only difference is the inclusion of the fuzzy-to-crisp mode transition.

3. Accelerating Fuzzy K-Means Family Algorithm

One of the challenges in clustering methods is to increase the convergence speed, that is, to decrease the number of iterations. Some alternatives have been proposed to accelerate the K-means algorithm, such as the techniques of Lee et al. [46] and Paliwal-Ramasubramanian [47]. Both techniques recalculate the codevectors at the end of each iteration, according to the expression:
$$w_j^{n+1} = w_j^{n} + s\,\big(C(V(w_j^{n})) - w_j^{n}\big), \qquad (12)$$
where $w_j^{n}$ is the codevector at the $n$-th iteration, $s$ is the scale and $C(V(w_j^{n}))$ is the centroid of the Voronoi region $V(w_j^{n})$. A fixed scale $s$ is used in [46]. The modification to [46] proposed in [47] consists in using a scale $s$ which depends on the iteration $n$, that is:
$$s = 1 + \frac{v}{v + n}, \qquad (13)$$
for some $v > 0$.
In this paper, the accelerated versions of the fuzzy K-means families use Equations (12) and (13) for codevector updating. According to the simulation results, for the FKM1 and FKM2 algorithms, the scale $s$ leads to savings in the number of iterations when applied to the crisp phase of the algorithms.
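A minimal sketch of the scaled (lookahead) update of Equations (12) and (13); the value of v is illustrative, since the value used in the experiments is not reported here:

```python
import numpy as np

def scaled_codebook_update(W, centroids, n, v=2.0):
    """Lookahead update used in the crisp phase of the modified (M) versions.
    W: current codebook w_j^n; centroids: C(V(w_j^n)); n: iteration index."""
    s = 1.0 + v / (v + n)            # Equation (13): scale decays toward 1 as n grows
    return W + s * (centroids - W)   # Equation (12)
```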

4. Nearest Neighbor Search Techniques for Accelerating the Codebook Design

When the FKM1 and FKM2 algorithms change to crisp mode (which is equivalent to the conventional K-means algorithm), the complexity of the nearest neighbor search performed by K-means can be reduced by efficient search techniques. Usually, the K-means algorithm uses Full Search (FS) to compute the nearest neighbor, which is highly time consuming.
A great number of operations can be saved by eliminating poor codevector candidates for the nearest neighbor. This can be accomplished by using search techniques such as Partial Distortion Search (PDS) [48] and Equal-average Nearest Neighbor Search (ENNS) [44,45]. Both were originally proposed for the VQ encoding phase; in this paper, they are used in the FKM1 and FKM2 algorithms. PDS and ENNS apply rejection criteria on codevectors, thereby decreasing the time spent in the nearest neighbor search.
The PDS algorithm, as proposed in [48], is a traditional technique for reducing the computational complexity involved in nearest neighbor search. PDS determines, for any $q \leq K$, whether the distance accumulated over the first $q$ codevector components is already greater than or equal to $d_{\min}$ (the minimum distance found in the search so far). If so, that codevector cannot be the NN, that is, the codevector is rejected whenever the following expression is satisfied:
$$\sum_{l=1}^{q} (x_{il} - w_{jl})^2 \geq d_{\min}, \qquad (14)$$
where $1 \leq q \leq K$, $x_{il}$ is the $l$-th component of the training (input) vector $x_i$ and $w_{jl}$ is the $l$-th component of the codevector $w_j$. When this condition is satisfied, there is no need to perform the whole calculation of the Euclidean distance between $x_i$ and $w_j$. With this approach, the number of multiplications, subtractions and additions is reduced, decreasing the search time and, therefore, accelerating the codebook design in comparison to the full search.
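A sketch of PDS for a single input vector (NumPy assumed); the component-by-component rejection test corresponds to Equation (14):

```python
import numpy as np

def pds_nearest_neighbor(x, codebook):
    """Partial Distortion Search: abandon a codevector as soon as its partial
    accumulated distortion reaches the best (minimum) distance found so far."""
    best_j, d_min = -1, np.inf
    for j, w in enumerate(codebook):
        acc = 0.0
        for l in range(len(x)):          # accumulate one component at a time
            acc += (x[l] - w[l]) ** 2
            if acc >= d_min:             # Equation (14): reject this codevector early
                break
        else:                            # full sum computed and still below d_min
            best_j, d_min = j, acc
    return best_j, d_min
```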
In the ENNS algorithm, the mean of each codevector is calculated and the means are sorted beforehand. Then, a lookup is performed, using some search algorithm, to find the codevector whose mean is closest to the mean $m_x$ of the current input vector $x$. Once that codevector is found, searches do not need to be performed for codevectors whose means $m_i$ satisfy the criterion:
$$m_i \geq m_x + \sqrt{\frac{d_{\min}}{K}} \quad \text{or} \quad m_i \leq m_x - \sqrt{\frac{d_{\min}}{K}}, \qquad (15)$$
where $m_i$ is the mean of the $i$-th codevector, $m_x$ is the mean of the current input vector and $d_{\min}$ is the squared Euclidean distance between the input vector and the codevector with the nearest mean.
When the elimination criterion is not satisfied for a given codevector, it enters a waiting list to be examined later. After all winner candidates for that input vector have been collected, a search over them is performed by calculating the squared Euclidean distance, and PDS is used.
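A sketch of the ENNS search for a single input vector, using the elimination criterion of Equation (15); for simplicity, the surviving candidates are checked with full squared Euclidean distances, where PDS could be applied instead:

```python
import numpy as np

def enns_nearest_neighbor(x, codebook, means, order):
    """ENNS sketch. 'means' holds the mean of each codevector and 'order' sorts
    those means in ascending order (recomputed once per design iteration)."""
    K = len(x)
    m_x = x.mean()
    # Codevector whose mean is closest to the input mean (binary search on sorted means)
    pos = int(np.clip(np.searchsorted(means[order], m_x), 0, len(order) - 1))
    if pos > 0 and abs(means[order[pos - 1]] - m_x) < abs(means[order[pos]] - m_x):
        pos -= 1
    best_j = order[pos]
    d_min = ((x - codebook[best_j]) ** 2).sum()
    for j in range(len(codebook)):
        # Elimination criterion, Equation (15): skip codevectors whose mean is too far
        if j == best_j or abs(means[j] - m_x) >= np.sqrt(d_min / K):
            continue
        d = ((x - codebook[j]) ** 2).sum()
        if d < d_min:
            best_j, d_min = j, d
    return best_j, d_min

# means = codebook.mean(axis=1); order = np.argsort(means)   # once per iteration
```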
ENNS decreases the computational time compared to full search, at the penalty of $N$ (codebook size) additional memory allocations when compared to PDS, as shown in [49]. Because ENNS was originally used in the coding phase, it performs a single sorting of the means, since the codebook vectors were designed beforehand. For codebook design, however, the K-means algorithm (the crisp mode of the FKM families) updates its codevectors at each iteration, so a new sorting of the means is needed at every iteration. Acceleration alternatives in the scenario of FKM2 are presented as follows (see Algorithms 1–3). The notation MFKM2 stands for modified fuzzy K-means family 2, that is, an acceleration (savings in the number of iterations) obtained by using the scale factor $s$ in the codebook update.
Algorithm 1. Partitioning step of the conventional FKM2 algorithm in crisp mode
For $1 \leq m \leq M$:
  • Calculate $d(x_m, w_j) = \sum_{l=1}^{K} (x_{ml} - w_{jl})^2$, $j = 1, 2, \ldots, N$
  • Determine the smallest of the $N$ calculated distances. The nearest neighbor of $x_m$ is $w_j$ such that $d(x_m, w_j) < d(x_m, w_o)\ \forall\, o \neq j$. In this case, $x_m$ is allocated to the Voronoi region $V(w_j)$

Codebook update step of the FKM2 algorithm:
Calculate $C(V(w_j^{n})) = \dfrac{\sum_{i=1}^{M} \mu_j(x_i)\, x_i}{\sum_{i=1}^{M} \mu_j(x_i)}$, $j = 1, 2, \ldots, N$
Codebook update with $w_j^{n+1} = C(V(w_j^{n}))$, in which $w_j^{n}$ is the codevector at the $n$-th iteration
Algorithm 2. Partitioning step of the MFKM2 algorithm in crisp mode
For $1 \leq m \leq M$:
  • Calculate $d(x_m, w_j) = \sum_{l=1}^{K} (x_{ml} - w_{jl})^2$, $j = 1, 2, \ldots, N$
  • Determine the smallest of the $N$ calculated distances. The nearest neighbor of $x_m$ is $w_j$ such that $d(x_m, w_j) < d(x_m, w_o)\ \forall\, o \neq j$. In this case, $x_m$ is allocated to the Voronoi region $V(w_j)$

Codebook update step of the MFKM2 algorithm:
Calculate $C(V(w_j^{n})) = \dfrac{\sum_{i=1}^{M} \mu_j(x_i)\, x_i}{\sum_{i=1}^{M} \mu_j(x_i)}$, $j = 1, 2, \ldots, N$
Codebook update with $w_j^{n+1} = w_j^{n} + s\,\big(C(V(w_j^{n})) - w_j^{n}\big)$
It is worth mentioning that other approaches have been proposed in the literature for the purpose of fast codebook search. As an example, the method introduced by Chang and Wu [50] is an interesting partial-search technique based on a graph structure which leads to computational cost savings.
Algorithm 3. Partitioning step of the MFKM2 algorithm in crisp mode with the use of ENNS
(Calculate off-line the mean of each input vector)
Calculate the mean of each codevector and order the $N$ means in ascending order
For $1 \leq m \leq M$:
  • Determine the codevector with the minimum absolute difference between its mean and the input vector mean. Obtain $d_{\min}$ as the squared Euclidean distance between this codevector and the input vector
  • Eliminate from the search process the codevectors that satisfy:
    $m_i \geq m_x + \sqrt{d_{\min}/K}$  or  $m_i \leq m_x - \sqrt{d_{\min}/K}$
    For the remaining codevectors, i.e., those which were not eliminated from the search, apply the PDS algorithm for calculating the distance and update $d_{\min}$ (the minimum distance found in the search so far)
  • At the end of the process, the codevector $w_j$ corresponding to $d_{\min}$ is the nearest neighbor of $x_m$. In this case, $x_m$ is allocated to the Voronoi region $V(w_j)$
Codebook update step of the MFKM2 algorithm:
Calculate $C(V(w_j^{n})) = \dfrac{\sum_{i=1}^{M} \mu_j(x_i)\, x_i}{\sum_{i=1}^{M} \mu_j(x_i)}$, $j = 1, 2, \ldots, N$
Codebook update with $w_j^{n+1} = w_j^{n} + s\,\big(C(V(w_j^{n})) - w_j^{n}\big)$
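A sketch of one crisp-mode iteration of MFKM2 combining ENNS partitioning with the scaled codebook update (roughly the steps of Algorithm 3); full distances are used for the surviving candidates instead of PDS, and v = 2.0 is an illustrative value, so this is a simplified illustration rather than the authors' implementation:

```python
import numpy as np

def mfkm2_crisp_iteration(X, W, n, v=2.0):
    """One crisp-mode MFKM2 iteration: ENNS partitioning, then scaled update."""
    M, K = X.shape
    N = len(W)
    means = W.mean(axis=1)
    order = np.argsort(means)                    # sort codevector means once per iteration
    nearest = np.empty(M, dtype=int)
    for i, x in enumerate(X):
        m_x = x.mean()
        pos = int(np.clip(np.searchsorted(means[order], m_x), 0, N - 1))
        best = order[pos]
        d_min = ((x - W[best]) ** 2).sum()
        for j in range(N):                       # ENNS elimination, Equation (15)
            if j != best and abs(means[j] - m_x) < np.sqrt(d_min / K):
                d = ((x - W[j]) ** 2).sum()      # PDS could replace this full distance
                if d < d_min:
                    best, d_min = j, d
        nearest[i] = best
    # Scaled codebook update: w_j^{n+1} = w_j^n + s (C(V(w_j^n)) - w_j^n)
    s = 1.0 + v / (v + n)
    W_new = W.copy()
    for j in range(N):
        members = X[nearest == j]
        if len(members) > 0:
            W_new[j] = W[j] + s * (members.mean(axis=0) - W[j])
    return W_new, nearest
```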

5. Results

Simulations have been performed on an Intel Core i5-2450M (2.50 GHz) computer using nine 256 × 256 pixel images: Lena, Barbara, Elaine, Boat, Clock, Goldhill, Peppers, Mandrill and Tiffany. Each image has 256 gray scale levels, as shown in Figure 1. The parameters used for the simulations were: K = 16 (4 × 4 pixel blocks), N = 32, 64, 128 and 256, u = 2 and the two distortion thresholds $\varepsilon' = 0.1$ and $\varepsilon = 0.001$. For each combination of dimension K and codebook size N (for example, N = 32 and K = 16), 20 random initializations were used for each algorithm.
Results are presented in terms of the average number of iterations and the average execution time (in seconds) of the codebook design algorithms, as well as the average peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index [51] of the reconstructed images. The notation adopted for the methods is presented in Table 1. Results are organized in Tables 2–15.
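For reference, the PSNR of an 8-bit reconstructed image can be computed as in the sketch below (NumPy assumed):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio (in dB) between two 8-bit grayscale images."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```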
Regarding Table 2, all algorithms under consideration led to close values of PSNR. It can be noted that the use of the scale factor led to a decrease in the average number of iterations. In other words, it is observed, for instance, that the average number of iterations of MFKM is smaller than that of FKM. The decrease in the number of iterations is also observed when one compares MFKM1 with FKM1, as well as when one compares MFKM2 with FKM2. The use of PDS for nearest neighbor search contributes to reduce the time spent on codebook design. For instance, considering the Elaine image, for FKM1 and FKM1-PDS, the use of PDS in the partitioning step of the second phase (crisp phase) of FKM1 led to a codebook design average time of 0.34 s, which is lower than the 0.38 s spent for codebook design using the full search (FS), or brute force, in that phase. If ENNS is used instead of FS, the time spent is 0.27 s. The highest time savings, concerning FKM1, are obtained by using the scale factor s to decrease the number of iterations combined with the use of ENNS for efficient nearest neighbor search. Indeed, regarding the Elaine image, that combination led to an average codebook design time of 0.25 s.
With respect to Table 3, it is observed that the highest time spent for codebook design was that of the FKM algorithm. It is important to mention that this behavior is observed for all images and codebook sizes considered in the present work. As an example, for the Boat image and codebook size N = 32, the codebook design average time spent by FKM is 1.64 s, which is 8.2 times higher than the average time spent by KM and about 3.8 times higher than the average time spent by FKM2. The results in Table 3 also confirm the benefits of using the modified versions of the codebook design algorithms (M versions, with the use of the scale factor s) and the nearest neighbor search algorithms for codebook design time savings when compared to the standard versions of the algorithms. For each image under consideration, it is observed that all algorithms lead to close PSNR values.
From the results presented in Table 4 and Table 5, it is observed that the codebook design average time spent by FKM2 is higher than that of FKM1. It is important to mention that the same behavior is observed for all the images under consideration, for codebook sizes 128 and 256. Regarding the number of iterations, it is observed in Table 4 and Table 5 that the modified versions using the scale factor s (algorithms MFKM, MFKM1 and MFKM2) have an average execution time lower than that of the corresponding standard versions (FKM, FKM1 and FKM2, respectively), due to the savings in the number of iterations. Table 4 and Table 5 point out that the lowest codebook design average time is obtained with the combination of the scale factor s and ENNS. Indeed, considering for instance fuzzy K-means family 2 and the Clock image, in Table 5 the average time of MFKM2-ENNS is 0.92 s, which is lower than the average time presented by all the other versions (FKM2, MFKM2, FKM2-PDS, MFKM2-PDS and FKM2-ENNS).
It is observed in Table 2, Table 3, Table 4 and Table 5 that the best PSNR results, for five out of six images under consideration, for N = 32 and N = 64, are obtained by using algorithms MFKM2, MFKM2-PDS and MFKM2-ENNS.
From Table 6 and Table 7, for all images under consideration and for all codebook sizes, the modified versions (those using the scale factor s) of the algorithms led to an average number of iterations smaller than that of the original versions. For instance, for the Lena image, the average number of iterations of MFKM is 21.25 and the corresponding number for FKM is 27.60; for the Goldhill image, the MFKM1 average number of iterations is 16.30 and the FKM1 average number of iterations is 20.60; for the Boat image, the average number of iterations of MFKM2 is 14.15, and the corresponding number for FKM2 is 16.25. The use of ENNS has proved to be an effective alternative for codebook design time savings. Consider, for instance, the Elaine image, for which the codebook design average time of FKM1-ENNS is 0.77 s, while the corresponding time for FKM1 is 1.13 s. For all images under consideration, for each fuzzy K-means family, the highest codebook design time savings are obtained by combining the use of the scale factor s (M version of the codebook design algorithm) with ENNS. As an example, for all images under consideration, the codebook design average time spent by MFKM2-ENNS is lower than the corresponding times of FKM2, MFKM2, FKM2-PDS, MFKM2-PDS and FKM2-ENNS.
As can be observed in Table 8 and Table 9, in comparison with the FKM1 family, the modified version MFKM1 has a smaller average number of iterations, which leads to a lower codebook design average time. Additional time savings are obtained by the use of efficient nearest neighbor search methods, that is, PDS or ENNS. It is important to observe that the modified versions generally lead to higher PSNR values when compared to the original versions. As an example, for the Lena image, MFKM1 led to a 30.13 dB average PSNR, while the original version led to a corresponding 29.74 dB PSNR; for the same image, the substitution of FKM2 by MFKM2 led to an increase of 0.20 dB in terms of average PSNR.
According to Table 8 and Table 9, for codebook size N = 256, for four out of six images under consideration, the best PSNR results are obtained by using algorithms MFKM2, MFKM2-PDS and MFKM2-ENNS. In particular, for the Lena image, the substitution of KM by MFKM2-ENNS leads to a PSNR gain of 0.54 dB.
According to Table 10, the best performance in terms of SSIM is obtained by using MFKM codebooks—the highest SSIM values are observed for MFKM in five out of seven training sets. P-M-T is a training set corresponding to the concatenation of images Peppers, Mandrill and Tiffany. It is important to point out that, for a fixed training set (with the exception of Lena), the absolute difference between the best SSIM result and the worst SSIM result is below 0.0090.
It is observed in Table 11 that MFKM leads to the highest SSIM values for five out of seven training sets. For a fixed training set (with the exception of Elaine and Clock), the absolute difference between the best SSIM result and the worst SSIM result is below 0.0090.
For N = 256, it is observed in Table 13 that MFKM leads to the best SSIM results for five out of the seven training sets considered. An interesting performance nuance must be pointed out: MFKM2, MFKM2-PDS and MFKM2-ENNS are the techniques that lead to the highest PSNR results (according to Tables 2–9), but they do not lead to the best SSIM results (as can be observed from Tables 10–13). It is important to observe that codebook design aims to decrease the distortion (mean square error) obtained in representing the training vectors by the corresponding nearest neighbors, that is, by the corresponding codevectors with minimum distance. In other words, higher PSNR values are obtained by codebooks that are more "tuned" to the training set, that is, by codebooks that introduce less distortion in terms of MSE, which does not necessarily correspond to higher SSIM values. PSNR and SSIM results are presented in Table 14 for images reconstructed by codebooks designed with the training set P-M-T. The method MFKM2-ENNS was used for codebooks designed for K = 16 and N = 32, 64, 128 and 256, leading to corresponding code rates of 0.3125 bpp, 0.375 bpp, 0.4375 bpp and 0.5 bpp. It is observed that, for a given image, both PSNR and SSIM increase with N, that is, the distortion decreases as the code rate increases.
The last set of simulations shows that vector quantization in the Discrete Wavelet Transform (DWT) domain (that is, quantizing the wavelet coefficients) leads to reconstructed images with better quality when compared to the ones obtained by VQ in the spatial domain (that is, quantizing the gray scale values of pixels). For the purpose of DWT VQ [52] at the code rate 0.3125 bpp, a three-level multiresolution wavelet decomposition was performed [53] with the wavelet family Daubechies 6. The resulting subbands $S_{ij}$ are submitted to quantization schemes according to Figure 2.
Subbands $S_{21}$, $S_{22}$ and $S_{23}$ are submitted to the respective wavelet VQ codebooks with N = 256 and K = 16 (blocks of 4 × 4 wavelet coefficients). Subbands $S_{31}$, $S_{32}$ and $S_{33}$ are submitted to the respective wavelet VQ codebooks with N = 256 and K = 4 (blocks of 2 × 2 wavelet coefficients). Subband $S_{30}$ is submitted to scalar quantization (SQ) with 8.0 bpp. Subbands $S_{11}$, $S_{12}$ and $S_{13}$ are excluded (that is, code rate 0 bpp). One can observe in Figure 3 that the application of the inverse discrete wavelet transform after the exclusion of subbands $S_{11}$, $S_{12}$ and $S_{13}$, preserving all the other subbands with the wavelet coefficients unchanged, leads to images close to the respective original ones (Figure 1), with good quality, as revealed by visual inspection.
It is worth mentioning that, in the general case, after the application of a multiresolution discrete wavelet transform (DWT) with $L$ resolution levels, the subbands $S_{ij}$, with $i = 1, 2, \ldots, L$ and $j = 1, 2, 3$, are submitted to multiresolution VQ codebooks. In other words, with the exception of subband $S_{L0}$ (corresponding to the approximation component in the lowest resolution level), each subband is quantized with a specific codebook. The subband $S_{L0}$ is submitted to 8.0 bpp scalar quantization, since it is the subband with the highest importance to the quality of the image obtained from the inverse discrete wavelet transform (IDWT).
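A sketch of the three-level decomposition using the PyWavelets package is given below; the use of PyWavelets, the periodization boundary mode (chosen so that subband sizes halve exactly) and the mapping of the horizontal/vertical/diagonal detail coefficients onto $S_{i1}$, $S_{i2}$, $S_{i3}$ are assumptions made for illustration, not details taken from the paper:

```python
import numpy as np
import pywt  # PyWavelets

# Placeholder 256 x 256 image; in practice, load the grayscale image to be coded.
image = np.random.default_rng(0).integers(0, 256, (256, 256)).astype(np.float64)

# Three-level Daubechies-6 decomposition. wavedec2 returns the coarsest
# approximation followed by detail subbands from coarse to fine.
coeffs = pywt.wavedec2(image, wavelet='db6', mode='periodization', level=3)
S30 = coeffs[0]            # 32 x 32 approximation: 8.0 bpp scalar quantization
S31, S32, S33 = coeffs[1]  # 32 x 32 details: VQ with K = 4 (2 x 2 blocks), N = 256
S21, S22, S23 = coeffs[2]  # 64 x 64 details: VQ with K = 16 (4 x 4 blocks), N = 256
S11, S12, S13 = coeffs[3]  # 128 x 128 details: discarded (0 bpp)

# After quantizing each subband, the image is rebuilt with
# pywt.waverec2(coeffs, 'db6', mode='periodization').
```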
Assume the general case of an image with $P \times P$ pixels. The number of wavelet coefficients in $S_{ij}$, with $1 \leq i \leq L$, is $\frac{P \times P}{2^i \times 2^i}$. Let $R_{S_{ij}}$ be the code rate (in bpp or, correspondingly, in bit/coefficient) of VQ for subband $S_{ij}$, $1 \leq i \leq L$ and $1 \leq j \leq 3$, and $R_{S_{L0}}$ be the code rate (in bpp) of the scalar quantization of subband $S_{L0}$. The final code rate $R_T$ (in bpp) of the image coding using DWT (with $L$ resolution levels) and VQ is given by:
$$R_T = \frac{1}{P \times P}\left( \frac{P \times P}{2^L \times 2^L}\, R_{S_{L0}} + \sum_{i=1}^{L} \sum_{j=1}^{3} \frac{P \times P}{2^i \times 2^i}\, R_{S_{ij}} \right), \qquad (16)$$
that is:
$$R_T = \frac{R_{S_{L0}}}{2^{2L}} + \sum_{i=1}^{L} \sum_{j=1}^{3} \frac{R_{S_{ij}}}{2^{2i}}. \qquad (17)$$
For VQ with dimension $K$ and codebook size $N$, the corresponding code rate is $\frac{1}{K}\log_2 N$. Hence, according to Figure 2, it follows that:
$$R_{S_{21}} = R_{S_{22}} = R_{S_{23}} = \tfrac{1}{16}\log_2 256 = 0.5\ \text{bpp}$$
and:
$$R_{S_{31}} = R_{S_{32}} = R_{S_{33}} = \tfrac{1}{4}\log_2 256 = 2.0\ \text{bpp}.$$
From Figure 2, it follows that $R_{S_{30}} = 8.0$ bpp and $R_{S_{11}} = R_{S_{12}} = R_{S_{13}} = 0$ bpp. Thus, from Equation (17), the corresponding overall code rate under the conditions presented in Figure 2 is $R_T = 0.3125$ bpp. It is worth mentioning that the importance of the subbands $S_{ij}$ for the image quality increases with $i$, which is the reason why $R_{S_{3j}} > R_{S_{2j}}$, for $j = 1, 2, 3$.
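Substituting the rates of Figure 2 into Equation (17) with $L = 3$ gives the stated overall rate:
$$R_T = \frac{8.0}{2^{6}} + \frac{3 \times 0}{2^{2}} + \frac{3 \times 0.5}{2^{4}} + \frac{3 \times 2.0}{2^{6}} = 0.125 + 0 + 0.09375 + 0.09375 = 0.3125\ \text{bpp}.$$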
As can be observed in Figure 4 and Figure 5, visual inspections of the reconstructed images reveal the superiority of DWT VQ over vector quantization in the spatial domain. The superiority is also confirmed in terms of PSNR and SSIM values.
The superiority of DWT VQ over spatial domain VQ is also observed in Table 15. As an example, by using P-M-T as the training set, a PSNR gain of 3.10 dB for the Elaine image is obtained by substituting spatial domain VQ by DWT VQ. For a given image, one can observe that better PSNR and SSIM results are obtained by DWT VQ with codebooks designed from P-M-T than by spatial domain VQ with a codebook designed from the image itself. Consider, for instance, the Lena image. If the Lena image is reconstructed using spatial domain VQ with a codebook designed using the image itself as the training set, a PSNR of 26.72 dB and an SSIM of 0.7791 are obtained. If the Lena image is reconstructed in the DWT domain with multiresolution codebooks designed using P-M-T as the training set, a PSNR of 29.35 dB and an SSIM of 0.8367 are obtained.
As a final comment, image coding based on VQ is one of the possible applications of the families of fuzzy K-means algorithms considered in this paper. Although other efficient image coding techniques exist, the focus of the present work is to show that the proposed acceleration techniques make VQ codebook design faster.

6. Conclusions

In this work, alternatives were presented for accelerating families of fuzzy K-means algorithms applied to vector quantization codebook design. A lookahead approach was used with the purpose of decreasing the number of iterations of the algorithms. The approach consists in using a scale factor in the computation of the codevectors.
An additional acceleration was obtained by accommodating efficient nearest neighbor search techniques in the partitioning step of the algorithms. With such approach, savings are obtained in the number of operations spent by the algorithms. The combination of the scale factor (lookahead approach) with efficient nearest neighbor search was evaluated in the scenario of image vector quantization codebook design. Savings up to 40% in the time spent for codebook design were obtained, without sacrificing the quality of the codebook, assessed by the peak signal-to-noise ratio (PSNR) as well as by structural similarity (SSIM) index of the reconstructed images.

Acknowledgments

The authors would like to thank CNPq and PIBITI program of the Catholic University of Pernambuco for supporting this research. The authors also would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper.

Author Contributions

Edson Mata and Francisco Madeiro have proposed the alternative of accelerating fuzzy K-means algorithms. Edson Mata implemented the techniques, with the support of Silvio Bandeira. All the authors have contributed to simulation results analysis and to the manuscript writing. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gersho, A.; Gray, R.M. Vector Quantization and Signal Compression; Kluwer Academic Publishers: Boston, MA, USA, 1992. [Google Scholar]
  2. Gray, R.M. Vector Quantization. IEEE ASSP Mag. 1984, 1, 4–29. [Google Scholar] [CrossRef]
  3. Ma, Z.; Taghia, J.; Kleijn, W.B.; Guo, J. Line Spectral Frequencies Modeling by a Mixture of Von Mises-Fisher Distributions. Signal Proc. 2015, 114, 219–224. [Google Scholar] [CrossRef]
  4. Paliwal, K.K.; Atal, B.S. Efficient Vector Quantization of LPC Parameters at 24 Bits/Frame. IEEE Trans. Audio Speech Lang. Proc. 1993, 1, 3–14. [Google Scholar] [CrossRef]
  5. Yahampath, P.; Rondeau, P. Multiple-Description Predictive-Vector Quantization with Applications to Low Bit-Rate Speech Coding Over Networks. IEEE Trans. Audio Speech Lang. Proc. 2007, 15, 749–755. [Google Scholar] [CrossRef]
  6. Akhtarkavan, E.; Salleh, M.F.M. Multiple Descriptions Coinciding Lattice Vector Quantizer for Wavelet Image Coding. IEEE Trans. Image Proc. 2012, 21, 653–661. [Google Scholar] [CrossRef] [PubMed]
  7. Tsolakis, D.; Tsekouras, G.E.; Tsimikas, J. Fuzzy Vector Quantization for Image Compression Based on Competitive Agglomeration and a Novel Codeword Migration Strategy. Eng. Appl. Artif. Intell. 2012, 25, 1212–1225. [Google Scholar] [CrossRef]
  8. Wen, J.; Ma, C.; Zhao, J. FIVQ Algorithm for Interference Hyper-Spectral Image Compression. Opt. Commun. 2014, 322, 97–104. [Google Scholar] [CrossRef]
  9. Hu, Y.C.; Chen, W.L.; Lo, C.C.; Wu, C.M.; Wen, C.H. Efficient VQ-Based Image Coding Scheme Using Inverse Function and Lossless Index Coding. Signal Proc. 2013, 93, 2432–2439. [Google Scholar] [CrossRef]
  10. Hanilçi, C.; Ertas, F. Investigation of the Effect of Data Duration and Speaker Gender on Text-Independent Speaker Recognition. Comput. Electr. Eng. 2013, 39, 441–452. [Google Scholar] [CrossRef]
  11. Madeiro, F.; Fechine, J.M.; Lopes, W.T.A.; Aguiar Neto, B.G.; Alencar, M.S. Identificação Vocal por Frequência Fundamental, QV e HMMS. In Em-TOM-Ação: A Prosódia em Perspectiva, 1st ed.; Aguiar, M.A.M., Madeiro, F., Eds.; Editora Universitária da UFPE: Recife, Brazil, 2007; pp. 91–120. [Google Scholar]
  12. Qin, C.; Chang, C.-C. A Novel Joint Data-Hiding and Compression Scheme Based on SMVQ and Image Inpainting. IEEE Trans. Image Proc. 2014, 23, 969–978. [Google Scholar]
  13. Chang, C.-C.; Wu, W.-C. Hiding Secret Data Adaptively in Vector Quantisation Index Tables. IEEE Proc. Vis. Image Signal Proc. 2006, 153, 589–597. [Google Scholar] [CrossRef]
  14. Qin, C.; Hu, Y.-C. Reversible Data Hiding in VQ Index Table with Lossless Coding and Adaptive Switching Mechanism. Signal Proc. 2016, 129, 48–55. [Google Scholar] [CrossRef]
  15. Chang, C.C.; Nguyen, T.S.; Lin, C.C. A Reversible Compression Code Hiding Using SOC and SMVQ Indices. Inf. Sci. 2015, 300, 85–99. [Google Scholar] [CrossRef]
  16. Tu, T.Y.; Wang, C.H. Reversible Data Hiding with High Payload Based on Referred Frequency for VQ Compressed Codes Index. Signal Proc. 2015, 108, 278–287. [Google Scholar] [CrossRef]
  17. Kieu, T.D.; Ramroach, S. A Reversible Steganographic Scheme for VQ Indices Based on Joint Neighboring Coding. Exp. Syst. Appl. 2015, 42, 713–722. [Google Scholar] [CrossRef]
  18. Hu, H.T.; Hsu, L.Y.; Chou, H.H. Variable-Dimensional Vector Modulation for Perceptual-Based DWT Blind Audio Watermarking with Adjustable Payload Capacity. Digit. Signal Proc. 2014, 31, 115–123. [Google Scholar] [CrossRef]
  19. Vieira, R.T.; Brunet, N.; Costa, S.C.; Correia, S.; Aguiar Neto, B.G.; Fechine, J.M. Combining Entropy Measurements and Cepstral Analysis for Pathological Voice Assessment. J. Med. Biol. Eng. 2012, 32, 429–435. [Google Scholar] [CrossRef]
  20. Linde, Y.; Buzo, A.; Gray, R.M. An Algorithm for Vector Quantizer Design. IEEE Trans. Commun. 1980, 28, 84–95. [Google Scholar] [CrossRef]
  21. Tsolakis, D.; Tsekouras, G.E.; Niros, A.D.; Rigos, A. On the Systematic Development of Fast Fuzzy Vector Quantization for Grayscale Image Compression. Inf. Sci. 2012, 36, 83–96. [Google Scholar] [CrossRef] [PubMed]
  22. Hung, W.-L.; Chen, D.-H.; Yang, M.-S. Suppressed Fuzzy-Soft Learning Vector Quantization for MRI Segmentation. Inf. Sci. 2011, 52, 33–43. [Google Scholar] [CrossRef] [PubMed]
  23. Krishnamurthy, A.K.; Ahalt, S.C.; Melton, D.E.; Chen, P. Neural Networks for Vector Quantization of Speech and Images. IEEE J. Sel. Areas Commun. 1990, 8, 1449–1457. [Google Scholar] [CrossRef]
  24. Azevedo, C.R.B.; Azevedo, F.E.A.G.; Lopes, W.T.A.; Madeiro, F. Terrain-Based Memetic Algorithms to Vector Quantization Design. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2008); Krasnogor, N., Ed.; DEU: Berlin, Germany, 2009; Volume 236, pp. 197–211. [Google Scholar]
  25. Pan, J.S.; Mclnnes, F.R.; Jack, M.A. VQ Codebook Design Using Genetic Algorithms. Electron. Lett. 1995, 31, 1418–1419. [Google Scholar] [CrossRef]
  26. Horng, M.-H. Vector Quantization Using the Firefly Algorithm for Image Compression. Inf. Sci. 2012, 39, 1078–1091. [Google Scholar] [CrossRef]
  27. Horng, M.-H.; Jiang, T.-W. Image Vector Quantization Algorithm via Honey Bee Mating Optimization. Inf. Sci. 2012, 38, 1382–1392. [Google Scholar] [CrossRef]
  28. Karayiannis, N.B.; Bezdek, J.C. An Integrated Approach to Fuzzy Learning Vector Quantization and Fuzzy C-Means Clustering. IEEE Trans. Fuzzy Syst. 1997, 5, 622–628. [Google Scholar] [CrossRef]
  29. Karayiannis, N.B.; Pai, P.I. Fuzzy Vector Quantization Algorithms and Their Application in Image Compression. IEEE Trans. Image Proc. 1995, 4, 1193–1201. [Google Scholar] [CrossRef] [PubMed]
  30. Tsao, E.C.K.; Bezdek, J.C.; Pal, N.R. Fuzzy Kohonen Clustering Networks. Pattern Recognit. 1994, 27, 757–764. [Google Scholar] [CrossRef]
  31. Tsekouras, G.E.; Mamalis, A.; Anagnostopoulos, C.; Gavalas, D.; Economou, D. Improved Batch Fuzzy Learning Vector Quantization for Image Compression. Inf. Sci. 2008, 178, 3895–3907. [Google Scholar] [CrossRef]
  32. Triantafilis, J.; Gibbs, I.; Earl, N. Digital Soil Pattern Recognition in the Lower Namoi Valley Using Numerical Clustering of Gamma-Ray Spectrometry Data. Geoderma 2013, 192, 407–421. [Google Scholar] [CrossRef]
  33. Malinverni, E.S.; Fangi, G. Comparative Cluster Analysis to Localize Emergencies in Archaeology. J. Cult. Herit. 2009, 10, e10–e19. [Google Scholar] [CrossRef]
  34. Chen, G.; Meng, X.; Wang, Y.; Zhang, Y.; Tian, P.; Yang, H. Integrated WiFi/PDR/Smartphone Using an Unscented Kalman Filter Algorithm for 3D Indoor Localization. Sensors 2015, 15, 24595–24614. [Google Scholar] [CrossRef] [PubMed]
  35. Fernández, R.; Montes, H.; Salinas, C.; Sarria, J.; Armada, M. Combination of RGB and Multispectral Imagery for Discrimination of Cabernet Sauvignon Grapevine Elements. Sensors 2013, 13, 7838–7859. [Google Scholar] [CrossRef] [PubMed]
  36. Zhang, C.; Xiao, X.; Li, X.; Chen, Y.-J.; Zhen, W.; Chang, J.; Zheng, C.; Liu, Z. White Blood Cell Segmentation by Color-Space-Based K-Means Clustering. Sensors 2014, 14, 16128–16147. [Google Scholar] [CrossRef] [PubMed]
  37. Chen, C.-H.; Huang, W.-T.; Tan, T.-H.; Chang, C.-C.; Chang, Y.-J. Using K-Nearest Neighbor Classification to Diagnose Abnormal Lung Sounds. Sensors 2015, 15, 13132–13158. [Google Scholar] [CrossRef] [PubMed]
  38. Liu, N.; Cao, W.; Zhu, Y.; Zhang, J.; Pang, F.; Ni, J. The Node Deployment of Intelligent Sensor Networks Based on the Spatial Difference of Farmland Soil. Sensors 2015, 15, 28314–28339. [Google Scholar] [CrossRef] [PubMed]
  39. Adhikari, S.K.; Sing, J.K.; Basu, D.K.; Nasipuri, M. Conditional Spatial Fuzzy C-Means Clustering Algorithm for Segmentation of MRI Images. Appl. Soft Comput. 2015, 34, 758–769. [Google Scholar] [CrossRef]
  40. Mekhmoukh, A.; Mokrani, K. Improved Fuzzy C-Means Based Particle Swarm Optimization (PSO) Initialization and Outlier Rejection with Level Set Methods for MR Brain Image Segmentation. Comput. Methods Prog. Biomed. 2015, 122, 266–281. [Google Scholar] [CrossRef] [PubMed]
  41. Kinnunen, T.; Sidoroff, I.; Tuononen, M.; Fränti, P. Comparison of Clustering Methods: A Case Study of Text-Independent Speaker Modeling. Pattern Recognit. Lett. 2011, 32, 1604–1617. [Google Scholar] [CrossRef]
  42. Alkhalaf, S.; Alfarraj, O.; Hemeida, A.M. Fuzzy-VQ Image Compression based Hybrid PSOGSA Optimization Algorithm. In Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Istanbul, Turkey, 2–5 August 2015; pp. 1–6.
  43. Bhattacharyya, P.; Mitra, A.; Chatterjee, A. Vector Quantization Based Image Compression Using Generalized Improved Fuzzy Clustering. In Proceedings of the International Conference on Control, Instrumentation, Energy and Communication (CIEC14), Kolkata, India, 31 January–2 February 2014; pp. 662–666.
  44. Guan, L.; Kamel, M. Equal-Average Hyperplane Partitioning Method for Vector Quantization of Image Data. Pattern Recognit. Lett. 1992, 13, 693–699. [Google Scholar] [CrossRef]
  45. Ra, S.W.; Kim, J.K. A Fast Mean-Distance-Ordered Partial Codebook Search Algorithm for Image Vector Quantization. IEEE Trans. Circuits Syst. II 1993, 40, 576–579. [Google Scholar] [CrossRef]
  46. Lee, D.; Baek, S.; Sung, K. Modified K-Means Algorithm for Vector Quantizer Design. IEEE Signal Proc. Lett. 1997, 4, 2–4. [Google Scholar]
  47. Paliwal, K.K.; Ramasubramanian, V. Comments on Modified K-Means Algorithm for Vector Quantizer Design. IEEE Trans. Image Proc. 2000, 9, 1964–1967. [Google Scholar] [CrossRef] [PubMed]
  48. Bei, C.D.; Gray, R.M. An Improvement of the Minimum Distortion Encoding Algorithm for Vector Quantization. IEEE Trans. Commun. 1985, 33, 1132–1133. [Google Scholar]
  49. Chu, S.C.; Lu, Z.M.; Pan, J.S. Hadamard Transform Based Fast Codeword Search Algorithm for High-Dimensional VQ Encoding. Inf. Sci. 2007, 177, 734–746. [Google Scholar] [CrossRef]
  50. Chang, C.-C.; Wu, W.-C. Fast Planar-Oriented Ripple Search Algorithm for Hyperspace VQ Codebook. IEEE Trans. Image Proc. 2007, 16, 1538–1547. [Google Scholar] [CrossRef]
  51. Wang, Z.; Bovik, A.C.; Sheikh, H.R. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Proc. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  52. Averbuch, A.; Lazar, D.; Israeli, M. Image Compression Using Wavelet Transform and Multiresolution Decomposition. IEEE Trans. Image Proc. 1996, 5, 4–15. [Google Scholar] [CrossRef] [PubMed]
  53. Mallat, S.G. A Theory for Multiresolution Signal Decomposition: The Wavelet Representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 674–693. [Google Scholar]
Figure 1. Images 256 × 256 pixels, 8.0 bpp. (a) Lena; (b) Barbara; (c) Elaine; (d) Boat; (e) Clock; (f) Goldhill; (g) Peppers; (h) Mandrill; (i) Tiffany.
Figure 2. Image encoding using DWT.
Figure 3. Images obtained from the inverse discrete wavelet transform with the exclusion of subbands S11, S12 and S13. (a) Lena PSNR = 30.05 dB; (b) Barbara PSNR = 25.54 dB; (c) Elaine PSNR = 31.88 dB; (d) Boat PSNR = 26.07 dB; (e) Clock PSNR = 29.02 dB; (f) Goldhill PSNR = 27.77 dB; (g) Peppers PSNR = 30.74 dB; (h) Mandrill PSNR = 24.93 dB; (i) Tiffany PSNR = 31.69 dB.
Figure 4. Images Lena: (a) Original; (b) Reconstructed using spatial domain VQ with 0.3125 bpp (PSNR = 25.62 dB and SSIM = 0.7211); (c) Reconstructed using DWT VQ with 0.3125 bpp (PSNR = 29.35 dB and SSIM = 0.8367). Codebooks were designed with training set P-M-T by MFKM2-ENNS.
Figure 5. Images Goldhill: (a) Original; (b) Reconstructed using spatial domain VQ with 0.3125 bpp (PSNR = 25.71 dB and SSIM = 0.6391); (c) Reconstructed using DWT VQ with 0.3125 bpp (PSNR = 26.81 dB and SSIM = 0.7640). Codebooks were designed with training set P-M-T by MFKM2-ENNS.
Table 1. Notation.
KM | K-means
FKM | Fuzzy K-means
MFKM | Modified Fuzzy K-means (accelerated version with scale s)
FKM1 | Fuzzy K-means Family 1
MFKM1 | Modified Fuzzy K-means Family 1 (accelerated version with scale s)
FKM1-PDS | Fuzzy K-means Family 1 with Partial Distortion Search in the crisp phase
MFKM1-PDS | Modified Fuzzy K-means Family 1 (accelerated version with scale s) with Partial Distortion Search in the crisp phase
FKM1-ENNS | Fuzzy K-means Family 1 with Equal-Average Nearest Neighbor Search in the crisp phase
MFKM1-ENNS | Modified Fuzzy K-means Family 1 (accelerated version with scale s) with Equal-Average Nearest Neighbor Search in the crisp phase
FKM2 | Fuzzy K-means Family 2
MFKM2 | Modified Fuzzy K-means Family 2 (accelerated version with scale s)
FKM2-PDS | Fuzzy K-means Family 2 with Partial Distortion Search in the crisp phase
MFKM2-PDS | Modified Fuzzy K-means Family 2 (accelerated version with scale s) with Partial Distortion Search in the crisp phase
FKM2-ENNS | Fuzzy K-means Family 2 with Equal-Average Nearest Neighbor Search in the crisp phase
MFKM2-ENNS | Modified Fuzzy K-means Family 2 (accelerated version with scale s) with Equal-Average Nearest Neighbor Search in the crisp phase
Table 2. PSNR (in dB), number of iterations and codebook design time (in seconds) for images Lena, Barbara and Elaine, using N = 32.
Algorithm | Lena: PSNR / Iter / Time | Barbara: PSNR / Iter / Time | Elaine: PSNR / Iter / Time
KM | 26.61 / 17.20 / 0.16 | 24.76 / 15.20 / 0.12 | 27.75 / 18.15 / 0.16
FKM | 26.57 / 19.65 / 1.53 | 24.71 / 16.00 / 1.11 | 27.70 / 19.75 / 1.65
MFKM | 26.61 / 14.75 / 1.12 | 24.72 / 12.30 / 0.86 | 27.72 / 15.80 / 1.31
FKM1 | 26.60 / 22.35 / 0.35 | 24.77 / 19.00 / 0.38 | 27.77 / 24.35 / 0.38
MFKM1 | 26.62 / 18.25 / 0.28 | 24.79 / 16.05 / 0.31 | 27.77 / 18.55 / 0.30
FKM1-PDS | 26.60 / 22.35 / 0.33 | 24.77 / 19.00 / 0.34 | 27.77 / 24.35 / 0.34
MFKM1-PDS | 26.62 / 18.25 / 0.26 | 24.79 / 16.05 / 0.28 | 27.77 / 18.55 / 0.29
FKM1-ENNS | 26.60 / 22.35 / 0.26 | 24.77 / 19.00 / 0.28 | 27.77 / 24.35 / 0.27
MFKM1-ENNS | 26.62 / 18.25 / 0.22 | 24.79 / 16.05 / 0.25 | 27.77 / 18.55 / 0.25
FKM2 | 26.60 / 15.35 / 0.39 | 24.77 / 14.25 / 0.30 | 27.77 / 18.50 / 0.40
MFKM2 | 26.63 / 12.70 / 0.35 | 24.78 / 11.75 / 0.24 | 27.80 / 14.40 / 0.33
FKM2-PDS | 26.60 / 15.35 / 0.35 | 24.77 / 14.25 / 0.27 | 27.77 / 18.45 / 0.37
MFKM2-PDS | 26.63 / 12.70 / 0.33 | 24.78 / 11.75 / 0.22 | 27.80 / 14.40 / 0.30
FKM2-ENNS | 26.60 / 15.35 / 0.33 | 24.77 / 14.25 / 0.24 | 27.77 / 18.50 / 0.29
MFKM2-ENNS | 26.63 / 12.70 / 0.30 | 24.78 / 11.75 / 0.20 | 27.80 / 14.40 / 0.26
Table 3. PSNR (in dB), number of iterations and codebook design time (in seconds) for images Boat, Clock and Goldhill, using N = 32.
Algorithm | Boat: PSNR / Iter / Time | Clock: PSNR / Iter / Time | Goldhill: PSNR / Iter / Time
KM | 24.92 / 18.95 / 0.20 | 26.16 / 26.10 / 0.32 | 26.66 / 17.00 / 0.34
FKM | 24.84 / 21.20 / 1.64 | 26.23 / 41.00 / 1.78 | 26.67 / 18.60 / 3.05
MFKM | 24.87 / 16.20 / 0.98 | 26.28 / 35.55 / 1.08 | 26.70 / 16.30 / 2.57
FKM1 | 24.91 / 25.75 / 0.42 | 26.19 / 33.40 / 0.55 | 26.67 / 22.25 / 0.46
MFKM1 | 24.93 / 19.80 / 0.33 | 26.25 / 25.80 / 0.45 | 26.68 / 18.85 / 0.36
FKM1-PDS | 24.91 / 25.75 / 0.40 | 26.19 / 33.40 / 0.50 | 26.67 / 22.25 / 0.43
MFKM1-PDS | 24.93 / 19.80 / 0.31 | 26.25 / 25.80 / 0.41 | 26.68 / 18.85 / 0.32
FKM1-ENNS | 24.91 / 25.75 / 0.32 | 26.19 / 33.40 / 0.39 | 26.67 / 22.25 / 0.35
MFKM1-ENNS | 24.93 / 19.80 / 0.27 | 26.25 / 25.80 / 0.34 | 26.68 / 18.85 / 0.30
FKM2 | 24.91 / 18.90 / 0.43 | 26.26 / 23.30 / 0.46 | 26.67 / 16.20 / 0.63
MFKM2 | 24.93 / 15.20 / 0.37 | 26.32 / 20.50 / 0.40 | 26.70 / 13.55 / 0.55
FKM2-PDS | 24.91 / 18.90 / 0.41 | 26.26 / 23.30 / 0.43 | 26.67 / 16.20 / 0.58
MFKM2-PDS | 24.93 / 15.20 / 0.35 | 26.32 / 20.50 / 0.39 | 26.70 / 13.55 / 0.51
FKM2-ENNS | 24.91 / 18.90 / 0.36 | 26.26 / 23.30 / 0.40 | 26.67 / 16.20 / 0.47
MFKM2-ENNS | 24.93 / 15.20 / 0.32 | 26.32 / 20.50 / 0.34 | 26.70 / 13.55 / 0.46
Table 4. PSNR (in dB), number of iterations and codebook design time (in seconds) for images Lena, Barbara and Elaine, using N = 64.
Algorithm | Lena: PSNR / Iter / Time | Barbara: PSNR / Iter / Time | Elaine: PSNR / Iter / Time
KM | 27.74 / 17.80 / 0.32 | 25.68 / 16.25 / 0.25 | 29.06 / 18.00 / 0.24
FKM | 27.69 / 22.85 / 5.81 | 25.64 / 21.15 / 4.03 | 29.09 / 24.05 / 4.91
MFKM | 27.73 / 17.90 / 4.51 | 25.64 / 15.40 / 3.31 | 29.13 / 18.60 / 3.98
FKM1 | 27.67 / 23.15 / 0.61 | 25.74 / 20.65 / 0.55 | 29.01 / 23.55 / 0.62
MFKM1 | 27.75 / 19.10 / 0.48 | 25.77 / 17.95 / 0.48 | 29.07 / 19.90 / 0.53
FKM1-PDS | 27.67 / 23.15 / 0.53 | 25.74 / 20.65 / 0.51 | 29.01 / 23.55 / 0.54
MFKM1-PDS | 27.75 / 19.10 / 0.46 | 25.77 / 17.95 / 0.44 | 29.07 / 19.90 / 0.47
FKM1-ENNS | 27.67 / 23.15 / 0.39 | 25.74 / 20.65 / 0.43 | 29.01 / 23.55 / 0.46
MFKM1-ENNS | 27.75 / 19.10 / 0.35 | 25.77 / 17.95 / 0.36 | 29.07 / 19.90 / 0.40
FKM2 | 27.80 / 14.50 / 0.81 | 25.73 / 15.00 / 0.71 | 29.05 / 16.45 / 0.81
MFKM2 | 27.85 / 12.85 / 0.71 | 25.75 / 12.85 / 0.61 | 29.10 / 13.70 / 0.75
FKM2-PDS | 27.80 / 14.50 / 0.70 | 25.73 / 15.00 / 0.62 | 29.05 / 16.45 / 0.74
MFKM2-PDS | 27.85 / 12.85 / 0.67 | 25.75 / 12.85 / 0.55 | 29.10 / 13.70 / 0.68
FKM2-ENNS | 27.80 / 14.50 / 0.62 | 25.73 / 14.95 / 0.57 | 29.05 / 16.40 / 0.62
MFKM2-ENNS | 27.85 / 12.85 / 0.60 | 25.75 / 12.85 / 0.52 | 29.10 / 13.70 / 0.60
Table 5. PSNR (in dB), number of iterations and codebook design time (in seconds) for images Boat, Clock and Goldhill, using N = 64.
Algorithm | Boat: PSNR / Iter / Time | Clock: PSNR / Iter / Time | Goldhill: PSNR / Iter / Time
KM | 25.90 / 18.45 / 0.46 | 27.17 / 22.05 / 0.62 | 27.69 / 16.15 / 0.36
FKM | 25.84 / 23.30 / 6.31 | 27.41 / 42.70 / 8.11 | 27.68 / 19.55 / 5.81
MFKM | 25.85 / 16.05 / 4.61 | 27.46 / 33.85 / 5.53 | 27.70 / 15.20 / 4.51
FKM1 | 25.85 / 24.35 / 0.73 | 27.08 / 25.10 / 1.05 | 27.69 / 21.80 / 0.64
MFKM1 | 25.91 / 18.90 / 0.62 | 27.16 / 20.10 / 0.80 | 27.71 / 18.05 / 0.53
FKM1-PDS | 25.85 / 24.35 / 0.68 | 27.08 / 25.65 / 0.88 | 27.69 / 21.85 / 0.60
MFKM1-PDS | 25.91 / 18.90 / 0.56 | 27.16 / 20.10 / 0.70 | 27.71 / 18.05 / 0.49
FKM1-ENNS | 25.85 / 24.35 / 0.55 | 27.08 / 25.10 / 0.68 | 27.69 / 21.80 / 0.48
MFKM1-ENNS | 25.91 / 18.90 / 0.44 | 27.16 / 20.10 / 0.52 | 27.71 / 18.05 / 0.43
FKM2 | 25.92 / 17.50 / 0.94 | 27.32 / 18.90 / 1.09 | 27.70 / 15.60 / 1.04
MFKM2 | 25.96 / 13.80 / 0.85 | 27.40 / 16.10 / 1.01 | 27.73 / 13.10 / 0.89
FKM2-PDS | 25.92 / 17.45 / 0.87 | 27.32 / 18.90 / 1.03 | 27.70 / 15.55 / 0.96
MFKM2-PDS | 25.96 / 13.80 / 0.83 | 27.40 / 16.10 / 0.98 | 27.73 / 13.10 / 0.93
FKM2-ENNS | 25.92 / 17.50 / 0.84 | 27.32 / 18.90 / 0.94 | 27.70 / 15.60 / 0.83
MFKM2-ENNS | 25.96 / 13.80 / 0.70 | 27.40 / 16.10 / 0.92 | 27.73 / 13.10 / 0.75
Table 6. PSNR (in dB), number of iterations and codebook design time (in seconds) for images Lena, Barbara and Elaine, using N = 128.
Algorithm | Lena: PSNR / Iter / Time | Barbara: PSNR / Iter / Time | Elaine: PSNR / Iter / Time
KM | 28.83 / 18.10 / 0.51 | 26.68 / 14.95 / 0.45 | 30.27 / 16.30 / 0.51
FKM | 28.91 / 27.60 / 22.31 | 26.61 / 20.35 / 13.45 | 30.40 / 26.15 / 18.47
MFKM | 28.95 / 21.25 / 16.32 | 26.64 / 16.70 / 11.57 | 30.44 / 19.50 / 13.11
FKM1 | 28.73 / 22.20 / 1.10 | 26.74 / 20.80 / 1.06 | 30.17 / 23.10 / 1.13
MFKM1 | 28.92 / 17.55 / 0.91 | 26.81 / 16.05 / 0.85 | 30.30 / 18.55 / 0.91
FKM1-PDS | 28.73 / 22.25 / 0.96 | 26.74 / 20.90 / 0.96 | 30.17 / 23.10 / 1.05
MFKM1-PDS | 28.92 / 17.55 / 0.81 | 26.81 / 15.95 / 0.75 | 30.30 / 18.55 / 0.82
FKM1-ENNS | 28.73 / 22.20 / 0.79 | 26.74 / 20.80 / 0.77 | 30.17 / 23.10 / 0.77
MFKM1-ENNS | 28.92 / 17.55 / 0.68 | 26.81 / 16.05 / 0.63 | 30.30 / 18.55 / 0.68
FKM2 | 28.97 / 14.45 / 1.97 | 26.74 / 14.30 / 1.60 | 30.34 / 14.35 / 1.83
MFKM2 | 29.07 / 12.55 / 1.76 | 26.79 / 12.85 / 1.54 | 30.45 / 12.70 / 1.73
FKM2-PDS | 28.97 / 14.45 / 1.76 | 26.74 / 14.30 / 1.55 | 30.34 / 14.35 / 1.72
MFKM2-PDS | 29.07 / 12.55 / 1.65 | 26.79 / 12.85 / 1.48 | 30.45 / 12.70 / 1.66
FKM2-ENNS | 28.97 / 14.45 / 1.60 | 26.74 / 14.30 / 1.47 | 30.34 / 14.30 / 1.59
MFKM2-ENNS | 29.07 / 12.55 / 1.56 | 26.79 / 12.85 / 1.39 | 30.45 / 12.70 / 1.57
Table 7. PSNR (in dB), number of iterations and codebook design time (in seconds) for images Boat, Clock and Goldhill, using N = 128.
Algorithm | Boat: PSNR / Iter / Time | Clock: PSNR / Iter / Time | Goldhill: PSNR / Iter / Time
KM | 26.90 / 17.80 / 0.53 | 28.28 / 16.60 / 0.65 | 28.67 / 15.05 / 0.41
FKM | 26.91 / 26.85 / 25.38 | 28.48 / 31.40 / 39.61 | 28.66 / 20.30 / 13.51
MFKM | 26.94 / 20.70 / 22.56 | 28.55 / 26.05 / 36.47 | 28.69 / 15.35 / 11.02
FKM1 | 26.59 / 24.15 / 1.22 | 28.04 / 20.50 / 1.36 | 28.69 / 20.60 / 1.15
MFKM1 | 26.70 / 17.20 / 0.98 | 28.24 / 17.50 / 1.13 | 28.77 / 16.30 / 0.99
FKM1-PDS | 26.59 / 24.15 / 1.17 | 28.04 / 20.35 / 1.26 | 28.69 / 20.75 / 1.02
MFKM1-PDS | 26.70 / 17.20 / 0.93 | 28.24 / 17.50 / 1.03 | 28.77 / 16.30 / 0.95
FKM1-ENNS | 26.59 / 24.15 / 0.90 | 28.04 / 20.50 / 0.85 | 28.69 / 20.60 / 0.90
MFKM1-ENNS | 26.70 / 17.20 / 0.72 | 28.24 / 17.50 / 0.73 | 28.77 / 16.30 / 0.83
FKM2 | 26.97 / 16.25 / 2.04 | 28.28 / 14.40 / 2.56 | 28.69 / 14.30 / 1.80
MFKM2 | 27.07 / 14.15 / 1.85 | 28.40 / 13.15 / 2.48 | 28.75 / 12.95 / 1.75
FKM2-PDS | 26.97 / 16.25 / 1.97 | 28.28 / 14.40 / 2.47 | 28.69 / 14.30 / 1.76
MFKM2-PDS | 27.07 / 14.15 / 1.76 | 28.40 / 13.15 / 2.45 | 28.75 / 12.95 / 1.70
FKM2-ENNS | 26.97 / 16.25 / 1.77 | 28.28 / 14.40 / 2.32 | 28.69 / 14.35 / 1.64
MFKM2-ENNS | 27.07 / 14.15 / 1.68 | 28.40 / 13.15 / 2.30 | 28.75 / 12.95 / 1.52
Table 8. PSNR (in dB), number of iterations and codebook design time (in seconds) for images Lena, Barbara and Elaine, using N = 256.
Algorithm | Lena: PSNR / Iter / Time | Barbara: PSNR / Iter / Time | Elaine: PSNR / Iter / Time
KM | 29.89 / 14.70 / 0.62 | 27.76 / 13.60 / 0.58 | 31.46 / 14.40 / 0.66
FKM | 30.21 / 38.20 / 90.19 | 27.74 / 25.25 / 66.32 | 31.80 / 31.65 / 79.91
MFKM | 30.24 / 27.10 / 73.28 | 27.76 / 20.00 / 57.34 | 31.87 / 24.30 / 75.85
FKM1 | 29.74 / 21.80 / 1.99 | 27.78 / 18.40 / 1.89 | 31.16 / 20.65 / 1.97
MFKM1 | 30.13 / 16.20 / 1.78 | 27.97 / 15.15 / 1.71 | 31.53 / 17.10 / 1.75
FKM1-PDS | 29.74 / 21.80 / 1.78 | 27.78 / 18.40 / 1.74 | 31.16 / 20.65 / 1.73
MFKM1-PDS | 30.13 / 16.20 / 1.56 | 27.97 / 15.15 / 1.59 | 31.53 / 17.10 / 1.52
FKM1-ENNS | 29.74 / 21.75 / 1.40 | 27.78 / 18.50 / 1.41 | 31.16 / 20.75 / 1.46
MFKM1-ENNS | 30.13 / 16.20 / 1.36 | 27.97 / 15.15 / 1.40 | 31.53 / 17.10 / 1.38
FKM2 | 30.23 / 14.10 / 5.04 | 27.88 / 13.35 / 5.10 | 31.65 / 13.10 / 5.32
MFKM2 | 30.43 / 12.75 / 5.17 | 28.00 / 12.05 / 5.04 | 31.77 / 12.25 / 5.79
FKM2-PDS | 30.23 / 14.10 / 4.92 | 27.88 / 13.35 / 4.81 | 31.65 / 13.10 / 5.16
MFKM2-PDS | 30.43 / 12.75 / 5.22 | 28.00 / 12.05 / 4.65 | 31.77 / 12.25 / 5.42
FKM2-ENNS | 30.23 / 14.10 / 4.63 | 27.88 / 13.30 / 4.65 | 31.65 / 13.10 / 4.92
MFKM2-ENNS | 30.43 / 12.75 / 4.94 | 28.00 / 12.05 / 4.58 | 31.77 / 12.25 / 5.37
Table 9. PSNR (in dB), number of iterations and codebook design time (in seconds) for images Boat, Clock and Goldhill, using N = 256.
Algorithm | Boat: PSNR / Iter / Time | Clock: PSNR / Iter / Time | Goldhill: PSNR / Iter / Time
KM | 27.91 / 13.30 / 0.63 | 29.47 / 13.70 / 0.65 | 29.73 / 13.30 / 0.61
FKM | 28.04 / 32.05 / 84.26 | 29.82 / 35.65 / 81.15 | 29.83 / 24.70 / 63.14
MFKM | 28.08 / 23.65 / 70.18 | 29.85 / 26.00 / 75.34 | 29.86 / 18.50 / 54.12
FKM1 | 27.57 / 24.05 / 2.44 | 29.09 / 19.75 / 1.81 | 29.68 / 19.30 / 2.12
MFKM1 | 27.87 / 17.55 / 1.93 | 29.41 / 16.10 / 1.59 | 29.90 / 15.80 / 1.88
FKM1-PDS | 27.57 / 24.05 / 2.25 | 29.09 / 19.75 / 1.64 | 29.68 / 19.30 / 1.96
MFKM1-PDS | 27.87 / 17.55 / 1.79 | 29.41 / 16.10 / 1.42 | 29.90 / 15.80 / 1.76
FKM1-ENNS | 27.57 / 23.95 / 1.77 | 29.09 / 19.70 / 1.29 | 29.68 / 19.35 / 1.51
MFKM1-ENNS | 27.87 / 17.55 / 1.43 | 29.41 / 16.10 / 1.12 | 29.90 / 15.80 / 1.42
FKM2 | 28.05 / 13.40 / 5.29 | 29.56 / 12.75 / 5.10 | 29.80 / 12.10 / 5.12
MFKM2 | 28.22 / 11.85 / 5.53 | 29.75 / 12.40 / 5.35 | 29.92 / 11.80 / 5.62
FKM2-PDS | 28.05 / 13.40 / 5.09 | 29.56 / 12.75 / 4.82 | 29.80 / 12.10 / 5.05
MFKM2-PDS | 28.22 / 11.85 / 5.15 | 29.75 / 12.40 / 5.12 | 29.92 / 11.80 / 5.34
FKM2-ENNS | 28.05 / 13.40 / 4.76 | 29.56 / 12.75 / 4.52 | 29.80 / 12.10 / 4.83
MFKM2-ENNS | 28.22 / 11.85 / 5.06 | 29.75 / 12.40 / 4.50 | 29.92 / 11.80 / 5.21
Table 10. SSIM for images Lena, Barbara, Elaine, Boat, Clock, Goldhill and P-M-T, using N = 32.
Algorithm | Lena | Barbara | Elaine | Boat | Clock | Goldhill | P-M-T
KM | 0.7790 | 0.6800 | 0.7637 | 0.7081 | 0.8373 | 0.7078 | 0.7492
FKM | 0.7838 | 0.6809 | 0.7687 | 0.7118 | 0.8447 | 0.7105 | 0.7501
MFKM | 0.7840 | 0.6807 | 0.7688 | 0.7120 | 0.8457 | 0.7111 | 0.7496
FKM1 | 0.7816 | 0.6807 | 0.7678 | 0.7109 | 0.8381 | 0.7105 | 0.7481
MFKM1 | 0.7813 | 0.6813 | 0.7674 | 0.7095 | 0.8386 | 0.7110 | 0.7502
FKM1-PDS | 0.7816 | 0.6807 | 0.7678 | 0.7109 | 0.8381 | 0.7105 | 0.7481
MFKM1-PDS | 0.7813 | 0.6813 | 0.7674 | 0.7095 | 0.8386 | 0.7110 | 0.7502
FKM1-ENNS | 0.7816 | 0.6807 | 0.7678 | 0.7109 | 0.8381 | 0.7105 | 0.7481
MFKM1-ENNS | 0.7813 | 0.6813 | 0.7674 | 0.7095 | 0.8386 | 0.7110 | 0.7502
FKM2 | 0.7731 | 0.6787 | 0.7617 | 0.7083 | 0.8383 | 0.7083 | 0.7483
MFKM2 | 0.7736 | 0.6793 | 0.7622 | 0.7075 | 0.8395 | 0.7087 | 0.7490
FKM2-PDS | 0.7731 | 0.6787 | 0.7617 | 0.7083 | 0.8383 | 0.7083 | 0.7483
MFKM2-PDS | 0.7736 | 0.6793 | 0.7622 | 0.7075 | 0.8395 | 0.7087 | 0.7490
FKM2-ENNS | 0.7731 | 0.6787 | 0.7617 | 0.7083 | 0.8383 | 0.7083 | 0.7483
MFKM2-ENNS | 0.7736 | 0.6793 | 0.7622 | 0.7075 | 0.8395 | 0.7087 | 0.7490
Table 11. SSIM for images Lena, Barbara, Elaine, Boat, Clock, Goldhill and P-M-T, using N = 64.
Algorithm | Lena | Barbara | Elaine | Boat | Clock | Goldhill | P-M-T
KM | 0.8225 | 0.7323 | 0.8094 | 0.7652 | 0.8667 | 0.7613 | 0.7897
FKM | 0.8260 | 0.7325 | 0.8136 | 0.7657 | 0.8749 | 0.7628 | 0.7902
MFKM | 0.8261 | 0.7318 | 0.8137 | 0.7653 | 0.8756 | 0.7631 | 0.7910
FKM1 | 0.8228 | 0.7355 | 0.8105 | 0.7655 | 0.8656 | 0.7629 | 0.7900
MFKM1 | 0.8224 | 0.7351 | 0.8096 | 0.7643 | 0.8661 | 0.7622 | 0.7902
FKM1-PDS | 0.8228 | 0.7355 | 0.8105 | 0.7655 | 0.8657 | 0.7629 | 0.7900
MFKM1-PDS | 0.8224 | 0.7351 | 0.8096 | 0.7643 | 0.8661 | 0.7622 | 0.7902
FKM1-ENNS | 0.8228 | 0.7355 | 0.8105 | 0.7655 | 0.8656 | 0.7629 | 0.7900
MFKM1-ENNS | 0.8224 | 0.7351 | 0.8096 | 0.7643 | 0.8661 | 0.7622 | 0.7902
FKM2 | 0.8177 | 0.7312 | 0.8031 | 0.7646 | 0.8680 | 0.7605 | 0.7900
MFKM2 | 0.8179 | 0.7316 | 0.8032 | 0.7645 | 0.8692 | 0.7610 | 0.7897
FKM2-PDS | 0.8177 | 0.7312 | 0.8031 | 0.7646 | 0.8680 | 0.7604 | 0.7900
MFKM2-PDS | 0.8179 | 0.7316 | 0.8032 | 0.7645 | 0.8692 | 0.7610 | 0.7897
FKM2-ENNS | 0.8177 | 0.7312 | 0.8031 | 0.7646 | 0.8680 | 0.7605 | 0.7900
MFKM2-ENNS | 0.8179 | 0.7316 | 0.8032 | 0.7645 | 0.8692 | 0.7610 | 0.7897
Table 12. SSIM for images Lena, Barbara, Elaine, Boat, Clock, Goldhill and P-M-T, using N = 128.
Algorithm | Lena | Barbara | Elaine | Boat | Clock | Goldhill | P-M-T
KM | 0.8583 | 0.7863 | 0.8487 | 0.8141 | 0.8941 | 0.8050 | 0.8232
FKM | 0.8617 | 0.7864 | 0.8523 | 0.8143 | 0.9006 | 0.8066 | 0.8219
MFKM | 0.8616 | 0.7864 | 0.8523 | 0.8138 | 0.9014 | 0.8063 | 0.8224
FKM1 | 0.8579 | 0.7881 | 0.8481 | 0.8035 | 0.8876 | 0.8076 | 0.8231
MFKM1 | 0.8575 | 0.7855 | 0.8475 | 0.8024 | 0.8889 | 0.8062 | 0.8233
FKM1-PDS | 0.8579 | 0.7881 | 0.8481 | 0.8035 | 0.8876 | 0.8076 | 0.8231
MFKM1-PDS | 0.8575 | 0.7856 | 0.8475 | 0.8024 | 0.8889 | 0.8062 | 0.8233
FKM1-ENNS | 0.8579 | 0.7881 | 0.8481 | 0.8035 | 0.8876 | 0.8076 | 0.8231
MFKM1-ENNS | 0.8575 | 0.7855 | 0.8475 | 0.8024 | 0.8889 | 0.8062 | 0.8233
FKM2 | 0.8518 | 0.7823 | 0.8387 | 0.8124 | 0.8899 | 0.8041 | 0.8236
MFKM2 | 0.8520 | 0.7833 | 0.8394 | 0.8128 | 0.8906 | 0.8048 | 0.8244
FKM2-PDS | 0.8518 | 0.7823 | 0.8387 | 0.8124 | 0.8899 | 0.8041 | 0.8236
MFKM2-PDS | 0.8520 | 0.7833 | 0.8394 | 0.8128 | 0.8906 | 0.8048 | 0.8244
FKM2-ENNS | 0.8518 | 0.7823 | 0.8387 | 0.8124 | 0.8899 | 0.8041 | 0.8236
MFKM2-ENNS | 0.8520 | 0.7833 | 0.8394 | 0.8128 | 0.8906 | 0.8048 | 0.8244
Table 13. SSIM for images Lena, Barbara, Elaine, Boat, Clock, Goldhill and P-M-T, using N = 256.
Algorithm | Lena | Barbara | Elaine | Boat | Clock | Goldhill | P-M-T
KM | 0.8893 | 0.8351 | 0.8808 | 0.8514 | 0.9173 | 0.8450 | 0.8534
FKM | 0.8935 | 0.8371 | 0.8843 | 0.8540 | 0.9226 | 0.8478 | 0.8516
MFKM | 0.8935 | 0.8372 | 0.8843 | 0.8539 | 0.9231 | 0.8479 | 0.8518
FKM1 | 0.8875 | 0.8349 | 0.8764 | 0.8478 | 0.9095 | 0.8452 | 0.8530
MFKM1 | 0.8877 | 0.8322 | 0.8761 | 0.8490 | 0.9100 | 0.8442 | 0.8529
FKM1-PDS | 0.8875 | 0.8349 | 0.8764 | 0.8478 | 0.9095 | 0.8452 | 0.8530
MFKM1-PDS | 0.8877 | 0.8322 | 0.8761 | 0.8490 | 0.9100 | 0.8442 | 0.8529
FKM1-ENNS | 0.8875 | 0.8349 | 0.8764 | 0.8478 | 0.9096 | 0.8452 | 0.8530
MFKM1-ENNS | 0.8877 | 0.8322 | 0.8761 | 0.8490 | 0.9100 | 0.8442 | 0.8529
FKM2 | 0.8842 | 0.8333 | 0.8690 | 0.8520 | 0.9145 | 0.8450 | 0.8550
MFKM2 | 0.8852 | 0.8339 | 0.8696 | 0.8527 | 0.9155 | 0.8460 | 0.8553
FKM2-PDS | 0.8842 | 0.8333 | 0.8690 | 0.8520 | 0.9145 | 0.8450 | 0.8550
MFKM2-PDS | 0.8852 | 0.8339 | 0.8696 | 0.8527 | 0.9155 | 0.8460 | 0.8553
FKM2-ENNS | 0.8842 | 0.8333 | 0.8690 | 0.8519 | 0.9145 | 0.8450 | 0.8550
MFKM2-ENNS | 0.8852 | 0.8339 | 0.8696 | 0.8527 | 0.9155 | 0.8460 | 0.8553
Table 14. PSNR (in dB) and SSIM of reconstructed images by using codebooks designed with the training set P-M-T, using MFKM2-ENNS.
Images | N = 32: PSNR / SSIM | N = 64: PSNR / SSIM | N = 128: PSNR / SSIM | N = 256: PSNR / SSIM
Lena | 25.62 / 0.7211 | 26.34 / 0.7604 | 26.91 / 0.7816 | 27.50 / 0.8133
Barbara | 24.09 / 0.6350 | 24.66 / 0.6679 | 25.19 / 0.6982 | 25.68 / 0.7293
Elaine | 26.62 / 0.7223 | 27.51 / 0.7626 | 28.11 / 0.7848 | 28.88 / 0.8134
Boat | 24.16 / 0.6575 | 24.88 / 0.7038 | 25.31 / 0.7259 | 25.89 / 0.7633
Clock | 25.21 / 0.7991 | 26.05 / 0.8207 | 26.81 / 0.8470 | 27.32 / 0.8618
Goldhill | 25.71 / 0.6391 | 26.34 / 0.6788 | 26.92 / 0.7132 | 27.45 / 0.7435
Tiffany | 28.21 / 0.7493 | 29.10 / 0.7917 | 30.40 / 0.8365 | 31.38 / 0.8647
Table 15. PSNR (in dB) and SSIM of reconstructed images. Codebooks were designed using MFKM2-ENNS in spatial domain as well as by the DWT domain for code rate 0.3125 bpp.
Images | Spatial Domain VQ (Inside the Training Set): PSNR / SSIM | Spatial Domain VQ (P-M-T Training Set): PSNR / SSIM | DWT VQ (Multiresolution Codebooks, P-M-T Training Set): PSNR / SSIM
Lena | 26.72 / 0.7791 | 25.62 / 0.7211 | 29.35 / 0.8367
Barbara | 24.78 / 0.6822 | 24.09 / 0.6350 | 25.00 / 0.7573
Elaine | 27.79 / 0.7566 | 26.62 / 0.7223 | 29.72 / 0.8304
Boat | 24.90 / 0.7047 | 24.16 / 0.6575 | 25.49 / 0.7581
Clock | 26.27 / 0.8364 | 25.21 / 0.7991 | 28.22 / 0.8672
Goldhill | 26.76 / 0.7085 | 25.71 / 0.6391 | 26.81 / 0.7640
Tiffany | 29.01 / 0.8078 | 28.21 / 0.7493 | 30.21 / 0.8099
