An Efficient Codebook Search Algorithm for Line Spectrum Frequency (LSF) Vector Quantization in Speech Codec

Abstract: A high-performance vector quantization (VQ) codebook search algorithm is proposed in this paper. VQ is an important data compression technique that has been widely applied to speech, image, and video compression. However, the codebook search demands a high computational load. To solve this issue, a novel algorithm consisting of a training procedure and an encoding procedure is proposed. In the training procedure, a training speech dataset is used to build a squared-error distortion look-up table for each subspace. In the encoding procedure, an input vector is first quickly assigned to a search subspace. Second, the candidate code word group is obtained by employing the triangular inequality elimination (TIE) equation. Finally, a partial distortion elimination technique is employed to reduce the number of multiplications. The proposed method reduces the number of searches and the computational load significantly, especially when the input vectors are uncorrelated. The experimental results show that the proposed algorithm provides a computational saving (CS) of up to 85% relative to the full search algorithm, up to 76% relative to the TIE algorithm, and up to 63% relative to the iterative TIE algorithm. Further, the proposed method provides a CS of 29-33% and a load reduction of 67-69% over the BSS-ITIE algorithm.


Introduction
Vector quantization (VQ) is a high-performance technique for data compression. Due to its simple coding and high compression ratio, it has been successfully applied to speech, image, audio, and video compression [1][2][3][4][5][6]. It is also a key part of the G.729 Recommendation. A major drawback, however, is the remarkable computational load required for the VQ of the line spectrum frequency (LSF) coefficients of the speech codec [7][8][9][10][11][12]. Thus, it is necessary to reduce the computational load of VQ.
In the TSVQ approaches [13][14][15][16][17], the search complexity can be reduced significantly. However, the reconstructed speech quality is poor because the selected code word is not necessarily the best match to the input vector. In contrast, the TIE-based methods [18][19][20][21] can achieve an enormous reduction in computational load without loss of speech quality. By exploiting the high correlation between adjacent frames, a TIE method rejects the impossible candidate code words. However, the load reduction weakens when the input vectors are uncorrelated. The EEENNS technique employs the statistical characteristics of an input vector, such as the mean, the variance, and the norm, to reject impossible code words. To further reduce the computational load, the BSS-VQ method [25] was proposed, in which the number of candidate code words is closely related to the distribution of the hit probability: a sharpened distribution at specific code words yields an enormous reduction in computational load. However, the probability distribution is not uniform; some subspaces are concentrated and some are flattened, so the performance sometimes degrades.
To solve the above issues, an efficient VQ codebook search algorithm is proposed. Even when the input vectors are uncorrelated, it can still reduce the computational load significantly while maintaining the quantization accuracy. Compared with previous works, namely the full search algorithm [6], TIE [18], ITIE [21], and BSS-ITIE [28], the experimental results show that the proposed algorithm quantizes the input vector with the lowest computational load.
The rest of this paper is organized as follows. Section 2 describes the encoding procedure of the LSF coefficients in G.729. Section 3 presents the theory of the proposed algorithm in detail. Section 4 shows the experimental results and performance comparison between the proposed algorithm and previous works. Finally, Section 5 concludes this work.

LSF Coefficients Quantization in G.729
The ITU-T G.729 [29] speech codec was selected as the platform to verify the performance of the proposed algorithm. Thus, before introducing the theory of the proposed method, the principle of the LSF quantization in G.729 is introduced here.
The LSF coefficients are obtained by the equation ω_i = arccos(q_i), where q_i is the LSF coefficient in the cosine domain and ω_i is the computed LSF coefficient in the frequency domain.
The LSF quantization procedure is organized as follows: a switched fourth-order moving average (MA) prediction is used to predict the LSF coefficients of the current frame. The difference between the computed and predicted coefficients is quantized using a two-stage vector quantizer. The first stage is a 10-dimensional VQ using codebook L1 with 128 entries (7 bits). In the second stage, the quantization error vector of the first stage is split into two sub-vectors, which are then quantized by a split VQ using two codebooks, L2 and L3, each containing 32 entries (5 bits). Figure 1 illustrates the structure of the two-stage VQ for the LSF coefficients.
To explain the quantization process, it is convenient to describe the decoding process first. Each quantized value is obtained from the sum of two code words, as follows:

l̂_i = L1_i(l1) + L2_i(l2), i = 1, ..., 5,
l̂_i = L1_i(l1) + L3_{i−5}(l3), i = 6, ..., 10,

where l1, l2, and l3 are the codebook indices. To guarantee that the reconstructed filters are stable, the vector l̂_i is rearranged such that adjacent elements have a minimum distance of d_min. This rearrangement process is done twice. Then, the quantized LSF coefficients, ω̂_i^(m), for the current frame, m, are obtained from the weighted sum of the previous quantizer outputs, l̂_i^(m−k), and the current quantizer output, l̂_i^(m):

ω̂_i^(m) = (1 − Σ_{k=1}^{4} p̂_{i,k}) l̂_i^(m) + Σ_{k=1}^{4} p̂_{i,k} l̂_i^(m−k),
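The minimum-distance rearrangement described above can be sketched as follows. This is a minimal Python sketch; the symmetric push-apart rule and the sample d_min value are illustrative assumptions rather than the exact G.729 routine.

```python
def rearrange(lsf, d_min):
    """Push adjacent LSF coefficients apart so that each consecutive
    pair is at least d_min apart (one common form of the stability
    rearrangement; the exact G.729 rule may differ)."""
    out = list(lsf)
    for i in range(1, len(out)):
        if out[i] - out[i - 1] < d_min:
            avg = (out[i] + out[i - 1]) / 2.0
            out[i - 1] = avg - d_min / 2.0
            out[i] = avg + d_min / 2.0
    return out

# Example: the first two coefficients are closer than d_min = 0.05
# and get pushed apart symmetrically about their midpoint 0.11.
stable = rearrange([0.10, 0.12, 0.40], 0.05)
```

Running the rearrangement twice, as the text specifies, simply means calling `rearrange` again on its own output.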
where p̂_{i,k} are the coefficients of the switched MA predictor selected by the parameter p0.
For each of the two MA predictors, the best approximation to the current LSF coefficients has to be found. The best approximation is defined as the one that minimizes the weighted mean-squared error (MSE), E = Σ_{i=1}^{10} w_i (ω_i − ω̂_i^(m))^2.
The weights emphasize the relative importance of each LSF coefficient. The weights, w_i, are made adaptive as a function of the unquantized LSF coefficients. Then, the weights w_5 and w_6 are each multiplied by 1.2. The vector to be quantized for the current frame, m, is obtained from Equation (7).
The first codebook, L1, is searched, and the entry, l1, that minimizes the unweighted MSE is selected. Then the second codebook, L2, is searched by computing the weighted MSE, and the entry, l2, that yields the lowest error is selected. After selecting the first-stage vector, l1, and the lower part of the second stage, l2, the higher part of the second stage is searched in the codebook L3, and the vector, l3, that minimizes the weighted MSE is selected. The resulting vector, l̂_i, i = 1, ..., 10, is rearranged twice using the above procedure. This procedure is done for each of the two MA predictors defined by p0, and the MA predictor that produces the lowest weighted MSE is selected. Table 1 shows the two groups of coefficients of the MA predictor; when the first group is selected, p0 = 0, otherwise p0 = 1. Table 2 shows the bit allocation of the LSF quantizer.
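The two-stage split search described above can be sketched as follows. This is a minimal Python sketch with randomly generated stand-in codebooks; the weighting and rearrangement details of the actual G.729 routine are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
L1 = rng.normal(size=(128, 10))   # stage-1 codebook (7 bits)
L2 = rng.normal(size=(32, 5))     # stage-2 lower-half codebook (5 bits)
L3 = rng.normal(size=(32, 5))     # stage-2 upper-half codebook (5 bits)

def two_stage_search(x, w):
    """Two-stage split VQ search: the stage-1 entry minimizes the
    unweighted MSE, and the two stage-2 halves minimize the weighted
    MSE on the stage-1 residual."""
    l1 = int(np.argmin(((L1 - x) ** 2).sum(axis=1)))            # unweighted MSE
    r = x - L1[l1]                                              # stage-1 residual
    l2 = int(np.argmin((w[:5] * (L2 - r[:5]) ** 2).sum(axis=1)))  # lower half
    l3 = int(np.argmin((w[5:] * (L3 - r[5:]) ** 2).sum(axis=1)))  # upper half
    return l1, l2, l3

x = rng.normal(size=10)   # stand-in for the prediction-residual vector
w = np.ones(10)           # stand-in for the adaptive weights
l1, l2, l3 = two_stage_search(x, w)
```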

Proposed Search Algorithm
In this paper, a fast codebook search algorithm for vector quantization is proposed to reduce the computational load of the LSF coefficient quantization in the G.729 speech codec. Even when the input vectors are uncorrelated, it can still reduce the computational load significantly while maintaining the quantization accuracy.
In the BSS-VQ method [25], the number of candidate code words is strongly related to the distribution of the hit probability in each subspace, and a sharpened distribution at specific code words yields an enormous computational load reduction while maintaining the quantization accuracy. However, the probability distribution is not uniform: some subspaces are concentrated and some are flattened, so its performance is sometimes poor. The TIE algorithm [18] uses the strong correlation between adjacent frames to narrow the search range; however, its performance is poor when the inputs are uncorrelated.
Considering the drawbacks of the above methods, a novel algorithm is proposed to further reduce the computational load, especially when the inputs are uncorrelated. After a statistical analysis, the code word with the highest hit probability in each subspace is selected as the reference code word, and a squared-error distortion look-up table is built for each subspace. Because the squared-error distortion changes only slightly between adjacent entries of this table, adjacent code words are highly correlated. The reference code word, having the highest hit probability, is the most likely best match; thus, the smaller the squared-error distortion between a candidate code word and the reference, the more likely that candidate is the best match. Therefore, the TIE technique can be employed to reject impossible code words. The proposed algorithm consists of a training procedure and an encoding procedure. Its structure is illustrated in Figure 2, and its theory is presented in detail as follows.

Figure 2. Structure of the proposed algorithm: a fast locating technique assigns the input vector to a subspace, and the code word corresponding to the highest hit probability is obtained as the reference.


Training Procedure
In the three-stage training procedure, the squared-error look-up table for each subspace is prebuilt. At stage 1, each dimension is dichotomized into two subspaces, and an input vector is assigned to a corresponding subspace according to its entries [25]. For instance, when the input is a 10-dimensional vector, there are 2^10 = 1024 subspaces in G.729. Before the encoding procedure, a look-up table that contains the hit probability statistics of the code words is prebuilt.
The dichotomy position for each dimension is defined as the mean of all the code words in the codebook:

mean(k) = (1/CSize) Σ_{i=1}^{CSize} c_i(k), k = 1, ..., Dim, (8)

where c_i(k) is the k-th component of the i-th code word, c_i, and mean(k) is the average value of all the k-th components. For instance, in the first stage of the LSF coefficient quantization in G.729, CSize = 128 and Dim = 10 for the codebook L1. A parameter, v_n(k), is defined for the vector quantization of the n-th input vector, x_n:

v_n(k) = 0 if x_n(k) ≤ mean(k); v_n(k) = 1 otherwise, (9)

where x_n(k) is the k-th component of x_n. Then, x_n is assigned to the subspace bss_j, where j is obtained from the binary pattern v_n(k) over all the dimensions:

j = Σ_{k=1}^{Dim} v_n(k) · 2^(k−1). (10)

For example, when the entries v_n(k) computed by (8) and (9) yield j = 217, the input vector x_n is assigned to the subspace bss_217. This means that an input vector can be assigned to a corresponding subspace quickly, with only a few basic operations.
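The fast subspace assignment above can be sketched as follows. This is a minimal Python sketch with a random stand-in codebook; the 2^(k−1) bit weighting is an assumption consistent with there being 2^Dim subspaces.

```python
import numpy as np

rng = np.random.default_rng(1)
codebook = rng.normal(size=(128, 10))   # CSize = 128, Dim = 10 (stand-in)

# Dichotomy position: per-dimension mean over all code words.
mean = codebook.mean(axis=0)

def subspace_index(x):
    """Binarize each component against the per-dimension mean and pack
    the Dim bits into a single subspace index in 0 .. 2**Dim - 1."""
    v = (x > mean).astype(int)                          # v_n(k)
    return int((v * (1 << np.arange(len(v)))).sum())    # j

j = subspace_index(rng.normal(size=10))
```

Only Dim comparisons and a handful of integer operations are needed per input vector, matching the claim of "a few basic operations".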
At stage 2, the hit probability table for each subspace is prebuilt through a training mechanism; it records the probability of each code word being the best-matched code word in each subspace, denoted P_hit(c_i | bss_j), where 1 ≤ i ≤ CSize, 1 ≤ j ≤ Snum, and Snum = 2^Dim is the number of subspaces. Subsequently, the table is sorted in descending order of the hit probability; for example, when i = 1, P_hit(c_1 | bss_j) = max_i P_hit(c_i | bss_j) represents the highest hit probability in bss_j and its corresponding code word. The sum of all the hit probabilities in each subspace is 1.0. The hit probability is computed by Equation (11), and the cumulative probability of the top N code words is given by Equation (12):

P_hit(c_i | bss_j) = (number of times code word c_i is the best match in subspace bss_j) / (total number of times any code word is the best match in subspace bss_j), (11)

P_cmu(N | bss_j) = Σ_{i=1}^{N} P_hit(c_i | bss_j), (12)

where N represents the number of possible candidate code words. Further, to control the trade-off between quantization accuracy and computational load, a threshold of quantization accuracy (TQA) is defined. The quantity N_j(TQA), the minimum number N that satisfies P_cmu(N | bss_j) ≥ TQA in subspace bss_j, is expressed as:

N_j(TQA) = min{ N : P_cmu(N | bss_j) ≥ TQA }. (13)

At stage 3, the code word with the highest hit probability in each subspace is selected as the reference code word, c_r; then the squared-error distortion between the reference code word and every other code word, c_i, in the codebook is calculated by (14):

d(c_r, c_i) = Σ_{k=1}^{Dim} (c_r(k) − c_i(k))^2. (14)
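The hit-probability table and the TQA cutoff can be sketched as follows. This is a minimal Python sketch on toy data; the function names are illustrative, not from the paper.

```python
from collections import Counter

def build_hit_table(best_match_log):
    """best_match_log: (subspace_index, best_code_word_index) pairs
    collected on training data.  Returns, per subspace, (code word,
    hit probability) pairs sorted by descending probability."""
    per_subspace = {}
    for j, i in best_match_log:
        per_subspace.setdefault(j, []).append(i)
    table = {}
    for j, hits in per_subspace.items():
        counts = Counter(hits)
        total = len(hits)
        table[j] = sorted(((i, c / total) for i, c in counts.items()),
                          key=lambda t: -t[1])
    return table

def n_of_tqa(table_j, tqa):
    """Smallest N whose cumulative hit probability reaches TQA."""
    cum = 0.0
    for n, (_, p) in enumerate(table_j, start=1):
        cum += p
        if cum >= tqa:
            return n
    return len(table_j)

# Toy training log: in subspace 5, code word 7 wins 3 times, 2 wins once.
table = build_hit_table([(5, 7), (5, 7), (5, 7), (5, 2)])
n99 = n_of_tqa(table[5], 0.99)   # both code words are needed
```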
Subsequently, the squared-error distortion look-up table for each subspace is built and sorted in ascending order. Algorithm 1 shows the pseudo-code of the training procedure.

Algorithm 1. Training procedure of the proposed algorithm
Step 1. Give a training speech dataset and a value of TQA.
Step 2. Assign the input vector to a corresponding subspace, bss_j, by (9) and (10).
Step 3. Repeat Step 2 until all the input vectors are encoded.
Step 4. Obtain the hit probability table by (11) and (12) and sort it in descending order.
Step 5. Select the code word with the highest hit probability as the reference code word, and calculate the squared-error distortion between the reference code word and the other code words, c_i, by (14).
Step 6. Build the squared-error distortion look-up table for each subspace and sort it in ascending order.
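The final training stage, the ascending squared-error look-up table, can be sketched as follows (a minimal Python sketch; the function name and toy codebook are illustrative):

```python
import numpy as np

def distortion_table(codebook, ref_index):
    """Squared-error distortion between the reference code word and
    every code word, returned as (index, distortion) pairs sorted in
    ascending order; the reference itself comes first with 0."""
    ref = codebook[ref_index]
    d = ((codebook - ref) ** 2).sum(axis=1)
    order = np.argsort(d)
    return [(int(i), float(d[i])) for i in order]

# Toy 2-D codebook: code word 0 is the reference.
cb = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 0.0]])
tbl = distortion_table(cb, ref_index=0)
```

In the proposed algorithm, one such table is built per subspace, keyed by that subspace's reference code word.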

Encoding Procedure
Given a testing speech set and a value of TQA, in step 1, an input vector is assigned to a search subspace by (9) and (10). In step 2, the triangular inequality elimination (TIE) formulation is used to reject impossible candidate code words [18]: if d(c_r, c_i) > 4d(c_r, x), then d(c_i, x) > d(c_r, x), so the computation of d(c_i, x) can be eliminated, where x is the input vector and c_r is the reference code word. The group of all code words, c_i, that satisfy the above condition forms the candidate search group (CSG):

CSG(c_r) = { c_i : d(c_r, c_i) ≤ 4d(c_r, x) }, (15)

and the number of code words in CSG(c_r) is denoted N(c_r).
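The TIE rejection over the ascending distortion table can be sketched as follows. A minimal Python sketch; because the table is sorted in ascending order, the scan can stop at the first entry that violates the bound, which is the efficiency the sorting buys.

```python
def build_csg(sorted_table, d_ref_x):
    """TIE rejection on squared distortions: keep code word c_i only if
    d(c_r, c_i) <= 4 * d(c_r, x).  sorted_table holds (index, d(c_r, c_i))
    pairs in ascending order of distortion, so the first violation ends
    the scan."""
    csg = []
    for i, d_ref_ci in sorted_table:
        if d_ref_ci > 4.0 * d_ref_x:
            break
        csg.append(i)
    return csg

# With the ascending table [(0, 0.0), (2, 1.0), (1, 25.0)] and
# d(c_r, x) = 2.0, only code words 0 and 2 survive, since 25 > 4 * 2.
survivors = build_csg([(0, 0.0), (2, 1.0), (1, 25.0)], 2.0)
```

The factor of 4 is the squared-error form of the triangle inequality: d(c_r, c_i) ≥ 2·||c_r − x|| in Euclidean distance becomes a factor of 2² = 4 on squared distortions.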
In step 3, after CSG(c_r) is obtained, the squared-error distortion between the input vector and each candidate code word is computed. The best-matched code word is the one that minimizes the squared-error distortion between the input vector, x_n, and the candidate code word, c_i.
As described by Bei and Gray [30], a minimum squared-error distortion computation method called partial distortion elimination (PDE) is employed to reduce the number of multiplication operations. This method can decide that the current code word is not the best match before the whole squared-error distortion is calculated. The pseudo-code of the PDE method is shown in Algorithm 2, where CSize, Dim, d_min, and C(i, j) are the codebook size, the dimension of the input vector, the minimum distortion value found so far, and the j-th component of the i-th code word, respectively. With the above description, the encoding procedure of the proposed algorithm can be summarized as Algorithm 3.

Algorithm 2. Pseudo-code of the PDE algorithm (as described above).

Algorithm 3. Encoding procedure of the proposed algorithm
Step 1. Give a testing speech set and a value of TQA.
Step 2. Quickly assign the input vector to a corresponding subspace, bss_j, by (9) and (10).
Step 3. Find the number of candidate code words, N_j(TQA), directly from the prebuilt squared-error distortion look-up table for the subspace.
Step 4. Select the code word with the highest hit probability as the reference, c_r; compute d(c_r, x), and obtain CSG(c_r) and N(c_r) by (15).
Step 5. Starting at k = 1, obtain d(c_r, c_k) directly from the squared-error distortion look-up table.
Step 6. If d(c_r, c_k) < 4d(c_r, x), compute d(c_k, x) by Algorithm 2; set k = k + 1 and repeat from Step 5 until k = N(c_r). The index of the best-matched code word is then obtained.
Step 7. Output the index of the best-matched code word.
Step 8. Repeat Steps 2-7 until all the input vectors are encoded.
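The PDE loop over the candidate group can be sketched as follows. A minimal Python sketch of the technique; variable names are illustrative.

```python
import numpy as np

def pde_search(x, candidates, codebook):
    """Partial distortion elimination: the running partial sum for a
    candidate is abandoned as soon as it reaches the best distortion
    found so far, saving the remaining multiplications."""
    best_i, d_min = -1, float("inf")
    for i in candidates:
        partial = 0.0
        for k in range(len(x)):
            diff = x[k] - codebook[i][k]
            partial += diff * diff
            if partial >= d_min:        # early termination for this candidate
                break
        else:
            best_i, d_min = i, partial  # full sum completed and it is smaller
    return best_i, d_min

# Toy 2-D codebook; candidate 1 is abandoned after its first component.
cb = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 0.0]])
best, dist = pde_search(np.array([0.9, 0.1]), [0, 1, 2], cb)
```

In the proposed algorithm, `candidates` would be the CSG produced by the TIE step, so PDE only runs over code words that survived elimination.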

Experimental Environment
Here, the first-stage quantization procedure of the LSF coefficients in G.729 was selected as the platform to illustrate the performance of the proposed algorithm. The first-stage codebook included 128 code words, each a 10-dimensional vector, and there were 1024 subspaces. The Aurora speech dataset [31] was used as the training and testing speech data: 1001 clean speech files and 1001 noisy speech files from the Test-A set, spoken by 50 male and 50 female speakers, were used in this paper. The testing speech signals were sampled at 8 kHz with a resolution of 16 bits per sample.

Selection of the Training Dataset
As introduced in Section 3, the proposed method includes a training procedure and an encoding procedure. The training procedure provides a squared-error look-up table for the encoding procedure. The selection of the training speech dataset directly affects the application scope of this table and, further, the computational load and quantization accuracy of the subsequent encoding process. Therefore, we discuss the influence of the selection of the training dataset on the application scope of the squared-error look-up table.
The Aurora Test-A set, comprising 1001 clean files and 1001 noisy speech files, was used to train and test the robustness of the proposed algorithm. Here, the performance of the proposed method is compared under three experimental conditions. To evaluate the quantization accuracy, a parameter defined as the error rate (ER) was used:

ER = (number of incorrectly quantized frames) / (total number of input frames).

The average search number is symbolized as ASN, and the reduction of the ASN is presented as the computational saving (CS):

CS = (ASN_1 − ASN_2) / ASN_1,

where ASN_1 and ASN_2 are the average search numbers of the benchmark and the compared method, respectively. The following three experimental results were all obtained with TQA = 0.99.
For the first experimental condition, the 1001 clean files, comprising 350,866 speech frames, were all selected as the training set, and a squared-error look-up table, symbolized as table A, was obtained after the training procedure. Table A was then employed in the encoding process, with clean files and noisy speech files as the respective testing datasets. When 500 clean files comprising 174,630 speech frames were used as the testing dataset, there were 1412 incorrectly quantized frames; thus, ER = 0.81% and ASN = 18.82. When 201 noisy speech files comprising 70,170 frames were used as the testing dataset, there were 2579 incorrectly quantized frames; thus, ER = 3.7% and ASN = 18.9.
For the second experimental condition, the training dataset included 500 clean speech files and 500 noisy speech files, comprising 348,724 frames. When the training dataset was also used as the testing dataset, there were 1062 incorrectly quantized frames; thus, ER = 0.7% and ASN = 18.8. When the other 201 clean speech files, comprising 70,170 frames, were used as the testing data, there were 2038 incorrectly quantized frames; thus, ER = 2.9% and ASN = 19.3. When the other 201 noisy speech files, comprising 70,170 frames, were used as the testing set, there were 1975 incorrectly quantized frames; thus, ER = 2.8% and ASN = 18.9.
For the third experimental condition, the training dataset included 1001 clean files and 1001 noisy speech files, comprising 701,508 frames. When 201 clean speech files were used as the testing set, there were 556 incorrectly quantized frames; thus, ER = 0.79% and ASN = 20. When 201 noisy speech files, comprising 70,170 frames, were used as the testing set, there were 605 incorrectly quantized frames; thus, ER = 0.8% and ASN = 19.9.
Details of the above three experiments and the corresponding results are given in Table 3. Even under the worst training condition (condition 1, where the training set was clean speech and the testing set was noisy speech), ER = 3.7% and ASN = 18.9. Under the best training condition (condition 3), ER = 0.8% and ASN = 20. Comparing the three conditions, the ER ranges from 0.7% to 3.7%, and the ASN ranges from 18.82 to 20. It can be concluded that, when the training set is large enough, the selection of the training set has no significant influence on the encoding process: the prebuilt squared-error look-up table is robust, and the variations of ER and ASN stay within acceptable limits.

Performance of the Proposed Method
Here, experiment 1 is chosen to illustrate the performance of the proposed method. The generation of the squared-error look-up table is a very important part of the proposed algorithm; thus, some intermediate experimental data are extracted as examples to illustrate the creation procedure. Figure 3 illustrates the design procedure of the squared-error look-up table and shows that it significantly reduces the number of candidate code words, and hence the search range.

Figure 3. The squared-error distortion look-up table is designed for each subspace.
The computational load and quantization accuracy of the proposed algorithm are compared with those of the TIE [18], ITIE [21], and BSS-ITIE [28] approaches. With the performance of the full search algorithm as the benchmark, Table 4 compares the ER, ASN, and CS of the proposed algorithm with the TIE [18], ITIE [21], and BSS-ITIE [28] approaches. The experimental results show that the proposed algorithm provided a CS of up to 92% when TQA = 0.90; when TQA = 0.99, it still reduced the computational load by 85%. Compared to the TIE and ITIE methods, the proposed method provided a CS of up to 76% and 63%, respectively, with almost the same quantization accuracy. To further evaluate the reduction of the computational load, the average numbers of basic operations, including additions, multiplications, and comparisons, are compared in Table 5 and illustrated as a bar graph in Figure 4. The multiplication operation was the dominant computation, with the highest complexity. The reduction in the number of multiplications is the load reduction (LR), symbolized as LR = (MulN_1 − MulN_2) / MulN_1, where MulN is the number of multiplications. The proposed algorithm provided an LR of up to 90% relative to the full search algorithm, with almost the same quantization accuracy.
Table 6 and Figure 5 give the comparison results with the BSS-ITIE [28] as the benchmark. When TQA = 0.99, the ASN of the BSS-ITIE algorithm was 18.47 with ER = 4.66%; in comparison, the ASN of the proposed algorithm was 18.82 with ER = 0.81% and LR = 57%. This indicates that the proposed algorithm obtains better speech quality than the BSS-ITIE algorithm, with a great reduction in the number of multiplications. On the other hand, when TQA = 0.95, the ASN of the proposed algorithm was 13.10 with ER = 4.08%, and when TQA = 0.94, the ASN was 12.33 with ER = 4.86%. This indicates that the proposed algorithm provided a CS of about 29-33% and an LR of up to 67-69% over the BSS-ITIE algorithm with almost the same ER.

Table 6. Computational savings comparison between the BSS-ITIE [28] method and the proposed algorithm.

Figure 6 shows that when ASN was approximately 19, the ER of the proposed method was 0.81%, while that of the BSS-ITIE was 4.66%. When ASN was approximately 13, the ER of the proposed method was lower than that of the BSS-ITIE method by about 5%. When ER was approximately 4%, the ASN of the proposed method was about 5.5 lower than that of the BSS-ITIE method. Thus, the proposed algorithm had significantly better performance than the BSS-ITIE method.
In addition, to better measure the quantization error, the average vector quantization error (AVQR) was defined as the average absolute error between the quantized code word and the best-matched code word, computed by Equation (16):

AVQR = (1/L) Σ_{l=1}^{L} |ĉ_i − c_i|, (16)

where ĉ_i is the quantized code word, c_i is the best-matched code word found by the full search algorithm, and L is the total number of input speech frames. The AVQR values of the BSS-ITIE [28] and the proposed algorithm were computed for TQA ranging from 0.90 to 0.99. Table 7 shows the AVQR comparison between the BSS-ITIE [28] method and the proposed method. The experimental results show that all the AVQR values of the proposed method were lower than 0.1, while those of the BSS-ITIE method ranged from 0.0695 to 0.2324. Further, the maximum AVQR of the proposed method was 0.0974 when TQA = 0.90, which is approximately equal to the AVQR of the BSS-ITIE method when TQA = 0.98. Thus, the proposed method obtains a much lower quantization error than the BSS-ITIE method.
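The AVQR of Equation (16) can be sketched as follows. A minimal Python sketch; averaging over both frames and vector components is an assumption about the normalization.

```python
import numpy as np

def avqr(quantized, best_matched):
    """Average absolute error between the code words chosen by the fast
    search and those chosen by the full search over L frames; the mean
    here runs over all frames and components (an assumed normalization)."""
    q = np.asarray(quantized, dtype=float)
    b = np.asarray(best_matched, dtype=float)
    return float(np.abs(q - b).mean())

# Two 2-dimensional frames, each differing by 0.1 in one component.
err = avqr([[1.0, 2.0], [3.0, 4.0]], [[1.1, 2.0], [3.0, 3.9]])
```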


Conclusions
In this paper, an efficient codebook search algorithm for the VQ of the LSF coefficients is proposed to reduce the computational load. A squared-error look-up table is prebuilt in the training procedure, and then the encoding procedure begins: an input vector is quickly assigned to a search subspace, the CSG is obtained by employing the TIE equation, and a PDE technique is subsequently employed to reduce the number of multiplications. The experimental results show that the proposed algorithm provides a CS of up to 85% relative to the full search algorithm, up to 76% relative to the TIE algorithm, and up to 63% relative to the iterative TIE (ITIE) algorithm when TQA = 0.99. Compared to the BSS-ITIE algorithm, the proposed method provides a CS of 29-33% and an LR of 67-69% with almost the same quantization accuracy. Further, a trade-off between computational load and quantization accuracy can easily be made to meet a user's requirement when performing VQ encoding. This work is beneficial for meeting energy-saving requirements when implemented in speech codecs on mobile devices, and the reduction of computational load facilitates the application of the G.729 Recommendation in real-time speech signal processing systems.
Author Contributions: Y.X. developed the idea and wrote this article. Y.W., Z.Y., J.J., Y.Z., X.F. and S.Q. put forward some constructive suggestions for revision. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Acknowledgments: We would like to thank the Smart Sensing R&D Center, Institute of Microelectronics of the Chinese Academy of Sciences, for supporting this work.

Conflicts of Interest:
The authors declare no conflict of interest.