An Efficient Compensation Method for Limited-View Photoacoustic Imaging Reconstruction Based on Gerchberg–Papoulis Extrapolation

Reconstruction from limited-view scanning, though common in practice, remains a difficult problem for photoacoustic imaging (PAI). The incompleteness of the sampled data causes serious artifacts and blurring in the missing views and heavily degrades image quality. To address the limited-view PAI problem, a compensation method based on Gerchberg–Papoulis (GP) extrapolation is applied to PAI. Based on the known data, the missing detector elements are estimated and the image in the missing views is then compensated using the fast Fourier transform (FFT). To accelerate convergence, a total variation (TV)-based iterative algorithm is incorporated into the GP extrapolation-based, FFT-utilized compensation method (TV-GPEF). An efficient variable splitting and Barzilai–Borwein based method is adopted to solve the optimization problem. Simulations and in vitro experiments for both limited-angle circular scanning and straight-line scanning are conducted to validate the proposed algorithm. The results show that the proposed algorithm greatly suppresses the artifacts caused by the missing views and enhances the edges and details of the image, indicating that the proposed TV-GPEF algorithm is efficient for limited-view PAI.


Introduction
Photoacoustic imaging (PAI), also known as optoacoustic tomography, is a novel biomedical imaging technique. It uses a laser as the energy source and detects the excited ultrasound signals, thereby combining the advantages of both ultrasound imaging and optical imaging [1][2][3]. Firstly, PAI inherits the high contrast of optical imaging [1]. Secondly, it breaks through the imaging depth limitation of optical imaging and can acquire images at greater depths with good resolution [3]. Thirdly, PAI is a noninvasive imaging technique that is harmless to the tissue as long as the laser energy is controlled. Furthermore, because PAI reflects the light absorption distribution of the tissue, it can indicate physiological parameters such as hemoglobin concentration and blood oxygen saturation [4][5][6]. With these advantages, PAI serves as a great tool in many fields of biomedical application, e.g., brain imaging [7,8], blood vessel imaging [9], tumor detection [10] and molecular imaging [11].

Model-Based Photoacoustic
In computed-tomography PAI, a short laser pulse is usually incident vertically and uniformly upon the tissue. The energy of the laser is absorbed by the tissue and an initial ultrasound field is excited through the photoacoustic effect. The generated photoacoustic signals propagate in the medium and are detected by an ultrasound transducer scanning around the tissue. The relationship between the photoacoustic signals and the photoacoustic image can be described by the following equation [3]:

∇²p(r, t) − (1/c²) ∂²p(r, t)/∂t² = −(β/C_p) A(r) ∂δ(t)/∂t (1)

where p(r, t) is the generated acoustic pressure at position r and time t, A(r) is the light absorption deposition of the tissue, c is the speed of sound, β is the isobaric expansion coefficient and C_p is the specific heat.
Assuming that the acoustic property of the medium is homogeneous and the laser pulse approximates a delta function, for the two-dimensional case mainly concerned in this paper, Equation (1) can be solved by the Green's function method [3]:

p(r₀, t) = (β/(4πC_p)) ∂/∂t [ (1/t) ∫_{|r−r₀|=ct} A(r) d²r ] (2)

where r₀ is the position of the transducer.
In model-based PAI, a forward model is usually established to express the relationship between the photoacoustic image and the photoacoustic signals as a matrix multiplication. From Equation (2), it can be derived that:

(4πC_p t/β) ∫₀^t p(r₀, t′) dt′ = ∫_{|r−r₀|=ct} A(r) d²r (3)

Defining the left side of Equation (3) as:

g(r₀, t) = (4πC_p t/β) ∫₀^t p(r₀, t′) dt′ (4)

where g(r₀, t) is the integral of the sampled signal at r₀ multiplied by t, Equation (3) can be rewritten as:

g(r₀, t) = ∫_{|r₀−r′|=ct} A(r′) dr′ (5)

In a practical reconstruction system, the photoacoustic image and the detected signals usually need to be discretized. The photoacoustic image is discretized to a matrix A of size N_x × N_y and g is discretized to the column vector g. From Equation (5), we can see that g is the line integral of A, so every element of the vector g can be expressed as a weighted sum of the elements of the matrix A, where the weight values are related to the position of the integration arc [34]:

g_l = W_l A, l = 1, 2, …, N (6)

where l refers to the index of the sampling point, (·)^T denotes the transpose of a matrix, N is the total number of sampling points, A here is the column vector rearranged from the image matrix and W_l is the corresponding weight matrix. As a compressed-sensing measure, the TV describes the local variation of an image, which can be expressed as the L₁ norm of the discrete gradient of the image [49]:

TV(A) = Σ_{i,j} [ (A_{i,j} − A_{i−1,j})² + (A_{i,j} − A_{i,j−1})² ]^{1/2} (7)

where A_{i,j} is the gray value of the image at pixel (i, j).
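The discretization of Equation (6) can be sketched as follows. The nearest-pixel arc weighting, the grid size and the detector position below are illustrative assumptions of this sketch, not the exact weighting scheme of [34]:

```python
import numpy as np

# Sketch of the discretized forward model g = W A (Equation (6)).
# Each row of W holds the weights of the pixels lying on the integration
# arc |r0 - r| = c*t for one time sample of one detector.

def arc_row(nx, ny, pixel_mm, det_pos_mm, radius_mm, n_arc=360):
    """Weight row W_l for the arc of given radius centred on one detector."""
    w = np.zeros(nx * ny)
    theta = np.linspace(0.0, 2.0 * np.pi, n_arc, endpoint=False)
    xs = det_pos_mm[0] + radius_mm * np.cos(theta)
    ys = det_pos_mm[1] + radius_mm * np.sin(theta)
    ix = np.round(xs / pixel_mm).astype(int)
    iy = np.round(ys / pixel_mm).astype(int)
    inside = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    # nearest-pixel weighting: each arc sample deposits an equal arc-length weight
    for i, j in zip(ix[inside], iy[inside]):
        w[j * nx + i] += radius_mm * (2.0 * np.pi / n_arc)
    return w

nx = ny = 64
pixel = 0.6                                      # mm per pixel
A = np.zeros((ny, nx)); A[30:34, 30:34] = 1.0    # a small absorber
row = arc_row(nx, ny, pixel, det_pos_mm=(38.4, 19.2), radius_mm=15.0)
g_l = row @ A.ravel()                            # one element of the signal vector g
```

Stacking such rows for every detector position and time sample yields the full system matrix W.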
When the TV is incorporated into the iterative reconstruction method, the optimization problem of the TV-based photoacoustic model-based reconstruction can be expressed as:

min_A ‖W A − g‖₂² + α TV(A) (8)

Defining the finite difference that approximates the gradient of A at the kth pixel as u_k = D_k A, Equation (8) can be written as:

min_{A,u} ‖W A − g‖₂² + α Σ_k ‖u_k‖₂, s.t. u_k = D_k A for all k (9)
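The TV of Equation (7) can be computed directly from the discrete pixel differences; a minimal sketch (the boundary handling via replicated edge values is an assumption of this sketch):

```python
import numpy as np

# Isotropic total variation of Equation (7): the sum over all pixels of
# sqrt((A[i,j]-A[i-1,j])^2 + (A[i,j]-A[i,j-1])^2).

def total_variation(A):
    dx = np.diff(A, axis=0, prepend=A[:1, :])   # vertical differences
    dy = np.diff(A, axis=1, prepend=A[:, :1])   # horizontal differences
    return np.sqrt(dx**2 + dy**2).sum()

A = np.zeros((8, 8)); A[2:6, 2:6] = 1.0
print(total_variation(A))   # piecewise-constant image -> small TV
```

A constant image has zero TV, which is why TV regularization favors piecewise-smooth reconstructions with sharp edges.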

GPEF Compensation Method
The GP extrapolation was first proposed by Gerchberg and Papoulis as a new method to calculate the Fourier transform of a band-limited function [47,48]. It is executed by means of the FFT between the Fourier space and the image space, which improves both the accuracy and the speed of the algorithm [50]. The GPEF method to compensate for the limited view of PAI is expressed by the following equations:

A_I^n = M_I A^n + η M_I L⁻¹ M_S L A^n (10)

A^n = A_I^n + A^n (1 − M_I) (11)

where A^n is the image obtained in the nth iteration, η is the relaxation factor, between 0 and 1, and M_S and M_I are the operators that select the regions of the corresponding missing views in the signals and the image, respectively. A_I^n is the missing-view region of the image re-estimated from M_I A^n in image space. M_S and M_I act as masks:

M_S p(r₀, t) = p(r₀, t) for r₀ ∈ S, and 0 otherwise (12)

M_I A(i, j) = A(i, j) for (i, j) ∈ I, and 0 otherwise (13)

where S represents the support of the positions of the missing detectors that need to be estimated, and I represents the support of the missing views of the image, which are blurred during the reconstruction and need to be compensated. Xu et al. [38] described in detail the general rule for finding the "detection region" and the "invisible region". As stated in [38], the parts of the boundary whose normal lines pass through a detector position can be stably recovered; this is the measured, or known, region. The complement forms the invisible part, i.e., the missing views I in this manuscript; this region is blurred during the reconstruction and needs to be compensated. S denotes the positions of the missing detectors in the projection space. S is chosen so that all the normal lines of the boundaries pass through a detector position, ensuring that the whole image can be reconstructed stably.
L and L⁻¹ denote the forward and backward projection operators between the signals and the image, which are implemented using the FFT in the frequency domain based on the following relationship [51]:

ω = c|k| = c(k_i² + k_j²)^{1/2} (14)

where A(k_i, k_j) is the Fourier transform of the photoacoustic image A(i, j) and p(k_i, k_j, ω) is the Fourier transform of the photoacoustic signal p(i, j, t). Implementing Equation (14) requires the following steps: (1) Fourier transform the photoacoustic signal p(i, j, t) to obtain p(k_i, k_j, ω); (2) use the relationship in Equation (14) to map p(k_i, k_j, ω) onto A(k_i, k_j); and (3) inverse Fourier transform A(k_i, k_j) to reconstruct the photoacoustic image A(i, j). The diagram of the Gerchberg–Papoulis-based compensation method is displayed in Figure 1.
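The three steps above can be sketched as follows. This is a crude, runnable illustration only: the grid spacings, the fftfreq-based wavenumbers and the nearest-neighbour resampling along ω = c|k| are assumptions of this sketch, not the interpolation actually used in [51]:

```python
import numpy as np

def fft_backproject(p, c=1500.0, dx=1e-3, dt=1e-7):
    """Map p(i, t) -> A(i, j) through the dispersion relation w = c|k|.

    Step 1: FFT the signal; step 2: resample each temporal frequency onto
    the spatial-frequency grid via w = c*sqrt(k_i^2 + k_j^2) (here with a
    simple nearest-neighbour pick); step 3: inverse FFT to image space.
    """
    n_x, n_t = p.shape
    p_hat = np.fft.fft2(p)                               # step 1: p(k_i, w)
    k_x = 2.0 * np.pi * np.fft.fftfreq(n_x, dx)
    k_y = 2.0 * np.pi * np.fft.fftfreq(n_x, dx)
    w_grid = 2.0 * np.pi * np.fft.fftfreq(n_t, dt)
    A_hat = np.zeros((n_x, n_x), dtype=complex)
    for j, ky in enumerate(k_y):                         # step 2: w = c|k|
        w = c * np.sqrt(k_x**2 + ky**2)
        idx = np.argmin(np.abs(w_grid[None, :] - w[:, None]), axis=1)
        A_hat[:, j] = p_hat[np.arange(n_x), idx]
    return np.fft.ifft2(A_hat).real                      # step 3: image space

p = np.random.default_rng(0).random((32, 64))            # toy signal p(i, t)
A = fft_backproject(p)
```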
The main idea of the GP algorithm is that it compensates for the missing data via extrapolations between two spaces. In GP, the extrapolations are carried out using the FFT, i.e., between the signal space and the Fourier space. As can be seen in Equations (10) and (11), during the iteration, the missing detector elements are estimated from M_S L A^n in the Fourier space. Then, the missing views of A, which are usually blurred by artifacts, are estimated from the missing detectors and compensated by a weighted addition of this re-estimated part of the image. Implementing the extrapolation in the frequency domain guarantees the speed and accuracy of the compensation method.
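One compensation iteration of Equations (10) and (11) can be sketched as follows. M_I and M_S are mask arrays as in Equations (12) and (13); the true operators L and L⁻¹ are the FFT-based projections of Equation (14), and standing them in with plain 2-D FFTs here is purely an assumption made to keep the data flow runnable:

```python
import numpy as np

# One GPEF iteration: Equation (10) re-estimates the missing-view region,
# Equation (11) pastes it back while keeping the known region untouched.

def gp_step(A_n, M_I, M_S, eta=0.15):
    LA = np.fft.fft2(A_n)               # stand-in for the projection L
    est = np.fft.ifft2(M_S * LA).real   # estimate from missing detectors, L^-1 M_S L A
    A_I = M_I * A_n + eta * M_I * est   # Equation (10)
    return A_I + A_n * (1 - M_I)        # Equation (11)

A = np.random.default_rng(0).random((32, 32))
M_I = np.zeros((32, 32)); M_I[:, 16:] = 1      # missing-view image region I
M_S = np.zeros((32, 32)); M_S[16:, :] = 1      # missing-detector support S
A_next = gp_step(A, M_I, M_S)
```

Note how the factor (1 − M_I) in Equation (11) guarantees that the stably recovered region is passed through unchanged.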

TV-GPEF Algorithm
The convergence of GP-type extrapolation has been proven theoretically. However, the pure application of GP-type extrapolation to compensate for the loss caused by data deficiency is still unstable in practice, because of its weak noise robustness and frequent oscillations. Therefore, we combine the GPEF compensation method with the TV-based optimization to improve the stability and the convergence of the algorithm.
The variable splitting and Barzilai–Borwein based method is implemented to solve the optimization problem [52]. This method is known to perform excellently in solving TV-based minimization problems. The optimization problem of the TV-based algorithm in Equation (9) can be rewritten as:

min_{A,u} α Σ_k ‖u_k‖₂ + (λ/2) ‖W A − g‖₂², s.t. u_k = D_k A (15)

where α and λ are the weights of the two parts of the objective function.
By using the standard augmented Lagrangian method and incorporating the Barzilai–Borwein step size to obtain faster convergence [52,53], the problem of Equation (15) can be reformulated as Equation (16), where b_k^n is the step parameter of the TV term in the nth iteration and δ^n is the Barzilai–Borwein step size in the nth iteration, whose update is determined by the Barzilai–Borwein method in Equation (17). After applying the variable splitting method, Equation (16) can be decomposed into two sub-problems [41,52], which are solved using the shrinkage operator method as Equations (18) and (19) [41,52], where F is the Fourier transform matrix.
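The two ingredients named above can be sketched in isolation: the isotropic shrinkage (soft-thresholding) operator used for the u-subproblem, and the Barzilai–Borwein step size δ = (sᵀs)/(sᵀy). The threshold τ and the variable names are illustrative assumptions; the paper's exact updates couple these through Equations (16)–(19):

```python
import numpy as np

def shrink(v, tau):
    """Isotropic shrinkage: reduce the norm of the vector v by tau, or to 0."""
    norm = np.maximum(np.linalg.norm(v, axis=0), 1e-12)
    return np.maximum(norm - tau, 0.0) / norm * v

def bb_step(x_new, x_old, g_new, g_old):
    """Barzilai-Borwein step size delta = (s^T s) / (s^T y)."""
    s = x_new - x_old            # change in the iterate
    y = g_new - g_old            # change in the gradient
    return float(s @ s) / float(s @ y)

v = np.array([[3.0], [4.0]])     # a gradient vector of norm 5
print(shrink(v, 1.0))            # norm shrinks from 5 to 4
```

For a quadratic objective with gradient g(x) = x, the BB step recovers the exact inverse curvature (step size 1), which is what makes it an effective cheap surrogate for a line search.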
We apply the TV-based algorithm to the GPEF compensation method. In every iteration, the image is first updated by the TV optimization and then compensated by the GPEF method. The iteration steps of the TV-GPEF algorithm are summarized as follows:

1.
Initialization: Input A, α, λ and η. Set δ^0 = 1, b^0 = 0. Determine M_S and M_I according to the scan pattern of the reconstruction, based on the rule in [38] mentioned above.

2.
Update u^n using Equation (18) for the given A^(n−1)′. Update A^n using Equation (19) for the given u^n. Update b^n and δ^n using Equation (17).

3.
Input the image A^n to Equations (10) and (11) to obtain the compensated image of the nth iteration, A^n′.

4.
If the terminal condition is met, end the iteration. Otherwise, set n = n + 1 and return to Steps 2 and 3. The exit condition is as follows:
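Steps 1–4 above can be sketched as a control-flow skeleton. The TV update and the GP compensation are trivial placeholders for Equations (16)–(19) and (10)–(11), and the relative-change stopping rule is an assumption of this sketch, not the paper's exact exit condition:

```python
import numpy as np

# Skeleton of the TV-GPEF iteration: TV update (Step 2), GP compensation
# (Step 3), then an exit test (Step 4).

def tv_gpef(A0, n_max=10, tol=1e-4):
    tv_update = lambda A: A                 # placeholder for Equations (16)-(19)
    gp_compensate = lambda A: 0.99 * A      # placeholder for Equations (10)-(11)
    A = A0.copy()
    for n in range(1, n_max + 1):
        A_prev = A
        A = tv_update(A)                    # Step 2: TV optimization
        A = gp_compensate(A)                # Step 3: GPEF compensation
        change = np.linalg.norm(A - A_prev) / max(np.linalg.norm(A_prev), 1e-12)
        if change < tol:                    # Step 4: assumed exit condition
            break
    return A, n

A0 = np.ones((16, 16))
A_rec, iters = tv_gpef(A0)
```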

Simulation Results
In order to validate the proposed TV-GPEF algorithm, a series of numerical simulations is conducted for limited-view scanning. First, straight-line scanning with different numbers of sampling points is carried out for the Shepp–Logan phantom, a widely used phantom in biomedical imaging. Then, limited-angle circular scanning with varying sampling views is also tested. The reconstruction results of the TV-GPEF algorithm are compared with those of the TV-GD and the TV-VB algorithms quantitatively and qualitatively. The PSNRs of the three algorithms are compared, and the noise robustness and convergence speed of the algorithms are also studied. The simulations are conducted using Matlab R2013a on a personal computer with a 2.4 GHz Intel Xeon CPU and 64 GB memory. The speed of sound is set to 1500 m/s. The simulations of the photoacoustic signals and the image reconstruction are all carried out in a two-dimensional plane.

Straight-Line Scanning
The Shepp–Logan phantom is chosen as the original light absorption distribution, as shown in Figure 2. The image has 128 × 128 pixels, corresponding to a simulation area of 76.8 mm × 76.8 mm. The transducer detects the signals from the right side of the phantom. The perpendicular distance from the center of the image to the scanning line is 38 mm. The length of the scanning line is fixed at 76 mm, while the number of sampling points varies, resulting in different sampling intervals. The diagram of the straight-line scanning is also shown in Figure 2. Simulations with 50, 20 and 10 sampling points are executed, respectively. The number of iterations is 10 for all three algorithms. The adaptive tunable parameter of the TV-GD algorithm is set to 2 in the first iteration and decreased to 0.2 when the iteration number is greater than 10, which, as reported in [34], is the best selection when the number of iterations is 10. The parameters of the TV-VB and the TV-GPEF are estimated by testing the values that provide the best performance in the simulations. The values are set to α = 0.4 and λ = 1; η is set to 0.15, 0.10 and 0.07 for the 50-, 20- and 10-point sampling, respectively. The positions of the compensated missing views of the signals and the image are shown in Figure 3. The white region in Figure 3 is the image region corresponding to the missing views, i.e., the region I defined in Section 2.2, where the rule to determine this region is also illustrated. Because the missing views of the image are blurred by the missing detector elements, this part of the image needs to be compensated, while the known part (the black area) is kept as is because it can be stably recovered. The axis labels in Figure 3 are the same as those of the phantom in Figure 2.
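The straight-line sampling geometry described above can be sketched as follows: a 76 mm vertical line, 38 mm to the right of the image center, discretized into a chosen number of detector positions. The coordinate convention (origin at the image center) is an assumption of this sketch:

```python
import numpy as np

# Detector positions along the straight scanning line for the three
# simulated cases (50, 20 and 10 sampling points).

def line_scan_positions(n_points, length_mm=76.0, offset_mm=38.0):
    ys = np.linspace(-length_mm / 2.0, length_mm / 2.0, n_points)
    xs = np.full(n_points, offset_mm)      # detectors on the right side
    return np.stack([xs, ys], axis=1)

for n in (50, 20, 10):                     # the three simulated cases
    pos = line_scan_positions(n)
    print(n, "points, spacing", pos[1, 1] - pos[0, 1], "mm")
```

Fewer points over the same 76 mm line directly translate into a coarser sampling interval, which is the source of the sparse-sampling artifacts discussed below.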
The reconstructed images of the three algorithms are shown in Figure 4. The first row of Figure 4 shows that plenty of artifacts and blurred edges appear in the perpendicular direction in the results of the TV-GD, on account of the deficiency of data in the vertical direction. When the number of sampling points decreases, the blurring becomes even more serious and the quality of the image is gravely damaged. It is hard to obtain useful diagnostic information from the missing views, so its application is limited. For the TV-VB algorithm in the second row, owing to the efficiency of the Barzilai–Borwein based method, the artifacts caused by the missing views are reduced with sufficient sampling points, but the results are poor when the sampling points become sparse.
As for the TV-GPEF algorithm in the third row, the results are greatly improved. There is almost no blur in the reconstructed images and the artifacts are remarkably reduced. Even with sparse sampling points, the degree of vagueness in the missing views is significantly lower than that of the other two algorithms. Moreover, the contrast of the image is improved and the background noise is effectively suppressed. To present the results quantitatively, we also compare the PSNRs of the reconstructed images of the three algorithms. The PSNR is computed as:

PSNR = 10 log₁₀( MAX_I² / ((1/(N_x N_y)) Σ_{i,j} (A_{i,j} − R_{i,j})²) )

where MAX_I is the maximum gray value of the image and R_{i,j} is the gray value of the original image.
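The PSNR defined above is a one-liner over the mean squared error; a minimal sketch:

```python
import numpy as np

# PSNR in dB: 10*log10(MAX_I^2 / MSE), with R the original (reference) image.

def psnr(A, R):
    mse = np.mean((A.astype(float) - R.astype(float)) ** 2)
    max_i = R.max()
    return 10.0 * np.log10(max_i ** 2 / mse)

R = np.zeros((64, 64)); R[16:48, 16:48] = 1.0
A = R + 0.01                        # a uniform error of 0.01 per pixel
print(round(psnr(A, R), 1))         # -> 40.0 dB
```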
The PSNR results are displayed in Table 1. The TV-GPEF acquires the highest PSNRs of the three algorithms, on average about 11 dB higher than that of the TV-GD and 7 dB higher than that of the TV-VB. From the PSNR results, we can see that the TV-GPEF algorithm is superior to the other two algorithms. The number of estimated detectors is another factor that needs to be considered for the TV-GPEF algorithm. The PSNRs versus the number of estimated detectors for the straight-line scanning are plotted in the line chart in Figure 5. Properly increasing the number of estimated detector elements improves the results to some extent.
However, as the number of estimated detectors continues to increase, the improvement levels off; beyond a certain point, there is almost no further improvement, while the calculation time increases greatly. Considering PSNR and calculation time comprehensively, we choose 80, 50 and 30 estimated detectors for the 50-point, 20-point and 10-point scanning, respectively.

Limited-Angle Circular Scanning
Simulations for limited-angle circular scanning are also carried out. In this case, the angular scanning step remains unchanged at 6°. The numbers of sampling points are set to 10, 15 and 20, corresponding to 60-view, 90-view and 120-view circular scanning, respectively. The radius of the scanning is 36 mm. The parameters are set to α = 0.6 and λ = 1; η is set to 0.16, 0.08 and 0.05 for the 120-view, 90-view and 60-view scanning, respectively. The diagrams of the scanning are shown in Figure 6a–c, and Figure 7a–c shows the corresponding areas of the compensated missing views.

The simulation results are demonstrated in Figure 8. The TV-GD shows poor reconstruction performance. There is a great degree of distortion in the top-right corner as well as the bottom-left of the reconstructed images, with a large error in the geometrical shape between the original and the reconstructed images, and the results get even worse as the views decrease. For the 60-view scanning in Figure 8a, almost no useful information can be obtained from the image except large tracts of fuzziness. The performance of the TV-VB is better than that of the TV-GD; however, many artifacts still blur the edges of the image in the missing views, especially for the 60-view scanning. In addition, the background noise in the TV-VB results is serious, which affects the image quality. The reconstruction results of the TV-GPEF are superior to those of the other two algorithms. The artifacts in the missing views are well suppressed and the edge information of the image is preserved relatively intact. Almost no blur appears in the reconstructed images of the 120-view and 90-view scanning in Figure 8h,i. The contrasts of the images are high and the images are little affected by noise.
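The limited-angle circular geometry above (radius 36 mm, 6° step, so 10/15/20 points span 60/90/120 degrees of view) can be sketched as follows. Centering the covered arc about 0° is an assumption of this sketch:

```python
import numpy as np

# Detector positions on a limited circular arc: n_points detectors at a
# 6-degree step on a 36 mm radius, centred on 0 degrees.

def circular_scan_positions(n_points, step_deg=6.0, radius_mm=36.0):
    span = step_deg * n_points                  # total angular coverage
    angles = np.deg2rad(-span / 2.0 + step_deg * (np.arange(n_points) + 0.5))
    return radius_mm * np.stack([np.cos(angles), np.sin(angles)], axis=1)

for n in (10, 15, 20):
    pos = circular_scan_positions(n)
    print(n, "points cover", 6.0 * n, "degrees of view")
```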
The PSNRs of the three algorithms are shown in Table 2. The PSNR results also validate the superiority of the proposed algorithm: on average, the PSNR of the TV-GPEF is higher than that of the TV-VB by about 9 dB and higher than that of the TV-GD by about 12 dB.

Noise Robustness
In practical applications, the detected signals are vulnerable to the interference of thermal noise, which is usually Gaussian. Therefore, it is of great importance to test the noise robustness of the algorithm. White Gaussian noise with different powers is added to the detected signals for the case of 20-point straight-line scanning. Three groups of noisy signals are thus acquired, with signal-to-noise ratios (SNRs) of 10 dB, 5 dB and 0 dB, respectively.
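Adding white Gaussian noise at a prescribed SNR, as in this robustness test, amounts to setting the noise power from the signal power and the target SNR in dB; a minimal sketch with an illustrative sinusoidal signal:

```python
import numpy as np

# AWGN at a target SNR: P_noise = P_signal / 10^(SNR_dB / 10).

def add_awgn(signal, snr_db, seed=0):
    rng = np.random.default_rng(seed)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(p_noise), signal.shape)

t = np.linspace(0.0, 1.0, 1000)
clean = np.sin(2.0 * np.pi * 5.0 * t)
for snr in (10, 5, 0):                 # the three tested noise levels
    noisy = add_awgn(clean, snr)
```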
Figure 8. The reconstructed images from the limited-angle circular scanning by the TV-GD, TV-VB and TV-GPEF algorithms: the first row shows the results of the TV-GD (a–c); the second row, the TV-VB (d–f); and the third row, the TV-GPEF (g–i). The first to third columns show the results from 60-view (a,d,g), 90-view (b,e,h) and 120-view (c,f,i) sampling, respectively.

The reconstructed images of the three algorithms from the noise-polluted signals are shown in Figure 9. The TV-GPEF shows stronger noise resistance than the other two algorithms. Its images from the noisy signals are very close to those from the noise-free signals: the light absorbers are clear and distinguishable, the detailed information is quite consistent with the original image, the background noise is well suppressed and the added noise has no significant effect on the reconstructed image. However, for the TV-VB and the TV-GD, the added noise greatly influences the reconstructed images. For the TV-VB, a certain degree of fuzziness appears, blurring the edges of the optical absorbers, and there is a lot of background noise in the reconstructed images; the effects of the noise get even worse as the SNR decreases. For the 0 dB SNR result, serious artifacts fuzz up the reconstructed image, which seriously debases its quality. For the TV-GD, the image is severely affected by the added noise and almost all light absorbers are damaged in the reconstructed results.
The PSNRs of the reconstructed images from the noise-added signals are displayed in Table 3. The table shows that the TV-GPEF algorithm outperforms the other two algorithms: its PSNR is about 11 dB higher than that of the TV-VB and about 8 dB higher than that of the TV-GD on average. Therefore, we can conclude that the TV-GPEF has the strongest noise robustness of the three algorithms.

Convergence Speed
Another important performance index for an iterative reconstruction algorithm is its convergence speed, i.e., how quickly the reconstructed image approaches the original one. To study the convergence of the algorithms, we define the distance d between the reconstructed image and the standard image as follows: a smaller d means that the reconstructed image is closer to the original image and thus more accurate, and vice versa.
We choose the 90-views limited-angle circular scanning for this study. After each iteration, the distance d is calculated from the current reconstructed image, and the values of d at every step of the three algorithms are recorded and compared in the line chart in Figure 10. The results show that, at every iteration step, the reconstructed image of the TV-GPEF is closer to the real one than those of the other two algorithms. In the first several steps in particular, the d of the TV-GPEF falls markedly faster. After ten iterations, d converges to a certain value for all three algorithms, and the value for the TV-GPEF is much smaller than those of the other two, which means that the TV-GPEF is more accurate. It can be inferred that the proposed TV-GPEF algorithm has a faster convergence speed than the other two algorithms.
The computation times of the three algorithms for the 120-views, 90-views, and 60-views limited-angle circular scanning are compared in Table 4. The results show that solving the TV-based minimization with the variable splitting and Barzilai–Borwein based method (TV-VB) greatly reduces the computation time compared to the TV-GD. The TV-GPEF has a slightly higher computational cost than the TV-based algorithms because of the compensation procedure in every iteration, but its convergence speed is greatly improved; in fact, the TV-GPEF obtains better results than the TV-GD and the TV-VB after the same computation time. From the above discussion, it can be concluded that the TV-GPEF algorithm precedes the other two algorithms in terms of convergence.
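Tracking d over the iterations can be sketched as follows; the relative Euclidean distance used here is an assumed form of d (the paper gives the exact equation), and the toy fixed-point iteration stands in for the reconstruction update:

```python
import numpy as np

def distance(reconstruction, original):
    """Relative Euclidean distance d between the reconstructed image
    and the standard image (assumed form of the paper's metric)."""
    return np.linalg.norm(reconstruction - original) / np.linalg.norm(original)

def track_convergence(step, x0, original, n_iters=10):
    """Record d after every iteration of an iterative scheme `step`."""
    x, history = x0, []
    for _ in range(n_iters):
        x = step(x)
        history.append(distance(x, original))
    return history

# Toy iteration that moves halfway toward the target each step:
target = np.ones((8, 8))
history = track_convergence(lambda x: x + 0.5 * (target - x),
                            np.zeros((8, 8)), target)
```

Plotting such a history for each algorithm produces exactly the kind of line chart shown in Figure 10.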

In Vitro Experiment
In vitro experiments were also conducted to validate the effectiveness of the proposed TV-GPEF algorithm. Limited-angle circular scanning with 180-views, 90-views and 60-views, as well as straight-line scanning, was carried out, and the reconstructed results of the three algorithms were compared and discussed.
The experimental platform is shown in Figure 11. The laser source was a Nd:YAG laser (Surelite I, Continuum, San Jose, CA, USA). The laser pulse, with a wavelength of 532 nm, was emitted from the device, reflected by a mirror, expanded by a concave mirror, and then illuminated the object uniformly. The pulse duration was 4-6 ns and the repetition frequency was 10 Hz. The laser settings met the American National Standards Institute (ANSI) laser radiation safety standard. The photoacoustic signals were detected by a transducer (V383-SU, Panametrics, Waltham, MA, USA) driven by a stepping motor to scan around the imaged object. The center frequency of the transducer was 3.5 MHz and the bandwidth was 1.12 MHz. The sampling rate of the system was 16.67 MHz.

The phantom for the limited-angle circular scanning is shown in Figure 12a. It was made of a gelatin cylinder with a very small light absorption coefficient, into which a leaf was embedded as the light absorber. The radius of the phantom was 25 mm, and its shape imitated the structure of blood vessels and tissue. The sampling interval was 2°. The 180-views, 90-views and 60-views scanning were conducted, corresponding to 90, 45 and 30 sampling points, respectively. The scanning radius was 38 mm.
The reconstruction results of the three algorithms are shown in Figure 13. For the TV-GD, there are serious artifacts and blurs in the reconstructed images, especially when the number of missing views becomes large; for the 90-views and 60-views scanning, almost nothing but a blur can be acquired from the results. The TV-VB performs better than the TV-GD, but many artifacts still exist in the missing views, the edges of the light absorbers are not distinct enough, which decreases the resolution of the image, and the background noise seriously reduces the contrast and thus the quality of the reconstruction. For the TV-GPEF, however, the results are greatly improved: the background noise and the artifacts are well suppressed, the edges of the light absorber are enhanced, and the contrast of the image is improved. It can therefore be concluded that the TV-GPEF precedes the other two algorithms for the limited-angle circular scanning.
The straight-line scanning was also conducted to verify the proposed algorithm. The phantom for the straight-line scanning is shown in Figure 12b. It was also made of a gelatin cylinder with a radius of 25 mm, with a black rectangular rubber sheet of 9 mm × 14 mm embedded into the gelatin as the light absorber. The scanning line was parallel to the longer side of the rectangle, and forty-one sampling points were evenly distributed along it with a sampling interval of 1 mm.
The reconstruction results of the three algorithms are shown in Figure 14. For the TV-GD, the reconstructed light absorber is severely deformed, especially in the vertical direction; blade-shaped artifacts appear at both ends of the light absorber and the edges of the image are so obscure that they are almost impossible to recognize. For the TV-VB, the deficiency of the sampling data still causes many artifacts and blurs in the reconstructed image. For the TV-GPEF, however, the missing views are well compensated, with fewer artifacts and clearer edges, and the distribution of the gray values within the light absorber is relatively uniform. We can thus conclude that the TV-GPEF outperforms the other two algorithms for the straight-line scanning.

Discussion and Conclusions
In this paper, a Gerchberg–Papoulis extrapolation-based compensation method is successfully applied to limited-view photoacoustic image reconstruction. The missing views are compensated through FFT-based extrapolation between the frequency space and the image space. This method is more accurate than the iterative-reconstruction-reprojection (IRR) algorithm [40], another transform-based extrapolation method. The IRR algorithm iterates between the image space and the projection space using forward and back projections: the forward projection is usually based on the relationship in Equation (2), while the back projection relies on the FBP-based method. Back projection methods of this kind are usually derived under a certain ideal condition and require complete data, and since the back projection usually does not match the forward projection well, the performance of the IRR method is only average. It therefore usually needs interpolations at both the back projection and forward projection stages, and the interpolation errors degrade the reconstructed image. Moreover, the mismatched projection pair also greatly reduces the convergence speed of the algorithm unless sufficient data are available. The FFT-based transformation utilized in the GP-based extrapolation method effectively improves the accuracy of the compensation without the need for interpolation. Besides, compared to the IRR, the adoption of the FFT in the GP-based method accelerates the compensation and thus greatly reduces the calculation time. Furthermore, to ensure the convergence of the GP-based algorithm, the known data usually need to be oversampled; to overcome this limitation, the TV-based iteration method is incorporated into the GP-based compensation method, which greatly improves the convergence speed of the algorithm.
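The core Gerchberg–Papoulis iteration underlying the compensation, which alternates an FFT band-limit constraint with a data-consistency constraint, can be sketched in one dimension (a toy illustration only; the paper's operator works on detector data and images rather than a 1-D band-limited signal):

```python
import numpy as np

def gp_extrapolate(known, known_mask, band_mask, n_iters=300):
    """Gerchberg--Papoulis extrapolation: alternately enforce the band
    limit in the frequency domain (via the FFT) and the measured samples
    in the signal domain, filling in the missing samples."""
    x = np.where(known_mask, known, 0.0)
    for _ in range(n_iters):
        x = np.fft.ifft(np.fft.fft(x) * band_mask).real  # band-limit step
        x[known_mask] = known[known_mask]                 # data-consistency step
    return x

# Band-limited test signal with a gap of missing samples:
n = 64
t = np.arange(n)
signal = np.cos(2 * np.pi * 2 * t / n) + 0.5 * np.sin(2 * np.pi * 3 * t / n)
band_mask = np.abs(np.fft.fftfreq(n, d=1.0 / n)) <= 3   # keep |k| <= 3
known_mask = np.ones(n, dtype=bool)
known_mask[30:34] = False                                # four missing samples
recovered = gp_extrapolate(signal, known_mask, band_mask)
```

Because the signal is strictly band-limited and most samples are known, the gap is recovered almost exactly; convergence slows as the gap widens, which is why oversampled known data, or the TV iteration adopted in this paper, is needed in practice.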
In the GP-based compensation method, the missing detectors are estimated from the reconstructed image acquired from the known data, and the image in the missing views is then re-estimated from the estimated missing detectors. The compensation is implemented by a weighted addition of the estimated image to the previously acquired one in the missing views. Adding the two images only in the missing views prevents the artifacts in the known region of the estimated image from disturbing the acquired reconstructed image. The relaxation factor η adjusts the weight of the estimated image and can thus reduce the possible imbalance between the known-views part and the missing-views part of the reconstruction results. Before reconstruction, therefore, the regions corresponding to the missing views for the data and the image, M_S and M_I, need to be determined manually based on the scanning pattern. In the future, more effective image fusion methods can be adopted to better integrate the two images instead of a simple weighted addition, so that there is no need to determine M_S and M_I for each kind of scanning pattern.
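The weighted addition restricted to the missing views can be sketched as follows; the convex combination with weight η is an assumed reading of the paper's update, and the mask plays the role of M_I:

```python
import numpy as np

def compensate_missing_views(current, estimated, missing_mask, eta=0.2):
    """Blend the GP-estimated image into the current reconstruction,
    but only inside the missing-views region M_I, with weight eta
    (assumed convex-combination form of the paper's weighted addition)."""
    updated = current.copy()
    updated[missing_mask] = ((1.0 - eta) * current[missing_mask]
                             + eta * estimated[missing_mask])
    return updated

# The known-views region is left untouched by the compensation:
current = np.zeros((4, 4))
estimated = np.ones((4, 4))
missing = np.zeros((4, 4), dtype=bool)
missing[:, 2:] = True
result = compensate_missing_views(current, estimated, missing, eta=0.25)
```

Restricting the update to `missing_mask` is precisely what keeps artifacts in the known region of the estimated image from leaking into the reconstruction.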
λ is the parameter corresponding to the weight of the constraint condition in the optimization problem. Theoretically, λ does not have a major impact on the performance of the TV-GPEF algorithm, and the simulations show that the reconstruction results are not sensitive to its value over a large range; λ is therefore set to the constant 1 in this paper. α is the parameter corresponding to the weight of the TV value in the optimization. A large α means that the TV term is dominant, which is expected to yield quicker convergence; however, an oversized value breaks the balance between the two parts of the objective function, and the reconstructed images then differ greatly from the real ones because data fidelity is sacrificed to image regularity. Based on this criterion, α should be neither too large nor too small compared to the weight of the other part of the objective function; from the simulations and experiments in this manuscript, α can be set from 0.2 to 1. η is a relaxation factor ranging from 0 to 1; it is the weight with which the image estimated by the GPEF method is added to the obtained reconstruction result. A small η diminishes the effect of the compensation method, while a large η affects the accuracy of the results. In addition, the simulations and experiments show that when the number of sampling points or views for the circular scanning is sufficient, η should be set to a relatively large value to obtain the best performance, whereas with sparse sampling points or few scanning views, η should be relatively small. From the simulations and experiments, η can be set from 0.05 to 0.3.
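The roles of λ and α can be made concrete with a generic TV-regularized objective of the kind the paper minimizes, sketched here under the assumption that λ weights the data-fidelity term and α the TV term (the paper's exact functional may differ in details):

```python
import numpy as np

def total_variation(img):
    """Isotropic total variation of a 2-D image
    (forward differences with a replicated border)."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))

def objective(img, forward, data, lam=1.0, alpha=0.5):
    """Data fidelity weighted by lam plus total variation weighted by alpha."""
    residual = forward(img) - data
    return 0.5 * lam * np.sum(residual ** 2) + alpha * total_variation(img)

# A piecewise-constant image: TV counts only the jump along the edge.
step_img = np.zeros((4, 4))
step_img[:, 2:] = 1.0
```

A flat image has zero TV, so α penalizes only non-smooth candidates; an oversized α therefore pushes the minimizer toward over-smoothed images at the expense of data fidelity, exactly the trade-off described above.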
Although applying a compensation method to the TV-based iterative algorithm increases the calculation time, the convergence speed of the algorithm is greatly boosted and the quality of the reconstructed images is greatly improved for limited-view scanning. The GP-based method, which uses the FFT to estimate the missing views between the image space and the frequency space, accelerates the algorithm and ensures its accuracy, and adopting the effective variable splitting and Barzilai–Borwein based method to solve the optimization problem further increases the convergence speed while reducing the calculation time. The experimental design in this paper is relatively simple and is constrained by the experimental equipment, such as the low-frequency transducer; future work will focus on improving the experimental facilities and attempting more complicated experiments on animals.
Finally, the proposed TV-GPEF algorithm is verified by both numerical simulations and in vitro experiments. Limited-angle circular scanning with different numbers of views and straight-line scanning with varying numbers of sampling points are conducted, and the reconstructed results of the proposed algorithm are compared to those of the TV-GD and TV-VB algorithms. The results show that the TV-GPEF is superior to the other two algorithms and greatly improves the performance for limited-view scanning: the artifacts caused by the missing views are well compensated, the distortion and blurring of the image are greatly reduced, the edges of the image are enhanced, and the details are better preserved. For the limited-angle circular scanning, the PSNR of the TV-GPEF is about 11 dB higher than that of the TV-GD and about 7 dB higher than that of the TV-VB on average; for the straight-line scanning, the improvement is about 12 dB over the TV-GD and 9 dB over the TV-VB. Besides, the proposed TV-GPEF algorithm also precedes the other two algorithms in terms of noise robustness and convergence speed.
Consequently, we can conclude that the proposed TV-GPEF algorithm is applicable to limited-view PAI and effectively solves the problem of incomplete sampling data.