Numerical Simultaneous Determination of Non-Uniform Soot Temperature and Volume Fraction from Visible Flame Images

This paper presents a method to invert the two-dimensional distributions of the temperature and volume fraction of soot particles from color flame images. Using numerical simulation, the temperature field and particle volume-fraction field of a non-uniform soot flame are simultaneously reconstructed from the wide response spectrum of a color CCD camera, without adding monochromatic filters. The influence of the number of cameras, camera-position angle error, measurement noise and different reconstruction algorithms on measurement accuracy is analyzed. The numerical-simulation results demonstrate that camera-position angle errors play a crucial role in the reconstruction accuracy, and that increasing the number of cameras improves the accuracy of the reconstruction. Compared with the least-squares algorithm, the Tikhonov-regularization algorithm has a stronger anti-noise ability and can tolerate images with an SNR as low as 39 dB. The conclusions obtained in this paper are helpful for guiding subsequent experimental studies.


Introduction
Flame temperature is one of the most important combustion parameters [1,2]. Accurate temperature measurement is of great significance for improving combustion efficiency and reducing pollutant emissions [3]. Thermocouple measurement is one of the most common temperature-measurement methods and has the advantages of simple operation and reliable results. However, it is a contact method: when used on a flame, it easily disturbs the measured flame-flow field and thereby alters the temperature-field distribution of the flame [4,5]. Non-contact measurement does not disturb the measured flame-flow field, which gives it clear advantages for flame-temperature measurement. Laser spectroscopy is one family of non-contact temperature-measurement methods; it includes laser interference holography [6][7][8], tunable diode-laser absorption spectroscopy (TDLAS) [9][10][11][12] and laser Rayleigh scattering (LRS) [13][14][15][16]. In recent years, the temperature fields of flame sections have been measured by combining TDLAS with computed tomography (CT) [11,17,18]. Laser spectral-temperature-measurement technology offers high measurement accuracy and a large measurement range. However, it requires lasers and carefully arranged optical paths, and the operating procedure has a certain complexity. It is therefore challenging to use laser spectroscopy for temperature measurement in high-vibration, high-dust industrial environments [19][20][21].
Compared with laser spectroscopy technology, temperature-measurement technology based on spontaneous flame-radiation information is another type of non-contact passive temperature-measurement method, which does not require a laser light source [22]. During the combustion process of fuel, a large amount of heat is released due to a chemical reaction; the heat is externally manifested as a thermal-radiation spectrum. When measuring flame temperature based on flame-excitation radiation information, it is necessary to use a sensor to collect thermal-radiation information. An image sensor is commonly used [23]. As the most common industrial camera, the CCD camera is widely used in temperature measurement due to its advantages [24][25][26]. This method has the advantages of not affecting the measured flame-flow field, and it performs without an external light source. All flame-flow field information can be collected by a CCD camera, and several algorithms can be used to reconstruct the flame-flow field parameters [27,28].
Reconstructing a flame-flow field from CCD-camera images consists of inverting the radiation source term of the flow field from the boundary radiation intensity collected by the camera. The inversion process has a high computational load and is time-consuming, and it is prone to errors because the problem is ill-posed. At present, the algorithms are mainly divided into two categories: optimization algorithms [29][30][31][32] and regularization algorithms [33][34][35][36]. Liu et al. [29] transformed the inversion problem into an optimization problem and used the conjugate gradient method to obtain the boundary radiation intensity with the smallest error relative to experimental data. Wang et al. [30] transformed the inversion problem into an ill-conditioned matrix equation, used the least-squares QR-decomposition algorithm to solve for the optimal radiation source term and then obtained the flame-flow-field parameters from it. Although the least-squares QR-decomposition algorithm used by Wang et al. is analytically equivalent to the conjugate-gradient algorithm used by Liu et al., it has better numerical properties. Optimization algorithms are well developed and widely used for such inversion problems. However, they are iterative and require a suitable convergence range to be set; an inappropriate convergence range leads to large errors and long calculation times. Zhou et al. [37] proposed an improved Tikhonov-regularization algorithm for the reconstruction of a three-dimensional temperature field, with an experimental error of almost 5%. Wei et al. [38] improved the overall reconstruction accuracy using the Tikhonov-regularization algorithm and efficiently reduced the number of projection angles required for reconstruction.
Xie et al. [39] used both the Tikhonov-regularization algorithm and the truncated singular-value algorithm in order to reconstruct a flame in a furnace. The experimental results demonstrated that the former algorithm takes less time, its reconstruction error is small and its maximum temperature reconstruction is more accurate. Previous studies showed that the Tikhonov-regularization algorithm can accurately reproduce the distribution of different parameters of the flame-flow field and has great potential.
In addition, in previous studies, the wavelengths of the three channels of the CCD camera were often approximated by using the monochromatic wavelength in order to calculate radiation intensity [40][41][42][43][44]. However, the spectral response of the R, G and B bands of the color CCD camera is relatively wide and cannot be simply approximated by using the monochromatic wavelength. Yan et al. [45] considered the broad spectrum of the color CCD camera when inverting axisymmetric flames. The experimental results showed that this method is efficient and reliable.
Building on these previous studies, this study uses a color CCD camera to collect flame images, reconstructs the two-dimensional temperature field and radiation characteristics of the flame using the Tikhonov-regularization algorithm and verifies the approach with a numerical simulation of a bimodal flame. The work proceeds in three steps: establishing a numerical simulation of a double-peak flame; calculating the image intensity collected by the camera according to the spectral-band model, using the Rayleigh approximation formula to compute the spectral-absorption coefficient; and reconstructing the temperature-field distribution of the flame section from the calculated image intensity using the Tikhonov-regularization algorithm. Finally, the reconstruction results are compared with those obtained using the LSQR algorithm.

Radiation Imaging Models
The tomographic reconstruction of asymmetric flames usually requires multiple viewing angles to provide complete information on the flame. If flame images are used for the reconstruction, multiple cameras are required. To this end, this paper proposes a flame cross-section radiation-parameter reconstruction system specially designed for multi-camera tomographic reconstruction of the flame, as shown in Figure 1. Multiple cameras are placed at the same angle, α, and the same distance, S, from the flame axis, and the flame images are collected simultaneously. The pixel intensity of the flame image is processed to obtain the radiation source-term distribution of the flame cross-section. The soot-temperature distribution and volume fraction are then calculated using colorimetry, as described in the next section.



Direct Problem
For hydrocarbon-fuel combustion flames, the radiation of CO2 and H2O in the visible light band is negligible, while the radiation of the flame in the visible band is mainly generated by soot at high temperature [46]. In general, the flame radiation intensity is described by the Radiation Transfer Equation (RTE) [40]. In this paper, the following assumptions are made [35,36,47–49]: (a) only the soot radiation in the visible light band is considered, (b) the effects of scattering and self-absorption of the flame are ignored and (c) the effect of background-light radiation is ignored. Under these assumptions, the radiation intensity of soot along the projection path received by the camera can be expressed as:

I(λ) = ∫_0^L κ(l, λ) I_b(l, λ, T) dl, (1)

where λ is the wavelength, L represents the projected path length, I_b(l, λ, T) denotes the blackbody radiation intensity of the soot particles and κ(l, λ) is the local spectral absorption coefficient, which is proportional to the soot-volume fraction f_v and is calculated using the Rayleigh limit of Mie theory as [45,48]:

κ(λ) = 6πE(m) f_v / λ, (2)

where E(m) is the soot optical constant and m is the complex refractive index with real and imaginary parts [50].
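For illustration, the Rayleigh-limit absorption coefficient described above can be evaluated directly. A minimal sketch in Python; the 1 ppm volume fraction and 600 nm wavelength below are illustrative values, not taken from the paper:

```python
import math

def soot_absorption_coeff(f_v, wavelength_m, E_m=0.26):
    """Local spectral absorption coefficient (1/m) in the Rayleigh limit of
    Mie theory: kappa = 6 * pi * E(m) * f_v / lambda, with E(m) = 0.26 as
    adopted in the paper."""
    return 6.0 * math.pi * E_m * f_v / wavelength_m

# Illustrative values: f_v = 1e-6 (1 ppm) at a wavelength of 600 nm
kappa = soot_absorption_coeff(1e-6, 600e-9)
```

Because κ scales linearly with f_v, the volume fraction can be recovered by simple division once κ is known, which is how the concentration field is backed out after the temperature is found.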
Note that E(m) is related to the physical properties of soot. In this study, the most widely used function model in the literature is considered. More precisely, E(m) = 0.26, m = 1.57 − 0.56i.
According to Planck's radiation law, the blackbody radiation intensity of the soot particles can be expressed as:

I_b(λ, T) = C1 / {λ^5 [exp(C2 / (λT)) − 1]}, (3)

where T(l) represents the local temperature along the path and C1, C2 are the first and second Planck radiation constants, respectively [43].
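Planck's law can be sketched numerically as follows; the constants are the standard first and second radiation constants for spectral radiance:

```python
import math

C1 = 1.191042e-16  # first Planck radiation constant for radiance, W*m^2/sr
C2 = 1.438777e-2   # second Planck radiation constant, m*K

def planck_intensity(wavelength_m, T):
    """Blackbody spectral radiance I_b(lambda, T) in W/(m^3*sr)."""
    return C1 / (wavelength_m ** 5 * (math.exp(C2 / (wavelength_m * T)) - 1.0))

# At soot-flame temperatures the red channel sees more radiation than the
# green channel; this spectral imbalance is what the two-color method exploits.
I_600 = planck_intensity(600e-9, 1800.0)
I_500 = planck_intensity(500e-9, 1800.0)
```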
In previous studies, when computing the radiation intensity of soot particles, the R, G and B data of the pixels of a flame picture collected by a color CCD camera were usually assumed to be proportional to approximately monochromatic radiative intensities at red, green and blue wavelengths [40][41][42][43]. However, because the spectral response of the R, G and B channels of a color camera is wide, the signal cannot simply be approximated by a monochromatic radiation intensity. Denoting the relative spectral-response efficiencies of the R, G and B channels of the CCD camera as η_R(λ), η_G(λ), η_B(λ), the projection received by the camera can be expressed as:

I_k = ∫_{λ1}^{λ2} η_k(λ) ∫_0^L κ(l, λ) I_b(l, λ, T) dl dλ, k ∈ {R, G, B}, (4)

where λ1, λ2 are the start and end of the corresponding wavelength range of the camera, respectively. As shown in Figure 1, the flame cross-section and the camera beams are respectively discretized into n² unit grids and p imaging sight lines, within which the flame-radiation parameters are assumed uniform. Therefore, Equation (4) can be written in discrete form:

I_{k,j} = ∫_{λ1}^{λ2} η_k(λ) Σ_{i=1}^{n²} κ(i, λ) I_b(i, λ, T) ΔL_{j,i} dλ, (5)

where I_b(i, λ, T) is the radiation source term of cell grid number i, η_k(λ) represents the relative spectral-response efficiency, κ(i, λ) denotes the local spectral absorption coefficient of grid number i and ΔL_{j,i} is the distance traveled by imaging sight line number j within grid number i.
Equation (5) is then written in matrix form:

I_λ = ΔS · I, (6)

where ΔS ∈ R^(p×n²), I_λ ∈ R^p and I ∈ R^(n²). It can be seen from Figure 1 that each imaging line of sight passes through only a few cell grids. Therefore, the matrix ΔS is a large sparse matrix, and an algorithm suited to ill-posed matrix equations is required to obtain reasonable inversion results. For the inversion of large sparse matrices, the commonly used methods are the Tikhonov-regularization algorithm and the least-squares algorithm based on QR decomposition (LSQR). In this paper, these two algorithms are used to simultaneously reconstruct the radiation parameters of the flame section, and the influence of different measurement errors on the accuracies of the two reconstruction algorithms is evaluated.
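The structure of the sparse forward projection can be illustrated with a toy grid; all path lengths below are hypothetical values chosen only to show how ΔS is assembled and applied:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy example: a 2x2 grid (n^2 = 4 cells) and p = 3 sight lines.  Row j of
# dS stores the path lengths dL_{j,i} (in metres) of sight line j inside
# cell i; cells a line does not cross remain structurally zero.
rows    = [0, 0, 1, 1, 2]
cols    = [0, 1, 1, 3, 2]
lengths = [0.6e-3, 0.6e-3, 0.85e-3, 0.85e-3, 1.2e-3]
dS = csr_matrix((lengths, (rows, cols)), shape=(3, 4))

I = np.array([1.0, 2.0, 3.0, 4.0])  # per-cell radiation source term (a.u.)
I_lambda = dS @ I                   # forward projection: I_lambda = dS * I
```

Inversion reverses this product: given the projections I_λ measured by the cameras, the per-cell source term I is recovered even though ΔS is sparse and ill-conditioned, which is exactly what the two algorithms below do.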

LSQR Algorithm
The LSQR algorithm is suitable for solving the least-squares problem min_x ‖b − Ax‖2, where A is a large sparse matrix [40].
Applying the LSQR algorithm to Equation (6), we set A = ΔS, x = I and b = I_λ. At the kth iteration, the orthonormal matrices U_{k+1} and V_k are generated by Golub–Kahan bi-diagonalization of A, and the iterate x_k is obtained by solving the resulting small bidiagonal least-squares subproblem, following the standard LSQR recurrences.
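In practice the LSQR recurrences need not be re-implemented: SciPy's `scipy.sparse.linalg.lsqr` follows the Paige–Saunders algorithm. A sketch on a small, well-conditioned sparse system; the bidiagonal test matrix stands in for ΔS and is purely illustrative:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lsqr

n = 40
# Well-conditioned sparse bidiagonal test matrix standing in for dS
A = diags([2.0 * np.ones(n), -1.0 * np.ones(n - 1)], [0, -1], format="csr")

rng = np.random.default_rng(0)
x_true = rng.random(n)   # "true" per-cell source term
b = A @ x_true           # simulated noise-free projections

# LSQR recovers x from (A, b) without ever forming A^T A explicitly
x_hat = lsqr(A, b, atol=1e-12, btol=1e-12)[0]
```

Because LSQR never forms the normal equations, it keeps the sparsity of ΔS and remains numerically stable on ill-conditioned systems, which is the advantage noted for Wang et al.'s approach in the Introduction.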

Tikhonov-Regularization Algorithm
The Tikhonov-regularization algorithm [35] transforms the solution of Equation (6) into the minimization of the following expression:

min_I {‖ΔS · I − I_λ‖2² + α ‖D · I‖2²}, (13)

where D is a regularization matrix whose diagonal elements are all 1.0 and whose row sums are 0 [34], and α is the regularization coefficient. An approximate value of α is given in [43] as a function of I(0), the least-squares solution of Equation (6). The optimal solution of Equation (6) is then expressed as:

I = (ΔS^T · ΔS + α D^T · D)^(−1) · ΔS^T · I_λ. (14)
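The Tikhonov minimization is equivalent to an ordinary least-squares problem with the regularization block stacked beneath the system matrix. A minimal dense sketch; the 2×2 ill-conditioned system and the value of α are illustrative only:

```python
import numpy as np

def tikhonov_solve(A, b, D, alpha):
    """Solve min ||A x - b||^2 + alpha * ||D x||^2 by stacking
    sqrt(alpha) * D under A and calling ordinary least squares."""
    A_aug = np.vstack([A, np.sqrt(alpha) * D])
    b_aug = np.concatenate([b, np.zeros(D.shape[0])])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Nearly singular toy system; D penalizes the difference between neighbours
# (diagonal 1.0, row sum 0, as described in the text)
A = np.array([[1.0, 1.0], [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
D = np.array([[1.0, -1.0]])
x = tikhonov_solve(A, b, D, alpha=1e-3)
```

The smoothness penalty encoded by D stabilizes the inversion against the near-singularity of A, which is the mechanism behind the stronger anti-noise ability reported later for this algorithm.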

Two-Color Method
After solving the radiation source-term distribution of the flame section, the temperature distribution can be solved using the two-color method:

I_R / I_G = [∫_{λ1}^{λ2} η_R(λ) κ(λ) I_b(λ, T) dλ] / [∫_{λ1}^{λ2} η_G(λ) κ(λ) I_b(λ, T) dλ], (15)

where I_R and I_G are the radiation source-term distributions for the R and G channels, respectively. Since κ(λ) is proportional to the soot-volume fraction, the volume fraction cancels from the ratio and the temperature T is the only unknown. However, since Equation (15) involves integrals over the wavelength λ, it cannot be solved in closed form. Therefore, this paper expands Equation (15) about the temperature T to obtain Equation (16) and uses the Newton iteration method to solve for the temperature distribution. After the temperature distribution is solved, the soot-concentration distribution can be recovered by substituting the temperature distribution into Equation (3). The overall calculation process is presented in Figure 2.
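A toy sketch of the Newton iteration for the two-color ratio follows. The Gaussian channel responses, their widths, and the use of the Wien approximation are our own simplifying assumptions for the sketch, not the paper's exact model:

```python
import numpy as np

C2 = 1.438777e-2  # second Planck radiation constant, m*K

def band_signal(T, center_nm, width_nm=30.0, n=400):
    # Band-integrated soot emission for a hypothetical Gaussian channel
    # response.  With the Wien approximation and kappa ~ 1/lambda, the
    # integrand reduces to eta(lambda) * exp(-C2/(lambda*T)) / lambda**6;
    # constant prefactors (including f_v) cancel in the R/G ratio.
    lam = np.linspace((center_nm - 3 * width_nm) * 1e-9,
                      (center_nm + 3 * width_nm) * 1e-9, n)
    eta = np.exp(-0.5 * ((lam - center_nm * 1e-9) / (width_nm * 1e-9)) ** 2)
    emission = np.exp(-C2 / (lam * T)) / lam ** 6
    return float(np.sum(eta * emission) * (lam[1] - lam[0]))

def two_color_temperature(ratio_rg, T0=1500.0, tol=1e-6, max_iter=100):
    # Newton iteration on f(T) = ratio(T) - ratio_rg, with the derivative
    # approximated by a 1 K finite difference.
    T = T0
    for _ in range(max_iter):
        ratio_T = band_signal(T, 600.0) / band_signal(T, 500.0)
        ratio_T1 = band_signal(T + 1.0, 600.0) / band_signal(T + 1.0, 500.0)
        f, df = ratio_T - ratio_rg, ratio_T1 - ratio_T
        step = f / df
        T -= step
        if abs(step) < tol:
            break
    return T

# Round trip: synthesize the R/G ratio at 1800 K, then recover T
r = band_signal(1800.0, 600.0) / band_signal(1800.0, 500.0)
T_rec = two_color_temperature(r)
```

Because the R/G ratio is smooth and monotonic in T over flame temperatures, the Newton iteration converges in a handful of steps from a rough initial guess.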

Simulated Object
The impacts of different errors on the reconstruction accuracy of the two algorithms are compared using a numerical simulation. The asymmetric temperature and soot-concentration distributions are expressed in Equation (17). In this system, the cameras are assumed to have a field of view of 11.4 degrees and the distance S is 169 mm. The common area that the cameras can capture is a circle with a diameter of 33.78 mm in the central area. Therefore, the flame cross-section is divided into 1600 sub-units, and each sub-unit is a square with a side length of 0.6 mm.
where T(x, y) and f_v(x, y) are, respectively, the temperature and soot-concentration distributions, as shown in Figure 3. In this paper, a Gaussian function is used to approximate the relative spectral-response efficiency of the simulated CCD camera [45,51,52]: where C_i is the maximum relative spectral-response efficiency, a_i is the central wavelength and λ1, λ2 are the start and end of the corresponding wavelength range of the camera, respectively.
It is assumed that C_G = 1, C_R = 0.8, a_R = 600 nm and a_G = 500 nm. The relative spectral-response efficiency curves of the R and G channels are shown in Figure 4. For the actual flame, only the flame image collected by the CCD camera is required as the projection data in order to invert the radiation parameters of the flame section. However, for the numerical simulation, mathematical methods should be used to calculate the theoretical radiation intensity collected by CCD cameras at different angles. In this paper, in the multi-camera system for flame-section radiation-parameter reconstruction, three cases of camera-placement interval angles exist: α = 30°, α = 60° and α = 90°, corresponding to 12, 6 and 4 cameras, respectively. The theoretical radiation-intensity distribution collected by the CCD camera is shown in Figure 5. The theoretical radiation intensity is calculated as described in Section 2.2.
In this paper, the root mean square error (RMSE) is used to test the accuracy of the reconstructed field against the known original intensity field:

RMSE = √((1/N) Σ_{i=1}^{N} [f(x_i, y_i) − f′(x_i, y_i)]²) / f_max × 100%, (19)

where f(x, y) is the reconstructed field, f′(x, y) represents the original intensity field, f_max denotes the maximum value of the original intensity field and N is the number of cell grids, equal to 1600 as previously mentioned.
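The RMSE as described, normalized by the maximum of the original field and expressed in percent (one common normalization, consistent with the text), can be sketched as:

```python
import numpy as np

def rmse_percent(f_rec, f_orig):
    """Normalized RMSE (%): sqrt(mean((f - f')^2)) / max(f') * 100."""
    return float(np.sqrt(np.mean((f_rec - f_orig) ** 2)) / np.max(f_orig) * 100.0)

# Illustrative 3-cell fields (not the paper's data)
f_orig = np.array([1000.0, 1500.0, 2000.0])
f_rec  = np.array([1010.0, 1490.0, 2020.0])
err = rmse_percent(f_rec, f_orig)
```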

Influence of the Number of Cameras
As previously mentioned, three cases for the number of cameras exist: four cameras separated by 90°, six cameras separated by 60° and twelve cameras separated by 30°. The distributions of the flame cross-section temperature reconstructed by using the two algorithms are shown in Figure 6, while the reconstructed soot concentrations are shown in Figure 7. The root mean square errors of the reconstruction results for the two algorithms with different numbers of cameras are shown in Figure 8.

It can be seen from Figures 6 and 7 that as the number of CCD cameras increases, the reconstructions of the two algorithms become more accurate. Because the two algorithms solve equations involving large sparse matrices, more cameras provide more projection data, and the inversion results improve accordingly. Since the horizontal fields of view of the cameras are smaller than the separation angles between them, fewer sight lines pass through the outer region, making the reconstruction more prone to distortion at the boundary. The reconstruction of the soot-concentration distribution is more prone to distortion than the reconstruction of the temperature distribution. The regularization algorithm outperforms the least-squares algorithm for both the temperature and soot-concentration reconstructions. When the number of CCD cameras is more than six, both algorithms reconstruct the temperature distribution well. However, for the soot-concentration reconstruction, only the regularization algorithm obtains good results.
It can be observed from Figure 8 that as the camera interval angle decreases and the number of cameras increases, the reconstruction accuracies of the two algorithms improve greatly. The temperature-reconstruction error of the regularization algorithm decreases from 1.63% for four cameras to 0.0027% for twelve cameras. The reconstruction of the soot concentration is more strongly affected by the number of CCD cameras: with four cameras, the soot-concentration reconstruction error of the regularization algorithm reaches 9.11%, while that of the LSQR algorithm reaches 39.8%. When the number of CCD cameras increases to twelve, the error decreases to the same level as the temperature reconstruction: 0.0885% for the regularization algorithm and 1.30% for the LSQR algorithm.

Effect of Measurement Noise
When a camera is used to collect flame images, external light sources, attenuation of the flame light and other factors inevitably affect the collected images, which in turn affects the reconstruction accuracy. It can be seen from Figures 6 and 7 that the two algorithms can accurately reconstruct the distributions of temperature and soot concentration. Therefore, in the presence of limited noise, the two algorithms should still be able to reconstruct the radiation parameters of the flame section reasonably accurately. In order to analyze the anti-noise ability of the two algorithms, random noise (cf. Equation (20)) is added to the previously described theoretical radiation intensity. To avoid reconstruction errors caused by a lack of flame information collected by the cameras and the two-color method, the influence of noise on the two algorithms is analyzed using the reconstruction results of the radiation source terms from 12 cameras.
where Ĩ_λ and I_λ, respectively, represent the theoretical radiation intensity with and without added noise, σ represents the noise level and R_12 is the sum of 12 random numbers uniformly distributed between zero and one.
In this paper, signal-to-noise ratio (SNR) is used to evaluate the noise intensity. The SNR is calculated (in dB) using Equation (21). In the calculation process, four different noise levels are considered: σ = 50, σ = 800, σ = 900 and σ = 2200. The corresponding SNRs are: 39 dB, 63 dB, 65 dB and 72 dB, respectively. The theoretical radiation intensity with added noise is shown in Figure 9, and the reconstruction-error results are shown in Figure 10.
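The noise injection and the SNR check can be sketched as follows. The exact form of Equation (20) is not reproduced in the extracted text, so the offset by 6 below (which makes R_12 − 6 approximate a standard normal variable via the Irwin–Hall distribution) is our assumption, as are the signal level and array size:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_noise(I, sigma):
    # Assumed noise model: sigma * (R12 - 6), where R12 sums 12 uniform(0, 1)
    # numbers so that R12 - 6 approximates a standard normal variable.
    R12 = rng.random((12,) + I.shape).sum(axis=0)
    return I + sigma * (R12 - 6.0)

def snr_db(I_clean, I_noisy):
    # Signal-to-noise ratio in decibels
    noise = I_noisy - I_clean
    return float(10.0 * np.log10(np.sum(I_clean ** 2) / np.sum(noise ** 2)))

I = np.full(10000, 5000.0)      # hypothetical uniform projection intensity
I_noisy = add_noise(I, sigma=50.0)
snr = snr_db(I, I_noisy)
```

With a signal level of 5000 and σ = 50, the SNR works out to roughly 40 dB, so the σ values quoted in the text only map to the listed SNRs for a particular signal magnitude.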
It can be seen from Figure 10 that as the noise level decreases, the reconstruction results of the two algorithms gradually become better. Compared with the LSQR algorithm, the regularization algorithm introduces the regularization matrix D and the regularization coefficient α on top of the LSQR iteration and obtains the final result by inverting the ill-posed matrix; it therefore has a stronger anti-noise ability. Even when the SNR is as low as 39 dB, the reconstruction error of the regularization algorithm is only about 4%, and it can still accurately reconstruct the radiation source-term distribution. The anti-noise ability of the LSQR algorithm is far weaker: only when the SNR reaches 63 dB does its reconstruction error fall to about 10%, which is still almost 4% higher than that of the regularization algorithm.
Moreover, the time consumed by the reconstruction process is also important when evaluating the algorithms. Figure 11 shows the time consumed by the two algorithms under different noise levels. The regularization algorithm takes slightly longer than the LSQR algorithm. This is because the LSQR algorithm forms the basis of the regularization algorithm: both must iterate on the matrix, and the regularization algorithm must additionally regularize the LSQR iterates, which lengthens its run time. In general, as the number of elements of the coefficient matrix increases, the iteration time required by the LSQR algorithm increases sharply, which increases the overall run time.
However, increasing the number of cells in the reconstructed source term will increase the reconstruction accuracy. Therefore, it is necessary to make a trade-off between the reconstruction time and reconstruction accuracy in order to achieve an efficient balance between them.


Error of Camera-Separation Angle
As previously mentioned, the camera separation angles (30°, 60° and 90°) are nominally identical within each configuration. However, in an actual experiment, the angles between the cameras would inevitably deviate due to the experimental equipment, reading errors and other causes. In order to analyze the influence of this angle deviation on the reconstruction accuracy of the two algorithms, different degrees of angle error were added to the acquisition angles of the 12 cameras and the radiation source term was reconstructed accordingly. The obtained results are shown in Figure 12.

Figure 11. The time taken by the two algorithms for different intensity noises.

It can be seen from Figure 12 that four angular errors of 0.1°, 0.3°, 1° and 3° are added to the acquisition angle. The radiation source terms reconstructed for the R and G channels by the two algorithms are highly affected by the interval-angle error, and the RMSE rapidly increases with the angle error. When the angle error is 2°, the RMSE of the regularization algorithm exceeds 30%, and the reconstruction results can no longer be used to recover the radiation parameters of the flame section. The LSQR algorithm is affected far more strongly than the regularization algorithm: when the angle error is only 0.1°, its reconstruction error already exceeds 60%.
It can be deduced from this analysis that the camera acquisition angle is highly important for both algorithms; it determines whether the inversion can succeed at all. The reason for this strong influence can be understood from how the theoretical images are generated: a flame image is a cumulative effect formed by a specific radiation source term, a specific acquisition angle, a specific line of sight and a specific grid. An angular deviation when collecting the flame image makes the images provided by multiple cameras mutually contradictory, which leads to inversion errors. In contrast to the number of cameras and the noise discussed above, the camera acquisition angle affects the inversion process itself, and its impact on both algorithms is extremely significant.

Conclusions
This paper proposes a method to simultaneously reconstruct the two-dimensional temperature field and soot-particle volume concentration of a non-uniform soot flame using the wide response spectrum of a color CCD camera. It then evaluates the influence of the number of cameras, the camera interval-angle error and noise on the Tikhonov-regularization and LSQR algorithms. The numerical-simulation results demonstrate that the reconstruction accuracy of both algorithms increases with the number of cameras, and that the Tikhonov-regularization algorithm is more precise than the LSQR algorithm. With six cameras, the reconstructed RMS errors of the Tikhonov-regularization algorithm's temperature and soot-volume concentration are 0.0259% and 0.5522%, respectively, compared with 0.3497% and 3.6418% for the LSQR algorithm. The Tikhonov-regularization algorithm also outperforms the LSQR algorithm in noise immunity: for the LSQR algorithm to achieve satisfactory accuracy, the SNR of the flame images should be no less than 65 dB, whereas the Tikhonov-regularization algorithm only requires an SNR of 39 dB. Most importantly, the camera separation-angle error has a strong impact on the reconstruction accuracy: an angle error of 1° introduces an error of almost 18% for the Tikhonov-regularization algorithm and more than 100% for the LSQR algorithm.