3D Evolutionary Reconstruction of Scalar Fields in the Gas-Phase

An evolutionary reconstruction technique (ERT) was developed for three-dimensional (3D) reconstruction of luminescent objects, applied here for the first time to turbulent flames. The computed tomography (CT) algorithm comprises a genetic algorithm (GA) and ray-tracing software. To guide the reconstruction process, a mask is introduced that uses a Metropolis algorithm (MA) to sample locations where specific genetic operators can be applied. Based on an extensive parameter study, performed on several types of phantoms, the ability of our algorithm to produce 3D reconstructions of fields with varying complexities is demonstrated. Furthermore, it was applied to three experiments, to reconstruct the instantaneous chemiluminescence field of a Bunsen flame, a highly turbulent swirl flame and the turbulent Cambridge-Sandia stratified flame. Additionally, we show a direct and quantitative comparison to an advanced computed tomography of chemiluminescence (CTC) method that is based on an algebraic reconstruction technique (ART). The results showed good agreement between CTC and ERT using both phantom data from flame simulations and experimental data.

Several forms of algorithms exist to deal with the inversion problem of tomography. The classical filtered back projection (FBP) algorithm is based on the Radon transform and the Fourier slice theorem [20]. The FBP can generally be applied when a large number of projections is available to facilitate the Fourier transformations. Therefore, the algorithm is not useful when (optical) access to the target is limited, which is the case in most engineering applications.
Typical engineering applications allow only a moderate number of projections to be taken and the resulting linear system of equations is rank-deficient, which leads to a set of non-unique solutions [22]. To get closer to the true solution, prior information can be incorporated in the reconstruction scheme. This can be achieved for example by a constrained first order Tikhonov regularization, which imposes non-negativity and smoothness [23]. Another approach casts the inversion problem into a Bayesian framework to consider prior information [24,25].
The application of genetic algorithms (GAs) was first introduced to tomography by Kihm et al. [26], Kihm and Lyons [27], and is still in its early stages of development. Wu et al. [28] presented a method for addressing multiphase flows in fluidized beds with a GA using binary images. In the same context, Yang et al. [29,30] compare a simultaneous algebraic reconstruction technique (SART) with an adaptive genetic algorithm (AGA) and propose a hybrid version. Additionally, GAs were used in the reconstruction of binary matrices in discrete tomographic problems, e.g., by Batenburg [31], Valenti [32] and Di Gesù et al. [33], and reconstruction of the resistivity distribution in inhomogeneous objects, called electrical impedance tomography (EIT), was performed by Olmi et al. [34].
Our application of the GA is very different from the previously cited problems, as we reconstruct directly in 3D rather than slice-wise, as done up to now; slice-wise reconstruction neglects the perspective effect in the projections. We have applied the evolutionary reconstruction technique (ERT) in the first stage to premixed flames. A premixed flame is a thin reaction zone separating unburned reactants from burned products. In most practical cases the structure is a highly complex, deformed thin manifold due to turbulence, see Figure 1, but could be statistically symmetric. Such reconstructions will help to determine important flame geometry information, such as wrinkling, vortex shedding and breakdown, and regions of local extinction. Evolutionary methods entail probabilistic algorithms and try to mimic the evolutionary processes observed in nature [37,38]. In our ERT program, an individual (chromosome) of a population is a 3D scalar field that is implemented as an array of voxels representing the discretized reconstruction domain. Projections of the field, which are needed for fitness evaluation, are generated in the form of 2D images, with the help of a pin-hole camera-rendering algorithm. We used an open source code [39,40,41,42] in the first stage and later developed an in-house ray-tracing algorithm, which improved the total run-time of the ERT algorithm.
Continuous and direct ray-tracing of the reconstruction domain during the evolutionary process will allow us in later applications to implement more complex measurement models. These can account for, e.g., particle interaction, multi-phase flows or beam steering. Additionally, a calibration-free reconstruction method is feasible with this approach, by optimizing the camera parameters simultaneously while reconstructing. This is generally not possible for our CTC method [8,9,16,17], as the matrix to be inverted is formed in a one-time ray-tracing step.
The dimensionality of the ERT optimization problem is proportional to the total number of voxels, such that the size of the reconstruction domain is limited by the available computational power. To alleviate the computational cost, we introduce a Metropolis algorithm (MA)-based masking method, which helps achieve faster convergence. The stochastic mask chooses random locations/voxels for the genetic operators.
We conducted an extensive parameter study on generic phantoms (e.g., a torus, a hollow sphere and a Gaussian-shaped intensity profile), which are exactly known fields, to determine the sensitivity of the ERT to its parameters. The capability of the ERT to reconstruct complex structures was tested in more depth using three different flame phantoms that were generated by large eddy simulations (LES) and direct numerical simulations (DNS) of available laboratory burners. These were a Bunsen flame, a swirl flame and the Cambridge-Sandia stratified burner. Furthermore, we present experimental reconstructions of these flames using a multi-camera setup. The camera setup was tested in previous experiments [16,17] and we compare the ERT results to ART-based reconstructions using a CTC method [8,9] that features an advanced camera model.

The Evolutionary Reconstruction Scheme
The main part of the program is coded in Fortran 90 and sub-parts related to the vlib ray-tracing algorithm in C. The code is fully parallelized on the chromosome level, i.e., the population is divided between the number of available CPUs.
The algorithm uses either projections of a 3D phantom or experimental reference images. First a stochastic mask is created based on the reference images. Following this, the GA initializes a population and each individual is evaluated for its fitness by ray-tracing. Subsequently, the evolution step is performed iteratively and the individuals are re-evaluated for their fitness. After a certain number of generations the stochastic mask is re-calculated in a second stage, based on the individual with the highest fitness. The algorithm terminates when either a threshold value for the fitness or the maximum number of generations is reached. In Algorithm 1 the structure of the ERT is given as pseudo-code. The projections are rendered with the open source ray-tracing library vlib [39,40,41,42]. Vlib is written in C and provides several functions and methods to set up a ray-tracing program that suits the user's needs. It is volume-based by design, which means that a scene constitutes objects that occupy a volume in world space. The properties of the objects can be assigned via scalar fields [41].
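The outer loop just described (initialize a population, evaluate by ray-tracing, evolve, terminate on a fitness threshold or maximum generation count) can be sketched as follows. This is a minimal Python illustration, not the Fortran 90 implementation; the names render, evolve and init_population are placeholders for the ray-tracer, the GA step and the population setup.

```python
import numpy as np

def fitness(rendered, references):
    # absolute fitness 1 - Z, with Z the mean Manhattan distance over all
    # views (pixel values assumed to lie in [0, 1])
    z = np.mean([np.abs(r - ref).mean() for r, ref in zip(rendered, references)])
    return 1.0 - z

def run_ert(references, render, evolve, init_population,
            max_generations=100, fitness_threshold=0.99):
    population = init_population()
    scores = []
    for gen in range(max_generations):
        # evaluate every chromosome by ray-tracing it into all camera views
        scores = [fitness(render(chrom), references) for chrom in population]
        if max(scores) >= fitness_threshold:
            break  # fitness threshold reached
        population = evolve(population, scores)  # GA evolution step
    return population[int(np.argmax(scores))]    # best fitting chromosome
```

A second-stage mask re-calculation would slot into the loop after a fixed number of generations; it is omitted here for brevity.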
The standard left-handed coordinate system used in computer graphics was implemented. The cameras look at the origin by default and their positions are given in spherical coordinates, defined by a distance R from the origin, an azimuth angle φ and an inclination angle θ.
The camera is modelled as a simple pin-hole. The viewport has the size of the reconstruction domain dimensions NX by NY. The distance of the image plane to the pin-hole is set to R − NZ/2 with the opening angle α, such that the camera view covers one side of the box completely. This ensures that we see all the data. Figure 2 shows the setup of the camera view, which was chosen for the phantom reconstructions. For experimental reconstructions the viewport needs to be scaled properly in accordance with the physical sensor specifications, and taking into consideration if any binning is used in the reference images. To speed-up the rendering process two modifications were made to the vlib source code. Firstly, a rendering function was implemented which returns an image directly as an array of floating point numbers with pixel values in the range [0, 1] without outputting an image file. Secondly, the fields are passed to vlib as arrays, saving considerable computation time.
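As a small illustration of the viewport geometry described above, the opening angle α that lets a viewport of width NX placed at distance R − NZ/2 span a full side of the box follows from plain pin-hole geometry. This is a sketch under that geometric assumption, not code taken from vlib.

```python
import math

def opening_angle(nx, nz, R):
    # image-plane distance to the pin-hole, as described in the text
    d = R - nz / 2.0
    # horizontal opening angle (radians) so the view spans a face of width nx
    return 2.0 * math.atan((nx / 2.0) / d)
```

For the 40-voxel domains of the parameter study, a camera at R = 60 would need α = 2·atan(0.5).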
We also developed a run-time-optimized volumetric ray-tracer. Its basic setup is shown in Figure 2. A camera pixel's intensity I corresponds to the integrated line-of-sight (LOS) value sampled at discrete steps ∆s: I = ∑_i f_i ∆s. The start and end points of the summation are given by the intersection of the LOS with the reconstruction domain. The field value f_i at location (x_i, y_i, z_i) is calculated by trilinear interpolation of the probed volume. All results presented in Section 3, which are related to the parameter study, were calculated with the vlib ray-tracer. Our in-house ray-tracer was used for reconstructions of the flame phantom fields and in the experimental section. The results of the parameter study are not affected by the ray-tracing scheme, since we solely focused on the features of the GA.
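The discrete LOS summation with trilinear interpolation can be sketched as follows. This is illustrative Python, not the in-house ray-tracer; border handling is simplified by clamping indices to the field.

```python
import numpy as np

def trilinear(field, x, y, z):
    # trilinear interpolation of a 3D voxel field at a fractional location
    x0, y0, z0 = int(x), int(y), int(z)
    dx, dy, dz = x - x0, y - y0, z - z0
    val = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = ((1 - dx) if i == 0 else dx) * \
                    ((1 - dy) if j == 0 else dy) * \
                    ((1 - dz) if k == 0 else dz)
                val += w * field[min(x0 + i, field.shape[0] - 1),
                                 min(y0 + j, field.shape[1] - 1),
                                 min(z0 + k, field.shape[2] - 1)]
    return val

def los_intensity(field, start, direction, length, ds=0.5):
    # pixel intensity I = sum_i f_i * ds along start + s * direction
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    I, s = 0.0, 0.0
    while s <= length:
        p = np.asarray(start, float) + s * direction
        I += trilinear(field, *p) * ds
        s += ds
    return I
```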

The Genetic Algorithm
The number of individuals, PopSize, of the population is fixed for the run-time of our algorithm. A box of defined size, comprising NX by NY by NZ voxels, represents an individual; this box is the tomographic reconstruction domain. The individual's box is referred to as a chromosome and the voxels are the genes. The genes of a chromosome initially contain an intensity value of zero.
Each chromosome has an associated Z value, which is a fitness marker that can be calculated via different fitness functions. To measure the fitness between the reference images x and the rendered images y of the reconstruction we use a distance metric. For high-dimensional data, as in this application, it is not directly obvious which distance measure is the most appropriate. The general L_p norm is defined as L_p(x, y) = (∑_{i=1}^{d} |x_i − y_i|^p)^{1/p}, where x and y are elements of the vector space R^d. The authors of Reference [43] show that the meaningfulness of L_p decreases with increasing dimensionality for higher values of p. This means that it becomes harder to discriminate between the nearest and the furthest neighbouring element. Therefore, in general, an L_1 norm (Manhattan distance) should be the first choice, followed by an L_2 norm (Euclidean distance), for high values of d. They suggest using a fractional norm (p < 1) for high-dimensional data mining problems.
In our algorithm, we implemented a Manhattan distance and the Z value is the average of the calculated error distance values of all the views. The Manhattan-distance-based Z value is given by Z = (1/N) ∑_{k=1}^{N} (1/(w·h)) ∑_{i=1}^{w} ∑_{j=1}^{h} |x_{ij}^{(k)} − y_{ij}^{(k)}|, where w and h are the image width and height of the reference and rendered images, respectively, and N is the number of views. The absolute fitness is calculated as 1 − Z, and the relative fitness relFit(i) is deduced by rescaling Z to a range from zero to one and inverting it: relFit(i) = (max(Z) − Z_i) / (max(Z) − min(Z)). Here max(Z) is the largest and min(Z) the smallest Z value of the current population. Thus, the chromosome with the lowest absolute fitness value has a relative fitness of zero and the chromosome with the highest absolute fitness has a relative fitness of one.
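A minimal sketch of the Z value and the relative-fitness rescaling, assuming pixel values in [0, 1]:

```python
import numpy as np

def z_value(rendered, references):
    # average over the N views of the per-pixel mean absolute difference
    return float(np.mean([np.abs(x - y).mean()
                          for x, y in zip(references, rendered)]))

def relative_fitness(z_values):
    # rescale Z to [0, 1] and invert: lowest Z (best) -> 1, highest Z -> 0
    z = np.asarray(z_values, float)
    zmax, zmin = z.max(), z.min()
    if zmax == zmin:
        return np.ones_like(z)  # degenerate population: all equally fit
    return (zmax - z) / (zmax - zmin)
```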
The GA evolution step for forming a new generation is shown in Algorithm 2. A member of the new generation is obtained by selecting two members of the old generation by a selection scheme. This function induces a selection bias towards chromosomes with a higher fitness, which ultimately leads to convergence of the solution. Here, we use the simple method of fitness proportional selection that was originally proposed by Holland [37]. The selection procedure corresponds to a roulette wheel design. The population size equals the number of bins on the roulette wheel. Each bin has a size that is proportional to the fitness value. It follows that a member of the population is selected with a probability p i = f i / ∑ f k . Here f i is the relative fitness value.
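Fitness-proportional (roulette wheel) selection, where chromosome i is picked with probability p_i = f_i / ∑ f_k, can be sketched as:

```python
import random

def roulette_select(fitnesses, rng=random.random):
    # spin the wheel: bins are proportional to the relative fitness values
    total = sum(fitnesses)
    r = rng() * total
    acc = 0.0
    for i, f in enumerate(fitnesses):
        acc += f
        if r <= acc:
            return i
    return len(fitnesses) - 1  # guard against floating-point round-off
```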
The selected chromosomes are merged (crossover) by forming a voxel-wise average. Subsequently a mutation operator, an annihilation operator and a local filter operator are applied on the resulting chromosome with predefined probabilities (mutation rate, annihilation rate and filter rate). Mutation is carried out by randomly selecting a defined number of voxels (mutation intensity) and changing their values randomly. These voxels are chosen from an array of random coordinates that are sampled by a Metropolis algorithm in a previous step (stochastic mask). The assigned intensity value for a selected voxel is drawn from a normal distribution. A Fortran 90 implementation of a random number generator with a truncated normal distribution has been taken from Burkhardt [44] for this purpose.
The mean of the normal distribution is taken as the average intensity of the voxel that has to be mutated and its neighbouring voxels. The standard deviation σ is a free parameter and has to be specified. The normal distribution is always cut off for values outside of the interval [0, 1].
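The mutation step can be sketched as follows. Simple rejection sampling stands in for the truncated-normal generator of Burkhardt [44], and the six-nearest-neighbour stencil for the local mean is an assumption made for this illustration.

```python
import random

def truncated_normal(mean, sigma, lo=0.0, hi=1.0, rng=random):
    # rejection sampling: redraw until the value falls inside [lo, hi]
    while True:
        v = rng.gauss(mean, sigma)
        if lo <= v <= hi:
            return v

def mutate_voxel(box, x, y, z, sigma, rng=random):
    # mean over the voxel and its neighbours (stencil choice is illustrative)
    nx, ny, nz = len(box), len(box[0]), len(box[0][0])
    neigh = [(x, y, z), (x - 1, y, z), (x + 1, y, z), (x, y - 1, z),
             (x, y + 1, z), (x, y, z - 1), (x, y, z + 1)]
    vals = [box[i][j][k] for i, j, k in neigh
            if 0 <= i < nx and 0 <= j < ny and 0 <= k < nz]
    box[x][y][z] = truncated_normal(sum(vals) / len(vals), sigma, rng=rng)
```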

Algorithm 2 Evolution Step
1: Input is the population and mask
2: for i = 1 : GApars.PopSize do
3:    [MOM, DAD] ← select( POP )    ▷ Select two chromosomes
4:    OFF(i) ← average( MOM, DAD )    ▷ Merge to offspring
5:    if r1 < GApars.mrate then …

The annihilation operator randomly sets the intensity value of a defined number of voxels (annihilation intensity) to zero, with a certain probability (annihilation rate). Similarly, the local filter operator, applied with the filter rate probability, sets the intensity values of a defined number of voxels (filter intensity) to a value which is determined by filtering at a specific location. The filter kernel has weights of 1/2 for the central voxel and 1/12 for the six nearest neighbours. Locations where annihilation or filtering takes place are determined in a similar fashion as for mutation. The selection procedure continues until the new population size equals the preceding one.
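The annihilation and local filter operators can be sketched as follows; interior voxel locations are assumed for brevity, and the kernel weights (1/2 centre, 1/12 per nearest neighbour) are those stated in the text.

```python
import numpy as np

def annihilate(box, locations):
    # set the chosen voxels to zero intensity
    for x, y, z in locations:
        box[x, y, z] = 0.0

def local_filter(box, locations):
    # 1/2 weight on the central voxel, 1/12 on each of the six neighbours
    for x, y, z in locations:
        s = box[x - 1, y, z] + box[x + 1, y, z] + box[x, y - 1, z] + \
            box[x, y + 1, z] + box[x, y, z - 1] + box[x, y, z + 1]
        box[x, y, z] = 0.5 * box[x, y, z] + s / 12.0
```

Note that the kernel weights sum to one, so a locally uniform field is left unchanged by the filter.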
For an improved convergence behaviour elitism was implemented. Two notions of elitism were used: a weak form and a strong form. Weak elitism ensures that the best chromosome is selected with a probability of 1.0 in the selection process. The second chromosome to pair it with for offspring-formation is randomly selected. By selecting one chromosome randomly, the merged and mutated/annihilated individual might not be optimal anymore. The strong form is a classical elitism, meaning that the best individual is copied untouched to the next generation without alterations. Both types of elites can influence the selection process and lead to an increased convergence rate.

The Stochastic Mask
The ERT optimization task can be made more efficient by restricting the search space to voxels within a region of interest, rather than probing all available voxels, so that a solution is reached within a shorter run-time. To achieve this, we have implemented a stochastic mask using a Metropolis algorithm (MA). The MA stems from the Markov chain Monte-Carlo (MCMC) family of methods. Usually, the MA is used to sample random numbers from arbitrary density functions, or as a tool in Monte-Carlo integration. The idea is to create a Markov chain which has the desired target density as a stationary distribution. After a certain number of iterations the chain generates random numbers which are distributed according to the target density with good approximation. The original MA was developed by Metropolis et al. [45], and Hastings [46] refined it to a more general method in 1970. An introduction to the topic is given, e.g., by Chib and Greenberg [47].
In the first stage, the MA uses the reference images as prior information to sample random numbers for the mutation and annihilation process as a preparation step before the GA starts. The pixel intensities within an image represent a 2D function from which the MA can draw random numbers.
In our adaptation of the MA a new candidate of the Markov chain, which is a voxel, is determined by tracing a ray through a voxel in the domain to a pixel on the image plane. The Metropolis method divides the image intensity of this pixel by the pixel intensity value corresponding to the previous voxel of the Markov chain. A voxel is accepted or rejected as a new member of the Markov chain based on a consensus decision with the acceptance probability p acc = min(1, image(X p , Y p , i)/image(X 0 , Y 0 , i)) for each camera i. In Algorithm 3, the pseudo-code of a Metropolis sampling step is given. The function getPixel() returns the location of a point in world space on the camera sensor in pixel coordinates (X, Y), and image(X, Y, i) is the intensity stored at pixel (X, Y) for the image of camera i. The upper index 0 indicates the actual position of the chain and p is the new candidate. The scheme is expanded to include all available information, by iterating over all projections. This method of sampling leads to a distribution of values in 3D world space which can be used as locations at which the GA mutates a voxel's intensity. It shows similarities to a factorized MA [48], where a new member of the chain can be accepted on a consensus basis. Here the 3D target distribution is obviously not available and neither are its factors, but its projections are. In Figure 3, an example is given for a mask generated from 15 rendered images of a downsampled DNS dataset [35,36] of the Cambridge-Sandia stratified flame.
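The consensus acceptance test can be sketched as follows; the project function, which maps a voxel to pixel coordinates for a given camera, is a placeholder for getPixel().

```python
import random

def metropolis_step(current, candidate, images, project, rng=random.random):
    # accept the candidate voxel only if it passes the acceptance test
    # p_acc = min(1, I_new / I_old) for EVERY camera (consensus decision)
    for i, img in enumerate(images):
        x0, y0 = project(current, i)
        xp, yp = project(candidate, i)
        old = img[x0][y0]
        if old <= 0.0:
            continue  # avoid division by zero; this view raises no objection
        p_acc = min(1.0, img[xp][yp] / old)
        if rng() >= p_acc:
            return current  # rejected by this camera: chain stays in place
    return candidate        # accepted by all cameras
```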

Algorithm 3
First stage Metropolis sampling step.
1: function METROPOLISSTEP1( c0 = (x0, y0, z0) )    ▷ input is a start location c0
2: …

For the second stage, after the reconstruction has reached an advanced state and the population has evolved over a certain number of generations, the best fitting chromosome is used to sample random mutation locations according to its intensity field. Alternatively, the magnitude of the gradient of the intensity field can be used as a marker. This helps sharpen regions with large gradients in the intensity field, i.e., the edges of the reconstructed object, like the flame front. The gradient is calculated by using a central difference scheme. We interchangeably sample from both the intensity and the gradient fields, while keeping the obtained random locations for a certain number of generations before re-calculating them with a different field. For the second stage sampling the acceptance probability is given by the fraction of the intensity stored at a randomly drawn voxel and the previous voxel of the Markov chain, p acc = min(1, box(x p , y p , z p )/box(x 0 , y 0 , z 0 )), see the pseudo-code in Algorithm 4.
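The second-stage acceptance test can be sketched as follows, where box stands for the intensity (or gradient-magnitude) field of the best chromosome, indexed by voxel coordinates:

```python
import random

def metropolis_step2(box, current, candidate, rng=random.random):
    # p_acc = min(1, box[candidate] / box[current])
    old = box[current]
    if old <= 0.0:
        return candidate  # an empty voxel never blocks the chain
    p_acc = min(1.0, box[candidate] / old)
    return candidate if rng() < p_acc else current
```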

Parameter Study on Canonical Phantom Data
Before applying the ERT to complicated turbulent flame phantoms (LES and DNS data), we conducted a thorough parameter study on all relevant parameters, using simpler phantoms: a torus, a hollow sphere and a Gaussian-shaped intensity profile. We investigated the sensitivities of the ERT with respect to the population size, number of generations, mutation rate/intensity, σ, annihilation rate/intensity, filter rate/intensity and elites. The tomography scene was numerically set by placing the cameras with an equiangular spacing of ∆φ in a semicircular plane (θ = π/2) around the origin of the coordinate system.
The angular spacing between the cameras was calculated such that the camera at φ = 180° is omitted to avoid opposite views, as shown in Figure 4. Opposed views are largely correlated and hence add limited new information to the system for an integrated LOS measurement, for the optically thin flames targeted in this work. For the torus phantom 8 cameras were used, and for the hollow sphere/Gaussian profile phantom 10 cameras. The reconstruction domain was chosen to be NX = 40 by NY = 40 by NZ = 40. Accordingly, projections of the phantoms were created, with a resolution of 40 by 40 pixels, at the camera locations to generate the reference images. Reasonable dependencies between the parameters were observed. For example, lowering the mutation rate results in higher fitness values, and at the same time the number of generations might be adjusted to account for the slower convergence rate at the beginning, see Figure 5a. Here we want to highlight three major aspects, as they are very particular to our GA, namely the annihilation operator, the filter operator and the (second stage) stochastic mask. In Figure 5b the evolutionary path (absolute fitness of the best fitting chromosome) of a torus reconstruction with and without second stage MA sampling is displayed. The sudden improvement in the fitness value is clearly visible, which indicates that providing gradients as a source for mutation locations helps the reconstruction process reach higher fitness values within the same run-time.
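The camera placement can be sketched as follows; the assertion encodes the requirement that no camera lands at or beyond φ = 180°, so no opposite-view pair is generated.

```python
def camera_azimuths(n_cameras, dphi):
    # equiangular azimuths 0, dphi, ..., (n-1)*dphi in a semicircular plane;
    # the opposite view at 180 deg is never generated
    assert (n_cameras - 1) * dphi < 180.0
    return [i * dphi for i in range(n_cameras)]
```

For example, the Cambridge-Sandia setup below (15 cameras, ∆φ = 12°) spans 0° to 168°.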
For studying the effect of the annihilation operator we used the hollow sphere phantom. In Figure 6a the evolutionary paths for different annihilation rates are shown and in (b) the mean of the squared Manhattan distance of the images, mean SqMD (2D), and the squared Manhattan distance of the fields, SqMD (3D), are given. The Manhattan distance is normalized by the phantom field, similar to the error distance used by Mohri et al. [16]: SqMD = ( ∑_i |R_i − P_i| / ∑_i P_i )², where R is the reconstructed field and P the phantom field. The algorithm converges on a higher fitness level and both distance measures decrease with an increased annihilation rate, see Figure 6. The mutation rate was set to 0.30 in these reconstructions and must be reasonably balanced against the annihilation rate. Figures 5a and 6b indicate that picking an annihilation rate above the mutation rate is a good choice.
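A sketch of this error measure follows. The exact normalization of Mohri et al. [16] is not reproduced here, so the form below (Manhattan distance between reconstruction and phantom, normalized by the phantom field and squared) is an assumption read from the text.

```python
import numpy as np

def sqmd(R, P):
    # squared, phantom-normalized Manhattan distance (form is an assumption)
    R = np.asarray(R, float)
    P = np.asarray(P, float)
    return float((np.abs(R - P).sum() / P.sum()) ** 2)
```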
In Section 4.1 we show the reconstruction of an averaged chemiluminescence field of a Bunsen flame, which has flat intensity gradients in the images. The reconstruction of large regions with homogeneous intensities can also be challenging, since our genetic operators are applied voxel-wise. This might amplify salt and pepper noise in the reconstructions. Therefore, we want to attenuate possible artefacts by the application of a local filter to smooth the field where it is necessary. Figure 7 shows different filter rates for the reconstruction of a Gaussian-shaped intensity profile. The fitness values and error distances clearly improve with the filter applied.

Phantom Study on Three Generic Flame Types
The phantoms used in the parameter study are of limited complexity when compared to real flame chemiluminescence fields. For example, the torus is a uniform distribution of intensity values with no internal structures. Flames may exhibit arbitrary shapes with gradients, lean regions and hollow zones due to local extinction or the burner geometry. Therefore, for a more stringent validation of the algorithm we rely on flame simulation data since they will provide a numerical field that represents the flame very closely. All the flame phantoms used were generated by our in-house code 'PsiPhi' [49,50]. The synthetic CT setup generated for reconstructing the flame phantoms was equivalent to the real CT experiment in terms of number of cameras and their angular spacing. The high-resolution phantoms are downsampled to a smaller size in order to save the computational time required by the ERT. The correlation values and the normed error measure SqMD (3D) of the reconstructed field and the phantom field were calculated for all the flame phantom studies.
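The Pearson correlation between the reconstructed and phantom fields can be computed with a simple NumPy sketch over the flattened 3D arrays:

```python
import numpy as np

def correlation3d(R, P):
    # Pearson correlation coefficient between flattened 3D fields
    r = np.asarray(R, float).ravel()
    p = np.asarray(P, float).ravel()
    return float(np.corrcoef(r, p)[0, 1])
```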

The Bunsen Flame Phantom
A standard premixed natural gas/air laboratory Bunsen flame was considered here, which on average has a conical shape. The Bunsen simulation was previously generated for testing our background-oriented schlieren tomography (BOST) code [5]. The heat release rate (HRR) field was cropped and downsampled to a size of NX = 50 by NY = 60 by NZ = 50 voxels by cubic interpolation, resulting in a voxel resolution of 2 mm after binning. The synthetic CT setup consisted of 23 cameras, arranged in one plane with an equiangular spacing of 7.5°. The resolution of the images was set in accordance with the domain size to 50 by 60 pixels. The Bunsen phantom and correlation values (Pearson correlation coefficient) between the original phantom and its reconstruction are displayed in Figure 8. The ERT is able to reconstruct the geometry of the flame with very good accuracy and in agreement with the CTC reconstruction. The CTC reconstruction gives higher correlation values for the slices, as shown here, and for the 3D field (not shown). For the 3D field, the correlation from CTC was 0.9855 and from ERT was 0.9792. Here the 3D error distance of the ERT gave slightly better values than for the CTC. The values were SqMD CTC = 0.1369 and SqMD ERT = 0.0853. Figure 8. Slices normal to each spatial axis (x-normal, z-normal; y = 16.0, 36.0 and 76.0 mm above the burner exit); the correlation/normalized Manhattan distance to the phantom of each pair of slices is displayed between the images.

The Swirl Flame Phantom
The TECFLAM swirl burner provides a premixed methane/air flame through an annular slot that is shielded by a co-flow of air emanating from an outer annular slot [51,52]. The LES [16] was generated for an equivalence ratio of 0.83, a swirl number of 0.75 and a cold flow Reynolds number of 10,000. The LES ran on a grid containing 30 million cells, with a premixed generated manifold combustion model, in combination with an artificial flame thickening method [50]. Mohri et al. [16] used the simulation as a phantom for the first time in a high resolution CTC study, and Grauer et al. [5] used it as well for validation purposes. We cropped and downsampled the HRR field to a size of NX = 66 by NY = 43 by NZ = 66 voxels by cubic interpolation. The resulting field had a voxel resolution of 2 mm. The synthetic CT setup consisted of 24 cameras with an equiangular spacing of 7.5° and images of size 66 by 43 pixels were rendered. In the displayed slices, see Figure 9, it can be observed that the ERT was able to reconstruct the v-shaped flame with similar accuracy as the CTC method. The 3D correlation for the CTC reconstruction was 0.9778 and for the ERT it was 0.8911. The 3D error distances were SqMD CTC = 0.1983 and SqMD ERT = 0.4450. Figure 9. Slices normal to each spatial axis (x-normal, z-normal; y = 14.0, 24.0 and 44.0 mm above the burner exit); the correlation/normalized Manhattan distance to the phantom of each pair of slices is displayed between the images.

The Cambridge-Sandia Stratified Flame Phantom
The Cambridge-Sandia stratified burner is operated with premixed methane/air mixtures emanating from an inner and outer annular slot. Stabilization of the flame is achieved by a central ceramic bluff body and a co-flow of air. Different fuel-to-air ratios, Φ1 and Φ2, for the inner and outer flame can be used to investigate various modes of stratification. A sketch of the burner is given in Figure 10. Details of the flame conditions used in the current work are presented in Table 1. The flame has been well studied experimentally [53,54] and numerically [35,36,50,55,56]. The simulation by Proch et al. [35,36] was carried out on a grid with 1.6 billion cells with a resolution of 100 µm. A premixed flamelet generated manifold (PFGM) was used for combustion modelling, and the domain covered a region of 12 mm to 82 mm above the burner exit. The data was downsampled by cubic interpolation and cropped to a field of NX = 52 by NY = 55 by NZ = 52 voxels.
The dimensions of the reconstruction domain were chosen equivalently, and the pixel/voxel resolution was 1.32 mm. Fifteen cameras were placed in a half-circle in one plane with equiangular spacing of ∆φ = 12° and image resolution of 52 by 55 pixels. The results of the reconstruction are shown in Figure 11. The 3D spatial correlation from CTC was 0.9688 and from ERT was 0.9007, and the 3D error distances were SqMD CTC = 0.0781 and SqMD ERT = 0.2505. Both reconstructions clearly capture the shape of the flame and its internal structure.
To assess the stability of the reconstruction algorithm, we conducted 15 independent runs using the same synthetic setup and phantom. The mean of the squared 3D Manhattan distance was 0.2676 with a standard deviation of 0.0046. The mean of the 3D correlation was 0.8831 with a standard deviation of 0.0014. Figure 11. Slices normal to each spatial axis (x-normal, z-normal; y = 33.1, 59.5 and 72.8 mm above the burner exit); the correlation/normalized Manhattan distance to the phantom of each pair of slices is displayed between the images.

Applications to Experimental Data
Our generic experimental setup [16] consists of an array of Basler acA645-100 gm GigE cameras that are mounted on an aluminium plate in a semicircle with equiangular spacing of ∆φ degrees. The arrangement around the Cambridge-Sandia stratified flame is shown as an example in Figure 12a. Five Bronkhorst mass-flow controllers were used to regulate the flow rates, and flame arresters were installed for safety.
Each camera was connected via an Ethernet cable and a common multi-port network hub (Gigabit smart TL-SG2424P) to the computer. Simultaneous triggering was achieved by a single signal generator at 5 Hz. The cameras feature a 1/2" Sony ICX414 monochrome sensor with 659 by 494 pixels of size 9.9 µm. A Kowa C-mount lens with a focal length of 12 mm was used on all cameras, and the aperture was kept at its largest opening f/1.4 for all. First, the entire emission spectrum of the flame within the cameras' spectral response, which is in the visible range, was captured. The hot H2O blurred the images and we used filters to eliminate the H2O signal, similar to our previous work [16]. Figure 12b shows a sample image of the Cambridge-Sandia stratified flame from one camera view. All cameras were carefully aligned with the help of a checkerboard calibration target, which was mounted on the burner.

The Bunsen Flame
For the standard Bunsen flame the setup comprised 23 cameras with an equiangular spacing of ∆φ = 7.5°. We recorded 150 images at 5 Hz with an exposure time of 300 µs. After background subtraction, the images were averaged to give the mean chemiluminescence intensity from each camera perspective. The challenge with reconstructing such averaged flame fields is to deal with both smooth gradients and sharp edges at the same time. We cropped and downsampled the images to a size of 45 by 78 pixels. Accordingly, the reconstruction domain was NX = 45 by NY = 78 by NZ = 45 voxels in size, with a voxel resolution of 0.90 mm. The ERT has resolved the expected conical flame shape to a good degree, exhibiting more salt and pepper noise than the CTC, see Figure 13.
Figure 13. Vertical slices at the flame center (x-normal and z-normal), and horizontal slices at y = 4.5, 18.0 and 40.5 mm above the burner exit.

The Swirl Flame
The CTC setup for the swirl flame consisted of 24 cameras with an angular spacing of ∆φ = 7.5°, see Mohri et al. [16] for more details on the experiment. The cropped and downsampled images for this reconstruction had a resolution of 60 by 48 pixels. A domain of NX = 60 by NY = 48 by NZ = 60 voxels was chosen, which had a voxel resolution of 2.23 mm. The reconstructions show a very complex deformed flame structure, and the ERT resolved the key features, in good agreement with the CTC, see Figure 14.
Figure 14. Vertical slices at the flame center (x-normal and z-normal), and horizontal slices at y = 8.9, 13.8 and 24.5 mm above the burner exit.

The Cambridge-Sandia Stratified Flame
Here, 15 cameras were used with an angular spacing of ∆φ = 12°. The images, recorded with an exposure time of 350 µs, were cropped and downsampled to a size of 50 by 65 pixels. Accordingly, the reconstruction domain was discretized using NX = 50 by NY = 65 by NZ = 50 voxels with a resolution of 1.27 mm per voxel.
Figure 15. Vertical slices at the flame center (x-normal and z-normal), and horizontal slices at different heights above the burner exit: y = 19.1, 40.8 and 65.0 mm.

The reconstruction result is shown in Figure 15. The main shape and structure of the flame, as well as the central hollow region, were recovered well by the ERT. As in the previous cases, more salt-and-pepper noise is visible in the ERT result. Compared to the instantaneous DNS field, Figure 11, the structures in the experimental reconstruction are thicker. This is partly due to motion blurring in the flame images (dictated by the camera exposure time), and partly due to other factors such as the reconstruction domain resolution. Nonetheless, this experimental demonstration shows that the ERT is capable of resolving the flame chemiluminescence field with good qualitative accuracy compared to the CTC.

In Figure 16, a detailed overview of the correlation values for all three flame phantoms (left column) and experimental reconstructions (right column) is given. The graphs (a), (c) and (e) show the correlation between the original and reconstructed phantom using the CTC (CTC-PHA) and the ERT (ERT-PHA), and the correlation between the CTC and ERT (CTC-ERT), respectively. The data from the ERT is clearly more volatile and at a lower level than that of the CTC. The phantom studies show that the correlation decreases with increased height above the burner for both reconstruction techniques, although this is less pronounced for the CTC. A reason for this could be that when the flame covers a larger region at an increased height above the burner exit, the fan of rays of a perspective camera is less densely distributed across the voxels, compared to regions close to the central up-axis of the reconstruction domain. This is supported by Figure 16a, where the correlation data of the Bunsen flame does not decrease with increased height above the burner. The slices for this case, see Figure 8, show that the main structures of the flame are close to the central up-axis of the reconstruction domain.
For the experimental reconstructions the best agreement between ERT and CTC is obtained for the slices in the middle of the domain.
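The height-resolved correlation used in this comparison can be illustrated by a per-slice Pearson coefficient; the helper name and the convention that y is the second axis are assumptions:

```python
import numpy as np

def slice_correlation(a, b, height_axis=1):
    """Pearson correlation between two 3D fields, per horizontal slice.

    a, b: (NX, NY, NZ) fields, e.g. phantom and reconstruction; treating
    the second axis as height (y) is an assumed convention. Illustrative
    helper, not the paper's implementation.
    """
    corr = []
    for iy in range(a.shape[height_axis]):
        sa = np.take(a, iy, axis=height_axis).ravel()
        sb = np.take(b, iy, axis=height_axis).ravel()
        sa, sb = sa - sa.mean(), sb - sb.mean()
        denom = np.sqrt((sa ** 2).sum() * (sb ** 2).sum())
        corr.append((sa * sb).sum() / denom if denom > 0 else 0.0)
    return np.array(corr)
```

Because the coefficient is invariant to scale and offset, a reconstruction that recovers the relative intensity distribution of a slice scores 1 even if its absolute level differs.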

Quantitative Comparisons-Phantoms and Experiments
In Figure 17, scatter plots for the phantom and experimental reconstructions are given (normalized data). For two identical data sets, the values would lie on a 45° line through the origin of the coordinate system. The CTC method shows a narrower distribution in the phantom studies. The highest degree of similarity is achieved for the Bunsen phantom study and experiment.
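A simple scalar summary of such normalized scatter plots is the RMS perpendicular distance of the point cloud from the 45° identity line; this is an illustrative metric, not necessarily the paper's exact measure:

```python
import numpy as np

def scatter_deviation(x, y):
    """RMS perpendicular distance of normalized (x, y) pairs from y = x.

    For two identical data sets the deviation is zero; a narrower scatter
    around the 45-degree line gives a smaller value. Illustrative metric.
    """
    x = np.asarray(x, float) / np.max(x)
    y = np.asarray(y, float) / np.max(y)
    # Perpendicular distance of a point (x, y) from the line y = x is |y - x| / sqrt(2)
    return np.sqrt(np.mean((y - x) ** 2 / 2.0))
```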

Conclusions
We have successfully demonstrated that a reconstruction scheme based on an evolutionary program is able to reconstruct 3D scalar fields (flame chemiluminescence), and gave a proof of concept for the ERT. Genetic operators such as selection, crossover, mutation, annihilation and filtering were tailored to the needs of the ERT. Additionally, a stochastic mask based on a Metropolis algorithm was introduced to reduce the dimensionality of the search space.
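The idea behind the Metropolis-based mask, sampling operator locations with probability proportional to the local field intensity, can be sketched in one dimension; the proposal rule, step size and all other parameters here are illustrative, not those of the published mask, which operates on the full 3D field:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_mask(intensity, n_samples, step=20.0):
    """Sample voxel indices with probability ~ local intensity via Metropolis.

    The chain random-walks over the flattened field; a proposed move is
    always accepted uphill and accepted downhill with probability equal
    to the intensity ratio. Illustrative 1D sketch only.
    """
    field = intensity.ravel().astype(float) + 1e-12  # avoid zero-probability states
    n = field.size
    idx = int(rng.integers(n))
    samples = np.empty(n_samples, dtype=int)
    for k in range(n_samples):
        prop = int(np.clip(idx + rng.normal(0.0, step), 0, n - 1))
        if rng.random() < min(1.0, field[prop] / field[idx]):
            idx = prop  # accept the proposed location
        samples[k] = idx
    return samples

# Samples concentrate where the (synthetic) intensity is high,
# so genetic operators would mostly be applied inside the flame region
intensity = np.zeros(100)
intensity[40:60] = 1.0
samples = metropolis_mask(intensity, 2000)
```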
We have outlined the concept of the ERT and presented the fundamentals of our reconstruction algorithm. The sensitivities of the most important parameters were discussed. The algorithm was applied to several phantom fields derived from simulations, representing the true structure of a chemiluminescent flame. The reconstructions showed good overall agreement with the phantom data in terms of correlations and error distance measures, and were in line with our established CTC reconstructions. Additionally, we performed reconstructions with real flame chemiluminescence data. Three different flame types were investigated by the ERT and the CTC method. Visual inspection and correlation numbers showed that both methods give comparable results.
For the phantom studies, our CTC technique mostly gave better results, quantified in terms of correlation and error distances. This stems partly from the more advanced camera model used in the CTC, but also from the fact that the ERT's parameters were tuned to obtain results within an acceptable run-time for all of our reconstructions. The run-time depends substantially on the image rendering time, even though the program is executed as a parallel application. Massive parallelization, as available on GPUs, could help to improve this aspect.