This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).

Mesh-based Monte Carlo techniques for optical imaging allow for accurate modeling of light propagation in complex biological tissues. Recently, they have been developed within an efficient computational framework to be used as a forward model in optical tomography. However, commonly employed adaptive mesh discretization techniques have not yet been implemented for Monte Carlo-based tomography. Herein, we propose a methodology to optimize the mesh discretization and analytically rescale the associated Jacobian based on the characteristics of the forward model. We demonstrate that this method maintains the accuracy of the forward model, even in the case of temporal data sets, while allowing for significant coarsening or refinement of the mesh.

Fluorescence Molecular Tomography (FMT) is a highly sensitive molecular imaging modality which benefits from the availability of numerous molecular probes and relatively low cost. FMT is especially valuable in preclinical studies, as it allows for three-dimensional, quantitative reconstructions of the distribution of fluorescence probes in tissue. This has numerous applications in fields such as drug discovery and gene therapy [

The Monte Carlo (MC) method simulates the path of numerous photons through complex tissue to sample the imaged volume with suitable statistical accuracy [

For arbitrary domain geometry, the FMT problem poses a tradeoff between accuracy and computational efficiency [

Indeed, Monte Carlo methods require simulating the propagation of a large number of photons (10^{6}–10^{9}) per simulated optode, depending on the data type, in preclinical settings [

The goal of FMT is to retrieve the 3-D distribution of a fluorophore typically expressed as its effective quantum yield:

The details of the mesh Monte Carlo method used here can be found in reference [

Mesh adaptation was achieved using the method laid out in [

Flowchart of the mesh iterative adaptation program.

Size factors greater than one lead to mesh coarsening, whereas size factors lower than one lead to mesh refinement. This size field is created from a solution field relevant to the formulation of the FMT inverse problem. To convert the solution field to a size field, each value of the solution field is compared to two thresholds: a lower one and an upper one. If the solution value is below the lower threshold, the size factor at that node is set to a prescribed maximum size factor. If it is above the upper threshold, the size factor is set to a prescribed minimum size factor. Otherwise, when between the two thresholds, the solution field value is converted to a size factor
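The threshold rule above can be sketched as follows. This is a minimal illustration, not the authors' code: the paper does not specify how intermediate values are converted, so the linear ramp between the two thresholds (and the example size-factor bounds of 0.8 and 1.25) are assumptions made here for illustration.

```python
import numpy as np

def solution_to_size_field(solution, lo_thresh, hi_thresh,
                           min_factor=0.8, max_factor=1.25):
    """Map a nodal solution field to per-node size factors.

    Nodes with low solution values (little influence on the forward
    model) receive the maximum size factor (coarsening); nodes with
    high values receive the minimum size factor (refinement). The
    linear ramp between the thresholds is an assumption.
    """
    solution = np.asarray(solution, dtype=float)
    size = np.empty_like(solution)

    low = solution <= lo_thresh
    high = solution >= hi_thresh
    mid = ~low & ~high

    size[low] = max_factor   # weakly sampled region -> coarsen
    size[high] = min_factor  # sensitive region -> refine
    # Assumed linear interpolation: higher value -> smaller factor.
    t = (solution[mid] - lo_thresh) / (hi_thresh - lo_thresh)
    size[mid] = max_factor + t * (min_factor - max_factor)
    return size
```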

Two main families of operations are used to achieve adaptation: split and collapse operations. Split operations introduce a new node to fragment a volume element, refining the mesh. A new node may be placed in the center of an edge, a face, or a region, and new edges are then drawn accordingly. Thus, the numbers of nodes and elements increase, while the average edge length in the region is shortened.
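The edge-split case can be illustrated with a minimal sketch. This is not the mesh adaptation tool used in the paper (which also handles face and region splits, collapse operations, and quality control); it only shows how one midpoint node turns each tetrahedron sharing an edge into two, shortening local edge lengths.

```python
def split_tet_edge(tets, nodes, edge):
    """Split every tetrahedron sharing `edge` with a midpoint node.

    `nodes` is a list of 3-D coordinates, `tets` a list of 4-tuples
    of node indices. Simplified illustration of an edge split only.
    """
    a, b = edge
    # New node at the center of the edge.
    mid = [(nodes[a][i] + nodes[b][i]) / 2.0 for i in range(3)]
    nodes.append(mid)
    m = len(nodes) - 1

    new_tets = []
    for tet in tets:
        if a in tet and b in tet:
            # Replace each endpoint in turn with the midpoint:
            # one element becomes two smaller elements.
            new_tets.append(tuple(m if v == a else v for v in tet))
            new_tets.append(tuple(m if v == b else v for v in tet))
        else:
            new_tets.append(tet)
    return new_tets
```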

Examples of an (

The mesh adaptation is an iterative process in which the new mesh and a new solution field are used as inputs for the next iteration step. The process can be repeated until set stopping criteria (

The Jacobian is a system matrix

When each output node is compared to the input nodes, three cases may occur. First, the node has not been modified during the adaptation process; the Jacobian value is simply carried over to this node in the new mesh (
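The carry-over case above can be sketched alongside an interpolation case for newly created nodes. Only the first case is fully specified in the text, so the barycentric interpolation of new-node values shown here is an illustrative assumption, and the `node_map`/`parents`/`weights` inputs are hypothetical names, not the authors' data structures.

```python
import numpy as np

def rescale_jacobian(J_old, node_map, parents, weights):
    """Transfer nodal Jacobian values onto an adapted mesh.

    J_old has shape (n_measurements, n_old_nodes).
      * node_map[j] = i  : output node j is unchanged input node i
                           (value carried over);
      * node_map[j] = -1 : output node j is new (e.g. from a split);
        its value is interpolated from `parents[j]` with assumed
        barycentric `weights[j]`.
    """
    n_meas, _ = J_old.shape
    J_new = np.zeros((n_meas, len(node_map)))
    for j, i in enumerate(node_map):
        if i >= 0:
            J_new[:, j] = J_old[:, i]            # case 1: carry over
        else:
            # Assumed case: interpolate from parent input nodes.
            for p, w in zip(parents[j], weights[j]):
                J_new[:, j] += w * J_old[:, p]
    return J_new
```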

The possible cases in the output node positioning. The upper row corresponds to the input discretization and lower row to the output mesh.

The 3D digimouse model was employed to create an anatomically accurate

(

Mesh adaptation in the forward model space is expected to improve the computational efficiency of the Monte Carlo simulations without sacrificing accuracy in the forward model or reconstruction. The computational cost of mMC is mainly associated with the number of photons that must be launched to reach statistical stability. Among the parameters that dictate the number of photons required, the level of discretization plays a critical role. As element volumes diminish, fewer photons sample these elements, leading to poor statistics. Moreover, in preclinical imaging, where transmission geometry is used for optimal tomographic performance, volume elements farther from the source are less likely to be visited by photons. Even if this issue can be mitigated with adjoint methods [

The mesh adaptation process is dependent on a few parameters which are set

To establish the best maximum size factor for our application, mesh adaptations with various maximum size factor values were conducted until convergence was reached, using the cross-sectional mesh configuration described in

Mesh characteristics at convergence under various size factor ranges.

| Size Factor | Iterations | Nodes | Elements | Max. Elem. Vol. (mm^{3}) |
|---|---|---|---|---|
| Initial | -- | 2396 | 7574 | 0.08 |
| 1.15 | 17 | 1332 | 3904 | 0.51 |
| 1.25 | 21 | 777 | 2200 | 1.49 |
| 1.35 | 20 | 612 | 1615 | 2.97 |
As the maximum size factor increases, the mesh coarsening becomes more pronounced, as expected. The coarsening leads to a maximum (minimum) reduction in the number of nodes by ×4 (×2) and in the number of elements by almost ×5 (×2), compared to the initial mesh. The maximum element volume is almost 6 times larger for a maximum size factor of 1.35 than for 1.15. However, volume element sizes are difficult to predict, as the coarsening is mainly based on edge lengths. For instance, for a size factor of 1.35, the change in maximum element size from one iteration to the next can be as high as 125% and as low as 12%. Hence, the convergence rate is not as stable as for smaller size factors. Additionally, for the smallest size factor (1.15), even though convergence is stable and achieved at an earlier iteration, the coarsening is limited. Hence, a maximum size factor of 1.25 is considered optimal, as it provides stable convergence and a significant reduction in mesh elements (×3) and nodes (×3.5).

The mesh under different maximum size factors after (^{3}).

The solution field is the input used to derive the size field and hence determines the areas of the mesh that should be adapted, based on the solution values at each node. Herein, the solution is computed based on the forward model,

Once the solution field is obtained, a size field is produced to be used as an input for the mesh adaptation procedure. As mentioned above, the size field is computed based on the median value of the solution field and set thresholds. Here, the lower threshold was set to 20% of the median and the upper threshold to 8 times the lower. Under the varying solution fields, each mesh is brought to a unique convergence point as with the size factors. The characteristics of the optimized meshes at convergence are provided in

Mesh characteristics at convergence under different solution fields.

| Field | Iterations | Nodes | Elements | Max. Elem. Vol. (mm^{3}) |
|---|---|---|---|---|
| ∑Jacobian | 21 | 777 | 2200 | 1.49 |
| Log(∑Jacobian) | 12 | 1487 | 4335 | 0.25 |
| Normalized ∑J | 21 | 1053 | 3074 | 0.48 |
| Curvature | 15 | 447 | 1105 | 1.55 |
| Log(Curv.) | 20 | 668 | 1651 | 1.37 |

Overall, the sum of ^{3}, the smallest maximum volumes achieved at convergence. This allows for comparison of all solution fields as summarized in

Mesh characteristics under different solution fields with a common maximum element volume stopping criterion.

| Field | Iterations | Nodes | Elements | Max. Elem. Vol. (mm^{3}) |
|---|---|---|---|---|
| ∑Jacobian | 2 | 1614 | 4872 | 0.26 |
| Log(∑Jacobian) | 5 | 1611 | 4792 | 0.25 |
| Normalized ∑J | 5 | 1457 | 4334 | 0.25 |
| Curvature | 2 | 1136 | 3304 | 0.31 |
| Log(Curv.) | 2 | 1608 | 4846 | 0.25 |

Upper row: mesh at convergence (^{3} stopping criterion for the respective input fields.

In this case, the curvature and

The solution fields which have been investigated up to this point require a pre-computed Jacobian to start the adaptation process. This can render the whole process computationally demanding. Another option for creating the solution field is to use the

Mesh at convergence for (

Mesh characteristics at convergence for Jacobian-free solution fields compared to the ∑Jacobian field.

| Field | Iterations | Nodes | Elements | Max. Elem. Vol. (mm^{3}) |
|---|---|---|---|---|
| ∑Jacobian | 21 | 777 | 2200 | 1.49 |
| Dist. | 11 | 1675 | 5166 | 0.28 |
| Att. | 17 | 699 | 1999 | 1.80 |

At convergence, the

MC-based FMT reconstructions are used when the DE fails to adequately model light propagation. As, in the proposed method, an analytical rescaling is performed locally to adjust the mMC Jacobian to the new discretization, it is crucial to assess whether model accuracy is maintained. To do so, we consider time-resolved data types, for which the early part is known to be poorly modeled by the DE. A Temporal Point Spread Function (TPSF) was computed from the mMC-based Jacobian (10^{9} photons) prior to adaptation and from the analytically rescaled Jacobian after adaptation (10^{9} photons). Moreover, new mMC simulations were computed to obtain a mMC Jacobian on the new adapted mesh. An example of TPSFs produced for one specific source-detector pair is provided in

Temporal Point Spread Function (TPSF) of Jacobians rescaled to a new mesh and the associated error at each gate.
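A minimal sketch of how a TPSF and its gate-wise error can be formed from a time-resolved Jacobian. The (n_gates, n_nodes) storage layout is an assumption made here, not a detail given in the paper, and the error definition shown is a plain relative error, not necessarily the exact metric the authors used.

```python
import numpy as np

def tpsf_from_jacobian(J_time):
    """Collapse a time-resolved Jacobian of assumed shape
    (n_gates, n_nodes) into a TPSF by summing the nodal
    sensitivity at each time gate."""
    return np.asarray(J_time, dtype=float).sum(axis=1)

def gatewise_relative_error(tpsf_ref, tpsf_test):
    """Relative error at each gate against a reference TPSF;
    gates where the reference is zero are reported as 0 error."""
    ref = np.asarray(tpsf_ref, dtype=float)
    test = np.asarray(tpsf_test, dtype=float)
    err = np.zeros_like(ref)
    nz = ref != 0
    err[nz] = np.abs(test[nz] - ref[nz]) / np.abs(ref[nz])
    return err
```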

The goal of the mesh adaptation described herein is to conserve the stochastic reliability of the forward model while decreasing the computational burden. If more photons reach the center elements (those with poor statistics in the adjoint method), fewer photon packets can be simulated for each optode. The relationship between computational time and the number of photons simulated is linear, as shown in

(

The high photon count reference Jacobian was computed using 10^{10} photons per optode. The error was computed for the Jacobian at the time corresponding to the 25% rising gate of the TPSF. A summary of the
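Selecting the gate at 25% of the rising edge of the TPSF can be sketched as below. This is an illustrative assumption about the selection rule (first gate on the rising edge reaching 25% of the peak, without sub-gate interpolation); the paper does not detail the exact procedure.

```python
import numpy as np

def rising_gate_index(tpsf, fraction=0.25):
    """Index of the first gate on the rising edge of the TPSF whose
    amplitude reaches `fraction` of the peak value."""
    tpsf = np.asarray(tpsf, dtype=float)
    peak = int(tpsf.argmax())
    threshold = fraction * tpsf[peak]
    # Scan the rising edge only (gates up to and including the peak).
    for i in range(peak + 1):
        if tpsf[i] >= threshold:
            return i
    return peak
```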

Errors in the forward model central nodes before and after

| Photons | Initial TG | Final TG ∑ | Final TG |
|---|---|---|---|
| 10^{9} | 10.84% | 11.71% | 13.99% |
| 10^{8} | 36.83% | 31.39% | 35.55% |
| 10^{7} | 61.01% | 60.13% | 58.57% |
| 10^{6} | 86.71% | 90.92% | 151.43% |

In all cases, except for 10^{6} photons, ^{10} photon simulations on the initial mesh led to ^{9} produced the least ^{8} photons, which is the typical number of photons used successfully for preclinical studies [

Additionally, we estimated the computational efficiency of the analytically rescaled Jacobian compared to MC-recomputed Jacobians at each iteration. Time-resolved MC Jacobians were computed on 64 nodes of the CCNI's Blue Gene/Q system at RPI, whereas Jacobian rescaling was performed on a personal computer (i7-4930K Six-Core 3.40 GHz, 12 MB Intel Smart Cache, LGA2011, 64 GB DDR3/1600 MHz memory) using in-house Matlab codes. To provide a meaningful comparison in terms of application, the simulations were performed on a 3D mesh for whole-body small animal imaging. The small animal was discretized into 15,581 nodes and 92,713 elements. 60 wide-field sources and 96 point detectors were employed [

Mesh-based Monte Carlo techniques are relatively recent developments that promise improved computational efficiency for optical tomography. However, these techniques do not currently benefit from mesh optimization techniques. Herein, we tested different solution fields with the goal of optimizing the forward model for computational efficiency. We found that the

The authors thank Cameron Smith (Scientific Computation Research Center, Rensselaer Polytechnic Institute, Troy, NY 12180) for help with the mesh optimization tool. This work was partly funded by the National Institutes of Health grant R01 EB19443 and National Science Foundation CAREER AWARD CBET-1149407.

Andrew Edmans performed all computational development and validations. Xavier Intes devised the project, supervised the implementation, and wrote the paper.

The authors declare no conflict of interest.