# Stochastic Neural Networks for Automatic Cell Tracking in Microscopy Image Sequences of Bacterial Colonies


## Abstract


## 1. Introduction

#### 1.1. Related Work

#### 1.2. Contributions

#### 1.3. Outline

## 2. Datasets

#### 2.1. Synthetic Videos of Simulated Cell Colonies

#### 2.2. Laboratory Image Sequences (Real Biological Data)

#### 2.3. Cell Characteristics

**Cell Geometry.** In accordance with the dynamics of bacterial colonies in microfluidic traps, the dynamic simulation software generates colonies of rod-shaped bacteria. Cell shapes can be approximated by long and thin ellipsoids, which are geometrically well identified by their center, their long axis, and the two endpoints of this long axis. The center $c\left(b\right)$ is the centroid of all pixels belonging to cell b. The long axis $A\left(b\right)$ of cell b is computed by principal component analysis (PCA). The endpoints $e\left(b\right)$ and $h\left(b\right)$ of cell b are the first and last cell pixels closest to $A\left(b\right)$; see Figure 2 (right) for a schematic illustration.
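These geometric descriptors can be computed directly from a cell's pixel mask. The following is a minimal `numpy` sketch (our own illustration, not the paper's reference implementation; here the endpoints are taken as the pixels with extreme projections onto the PCA axis, a simple variant of the rule above):

```python
import numpy as np

def cell_geometry(pixels):
    """Estimate center, long-axis direction, and endpoints of a rod-shaped cell.

    `pixels` is an (n, 2) array of pixel coordinates belonging to cell b.
    """
    pixels = np.asarray(pixels, dtype=float)
    center = pixels.mean(axis=0)                 # centroid c(b)
    # PCA: leading eigenvector of the covariance gives the long-axis direction.
    cov = np.cov((pixels - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis_dir = eigvecs[:, np.argmax(eigvals)]    # unit vector along A(b)
    # Project pixels onto the axis; extreme projections give the endpoints.
    proj = (pixels - center) @ axis_dir
    e, h = pixels[np.argmin(proj)], pixels[np.argmax(proj)]
    return center, axis_dir, e, h
```

For a horizontal rod of pixels, the centroid sits at the middle of the rod and the recovered axis is (up to sign) the horizontal direction.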

**Cell Neighbors.** For each image frame J, denote by $B=B\left(J\right)$ the set of fully visible cells in J, and by $N=N\left(J\right)=card\left(B\right)$ the number of these cells. Let V be the set of all cell centers $c\left(b\right)$ with $b\in B$. Denote by $\mathit{delV}$ the Delaunay triangulation [89] of the finite planar set V with N vertices. We say that two cells ${b}_{1}$, ${b}_{2}$ in B are neighbors if they satisfy the following three conditions: (i) ${b}_{1}$ and ${b}_{2}$ are connected by an edge $\mathit{edg}$ of one triangle in $\mathit{delV}$. (ii) The edge $\mathit{edg}$ does not intersect any other cell in B. (iii) Their centers satisfy $\parallel c\left({b}_{1}\right)-c\left({b}_{2}\right)\parallel \le \rho $, where $\rho >0$ is a user-defined parameter.
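The Delaunay-based neighbor test is easy to prototype with `scipy.spatial.Delaunay`. This sketch (ours, assuming `scipy` is available) implements conditions (i) and (iii); the segment–cell intersection test (ii) depends on the pixel masks of the cells and is omitted here:

```python
import numpy as np
from scipy.spatial import Delaunay

def neighbor_pairs(centers, rho):
    """Candidate neighbor pairs from a Delaunay triangulation of cell centers.

    Implements conditions (i) and (iii) of the neighbor definition; the
    edge/cell intersection test (ii) requires pixel masks and is not shown.
    """
    centers = np.asarray(centers, dtype=float)
    tri = Delaunay(centers)
    pairs = set()
    for simplex in tri.simplices:                 # each triangle in delV
        for i in range(3):
            a, b = sorted(map(int, (simplex[i], simplex[(i + 1) % 3])))
            if np.linalg.norm(centers[a] - centers[b]) <= rho:
                pairs.add((a, b))                 # edge passing the rho test
    return pairs
```

For three mutually close centers and one far-away outlier, only the edges among the close centers survive the distance threshold.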

**Cell Motion.** Let J, ${J}_{+}$ denote two successive images (i.e., frames), and denote by $B=B\left(J\right)$, ${B}_{+}=B\left({J}_{+}\right)$ the associated sets of cells. Temporarily superpose the images J and ${J}_{+}$ so that they have the same center pixel. Any cell $b\in B$ that does not divide in the interframe $J\to {J}_{+}$ becomes a cell ${b}_{+}$ in image ${J}_{+}$. The “motion vector” of cell b from frame J to ${J}_{+}$ is then defined by $v\left(b\right)=c\left({b}_{+}\right)-c\left(b\right)$. If the cell b does divide between J and ${J}_{+}$, denote by ${b}_{\mathrm{div}}$ the last position reached by cell b at the time of cell division, and define similarly the motion $v\left(b\right)=c\left({b}_{\mathrm{div}}\right)-c\left(b\right)$. In our experimental recordings of real bacterial colonies with an interframe duration of 6 min, there is a fixed number $w>0$ such that $\parallel v\left(b\right)\parallel \le w/2$ for all cells $b\in B\left(J\right)$ and all pairs J, ${J}_{+}$. In particular, we observed that, for real image sequences, $w=100$ pixels is an adequate choice. Consequently, we select $w=100$ pixels for all simulated image sequences of BENCH6. For BENCH1, we select $w=45$ pixels, again based on a comparison with real experimental recordings. Overall, the meta-parameter w is assumed to be fixed and known, since $w/2$ is an observable upper bound on the cell motion norm for a particular image sequence of a lab experiment.

**Target Window.** Recall that J, ${J}_{+}$ are temporarily superposed. Let $U\left(b\right)\subset {J}_{+}$ be a square window of width w, with the same center as cell b. The target window $W\left(b\right)$ is the set of all cells in ${B}_{+}$ having their centers in $U\left(b\right)$. Since $\parallel v\left(b\right)\parallel \le w/2$, the cell ${b}_{+}$ must belong to the target window $W\left(b\right)\subset {B}_{+}$.
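The target-window construction amounts to a coordinate-wise box test on the superposed frames; a short sketch (our own illustration):

```python
import numpy as np

def target_window(b_center, centers_plus, w):
    """Indices of cells in B+ whose centers lie in the w-by-w window U(b)
    centered at c(b) (frames temporarily superposed).

    Since ||v(b)|| <= w/2, the successor b+ of b is guaranteed to be
    among the returned candidates.
    """
    centers_plus = np.asarray(centers_plus, dtype=float)
    offsets = np.abs(centers_plus - np.asarray(b_center, dtype=float))
    inside = np.all(offsets <= w / 2.0, axis=1)   # within w/2 in both axes
    return np.flatnonzero(inside)
```

With $w=100$, a candidate cell is retained exactly when each coordinate of its center is within 50 pixels of $c(b)$.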

## 3. Methodology

#### 3.1. Registration Mappings

- **Case 1:** Cell $b\in B$ did not divide in the interframe $J\to {J}_{+}$ and has become a cell $f\left(b\right)\in {B}_{+}$; that is, $f\left(b\right)$ has grown and moved during the interframe time interval.
- **Case 2:** Cell $b\in B$ divided between J and ${J}_{+}$ and generated two children cells ${b}_{1},{b}_{2}\in {B}_{+}$; we then denote $f\left(b\right)=({b}_{1},{b}_{2})\in {B}_{+}\times {B}_{+}$.
- **Case 3:** Cell $b\in B$ disappeared in the interframe $J\to {J}_{+}$, so that $f\left(b\right)$ is not defined.
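One simple way to represent such a registration mapping in code (the representation is our own choice, not prescribed by the paper) is a dictionary whose values distinguish the three cases:

```python
def classify(f, b):
    """Which of the three registration cases applies to cell b under f?

    f maps a cell id to a successor id (case 1), to a pair of children ids
    (case 2), or has no entry for b at all (case 3: b disappeared).
    """
    if b not in f:
        return 3          # f(b) is not defined
    return 2 if isinstance(f[b], tuple) else 1
```

For example, `f = {"a": "a+", "b": ("b1", "b2")}` encodes one surviving cell, one division, and (implicitly) the disappearance of any cell absent from the keys.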

#### 3.2. Calibration of Cost Function Weights

We solve this calibration problem numerically with `CVXPY` and `DCCP` (disciplined convex-concave programming) [84,85,86]. `DCCP` is a package for convex-concave programming designed to solve non-convex problems (`DCCP` can be downloaded at https://github.com/cvxgrp/dccp (last accessed on 20 January 2022)). It can handle objective functions and constraints with any known curvature as defined by the rules of disciplined convex programming [92]. We give examples of numerically computed weight vectors $\mathsf{\Lambda}$ below. The computing time was less than 30 s for the data that we prepared. For simplicity, we considered only one-step changes in our computations, which makes the overlap penalty weak. To increase the accuracy of the model, one can consider a larger number of samples (i.e., multi-step changes). Note that the solutions $\mathsf{\Lambda}$ of (2) are of course not unique, even after normalization by rescaling.
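The actual calibration solves a disciplined convex-concave program with `CVXPY`/`DCCP`. As a rough stand-in that only illustrates the data flow (pairing ground-truth registrations with perturbed "one-step-changed" ones and asking the weighted penalties to separate them), here is a naive nonnegative least-squares sketch; the margin formulation, clipping, and normalization are all assumptions of this sketch:

```python
import numpy as np

def calibrate_weights(good_feats, bad_feats, margin=1.0):
    """Toy stand-in for the CVXPY/DCCP calibration of the cost weights Λ.

    Each row of good_feats / bad_feats holds the penalty-feature vector of a
    ground-truth registration and of a perturbed registration. We ask
    Λ · (bad - good) ≈ margin in the least-squares sense, then clip Λ to
    nonnegative values and normalize by rescaling.
    """
    D = np.asarray(bad_feats, float) - np.asarray(good_feats, float)
    lam, *_ = np.linalg.lstsq(D, np.full(len(D), margin), rcond=None)
    lam = np.clip(lam, 0.0, None)     # penalties should be weighted >= 0
    return lam / lam.sum()            # normalization by rescaling
```

When each perturbation raises exactly one penalty by the same amount, the sketch returns equal weights, as expected by symmetry.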

#### 3.3. Cell Divisions and Parent–Children Short Lineages

#### 3.3.1. Cell Divisions

#### 3.3.2. Most Likely Parent Cell for a Given Children Pair

#### 3.3.3. Penalties to Enforce Adequate Parent–Children Links

**“Lineage” Penalty.** Valid children pairs $({b}_{1},{b}_{2})\in \mathit{PCH}$ should be correctly matchable with their most likely parent cell ${b}^{*}=parent({b}_{1},{b}_{2})$ (see (8)). Thus, we define the “lineage” penalty $lin({b}_{1},{b}_{2})=dist({b}^{*},{b}_{1},{b}_{2})$ by

**“Gap” Penalty.** Denote by $\mathit{tips}\left(b\right)$ the set of two endpoints of any cell b. For any pair $\mathit{pch}=({b}_{1},{b}_{2})\in \mathit{PCH}$, define endpoints ${x}_{1}\in \mathit{tips}\left({b}_{1}\right),{x}_{2}\in \mathit{tips}\left({b}_{2}\right)$ and the gap penalty $gap({b}_{1},{b}_{2})$ by

**“Dev” Penalty.** For rod-shaped bacteria, a true pair $({b}_{1},{b}_{2})\in \mathit{PCH}$ of just born children must have a small $gap({b}_{1},{b}_{2})=\parallel {x}_{1}-{x}_{2}\parallel $ and roughly aligned cells ${b}_{1}$ and ${b}_{2}$. For $({b}_{1},{b}_{2})\in \mathit{PCH}$, we quantify the deviation from alignment $dev({b}_{1},{b}_{2})$ as follows. Let ${x}_{1}$, ${x}_{2}$ be the closest endpoints of ${b}_{1}$, ${b}_{2}$ (see (9)). Let ${\mathit{str}}_{12}$ be the straight line linking the centers ${c}_{1}$, ${c}_{2}$ of ${b}_{1}$, ${b}_{2}$. Let ${d}_{1}$, ${d}_{2}$ be the distances from ${x}_{1}$, ${x}_{2}$ to the line ${\mathit{str}}_{12}$. Then, set

**“Ratio” Penalty.** True children pairs must have nearly equal lengths. Thus, for $({b}_{1},{b}_{2})\in \mathit{PCH}$ with lengths ${L}_{1}$, ${L}_{2}$, we define the length ratio penalty by

**“Rank” Penalty.** Let ${L}_{\mathrm{min}}$ be the minimum cell length over all cells in ${B}_{+}$. In ${B}_{+}$, children pairs $({b}_{1},{b}_{2})$ just born during the interframe $J\to {J}_{+}$ must have lengths ${L}_{1}$, ${L}_{2}$ close to ${L}_{\mathrm{min}}$. Thus, for $({b}_{1},{b}_{2})\in \mathit{PCH}$, we define the rank penalty by
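The displayed formulas for these penalties are not reproduced in the text above, so the following sketch implements plausible versions from the verbal definitions; the exact functional forms (minimum over endpoint pairs for gap, max of $d_1$, $d_2$ for dev, log-ratio for ratio) are assumptions of this sketch, not the paper's equations:

```python
import numpy as np

def gap_penalty(tips1, tips2):
    """Smallest distance between an endpoint of b1 and an endpoint of b2."""
    return min(np.linalg.norm(np.subtract(x1, x2))
               for x1 in tips1 for x2 in tips2)

def point_line_distance(p, a, b):
    """Distance from point p to the straight line through a and b."""
    p, a, b = (np.asarray(v, dtype=float) for v in (p, a, b))
    d = b - a
    cross = d[0] * (p - a)[1] - d[1] * (p - a)[0]   # 2D cross product
    return abs(cross) / np.linalg.norm(d)

def dev_penalty(x1, x2, c1, c2):
    """Deviation from alignment: larger of the distances d1, d2 from the
    closest endpoints x1, x2 to the line str12 through the centers c1, c2."""
    return max(point_line_distance(x1, c1, c2),
               point_line_distance(x2, c1, c2))

def ratio_penalty(L1, L2):
    """Length-ratio penalty; |log(L1/L2)| vanishes for equal lengths."""
    return abs(np.log(L1 / L2))
```

Two collinear, equal-length children with touching endpoints score zero on dev and ratio and a small gap, matching the intuition of a true division.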

#### 3.4. Generic Boltzmann Machines (BMs)

#### 3.5. Optimized Set of Parent–Children Triplets

#### 3.6. Performance of Automatic Children Pairing on Synthetic Videos

#### 3.6.1. Children Pairing: Fast BM Simulations

#### 3.6.2. Children Pairing: Implementation on Synthetic Videos

#### 3.6.3. Parent–Children Matching: Accuracy on Synthetic Videos

#### 3.7. Reduction to Registrations with No Cell Division

#### 3.8. Automatic Cell Registration after Reduction to Cases with No Cell Division

#### 3.8.1. The Set $\mathit{MAP}$ of Many-to-One Cell Registrations

#### 3.8.2. Registration Cost Functional

**Cell Matching Likelihood: $match\left(f\right)$.** Here, we extend a pseudo-likelihood approach used to estimate parameters in Markov random field modeling by Gibbs distributions (see [98]). Recall that $g.\mathit{rate}$ is the known average cell growth rate. For any cells $b\in B$, ${b}_{+}\in {B}_{+}$, the geometric quality of the matching $b\mapsto {b}_{+}$ relies on three main characteristics: (i) the motion $c\left({b}_{+}\right)-c\left(b\right)$ of the cell center $c\left(b\right)$, (ii) the angle between the long axes $A\left(b\right)$ and $A\left({b}_{+}\right)$, and (iii) the cell length ratio $\parallel A\left({b}_{+}\right)\parallel /\parallel A\left(b\right)\parallel $. Thus, for all $b\in B$ and ${b}_{+}$ in the target window $W\left(b\right)$, define (i) Kinetic energy: $kin(b,{b}_{+})={\parallel c\left(b\right)-c\left({b}_{+}\right)\parallel}^{2}$. (ii) Distortion of cell length: $dis(b,{b}_{+})={|\mathrm{log}(\parallel A\left({b}_{+}\right)\parallel /\parallel A\left(b\right)\parallel )-\mathrm{log}\phantom{\rule{0.166667em}{0ex}}g.\mathit{rate}|}^{2}$. (iii) Rotation angle: $0\le rot(b,{b}_{+})\le \pi /2$ is the geometric angle between the straight lines carrying $A\left(b\right)$ and $A\left({b}_{+}\right)$.
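The three terms above translate directly into code; a small sketch (ours) that takes centers and long-axis vectors as inputs:

```python
import numpy as np

def matching_terms(c_b, c_bp, A_b, A_bp, g_rate):
    """The three geometric terms behind match(f) for a candidate pair (b, b+):
    kinetic energy, length distortion relative to the growth rate g.rate, and
    the unsigned rotation angle between the long axes, in [0, pi/2]."""
    kin = np.sum((np.asarray(c_b, float) - np.asarray(c_bp, float)) ** 2)
    len_b, len_bp = np.linalg.norm(A_b), np.linalg.norm(A_bp)
    dis = (np.log(len_bp / len_b) - np.log(g_rate)) ** 2
    # Angle between the carrying lines (not vectors): take |cos|.
    cosang = abs(np.dot(A_b, A_bp)) / (len_b * len_bp)
    rot = np.arccos(np.clip(cosang, -1.0, 1.0))
    return kin, dis, rot
```

A cell that moved by (3, 4), grew exactly by the factor `g_rate`, and rotated by 90 degrees yields $kin = 25$, $dis \approx 0$, and $rot = \pi/2$.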

**Overlap: $over\left(f\right)$**. We expect bona fide cell registrations $f\in \mathit{MAP}$ to be bijections. Consequently, we want to penalize mappings f which are many-to-one. We say that two distinct cells $(b,{b}^{\prime})\in B\times B$ do overlap for the mapping $f\in \mathit{MAP}$ if $f\left(b\right)=f\left({b}^{\prime}\right)$. The total number of overlapping pairs $(b,{b}^{\prime})$ for f defines the overlap penalty:

**Neighbor Stability: $stab\left(f\right)$**. Let $B=\{{b}_{1},\dots ,{b}_{N}\}$. Denote by ${G}_{i}$ the set of all neighbors of cell ${b}_{i}$ in B (i.e., ${b}_{j}\sim {b}_{i}\iff {b}_{j}\in {G}_{i}$; see Section 2.3). For bona fide registrations $f\in \mathit{MAP}$, and for most pairs of neighbors ${b}_{i}\sim {b}_{j}$ in B, we expect $f\left({b}_{i}\right)$ and $f\left({b}_{j}\right)$ to remain neighbors in ${B}_{+}$. Consequently, we penalize the lack of “neighbor stability” for f by

**Neighbor Flip: $flip\left(f\right)$**. Fix any mapping $f\in \mathit{MAP}$, any cell $b\in B$, and any two neighbors ${b}^{\prime}$, ${b}^{\prime \prime}$ of b in B. Let $z=f\left(b\right)$, ${z}^{\prime}=f\left({b}^{\prime}\right)$, ${z}^{\prime \prime}=f\left({b}^{\prime \prime}\right)$. Let c, ${c}^{\prime}$, ${c}^{\prime \prime}$ and d, ${d}^{\prime}$, ${d}^{\prime \prime}$ be the centers of cells b, ${b}^{\prime}$, ${b}^{\prime \prime}$ and z, ${z}^{\prime}$, ${z}^{\prime \prime}$. Let $\alpha $ be the oriented angle between ${c}^{\prime}-c$ and ${c}^{\prime \prime}-c$, and let ${\alpha}_{f}$ be the oriented angle between ${d}^{\prime}-d$ and ${d}^{\prime \prime}-d$. We say that the mapping f has flipped cells ${b}^{\prime}$, ${b}^{\prime \prime}$ around b, and we set $FLIP(f,b,{b}^{\prime},{b}^{\prime \prime})=1$, if ${z}^{\prime}$, ${z}^{\prime \prime}$ are both neighbors of z and the two angles $\alpha $, ${\alpha}_{f}$ have opposite signs. In all other cases, we set $FLIP(f,b,{b}^{\prime},{b}^{\prime \prime})=0$.
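The overlap count and the flip indicator are both a few lines of code. In this sketch (ours), the sign of an oriented angle between two vectors is taken from the z-component of their 2D cross product, and the neighbor test on $z^{\prime}$, $z^{\prime \prime}$ is assumed to have been done by the caller:

```python
import numpy as np
from itertools import combinations

def over_penalty(f):
    """Number of distinct cell pairs (b, b') mapped to the same target cell,
    i.e. the total count of overlapping pairs for the mapping f (a dict)."""
    return sum(1 for b, bp in combinations(f, 2) if f[b] == f[bp])

def flip_indicator(c, c1, c2, d, d1, d2):
    """FLIP = 1 iff the oriented angles (c1-c, c2-c) and (d1-d, d2-d)
    have opposite signs, i.e. f swapped the two neighbors around b."""
    def orientation(o, p, q):
        u, v = np.subtract(p, o), np.subtract(q, o)
        return np.sign(u[0] * v[1] - u[1] * v[0])   # z-component of cross
    return int(orientation(c, c1, c2) * orientation(d, d1, d2) < 0)
```

For instance, a many-to-one mapping with two cells sharing one target contributes one overlapping pair, and exchanging the two neighbors' positions around b triggers the flip indicator.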

#### 3.9. BM Minimization of Registration Cost Function

#### 3.9.1. BM Minimization of $Cost\left(f\right)$ over $f\in \mathit{MAP}$

#### 3.9.2. BM Energy Function $E\left(z\right)$

#### 3.9.3. Cliques of Interactive Neurons

**Cliques in ${\mathit{CL}}_{1}$**. For each clique $K=\left\{i\right\}$ in ${\mathit{CL}}_{1}$, and each $z\in \mathit{CONF}$, define its energy ${J}_{\mathrm{match},K}\left(z\right)={J}_{\mathrm{match},K}\left({z}_{i}\right)$ by

**Cliques in ${\mathit{CL}}_{2}$**. For all $z\in \mathit{CONF}$, all cliques $K=\{i,j\}$ in ${\mathit{CL}}_{2}$, define the clique energies ${J}_{\mathrm{over},K}\left(z\right)={J}_{\mathrm{over},K}({z}_{i},{z}_{j})$ and ${J}_{\mathrm{stab},K}\left(z\right)={J}_{\mathrm{stab},K}({z}_{i},{z}_{j})$ by ${J}_{\mathrm{over},K}\left(z\right)={1}_{{z}_{i}={z}_{j}}/N$ and

**Cliques in ${\mathit{CL}}_{3}$**. For each clique $K=\{i,j,k\}$ in ${\mathit{CL}}_{3}$, define the clique energy ${J}_{\mathrm{flip},K}$ by

#### 3.9.4. Test Set of 100 Synthetic Image Pairs

#### 3.9.5. Implementation of BM Minimization for $Cost\left(f\right)$

#### 3.9.6. Weight Calibration

#### 3.9.7. BM Simulations

## 4. Results

#### 4.1. Tests of Cell Registration Algorithms on Synthetic Data

#### 4.2. Tests of Cell Registration Algorithms on Laboratory Image Sequences

**Table 5.** Cell tracking accuracy for the short image sequence COL2 in Figure 6 with an interframe of six minutes. We report the ratio of correctly predicted cell matches over the total number of true cell matches and the associated percentages. The accuracy results quantify four distinct percentages of correct detections (i) for parent cells in image J, (ii) for children cells in image ${J}_{+}$, (iii) for parent–children triplets, and (iv) for registered pairs of cells $(b,{b}_{+})\in \mathit{redB}\times {\mathit{redB}}_{+}$.

| Task | $\{{\mathbf{t}}_{\mathbf{0}},{\mathbf{t}}_{\mathbf{1}}\}$ | | $\{{\mathbf{t}}_{\mathbf{1}},{\mathbf{t}}_{\mathbf{2}}\}$ | | $\{{\mathbf{t}}_{\mathbf{2}},{\mathbf{t}}_{\mathbf{3}}\}$ | |
|---|---|---|---|---|---|---|
| correctly detected parents | 15/19 | 79% | 20/21 | 95% | 7/10 | 70% |
| correctly detected children | 35/38 | 92% | 32/42 | 76% | 14/20 | 70% |
| correct parent–children triplets | 15/19 | 78% | 16/21 | 76% | 7/10 | 70% |
| correctly registered cell pairs | 36/36 | 100% | 44/49 | 90% | 76/80 | 95% |

## 5. Conclusions and Future Work

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## Appendix A. Stochastic Dynamics of BMs

#### Appendix A.1. Asynchronous BM Dynamics

#### Appendix A.2. Synchronous BM Dynamics

#### Appendix A.3. Comparing Asynchronous and Synchronous BM Dynamics

## Appendix B. Computer Hardware

## Appendix C. Parameters for Simulation Software

Our tracking software is implemented as a set of `python` functions and has been released to the public at https://github.com/scopagroup/BacTrak (accessed on 15 December 2021). We refer to [12,81] for a detailed description of this mathematical model and its implementation. The code for generating the synthetic data has been released at https://github.com/jwinkle/eQ (accessed on 15 December 2021). We note that detailed installation instructions for the software can be found on this page.

The parameters for this agent-based simulation software are as follows: Cells were modeled as 2D spherocylinders of constant 1 $\mathsf{\mu}$m width. The computational framework takes into account mechanical constraints that can impact cell growth and influence other aspects of cell behavior. The growth of the cells is exponential and is controlled by the doubling time. The time until cells double is set to 20 min (default setting; resulting in a growth rate of $g.rate=1.05$). The cells have a length of approximately 2 $\mathsf{\mu}$m after division and 4 $\mathsf{\mu}$m right before division (minimum division length of $4\phantom{\rule{0.166667em}{0ex}}\mathsf{\mu}$m; subject to some random perturbation). In our data set of simulated videos, there is no “trap wall” (as opposed to the simulations carried out in [12,81]). The “trap” encompassing all cells on a given frame has a size of 30 $\mathsf{\mu}\mathrm{m}\times $ 30 $\mathsf{\mu}\mathrm{m}$, subdivided into $400\times 400$ pixels of size $0.075\phantom{\rule{0.166667em}{0ex}}\mathsf{\mu}\mathrm{m}\times 0.075\phantom{\rule{0.166667em}{0ex}}\mathsf{\mu}\mathrm{m}$. The size of the resulting binary image used in our tracking algorithm is $600\times 600$ pixels (we add a boundary of 100 pixels on each side). Bacteria move, grow, and divide within the trap. However, at this stage of our study, we consider only video segments where no cell disappears and where no cells enter the trap from outside, so that the trap is a confined environment. Cells move only due to soft-shock interactions with other neighboring cells. The time interval between any two successive image frames ranges from one minute to six minutes (see Table 1). All other simulation parameters remain unchanged; i.e., we use the default parameters specified in the simulation software.

## Appendix D. Cell Segmentation

#### Appendix D.1. Watershed Algorithm

#### Appendix D.2. Segmentation Errors: Correction Steps

**Table A1.** Statistics of some quantities of interest related to the intensity of boundary segments and regions. These quantities allow us to define heuristics to identify erroneous segmentations computed by the watershed algorithm. We state the characteristic and report the 5% quantile, minimum, maximum, and mean (± standard deviation) for the reported quantities of interest.

| Characteristic | 5% Quantile | Min | Max | Mean |
|---|---|---|---|---|
| Watershed area | 56.00 | 43.00 | 984.00 | 211.00 ± 138.00 |
| Mean intensity of area | 0.34 | 0.00 | 0.57 | 0.41 ± 0.06 |
| Mean intensity of boundary segment | 0.46 | 0.30 | 0.99 | 0.74 ± 0.14 |
| Height of boundary segment | 0.05 | −0.09 | 0.62 | 0.33 ± 0.14 |

#### Appendix D.3. Cell Boundary Detection

#### Appendix D.4. Convolutional Neural Networks (CNNs)

**Training and Testing Data.** In the absence of any ground truth data set for the classification of rod-shaped bacteria cells from movies of cell populations, we consider the output of the Mumford–Shah algorithm introduced above as ground truth classification for training and testing our machine learning methodology. Above, we introduced three different zones: the interior $int\left({b}_{i}\right)$, the exterior $ext\left({b}_{i}\right)$, and the boundary $\partial {b}_{i}$ of a cell ${b}_{i}$. We reduce these three regions to two zones: the interior and exterior of a cell ${b}_{i}$. We assign pixels that belong to $int\left({b}_{i}\right)$ the label 0 and pixels that belong to $ext\left({b}_{i}\right)$ or $\partial {b}_{i}$ the label 1. For an image of size $200\times 200$, we obtain 40,000 binary labels. We limit the training of the CNN to a subregion of size $200\times 200$ in the center of each preprocessed image to avoid issues associated with mislabeled training data of cells located at the boundary of our data. We consider X as the set of features and Y as the set of labels. We want to assign to each pixel a label of either 0 or 1. For pixel p, we define ${X}_{p}$ to be a $7\times 7$ square window with center p located in the original image. The corresponding label ${Y}_{p}$ is denoted by $C\left(p\right)$, which corresponds to the class of the pixel p in the binarized image.
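The construction of the feature/label pairs $({X}_{p},{Y}_{p})$ can be sketched as follows (our own illustration; border pixels whose $7\times 7$ window would leave the image are skipped, mirroring the restriction to a central subregion):

```python
import numpy as np

def make_training_set(image, labels, half=3):
    """Build (X_p, Y_p) pairs: a 7x7 intensity window around each pixel p
    and the binary class of p taken from the binarized segmentation output."""
    X, Y = [], []
    H, W = image.shape
    for r in range(half, H - half):
        for c in range(half, W - half):
            X.append(image[r - half:r + half + 1, c - half:c + half + 1])
            Y.append(labels[r, c])
    return np.stack(X), np.array(Y)
```

On a $9\times 9$ toy image, only the central $3\times 3$ block of pixels admits a full $7\times 7$ window, giving nine training samples.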

**CNN Algorithm.** The considered CNN algorithm consists of two parts: (i) a convolutional auto-encoder and (ii) a fully connected multilayer perceptron (MLP). The input for the auto-encoder is a window of $7\times 7$ pixels. In the first layer of the encoder, we have a $5\times 5\times 4$ convolution layer Conv1 with a $3\times 3$ kernel. We feed Conv1 into a max-pooling layer MPool2 with stride one and a $2\times 2$ pooling window. The output of MPool2 is the input of a $3\times 3\times 8$ convolution layer Conv3. For decoding, we have almost the same structure in reverse order: we feed Conv3 into a $5\times 5\times 4$ deconvolution with a $3\times 3$ kernel. Subsequently, we feed the output of this layer into a $7\times 7\times 1$ deconvolution with a $3\times 3$ kernel. The decoder’s output is a window of $7\times 7$ pixels. We compare this output with the input window (since it is an auto-encoder, features and labels are the same) using the mean square error as the cost function. We train the auto-encoder on the full training set using mini-batch gradient descent. When the training is finished, we freeze the weights of Conv1 and Conv3.
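The shape arithmetic of the first two encoder layers can be checked with a plain `numpy` forward pass (a single-channel sketch of ours, not the trained network): a $3\times 3$ "valid" convolution maps the $7\times 7$ window to a $5\times 5$ feature map, and $2\times 2$ max-pooling with stride one maps it to $4\times 4$.

```python
import numpy as np

def conv2d_valid(x, k):
    """Single-channel 'valid' convolution (cross-correlation, no padding)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool(x, size=2, stride=1):
    """Max-pooling with the given window size and stride."""
    H, W = x.shape
    out = np.empty(((H - size) // stride + 1, (W - size) // stride + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = x[i * stride:i * stride + size,
                          j * stride:j * stride + size].max()
    return out

x = np.random.rand(7, 7)                 # one 7x7 input window
fmap = conv2d_valid(x, np.ones((3, 3)))  # Conv1 analogue: 3x3 kernel -> 5x5
pooled = maxpool(fmap)                   # MPool2 analogue: 2x2, stride 1 -> 4x4
```

The real Conv1 produces four such feature maps (the $\times 4$ channel dimension); the per-channel spatial shapes are as computed here.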

| | 0 | 1 |
|---|---|---|
| 0 | 0.97 | 0.03 |
| 1 | 0.11 | 0.89 |

## References

- Butts-Wilmsmeyer, C.J.; Rapp, S.; Guthrie, B. The technological advancements that enabled the age of big data in the environmental sciences: A history and future directions. Curr. Opin. Environ. Sci. Health
**2020**, 18, 63–69. [Google Scholar] [CrossRef] - Sivarajah, U.; Kamal, M.M.; Irani, Z.; Weerakkody, V. Critical analysis of Big Data challenges and analytical methods. J. Bus. Res.
**2017**, 70, 263–286. [Google Scholar] [CrossRef] [Green Version] - Balomenos, A.D.; Tsakanikas, P.; Aspridou, Z.; Tampakaki, A.P.; Koutsoumanis, K.P.; Manolakos, E.S. Image analysis driven single-cell analytics for systems microbiology. BMC Syst. Biol.
**2017**, 11, 1–21. [Google Scholar] [CrossRef] [Green Version] - Klein, J.; Leupold, S.; Biegler, I.; Biedendieck, R.; Münch, R.; Jahn, D. TLM-Tracker: Software for cell segmentation, tracking and lineage analysis in time-lapse microscopy movies. Bioinformatics
**2012**, 28, 2276–2277. [Google Scholar] [CrossRef] [Green Version] - Stylianidou, S.; Brennan, C.; Nissen, S.B.; Kuwada, N.J.; Wiggins, P.A. SuperSegger: Robust image segmentation, analysis and lineage tracking of bacterial cells. Mol. Microbiol.
**2016**, 102, 690–700. [Google Scholar] [CrossRef] [PubMed] [Green Version] - Bennett, M.R.; Hasty, J. Microfluidic devices for measuring gene network dynamics in single cells. Nat. Rev. Genet.
**2009**, 10, 628–638. [Google Scholar] [CrossRef] - Danino, T.; Mondragón-Palomino, O.; Tsimring, L.; Hasty, J. A synchronized quorum of genetic clocks. Nature
**2010**, 463, 326–330. Available online: http://xxx.lanl.gov/abs/15334406 (accessed on 15 December 2021). [CrossRef] [Green Version] - Mather, W.; Mondragon-Palomino, O.; Danino, T.; Hasty, J.; Tsimring, L.S. Streaming instability in growing cell populations. Phys. Rev. Lett.
**2010**, 104, 208101. [Google Scholar] [CrossRef] [PubMed] [Green Version] - El Najjar, N.; Van Teeseling, M.C.; Mayer, B.; Hermann, S.; Thanbichler, M.; Graumann, P.L. Bacterial cell growth is arrested by violet and blue, but not yellow light excitation during fluorescence microscopy. BMC Mol. Cell Biol.
**2020**, 21, 35. [Google Scholar] [CrossRef] - Icha, J.; Weber, M.; Waters, J.C.; Norden, C. Phototoxicity in live fluorescence microscopy, and how to avoid it. BioEssays
**2017**, 39, 1700003. [Google Scholar] [CrossRef] [PubMed] [Green Version] - Kim, J.K.; Chen, Y.; Hirning, A.J.; Alnahhas, R.N.; Josić, K.; Bennett, M.R. Long-range spatio-temporal coordination of gene expression in synthetic microbial consortia. Nat. Chem. Biol.
**2019**, 15, 1102–1109. [Google Scholar] [CrossRef] - Winkle, J.; Igoshin, O.A.; Bennett, M.R.; Josic, K.; Ott, W. Modeling mechanical interactions in growing populations of rod-shaped bacteria. Phys. Biol.
**2017**, 14, 055001. [Google Scholar] [CrossRef] [PubMed] - Carpenter, A.E.; Jones, T.R.; Lamprecht, M.R.; Clarke, C.; Kang, I.H.; Friman, O.; Guertin, D.A.; Chang, J.H.; Lindquist, R.A.; Moffat, J.; et al. CellProfiler: Image analysis software for identifying and quantifying cell phenotypes. Genome Biol.
**2006**, 7, R100. [Google Scholar] [CrossRef] [PubMed] [Green Version] - Kamentsky, L.; Jones, T.R.; Fraser, A.; Bray, M.; Logan, D.; Madden, K.; Ljosa, V.; Rueden, C.; Harris, G.B.; Eliceiri, K.; et al. Improved structure, function, and compatibility for CellProfiler: Modular high-throughput image analysis software. Bioinformatics
**2011**, 27, 1179–1180. [Google Scholar] [CrossRef] [Green Version] - McQuin, C.; Goodman, A.; Chernyshev, V.; Kamentsky, L.; Cimini, B.A.; Karhohs, K.W.; Doan, M.; Ding, L.; Rafelski, S.M.; Thirstrup, D.; et al. CellProfiler 3.0: Next-generation image processing for biology. PLoS Biol.
**2018**, 16, e2005970. [Google Scholar] [CrossRef] [Green Version] - Alnahhas, R.N.; Sadeghpour, M.; Chen, Y.; Frey, A.A.; Ott, W.; Josić, K.; Bennett, M.R. Majority sensing in synthetic microbial consortia. Nat. Commun.
**2020**, 11, 1–10. [Google Scholar] [CrossRef] [PubMed] - Locke, J.C.W.; Elowitz, M.B. Using movies to analyse gene circuit dynamics in single cells. Nat. Rev. Microbiol.
**2009**, 7, 383–392. [Google Scholar] [CrossRef] [Green Version] - Alnahhas, R.N.; Winkle, J.J.; Hirning, A.J.; Karamched, B.; Ott, W.; Josić, K.; Bennett, M.R. Spatiotemporal Dynamics of Synthetic Microbial Consortia in Microfluidic Devices. ACS Synth. Biol.
**2019**, 8, 2051–2058. [Google Scholar] [CrossRef] [PubMed] - Hand, A.J.; Sun, T.; Barber, D.C.; Hose, D.R.; MacNeil, S. Automated tracking of migrating cells in phase-contrast video microscopy sequences using image registration. J. Microsc.
**2009**, 234, 62–79. [Google Scholar] [CrossRef] [PubMed] - Ulman, V.; Maška, M.; Magnusson, K.E.G.; Ronneberger, O.; Haubold, C.; Harder, N.; Matula, P.; Matula, P.; Svoboda, D.; Radojevic, M.; et al. An objective comparison of cell-tracking algorithms. Nat. Methods
**2017**, 14, 1141–1152. [Google Scholar] [CrossRef] [PubMed] - Marvasti-Zadeh, S.M.; Cheng, L.; Ghanei-Yakhdan, H.; Kasaei, S. Deep learning for visual tracking: A comprehensive survey. IEEE Trans. Intell. Transp. Syst.
**2021**, 1–26. [Google Scholar] [CrossRef] - Yilmaz, A.; Javed, O.; Shah, M. Object tracking: A survey. ACM Comput. Surv. (CSUR)
**2006**, 38, 13-es. [Google Scholar] [CrossRef] - Lucas, B.D.; Kanade, T. An iterative image registration technique with an application to stereo vision. In Proceedings of the International Conference on Artificial Intelligence, Vancouver, BC, Canada, 24–28 August 1981; pp. 674–679. [Google Scholar]
- Mang, A.; Biros, G. An inexact Newton–Krylov algorithm for constrained diffeomorphic image registration. SIAM J. Imaging Sci.
**2015**, 8, 1030–1069. [Google Scholar] [CrossRef] [PubMed] [Green Version] - Mang, A.; Ruthotto, L. A Lagrangian Gauss–Newton–Krylov solver for mass- and intensity-preserving diffeomorphic image registration. SIAM J. Sci. Comput.
**2017**, 39, B860–B885. [Google Scholar] [CrossRef] [PubMed] - Mang, A.; Gholami, A.; Davatzikos, C.; Biros, G. CLAIRE: A distributed-memory solver for constrained large deformation diffeomorphic image registration. SIAM J. Sci. Comput.
**2019**, 41, C548–C584. [Google Scholar] [PubMed] - Borzi, A.; Ito, K.; Kunisch, K. An optimal control approach to optical flow computation. Int. J. Numer. Methods Fluids
**2002**, 40, 231–240. [Google Scholar] [CrossRef] - Horn, B.K.P.; Shunck, B.G. Determining optical flow. Artif. Intell.
**1981**, 17, 185–203. [Google Scholar] [CrossRef] [Green Version] - Delpiano, J.; Jara, J.; Scheer, J.; Ramírez, O.A.; Ruiz-del Solar, J.; Härtel, S. Performance of optical flow techniques for motion analysis of fluorescent point signals in confocal microscopy. Mach. Vis. Appl.
**2012**, 23, 675–689. [Google Scholar] - Madrigal, F.; Hayet, J.B.; Rivera, M. Motion priors for multiple target visual tracking. Mach. Vis. Appl.
**2015**, 26, 141–160. [Google Scholar] [CrossRef] - Banerjee, D.S.; Stephenson, G.; Das, S.G. Segmentation and analysis of mother machine data: SAM. bioRxiv
**2020**. [Google Scholar] [CrossRef] - Jug, F.; Pietzsch, T.; Kainmüller, D.; Funke, J.; Kaiser, M.; van Nimwegen, E.; Rother, C.; Myers, G. Optimal Joint Segmentation and Tracking of Escherichia Coli in the Mother Machine. In Bayesian and Graphical Models for Biomedical Imaging; Springer: Cham, Switzerland, 2014; Volume LNCS 8677, pp. 25–36. [Google Scholar]
- Lugagne, J.B.; Lin, H.; Dunlop, M.J. DeLTA: Automated cell segmentation, tracking, and lineage reconstruction using deep learning. PLoS Comput. Biol.
**2020**, 16, e1007673. [Google Scholar] [CrossRef] [PubMed] [Green Version] - Ollion, J.; Elez, M.; Robert, L. High-throughput detection and tracking of cells and intracellular spots in mother machine experiments. Nat. Protoc.
**2019**, 14, 3144–3161. [Google Scholar] [CrossRef] - Sauls, J.T.; Schroeder, J.W.; Brown, S.D.; Le Treut, G.; Si, F.; Li, D.; Wang, J.D.; Jun, S. Mother machine image analysis with MM3. bioRxiv
**2019**, 810036. [Google Scholar] [CrossRef] - Smith, A.; Metz, J.; Pagliara, S. MMHelper: An automated framework for the analysis of microscopy images acquired with the mother machine. Sci. Rep.
**2019**, 9, 10123. [Google Scholar] - Arbelle, A.; Reyes, J.; Chen, J.Y.; Lahav, G.; Raviv, T.R. A probabilistic approach to joint cell tracking and segmentation in high-throughput microscopy videos. Med. Image Anal.
**2018**, 47, 140–152. [Google Scholar] - Okuma, K.; Taleghani, A.; De Freitas, N.; Little, J.J.; Lowe, D.G. A boosted particle filter: Multitarget detection and tracking. In Proceedings of the European Conference on Computer Vision, Prague, Czech Republic, 11–14 May 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 28–39. [Google Scholar]
- Smal, I.; Niessen, W.; Meijering, E. Bayesian tracking for fluorescence microscopic imaging. In Proceedings of the 3rd IEEE International Symposium on Biomedical Imaging: Nano to Macro, Arlington, WA, USA, 6–9 April 2006; pp. 550–553. [Google Scholar]
- Kervrann, C.; Trubuil, A. Optimal level curves and global minimizers of cost functionals in image segmentation. J. Math. Imaging Vis.
**2002**, 17, 153–174. [Google Scholar] [CrossRef] - Li, K.; Miller, E.D.; Chen, M.; Kanade, T.; Weiss, L.E.; Campbell, P.G. Cell population tracking and lineage construction with spatiotemporal context. Med. Image Anal.
**2008**, 12, 546–566. [Google Scholar] [CrossRef] [PubMed] - Wang, X.; He, W.; Metaxas, D.; Mathew, R.; White, E. Cell segmentation and tracking using texture-adaptive snakes. In Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Arlington, VA, USA, 12–15 April 2007; pp. 101–104. [Google Scholar]
- Yang, F.; Mackey, M.A.; Ianzini, F.; Gallardo, G.; Sonka, M. Cell segmentation, tracking, and mitosis detection using temporal context. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Palm Springs, CA, USA, 26–29 October 2005; pp. 302–309. [Google Scholar]
- Sethuraman, V.; French, A.; Wells, D.; Kenobi, K.; Pridmore, T. Tissue-level segmentation and tracking of cells in growing plant roots. Mach. Vis. Appl.
**2012**, 23, 639–658.
- Balomenos, A.D.; Tsakanikas, P.; Manolakos, E.S. Tracking single-cells in overcrowded bacterial colonies. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milano, Italy, 25–29 August 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 6473–6476.
- Bise, R.; Yin, Z.; Kanade, T. Reliable cell tracking by global data association. In Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Chicago, IL, USA, 30 March–2 April 2011; pp. 1004–1010.
- Bise, R.; Li, K.; Eom, S.; Kanade, T. Reliably tracking partially overlapping neural stem cells in DIC microscopy image sequences. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention Workshop, London, UK, 20–24 September 2009; pp. 67–77.
- Kanade, T.; Yin, Z.; Bise, R.; Huh, S.; Eom, S.; Sandbothe, M.F.; Chen, M. Cell image analysis: Algorithms, system and applications. In Proceedings of the 2011 IEEE Workshop on Applications of Computer Vision (WACV), Kona, HI, USA, 5–7 January 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 374–381.
- Primet, M.; Demarez, A.; Taddei, F.; Lindner, A.; Moisan, L. Tracking of cells in a sequence of images using a low-dimensional image representation. In Proceedings of the IEEE International Symposium on Biomedical Imaging, Paris, France, 14–17 May 2008; pp. 995–998.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention, Munich, Germany, 5–9 October 2015; Volume LNCS 9351, pp. 234–241.
- Su, H.; Yin, Z.; Huh, S.; Kanade, T. Cell segmentation in phase contrast microscopy images via semi-supervised classification over optics-related features. Med. Image Anal. **2013**, 17, 746–765.
- Wang, Q.; Niemi, J.; Tan, C.M.; You, L.; West, M. Image segmentation and dynamic lineage analysis in single-cell fluorescence microscopy. Cytom. Part A J. Int. Soc. Adv. Cytom. **2010**, 77, 101–110.
- Jiuqing, W.; Xu, C.; Xianhang, Z. Cell tracking via structured prediction and learning. Mach. Vis. Appl. **2017**, 28, 859–874.
- Zhou, Z.; Wang, F.; Xi, W.; Chen, H.; Gao, P.; He, C. Joint multi-frame detection and segmentation for multi-cell tracking. In Proceedings of the International Conference on Image and Graphics, Beijing, China, 23–25 August 2019; Volume LNCS 11902, pp. 435–446.
- Sixta, T.; Cao, J.; Seebach, J.; Schnittler, H.; Flach, B. Coupling cell detection and tracking by temporal feedback. Mach. Vis. Appl. **2020**, 31, 1–18.
- Hayashida, J.; Nishimura, K.; Bise, R. MPM: Joint representation of motion and position map for cell tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 3823–3832.
- Payer, C.; Štern, D.; Neff, T.; Bischof, H.; Urschler, M. Instance segmentation and tracking with cosine embeddings and recurrent hourglass networks. In Proceedings of the Medical Image Computing and Computer Assisted Intervention, Granada, Spain, 16–20 September 2018; Volume LNCS 11071, pp. 3–11.
- Payer, C.; Štern, D.; Feiner, M.; Bischof, H.; Urschler, M. Segmenting and tracking cell instances with cosine embeddings and recurrent hourglass networks. Med. Image Anal. **2019**, 57, 106–119.
- Vicar, T.; Balvan, J.; Jaros, J.; Jug, F.; Kolar, R.; Masarik, M.; Gumulec, J. Cell segmentation methods for label-free contrast microscopy: Review and comprehensive comparison. BMC Bioinform. **2019**, 20, 1–25.
- Al-Kofahi, Y.; Zaltsman, A.; Graves, R.; Marshall, W.; Rusu, M. A deep learning-based algorithm for 2D cell segmentation in microscopy images. BMC Bioinform. **2018**, 19, 1–11.
- Falk, T.; Mai, D.; Bensch, R.; Çiçek, Ö.; Abdulkadir, A.; Marrakchi, Y.; Böhm, A.; Deubner, J.; Jäckel, Z.; Seiwald, K.; et al. U-Net: Deep learning for cell counting, detection, and morphometry. Nat. Methods **2019**, 16, 67–70.
- Lux, F.; Matula, P. DIC image segmentation of dense cell populations by combining deep learning and watershed. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 236–239.
- Moen, E.; Bannon, D.; Kudo, T.; Graf, W.; Covert, M.; Van Valen, D. Deep learning for cellular image analysis. Nat. Methods **2019**, 16, 1233–1246.
- Rempfler, M.; Stierle, V.; Ditzel, K.; Kumar, S.; Paulitschke, P.; Andres, B.; Menze, B.H. Tracing cell lineages in videos of lens-free microscopy. Med. Image Anal. **2018**, 48, 147–161.
- Stringer, C.; Wang, T.; Michaelos, M.; Pachitariu, M. Cellpose: A generalist algorithm for cellular segmentation. Nat. Methods **2021**, 18, 100–106.
- Akram, S.U.; Kannala, J.; Eklund, L.; Heikkilä, J. Joint cell segmentation and tracking using cell proposals. In Proceedings of the IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 920–924.
- Nishimura, K.; Hayashida, J.; Wang, C.; Bise, R. Weakly-supervised cell tracking via backward-and-forward propagation. In Proceedings of the European Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 104–121.
- Rempfler, M.; Kumar, S.; Stierle, V.; Paulitschke, P.; Andres, B.; Menze, B.H. Cell lineage tracing in lens-free microscopy videos. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Quebec City, QC, Canada, 11–13 September 2017; pp. 3–11.
- Maška, M.; Ulman, V.; Svoboda, D.; Matula, P.; Matula, P.; Ederra, C.; Urbiola, A.; Espana, T.; Venkatesan, S.; Balak, D.M.W.; et al. A benchmark for comparison of cell tracking algorithms. Bioinformatics **2014**, 30, 1609–1617.
- Löffler, K.; Scherr, T.; Mikut, R. A graph-based cell tracking algorithm with few manually tunable parameters and automated segmentation error correction. bioRxiv **2021**, 16, e0249257.
- Vo, B.T.; Vo, B.N.; Cantoni, A. The cardinality balanced multi-target multi-Bernoulli filter and its implementations. IEEE Trans. Signal Process. **2008**, 57, 409–423.
- Pierskalla, W.P. The multidimensional assignment problem. Oper. Res. **1968**, 16, 422–431.
- Gilbert, K.C.; Hofstra, R.B. Multidimensional assignment problems. Decis. Sci. **1988**, 19, 306–321.
- Chakraborty, A.; Roy-Chowdhury, A.K. Context aware spatio-temporal cell tracking in densely packed multilayer tissues. Med. Image Anal. **2015**, 19, 149–163.
- Liu, M.; Yadav, R.K.; Roy-Chowdhury, A.; Reddy, G.V. Automated tracking of stem cell lineages of Arabidopsis shoot apex using local graph matching. Plant J. **2010**, 62, 135–147.
- Liu, M.; Chakraborty, A.; Singh, D.; Yadav, R.K.; Meenakshisundaram, G.; Reddy, G.V.; Roy-Chowdhury, A. Adaptive cell segmentation and tracking for volumetric confocal microscopy images of a developing plant meristem. Mol. Plant **2011**, 4, 922–931.
- Liu, M.; Li, J.; Qian, W. A multi-seed dynamic local graph matching model for tracking of densely packed cells across unregistered microscopy image sequences. Mach. Vis. Appl. **2018**, 29, 1237–1247.
- Vo, B.N.; Vo, B.T. A multi-scan labeled random finite set model for multi-object state estimation. IEEE Trans. Signal Process. **2019**, 67, 4948–4963.
- Punchihewa, Y.G.; Vo, B.T.; Vo, B.N.; Kim, D.Y. Multiple object tracking in unknown backgrounds with labeled random finite sets. IEEE Trans. Signal Process. **2018**, 66, 3040–3055.
- Kim, D.Y.; Vo, B.N.; Thian, A.; Choi, Y.S. A generalized labeled multi-Bernoulli tracker for time lapse cell migration. In Proceedings of the 2017 International Conference on Control, Automation and Information Sciences, Jeju, Korea, 18–21 October 2017; pp. 20–25.
- Winkle, J.J.; Karamched, B.R.; Bennett, M.R.; Ott, W.; Josić, K. Emergent spatiotemporal population dynamics with cell-length control of synthetic microbial consortia. PLoS Comput. Biol. **2021**, 17, e1009381.
- Bise, R.; Sato, Y. Cell detection from redundant candidate regions under non-overlapping constraints. IEEE Trans. Med. Imaging **2015**, 34, 1417–1427.
- Matula, P.; Maška, M.; Sorokin, D.V.; Matula, P.; Ortiz-de Solórzano, C.; Kozubek, M. Cell tracking accuracy measurement based on comparison of acyclic oriented graphs. PLoS ONE **2015**, 10, e0144959.
- Agrawal, A.; Verschueren, R.; Diamond, S.; Boyd, S. A rewriting system for convex optimization problems. J. Control Decis. **2018**, 5, 42–60.
- Diamond, S.; Boyd, S. CVXPY: A Python-embedded modeling language for convex optimization. J. Mach. Learn. Res. **2016**, 17, 1–5.
- Shen, X.; Diamond, S.; Gu, Y.; Boyd, S. Disciplined convex-concave programming. In Proceedings of the 2016 IEEE 55th Conference on Decision and Control (CDC), Las Vegas, NV, USA, 12–14 December 2016; pp. 1009–1014.
- Stricker, J.; Cookson, S.; Bennett, M.R.; Mather, W.H.; Tsimring, L.S.; Hasty, J. A fast, robust and tunable synthetic gene oscillator. Nature **2008**, 456, 516–519.
- Chen, Y.; Kim, J.K.; Hirning, A.J.; Josić, K.; Bennett, M.R. Emergent genetic oscillations in a synthetic microbial consortium. Science **2015**, 349, 986–989.
- Sloan, S.W. A fast algorithm for constructing Delaunay triangulations in the plane. Adv. Eng. Softw. **1987**, 9, 34–55.
- Azencott, R. Simulated Annealing: Parallelization Techniques; Wiley-Interscience: Hoboken, NJ, USA, 1992; Volume 27.
- Azencott, R.; Chalmond, B.; Coldefy, F. Markov image fusion to detect intensity valleys. Int. J. Comput. Vis. **1994**, 16, 135–145.
- Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004.
- Ackley, D.H.; Hinton, G.E.; Sejnowski, T.J. A learning algorithm for Boltzmann machines. Cogn. Sci. **1985**, 9, 147–169.
- Hinton, G.E.; Sejnowski, T.J. Learning and relearning in Boltzmann machines. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition; MIT Press: Cambridge, MA, USA, 1986; pp. 282–317.
- Azencott, R. Synchronous Boltzmann machines and Gibbs fields: Learning algorithms. In Neurocomputing; Springer: Berlin/Heidelberg, Germany, 1990; pp. 51–63.
- Azencott, R. Synchronous Boltzmann machines and artificial vision. Neural Netw. **1990**, 135–143. Available online: https://www.math.uh.edu/~razencot/MyWeb/Research/Selected_Reprints/1990SynchronousBoltzmanMachinesArtificialVision.pdf (accessed on 15 December 2021).
- Azencott, R.; Graffigne, C.; Labourdette, C. Edge detection and segmentation of textured plane images. In Stochastic Models, Statistical Methods, and Algorithms in Image Analysis; Springer-Verlag: New York, NY, USA, 1992; Volume 74, pp. 75–88.
- Kong, A.; Azencott, R. Binary Markov random fields and interpretable mass spectra discrimination. Stat. Appl. Genet. Mol. Biol. **2017**, 16, 13–30.
- Azencott, R.; Doutriaux, A.; Younes, L. Synchronous Boltzmann machines and curve identification tasks. Netw. Comput. Neural Syst. **1993**, 4, 461–480.
- Garda, P.; Belhaire, E. An analog circuit with digital I/O for synchronous Boltzmann machines. In VLSI for Artificial Intelligence and Neural Networks; Springer: Berlin, Germany, 1991; pp. 245–254.
- Lafargue, V.; Belhaire, E.; Pujol, H.; Berechet, I.; Garda, P. Programmable mixed implementation of the Boltzmann machine. In International Conference on Artificial Neural Networks; Springer: Berlin, Germany, 1994; pp. 409–412.
- Pujol, H.; Klein, J.-O.; Belhaire, E.; Garda, P. RA: An analog neurocomputer for the synchronous Boltzmann machine. In Proceedings of the Fourth International Conference on Microelectronics for Neural Networks and Fuzzy Systems, Turin, Italy, 26–28 September 1994; IEEE: Piscataway, NJ, USA, 1994; pp. 449–455.
- Beucher, S.; Lantuejoul, C. Use of watersheds in contour detection. In Workshop on Image Processing; CCETT/IRISA: Rennes, France, 1979.
- Mumford, D.B.; Shah, J. Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. **1989**, 42, 577–685.
- Mézard, M.; Parisi, G.; Virasoro, M.A. Spin Glass Theory and Beyond: An Introduction to the Replica Method and Its Applications; World Scientific Publishing Company: Singapore, 1987; Volume 9.
- Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science **1983**, 220, 671–680.
- Roussel-Ragot, P.; Dreyfus, G. A problem independent parallel implementation of simulated annealing: Models and experiments. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. **1990**, 9, 827–835.
- Burda, Z.; Krzywicki, A.; Martin, O.C.; Tabor, Z. From simple to complex networks: Inherent structures, barriers, and valleys in the context of spin glasses. Phys. Rev. E **2006**, 73, 036110.
- Huber, P.J. The 1972 Wald Lecture. Robust statistics: A review. Ann. Math. Stat. **1972**, 43, 1041–1067.
- Ram, D.J.; Sreenivas, T.; Subramaniam, K.G. Parallel simulated annealing algorithms. J. Parallel Distrib. Comput. **1996**, 37, 207–212.
- Digabel, H.; Lantuejoul, C. Iterative algorithms. In Proceedings of the 2nd European Symposium Quantitative Analysis of Microstructures in Material Science, Biology and Medicine, Caen, France, 4–7 October 1977; pp. 85–89.
- Vincent, L.; Soille, P. Watersheds in digital spaces: An efficient algorithm based on immersion simulations. IEEE Trans. Pattern Anal. Mach. Intell. **1991**, 13, 583–598.

**Figure 1.**Typical microscopy image sequence. We show five frames out of a total of 150 frames of an image sequence showing the dynamics of E. coli in a microfluidic device [18] (real laboratory image data). These cells are about 1 $\mathsf{\mu}$m in diameter and on average 3 $\mathsf{\mu}$m in length, and they divide about every 30 min. The original images exported from the microscope have a resolution of 0.11 $\mathsf{\mu}$m/pixel. We report results for these real datasets in Section 4.

**Figure 2.**Simulated data and cell characteristics considered in the proposed algorithm. (

**Left**): Two successive images generated by dynamic simulation for a colony of rod-shaped bacteria. The left image J displays $N=109$ cells at time t. At time $t+\mathsf{\Delta}t$ with $\mathsf{\Delta}t=1$ min, cells have moved and grown, and some have divided. These cells are displayed in image ${J}_{+}$, which contains ${N}_{+}=124$ cells. We highlight two cells that have undergone a division between the frames (red and green ellipses). (

**Right**): Geometry of a rod-shaped bacterium. We consider several quantities of interest in the proposed algorithm. These include the center $c\left(b\right)$ of a cell, the two endpoints $e\left(b\right)$ and $h\left(b\right)$, and the long axis $A\left(b\right)$.

**Figure 3.**Scatter plots for tandems of the penalty terms $\mathit{DEV}$, $\mathit{GAP}$, and $\mathit{RANK}$. We mark true children pairs in orange and invalid children pairs in blue. These plots allow us to identify appropriate empirical thresholds to trim the (synthetic) data in order to reduce the computational complexity of the parent–children pairing.
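The trimming suggested by these scatter plots can be sketched as a simple per-term threshold test on candidate children pairs. The threshold values below are illustrative placeholders chosen for this sketch, not the empirical values used in the paper.

```python
# Hedged sketch: prune candidate children pairs whose penalty terms exceed
# empirical thresholds read off scatter plots such as Figure 3.
# The threshold values are hypothetical, for illustration only.
THRESHOLDS = {"DEV": 5.0, "GAP": 3.0, "RANK": 4.0}

def keep_pair(penalties: dict) -> bool:
    """Keep a candidate children pair only if every penalty term stays
    below its empirical threshold."""
    return all(penalties[k] <= THRESHOLDS[k] for k in THRESHOLDS)

candidates = [
    {"DEV": 1.2, "GAP": 0.4, "RANK": 0.0},  # plausible true children pair
    {"DEV": 9.8, "GAP": 0.4, "RANK": 0.0},  # large DEV: likely invalid
]
trimmed = [p for p in candidates if keep_pair(p)]
```

Only candidates passing every threshold survive, which shrinks the search space of the subsequent parent–children pairing.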

**Figure 4.**Simulated cell dynamics. From left to right, two successive simulated images J and ${J}_{+}$ with an interframe time of six minutes and no cell division, their image difference $|J-{J}_{+}|$, and the associated motion vectors. For the images J and ${J}_{+}$, we color four pairs of cells in $B\times {B}_{+}$, which should be matched by the true cell registration mapping. Notice that the motion for an interframe time of six minutes is significant. We can observe that, even without considering cell division, we can no longer assume that corresponding cells in frames J and ${J}_{+}$ overlap.
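Following the definitions in Section 2.3, the motion vector of a non-dividing cell is simply the difference of its centroids in the two superposed frames, $v\left(b\right)=c\left({b}_{+}\right)-c\left(b\right)$. A minimal sketch on binary cell masks (the mask shapes here are toy inputs):

```python
import numpy as np

def centroid(mask: np.ndarray) -> np.ndarray:
    """Centroid c(b) of a cell: the mean coordinate of its pixels."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def motion_vector(mask_J: np.ndarray, mask_Jp: np.ndarray) -> np.ndarray:
    """Motion vector v(b) = c(b_+) - c(b) for a non-dividing cell,
    assuming the two frames have already been superposed."""
    return centroid(mask_Jp) - centroid(mask_J)
```

For real recordings the paper observes $\parallel v\left(b\right)\parallel \le w/2$ with $w=100$ pixels, so a computed motion vector can be sanity-checked against this bound.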

**Figure 5.**Illustration of an undesirable flip for the mapping f. The cells ${b}_{1}$ and ${b}_{3}$ are neighbors of ${b}_{2}$, and mapped by f on neighbors ${z}_{1}=f\left({b}_{1}\right),{z}_{3}=f\left({b}_{3}\right)$ of ${z}_{2}=f\left({b}_{2}\right)$, as should be expected for bona fide cell registrations. However, for this mapping f, we have ${z}_{3}$ above ${z}_{2}$ above ${z}_{1}$, whereas, for the original cells, we had ${b}_{1}$ above ${b}_{2}$ above ${b}_{3}$. Our cost function penalizes flips of this nature.

**Figure 6.**Segmentation results for experimental recordings of live cell colonies. We show two short image sequence extracts, COL1 (

**left**) and COL2 (

**right**). The interframe duration is six minutes. The image sequence extract COL1 has only two successive image frames. The image sequence extract COL2 has four successive image frames. We are going to automatically compute four cell registrations, one for each pair of successive images in COL1 and COL2.

**Figure 7.**Cell tracking results for the pair COL1 of successive images J, ${J}_{+}$ shown in Figure 6. The interframe duration is six minutes. (

**Left**): Results for parent–children pairing on COL1. Automatically detected parent–children triplets are displayed in the same color. (

**Right**): Computed registration. The removal of the automatically detected parent–children triplets (see

**left column**) generates the reduced cell sets $\mathit{redB}$ and ${\mathit{redB}}_{+}$. Automatic registration of $\mathit{redB}$ and ${\mathit{redB}}_{+}$ is again displayed via identical color for the registered cell pairs $(b,{b}_{+})$. Mismatches are mostly due to previous errors in parent–children pairing (see Figure 8 for a more detailed assessment).

**Figure 8.**Cell tracking results for the short image sequence COL2 in Figure 6. The interframe duration for COL2 is six minutes. COL2 involves four successive images $J\left({t}_{i}\right)$, $i=0,1,2,3$. In our figure, each one of the three rows displays the automatic cell registration results between images $J\left({t}_{i}\right)$ and $J\left({t}_{i+1}\right)$ for $i=0,1,2$. We report the accuracies of parent–children pairing and of the registration in Table 5. (

**Left column**): Results for parent–children pairing. Each parent–children triplet is identified by the same color for each parent cell and its two children. (

**Middle column**): Display of the automatically computed registration after removing the parent–children triplets already identified in order to generate two reduced sets $\mathit{redB}$ and ${\mathit{redB}}_{+}$ of cells. Again, the same color is used for each pair of automatically registered cells. The white cells in ${\mathit{redB}}_{+}$ are cells which could not be registered to some cell in $\mathit{redB}$. (

**Right column**): To differentiate between errors induced by the automatic identification of parent–children triplets and errors generated by the automatic registration between $\mathit{redB}$ and ${\mathit{redB}}_{+}$, we manually removed all “true” parent–children triplets and then applied our registration algorithm to these “cleaned” (reduced) cell sets ${\mathit{redB}}^{*}$ and ${\mathit{redB}}_{+}^{*}$.

**Table 1.**Benchmark datasets. To test the tracking software, we consider simulated data of varying complexity, generated with different interframe durations. We note that we also use these data to train our algorithms for tracking cells. We report the label for each dataset, the interframe duration, and the number of frames generated. We set the cell growth factor to $g.rate=1.05$ per min. We refer to the text for details about how these data have been generated.

Label | Interframe Duration | Number of Frames |
---|---|---|

BENCH1 | 1 min | 500 |

BENCH2 | 2 min | 300 |

BENCH3 | 3 min | 300 |

BENCH6 | 6 min | 100 |

**Table 2.**Accuracies of parent–children pairing algorithm. We applied our parent–children pairing algorithm to three long synthetic image sequences BENCH1 (500 frames), BENCH2 (300 frames), and BENCH3 (300 frames), with interframe intervals of 1, 2, 3 min, respectively. The table summarizes the resulting pcp-accuracies. Note that pcp-accuracies are practically always at 100%. For BENCH2, pcp-accuracies are 100% for 298 frames out of 300, and for the remaining two frames, accuracies were still high at 93% and 96%. For BENCH3, the average pcp-accuracy for the 3 min interframe is 99%.

Sequence | Pcp-Accuracy | Frames |
---|---|---|

BENCH1 | $\mathit{acc}=100\%$ | 500 out of 500 |

BENCH2 | $\mathit{acc}=100\%$ | 298 out of 300 |

BENCH2 | $99\%\ge \mathit{acc}\ge 93\%$ | 2 out of 300 |

BENCH3 | $\mathit{acc}=100\%$ | 271 out of 300 |

BENCH3 | $99\%\ge \mathit{acc}\ge 95\%$ | 17 out of 300 |

BENCH3 | $94\%\ge \mathit{acc}\ge 90\%$ | 12 out of 300 |

**Table 3.**Registration accuracy for synthetic image sequence BENCH${}_{100}$. We consider 100 pairs of consecutive synthetic images taken from the benchmark dataset BENCH6. Automatic registration was implemented by BM minimization of the cost function $cost\left(f\right)$, which was parametrized by the vector of optimized weights ${\mathsf{\Lambda}}^{*}$ in (17). The average registration accuracy was 99%.

Registration Accuracy | Number of Frames |
---|---|

$\mathit{acc}=100\%$ | 55 frames out of 100 |

$99\%\ge \mathit{acc}>97\%$ | 40 frames out of 100 |

$96\%\ge \mathit{acc}>94.5\%$ | 5 frames out of 100 |
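The reported 99% average is consistent with a frequency-weighted mean of the accuracy bins in this table; a rough bound check, using the bin edges as stand-ins for the unknown per-frame accuracies (an approximation, since the true per-frame values are not tabulated):

```python
# Bound the frequency-weighted mean registration accuracy using the
# bin edges of Table 3: (lower %, upper %, number of frames).
bins = [
    (100.0, 100.0, 55),
    (97.0, 99.0, 40),
    (94.5, 96.0, 5),
]
n = sum(count for _, _, count in bins)
lo = sum(lower * count for lower, _, count in bins) / n
hi = sum(upper * count for _, upper, count in bins) / n
# The reported 99% average lies within the interval [lo, hi].
```

With these bins, the weighted mean is bracketed between roughly 98.5% and 99.4%, so the 99% figure is plausible.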

**Table 4.**Cost function weights for parent–children pairing in the COL1 images displayed in Figure 6.

Weights | ${\mathit{\lambda}}_{\mathbf{cen}}$ | ${\mathit{\lambda}}_{\mathbf{siz}}$ | ${\mathit{\lambda}}_{\mathbf{ang}}$ | ${\mathit{\lambda}}_{\mathbf{gap}}$ | ${\mathit{\lambda}}_{\mathbf{dev}}$ | ${\mathit{\lambda}}_{\mathbf{rat}}$ | ${\mathit{\lambda}}_{\mathbf{rank}}$ | ${\mathit{\lambda}}_{\mathbf{over}}$ |
---|---|---|---|---|---|---|---|---|

Value | 3 | 7 | 100 | $0.8$ | 4 | $0.01$ | $0.01$ | 600 |
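The weights of Table 4 enter the parent–children pairing cost as coefficients of a weighted sum of penalty terms. A minimal sketch of that combination follows; the penalty values are made-up inputs, and the functional form of each individual term is defined in the paper, not reproduced here.

```python
# Weights from Table 4 (COL1); keys name the corresponding penalty terms.
WEIGHTS = {"cen": 3, "siz": 7, "ang": 100, "gap": 0.8,
           "dev": 4, "rat": 0.01, "rank": 0.01, "over": 600}

def pairing_cost(penalties: dict) -> float:
    """Weighted sum of penalty terms for one candidate parent-children
    triplet; lower cost means a better candidate."""
    return sum(WEIGHTS[k] * penalties[k] for k in WEIGHTS)

# Illustrative (made-up) penalty values for a single candidate triplet.
example = {"cen": 0.5, "siz": 0.1, "ang": 0.02, "gap": 1.0,
           "dev": 0.3, "rat": 2.0, "rank": 1.0, "over": 0.0}
```

Note how the large weights on the angular and overlap terms make those penalties dominate the cost unless they are close to zero.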

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Sarmadi, S.; Winkle, J.J.; Alnahhas, R.N.; Bennett, M.R.; Josić, K.; Mang, A.; Azencott, R.
Stochastic Neural Networks for Automatic Cell Tracking in Microscopy Image Sequences of Bacterial Colonies. *Math. Comput. Appl.* **2022**, *27*, 22.
https://doi.org/10.3390/mca27020022

**AMA Style**

Sarmadi S, Winkle JJ, Alnahhas RN, Bennett MR, Josić K, Mang A, Azencott R.
Stochastic Neural Networks for Automatic Cell Tracking in Microscopy Image Sequences of Bacterial Colonies. *Mathematical and Computational Applications*. 2022; 27(2):22.
https://doi.org/10.3390/mca27020022

**Chicago/Turabian Style**

Sarmadi, Sorena, James J. Winkle, Razan N. Alnahhas, Matthew R. Bennett, Krešimir Josić, Andreas Mang, and Robert Azencott.
2022. "Stochastic Neural Networks for Automatic Cell Tracking in Microscopy Image Sequences of Bacterial Colonies" *Mathematical and Computational Applications* 27, no. 2: 22.
https://doi.org/10.3390/mca27020022