Abstract
Our work targets automated analysis to quantify the growth dynamics of a population of bacilliform bacteria. We propose an innovative approach to frame-sequence tracking of deformable-cell motion by the automated minimization of a new, specific cost functional. This minimization is implemented by dedicated Boltzmann machines (stochastic recurrent neural networks). Automated detection of cell divisions is handled similarly by successive minimizations of two cost functions, alternating the identification of children pairs and parent identification. We validate the proposed automatic cell tracking algorithm using (i) recordings of simulated cell colonies that closely mimic the growth dynamics of E. coli in microfluidic traps and (ii) real data. On a batch of 1100 simulated image frames, cell registration accuracies per frame ranged from 94.5% to 100%, with a high average. Our initial tests using experimental image sequences (i.e., real data) of E. coli colonies also yield convincing results, with a registration accuracy ranging from 90% to 100%.
Keywords:
stochastic neural networks; cell tracking; microscopy image analysis; detection-and-association methods
MSC:
62H35; 62M45
1. Introduction
Advances in technology have led to the generation of ever-larger data sets at ever-higher levels of precision [,]. However, data generation presently far outpaces data analysis, which drives the need for automated tools to analyze such large-scale data sets [,,]. The main goal of the present work is to develop computational methods for an automated analysis of microscopy image sequences of colonies of E. coli growing in a single layer. Such recordings can be obtained from colonies growing in microfluidic devices, and they provide a detailed view of individual cell-growth dynamics as well as population-level, inter-cellular mechanical and chemical interactions [,,].
However, understanding both variability and lineage-based correlations in cellular responses to environmental factors and to signals from other cells requires tracking large numbers of individual cells across many generations. This can be challenging, as large cell numbers tightly packed in microfluidic devices can compromise spatial resolution, and toxicity effects can place limits on the temporal resolution of the recordings [,]. One approach to better understand and control the behavior of these bacterial colonies is to develop computational methods that capture the dynamics of gene networks within single cells [,,]. For these methods to have a practical impact, one ultimately has to fit the models to the data, which allows us to infer hidden parameters (i.e., characteristics of cell behavior that cannot be measured directly). Image analysis and pattern recognition techniques for biological imaging data [,,], like the methods developed in the present work, can be used to track lineages and thus automatically infer how gene expression varies over time. These methods can serve as an indispensable tool to extract the information needed to fit and validate both coarse and detailed models of bacterial populations, thus allowing us to infer model parameters from recordings.
Here, we describe an algorithm that provides quantitative information about the population dynamics, including the life cycle and lineage of cells within a population, from recordings of cells growing in a mono-layer. A typical sequence of frames of cells growing in a microfluidic trap is shown in Figure 1. We describe the design and validation of algorithms for tracking individual cells in sequences of such images [,,]. After segmentation of individual image frames to identify each cell, tracking individual cells from frame to frame is a combinatorial problem. To solve this problem, we take into account the unknown cell growth, cell motion, and cell divisions that occur between frames. Segmentation and tracking are complicated by imaging noise and artifacts, overlap of bacteria, similarity of important cell characteristics across the population (shape, length, and diameter), tight packing of bacteria, and large interframe durations, which result in significant cell motion and up to a 30% increase in individual cell volume between frames.
Figure 1.
Typical microscopy image sequence. We show five frames out of a total of 150 frames of an image sequence showing the dynamics of E. coli in a microfluidic device [] (real laboratory image data). These cells are about 1 μm in diameter and on average 3 μm in length, and they divide about every 30 min. The original images exported from the microscope have a resolution of 0.11 μm/pixel. We report results for these real datasets in Section 4.
1.1. Related Work
The present work focuses on tracking E. coli in a time series of images. A comparison of different cell-tracking algorithms can be found in [,]. Multi-object tracking in video sequences and object recognition in time series of images is a challenging task that arises in numerous applications in computer vision [,]. In (biomedical) image processing, motion tracking is often referred to as “image registration” [,,,,] or “optical flow” [,,,]. Typical solutions used in the defense industry, for instance, track small numbers of fast-moving targets by image sequence analysis at the pixel level and use sophisticated reconstruction of the optical flow, combined with real-time segmentation and quick combinatorial exploration at each image frame. Initially, we did implement several well-known algorithms for reconstruction of the optical flow, but the results we obtained were not satisfactory due to long interframe times and high noise levels. Moreover, we are not interested in tracking individual pixels but rather cells (i.e., rod-shaped, deformable shapes), while recognizing events of cell division and recording cell lineage. Consequently, we decided to first segment each image frame to isolate each cell, and then to match cells between successive frames.
As for the problem at hand, one approach proposed in prior work to simplify the tracking task is to make the experimental setup more rigid by confining individual cell lineages to small tubes; the associated microfluidic device is called a “mother machine” [,,,,,]. The microfluidic devices we consider here yield more complicated data as cells are allowed to move and multiply freely in two dimensions (constrained to a mono-layer). We refer to Figure 1 for a typical sequence of experimental images considered in the present work.
Turning to methods that work on more complex biological cell imaging data, we can distinguish different classes of tracking methods. “Model-based evolution methods” operate on the image intensities directly. They rely on particle filters [,,] or active contour models [,,,,]. These methods work well if the cells are not tightly packed. However, they may lead to erroneous results if the cells are close together, the inter-cellular boundaries are blurry, or the cells move significantly. Our work belongs to another class—the so called “detection-and-association methods” [,,,,,,,,,,], which first detect cells in each frame and then solve the tracking problem/association task across successive frames. (We note that the segmentation and tracking of cells does not necessarily need to be implemented in two distinct steps. In many image sequence analyses, implementing these two steps jointly can be beneficial [,,,,,]. However, for the clarity of exposition and easier implementation of our new tracking technique, we present these steps separately.) Doing so necessitates the segmentation of cells within individual frames. We refer to [] for an overview of cell segmentation approaches. Deep learning strategies have been widely used for this task [,,,,,,,,,]. We consider a framework based on convolutional neural networks (CNNs). Others have also used CNNs for cell segmentation [,,,,]. We omit a detailed discussion of our segmentation approach within the main body of this paper, as we do not view it as our main contribution (see Section 1.2). However, the interested reader is referred to Appendix D for some insights. To solve the tracking problem after the cell detection, many of the methods cited above use hand-crafted association scores based on the proximity of the cells and shape similarity measures [,,,]. We follow this approach here. We note that we not only consider local association scores between cells but also include measures for the integrity of a cell’s neighborhood (i.e., “context information”).
Our method is tailored for tracking cells in tightly packed colonies of rod-shaped E. coli bacteria. This problem has been considered previously [,,,]. However, we are not aware of any large-scale datasets that provide ground truth tracking data for these types of recordings, but note that there are community efforts for providing a framework for testing cell tracking algorithms [,] (see, e.g., [,]). (Cell tracking challenge: http://celltrackingchallenge.net (accessed on 15 December 2021).) Works that consider these data are for example [,,,,,]. The cells in this dataset have significantly different characteristics compared to those considered in the present work. As we describe below, our approach is based on distinct characteristics of the bacteria cells and, consequently, does not directly apply to these data. Therefore, we have developed our own validation and calibration framework (see Section 2.1).
Standard graph matching algorithms (see, e.g., []) do not directly apply to our problem. Indeed, a fundamental complication is that cells can divide between successive images. Hence, each assignment from one frame to its successor is not a one-to-one mapping but a one-to-many correspondence. More advanced graph matching strategies are described in [,]. Graph-based matching strategies for cell-tracking that are somewhat related to our approach are described in [,,,,]. Like the methods mentioned above, they consider various association scores for tracking. Individual cells are represented as nodes, and neighbors are connected through edges. Our approach also introduces cost terms for structural matching of local neighborhoods by specific scoring for single nodes, pairs of nodes, and triplets of nodes, after a (modified) Delaunay triangulation. By using a graph-like structure, cell divisions can be identified by detecting changes in the topology of the graph [,]. We tested a similar strategy, but came to the conclusion that we cannot reliably construct neighborhood networks between frames for which topology changes only occur due to cell division; the main issue we observed is that the significant motion of cells between frames can introduce additional topology changes in our neighborhood structure. Consequently, we decided to relax these assumptions.
Refs. [,,,] implement multi-target tracking in videos by stochastic models based on random finite set densities and variants thereof. The fit to the data is based on Gibbs sampling to maximize the posterior likelihood. A key challenge of these approaches is the estimation of an adequate finite number of Gibbs sampling iterations when one computes posterior distributions. Most Gibbs samplers are ergodic Markov chains on finite but huge state spaces, so that their natural exponential rate of convergence is not a practically reassuring feature.
As mentioned above, some recent works jointly solve the tracking and segmentation problem [,,,,,]. Contrary to observations we have made in our data, these approaches rely (with the exception of []) on the fact that the tracking problem is inherent to the segmentation problem (“tracking-by-detection methods” []; see also []). That is, the key assumption made by many of these algorithms is that cells belonging to the same lineage overlap across frames (see also []). In this case, cell-overlap can serve as a good proxy for cell-tracking []. We note that in our data we cannot guarantee that the frame rate is sufficiently high for this assumption to hold. Refs. [,,] exploited machine learning techniques for segmentation and motion tracking. One key challenge here is to provide adequate training data for these methods to be successful. Here, we describe simulation-based techniques that can be extended to produce training data, which we use for parameter tuning [,].
The works that are most similar to ours are [,,]. They perform a local search to identify the best cell-tracking candidates across frames. One key difference across these works are the matching criteria. Moreover, Refs. [,] employ a local greedy-search, whereas we consider stochastic neural network dynamics for optimization. Ref. [] constructs score matrices within a score based neighborhood tracking method; an integer programming method is used to generate frame-to-frame correspondences between cells and the lineage map. Other approaches that consider linear programming to maximize an association score function for cell tracking can be found in [,,].
As mentioned in the abstract, we obtain a tracking accuracy that ranges from 90% to 100%. Overall, our method is competitive with existing approaches: For example, Ref. [] reports a tracking accuracy of up to 97% for data that are similar to ours, while Ref. [] reports a tracking accuracy (spatial, temporal, and cell division detection) on the order of 95% (between about 93% and 98%). The second group also reports results for their prior approach [], with an accuracy on the order of 90% (ranging from about 87% to 92%). Accuracies reported in [] range from about 92% to 97%. This work also includes a comparison to one of their earlier approaches [], with an accuracy of up to 85%, or 89% if the datasets are pre-aligned. We note that the data considered in [,,,] are quite different from ours. Refs. [,,,,] consider the data from the cell tracking challenge [,] to evaluate the performance of their methods. As in the previously mentioned work, these data are again quite different from ours. To evaluate performance, the so-called acyclic oriented graph matching measure [] is considered. We refer to the webpage of the cell tracking challenge for details on the evaluation metrics (see http://celltrackingchallenge.net/evaluation-methodology, accessed on 15 December 2021). Based on these metrics, the reported tracking scores are between 0.873 and 0.902 [], 0.901 and 1.00 [], 0.950 and 0.987 [], 0.788 and 0.982 [], and 0.765 and 0.915 [], depending on the considered dataset.
1.2. Contributions
For image segmentation, we first apply two well-known, powerful variational segmentation algorithms to generate a large training set of correctly delineated single cells. We can then train a CNN dedicated to segmenting out each single cell. Using a CNN significantly reduces the runtime of our computational framework for cell identification. The frame-to-frame tracking of individual cells in tightly packed colonies is a significantly more challenging task, and is hence the main topic discussed in the present work. We develop a set of innovative automatic cell tracking algorithms based on the successive minimization of three dedicated cost functionals. For each pair of successive image frames, minimizing these cost functionals over all potential cell registration mappings poses significant computational and mathematical challenges. Standard gradient descent algorithms are inefficient for these discrete and highly combinatorial minimization problems. Instead, we implement the stochastic neural network dynamics of BMs, with architectures and energy functions tailored to effectively solve our combinatorial tracking problem. Our major contributions are: (i) The design of a multi-stage cell tracking algorithm that starts with a parent–children pairing step, followed by removal of identified parent–children triplets, and concludes with a cell-to-cell registration step. (ii) The design of dedicated BM architectures, with several energy functions, respectively, minimized by true parent–children pairing and by true cell-to-cell registration. Energy minimizations are then implemented by simulation of BM stochastic dynamics. (iii) The development of automatic algorithms for the estimation of unknown weight parameters of our BM energy functions, using convex-concave programming tools [,,]. (iv) The evaluation of our methodology on synthetic and real image sequences of cell colonies. The massive effort involved in human expert annotation of cell colony recordings limits the availability of “ground truth tracking” data for dense bacterial colonies. Therefore, we first validated the accuracy of our cell tracking algorithms on recordings of simulated cell colonies, generated by the dedicated cell colony simulation software [,]. This provided us with ground truth frame-by-frame registration for cell lineages, enabling us to validate our methodology.
1.3. Outline
In Section 2, we describe the synthetic image sequence (see Section 2.1) and experimental data (see Section 2.2) of cell colonies considered as benchmarks for our cell tracking algorithms. In Section 2.3, we describe key cell characteristics considered in our tracking methodology to define metrics that enter our cost functionals. Our tracking approach is developed in greater detail in Section 3. We define valid cell registration mappings between successive image frames in Section 3.1. We outline how to automatically calibrate the weights of our various penalty terms in Section 3.2. Our algorithms for pairing parent cells with their children and for cell-to-cell registration are developed in Section 3.3, Section 3.4, Section 3.5, Section 3.6, Section 3.7, Section 3.8, Section 3.9. We present our main validation results on long image sequences (time series of images) in Section 4 and conclude with Section 5.
2. Datasets
Below, we introduce the datasets used to evaluate the performance of the proposed methodology. The synthetic data are described in Section 2.1. The experimental data (real imaging data) are described in Section 2.2.
2.1. Synthetic Videos of Simulated Cell Colonies
To validate our cell tracking algorithms, we consider simulated image sequences of dense cell populations. We refer to [,] for a detailed description of this mathematical model and its implementation. (The code for generating the synthetic data has been released at https://github.com/jwinkle/eQ, accessed on 15 December 2021.) The simulated cell colony dynamics are driven by an agent-based model [,], which emulates live colonies of growing, moving, and dividing rod-like E. coli cells in a 2D microfluidic trap environment. Between two successive frames J, , cells are allowed to move until they nearly bump into each other, and to grow at a multiplicative rate, denoted , with an average value of per minute.
The cells are modeled as 2D spherocylinders of constant 1 μm width. Each cell grows exponentially in length with a doubling time of 20 min. To prevent division synchronization across the population, when a mother cell of length divides, the two daughter cells are assigned random birth lengths and , where is a random number sampled independently at each division from a uniform distribution on . Consequently, a bacterial cell b of length divides into two cells and , whose lengths , satisfy and , where is a random number. The cells have a length of approximately 2 μm after division and 4 μm right before division. We refer to [] for additional details. The simulation keeps track of cell lineage, cell size, and cell location (among other parameters). The main output of each such simulation considered here is a binary image sequence of the cell colony with a fixed interframe duration. Each such synthetic image sequence is used as the sole input to our cell tracking algorithm. The remaining meta-data generated by the simulations are only used as ground truth to evaluate the performance of our tracking algorithms.
We consider several benchmark datasets of synthetic image sequences of simulated cell colonies of different complexity. We refer to these benchmarks as BENCH1 (500 frames), BENCH2 (300 frames), BENCH3 (300 frames), and BENCH6 (100 frames), with an interframe duration of 1, 2, 3 and 6 min, respectively. Notice that there is no explicit noise on the growth rate. However, due to the crowding of cells, the growth rate will vary from cell to cell. The generated binary images are of size pixels. We summarize these benchmarks in Table 1. The associated image sequences comprise between 100 and 500 frames. In Figure 2, we display an example of two simulated consecutive frames separated by 1 min. To simplify our presentation and validation tests, we control our simulations to make sure that cells will not exit the region of interest from one frame to the next, and we exclude cells that are only partially visible in the current frames.
Table 1.
Benchmark datasets. To test the tracking software, we consider simulated data. We have generated data of varying complexity with different interframe durations. We note that we also consider these data to train our algorithms for tracking cells. We report the label for each dataset, the interframe duration, as well as the number of frames generated. We set the cell growth factor to per min. We refer to the text for details about how these data have been generated.
Figure 2.
Simulated data and cell characteristics considered in the proposed algorithm. (Left): Two successive images generated by dynamic simulation for a colony of rod-shaped bacteria. The left image J displays cells at time t. At time with min, cells have moved and grown, and some have divided. These cells are displayed in image , which contains cells. We highlight two cells that have undergone a division between the frames (red and green ellipses). (Right): Geometry of a rod shaped bacterium. We consider different quantities of interest in the proposed algorithm. These include the center of a cell, the two end points and , and the long axis , respectively.
2.2. Laboratory Image Sequences (Real Biological Data)
We also verify the performance of our approach on real datasets of E. coli bacteria. These bacteria are about 1 μm in diameter and on average 3 μm in length, and they divide about every 30 min. The original images exported from the microscope have a resolution of 0.11 μm/pixel. The microscopy experimental data were obtained using JS006 [] (BW25113 araC lacI) E. coli strains containing a plasmid constitutively expressing yellow or cyan fluorescent protein (sfyfp or sfcfp) for identification. The plasmid also contains an ampicillin resistance gene and a p15A origin. These cells were grown overnight in LB medium with 100 μg/mL ampicillin for 18 h. In the morning, these cultures were diluted 1/1000 into 50 mL of fresh LB with 100 μg/mL ampicillin and grown for 3 h until they reached an OD600 of about 0.3. The cells were then concentrated by centrifuging 30 mL of culture at 2000× g for 5 min and then resuspending in 10 mL of fresh LB. The concentrated culture was loaded into a hallway microfluidic device prewarmed and flushed with 0.1% (v/v) Tween-20 []. In the microfluidic device, the cells were provided with continuous fresh LB with 100 μg/mL ampicillin and 0.075% (v/v) Tween-20. The microfluidic device was placed onto a 60× oil objective and imaged every 6 min using phase contrast, YFP, and CFP filter settings on an inverted fluorescence microscope. We show a representative dataset in Figure 1.
2.3. Cell Characteristics
Next, we discuss characteristics of the E. coli bacteria important for our tracking algorithm.
Cell Geometry. In accordance with the dynamics of bacterial colonies in microfluidic traps, the dynamic simulation software generates colonies of rod-shaped bacteria. Cell shapes can be approximated by long and thin ellipsoids, which are geometrically well identified by their center, their long axis, and the two endpoints of this long axis. The center is the centroid of all pixels belonging to cell b. The long axis of cell b is computed by principal component analysis (PCA). The endpoints and of cell b are the first and last cell pixels closest to ; see Figure 2 (right) for a schematic illustration.
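As a concrete illustration, the following Python sketch (our own helper, not the authors' code) extracts these descriptors from the binary mask of one segmented cell; the function name and the mask-based interface are assumptions.

```python
# Minimal sketch: centroid, PCA long axis, and axis endpoints of one segmented cell.
import numpy as np

def cell_geometry(mask):
    """mask: 2D boolean array containing the pixels of exactly one segmented cell."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)    # (num_pixels, 2) pixel coordinates
    center = pts.mean(axis=0)                        # centroid of the cell pixels
    # PCA: the long axis is the leading eigenvector of the pixel covariance matrix.
    cov = np.cov((pts - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]            # unit vector along the long axis
    # Endpoints: the cell pixels with extreme projections onto the long axis.
    proj = (pts - center) @ axis
    p1, p2 = pts[np.argmin(proj)], pts[np.argmax(proj)]
    return center, axis, (p1, p2), float(np.linalg.norm(p2 - p1))
```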
Cell Neighbors. For each image frame J, denote as the set of fully visible cells in J, and by the number of these cells. Let V be the set of all cell centers with . Denote the Delaunay triangulation [] of the finite planar set V with N vertices. We say that two cells , in B are neighbors if they verify the following three conditions: (i) are connected by the edge of one triangle in . (ii) The edge does not intersect any other cell in B. (iii) Their centers verify , where is a user defined parameter.
For the synthetic images of size that we considered (see Section 2.1), we take pixels. We write for short, whenever , are neighbors (i.e., satisfy the three conditions identified above).
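The neighbor relation can be prototyped directly with SciPy's Delaunay triangulation, as in the sketch below. The labeled segmentation image (cell k carries label k + 1, background 0), the default distance threshold, and the straight-line sampling test used for condition (ii) are our own assumptions; only the three conditions themselves come from the text.

```python
# Sketch of the neighbor relation: Delaunay edge, no third cell crossed, bounded center distance.
import numpy as np
from scipy.spatial import Delaunay

def neighbor_pairs(centers, labels, d_max=60.0, n_samples=50):
    # d_max is a placeholder threshold; the paper fixes this parameter per dataset.
    centers = np.asarray(centers, dtype=float)       # (N, 2) cell centers in (x, y) pixels
    tri = Delaunay(centers)
    edges = set()
    for simplex in tri.simplices:                    # each triangle contributes three edges
        for a in range(3):
            i, j = sorted((simplex[a], simplex[(a + 1) % 3]))
            edges.add((i, j))
    neighbors = []
    for i, j in edges:                               # condition (i): Delaunay edge
        ci, cj = centers[i], centers[j]
        if np.linalg.norm(ci - cj) > d_max:          # condition (iii): centers close enough
            continue
        # Condition (ii): sample the segment and reject it if it crosses a third cell.
        ts = np.linspace(0.0, 1.0, n_samples)[:, None]
        pts = ci * (1 - ts) + cj * ts
        cols = np.clip(pts[:, 0].round().astype(int), 0, labels.shape[1] - 1)
        rows = np.clip(pts[:, 1].round().astype(int), 0, labels.shape[0] - 1)
        hit = labels[rows, cols]
        if np.any((hit != 0) & (hit != i + 1) & (hit != j + 1)):
            continue
        neighbors.append((int(i), int(j)))
    return neighbors
```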
Cell Motion. Let J, denote two successive images (i.e., frames). Denote , as the associated sets of cells. Superpose temporarily the images J and so that they then have the same center pixel. Any cell , which does not divide in the interframe , becomes a cell in image . The “motion vector” of cell b from frame J to is then defined by . If the cell b does divide between J and , denote as the last position reached by cell b at the time of cell division, and define similarly the motion . In our experimental recordings of real bacterial colonies with interframe duration 6 min, there is a fixed number such that for all cells and all pairs J, . In particular, we observed that, for real image sequences, pixels is an adequate choice. Consequently, we select pixels for all simulated image sequences of BENCH6. For BENCH1, we select pixels, again based on a comparison with real experimental recordings. Overall, the meta-parameter w is assumed to be a fixed number and to be known, since is an observable upper bound for the cell motion norm for a particular image sequence of a lab experiment.
Target Window. Recall that J, are temporarily superposed. Let be a square window of width w, with the same center as cell b. The target window is the set of all cells in having their centers in . Since , the cell must belong to the target window .
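As a small helper (our own, not from the paper), the target window can be computed as follows; we take the half-width of the window equal to the motion bound w so that, by the bound on cell motion, the window is guaranteed to contain the matched cell.

```python
# Sketch: indices of the cells of frame J' whose centers fall in the window around cell b.
import numpy as np

def target_window(center_b, centers_next, w):
    centers_next = np.asarray(centers_next, dtype=float)   # (M, 2) centers in frame J'
    inside = np.all(np.abs(centers_next - np.asarray(center_b, dtype=float)) <= w, axis=1)
    return np.nonzero(inside)[0]
```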
3. Methodology
3.1. Registration Mappings
Next, we discuss our assumptions on a valid registration mapping that establishes cell-to-cell correspondences between two frames. Let J, denote two successive images, with cell sets B and , respectively. As above, we let , and . Our goal is to track each cell from J to . For each cell , there exist three possible evolutions between J and :
- Case 1:
- Cell did not divide in the interframe , and has become a cell ; that is, has grown and moved during the interframe time interval.
- Case 2:
- Cell divided between J and , and generated two children cells ; we then denote .
- Case 3:
- Cell disappeared in the interframe , so that is not defined.
To simplify our exposition, we ignore Case 3. We discuss Case 3 in greater detail in the conclusions in Section 5. Consequently, a valid (true) registration mapping f will take values in the set .
3.2. Calibration of Cost Function Weights
With the notation we introduced, fix any two finite sets A, . Let be the set of all mappings g: . Fix m penalty functions , . Let be the ground truth mapping we want to discover through minimization in g of some given cost function defined by the linear combination of the penalty functions , the contributions of which are controlled by the cost function weights . In this section, we present a generic weight calibration algorithm, extending a technique introduced and applied in [,] for Markov random field-based image analysis.
The cost function must perform well (with the same weights) for hundreds of pairs of (synthetic) images J, . We consider one such synthetic pair for which the ground truth registration mapping is known, and use it to compute an adequate set of weights, which will then be used on all other synthetic pairs J, . Notice that, for experimental recordings of real cell colonies, no ground truth registration mappings f are available. In this case, f should be replaced by a set of user constructed, correct partial mappings defined on small subsets of A. The proposed weight calibration algorithm will also work in those situations.
We now show how knowing one ground truth mapping f can be used to derive the best feasible weights ensuring that f should be a plausible minimizer of the cost functional over . Let be the vector of m penalties for any mapping . Let be the weight vectors. Then, . Replacing g by another mapping induces the penalty changes and the cost change . Now, fix any known ground truth mapping . We want f to be a minimizer of COST, so we should have for all modifications .
For each , select an arbitrary (where is the target window for cell a; see Section 2.3), to define a new mapping from A to by , and for all . Since f is a minimizer of COST, this single point modification must generate the following cost increase
Denote the vector . Then, the positive vector , , should verify the set of linear constraints for all . There may be too many such linear constraints to satisfy them all exactly. Consequently, we relax these constraints by introducing a vector , , of slack variables indexed by all the . (In optimization, slack variables are introduced as additional unknowns to transform inequality constraints to an equality constraint and a non-negativity constraint on the slack variables.) We require the unknown positive vector and the slack variable vector y to verify the system of linear constraints:
where . The normalizing constant 1000 can be arbitrarily changed by rescaling. We seek high positive values for and small -norm for the slack variable vector y. Thus, we will seek two vectors and solving the following convex-concave minimization problem, where is a user selected (large) meta parameter:
subject to (1), where we denote for arbitrary x. To numerically solve the constrained minimization problem (2), we use the libraries CVXPY and DCCP (disciplined convex-concave programming) [,,]. DCCP is a package for convex-concave programming designed to solve non-convex problems. (DCCP can be downloaded at https://github.com/cvxgrp/dccp (last accessed on 20 January 2022).) It can handle objective functions and constraints with any known curvature as defined by the rules of disciplined convex programming []. We give examples of numerically computed weight vectors below. The computing time was less than 30 s for the data that we have prepared. For simplicity, we considered only one-step changes in our computations, which makes the overlap penalty weak. To increase the accuracy of the model, it is possible to consider a larger number of samples (i.e., multi-step changes). Note that the solutions of (2) are of course not unique, even after normalization by rescaling.
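To illustrate the kind of computation involved, the sketch below sets up a CVXPY/DCCP problem of the same flavor. Since the exact objective and constraints of (1) and (2) are not reproduced here, the surrogate objective (penalize the ℓ1-norm of the slack while rewarding a large weight vector through an epigraph variable) and the margin constraints are our assumptions; only the use of CVXPY together with DCCP's `solve(method="dccp")` follows the text.

```python
# Sketch of the weight calibration as a disciplined convex-concave program (assumed surrogate
# objective and constraints; D holds one row of penalty changes per single-point modification).
import cvxpy as cp
import dccp  # registers the "dccp" solve method with CVXPY
import numpy as np

def calibrate_weights(D, theta=1000.0, eps=1e-3, total=1000.0):
    n, m = D.shape
    lam = cp.Variable(m)               # positive weights of the m penalty terms
    y = cp.Variable(n)                 # non-negative slack variables
    t = cp.Variable()                  # epigraph variable for the concave reward on lam
    constraints = [
        D @ lam + y >= 0,              # relaxed "ground truth is a minimizer" constraints
        lam >= eps,
        cp.sum(lam) == total,          # normalization (the constant 1000 in the text)
        y >= 0,
        t <= cp.norm(lam, 2),          # non-convex constraint handled by DCCP
    ]
    prob = cp.Problem(cp.Minimize(theta * cp.norm1(y) - t), constraints)
    prob.solve(method="dccp")
    return lam.value, y.value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lam, _ = calibrate_weights(rng.normal(size=(200, 6)))   # toy penalty-change data
    print(np.round(lam, 2))
```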
3.3. Cell Divisions and Parent–Children Short Lineages
Next, we discuss how we tackle the assignment problem when cells divide.
3.3.1. Cell Divisions
We now outline a cost function based methodology to detect cell divisions. The first step will be to seek the most likely parent for each potential pair of children cells. Fix two successive synthetic image frames J, with short interframe time equal to 1 minute. Their cell sets B, have cardinality N and , respectively. For our synthetic image sequences, all cells still exist in —either as whole cells or after dividing into two children cells, and no new cell enters the field of view during the interframe . This forces , and implies that the number of cell divisions occurring in this interframe verifies . Each children pair is born from a single parent . Thus, the set of all such true children pairs must then verify
For our video recordings of actual cell populations, during any interframe, we may have cells exiting the field of view and cells entering it, so that may be of the order of . To take this into account, we relax the constraint in (3) as follows:
where is a fixed bound estimated from our experiments. For simplicity, we have restricted our methodology to the situation where and are always 0. However, even in that case, there was a computational advantage to using the slightly relaxed constraint (4) with .
3.3.2. Most Likely Parent Cell for a Given Children Pair
For successive images J, with 1 min interframe, define the set of plausible children pairs by
where the threshold is user selected and fixed for the whole benchmark set BENCH1 of synthetic image sequences.
To evaluate if a pair of cells can qualify as a pair of children generated by division of a parent cell , we now quantify the “geometric distortion” between b and . Cell division of b into occurs with small motions of , . During the short interframe duration, the initial centers , of , in image J move by at most pixels each (see Section 2.3), and their initial distance to the center c of b is roughly at most , where is the long axis of cell b. Hence, the centers c, , of b, , should verify the constraint
Define the set of potential short lineages as the set of all triplets with , , verifying the preceding constraint (6). For each potential lineage , define three terms penalizing the geometric distortions between a parent and a pair of children by the following formulas, where we denote c, , , the centers of cells b, , and A, , their long axes, respectively: (i) center distortion , (ii) size distortion , and (iii) angle distortion
Here, angle denotes “angles between non-oriented straight lines,” with a range from 0 to . Introduce three positive weights , , (to be estimated), and for every short lineage define its distortion cost by
For each plausible pair of children , we will compute the most likely parent cell as the cell minimizing in (7) over all , as summarized by the formula
To force this minimization to yield a reliable estimate of for most true pairs of children , we calibrate the weights , by the algorithm outlined in Section 3.2, using as “ground truth set” a fairly small set of visually identified true short lineages . For fixed , the set of potential parent cells has very small size due to the constraint (6). Hence, brute force minimization of the functional in (7) over all such that , is a fast computation for each in . The distortion minimizing yields the most likely parent cell . The brute force minimization in b of is still a greedy minimization in the sense that other soft constraints introduced further on are not taken into consideration during this preliminary fast computation of .
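A sketch of this greedy parent search is given below. The exact distortion formulas of (7) and the precise form of constraint (6) are not reproduced here, so the center, length, and angle distortions, the candidate filter, the default weights, and the cell dictionaries are plausible stand-ins of our own.

```python
# Sketch: brute-force search of the most likely parent of a candidate children pair.
import numpy as np

def line_angle(u, v):
    """Angle between two non-oriented lines with direction vectors u and v (in [0, pi/2])."""
    c = abs(float(np.dot(u, v))) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def most_likely_parent(children, cells_J, motion_bound, weights=(1.0, 1.0, 1.0)):
    """children: (b1, b2); each cell is a dict with keys "center", "axis", "length"."""
    b1, b2 = children
    mid = 0.5 * (b1["center"] + b2["center"])        # midpoint of the children centers
    axis12 = b1["center"] - b2["center"]             # direction of the children-center line
    best_cost, best_parent = np.inf, None
    for b in cells_J:
        # Stand-in for constraint (6): the parent center must lie close to the children.
        if np.linalg.norm(b["center"] - mid) > 2 * motion_bound + 0.5 * b["length"]:
            continue
        d_center = np.linalg.norm(b["center"] - mid)                   # center distortion
        d_size = abs(b["length"] - (b1["length"] + b2["length"]))      # size distortion
        d_angle = line_angle(b["axis"], axis12)                        # angle distortion
        cost = weights[0] * d_center + weights[1] * d_size + weights[2] * d_angle
        if cost < best_cost:
            best_cost, best_parent = cost, b
    return best_parent, best_cost
```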
3.3.3. Penalties to Enforce Adequate Parent–Children Links
Any true pair of children cells should belong to , but must also verify lineage and geometric constraints which we now enforce via several penalties. Note that the new penalties introduced here are fully distinct from the three penalties specified above to define .
“Lineage” Penalty. Valid children pairs should be correctly matchable with their most likely parent cell (see (8)). Thus, we define the “lineage” penalty by
Notice that the computation of is quite fast.
“Gap” Penalty. Denote as the set of two endpoints of any cell b. For any pair , define endpoints and the gap penalty by
with .
“Dev” Penalty. For rod-shaped bacteria, a true pair of just-born children must have a small gap and roughly aligned cells and . For , we quantify the deviation from alignment as follows. Let , be the closest endpoints of , (see (9)). Let be the straight line linking the centers , of , . Let , be the distances from , to the line . Then, set
“Ratio” Penalty. True children pairs must have nearly equal lengths. Thus, for with lengths , , we define the length ratio penalty by
“Rank” Penalty. Let be the minimum cell length over all cells in . In , children pairs just born during interframe must have lengths , close to . Thus, for , we define the rank penalty by
Given two successive images J, , we seek the set of true children pairs in , which is an unknown subset of . In Section 3.5 below, we replace X by its indicator function z and we build a cost function which should be nearly minimized when z is close to the indicator of . A key term of will be a weighted linear combination of the penalty functions . Since these penalties are different from those introduced in Section 3.3.2, we estimate their weights in the cost function by the algorithm outlined in Section 3.2. The minimization of will be implemented by simulations of a BM with energy function . We present these stochastic neural networks in the next section.
3.4. Generic Boltzmann Machines (BMs)
Minimization of our main cost functionals is a heavily combinatorial task, since the unknown variable is a mapping between two finite sets of sizes ranging from 80 to 120. To handle these minimizations, we use BMs originally introduced by Hinton et al. (see [,]). Indeed, these recurrent stochastic neural networks can efficiently emulate some forms of simulated annealing.
Each BM implemented here is a network of N stochastic neurons . In the BM context, time is discretized and represents the number of steps in a Markov chain, where the successive updates of the BM configuration are analogous to the steps of a Gibbs sampler. The configuration of the whole network at time t is defined by the random states of all neurons . Each belongs to a fixed finite set . Hence, belongs to the configurations set .
Neuron interactivity is specified by a finite set of cliques. Each clique K is a subset of . During configuration updates , neurons may interact only if they are in the same clique. Here, all cliques K are of small sizes 1, or 2, or 3.
For each clique K, one specifies an energy function defined for all , with depending only on the such that . The full energy of configuration z is then defined by
The BM stochastic dynamics is driven by the energy function , and by a fixed decreasing sequence of virtual temperatures , tending slowly to 0 as . Here, we use standard temperature schemes of the form with fixed and slow decay rate .
We have implemented the classical “asynchronous” BM dynamics. At each time t, only one random neuron may modify its state, after reading the states of all neurons belonging to cliques containing . A much faster alternative, implementable on GPUs, is the “synchronous” BM dynamics, where at each time t roughly 50% of all neurons may simultaneously modify their states (see [,,]). The detailed BM dynamics are presented in the appendix (see Appendix A).
When the virtual temperatures decrease slowly enough to 0, the energy converges in probability to a local minimum of the BM energy over all configurations .
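For reference, the sketch below implements generic asynchronous BM dynamics: at each step, one randomly chosen neuron resamples its state from the Boltzmann distribution at the current temperature (heat-bath update). The cooling constants and state sets are placeholders, and the full energy is recomputed for clarity, whereas in practice only the cliques containing the updated neuron need to be re-evaluated.

```python
# Sketch of asynchronous Boltzmann machine dynamics with a slowly decreasing temperature.
import numpy as np

def asynchronous_bm(states, energy, n_steps, T0=1.0, decay=1e-3, rng=None):
    """states[j] is the finite set E_j of admissible values (e.g., candidate cell indices)
    for neuron j; energy maps a full configuration (a list) to the real value E(z)."""
    rng = np.random.default_rng() if rng is None else rng
    z = [int(rng.choice(s)) for s in states]           # random initial configuration
    for t in range(n_steps):
        T = T0 / (1.0 + decay * t)                     # one of the standard slow schedules
        j = int(rng.integers(len(z)))                  # pick one random neuron to update
        energies = np.array([energy(z[:j] + [v] + z[j + 1:]) for v in states[j]])
        logits = -(energies - energies.min()) / max(T, 1e-12)
        probs = np.exp(logits) / np.exp(logits).sum()  # heat-bath (Gibbs) probabilities
        z[j] = states[j][int(rng.choice(len(states[j]), p=probs))]
    return z
```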
3.5. Optimized Set of Parent–Children Triplets
Next, we formulate the search for bona fide parent–children triplets as an optimization problem. For brevity, this outline is restricted to situations where (3) holds, as is the case for our synthetic image data. Simple modifications extend this approach to the relaxed constraint (4), which we used for lab videos of live cell populations. Fix successive images J, with a positive number of cell divisions . Denote the set of m plausible children pairs in . The penalties lin, gap, dev, ratio, and rank defined above for all pairs determine five numerical vectors , , , , in with coordinates , , , , .
We now define a binary BM constituted by m binary stochastic neurons , . At time , each has a random binary-valued state or 0. The random configuration of this BM belongs to the configuration space of all binary vectors . Let be the set of all subsets of . Each configuration is the indicator function of a subset of . We view each as a possible estimate for the unknown set of true children pairs . For each potential estimate of , the “lack of quality” of the estimate will be penalized by the energy function of our binary BM. We now specify the energy for all by combining the penalty terms introduced above. Note that the penalty terms introduced in Section 3.3.2 are quite different from those introduced in Section 3.3.3. No cell in can be assigned to more than one parent in b. To enforce this constraint, define the symmetric binary matrix by (i) if and the two pairs , have one cell in common, (ii) if and the two pairs , have no cell in common, (iii) for all j.
The quadratic penalty is non-negative for , and must be zero if . Introduce six positive weight parameters to be selected further on , . Define the vector as a weighted linear combination of the penalty vectors , , , ,
For any configuration , the BM energy is defined by the quadratic function
We already know that the unknown set of true children pairs must have cardinality . Thus, we seek a configuration minimizing the energy under the rigid constraint . Let be the vector with all its coordinates equal to 1. The constraint on z can be reformulated as . We want the unknown to be close to the solution of the constrained minimization problem
To force this minimization to yield a reliable estimate of , we calibrate the six weights
by the algorithm in Section 3.2. Denote the set of all such that . To minimize under the constraint , fix a slowly decreasing temperature scheme as in Section 3.4. We need to force the BM stochastic configurations to remain in . Then, for large time step t, the will converge in probability to a configuration approximately minimizing under the constraint .
Start with any . Assume that, for , one has already dynamically generated BM configurations . Then, randomly select two sites j, k such that and . Compute a virtual configuration Y by setting , , and for all sites i different from j and k. Compute the energy change , and the probability , where . Then, randomly select or with respective probabilities and . Clearly, this forces .
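The sketch below implements this cardinality-preserving dynamics for the quadratic energy of Section 3.5 (written here as V·z + λ zᵀQz); the heat-bath acceptance probability p = 1/(1 + exp(ΔE/T)) stands in for the acceptance rule in the text, and the cooling constants are placeholders.

```python
# Sketch: swap dynamics keeping exactly d coordinates of the binary configuration equal to 1.
import numpy as np

def constrained_bm(V, Q, lam, d, n_steps, T0=1.0, decay=1e-3, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    m = len(V)
    z = np.zeros(m, dtype=int)
    z[rng.choice(m, size=d, replace=False)] = 1         # admissible initial configuration
    energy = lambda z: float(V @ z + lam * z @ Q @ z)
    for t in range(n_steps):
        T = T0 / (1.0 + decay * t)
        j = int(rng.choice(np.nonzero(z == 1)[0]))      # site switched off
        k = int(rng.choice(np.nonzero(z == 0)[0]))      # site switched on
        Y = z.copy(); Y[j], Y[k] = 0, 1                 # proposed swap keeps sum(z) = d
        dE = energy(Y) - energy(z)
        p = 1.0 / (1.0 + np.exp(np.clip(dE / max(T, 1e-12), -50.0, 50.0)))
        if rng.random() < p:                            # accept with heat-bath probability
            z = Y
    return z
```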
3.6. Performance of Automatic Children Pairing on Synthetic Videos
In the following subsections, we provide experimental results for pairing children and parent cells.
3.6.1. Children Pairing: Fast BM Simulations
For , one can reduce the computational cost for BM dynamics simulations by pre-computing and storing the symmetric binary matrix Q, as well as the m-dimensional vectors , , , , and their linear combination V. A priori reduction of m significantly reduces the computing times, and can be implemented by trimming away the pairs for which the penalties , , , , and are all larger than predetermined empirical thresholds. We performed a study on 100 successive (synthetic) images. We show scatter plots for the most informative penalty terms in Figure 3. These plots allow us to determine adequate thresholds for the penalty terms. We observed that, for the synthetic and real data we considered, trimming on , , and reduced the percentage of invalid children pairs by 95%, therefore drastically reducing the combinatorial complexity of the problem.
Figure 3.
Scatter plots for tandems of the penalty terms , , and . We mark in orange the true children pairs and in blue invalid children pairs. These plots allow us to identify appropriate empirical thresholds to trim the (considered synthetic) data in order to reduce the computational complexity of the parent–children pairing.
The quadratic energy function is the sum of clique energies involving only cliques of cardinality 1 and 2. For any clique of cardinality 1, with , one has . For any clique of cardinality 2, with , one has . A key computational step when generating is to evaluate the energy change when one flips the binary values and by the new value for a fixed single site i. This step is quite fast since it uses only the numbers , , and , , where is the row of the matrix Q.
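For a symmetric Q with zero diagonal, this energy change has a simple closed form, ΔE = (1 − 2zᵢ)(Vᵢ + 2λ Qᵢ·z); the short snippet below (our own derivation, not the paper's code) checks it against a full recomputation.

```python
# Sketch: O(m) energy change for a single-site flip of E1(z) = V.z + lam * z^T Q z.
import numpy as np

def delta_energy_flip(z, i, V, Q, lam):
    """Assumes Q symmetric with zero diagonal; only V[i] and row Q[i] are needed."""
    delta = 1 - 2 * z[i]                     # +1 if z_i flips 0 -> 1, -1 if it flips 1 -> 0
    return delta * (V[i] + 2.0 * lam * (Q[i] @ z))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    m = 30
    Q = np.triu(rng.integers(0, 2, size=(m, m)), 1); Q = Q + Q.T
    V, z, lam, i = rng.normal(size=m), rng.integers(0, 2, size=m), 5.0, 7
    E = lambda z: V @ z + lam * z @ Q @ z
    z2 = z.copy(); z2[i] = 1 - z2[i]
    assert np.isclose(E(z2) - E(z), delta_energy_flip(z, i, V, Q, lam))
```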
3.6.2. Children Pairing: Implementation on Synthetic Videos
We have implemented our children pairing algorithms on synthetic image sequences having 100 to 500 image frames with 1 min interframe (benchmark set BENCH1; see Section 2.1). The cell motion bound per interframe was defined by pixels. The parameter that defines the sets of plausible children pairs (see (5)) was set at pixels.
The known true cell registrations indicated that, in our typical BENCH1 image sequence, the successive sets had average cardinalities of 120, while the number of true children pairs per roughly ranged from 2 to 6 with a median of 4. The size of the reduced configuration space per image frame thus ranged from to with a median of .
Our weights estimation technique introduced in Section 3.2 yields the weights
and
for the penalties introduced in Section 3.3.3. To reduce the computing time for hundreds of BM energy minimizations on the BENCH1 image sequences, we excluded obviously invalid children pairs in each set by simultaneous thresholding of the penalty terms. The BM temperature scheme was , with the number of epochs capped at 5000. The average CPU time for BM energy minimization dedicated to optimized children pairing was about 30 s per frame. (We provide hardware specifications in Appendix B.)
3.6.3. Parent–Children Matching: Accuracy on Synthetic Videos
For each successive image pair J, , with cells B, of cardinality , our parent–children matching algorithm computes a set of short lineages , where the cell is expected to be the parent of cells . Recall that provides the number of cell divisions during the interframe . The number of correctly reconstructed short lineages is obtained by direct comparison to the known ground truth registration . For each frame J, we define the pcp-accuracy of our parent–children pairing algorithm as the ratio .
We have tested our parent–children matching algorithm on three long synthetic image sequences BENCH1 (500 frames), BENCH2 (300 frames), and BENCH3 (300 frames), with respective interframes of 1, 2, and 3 min. For each frame , we computed the pcp-accuracy between and .
We report the accuracies of our parent–children pairing algorithms in Table 2. For BENCH1, all 500 pcp-accuracies reached 100%. For BENCH2, pcp-accuracies reach 100% for 298 frames out of 300, and for the remaining two frames, accuracies were still high at 93% and 96%. For BENCH3, where interframe duration was longest (3 min), the 300 pcp-accuracies decreased slightly but still averaged 99%, and never fell below 90%.
Table 2.
Accuracies of parent–children pairing algorithm. We applied our parent–children pairing algorithm to three long synthetic image sequences BENCH1 (500 frames), BENCH2 (300 frames), and BENCH3 (300 frames), with interframe intervals of 1, 2, 3 min, respectively. The table summarizes the resulting pcp-accuracies. Note that pcp-accuracies are practically always at 100%. For BENCH2, pcp-accuracies are 100% for 298 frames out of 300, and for the remaining two frames, accuracies were still high at 93% and 96%. For BENCH3, the average pcp-accuracy for the 3 min interframe is 99%.
3.7. Reduction to Registrations with No Cell Division
Fix successive frames and their cell sets B, . We seek the unknown registration mapping , where iff cell b did not divide during the interframe and iff cell b divided into during the interframe.
If , we know that the number of cell divisions during the interframe should be . We then apply the parent–children matching algorithm outlined above to compute a set of short lineages with , and . For each , the cell b is computed by as the parent cell of the two children cells .
For each , eliminate from B the parent cell, b, and eliminate from the two children cells , . We are left with two residual sets, and having the same cardinality, . Assuming that our set of short lineages is correctly determined, the cells should not divide in the interframe , and hence have a single (still unknown) registration . Thus, the still unknown part of the registration f is a bijection from to .
Let and . For each , the cell b divides into the unique pair of cells, such that . Hence, we can set for all . Thus, the remaining problem to solve is to compute the bijective registration . We have reduced the registration discovery to a new problem, where no cell divisions occur in the interframe duration. In what follows, we present our algorithm to solve this registration problem.
3.8. Automatic Cell Registration after Reduction to Cases with No Cell Division
As indicated above, we can explicitly reduce the generic cell tracking problem to a problem where there is no cell division. We consider images J, with associated cell sets B, such that . Hence, there are no cell divisions in the interframe and the map f of this reduced problem is (in principle) a bijection with . In Figure 4, we show two typical successive test images with no cell division, generated by the simulation software [,] (see Section 2.1).
Figure 4.
Simulated cell dynamics. From left to right, two successive simulated images J and with an interframe time of six minutes and no cell division, their image difference , and the associated motion vectors. For the images J and , we color four pairs of cells in , which should be matched by the true cell registration mapping. Notice that the motion for an interframe time of six minutes is significant. We can observe that, even without considering cell division, we can no longer assume that corresponding cells in frames J and overlap.
3.8.1. The Set of Many-to-One Cell Registrations
We have reduced the registration search to a situation where, during the interframe , no cell has divided, no cell has disappeared, and no cell has suddenly emerged in without originating from B. The unknown registration should then in principle be injective and onto. However, for computational efficiency, we will temporarily relax the bijectivity constraint on f. We will seek f in the set of all many-to-one mappings such that for each , the cell is in the target window (see Section 2.3).
3.8.2. Registration Cost Functional
To design a cost functional , which should be roughly minimized when is very close to the true registration from B to , we linearly combine penalties , , , weighted by unknown positive weights , , , , to write, for all registrations ,
We specify the individual terms that appear in (11) below. Ideally, the minimizer of over all is close to the unknown true registration mapping . To enforce a good approximation of this situation, we first estimate efficient positive weights by applying our calibration algorithm (see Section 3.2). The actual minimization of over all is then implemented by a BM described in Section 3.9.
Cell Matching Likelihood: . Here, we extend a pseudo-likelihood approach used to estimate parameters in Markov random field modeling by Gibbs distributions (see []). Recall that is the known average cell growth rate. For any cells , , the geometric quality of the matching relies on three main characteristics: (i) motion of the cell center , (ii) angle between the long axes and , (iii) cell length ratio . Thus, for all and in the target window , define (i) Kinetic energy: . (ii) Distortion of cell length: . (iii) Rotation angle: is the geometric angle between the straight lines carrying and .
Fix , and let run through the whole target window . The finite set of values thus reached by the kinetic penalties has two smallest values , . Define , which is a list of “low” kinetic penalty values. Repeat this procedure for the penalties and to similarly define a list of “low” distortion penalty values and a list of “low” rotation penalty values.
The three sets , , can be viewed as three random samples of size , respectively, generated by three unknown probability distributions , , . We approximate these three probabilities by their empirical cumulative distribution functions , , , which can be readily computed. We now use the right tails of these three CDFs to compute separate probabilistic evaluations of how likely the matching of cell with cell is. For any fixed mapping , and any , set . Compute the three penalties , , , and define three associated “likelihoods” for the matching :
High values of the penalties , , thus will yield three small likelihoods for the matching . With this, we can define a “joint likelihood” evaluating how likely is the matching :
Note that higher values of correspond to a better geometric quality for the matching of b with . To avoid vanishingly small likelihoods, whenever , we replace it by . Then, for any mapping , we define its likelihood by the finite product
The product of these N likelihoods is typically very small, since can be large. Thus, we evaluate the geometric matching quality of the mapping f via the averaged log-likelihood of f, namely,
Good registrations should yield small values for the criterion .
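The construction can be sketched as follows; the right-tail convention LIK = 1 − F(penalty), the product form of the joint likelihood, the flooring constant, and the use of a single penalty pool (rather than one pool per target window) are simplifying assumptions of ours.

```python
# Sketch of the likelihood term LIKE(f) built from empirical CDF tails of penalty pools.
import numpy as np

def ecdf(samples):
    """Empirical cumulative distribution function of a 1-D sample, returned as a callable."""
    s = np.sort(np.asarray(samples, dtype=float))
    return lambda x: np.searchsorted(s, x, side="right") / len(s)

def matching_likelihood(kin, dis, rot, kin_pool, dis_pool, rot_pool, eps=1e-4):
    F_kin, F_dis, F_rot = ecdf(kin_pool), ecdf(dis_pool), ecdf(rot_pool)
    lik = (1.0 - F_kin(kin)) * (1.0 - F_dis(dis)) * (1.0 - F_rot(rot))
    return max(lik, eps)                     # floor to avoid vanishingly small likelihoods

def like_penalty(matched_pairs, penalties, pools):
    """penalties(b, fb) returns the (kinetic, distortion, rotation) penalties of a matching;
    LIKE(f) is the negative averaged log-likelihood over all matched pairs (b, f(b))."""
    logs = [np.log(matching_likelihood(*penalties(b, fb), *pools)) for b, fb in matched_pairs]
    return -float(np.mean(logs))
```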
Overlap: . We expect bona fide cell registrations to be bijections. Consequently, we want to penalize mappings f which are many-to-one. We say that two distinct cells do overlap for the mapping if . The total number of overlapping pairs for f defines the overlap penalty:
Neighbor Stability: . Let . Denote as the set of all neighbors for cell in B (i.e., ; see Section 2.3). For bona fide registrations , and for most pairs of neighbors in B, we expect and to remain neighbors in . Consequently, we penalize the lack of “neighbors stability” for f by
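Both OVER and STAB can be prototyped as simple pair counts over a candidate registration, as sketched below; the dictionary representation of f and of the neighborhoods is our own, and no normalization is applied.

```python
# Sketch of the overlap and neighbor-stability penalties for a candidate mapping f.
from itertools import combinations

def overlap_penalty(f):
    """Number of distinct cell pairs of B mapped by f onto the same cell of B'."""
    return sum(1 for b1, b2 in combinations(f, 2) if f[b1] == f[b2])

def stability_penalty(f, neighbors_B, neighbors_Bnext):
    """Number of neighbor pairs of B whose images under f are no longer neighbors in B'."""
    broken = 0
    for b1, b2 in combinations(f, 2):
        if b2 in neighbors_B[b1] and f[b2] not in neighbors_Bnext.get(f[b1], set()):
            broken += 1
    return broken
```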
Neighbor Flip: . Fix any mapping , any cell and any two neighbors , of b in B. Let , , . Let c, , and d, , be the centers of cells b, , and z, , . Let be the oriented angle between and , and let be the angle between and , respectively. We say that the mapping f has flipped cells , around b, and we set if , are both neighbors of z, and the two angles , have opposite signs. In all other cases, we set .
For any registration , define the flip penalty for f by
where is the neighborhood of cell b in B. In Figure 5, we illustrate an example of an unwanted cell flip.
Figure 5.
Illustration of an undesirable flip for the mapping f. The cells and are neighbors of , and mapped by f on neighbors of , as should be expected for bona fide cell registrations. However, for this mapping f, we have above above , whereas, for the original cells, we had above above . Our cost function penalizes flips of this nature.
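The flip indicator itself reduces to comparing the signs of two 2D cross products, as in the sketch below (our own formulation of the oriented-angle test).

```python
# Sketch of the FLIP indicator for one cell b and two of its neighbors b1, b2.
import numpy as np

def _cross2(u, v):
    return u[0] * v[1] - u[1] * v[0]         # z-component of the 2D cross product

def flip_indicator(c, c1, c2, d, d1, d2, images_are_neighbors):
    """c, c1, c2: centers of b, b1, b2 in frame J; d, d1, d2: centers of their images under f.
    images_are_neighbors: True if f(b1) and f(b2) are both neighbors of f(b) in B'."""
    if not images_are_neighbors:
        return 0
    c, c1, c2, d, d1, d2 = (np.asarray(p, dtype=float) for p in (c, c1, c2, d, d1, d2))
    sign_before = _cross2(c1 - c, c2 - c)    # sign of the oriented angle in frame J
    sign_after = _cross2(d1 - d, d2 - d)     # sign of the oriented angle in frame J'
    return int(sign_before * sign_after < 0) # opposite orientations signal a flip
```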
3.9. BM Minimization of Registration Cost Function
In what follows, we define the optimization problem for the registration of cells from one frame to another (i.e., cell tracking), as well as associated methodology and parameter estimates.
3.9.1. BM Minimization of over
Let B, be two successive sets of cells. As outlined above, we have reduced the problem to one in which we can assume that , so that there is no cell division during the interframe. Write . For short, denote instead of the target window of cell . We seek to minimize over all registrations . Let be a BM with sites and stochastic neurons . At time t, the random state of will be some cell belonging to the target window and the random configuration of the whole belongs to the configurations set .
To any configuration , we associate a unique cell registration defined by for all j, denoted by . This determines a bijection from onto . The inverse of will be called , and is defined by , when for all j.
3.9.2. BM Energy Function
We now define the energy function of our BM for all . Denote . Since is a bijection from to , we must have
Our goal is to minimize , and we know that BM simulations should roughly minimize over all . Thus, we define the BM energy function by forcing
for any registration mapping , which—due to the preceding subsection—is equivalent to
for all configurations . The next subsection will explicitly express the energy in terms of cliques of neurons. Due to (13) and (14), we have
For large time t, the BM stochastic configuration tends with high probability to concentrate on configurations , which roughly minimize . The random registration will belong to and verify , so that . Consequently, for large t—with high probability—the random mapping will have a value of the cost functional close to .
3.9.3. Cliques of Interactive Neurons
The BM energy function just defined turns out to involve only three sets of small cliques: (i) is the set of all singletons , with . (ii) is the set of all pairs such that cells and are neighbors in B. (iii) is the set of all triplets such that cells and are both neighbors of in B. Denote as the set of all cliques for our BM.
Cliques in . For each clique in , and each , define its energy by
where LIK is given by (12). Set for K in . For all , define the energy by
which implies that the registration verifies .
Cliques in . For all , all cliques in , define the clique energies and by and
where and are the numbers of neighbors in B for cells and , respectively. Set for K in . Define the two energy functions
which implies that verifies and .
Cliques in . For each clique in , define the clique energy by
where is any registration mapping , , onto , , . The indicator FLIP was defined in Section 3.8.2. Set for K in . Define the energy
which implies that verifies .
Finally, define the clique energy for all by the linear combination
Summing this relation over all yields
Define then the final BM energy function by
For any , the associated registration verifies , , . By weighted linear combination of these equalities, and, due to (15), we obtain for all configurations , when or, equivalently, when .
3.9.4. Test Set of 100 Synthetic Image Pairs
As shown above, the minimization of the cost functional COST over all registrations is equivalent to seeking BM configurations with minimal energy E. We have implemented this minimization of E by the long-term asynchronous dynamics of the BM just defined. This algorithm was designed for the registration of image pairs exhibiting no cell division and was, therefore, implemented after the automatic reduction of the generic registration problem, as indicated earlier. We have tested this specialized registration algorithm on a set of 100 pairs of successive images of simulated cell colonies exhibiting no cell divisions. These 100 image pairs were extracted from the benchmark set BENCH6 of synthetic image sequences described in Section 2.1. The 100 pairs of cell sets B, B′ had sizes ranging from 80 to 100 cells. For each test pair B, B′, each target window typically contained 30 to 40 cells. The set of configurations CONF therefore had a huge cardinality, while the average number of neighbors of a cell was only around 4 to 5.
3.9.5. Implementation of the BM Minimization
The numbers of cliques in CL1, CL2, CL3, and hence the numbers of clique energy values to store, remain small enough that, to automatically register B to B′, one can pre-compute and store all possible values of the clique energies for all cliques K and all configurations z. This accelerates the key computing step of the asynchronous BM dynamics, namely, the evaluation of the energy change E(z′) − E(z) when the configurations z and z′ differ at only one site j. Indeed, a single-site modification affects only the energy values of the very small number of cliques K which contain the site j. In our benchmark sets of synthetic images, each site belonged to at most 24 cliques. Hence, the computation of E(z′) − E(z) was fast, since it requires retrieving at most 24 pairs of pre-computed clique energies and evaluating the 24 corresponding differences. Another practical acceleration step is to replace the ubiquitous computation of acceptance probabilities of the form exp(−ΔE/T_t) by simply testing the energy change ΔE against 100 precomputed logarithmic thresholds.
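The following sketch illustrates the accelerated single-site update under the assumption that the clique energies have been pre-computed in a lookup table; the data structures (cliques_of_site, clique_energy) are illustrative, and the acceptance test uses the logarithm of a fresh uniform draw instead of the precomputed threshold table mentioned above.

```python
import math
import random

def delta_energy(j, new_state, config, cliques_of_site, clique_energy):
    """Energy change when site j switches from config[j] to new_state.

    cliques_of_site[j] : tuples of site indices for the cliques containing j
    clique_energy[K]   : dict mapping the restriction of a configuration to K
                         (a tuple of states) to the pre-computed clique energy
    """
    delta = 0.0
    for K in cliques_of_site[j]:          # only cliques containing site j matter
        old_key = tuple(config[i] for i in K)
        new_key = tuple(new_state if i == j else config[i] for i in K)
        delta += clique_energy[K][new_key] - clique_energy[K][old_key]
    return delta

def accept(delta, temperature, rng=random):
    """Accept energy decreases; otherwise accept with probability exp(-delta/T),
    tested on the logarithmic scale to avoid evaluating the exponential."""
    if delta <= 0:
        return True
    return math.log(rng.random()) < -delta / temperature
```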
In our implementation of the asynchronous BM dynamics, we used virtual temperature schemes T_t decreasing slowly to 0, as described in Section 3.4. The BM simulation was stopped when the stochastic energy had remained roughly stable during the last N steps. Since all target windows had cardinality smaller than 40, the initial configuration was computed by assigning to each cell b_j the cell in its target window W_j with maximal likelihood, where the likelihoods LIK were defined by (12).
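As a concrete illustration of this initialization, the sketch below assigns to each neuron the candidate of its target window with maximal likelihood; the callable lik stands in for the likelihood LIK of (12) and is an assumption of this sketch.

```python
def initial_configuration(cells, target_windows, lik):
    """Greedy initialization: each neuron starts at the candidate cell of its
    target window with maximal likelihood LIK (cf. Equation (12))."""
    config = []
    for b_j in cells:
        candidates = target_windows[b_j]
        config.append(max(candidates, key=lambda c: lik(b_j, c)))
    return config
```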
3.9.6. Weight Calibration
For the pair of successive synthetic images displayed in Figure 4, the ground truth registration f is known by construction; we used it to apply the weight calibration described in Section 3.2. After fixing the calibration meta-parameter, we obtained the vector of weights given in (17).
These weights are kept fixed for all 100 pairs of images taken from the set BENCH6. The determined weights are used in the cost function defined above; this correctly parametrized the BM energy function E. We then simulated the BM stochastic dynamics to minimize E.
3.9.7. BM Simulations
We launched 100 simulations of the asynchronous BM dynamics, one for each pair of successive images in our test set of 100 image pairs taken from BENCH6. For each such pair, the ground truth mapping f was known by construction, and the stochastic minimization of the BM energy generated an estimated cell registration f̂. For each pair B, B′ in the considered set, the accuracy of this automatically computed registration was evaluated by the percentage of cells b in B such that f̂(b) = f(b). When B contains N cells, our BM has N stochastic neurons, and the asynchronous BM dynamics proceeds by successive epochs. Each epoch is a sequence of N single-site updates of the BM configuration. For each of our 100 simulations of the asynchronous BM dynamics, the number of epochs ranged from 250 to 450.
The average computing time was about eight seconds per epoch, which entailed a computing time ranging from 30 to 50 min for each one of our 100 automatic registrations reported here. (We specify the hardware used to carry out these computations in Appendix B.) Each image contains about 100 to 150 cells. Consequently, the runtime of the algorithm is approximately 20 s per cell for our prototype implementation. We note that this is only a rough estimate. The runtime depends on several factors, such as the number of cells in an image; the number of mother and daughter cells (i.e., how many cells divide); the size of the neighborhood of each individual cell (window size); the weights used in the cost function (which affect the number of epochs); etc. We note that the temperature scheme has not been optimized yet, so that these computing times are upper bounds. Earlier SBM studies [,,,] indicate that the same energy minimizations on GPUs could provide a computational speedup by a factor ranging between 30 and 50. We report registration accuracies in Table 3. For each pair of images in the considered set of 100 image pairs, the accuracy of automatic registration was larger than 94.5%. The overall average registration accuracy was quite high at 99%.
Table 3.
Registration accuracy for the synthetic image sequence BENCH6. We consider 100 pairs of consecutive synthetic images taken from the benchmark dataset BENCH6. Automatic registration was implemented by BM minimization of the cost functional, parametrized by the vector of optimized weights in (17). The average registration accuracy was 99%.
4. Results
In this section, we report cell registration results for dynamics involving growth, motion, and cell divisions.
4.1. Tests of Cell Registration Algorithms on Synthetic Data
We now consider more generic, long synthetic image sequences of simulated cell colonies, with a small interframe duration of one minute. We still impose the mild constraint that no cell is lost between two successive images. The main difference from the earlier benchmark of 100 image pairs from BENCH6 is that cells are allowed to freely divide during interframes, as well as to grow and to move. For the full implementation on 100 pairs of successive images, we first execute the parent–children pairing and remove the identified parent–children triplets; we can then apply our cell registration algorithm to the reduced cell sets. Our image sequence contained 760 true parent–children triplets, which we automatically identified with an accuracy of 100%. As outlined earlier, we removed all these identified cell triplets and then applied our tracking algorithm. This left us with a total of 12,631 cells (spread over 100 frames). Full automatic registration was then implemented with an accuracy higher than 99.5%.
4.2. Tests of Cell Registration Algorithms on Laboratory Image Sequences
To test our cell tracking algorithm on pairs of consecutive images extracted from recorded image sequences of bacterial colonies (real data), we had to automatically delineate all individual cells in each image. Representative frames of these data are shown in Figure 1. We describe these data in more detail in Section 2.2. We only briefly outline the overall segmentation approach so as not to distract from our main contribution—the cell tracking algorithm. We use the watershed algorithm [] (also used, e.g., in []) to segment each frame into image segments containing one single cell each. These regions represent oversegmentations of the individual cells; we only know that each region will contain one bacterial cell b. To segment individual cells, additional steps are necessary. We first apply ad hoc nonlinear filters to remove minor segmentation artifacts. In a second step, we identify the contour of each single cell b by applying the Mumford–Shah algorithm [] within the image segment containing that cell. Since this procedure is quite time-consuming for large images, we have used it to produce a training set of delineated individual cells to train a CNN for image segmentation. After automatic training, this CNN substantially reduces the runtime of the cell segmentation/delineation procedure. We show the resulting segmentations in Figure 6. We provide additional information regarding our approach for the segmentation of individual bacterial cells in the appendix (see Appendix D).
Figure 6.
Segmentation results for experimental recordings of live cell colonies. We show two short image sequence extracts, COL1 (left) and COL2 (right). The interframe duration is six minutes. The image sequence extract COL1 has only two successive image frames. The image sequence extract COL2 has four successive image frames. We are going to automatically compute four cell registrations, one for each pair of successive images in COL1 and COL2.
After each cell has been identified (i.e., segmented out) in each pair of successive images, we transform both images into binary images, where cells appear in white on a black background. For each resulting pair B, B′ of successive cell sets, we apply the parent–children pairing algorithm outlined in Section 3.3 to identify all the short lineages. For the two successive images in COL1, the discovered short lineages are shown in Figure 7 (left pair of images). Here, color designates the algorithmically identified cell triplets: a parent cell in the first image and its two children in the second image. We then remove each identified parent from B and its two children from B′. This yields a pair of reduced cell sets. We can then apply our tracking algorithm (see Section 3.7) dedicated to situations where cells do not divide during the interframe.
Figure 7.
Cell tracking results for the pair COL1 of successive images shown in Figure 6. The interframe duration is six minutes. (Left): Results for parent–children pairing on COL1. Automatically detected parent–children triplets are displayed in the same color. (Right): Computed registration. The removal of the automatically detected parent–children triplets (see left column) generates two reduced cell sets. Their automatic registration is again displayed via identical colors for registered cell pairs. Mismatches are mostly due to previous errors in parent–children pairing (see Figure 8 for a more detailed assessment).
For image sequences of live cell colonies, we had to re-calibrate most of our weight parameters. The weight parameters used for these image sequences are summarized in Table 4.
Table 4.
Cost function weights for parent–children pairing in the COL1 images displayed in Figure 6.
The BM temperature scheme decreased slowly to zero, with the number of epochs capped at 5000. We illustrate our COL1 automatic registration results in Figure 7 (right pair of images). Here, if a cell b has been automatically registered onto a cell b′, then b and b′ share the same color. The cells colored in white are cells which the registration algorithm did not succeed in matching. These errors can essentially be attributed to errors in the parent–children pairing step. By visual inspection, we have determined that there are 14 true parent–children triplets in the successive images of COL1. Our parent–children pairing algorithm correctly identified 11 of these 14 triplets. To further check the performance of our registration algorithm on live images, we also report automatic registration results for “manually prepared” versions of the reduced cell sets, obtained by manually removing the true parent–children triplets determined by visual inspection. For the short image sequence COL2, results are displayed in Figure 8.
Figure 8.
Cell tracking results for the short image sequence COL2 in Figure 6. The interframe duration for COL2 is six minutes. COL2 involves four successive images. Each one of the three rows displays the automatic cell registration results for one pair of successive images. We report the accuracies of parent–children pairing and of the registration in Table 5. (Left column): Results for parent–children pairing. Each parent–children triplet is identified by the same color for the parent cell and its two children. (Middle column): Display of the automatically computed registration after removing the parent–children triplets already identified, in order to generate two reduced sets of cells. Again, the same color is used for each pair of automatically registered cells. The white cells are cells which could not be registered. (Right column): To differentiate between errors induced during the automatic identification of parent–children triplets and errors generated by the automatic registration itself, we manually removed all “true” parent–children triplets and then applied our registration algorithm to these “cleaned” (reduced) cell sets.
The display setup is the same: The left column shows the results of automatic parent–children pairing. The middle column illustrates the computed registration after automatic removal of the computer-identified parent–children triplets. The third column displays the computed registration after manually removing the true parent–children triplets determined by visual inspection. Note that the overall matching accuracy can be improved if we reduce errors in the parent–children pairing. We report quantitative accuracies in Table 5. For parent–children pairing, accuracy ranges between 70% and 78%. For pure registration after correct parent–children pairing, accuracy ranges between 90% and 100%.
Table 5.
Cell tracking accuracy for the short image sequence COL2 in Figure 6 with an interframe duration of six minutes. For each of the three pairs of successive images, we report the ratio of correctly predicted cell matches over the total number of true cell matches and the associated percentages. The accuracy results quantify four distinct percentages of correct detections: (i) for parent cells in the first image of each pair, (ii) for children cells in the second image, (iii) for parent–children triplets, and (iv) for registered pairs of cells.
| Task | Frames 1→2 | % | Frames 2→3 | % | Frames 3→4 | % |
|---|---|---|---|---|---|---|
| correctly detected parents | 15/19 | 79% | 20/21 | 95% | 7/10 | 70% |
| correctly detected children | 35/38 | 92% | 32/42 | 76% | 14/20 | 70% |
| correct parent–children triplets | 15/19 | 78% | 16/21 | 76% | 7/10 | 70% |
| correctly registered cell pairs | 36/36 | 100% | 44/49 | 90% | 76/80 | 95% |
5. Conclusions and Future Work
We have developed a methodology for automatic cell tracking in recordings of dense bacterial colonies growing in a mono-layer. We have also validated our approach using synthetic data from agent-based simulations, as well as experimental recordings of E. coli colonies growing in microfluidic traps. Our next goal is to streamline our implementation for systematic cell registration on experimentally acquired recordings of such cell colonies, to enable automated quantitative analysis and modeling of cell population dynamics and lineages.
There are a number of challenges for our cell tracking algorithm: Inherent imaging artifacts such as noise or intensity drifts, cell overlaps, similarity of cell shape characteristics across the population, tight packing of cells, somewhat large interframe times, cell growth combined with cell motion, and cell divisions represent just a few of these challenges. Overall, the cell tracking problem has combinatorial complexity and, for large frames, is impractical for human experts to solve by hand. We tackle these challenges by developing a two-stage algorithm that first identifies parent–children triplets and subsequently computes the cell registration from one frame to the next, after reducing the two original cell sets by automatic removal of the identified parent–children triplets. Our algorithms specify innovative cost functions dedicated to these registration challenges. These cost functions have combinatorial complexity. To discover good registrations, we minimize these cost functions numerically by intensive stochastic simulations of specifically structured BMs. We have validated the potential of our approach by reporting promising results obtained on long synthetic image sequences of simulated cell colonies (which naturally provide a ground truth for cell registration from one frame to the next). We have also successfully tested our algorithms on experimental recordings of live bacterial colonies.
The choice of adequate cost functions to drive each major cost optimization step in our multi-step cell tracking algorithms is essential for obtaining good tracking. Selecting the proper formulation had a strong impact on actual tracking accuracy. Our cost functions are fundamentally nonlinear, which entails additional complications. We introduced a set of meta-parameters for each cost function, and proposed an original learning algorithm to automatically identify good ranges for these meta-parameters.
Our BMs are focused on the stochastic minimization of dedicated cost functions. An interesting feature of BMs, which we will explore in future work, is the simplicity of their natural massive parallelization for fast stochastic minimization []. This allows us to mitigate the slow convergence typically observed for Gibbs samplers on discrete state spaces with high cardinality. Parallelized BMs implement a form of massively parallel simulated annealing. Sequential simulated annealing has been explored by physicists [,,,] seeking to minimize spin-glass energies. For these clique-based energies, reaching global minima requires unfeasible CPU times, whereas much faster parallel simulated annealing yields only good local minima, via a sophisticated but still greedy stochastic search. Parallel stochasticity favors ending in rather stable local minima, which in turn enforces low sensitivity to small changes in energy parameters. Robustness to small changes in the coefficients of our cost functions is a desirable feature, since our algorithmic calibration of cost coefficients focuses on computing good ranges for these meta-parameters. We do not aim to seek global minima, generally a very elusive goal, because computing speed and scalability are important features in our problem. Recall the established results of Huber [] showing that optimal estimators of the mean of a Gaussian distribution lose efficiency very quickly when the Gaussian data are slightly perturbed.
In future work, we will further improve the stability and accuracy of our cell registration algorithms by exploring natural modifications of our cost functions. In the present work, we have not yet explicitly considered the case of cells vanishing between successive frames. This is a critical issue that can occur due to cells exiting or entering the field of view as well as due to errors in cell segmentation. The problem is somewhat controlled and/or mitigated in our experimental setup, where we expect cells to enter or vanish close to a precisely positioned trap edge and/or near frame boundaries. Since we intend to track lineages, each frame-to-frame error of this type may be problematic, and it will be instrumental for our future work to address these issues.
Linking parents to children involves an optimization distinct from the final optimization of frame-to-frame registrations. This did reduce computing time without reducing the quality of our benchmark results. However, in future work, one could attempt to iterate this sequence of two optimizations in order to reach a better minimum.
We note that our algorithm does work for experimental setups in which the frame rate of the video recordings is not fixed. This will require an adaptive parameter selection that depends on the frame rate. This can be implemented based on a trivial rescaling procedure. However, note that, for larger interframe times, more errors will impact tracking results. Indeed, large interframe durations intensify fluctuations in key parameters of cell dynamics, and increase the range of cell displacement, imposing searches in larger cell neighborhoods for cell pairing, as well as increased combinatorial complexity.
We have considered synthetic data to evaluate the performance of our method. One clear practical issue is that some of the parameters of our tracking algorithms may change when applied to laboratory image sequences acquired from colonies of different cells, with various image acquisition setups. One can design a computational framework to automatically fit the parameters of the simulation model to the imaging data acquired on specific live cell colonies, using specific camera hardware and setup. In future work, we will attempt to implement this type of fitting for our simulation model, before launching intensive model simulations to calibrate the parameters of our new tracking algorithms. We have not yet removed physical scales in the implementation of our tracking algorithm. Implementing such a non-dimensionalization will allow us to reduce the sensitivity of our methodology with respect to new datasets.
Identification of full lineages is an interesting concrete goal for cell tracking. Evaluating the accuracy of lineage identification on real cell colonies is quite challenging since it requires inheritable biological tagging of cells. This is probably feasible for populations mixing two or three cell types, but not for individualized tagging in populations of moderate size. However, even partial tagging of sub-populations would provide some control on lineage identification accuracies.
Author Contributions
Conceptualization, R.A., M.R.B., K.J. and A.M.; methodology, R.A., A.M., S.S.; software, S.S. and J.J.W.; validation, S.S. and J.J.W.; formal analysis, R.A. and A.M.; investigation, R.A., M.R.B., K.J., A.M. and S.S.; resources, R.A., M.R.B., K.J. and A.M.; data curation, R.N.A. and M.R.B.; writing—original draft preparation, R.N.A., R.A., M.R.B., K.J., A.M., S.S. and J.J.W.; writing—review and editing, R.N.A., R.A., M.R.B., K.J., A.M., S.S. and J.J.W.; visualization, S.S., J.J.W. and A.M.; supervision, R.A. and A.M.; project administration, R.A., M.R.B., K.J. and A.M.; funding acquisition, R.A., M.R.B., K.J. and A.M. All authors have read and agreed to the published version of the manuscript.
Funding
This research was partly supported by the National Science Foundation (NSF) through the grants GRFP 1842494 (R.N.A.), DMS-1854853 (R.A. and A.M.), DMS-2009923 (R.A. and A.M.), 1662305 (K.J.), MCB-1936770 (K.J.), and DMS-2012825 (A.M.); the joint NSF-National Institutes of General Medical Sciences Mathematical Biology Program grant DMS-1662290 (M.R.B.); and the Welch Foundation grant C-1729 (M.R.B.). Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and do not necessarily reflect the views of the NSF or the Welch Foundation.
Acknowledgments
This work was completed in part with resources provided by the Research Computing Data Core at the University of Houston.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A. Stochastic Dynamics of BMs
Notations and terminology refer to Section 3.4. Consider a BM network of N stochastic neurons U_1, ..., U_N, with finite configuration set CONF. At time t, let X_j(t) be the random state of neuron U_j; the BM configuration is then X(t) = (X_1(t), ..., X_N(t)). Fix, as in Section 3.4, a sequence of virtual temperatures T_t slowly decreasing to 0 for large t.
There are two main options to implement the Markov chain dynamics (see []).
Appendix A.1. Asynchronous BM Dynamics
Generate a long random sequence of sites j(1), j(2), ..., for instance by concatenating successive random permutations of the set S of sites. At time t, the only neuron that may modify its current state is U_j, with j = j(t). The neuron U_j will compute its new random state by the following updating procedure: (i) For each possible state y of neuron U_j, define a new configuration which coincides with the current configuration X(t) at all sites i ≠ j and takes the value y at site j; let ΔE(y) be the corresponding BM energy change. (ii) Among all possible states y, select any z such that ΔE(z) is minimal, and set Δ = ΔE(z). (iii) Compute the probability p = exp(−max(Δ, 0)/T_t). (iv) The new random state of neuron U_j will be equal to z with probability p and equal to the current state with probability 1 − p. (v) For all i ≠ j, the new state of neuron U_i remains equal to its current state X_i(t).
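For illustration, the following self-contained sketch implements the asynchronous dynamics described above for a generic energy function over configurations; it omits the clique-based acceleration of Section 3.9.5, and the function and variable names are assumptions of this sketch.

```python
import math
import random

def asynchronous_bm(energy, state_spaces, initial_config, temperatures, rng=None):
    """Run asynchronous BM dynamics.

    energy         : callable mapping a configuration (tuple) to a float
    state_spaces   : list of candidate states for each of the N neurons
    initial_config : starting configuration (list of length N)
    temperatures   : iterable of virtual temperatures T_t decreasing to 0
    """
    rng = rng or random.Random(0)
    config = list(initial_config)
    n = len(config)
    sites = []
    for t, temp in enumerate(temperatures):
        if not sites:                      # concatenate random permutations of the sites
            sites = list(range(n))
            rng.shuffle(sites)
        j = sites.pop()
        current_e = energy(tuple(config))
        # (i)-(ii): best candidate state for neuron j and its energy change
        best_z, best_delta = None, None
        for y in state_spaces[j]:
            trial = list(config)
            trial[j] = y
            delta = energy(tuple(trial)) - current_e
            if best_delta is None or delta < best_delta:
                best_z, best_delta = y, delta
        # (iii)-(iv): accept the best candidate with probability exp(-max(delta, 0)/T_t)
        p = math.exp(-max(best_delta, 0.0) / max(temp, 1e-12))
        if rng.random() < p:
            config[j] = best_z
    return config
```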
Appendix A.2. Synchronous BM Dynamics
Fix a synchrony parameter α, usually around 50%. At each time t, all neurons synchronously, but independently, compute their own random binary tags, equal to 1 with probability α and to 0 with probability 1 − α. Let R(t) be the random set of all neurons whose tag equals 1. All the neurons in R(t) then synchronously and independently compute their new random states by applying the updating procedure given above. In addition, for all neurons not in R(t), the new state remains equal to the current state.
Appendix A.3. Comparing Asynchronous and Synchronous BM Dynamics
As t becomes large, and for temperatures slowly decreasing to 0, both BM dynamics generate with high probability configurations which provide deep local minima of the BM energy function. The asynchronous dynamics can be fairly slow. However, the synchronous dynamics are much faster since they emulate efficient forms of parallel simulated annealing (see [,]) and are directly implementable on GPUs.
Appendix B. Computer Hardware
The computations were carried out on a dedicated server at the Department of Mathematics of the University of Houston. The hardware specifications are 64 Intel(R) Xeon(R) Gold 6142 CPU cores at 2.60 GHz with 128 GB of memory.
Appendix C. Parameters for Simulation Software
Our tracking module is a collection of Python functions and has been released to the public at https://github.com/scopagroup/BacTrak (accessed on 15 December 2021). We refer to [,] for a detailed description of the underlying mathematical model and its implementation. The code for generating the synthetic data has been released at https://github.com/jwinkle/eQ (accessed on 15 December 2021). We note that detailed installation instructions for the software can be found on this page. The parameters for this agent-based simulation software are as follows: Cells were modeled as 2D spherocylinders of constant width (1 µm). The computational framework takes into account mechanical constraints that can impact cell growth and influence other aspects of cell behavior. The growth of the cells is exponential and is controlled by the doubling time. The time until cells double is set to 20 min (default setting), which determines the exponential growth rate. The cells have a length of approximately 2 µm after division and 4 µm right before division (the minimum division length is subject to some random perturbation). In our data set of simulated videos, there is no “trap wall” (as opposed to the simulations carried out in [,]). The “trap” encompassing all cells on a given frame has a size of 30 µm × 30 µm and is subdivided into pixels of fixed size. The resulting binary image used in our tracking algorithm has a fixed size in pixels (we add a boundary of 100 pixels on each side). Bacteria move, grow, and divide within the trap. However, at this stage of our study, we consider only video segments where no cell disappears and where no cell enters the trap from outside, so that the trap is a confined environment. Cells move only due to soft-shock interactions with neighboring cells. The time interval between two successive image frames ranges from one minute to six minutes (see Table 1). All other simulation parameters remain unchanged; i.e., we use the default parameters specified in the simulation software.
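For reference, under the standard exponential-growth convention, the growth rate implied by a given doubling time can be computed as follows; this is the generic relation k = ln(2)/T_d, not a quote of the simulator's internal parameterization.

```python
import math

def growth_rate_from_doubling_time(doubling_time_minutes: float) -> float:
    """Exponential growth rate k (per minute) such that exp(k * T_d) = 2."""
    return math.log(2.0) / doubling_time_minutes

# For the default doubling time of 20 min, k = ln(2)/20 ≈ 0.0347 per minute.
print(growth_rate_from_doubling_time(20.0))
```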
Appendix D. Cell Segmentation
In the next couple of sections, we outline the framework we have developed to segment individual cells from real-world laboratory imaging data. In a first step, we consider traditional segmentation algorithms—a watershed algorithm [,,] in combination with a variational contour-based model—to generate a sufficiently large dataset to train a neural network. The actual segmentations on real data can subsequently be carried out efficiently using segmentation predictions generated by the trained neural network. Note that the proposed segmentation algorithm is only included for completeness. We do not view this as a major contribution of the present work.
Appendix D.1. Watershed Algorithm
We consider a watershed algorithm based on immersion that compares high intensity values to local intensity minima for cell segmentation [,,].
We use Matlab’s implementation of the watershed algorithm in the present work. This version of the watershed algorithm is unseeded and yields n regions. To identify these regions, we perform a statistical analysis of each image histogram to compute adaptive rough thresholds for the interior and exterior of cells. This leads to watershed results which identify each cell by a segment slightly larger than the cell itself. The very small percentage of oversegmented cells is automatically detected by cell length and width computations through a PCA analysis of each cell shape viewed as a cloud of planar points. Since our segments are slightly too wide, we reduce each segment to the exact outer cell contour by applying a Mumford–Shah algorithm to each segment computed by the watershed algorithm. In the ideal case, after applying the watershed algorithm, each individual bacterial cell will be located in a single region. However, we observed several segmentation errors after applying the watershed algorithm to the considered data. A common error is that a line segment that defines the boundary of a region crosses through a cell; that is, two regions contain parts of one bacterial cell. In what follows, we devise strategies to correct these errors. For this processing step, we have normalized the intensities of the data to the interval [0, 1].
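For readers who wish to reproduce this step outside of Matlab, the following sketch uses the watershed implementation of scikit-image as a stand-in; the histogram-based thresholds, the gradient elevation map, and the function names are assumptions of this illustration and do not reproduce the exact pipeline described above.

```python
import numpy as np
from skimage.filters import sobel
from skimage.measure import label
from skimage.segmentation import watershed

def watershed_regions(image, low_q=0.25, high_q=0.75):
    """Immersion-style watershed on an intensity image normalized to [0, 1].

    Markers are derived from rough, histogram-based thresholds: dark pixels
    are treated as cell interiors, bright pixels as background.
    """
    lo, hi = np.quantile(image, [low_q, high_q])
    markers = np.zeros(image.shape, dtype=int)
    markers[image < lo] = 1          # likely cell interior
    markers[image > hi] = 2          # likely background
    elevation = sobel(image)         # gradient magnitude as elevation map
    segmented = watershed(elevation, markers)
    return label(segmented == 1)     # connected components of the "cell" class
```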
Appendix D.2. Segmentation Errors: Correction Steps
We define a boundary segment as a non-empty intersection of the boundaries of two regions produced by the watershed algorithm, and we record the area of each region. We know that the interior of a bacterial cell has a lower intensity than the exterior region of a cell. More precisely, the interior of a cell tends to have intensity values close to zero, whereas the exterior of a cell (i.e., the background) tends to have an intensity that is close or equal to one. For this reason, we consider the intensity along each boundary segment. To remove outliers, we use the average intensity value of the pixels located along a boundary segment, and, likewise, the average intensity of each region. One difficulty is that we cannot assume that the intensity of the pixels in the interior of each cell corresponds to the same value (i.e., there exist intensity and contrast drifts depending on location). We hypothesize that, if the mean intensity of a boundary segment is close to the average intensity of the regions on both sides of this boundary segment, the boundary segment does not separate two bacterial cells; it is erroneous. Conversely, if the difference between the mean intensity along a boundary segment and the mean intensity of the interior regions it separates is high, we consider the boundary segment to represent a good segmentation (i.e., a segment that does separate two cells). To quantify this notion, we define the height of a boundary segment as the difference between its mean intensity and the average of the mean intensities of the two regions it separates.
In Table A1, we report some statistics associated with the quantities of interest introduced above. There are several key observations we can draw from this table which confirm our qualitative (i.e., visual) assessment of the segmentation results. Most notably, we observe that there seem to exist outliers in terms of cell size. Moreover, we observe that, in some cases, the height of a boundary segment is negative and thus nonsensical. These observations allow us to develop heuristic rules to remove erroneous segmentations.
Table A1.
Statistics of some quantities of interest related to the intensity of boundary segments and regions. These quantities allow us to define heuristics to identify erroneous segmentations computed by the watershed algorithm. For each characteristic, we report the 5% quantile, minimum, maximum, and mean ± standard deviation.
| Characteristic | 5% Quantile | Min | Max | Mean ± Std |
|---|---|---|---|---|
| Watershed area | 56.00 | 43.00 | 984.00 | 211.00 ± 138.00 |
| Mean intensity of area | 0.34 | 0.00 | 0.57 | 0.41 ± 0.06 |
| Mean intensity of boundary segment | 0.46 | 0.30 | 0.99 | 0.74 ± 0.14 |
| Height of boundary segment | 0.05 | −0.09 | 0.62 | 0.33 ± 0.14 |
We introduce the following post-processing steps (see the sketch after this paragraph): (i) We connect small regions to their neighbors (i.e., regions that are too small in area to realistically contain any cells). We select the threshold for the area to be 65. This threshold is selected in accordance with the scale of the image and the expected size of bacteria cells observed in the image data. We merge each small region with one of its neighboring regions by removing the segment that separates the two. To select an appropriate region for merging, we choose, among all candidate regions that share a boundary segment with the small region, the one whose shared boundary segment has the lowest height. (ii) We remove all boundary segments with a height that is below the 5% quantile of all heights. (iii) We remove all incomplete regions from our segmentation. We define a region as incomplete if the region or the associated boundary segments touch an edge of the image. This step is necessary since we cannot guarantee that the regions close to the boundary contain an entire cell or only parts of a cell. Consequently, we remove them to prevent any issues with our post-analysis.
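The sketch below schematically implements the three correction rules; the data structures describing regions and boundary segments (region_area, region_mean, boundary_mean, boundary_between, touches_image_edge) are illustrative placeholders, not the structures used in our code.

```python
import numpy as np

AREA_THRESHOLD = 65  # minimum plausible area (in pixels) for a region containing a cell

def boundary_height(seg, boundary_mean, region_mean, boundary_between):
    """Height = mean boundary intensity minus the mean intensity of the adjacent regions."""
    r1, r2 = boundary_between[seg]
    return boundary_mean[seg] - 0.5 * (region_mean[r1] + region_mean[r2])

def correct_segmentation(regions, boundaries, region_area, region_mean,
                         boundary_mean, boundary_between, touches_image_edge):
    heights = {s: boundary_height(s, boundary_mean, region_mean, boundary_between)
               for s in boundaries}
    cutoff = np.quantile(list(heights.values()), 0.05)

    removed = set()
    # (i) merge too-small regions by deleting the shared boundary with the lowest height
    for r in regions:
        if region_area[r] < AREA_THRESHOLD:
            shared = [s for s in boundaries if r in boundary_between[s]]
            if shared:
                removed.add(min(shared, key=lambda s: heights[s]))
    # (ii) remove boundary segments whose height falls below the 5% quantile of all heights
    removed |= {s for s in boundaries if heights[s] < cutoff}
    # (iii) discard incomplete regions that touch an edge of the image
    kept_regions = [r for r in regions if not touches_image_edge[r]]
    kept_boundaries = [s for s in boundaries if s not in removed]
    return kept_regions, kept_boundaries
```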
Appendix D.3. Cell Boundary Detection
The next step is to identify the boundaries of the individual cells contained within the subregions defined by the watershed algorithm. To identify the boundaries of the cells (and thereby segment the individual cells), we use the Mumford–Shah algorithm []. Notice that we can execute the Mumford–Shah algorithm for each region separately, making this an embarrassingly parallelizable problem. We divide each region into three different zones: the interior of the cell, denoted Ω_in; the exterior of the cell (i.e., the background) contained in the region, denoted Ω_out; and the boundary of the cell, denoted Γ. The Mumford–Shah algorithm represents a variational approach that allows us to segment cartoon-like images. Mathematically speaking, we model the information contained in each region by piecewise-smooth functions. In our model, the associated regions we seek to identify are given by the zones defined above—the interior and the exterior of the cell. Let c_in denote the mean intensity of the interior of the cell and c_out the mean intensity of the exterior of the cell.
With these definitions, we obtain a cost functional whose first two terms measure the discrepancy between the piecewise-constant approximation (given by c_in on Ω_in and c_out on Ω_out) and the image intensities u, and whose third term is a penalty that measures the length of the boundary of a particular cell, weighted by a parameter ν. Notice that our formulation slightly deviates from the traditional definition of the Mumford–Shah cost functional; we drop the penalty for the smoothness of the function u. The minimizer of this cost functional provides the sought-after segmentation: the boundary, interior, and exterior of a cell. We have implemented the minimization of this cost functional for each cell separately.
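For concreteness, a piecewise-constant form of this functional that is consistent with the description above reads as follows; this Chan–Vese-type simplification is a sketch using the conventions of this appendix, and the exact weighting used in our implementation may differ.

```latex
E(c_{\mathrm{in}}, c_{\mathrm{out}}, \Gamma)
  = \int_{\Omega_{\mathrm{in}}} \bigl(u(x) - c_{\mathrm{in}}\bigr)^2 \, \mathrm{d}x
  + \int_{\Omega_{\mathrm{out}}} \bigl(u(x) - c_{\mathrm{out}}\bigr)^2 \, \mathrm{d}x
  + \nu \,\operatorname{length}(\Gamma)
```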
Appendix D.4. Convolutional Neural Networks (CNNs)
Next, we introduce our actual method for cell segmentation, which can be efficiently applied to a large dataset (as opposed to the prototype method described above, which we use to generate the underlying training data). The biggest issue with the methodology outlined above is that our prototype implementation is computationally costly. While we envision that an improved implementation as well as the use of parallel computing could significantly reduce the time to solution, we decided not to further pursue a reduction in runtime but to extend our methodology by taking advantage of existing machine learning algorithms. Replacing the approach outlined above by a CNN allowed us to reduce the runtime by a factor of 60, to less than 3 min, without any significant loss in accuracy.
Training and Testing Data. In the absence of any ground truth data set for the classification of rod-shaped bacterial cells from movies of cell populations, we consider the output of the Mumford–Shah algorithm introduced above as the ground truth classification for training and testing our machine learning methodology. Above, we introduced three different zones: the interior Ω_in, the exterior Ω_out, and the boundary Γ of a cell. We reduce these three regions to two zones—the interior and the exterior of a cell. We assign the label 0 to pixels that belong to Ω_in, and the label 1 to pixels that belong to Ω_out or Γ. For each image, we thus obtain 40,000 binary labels. We limit the training of the CNN to a subregion in the center of each preprocessed image to avoid issues associated with mislabeled training data for cells located at the boundary of our data. We consider X as the set of features and Y as the set of labels. We want to assign to each pixel a label of either 0 or 1. For a pixel p, we define x_p to be a square window with center p extracted from the original image. The corresponding label is denoted by y_p, which corresponds to the class of the pixel p in the binarized image.
CNN Algorithm. The considered CNN algorithm consists of two parts: (i) a convolutional auto-encoder and (ii) a fully connected multilayer perceptron (MLP). The input for the auto-encoder is a square window of pixels. In the first layer of the encoder, we have a convolution layer Conv1. We feed Conv1 into a max-pooling layer MPool2 with stride one. The output of MPool2 is the input of a convolution layer Conv3. For decoding, we have almost the same structure in reverse order: we feed Conv3 into a deconvolution layer and subsequently feed the output of this layer into a second deconvolution layer. The decoder's output is a window of pixels of the same size as the input. We compare this output with the input window (since it is an auto-encoder, features and labels are the same) by using the mean square error as a cost function. We train the auto-encoder on the full training set using mini-batch gradient descent. When the training is finished, we freeze the weights of Conv1 and Conv3.
After training the auto-encoder and freezing the weights, we feed X as the input to Conv1 and obtain the corresponding output of Conv3, which we denote by X_enc. In the next step, we train an MLP with features X_enc and labels Y. We flatten each element of X_enc, which is a matrix, into a vector; the resulting layer is called FCL4. FCL4 is fully connected to the hidden layer HID5 with 10 nodes. We use ReLU as the nonlinear activation for HID5. We connect HID5 to the output layer OUT6, which possesses two nodes for the two classes 0 and 1. We use a softmax function to obtain two probabilistic outputs, one for each class. We use the cross-entropy as a cost function. We train the MLP on the training set using mini-batch gradient descent.
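To make the architecture concrete, the following PyTorch sketch mirrors the two-part design described above (auto-encoder, then MLP head on the frozen encoder); the window size, kernel sizes, and channel counts are assumptions made for illustration and are not taken from our implementation.

```python
import torch
import torch.nn as nn

WIN = 17  # assumed side length of the pixel-centered window

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 8, kernel_size=3, padding=1)    # Conv1
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=1)        # MPool2 (stride 1)
        self.conv3 = nn.Conv2d(8, 4, kernel_size=3, padding=1)    # Conv3
        self.deconv4 = nn.ConvTranspose2d(4, 8, kernel_size=2)    # undoes the pooling
        self.deconv5 = nn.ConvTranspose2d(8, 1, kernel_size=3, padding=1)

    def encode(self, x):
        return self.conv3(self.pool2(torch.relu(self.conv1(x))))

    def forward(self, x):
        z = self.encode(x)
        return self.deconv5(torch.relu(self.deconv4(z)))

class PixelClassifier(nn.Module):
    """MLP head on top of the frozen encoder output (FCL4 -> HID5 -> OUT6)."""
    def __init__(self, encoder, feat_dim):
        super().__init__()
        self.encoder = encoder
        self.hidden = nn.Linear(feat_dim, 10)   # HID5: 10 nodes, ReLU
        self.out = nn.Linear(10, 2)             # OUT6: two classes

    def forward(self, x):
        with torch.no_grad():                   # encoder weights are frozen
            feats = self.encoder.encode(x).flatten(1)
        return self.out(torch.relu(self.hidden(feats)))

# Typical usage: train AutoEncoder with nn.MSELoss on the windows, freeze it,
# then train PixelClassifier with nn.CrossEntropyLoss on (window, label) pairs.
ae = AutoEncoder()
with torch.no_grad():
    feat_dim = ae.encode(torch.zeros(1, 1, WIN, WIN)).flatten(1).shape[1]
clf = PixelClassifier(ae, feat_dim)
```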
We have trained the model with two images; the resulting training set consists of 80,000 windows. We train the model for 100 epochs. The overall accuracy of the model is 93%. The confusion matrix is shown in Table A2. Based on this confusion matrix, we observe that the proposed methodology predicts the pixels located in the interior of a cell quite well. However, we also observe a slightly lower accuracy for the pixels outside the cells. This can probably be explained by the fact that the data sets are tightly packed with cells, so that we have more observations of foreground pixels (interior of cells) available than of pixels that belong to the background.
Table A2.
Confusion matrix for the CNN.
| True class | Predicted 0 | Predicted 1 |
|---|---|---|
| 0 | 0.97 | 0.03 |
| 1 | 0.11 | 0.89 |
References
- Butts-Wilmsmeyer, C.J.; Rapp, S.; Guthrie, B. The technological advancements that enabled the age of big data in the environmental sciences: A history and future directions. Curr. Opin. Environ. Sci. Health 2020, 18, 63–69. [Google Scholar] [CrossRef]
- Sivarajah, U.; Kamal, M.M.; Irani, Z.; Weerakkody, V. Critical analysis of Big Data challenges and analytical methods. J. Bus. Res. 2017, 70, 263–286. [Google Scholar] [CrossRef] [Green Version]
- Balomenos, A.D.; Tsakanikas, P.; Aspridou, Z.; Tampakaki, A.P.; Koutsoumanis, K.P.; Manolakos, E.S. Image analysis driven single-cell analytics for systems microbiology. BMC Syst. Biol. 2017, 11, 1–21. [Google Scholar] [CrossRef] [Green Version]
- Klein, J.; Leupold, S.; Biegler, I.; Biedendieck, R.; Münch, R.; Jahn, D. TLM-Tracker: Software for cell segmentation, tracking and lineage analysis in time-lapse microscopy movies. Bioinformatics 2012, 28, 2276–2277. [Google Scholar] [CrossRef] [Green Version]
- Stylianidou, S.; Brennan, C.; Nissen, S.B.; Kuwada, N.J.; Wiggins, P.A. SuperSegger: Robust image segmentation, analysis and lineage tracking of bacterial cells. Mol. Microbiol. 2016, 102, 690–700. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Bennett, M.R.; Hasty, J. Microfluidic devices for measuring gene network dynamics in single cells. Nat. Rev. Genet. 2009, 10, 628–638. [Google Scholar] [CrossRef]
- Danino, T.; Mondragón-Palomino, O.; Tsimring, L.; Hasty, J. A synchronized quorum of genetic clocks. Nature 2010, 463, 326–330. Available online: http://xxx.lanl.gov/abs/15334406 (accessed on 15 December 2021). [CrossRef] [Green Version]
- Mather, W.; Mondragon-Palomino, O.; Danino, T.; Hasty, J.; Tsimring, L.S. Streaming instability in growing cell populations. Phys. Rev. Lett. 2010, 104, 208101. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- El Najjar, N.; Van Teeseling, M.C.; Mayer, B.; Hermann, S.; Thanbichler, M.; Graumann, P.L. Bacterial cell growth is arrested by violet and blue, but not yellow light excitation during fluorescence microscopy. BMC Mol. Cell Biol. 2020, 21, 35. [Google Scholar] [CrossRef]
- Icha, J.; Weber, M.; Waters, J.C.; Norden, C. Phototoxicity in live fluorescence microscopy, and how to avoid it. BioEssays 2017, 39, 1700003. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Kim, J.K.; Chen, Y.; Hirning, A.J.; Alnahhas, R.N.; Josić, K.; Bennett, M.R. Long-range spatio-temporal coordination of gene expression in synthetic microbial consortia. Nat. Chem. Biol. 2019, 15, 1102–1109. [Google Scholar] [CrossRef]
- Winkle, J.; Igoshin, O.A.; Bennett, M.R.; Josic, K.; Ott, W. Modeling mechanical interactions in growing populations of rod-shaped bacteria. Phys. Biol. 2017, 14, 055001. [Google Scholar] [CrossRef] [PubMed]
- Carpenter, A.E.; Jones, T.R.; Lamprecht, M.R.; Clarke, C.; Kang, I.H.; Friman, O.; Guertin, D.A.; Chang, J.H.; Lindquist, R.A.; Moffat, J.; et al. CellProfiler: Image analysis software for identifying and quantifying cell phenotypes. Genome Biol. 2006, 7, R100. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Kamentsky, L.; Jones, T.R.; Fraser, A.; Bray, M.; Logan, D.; Madden, K.; Ljosa, V.; Rueden, C.; Harris, G.B.; Eliceiri, K.; et al. Improved structure, function, and compatibility for CellProfiler: Modular high-throughput image analysis software. Bioinformatics 2011, 27, 1179–1180. [Google Scholar] [CrossRef] [Green Version]
- McQuin, C.; Goodman, A.; Chernyshev, V.; Kamentsky, L.; Cimini, B.A.; Karhohs, K.W.; Doan, M.; Ding, L.; Rafelski, S.M.; Thirstrup, D.; et al. CellProfiler 3.0: Next-generation image processing for biology. PLoS Biol. 2018, 16, e2005970. [Google Scholar] [CrossRef] [Green Version]
- Alnahhas, R.N.; Sadeghpour, M.; Chen, Y.; Frey, A.A.; Ott, W.; Josić, K.; Bennett, M.R. Majority sensing in synthetic microbial consortia. Nat. Commun. 2020, 11, 1–10. [Google Scholar] [CrossRef] [PubMed]
- Locke, J.C.W.; Elowitz, M.B. Using movies to analyse gene circuit dynamics in single cells. Nat. Rev. Microbiol. 2009, 7, 383–392. [Google Scholar] [CrossRef] [Green Version]
- Alnahhas, R.N.; Winkle, J.J.; Hirning, A.J.; Karamched, B.; Ott, W.; Josić, K.; Bennett, M.R. Spatiotemporal Dynamics of Synthetic Microbial Consortia in Microfluidic Devices. ACS Synth. Biol. 2019, 8, 2051–2058. [Google Scholar] [CrossRef] [PubMed]
- Hand, A.J.; Sun, T.; Barber, D.C.; Hose, D.R.; MacNeil, S. Automated tracking of migrating cells in phase-contrast video microscopy sequences using image registration. J. Microsc. 2009, 234, 62–79. [Google Scholar] [CrossRef] [PubMed]
- Ulman, V.; Maška, M.; Magnusson, K.E.G.; Ronneberger, O.; Haubold, C.; Harder, N.; Matula, P.; Matula, P.; Svoboda, D.; Radojevic, M.; et al. An objective comparison of cell-tracking algorithms. Nat. Methods 2017, 14, 1141–1152. [Google Scholar] [CrossRef] [PubMed]
- Marvasti-Zadeh, S.M.; Cheng, L.; Ghanei-Yakhdan, H.; Kasaei, S. Deep learning for visual tracking: A comprehensive survey. IEEE Trans. Intell. Transp. Syst. 2021, 1–26. [Google Scholar] [CrossRef]
- Yilmaz, A.; Javed, O.; Shah, M. Object tracking: A survey. ACM Comput. Surv. (CSUR) 2006, 38, 13-es. [Google Scholar] [CrossRef]
- Lucas, B.D.; Kanade, T. An iterative image registration technique with an application to stereo vision. In Proceedings of the International Conference on Artificial Intelligence, Vancouver, BC, Canada, 24–28 August 1981; pp. 674–679. [Google Scholar]
- Mang, A.; Biros, G. An inexact Newton–Krylov algorithm for constrained diffeomorphic image registration. SIAM J. Imaging Sci. 2015, 8, 1030–1069. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Mang, A.; Ruthotto, L. A Lagrangian Gauss–Newton–Krylov solver for mass- and intensity-preserving diffeomorphic image registration. SIAM J. Sci. Comput. 2017, 39, B860–B885. [Google Scholar] [CrossRef] [PubMed]
- Mang, A.; Gholami, A.; Davatzikos, C.; Biros, G. CLAIRE: A distributed-memory solver for constrained large deformation diffeomorphic image registration. SIAM J. Sci. Comput. 2019, 41, C548–C584. [Google Scholar] [PubMed]
- Borzi, A.; Ito, K.; Kunisch, K. An optimal control approach to optical flow computation. Int. J. Numer. Methods Fluids 2002, 40, 231–240. [Google Scholar] [CrossRef]
- Horn, B.K.P.; Shunck, B.G. Determining optical flow. Artif. Intell. 1981, 17, 185–203. [Google Scholar] [CrossRef] [Green Version]
- Delpiano, J.; Jara, J.; Scheer, J.; Ramírez, O.A.; Ruiz-del Solar, J.; Härtel, S. Performance of optical flow techniques for motion analysis of fluorescent point signals in confocal microscopy. Mach. Vis. Appl. 2012, 23, 675–689. [Google Scholar]
- Madrigal, F.; Hayet, J.B.; Rivera, M. Motion priors for multiple target visual tracking. Mach. Vis. Appl. 2015, 26, 141–160. [Google Scholar] [CrossRef]
- Banerjee, D.S.; Stephenson, G.; Das, S.G. Segmentation and analysis of mother machine data: SAM. bioRxiv 2020. [Google Scholar] [CrossRef]
- Jug, F.; Pietzsch, T.; Kainmüller, D.; Funke, J.; Kaiser, M.; van Nimwegen, E.; Rother, C.; Myers, G. Optimal Joint Segmentation and Tracking of Escherichia Coli in the Mother Machine. In Bayesian and Graphical Models for Biomedical Imaging; Springer: Cham, Switzerland, 2014; Volume LNCS 8677, pp. 25–36. [Google Scholar]
- Lugagne, J.B.; Lin, H.; Dunlop, M.J. DeLTA: Automated cell segmentation, tracking, and lineage reconstruction using deep learning. PLoS Comput. Biol. 2020, 16, e1007673. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Ollion, J.; Elez, M.; Robert, L. High-throughput detection and tracking of cells and intracellular spots in mother machine experiments. Nat. Protoc. 2019, 14, 3144–3161. [Google Scholar] [CrossRef]
- Sauls, J.T.; Schroeder, J.W.; Brown, S.D.; Le Treut, G.; Si, F.; Li, D.; Wang, J.D.; Jun, S. Mother machine image analysis with MM3. bioRxiv 2019, 810036. [Google Scholar] [CrossRef]
- Smith, A.; Metz, J.; Pagliara, S. MMHelper: An automated framework for the analysis of microscopy images acquired with the mother machine. Sci. Rep. 2019, 9, 10123. [Google Scholar]
- Arbelle, A.; Reyes, J.; Chen, J.Y.; Lahav, G.; Raviv, T.R. A probabilistic approach to joint cell tracking and segmentation in high-throughput microscopy videos. Med. Image Anal. 2018, 47, 140–152. [Google Scholar]
- Okuma, K.; Taleghani, A.; De Freitas, N.; Little, J.J.; Lowe, D.G. A boosted particle filter: Multitarget detection and tracking. In Proceedings of the European Conference on Computer Vision, Prague, Czech Republic, 11–14 May 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 28–39. [Google Scholar]
- Smal, I.; Niessen, W.; Meijering, E. Bayesian tracking for fluorescence microscopic imaging. In Proceedings of the 3rd IEEE International Symposium on Biomedical Imaging: Nano to Macro, Arlington, WA, USA, 6–9 April 2006; pp. 550–553. [Google Scholar]
- Kervrann, C.; Trubuil, A. Optimal level curves and global minimizers of cost functionals in image segmentation. J. Math. Imaging Vis. 2002, 17, 153–174. [Google Scholar] [CrossRef]
- Li, K.; Miller, E.D.; Chen, M.; Kanade, T.; Weiss, L.E.; Campbell, P.G. Cell population tracking and lineage construction with spatiotemporal context. Med. Image Anal. 2008, 12, 546–566. [Google Scholar] [CrossRef] [PubMed]
- Wang, X.; He, W.; Metaxas, D.; Mathew, R.; White, E. Cell segmentation and tracking using texture-adaptive snakes. In Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Arlington, VA, USA, 12–15 April 2007; pp. 101–104. [Google Scholar]
- Yang, F.; Mackey, M.A.; Ianzini, F.; Gallardo, G.; Sonka, M. Cell segmentation, tracking, and mitosis detection using temporal context. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Palm Springs, CA, USA, 26–29 October 2005; pp. 302–309. [Google Scholar]
- Sethuraman, V.; French, A.; Wells, D.; Kenobi, K.; Pridmore, T. Tissue-level segmentation and tracking of cells in growing plant roots. Mach. Vis. Appl. 2012, 23, 639–658. [Google Scholar] [CrossRef]
- Balomenos, A.D.; Tsakanikas, P.; Manolakos, E.S. Tracking single-cells in overcrowded bacterial colonies. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milano, Italy, 25–29 August 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 6473–6476. [Google Scholar]
- Bise, R.; Yin, Z.; Kanade, T. Reliable cell tracking by global data association. In Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Chicago, IL, USA, 30 March–2 April 2011; pp. 1004–1010. [Google Scholar]
- Bise, R.; Li, K.; Eom, S.; Kanade, T. Reliably tracking partially overlapping neural stem cells in DIC microscopy image sequences. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention Workshop, London, UK, 20–24 September 2009; pp. 67–77. [Google Scholar]
- Kanade, T.; Yin, Z.; Bise, R.; Huh, S.; Eom, S.; Sandbothe, M.F.; Chen, M. Cell image analysis: Algorithms, system and applications. In Proceedings of the 2011 IEEE Workshop on Applications of Computer Vision (WACV), Kona, HI, USA, 5–7 January 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 374–381. [Google Scholar]
- Primet, M.; Demarez, A.; Taddei, F.; Lindner, A.; Moisan, L. Tracking of cells in a sequence of images using a low-dimensional image representation. In Proceedings of the IEEE International Symposium on Biomedical Imaging, Paris, France, 14–17 May 2008; pp. 995–998. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention, Munich, Germany, 5–9 October 2015; Volume LNCS 9351, pp. 234–241. [Google Scholar]
- Su, H.; Yin, Z.; Huh, S.; Kanade, T. Cell segmentation in phase contrast microscopy images via semi-supervised classification over optics-related features. Med. Image Anal. 2013, 17, 746–765. [Google Scholar] [CrossRef]
- Wang, Q.; Niemi, J.; Tan, C.M.; You, L.; West, M. Image segmentation and dynamic lineage analysis in single-cell fluorescence microscopy. Cytom. Part A J. Int. Soc. Adv. Cytom. 2010, 77, 101–110. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Jiuqing, W.; Xu, C.; Xianhang, Z. Cell tracking via structured prediction and learning. Mach. Vis. Appl. 2017, 28, 859–874. [Google Scholar] [CrossRef]
- Zhou, Z.; Wang, F.; Xi, W.; Chen, H.; Gao, P.; He, C. Joint multi-frame detection and segmentation for multi-cell tracking. In Proceedings of the International Conference on Image and Graphics, Beijing, China, 23–25 August 2019; Volume LNCS 11902, pp. 435–446. [Google Scholar]
- Sixta, T.; Cao, J.; Seebach, J.; Schnittler, H.; Flach, B. Coupling cell detection and tracking by temporal feedback. Mach. Vis. Appl. 2020, 31, 1–18. [Google Scholar]
- Hayashida, J.; Nishimura, K.; Bise, R. MPM: Joint representation of motion and position map for cell tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 3823–3832. [Google Scholar]
- Payer, C.; Stern, D.; Neff, T.; Bishof, H.; Urschler, M. Instance segmentation and tracking with cosine embeddings and recurrent hourglass networks. In Proceedings of the Medical Image Computing and Computer Assisted Intervention, Granada, Spain, 16–20 September 2018; Volume LNCS 11071, pp. 3–11. [Google Scholar]
- Payer, C.; Štern, D.; Feiner, M.; Bischof, H.; Urschler, M. Segmenting and tracking cell instances with cosine embeddings and recurrent hourglass networks. Med. Image Anal. 2019, 57, 106–119. [Google Scholar] [PubMed]
- Vicar, T.; Balvan, J.; Jaros, J.; Jug, F.; Kolar, R.; Masarik, M.; Gumulec, J. Cell segmentation methods for label-free contrast microscopy: Review and comprehensive comparison. BMC Bioinform. 2019, 20, 1–25. [Google Scholar]
- Al-Kofahi, Y.; Zaltsman, A.; Graves, R.; Marshall, W.; Rusu, M. A deep learning-based algorithm for 2D cell segmentation in microscopy images. BMC Bioinform. 2018, 19, 1–11. [Google Scholar]
- Falk, T.; Mai, D.; Bensch, R.; Çiçek, Ö.; Abdulkadir, A.; Marrakchi, Y.; Böhm, A.; Deubner, J.; Jäckel, Z.; Seiwald, K.; et al. U-Net: Deep learning for cell counting, detection, and morphometry. Nat. Methods 2019, 16, 67–70. [Google Scholar] [CrossRef]
- Lux, F.; Matula, P. DIC image segmentation of dense cell populations by combining deep learning and watershed. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 236–239. [Google Scholar]
- Moen, E.; Bannon, D.; Kudo, T.; Graf, W.; Covert, M.; Van Valen, D. Deep learning for cellular image analysis. Nat. Methods 2019, 16, 1233–1246. [Google Scholar] [PubMed]
- Rempfler, M.; Stierle, V.; Ditzel, K.; Kumar, S.; Paulitschke, P.; Andres, B.; Menze, B.H. Tracing cell lineages in videos of lens-free microscopy. Med. Image Anal. 2018, 48, 147–161. [Google Scholar]
- Stringer, C.; Wang, T.; Michaelos, M.; Pachitariu, M. Cellpose: A generalist algorithm for cellular segmentation. Nat. Methods 2021, 18, 100–106. [Google Scholar] [CrossRef]
- Akram, S.U.; Kannala, J.; Eklund, L.; Heikkilä, J. Joint cell segmentation and tracking using cell proposals. In Proceedings of the IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 920–924. [Google Scholar]
- Nishimura, K.; Hayashida, J.; Wang, C.; Bise, R. Weakly-Supervised Cell Tracking via Backward-and-Forward Propagation. In Proceedings of the European Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 104–121. [Google Scholar]
- Rempfler, M.; Kumar, S.; Stierle, V.; Paulitschke, P.; Andres, B.; Menze, B.H. Cell lineage tracing in lens-free microscopy videos. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Quebec City, QC, Canada, 11–13 September 2017; pp. 3–11. [Google Scholar]
- Maska, M.; Ulman, V.; Svoboda, D.; Matula, P.; Matula, P.; Ederra, C.; Urbiola, A.; Espana, T.; Venkatesan, S.; Balak, D.M.W.; et al. A benchmark for comparison of cell tracking algorithms. Bioinformatics 2014, 30, 1609–1617. [Google Scholar]
- Löffler, K.; Scherr, T.; Mikut, R. A graph-based cell tracking algorithm with few manually tunable parameters and automated segmentation error correction. PLoS ONE 2021, 16, e0249257.
- Vo, B.T.; Vo, B.N.; Cantoni, A. The cardinality balanced multi-target multi-Bernoulli filter and its implementations. IEEE Trans. Signal Process. 2009, 57, 409–423.
- Pierskalla, W.P. The multidimensional assignment problem. Oper. Res. 1968, 16, 422–431.
- Gilbert, K.C.; Hofstra, R.B. Multidimensional assignment problems. Decis. Sci. 1988, 19, 306–321.
- Chakraborty, A.; Roy-Chowdhury, A.K. Context aware spatio-temporal cell tracking in densely packed multilayer tissues. Med. Image Anal. 2015, 19, 149–163.
- Liu, M.; Yadav, R.K.; Roy-Chowdhury, A.; Reddy, G.V. Automated tracking of stem cell lineages of Arabidopsis shoot apex using local graph matching. Plant J. 2010, 62, 135–147.
- Liu, M.; Chakraborty, A.; Singh, D.; Yadav, R.K.; Meenakshisundaram, G.; Reddy, G.V.; Roy-Chowdhury, A. Adaptive cell segmentation and tracking for volumetric confocal microscopy images of a developing plant meristem. Mol. Plant 2011, 4, 922–931.
- Liu, M.; Li, J.; Qian, W. A multi-seed dynamic local graph matching model for tracking of densely packed cells across unregistered microscopy image sequences. Mach. Vis. Appl. 2018, 29, 1237–1247.
- Vo, B.N.; Vo, B.T. A multi-scan labeled random finite set model for multi-object state estimation. IEEE Trans. Signal Process. 2019, 67, 4948–4963.
- Punchihewa, Y.G.; Vo, B.T.; Vo, B.N.; Kim, D.Y. Multiple object tracking in unknown backgrounds with labeled random finite sets. IEEE Trans. Signal Process. 2018, 66, 3040–3055.
- Kim, D.Y.; Vo, B.N.; Thian, A.; Choi, Y.S. A generalized labeled multi-Bernoulli tracker for time lapse cell migration. In Proceedings of the 2017 International Conference on Control, Automation and Information Sciences, Jeju, Korea, 18–21 October 2017; pp. 20–25.
- Winkle, J.J.; Karamched, B.R.; Bennett, M.R.; Ott, W.; Josić, K. Emergent spatiotemporal population dynamics with cell-length control of synthetic microbial consortia. PLoS Comput. Biol. 2021, 17, e1009381.
- Bise, R.; Sato, Y. Cell detection from redundant candidate regions under non-overlapping constraints. IEEE Trans. Med. Imaging 2015, 34, 1417–1427.
- Matula, P.; Maška, M.; Sorokin, D.V.; Matula, P.; Ortiz-de Solórzano, C.; Kozubek, M. Cell tracking accuracy measurement based on comparison of acyclic oriented graphs. PLoS ONE 2015, 10, e0144959.
- Agrawal, A.; Verschueren, R.; Diamond, S.; Boyd, S. A rewriting system for convex optimization problems. J. Control Decis. 2018, 5, 42–60.
- Diamond, S.; Boyd, S. CVXPY: A Python-embedded modeling language for convex optimization. J. Mach. Learn. Res. 2016, 17, 1–5.
- Shen, X.; Diamond, S.; Gu, Y.; Boyd, S. Disciplined convex-concave programming. In Proceedings of the 2016 IEEE 55th Conference on Decision and Control (CDC), Las Vegas, NV, USA, 12–14 December 2016; pp. 1009–1014.
- Stricker, J.; Cookson, S.; Bennett, M.R.; Mather, W.H.; Tsimring, L.S.; Hasty, J. A fast, robust and tunable synthetic gene oscillator. Nature 2008, 456, 516–519.
- Chen, Y.; Kim, J.K.; Hirning, A.J.; Josić, K.; Bennett, M.R. Emergent genetic oscillations in a synthetic microbial consortium. Science 2015, 349, 986–989.
- Sloan, S.W. A fast algorithm for constructing Delaunay triangulations in the plane. Adv. Eng. Softw. 1987, 9, 34–55.
- Azencott, R. Simulated Annealing: Parallelization Techniques; Wiley-Interscience: Hoboken, NJ, USA, 1992; Volume 27.
- Azencott, R.; Chalmond, B.; Coldefy, F. Markov Image Fusion to Detect Intensity Valleys. Int. J. Comput. Vis. 1994, 16, 135–145.
- Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004.
- Ackley, D.H.; Hinton, G.E.; Sejnowski, T.J. A learning algorithm for Boltzmann machines. Cogn. Sci. 1985, 9, 147–169.
- Hinton, G.E.; Sejnowski, T.J. Learning and Relearning in Boltzmann Machines. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition; MIT Press: Cambridge, MA, USA, 1986; pp. 282–317.
- Azencott, R. Synchronous Boltzmann machines and Gibbs fields: Learning algorithms. In Neurocomputing; Springer: Berlin/Heidelberg, Germany, 1990; pp. 51–63.
- Azencott, R. Synchronous Boltzmann machines and artificial vision. Neural Netw. 1990, 135–143. Available online: https://www.math.uh.edu/~razencot/MyWeb/Research/Selected_Reprints/1990SynchronousBoltzmanMachinesArtificialVision.pdf (accessed on 15 December 2021).
- Azencott, R.; Graffigne, C.; Labourdette, C. Edge Detection and Segmentation of Textured Plane Images. In Stochastic Models, Statistical Methods, and Algorithms in Image Analysis; Springer-Verlag: New York, NY, USA, 1992; Volume 74, pp. 75–88.
- Kong, A.; Azencott, R. Binary Markov Random Fields and Interpretable Mass Spectra Discrimination. Stat. Appl. Genet. Mol. Biol. 2017, 16, 13–30.
- Azencott, R.; Doutriaux, A.; Younes, L. Synchronous Boltzmann Machines and Curve Identification Tasks. Netw. Comput. Neural Syst. 1993, 4, 461–480.
- Garda, P.; Belhaire, E. An Analog Circuit with Digital I/O for Synchronous Boltzmann Machines. In VLSI for Artificial Intelligence and Neural Networks; Springer: Berlin, Germany, 1991; pp. 245–254.
- Lafargue, V.; Belhaire, E.; Pujol, H.; Berechet, I.; Garda, P. Programmable Mixed Implementation of the Boltzmann Machine. In Proceedings of the International Conference on Artificial Neural Networks; Springer: Berlin, Germany, 1994; pp. 409–412.
- Pujol, H.; Klein, J.-O.; Belhaire, E.; Garda, P. RA: An analog neurocomputer for the synchronous Boltzmann machine. In Proceedings of the Fourth International Conference on Microelectronics for Neural Networks and Fuzzy Systems, Turin, Italy, 26–28 September 1994; IEEE: Piscataway, NJ, USA, 1994; pp. 449–455.
- Beucher, S.; Lantuejoul, C. Use of watersheds in contour detection. In Workshop on Image Processing; CCETT/IRISA: Rennes, France, 1979.
- Mumford, D.B.; Shah, J. Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 1989, 42, 577–685.
- Mézard, M.; Parisi, G.; Virasoro, M.A. Spin Glass Theory and Beyond: An Introduction to the Replica Method and Its Applications; World Scientific Publishing Company: Singapore, 1987; Volume 9.
- Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680.
- Roussel-Ragot, P.; Dreyfus, G. A problem independent parallel implementation of simulated annealing: Models and experiments. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 1990, 9, 827–835.
- Burda, Z.; Krzywicki, A.; Martin, O.C.; Tabor, Z. From simple to complex networks: Inherent structures, barriers, and valleys in the context of spin glasses. Phys. Rev. E 2006, 73, 036110.
- Huber, P.J. The 1972 Wald Lecture Robust Statistics: A Review. Ann. Math. Stat. 1972, 43, 1041–1067.
- Ram, D.J.; Sreenivas, T.; Subramaniam, K.G. Parallel simulated annealing algorithms. J. Parallel Distrib. Comput. 1996, 37, 207–212.
- Digabel, H.; Lantuejoul, C. Iterative Algorithms. In Proceedings of the 2nd European Symposium Quantitative Analysis of Microstructures in Material Science, Biology and Medicine, Caen, France, 4–7 October 1977; pp. 85–89.
- Vincent, L.; Soille, P. Watersheds in digital spaces: An efficient algorithm based on immersion simulations. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 583–598.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).