# Self-Learning Microfluidic Platform for Single-Cell Imaging and Classification in Flow


## Abstract


## 1. Introduction

## 2. Materials and Methods

#### 2.1. Device Design and Fabrication

#### 2.2. Flow Focusing Principle

#### 2.3. Device Simulations

The device was modeled in COMSOL Multiphysics®, leaving the channel widths, the sample channel height and the sheath fluid channel height as variables. Two-fold symmetry was exploited by simulating only one half of the device, split along the x-axis, applying symmetric boundary conditions where appropriate. Steady-state fluid flow through the device was simulated using the computational fluid dynamics (CFD) module, coupled to the transport of diluted species module for all simulations involving fluorescein. We used the particle tracing module for all simulations involving microparticles. Simulations were conducted under the assumption of laminar flow, with no-slip boundary conditions on all walls. Inlets were subjected to laminar inflow constraints, parameterized by the sample flow rate and the sheath flow rate, respectively. The outlet pressure was constrained to zero. Fluid parameters were assigned for liquid water at 293 K. For simulations involving fluorescein, the sample inlet was subjected to a concentration constraint, fixing the concentration of fluorescein at the inlet to its experimental value of 1 mM. All other inlets were subjected to zero-concentration constraints. Coupled CFD-transport systems were solved using COMSOL’s default solver. The maximum sample (fluorescein) height was calibrated against experimental data for a single pair of sample and sheath flow rates, yielding a threshold concentration of fluorescein in the model. Fluorescein heights and widths were then predicted by thresholding the fluorescein concentration at the outlet. Particles used in tracing simulations were assumed to have a diameter of 6 μm and a density of 1.002 kg/L, within the parameter range of a Saccharomyces cerevisiae cell [55]. Particles were subjected to Stokes’ drag, neglecting all other force contributions. The simulation was initialized with 500 particles uniformly distributed at the sample inlet and traced for 10 ms. Particle positions were registered at the outlet and used to compute the mean particle position and its standard deviation for comparison with experiment.
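As a sanity check on the tracing setup, the Stokes-drag-only velocity update can be sketched in a few lines. The explicit-Euler integrator, the one-dimensional treatment, and the function name below are illustrative simplifications of ours, not the COMSOL particle tracing solver:

```python
def stokes_trace(v0, u_fluid, diameter, rho_p, mu, dt, steps):
    """Explicit-Euler particle velocity update under Stokes' drag only
    (all other force contributions neglected, as in the tracing runs).

    m dv/dt = 3*pi*mu*d*(u - v)  =>  dv/dt = (u - v) / tau,
    with relaxation time tau = rho_p * d**2 / (18 * mu).
    """
    tau = rho_p * diameter ** 2 / (18.0 * mu)  # particle response time [s]
    v = v0
    for _ in range(steps):
        v += (u_fluid - v) / tau * dt
    return v

# 6 um sphere, density 1.002 kg/L (= 1002 kg/m^3), in water at 293 K
# (dynamic viscosity ~1.0e-3 Pa*s), fluid moving at 1 mm/s:
v = stokes_trace(v0=0.0, u_fluid=1e-3, diameter=6e-6,
                 rho_p=1002.0, mu=1.0e-3, dt=1e-7, steps=1000)
```

With these parameters the relaxation time is tau ≈ 2 μs, so a 6 μm sphere relaxes to the local fluid velocity essentially instantly on the 10 ms tracing timescale, which is why drag alone suffices to follow the streamlines.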

#### 2.4. Microscopy

#### 2.5. Microsphere Z-Displacement Regression

A convolutional neural network was trained with the Adam optimizer at a learning rate of 10^{−4} to predict the displacement of bead centers with respect to the focal plane [58,61]. Training images were augmented using random rotations, cropping and the addition of Gaussian noise with mean 0 and standard deviation 0.1. Bright-field images of microspheres in flow were then evaluated with the network, and a z-displacement distribution of the microspheres was computed.
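The noise and cropping augmentations can be sketched with standard-library Python alone; `add_gaussian_noise` and `random_crop` are illustrative names of ours, images are plain nested lists here rather than tensors, and the random rotations used in training are omitted for brevity:

```python
import random

def add_gaussian_noise(img, sigma=0.1, rng=random):
    """Add pixel-wise Gaussian noise with mean 0 and std 0.1 (as in training)."""
    return [[px + rng.gauss(0.0, sigma) for px in row] for row in img]

def random_crop(img, size, rng=random):
    """Crop a size x size window at a uniformly random position."""
    h, w = len(img), len(img[0])
    top = rng.randrange(h - size + 1)
    left = rng.randrange(w - size + 1)
    return [row[left:left + size] for row in img[top:top + size]]
```

In practice these operations would run on GPU tensors inside the PyTorch data pipeline [58]; the sketch only fixes the semantics of the two augmentations.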

#### 2.6. Yeast Cell Z-Distance Regression

The Siamese network was trained with a learning rate of 10^{−4} until convergence [61]. Bright-field images of yeast cells in flow were embedded using the neural network. A well-focused S. cerevisiae cell was chosen as the reference for z-distance computation. Z-distances to this reference were computed for all single-cell images to derive a z-displacement distribution.
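The distance-to-reference step reduces to Euclidean distances in embedding space once the trained network (not shown) has mapped each image to a feature vector; `z_distance` and `z_distribution` below are hypothetical helper names for a minimal sketch:

```python
import math

def z_distance(embedding, reference):
    """Euclidean distance in embedding space; the Siamese training
    objective makes this approximate the absolute z-distance between
    the two underlying cell images."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(embedding, reference)))

def z_distribution(embeddings, reference):
    """Distance of every single-cell embedding to the in-focus reference,
    from which the z-displacement histogram is built."""
    return [z_distance(e, reference) for e in embeddings]
```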

#### 2.7. Unsupervised Learning For Cellular Mixtures

We model the distribution of single-cell images x with a generative model p_{θ}(x) = ∫ p_{θ}(x|z) p(z) dz with parameters θ, where z are low-dimensional latent variables, p(z) is the prior distribution of latent variables, and p_{θ}(x|z) is the likelihood of an image x given a latent vector z. Here, p_{θ}(x|z) is given by a neural network. Following [36], we construct a neural network to give a variational approximation q_{θ}(z|x) to the true posterior p(z|x) by reparameterization and optimize the variational lower bound to the marginal log-likelihood log p(x) with respect to all neural network parameters θ (Figure S3). The neural network q_{θ} maps single-cell images to samples from the low-dimensional latent distribution, and can thus be understood as an encoder, embedding data points into latent space. Similarly, p_{θ} can be understood as a decoder, mapping samples from the latent distribution to high-dimensional single-cell images. Optimizing the variational lower bound is then realized as training the encoder and decoder to reconstruct input images well, under the constraint that the latent distribution should be as close as possible to the prior distribution p(z) (Figure S3, purple term). To learn a latent space in which latent dimensions correspond to meaningful visual features (e.g., cell shape, cell focal plane), we implement the FactorVAE term in the variational lower bound, which promotes the independence of latent dimensions (Figure S3, red term) [40]. This term penalizes the latent distribution’s total correlation (TC), given by the Kullback–Leibler (KL) divergence between the marginal distribution q(z) and its corresponding factored distribution, i.e., the product of the distributions over each latent dimension [63]. This forces the latent distribution to be close to a product of independent distributions, encouraging the networks to learn a more strongly disentangled latent representation.
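The prior-matching term (Figure S3, purple term) has a closed form for the diagonal-Gaussian posterior used in VAEs, sketched below in plain Python; the FactorVAE total-correlation term is estimated with an auxiliary discriminator in practice [40] and is not reproduced here:

```python
import math

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ) for a diagonal Gaussian
    posterior q(z|x), summed over latent dimensions:

        0.5 * sum_i ( mu_i^2 + sigma_i^2 - log sigma_i^2 - 1 )

    This is the term pulling the latent distribution towards the prior.
    """
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                     for m, lv in zip(mu, log_var))
```

The term vanishes exactly when the posterior equals the standard-normal prior (mu = 0, log_var = 0) and grows as the encoder output drifts away from it, which is what balances reconstruction quality against latent-space regularity.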

The networks were trained with a learning rate of 10^{−4} and a factor-loss balancing parameter γ = 10 until convergence. K-means clustering as implemented in scikit-learn was applied to the latent space to separate S. cerevisiae cells from S. pombe cells, and the result was compared to ground-truth species labels [66]. Nearest neighbors for sample cells were extracted using Euclidean distance in latent space. The latent space was visualized in 2D using t-distributed stochastic neighbor embedding (t-SNE) [67].
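Because k-means assigns arbitrary cluster indices, scoring it against ground-truth species labels requires finding the best one-to-one mapping of clusters onto labels. A minimal sketch of that comparison (our own helper for illustration, not part of scikit-learn):

```python
from itertools import permutations

def clustering_accuracy(pred, truth):
    """Accuracy of unsupervised cluster assignments: cluster indices are
    arbitrary, so score the best permutation of labels onto clusters.
    Fine for two species; exhaustive search scales factorially."""
    labels = sorted(set(truth))
    clusters = sorted(set(pred))
    best = 0.0
    for perm in permutations(labels):
        mapping = dict(zip(clusters, perm))
        hits = sum(mapping[p] == t for p, t in zip(pred, truth))
        best = max(best, hits / len(truth))
    return best
```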

## 3. Results and Discussion

#### 3.1. Simulation Results

We used COMSOL Multiphysics® to simulate the effect of device height and sheath-to-sample flow velocity ratio on the maximum distance between sample and microscope cover slide, referred to here as “sample height”. The results of the parametric sweep are shown in Figure 2a. According to the simulation, any sheath-to-sample flow velocity ratio above 20 (Figure 2a, y-axis) should result in a sample height below 10 μm relative to the coverslip (Figure 2a, color scale, darker green). This appears to be independent of the height of the device used (Figure 2a, x-axis). To test the simulation results, we fabricated two devices with different heights, the cross sections of which are shown in Figure 2b. The first device has a total height of 60 μm (10 μm bottom layer + 50 μm top layer), and the second a height of 120 μm (10 + 110 μm).
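For reference, the velocity ratios follow from the volumetric flow rates via the channel cross-sections (mean velocity v = Q/A); a small unit-conversion sketch is shown below. The 80 μm channel width in the example is our inference from the velocities quoted alongside Figure 5, not a value stated in this section:

```python
def mean_velocity_mm_per_s(flow_ul_per_min, width_um, height_um):
    """Mean velocity v = Q / A for a rectangular channel cross-section.
    1 uL/min = 1e9 um^3 / 60 s; result converted from um/s to mm/s."""
    q_um3_per_s = flow_ul_per_min * 1e9 / 60.0
    area_um2 = width_um * height_um
    return q_um3_per_s / area_um2 / 1000.0

# Assumed 80 um x 120 um channel: 10 uL/min -> ~17.4 mm/s,
# 0.25 uL/min -> ~0.43 mm/s, consistent with the values quoted later.
v_sheath = mean_velocity_mm_per_s(10.0, 80.0, 120.0)
v_sample = mean_velocity_mm_per_s(0.25, 80.0, 120.0)
```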

#### 3.2. Sample Confinement Testing Using Fluorescein

To capture backflow in COMSOL Multiphysics®, pressure constraints need to be applied. For all simulations shown in this work, flow velocity constraints that prohibit backflow were applied instead. Indeed, when pressure constraints are applied in place of velocity constraints, we see a negative x-velocity within the sample channel, confirming flow towards the sample inlet for sheath flow rates above 20 μL/min (Figure S4). These results demonstrate that devices with a height of 60 μm do not fulfill the requirements for sample flow focusing within 10 µm of the coverslip.

We used COMSOL Multiphysics® to generate animations showing sample flow confinement with increasing sheath flow rate in 2D and 3D (Supplementary Animations S1 and S2). In summary, devices with a height of 120 µm can robustly confine fluorescein to within ~5 μm of the microscope slide.

#### 3.3. Simulation of Particle Positioning and Validation Using Microspheres

Microspheres were diluted to a concentration on the order of 10^{7} particles/mL and introduced into the device through the sample inlet. To keep the particle velocity within a range we can image without motion blur, we used a sample flow rate of 0.1 μL/min and a sheath (water) flow rate of 10 μL/min. These flow rates resulted in microspheres being confined within 16 μm of the coverslip, with a mean equilibrium position approximately 10 μm from the coverslip. To automatically and accurately quantify the z-positions of all microspheres imaged in flow with respect to the microscope focal plane and the coverslip, we trained a simple neural network on z-stacks of images acquired from static microspheres (Figure 5d and Supplementary File S2). Details about the network can be found in Section 2.5. The distribution of microspheres within the sample stream is depicted as a histogram in Figure 5e. Microsphere z-displacement is shown relative to the true focal plane (z-displacement = 0), as well as relative to the microscope slide (z-displacement = 10 μm). Microspheres located within 2 μm of the focal plane, depicted by the shaded area in Figure 5e, account for 68% of all spheres imaged. As already demonstrated in Figure 3 and Figure 4, further confinement is possible in these devices by increasing the sheath flow rate. This is expected to increase the percentage of microspheres within 2 μm of the focal plane, since less space is available for them to move. However, increasing the sheath flow rate also increases the velocity of the microspheres, which in turn produces motion blur significant enough that we can no longer quantify the microsphere distribution.
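The 68% figure is simply the fraction of the measured z-displacement distribution falling inside the shaded ±2 μm window; a minimal sketch (illustrative helper name, toy data):

```python
def fraction_within(displacements_um, limit_um=2.0):
    """Fraction of detected spheres whose |z-displacement| from the
    focal plane lies within limit_um (68% in our measurements)."""
    hits = sum(1 for z in displacements_um if abs(z) <= limit_um)
    return hits / len(displacements_um)
```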

#### 3.4. Single-Cell Imaging in Flow

#### 3.5. In-Flow Cell Imaging and Cell Classification

## 4. Discussion

## 5. Conclusions

## Supplementary Materials

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

1. Gross, H.-J.; Verwer, B.; Houck, D.; Recktenwald, D. Detection of rare cells at a frequency of one per million by flow cytometry. Cytometry **1993**, 14, 519–526.
2. De Rosa, S.C.; Herzenberg, L.A.; Herzenberg, L.A.; Roederer, M. 11-color, 13-parameter flow cytometry: Identification of human naive T cells by phenotype, function, and T-cell receptor diversity. Nat. Med. **2001**, 7, 245–248.
3. Sandberg, J.; Werne, B.; Dessing, M.; Lundeberg, J. Rapid flow-sorting to simultaneously resolve multiplex massively parallel sequencing products. Sci. Rep. **2011**, 1, 1–7.
4. Schonbrun, E.; Gorthi, S.S.; Schaak, D. Microfabricated multiple field of view imaging flow cytometry. Lab Chip **2012**, 12, 268–273.
5. Barteneva, N.S.; Fasler-Kan, E.; Vorobjev, I.A. Imaging Flow Cytometry: Coping with Heterogeneity in Biological Systems. J. Histochem. Cytochem. **2012**, 60, 723–733.
6. Han, Y.; Gu, Y.; Zhang, A.C.; Lo, Y. Review: Imaging technologies for flow cytometry. Lab Chip **2016**, 16, 4639–4647.
7. Rosenauer, M.; Buchegger, W.; Finoulst, I.; Verhaert, P.; Vellekoop, M. Miniaturized flow cytometer with 3D hydrodynamic particle focusing and integrated optical elements applying silicon photodiodes. Microfluid. Nanofluid. **2011**, 10, 761–771.
8. Simonnet, C.; Groisman, A. High-throughput and high-resolution flow cytometry in molded microfluidic devices. Anal. Chem. **2006**, 78, 5653–5663.
9. Sundararajan, N.; Pio, M.S.; Lee, L.P.; Berlin, A.A. Three-Dimensional Hydrodynamic Focusing in Polydimethylsiloxane (PDMS) Microchannels. J. Microelectromech. Syst. **2004**, 13, 559–567.
10. Chang, C.-C.; Huang, Z.-X.; Yang, R.-J. Three-dimensional hydrodynamic focusing in two-layer polydimethylsiloxane (PDMS) microchannels. J. Micromech. Microeng. **2007**, 17, 1479–1486.
11. Wu, T.; Chen, Y.; Park, S.; Hong, J.; Teslaa, T.; Zhong, J.F.; Di Carlo, D.; Teitell, M.A.; Chiou, P. Pulsed laser triggered high speed microfluidic fluorescence activated cell sorter. Lab Chip **2012**, 12, 1378–1383.
12. Sakuma, S.; Kasai, Y.; Hayakawa, T.; Arai, F. On-chip cell sorting by high-speed local-flow control using dual membrane pumps. Lab Chip **2017**, 17, 2760–2767.
13. Mao, X.; Lin, S.C.S.; Dong, C.; Huang, T.J. Single-layer planar on-chip flow cytometer using microfluidic drifting based three-dimensional (3D) hydrodynamic focusing. Lab Chip **2009**, 9, 1583–1589.
14. Eluru, G.; Julius, L.A.N.; Gorthi, S.S. Single-layer microfluidic device to realize hydrodynamic 3D flow focusing. Lab Chip **2016**, 16, 4133–4141.
15. Gualda, E.J.; Pereira, H.; Martins, G.G.; Gardner, R.; Moreno, N. Three-dimensional imaging flow cytometry through light-sheet fluorescence microscopy. Cytom. Part A **2017**, 91, 144–151.
16. Nawaz, A.A.; Zhang, X.; Mao, X.; Rufo, J.; Lin, S.C.S.; Guo, F.; Zhao, Y.; Lapsley, M.; Li, P.; McCoy, J.P.; et al. Sub-micrometer-precision, three-dimensional (3D) hydrodynamic focusing via “microfluidic drifting”. Lab Chip **2014**, 14, 415–423.
17. Paiè, P.; Bragheri, F.; Di Carlo, D.; Osellame, R. Particle focusing by 3D inertial microfluidics. Microsyst. Nanoeng. **2017**, 3, 17027.
18. Rane, A.S.; Rutkauskaite, J.; deMello, A.; Stavrakis, S. High-Throughput Multi-parametric Imaging Flow Cytometry. Chem **2017**, 3, 588–602.
19. Nitta, N.; Sugimura, T.; Isozaki, A.; Mikami, H.; Hiraki, K.; Sakuma, S.; Iino, T.; Arai, F.; Endo, T.; Fujiwaki, Y.; et al. Intelligent Image-Activated Cell Sorting. Cell **2018**, 175, 266–276.e13.
20. Normolle, D.P.; Donnenberg, V.S.; Donnenberg, A.D. Statistical classification of multivariate flow cytometry data analyzed by manual gating: Stem, progenitor, and epithelial marker expression in nonsmall cell lung cancer and normal lung. Cytom. Part A **2013**, 83A, 150–160.
21. Ye, X.; Ho, J.W.K. Ultrafast clustering of single-cell flow cytometry data using FlowGrid. BMC Syst. Biol. **2019**, 13 (Suppl. 2), 35.
22. Pouyan, M.B.; Jindal, V.; Birjandtalab, J.; Nourani, M. Single and multi-subject clustering of flow cytometry data for cell-type identification and anomaly detection. BMC Med. Genom. **2016**, 9, 41.
23. Kraus, O.Z.; Grys, B.T.; Ba, J.; Chong, Y.T.; Frey, B.J.; Boone, C.; Andrews, B.J.J.; Pärnamaa, T.; Parts, L.; Humeau-Heurtier, A.; et al. The curse of dimensionality. Mach. Learn. **2018**, 7, 18.
24. Köppen, M. The curse of dimensionality. In Proceedings of the 5th Online World Conference on Soft Computing in Industrial Applications, 4–18 September 2000; pp. 4–8.
25. Humeau-Heurtier, A. Texture Feature Extraction Methods: A Survey. IEEE Access **2019**, 7, 8975–9000.
26. Carpenter, A.E.; Jones, T.R.; Lamprecht, M.R.; Clarke, C.; Kang, I.H.; Friman, O.; Guertin, D.A.; Chang, J.H.; Lindquist, R.A.; Moffat, J.; et al. CellProfiler: Image analysis software for identifying and quantifying cell phenotypes. Genome Biol. **2006**, 7, R100.
27. Kraus, O.Z.; Grys, B.T.; Ba, J.; Chong, Y.; Frey, B.J.; Boone, C.; Andrews, B.J. Automated analysis of high-content microscopy data with deep learning. Mol. Syst. Biol. **2017**, 13, 924.
28. Rumetshofer, E.; Hofmarcher, M.; Röhrl, C.; Hochreiter, S.; Klambauer, G. Human-level Protein Localization with Convolutional Neural Networks. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019.
29. Pärnamaa, T.; Parts, L. Accurate Classification of Protein Subcellular Localization from High-Throughput Microscopy Images Using Deep Learning. G3 (Bethesda) **2017**, 7, 1385–1392.
30. Platt, J. Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines; Technical Report MSR-TR-98-14; Microsoft Research: Redmond, WA, USA, 1998.
31. Chong, Y.T.; Koh, J.L.Y.; Friesen, H.; Kaluarachchi Duffy, S.; Cox, M.J.; Moses, A.; Moffat, J.; Boone, C.; Andrews, B.J. Yeast Proteome Dynamics from Single Cell Imaging and Automated Analysis. Cell **2015**, 161, 1413–1424.
32. Bengio, Y. Deep Learning of Representations: Looking Forward. In Statistical Language and Speech Processing, Lecture Notes in Computer Science; Dediu, A.-H., Martín-Vide, C., Mitkov, R., Truthe, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 1–27.
33. Gidaris, S.; Singh, P.; Komodakis, N. Unsupervised Representation Learning by Predicting Image Rotations. arXiv **2018**, arXiv:1803.07728.
34. Haeusser, P.; Plapp, J.; Golkov, V.; Aljalbout, E.; Cremers, D. Associative Deep Clustering: Training a Classification Network with no Labels. In Proceedings of the German Conference on Pattern Recognition (GCPR), Stuttgart, Germany, 9–12 October 2018.
35. Caron, M.; Bojanowski, P.; Joulin, A.; Douze, M. Deep Clustering for Unsupervised Learning of Visual Features. arXiv **2018**, arXiv:1807.05520.
36. Kingma, D.P.; Welling, M. Auto-Encoding Variational Bayes. arXiv **2013**, arXiv:1312.6114.
37. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. arXiv **2014**, arXiv:1406.2661.
38. Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.-A. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. J. Mach. Learn. Res. **2010**, 11, 3371–3408.
39. Higgins, I.; Amos, D.; Pfau, D.; Racaniere, S.; Matthey, L.; Rezende, D.; Lerchner, A. Towards a Definition of Disentangled Representations. arXiv **2018**, arXiv:1812.02230.
40. Kim, H.; Mnih, A. Disentangling by Factorising. arXiv **2018**, arXiv:1802.05983.
41. Kim, M.; Wang, Y.; Sahu, P.; Pavlovic, V. Relevance Factor VAE: Learning and Identifying Disentangled Factors. arXiv **2019**, arXiv:1902.01568.
42. Higgins, I.; Matthey, L.; Pal, A.; Burgess, C.; Glorot, X.; Botvinick, M.; Mohamed, S.; Lerchner, A. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017.
43. Burgess, C.P.; Higgins, I.; Pal, A.; Matthey, L.; Watters, N.; Desjardins, G.; Lerchner, A. Understanding disentangling in β-VAE. arXiv **2018**, arXiv:1804.03599.
44. Chen, R.T.Q.; Li, X.; Grosse, R.; Duvenaud, D. Isolating Sources of Disentanglement in Variational Autoencoders. arXiv **2018**, arXiv:1802.04942.
45. Mescheder, L.; Geiger, A.; Nowozin, S. Which Training Methods for GANs do actually Converge? arXiv **2018**, arXiv:1801.04406.
46. Scott, R.; Sethu, P.; Harnett, C.K. Three-dimensional hydrodynamic focusing in a microfluidic Coulter counter. Rev. Sci. Instrum. **2008**, 79, 46104.
47. Hairer, G.; Pärr, G.S.; Svasek, P.; Jachimowicz, A.; Vellekoop, M.J. Investigations of micrometer sample stream profiles in a three-dimensional hydrodynamic focusing device. Sens. Actuators B Chem. **2008**, 132, 518–524.
48. Chung, S.; Park, S.J.; Kim, J.K.; Chung, C.; Han, D.C.; Chang, J.K. Plastic microchip flow cytometer based on 2- and 3-dimensional hydrodynamic flow focusing. Microsyst. Technol. **2003**, 9, 525–533.
49. Lake, M.; Narciso, C.; Cowdrick, K.; Storey, T.; Zhang, S.; Zartman, J.; Hoelzle, D. Microfluidic device design, fabrication, and testing protocols. Protoc. Exch. **2015**.
50. Qin, D.; Xia, Y.; Whitesides, G.M. Soft lithography for micro- and nanoscale patterning. Nat. Protoc. **2010**, 5, 491.
51. Ward, K.; Fan, Z.H. Mixing in microfluidic devices and enhancement methods. J. Micromech. Microeng. **2015**, 25, 094001.
52. Tan, J.N.; Neild, A. Microfluidic mixing in a Y-junction open channel. AIP Adv. **2012**, 2, 032160.
53. Ushikubo, F.Y.; Birribilli, F.S.; Oliveira, D.R.B.; Cunha, R.L. Y- and T-junction microfluidic devices: Effect of fluids and interface properties and operating conditions. Microfluid. Nanofluid. **2014**, 17, 711–720.
54. Watkins, N.; Venkatesan, B.M.; Toner, M.; Rodriguez, W.; Bashir, R. A robust electrical microcytometer with 3-dimensional hydrofocusing. Lab Chip **2009**, 9, 3177–3184.
55. Bryan, A.K.; Goranov, A.; Amon, A.; Manalis, S.R. Measurement of mass, density, and volume during the cell cycle of yeast. Proc. Natl. Acad. Sci. USA **2010**, 107, 999–1004.
56. Ridler, T.W.; Calvard, S. Picture Thresholding Using an Iterative Selection Method. IEEE Trans. Syst. Man Cybern. **1978**, 8, 630–632.
57. Salvi, M.; Molinari, F. Multi-tissue and multi-scale approach for nuclei segmentation in H&E stained images. Biomed. Eng. Online **2018**, 17, 89.
58. Paszke, A.; Chanan, G.; Lin, Z.; Gross, S.; Yang, E.; Antiga, L.; Devito, Z. Automatic differentiation in PyTorch. In Proceedings of the 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 1–4.
59. Nair, V.; Hinton, G.E. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010; p. 8.
60. Xu, B.; Wang, N.; Chen, T.; Li, M. Empirical Evaluation of Rectified Activations in Convolutional Network. arXiv **2015**, arXiv:1505.00853.
61. Kingma, D.P.; Ba, J.L. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015; pp. 1–15.
62. Koch, G.; Zemel, R.; Salakhutdinov, R. Siamese Neural Networks for One-shot Image Recognition. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; p. 8.
63. Watanabe, S. Information Theoretical Analysis of Multivariate Correlation. IBM J. Res. Dev. **1960**, 4, 66–82.
64. Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv **2015**, arXiv:1502.03167.
65. Rezende, D.J.; Mohamed, S.; Wierstra, D. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. arXiv **2014**, arXiv:1401.4082.
66. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. **2011**, 12, 2825–2830.
67. Van der Maaten, L.; Hinton, G. Visualizing Data using t-SNE. J. Mach. Learn. Res. **2008**, 9, 2579–2605.
68. Wyatt Shields, C., IV; Reyes, C.D.; López, G.P. Microfluidic cell sorting: A review of the advances in the separation of cells from debulking to rare cell isolation. Lab Chip **2015**, 15, 1230–1249.
69. Kuo, J.S.; Chiu, D.T. Controlling Mass Transport in Microfluidic Devices. Annu. Rev. Anal. Chem. **2011**, 4, 275–296.
70. Salmon, J.B.; Ajdari, A. Transverse transport of solutes between co-flowing pressure-driven streams for microfluidic studies of diffusion/reaction processes. J. Appl. Phys. **2007**, 101, 074902.
71. Kuntaegowdanahalli, S.S.; Bhagat, A.A.; Kumar, G.; Papautsky, I. Inertial microfluidics for continuous particle separation in spiral microchannels. Lab Chip **2009**, 9, 2973–2980.
72. Di Carlo, D. Inertial microfluidics. Lab Chip **2009**, 9, 3038–3046.
73. Yu, C.; Qian, X.; Chen, Y.; Yu, Q.; Ni, K.; Wang, X. Three-Dimensional Electro-Sonic Flow Focusing Ionization Microfluidic Chip for Mass Spectrometry. Micromachines **2015**, 6, 1890–1902.

**Figure 1.** (**a**) Lengthwise 3D device cross-section showing the difference in height between the sheath and sample inlets. Red is used to show the bottom layer of photoresist, and also the device footprint. The top layer of photoresist is shown in grey (mirror symmetry across the y-axis applies). The difference in height between the sheath inlet and the sample inlet is not drawn to scale and only serves as an example; (**b**) Device top view showing the flow focusing mechanism, where the black area is occupied by sheath fluid and the green area by the sample; (**c**) 2D lengthwise cross-section of the channel (front view) showing sample confinement from both the top and the sides.

**Figure 2.** (**a**) Parametric sweep performed in COMSOL Multiphysics®. Flow velocity ratio (y-axis) refers to the ratio between sheath fluid velocity and sample flow velocity. Device height (x-axis) refers to the total height of the device, assuming a constant sample inlet height of 10 μm. Sample height (color scale) refers to the maximum distance between coverslip and sample. A log–log scale is used to better resolve the areas of interest; (**b**) Cross sections of the two-layer device geometries tested, where H_{1} is the height of the bottom layer and H_{2} is the height of the top layer: (i) H_{1} = 10 μm, H_{2} = 50 μm; (ii) H_{1} = 10 μm, H_{2} = 110 μm.

**Figure 3.** (**a**) Confocal microscopy images at increasing sheath-to-sample flow velocity ratio for the 10 + 50 μm device. As the sheath flow rate increases, fluorescein is confined both towards the microscope slide (bottom) and from the sides. In the z-direction the pixel size is w = 0.41 μm, h = 0.65 μm; (**b**) Z-projection of thresholded confocal microscopy images showing fluorescein confinement for increasing sheath flow rate; (**c**) Simulated (black) and experimentally measured (red) fluorescein heights for a constant fluorescein flow rate of 0.25 μL/min and varying sheath flow rates. The shaded area highlights fluorescein heights below 10 μm.

**Figure 4.** (**a**) Confocal microscopy images at increasing sheath-to-sample flow velocity ratio for the 120 µm (10 + 110 μm) device. In the z-direction, the pixel size is w = 0.43 μm, h = 0.43 μm; (**b**) Z-projection of thresholded confocal microscopy images showing confinement of the fluorescein cone for increasing sheath flow rate; (**c**) Simulated and experimentally measured fluorescein heights for a constant fluorescein flow rate of 0.25 μL/min and varying sheath flow rates. The shaded area highlights fluorescein heights below 10 μm, our target confinement height for live yeast cell in-flow imaging.

**Figure 5.** (**a**) Simulated distance between fluorescein cone tip and microscope slide for increasing sheath flow rates at a fluorescein flow rate of 0.25 μL/min (black). Mean bead equilibrium position (with standard deviation) with respect to the microscope slide (red); (**b**) Confocal microscopy image of the cross section of the sample stream (green) and sheath fluid (black). Red fluorescent microspheres (diameter ~6 μm) dispersed in the sample stream appear as lines due to line scanning in confocal microscopy. This image was taken in a device with a height of 120 μm, at a fluorescein flow rate of 0.25 μL/min (0.43 mm/s) and a sheath flow rate of 10 μL/min (17.36 mm/s). The height of the fluorescein cone in this image is 27 μm, and the scale bar is 6 μm; (**c**) Simulation reproducing the microscopy data shown in (**b**). The fluorescein contour is shown as a green dotted line. Microspheres appear to concentrate closer to the fluorescein cone tip rather than the microscope slide. The inset shows a juxtaposition of the confocal image and the equivalent simulated data; (**d**) Microsphere images acquired at known distances from the focal plane, used as part of the training set for bead focal-plane regression; (**e**) Histogram of microsphere z-displacement relative to the focal plane. Bins have a width of 0.5 µm, with the x-axis giving the displacement relative to the focal plane and the y-axis the bead count for each bin. The shaded region within 2 µm of the focal plane accounts for 68% of all microspheres. Displacement in the x-y plane is shown in the inset; 87% of sphere centers are located within 2 µm of the stream centerline.

**Figure 6.** (**a**) Workflow underlying distance learning for automated z-position determination of flowing cells. Stacks of images of large fields of view containing many cells were recorded on the same microscope used for imaging flowing cells, but with an ordinary mounting of the cells. Single-cell stacks were cropped out. Random image pairs with known z-distances taken from single-cell stacks were used, after image augmentation, to train a Siamese neural network; (**b**) Predicted and real z-distance for pairs of cells taken from a test set of single-cell stacks; (**c**) Example images of S. cerevisiae, S. ludwigii and S. pombe cells, highlighting the high within-species and inter-species heterogeneity of the cells used. Scale bar length corresponds to 5 µm; (**d**,**e**) Histograms of z-displacement of flowing yeast cells in the device relative to an in-focus reference cell. Bins have a width of 0.5 µm, with the x-axis giving the z-displacement relative to the distribution median. Across all yeast species, example images highlighting their heterogeneity are shown. The shaded region within 2.5 µm of the focal plane contains 51% and 60% of imaged cells for all species (**d**) and S. cerevisiae (**e**), respectively. S. cerevisiae examples of non-budding cells and pessimistic predictions in the region below −10 µm are shown, accounting for 85% of the distribution’s lower peak.

**Figure 7.** (**a**) Variational autoencoder (VAE) architecture for unsupervised learning. Cell images are convolutionally embedded into a 10-dimensional latent space and reconstructed using same-shaped transpose convolutions. The network is trained to perform image reconstruction and is constrained to produce a disentangled latent space by a KL divergence relative to a normal distribution and a penalty on total correlation; (**b**) Sample nearest-neighbor queries for eight query cells. Query results are displayed in order of increasing distance in latent space; (**c**) Assessment of unsupervised classification accuracy. A two-dimensional embedding of data points for S. cerevisiae (blue) and S. pombe (turquoise) is shown, with ground-truth species labels (left), a map of data points wrongly classified (red) by latent-space k-means (center), and images of random failure cases for both species (right). Failure cases comprise S. cerevisiae cells classified as S. pombe cells (left column) and vice versa (right column). K-means on the latent space classifies 74% of samples correctly, without the need for supervision; (**d**) Assessment of few-shot classification accuracy. A map of wrongly classified data points (red) using a support vector machine (SVM) classifier on the latent space with 10 training examples per species shows an accuracy of 88% (left). The full training set is displayed for both species (top right), together with a confusion matrix showing the percentage of classifications for the classifier (bottom right). Sc and Sp indicate S. cerevisiae and S. pombe, respectively; (**e**) Latent space interpretability. A latent-space interpolation between three cells is shown, indicating latent-space vectors encoding cell focal plane (focus) and cell elongation (shape).

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

Constantinou, I.; Jendrusch, M.; Aspert, T.; Görlitz, F.; Schulze, A.; Charvin, G.; Knop, M.
Self-Learning Microfluidic Platform for Single-Cell Imaging and Classification in Flow. *Micromachines* **2019**, *10*, 311.
https://doi.org/10.3390/mi10050311