# Mathematical Modelling of Ground Truth Image for 3D Microscopic Objects Using Cascade of Convolutional Neural Networks Optimized with Parameters’ Combinations Generators


## Abstract


## 1. Introduction

## 2. Dataset and Methodological Framework

#### 2.1. Dataset Description and Preparation

#### 2.2. Convolutional Neural Networks (CNNs)

#### 2.2.1. Knowledge Based (KB) Module

- Choose **n** combinations of CNN parameters, with linearly selected values from their domains.
- Generate and train **n** sets of CNNs, one set for each selected combination.

Pseudocode 1: LM

```
for n = defined_combinations_of_CNN_parameters
    generate_CNN(n.parameter1.value, n.parameter2.value, n.parameter3.value, ..., n.parameterM.value);
end for
```
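The LM loop can be sketched in Python (the paper's implementation is in MATLAB; the parameter names and domains below are illustrative, not taken from the paper):

```python
import numpy as np

def linear_method(domains, n):
    """Build n parameter combinations: the i-th combination takes the
    i-th of n linearly spaced values from every parameter's domain."""
    grids = {name: np.linspace(lo, hi, n) for name, (lo, hi) in domains.items()}
    return [{name: float(grids[name][i]) for name in domains} for i in range(n)]

# Illustrative domains; each combination would be passed to
# generate_CNN(...) and the resulting network trained.
combos = linear_method({"num_filters": (12, 256),
                        "learning_rate": (0.004, 1.0),
                        "filter_size": (2, 5)}, n=3)
```

This produces exactly **n** combinations regardless of the number of parameters, which is why LM trains far fewer CNNs than BFM.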

- For **m** CNN parameters, choose a certain number of linearly selected values from each parameter's domain (the experiments use 3 values).
- Generate and train a CNN for every possible combination of these parameter values.

Pseudocode 2: BFM

```
for i = defined_number_of_values_for_parameter1
    for j = defined_number_of_values_for_parameter2
        for k = defined_number_of_values_for_parameter3
            ...
            for m = defined_number_of_values_for_parameterM
                generate_CNN(i.value, j.value, k.value, ..., m.value);
            end for
        end for
    end for
end for
```

- Number of filters: [12 : 256]
- Learning rate: [0.004 : 1]
- Filter size: [2 : 5]

- Learning rate: [0.004 : 1]
- Mini batch size: [32 : 256]
- Momentum: [0.7 : 1]
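The nested loops of Pseudocode 2 can be sketched in Python as a Cartesian product (the parameter names are illustrative; the domains mirror the ranges listed above):

```python
import itertools
import numpy as np

def brute_force_method(domains, values_per_param=3):
    """Replace the nested for-loops of Pseudocode 2 with a Cartesian
    product over linearly spaced values of each parameter's domain."""
    names = list(domains)
    axes = [np.linspace(lo, hi, values_per_param) for lo, hi in domains.values()]
    return [dict(zip(names, map(float, combo)))
            for combo in itertools.product(*axes)]

# With 3 values per parameter and 3 parameters this yields 3^3 = 27
# CNNs, matching the BFM counts of 27 reported in the experiments.
combos = brute_force_method({"num_filters": (12, 256),
                             "learning_rate": (0.004, 1.0),
                             "filter_size": (2, 5)})
```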

- For **m** CNN parameters and **n** iterations (n is a positive integer), each iteration randomly chooses the parameters' values from their domains.
- Generate and train **n** CNNs.

Pseudocode 3: RPM

```
for n = iterations
    parameter1.value = rand(values_from_parameter1_domain);
    parameter2.value = rand(values_from_parameter2_domain);
    parameter3.value = rand(values_from_parameter3_domain);
    ...
    parameterM.value = rand(values_from_parameterM_domain);
    generate_CNN(parameter1.value, parameter2.value, parameter3.value, ..., parameterM.value);
end for
```

This yields **n** generated CNNs in a single generation.

- Number of filters: [12 : 256]
- Learning rate: [0.004 : 0.9]
- Filter size: [2 : 5]

- Learning rate:[0.004 : 0.9]
- Mini batch size: [32 : 256]
- Momentum: [0.7 : 1]
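Pseudocode 3 can be sketched in Python as follows (illustrative parameter names; domains mirror the ranges listed above):

```python
import random

def random_method(domains, iterations, seed=None):
    """Each iteration draws every parameter uniformly at random from
    its domain; each draw would feed one generate_CNN(...) call."""
    rng = random.Random(seed)
    # Continuous draws; integer-valued parameters (e.g. number of
    # filters) would be rounded before use.
    return [{name: rng.uniform(lo, hi) for name, (lo, hi) in domains.items()}
            for _ in range(iterations)]

combos = random_method({"num_filters": (12, 256),
                        "learning_rate": (0.004, 0.9),
                        "filter_size": (2, 5)}, iterations=5, seed=0)
```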

#### 2.2.2. Cascade of CNNs

#### 2.2.3. Data Augmentation

#### 2.3. Metrics Evaluation Module

## 3. System Modelling and Software Implementation

#### 3.1. Model Based on CNNs

#### 3.1.1. GGTI DL Variant 1A, 1B, 2A and 2B

#### 3.1.2. GGTI DL Variant 3A and 3B

#### 3.2. Software Implementation

## 4. Results

#### 4.1. Experiment 1: GGTI DL Variant 1A

- LM: 3 CNNs
- BFM: 39 CNNs
- RPM: 35 CNNs

#### 4.2. Experiment 2: GGTI DL Variant 1B

- LM: 5 CNNs
- BFM: 27 CNNs

#### 4.3. Experiment 3: GGTI DL Variant 2A

- LM: 3 CNNs
- BFM: 27 CNNs
- RPM: 23 CNNs

#### 4.4. Experiment 4: GGTI DL Variant 2B

- LM: 3 CNNs
- BFM: 27 CNNs
- RPM: 15 CNNs

#### 4.5. Experiment 5: GGTI DL Variant 3A

- LM: 3 CNNs
- BFM: 27 CNNs
- RPM: 23 CNNs

#### 4.6. Experiment 6: GGTI DL Variant 3B

- LM: 5 CNNs
- BFM: 27 CNNs
- RPM: 15 CNNs

#### 4.7. Experiment 7: Benchmarking—GGTI DL 3D Segmentation

#### 4.8. Experiment 8: Generalization

- Variant 1G—Semantic segmentation with pre-trained CNN networks used in experiment 7.
- Variant 2G—Semantic segmentation with additional training of pre-trained CNN networks used in experiment 7, with samples from new datasets (data augmentation included).
- Variant 3G—Semantic segmentation with training CNN networks from scratch, with samples from new datasets (data augmentation included).
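The three generalization variants differ only in whether pre-trained weights are reused and whether (re)training occurs, which can be summarized as a minimal configuration sketch (variant names from the paper; the function and flags are hypothetical):

```python
def configure_variant(variant):
    """Return (use_pretrained_weights, train_on_new_samples) for the
    three generalization variants described above."""
    settings = {
        "1G": (True, False),   # pre-trained CNNs used as-is
        "2G": (True, True),    # pre-trained CNNs additionally trained on new samples
        "3G": (False, True),   # CNNs trained from scratch on new samples
    }
    return settings[variant]
```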

#### 4.8.1. Variant 1G

#### 4.8.2. Variant 2G

#### 4.8.3. Variant 3G

## 5. Comparative Analysis and Discussion

#### 5.1. Benchmarking

#### 5.2. Generalization

## 6. Conclusions

## Supplementary Materials

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References


**Figure 2.** (**a**) Individual slices for sample 6, "2013-01-23-col0-8_GC1". (**b**) 3D view of sample 6, "2013-01-23-col0-8_GC1".

**Figure 3.** (**a**) Individual slices for sample 6, "2013-01-23-col0-8_GC1". (**b**) 3D view of sample 6, "2013-01-23-col0-8_GC1".

**Figure 6.** Graphical demonstration of the median operator, assuming five generated CNNs: (**a**) each CNN produces a semantic segmentation of a slice, with grey pixels labelled background and white pixels labelled foreground (nucleus); (**b**) after sorting the resulting segmentation slices, the median slice is selected.
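As a minimal numpy sketch of combining binary segmentations (here the median is taken pixelwise, which for an odd number of binary slices coincides with a per-pixel majority vote; the paper's operator selects a median slice after sorting):

```python
import numpy as np

# Five hypothetical binary segmentation slices produced by five CNNs
# (1 = foreground/nucleus, 0 = background), each 2x2 pixels.
slices = np.array([
    [[0, 1], [1, 1]],
    [[0, 1], [0, 1]],
    [[1, 1], [0, 0]],
    [[0, 0], [1, 1]],
    [[0, 1], [1, 1]],
])

# Pixelwise median over an odd number of binary slices = majority vote.
median_slice = np.median(slices, axis=0).astype(int)
```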

**Figure 7.** Let the ground truth image (GTI) be the referenced slice, and the segmented image (SI) be the slice produced by the CNN's semantic segmentation. White represents the foreground (nucleus) label and black represents the background label. A GTI white (+) pixel mapped to an SI white (+) pixel is a TP; a GTI white (+) pixel mapped to an SI black (-) pixel is an FN; a GTI black (-) pixel mapped to an SI white (+) pixel is an FP; and a GTI black (-) pixel mapped to an SI black (-) pixel is a TN.
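These pixelwise definitions can be computed directly from two binary masks (a sketch with a hypothetical helper name; 1 = foreground, 0 = background):

```python
import numpy as np

def confusion_counts(gti, si):
    """Pixelwise TP/FN/FP/TN between a ground-truth slice (GTI)
    and a segmented slice (SI)."""
    gti, si = np.asarray(gti, dtype=bool), np.asarray(si, dtype=bool)
    tp = int(np.sum(gti & si))    # white -> white
    fn = int(np.sum(gti & ~si))   # white -> black
    fp = int(np.sum(~gti & si))   # black -> white
    tn = int(np.sum(~gti & ~si))  # black -> black
    return tp, fn, fp, tn

tp, fn, fp, tn = confusion_counts([[1, 1], [0, 0]], [[1, 0], [1, 0]])
```

Standard metrics such as accuracy, precision and recall follow directly from these four counts.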

**Figure 11.** (**a**) Training block for variant 1A. (**b**) Training block for variant 1B. (**c**) Training block for variants 2A and 3A. (**d**) Training block for variants 2B and 3B.

**Figure 12.** Diagram of the software framework, showing the software implementation of the modules as well as their inputs and outputs.

**Figure 13.** (**a**) The global GUI contains links to all other important GUI elements, as well as additional options and helpers. (**b**) GUI for MGTI—Module 1. (**c**) GUI for GGTI AP7—Module 2. (**d**) GUI (in use) for variants 1A, 1B, 2A and 2B—Module 3.

**Figure 14.** (**a**) GUI for GGTI DL variants 3A and 3B—Module 3. (**b**) The Module 4 GUI offers loading and preview of the referenced, segmented and original slice, as well as export to MAT/TIFF data types, etc. (**c**) The second Module 4 GUI offers calculation of all metrics. (**d**) The GUI for batch 3D segmentation and evaluation.

**Figure 22.** (**a**) Experiments 1–6, results for the LM method and AMCQ metric. (**b**) Experiments 1–6, results for the BFM method and AMCQ metric. (**c**) Experiments 1–6, results for the RPM method and AMCQ metric.

**Figure 23.** (**a**) Variant 1A—segmented slice of MGTI and LM CNN voter. (**b**) Variant 1B—segmented slices of MGTI and LM CNN voter.

**Figure 24.** (**a**) Variant 2A—segmented slices of MGTI and LM CNN voter. (**b**) Variant 2B—segmented slices of MGTI and LM CNN median.

**Figure 25.** (**a**) Variant 3A—segmented slice of MGTI and BFM CNN voter. (**b**) Variant 3B—segmented slices of MGTI and BFM CNN voter.

**Figure 27.** Segmentation of the first and last five slices of *Arabidopsis thaliana* sample 1. GGTI AP7 does not maintain controlled segmentation over the darker slices of the sample, while GGTI DL gives controlled and reliable segmentation.

**Figure 30.**3D segmentations of BBBC039 testing sample: GT, GGTI DL CNN voter, GGTI DL CNN median and GGTI AP7.

**Figure 31.**3D segmentations of BBBC035 testing sample: GT, GGTI DL CNN voter, GGTI DL CNN median and GGTI AP7.

Variant | Slices per 3D Stack Sample (Training) | 3D Stack Samples (Training) | Usage of Pre-Trained CNNs | Semantic Segmentation—Validation
---|---|---|---|---
Variant 1A | 1 | 1 | No | Validation slice belongs to the training 3D stack sample, different from the training slice.
Variant 1B | 1 | >1 | No | Validation slice belongs to a 3D stack sample not used in training.
Variant 2A | >1 | 1 | No | Validation slice belongs to the training 3D stack sample, different from the training slices.
Variant 2B | >1 | >1 | No | Validation slice belongs to a 3D stack sample not used in training.
Variant 3A | >1 | 1 | Yes—from variant 2A | Validation slice belongs to the training 3D stack sample, different from the training slices.
Variant 3B | >1 | >1 | Yes—from variant 2B | Validation slice belongs to a 3D stack sample not used in training.

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.88078 | 0.83361 | 0.84929
AMCQ | 0.97683 | 0.95812 | 0.96392

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.82300 | 0.76498 | 0.76699
AMCQ | 0.95221 | 0.92090 | 0.92143

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.84017 | 0.84468 | 0.84664
AMCQ | 0.96353 | 0.96340 | 0.96325

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.81129 | 0.78347 | 0.76811
AMCQ | 0.95034 | 0.93191 | 0.92203

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.87835 | 0.87316 | 0.87280
AMCQ | 0.97628 | 0.97484 | 0.97476

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.85212 | 0.84450 | 0.84727
AMCQ | 0.96493 | 0.96239 | 0.96397

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.87857 | 0.86307 | 0.86438
AMCQ | 0.97639 | 0.97083 | 0.97116

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.85351 | 0.84903 | 0.85025
AMCQ | 0.96549 | 0.96472 | 0.96522

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.89308 | 0.84920 | 0.81585
AMCQ | 0.98159 | 0.96668 | 0.95337

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.83880 | 0.82822 | 0.79492
AMCQ | 0.95949 | 0.95694 | 0.94158

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.87130 | 0.78491 | 0.80887
AMCQ | 0.97429 | 0.94065 | 0.95123

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.802365 | 0.78129 | 0.80637
AMCQ | 0.942518 | 0.93690 | 0.94822

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.88564 | 0.87881 | 0.87008
AMCQ | 0.97879 | 0.97616 | 0.97308

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.85552 | 0.84212 | 0.83191
AMCQ | 0.96629 | 0.96000 | 0.95552

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.88728 | 0.87827 | 0.86879
AMCQ | 0.97912 | 0.97564 | 0.97283

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.86313 | 0.84981 | 0.83226
AMCQ | 0.96959 | 0.96298 | 0.95602

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.78964 | 0.87792 | 0.82941
AMCQ | 0.94142 | 0.97669 | 0.95796

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.76392 | 0.84530 | 0.78378
AMCQ | 0.92526 | 0.96347 | 0.93344

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.80183 | 0.87481 | 0.80027
AMCQ | 0.94786 | 0.97493 | 0.94680

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.77626 | 0.84862 | 0.76622
AMCQ | 0.93260 | 0.96478 | 0.92621

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.92281 | 0.93390 | 0.90248
AMCQ | 0.98783 | 0.99064 | 0.98234

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.88202 | 0.88944 | 0.86098
AMCQ | 0.97356 | 0.97536 | 0.96547

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.91094 | 0.91695 | 0.89490
AMCQ | 0.98550 | 0.98607 | 0.97895

Metrics | LM | BFM | RPM
---|---|---|---
AMC | 0.88326 | 0.87748 | 0.86304
AMCQ | 0.97570 | 0.97187 | 0.96556

**Table 26.** Results for Sample 1—2012-12-21-crwn12-008_GC1 batch 3D segmentation: CNN voter, CNN median, GGTI AP7 and NucleusJ compared to MGTI.

Metrics | GGTI DL Voter Compared to MGTI | GGTI DL Median Compared to MGTI | GGTI AP7 Compared to MGTI | NucleusJ Compared to MGTI
---|---|---|---|---
AMC | 0.79218 | 0.79983 | 0.85271 | 0.89196
AMCQ | 0.93450 | 0.94039 | 0.96455 | 0.97612

**Table 27.** Results for Sample 2—2012-12-21-crwn12-009_GC1 batch 3D segmentation: CNN voter, CNN median, GGTI AP7 and NucleusJ compared to MGTI.

Metrics | GGTI DL Voter Compared to MGTI | GGTI DL Median Compared to MGTI | GGTI AP7 Compared to MGTI | NucleusJ Compared to MGTI
---|---|---|---|---
AMC | 0.80584 | 0.80520 | 0.93701 | 0.91979
AMCQ | 0.94448 | 0.94438 | 0.99165 | 0.98655

**Table 28.** Results for Sample 3—2012-12-21-crwn12-009_GC2 batch 3D segmentation: CNN voter, CNN median, GGTI AP7 and NucleusJ compared to MGTI.

Metrics | GGTI DL Voter Compared to MGTI | GGTI DL Median Compared to MGTI | GGTI AP7 Compared to MGTI | NucleusJ Compared to MGTI
---|---|---|---|---
AMC | 0.84245 | 0.83838 | 0.92516 | 0.91983
AMCQ | 0.96110 | 0.95991 | 0.98907 | 0.98760

**Table 29.** Results for Sample 4—2013-01-23-col0-3_GC1 batch 3D segmentation: CNN voter, CNN median, GGTI AP7 and NucleusJ compared to MGTI.

Metrics | GGTI DL Voter Compared to MGTI | GGTI DL Median Compared to MGTI | GGTI AP7 Compared to MGTI | NucleusJ Compared to MGTI
---|---|---|---|---
AMC | 0.89753 | 0.88032 | 0.85723 | 0.81302
AMCQ | 0.98200 | 0.97724 | 0.96604 | 0.94573

**Table 30.** Results for Sample 5—2013-01-23-col0-3_GC2 batch 3D segmentation: CNN voter, CNN median, GGTI AP7 and NucleusJ compared to MGTI.

Metrics | GGTI DL Voter Compared to MGTI | GGTI DL Median Compared to MGTI | GGTI AP7 Compared to MGTI | NucleusJ Compared to MGTI
---|---|---|---|---
AMC | 0.87893 | 0.86141 | 0.89951 | 0.84845
AMCQ | 0.97182 | 0.96666 | 0.98087 | 0.95708

**Table 31.** Results for Sample 6—2013-01-23-col0-8_GC1 batch 3D segmentation: CNN voter, CNN median, GGTI AP7 and NucleusJ compared to MGTI.

Metrics | GGTI DL Voter Compared to MGTI | GGTI DL Median Compared to MGTI | GGTI AP7 Compared to MGTI | NucleusJ Compared to MGTI
---|---|---|---|---
AMC | 0.91628 | 0.88757 | 0.92987 | 0.89224
AMCQ | 0.98525 | 0.97637 | 0.98928 | 0.97586

**Table 32.** Variant 1G—Results for dataset BBBC039 (nuclei of U2OS cells in a chemical screen); GGTI DL/GGTI AP7 compared to GT.

Metrics | GGTI DL Voter Compared to GT | GGTI DL Median Compared to GT | GGTI AP7 Compared to GT
---|---|---|---
AMC | 0.91455 | 0.91182 | 0.94410
AMCQ | 0.98568 | 0.98517 | 0.99341

**Table 33.** Variant 1G—Results for dataset BBBC035 (simulated nuclei of HL60 cells); GGTI DL/GGTI AP7 compared to GT.

Metrics | GGTI DL Voter Compared to GT | GGTI DL Median Compared to GT | GGTI AP7 Compared to GT
---|---|---|---
AMC | 0.94451 | 0.93626 | 0.92042
AMCQ | 0.99172 | 0.98970 | 0.98569

**Table 34.** Variant 2G—Results for dataset BBBC039 (nuclei of U2OS cells in a chemical screen); GGTI DL/GGTI AP7 compared to GT.

Metrics | GGTI DL Voter Compared to GT | GGTI DL Median Compared to GT | GGTI AP7 Compared to GT
---|---|---|---
AMC | 0.95688 | 0.95536 | 0.94410
AMCQ | 0.99616 | 0.99596 | 0.99341

**Table 35.** Variant 2G—Results for dataset BBBC035 (simulated nuclei of HL60 cells); GGTI DL/GGTI AP7 compared to GT.

Metrics | GGTI DL Voter Compared to GT | GGTI DL Median Compared to GT | GGTI AP7 Compared to GT
---|---|---|---
AMC | 0.93432 | 0.93133 | 0.92042
AMCQ | 0.98891 | 0.98830 | 0.98569

**Table 36.** Variant 3G—Results for dataset BBBC039 (nuclei of U2OS cells in a chemical screen); GGTI DL/GGTI AP7 compared to GT.

Metrics | GGTI DL Voter Compared to GT | GGTI DL Median Compared to GT | GGTI AP7 Compared to GT
---|---|---|---
AMC | 0.94754 | 0.94666 | 0.94410
AMCQ | 0.99444 | 0.99431 | 0.99341

**Table 37.** Variant 3G—Results for dataset BBBC035 (simulated nuclei of HL60 cells); GGTI DL/GGTI AP7 compared to GT.

Metrics | GGTI DL Voter Compared to GT | GGTI DL Median Compared to GT | GGTI AP7 Compared to GT
---|---|---|---
AMC | 0.91323 | 0.91395 | 0.92042
AMCQ | 0.98376 | 0.98392 | 0.98569

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
