5.2. Software-Based Super-Resolution
Analytical approaches to super-resolution microscopy are a recent development. They stand out for being entirely software-based, and hence are applicable to a wide range of microscopy modalities [264]. Similar to super-resolution microscopy, software-based super-resolution obtains information from non-overlapping individual fluorophores. However, rather than requiring specific dyes or excitation conditions, software-based super-resolution relies on temporally sampling fluorophore fluctuation information. One of the first software algorithms of this kind was super-resolution optical fluctuation imaging (SOFI) [266]. SOFI is reminiscent of STORM processing algorithms and relies on computing cumulants of fluctuating fluorophores [267]. Another method, termed 3B analysis, utilizes Bayesian statistical analysis to obtain super-resolution information from the temporal domain [268].
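The cumulant idea can be illustrated with a minimal Python sketch that computes a second-order, SOFI-style autocumulant from an image stack; the synthetic blinking emitter, the lag parameter, and all names are illustrative assumptions rather than the published SOFI implementation.

```python
# Minimal sketch of a second-order SOFI-style autocumulant (not the published
# SOFI code). For lag 0 the cumulant is the per-pixel temporal variance; a
# small positive lag suppresses frame-independent shot noise.
import numpy as np

def sofi2(stack: np.ndarray, lag: int = 1) -> np.ndarray:
    """Second-order autocumulant per pixel from a (frames, h, w) stack."""
    delta = stack - stack.mean(axis=0)            # mean-subtracted fluctuations
    if lag == 0:
        return (delta ** 2).mean(axis=0)          # temporal variance
    return (delta[:-lag] * delta[lag:]).mean(axis=0)

# Synthetic demo: a fluorophore at (32, 32) blinking in 20-frame bursts on a
# Poisson-noise background. The correlated blinking dominates the lag-1 map.
rng = np.random.default_rng(0)
stack = rng.poisson(5.0, size=(500, 64, 64)).astype(float)
on = (np.arange(500) // 20) % 2 == 0              # on/off time trace
stack[on, 32, 32] += 50.0
g2 = sofi2(stack, lag=1)
print(g2[32, 32], g2.mean())                      # emitter pixel stands out
```

Because shot noise is uncorrelated between frames, the lag-1 cumulant retains the correlated blinking signal while strongly suppressing the noisy background.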
The so-called super-resolution radial fluctuations (SRRF) method achieves super-resolution through radial-symmetry-based, higher-order statistical analysis of the temporal intensity fluctuations of conventional fluorophores. Impressively, SRRF achieves 60 nm resolution on images obtained with widefield microscopy, and is extendable to other microscopy modalities [264]. Finally, an approach named NanoJ-SQUIRREL allows a significant improvement in resolution across a wide range of super-resolution modalities, including STED and SIM. The open-source software performs quantitative assessments of super-resolution image quality upon processing, thus creating a metric for improving image processing. Notably, NanoJ-SQUIRREL has been successfully applied to the reconstruction of lateral bodies, a structural element of vaccinia virus particles [265].
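The quantitative assessment behind NanoJ-SQUIRREL can be conveyed with a hedged sketch: blur the super-resolution image back to diffraction-limited resolution, match intensities, and map the disagreement with the widefield reference. NanoJ-SQUIRREL itself is an ImageJ plugin with a more elaborate procedure; the Gaussian resolution-scaling function, the linear intensity fit, and the function name below are simplifying assumptions.

```python
# Hedged sketch of a SQUIRREL-style error map (not the plugin's actual
# algorithm): the super-resolution image is blurred with an assumed Gaussian
# resolution-scaling function, linearly intensity-matched, and compared with
# the diffraction-limited reference image.
import numpy as np
from scipy.ndimage import gaussian_filter

def error_map(sr: np.ndarray, widefield: np.ndarray, sigma: float = 3.0):
    blurred = gaussian_filter(sr, sigma)          # SR brought to reference resolution
    a, b = np.polyfit(blurred.ravel(), widefield.ravel(), 1)
    estimate = a * blurred + b                    # intensity-matched estimate
    err = np.abs(widefield - estimate)            # local disagreement map
    rmse = float(np.sqrt(np.mean((widefield - estimate) ** 2)))
    return err, rmse                              # map plus a global quality metric
```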
5.3. Data Analysis
Image and data analysis is a critical but somewhat underappreciated aspect of microscopy. Image analysis refers to the extraction of numeric data from a set of images, for example, the number of viral particles bound to a cell, co-localization levels, or cell motility features. Quantitative image analysis is an important approach for analyzing cell-to-cell variability in infection [85]. Data analysis refers to the identification of phenotypes of interest and to statistical analyses of large datasets. While manual quantification and classification of images is still practiced, manual procedures are prone to confirmation bias, are difficult to standardize, and lack scalability. They can result in data misinterpretation and statistically underpowered claims [273]. Automated large-scale experiments were introduced to biology with the so-called OMICS technologies in the 1990s. Yet, it is still a challenge to standardize automated methods for image and data analysis. Nevertheless, the increased demand for statistically powerful experiments and reproducible analysis pipelines will reward the implementation of standardized approaches.
Analysis tools can be broadly divided into command-line interface (CLI) and graphical user interface (GUI) based tools (Figure 1). Typical programming languages for the CLI-based approach in the life sciences are R, Python, and MATLAB. Key advantages of CLIs are flexibility, scalability, and the transparency of the underlying methodology. Drawbacks include the cost for the user of becoming proficient in at least one programming language in order to develop analysis pipelines. Proficiency is important, because programming errors can result in misrepresentation of the data. Fortunately, the academic and commercial communities recognize the demand for analysis tools. Today, a wide array of plugins, toolboxes, and ready-made analysis software exists, comprising either simple-to-use CLIs or GUIs that offer approachable solutions requiring little to no knowledge of programming.
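As a minimal example of such a scripted pipeline, the Python sketch below thresholds a fluorescence image and counts the labeled particles with scikit-image; the file name and the size filter are placeholders.

```python
# Minimal scripted analysis step: segment a fluorescence image and count
# particles. File name and minimum-area filter are placeholders; requires
# scikit-image.
from skimage import filters, io, measure

img = io.imread("virus_particles.tif")            # hypothetical input image
mask = img > filters.threshold_otsu(img)          # automatic global threshold
labels = measure.label(mask)                      # connected-component labeling
particles = [r for r in measure.regionprops(labels) if r.area >= 5]
print(f"{len(particles)} particles detected")
```

A few lines like these scale readily to thousands of images, which is the main appeal of the CLI route.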
Most microscope manufacturers provide acquisition software capable of performing standard post-processing and image analysis steps. However, the analysis and processing tools provided by dedicated research groups and companies tend to show superior performance. This is partially due to compatibility issues between different commercial solutions and to the lower prioritization of software development by the manufacturers.
The best-known tool for academic image processing is perhaps ImageJ. It was originally developed in 1997 and is still maintained by Wayne Rasband [277]. ImageJ is an open-source platform, which allows for image visualization and processing, and incorporates several hundred analysis plugins. A particular strength of ImageJ is the ease with which additional plugins can be generated, modified, and installed. The prime example of this is Fiji, a recursive acronym for “Fiji Is Just ImageJ”. Fiji is a distribution of ImageJ that is expanded by many plugins, as well as an integrated updating system and developer tools [260]. Several plugins perform surprisingly complex tasks at high quality, including 3D stitching [278], generation of super-resolution images from diffraction-limited image stacks [264], or lineage tracing [279].
While ImageJ/Fiji provides a powerful and simple-to-use toolbox, the creation of new plugins or the adaptation of existing ones can be challenging for people not familiar with Java, a general-purpose programming language. Fortunately, several open-source programs are available that generate flexible analysis pipelines by combining existing modules in an easy-to-use GUI.
For example, CellProfiler (http://cellprofiler.org/) is an image analysis software that allows the simple generation of automated workflows for high-content imaging [280]. The resulting datasets can be exported as .csv files and employed for further analysis.
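A hedged sketch of such downstream analysis in Python follows; the file name and column names are hypothetical, since real CellProfiler exports depend on the configured pipeline.

```python
# Hedged sketch of downstream analysis of a CellProfiler-style .csv export
# with pandas. 'Cells.csv', 'Metadata_Well', and
# 'Intensity_MeanIntensity_Virus' are hypothetical names.
import pandas as pd

df = pd.read_csv("Cells.csv")
per_well = (df.groupby("Metadata_Well")["Intensity_MeanIntensity_Virus"]
              .agg(["count", "mean", "std"]))     # cell count and infection signal per well
print(per_well.sort_values("mean", ascending=False).head())
```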
A useful tool for data analysis is KNIME (Konstanz Information Miner, https://www.knime.com/). While capable of image analysis, KNIME has a stronger focus on data analysis, allowing easy handling and exploration of datasets containing several million data points [281]. Similar to ImageJ, it has a framework that enables simple implementation of additional nodes, extending the core functionality with features such as Java/R/Python/MATLAB/ImageJ compatibility, machine learning, or expanded workflow control. Icy (http://icy.bioimageanalysis.org/) combines both image and data analysis. While slightly less intuitive to use than CellProfiler or KNIME, Icy provides excellent segmentation quality even for noisy images, and a similarly wide array of functionality and compatibility plugins [282].
Apart from these general solutions for image and data analysis, a number of solutions specific to microbiological questions have been designed. For example, based on the idea of the original plaque assay of Dulbecco [283], an analysis software termed “Plaque2.0” was designed, which allows automated scoring of lesions or of fluorescence-labeled virus spreading events in high-throughput format, providing more information at lower resource consumption, reduced incubation time, and larger scale than prior procedures [284].
An alternative approach to tackling the amount of data generated by recent advances in biomedical imaging, such as high-throughput time-lapse microscopy, is machine learning (ML). ML refers to a family of computer science methods that allow patterns in a dataset to be automatically learned and then recognized, classified, and predicted. In a biomedical imaging context, ML is typically used for relatively simple tasks, such as segmentation, tracking, denoising, and phenotype determination. For example, tools like Ilastik [285] and CellCognition provide easy-to-use and powerful ways to segment and classify images by supervised machine learning [286].
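The pixel-classification idea behind such tools can be sketched in a few lines: compute simple per-pixel features and train a random forest on sparsely annotated pixels. The feature set, label encoding, and scikit-learn backend below are illustrative assumptions, not the internals of Ilastik or CellCognition.

```python
# Hedged sketch of interactive pixel classification: per-pixel blur and edge
# features feed a random forest trained only on user-annotated pixels
# (labels: 0 = unlabeled, 1 = background, 2 = object).
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img: np.ndarray) -> np.ndarray:
    feats = [ndi.gaussian_filter(img, s) for s in (1, 2, 4)]
    feats.append(ndi.gaussian_gradient_magnitude(img, 2))
    return np.stack([f.ravel() for f in feats], axis=1)   # (n_pixels, n_features)

def train_and_segment(img: np.ndarray, labels: np.ndarray) -> np.ndarray:
    X, y = pixel_features(img), labels.ravel()
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[y > 0], y[y > 0])                  # learn from annotated pixels only
    return clf.predict(X).reshape(img.shape)     # classify every pixel
```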
Recent advances in computing, specifically in graphics processing units, have enabled efficient implementations of artificial neural networks and deep learning, which vastly outperform classical ML methods in image recognition problems [287]. These methods allow for the detection and prediction of highly complex biological phenotypes. Moreover, visualization of the trained networks can help to identify new patterns and features of phenotypes [289]. Furthermore, deep neural networks can be used to improve super-resolution processing [291]. Deep learning remains an actively developing field and promises to lead to breakthroughs in biomedicine.
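As a closing illustration, the sketch below defines a small convolutional network for phenotype classification; PyTorch is used as one possible framework, and the architecture, input size, and training step are illustrative rather than drawn from the cited studies.

```python
# Illustrative PyTorch sketch of a small convolutional classifier for cell
# phenotypes; architecture and sizes are assumptions, not from the cited work.
import torch
import torch.nn as nn

class PhenotypeNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# One optimization step on a dummy batch of 64x64 single-channel cell crops.
model = PhenotypeNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 1, 64, 64), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```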