Special Issue "Computer Vision and Sensors Innovations for Microscopy Imaging Applications"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 28 February 2022.

Special Issue Editors

Dr. Filippo Piccinini
Guest Editor
IRCCS Istituto Romagnolo per lo Studio dei Tumori (IRST) "Dino Amadori", Meldola (FC), Italy
Interests: computer vision; microscopy and imaging; 3D cell cultures; software development; machine learning
Dr. Antonella Carbonaro
Guest Editor
Department of Computer Science and Engineering - DISI, Alma Mater Studiorum-Università di Bologna, Bologna, Italy
Interests: customization and content-based information processing for data and knowledge representation; semantic web technologies; personalized environments; heterogeneous data integration from IoT devices
Prof. Peter Horvath
Guest Editor
Institute of Biochemistry, Biological Research Centre (BRC), Szeged, Hungary
Interests: computer vision; microscopy and imaging; single-cell analysis; software development; machine learning
Prof. Dr. Gastone C. Castellani
Guest Editor
Department of Experimental, Diagnostic and Specialty Medicine - DIMES, Alma Mater Studiorum-Università di Bologna, Bologna, Italy
Interests: computer vision; big data; medical physics; microscopy and imaging; machine learning

Special Issue Information

Dear Colleagues,

It is well known that microscopy imaging innovations pave the way for discoveries in biology, medicine, engineering, and many other disciplines of health and industrial research. In this scenario, computer vision and sensors are the driving forces behind innovative imaging applications that open up new microscopy opportunities.

This Special Issue, entitled "Computer Vision and Sensors Innovations for Microscopy Imaging Applications", aims to explore the scientific and technological frontiers that characterize the microscopy scenario. It seeks original, previously unpublished research and review articles empirically addressing key issues and challenges related to the methods, implementation, results, and evaluation of novel approaches and technologies in the field of microscopy imaging.

Dr. Filippo Piccinini
Dr. Antonella Carbonaro
Prof. Peter Horvath
Prof. Dr. Gastone C. Castellani
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Computer vision
  • Microscopy
  • Novel imaging and sensing
  • Imaging technology for biomedical applications
  • Software development
  • Machine learning
  • Deep learning
  • Segmentation, tracking, and classification
  • Image processing
  • Signal processing

Published Papers (3 papers)

Research

Article
Generative Adversarial Networks for Morphological–Temporal Classification of Stem Cell Images
Sensors 2022, 22(1), 206; https://doi.org/10.3390/s22010206 - 29 Dec 2021
Abstract
Frequently, neural network training involving biological images suffers from a lack of data, resulting in inefficient network learning. This issue stems from limitations in terms of time, resources, and difficulty in cellular experimentation and data collection. For example, when performing experimental analysis, it may be necessary for the researcher to use most of their data for testing, as opposed to model training. Therefore, the goal of this paper is to perform dataset augmentation using generative adversarial networks (GAN) to increase the classification accuracy of deep convolutional neural networks (CNN) trained on induced pluripotent stem cell microscopy images. The main challenges are (1) modeling complex data using GAN and (2) training neural networks on augmented datasets that contain generated data. To address these challenges, a temporally constrained, hierarchical classification scheme that exploits domain knowledge is employed for model learning. First, image patches of cell colonies from gray-scale microscopy images are generated using GAN, and then these images are added to the real dataset and used to address class imbalances at multiple stages of training. Overall, a 2% increase in both true positive rate and F1-score is observed using this method as compared to a straightforward, imbalanced classification network, with some greater improvements on a classwise basis. This work demonstrates that synergistic model design involving domain knowledge is key for biological image analysis and improves model learning in high-throughput scenarios.
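
To make the augmentation step concrete, the sketch below (Python/PyTorch) shows how GAN-generated patches for an under-represented class might be appended to the real training set before CNN training. The generator architecture, the 64x64 patch size, and the function names are illustrative assumptions, not the authors' model.

import torch
import torch.nn as nn

class PatchGenerator(nn.Module):
    # Toy DCGAN-style generator producing 64x64 grayscale patches
    # (an assumption, not the architecture used in the paper).
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(True),          # 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(True),           # 16x16
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(True),           # 32x32
            nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),                # 64x64
        )

    def forward(self, z):
        return self.net(z)

def rebalance_with_gan(real_patches, real_labels, generator, minority_label,
                       n_extra, latent_dim=100):
    # Append GAN-generated patches for an under-represented class so that the
    # downstream CNN sees a more balanced label distribution.
    with torch.no_grad():
        z = torch.randn(n_extra, latent_dim, 1, 1)
        fake = generator(z)  # (n_extra, 1, 64, 64), values in [-1, 1]
    patches = torch.cat([real_patches, fake], dim=0)
    labels = torch.cat([real_labels,
                        torch.full((n_extra,), minority_label, dtype=real_labels.dtype)])
    return patches, labels

# Usage with toy tensors standing in for real microscopy patches:
gen = PatchGenerator()
real_x = torch.rand(32, 1, 64, 64) * 2 - 1
real_y = torch.randint(0, 3, (32,))
aug_x, aug_y = rebalance_with_gan(real_x, real_y, gen, minority_label=2, n_extra=16)

In the paper's setting, such rebalancing is applied at multiple stages of a temporally constrained, hierarchical classifier rather than to a single flat dataset.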

Communication
Multi-Focus Image Fusion Using Focal Area Extraction in a Large Quantity of Microscopic Images
Sensors 2021, 21(21), 7371; https://doi.org/10.3390/s21217371 - 05 Nov 2021
Abstract
The non-invasive examination of conjunctival goblet cells using a microscope is a novel procedure for the diagnosis of ocular surface diseases. However, it is difficult to generate an all-in-focus image due to the curvature of the eyes and the limited focal depth of the microscope. The microscope acquires multiple images with the axial translation of focus, and the resulting image stack must be processed. Thus, we propose a multi-focus image fusion method to generate an all-in-focus image from multiple microscopic images. First, a bandpass filter is applied to the source images, and the focus areas are extracted using Laplacian transformation and thresholding with a morphological operation. Next, a self-adjusting guided filter is applied for natural connections between local focus images. A window-size-updating method is adopted in the guided filter to reduce the number of parameters. This paper presents a novel algorithm that can operate on a large number of images (10 or more) and obtain an all-in-focus image. To quantitatively evaluate the proposed method, two different types of evaluation metrics are used: "full-reference" and "no-reference". The experimental results demonstrate that this algorithm is robust to noise and capable of preserving local focus information through focal area extraction. Additionally, the proposed method outperforms state-of-the-art approaches in terms of both visual effects and image quality assessments.
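
For orientation, the Python/OpenCV sketch below illustrates a basic Laplacian-based focus measure and per-pixel selection of the sharpest slice in a focal stack. It is a simplified stand-in, not the paper's pipeline: the bandpass filtering, morphological post-processing, and self-adjusting guided filter described above are replaced here by Gaussian smoothing, a box-filtered focus measure, and hard per-pixel selection.

import cv2
import numpy as np

def fuse_focus_stack(stack, blur_ksize=5, focus_ksize=9):
    # stack: sequence of same-sized grayscale images (H, W), e.g. uint8 arrays.
    imgs = [img.astype(np.float32) for img in stack]
    focus_maps = []
    for img in imgs:
        # Band-limit the image, then use the absolute Laplacian as a focus measure.
        smoothed = cv2.GaussianBlur(img, (blur_ksize, blur_ksize), 0)
        lap = np.abs(cv2.Laplacian(smoothed, cv2.CV_32F))
        # Aggregate over a neighborhood so isolated noisy pixels do not dominate.
        focus_maps.append(cv2.boxFilter(lap, cv2.CV_32F, (focus_ksize, focus_ksize)))
    focus_maps = np.stack(focus_maps, axis=0)             # (N, H, W)
    best = np.argmax(focus_maps, axis=0)                  # sharpest slice index per pixel
    fused = np.take_along_axis(np.stack(imgs, axis=0), best[None], axis=0)[0]
    return fused.astype(np.uint8), best

# Usage (image_paths is a hypothetical list of file paths):
# fused, index_map = fuse_focus_stack(
#     [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in image_paths])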

Article
Density Distribution Maps: A Novel Tool for Subcellular Distribution Analysis and Quantitative Biomedical Imaging
Sensors 2021, 21(3), 1009; https://doi.org/10.3390/s21031009 - 02 Feb 2021
Abstract
Subcellular spatial location is an essential descriptor of a molecule's biological function. Presently, super-resolution microscopy techniques enable the quantification of subcellular object distributions in fluorescence images, but they rely on instrumentation, tools, and expertise that are not standard in most laboratories. We propose a method that resolves the location of subcellular structures by reinforcing each pixel position with information from its surroundings. Although designed for entry-level laboratory equipment with common resolving power, our method is independent of the imaging device's resolution and can therefore also benefit super-resolution microscopy. The approach generates density distribution maps (DDMs) that are informative of both the objects' absolute locations and their relative displacements, thereby reducing location uncertainty and increasing the accuracy of signal mapping. This work demonstrates the capability of DDMs to (a) improve the informativeness of spatial distributions, (b) strengthen the analysis of subcellular molecule distributions, and (c) extend their applicability beyond mere spatial object mapping. Finally, the ability to enhance or even reveal latent distributions can speed up routine, large-scale, and follow-up experiments, and benefits all spatial distribution studies regardless of image acquisition resolution. DDMaker, software with a user-friendly graphical user interface (GUI), is also provided to support users in creating DDMs.
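
As a rough illustration of reinforcing each pixel with information from its surroundings, the Python sketch below accumulates detected object coordinates into a count image and smooths it into a normalized density map. It is an assumption-laden stand-in, not DDMaker's actual algorithm; the function name, Gaussian kernel, and normalization are illustrative choices.

import numpy as np
from scipy.ndimage import gaussian_filter

def density_distribution_map(coords, shape, sigma=8.0):
    # coords: (N, 2) array of (row, col) object positions; shape: (H, W) of the image.
    counts = np.zeros(shape, dtype=np.float32)
    rows, cols = np.round(coords).astype(int).T
    valid = (rows >= 0) & (rows < shape[0]) & (cols >= 0) & (cols < shape[1])
    np.add.at(counts, (rows[valid], cols[valid]), 1.0)   # accumulate hits per pixel
    ddm = gaussian_filter(counts, sigma=sigma)           # spread each hit over its surroundings
    if ddm.max() > 0:
        ddm /= ddm.max()                                 # normalize to [0, 1] for display
    return ddm

# Usage with synthetic (row, col) detections in a 512x512 image:
pts = np.random.rand(200, 2) * [512, 512]
ddm = density_distribution_map(pts, (512, 512), sigma=10)

With a real detection step in place of the synthetic points, such a map can be overlaid on the fluorescence image to inspect where the signal concentrates.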
