J. Imaging, Volume 7, Issue 1 (January 2021) – 11 articles

Cover Story: An unsupervised machine learning technique is presented that is reinforced with hypothesis testing and statistical inference to iteratively segment the reconstructed image of a breast into fat, transition, fibroglandular, and malignant tissues. This segmentation leads to decomposition of the breast interior into disjoint tissue masks. An array of metrics is applied to compare masks extracted from reconstructed images and ground truth models. The quantitative results reveal the accuracy with which the geometric and dielectric properties are reconstructed, and are supplemented with qualitative information.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
15 pages, 5637 KiB  
Review
The Neutron Imaging Instrument CONRAD—Post-Operational Review
by Nikolay Kardjilov, Ingo Manke, André Hilger, Tobias Arlt, Robert Bradbury, Henning Markötter, Robin Woracek, Markus Strobl, Wolfgang Treimer and John Banhart
J. Imaging 2021, 7(1), 11; https://doi.org/10.3390/jimaging7010011 - 19 Jan 2021
Cited by 4 | Viewed by 3379
Abstract
The neutron imaging instrument CONRAD was operated as part of the user program of the research reactor BER-II at Helmholtz-Zentrum Berlin (HZB) from 2005 to 2020. The instrument was designed to use the neutron flux from the cold source of the reactor, transported by a curved neutron guide. The pure cold neutron spectrum provided a great advantage in the use of different neutron optical components such as focusing lenses and guides, solid-state polarizers, monochromators and phase gratings. The flexible setup of the instrument allowed for the implementation of new methods, including wavelength-selective, dark-field and phase-contrast imaging, as well as imaging with polarized neutrons. In summary, these developments helped to attract a large number of scientists and industrial customers, who were introduced to neutron imaging and subsequently contributed to the expansion of the neutron imaging community.
(This article belongs to the Special Issue Neutron Imaging)

15 pages, 6039 KiB  
Article
Improved Acquisition and Reconstruction for Wavelength-Resolved Neutron Tomography
by Singanallur Venkatakrishnan, Yuxuan Zhang, Luc Dessieux, Christina Hoffmann, Philip Bingham and Hassina Bilheux
J. Imaging 2021, 7(1), 10; https://doi.org/10.3390/jimaging7010010 - 15 Jan 2021
Cited by 3 | Viewed by 2995
Abstract
Wavelength-resolved neutron tomography (WRNT) is an emerging technique for characterizing samples relevant to the materials sciences in 3D. WRNT studies can be carried out at beam lines in spallation neutron or reactor-based user facilities. Because of the limited availability of experimental time, potential imperfections in the neutron source, or constraints placed on the acquisition time by the type of sample, the data can be extremely noisy, resulting in tomographic reconstructions with significant artifacts when standard reconstruction algorithms are used. Furthermore, making a full tomographic measurement, even with a low signal-to-noise ratio, can take several days, resulting in a long wait time before the user receives feedback from the experiment when traditional acquisition protocols are used. In this paper, we propose an interlaced scanning technique and combine it with a model-based image reconstruction algorithm to produce high-quality WRNT reconstructions concurrently with the measurements being made. The interlaced scan is designed to acquire data so that successive measurements are more diverse, in contrast to typical sequential scanning protocols. The model-based reconstruction algorithm combines a data-fidelity term with a regularization term to formulate the wavelength-resolved reconstruction as the minimization of a high-dimensional cost function. Using an experimental dataset of a magnetite sample acquired over a span of about two days, we demonstrate that our technique can produce high-quality reconstructions even while the experiment is in progress, compared to traditional acquisition and reconstruction techniques. In summary, the combination of the proposed acquisition strategy with an advanced reconstruction algorithm provides a novel guideline for designing WRNT systems at user facilities.
(This article belongs to the Special Issue Neutron Imaging)
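A toy sketch of the interlaced-ordering idea described above: instead of acquiring projections sequentially in angle, a bit-reversed ordering spreads successive views across the angular range so that early subsets of the data are already diverse. The ordering rule and view count below are illustrative assumptions, not the authors' exact acquisition protocol.

```python
def bit_reversed_order(n_views: int) -> list[int]:
    """Return view indices in bit-reversed order so that successive
    projections are spread across the angular range instead of being
    acquired sequentially."""
    bits = max(1, (n_views - 1).bit_length())
    return sorted(range(n_views),
                  key=lambda i: int(format(i, f"0{bits}b")[::-1], 2))

# Example: 8 views over 180 degrees acquired in interlaced order.
n_views = 8
angles = [i * 180.0 / n_views for i in range(n_views)]
for k in bit_reversed_order(n_views):
    print(f"acquire view {k} at {angles[k]:.1f} deg")
```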

14 pages, 6310 KiB  
Article
Imaging Meets Cytometry: Analyzing Heterogeneous Functional Microscopic Data from Living Cell Populations
by Matthew Draper, Mara Willems, Reshwan K. Malahe, Alexander Hamilton and Andrei I. Tarasov
J. Imaging 2021, 7(1), 9; https://doi.org/10.3390/jimaging7010009 - 13 Jan 2021
Cited by 2 | Viewed by 2095
Abstract
Biological tissue consists of populations of cells exhibiting different responses to pharmacological stimuli. To probe the heterogeneity of cell function, we propose a multiplexed approach based on real-time imaging of the secondary messenger levels within each cell of the tissue, followed by extraction of the changes of single-cell fluorescence over time. By utilizing a piecewise baseline correction, we were able to quantify the effects of multiple pharmacological stimuli added to and removed sequentially from pancreatic islets of Langerhans, thereby performing deep functional profiling of each cell within the islet. Cluster analysis based on the functional profile demonstrated dose-dependent changes in statistical inter-relationships between islet cell populations. We therefore believe that the functional cytometric approach can be used for routine quantitative profiling of the tissue for drug screening or pathological testing.
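A minimal sketch of a piecewise baseline correction in the spirit described above: a separate linear baseline is fitted to each stimulus segment of a single-cell fluorescence trace and subtracted. The segment boundaries, the linear baseline model and the synthetic trace are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def piecewise_baseline_correct(t, f, boundaries):
    """Subtract a linear baseline fitted independently within each segment
    of the trace; segments are delimited by stimulus on/off frame indices."""
    corrected = np.array(f, dtype=float)
    edges = [0, *boundaries, len(f)]
    for start, stop in zip(edges[:-1], edges[1:]):
        seg_t, seg_f = t[start:stop], corrected[start:stop]
        slope, intercept = np.polyfit(seg_t, seg_f, deg=1)
        corrected[start:stop] = seg_f - (slope * seg_t + intercept)
    return corrected

# Hypothetical trace: 300 frames, stimuli switched at frames 100 and 200.
t = np.arange(300, dtype=float)
f = 0.01 * t + np.random.normal(0, 0.05, size=300)   # drifting baseline plus noise
df = piecewise_baseline_correct(t, f, boundaries=[100, 200])
```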

16 pages, 2021 KiB  
Article
Factors that Influence PRNU-Based Camera-Identification via Videos
by Lars de Roos and Zeno Geradts
J. Imaging 2021, 7(1), 8; https://doi.org/10.3390/jimaging7010008 - 13 Jan 2021
Cited by 7 | Viewed by 2709
Abstract
The Photo Response Non-Uniformity pattern (PRNU pattern) can be used to identify the source of images or to indicate whether images have been made with the same camera. This pattern is also recognized as the "fingerprint" of a camera, since it is a highly characteristic feature. However, like a real fingerprint, the pattern is sensitive to many different influences, e.g., the camera settings. In this study, several previously investigated factors were reviewed, after which three were selected for further investigation. The computation and comparison methods are evaluated under variation of the following factors: resolution, video length and compression. For all three studies, images were taken with a single iPhone 6. It was found that a higher resolution ensures a more reliable comparison, and that a (reference) video should always be as long as possible to obtain a better PRNU pattern. It also became clear that compression (in this study, the compression that Snapchat applies) has a negative effect on the correlation value. Many different factors therefore play a part when comparing videos. Given the large number of controllable and non-controllable factors that influence the PRNU pattern, further research is needed to clarify the influence that each factor exerts.
(This article belongs to the Special Issue Image and Video Forensics)
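The comparison underlying PRNU-based source identification is typically a normalized correlation between a camera's reference fingerprint and the noise residual of a questioned frame. A minimal sketch, assuming both patterns have already been extracted as arrays; the denoising filter and the exact correlation statistic used in the paper may differ.

```python
import numpy as np

def normalized_correlation(fingerprint, residual):
    """Normalized cross-correlation between a reference PRNU fingerprint
    and the noise residual of a questioned frame."""
    a = fingerprint - fingerprint.mean()
    b = residual - residual.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Hypothetical usage: residuals from the same camera should correlate
# noticeably higher than residuals from a different device.
rng = np.random.default_rng(0)
fingerprint = rng.normal(size=(480, 640))
same_camera = 0.05 * fingerprint + rng.normal(size=(480, 640))
other_camera = rng.normal(size=(480, 640))
print(normalized_correlation(fingerprint, same_camera))
print(normalized_correlation(fingerprint, other_camera))
```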

14 pages, 2059 KiB  
Article
Bayesian Learning of Shifted-Scaled Dirichlet Mixture Models and Its Application to Early COVID-19 Detection in Chest X-ray Images
by Sami Bourouis, Abdullah Alharbi and Nizar Bouguila
J. Imaging 2021, 7(1), 7; https://doi.org/10.3390/jimaging7010007 - 10 Jan 2021
Cited by 9 | Viewed by 2724
Abstract
Early diagnosis and assessment of fatal diseases and acute infections on chest X-ray (CXR) imaging may have important therapeutic implications and reduce mortality. In fact, many respiratory diseases have a serious impact on the health and lives of people. However, certain types of infection may show high variation in contrast, size and shape, which poses a real challenge to the classification process. This paper introduces a new statistical framework to discriminate patients who are either negative or positive for certain kinds of viruses and pneumonia. We tackle the problem via a fully Bayesian approach based on a flexible statistical model named the shifted-scaled Dirichlet mixture model (SSDMM). This mixture model is motivated by the effectiveness and robustness it has recently demonstrated in various image processing applications. Unlike frequentist learning methods, our Bayesian framework has the advantage of taking uncertainty into account to accurately estimate the model parameters, as well as the ability to solve the problem of overfitting. We investigate a Markov Chain Monte Carlo (MCMC) estimator, a computer-driven sampling method, for learning the developed model. The current work shows excellent results when dealing with the challenging problem of biomedical image classification. Indeed, extensive experiments have been carried out on real datasets and the results prove the merits of our Bayesian framework.
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis)
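As background for the MCMC learning mentioned above, the sketch below shows a generic random-walk Metropolis sampler applied to a toy posterior. It only illustrates the sampling idea; the priors, proposal distributions and SSDMM-specific updates of the paper are not reproduced here.

```python
import numpy as np

def metropolis_sample(log_posterior, theta0, n_iter=5000, step=0.1, seed=0):
    """Generic random-walk Metropolis sampler: propose a Gaussian move and
    accept it with probability min(1, posterior ratio)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    samples, current_lp = [], log_posterior(theta)
    for _ in range(n_iter):
        proposal = theta + rng.normal(scale=step, size=theta.shape)
        proposal_lp = log_posterior(proposal)
        if np.log(rng.uniform()) < proposal_lp - current_lp:
            theta, current_lp = proposal, proposal_lp
        samples.append(theta.copy())
    return np.array(samples)

# Toy target: standard normal posterior over a 2-D parameter vector.
draws = metropolis_sample(lambda th: -0.5 * np.sum(th ** 2), theta0=[0.0, 0.0])
print(draws.mean(axis=0), draws.std(axis=0))
```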

16 pages, 1360 KiB  
Article
EXAM: A Framework of Learning Extreme and Moderate Embeddings for Person Re-ID
by Guanqiu Qi, Gang Hu, Xiaofei Wang, Neal Mazur, Zhiqin Zhu and Matthew Haner
J. Imaging 2021, 7(1), 6; https://doi.org/10.3390/jimaging7010006 - 07 Jan 2021
Cited by 9 | Viewed by 1959
Abstract
Person re-identification (Re-ID) is challenging due to a host of factors: the variety of human positions, difficulties in aligning bounding boxes, and complex backgrounds, among others. This paper proposes a new framework called EXAM (EXtreme And Moderate feature embeddings) for Re-ID tasks. This is done using discriminative feature learning, requiring attention-based guidance during training. Here "Extreme" refers to salient human features and "Moderate" refers to common human features. In this framework, these two types of embeddings are computed by global max-pooling and average-pooling operations, respectively, and are then jointly supervised by multiple triplet and cross-entropy loss functions. The processes of deducing attention from learned embeddings and discriminative feature learning are incorporated and benefit from each other in this end-to-end framework. The comparative experiments and ablation studies show that the proposed EXAM is effective and that its learned feature representation reaches state-of-the-art performance.
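A minimal PyTorch-style sketch of the pooling-and-loss scheme described in the abstract: max-pooled ("extreme") and average-pooled ("moderate") embeddings are taken from a backbone feature map and jointly supervised with cross-entropy and triplet losses. The backbone, loss weighting and attention mechanism are placeholders, not the published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExtremeModerateHead(nn.Module):
    """Produce max-pooled ('extreme') and average-pooled ('moderate')
    embeddings from a backbone feature map and classify identities."""
    def __init__(self, in_channels: int, num_ids: int):
        super().__init__()
        self.fc_extreme = nn.Linear(in_channels, num_ids)
        self.fc_moderate = nn.Linear(in_channels, num_ids)

    def forward(self, feat_map):                      # feat_map: (B, C, H, W)
        extreme = F.adaptive_max_pool2d(feat_map, 1).flatten(1)
        moderate = F.adaptive_avg_pool2d(feat_map, 1).flatten(1)
        return extreme, moderate, self.fc_extreme(extreme), self.fc_moderate(moderate)

ce, triplet = nn.CrossEntropyLoss(), nn.TripletMarginLoss(margin=0.3)

def joint_loss(head, feat_map, labels, anchor_idx, pos_idx, neg_idx):
    ext, mod, logits_e, logits_m = head(feat_map)
    loss = ce(logits_e, labels) + ce(logits_m, labels)
    for emb in (ext, mod):                            # triplet supervision on both embeddings
        loss = loss + triplet(emb[anchor_idx], emb[pos_idx], emb[neg_idx])
    return loss
```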

27 pages, 8759 KiB  
Article
Evaluating Performance of Microwave Image Reconstruction Algorithms: Extracting Tissue Types with Segmentation Using Machine Learning
by Douglas Kurrant, Muhammad Omer, Nasim Abdollahi, Pedram Mojabi, Elise Fear and Joe LoVetri
J. Imaging 2021, 7(1), 5; https://doi.org/10.3390/jimaging7010005 - 07 Jan 2021
Cited by 6 | Viewed by 3756
Abstract
Evaluating the quality of reconstructed images requires consistent approaches to extracting information and applying metrics. Partitioning medical images into tissue types permits the quantitative assessment of regions that contain a specific tissue. The assessment facilitates the evaluation of an imaging algorithm in terms of its ability to reconstruct the properties of various tissue types and identify anomalies. Microwave tomography is a model-based imaging modality that reconstructs an approximation of the actual internal spatial distribution of the dielectric properties of a breast over a reconstruction model consisting of discrete elements. The breast tissue types are characterized by their dielectric properties, so the reconstructed complex permittivity profile may be used to distinguish different tissue types. This manuscript presents a robust and flexible medical image segmentation technique to partition microwave breast images into tissue types in order to facilitate the evaluation of image quality. The approach combines an unsupervised machine learning method with statistical techniques. The key advantage of the algorithm over other approaches, such as threshold-based segmentation methods, is that it supports this quantitative analysis without prior assumptions such as knowledge of the expected dielectric property values that characterize each tissue type. Moreover, it can be used in scenarios where there is a scarcity of data available for supervised learning. Microwave images are formed by solving an inverse scattering problem that is severely ill-posed, which has a significant impact on image quality. A number of strategies have been developed to alleviate the ill-posedness of the inverse scattering problem. The degree of success of each strategy varies, leading to reconstructions with a wide range of image quality. A requirement for the segmentation technique is the ability to partition tissue types over a range of image qualities, which is demonstrated in the first part of the paper. The segmentation of images into regions of interest corresponding to various tissue types leads to the decomposition of the breast interior into disjoint tissue masks. An array of region-based and distance-based metrics is applied to compare masks extracted from reconstructed images and ground truth models. The quantitative results reveal the accuracy with which the geometric and dielectric properties are reconstructed. The incorporation of the segmentation into a framework that effectively furnishes the quantitative assessment of regions containing a specific tissue is also demonstrated. The algorithm is applied to reconstructed microwave images derived from breasts with various densities and tissue distributions to demonstrate its flexibility and show that it is not data-specific. The potential for using the algorithm to assist in diagnosis is exhibited with a tumor tracking example. This example also establishes the usefulness of the approach in evaluating the performance of the reconstruction algorithm in terms of its sensitivity and specificity to malignant tissue and its ability to accurately reconstruct malignant tissue.
(This article belongs to the Special Issue Advanced Computational Methods for Oncological Image Analysis)
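To illustrate the kind of pipeline described above, the sketch below clusters a reconstructed complex permittivity map into tissue classes with k-means, used here as a stand-in for the paper's unsupervised learner, and compares one resulting tissue mask against a ground-truth mask with Dice and Jaccard scores. The class count, features and data are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_tissues(permittivity_map, n_tissues=4, seed=0):
    """Cluster pixel-wise complex permittivity into tissue classes using
    k-means on (real, imaginary) features; returns an integer label map."""
    features = np.column_stack([permittivity_map.real.ravel(),
                                permittivity_map.imag.ravel()])
    labels = KMeans(n_clusters=n_tissues, random_state=seed, n_init=10).fit_predict(features)
    return labels.reshape(permittivity_map.shape)

def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum() + 1e-12)

def jaccard(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return inter / (np.logical_or(mask_a, mask_b).sum() + 1e-12)

# Hypothetical usage: compare the mask of one tissue class against ground truth.
recon = np.random.rand(64, 64) + 1j * np.random.rand(64, 64)
label_map = segment_tissues(recon)
predicted_mask = label_map == 0
truth_mask = np.zeros((64, 64), dtype=bool)
truth_mask[20:40, 20:40] = True
print(dice(predicted_mask, truth_mask), jaccard(predicted_mask, truth_mask))
```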

9 pages, 2257 KiB  
Article
Neutron Imaging Using a Fine-Grained Nuclear Emulsion
by Katsuya Hirota, Tomoko Ariga, Masahiro Hino, Go Ichikawa, Shinsuke Kawasaki, Masaaki Kitaguchi, Kenji Mishima, Naoto Muto, Naotaka Naganawa and Hirohiko M. Shimizu
J. Imaging 2021, 7(1), 4; https://doi.org/10.3390/jimaging7010004 - 05 Jan 2021
Cited by 4 | Viewed by 2705
Abstract
A neutron detector using a fine-grained nuclear emulsion has sub-micron spatial resolution and thus has the potential to be applied to high-resolution neutron imaging. In this paper, we present two approaches to applying the emulsion detectors to neutron imaging. The first uses track analysis to derive the reaction points for high resolution: from an image obtained with a 9 μm pitch Gd grating and cold neutrons, periodic peaks with a standard deviation of 1.3 μm were observed. The second is an approach without track analysis for high-density irradiation: the internal structure of a crystal oscillator chip, on a scale of approximately 30 μm, could be observed after image analysis.
(This article belongs to the Special Issue Neutron Imaging)
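The 1.3 μm figure above is the standard deviation of the periodic peaks recovered from the Gd grating image. A hedged sketch of how such a width can be estimated by fitting a Gaussian to one peak of a line profile; the profile, units and fitting model are placeholders rather than the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma, offset):
    return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2) + offset

# Hypothetical line profile across one grating period (positions in micrometres).
x = np.linspace(-4.5, 4.5, 181)
profile = gaussian(x, amplitude=1.0, center=0.0, sigma=1.3, offset=0.1)
profile += np.random.normal(0, 0.02, size=x.size)

popt, _ = curve_fit(gaussian, x, profile, p0=[1.0, 0.0, 1.0, 0.0])
print(f"fitted peak standard deviation: {abs(popt[2]):.2f} um")
```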

33 pages, 3219 KiB  
Article
Image Aesthetic Assessment Based on Image Classification and Region Segmentation
by Quyet-Tien Le, Patricia Ladret, Huu-Tuan Nguyen and Alice Caplier
J. Imaging 2021, 7(1), 3; https://doi.org/10.3390/jimaging7010003 - 27 Dec 2020
Cited by 4 | Viewed by 3207
Abstract
The main goal of this paper is to study Image Aesthetic Assessment (IAA), i.e., labeling images as having high or low aesthetic quality. The main contributions concern three points. Firstly, following the idea that photos in different categories (human, flower, animal, landscape, …) are taken with different photographic rules, image aesthetics should be evaluated differently for each image category. Large field images and close-up images are two typical categories of images with opposite photographic rules, so we investigate the intuition that prior Large field/Close-up Image Classification (LCIC) might improve the performance of IAA. Secondly, when a viewer looks at a photo, some regions receive more attention than others. Those regions are defined as Regions Of Interest (ROI), and it might be worthwhile to identify them before IAA. The question "Is it worth extracting some ROIs before IAA?" is addressed by studying Region Of Interest Extraction (ROIE) before investigating IAA based on each feature set (global image features, ROI features and background features). Based on the answers, a new IAA model is proposed. The last point is a comparison between the efficiency of handcrafted and learned features for the purpose of IAA.

15 pages, 46221 KiB  
Article
Data Augmentation Using Adversarial Image-to-Image Translation for the Segmentation of Mobile-Acquired Dermatological Images
by Catarina Andrade, Luís F. Teixeira, Maria João M. Vasconcelos and Luís Rosado
J. Imaging 2021, 7(1), 2; https://doi.org/10.3390/jimaging7010002 - 24 Dec 2020
Cited by 5 | Viewed by 3110
Abstract
Dermoscopic images allow the detailed examination of subsurface characteristics of the skin, which has led to the creation of several substantial databases of diverse skin lesions. However, the dermoscope is not an easily accessible tool in some regions. A less expensive alternative is the acquisition of medium-resolution clinical macroscopic images of skin lesions. However, the limited volume of macroscopic images available, especially mobile-acquired ones, hinders the development of a clinical mobile-based deep learning approach. In this work, we present a technique to efficiently utilize the sizable number of dermoscopic images to improve the segmentation of macroscopic skin lesion images. A Cycle-Consistent Adversarial Network is used to translate images between the two distinct domains created by the different image acquisition devices. A visual inspection was performed on several databases for qualitative evaluation of the results, based on the disappearance and appearance of intrinsic dermoscopic and macroscopic features. Moreover, the Fréchet Inception Distance was used as a quantitative metric. The quantitative segmentation results are demonstrated on the available macroscopic segmentation databases, SMARTSKINS and the Dermofit Image Library, yielding test-set thresholded Jaccard indices of 85.13% and 74.30%, respectively. These results establish a new state-of-the-art performance on the SMARTSKINS database.
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis)
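The thresholded Jaccard index reported above is commonly computed ISIC-style: the per-image Jaccard score is set to zero when it falls below a threshold (0.65 in the ISIC challenges, assumed here) before averaging over the test set. A minimal sketch under that assumption.

```python
import numpy as np

def thresholded_jaccard(pred_masks, true_masks, threshold=0.65):
    """Mean Jaccard index over a test set, with any per-image score below
    `threshold` counted as zero (ISIC-style scoring; threshold assumed)."""
    scores = []
    for pred, true in zip(pred_masks, true_masks):
        inter = np.logical_and(pred, true).sum()
        union = np.logical_or(pred, true).sum()
        j = inter / union if union else 1.0
        scores.append(j if j >= threshold else 0.0)
    return float(np.mean(scores))

# Hypothetical usage with two binary masks per set.
pred = [np.ones((8, 8), bool), np.zeros((8, 8), bool)]
true = [np.ones((8, 8), bool), np.ones((8, 8), bool)]
print(thresholded_jaccard(pred, true))
```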

12 pages, 7119 KiB  
Article
Comparison of Thermal Neutron and Hard X-ray Dark-Field Tomography
by Alex Gustschin, Tobias Neuwirth, Alexander Backs, Manuel Viermetz, Nikolai Gustschin, Michael Schulz and Franz Pfeiffer
J. Imaging 2021, 7(1), 1; https://doi.org/10.3390/jimaging7010001 - 23 Dec 2020
Cited by 1 | Viewed by 2476
Abstract
High-visibility (0.56) neutron-based multi-modal imaging with a Talbot–Lau interferometer at a wavelength of 1.6 Å is reported. A tomography scan of a strongly absorbing quartz geode sample was performed with both the neutron and an X-ray grating interferometer (70 kVp) for a quantitative comparison. Small scattering structures embedded in the absorbing silica matrix were well resolved in neutron dark-field CT slices with a spatial resolution of about 300 μm. Beneficial properties of the neutron radiation used, such as its monochromaticity and stronger penetration power, helped to avoid the beam-hardening-related artificial dark-field signal that was present in the X-ray data. Both dark-field modalities show mostly the same structures; however, some scattering features appear only in the neutron domain. Potential applications of combined X-ray and neutron multi-modal CT, enabling one to probe both the nuclear and the electron density-related structural properties, are discussed. Strongly absorbing samples are now accessible to the dark-field modality through the use of thermal neutrons.
(This article belongs to the Special Issue Neutron Imaging)
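The visibility quoted above is the usual grating-interferometry figure of merit, V = (I_max - I_min)/(I_max + I_min), typically obtained per pixel from the phase-stepping curve. A small sketch under that assumption, estimating V from the first Fourier component of a synthetic stepping curve.

```python
import numpy as np

def visibility_from_stepping_curve(intensities):
    """Estimate fringe visibility from a phase-stepping curve via its
    first Fourier component: V = 2*|a1| / a0."""
    intensities = np.asarray(intensities, dtype=float)
    coeffs = np.fft.rfft(intensities)
    a0 = coeffs[0].real / intensities.size          # mean intensity
    a1 = coeffs[1] / intensities.size               # first harmonic (complex amplitude)
    return 2.0 * np.abs(a1) / a0

# Hypothetical 8-step curve with mean 1000 counts and visibility 0.56.
steps = np.arange(8)
curve = 1000.0 * (1.0 + 0.56 * np.cos(2 * np.pi * steps / 8))
print(round(visibility_from_stepping_curve(curve), 3))
```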
