
Table of Contents

J. Imaging, Volume 5, Issue 1 (January 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both html and pdf forms. To view the papers in pdf format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Displaying articles 1-21
Open Access Article: Efficient FPGA Implementation of Automatic Nuclei Detection in Histopathology Images
J. Imaging 2019, 5(1), 21; https://doi.org/10.3390/jimaging5010021
Received: 30 November 2018 / Revised: 27 December 2018 / Accepted: 11 January 2019 / Published: 17 January 2019
Abstract
Accurate and efficient detection of cell nuclei is an important step towards the development of pathology-based Computer-Aided Diagnosis. High-resolution histopathology images are generally very large, on the order of billions of pixels, so nuclei detection is a highly compute-intensive task and a software implementation requires a significant amount of processing time. To assist doctors in real time, special hardware accelerators that reduce the processing time are required. In this paper, we propose a Field Programmable Gate Array (FPGA) implementation of an automated nuclei detection algorithm using generalized Laplacian of Gaussian filters. The experimental results show that the implemented architecture has the potential to provide a significant improvement in processing time without losing detection accuracy.
(This article belongs to the Special Issue Image Processing Using FPGAs)

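The nuclei-detection entry above names generalized Laplacian of Gaussian (LoG) filtering as the core operation. As a purely illustrative software reference point for what the FPGA accelerates, here is a minimal sketch of classic isotropic LoG blob detection with SciPy; the synthetic image, the scale `sigma`, and the threshold are assumptions for this sketch, not values from the paper, and the paper's generalized (elliptical) filters are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def detect_blobs(image, sigma=4.0, threshold=0.05):
    """Classic isotropic LoG blob detection (software reference).

    Dark blobs on a bright background (like stained nuclei) give a positive
    scale-normalized LoG response; local maxima above `threshold` are
    reported as detections.
    """
    response = gaussian_laplace(image.astype(float), sigma) * sigma ** 2
    peaks = (response == maximum_filter(response, size=7)) & (response > threshold)
    return np.argwhere(peaks)  # (row, col) of each detection

# Synthetic 64x64 image: two dark "nuclei" on a bright background.
img = np.ones((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
for cy, cx in [(20, 20), (45, 40)]:
    img -= 0.8 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 4.0 ** 2))

centers = detect_blobs(img)  # one detection per synthetic nucleus
```

A hardware implementation streams essentially this convolution over tiles of the gigapixel slide; a software version like the one above serves as the functional baseline such an architecture is checked against.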
Open Access Article: Local Indicators of Spatial Autocorrelation (LISA): Application to Blind Noise-Based Perceptual Quality Metric Index for Magnetic Resonance Images
J. Imaging 2019, 5(1), 20; https://doi.org/10.3390/jimaging5010020
Received: 23 November 2018 / Revised: 16 December 2018 / Accepted: 2 January 2019 / Published: 15 January 2019
Abstract
Noise-based quality evaluation of MRI images is highly desired in noise-dominant environments, but current methods have drawbacks that limit their effective performance. Traditional full-reference methods such as SNR and most model-based techniques cannot provide the perceptual quality metrics required for accurate diagnosis, treatment, and monitoring of diseases. Although techniques based on the Moran coefficients are perceptual quality metrics, they are full-reference methods and are ineffective when the reference image is unavailable; moreover, their predicted quality scores are difficult to interpret because the quality indices are not standardized. In this paper, we propose a new no-reference perceptual quality evaluation method for grayscale images such as MRI images. Our approach is formulated to mimic how humans perceive an image: it transforms noise level into a standardized perceptual quality score. Global Moran statistics are combined with local indicators of spatial autocorrelation in the form of local Moran statistics, and the quality score is predicted from a perceptually weighted combination of clustered and random pixels. Performance evaluation, comparative evaluation, and validation by human observers show that the proposed method will be a useful tool for evaluating retrospectively acquired MRI images and noise reduction algorithms.

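The LISA entry above builds its quality score on Moran statistics. For readers unfamiliar with the statistic, here is a minimal NumPy sketch of the global Moran coefficient for an image under 4-neighbour (rook) weights; it illustrates only this building block, not the authors' local-indicator weighting or their standardized scoring, and the test fields are synthetic.

```python
import numpy as np

def global_morans_i(img):
    """Global Moran's I of a 2-D image under 4-neighbour (rook) weights.

    I near +1: spatially clustered values (structure); near 0: spatial
    randomness (noise-like); near -1: checkerboard-like alternation.
    """
    z = img.astype(float) - img.mean()
    # Sum of z_i * z_j over horizontal and vertical neighbour pairs; each
    # unordered pair is counted twice, matching a symmetric weight matrix.
    cross = 2 * ((z[:, :-1] * z[:, 1:]).sum() + (z[:-1, :] * z[1:, :]).sum())
    w_total = 2 * (z[:, :-1].size + z[:-1, :].size)  # total weight W
    return (z.size / w_total) * cross / (z ** 2).sum()

rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))       # spatially random field -> I ~ 0
smooth = np.cumsum(np.cumsum(noise, 0), 1)  # strongly autocorrelated -> I ~ 1
```

Noise pushes the coefficient toward zero while anatomical structure keeps it high, which is the intuition a Moran-based perceptual noise metric exploits.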
Open Access Article: Full-Vectorial 3D Microwave Imaging of Sparse Scatterers through a Multi-Task Bayesian Compressive Sensing Approach
J. Imaging 2019, 5(1), 19; https://doi.org/10.3390/jimaging5010019
Received: 3 December 2018 / Revised: 30 December 2018 / Accepted: 8 January 2019 / Published: 15 January 2019
Abstract
In this paper, we address the full-vectorial three-dimensional (3D) microwave imaging (MI) of sparse scatterers. Towards this end, the inverse scattering (IS) problem is formulated within the contrast source inversion (CSI) framework, aiming to retrieve the sparsest and most probable distribution of the contrast source within the imaged volume. A customized multi-task Bayesian compressive sensing (MT-BCS) method is used to yield regularized solutions of the 3D-IS problem with remarkable computational efficiency. Selected numerical results on representative benchmarks are presented and discussed to assess the effectiveness and reliability of the proposed MT-BCS strategy in comparison with other competitive state-of-the-art approaches.
(This article belongs to the Special Issue Microwave Imaging and Electromagnetic Inverse Scattering Problems)

Open Access Article: Quality Assessment of HDR/WCG Images Using HDR Uniform Color Spaces
J. Imaging 2019, 5(1), 18; https://doi.org/10.3390/jimaging5010018
Received: 31 October 2018 / Revised: 21 December 2018 / Accepted: 4 January 2019 / Published: 14 January 2019
Abstract
High Dynamic Range (HDR) and Wide Color Gamut (WCG) screens are able to render brighter and darker pixels with more vivid colors than ever. To assess the quality of images and videos displayed on these screens, new quality assessment metrics adapted to this new content are required. Because most SDR metrics assume that the representation of images is perceptually uniform, we study the impact of three uniform color spaces developed specifically for HDR and WCG images, namely ICtCp, Jzazbz, and HDR-Lab, on 12 SDR quality assessment metrics. Moreover, as the existing databases of images annotated with subjective scores use a standard gamut, two new HDR databases using WCG are proposed. Results show that MS-SSIM and FSIM are among the most reliable metrics. This study also highlights the fact that the diffuse white of HDR images plays an important role when adapting SDR metrics for HDR content, and that the adapted SDR metrics do not predict the impact of chrominance distortions well.
(This article belongs to the Special Issue Multimedia Content Analysis and Applications)

Open Access Article: Macrosight: A Novel Framework to Analyze the Shape and Movement of Interacting Macrophages Using Matlab®
J. Imaging 2019, 5(1), 17; https://doi.org/10.3390/jimaging5010017
Received: 29 November 2018 / Revised: 5 January 2019 / Accepted: 8 January 2019 / Published: 14 January 2019
Abstract
This paper presents a novel software framework, called macrosight, which incorporates routines to detect, track, and analyze the shape and movement of objects, with special emphasis on macrophages. The key feature of macrosight is an algorithm to assess the changes of direction derived from cell–cell contact, where an interaction is assumed to occur. The main biological motivation is the determination of certain cell interactions that influence cell migration. Thus, the main objective of this work is to provide insights into the notion that interactions between cell structures cause a change in orientation. Macrosight analyzes the change of direction of cells before and after they come in contact with another cell. Interactions are determined when cells overlap and form clumps of two or more cells. The framework integrates a segmentation technique capable of detecting overlapping cells and a tracking framework into a tool for analyzing the trajectories of cells before and after they overlap. Preliminary results show promise for the analysis and the proposed hypothesis, and lay the groundwork for further developments. Extensive experimentation and data analysis show, with statistical significance, that under certain conditions the movement changes before and after an interaction are different from movement in controlled cases.
(This article belongs to the Special Issue Medical Image Understanding and Analysis 2018)

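The macrosight entry above quantifies a cell's change of direction before and after a contact event. A minimal NumPy sketch of that measurement on a single track follows; the track and contact frame are made-up inputs, and this is not the Matlab® macrosight code.

```python
import numpy as np

def turning_angle(track, contact):
    """Angle (degrees) between a cell's mean direction of motion before and
    after a contact event, given a track of (x, y) centroids and the frame
    index at which the contact occurs."""
    before = np.diff(track[:contact + 1], axis=0).mean(axis=0)
    after = np.diff(track[contact:], axis=0).mean(axis=0)
    cosang = before @ after / (np.linalg.norm(before) * np.linalg.norm(after))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# A cell that moves right, touches another cell at frame 3, then moves up.
track = np.array([[0, 0], [1, 0], [2, 0], [3, 0], [3, 1], [3, 2], [3, 3]], float)
angle = turning_angle(track, contact=3)  # 90.0 degrees
```

Comparing the distribution of such angles around contact events against control tracks (no contact) is the kind of statistical comparison the abstract reports.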
Open Access Article: FPGA-Based Processor Acceleration for Image Processing Applications
J. Imaging 2019, 5(1), 16; https://doi.org/10.3390/jimaging5010016
Received: 27 November 2018 / Revised: 23 December 2018 / Accepted: 7 January 2019 / Published: 13 January 2019
Abstract
FPGA-based embedded image processing systems offer considerable computing resources but present programming challenges when compared to software systems. This paper describes an approach based on an FPGA-based soft processor called the Image Processing Processor (IPPro), which can operate at up to 337 MHz on a high-end Xilinx FPGA family, and gives details of the dataflow-based programming environment. The approach is demonstrated for a k-means clustering operation and a traffic sign recognition application, both of which have been prototyped on an Avnet Zedboard with a Xilinx Zynq-7000 system-on-chip (SoC). A number of parallel dataflow mapping options were explored, giving a speed-up of 8 times for k-means clustering using 16 IPPro cores, and a speed-up of 9.6 times for the morphology filter operation of the traffic sign recognition using 16 IPPro cores, compared to their equivalent ARM-based software implementations. We show that for k-means clustering, the 16-core IPPro implementation is 57, 28, and 1.7 times more power efficient (fps/W) than the ARM Cortex-A7 CPU, NVIDIA GeForce GTX980 GPU, and ARM Mali-T628 embedded GPU, respectively.
(This article belongs to the Special Issue Image Processing Using FPGAs)

Open Access Article: Comparison of Piezoelectric and Optical Projection Imaging for Three-Dimensional In Vivo Photoacoustic Tomography
J. Imaging 2019, 5(1), 15; https://doi.org/10.3390/jimaging5010015
Received: 26 November 2018 / Revised: 22 December 2018 / Accepted: 3 January 2019 / Published: 11 January 2019
Abstract
Ultrasound sensor arrays for photoacoustic tomography (PAT) are investigated that create line projections of the pressure generated in an object by pulsed light illumination. Projections over a range of viewing angles enable the reconstruction of a three-dimensional image. Two line-integrating detection systems are compared in this study for the in vivo imaging of vasculature: a piezoelectric array and a camera-based setup that captures snapshots of the acoustic field emanating from the sample. An array consisting of 64 line-shaped sensors made of piezoelectric polymer film, arranged on a half-cylindrical surface, was used to acquire spatiotemporal data from a human finger. The optical setup used phase contrast to visualize the acoustic field generated in the leg of a mouse after a selected delay time. Time-domain back projection and frequency-domain back propagation were used for image reconstruction from the piezoelectric and optical data, respectively. The comparison yielded an approximately threefold higher resolution for the optical setup and an approximately 13-fold higher sensitivity for the piezoelectric array. Due to the high density of data in the camera images, the optical technique gave images without the streak artifacts that were visible in the piezo array images due to the discrete detector positions. Overall, both detection concepts are suited to almost real-time projection imaging and to three-dimensional imaging with a data acquisition time of less than a minute without averaging, limited by the repetition rate of the laser.
(This article belongs to the Special Issue Biomedical Photoacoustic Imaging: Technologies and Methods)

Open Access Article: Enhancement and Segmentation Workflow for the Developing Zebrafish Vasculature
J. Imaging 2019, 5(1), 14; https://doi.org/10.3390/jimaging5010014
Received: 28 November 2018 / Revised: 3 January 2019 / Accepted: 8 January 2019 / Published: 11 January 2019
Abstract
Zebrafish have become an established in vivo vertebrate model for studying cardiovascular development and disease. However, most published studies of the zebrafish vascular architecture rely on subjective visual assessment rather than objective quantification. In this paper, we used state-of-the-art light sheet fluorescence microscopy to visualize the vasculature in transgenic fluorescent reporter zebrafish. Analyses of image quality, vascular enhancement methods, and segmentation approaches were performed in the framework of the open-source software Fiji to allow dissemination and reproducibility. Here, we build on a previously developed image processing pipeline; evaluate its applicability to a wider range of data; apply and evaluate an alternative vascular enhancement method; and, finally, suggest a workflow for successful segmentation of the embryonic zebrafish vasculature.
(This article belongs to the Special Issue Medical Image Understanding and Analysis 2018)

Open Access Article: Resolution Limits in Photoacoustic Imaging Caused by Acoustic Attenuation
J. Imaging 2019, 5(1), 13; https://doi.org/10.3390/jimaging5010013
Received: 27 November 2018 / Revised: 25 December 2018 / Accepted: 3 January 2019 / Published: 10 January 2019
Abstract
In conventional photoacoustic tomography, several effects contribute to the loss of resolution, such as the limited bandwidth and the finite size of the transducer, or the space-dependent speed of sound. All of these can, in principle, be compensated technically or numerically. Frequency-dependent acoustic attenuation also limits spatial resolution by reducing the bandwidth of the photoacoustic signal, but it can be numerically compensated only up to a theoretical limit given by thermodynamics. The entropy production, which is the dissipated energy of the acoustic wave divided by the temperature, turns out to be equal to the information loss, which cannot be compensated for by any reconstruction method. This is demonstrated for the propagation of planar acoustic waves in water, induced by short laser pulses and measured by piezoelectric acoustic transducers. For water, where the acoustic attenuation is proportional to the squared frequency, the resolution limit is proportional to the square root of the distance and inversely proportional to the square root of the logarithm of the signal-to-noise ratio. The proposed method could be used in future work for media other than water, such as biological tissue, where acoustic attenuation has a different power-law frequency dependence.
(This article belongs to the Special Issue Biomedical Photoacoustic Imaging: Technologies and Methods)

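The scaling stated in the abstract above can be written compactly. With δ(d) the achievable resolution after propagation distance d in water, and the proportionality constant (which depends on the attenuation coefficient of water and is not reproduced here) left unspecified:

```latex
\delta(d) \;\propto\; \sqrt{\frac{d}{\ln(\mathrm{SNR})}}
```

So doubling the imaging depth degrades the attenuation-limited resolution by a factor of √2, while improving the signal-to-noise ratio helps only logarithmically.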
Open Access Article: Semi-Automatic Algorithms for Estimation and Tracking of AP-Diameter of the IVC in Ultrasound Images
J. Imaging 2019, 5(1), 12; https://doi.org/10.3390/jimaging5010012
Received: 29 October 2018 / Revised: 20 December 2018 / Accepted: 4 January 2019 / Published: 9 January 2019
Abstract
Acutely ill patients presenting with conditions such as sepsis, trauma, and congestive heart failure require judicious resuscitation in order to achieve and maintain optimal circulating blood volume. Increasingly, emergency and critical care physicians are using portable ultrasound to approximate the temporal changes of the anterior–posterior (AP) diameter of the inferior vena cava (IVC) in order to guide fluid administration or removal. This paper proposes semi-automatic active ellipse and rectangle algorithms capable of improved and quantified measurement of the AP-diameter. The proposed algorithms are compared to manual measurement and a previously published active circle model. Results demonstrate that the rectangle model outperforms both the active circle and the ellipse irrespective of IVC shape, and closely approximates the tedious expert assessment.

Open Access Editorial: Acknowledgement to Reviewers of Journal of Imaging in 2018
J. Imaging 2019, 5(1), 11; https://doi.org/10.3390/jimaging5010011
Published: 9 January 2019
Abstract
Rigorous peer-review is the cornerstone of high-quality academic publishing [...]
Open Access Article: Prediction of the Leaf Primordia of Potato Tubers Using Sensor Fusion and Wavelength Selection
J. Imaging 2019, 5(1), 10; https://doi.org/10.3390/jimaging5010010
Received: 10 November 2018 / Revised: 29 December 2018 / Accepted: 3 January 2019 / Published: 9 January 2019
Abstract
The sprouting of potato tubers during storage is a significant problem that hinders obtaining high-quality seeds or fried products. In this study, the potential of fusing data obtained from visible (VIS)/near-infrared (NIR) spectroscopic and hyperspectral imaging systems was investigated to improve the prediction of primordial leaf count, a significant indicator of tuber sprouting. Electronic and lab measurements were conducted on whole tubers of the Frito Lay 1879 (FL1879) and Russet Norkotah (R.Norkotah) potato cultivars. The interval partial least squares (IPLS) technique was adopted to extract the most effective wavelengths for both systems. Linear regression was performed using partial least squares regression (PLSR), the best calibration model was chosen using four-fold cross-validation, and predictions were then obtained on separate test data sets. Prediction results were enhanced compared with those obtained from the individual systems' models: the values of the correlation coefficient (and, in parentheses, the ratio of performance to deviation, RPD) were 0.95 (3.01) and 0.96 (3.55) for FL1879 and R.Norkotah, respectively, representing improvements of 6.7% (35.6%) and 24.7% (136.7%). The proposed study shows the possibility of building a rapid, noninvasive, and accurate system or device that requires minimal or no sample preparation to track the sprouting activity of stored potato tubers.
(This article belongs to the Special Issue The Future of Hyperspectral Imaging)

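The potato-tuber entry above selects informative wavelengths with interval partial least squares (IPLS) and validates models with four-fold cross-validation. The NumPy sketch below conveys the interval-selection idea on synthetic spectra; for brevity, ordinary least squares stands in for the PLS regressor inside each interval, and the data, interval count, and informative band positions are all invented for the illustration.

```python
import numpy as np

def cv_rmse(X, y, k=4):
    """k-fold cross-validated RMSE of an ordinary least squares fit."""
    idx = np.arange(len(y))
    errs = []
    for fold in range(k):
        test = idx % k == fold
        train = ~test
        coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errs.append(((X[test] @ coef - y[test]) ** 2).mean())
    return float(np.sqrt(np.mean(errs)))

def best_interval(X, y, n_intervals=10):
    """Interval selection in the spirit of IPLS: split the wavelength axis
    into equal intervals and keep the one with the lowest CV error."""
    bounds = np.linspace(0, X.shape[1], n_intervals + 1, dtype=int)
    scores = [cv_rmse(X[:, a:b], y) for a, b in zip(bounds[:-1], bounds[1:])]
    best = int(np.argmin(scores))
    return int(bounds[best]), int(bounds[best + 1])

# Synthetic spectra: 120 samples x 200 wavelengths; only bands 60-79 carry signal.
rng = np.random.default_rng(1)
X = rng.standard_normal((120, 200))
y = X[:, 60:80].sum(axis=1) + 0.1 * rng.standard_normal(120)
lo, hi = best_interval(X, y)  # recovers the informative interval [60, 80)
```

Fusing the selected intervals from two instruments into one regression matrix is then a column concatenation, which is the "sensor fusion" step the abstract describes at a high level.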
Open Access Article: Design of a Tunable Snapshot Multispectral Imaging System through Ray Tracing Simulation
J. Imaging 2019, 5(1), 9; https://doi.org/10.3390/jimaging5010009
Received: 3 December 2018 / Revised: 14 December 2018 / Accepted: 28 December 2018 / Published: 5 January 2019
Abstract
Research on snapshot multispectral imaging has been popular in the remote sensing community due to the high demand for video-rate remote sensing systems in various applications. Existing snapshot multispectral imaging techniques are mainly of a fixed-wavelength type, which limits their practical usefulness. This paper describes a tunable multispectral snapshot system that uses a dual-prism assembly as the dispersion element of a coded aperture snapshot spectral imager (CASSI). Spectral tuning is achieved by adjusting the air-gap displacement of the dual-prism assembly. Typical spectral shifts of about 1 nm at 400 nm and 12 nm at 700 nm have been achieved in the present design when the air gap of the dual prism is changed from 4.24 mm to 5.04 mm. The paper outlines the optical design, the performance, and the pros and cons of the dual-prism CASSI (DP-CASSI) system. The performance of the system is illustrated by TracePro™ ray tracing, to allow researchers in the field to repeat or validate the results presented in this paper.

Open Access Article: Hyperspectral Imaging as Powerful Technique for Investigating the Stability of Painting Samples
J. Imaging 2019, 5(1), 8; https://doi.org/10.3390/jimaging5010008
Received: 26 October 2018 / Revised: 21 November 2018 / Accepted: 26 December 2018 / Published: 3 January 2019
Abstract
The aim of this work is to present the use of Hyperspectral Imaging for studying the stability of painting samples under simulated solar radiation, in order to evaluate their use in the restoration field. In particular, ready-to-use commercial watercolours and powder pigments were tested, the latter prepared with gum Arabic for the experiments in order to propose a possible substitute for traditional reintegration materials. Samples were investigated through Hyperspectral Imaging in the short-wave infrared range before and after an artificial ageing procedure performed in a Solar Box chamber under controlled conditions. Data were processed in order to evaluate the sensitivity of the Hyperspectral Imaging technique in identifying variations in the paint layers, induced by photo-degradation, before they could be detected by eye. Furthermore, a supervised classification method for monitoring changes in the painted surface, adopting a multivariate approach, was successfully applied.
(This article belongs to the Special Issue The Future of Hyperspectral Imaging)

Open Access Article: Optimized Memory Allocation and Power Minimization for FPGA-Based Image Processing
J. Imaging 2019, 5(1), 7; https://doi.org/10.3390/jimaging5010007
Received: 19 November 2018 / Revised: 24 December 2018 / Accepted: 27 December 2018 / Published: 1 January 2019
Abstract
Memory is the biggest limiting factor in the widespread use of FPGAs for high-level image processing, which requires complete frame(s) to be stored in situ. Since FPGAs have limited on-chip memory capabilities, efficient use of such resources is essential to meet performance, size, and power constraints. In this paper, we investigate the allocation of on-chip memory resources in order to minimize resource usage and power consumption, contributing to the realization of power-efficient high-level image processing fully contained on FPGAs. We propose methods for generating memory architectures, from both Hardware Description Language and High Level Synthesis designs, which minimize memory usage and power consumption. Based on a formalization of on-chip memory configuration options and a power model, we demonstrate how our partitioning algorithms can outperform traditional strategies. Compared to commercial FPGA synthesis and High Level Synthesis tools, our results show that the proposed algorithms can achieve up to 60% higher utilization efficiency, increasing the sizes and/or number of frames that can be accommodated, and reduce the frame buffers' dynamic power consumption by up to approximately 70%. In our experiments using Optical Flow and MeanShift Tracking, two representative high-level algorithms, the partitioning algorithms reduce total power by up to 25% and 30%, respectively, without impacting performance.
(This article belongs to the Special Issue Image Processing Using FPGAs)

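The memory-allocation entry above is about mapping logical frame and line buffers onto fixed-size on-chip block RAMs. The toy first-fit-decreasing sketch below illustrates the packing problem, using the 36 Kb block size of Xilinx 7-series BRAM36 primitives; it is a naive baseline of the kind the paper's partitioning algorithms are designed to outperform, not the proposed method, and the buffer sizes are hypothetical.

```python
def pack_buffers(buffer_bits, bram_bits=36 * 1024):
    """First-fit-decreasing packing of logical buffers into fixed-size
    block RAMs -- a toy stand-in for on-chip memory allocation. Buffers
    larger than one BRAM are split across several."""
    brams = []  # free bits remaining in each allocated BRAM
    for size in sorted(buffer_bits, reverse=True):
        while size > 0:
            chunk = min(size, bram_bits)
            for i, free in enumerate(brams):
                if free >= chunk:        # first existing BRAM with room
                    brams[i] -= chunk
                    break
            else:                        # no room anywhere: allocate a new BRAM
                brams.append(bram_bits - chunk)
            size -= chunk
    return len(brams)

# Hypothetical buffer set: one 640x480 8-bit frame buffer plus four line buffers.
buffers = [640 * 480 * 8] + [640 * 8] * 4
n_brams = pack_buffers(buffers)
```

Counting allocated BRAMs for a given buffer set captures only the utilization side of the cost function; the paper's formalization additionally models the width/depth configuration options of each block and their dynamic power.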
Open Access Article: A Low-Rate Video Approach to Hyperspectral Imaging of Dynamic Scenes
J. Imaging 2019, 5(1), 6; https://doi.org/10.3390/jimaging5010006
Received: 10 November 2018 / Revised: 14 December 2018 / Accepted: 26 December 2018 / Published: 31 December 2018
Abstract
The increased sensitivity of modern hyperspectral line-scanning systems has led to the development of imaging systems that can acquire each line of hyperspectral pixels at very high data rates (in the 200–400 Hz range). These data acquisition rates present an opportunity to acquire full hyperspectral scenes at rapid rates, enabling the use of traditional push-broom imaging systems as low-rate video hyperspectral imaging systems. This paper provides an overview of the design of an integrated system that produces low-rate video hyperspectral image sequences by merging a hyperspectral line scanner, operating in the visible and near infra-red, with a high-speed pan-tilt system and an integrated IMU-GPS that provides system pointing. The integrated unit is operated from atop a telescopic mast, which also allows imaging of the same surface area or objects from multiple view zenith directions, useful for bi-directional reflectance data acquisition and analysis. The telescopic mast platform also enables stereo hyperspectral image acquisition, and therefore the ability to construct a digital elevation model of the surface. Imaging near the shoreline in a coastal setting, we provide an example of a hyperspectral imagery time series acquired during a field experiment in July 2017 with our integrated system, which produced hyperspectral image sequences with 371 spectral bands, spatial dimensions of 1600 × 212, and 16 bits per pixel, every 0.67 s. A second example time series, acquired during a rooftop experiment conducted on the Rochester Institute of Technology campus in August 2017, illustrates a second application, moving-vehicle imaging, with 371 spectral bands, 16-bit dynamic range, and 1600 × 300 spatial dimensions every second.
(This article belongs to the Special Issue The Future of Hyperspectral Imaging)

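The frame geometry quoted in the abstract above implies a substantial sustained data rate. A quick back-of-the-envelope check (decimal megabytes, ignoring any container or metadata overhead):

```python
# One hyperspectral "video" frame from the July 2017 field experiment:
# 1600 x 212 spatial pixels, 371 spectral bands, 16 bits per sample,
# acquired every 0.67 s.
samples_per_frame = 1600 * 212 * 371
mbytes_per_frame = samples_per_frame * 16 / 8 / 1e6  # ~251.7 MB per frame
mbytes_per_second = mbytes_per_frame / 0.67          # ~375.7 MB/s sustained
```

This is why such systems are "low-rate" video: each hyperspectral frame carries hundreds of megabytes, so even at well under one frame per second the storage pipeline must sustain hundreds of MB/s.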
Open AccessArticle Multi-Modal Medical Image Registration with Full or Partial Data: A Manifold Learning Approach
J. Imaging 2019, 5(1), 5; https://doi.org/10.3390/jimaging5010005
Received: 20 September 2018 / Revised: 23 December 2018 / Accepted: 25 December 2018 / Published: 30 December 2018
Viewed by 548 | PDF Full-text (2247 KB) | HTML Full-text | XML Full-text
Abstract
Multi-modal image registration is the primary step in integrating information stored in two or more images, which are captured using multiple imaging modalities. In addition to intensity variations and structural differences between images, they may have partial or full overlap, which adds an [...] Read more.
Multi-modal image registration is the primary step in integrating information stored in two or more images, which are captured using multiple imaging modalities. In addition to intensity variations and structural differences between images, they may have partial or full overlap, which adds an extra hurdle to the success of registration process. In this contribution, we propose a multi-modal to mono-modal transformation method that facilitates direct application of well-founded mono-modal registration methods in order to obtain accurate alignment of multi-modal images in both cases, with complete (full) and incomplete (partial) overlap. The proposed transformation facilitates recovering strong scales, rotations, and translations. We explain the method thoroughly and discuss the choice of parameters. For evaluation purposes, the effectiveness of the proposed method is examined and compared with widely used information theory-based techniques using simulated and clinical human brain images with full data. Using RIRE dataset, mean absolute error of 1.37, 1.00, and 1.41 mm are obtained for registering CT images with PD-, T1-, and T2-MRIs, respectively. In the end, we empirically investigate the efficacy of the proposed transformation in registering multi-modal partially overlapped images. Full article
(This article belongs to the Special Issue Medical Image Analysis)
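The information theory-based baselines this abstract compares against typically maximize a similarity metric such as mutual information between the two images. A minimal sketch of that metric (a generic illustration of the baseline, not the authors' manifold-learning transformation; the function and parameter names are ours):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between the intensity distributions of two images,
    a common similarity metric for multi-modal registration."""
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()          # joint intensity distribution
    px = pxy.sum(axis=1)                   # marginal of image A
    py = pxy.sum(axis=0)                   # marginal of image B
    nz = pxy > 0                           # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))
```

A registration loop would search over transformations of one image and keep the one maximizing this score; an image is maximally informative about itself and nearly uninformative about an unrelated image.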
Open AccessArticle Magnetic Resonance Conditional Microinjector
J. Imaging 2019, 5(1), 4; https://doi.org/10.3390/jimaging5010004
Received: 17 September 2018 / Revised: 12 December 2018 / Accepted: 20 December 2018 / Published: 30 December 2018
Viewed by 441 | PDF Full-text (1817 KB) | HTML Full-text | XML Full-text
Abstract
Glaucoma, one of the leading causes of blindness, has been linked to increases in intraocular pressure. To observe and study this effect, we propose a specialized microinjector and driver that can be used to inject small amounts of liquid into a [...] Read more.
Glaucoma, one of the leading causes of blindness, has been linked to increases in intraocular pressure. To observe and study this effect, we propose a specialized microinjector and driver that can be used to inject small amounts of liquid into a target volume. Magnetic resonance imaging (MRI)-guided, remotely activated devices require specialized equipment that is compatible with the MR environment. This paper presents an MR Conditional microinjector system with a pressure sensor for investigating the effects of intraocular pressure (IOP) in near real time. The system uses pressurized air and a linear actuation device to push a syringe in a controlled, stepwise manner. The feasibility and utility of the proposed investigative medical research tool were tested and validated by measuring the pressure inside an intact animal donor eyeball while precise, small volumes of water were injected into the specimen. Observable increases in specimen volume at specific, measured target pressure increases show that the system is technically feasible for studying IOP effects, and the resulting changes in shape were visible in the MRI scans themselves. In addition, it was verified that the presence and operation of the system did not interfere with the MRI machine, confirming its conditional compatibility with the 3T MRI. Full article
(This article belongs to the Special Issue Image-Guided Medical Robotics)
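The controlled, stepwise injection toward a target pressure described above amounts to a simple feedback loop. A hypothetical sketch (the callback names, units, and the linear pressure model in the test are our assumptions, not the paper's hardware interface):

```python
def inject_to_target(read_pressure, inject_step, target_mmHg, max_steps=1000):
    """Advance the syringe one small fixed-volume step at a time until the
    measured intraocular pressure reaches the target; returns steps taken."""
    steps = 0
    while read_pressure() < target_mmHg and steps < max_steps:
        inject_step()  # actuate one fixed-volume increment
        steps += 1
    return steps
```

In the real system the two callbacks would wrap the MR Conditional pressure sensor and the pneumatically driven linear actuator; `max_steps` is a safety bound so the loop cannot run away if the target is never reached.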
Open AccessReview Compressive Sensing Hyperspectral Imaging by Spectral Multiplexing with Liquid Crystal
J. Imaging 2019, 5(1), 3; https://doi.org/10.3390/jimaging5010003
Received: 28 October 2018 / Revised: 25 November 2018 / Accepted: 18 December 2018 / Published: 22 December 2018
Viewed by 565 | PDF Full-text (11520 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Hyperspectral (HS) imaging involves the sensing of a scene’s spectral properties, which are often redundant in nature. The redundancy of the information motivates our quest to implement Compressive Sensing (CS) theory for HS imaging. This article provides a review of the Compressive Sensing [...] Read more.
Hyperspectral (HS) imaging involves the sensing of a scene’s spectral properties, which are often redundant in nature. The redundancy of the information motivates our quest to implement Compressive Sensing (CS) theory for HS imaging. This article provides a review of the Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) camera, its evolution, and its different applications. The CS-MUSI camera was designed within the CS framework and uses a liquid crystal (LC) phase retarder in order to modulate the spectral domain. The outstanding advantage of the CS-MUSI camera is that the entire HS image is captured from an order of magnitude fewer measurements of the sensor array, compared to conventional HS imaging methods. Full article
(This article belongs to the Special Issue The Future of Hyperspectral Imaging)
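The core CS idea above, capturing a whole spectrum from far fewer multiplexed measurements than spectral bands, can be illustrated with a generic sparse-recovery sketch. ISTA here stands in for whatever reconstruction the CS-MUSI pipeline actually uses; matrix sizes and names are illustrative:

```python
import numpy as np

def ista(Phi, y, lam=0.05, n_iter=1000):
    """Iterative shrinkage-thresholding: recover a sparse signal x from
    compressed measurements y = Phi @ x (far fewer rows than columns)."""
    x = np.zeros(Phi.shape[1])
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2          # 1 / Lipschitz constant
    for _ in range(n_iter):
        x = x - step * (Phi.T @ (Phi @ x - y))         # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft-threshold
    return x
```

In the CS-MUSI setting, each row of `Phi` would correspond to one LC-modulated spectral transmission pattern, and `x` to the (sparse or compressible) spectrum being recovered.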
Open AccessArticle What’s in a Smile? Initial Analyses of Dynamic Changes in Facial Shape and Appearance
J. Imaging 2019, 5(1), 2; https://doi.org/10.3390/jimaging5010002
Received: 15 November 2018 / Revised: 13 December 2018 / Accepted: 18 December 2018 / Published: 21 December 2018
Viewed by 573 | PDF Full-text (5914 KB) | HTML Full-text | XML Full-text
Abstract
Single-level principal component analysis (PCA) and multi-level PCA (mPCA) methods are applied here to a set of (2D frontal) facial images from a group of 80 Finnish subjects (34 male; 46 female) with two different facial expressions (smiling and neutral) per subject. Inspection [...] Read more.
Single-level principal component analysis (PCA) and multi-level PCA (mPCA) methods are applied here to a set of (2D frontal) facial images from a group of 80 Finnish subjects (34 male; 46 female) with two different facial expressions (smiling and neutral) per subject. Inspection of eigenvalues gives insight into the importance of different factors affecting shapes, including: biological sex, facial expression (neutral versus smiling), and all other variations. Biological sex and facial expression are shown to be reflected in those components at appropriate levels of the mPCA model. Dynamic 3D shape data for all phases of a smile made up a second dataset sampled from 60 adult British subjects (31 male; 29 female). Modes of variation reflected the act of smiling at the correct level of the mPCA model. Seven phases of the dynamic smiles are identified: rest pre-smile, onset 1 (acceleration), onset 2 (deceleration), apex, offset 1 (acceleration), offset 2 (deceleration), and rest post-smile. A clear cycle is observed in standardized scores at an appropriate level for mPCA and in single-level PCA. mPCA can be used to study static shapes and images, as well as dynamic changes in shape. It gave us much insight into the question “what’s in a smile?”. Full article
(This article belongs to the Special Issue Medical Image Understanding and Analysis 2018)
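Single-level PCA on vectorized shape or landmark data, as used above, amounts to an eigen-decomposition of the sample covariance matrix, with eigenvalues measuring the variance carried by each mode of variation. A minimal sketch (generic PCA, not the authors' mPCA code):

```python
import numpy as np

def pca(shapes):
    """Single-level PCA of shape vectors (one row per subject).
    Returns eigenvalues (variance per mode, descending) and the
    corresponding eigenvectors (modes of variation, as columns)."""
    X = shapes - shapes.mean(axis=0)        # center the data
    cov = X.T @ X / (len(X) - 1)            # sample covariance
    evals, evecs = np.linalg.eigh(cov)      # symmetric eigendecomposition
    order = np.argsort(evals)[::-1]         # sort by variance, descending
    return evals[order], evecs[:, order]
```

Inspecting the leading eigenvalues, as the abstract describes, shows how much shape variation each factor (sex, expression, residual variation) accounts for; mPCA repeats this decomposition per level of a grouped model.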
Open AccessReview Recent Trends in Compressive Raman Spectroscopy Using DMD-Based Binary Detection
J. Imaging 2019, 5(1), 1; https://doi.org/10.3390/jimaging5010001
Received: 21 November 2018 / Revised: 11 December 2018 / Accepted: 13 December 2018 / Published: 21 December 2018
Viewed by 528 | PDF Full-text (3155 KB) | HTML Full-text | XML Full-text
Abstract
The collection of high-dimensional hyperspectral data is often the slowest step in the process of hyperspectral Raman imaging. With conventional array-based Raman spectroscopy, acquiring chemical images can take hours or even days. To increase Raman collection speeds, a number of [...] Read more.
The collection of high-dimensional hyperspectral data is often the slowest step in the process of hyperspectral Raman imaging. With conventional array-based Raman spectroscopy, acquiring chemical images can take hours or even days. To increase Raman collection speeds, a number of compressive detection (CD) strategies, which simultaneously sense and compress the spectral signal, have recently been demonstrated. As opposed to conventional hyperspectral imaging, where full spectra are measured prior to post-processing and imaging, CD increases the speed of data collection by making measurements in a low-dimensional space containing only the information of interest, thus enabling real-time imaging. The key advantage of the CD strategy is its use of single-channel detectors together with optical filter functions to obtain component intensities. In other words, the filter functions are simply optimized patterns of wavelength combinations characteristic of each component in the sample, and the intensity transmitted through each filter represents a direct measure of the associated score value. Essentially, compressive hyperspectral images consist of ‘score’ pixels (instead of ‘spectral’ pixels). This paper presents an overview of recent advances in compressive Raman detection designs and performance validations using a DMD-based binary detection strategy. Full article
(This article belongs to the Special Issue The Future of Hyperspectral Imaging)
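The filter-function idea above, one binary DMD pattern per component whose transmitted intensity is directly a score, can be sketched as follows. This is a toy illustration: the binary patterns here simply pass the wavelength bins where a component dominates, not the optimized filter designs reviewed in the paper:

```python
import numpy as np

def binary_filters(components):
    """One binary DMD pattern per component: pass the wavelength bins
    where that component's spectrum dominates the others (toy design)."""
    comps = np.asarray(components)
    return (comps == comps.max(axis=0)).astype(float)

def scores(filters, spectrum):
    """Transmitted intensity through each filter: one 'score' pixel per
    component, as measured directly on a single-channel detector."""
    return filters @ spectrum
```

Because each measurement is already a component score, an image of `scores` per spatial position is the compressive chemical image, with no full-spectrum acquisition or post-hoc decomposition step.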
J. Imaging EISSN 2313-433X Published by MDPI AG, Basel, Switzerland