Table of Contents

J. Imaging, Volume 4, Issue 7 (July 2018)

Displaying articles 1-12
Open Access Article: ECG Electrode Placements for Magnetohydrodynamic Voltage Suppression
J. Imaging 2018, 4(7), 94; https://doi.org/10.3390/jimaging4070094
Received: 28 June 2018 / Accepted: 13 July 2018 / Published: 17 July 2018
PDF Full-text (1614 KB) | HTML Full-text | XML Full-text
Abstract
This study aims to investigate a set of electrocardiogram (ECG) electrode lead locations to improve the quality of four-lead ECG signals acquired during magnetic resonance imaging (MRI). This was achieved by identifying electrode placements that minimized the induced magnetohydrodynamic voltages (VMHD) in the ECG signals. Reducing VMHD can improve the accuracy of QRS complex detection in the ECG, as well as heartbeat synchronization between MRI and ECG during the acquisition of cardiac cine. A vector model based on thoracic geometry was developed to predict induced VMHD and to optimize four-lead ECG electrode placement for improved MRI gating. Four human subjects were recruited to establish the vector model (Group 1), and five human subjects were recruited to validate the VMHD reduction of the proposed four-lead ECG (Group 2). The vector model was established using 12-lead ECG data recorded from the four healthy Group 1 subjects at 3 Tesla, and a gradient descent optimization routine was used to predict the optimal four-lead ECG placement based on VMHD vector alignment. The optimized four-lead ECG was then validated in the five healthy Group 2 subjects by comparing the standard and proposed lead placements. A 43.41% reduction in VMHD was observed in ECGs using the proposed electrode placement, and the QRS complex was preserved. A VMHD-minimized electrode placement for four-lead ECG gating was thus presented and shown to reduce induced magnetohydrodynamic (MHD) signals, potentially allowing improved physiological monitoring during cardiac MRI.
Open Access Editorial: Detection of Moving Objects
J. Imaging 2018, 4(7), 93; https://doi.org/10.3390/jimaging4070093
Received: 5 July 2018 / Revised: 9 July 2018 / Accepted: 12 July 2018 / Published: 13 July 2018
PDF Full-text (130 KB) | HTML Full-text | XML Full-text
(This article belongs to the Special Issue Detection of Moving Objects)
Open Access Article: Background Subtraction Based on a New Fuzzy Mixture of Gaussians for Moving Object Detection
J. Imaging 2018, 4(7), 92; https://doi.org/10.3390/jimaging4070092
Received: 15 May 2018 / Revised: 14 June 2018 / Accepted: 28 June 2018 / Published: 10 July 2018
PDF Full-text (1047 KB) | HTML Full-text | XML Full-text
Abstract
Moving foreground detection is a very important step for many applications, such as human behavior analysis for visual surveillance, model-based action recognition, and road traffic monitoring. Background subtraction is a very popular approach, but it is difficult to apply because it must overcome many obstacles, such as dynamic background changes, lighting variations, and occlusions. In the presented work, we focus on this foreground/background segmentation problem, using type-2 fuzzy modeling to manage the uncertainty of the video process and of the data. The proposed method models the state of each pixel using an imprecise and adjustable Gaussian mixture model, which is exploited by several fuzzy classifiers to ultimately estimate the pixel class for each frame. More precisely, this decision takes into account not only the history of the pixel's evolution, but also its spatial neighborhood and its possible displacements in the previous frames. We then compare the proposed method with closely related methods, including methods based on a Gaussian mixture model or on fuzzy sets. This comparison allows us to assess our method's performance and to propose some perspectives for this work.
(This article belongs to the Special Issue Detection of Moving Objects)
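The fuzzy mixture above builds on the classical per-pixel Gaussian mixture baseline (Stauffer–Grimson style). As a rough illustration of that baseline only, not the authors' fuzzy variant, here is a minimal NumPy sketch with illustrative parameter values:

```python
import numpy as np

K = 3          # Gaussians per pixel
ALPHA = 0.05   # learning rate
T = 2.5        # Mahalanobis match threshold (in standard deviations)

def init_model(first_frame):
    """One K-component mixture per pixel: weights, means, variances."""
    h, w = first_frame.shape
    w_ = np.full((h, w, K), 1.0 / K)
    mu = np.repeat(first_frame[..., None], K, axis=2).astype(float)
    var = np.full((h, w, K), 15.0 ** 2)
    return w_, mu, var

def update(frame, w_, mu, var):
    """Return a foreground mask and update the mixture in place."""
    d = frame[..., None] - mu                 # distance to each component mean
    match = (d ** 2) < (T ** 2) * var        # per-component match test
    best = np.argmax(match, axis=2)          # first matching component
    any_match = match.any(axis=2)

    # Pixels with no match, or whose matched component carries low
    # weight, are labeled foreground.
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    fg = ~any_match | (w_[yy, xx, best] < 1.0 / K)

    # Update only the matched component (update factor simplified to ALPHA).
    m = match & (np.arange(K) == best[..., None])
    w_ += ALPHA * (m - w_)
    mu += np.where(m, ALPHA * d, 0.0)
    var += np.where(m, ALPHA * (d ** 2 - var), 0.0)
    w_ /= w_.sum(axis=2, keepdims=True)
    return fg
```

The paper's contribution replaces these crisp match tests and weights with type-2 fuzzy memberships and fuzzy classifiers over the spatio-temporal neighborhood.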
Open Access Article: Faster R-CNN-Based Glomerular Detection in Multistained Human Whole Slide Images
J. Imaging 2018, 4(7), 91; https://doi.org/10.3390/jimaging4070091
Received: 16 May 2018 / Revised: 13 June 2018 / Accepted: 2 July 2018 / Published: 4 July 2018
PDF Full-text (5364 KB) | HTML Full-text | XML Full-text
Abstract
The detection of objects of interest in high-resolution digital pathological images is a key part of diagnosis and is a labor-intensive task for pathologists. In this paper, we describe a Faster R-CNN-based approach for the detection of glomeruli in multistained whole slide images (WSIs) of human renal tissue sections. Faster R-CNN is a state-of-the-art general object detection method based on a convolutional neural network, which simultaneously proposes object bounds and objectness scores at each point in an image. The method takes an image obtained from a WSI with a sliding window and classifies and localizes every glomerulus in the image by drawing bounding boxes. We configured Faster R-CNN with a pretrained Inception-ResNet model and retrained it for our task, then evaluated it on a large dataset consisting of more than 33,000 annotated glomeruli obtained from 800 WSIs. The results showed that the approach produces average F-measures comparable to or higher than those of other recently published approaches across different stains. This approach could have practical applications in hospitals and laboratories for the quantitative analysis of glomeruli in WSIs and could potentially lead to a better understanding of chronic glomerulonephritis.
(This article belongs to the Special Issue Medical Image Analysis)
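The sliding-window strategy described in the abstract, tiling a gigapixel WSI, running the detector on each tile, and mapping detections back to slide coordinates, can be sketched as follows. The `detector` callable and the window/stride values are placeholders, not the paper's configuration:

```python
from typing import Callable, List, Tuple
import numpy as np

# A detection: x1, y1, x2, y2, score (tile-local coordinates).
Box = Tuple[float, float, float, float, float]

def detect_wsi(image: np.ndarray,
               detector: Callable[[np.ndarray], List[Box]],
               win: int = 1024, stride: int = 768) -> List[Box]:
    """Slide a window over a large slide image, run the detector on each
    tile, and shift tile-local boxes into global slide coordinates.
    `detector` stands in for a trained model such as Faster R-CNN."""
    h, w = image.shape[:2]
    boxes: List[Box] = []
    for y in range(0, max(h - win, 0) + 1, stride):
        for x in range(0, max(w - win, 0) + 1, stride):
            tile = image[y:y + win, x:x + win]
            for (x1, y1, x2, y2, s) in detector(tile):
                boxes.append((x1 + x, y1 + y, x2 + x, y2 + y, s))
    return boxes
```

A real pipeline would additionally merge duplicate boxes in the overlap regions (e.g., with non-maximum suppression) before reporting counts.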
Open Access Article: Compressive Online Video Background–Foreground Separation Using Multiple Prior Information and Optical Flow
J. Imaging 2018, 4(7), 90; https://doi.org/10.3390/jimaging4070090
Received: 1 May 2018 / Revised: 15 June 2018 / Accepted: 27 June 2018 / Published: 3 July 2018
PDF Full-text (3290 KB) | HTML Full-text | XML Full-text
Abstract
In the context of video background–foreground separation, we propose a compressive online Robust Principal Component Analysis (RPCA) method with optical flow that recursively separates a sequence of video frames into foreground (sparse) and background (low-rank) components. This separation method operates on a small set of measurements taken per frame, in contrast to conventional batch-based RPCA, which processes the full data. The proposed method also leverages multiple prior information by incorporating previously separated background and foreground frames in an n-ℓ1 minimization problem. Moreover, optical flow is used to estimate motion between the previous foreground frames and then compensate for it, yielding higher-quality prior foregrounds that improve the separation. Our method is tested on several video sequences in different scenarios for online background–foreground separation from compressive measurements. The visual and quantitative results show that the proposed method outperforms other existing methods.
(This article belongs to the Special Issue Detection of Moving Objects)
Open Access Article: Digital Comics Image Indexing Based on Deep Learning
J. Imaging 2018, 4(7), 89; https://doi.org/10.3390/jimaging4070089
Received: 30 April 2018 / Revised: 21 June 2018 / Accepted: 27 June 2018 / Published: 2 July 2018
PDF Full-text (7516 KB) | HTML Full-text | XML Full-text
Abstract
The digital comic book market is growing every year, mixing digitized and born-digital comics. Digitized comics suffer from limited automatic content understanding, which restricts online content search and reading applications. This study shows how to combine state-of-the-art image analysis methods to encode and index images into an XML-like text file. The content description file can then be used to automatically split comic book images into sub-images corresponding to panels, easily indexable with relevant information about their respective content. This allows advanced search for keywords spoken by specific comic characters, as well as action and scene retrieval using natural language processing. We address panel, balloon, text, comic character and face detection using both traditional approaches and recent deep learning models, as well as text recognition using an LSTM model. Evaluations on a dataset composed of online library content are presented, and a new public dataset is also proposed.
(This article belongs to the Special Issue Image Based Information Retrieval from the Web)
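As an illustration of the XML-like content description file the abstract mentions, here is a minimal sketch using Python's standard library; the tag and attribute names are invented for this example, not the paper's actual schema:

```python
import xml.etree.ElementTree as ET

def build_description(page_id, panels):
    """Encode detected elements of one comic page as an XML description.
    `panels` is a list of dicts, each with a bounding box and nested
    balloons carrying the recognized text (output of the OCR stage)."""
    page = ET.Element("page", id=page_id)
    for i, p in enumerate(panels):
        panel = ET.SubElement(page, "panel", id=f"{page_id}-p{i}",
                              bbox=",".join(map(str, p["bbox"])))
        for b in p.get("balloons", []):
            balloon = ET.SubElement(panel, "balloon",
                                    speaker=b.get("speaker", "unknown"))
            balloon.text = b["text"]
    return ET.tostring(page, encoding="unicode")
```

Such a file makes each panel and balloon individually addressable, which is what enables the per-character keyword search described above.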
Open Access Article: Imaging with a Commercial Electron Backscatter Diffraction (EBSD) Camera in a Scanning Electron Microscope: A Review
J. Imaging 2018, 4(7), 88; https://doi.org/10.3390/jimaging4070088
Received: 21 May 2018 / Revised: 14 June 2018 / Accepted: 20 June 2018 / Published: 1 July 2018
PDF Full-text (4819 KB) | HTML Full-text | XML Full-text
Abstract
Scanning electron microscopy is widespread in the field of materials science and research, especially because of its high surface sensitivity, owing to the increased interaction of electrons with the target material's atoms compared to X-ray-oriented methods. Among the techniques available in scanning electron microscopy (SEM), electron backscatter diffraction (EBSD) is used to gather information regarding the crystallinity and the chemistry of crystalline and amorphous regions of a specimen. By post-processing the diffraction patterns, or the images captured by the EBSD detector screen, specific imaging contrasts are generated that can be used to understand some of the mechanisms involved in several imaging modes. In this manuscript, we review the benefits of this procedure for topographic, compositional, diffraction, and magnetic domain contrasts. This work shows preliminary and encouraging results regarding the non-conventional use of the EBSD detector. The method is becoming viable with the advent of new EBSD camera technologies, which allow acquisition speeds close to imaging rates. This method, named dark-field electron backscatter diffraction imaging, is described in detail, and several application examples are given in both reflection and transmission modes.
(This article belongs to the Special Issue Phase-Contrast and Dark-Field Imaging)
Open Access Article: A Survey of Comics Research in Computer Science
J. Imaging 2018, 4(7), 87; https://doi.org/10.3390/jimaging4070087
Received: 21 May 2018 / Revised: 15 June 2018 / Accepted: 20 June 2018 / Published: 26 June 2018
Cited by 1 | PDF Full-text (3130 KB) | HTML Full-text | XML Full-text
Abstract
Graphic novels such as comic books and mangas are well known all over the world. The digital transition has started to change the way people read comics: more and more on smartphones and tablets, and less and less on paper. In recent years, a wide variety of research about comics has been proposed that might change the way comics are created, distributed and read in the future. Early work focused on low-level document image analysis: comic books are complex, containing text, drawings, balloons, panels, onomatopoeia, etc. Different fields of computer science have since covered research about user interaction and content generation, such as multimedia, artificial intelligence, and human–computer interaction, each with different sets of values. We review previous research about comics in computer science to state what has been done and to give some insights about the main outlooks.
Open Access Article: LaBGen-P-Semantic: A First Step for Leveraging Semantic Segmentation in Background Generation
J. Imaging 2018, 4(7), 86; https://doi.org/10.3390/jimaging4070086
Received: 16 May 2018 / Revised: 8 June 2018 / Accepted: 18 June 2018 / Published: 25 June 2018
PDF Full-text (10901 KB) | HTML Full-text | XML Full-text
Abstract
Given a video sequence acquired from a fixed camera, the stationary background generation problem consists of generating a unique image estimating the stationary background of the sequence. During the IEEE Scene Background Modeling Contest (SBMC) organized in 2016, we presented the LaBGen-P method. In short, this method relies on a motion detection algorithm to select, for each pixel location, a given number of pixel intensities that are most likely static, by keeping the ones with the smallest quantities of motion. These quantities are estimated by aggregating the motion scores returned by the motion detection algorithm over the spatial neighborhood of the pixel. After this selection process, the background image is generated by blending the selected intensities with a median filter. In our previous works, we showed that using a temporally memoryless motion detection algorithm, which detects motion between two frames without relying on additional temporal information, leads our method to achieve its best performance. In this work, we go one step further by developing LaBGen-P-Semantic, a variant of LaBGen-P whose motion detection step is built on the current frame only, using semantic segmentation. For this purpose, two intra-frame motion detection algorithms, which detect motion from a single frame, are presented and compared. Our experiments, carried out on the Scene Background Initialization (SBI) and SceneBackgroundModeling.NET (SBMnet) datasets, show that leveraging semantic segmentation improves robustness against intermittent motion, background motion and very short video sequences, which are among the main challenges in the background generation field. Moreover, our results confirm that intra-frame motion detection is an appropriate choice for our method and pave the way for more techniques based on semantic segmentation.
(This article belongs to the Special Issue Detection of Moving Objects)
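The per-pixel selection-and-median idea behind LaBGen-P can be sketched in a few lines of NumPy. This is a simplification that ignores the spatial aggregation of motion scores described above:

```python
import numpy as np

def generate_background(frames, motion, k=5):
    """Estimate a stationary background from a stack of frames.
    frames:  (T, H, W) grayscale sequence
    motion:  (T, H, W) per-pixel motion scores from any motion detector
    For each pixel location, keep the k intensities with the smallest
    motion scores, then blend them with a median, as in LaBGen-P."""
    # Indices of the k least-moving frames, independently per pixel.
    order = np.argsort(motion, axis=0)[:k]
    selected = np.take_along_axis(frames, order, axis=0)
    return np.median(selected, axis=0)
```

In LaBGen-P-Semantic, the `motion` array would come from a semantic segmentation of the current frame alone (e.g., person/vehicle masks) rather than from frame differencing.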
Open Access Article: Fisher Vector Coding for Covariance Matrix Descriptors Based on the Log-Euclidean and Affine Invariant Riemannian Metrics
J. Imaging 2018, 4(7), 85; https://doi.org/10.3390/jimaging4070085
Received: 29 April 2018 / Revised: 7 June 2018 / Accepted: 18 June 2018 / Published: 22 June 2018
PDF Full-text (3976 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents an overview of coding methods used to encode a set of covariance matrices. Starting from a Gaussian mixture model (GMM) adapted to the Log-Euclidean (LE) or affine invariant Riemannian metric, we propose a Fisher Vector (FV) descriptor adapted to each of these metrics: the Log-Euclidean Fisher Vectors (LE FV) and the Riemannian Fisher Vectors (RFV). Experiments on texture and head pose image classification are conducted to compare these two metrics and to illustrate the potential of these FV-based descriptors compared to state-of-the-art BoW- and VLAD-based descriptors. Particular attention is paid to illustrating the advantage of using the Fisher information matrix during the derivation of the FV. Finally, some experiments are conducted to provide a fairer comparison between the different coding strategies, including comparisons between anisotropic and isotropic models, and an analysis of the estimation performance of the GMM dispersion parameter for covariance matrices of large dimension.
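The Log-Euclidean metric underlying the LE FV maps symmetric positive-definite covariance matrices through the matrix logarithm, into a space where ordinary Euclidean tools (and hence GMMs and Fisher Vectors) apply. A small sketch of the distance computation:

```python
import numpy as np

def logm_spd(C):
    """Matrix logarithm of a symmetric positive-definite matrix,
    computed via its eigendecomposition."""
    vals, vecs = np.linalg.eigh(C)
    return (vecs * np.log(vals)) @ vecs.T

def le_distance(C1, C2):
    """Log-Euclidean distance: Frobenius norm between matrix logs.
    This flattens the SPD manifold so that Euclidean machinery can be
    applied to covariance matrix descriptors."""
    return np.linalg.norm(logm_spd(C1) - logm_spd(C2), ord="fro")
```

The affine invariant Riemannian metric compared in the paper instead measures distance along geodesics of the SPD manifold itself, which is more expensive but invariant to affine changes of coordinates.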
Open Access Editorial: Document Image Processing
J. Imaging 2018, 4(7), 84; https://doi.org/10.3390/jimaging4070084
Received: 15 June 2018 / Accepted: 15 June 2018 / Published: 22 June 2018
PDF Full-text (150 KB) | HTML Full-text | XML Full-text
(This article belongs to the Special Issue Document Image Processing)
Open Access Article: Evaluation of 3D/2D Imaging and Image Processing Techniques for the Monitoring of Seed Imbibition
J. Imaging 2018, 4(7), 83; https://doi.org/10.3390/jimaging4070083
Received: 18 May 2018 / Revised: 11 June 2018 / Accepted: 18 June 2018 / Published: 21 June 2018
PDF Full-text (2484 KB) | HTML Full-text | XML Full-text
Abstract
Seed imbibition is a very important process in plant biology by which, through a simple uptake of water, a dry seed may turn into a developing organism. In natural conditions, this process occurs in the soil, i.e., with difficult access for direct observation. Monitoring seed imbibition with non-invasive imaging techniques is therefore an important, and possibly challenging, task if one tries to perform it in natural conditions. In this report, we describe a set of four different imaging techniques that enable this task to be addressed either in 3D or in 2D. For each technique, the following items are provided: a detailed experimental protocol for acquiring images of the imbibition process; an illustration, with real data, of the significance of the measured physical quantities in terms of their relation to the uptake of water by the seed; and complete image analysis pipelines to extract dynamic information on the imbibition process from such monitoring experiments. A final discussion compares the advantages and current limitations of each technique, together with elements concerning the associated throughput and cost. These criteria are especially relevant in the field of plant phenotyping, where large populations of plants are imaged to produce quantitatively significant traits after image processing.