Special Issue "Big Visual Data Processing and Analytics"

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (30 April 2017)

Special Issue Editors

Guest Editor
Prof. Dr. Xinmei Tian

Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, Anhui 230027, China
Website | E-Mail
Phone: +86-551-6360-0281
Interests: image understanding; machine learning; multimedia information retrieval
Guest Editor
Prof. Dr. Fionn Murtagh

Big Data Lab, Computing and Mathematics, College of Engineering and Technology, University of Derby, UK
Website | E-Mail
Phone: +44 133 259 2471
Interests: data science; big data analytics; geometry and topology of data and information; digital content analytics; computational science (including innovative models and paradigms from digital humanities and quantitative and qualitative social sciences); applications in engineering, psychoanalysis, astronomy, and many other fields
Guest Editor
Prof. Dr. Dacheng Tao

Centre for Quantum Computation & Intelligent Systems and the Faculty of Engineering and Information Technology, University of Technology, Sydney, 81 Broadway Street, Ultimo, NSW 2007, Australia
Website | E-Mail
Phone: +61 2 95141829
Interests: computer vision; image processing; data science; machine learning; neural networks

Special Issue Information

Dear Colleagues,

There is a broad range of important applications that rely on accurate visual data processing and analytics. However, accurately understanding visual data remains a highly challenging problem. Recently developed techniques for visual data collection, storage, and transmission have ushered in an era of data deluge. The ever-increasing volume of visual data presents both challenges and opportunities for data analysis and image processing. On the one hand, although big visual data carries richer information, mining reliable and useful knowledge from such large volumes is difficult. On the other hand, big data and images also provide opportunities to address traditional challenges by leveraging advanced machine learning tools, such as deep learning. Advanced techniques and methodologies are therefore needed to better analyze and understand big visual data.

This Special Issue aims to provide an opportunity for colleagues to share high-quality research articles that address the broad challenges of big visual data processing and analytics. We invite original research articles, as well as review articles, that make significant advances toward efficient techniques for the deep understanding of big visual data.

The topics of interest of this Special Issue include, but are not limited to:

  • Techniques, processes, and methods for collecting and analyzing visual data
  • Statistical techniques for visual data analysis
  • Systems for large-scale visual data
  • Visual data search and mining
  • Applications of visual data analysis: web, multimedia, finance, genomics, bioinformatics, social sciences and social networks
  • Image capturing and generation
  • Image analysis and interpretation
  • Image processing applications
  • Image coding analysis and recognition
  • Image representation
  • Image sensing, classification, retrieval, categorization and clustering approaches
  • Remote image sensing
  • Signal-processing aspects of image processing

Prof. Dr. Xinmei Tian
Prof. Dr. Fionn Murtagh
Prof. Dr. Dacheng Tao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) is waived for well-prepared manuscripts submitted to this issue. Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Published Papers (3 papers)


Research

Open Access Article: 3D Reconstructions Using Unstabilized Video Footage from an Unmanned Aerial Vehicle
J. Imaging 2017, 3(2), 15; doi:10.3390/jimaging3020015
Received: 31 January 2017 / Revised: 12 April 2017 / Accepted: 18 April 2017 / Published: 22 April 2017
Cited by 4 | PDF Full-text (6276 KB) | HTML Full-text | XML Full-text
Abstract
Structure from motion (SfM) is a methodology for automatically reconstructing three-dimensional (3D) models from a series of two-dimensional (2D) images when there is no a priori knowledge of the camera location and direction. Modern unmanned aerial vehicles (UAVs) now provide a low-cost means of obtaining aerial video footage of a point of interest. Unfortunately, raw video lacks the information required by SfM software, as it does not record exchangeable image file (EXIF) information for the frames. In this work, a solution is presented to modify aerial video so that it can be used for photogrammetry. The paper then examines how the field of view affects the quality of the reconstruction. The input is unstabilized, distorted video footage obtained from a low-cost UAV, which is then combined with an open-source SfM system to reconstruct a 3D model. This approach creates a high-quality reconstruction by reducing the number of unknown variables, such as focal length and sensor size, while increasing the data density. The experiments examine the optical field-of-view settings that provide sufficient overlap without sacrificing image quality or exacerbating distortion. The system costs less than €1000, and the results show the ability to produce 3D models of centimeter-level accuracy. For verification, the results were compared against millimeter-level accurate models derived from laser scanning. Full article
(This article belongs to the Special Issue Big Visual Data Processing and Analytics)
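
The field-of-view/overlap trade-off discussed in the abstract can be sketched numerically. The snippet below is an illustration only, not the authors' code: it estimates how many video frames can be skipped between extracted stills while still keeping a target forward overlap, assuming a nadir-pointing camera over flat ground (the function name and parameters are hypothetical).

```python
import math

def frame_step_for_overlap(fov_deg, altitude_m, speed_mps, fps, target_overlap=0.8):
    """Largest frame-sampling step that keeps at least `target_overlap`
    between consecutive extracted frames (nadir camera, flat ground)."""
    # ground footprint width of one frame, in metres
    footprint = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    # ground distance the camera travels per video frame
    advance_per_frame = speed_mps / fps
    # allowed ground distance between two kept frames
    max_gap = (1.0 - target_overlap) * footprint
    step = int(max_gap // advance_per_frame)
    return max(step, 1)

# e.g. a 90-degree lens at 50 m altitude, 5 m/s, 25 fps video:
step = frame_step_for_overlap(90.0, 50.0, 5.0, 25.0, target_overlap=0.8)
```

A wider field of view permits a larger sampling step (fewer frames), but, as the paper notes, at the cost of more lens distortion; the function above captures only the overlap side of that trade-off.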

Open Access Article: Visual Analytics of Complex Genomics Data to Guide Effective Treatment Decisions
J. Imaging 2016, 2(4), 29; doi:10.3390/jimaging2040029
Received: 21 May 2016 / Revised: 18 September 2016 / Accepted: 23 September 2016 / Published: 30 September 2016
PDF Full-text (6668 KB) | HTML Full-text | XML Full-text
Abstract
In cancer biology, genomics represents a big data problem that needs accurate visual data processing and analytics. The human genome is very complex, with thousands of genes that contain information about individual patients and the biological mechanisms of their disease. Therefore, when building a framework for personalised treatment, the complexity of the genome must be captured in meaningful and actionable ways. This paper presents a novel visual analytics framework that enables effective analysis of large and complex genomics data. By providing interactive visualisations, from an overview of the entire patient cohort down to a detail view of individual genes, our work potentially guides effective treatment decisions for childhood cancer patients. The framework consists of multiple components supporting complete analytics for personalised medicine, including similarity space construction, automated analysis, visualisation, gene-to-gene comparison, and user-centric interaction and exploration based on feature selection. In addition to traditional ways of visualising data, we utilise the Unity3D platform to develop a smooth and interactive visual presentation of the information. This aims to provide better rendering, image quality, ergonomics and user experience to non-specialists or young users who are familiar with 3D gaming environments and interfaces. We illustrate the effectiveness of our approach through case studies on datasets from the childhood cancers B-cell Acute Lymphoblastic Leukaemia (ALL) and Rhabdomyosarcoma (RMS), showing how the framework can guide effective treatment decisions in the cohort. Full article
(This article belongs to the Special Issue Big Visual Data Processing and Analytics)
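
The "similarity space construction" step mentioned in the abstract can be illustrated with a standard dimensionality reduction. The paper does not specify which projection it uses, so the PCA-via-SVD sketch below is only a hedged stand-in, and all names are hypothetical: it maps a patients-by-genes expression matrix into a low-dimensional space where nearby points are similar patients.

```python
import numpy as np

def similarity_space(expr, n_components=2):
    """Project a (patients x genes) expression matrix into a
    low-dimensional similarity space via PCA (centred SVD)."""
    X = expr - expr.mean(axis=0)               # centre each gene
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # scores = left singular vectors scaled by singular values
    return U[:, :n_components] * s[:n_components]

# toy cohort: 20 patients, 100 genes
rng = np.random.default_rng(0)
coords = similarity_space(rng.normal(size=(20, 100)))
```

The resulting 2D (or 3D) coordinates are what a Unity3D-style front end would render as the interactive cohort overview; gene-level detail views would then drill into the original matrix columns.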

Open Access Article: VIIRS Day/Night Band—Correcting Striping and Nonuniformity over a Very Large Dynamic Range
J. Imaging 2016, 2(1), 9; doi:10.3390/jimaging2010009
Received: 1 December 2015 / Revised: 26 January 2016 / Accepted: 26 February 2016 / Published: 14 March 2016
PDF Full-text (13483 KB) | HTML Full-text | XML Full-text
Abstract
The Suomi National Polar-orbiting Partnership (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) Day-Night Band (DNB) measures visible and near-infrared light over seven orders of magnitude of dynamic range, which makes radiometric calibration difficult. We have observed that DNB imagery exhibits striping, banding and other nonuniformities, day or night. We identified the causes as stray light, nonlinearity, detector crosstalk, hysteresis and mirror-side variation, and found that these affect both Earth-view and calibration signals, presenting an obstacle to interpretation by users of DNB products. Because of the nonlinearity, we chose the histogram-matching destriping technique, which we found to be successful for daytime, twilight and nighttime scenes. Because of the very large dynamic range of the DNB, we added special processes to the histogram matching to destripe all scenes, especially imagery in the twilight regions, where scene illumination changes rapidly over short distances. We show that destriping aids image analysts and enables advanced automated cloud-typing algorithms. Manual or automatic identification of other features, including polar ice and gravity waves in the upper atmosphere, is also discussed. Given the large volume of data produced 24 hours a day by the VIIRS DNB, we present methods for reducing processing time. Full article
(This article belongs to the Special Issue Big Visual Data Processing and Analytics)
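
Histogram-matching destriping, the core technique named in the abstract, can be sketched in a few lines. This is a simplified stand-in, not the authors' pipeline (which adds special handling for twilight scenes): each detector row's value ranks are remapped onto the quantiles of the whole scene, removing per-detector nonlinear response differences without assuming a linear gain/offset model.

```python
import numpy as np

def destripe_histogram_match(img):
    """Match each detector row's histogram to the pooled scene histogram.
    Rank-based remapping: the j-th smallest value in a row is replaced by
    the scene value at the same quantile."""
    ref = np.sort(img.ravel())                     # pooled reference distribution
    out = np.empty_like(img, dtype=float)
    for i, row in enumerate(img):
        order = np.argsort(row)                    # positions of sorted row values
        q = (np.arange(row.size) + 0.5) / row.size # target quantiles
        out[i, order] = np.quantile(ref, q)        # scene values at those quantiles
    return out

# rows with the same underlying signal but different detector offsets
base = np.linspace(0.0, 1.0, 50)
img = np.stack([base, base + 0.5, base - 0.3])
flat = destripe_histogram_match(img)
```

Because the mapping is monotonic per row, relative scene structure within each row is preserved while the row-to-row offsets (the stripes) are removed.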

Planned Papers

The list below represents only planned manuscripts. Some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.

Title: An Evaluation of Face Recognition with Bag-of-Features
Authors: Fumin Shen, Chunhua Shen, Yang Yang and Heng Tao Shen
Abstract: Linear representation (especially sparse representation) based classification has become very popular in the face recognition community in recent years. This approach shares many components with the Bag-of-Features (BoF) classification framework. While the latter has achieved state-of-the-art performance in many computer vision applications, it has been less studied for face recognition. In this work, we thoroughly evaluate the unsupervised BoF model in the face recognition scenario. Surprising results are obtained: we show that, with the same low-level features (global or local) as input, the BoF-based framework is not only more robust to various variations in face images but also requires fewer algorithmic restrictions than the representation-based one. Additional interesting results are also found. In particular, we suggest a face recogniser composed of a dictionary of randomly extracted patches, the 'triangle' encoder and pyramid pooling, followed by a very simple linear classifier. The proposed method is shown to be highly efficient and to achieve state-of-the-art performance on several public databases, e.g., FERET, Multi-PIE and LFW.
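
The 'triangle' encoder named in the abstract is a known soft-assignment scheme (Coates et al.): a patch's activation for codeword k is the amount by which its distance to k beats its mean distance to all codewords. The sketch below illustrates that encoding step only; it is not the authors' implementation, and the function names are hypothetical.

```python
import numpy as np

def triangle_encode(patches, dictionary):
    """'Triangle' encoding: activation_k = max(0, mean_j(z_j) - z_k),
    where z_k is the Euclidean distance from the patch to codeword k.
    patches: (n, d) descriptors; dictionary: (K, d) random patches."""
    # pairwise distances, shape (n, K)
    z = np.linalg.norm(patches[:, None, :] - dictionary[None, :, :], axis=2)
    mu = z.mean(axis=1, keepdims=True)      # mean distance per patch
    return np.maximum(0.0, mu - z)          # closer than average -> positive code

# dictionary of randomly extracted patches, as the abstract suggests
rng = np.random.default_rng(1)
D = rng.normal(size=(8, 16))
codes = triangle_encode(D[:3], D)           # encode three of the patches
```

In the full pipeline of the abstract, these sparse non-negative codes would then be pyramid-pooled per image region and fed to a linear classifier.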

Title: Cell Segmentation and Tracking for Cellular Blebbing Image Series
Authors: Chengpan Li, Weiping Ding and Dong Liu
Abstract: The dynamic process of cellular blebbing is of great interest for better understanding the mechanisms contributing to cell motility, cytokinesis and apoptosis in various cells. Time-lapse microscopy imaging serves as a useful tool for observing the dynamic process of cells, but existing methods for time-lapse image data are rather limited and often produce inaccurate results, especially on blebbing-cell images. In this paper, we propose a new method to segment and track cells during blebbing in time-lapse image series, so as to analyze the dynamic properties of cellular blebs. We first design two structuring elements based on mathematical morphology to detect the cell edge for cells with and without blebs, respectively, since the two kinds of cells have significantly different boundaries. We then adopt an ellipse fitting method, based on cell shape and a minimum-deformation constraint between two successive images, to segment and track blebs with less over-segmentation. The proposed method is verified on our captured cell blebbing images, and the results consistently match human observation, demonstrating the usefulness of the method for automatic analysis of cell blebbing images.
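
The morphological edge-detection idea in the abstract can be illustrated with a morphological gradient (dilation minus erosion). The sketch below uses a single 3x3 square structuring element on a binary mask; it is a simplified stand-in for the paper's two shape-specific structuring elements, and the function name is hypothetical.

```python
import numpy as np

def morph_gradient(mask):
    """Morphological gradient of a binary mask with a 3x3 square
    structuring element: edge = dilation AND NOT erosion."""
    p = np.pad(mask.astype(bool), 1)        # pad with False
    # the 9 shifted views covering the 3x3 neighbourhood of each pixel
    shifts = [p[1 + dy : p.shape[0] - 1 + dy, 1 + dx : p.shape[1] - 1 + dx]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    stack = np.stack(shifts)
    dil = stack.any(axis=0)                 # dilation: any neighbour set
    ero = stack.all(axis=0)                 # erosion: all neighbours set
    return dil & ~ero                       # boundary pixels

# a 3x3 cell body in a 5x5 field: the gradient rings the body
mask = np.zeros((5, 5), dtype=int)
mask[1:4, 1:4] = 1
edge = morph_gradient(mask)
```

A blebbing cell would call for a larger or differently shaped structuring element so that protrusions are not fragmented, which is the motivation for the paper's two-element design.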

Title: A Survey on Image Aesthetic Quality Assessment
Authors: Zhe Dong and Hao Lv et al.
Abstract: Visual aesthetic quality assessment has drawn much attention in recent years. This research topic mainly aims at automatically classifying images/videos as "good" or "bad" from the perspective of human aesthetic standards. Effective aesthetic quality assessment algorithms enable many appealing applications, and experts have therefore conducted a great deal of work on this topic. However, only limited success has been achieved, since the problem is highly abstract and very challenging. The purpose of this paper is to give a comprehensive survey of existing work on aesthetic quality assessment and an in-depth discussion of possible directions for further research in this field.

Title: Deep Learning for Texture Analysis: A Review
Authors: Samar Shahbazzadeh (RMIT University, Melbourne), Amir Dadashnialehi, Reza Hoseinnezhad, Alireza Bab-Hadiashar
Abstract: Deep learning techniques have been recently applied with outstanding success to a number of machine learning problems. The texture analysis problem is a significant and active field of research in computer vision and machine learning. This work aims to review the state-of-the-art in deep learning algorithms for the texture analysis problem. Deep learning architectures and training methods that have been tested on a variety of texture databases are reviewed and future research directions are suggested.

Journal Contact

MDPI AG
J. Imaging Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
E-Mail: 
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18