New Methods for Omni-directional and Equirectangular Image and Video Processing

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Image and Video Processing".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 2369

Special Issue Editors


Guest Editor
Dipartimento di Ingegneria, Università degli Studi di Palermo, 90128 Palermo, Italy
Interests: computer vision; artificial intelligence; visual tracking; action recognition; photo and video retrieval

Guest Editor
Dipartimento di Ingegneria, Università degli Studi di Palermo, 90128 Palermo, Italy
Interests: computer vision; visual tracking; action recognition; behavior modeling and understanding; camera networks

Special Issue Information

Dear Colleagues,

Omni-directional (or 360°) cameras are devices able to record a spherical view of the whole environment, unlike traditional cameras, which have a predefined field of view. Indeed, 360° cameras can generally acquire panoramic images with a 360° horizontal and 180° vertical view, resulting in a complete representation of the environment. The newest omni-directional devices use multiple calibrated cameras with partially overlapping fields of view. Each camera captures part of the scene, and the final image is reconstructed by stitching algorithms after correcting the distortion introduced by the lenses. The most popular 360° cameras typically comprise two wide-angle lenses; the entire system is relatively inexpensive, and the stitching process is much simpler and quite efficient. Recently, such cameras have gained popularity and their use is spreading, especially in consumer and cultural heritage applications. Users may interact with a recorded video by navigating around the environment and changing the point of view; 360° pictures and videos can be uploaded to and used on several social platforms (such as Facebook and YouTube) and can also be viewed through head-mounted displays (Google Cardboard, Oculus Quest, etc.) to improve the users' sense of immersion.

Regardless of the acquisition device, the pixels of the sensed images are mapped onto a sphere, and then projection techniques, such as equirectangular or cubic projections, are applied. Cubic projections are mainly adopted to navigate through the environment. Equirectangular projections represent the sphere on a single image with a 2:1 aspect ratio and are mainly used to store the data. This projection introduces distortions that are particularly visible around the poles of the sphere.
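To make the mapping concrete, the short sketch below (illustrative Python/NumPy, with hypothetical function names) converts spherical coordinates to pixel coordinates on a 2:1 equirectangular image and estimates the solid angle covered by a pixel at a given row, which shrinks toward the poles and explains why the polar regions appear stretched:

```python
import numpy as np

def sphere_to_equirect(lat, lon, width, height):
    """Map spherical coordinates (radians) to pixel coordinates on a
    2:1 equirectangular image (width = 2 * height)."""
    u = (lon / (2 * np.pi) + 0.5) * width   # longitude -> column
    v = (0.5 - lat / np.pi) * height        # latitude  -> row
    return u, v

def pixel_solid_angle(v, width, height):
    """Solid angle (steradians) covered by a pixel in row v: it shrinks
    toward the poles, which is where the distortion becomes visible."""
    lat = (0.5 - (v + 0.5) / height) * np.pi
    d_lat = np.pi / height
    d_lon = 2 * np.pi / width
    return d_lon * np.cos(lat) * d_lat

# Every image row has the same number of pixels, but rows near the poles
# cover a far smaller portion of the sphere than rows near the equator.
h, w = 512, 1024
equator = pixel_solid_angle(h // 2, w, h)
pole = pixel_solid_angle(0, w, h)
print(equator / pole)   # large ratio: pole pixels are heavily oversampled
```

For a 1024 × 512 image, a pixel at the equator covers a few hundred times more of the sphere than one in the top row; that oversampling is exactly what shows up as visible distortion around the poles.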

These 360° videos are potentially very attractive in the fields of mixed reality, mobile robotics, video surveillance, and distancing applications. Nonetheless, few studies have addressed the processing of equirectangular images in these fields. This might be ascribable to the challenges posed by these images and videos, which may hinder the development of methods for their processing, especially deep learning techniques. Indeed, equirectangular images have high resolution and exhibit severe deformations that may inhibit the adoption of state-of-the-art computer vision and image processing techniques. Some attempts have been made to adapt pre-trained networks to equirectangular formats (e.g., SphereNet) or to adopt cubic projections on demand; however, issues remain with the high computational demand and the loss of resolution when processing large images.
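As an illustration of the distortion-aware idea mentioned above, the sketch below (illustrative NumPy, not any particular published implementation; the function name and the small field-of-view parameter are assumptions) lays a convolution kernel out on the sphere's tangent plane at each pixel and maps it back onto the equirectangular grid with an inverse gnomonic projection, so the sampling footprint widens near the poles just as the image content does:

```python
import numpy as np

def distortion_aware_grid(v_c, u_c, height, width, k=3, fov=np.pi / 64):
    """Sampling locations for a k x k convolution kernel centred at pixel
    (v_c, u_c) of an equirectangular image. Instead of a fixed square
    window, the kernel is placed on the sphere's tangent plane and mapped
    back with an inverse gnomonic projection."""
    lat0 = (0.5 - (v_c + 0.5) / height) * np.pi      # centre latitude
    lon0 = ((u_c + 0.5) / width - 0.5) * 2 * np.pi   # centre longitude
    # Regular k x k offsets on the tangent plane.
    r = np.tan(fov)
    xs = np.linspace(-r, r, k)
    x, y = np.meshgrid(xs, xs)
    # Inverse gnomonic projection: tangent plane -> (lat, lon) on the sphere.
    rho = np.sqrt(x ** 2 + y ** 2)
    c = np.arctan(rho)
    safe_rho = np.where(rho == 0, 1.0, rho)
    lat = np.arcsin(np.cos(c) * np.sin(lat0)
                    + y * np.sin(c) * np.cos(lat0) / safe_rho)
    lon = lon0 + np.arctan2(x * np.sin(c),
                            rho * np.cos(lat0) * np.cos(c)
                            - y * np.sin(lat0) * np.sin(c))
    lat = np.where(rho == 0, lat0, lat)
    lon = np.where(rho == 0, lon0, lon)
    # Sphere -> equirectangular pixel coordinates. A real implementation
    # would also wrap u modulo width to handle the longitude seam.
    u = (lon / (2 * np.pi) + 0.5) * width - 0.5
    v = (0.5 - lat / np.pi) * height - 0.5
    return v, u

h, w = 256, 512
_, u_equator = distortion_aware_grid(h // 2, w // 2, h, w)
_, u_pole = distortion_aware_grid(2, w // 2, h, w)
# Near the pole the same kernel spreads over far more image columns.
print(np.ptp(u_equator), np.ptp(u_pole))
```

The appeal of this formulation is that the convolution weights themselves stay unchanged; only the sampling grid moves, which is why such adaptations can in principle reuse networks pre-trained on perspective images.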

The aim of this Special Issue is to present novel and diverse research articles that demonstrate new methods for the efficient processing of equirectangular images and videos. Topics of interest include, but are not limited to, the following:

  • Compression techniques for spherical views;
  • Stabilization and pre-processing techniques in 360° videos;
  • Novel 360° datasets and experimental protocols;
  • Novel applications involving the use of omni-directional and equirectangular images or videos;
  • Visual tracking approaches for equirectangular images;
  • Segmentation techniques and detection approaches for spherical images;
  • Action detection and classification from 360° videos;
  • Visual attention techniques in 360° videos;
  • Depth estimation in spherical views;
  • Applications based on multi-camera systems involving 360° cameras;
  • Novel learning methods specifically designed for spherical images.

Prof. Dr. Marco La Cascia
Dr. Liliana Lo Presti
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • equirectangular images
  • computer vision
  • omni-directional cameras
  • 360° images and videos
  • panoramic images
  • spherical images

Published Papers (1 paper)


Research

16 pages, 19173 KiB  
Article
OMNI-CONV: Generalization of the Omnidirectional Distortion-Aware Convolutions
by Charles-Olivier Artizzu, Guillaume Allibert and Cédric Demonceaux
J. Imaging 2023, 9(2), 29; https://doi.org/10.3390/jimaging9020029 - 28 Jan 2023
Viewed by 1597
Abstract
Omnidirectional images have drawn great research attention recently thanks to their great potential and performance in various computer vision tasks. However, processing such a type of image requires an adaptation to take into account spherical distortions. Therefore, it is not trivial to directly extend the conventional convolutional neural networks on omnidirectional images because CNNs were initially developed for perspective images. In this paper, we present a general method to adapt perspective convolutional networks to equirectangular images, forming a novel distortion-aware convolution. Our proposed solution can be regarded as a replacement for the existing convolutional network without requiring any additional training cost. To verify the generalization of our method, we conduct an analysis on three basic vision tasks, i.e., semantic segmentation, optical flow, and monocular depth. The experiments on both virtual and real outdoor scenarios show our adapted spherical models consistently outperform their counterparts.
