Electronics and Algorithms for Real-Time Video Processing

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (31 January 2022) | Viewed by 6345

Special Issue Editors


Guest Editor
Calle de Maria Tubau 9, Nokia R+D, 28050 Madrid, Spain
Interests: video codecs; image processing; artificial intelligence; embedded systems; virtual reality; real-time protocols

Guest Editor
Departamento de Arquitectura de Computadores, ETSI Informatica. Boulevar Louis Pasteur, Campus de Teatinos, 29071 Málaga, Spain
Interests: computer vision techniques for video processing; accelerator-based high performance computing; video applications on embedded heterogeneous systems

Guest Editor
Department of Information Technologies, University CEU-San Pablo, 28003 Madrid, Spain
Interests: virtual reality; video codecs; videogame design; cloud computing

Guest Editor
Department of Information Technologies, University CEU-San Pablo, 28003 Madrid, Spain
Interests: FPGA/GPU algorithm acceleration; video codecs; biosignal digital processing; VLSI system design and design automation

Special Issue Information

Dear Colleagues,

Real-time video has a wide variety of applications, ranging from remote videogames and remote virtual reality to vehicular communications and computer vision. Thus, with the advent of 5G, a new range of real-time video services is emerging, enabled by the low latencies and high bandwidth potential of this technology. Similarly, new Computer Vision techniques based on Deep Learning approaches are achieving encouraging accuracy. However, it is still a challenge to deploy these applications on low-power and small devices while fulfilling real-time constraints.

Existing solutions for video applications, tailored for other scenarios, were not designed for these extreme and somewhat conflicting constraints. On top of that, many solutions mandate the use of special-purpose video processing techniques, implemented through software, hardware, or a combination of both. It is therefore necessary to design new algorithms suited to each application's special requirements and to the computing constraints imposed by the target hardware architecture.

Furthermore, hardware architectures such as multi-core, many-core and reconfigurable platforms for both embedded systems and high-performance computing can be exploited to enable novel real-time services. To realize this, it is important not only to optimize the use of resources (memory access, parallelization, etc.), but also to adapt the algorithms to the specific hardware architectures. Moreover, recent advances in reconfigurable systems-on-chip (blending processors and reconfigurable fabric), as well as the cloud deployment of high-end accelerators (GPUs and reconfigurable systems), open new opportunities for efficient real-time video.

In this Special Issue we propose to investigate new applications, techniques and implementations for real-time video, focusing on two different target hardware architectures: embedded systems and high-performance computing systems.

Topics of interest to this Special Issue include, but are not limited to, the following:

  • Video for mobile systems physically moving at a high speed
  • Low-power/low-cost implementations of real-time video processing by means of microcontroller units
  • Error correction techniques for video transmission through unreliable data networks
  • Image compression/processing techniques for real-time video
  • Novel algorithms to cope with extreme video constraints: low latency, high frame rate, high packet loss, real-time video processing, etc.
  • Embedded reconfigurable systems implementations
  • High-performance video systems through high-end reconfigurable accelerators
  • Many-core implementation of low-latency video
  • Multi-core implementation of real-time video
  • Computer Vision applications on embedded and/or reconfigurable architectures satisfying real-time design constraints

Dr. Javier García-Aranda
Dr. Nicolás Guil-Mata
Dr. Rodrigo García-Carmona
Dr. Gabriel Caffarena
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (2 papers)


Research

24 pages, 7686 KiB  
Article
An FPGA-Based LOCO-ANS Implementation for Lossless and Near-Lossless Image Compression Using High-Level Synthesis
by Tobías Alonso, Gustavo Sutter and Jorge E. López de Vergara
Electronics 2021, 10(23), 2934; https://doi.org/10.3390/electronics10232934 - 26 Nov 2021
Cited by 2 | Viewed by 2477
Abstract
In this work, we present and evaluate a hardware architecture for the LOCO-ANS (Low Complexity Lossless Compression with Asymmetric Numeral Systems) lossless and near-lossless image compressor, which is based on the JPEG-LS standard. The design is implemented in two FPGA generations, evaluating its performance for different codec configurations. The tests show that the design is capable of up to 40.5 MPixels/s and 124 MPixels/s per lane for Zynq 7020 and UltraScale+ FPGAs, respectively. Compared to the single thread LOCO-ANS software implementation running in a 1.2 GHz Raspberry Pi 3B, each hardware lane achieves 6.5 times higher throughput, even when implemented in an older and cost-optimized chip like the Zynq 7020. Results are also presented for a lossless only version, which achieves a lower footprint and approximately 50% higher performance than the version that supports both lossless and near-lossless. Interestingly, these great results were obtained by applying High-Level Synthesis, describing the coder with C++ code, which tends to establish a trade-off between design time and quality of results. These results show that the algorithm is very suitable for hardware implementation. Moreover, the implemented system is faster and achieves higher compression than the best previously available near-lossless JPEG-LS hardware implementation.
(This article belongs to the Special Issue Electronics and Algorithms for Real-Time Video Processing)
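The asymmetric-numeral-systems coding at the heart of LOCO-ANS can be pictured with a toy range-ANS (rANS) coder. The sketch below is illustrative only: it omits the state renormalization and bitstream output that any real implementation (hardware or software) requires, and the function names and symbol frequencies are our own assumptions, not taken from the paper's design.

```python
def rans_tables(freqs):
    """Build the cumulative-frequency table and total M for a symbol->frequency map."""
    cum, acc = {}, 0
    for s in sorted(freqs):
        cum[s] = acc
        acc += freqs[s]
    return cum, acc  # acc is the total frequency M

def rans_encode(symbols, freqs):
    """Fold a symbol sequence into a single integer state x."""
    cum, M = rans_tables(freqs)
    x = 0
    for s in symbols:
        f = freqs[s]
        # Core rANS step: frequent symbols (large f) grow x more slowly,
        # so they cost fewer bits in the final state.
        x = (x // f) * M + cum[s] + (x % f)
    return x

def rans_decode(x, n, freqs):
    """Recover n symbols from state x; rANS emits them in reverse order."""
    cum, M = rans_tables(freqs)
    out = []
    for _ in range(n):
        r = x % M
        # Find the symbol whose cumulative range [cum[s], cum[s]+f) contains r
        s = next(t for t in cum if cum[t] <= r < cum[t] + freqs[t])
        x = (x // M) * freqs[s] + r - cum[s]
        out.append(s)
    return out[::-1]
```

Each decode step exactly inverts the corresponding encode step, which is why the state round-trips losslessly; production coders keep x within a fixed bit width by streaming out low-order bits during encoding.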

20 pages, 3705 KiB  
Article
Elastic Downsampling: An Adaptive Downsampling Technique to Preserve Image Quality
by Jose J. García Aranda, Manuel Alarcón Granero, Francisco Jose Juan Quintanilla, Gabriel Caffarena and Rodrigo García-Carmona
Electronics 2021, 10(4), 400; https://doi.org/10.3390/electronics10040400 - 07 Feb 2021
Cited by 2 | Viewed by 3066
Abstract
This paper presents a new adaptive downsampling technique called elastic downsampling, which enables high compression rates while preserving the image quality. Adaptive downsampling techniques are based on the idea that image tiles can use different sampling rates depending on the amount of information conveyed by each block. However, current approaches suffer from blocking effects and artifacts that hinder the user experience. To bridge this gap, elastic downsampling relies on a Perceptual Relevance analysis that assigns sampling rates to the corners of blocks. The novel metric used for this analysis is based on the luminance fluctuations of an image region. This allows a gradual transition of the sampling rate within tiles, both horizontally and vertically. As a result, the block artifacts are removed and fine details are preserved. Experimental results (using the Kodak and USC Miscellaneous image datasets) show a PSNR improvement of up to 15 dB and a superior SSIM (Structural Similarity) when compared with other techniques. More importantly, the algorithms involved are computationally cheap, so it is feasible to implement them in low-cost devices. The proposed technique has been successfully implemented using graphics processors (GPUs) and low-power embedded systems (Raspberry Pi) as target platforms.
(This article belongs to the Special Issue Electronics and Algorithms for Real-Time Video Processing)
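The corner-based rate assignment described in the abstract can be sketched as follows: a relevance score is computed per region, sampling rates are placed at the four tile corners, and per-pixel rates are obtained by bilinear interpolation so that rates vary smoothly inside the tile instead of jumping at block borders. The relevance proxy and all names below are our own illustrative assumptions, not the paper's actual Perceptual Relevance metric.

```python
def luminance_fluctuation(block):
    """Toy relevance proxy: mean absolute deviation of luminance in a region."""
    flat = [v for row in block for v in row]
    mean = sum(flat) / len(flat)
    return sum(abs(v - mean) for v in flat) / len(flat)

def tile_rate_map(w, h, r_tl, r_tr, r_bl, r_br):
    """Bilinearly interpolate four corner sampling rates over a w x h tile.

    Adjacent tiles sharing corners therefore share rates along their common
    edge, which is what removes the blocking artifacts of per-tile rates.
    """
    rates = []
    for y in range(h):
        v = y / (h - 1) if h > 1 else 0.0
        row = []
        for x in range(w):
            u = x / (w - 1) if w > 1 else 0.0
            top = r_tl * (1 - u) + r_tr * u  # interpolate along the top edge
            bot = r_bl * (1 - u) + r_br * u  # interpolate along the bottom edge
            row.append(top * (1 - v) + bot * v)
        rates.append(row)
    return rates
```

Because neighbouring tiles share corner rates, the interpolated rate field is continuous across tile boundaries both horizontally and vertically, matching the "gradual transition" the abstract describes.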
