Open Access Article
J. Imaging 2019, 5(3), 34; https://doi.org/10.3390/jimaging5030034

High-Throughput Line Buffer Microarchitecture for Arbitrary Sized Streaming Image Processing

Department of Electrical and Electronic Engineering, The University of Hong Kong, Pok Fu Lam, Hong Kong
* Author to whom correspondence should be addressed.
Received: 21 January 2019 / Revised: 25 February 2019 / Accepted: 25 February 2019 / Published: 6 March 2019
(This article belongs to the Special Issue Image Processing Using FPGAs)

Abstract

Parallel hardware designed for image processing promotes vision-guided intelligent applications. With the advantages of high throughput and low latency, streaming architectures on FPGAs are especially attractive for real-time image processing. Notably, many real-world applications, such as region-of-interest (ROI) detection, demand the ability to process images of different sizes and resolutions continuously in hardware without interruption. FPGAs are well suited to implementing such flexible streaming architectures, but most existing solutions require run-time reconfiguration and hence cannot achieve seamless image-size switching. In this paper, we propose a dynamically-programmable buffer architecture (D-SWIM), based on the Stream-Windowing Interleaved Memory (SWIM) architecture, that realizes image processing on FPGAs for image streams of arbitrary sizes defined at run time. D-SWIM redefines the way on-chip memory is organized and controlled, and the hardware adapts to an arbitrary image size with sub-100 ns delay, ensuring minimal interruption to image processing at high frame rates. Compared with the prior SWIM buffer for high-throughput scenarios, D-SWIM achieves dynamic programmability with only a slight logic-resource overhead while saving up to 56% of the BRAM resources. The D-SWIM buffer reaches a maximum operating frequency of 329.5 MHz and reduces power consumption by 45.7% compared with the SWIM scheme. Real-world image processing applications, such as 2D convolution and the Harris corner detector, were also used to evaluate D-SWIM's performance, achieving pixel throughputs of 4.5 and 4.2 Gigapixels/s, respectively. Compared with implementations built on prior streaming frameworks, the D-SWIM-based design not only realizes seamless image-size switching but also improves hardware efficiency by up to 30×.
Keywords: streaming architecture; low-latency; high-throughput; FPGA; D-SWIM; line buffer
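To make the buffering idea in the abstract concrete, the following is a minimal software sketch in C++, not the paper's D-SWIM hardware: it models a line buffer that forms 3×3 windows over a raster-order pixel stream and whose row width is re-programmed at run time, mimicking the seamless image-size switching described above. All names (LineBuffer, set_width, push) are hypothetical illustrations, not identifiers from the paper.

// Minimal software sketch of a line buffer feeding a 3x3 window over a
// streamed image whose width is set at run time (hypothetical example; not
// the D-SWIM hardware described in the paper).
#include <cstdint>
#include <cstdio>
#include <vector>

// Line buffer holding the two most recent image rows so that a 3x3 window
// can be formed as pixels arrive one per "cycle" in raster order.
struct LineBuffer {
    std::vector<uint8_t> rows[2]; // two previously streamed rows
    int width = 0;

    // Re-programming the width mimics run-time image-size switching:
    // only the row storage is resized; the processing loop is unchanged.
    void set_width(int w) {
        width = w;
        rows[0].assign(w, 0);
        rows[1].assign(w, 0);
    }

    // Push one pixel at column x; fills col[] with the 3x1 window column
    // (two buffered pixels above plus the new pixel), then updates the rows.
    void push(int x, uint8_t pix, uint8_t col[3]) {
        col[0] = rows[0][x];
        col[1] = rows[1][x];
        col[2] = pix;
        rows[0][x] = rows[1][x];
        rows[1][x] = pix;
    }
};

int main() {
    // Stream two frames of different sizes through the same buffer,
    // computing a 3x3 box blur as a stand-in for 2D convolution.
    const int sizes[2][2] = {{8, 6}, {5, 4}}; // {width, height}
    LineBuffer lb;
    for (auto& s : sizes) {
        const int W = s[0], H = s[1];
        lb.set_width(W);                  // size switch between frames
        uint8_t win[3][3] = {};
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                uint8_t pix = static_cast<uint8_t>((x + y) & 0xFF);
                uint8_t col[3];
                lb.push(x, pix, col);
                // Shift the 3x3 window left and insert the new column.
                for (int r = 0; r < 3; ++r) {
                    win[r][0] = win[r][1];
                    win[r][1] = win[r][2];
                    win[r][2] = col[r];
                }
                if (y >= 2 && x >= 2) {   // full window available
                    int sum = 0;
                    for (int r = 0; r < 3; ++r)
                        for (int c = 0; c < 3; ++c) sum += win[r][c];
                    printf("frame %dx%d out(%d,%d)=%d\n",
                           W, H, x - 1, y - 1, sum / 9);
                }
            }
        }
    }
    return 0;
}

In the actual D-SWIM design the row storage would be BRAM blocks reorganized by control logic rather than resized vectors; this sketch only illustrates the buffering and size-switching behavior in software.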

This is an open access article distributed under the Creative Commons Attribution (CC BY 4.0) License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Shi, R.; Wong, J.S.; So, H.K.-H. High-Throughput Line Buffer Microarchitecture for Arbitrary Sized Streaming Image Processing. J. Imaging 2019, 5, 34.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
