High Performance Reconfigurable Computing

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Parallel and Distributed Algorithms".

Deadline for manuscript submissions: closed (15 June 2019) | Viewed by 18325

Special Issue Editor

Dr. Seyong Lee
Programming Systems Group, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
Interests: programming models; compilers; program analysis and optimizations; heterogeneous computing; high-performance computing

Special Issue Information

Dear Colleagues,

Reconfigurable computing using Field Programmable Gate Arrays (FPGAs) and coarse-grained reconfigurable devices has received renewed interest because of its unique combination of flexibility, performance, and energy efficiency. The reconfigurable nature of these architectures allows them to be customized to match the needs of a given application, so they can achieve much higher energy efficiency and/or performance than conventional CPUs and GPUs.

Recent technology trends, such as (1) high-end FPGAs with hardened floating-point digital signal processing blocks, (2) new system-on-chip devices integrating CPUs, FPGAs, and GPUs, and (3) high-level programming support (HLS, OpenCL, OpenACC, etc.), make high performance reconfigurable computing more attractive for serious exploration in scientific simulation and data analytics.

We invite you to submit your latest research on the theoretical and practical issues in applying reconfigurable computing to high performance computing (HPC) and data analytics.  Potential topics of this Special Issue include, but are not limited to:

  • HPC and machine learning algorithms and applications implemented on reconfigurable devices
  • Programming models, compilers, and system software for reconfigurable computing
  • Algorithms and methods for leveraging dynamic reconfiguration to increase performance, flexibility, reliability, or programmability
  • Mapping algorithms and tools for heterogeneous system-on-chip devices

Dr. Seyong Lee
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • High performance computing
  • Reconfigurable computing
  • Data analytics

Published Papers (3 papers)


Research


24 pages, 694 KiB  
Article
Mapping a Guided Image Filter on the HARP Reconfigurable Architecture Using OpenCL
by Thomas Faict, Erik H. D’Hollander and Bart Goossens
Algorithms 2019, 12(8), 149; https://doi.org/10.3390/a12080149 - 27 Jul 2019
Cited by 4 | Viewed by 3947
Abstract
Intel recently introduced the Heterogeneous Architecture Research Platform, HARP. In this platform, the Central Processing Unit and a Field-Programmable Gate Array are connected through a high-bandwidth, low-latency interconnect and both share DRAM memory. For this platform, Open Computing Language (OpenCL), a High-Level Synthesis (HLS) language, is made available. By making use of HLS, a faster design cycle can be achieved compared to programming in a traditional hardware description language. This, however, comes at the cost of having less control over the hardware implementation. We will investigate how OpenCL can be applied to implement a real-time guided image filter on the HARP platform. In the first phase, the performance-critical parameters of the OpenCL programming model are defined using several specialized benchmarks. In a second phase, the guided image filter algorithm is implemented using the insights gained in the first phase. Both a floating-point and a fixed-point implementation were developed for this algorithm, based on a sliding window implementation. This resulted in a maximum floating-point performance of 135 GFLOPS, a maximum fixed-point performance of 430 GOPS and a throughput of HD color images at 74 frames per second.
(This article belongs to the Special Issue High Performance Reconfigurable Computing)
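The guided image filter mapped to HARP in this paper has a well-known reference formulation built from box-filtered means (a local linear ridge regression of the filter input on the guidance image). As a rough illustration of the arithmetic being accelerated — not the paper's OpenCL or fixed-point implementation — a NumPy sketch might look like:

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window via 2-D cumulative sums,
    with edge-replicated borders (constant work per pixel)."""
    k = 2 * r + 1
    xp = np.pad(x, r, mode="edge")
    c = np.cumsum(np.cumsum(xp, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/column for inclusion-exclusion
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(I, p, r=2, eps=1e-3):
    """Edge-preserving smoothing of p guided by I: q = mean(a)*I + mean(b),
    where a, b solve a local linear regression of p on I in each window."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    var_I = box_mean(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)          # regularized local slope
    b = mean_p - a * mean_I             # local intercept
    return box_mean(a, r) * I + box_mean(b, r)
```

On an FPGA the same box sums are typically realized with line buffers feeding a sliding window, and the division by the regularized variance is the main obstacle for a fixed-point version.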

21 pages, 7922 KiB  
Article
A New Method of Applying Data Engine Technology to Realize Neural Network Control
by Song Zheng, Chao Bi and Yilin Song
Algorithms 2019, 12(5), 97; https://doi.org/10.3390/a12050097 - 9 May 2019
Cited by 1 | Viewed by 3939
Abstract
This paper presents a novel diagonal recurrent neural network hybrid controller based on the shared memory of a real-time database structure. The controller uses Data Engine (DE) technology: by establishing a unified, standardized software architecture and real-time database across different control stations, it addresses several problems that differing technical standards, communication protocols, and programming languages cause in industrial practice, namely the difficulty of co-debugging advanced control algorithms with the control system, inefficient algorithm implementation and updating, and high development, operation, and maintenance costs. Moreover, control algorithm development uses a unified visual graphics configuration programming environment, which addresses the problem of integrated control of heterogeneous devices, makes configuration intuitive and data processing transparent, and reduces the difficulty of debugging advanced control algorithms in engineering applications. The application of a DE-based neural network hybrid controller in a motor speed measurement and control system shows that the system has excellent control characteristics and anti-disturbance ability, and it provides an integrated method for applying neural network control algorithms in practical industrial control systems, which is the major contribution of this article.
(This article belongs to the Special Issue High Performance Reconfigurable Computing)
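The "diagonal" in a diagonal recurrent neural network means each hidden unit feeds back only to itself, so the recurrent weight matrix is diagonal and the per-step update is cheap. A minimal forward-pass sketch of that architecture (illustrative names and weight initialization; the paper's contribution is the Data Engine integration around such a controller, not this code):

```python
import numpy as np

class DiagonalRNN:
    """Minimal diagonal recurrent neural network: full input weights,
    but each hidden unit's only recurrent connection is to itself."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.5, (n_hidden, n_in))  # input weights
        self.w_d = rng.normal(0.0, 0.5, n_hidden)           # diagonal recurrent weights
        self.w_out = rng.normal(0.0, 0.5, n_hidden)         # output weights
        self.h = np.zeros(n_hidden)                         # hidden state

    def step(self, x):
        # h_j(t) = tanh( sum_i W_in[j, i] * x_i(t) + w_d[j] * h_j(t-1) )
        self.h = np.tanh(self.W_in @ x + self.w_d * self.h)
        return float(self.w_out @ self.h)
```

Because tanh is bounded, the controller output is bounded by the sum of the absolute output weights regardless of the plant signal, which is a convenient property inside a control loop.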

Review


24 pages, 1109 KiB  
Review
A Survey of Convolutional Neural Networks on Edge with Reconfigurable Computing
by Mário P. Véstias
Algorithms 2019, 12(8), 154; https://doi.org/10.3390/a12080154 - 31 Jul 2019
Cited by 88 | Viewed by 9719
Abstract
The convolutional neural network (CNN) is one of the most used deep learning models for image detection and classification, due to its high accuracy when compared to other machine learning algorithms. CNNs achieve better results at the cost of higher computing and memory requirements. Inference of convolutional neural networks is therefore usually done in centralized high-performance platforms. However, many applications based on CNNs are migrating to edge devices near the source of data due to the unreliability of a transmission channel in exchanging data with a central server, the uncertainty about channel latency not tolerated by many applications, security and data privacy, etc. While advantageous, deep learning on edge is quite challenging because edge devices are usually limited in terms of performance, cost, and energy. Reconfigurable computing is being considered for inference on edge due to its high performance and energy efficiency while keeping a high hardware flexibility that allows for the easy adaptation of the target computing platform to the CNN model. In this paper, we describe the features of the most common CNNs, the capabilities of reconfigurable computing for running CNNs, the state-of-the-art of reconfigurable computing implementations proposed to run CNN models, as well as the trends and challenges for future edge reconfigurable platforms.
(This article belongs to the Special Issue High Performance Reconfigurable Computing)
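One reason FPGAs suit edge CNN inference, as surveys in this area discuss, is that convolutions tolerate low-precision fixed-point arithmetic: multiplies can be done in narrow integers, accumulated exactly in a wider register, and rescaled once at the end. A NumPy sketch of symmetric 8-bit quantization around a direct convolution (an illustrative textbook scheme, not taken from the paper):

```python
import numpy as np

def quantize(v, bits=8):
    """Symmetric linear quantization: integer values plus a scale factor."""
    qmax = 2 ** (bits - 1) - 1                    # 127 for 8 bits
    scale = np.max(np.abs(v)) / qmax
    return np.round(v / scale).astype(np.int32), scale

def conv2d_valid(x, w):
    """Direct 'valid' 2-D convolution (cross-correlation, as in CNNs)."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1), dtype=x.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def conv2d_int8(x, w):
    """Quantize activations and weights to 8-bit range, convolve with
    integer MACs (exact in int32), then dequantize once with both scales."""
    qx, sx = quantize(x)
    qw, sw = quantize(w)
    return conv2d_valid(qx, qw) * (sx * sw)
```

The integer accumulation is exact (a 3x3 window of int8 products fits easily in int32), so the only error is the initial rounding of inputs and weights — which is why narrow fixed-point DSP blocks on FPGAs can match floating-point accuracy closely for inference.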
