
Advances in Computer Architecture Design, Parallel Processing, and Fault Tolerance

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 May 2025 | Viewed by 2725

Special Issue Editors


Dr. Josie E. Rodriguez Condia, Guest Editor
Department of Control and Computer Engineering (DAUIN), Politecnico di Torino, Torino, Italy
Interests: functional testing; general-purpose graphics processing units (GPGPUs); parallel processing

Dr. José Rodrigo Azambuja, Guest Editor
Federal University of Rio Grande do Sul, PPGC, PGMICRO, Porto Alegre, Brazil
Interests: fault tolerance; in-network computing; computer architectures; Internet of Things; artificial intelligence

Prof. Dr. Vicent Botti, Guest Editor
VRAIN Valencian Research Institute for Artificial Intelligence, Universitat Politècnica de València, 46022 València, Spain
Interests: affective computing; agreement technology; artificial intelligence; computational chemistry; computer science

Dr. Arcangelo Castiglione, Guest Editor
Department of Computer Science, Università degli Studi di Salerno, 84084 Fisciano, Italy
Interests: cryptography; information/data security; computer security; digital watermarking; cloud computing

Special Issue Information

Dear Colleagues,

Advances in computer architectures and systems play a key role in supporting and accelerating the development and deployment of complex applications across several domains, including machine learning and high-performance scientific computing.

This Special Issue aims to disseminate relevant contributions on modern and emerging designs and techniques in computer architecture and parallel processing. The primary focus is on original methodologies, techniques, and architectures for hardware accelerators for machine learning, processor architectures, and systems, including enhancements in energy efficiency, performance, and fault tolerance. The Special Issue also addresses the evaluation of reliability in emerging and novel computer architectures and systems.

The primary scope of this Special Issue includes, but is not limited to:

  • Parallel architectures;
  • Emerging architectures for hardware accelerators;
  • Design and programming of large language model (LLM) accelerators;
  • Reliability and fault tolerance;
  • Graphics processing units;
  • Vector processors and accelerators;
  • High-performance computing.

Dr. Josie E. Rodriguez Condia
Dr. José Rodrigo Azambuja
Prof. Dr. Vicent Botti
Dr. Arcangelo Castiglione
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • parallel architectures
  • parallel processing
  • architectures for hardware accelerators
  • design of large language model accelerators
  • programming of large language model accelerators
  • reliability
  • fault tolerance
  • graphics processing units
  • vector processors
  • high-performance computing systems
  • energy-efficient architectures and programming

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

17 pages, 3630 KiB  
Article
Parallel Simulations of the Sharp Wave-Ripples of the Hippocampus on Multicore CPUs and GPUs
by Emanuele Torti, Simone Migliazza, Elisa Marenzi, Giovanni Danese and Francesco Leporati
Appl. Sci. 2024, 14(21), 9967; https://doi.org/10.3390/app14219967 - 31 Oct 2024
Viewed by 772
Abstract
The simulation of realistic systems plays a crucial role in modern science. Complex organs such as the brain can be described by mathematical models that reproduce biological behaviors. In the brain, the hippocampus is a critical region for memory and learning. A model reproducing the memory consolidation mechanism has been proposed in the literature; it exhibits a high degree of biological realism but at the cost of a significant increase in computational complexity. This paper develops parallel simulations targeting different devices, namely multicore CPUs and GPUs. The experiments show that biological realism is maintained, together with a significant decrease in processing times. Finally, the analysis highlights the GPU as one of the most suitable technologies for this kind of simulation.
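The abstract does not spell out the parallelization scheme, but the multicore CPU approach it describes typically amounts to distributing independent per-neuron updates across cores within each time step. The following minimal OpenMP sketch illustrates that pattern only; the state variables, update rule, and sizes are hypothetical placeholders, not the hippocampal sharp wave-ripple model used in the paper.

    #include <omp.h>
    #include <stdio.h>

    #define N_NEURONS 4096
    #define N_STEPS   1000

    /* Hypothetical leaky-integrator step standing in for the paper's
     * hippocampal sharp wave-ripple equations. */
    static double update_neuron(double v, double input, double dt)
    {
        return v + dt * (-v + input);
    }

    int main(void)
    {
        static double v[N_NEURONS];   /* zero-initialized neuron states */
        const double dt = 1e-4;

        printf("running on up to %d threads\n", omp_get_max_threads());

        for (int step = 0; step < N_STEPS; ++step) {
            /* Within a time step the per-neuron updates are independent,
             * so the loop can be split across CPU cores. */
            #pragma omp parallel for schedule(static)
            for (int i = 0; i < N_NEURONS; ++i) {
                v[i] = update_neuron(v[i], 1.0, dt);
            }
        }

        printf("v[0] after %d steps: %f\n", N_STEPS, v[0]);
        return 0;
    }

On a GPU, the same per-neuron independence can instead be mapped to one thread per neuron, which is consistent with the abstract's conclusion that GPUs suit this kind of simulation.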

25 pages, 1511 KiB  
Article
Performance Study of an MRI Motion-Compensated Reconstruction Program on Intel CPUs, AMD EPYC CPUs, and NVIDIA GPUs
by Mohamed Aziz Zeroual, Karyna Isaieva, Pierre-André Vuissoz and Freddy Odille
Appl. Sci. 2024, 14(21), 9663; https://doi.org/10.3390/app14219663 - 23 Oct 2024
Viewed by 1180
Abstract
Motion-compensated image reconstruction enables new clinical applications of Magnetic Resonance Imaging (MRI), but it relies on computationally intensive algorithms. This study focuses on the Generalized Reconstruction by Inversion of Coupled Systems (GRICS) program, applied to the reconstruction of 3D images in cases of non-rigid or rigid motion. It uses hybrid parallelization with MPI (Message Passing Interface) and OpenMP (Open Multi-Processing). For clinical integration, GRICS needs to efficiently harness the computational resources of compute nodes. We aim to improve GRICS's performance without any code modification. This work presents a performance study of GRICS on two CPU architectures, Intel Xeon Gold and AMD EPYC, using the roofline model to study the software–hardware interaction and quantify the code's performance. For CPU–GPU comparison purposes, we propose a preliminary MATLAB–GPU implementation of GRICS's reconstruction kernel and establish the roofline model of the kernel on two NVIDIA GPU architectures: Quadro RTX 5000 and A100. After the performance study, we propose optimization patterns for the code's execution on CPUs: first, for the OpenMP implementation alone, using thread binding and affinity together with appropriate architecture-specific compilation flags; then, searching for the optimal combination of MPI processes and OpenMP threads in the hybrid MPI–OpenMP implementation. The results show that GRICS performed well on the AMD EPYC CPUs, with an architectural efficiency of 52%. The kernel's execution was fast on the NVIDIA A100 GPU, but the roofline model reported low architectural efficiency and utilization.
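For context, the roofline model referenced above bounds a kernel's attainable performance by the machine's peak floating-point throughput and by its memory bandwidth scaled by the kernel's arithmetic intensity I (FLOP per byte of memory traffic). In the usual formulation,

    P(I) = \min\bigl(P_{\mathrm{peak}},\; B \cdot I\bigr)

where P_peak is the peak compute throughput and B the peak memory bandwidth; architectural efficiency is then typically reported as measured performance relative to this bound. The thread binding and affinity mentioned for the OpenMP runs are commonly controlled through the standard OMP_PROC_BIND and OMP_PLACES environment variables (for example, OMP_PROC_BIND=close with OMP_PLACES=cores); the abstract does not state which settings were used, so these values are only illustrative.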
