High-Performance Computing for AI: Architecture, Systems, and Algorithms
A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".
Deadline for manuscript submissions: 15 April 2025
Special Issue Editors
Interests: high-performance computing; large-scale deep learning; system security
Interests: efficient machine learning algorithms; algorithm-system co-design for AI acceleration; large-scale machine learning for chip design; energy-efficient privacy-preserving machine learning
Interests: performance optimization of HPC and AI/DL applications; parallel computing on various architectures; heterogeneous computing and memory systems; scientific machine learning
Special Issue Information
Dear Colleagues,
The rapid advancement of artificial intelligence (AI), exemplified by the prevalence of large language models (LLMs), has created tremendous demand for high-performance computing (HPC) capable of supporting the deployment of AI with increasingly complex models and ever-larger datasets. The HPC research community has invested significant effort in enhancing the scalability and efficiency of large-scale model training, including:
- Designing AI-specific architectures, including TPU, Graphcore IPU, and Cerebras Wafer Scale Engine, that support large-scale AI training and inference while maintaining energy efficiency and cost-effectiveness;
- Optimizing HPC system software, including communication and I/O middleware, AI compilers, and runtimes, to facilitate seamless AI deployments on HPC platforms;
- Developing new algorithms and optimization strategies, encompassing parallelization strategies for distributed training, methods for reducing computational complexity, and schemes for increasing resource utilization, that exploit the full capabilities of HPC to maximize the speed and accuracy of large-scale training.
This Special Issue on "High-Performance Computing for AI: Architecture, Systems, and Algorithms" aims to bring together pioneering research and perspectives on the design and development of innovative HPC architectures, systems, and algorithms to enable and accelerate next-generation machine learning (ML). The topics of interest include, but are not limited to, the following:
- Specialized hardware and architectural support for ML/AI;
- Energy-efficient training and inference;
- Performance modeling and analysis of ML/AI applications;
- ML/AI compilers and runtimes at scale;
- Development of ML/AI software pipelines on HPC;
- Parallel and distributed learning algorithms;
- Implementation of ML/AI algorithms on parallel architectures;
- Computational optimization methods for ML/AI;
- Scalable neural architecture search;
- Federated and collaborative learning.
This Special Issue aims to serve as a platform where researchers and practitioners interested in harnessing the power of HPC to accelerate AI deployment can exchange new ideas and findings, showcase research achievements, and discuss challenges and future directions. We hope it will inspire further advancements in the field and foster collaboration within and between the HPC and AI communities.
Dr. Xiaodong Yu
Dr. Shaoyi Huang
Dr. Zhen Xie
Guest Editors
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- high-performance computing (HPC)
- artificial intelligence (AI)
- distributed systems
- AI hardware architectures
- energy-efficient AI
- AI compiler
- large-scale machine learning (ML)
- performance benchmarking and modeling
- algorithm acceleration
- federated learning
Benefits of Publishing in a Special Issue
- Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
- Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
- Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
- External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
- e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.
Further information on MDPI's Special Issue policies can be found here.