Advances in High-Speed Computing and Parallel Algorithm

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 31 May 2026 | Viewed by 576

Special Issue Editor


Dr. Ivan Lirkov
Guest Editor
Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
Interests: high-speed computing and parallel algorithms; computational linear algebra; numerical methods for partial differential equations

Special Issue Information

Dear Colleagues,

Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. Parallelism, the capability of a computer to execute operations concurrently, has been a constant throughout the history of computing. It impacts hardware, software, theory, and applications. Supercomputers owe their performance advantage to parallelism. Today, physical limitations have made parallelism the preeminent strategy of computer manufacturers for performance gains in all classes of machines, from embedded and mobile systems to the most powerful servers. Parallelism is crucial for many applications in science and engineering.
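The divide-and-combine pattern described above can be illustrated with a minimal sketch (not tied to any contribution in this issue): a large summation is split into chunks that are processed as concurrent tasks, and the partial results are then combined. The function name and worker count are illustrative choices; note that for CPU-bound work in CPython, processes or native code would typically replace threads, but the decomposition is the same.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, n_workers=4):
    """Sum a sequence by splitting it into chunks processed concurrently."""
    # Divide: cut the input into roughly equal chunks, one per worker.
    chunk = max(1, (len(data) + n_workers - 1) // n_workers)
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # Conquer: each chunk is summed in its own task.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partial_sums = pool.map(sum, pieces)
    # Combine: reduce the partial results to the final answer.
    return sum(partial_sums)

print(parallel_sum(list(range(1_000_000))))  # → 499999500000
```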

The focus will be on models, algorithms, and software tools that facilitate efficient and convenient utilization of modern parallel and distributed computing architectures, as well as on large-scale applications.

Topics of interest include but are not limited to:

  • Parallel/distributed architectures, enabling technologies;
  • Quantum computing and communication;
  • Cluster, cloud, edge, and fog computing;
  • Multi-core and many-core parallel computing, GPU computing;
  • Heterogeneous/hybrid computing and accelerators;
  • Parallel/distributed algorithms;
  • Performance analysis;
  • Performance issues on various types of parallel systems;
  • Auto-tuning and auto-parallelization: methods, tools, and applications;
  • Parallel/distributed programming;
  • Tools and environments for parallel/distributed computing;
  • HPC numerical linear algebra;
  • HPC methods of solving differential equations;
  • Applications of parallel/distributed computing;
  • Methods and tools for the parallel solution of large-scale problems.

Dr. Ivan Lirkov
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • parallel computing
  • GPU computing
  • parallelization on HPC platforms
  • performance analysis
  • multi-/many-core platforms
  • cache management

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

19 pages, 469 KB  
Article
Performance Evaluation of Separate Chaining for Concurrent Hash Maps
by Ana Castro, Miguel Areias and Ricardo Rocha
Mathematics 2025, 13(17), 2820; https://doi.org/10.3390/math13172820 - 2 Sep 2025
Viewed by 284
Abstract
Hash maps are a widely used and efficient data structure for storing and accessing data organized as key-value pairs. Multithreading with hash maps refers to the ability to concurrently execute multiple lookup, insert, and delete operations, such that each operation runs independently while sharing the underlying data structure. One of the main challenges in hash map implementation is the management of collisions. Arguably, separate chaining is among the most well-known strategies for collision resolution. In this paper, we present a comprehensive study comparing two common approaches to implementing separate chaining—linked lists and dynamic arrays—in a multithreaded environment using a lock-based concurrent hash map design. Our study includes a performance evaluation covering parameters such as cache behavior, energy consumption, contention under concurrent access, and resizing overhead. Experimental results show that dynamic arrays maintain more predictable memory access and lower energy consumption in multithreaded environments.
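The lock-based separate-chaining design evaluated in the paper can be sketched as follows. This is a simplified, hypothetical illustration, not the authors' implementation: each bucket is a dynamic array (here a Python list) of key-value pairs guarded by its own lock, so threads touching different buckets never block each other; the linked-list variant studied in the paper would store chained nodes instead.

```python
import threading

class ChainedHashMap:
    """Lock-based hash map with separate chaining via dynamic arrays (a sketch)."""

    def __init__(self, n_buckets=64):
        # One dynamic array and one lock per bucket; fine-grained locking
        # lets operations on different buckets proceed in parallel.
        self._buckets = [[] for _ in range(n_buckets)]
        self._locks = [threading.Lock() for _ in range(n_buckets)]

    def _index(self, key):
        return hash(key) % len(self._buckets)

    def insert(self, key, value):
        i = self._index(key)
        with self._locks[i]:  # serialize access to this bucket only
            for j, (k, _) in enumerate(self._buckets[i]):
                if k == key:
                    self._buckets[i][j] = (key, value)  # overwrite existing key
                    return
            self._buckets[i].append((key, value))  # collision: append to chain

    def lookup(self, key):
        i = self._index(key)
        with self._locks[i]:
            for k, v in self._buckets[i]:
                if k == key:
                    return v
            return None

    def delete(self, key):
        i = self._index(key)
        with self._locks[i]:
            for j, (k, _) in enumerate(self._buckets[i]):
                if k == key:
                    self._buckets[i].pop(j)
                    return True
            return False
```

Because a bucket's chain is a contiguous array, traversal during lookup is cache-friendly, which is one plausible reading of the paper's finding that dynamic arrays give more predictable memory access than linked lists.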
(This article belongs to the Special Issue Advances in High-Speed Computing and Parallel Algorithm)
