Applied Mathematics in Data Science and High-Performance Computing

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 31 July 2025 | Viewed by 3315

Special Issue Editors


Dr. Xiaoping Lu
Guest Editor
School of Computer Science and Engineering, Faculty of Innovation Engineering, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau 999078, China
Interests: machine learning and its applications; deep learning; reinforcement learning; data science; high performance computing; inverse problems

Prof. Dr. Zhanchuan Cai
Guest Editor
School of Computer Science and Engineering, Faculty of Innovation Engineering, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau 999078, China
Interests: intelligent information processing; computer graphics and image processing; multimedia information security; remote sensing data processing and analysis; applied mathematics and scientific computing

Dr. Hua Zheng
Guest Editor
School of Mathematics and Statistics, Shaoguan University, Shaoguan 512000, China
Interests: high performance computing; numerical linear algebra; numerical optimization

Special Issue Information

Dear Colleagues,

Data science, a quintessentially cross-disciplinary field, is attracting increasing attention, particularly with the advent of artificial intelligence and the acceleration of high-performance computing. Applied mathematics, a well-established discipline, offers classical algorithms that can enhance data science applications. In addition, recent advances in parallel computing and GPU-based acceleration can significantly boost the performance of such applications when combined with applied mathematics.

This Special Issue, entitled “Applied Mathematics in Data Science and High-Performance Computing,” aims to explore the application and integration of classical applied mathematics methods within data science. It also seeks to investigate parallel acceleration strategies for high-performance computing on multi-core CPU and GPU frameworks, and to study efficient numerical methods for the applied mathematical problems that arise in scientific computing. Contributions exploring the fusion of these methods with neural networks are particularly encouraged.

Dr. Xiaoping Lu
Prof. Dr. Zhanchuan Cai
Dr. Hua Zheng
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • data science
  • high performance computing
  • neural network
  • parallel
  • numerical analysis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

20 pages, 6417 KiB  
Article
Neural Operator for Planetary Remote Sensing Super-Resolution with Spectral Learning
by Hui-Jia Zhao, Jie Lu, Wen-Xiu Guo and Xiao-Ping Lu
Mathematics 2024, 12(22), 3461; https://doi.org/10.3390/math12223461 - 6 Nov 2024
Viewed by 859
Abstract
High-resolution planetary remote sensing imagery provides detailed information for geomorphological and topographic analyses. However, acquiring such imagery is constrained by limited deep-space communication bandwidth and challenging imaging environments. Conventional super-resolution methods typically employ separate models for different scales, treating them as independent tasks. This approach limits deployment and real-time applications in planetary remote sensing. Moreover, capturing global context is crucial in planetary remote sensing images due to their contextual similarities. To address these limitations, we propose Discrete Cosine Transform (DCT)–Global Super Resolution Neural Operator (DG-SRNO), a global context-aware arbitrary-scale super-resolution model. DG-SRNO achieves super-resolution at any scale using a single framework by learning the mapping between low-resolution (LR) and high-resolution (HR) function spaces. We mathematically prove the global receptive field of DG-SRNO. To evaluate DG-SRNO’s performance in planetary remote sensing tasks, we introduce the Ceres 800 dataset, a planetary remote sensing super-resolution dataset. Extensive quantitative and qualitative experiments demonstrate DG-SRNO’s impressive reconstruction capabilities. Full article
(This article belongs to the Special Issue Applied Mathematics in Data Science and High-Performance Computing)
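
To make the arbitrary-scale idea concrete, here is a minimal sketch of a coordinate-conditioned decoder that renders any target resolution from a single set of low-resolution features, in the spirit of neural-operator super-resolution. The class name, layer sizes, and architecture below are illustrative assumptions and are not taken from the DG-SRNO implementation.

```python
# Hypothetical sketch of arbitrary-scale super-resolution with a coordinate-
# conditioned decoder. Illustrative only; not the DG-SRNO architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoordinateSRDecoder(nn.Module):
    """Predict an HR pixel from interpolated LR features plus its (x, y) coordinate."""
    def __init__(self, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)  # toy LR feature extractor
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, lr: torch.Tensor, out_h: int, out_w: int) -> torch.Tensor:
        # lr: (B, 3, h, w); out_h / out_w can be any target size (arbitrary scale).
        feat = self.encoder(lr)
        # Query a regular grid of normalized HR coordinates in [-1, 1].
        ys = torch.linspace(-1, 1, out_h, device=lr.device)
        xs = torch.linspace(-1, 1, out_w, device=lr.device)
        grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack([grid_x, grid_y], dim=-1)                  # (H, W, 2)
        grid = coords.unsqueeze(0).expand(lr.shape[0], -1, -1, -1)      # (B, H, W, 2)
        # Evaluate the LR feature "function" at the HR query locations.
        sampled = F.grid_sample(feat, grid, align_corners=False)        # (B, C, H, W)
        sampled = sampled.permute(0, 2, 3, 1)                           # (B, H, W, C)
        inp = torch.cat([sampled, grid], dim=-1)
        return self.mlp(inp).permute(0, 3, 1, 2)                        # (B, 3, H, W)

if __name__ == "__main__":
    model = CoordinateSRDecoder()
    lr = torch.rand(1, 3, 32, 32)
    print(model(lr, 48, 48).shape)    # same model, x1.5 scale
    print(model(lr, 96, 128).shape)   # same model, different (even anisotropic) scale
```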

16 pages, 599 KiB  
Article
Variational Autoencoding with Conditional Iterative Sampling for Missing Data Imputation
by Shenfen Kuang, Jie Song, Shangjiu Wang and Huafeng Zhu
Mathematics 2024, 12(20), 3288; https://doi.org/10.3390/math12203288 - 20 Oct 2024
Cited by 1 | Viewed by 1237
Abstract
Variational autoencoders (VAEs) are popular for their robust nonlinear representation capabilities and have recently achieved notable advancements in the problem of missing data imputation. However, existing imputation methods often exhibit instability due to the inherent randomness in the sampling process, leading to either underestimation or overfitting, particularly when handling complex missing data types such as images. To address this challenge, we introduce a conditional iterative sampling imputation method. Initially, we employ an importance-weighted beta variational autoencoder to learn the conditional distribution from the observed data. Subsequently, leveraging the importance-weighted resampling strategy, samples are drawn iteratively from the conditional distribution to compute the conditional expectation of the missing data. The proposed method has been experimentally evaluated using classical generative datasets and compared with various well-known imputation methods to validate its effectiveness. Full article
(This article belongs to the Special Issue Applied Mathematics in Data Science and High-Performance Computing)
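
As a rough illustration of conditional iterative sampling, the sketch below repeatedly encodes the current imputation, draws several latent samples, weights them by how well they reconstruct the observed entries, and replaces the missing entries with the weighted reconstruction (a conditional-expectation-style update). The tiny untrained VAE and the softmax weighting are placeholder assumptions, not the authors' importance-weighted beta variational autoencoder.

```python
# Hypothetical sketch of conditional iterative imputation with a VAE.
# The model and weighting are illustrative stand-ins, not the paper's algorithm.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, d: int = 8, z: int = 2):
        super().__init__()
        self.enc = nn.Linear(d, 2 * z)   # mean and log-variance of q(z | x)
        self.dec = nn.Linear(z, d)       # mean of p(x | z)

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        return mu, logvar

    def decode(self, z):
        return self.dec(z)

@torch.no_grad()
def impute(vae, x, mask, n_iters: int = 20, n_samples: int = 50):
    """x: data with missing entries zero-filled; mask: 1 = observed, 0 = missing."""
    x = x.clone()
    for _ in range(n_iters):
        mu, logvar = vae.encode(x)
        std = (0.5 * logvar).exp()
        # Draw several latent samples and weight them by how well they
        # reconstruct the *observed* entries (importance-style resampling).
        z = mu + std * torch.randn(n_samples, *mu.shape)
        recon = vae.decode(z)                              # (S, B, d)
        err = ((recon - x) ** 2 * mask).sum(dim=-1)        # fit on observed part
        w = torch.softmax(-err, dim=0).unsqueeze(-1)       # (S, B, 1)
        x_missing = (w * recon).sum(dim=0)                 # weighted (expectation-style) estimate
        # Keep observed values fixed; update only the missing entries.
        x = mask * x + (1 - mask) * x_missing
    return x

if __name__ == "__main__":
    torch.manual_seed(0)
    vae = TinyVAE()                       # untrained, for illustration only
    x_true = torch.randn(4, 8)
    mask = (torch.rand(4, 8) > 0.3).float()
    print(impute(vae, x_true * mask, mask).shape)
```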

15 pages, 383 KiB  
Article
A Covariance-Free Strictly Complex-Valued Relevance Vector Machine for Reducing the Order of Linear Time-Invariant Systems
by Weixiang Xie and Jie Song
Mathematics 2024, 12(19), 2991; https://doi.org/10.3390/math12192991 - 25 Sep 2024
Viewed by 792
Abstract
Multiple-input multiple-output (MIMO) linear time-invariant (LTI) systems exhibit enormous computational costs for high-dimensional problems. To address this problem, we propose a novel approach for reducing the dimensionality of MIMO systems. The method leverages the Takenaka–Malmquist basis and incorporates the strictly complex-valued relevant vector machine (SCRVM). We refer to this method as covariance-free maximum likelihood (CoFML). The proposed method avoids the explicit computation of the covariance matrix. CoFML solves multiple linear systems to obtain the required posterior statistics for covariance. This is achieved by exploiting the preconditioning matrix and the matrix diagonal element estimation rule. We provide theoretical justification for this approximation and show why our method scales well in high-dimensional settings. By employing the CoFML algorithm, we approximate MIMO systems in parallel, resulting in significant computational time savings. The effectiveness of this method is demonstrated through three well-known examples. Full article
(This article belongs to the Special Issue Applied Mathematics in Data Science and High-Performance Computing)
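
The covariance-free idea can be illustrated numerically: instead of inverting a symmetric positive-definite precision matrix to obtain the posterior covariance, the posterior mean comes from an iterative linear solve, and the diagonal of the inverse (the posterior variances) is estimated with probing vectors, again using only solves. The Rademacher probing estimator in the sketch below is a standard technique chosen for illustration; it is not necessarily the exact diagonal estimation rule used in CoFML.

```python
# Hypothetical sketch of covariance-free posterior statistics: replace explicit
# matrix inversion with iterative linear solves plus a stochastic diagonal
# estimator. Illustrative only; not the paper's exact CoFML estimator.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
n = 200
# A symmetric positive-definite "precision" matrix A; the posterior covariance is its inverse.
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b = rng.standard_normal(n)

A_op = LinearOperator((n, n), matvec=lambda v: A @ v)

# Posterior mean: solve A x = b iteratively (no inverse, no covariance matrix stored).
mean, info = cg(A_op, b)

# Posterior variances: estimate the diagonal of the inverse with Rademacher probing
# vectors, diag(inv(A)) ~ E[ v * solve(A, v) ], using only linear solves.
n_probes = 50
diag_est = np.zeros(n)
for _ in range(n_probes):
    v = rng.choice([-1.0, 1.0], size=n)
    y, _ = cg(A_op, v)
    diag_est += v * y
diag_est /= n_probes

exact_diag = np.diag(np.linalg.inv(A))   # only to check the sketch on this small example
print("residual of the mean solve:", np.linalg.norm(A @ mean - b))
print("relative diagonal error:", np.linalg.norm(diag_est - exact_diag) / np.linalg.norm(exact_diag))
```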
