Applied Machine Learning in Data Science

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 November 2025 | Viewed by 2883

Special Issue Editors


Dr. Baoquan Zhang
Guest Editor
School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen 518055, China
Interests: multimodal learning; knowledge-informed machine learning; multimodal few-/zero-shot learning; meta learning; spatiotemporal data mining

Prof. Dr. Yunming Ye
Guest Editor
School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen 518055, China
Interests: data mining; machine learning

Prof. Dr. Xutao Li
Guest Editor
School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen 518055, China
Interests: data mining; machine learning; graph mining; social network analysis; tensor-based learning; mining algorithms

Special Issue Information

Dear Colleagues,

This Special Issue aims to illuminate the diverse applications of machine learning in data science, focusing on innovative methodologies that address contemporary challenges. Central to our exploration are knowledge-informed machine learning techniques that effectively incorporate domain expertise, enhancing model performance in scenarios with limited data availability. We also emphasize data-limited machine learning approaches designed to extract meaningful insights from sparse datasets. This Special Issue will delve into multimodal learning, which integrates various data types—such as text, images, and audio—creating more comprehensive and effective models. Additionally, we encourage contributions on parameter-efficient and sample-efficient fine-tuning of pretrained large models, enabling researchers to adapt these powerful architectures to specific tasks with minimal computational resources and data. By collating research across these critical areas, this Special Issue seeks to foster collaboration and innovation, ultimately advancing the field of data science through applied machine learning techniques.
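
To make the parameter-efficient fine-tuning theme concrete, below is a minimal sketch of a LoRA-style adapter in PyTorch: the pretrained weights stay frozen and only a small low-rank update is trained. The layer sizes, rank, and scaling factor are illustrative assumptions, not prescriptions from this Special Issue.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pretrained linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B(A(x)). Only A and B receive gradients."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        self.A = nn.Linear(base.in_features, r, bias=False)
        self.B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.B.weight)        # start as an identity update
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.B(self.A(x))

# Hypothetical usage: adapt one projection of a pretrained block.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # far fewer than full fine-tuning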

Dr. Baoquan Zhang
Prof. Dr. Yunming Ye
Prof. Dr. Xutao Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • knowledge-informed machine learning
  • data-limited machine learning
  • multimodal learning
  • parameter-efficient fine-tuning of pretrained models
  • application of pretrained models in data science

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

21 pages, 3746 KiB  
Article
DCP: Learning Accelerator Dataflow for Neural Networks via Propagation
by Peng Xu, Wenqi Shao and Ping Luo
Electronics 2025, 14(15), 3085; https://doi.org/10.3390/electronics14153085 - 1 Aug 2025
Viewed by 227
Abstract
Deep neural network (DNN) hardware (HW) accelerators have achieved great success in improving DNNs' performance and efficiency. One key reason is the dataflow used in executing a DNN layer, including on-chip data partitioning, computation parallelism, and scheduling policy, which has a large impact on latency and energy consumption. Unlike prior works that required considerable effort from HW engineers to design suitable dataflows for different DNNs, this work proposes an efficient data-centric approach, named Dataflow Code Propagation (DCP), that automatically finds the optimal dataflow for DNN layers in seconds without human effort. It offers several attractive benefits that prior studies lack: (i) it translates the HW dataflow configuration into a code representation in a unified dataflow coding space, which can be optimized by back-propagating gradients given a DNN layer or network; (ii) it learns a neural predictor to efficiently update the dataflow codes towards the desired gradient directions, minimizing various optimization objectives such as latency and energy; and (iii) it generalizes easily to unseen HW configurations in a zero-shot or few-shot manner, for example, without using additional training data. Extensive experiments on several representative models such as MobileNet, ResNet, and ViT show that DCP outperforms its counterparts in various settings.
(This article belongs to the Special Issue Applied Machine Learning in Data Science)
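
A minimal sketch of the core idea in this abstract: optimizing a continuous dataflow code by back-propagating through a learned cost predictor. The predictor architecture, the 16-dimensional coding space, and the objective weights below are placeholder assumptions, not DCP's actual design.

```python
import torch
import torch.nn as nn

# Hypothetical differentiable surrogate mapping a dataflow code to predicted
# (latency, energy); in DCP such a predictor is learned from measured samples.
predictor = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
for p in predictor.parameters():
    p.requires_grad_(False)            # the surrogate stays fixed here

# Start from a random code in a unified, continuous coding space.
code = torch.randn(1, 16, requires_grad=True)
opt = torch.optim.Adam([code], lr=1e-2)

for step in range(200):
    latency, energy = predictor(code).squeeze(0)
    loss = latency + 0.5 * energy      # weighted multi-objective cost
    opt.zero_grad()
    loss.backward()                    # gradients flow into the code itself
    opt.step()

# The optimized continuous code would then be decoded/rounded back into a
# discrete dataflow configuration for the target accelerator.
```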

16 pages, 3562 KiB  
Article
Enhancing Large Language Models for Specialized Domains: A Two-Stage Framework with Parameter-Sensitive LoRA Fine-Tuning and Chain-of-Thought RAG
by Yao He, Xuanbing Zhu, Donghan Li and Hongyu Wang
Electronics 2025, 14(10), 1961; https://doi.org/10.3390/electronics14101961 - 11 May 2025
Cited by 1 | Viewed by 2313
Abstract
Large language models (LLMs) have shown impressive general-purpose language capabilities, but their application in specialized domains such as healthcare and law remains limited by two major challenges: a lack of deep domain-specific knowledge and an inability to incorporate real-time information updates. This paper addresses these challenges by introducing SensiLoRA-RAG, a two-stage framework that combines parameter-sensitive low-rank adaptation (LoRA) with retrieval-augmented generation (RAG) to enhance LLM performance on domain-specific question-answering tasks. In the first stage, we propose a parameter-sensitive LoRA fine-tuning method that efficiently adapts LLMs to specialized domains using limited high-quality professional data, enabling rapid and resource-efficient specialization. In the second stage, we develop a chain-of-thought RAG mechanism that dynamically retrieves and integrates up-to-date external knowledge, improving the model's ability to reason with current information and complex domain context. We evaluate our framework on medical and legal tasks, demonstrating that SensiLoRA-RAG significantly improves answer accuracy, domain relevance, and adaptability compared to baseline methods.
(This article belongs to the Special Issue Applied Machine Learning in Data Science)
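
The parameter-sensitive LoRA stage suggests ranking modules by how strongly the task loss reacts to their weights and adapting only the most sensitive ones. The sketch below uses a first-order grad-times-weight proxy for sensitivity on a toy model; this is a simplification for illustration, not the authors' exact criterion.

```python
import torch
import torch.nn as nn

def rank_by_sensitivity(model: nn.Module, loss: torch.Tensor, top_k: int = 2):
    """Score each linear layer by |grad * weight| summed over entries (a
    first-order sensitivity proxy) and return the top_k module names that
    would receive LoRA adapters."""
    loss.backward()
    scores = {}
    for name, mod in model.named_modules():
        if isinstance(mod, nn.Linear) and mod.weight.grad is not None:
            scores[name] = (mod.weight.grad * mod.weight).abs().sum().item()
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Hypothetical toy model and batch standing in for a pretrained LLM block.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8))
x, y = torch.randn(4, 32), torch.randint(0, 8, (4,))
loss = nn.functional.cross_entropy(model(x), y)
print(rank_by_sensitivity(model, loss))  # names of the most sensitive layers
```

In a full pipeline, only the selected layers would be wrapped with LoRA adapters before fine-tuning, keeping the adapted parameter count small.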