
Transfer Learning: Techniques and Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 November 2025

Special Issue Editors


Dr. Hasan Tercan
Guest Editor
Institute for Technologies and Management of Digital Transformation, Lise-Meitner-Strasse 27, 42119 Wuppertal, Germany
Interests: industrial deep learning; machine learning; transfer learning; continual learning; lifelong learning; reinforcement learning

Dr. Yang Li
Guest Editor
Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
Interests: transfer learning; transferability estimation; domain adaptation; generalization; explainable representation learning; topological data analysis

Special Issue Information

Dear Colleagues,

Transfer learning has become essential to advancing artificial intelligence and deep learning, enabling the effective reuse of knowledge across domains, tasks, and data distributions. It plays a critical role in overcoming challenges such as data scarcity, domain shift, and poor generalization to unseen environments, making it vital to real-world AI applications.
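
To make this knowledge-reuse pattern concrete, the sketch below shows the most common form of transfer learning: fine-tuning a backbone pretrained on a large source dataset for a small target task. It is an illustrative sketch only; the model choice, layer names, and five-class target task are assumptions, not part of this call for papers.

    # Minimal fine-tuning sketch: reuse features learned on a source task (ImageNet)
    # and train only a new classification head on a small target dataset.
    # Illustrative assumptions: ResNet-18 backbone, 5 target classes.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # source-domain knowledge
    for p in model.parameters():
        p.requires_grad = False                     # freeze the transferred feature extractor
    model.fc = nn.Linear(model.fc.in_features, 5)   # new head for the target task

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    def train_step(x, y):
        """One fine-tuning step on a target-domain batch (x, y)."""
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        return loss.item()

Unfreezing some of the later backbone layers (or the whole network, at a lower learning rate) is the usual next step when more target data is available.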

This Special Issue aims to compile recent advances in transfer learning, including novel methods, theoretical foundations, and practical applications. Topics of interest include, but are not limited to, the following:

  • Novel methods and theoretical frameworks for transfer learning and domain adaptation.
  • Novel pre-training strategies (e.g., self-supervised, supervised, domain-specific) and their impact on downstream tasks.
  • Novel methods for cross-domain, cross-task, and cross-modal transfer learning.
  • Techniques for dealing with domain shift, catastrophic forgetting, and negative transfer.
  • Transfer learning across different data modalities (e.g., vision, text, speech, graphs, time series) or structures.
  • Applications of transfer learning to solve real-world challenges (e.g., simulation to reality, medical image analysis, robotics, autonomous systems, computer vision, engineering and Industry 4.0).
  • Empirical investigations and benchmarks comparing different transfer learning approaches.
  • Investigations into the robustness, fairness, and limitations of transfer learning methods.
  • Efficient adaptation methods for large foundation models and LLMs.

We hope to provide a collection of high-quality papers that reflect the current state of the art and emerging research directions in transfer learning.

We look forward to receiving your contributions.

Dr. Hasan Tercan
Dr. Yang Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • transfer learning
  • domain adaptation
  • deep learning
  • artificial intelligence
  • pre-training
  • knowledge transfer
  • machine learning
  • sim-to-real transfer
  • transfer learning in continual and dynamic environments
  • test-time adaptation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

23 pages, 1896 KB  
Article
Cross-Language Code Smell Detection via Transfer Learning
by Rana Sandouka and Hamoud Aljamaan
Appl. Sci. 2025, 15(17), 9293; https://doi.org/10.3390/app15179293 - 24 Aug 2025
Abstract
Code smells are code structures that indicate a potential issue in code design or implementation. Such issues can affect code testing, maintenance, and overall software quality, so it is important to detect code smells in the early stages of software development to enhance system quality. Most studies have focused on detecting code smells in a single programming language. This article explores transfer learning for cross-language code smell detection, with Java as the source dataset and C# and Python as the target datasets, focusing on the Large Class, Long Method, and Long Parameter List code smells. We conducted a comparison study across two transfer learning approaches, instance-based (Importance Weighting Classifier, Nearest Neighbors Weighting, and Transfer AdaBoost) and parameter-based (Transfer Tree, Transfer Forest), each with various base models. The results showed that the instance-based approach outperformed the parameter-based approach, particularly Transfer AdaBoost with ensemble learning base models. Transfer AdaBoost with Gradient Boosting and Extra Trees achieved consistent and robust results across both C# and Python, with an 83% winning rate, as indicated by the Wilcoxon signed-rank test. These findings underscore the effectiveness of transfer learning for cross-language code smell detection and support its generalizability across programming languages.
(This article belongs to the Special Issue Transfer Learning: Techniques and Applications)
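
For readers unfamiliar with the instance-based methods named in the abstract, the following sketch illustrates the idea behind Transfer AdaBoost (TrAdaBoost): source instances that the current weak learner misclassifies are down-weighted, while misclassified target instances are up-weighted, so later rounds concentrate on source examples compatible with the target data. This is a simplified illustration, not the authors' implementation; the decision-tree base learner, number of rounds, and binary {0, 1} labels are assumptions.

    # Simplified TrAdaBoost-style instance reweighting (illustrative sketch).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def tradaboost_sketch(Xs, ys, Xt, yt, n_rounds=10):
        """Fit weak learners on combined source+target data with evolving weights."""
        n_s = len(Xs)
        X = np.vstack([Xs, Xt])
        y = np.concatenate([ys, yt])
        w = np.ones(len(y)) / len(y)                 # uniform initial weights
        beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_s) / n_rounds))
        learners, betas = [], []

        for _ in range(n_rounds):
            w = w / w.sum()
            clf = DecisionTreeClassifier(max_depth=3).fit(X, y, sample_weight=w)
            miss = (clf.predict(X) != y).astype(float)
            # weighted error measured on the target portion only
            eps = np.clip(np.sum(w[n_s:] * miss[n_s:]) / np.sum(w[n_s:]), 1e-10, 0.499)
            beta_tgt = eps / (1.0 - eps)
            w[:n_s] *= beta_src ** miss[:n_s]        # shrink weights of misclassified source points
            w[n_s:] *= beta_tgt ** -miss[n_s:]       # grow weights of misclassified target points
            learners.append(clf)
            betas.append(beta_tgt)

        def predict(Xq):
            # weighted vote over the second half of the rounds, as in TrAdaBoost
            half = len(learners) // 2
            score = sum(-np.log(b) * clf.predict(Xq)
                        for clf, b in zip(learners[half:], betas[half:]))
            thresh = 0.5 * sum(-np.log(b) for b in betas[half:])
            return (score >= thresh).astype(int)

        return predict

The parameter-based alternatives mentioned in the abstract (Transfer Tree, Transfer Forest) instead adapt the parameters or structure of a source-trained model rather than reweighting individual training instances.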
