Innovative Deep Transfer Learning Techniques and Their Use in Real-World Applications

A special issue of Mathematical and Computational Applications (ISSN 2297-8747).

Deadline for manuscript submissions: 15 June 2026 | Viewed by 6118

Special Issue Editors


Dr. Muhammad Asad Arshed
Guest Editor
School of Systems and Technology, University of Management and Technology, Lahore 54770, Pakistan
Interests: machine learning; deep learning; pattern recognition; medical imaging; computer vision; bioinformatics; NLP

Prof. Dr. Atif Alvi
Guest Editor
School of Systems and Technology, University of Management and Technology, Lahore 54770, Pakistan
Interests: machine learning; deep learning; pattern recognition; medical imaging; computer vision; bioinformatics; NLP

Prof. Dr. Ştefan Cristian Gherghina
Guest Editor
Department of Finance, Faculty of Finance and Banking, Bucharest University of Economic Studies, 010552 Bucharest, Romania
Interests: corporate finance; corporate governance; quantitative finance; sustainable development

Special Issue Information

Dear Colleagues,

Over the last two decades, the use of deep learning models has grown rapidly owing to their demonstrated predictive ability. The benefits extend beyond research outcomes published as quality academic work to the public at large through real-world applications such as image recognition and classification, disease diagnosis, natural language processing, and financial trend analysis. This Special Issue focuses on recent advances in deep transfer learning techniques and their application to real-world problems. In deep transfer learning, a model pre-trained on one dataset is fine-tuned to deal with another dataset, which may be of the same nature or, in some cases, of an entirely different nature but with similar data characteristics. A major benefit of these approaches is that previously trained models can be adapted without training from scratch, reducing training time while improving predictive capability and generalizability. The objective is to bring leading scientists and researchers together and create an interdisciplinary platform for computational theories, methodologies, and techniques related to deep transfer learning and their applications to real-world problems.
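The fine-tuning idea described above can be illustrated with a minimal sketch. The example below is our own hypothetical illustration (not drawn from any paper in this Issue): a logistic model is "pre-trained" on a synthetic source task, then warm-started on a related target task, where only a few additional epochs are needed compared with training from scratch. All data and names here are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(w_true, n=400):
    """Synthetic binary task: labels follow a noisy linear rule."""
    X = rng.normal(size=(n, w_true.size))
    y = (X @ w_true + rng.normal(scale=0.3, size=n) > 0).astype(float)
    return X, y

def train(X, y, w, lr=0.1, epochs=50):
    """Plain gradient descent on the logistic loss, starting from w."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(X, y, w):
    return float((((X @ w) > 0).astype(float) == y).mean())

# "Pre-training": fit a model from scratch on a large source task.
w_src_true = rng.normal(size=20)
Xs, ys = make_task(w_src_true)
w_pre = train(Xs, ys, np.zeros(20))

# Related target task: its true weights are a small perturbation
# of the source task's, mimicking a domain shift.
Xt, yt = make_task(w_src_true + rng.normal(scale=0.2, size=20), n=100)

# Fine-tuning: warm-start from pre-trained weights, few epochs.
w_ft = train(Xt, yt, w_pre.copy(), epochs=5)

print(round(accuracy(Xt, yt, w_ft), 3))
```

Because the target task is related to the source task, the warm-started model typically reaches a useful decision boundary with a fraction of the training budget that a model trained from scratch would need.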

In this Special Issue, original research articles, reviews and substantive applications related to deep transfer learning are welcome. Research areas may include (but are not limited to) the following:

  • Novel architectures and algorithms for deep transfer learning;
  • Cross-domain and cross-model transfer learning;
  • Few-shot, one-shot, and zero-shot learning via transfer learning;
  • Detection of fake media, deepfakes, and misinformation using transfer learning;
  • Transfer learning for cybersecurity and digital forensics;
  • Transfer learning under data scarcity or imbalanced data conditions;
  • Applications in computer vision, NLP, healthcare, and finance.

We look forward to receiving your contributions.

Dr. Muhammad Asad Arshed
Prof. Dr. Atif Alvi
Prof. Dr. Ştefan Cristian Gherghina
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematical and Computational Applications is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep transfer learning
  • pretrained models
  • domain adaptation
  • fake media detection
  • few-shot learning
  • real-world applications
  • cross-domain learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

29 pages, 2990 KB  
Article
Federated and Interpretable AI Framework for Secure and Transparent Loan Default Prediction in Financial Institutions
by Awad M. Awadelkarim
Math. Comput. Appl. 2026, 31(2), 56; https://doi.org/10.3390/mca31020056 - 5 Apr 2026
Viewed by 458
Abstract
Predicting loan defaults is a significant challenge for financial institutions, yet current machine learning techniques often encounter issues concerning data privacy, cross-institutional cooperation, and model transparency. Two factors restrict the practical deployment of advanced predictive models: centralized training paradigms, which are limited by regulatory and confidentiality constraints, and black-box decision making, which diminishes confidence in automated credit risk tools. This study mitigates these problems by adopting a federated-inspired decentralized ensemble learning model combined with explainable artificial intelligence (XAI) for loan default prediction. Several machine learning classifiers, including K-Nearest Neighbors, support vector machine, random forest, and XGBoost, are trained on partitioned institutional data without any data sharing; a prediction-level aggregation strategy then simulates collaborative decision making while preserving data locality. SHAP and LIME promote model interpretability by providing both global and local explanations of the predictions. The proposed framework was tested on a large public loan dataset containing more than 116,000 records with various financial and borrower-related features. The experimental findings show that XGBoost delivers high and reliable predictive accuracy in both centralized and decentralized scenarios, achieving 99.7% accuracy under federated-inspired evaluation. The explanation analysis identifies interest rate spread and upfront charges as the most significant predictors of loan default risk.
The main contributions of this research are as follows: (i) a privacy-preserving decentralized ensemble learning framework applicable in multi-institutional financial contexts, (ii) a detailed comparison of centralized and decentralized predictive performance, and (iii) an XAI pipeline that increases transparency and regulatory confidence in automated credit risk evaluation. These results show that decentralized learning combined with explainable AI can deliver high-performing, transparent, and privacy-sensitive loan default prediction systems in real-world banking. Full article
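The prediction-level aggregation strategy described in this abstract can be sketched as follows. This is our own simplified reconstruction on synthetic data, not the authors' code: each "institution" trains a classifier on its own disjoint partition, and only the resulting predictions are shared and majority-voted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a loan dataset.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each "institution" holds a disjoint partition; raw data never leaves it.
parts = np.array_split(np.arange(len(X_train)), 3)
models = [
    KNeighborsClassifier(),
    RandomForestClassifier(random_state=0),
    LogisticRegression(max_iter=1000),
]
local = [m.fit(X_train[idx], y_train[idx]) for m, idx in zip(models, parts)]

# Prediction-level aggregation: only predictions are exchanged,
# then combined by majority vote.
votes = np.stack([m.predict(X_test) for m in local])
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)
acc = float((ensemble_pred == y_test).mean())
print(round(acc, 3))
```

A real federated deployment would also secure the prediction exchange itself; this sketch only shows that nothing beyond predictions needs to cross institutional boundaries.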

27 pages, 904 KB  
Article
An Interpretable Hybrid RF–ANN Early-Warning Model for Real-World Prediction of Academic Confidence and Problem-Solving Skills
by Mostafa Aboulnour Salem and Zeyad Aly Khalil
Math. Comput. Appl. 2025, 30(6), 140; https://doi.org/10.3390/mca30060140 - 18 Dec 2025
Cited by 3 | Viewed by 920
Abstract
Early identification of students at risk of low academic confidence, poor problem-solving skills, or poor academic performance is crucial to achieving equitable and sustainable learning outcomes. This research presents a hybrid artificial intelligence (AI) framework that combines feature selection using a Random Forest (RF) algorithm with data classification via an Artificial Neural Network (ANN) to predict risks related to Academic Confidence and Problem-Solving Skills (ACPS) among higher education students. Three real-world datasets from Saudi universities were used: MSAP, EAAAM, and MES. Data preprocessing included Min–Max normalisation, class balancing using SMOTE (Synthetic Minority Oversampling Technique), and recursive feature elimination. Model performance was evaluated using five-fold cross-validation and a paired t-test. The proposed model (RF-ANN) achieved an average accuracy of 98.02%, outperforming benchmark models such as XGBoost, TabNet, and an Autoencoder–ANN. Statistical tests confirmed the significant performance improvement (p < 0.05; Cohen’s d = 1.1–2.7). Feature importance and explainability analysis using Random Forest and Shapley Additive Explanations (SHAP) showed that psychological and behavioural factors—particularly study hours, academic engagement, and stress indicators—were the most influential drivers of ACPS risk. Hence, the findings demonstrate that the proposed framework combines high predictive accuracy with interpretability, computational efficiency, and scalability. Practically, the model supports Sustainable Development Goal 4 (Quality Education) by enabling early, transparent identification of at-risk students, thereby empowering educators and academic advisors to deliver timely, targeted, and data-driven interventions. Full article
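The two-stage RF-ANN pipeline this abstract describes can be sketched in a few lines. The example below is an illustrative reconstruction on synthetic data (feature counts, layer sizes, and the top-8 cutoff are our own assumptions, not the paper's): a Random Forest first ranks features by importance, and an ANN then classifies using only the selected features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in: 20 features, only a few of them informative.
X, y = make_classification(
    n_samples=500, n_features=20, n_informative=5, random_state=0
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: Random Forest ranks features by impurity-based importance.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[::-1][:8]  # keep the top-8 features

# Stage 2: an ANN classifies using only the selected features.
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
ann.fit(X_tr[:, top], y_tr)
acc = float(ann.score(X_te[:, top], y_te))
print(round(acc, 3))
```

Pruning uninformative features before the ANN stage reduces both training cost and the risk of overfitting on small educational datasets.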

18 pages, 2060 KB  
Article
A Context-Aware Representation-Learning-Based Model for Detecting Human-Written and AI-Generated Cryptocurrency Tweets Across Large Language Models
by Muhammad Asad Arshed, Ştefan Cristian Gherghina, Iqra Khalil, Hasnain Muavia, Anum Saleem and Hajran Saleem
Math. Comput. Appl. 2025, 30(6), 130; https://doi.org/10.3390/mca30060130 - 29 Nov 2025
Viewed by 1438
Abstract
The extensive use of large language models (LLMs), particularly in the finance sector, raises concerns about the authenticity and reliability of generated text. Developing a robust method for distinguishing between human-written and AI-generated financial content is therefore essential. This study addressed this challenge by constructing a dataset based on financial tweets, where original financial tweet texts were regenerated using six LLMs, resulting in seven distinct classes: human-authored text, LLaMA3.2, Phi3.5, Gemma2, Qwen2.5, Mistral, and LLaVA. A context-aware representation-learning-based model, namely DeBERTa, was extensively fine-tuned for this task. Its performance was compared to that of other transformer variants (DistilBERT, BERT Base Uncased, ELECTRA, and ALBERT Base V1) as well as traditional machine learning models (logistic regression, naive Bayes, random forest, decision trees, XGBoost, AdaBoost, and voting (AdaBoost, GradientBoosting, XGBoost)) using Word2Vec embeddings. The proposed DeBERTa-based model achieved an impressive test accuracy, precision, recall, and F1-score, all reaching 94%. In contrast, competing transformer models achieved test accuracies ranging from 0.78 to 0.80, while traditional machine learning models yielded a significantly lower performance (0.39–0.80). These results highlight the effectiveness of context-aware representation learning in distinguishing between human-written and AI-generated financial text, with significant implications for text authentication, authorship verification, and financial information security. Full article

19 pages, 11950 KB  
Article
A Novel Hybrid Attention-Based RoBERTa-BiLSTM Model for Cyberbullying Detection
by Mohammed A. Mahdi, Suliman Mohamed Fati, Mohammed Gamal Ragab, Mohamed A. G. Hazber, Shahanawaj Ahamad, Sawsan A. Saad and Mohammed Al-Shalabi
Math. Comput. Appl. 2025, 30(4), 91; https://doi.org/10.3390/mca30040091 - 21 Aug 2025
Cited by 1 | Viewed by 2252
Abstract
The escalating scale and psychological harm of cyberbullying across digital platforms present a critical social challenge, demanding the urgent development of highly accurate and reliable automated detection systems. Standard fine-tuned transformer models, while powerful, often fall short in capturing the nuanced, context-dependent nature of online harassment. This paper introduces a novel hybrid deep learning model, Robustly Optimized Bidirectional Encoder Representations from Transformers with a Bidirectional Long Short-Term Memory-based attention model (RoBERTa-BiLSTM), specifically designed to address this challenge. To maximize its effectiveness, the model was systematically optimized using the Optuna framework and rigorously benchmarked against eight state-of-the-art transformer baselines on a large cyberbullying dataset. The proposed model achieves state-of-the-art performance, outperforming BERT-base, RoBERTa-base, RoBERTa-large, DistilBERT, ALBERT-xxlarge, XLNet-large, ELECTRA-base, and DeBERTa-v3-small, with an accuracy of 94.8%, precision of 96.4%, recall of 95.3%, F1-score of 95.8%, and an AUC of 98.5%. Significantly, it demonstrates a substantial improvement in F1-score over the strongest baseline and reduces critical false negative errors by 43%, and the efficiency analysis indicates that this performance is achieved with only moderate computational complexity. The results validate the hypothesis that a specialized hybrid architecture, which synergizes contextual embedding with sequential processing and an attention mechanism, offers a more robust and practical solution for real-world social media applications. Full article
