Abstract
In the era of heterogeneous computing environments, spanning diverse hardware platforms and programming paradigms, accurate performance prediction of software applications is essential for efficient resource allocation, cost optimization, and informed deployment decisions. Traditional methods, however, require platform-specific measurements, which are resource-intensive and suffer from data scarcity for low-performance or novel code. This exploratory study addresses these challenges by leveraging transfer learning with regression models to enable cross-platform (from Intel x86 to ARM M1) and cross-code performance prediction. Using the Renaissance benchmark suite (renaissance-gpl-0.15.0.jar), we systematically evaluate eight traditional machine learning models and a deep neural network under data-transfer and fine-tuning scenarios. Key findings show that transfer learning significantly improves prediction accuracy: tree-based models such as Extra Trees achieve high R² scores (0.92) and outperform DNNs in robustness, particularly under noisy or data-scarce conditions. The study provides empirical insights into model effectiveness, highlights the advantage of transfer settings over training from scratch, and offers practical guidance for software engineers seeking to reduce measurement overhead and improve optimization workflows.
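To make the data-transfer setting concrete, the following is a minimal sketch using scikit-learn's ExtraTreesRegressor: a model trained only on scarce target-platform data is compared against one trained on pooled source- and target-platform data. The synthetic arrays, feature dimensions, and variable names are illustrative assumptions, not the paper's actual Renaissance measurements.

```python
# Minimal sketch of the data-transfer setting (assumed setup, synthetic data).
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical source-platform (x86) measurements: plentiful.
X_src = rng.normal(size=(500, 8))
y_src = X_src @ rng.normal(size=8) + rng.normal(scale=0.1, size=500)

# Hypothetical target-platform (ARM M1) measurements: scarce and shifted.
X_tgt = rng.normal(size=(60, 8))
y_tgt = 1.3 * (X_tgt @ rng.normal(size=8)) + rng.normal(scale=0.1, size=60)

X_fit, X_test, y_fit, y_test = train_test_split(
    X_tgt, y_tgt, test_size=0.5, random_state=0
)

# Baseline: train only on the scarce target data.
base = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X_fit, y_fit)

# Data-transfer setting: pool abundant source data with scarce target data.
X_pool = np.vstack([X_src, X_fit])
y_pool = np.concatenate([y_src, y_fit])
transfer = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X_pool, y_pool)

# Whether pooling helps depends on how similar the two platforms are;
# the paper evaluates this empirically on Renaissance benchmark data.
print("target-only R^2:", r2_score(y_test, base.predict(X_test)))
print("pooled-transfer R^2:", r2_score(y_test, transfer.predict(X_test)))
```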