A Source Domain Extension Method for Inductive Transfer Learning Based on Flipping Output
Abstract
Transfer learning aims to achieve high accuracy by transferring knowledge from source domains, where data collection is easy, to target domains, where data collection is difficult. It has attracted attention in recent years for its significant potential to extend machine learning to a wide range of real-world problems. However, because the user prepares the data that serve as the source domain, and hence as the knowledge source for transfer learning, inappropriate data are often adopted. In such cases, accuracy may be reduced by "negative transfer." In this paper, we therefore propose a novel transfer learning method that uses the flipping-output technique to provide multiple labels in the source domain. The accuracy of the proposed method is shown to be statistically significantly better than that of the conventional transfer learning method, with an effect size as high as 0.9, indicating high performance.
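The core idea in the abstract, extending the source domain by giving samples additional (flipped) labels, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, binary-label setting, and down-weighting of flipped copies are assumptions made for illustration only:

```python
def flip_augment(source_x, source_y, flipped_weight=0.5):
    """Extend a binary-labeled source domain by adding a label-flipped
    copy of each sample, so every input appears under both labels.

    Returns (inputs, labels, sample_weights); flipped copies receive a
    reduced weight (an illustrative choice, not from the paper).
    """
    xs, ys, ws = [], [], []
    for x, y in zip(source_x, source_y):
        # original sample with its observed label
        xs.append(x); ys.append(y); ws.append(1.0)
        # same input with the flipped label, at reduced weight
        xs.append(x); ys.append(1 - y); ws.append(flipped_weight)
    return xs, ys, ws
```

The extended set `(xs, ys, ws)` could then be passed to any learner that accepts per-sample weights (e.g. via a `sample_weight` argument) before transferring to the target domain.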
Cite This Article
Koishi, Y.; Ishida, S.; Tabaru, T.; Miyamoto, H. A Source Domain Extension Method for Inductive Transfer Learning Based on Flipping Output. Algorithms 2019, 12, 95.