Open Access Article

A Source Domain Extension Method for Inductive Transfer Learning Based on Flipping Output

1 Advanced Manufacturing Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Saga 841-0052, Japan
2 Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology (Kyutech), Fukuoka 808-0196, Japan
* Author to whom correspondence should be addressed.
Algorithms 2019, 12(5), 95; https://doi.org/10.3390/a12050095
Received: 9 April 2019 / Revised: 24 April 2019 / Accepted: 3 May 2019 / Published: 7 May 2019

Abstract

Transfer learning aims to achieve high accuracy in target domains, where data collection is difficult, by applying knowledge from source domains for which data collection is easy. It has attracted attention in recent years because of its significant potential to extend machine learning to a wide range of real-world problems. However, because the data prepared as the source domain, which serves as the knowledge source for transfer learning, are chosen by the user, inappropriate data are often adopted; in such cases, accuracy may be reduced by "negative transfer." In this paper, we therefore propose a novel transfer learning method that uses the flipping-output technique to provide multiple labels in the source domain. The accuracy of the proposed method is statistically shown to be significantly better than that of a conventional transfer learning method, with an effect size as high as 0.9, indicating high performance.
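As an informal illustration only (not taken from the paper), the following Python sketch shows one plausible way a flipping-output style source-domain expansion could be combined with labelled target data for inductive transfer learning on a binary classification task. The function names, the flip rate, the number of expanded copies, and the use of scikit-learn decision trees with majority voting are all assumptions made for this sketch, not the authors' implementation.

    # Hedged sketch: source-domain expansion via label flipping, combined with
    # a small labelled target set and an ensemble of base learners.
    # All names and parameter choices here are illustrative assumptions.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def expand_source_by_flipping(X_src, y_src, n_copies=3, flip_rate=0.1, seed=0):
        """Create several copies of the source data, each with a small fraction
        of the binary labels flipped, to diversify the transferred knowledge."""
        rng = np.random.default_rng(seed)
        expanded = []
        for _ in range(n_copies):
            y_flip = y_src.copy()
            mask = rng.random(len(y_src)) < flip_rate
            y_flip[mask] = 1 - y_flip[mask]  # flip labels 0 <-> 1
            expanded.append((X_src, y_flip))
        return expanded

    def train_transfer_ensemble(X_src, y_src, X_tgt, y_tgt, **kwargs):
        """Train one base learner per flipped source copy, each pooled with the
        (small) labelled target set."""
        models = []
        for X_s, y_s in expand_source_by_flipping(X_src, y_src, **kwargs):
            X = np.vstack([X_s, X_tgt])
            y = np.concatenate([y_s, y_tgt])
            models.append(DecisionTreeClassifier().fit(X, y))
        return models

    def predict_majority(models, X):
        """Aggregate the ensemble's predictions by majority vote."""
        votes = np.stack([m.predict(X) for m in models])
        return (votes.mean(axis=0) >= 0.5).astype(int)

The intent of such a sketch is that flipping a small fraction of source labels yields several diversified views of the source domain, so that an ensemble over them is less sensitive to any single inappropriate source sample, mitigating negative transfer.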
Keywords: transfer learning; ensemble learning; data expansion; flipping output
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite This Article

MDPI and ACS Style

Koishi, Y.; Ishida, S.; Tabaru, T.; Miyamoto, H. A Source Domain Extension Method for Inductive Transfer Learning Based on Flipping Output. Algorithms 2019, 12, 95.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
