Title: Kernel methods for transfer learning to avoid negative transfer

Authors: Hao Shao

Addresses: Shanghai University of International Business and Economics, Shanghai, 200336, China

Abstract: In the big data era, with the development of computers' information storage and processing capabilities, new tasks are becoming increasingly complex and demanding. On the other hand, abundant but out-of-date information from old tasks is often discarded. Because labelling newly emerged tasks is costly, transfer learning techniques have been developed to extract useful knowledge from existing similar datasets, and a large number of research works have been published in recent years. However, an open problem in transfer learning is negative transfer, which occurs when the distributions of the tasks differ. In this manuscript, we propose a kernel method to evaluate both task relatedness and instance similarities. We adopt the minimum description length principle (MDLP), which has proved effective for evaluating models in transfer learning scenarios. Extensive experiments on real datasets demonstrate the effectiveness of our algorithm in terms of classification accuracy.

Keywords: transfer learning; kernel methods; classification; negative transfer; big data; knowledge extraction; task relatedness; instance similarities; minimum description length principle.

DOI: 10.1504/IJCSM.2016.076430

International Journal of Computing Science and Mathematics, 2016 Vol.7 No.2, pp.190 - 199

Accepted: 10 Nov 2015
Published online: 06 May 2016
