Reinforced confidence in self-training for a semi-supervised medical data classification
by Mohammed El Amine Bechar; Nesma Settouti; Mohammed Amine Chikh; Mouloud Adel
International Journal of Applied Pattern Recognition (IJAPR), Vol. 4, No. 2, 2017

Abstract: Semi-supervised methods have become crucial for automating tasks that would otherwise require manual expert effort for data labelling; their advantage lies in the fact that they require only a small amount of labelled information. In this work, we are particularly interested in the self-training paradigm. These techniques follow the same principle as supervised techniques, but add a confidence measure that selects only the most confident samples. We propose a novel self-training algorithm named reinforced confidence in self-training (R-COSET), based on an iterative process in which the learned hypothesis is improved at each iteration by confident data; the proposed confidence measure is reinforced by two confidence levels in order to increase the robustness of the self-training process. Experiments show that introducing the second level of the neighbourhood graph into the confidence measure is beneficial and that R-COSET can effectively improve classification performance.
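To make the self-training idea concrete, the sketch below shows a generic iterative self-training loop in Python with a two-stage confidence check. It is an illustration under assumptions, not R-COSET itself: the paper's exact confidence measure and its neighbourhood-graph construction are not reproduced here, and the probability threshold, the k-NN agreement check, and the base classifier choice are all assumptions made for the example.

```python
# Generic self-training loop with a two-stage confidence check (illustrative only).
# NOTE: this is NOT the R-COSET algorithm from the paper; the second confidence
# level here is a simple k-NN agreement check standing in for the paper's
# neighbourhood-graph reinforcement, and all thresholds are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier


def self_train(X_lab, y_lab, X_unlab, proba_threshold=0.9, k=5, max_iter=10):
    X_lab, y_lab, X_unlab = map(np.asarray, (X_lab, y_lab, X_unlab))
    for _ in range(max_iter):
        if len(X_unlab) == 0:
            break
        clf = RandomForestClassifier(n_estimators=100).fit(X_lab, y_lab)
        proba = clf.predict_proba(X_unlab)
        pred = clf.classes_[proba.argmax(axis=1)]

        # First confidence level: the classifier's own posterior probability.
        confident = proba.max(axis=1) >= proba_threshold

        # Second confidence level (assumed analogue of a neighbourhood check):
        # the prediction must also agree with a k-NN vote over the labelled set.
        knn = KNeighborsClassifier(n_neighbors=min(k, len(X_lab))).fit(X_lab, y_lab)
        agrees = knn.predict(X_unlab) == pred
        selected = confident & agrees

        if not selected.any():
            break
        # Move the doubly-confident samples into the labelled pool and iterate.
        X_lab = np.vstack([X_lab, X_unlab[selected]])
        y_lab = np.concatenate([y_lab, pred[selected]])
        X_unlab = X_unlab[~selected]
    return RandomForestClassifier(n_estimators=100).fit(X_lab, y_lab)
```

In this sketch, a pseudo-labelled sample is accepted only when both confidence levels agree, which mirrors (in spirit) the paper's idea of reinforcing the confidence measure to make the self-training process more robust to noisy pseudo-labels.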

Online publication date: Fri, 21-Jul-2017
