Authors: T. Maruthi Padmaja; Bapi S. Raju; P. Radha Krishna
Addresses: University of Hyderabad, (P.O.) Central University, Gachibowli, Hyderabad – 500 046 (AP), India; University of Hyderabad, (P.O.) Central University, Gachibowli, Hyderabad – 500 046 (AP), India; Infosys Limited, Hyderabad – 500 057 (AP), India
Abstract: The performance of the support vector machine (SVM) classification model is prone to the class imbalance problem, which occurs when one class of data severely outnumbers the other. Traditionally, this issue is addressed by balancing class distributions with sampling methods. This paper explores and applies the probabilistic active learning strategy StatQSVM (Mitra et al., 2004) to yield balanced class distributions from large-scale unbalanced datasets. Rather than querying instances based on their proximity to the hyperplane, StatQSVM selects a set of instances based on a locally defined confidence factor with respect to the current hyperplane that models the class separation. The exploratory study on StatQSVM is carried out using simulated as well as real-world unbalanced datasets, and performance deterioration was observed at high class-imbalance settings. To overcome this problem, a fast probabilistic cost-weighted undersampling approach, called CStatQSVM, with a new stopping criterion is proposed. The experimental results show that CStatQSVM is more successful at both minority- and majority-class prediction than the LOB and StatQSVM active learning methods and other conventional methods that address the class imbalance problem.
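To illustrate the general idea behind SVM-based active sample selection for imbalanced data (not the authors' StatQSVM or CStatQSVM implementation), the following minimal sketch undersamples the majority class by iteratively querying the majority instances closest to the current hyperplane. All dataset parameters, seed sizes, and round counts here are illustrative assumptions; scikit-learn is used for the SVM.

```python
# Illustrative sketch only: margin-based active undersampling with an SVM.
# Queries majority-class instances near the current separating hyperplane,
# in the spirit of active learning for class imbalance. Not the paper's
# confidence-factor (StatQSVM) or cost-weighted (CStatQSVM) method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.RandomState(0)

# Imbalanced two-class data: ~90% majority (label 0), ~10% minority (label 1).
X, y = make_classification(n_samples=1000, n_features=5,
                           weights=[0.9, 0.1], random_state=0)

min_idx = np.where(y == 1)[0]
maj_idx = np.where(y == 0)[0]

# Seed set: a small balanced sample from each class.
seed = np.concatenate([rng.choice(min_idx, 20, replace=False),
                       rng.choice(maj_idx, 20, replace=False)])
labeled = set(seed.tolist())

clf = SVC(kernel="linear")
for _ in range(5):  # a few active-learning rounds
    idx = np.fromiter(labeled, dtype=int)
    clf.fit(X[idx], y[idx])
    # Query the unqueried majority instances closest to the hyperplane.
    pool = np.setdiff1d(maj_idx, idx)
    margins = np.abs(clf.decision_function(X[pool]))
    labeled.update(pool[np.argsort(margins)[:20]].tolist())

final = np.fromiter(labeled, dtype=int)
# The resulting training set uses only a fraction of the majority class.
print(len(final), (y[final] == 0).sum(), (y[final] == 1).sum())
```

The selected set grows by 20 informative majority instances per round, so the classifier trains on a far smaller, more balanced subset than the full majority class would provide.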
Keywords: class imbalance problem; active learning; support vector machines; SVM; learning on the border; LOB; StatQSVM; CStatQSVM; active sample selection; data imbalance; unbalanced datasets.
International Journal of Knowledge Engineering and Soft Data Paradigms, 2013, Vol.4 No.1, pp.85–106
Available online: 18 Mar 2013