Parallel reducts for incremental data
by Dayong Deng; Lin Chen; Dianxun Yan; Houkuan Huang
International Journal of Granular Computing, Rough Sets and Intelligent Systems (IJGCRSIS), Vol. 3, No. 2, 2013

Abstract: Parallel reducts are better suited to dynamic, incremental, and multi-source data than other kinds of reducts, and can be obtained from attribute significance in a family of decision subsystems. However, as data grow, an existing parallel reduct must be updated to fit the enlarged dataset. In this paper, some properties of parallel reducts under changing data are discussed, and an algorithm for updating parallel reducts is proposed. Ideas for adapting the algorithm to decreasing and otherwise changing data are also introduced. Experimental results show that the algorithm eliminates most of the time needed to compute a new parallel reduct when new data are added.
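The abstract only outlines the approach, so the sketch below is not the authors' published algorithm but a rough illustration of the idea it describes: growing a parallel reduct greedily by attribute significance over a family of decision subsystems, and reusing the previous reduct as the starting point when new records arrive. The data layout, the function names `dependency` and `parallel_reduct`, and the summed-gain significance measure are all assumptions made for this sketch.

```python
from collections import defaultdict


def dependency(rows, attrs):
    """Dependency degree: fraction of rows whose value pattern over `attrs`
    determines the decision uniquely (rough-set positive region)."""
    if not rows:
        return 1.0
    groups = defaultdict(lambda: [0, set()])  # key -> [row count, decisions seen]
    for cond, dec in rows:
        key = tuple(cond[a] for a in sorted(attrs))
        groups[key][0] += 1
        groups[key][1].add(dec)
    consistent = sum(n for n, decs in groups.values() if len(decs) == 1)
    return consistent / len(rows)


def parallel_reduct(subsystems, all_attrs, start=frozenset()):
    """Greedy parallel reduct over a family of decision subsystems.

    Attributes are added by significance (summed gain in dependency across
    all subsystems) until every subsystem reaches the dependency it has
    under the full attribute set.  Passing the previous reduct as `start`
    gives the incremental variant: only the missing attributes are added."""
    targets = [dependency(rows, all_attrs) for rows in subsystems]
    reduct = set(start)
    while any(dependency(rows, reduct) < t
              for rows, t in zip(subsystems, targets)):
        best = max((a for a in all_attrs if a not in reduct),
                   key=lambda a: sum(dependency(rows, reduct | {a})
                                     for rows in subsystems))
        reduct.add(best)
    return reduct


# Two toy decision subsystems: each row is (condition-attribute dict, decision).
s1 = [({"a": 0, "b": 0, "c": 1}, "yes"), ({"a": 1, "b": 0, "c": 1}, "no")]
s2 = [({"a": 0, "b": 1, "c": 0}, "yes"), ({"a": 0, "b": 0, "c": 0}, "no")]
old = parallel_reduct([s1, s2], {"a", "b", "c"})

# New records arrive: restart the search from the old reduct instead of scratch.
s1.append(({"a": 1, "b": 1, "c": 1}, "no"))
new = parallel_reduct([s1, s2], {"a", "b", "c"}, start=old)
print(old, new)
```

Starting from the old reduct is what saves time in the incremental case: if the enlarged subsystems are still consistent with respect to the previous attribute set, no search is performed at all, and otherwise only the few missing attributes are added.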

Online publication date: Sat, 19-Jul-2014

The full text of this article is only available to individual subscribers or to users at subscribing institutions.