Forthcoming articles


International Journal of Collaborative Intelligence


These articles have been peer-reviewed and accepted for publication in IJCI, but are pending final changes, are not yet published, and may not appear here in their final order of publication until they are assigned to issues. The content therefore conforms to our standards, but the presentation (e.g. typesetting and proofreading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.


Forthcoming articles may be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.




International Journal of Collaborative Intelligence (5 papers in press)


Regular Issues


  • NNGDPC: a kNNG-based Density Peaks Clustering
    by Miao Li 
    Abstract: Density peaks clustering (DPC) is a novel density-based clustering algorithm that needs neither an iterative process nor many parameters. However, the original algorithm does not take the geometry of the data distribution into account, and DPC performs poorly when clusters have different densities. To overcome this problem, we propose a novel density peaks clustering algorithm based on the k-nearest neighbor graph, called NNGDPC (kNNG-based density peaks clustering), which introduces the k-nearest neighbor graph (k-NNG) into DPC. Experiments on synthetic datasets demonstrate the power of the proposed algorithm and show that it is feasible and effective.
    Keywords: Data clustering; Density peaks; K-nearest neighbor graph.
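The DPC mechanics this abstract builds on (a local density rho, a distance delta to the nearest higher-density point, centres chosen by large rho*delta) can be sketched in a few lines. This is a hedged illustration of the general idea with a k-NN-based density, not the authors' NNGDPC; the function name `dpc_knn` and all parameter choices are our own assumptions.

```python
import numpy as np

def dpc_knn(X, k=5, n_clusters=2):
    # Illustrative density peaks clustering with a k-NN-based local density.
    n = len(X)
    # pairwise Euclidean distances
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # local density rho: inverse mean distance to the k nearest neighbours
    knn_d = np.sort(d, axis=1)[:, 1:k + 1]   # drop the zero self-distance
    rho = 1.0 / (knn_d.mean(axis=1) + 1e-12)
    # delta: distance to the nearest point of higher density
    delta = np.zeros(n)
    nearest_higher = np.full(n, -1)
    order = np.argsort(-rho)                 # indices by descending density
    for i, p in enumerate(order):
        if i == 0:
            delta[p] = d[p].max()            # densest point: use max distance
            continue
        higher = order[:i]
        j = higher[np.argmin(d[p, higher])]
        delta[p] = d[p, j]
        nearest_higher[p] = j
    # cluster centres: the points with the largest gamma = rho * delta
    centres = np.argsort(-(rho * delta))[:n_clusters]
    labels = np.full(n, -1)
    labels[centres] = np.arange(n_clusters)
    # assign remaining points to the label of their nearest higher-density point
    for p in order:
        if labels[p] == -1 and nearest_higher[p] >= 0:
            labels[p] = labels[nearest_higher[p]]
    return labels
```

On two well-separated blobs this recovers one cluster per blob; the kNNG-based variant proposed in the paper replaces the plain density estimate with graph-based neighbourhood information to cope with clusters of differing density.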

  • Density-based Multi-weight vector support vector machine
    by Xiaopeng Hua 
    Abstract: The recently proposed multi-weight vector support vector machine (MVSVM) considers all points and treats them as equally important. In real cases, most points of a dataset are highly correlated, at least locally, or the dataset has an inherent geometrical structure; such points generally lie in high-density regions and are crucial for classification. This motivates classifiers that can take full advantage of the points in high-density regions. In this paper, a novel binary classifier called the density-based multi-weight vector support vector machine (DMVSVM) is presented. By introducing the underlying correlation between points, DMVSVM not only inherits the merits of MVSVM but also has additional characteristics: (1) a density weighting method is adopted to measure the importance of points within the same class; (2) it achieves comparable or better classification performance than MVSVM. Experimental results on publicly available datasets confirm the effectiveness of our method.
    Keywords: multi-weight vector support vector machine; density; correlated information; classification.
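The density-weighting intuition (points in high-density regions matter more) can be illustrated with a simple k-NN density estimate used as per-sample weights, shown here with weighted class centroids rather than the authors' MVSVM formulation; all names and parameters below are our own assumptions.

```python
import numpy as np

def knn_density_weights(X, k=5):
    # Per-sample density weights: inverse mean distance to the k nearest
    # neighbours, normalised to sum to one. A simple stand-in for the
    # paper's density weighting, not the DMVSVM formulation itself.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    knn_d = np.sort(d, axis=1)[:, 1:k + 1]   # drop the zero self-distance
    rho = 1.0 / (knn_d.mean(axis=1) + 1e-12)
    return rho / rho.sum()

def weighted_centroids(X, y, w):
    # Class centroids where dense-region points count more than outliers.
    return {c: np.average(X[y == c], axis=0, weights=w[y == c])
            for c in np.unique(y)}
```

Fed into any classifier that accepts per-sample weights (most SVM implementations do), such weights down-weight isolated noise points, which is the intuition the abstract appeals to.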

  • Attribute Reduction Algorithm of Variable Neighborhood Rough Set Model Based on FCM
    by Xinghui Zhao, Jiancong Fan, Yixuan Long 
    Abstract: Based on an analysis of existing neighborhood rough set algorithms, this paper proposes a new attribute reduction algorithm, called Canopy-FCM-VNRSMAR, which reduces attributes using a Canopy-FCM variable neighborhood rough set model. The algorithm uses the attribute importance degree as its heuristic and sets the neighborhood value entirely according to the distribution of the data, thereby avoiding the disadvantages of a globally fixed neighborhood value. Experimental results on open UCI datasets show that the proposed algorithm preserves fewer conditional attributes and improves the classification accuracy of the data. In addition, it extends the applicability of neighborhood rough sets.
    Keywords: Neighborhood Rough Set; Unsymmetrical Variable Neighborhood; Attribute Importance Degree; Global Fixed Neighborhood.

  • A Feature Weighted Affinity Propagation Clustering Algorithm Based on Rough Entropy Reduction
    by X.U. LI 
    Abstract: In a clustering task, the features of a data sample do not all make the same contribution: some features provide more information relevant to the final result, and treating them the same as the other features not only increases the complexity of the algorithm but also harms the accuracy of the final result. Feature weighting, a key phase in clustering, has therefore drawn increasing attention from researchers. This paper proposes a feature-weighted affinity propagation clustering algorithm based on rough entropy reduction (FWRER-AP). Rough entropy is used to assign a weight to each feature according to its contribution; the weighted samples are then passed to the AP clustering algorithm, and the final clustering result is obtained through iteration. Experiments show that, compared with the traditional AP clustering algorithm, the proposed algorithm both reduces complexity and improves accuracy.
    Keywords: rough entropy; attribute reduction; feature weighted; normalization; AP clustering.
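As a rough illustration of entropy-driven feature weighting, the sketch below uses plain Shannon entropy over a histogram as a stand-in for the paper's rough entropy: features whose values concentrate (low entropy, more cluster-revealing in this toy setting) receive larger weights, which can then scale the data before any clustering step, affinity propagation included. The weighting scheme and names are our assumptions, not the FWRER-AP formulation.

```python
import numpy as np

def entropy_feature_weights(X, bins=10):
    # Shannon entropy of each feature's histogram; a concentrated feature
    # has low entropy, so its weight (max entropy minus entropy) is large.
    # Stand-in for the paper's rough-entropy weighting - illustrative only.
    n, m = X.shape
    H = np.empty(m)
    for j in range(m):
        counts, _ = np.histogram(X[:, j], bins=bins)
        p = counts / n
        p = p[p > 0]
        H[j] = -(p * np.log(p)).sum()
    w = np.clip(np.log(bins) - H, 0, None)
    return w / w.sum() if w.sum() > 0 else np.full(m, 1.0 / m)
```

Scaling the data as `X * np.sqrt(w)` then feeds any distance-based clustering algorithm, AP included.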

  • A novel least squares twin parametric insensitive support vector regression
    by Xiuxi Wei 
    Abstract: The recently proposed twin parametric insensitive support vector regression (TPISVR) achieves good regression performance and is suitable for many cases, especially when the noise is heteroscedastic. However, TPISVR requires solving two dual quadratic programming problems (QPPs). Moreover, compared with support vector regression (SVR), TPISVR has at least four regularization parameters that need tuning, which limits its practical application. In this paper, we improve the efficiency of TPISVR in two respects. First, by introducing the least squares method, we propose a novel least squares twin parametric insensitive support vector regression, called LSTPISVR for short. LSTPISVR solves two modified primal problems of TPISVR instead of the two dual problems usually solved; compared with the traditional solution method, this improves training speed without loss of generalization. Second, a discrete binary particle swarm optimization (BPSO) algorithm is introduced for parameter selection. Computational results on several synthetic and benchmark datasets confirm the great improvement in the training process of LSTPISVR.
    Keywords: Support vector regression; Twin support vector regression; Least squares; twin parametric insensitive support vector regression; BPSO.
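The key efficiency trick (replacing TPISVR's two dual QPPs by least squares problems with closed-form solutions) can be illustrated generically: each bounding function reduces to one ridge-style linear solve. This is a sketch of the least squares idea under our own simplified linear formulation, not the paper's LSTPISVR; `twin_bounds`, `eps` and `C` are illustrative names.

```python
import numpy as np

def ls_fit(A, y, C=1.0):
    # Ridge-style normal equations (G'G + I/C) u = G'y: using equality
    # constraints turns a QP into this single linear solve - the least
    # squares trick LSTPISVR applies to each of TPISVR's two problems.
    G = np.column_stack([A, np.ones(len(A))])   # augment with a bias column
    u = np.linalg.solve(G.T @ G + np.eye(G.shape[1]) / C, G.T @ y)
    return u[:-1], u[-1]                        # (weights, bias)

def twin_bounds(A, y, C=1.0, eps=0.5):
    # Two least squares fits to shifted targets mimic the lower and upper
    # bounding functions of the parametric insensitive tube.
    lower = ls_fit(A, y - eps, C)
    upper = ls_fit(A, y + eps, C)
    return lower, upper
```

Solving linear systems instead of inequality-constrained QPs is where the training-speed gain claimed in the abstract comes from; the paper additionally tunes its parameters with BPSO.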