International Journal of Computational Intelligence Studies
These articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.
Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.
International Journal of Computational Intelligence Studies (13 papers in press)
Special Issue on: IEEE IWCIA2019 Innovative Computational Intelligence for Deep Learning and Knowledge Acquisition
Abstract: In solving programming problems, it is difficult for beginners to create a program from scratch. One way to navigate this difficulty is to suggest the next word following an incomplete program. In the present study, we propose a method for code completion characterized by two principal elements: the prediction of the next within-vocabulary word and the prediction of the next referenceable identifier. For the prediction of within-vocabulary words, a neural language model based on an LSTM network with an attention mechanism is proposed. Additionally, for the prediction of referenceable identifiers, a model based on a pointer network to a given incomplete program is proposed. For evaluation of the proposed method, source code accumulated in an online judge system is used. The results of the experiment demonstrate that in both statically and dynamically typed languages, the proposed method can predict the next word to a high degree of accuracy.
Keywords: Programming education; Code completion; Deep learning; LSTM; Pointer network.
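The two-predictor scheme described in the abstract — a within-vocabulary distribution mixed with a pointer distribution over identifiers already present in the incomplete program — can be sketched in plain Python. This is an illustrative sketch, not the authors' implementation; the gate value, scores, and token names are invented, and the real models produce the scores with an attention-LSTM and a pointer network.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def next_token_distribution(vocab, vocab_scores, context_ids, pointer_scores, gate):
    """Mix a within-vocabulary distribution with a pointer distribution
    over identifiers seen in the incomplete program.

    gate in [0, 1]: mass given to the vocabulary predictor; the remainder
    is copied onto context identifiers via the pointer mechanism."""
    p_vocab = softmax(vocab_scores)       # over the fixed vocabulary
    p_ptr = softmax(pointer_scores)       # over identifier occurrences
    mixed = {w: gate * p for w, p in zip(vocab, p_vocab)}
    for tok, p in zip(context_ids, p_ptr):
        mixed[tok] = mixed.get(tok, 0.0) + (1.0 - gate) * p
    return mixed

# toy example with invented scores (in reality emitted by the networks)
vocab = ["len", "print", "range", "<unk>"]
dist = next_token_distribution(
    vocab,
    vocab_scores=[2.0, 0.1, 0.5, 0.0],
    context_ids=["i", "total"],           # identifiers seen so far
    pointer_scores=[1.5, 0.2],
    gate=0.6,
)
best = max(dist, key=dist.get)
```

The mixture is a proper distribution (the two components each sum to one, weighted by `gate` and `1 - gate`), so out-of-vocabulary identifiers such as `i` and `total` remain predictable via the pointer.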
Multi-objective optimization of allocations and locations of incineration facilities with Voronoi diagram and genetic algorithm: Case study of Hiroshima city and Aki county
by Taketo Kamikawa, Takashi Hasuike
Abstract: This research focuses on two objectives, maximizing the amount of heat generated by incineration and minimizing waste collection distances divided by population densities, in determining the allocations and locations of general waste incineration facilities, with Hiroshima city and Aki county in Japan as a case study. For these objectives, we propose version 2 of Multi-Objective optimization with Voronoi diagram and Genetic Algorithm (MOVGA2). For the maximization of generated heat, we predict the amount with a multiple linear regression equation fitted on 2013 to 2017 panel data and formulate the problem as a set partitioning problem (SPP) that maximizes the predicted value. For the minimization of waste collection distances divided by population densities, we formulate the problem as the multi-Weber problem. To solve these two problems, we use MOVGA2, which encodes the seeds and weights of a Laguerre Voronoi diagram as a gene. In a survey using 2017 data for Hiroshima city and Aki county, in the case of 2 existing facilities, 2 new facilities and 3 closed facilities, the calorific value increased enough to cover the power consumption of 1,233 households (converted to housing complex) per year, despite an increase of only 9% in t-km for waste collection distances.
Keywords: genetic algorithm; Voronoi diagram; thermal energy; incineration facility; combustible waste.
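The Laguerre (power) Voronoi partition that underlies MOVGA2's gene encoding can be sketched as follows. This is an illustrative sketch only: the coordinates, weights, and density values are invented, and in the paper the seeds and weights are evolved by a genetic algorithm rather than fixed by hand.

```python
def power_distance(point, seed, weight):
    """Laguerre (power) distance: squared Euclidean distance minus the
    seed's weight, so heavier seeds capture larger cells."""
    dx, dy = point[0] - seed[0], point[1] - seed[1]
    return dx * dx + dy * dy - weight

def assign_to_facilities(demand_points, seeds, weights):
    """Partition demand points into the Laguerre-Voronoi cells of the seeds."""
    cells = {i: [] for i in range(len(seeds))}
    for p in demand_points:
        i = min(range(len(seeds)),
                key=lambda k: power_distance(p, seeds[k], weights[k]))
        cells[i].append(p)
    return cells

def collection_cost(cells, seeds, density):
    """Sum of collection distances divided by local population density --
    one of the two objectives minimized in the paper."""
    total = 0.0
    for i, pts in cells.items():
        for p in pts:
            d = power_distance(p, seeds[i], 0.0) ** 0.5  # plain Euclidean
            total += d / density[p]
    return total

seeds = [(0.0, 0.0), (10.0, 0.0)]     # candidate facility locations
weights = [0.0, 4.0]                  # second facility's cell is enlarged
points = [(2.0, 0.0), (5.5, 0.0), (9.0, 0.0)]
density = {p: 1.0 for p in points}
cells = assign_to_facilities(points, seeds, weights)
```

Because of the weight term, the midpoint-ish demand point `(5.5, 0.0)` is captured by the heavier second facility even though plain distance alone would not decide it as strongly.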
Efficient Parameter-Free Adaptive Penalty Method with Balancing the Objective Function Value and the Constraint Violation
by Takeshi Kawachi, Jun-ichi Kushida, Akira Hara, Tetsuyuki Takahama
Abstract: Real-world problems are often formulated as constrained optimization problems (COPs). Constraint-handling techniques are important for efficient search, and various approaches such as penalty methods and the feasibility rule have been studied. Penalty methods work with a single fitness function that combines an objective function value and a constraint violation through a penalty factor. Moreover, in adaptive penalty methods the penalty factor can be flexibly adapted by feeding back information on the search process. However, it is difficult to set the parameters properly and to keep a good balance between the objective function value and the constraint violation. In this paper, we propose a new parameter-free adaptive penalty method that balances the objective function value and the constraint violation. L-SHADE is adopted as the base search algorithm, and the optimization results on 28 benchmark functions from the CEC2017 and CEC2018 competitions on constrained single-objective numerical optimization are compared with other methods.
Keywords: Evolutionary Algorithms; Differential Evolution; Constraint Handling Techniques; Penalty Method.
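The general shape of a parameter-free adaptive penalty can be sketched as below: both terms are normalized over the current population and the violation term is weighted by the fraction of infeasible individuals, so the penalty pressure adapts to the state of the search without user-set factors. This is a generic illustration of the idea, not the method proposed in the paper.

```python
def penalized_fitness(population, objective, violation):
    """Adaptive penalty sketch: normalize objective and violation over the
    population, then weight the violation by the infeasibility ratio."""
    f = [objective(x) for x in population]
    v = [violation(x) for x in population]
    f_min, f_max = min(f), max(f)
    span = (f_max - f_min) or 1.0
    v_max = max(v) or 1.0
    infeas_ratio = sum(1 for vi in v if vi > 0) / len(v)
    return [
        (fi - f_min) / span + infeas_ratio * (vi / v_max)
        for fi, vi in zip(f, v)
    ]

# minimize x^2 subject to x >= 1, so violation(x) = max(0, 1 - x)
pop = [0.9, 1.1, 2.0]
fit = penalized_fitness(pop, lambda x: x * x, lambda x: max(0.0, 1.0 - x))
best = pop[min(range(len(pop)), key=lambda i: fit[i])]
```

With one of three individuals infeasible, the violation receives a 1/3 weight, which here is enough to rank the feasible near-optimum `1.1` ahead of the slightly infeasible `0.9`.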
Detecting Audio Adversarial Examples for Protecting Speech-to-Text Transcription Neural Network
by Keiichi Tamura, Akitada Omagari, Hajime Ito, Shuichi Hashida
Abstract: With the increasing use of deep learning techniques in real-world applications, their vulnerabilities have received significant attention from deep-learning researchers and practitioners. In particular, adversarial examples for deep neural networks, and protection methods against them, have been well studied in recent years because they expose serious vulnerabilities that threaten safety in real-world settings. Audio adversarial examples, which are targeted attacks, are designed so that deep neural network-based speech-to-text systems mistranscribe the input voice sound. In this study, we propose a new protection method against audio adversarial examples. The proposed protection method is based on a sandbox approach, in which an input voice sound is checked within the system to determine whether it is an audio adversarial example. To evaluate the proposed protection method, we used actual audio adversarial examples created on Deep Speech, a typical speech-to-text transcription neural network. The experimental results show that our protection method can detect audio adversarial examples with high accuracy.
Keywords: Adversarial Example; Deep Learning; Computer Security; Data Representation; Speech-to-Text; Sandbox Method.
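One common sandbox-style detection idea — transcribe the input both as-is and after a small meaning-preserving perturbation, and flag inputs whose transcript changes drastically — can be sketched as follows. This is a generic illustration of the sandbox principle under the assumption that adversarial perturbations are brittle; it is not the paper's exact algorithm, and `toy_transcribe`/`toy_perturb` are invented stand-ins for a real system such as Deep Speech.

```python
def detect_adversarial(audio, transcribe, perturb, threshold=0.5):
    """Flag the input as adversarial when a tiny perturbation changes
    the transcript far more than it would for benign speech."""
    original = transcribe(audio)
    altered = transcribe(perturb(audio))
    matches = sum(a == b for a, b in zip(original, altered))
    longest = max(len(original), len(altered)) or 1
    return matches / longest < threshold   # True -> likely adversarial

def toy_transcribe(audio):
    # toy model: the adversarial perturbation is tuned to one exact
    # waveform, so its malicious transcript flips under any tiny change
    return "delete all files" if abs(max(audio) - 1.5) < 1e-9 else "open the door"

def toy_perturb(audio):
    return [a * 0.99 for a in audio]   # small, meaning-preserving change

benign_flag = detect_adversarial([0.2, 0.5, 0.3], toy_transcribe, toy_perturb)
attack_flag = detect_adversarial([0.2, 1.5, 0.3], toy_transcribe, toy_perturb)
```

The benign input transcribes identically before and after perturbation and passes; the crafted input's transcript collapses and is flagged.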
Using Term Similarity Measures for Classifying Short Document Data
by Hirohisa Seki, Shuhei Toriyama
Abstract: Term expansion (a.k.a. document expansion), proposed by Carpineto et al., is a method used for text classification. When handling short text data such as social media posts and blogs, term expansion can be applied to enrich the sparse information they contain. While prior work on term expansion uses an FCA (Formal Concept Analysis)-based similarity measure defined between terms (or words), this paper studies the effectiveness of two kinds of measures for term expansion: weighted similarity measures studied in FCA, and correlation measures, such as cosine and all-conf, often employed in data mining. We also present some properties of the relationship between these term similarity/correlation measures and the notion of relevancy in classification. We show empirically that the cosine correlation measure outperforms the prior methods on our two short-document datasets. We also compare our approach with an LDA (Latent Dirichlet Allocation)-based term expansion approach by Rogers et al.
Keywords: term expansion; similarity measure; correlation; formal concepts; LDA; short document data; classification.
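The two correlation measures named in the abstract, cosine and all-conf, have simple closed forms over document-frequency counts, and term expansion then adds any term whose measure against a term in the short document clears a threshold. The sketch below uses invented counts and a toy term pair; thresholds and counts in the actual experiments differ.

```python
def cosine(df_a, df_b, df_ab):
    """Cosine correlation between two terms from document frequencies:
    df_a, df_b = documents containing each term, df_ab = containing both."""
    return df_ab / ((df_a * df_b) ** 0.5) if df_a and df_b else 0.0

def all_conf(df_a, df_b, df_ab):
    """All-confidence: the smaller of the two conditional supports."""
    return df_ab / max(df_a, df_b) if max(df_a, df_b) else 0.0

def expand(document_terms, similar, threshold):
    """Term expansion: add every term whose similarity to a term already
    in the (short) document reaches the threshold."""
    expanded = set(document_terms)
    for t in document_terms:
        for u, s in similar.get(t, []):
            if s >= threshold:
                expanded.add(u)
    return expanded

# invented co-occurrence counts over a corpus of 100 short documents
sim = cosine(df_a=20, df_b=5, df_ab=5)       # e.g. "soccer" vs "goal"
doc = expand({"soccer"}, {"soccer": [("goal", sim)]}, threshold=0.4)
```

Note that all-conf is bounded above by cosine for the same counts (`max(a,b) >= sqrt(a*b)`), which is one reason the two measures rank candidate expansion terms differently.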
A Video Prediction Method by using Long Short Term Memory based Adaptive Structural Learning of Deep Belief Network and its Investigation of Input Sequence Length for Data Structure
by Shin Kamada, Takumi Ichimura
Abstract: Artificial Intelligence (AI) with sophisticated technologies has become an essential part of our lives. In particular, deep learning has been a successful model that can effectively represent several features of the input space and has remarkably improved image recognition performance with deep architectures. In our research, the Adaptive RBM and Adaptive DBN have been developed as deep learning models. Adaptive structural learning can find a suitable network structure size for a given input space during training: neuron generation and annihilation algorithms were implemented on the Restricted Boltzmann Machine (RBM), and a layer generation algorithm was implemented on the Deep Belief Network (DBN). Moreover, the learning algorithm of the Adaptive RBM and Adaptive DBN was extended to time-series prediction using the idea of LSTM (Long Short Term Memory). Our previous research tackled supervised learning problems; in this paper, we aim to reveal the power of our proposed method in the video recognition research field using Moving MNIST for unsupervised learning, since video is a rich source of visual information. Moving MNIST is a benchmark data set for video recognition in which two digits move in randomly determined directions over a time sequence. The Adaptive LSTM-DBN learned the time-series movements and features from the video data and showed higher future prediction performance than existing LSTM models (more than 90% for test data). Moreover, the detailed prediction results were investigated by training the Adaptive LSTM-DBN with various lengths of input sequence. As a result, we found cases in which the models represented multiple time-series movements, covering not only short-term simple movements but also long-term complicated features, depending on the length of the input sequence.
Keywords: Deep learning; Deep Belief Network; Adaptive structural learning method; Video recognition.
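The neuron generation and annihilation idea mentioned above can be caricatured with two scalar criteria: insert a hidden unit when a weight keeps fluctuating during training (a sign of insufficient capacity), and remove a unit whose activation stays near zero. The thresholds and statistics below are invented for illustration and are not the criteria used in the Adaptive RBM/DBN papers.

```python
def should_generate(weight_updates, theta_g=0.05):
    """Neuron-generation criterion (illustrative sketch): a persistently
    large mean absolute weight update suggests the layer is too small."""
    mean_walk = sum(abs(d) for d in weight_updates) / len(weight_updates)
    return mean_walk > theta_g

def should_annihilate(activations, theta_a=0.01):
    """Neuron-annihilation criterion (sketch): a hidden unit whose mean
    absolute activation stays near zero contributes nothing."""
    return sum(abs(a) for a in activations) / len(activations) < theta_a

hidden = 4
if should_generate([0.2, -0.3, 0.25, -0.18]):   # oscillating weight
    hidden += 1                                  # insert a neuron
if should_annihilate([0.001, 0.0, 0.002, 0.001]):  # dead unit
    hidden -= 1                                  # remove a neuron
```

Applied together during training, the two rules let the network grow toward a size that fits the input space and then prune redundancy, which is the essence of adaptive structural learning.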
by Khadija ELAMRANI, Noureddine Chenfour, Mohamed LAHMER, Ghita Daoudi
Abstract: The main purpose of the digital workplace (DW) is to provide an organization's different contributors or actors with a portal of digital services, accessible through a virtual desktop covering all of its business services. During our studies, we identified five major problems. First of all, we note great confusion in the related definitions, because most of them are restricted to the teaching sector. Secondly, most existing DWs amount to a simple gateway to a collection of pre-existing digital tools covering the organization's business domains, without any means of communication between them. Another problem is the lack of a reference architecture. Moreover, we could not identify any logical or physical model representing the different DW entities. Lastly, there is a total absence of a standard or even an appropriate vocabulary.
Faced with these shortfalls, we propose in this paper a set of fundamentals composed of a definition encapsulating the different domains, as well as a naming system and a vocabulary that identify both the entities composing the virtual desktop and their connections and flows. Based on these fundamentals, we also propose our framework WOLF (Digital Workplace based on Open and Light architecture Framework), which automatically generates customized digital workplaces and is distinguished from other existing DW solutions by its generic and extensible character. The generated DW encapsulates all of the organization's domains, services, flows and a collaboration system between the different actors. Our proposed framework's architecture allows us to classify and organize the various entities into a tree representation, with data nodes modelled as XML files.
Keywords: Digital workplace; Digital workspace; Collaboration; Digital work environment.
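The tree representation with XML data nodes described above can be illustrated with the standard library's ElementTree. The element and attribute names (`workplace`, `domain`, `service`, `flow`) are invented for this sketch and are not WOLF's actual schema.

```python
import xml.etree.ElementTree as ET

# Illustrative DW entity tree: organization -> domain -> service -> flow.
dw = ET.Element("workplace", name="DemoOrg")
domain = ET.SubElement(dw, "domain", name="HumanResources")
service = ET.SubElement(domain, "service", name="LeaveRequests")
ET.SubElement(service, "flow", target="Payroll")   # inter-service flow

xml_text = ET.tostring(dw, encoding="unicode")
```

Serializing each node to XML in this way keeps the classification hierarchy explicit while letting the data nodes remain individually addressable files.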
Alzheimer's disease prediction using Regression models and SVM
by M. Rohini, D. Surendran
Abstract: Alzheimer's disease (AD) and age-related cognitive impairment are increasingly prevalent among older inhabitants because of the growth of the aging population. Demographic characteristics, structural and functional neuroimaging investigations, cardiovascular studies, neuropsychiatric symptoms, cognitive performance and cerebrospinal fluid biomarkers are among the predictors for AD. We consider these input features to predict whether symptoms belong to AD or to normal age-related cognitive impairment. In the proposed study, the hypothesis is derived for supervised learning methods such as multivariate linear regression, logistic regression, and SVM. As an initial step, we perform feature scaling and normalization of the features before applying the parameters to derive the hypothesis. We analyze performance metrics with the implementation results. The present work is applied to 1,000 baseline assessment records from the Alzheimer's Disease Neuroimaging Initiative (ADNI) studies that give conversion prediction. Comparison with results in the literature suggests that the proposed study is highly helpful in differentiating AD pathology from age-related cognitive impairment.
Keywords: Multivariate linear regression; logistic regression; Support Vector Machine (SVM); Feature scaling; Normalization; ADNI.
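The preprocessing-then-hypothesis pipeline described in the abstract — feature scaling followed by a logistic regression hypothesis — can be sketched in a few lines. The feature values and weights below are invented; the actual study fits its parameters on ADNI baseline data.

```python
import math

def standardize(column):
    """Z-score feature scaling: zero mean, unit variance."""
    mean = sum(column) / len(column)
    var = sum((x - mean) ** 2 for x in column) / len(column)
    std = var ** 0.5 or 1.0
    return [(x - mean) / std for x in column]

def logistic_hypothesis(features, weights, bias):
    """Logistic-regression hypothesis: P(class | features) via the sigmoid."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# invented predictors, e.g. a cognitive test score and a CSF biomarker level
scores = standardize([20.0, 24.0, 28.0, 30.0])
p = logistic_hypothesis([scores[0], 1.2], weights=[-1.5, 0.8], bias=0.1)
```

Scaling first matters because the sigmoid's linear combination mixes features of very different magnitudes (test scores vs. biomarker concentrations); without it, one feature dominates the hypothesis.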
Performance of Convolutional Neural Networks Optimizers: An Extensive Evaluation on Glaucoma Prediction
by Kishore Balasubramanian
Abstract: Purpose: To assess the performance of Convolutional Neural Network (CNN) optimizers in predicting glaucoma from retinal fundus images. A CNN model was chosen owing to its capability of handling raw digital image data, avoiding handcrafted feature extraction.
Design: Fundus images from a locally collected database. The performance of a CNN depends on the quality of the data and the tuning of the algorithm, including the learning rate, number of epochs and batches, weight initialization, activation function, optimization, loss function and model combination. This paper focuses on improving CNN performance by optimizing the architecture parameter selection through optimizers and loss functions. In this work, four gradient descent-based optimizers were compared: Stochastic Gradient Descent (SGD), Adaptive Gradient (Adagrad), Adaptive Delta (Adadelta) and Adaptive Momentum (Adam). Mean Square Error (MSE) and Binary Cross Entropy (BCE) were the chosen loss functions. Gradient descent optimizers were considered because of their high convergence speed on large datasets.
Method: The dataset was divided into 60% training and 40% testing. Two CNN architectures, AlexNet and ResNet, were developed and trained on the dataset with a 0.01 learning rate and a batch size of 60. The number of epochs was set to 50, 100 and 200. The methods were evaluated in terms of mean square error and accuracy.
Results: Adam achieved the lowest training loss with appreciable accuracy; it works well comparatively and also outperforms the other adaptive techniques.
Conclusion: The assessment demonstrated that Adam-based optimization in a CNN was able to diagnose glaucoma accurately with less loss and better convergence speed.
Keywords: Glaucoma; Fundus Image; Convolutional Neural Networks; Deep Learning; Optimization.
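The Adam update rule compared above differs from SGD/Adagrad/Adadelta in maintaining bias-corrected first and second moment estimates per parameter. A minimal self-contained sketch (the toy objective and learning rate are illustrative, not the study's CNN training setup):

```python
def adam_step(params, grads, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: per-parameter exponential moving averages of the
    gradient (m) and squared gradient (v), with bias correction."""
    state["t"] += 1
    t = state["t"]
    out = []
    for i, (p, g) in enumerate(zip(params, grads)):
        state["m"][i] = b1 * state["m"][i] + (1 - b1) * g
        state["v"][i] = b2 * state["v"][i] + (1 - b2) * g * g
        m_hat = state["m"][i] / (1 - b1 ** t)   # bias-corrected moments
        v_hat = state["v"][i] / (1 - b2 ** t)
        out.append(p - lr * m_hat / (v_hat ** 0.5 + eps))
    return out

# toy demo: minimize f(w) = (w - 3)^2, whose gradient is 2(w - 3)
w = [0.0]
state = {"t": 0, "m": [0.0], "v": [0.0]}
for _ in range(2000):
    w = adam_step(w, [2 * (w[0] - 3.0)], state, lr=0.05)
```

Because the step size is normalized by `sqrt(v_hat)`, Adam's effective step is roughly the learning rate regardless of gradient scale, which is what gives it the fast, stable convergence reported in the results.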
Challenges for Neuroscience-based Computational Intelligence
by Jose A. Fernandez Leon, Gerardo Acosta
Abstract: We describe some of the major issues facing the interdisciplinary community that seeks to understand intelligence from the neuroscience and computational perspectives. The challenges outlined focus on the diverse range of theoretical and practical questions that may stimulate not only the study of intelligence in the biological realm but also theoretical work that can guide research in neuroscience. Setting the study of computational intelligence from the neuroscience field in a holistic and integrative way is a step toward fostering impactful interactions between distinct perspectives and viewpoints.
These ideas might be useful for understanding the brain and for constructing new paradigms of machine learning. Rather than proposing a shift in the approach taken in computational intelligence research, the discussions in this work suggest approaching the described grand challenges in an integrative manner guided by theoretical aspects, rather than based mostly on technological developments to study the brain.
Keywords: computational intelligence; cognitive maps; place cells; grid cells; cognition; neuroinspiration; deep learning.
Opinion Mining based Secured Collaborative Recommender System
by Veer Sain Dixit, Akanksha Bansal Chopra
Abstract: Recommender systems have impressively hit the e-commerce industry in the last few years. With their increased use, the risk to the authenticity and integrity of rating data has also increased. Many algorithms have been proposed in recent years to detect and prevent attacks, but at present no algorithm provides a complete solution to this issue because of one constraint or another. The research in this paper intends to address this issue and proposes a framework for a collaborative recommender system that is more secure than conventional recommender systems. The proposed framework uses a Turing-test approach to identify any robotic involvement and does not allow the insertion of ratings from such non-human users. Ratings from real users are allowed, collected and examined. Textual opinions are also collected along with the ratings. Opinion mining is performed on the given textual opinions to identify and discard push and nuke ratings. Valid ratings are stored in a database for generating item recommendations. The paper also examines the effect of push and nuke ratings on performance by evaluating the accuracy of the recommender system using various evaluation metrics.
Keywords: Recommender system; Push ratings; Nuke ratings; Opinion Mining.
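The push/nuke filtering step can be illustrated as a consistency check between the numeric rating and the mined opinion polarity: a top rating paired with a negative opinion suggests a push attack, and a bottom rating paired with a positive opinion suggests a nuke attack. The tiny sentiment lexicon and thresholds below are invented stand-ins for the paper's opinion-mining component.

```python
POSITIVE = {"great", "excellent", "love", "good"}
NEGATIVE = {"bad", "terrible", "poor", "hate"}

def opinion_polarity(text):
    """Very small lexicon-based opinion miner (illustrative only)."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)     # +1, 0, or -1

def classify_rating(rating, text, max_rating=5):
    """Flag a rating as push (inflated) or nuke (deflated) when the
    numeric rating contradicts the textual opinion; otherwise accept."""
    polarity = opinion_polarity(text)
    if rating >= max_rating - 1 and polarity < 0:
        return "push"
    if rating <= 2 and polarity > 0:
        return "nuke"
    return "valid"

label = classify_rating(5, "terrible product total hate")
```

Only ratings labelled `valid` would be stored in the database and used for generating recommendations.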
Behavior study of genetic operators for the solid waste collection problem: The municipality of Sidi Bou Said case study
by Olfa Khlif, Jouhaina Chaouachi Siala, Farah Farjallah
Abstract: Genetic algorithms are categorized as efficient global search heuristics for solving computationally hard problems. However, their performance is heavily dependent on the choice of appropriate operators, especially crossover. Moreover, the efficiency of the crossover operator can be considerably influenced by the problem definition, the fitness function and the instance structure. For a practical demonstration of these phenomena, we tackle the solid waste collection problem. We therefore focus on determining the most promising set of parameters by varying the performance measures. The case of the city of Sidi Bou Said is treated throughout our experimental study.
Keywords: Solid waste collection problem; Genetic algorithm; crossover operator.
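For route-based problems like waste collection, one widely used crossover for permutation-coded tours is ordered crossover (OX), shown below. This is a standard operator given as an illustration; the paper's experiments compare several crossover operators, and the parent tours here are invented.

```python
def ordered_crossover(parent1, parent2, cut1, cut2):
    """Ordered crossover (OX) for permutation-coded routes: copy a slice
    from parent1, then fill the remaining positions with parent2's genes
    in their original order, so the child is still a valid tour."""
    size = len(parent1)
    child = [None] * size
    child[cut1:cut2] = parent1[cut1:cut2]
    fill = [g for g in parent2 if g not in child]
    for i in range(size):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

# collection points 0..6 visited in two different parent orders
p1 = [0, 1, 2, 3, 4, 5, 6]
p2 = [4, 6, 0, 2, 5, 3, 1]
child = ordered_crossover(p1, p2, 2, 5)
```

Unlike naive one-point crossover, OX never duplicates or drops a collection point, which is exactly the validity property a tour-encoded solid waste collection problem requires.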
A New Approach for Business Process Management Enhancement: Mobile Hospital Case Study
by Hela LIMAM, Amal Bouderbela, Jalel Akaichi
Abstract: Nowadays, Business Process Modeling Notation (BPMN) is the tool used to support business process management, helping technical and business users bridge the communication gap between business process design and implementation. However, the BPMN metamodel lacks a formal specification of well-formedness rules. In this context, we propose an Object Constraint Language (OCL) based approach to enhance the expressivity of the BPMN model so that it holds the needed information and expresses resource allocation constraints. Furthermore, we propose an algorithm that checks whether the BPMN model contains bad marks, in order to verify the correctness of the enhanced model. The whole approach has been tested in the medical field. In particular, it has been used to model, check and improve the care process of a mobile hospital.
Keywords: Business Process (BP); BP Management; BP Improvement; BP Modelling Notation; Object Constraint Language; OCL constraints; Mobile Hospital.
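An OCL-style resource allocation constraint of the kind the enhanced model expresses — "every task must be allocated an existing resource" — can be checked mechanically, as sketched below. The task and resource names come from a made-up mobile hospital fragment, and the check is a generic illustration rather than the paper's algorithm.

```python
def check_resource_allocation(tasks, resources):
    """Well-formedness check (sketch): every task in the BPMN model must
    be allocated to a known resource. Returns the list of violations
    ('bad marks') found."""
    violations = []
    for task, assigned in tasks.items():
        if assigned is None:
            violations.append(f"{task}: no resource allocated")
        elif assigned not in resources:
            violations.append(f"{task}: unknown resource '{assigned}'")
    return violations

# toy care-process fragment from a mobile hospital model
resources = {"nurse", "physician", "ambulance"}
tasks = {"Triage": "nurse", "Diagnose": "physician", "Transport": None}
bad_marks = check_resource_allocation(tasks, resources)
```

An empty violation list would mean the enhanced model satisfies this constraint; here the unallocated `Transport` task is reported and can be fixed before execution.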