International Journal of Computational Intelligence Studies (8 papers in press)
WOLF: A framework for the digital workplace - architecture and models
by Khadija ELAMRANI, Noureddine Chenfour, Mohamed LAHMER, Ghita Daoudi
Abstract: The main purpose of the digital workplace (DW) is to provide an organization's various contributors or actors with a portal of digital services, accessible through a virtual desktop covering all of its business services. During our studies, we identified five major problems. First, there is great confusion among the related definitions, because most of them are restricted to the teaching sector. Second, most existing DWs amount to a simple gateway to a collection of pre-existing digital tools covering the organization's business domains, without any means of communication between them. Another problem is the lack of a reference architecture. Moreover, we could not identify any logical or physical model representing the different DW entities. Lastly, there is a total absence of a standard or even an appropriate vocabulary. Faced with these shortfalls, we propose in this paper a set of fundamentals composed of a definition encapsulating the different domains, as well as a naming system and a vocabulary that identify both the entities composing the virtual desktop and their connections and flows. Based on these fundamentals, we also propose our framework WOLF (Digital Workplace based on Open and Light architecture Framework), which automatically generates customized digital workplaces and is distinguished from other existing DW solutions by its generic and extensible character. The generated DW encapsulates all of the organization's domains, services and flows, together with a collaboration system between the different actors. Our proposed framework's architecture allows us to classify and organize the various entities into a tree representation, while data nodes are modelled using XML files.
Keywords: Digital workplace; Digital workspace; Collaboration; Digital work environment.
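The abstract's tree organization of DW entities, with data nodes modelled as XML, can be sketched minimally as below. The entity names (`workplace`, `domain`, `service`) and the sample organization are illustrative assumptions, not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

def build_workplace(org, domains):
    """Build a hypothetical DW entity tree: organization -> domains -> services."""
    root = ET.Element("workplace", {"organization": org})
    for domain, services in domains.items():
        d = ET.SubElement(root, "domain", {"name": domain})
        for svc in services:
            ET.SubElement(d, "service", {"name": svc})
    return root

tree = build_workplace("ACME", {"HR": ["leave", "payroll"], "IT": ["helpdesk"]})
print(ET.tostring(tree, encoding="unicode"))
```

Serializing the tree to an XML file then gives exactly the kind of data node the abstract describes.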
Alzheimer's disease prediction using Regression models and SVM
by M. Rohini, D. Surendran
Abstract: Alzheimer's disease (AD) and age-related cognitive impairment are increasingly prevalent among the elderly because of the growth of the aging population. Demographic characteristics, structural and functional neuroimaging investigations, cardiovascular studies, neuropsychiatric symptoms, cognitive performance and biomarkers in cerebrospinal fluid are the various predictors for AD. These input features can be used to predict whether symptoms stem from AD or from normal age-related cognitive impairment. In the proposed study, the hypothesis is derived for supervised learning methods such as multivariate linear regression, logistic regression and SVM. As an initial step, we perform feature scaling and normalization on the features before applying the parameters to derive the hypothesis. We analyze performance metrics with the implementation results. The present work is applied to 1000 baseline assessment records from the Alzheimer's Disease Neuroimaging Initiative (ADNI) studies, which yield conversion predictions. Comparison with results in the literature suggests that the proposed study is highly helpful in differentiating AD pathology from age-related cognitive impairment.
Keywords: Multivariate linear regression; Logistic regression; Support vector machine (SVM); Feature scaling; Normalization; ADNI.
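The feature scaling and normalization step the abstract describes is commonly implemented as z-score standardization before fitting regression or SVM models. Below is a minimal sketch on toy stand-in values; the real study uses ADNI baseline assessment features, not these numbers.

```python
import numpy as np

def zscore(X):
    """Standardize each feature column to zero mean and unit variance."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0   # guard against constant features
    return (X - mu) / sigma

# Toy stand-in for baseline features (rows = subjects, columns = predictors).
X = np.array([[70.0, 28.0],
              [82.0, 20.0],
              [76.0, 24.0]])
Xs = zscore(X)
```

After this transform, every predictor contributes on a comparable scale, which is what makes gradient-based fitting of the regression hypotheses stable.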
Performance of Convolutional Neural Networks Optimizers: An Extensive Evaluation on Glaucoma Prediction
by Kishore Balasubramanian
Abstract: Purpose: To assess the performance of Convolutional Neural Network (CNN) optimizers in predicting glaucoma from retinal fundus images. A CNN model was chosen owing to its capability to handle raw digital image data, avoiding handcrafted feature extraction. Design: Fundus images from a locally collected database. The performance of a CNN depends on the quality of the data and on the tuning of the algorithm, which includes the learning rate, number of epochs and batches, weight initialization, activation function, optimization, loss function and model combination. This paper focuses on improving CNN performance by optimizing the architecture parameter selection through optimizers and loss functions. In this work, four gradient descent-based optimizers were compared: Stochastic Gradient Descent (SGD), Adaptive Gradient (Adagrad), Adaptive Delta (Adadelta) and Adaptive Momentum (Adam). Mean Square Error (MSE) and Binary Cross Entropy (BCE) were the loss functions chosen. Gradient descent optimizers were considered because of their high convergence speed on large datasets. Method: The dataset was divided into 60% training and 40% testing. Two CNN architectures, AlexNet and ResNet, were developed and trained on the dataset with a 0.01 learning rate and a batch size of 60. The number of epochs was set to 50, 100 and 200. The methods were evaluated in terms of mean square error and accuracy. Results: Adam achieved the lowest training loss with appreciable accuracy, outperforming the other adaptive techniques. Conclusion: The assessment demonstrated that Adam-based optimization in a CNN was able to diagnose glaucoma accurately with lower loss and better convergence speed.
Keywords: Glaucoma; Fundus Image; Convolutional Neural Networks; Deep Learning; Optimization.
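The optimizers compared in the abstract differ only in their per-step weight update rules. As a rough illustration, the sketch below implements the plain SGD and Adam updates on a one-dimensional quadratic; Adagrad and Adadelta follow the same pattern with different accumulator statistics. The learning rates and step counts here are arbitrary toy choices, not the paper's settings.

```python
import numpy as np

def optimize(update, steps=200):
    """Minimise f(w) = w^2 starting from w = 5 with a given update rule."""
    w, state = 5.0, {}
    for _ in range(steps):
        g = 2.0 * w              # gradient of w^2
        w = update(w, g, state)
    return w

def sgd(w, g, s, lr=0.1):
    """Plain stochastic gradient descent step."""
    return w - lr * g

def adam(w, g, s, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """Adam step: bias-corrected first/second moment estimates of the gradient."""
    s.setdefault("m", 0.0); s.setdefault("v", 0.0); s.setdefault("t", 0)
    s["t"] += 1
    s["m"] = b1 * s["m"] + (1 - b1) * g
    s["v"] = b2 * s["v"] + (1 - b2) * g * g
    mhat = s["m"] / (1 - b1 ** s["t"])
    vhat = s["v"] / (1 - b2 ** s["t"])
    return w - lr * mhat / (np.sqrt(vhat) + eps)

for name, rule in [("SGD", sgd), ("Adam", adam)]:
    print(name, optimize(rule))
```

Both rules drive `w` toward the minimum at 0; Adam's per-parameter step normalization is what typically yields the faster, more stable convergence reported in the abstract.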
Challenges for Neuroscience-based Computational Intelligence
by Jose A. Fernandez Leon, Gerardo Acosta
Abstract: We describe some of the major issues facing the interdisciplinary community that seeks to understand intelligence from the neuroscience and computational perspectives. The challenges outlined focus on the diverse range of theoretical and practical questions that may stimulate not only the study of intelligence in the biological realm but also research on the theoretical aspects that guide work in neuroscience. Framing the study of computational intelligence within the neuroscience field in a holistic and integrative way is a step toward fostering impactful interactions between distinct perspectives and viewpoints. These ideas might be useful for understanding the brain and for constructing new machine learning paradigms. Rather than proposing a shift in the approach taken in computational intelligence research, the discussions in this work suggest tackling the described grand challenges in an integrative manner guided by theoretical aspects, rather than based mostly on technological developments for studying the brain.
Keywords: computational intelligence; cognitive maps; place cells; grid cells; cognition; neuroinspiration; deep learning.
Opinion Mining based Secured Collaborative Recommender System
by Veer Sain Dixit, Akanksha Bansal Chopra
Abstract: Recommender systems have made an impressive impact on the e-commerce industry in the last few years. With their increased use, the risk to the authenticity and integrity of rating data has also increased. Many algorithms have been proposed by various researchers in recent years to detect and prevent attacks. However, no existing algorithm offers a complete solution to this issue, owing to one constraint or another. The research in this paper intends to address this issue and proposes a framework for a collaborative recommender system that is more secure than conventional recommender systems. The proposed framework uses a Turing test approach to identify any robotic involvement and does not allow the insertion of ratings from such non-human users. Ratings from real users are allowed, collected and examined. Textual opinions are also collected alongside the ratings. Opinion mining is performed on each textual opinion to identify and discard push and nuke ratings. Valid ratings are stored in a database for generating item recommendations. The paper also examines the effect of push and nuke ratings on performance by evaluating the accuracy of the recommender system using various metrics.
Keywords: Recommender system; Push ratings; Nuke ratings; Opinion Mining.
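One simple way to realize the abstract's idea of discarding push and nuke ratings via opinion mining is to flag ratings whose numeric value contradicts the sentiment of the accompanying text. The lexicon, threshold and function names below are illustrative assumptions for this sketch, not the authors' actual method.

```python
POSITIVE = {"great", "excellent", "love", "good"}
NEGATIVE = {"bad", "awful", "terrible", "poor"}

def opinion_score(text):
    """Crude lexicon sentiment: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def is_suspect(rating, text, scale_mid=3):
    """Flag push (high rating, negative text) and nuke (low rating, positive text)."""
    s = opinion_score(text)
    if rating > scale_mid and s < 0:
        return True   # possible push rating
    if rating < scale_mid and s > 0:
        return True   # possible nuke rating
    return False
```

Only ratings that pass this consistency check would then be stored in the database used to generate recommendations.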
Behavior study of genetic operators for the solid waste collection problem: The municipality of Sidi Bou Said case study
by Olfa Khlif, Jouhaina Chaouachi Siala, Farah Farjallah
Abstract: Genetic algorithms are categorized as efficient global search heuristics for solving computationally hard problems. However, their performance is heavily dependent on the choice of the appropriate operator, especially crossover. Moreover, the crossover operator's efficiency can be considerably influenced by the problem definition, the fitness function and the instance structure. For a practical demonstration of these phenomena, we tackle the solid waste collection problem. We therefore focus on determining the most promising set of parameters by varying the performance measures. The case of the city of Sidi Bou Said is treated throughout our experimental study.
Keywords: Solid waste collection problem; Genetic algorithm; crossover operator.
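For routing problems such as solid waste collection, chromosomes are typically permutations of collection points, so crossover must preserve the permutation property. Order crossover (OX) is one common choice; a minimal sketch follows (the specific operator set the paper studies may differ, and the routes here are toy data).

```python
def order_crossover(p1, p2, i, j):
    """Order crossover (OX): copy p1[i:j], then fill the remaining
    collection points in the order they appear in p2."""
    hole = set(p1[i:j])
    filler = [g for g in p2 if g not in hole]
    return filler[:i] + p1[i:j] + filler[i:]

# Two parent collection routes over six collection points.
route1 = [0, 1, 2, 3, 4, 5]
route2 = [5, 3, 1, 0, 4, 2]
child = order_crossover(route1, route2, 2, 4)
print(child)   # a valid permutation inheriting segment [2, 3] from route1
```

Because the child is always a valid permutation, no repair step is needed, which is one reason operator choice interacts so strongly with the problem definition.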
A New approach for business process management enhancement: Mobile Hospital Case Study
by Hela LIMAM, Amal Bouderbela, Jalel Akaichi
Abstract: Nowadays, Business Process Modeling Notation (BPMN) is the tool used to support business process management for technical and business users, bridging the communication gap between business process design and implementation. However, the BPMN metamodel lacks a formal specification of well-formedness rules. In this context, we propose an Object Constraint Language (OCL) based approach to enhance the expressivity of the BPMN model so that it holds the information needed and expresses resource allocation constraints. Furthermore, we propose an algorithm that checks whether the BPMN model contains violations of these rules in order to verify the correctness of the enhanced model. The whole approach has been tested in the medical field. In particular, it has been used to model, check and improve the care process of a mobile hospital.
Keywords: Business Process (BP); BP Management; BP Improvement; BP Modelling Notation; Object Constraint Language; OCL constraints; Mobile Hospital.
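A well-formedness rule of the kind the abstract formalizes in OCL can be sketched procedurally. The toy rule below (every sequence flow endpoint must be a declared node, and every task needs at least one incoming and one outgoing flow) and the node/flow encoding are illustrative assumptions, not the paper's actual constraints.

```python
def check_flows(nodes, flows):
    """Toy BPMN well-formedness check over a dict of nodes and a list of
    (source, target) sequence flows; returns a list of violations."""
    errors = []
    for src, dst in flows:
        if src not in nodes or dst not in nodes:
            errors.append(f"dangling flow {src}->{dst}")
    for n, kind in nodes.items():
        if kind == "task":
            if not any(dst == n for _, dst in flows):
                errors.append(f"task {n} has no incoming flow")
            if not any(src == n for src, _ in flows):
                errors.append(f"task {n} has no outgoing flow")
    return errors

# A small, well-formed fragment of a hypothetical mobile-hospital care process.
nodes = {"start": "event", "triage": "task", "treat": "task", "end": "event"}
flows = [("start", "triage"), ("triage", "treat"), ("treat", "end")]
print(check_flows(nodes, flows))
```

An empty violation list corresponds to the model passing the checks; in the OCL approach, each such rule would instead be an invariant attached to the BPMN metamodel.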
Eagle View: An Abstract Evaluation of Machine Learning Algorithms based on Data Properties
by Dhairya Vyas, Viral Kapdia
Abstract: Data can be generated from almost any type of information. Exploratory data analysis (EDA) and feature engineering for machine learning models necessitate a thorough understanding of the different types of data. Algorithms that interpret data and predict future details rely on machine learning (ML) data, the majority of which can be found on the internet. In terms of machine learning, most data can be grouped into four categories: numerical data, categorical data, time-series data and text. Supervised, unsupervised and reinforcement learning algorithms are the subject of this research. Regression models, random forests, logistic regression, support vector machines, decision trees, neural networks, naive Bayes, t-distributed stochastic neighbourhood embedding (t-SNE), k-means clustering, principal component analysis (PCA), temporal difference (TD) learning, Q-learning and others are among the most recent and promising machine learning approaches. We then concentrate on investigating and discussing the issues with machine learning, as well as possible solutions, and examine the time complexity and behaviour of the learning algorithms on the various data types. Finally, we define data types based on these findings.
Keywords: data types; supervised; unsupervised; reinforcement; outliers; time complexity.
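The four-way grouping of data described in the abstract can be illustrated with a crude column classifier. The parsing heuristics, the date format and the distinct-value threshold below are arbitrary illustrative choices, not the paper's methodology.

```python
from datetime import datetime

def classify_column(values):
    """Heuristically sort a column into one of the abstract's four categories:
    numerical, time-series, categorical, or text."""
    def is_number(v):
        try:
            float(v)
            return True
        except (TypeError, ValueError):
            return False

    def is_date(v):
        try:
            datetime.strptime(str(v), "%Y-%m-%d")   # assumed date format
            return True
        except ValueError:
            return False

    if all(is_number(v) for v in values):
        return "numerical"
    if all(is_date(v) for v in values):
        return "time-series"
    # Few distinct values relative to column length -> treat as categorical.
    if len(set(values)) <= max(1, len(values) // 2):
        return "categorical"
    return "text"
```

A classifier like this is what makes it possible to route each feature to the appropriate preprocessing (scaling, encoding, resampling, or tokenization) before model training.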