Forthcoming articles

International Journal of Data Analysis Techniques and Strategies

International Journal of Data Analysis Techniques and Strategies (IJDATS)

These articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Register for our alerting service, which notifies you by email when new issues are published online.

Open Access: Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.
We also offer RSS feeds, which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Data Analysis Techniques and Strategies (33 papers in press)

Regular Issues

  • Enhanced Auto-Associative Neural Network Using a Feed-Forward Neural Network: An Approach to Improve the Performance of Fault Detection and Analysis   Order a copy of this article
    by Subhas Meti 
    Abstract: Biosensors play a significant role in many present-day applications, ranging from the military to the healthcare sector. However, their practicality and robustness in real-time scenarios remain a matter of concern, primarily with respect to issues such as sensor data prediction, noise estimation, channel estimation and, most importantly, fault detection and analysis. In this paper, an enhancement to the Auto-Associative Neural Network (AANN) is proposed by considering cascade feed-forward propagation. The residual noise is also computed along with fault detection and analysis of the sensor data. Experimental results show a significant reduction in the MSE compared with the conventional AANN, and the regression-based correlation coefficient also improves under the proposed method.
    Keywords: WBAN; Fault Detection and Analysis; Feed Forward Neural Network; Enhanced AANN; Residual Noise.
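The residual-based detection logic described in the abstract can be sketched in a few lines. This is a minimal illustration of the general idea, not the authors' AANN: a simple moving-average reconstruction stands in for the cascade feed-forward auto-associative network, and the sensor trace is invented.

```python
# Residual-based sensor fault detection: reconstruct each reading from its
# neighbourhood, then flag samples whose residual exceeds a threshold.

def moving_average_reconstruction(signal, window=3):
    """Reconstruct each sample as the mean of its neighbours in a centred window."""
    half = window // 2
    recon = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        neighbours = signal[lo:i] + signal[i + 1:hi]  # exclude the sample itself
        recon.append(sum(neighbours) / len(neighbours))
    return recon

def detect_faults(signal, k=3.0):
    """Flag indices whose residual exceeds the mean residual by k sigma."""
    recon = moving_average_reconstruction(signal)
    residuals = [abs(x - r) for x, r in zip(signal, recon)]
    mean_r = sum(residuals) / len(residuals)
    sigma = (sum((r - mean_r) ** 2 for r in residuals) / len(residuals)) ** 0.5
    return [i for i, r in enumerate(residuals) if r > mean_r + k * sigma]

# A steady sensor trace with one injected spike at index 5.
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 9.0, 0.95, 1.0, 1.1, 1.0]
print(detect_faults(readings, k=2.0))  # → [5]
```

In the paper the reconstruction comes from a trained network, so residuals stay small for any normal pattern, not just locally smooth ones; the thresholding step is the same.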

  • Sentiment Analysis Based Framework for Assessing Internet Telemedicine Videos   Order a copy of this article
    Abstract: Telemedicine services delivered over the Internet and mobile devices need effective medical video delivery systems. This work describes a novel framework for assessing Internet-based telemedicine videos using sentiment analysis. The dataset comprises more than one thousand text comments by medical experts, collected from various medical animation videos in the YouTube repository. The proposed framework deploys machine learning classifiers such as Bayes net, KNN, the C4.5 decision tree, SVM (Support Vector Machine) and SVM-PSO (SVM with Particle Swarm Optimization) to infer opinion mining outputs. The results show that the SVM-PSO classifier performs best in assessing reviews of medical video content, with more than 80% accuracy; its precision and recall reach 87.8% and 85.57% respectively, underlining its superiority over the other classifiers. The concepts of sentiment analysis can be applied effectively to web-based user comments on medical videos, and the results can be highly valuable for enhancing the reputation of telemedicine education across the globe.
    Keywords: Machine Learning; Telemedicine; Medical videos.

  • Data Mining Classification Techniques - Comparison for Better Accuracy in Prediction of Cardiovascular Disease   Order a copy of this article
    by Richa Sharma 
    Abstract: Cardiovascular disease is a broad term that includes strokes and any disorder of the system that has the heart at its center; it is a critical cause of mortality every year across the globe. Data mining offers a variety of techniques and algorithms that help draw interesting conclusions, and mining in healthcare helps to predict disease. This study aims at knowledge discovery from a heart disease dataset and analyses several data mining classification techniques for better accuracy and a lower error rate. The datasets for the experiments are chosen from the UCI Machine Learning Repository and are analysed with two different data mining tools, WEKA and Tanagra; the analyses are carried out using the 10-fold cross-validation technique.
    Keywords: Data mining; Classification techniques; Machine learning Tools; Cardiovascular disease; KNN; Naïve Bayes; C-PLS; Decision Tree.
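The 10-fold cross-validation protocol behind such comparisons can be sketched as follows. The fold-splitting logic is the standard one; the 1-nearest-neighbour classifier and the toy two-cluster dataset are stand-ins for the classifiers and the UCI heart disease data used in the paper.

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Shuffle indices 0..n-1 and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def nn_predict(train, query):
    """1-nearest-neighbour: return the label of the closest training point."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda t: dist(t[0], query))[1]

def cross_val_accuracy(data, k=10):
    """Average accuracy over k held-out folds; data is a list of (point, label)."""
    folds = k_fold_indices(len(data), k)
    correct = 0
    for f in folds:
        test = [data[i] for i in f]
        train = [data[i] for i in range(len(data)) if i not in f]
        correct += sum(nn_predict(train, x) == y for x, y in test)
    return correct / len(data)

# Two well-separated toy clusters: class 0 near (0,0), class 1 near (5,5).
data = [((i * 0.1, i * 0.1), 0) for i in range(10)] + \
       [((5 + i * 0.1, 5 + i * 0.1), 1) for i in range(10)]
print(cross_val_accuracy(data))  # separable classes → 1.0
```

Each sample is tested exactly once while the remaining nine folds train the model, which is what makes the accuracy estimates of different classifiers comparable.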

  • Real Time Data Warehouse: Health Care Use Case   Order a copy of this article
    by Hanen Bouali 
    Abstract: Recently, advances in hardware technology have allowed experts to automatically record transactions and other pieces of information of everyday life at a rapid rate. Systems that execute complex event processing over real-time streams of RFID readings encode each reading as an event. In the healthcare context, applications are increasingly interconnected and can impose a massive event load to be processed. Furthermore, existing systems suffer from a lack of support for heterogeneity and dynamism. Consequently, streaming data resulting from RFID technology and many other sensors has brought another dimension to data querying and data mining research, because in a data stream only a time window is available. In contrast to traditional data sources, data streams present new characteristics: they are continuous, high-volume and open-ended, and exhibit concept drift. To analyse complex queries over event streams, the data warehouse seems to be the answer; however, the classical data warehouse does not incorporate the specificity of event streams, owing to the complexity of their spatial, temporal, semantic and real-time components. For these reasons, this paper focuses on the conceptual modelling of the real-time data warehouse, defining new dimensionalities and stereotypes for the classical data warehouse to adapt it to event streams. To demonstrate the efficiency of our real-time data warehouse, we adapt the general pattern model to a medical pregnancy care unit, which shows promising results.
    Keywords: data warehouse; data analysis; real time; healthcare.

  • Enhancement of SentiWordNet using Contextual Valence Shifters   Order a copy of this article
    by Poornima Mehta, Satish Chandra 
    Abstract: Sentence structure has a considerable impact on the sentiment polarity of a sentence. In the presence of contextual valence shifters such as conjunctions, conditionals and intensifiers, some parts of the sentence are more relevant for determining its polarity. In this work, we use valence shifters in sentences to enhance the sentiment lexicon SentiWordNet for a given document set, and also to improve sentiment analysis at the document level. Recently, microblogging services such as Twitter have become an important data source for sentiment analysis. Tweets, being restricted to 140 characters, are short and therefore contain slang, grammatical errors, spelling mistakes and informal expressions. The method is aimed at noisy and unstructured data such as tweets, on which computationally intensive tools like dependency parsers are not very successful. Our proposed system performs well on both noisy (Stanford and Airlines Twitter datasets) and structured (movie review) datasets.
    Keywords: Sentiment Analysis; SentiWordNet; Valence Shifters; Micro-blogs; Discourse; Twitter; Lexicon Enhancement.
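The effect of valence shifters on a lexicon score can be sketched with a few rules. Both the lexicon entries and the shifter weights below are illustrative, not the SentiWordNet values or the weights derived in the paper.

```python
# Toy sentiment lexicon with scores in [-1, 1]; negators flip and
# intensifiers scale the polarity of the sentiment word that follows.
LEXICON = {"good": 0.7, "bad": -0.7, "great": 0.9, "terrible": -0.9}
NEGATORS = {"not", "never", "no"}
INTENSIFIERS = {"very": 1.5, "extremely": 2.0, "slightly": 0.5}

def shifted_score(tokens):
    """Score a token list, applying pending negation/intensification
    to the next sentiment-bearing word only."""
    score, negate, weight = 0.0, False, 1.0
    for tok in tokens:
        if tok in NEGATORS:
            negate = True
        elif tok in INTENSIFIERS:
            weight *= INTENSIFIERS[tok]
        elif tok in LEXICON:
            s = LEXICON[tok] * weight
            score += -s if negate else s
            negate, weight = False, 1.0   # shifters apply to one word only
    return score

print(shifted_score("not very good".split()))   # 0.7 * 1.5, negated → -1.05
print(shifted_score("slightly bad".split()))    # -0.7 * 0.5 → -0.35
```

This window-free, rule-based treatment is what makes such an approach workable on noisy tweets, where a dependency parse would often fail.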

  • Bayesian Feature Construction for the Improvement of Classification Performance   Order a copy of this article
    by Manolis Maragoudakis 
    Abstract: In this paper we address the problem of increasing the validity of classification, not through approaches that improve the ability of a Machine Learning algorithm to construct a precise classification model, but from the perspective of a richer encoding of the training data: specifically, the creation of additional features, so that hidden aspects of the subject areas that model the available data are revealed to a higher degree. We propose a novel feature construction algorithm based on the ability of Bayesian networks to represent the conditional independence assumptions among features, bringing forth properties of their interrelations that are not apparent when a classifier is given the data in their initial form. The benefit of the constructed features is demonstrated experimentally over a wide range of domains and a large number of classification algorithms, where the improvement in classification performance is evident.
    Keywords: Machine learning; Knowledge engineering methodologies; Pattern analysis; Statistical Pattern Recognition.

  • A novel ensemble classifier by combining Sampling and Genetic algorithm to combat multiclass imbalanced problems   Order a copy of this article
    by Archana Purwar, Sandeep Singh 
    Abstract: Handling data sets with imbalanced classes is an exigent problem in machine learning and data mining. Although much work has been done in the literature on two-class imbalanced problems, multiclass problems still need to be explored, and most existing imbalanced learning techniques have proved inappropriate, or even produce a negative effect, when handling multiclass problems. To the best of our knowledge, no one has used a combination of sampling (with and without replacement) and a genetic algorithm to solve the multiclass imbalanced problem. In this paper, we propose a sampling and genetic algorithm based ensemble classifier (SA-GABEC) to handle imbalanced classes. SA-GABEC tries to locate, for a given sample, the best subset of classifiers that are precise in their predictions and can create acceptable diversity in the feature subspace. These subsets of classifiers are fused to give better predictions than a single classifier. Moreover, this paper also proposes a modified SA-GABEC, which performs feature selection before sampling and outperforms SA-GABEC. To demonstrate the adequacy of the proposed classifiers, we validate them using two assessment metrics, recall and extended G-mean, and compare the results with existing approaches such as GAB-EPA, AdaBoost and Bagging.
    Keywords: Feature extraction; diversity; genetic algorithm; ensemble learning; and multiclass imbalanced problems.

  • Dynamics of the Network Economy: A Content Analysis of the Search Engine Trends and Correlate Results Using Word Clusters   Order a copy of this article
    by Murat Yaslioglu 
    Abstract: The network economy is a relatively untouched area, and strategic approaches to the dynamics of this new economy are quite limited. Since the network economy is about networks, we asked what better medium there could be for collecting insights than the biggest network itself. We therefore followed up information on the internet, including every kind of documentation. Initially, a deep relation analysis using search trends was conducted to find the topics related to the new economy's dynamics: network effect, network externalities, interoperability, big data and open standards. Social media was also investigated, since it is considered the marketplace where the network economy applies. After the relation analysis, the correlates of the aforementioned keywords were analysed. Finally, all the clean top results on the web were collected, with the help of Linux command-line tools, into several very large text files, whose content was analysed with the NVivo qualitative analysis tool to form clusters. From the broad information at hand, an extensive discussion of each result is presented. It is believed that this new research approach will also guide future research on various subjects.
    Keywords: Network economy; network effect; network externalities; interoperability; big data; open standards; network strategy; methodology; analytics; word clusters; search engines.

  • Testing a File Carving Tool Using Realistic Datasets Generated with Openness by a User Level File System   Order a copy of this article
    by Srinivas Kaparthi, Venugopa T 
    Abstract: A file carver views a used hard disk as a storage medium containing raw data; from the user's point of view, the same hard disk is a storage medium containing files. File carving has applications in data recovery and digital forensics. It analyses the raw data and reassembles file fragments, without using the files' metadata, to reconstruct the actual files present on the disk. During the development phase of a file carver, it is inappropriate to use a used hard disk as the input medium, because the file system does not provide openness regarding file fragmentation and the location of data on the disk. In this paper, we propose a method that provides realistic data sets with openness, which can be used to test carving tools. The realistic property of the data sets is achieved by implementing a file system at user level. A large file is used to mimic a hard disk in this process: on the hard disk, the large file is handled by the host file system, while as a test hard disk it is handled by the user-level file system. Openness is achieved because the user-level file system acts as a white box, whereas the kernel-level file system acts as a black box. The large file thus generated is a realistic data set generated with openness and can be used as input for verifying the correctness of a file carving tool during its development phase.
    Keywords: File carver; file system; meta data; digital forensics; data recovery.

  • Fiber Optic Angle Rate Gyroscope Performance Evaluation in Terms of Allan Variance   Order a copy of this article
    by Jianbo Hu 
    Abstract: Based on an analysis of the error sources of the Fiber Optic Angle Rate Gyroscope (FOARG), the Allan parameters are used to calculate the Allan variances, and the relationship between the Allan variance and the accuracy of a FOARG is given. Because the output of one type of FOARG exhibits high noise, large volatility and notable errors, a data-processing algorithm based on averaging and smoothing is proposed. Matlab blocks for data sampling, averaging and smoothing are designed to process this FOARG's dynamic and static data and to evaluate its performance in terms of the Allan variance.
    Keywords: Fiber Optic Gyroscope; data-process; Allan variance.
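The non-overlapped Allan variance used for such evaluations has a standard textbook form: average the rate output over clusters of size m, then take half the mean squared difference of successive cluster averages. A minimal sketch (the alternating test signal is illustrative, not FOARG data):

```python
def allan_variance(rate, m):
    """Non-overlapped Allan variance of a rate sequence for cluster size m:
    AVAR(m) = 1 / (2(K-1)) * sum_k (ybar_{k+1} - ybar_k)^2,
    where ybar_k are the averages of K non-overlapping clusters of size m."""
    k = len(rate) // m                       # number of full clusters
    means = [sum(rate[i * m:(i + 1) * m]) / m for i in range(k)]
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(k - 1)]
    return sum(diffs) / (2 * (k - 1))

# Pure alternating "white" noise: averaging over clusters suppresses it.
noise = [1, -1] * 8
print(allan_variance(noise, 1))  # → 2.0
print(allan_variance(noise, 2))  # → 0.0 (each cluster averages to zero)
```

Plotting AVAR against m on log-log axes is how the different gyro error sources (angle random walk, bias instability, rate random walk) are read off in practice.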

  • A Novel Integrated Principal Component Analysis and Support vector Machines based diagnostic system for detection of Chronic Kidney disease   Order a copy of this article
    by Aditya Khamparia, Babita Pandey 
    Abstract: The alarming growth of chronic kidney disease has become a major issue in our nation. Kidney disease has no specific target group, but individuals with conditions such as obesity, cardiovascular disease and diabetes are all at increased risk. At the same time, there is little awareness of kidney disease and of the kidney failure that affects individuals' health. There is therefore a need for an advanced diagnostic system that improves the health condition of the individual. The intent of the proposed work is to combine a data reduction technique, principal component analysis (PCA), with a supervised classification technique, the support vector machine (SVM), for the examination of kidney disease from which patients have suffered in the past. A variety of statistical and probabilistic measures, such as accuracy and recall, are used to examine the validity of the dataset and the obtained results. The experimental results show that the SVM with a Gaussian radial basis kernel achieves higher precision and performs better than the other models in terms of diagnostic accuracy.
    Keywords: Principal component analysis; Support vector machine; classification; kidney disease; kernel; feature extraction.
    DOI: 10.1504/IJDATS.2020.10018953
  • Hybrid Fuzzy Logic and Gravitational Search Algorithm based Multiple Filters for Image Restoration   Order a copy of this article
    by A. Senthilselvi, Sukumar S 
    Abstract: Image restoration is a noise removal approach used to remove noise from a noisy image and restore it. It has been widely used in various fields, such as computer vision and medical imaging. In this paper, we consider standard test images, mainly corrupted by impulse noise, for noise removal experimentation, and present multiple image filters for the removal of impulse noise. The approach uses fuzzy logic (FL) to design a noise detector (ND) optimized by the gravitational search algorithm (GSA), and a median filter (MF) for restoration. The proposed multiple filters first use the FL approach to detect whether each pixel of a test image is noise-corrupted or not; if a pixel is considered noise-corrupted, the filters restore it with the MF, otherwise it remains unchanged. The image is first split into a number of windows, and the multiple filters are applied to each window. The filter output is used for rule generation, the optimal rules are selected using the GSA, and these rules are given to the fuzzy logic system to detect the noisy pixels. For experimentation, five types of standard test images are used. The experiments are carried out using different noise levels and different methods, with performance measured in terms of PSNR, MSE and visual quality.
    Keywords: Image restoration; impulse noise; fuzzy logic; multiple filters; median filter; standard test images and gravitational search algorithm.

  • Measuring Pearson's correlation coefficient of fuzzy numbers with different membership functions under the weakest t-norm   Order a copy of this article
    by Mohit Kumar 
    Abstract: In statistical theory, the correlation coefficient is widely used to assess a possible linear association between two variables and is usually calculated in a crisp environment. In this study, a simplified and effective method is presented to compute Pearson's correlation coefficient of fuzzy numbers with different membership functions, using approximate fuzzy arithmetic operations based on the weakest triangular norm (t-norm). Unlike previous studies, the correlation coefficient computed in this paper is a fuzzy number rather than a crisp number. The proposed method is illustrated by computing the correlation coefficient between technology level and management achievement for a sample of 15 machinery firms in Taiwan. The correlation coefficient computed by the proposed method carries less uncertainty, and the obtained results are more exact. The computed results are also compared with existing approaches.
    Keywords: Pearson’s correlation coefficient; fuzzy number; weakest t-norm arithmetic.
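For reference, the crisp Pearson coefficient that the paper extends to fuzzy numbers has the usual form; the sketch below computes it on crisp data and does not reproduce the weakest-t-norm fuzzy arithmetic itself. The sample values are invented.

```python
def pearson_r(x, y):
    """Crisp Pearson correlation coefficient:
    r = sum((x_i - mx)(y_i - my)) / (sqrt(sum (x_i - mx)^2) * sqrt(sum (y_i - my)^2))"""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # perfectly linear → 1.0
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # perfectly inverse → -1.0
```

In the fuzzy setting, each observation is a fuzzy number, the sums and products above are replaced by weakest-t-norm fuzzy arithmetic, and the result is itself a fuzzy number rather than a single value.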

    Abstract: This paper investigates a dual control policy for a bulk arrival queueing model with phase-dependent breakdown and vacation interruption. In this model, the service process is split into two phases, called the first essential service and the second essential service, and the occurrence of breakdowns differs between the two phases. When the server fails during the first essential service, the service process is interrupted and the server is sent to the repair station immediately. In contrast, when the server fails during the second essential service, the service is not interrupted; it continues for the current batch with some technical precautionary arrangements, and the server is repaired after service completion during the renewal period. On completion of the second essential service, if the queue length is less than a, the server leaves for a vacation. The server has to do preparatory work to initiate service after a vacation. During a vacation, if the queue length reaches the value a, the server breaks off the vacation and performs the preparatory work to start the first essential service. If the vacation period ends while the queue length is still less than a, the server remains dormant (idle) until the queue length reaches a. For this system, the probability generating function of the queue size is obtained using the supplementary variable technique. Various performance measures are derived, with appropriate numerical illustrations, and a cost model is presented to minimise the total average cost of the system.
    Keywords: Phase dependent breakdown; vacation break-off; dual control policy; bulk arrival; batch service; cost optimization; renewal time; supplementary variable technique.

  • Implementation of an Efficient FPGA Architecture for Capsule Endoscopy Processor Core using Hyper Analytic Wavelet based Image Compression Technique   Order a copy of this article
    by Abdul Jaleel N, Vijayakumar P 
    Abstract: Wireless capsule endoscopy (WCE) is a state-of-the-art technology for receiving images of the human intestine for medical diagnostics. This paper proposes the implementation of an efficient FPGA architecture for a capsule endoscopy processor core. The main part of this processor is image compression, for which we propose an algorithm based on the Hyper analytic Wavelet Transform (HWT). The HWT is quasi shift-invariant, and has good directional selectivity and a reduced degree of redundancy. Huffman coding is also used to reduce the number of bits required to represent a string of symbols. The paper also provides a Forward Error Correction (FEC) scheme based on Low Density Parity Check (LDPC) codes to reduce the Bit Error Rate (BER) of the transmitted data. Compared with similar existing works, the proposed architecture is efficient.
    Keywords: Wireless capsule endoscopy (WCE); Hyper analytic Wavelet Transform (HWT); Huffman coding; Low Density Parity Check codes (LDPC); Forward Error Correction (FEC); quasi shift-invariant; Bit Error Rate (BER).

  • Data Aggregation to Better Understand the Impact of Computerization on Employment   Order a copy of this article
    by James Otto, Chaodong Han 
    Abstract: Data reduction methods are called for to address the challenges presented by big data. The correlation between two variables may be less clear when data are organised at disaggregate levels in regression analysis. In this study, we apply data aggregation to regression analysis in the context of forecasting the impact of computerization on jobs and wages. We show that grouping data by the ranked independent variable, versus random or other grouping schemes, provides a clearer pattern of the employment impacts of computerization probability on job categories. The coefficient estimates are more consistent for groupings based on the ranked independent variable than for random groupings of the same independent variable. The improved estimates can have positive policy implications.
    Keywords: Data reduction methods; Impact of computerization; Computerization probability; Automation; Data grouping schemes; Statistical regression; Data aggregation; Ranked regression; Information reduction.
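The ranked-grouping idea can be sketched as follows: sort the observation pairs by the independent variable, average within equal-sized groups, and regress on the group means. The linear toy data with deterministic "noise" is illustrative, not the authors' occupation data.

```python
def ols_slope(x, y):
    """Slope of the simple least-squares fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

def grouped_by_rank(x, y, groups):
    """Sort (x, y) pairs by x, split into equal-sized groups, return group means."""
    pairs = sorted(zip(x, y))
    size = len(pairs) // groups
    gx, gy = [], []
    for g in range(groups):
        chunk = pairs[g * size:(g + 1) * size]
        gx.append(sum(p[0] for p in chunk) / size)
        gy.append(sum(p[1] for p in chunk) / size)
    return gx, gy

x = list(range(20))
y = [2 * i + (1 if i % 2 else -1) for i in x]   # slope 2 plus alternating ±1 "noise"
print(round(ols_slope(x, y), 3))                # ≈ 2.015 on the raw pairs
gx, gy = grouped_by_rank(x, y, groups=4)
print(round(ols_slope(gx, gy), 3))              # ≈ 2.016 on the group means
```

Averaging within rank-ordered groups cancels observation-level noise while preserving the spread of the independent variable, which is why the aggregated fit exposes the underlying trend more clearly than random groupings would.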

  • Inference in Mixed Linear Models with four variance components - Sub-D and Sub-DI   Order a copy of this article
    by Adilson Silva, Miguel Fonseca, Antonio Monteiro 
    Abstract: This work considers the new estimators for variance components in mixed linear models, Sub-D and its improved version Sub-DI, developed and tested by Silva (2017). Both estimators were derived and tested in mixed linear models with two and three variance components; the authors gave the corresponding formulations for models with an arbitrary number of variance components, but their performance had never been tested in models with more than three variance components. Here we give the explicit formulations of both Sub-D and Sub-DI in models with four variance components, together with a numerical example testing their performance. Tables containing the results of the numerical example are provided.
    Keywords: Orthogonal Matrices; Variance Components; Sub-D; Sub-DI; Mixed Linear Models.
    DOI: 10.1504/IJDATS.2020.10021769
  • Detecting Text in License Plates Using a Novel MSER-Based Method   Order a copy of this article
    by Admi Mohamed, E.L. Fkihi Sanaa, Faizi Rdouan 
    Abstract: A new license plate detection method is proposed in this paper. The proposed approach consists of three steps. The first step removes some detail from the input image by converting it to a grey-level image and inverting it (negative), and then uses MSER to extract text in candidate regions. The second step is based on a dynamic grouped DBSCAN algorithm for fast classification of the connected regions, and on the intersections of the outer tangents of circles for filtering regions with the same orientation. Finally, a geometrical and statistical character filter is used in the third step to eliminate false detections. Experimental results show that our approach achieves better detection than the method proposed by Xu-Cheng Yin (2014).
    Keywords: Text detection; MSER; circle overlapping; DBSCAN; License plate detection.

  • Microarray Cancer Classification using Feature Extraction based Ensemble Learning Method   Order a copy of this article
    Abstract: Microarray cancer datasets generally contain many features and a small number of samples, so redundant features need to be reduced first to allow faster convergence. To address this issue, we propose a novel feature extraction based ensemble classification technique using the support vector machine (SVM), which classifies microarray cancer data and helps to build intelligent systems for early cancer detection. The novelty of the proposed approach lies in classifying cancer data as follows: a) information is extracted by reducing the size of the larger dataset using various feature selection techniques, such as principal component analysis (PCA), chi-square, the genetic algorithm (GA) and F-score; b) the extracted information is classified into two classes, normal and malignant, using a majority-voting SVM ensemble. In the ensemble, different SVM kernels are used, namely linear, polynomial, radial basis function (RBF) and sigmoid, and the results of the individual kernels are combined by majority voting. The effectiveness of the algorithm is validated on six benchmark cancer datasets, viz. Colon, Ovarian, Leukaemia, Breast, Lung and Prostate.
    Keywords: Cancer classification; Support vector machine; PCA; GA; F-Score; Chi-square.
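The majority-voting combination step described above can be sketched independently of any SVM library; the per-kernel prediction lists below are hypothetical stand-ins for real classifier outputs.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier prediction lists, sample by sample, by majority
    vote. predictions is a list of equal-length label lists, one per classifier.
    On an exact tie, the label seen first (classifier order) wins."""
    combined = []
    for votes in zip(*predictions):
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Hypothetical outputs of four SVM kernels on five samples
# (0 = normal, 1 = malignant).
linear  = [0, 1, 1, 0, 1]
poly    = [0, 1, 0, 0, 1]
rbf     = [1, 1, 1, 0, 1]
sigmoid = [0, 0, 1, 0, 0]
print(majority_vote([linear, poly, rbf, sigmoid]))  # → [0, 1, 1, 0, 1]
```

Because the kernels make different errors on different samples, the combined vote can be more accurate than any single kernel, which is the rationale for the ensemble.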

Special Issue on: DAC9 Theory and Applications of Correspondence Analysis and Classification

  • Comparison of hierarchical clustering methods for binary data from molecular markers   Order a copy of this article
    by Emmanouil D. Pratsinakis, Symela Ntoanidou, Alexios Polidoros, Christos Dordas, Panagiotis Madesis, Ilias Elefterohorinos, George Menexes 
    Abstract: Data from molecular markers used for constructing dendrograms, which are based on genetic distances between different plant species, are encoded as binary data (0: absence of a band on the agarose gel; 1: presence of a band). For the construction of dendrograms, the most commonly used linkage methods are UPGMA (Unweighted Pair Group Method with Arithmetic mean) and Neighbor-Joining, in combination with various distances (mainly the squared or plain Euclidean distance). In this scientific field, the de facto standard clustering method (combination of distance and linkage method) appears to be UPGMA with the squared Euclidean distance. In this study, a review is presented of the distances and linkage methods used with binary data. Furthermore, an evaluation of the linkage methods and the corresponding appropriate distances (a comparison of 162 clustering methods) is attempted using binary data resulting from molecular markers applied to five populations of the wild mustard Sinapis arvensis. The validity of the various cluster solutions was tested using external criteria (geographical area and herbicide resistance). The results showed that the squared Euclidean distance combined with the UPGMA linkage method is not a panacea for dendrogram construction in the biological sciences based on binary data derived from molecular markers; thirty-six other hierarchical clustering methods could be used. In addition, the Benz
    Keywords: Dendrograms; proximities; linkage methods; Benz.
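The choice of distance for binary marker profiles, which the study evaluates, can be illustrated with two of the simplest candidates. Jaccard distance ignores shared band absences (0/0 matches), whereas squared Euclidean counts every mismatch equally; the marker profiles below are invented.

```python
def jaccard_distance(a, b):
    """1 - |intersection| / |union| over the 1-bands of two binary profiles."""
    both = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    either = sum(1 for x, y in zip(a, b) if x == 1 or y == 1)
    return 1 - both / either if either else 0.0

def squared_euclidean(a, b):
    """Squared Euclidean distance; on 0/1 data this counts all mismatches,
    treating shared band absences (0/0) as agreement."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

p1 = [1, 1, 0, 0, 0, 0]
p2 = [1, 0, 0, 0, 0, 0]
p3 = [0, 0, 0, 0, 0, 1]
# Jaccard ignores shared absences, squared Euclidean does not:
print(jaccard_distance(p1, p2), squared_euclidean(p1, p2))  # → 0.5 1
print(jaccard_distance(p1, p3), squared_euclidean(p1, p3))  # → 1.0 3
```

Because marker matrices are typically sparse, distances that weight 0/0 agreements (such as squared Euclidean) can make genetically unrelated profiles appear similar, which is one reason the UPGMA-with-squared-Euclidean default is questioned.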

  • Assessment of the Awareness Level of Cypriot Accounting Firms Concerning Cyber Risk: An Exploratory Analysis   Order a copy of this article
    by Stratos Moschidis, Efstratios Livanis, Athanasios Thanopoulos 
    Abstract: Technological development has made a decisive contribution to the digitization of businesses, making it easier for them to work more efficiently. In recent years, however, data leakages have shown an increasing trend. To investigate the level of awareness of cyber-related risks among Cypriot accountancy firms, we use data from a recent survey of Cypriot professional accountants, members of the Institute of Certified Public Accountants of Cyprus (ICPAC). The categorical nature of the data and the purpose of our research led us to use methods of multidimensional statistical analysis. The emergence of marked differences between accounting firms in relation to this issue, as we present, is particularly interesting.
    Keywords: cyber risk; multiple correspondence analysis; Cypriot accounting firms; exploratory statistics.

  • Sequential dimension reduction and clustering of mixed-type data   Order a copy of this article
    by Angelos Markos, Odysseas Moschidis, Theodore Chadjipantelis 
    Abstract: Real data sets usually involve a number of variables that are heterogeneous in nature. Clustering a set of objects described by a mixture of continuous and categorical variables can be a challenging task, because it requires taking into account the relationships between variables that are of different measurement levels. In the context of data reduction, an effective class of methods combines dimension reduction of the variables with clustering of the objects in the reduced space. In this paper, we focus on three methods for sequential dimension reduction and clustering of mixed-type data. The first step of each approach involves the application of Principal Component Analysis or Correspondence Analysis on a suitably transformed matrix, to retain as much variance as possible in as few dimensions as possible. In the second step, a partitioning or hierarchical clustering algorithm is applied to the object scores in the reduced space. The common theoretical underpinnings of the three approaches are highlighted. The results of a benchmarking study on simulated data show that sequential dimension reduction and clustering methods outperform alternative approaches, especially when the categorical variables are more informative than the continuous ones with regard to the underlying cluster structure. The strengths and limitations of the methods are also demonstrated on a real data set with nominal, ordinal and continuous variables.
    Keywords: Cluster Analysis; Dimension Reduction; Correspondence Analysis; Principal Component Analysis; Mixed-type Data.
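The "reduce then cluster" tandem described in this abstract can be sketched in a few lines. The coding scheme and data below are invented for illustration (z-scored continuous columns plus centred one-hot categories, PCA via SVD, then a small k-means); the paper's actual transformations may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixed-type data: 2 continuous columns and 1 categorical column
# ("a"/"b"), with two planted groups of 20 objects each.
cont = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
cat = np.array(["a"] * 20 + ["b"] * 20)

# Step 1 -- code the variables on comparable scales: z-score the
# continuous columns and centre a one-hot coding of the categorical one.
Z = (cont - cont.mean(0)) / cont.std(0)
onehot = (cat[:, None] == np.array(["a", "b"])[None, :]).astype(float)
X = np.hstack([Z, onehot - onehot.mean(0)])

# Step 2 -- PCA via SVD: keep the first two principal coordinates.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U[:, :2] * S[:2]

# Step 3 -- a tiny k-means on the reduced object scores (k = 2).
def kmeans(P, k, iters=50, seed=1):
    r = np.random.default_rng(seed)
    centres = P[r.choice(len(P), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((P[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        centres = np.array([P[labels == j].mean(0) for j in range(k)])
    return labels

labels = kmeans(scores, 2)
```

With well-separated planted groups, the clustering in the reduced space recovers the original two groups.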

  • A comparative evaluation of dissimilarity-based and model-based clustering in science education research: The case of children's mental models of the Earth   Order a copy of this article
    by Dimitrios Stamovlasis, Julie Vaiopoulou, George Papageorgiou 
    Abstract: In the present work, two different classification methods, a dissimilarity-based clustering approach (DBC) and model-based latent class analysis (LCA), were used to analyse responses to a questionnaire designed to measure children's mental representations of the Earth. The work contributes to an ongoing debate in cognitive psychology and science education research between two antagonistic theories on the nature of children's knowledge, that is, the coherent versus the fragmented knowledge hypothesis. Methodologically, the problem concerns the classification of response patterns into distinct clusters, which correspond to specific hypothesized mental models. DBC employs the partitioning around medoids (PAM) approach and selects the final cluster solution based on average silhouette width, cluster stability and interpretability. LCA, a model-based clustering method, achieves a taxonomy by employing the conditional probabilities of responses. Initially, a brief presentation and comparison of the two methods is provided, and issues concerning clustering philosophies are discussed. Both PAM and LCA managed to detect only the cluster corresponding to the coherent scientific model, along with an artificial segment added on purpose to the empirical data. Despite obvious deviations in cluster-membership assignment, the two methods ultimately provide sound findings as far as the tested hypotheses are concerned, converging to identical conclusions.
    Keywords: Mental model; latent class analysis; partitioning around medoids; dissimilarity-based clustering; coherent mental model hypothesis; fragmented knowledge hypothesis.
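As a rough illustration of the DBC side of the comparison, the sketch below implements a minimal partitioning-around-medoids step and selects the number of clusters by average silhouette width. The data, the alternating PAM heuristic and the candidate range of k are all invented for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
# Two planted groups in the plane, 15 points each.
X = np.vstack([rng.normal(0, 0.5, (15, 2)), rng.normal(5, 0.5, (15, 2))])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # dissimilarity matrix

def pam(D, k, iters=20, seed=0):
    """Very small partitioning-around-medoids: alternate assignment and
    per-cluster medoid updates on a precomputed dissimilarity matrix."""
    r = np.random.default_rng(seed)
    medoids = r.choice(len(D), k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new = np.array([np.flatnonzero(labels == j)[
            np.argmin(D[np.ix_(labels == j, labels == j)].sum(0))]
            for j in range(k)])
        if np.array_equal(new, medoids):
            break
        medoids = new
    return labels, medoids

def avg_silhouette(D, labels):
    """Average silhouette width computed from the dissimilarity matrix."""
    vals = []
    for i in range(len(D)):
        same = (labels == labels[i])
        same[i] = False
        a = D[i, same].mean() if same.any() else 0.0
        b = min(D[i, labels == c].mean()
                for c in set(labels) - {labels[i]})
        vals.append((b - a) / max(a, b))
    return float(np.mean(vals))

# Choose k by maximising the average silhouette width, as in DBC.
widths = {k: avg_silhouette(D, pam(D, k)[0]) for k in (2, 3, 4)}
best_k = max(widths, key=widths.get)
```

On these two tight, well-separated groups the silhouette criterion selects the two-cluster solution.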

Special Issue on: LOPAL'2018 Advances and Applications in Optimisation and Learning Algorithms

  • Evaluating information criteria in latent class analysis: Application to identify classes of Breast Cancer data set   Order a copy of this article
    by Abdallah Abarda, Mohamed Dakkon, Khawla Asmi, Youssef Bentaleb 
    Abstract: In recent studies, latent class analysis (LCA) modeling has been proposed as a convenient alternative to standard classification methods. It has become a popular tool for clustering respondents into homogeneous subgroups based on their responses to a set of categorical variables. The absence of a commonly accepted statistical indicator for deciding on the number of classes is one of the major unresolved issues in the application of LCA. Determining the number of classes constituting the profiles of a given population is often done using the likelihood ratio test, although its use is not theoretically correct. To overcome this problem, we propose an alternative to the classical latent class model selection methods based on information criteria. This article investigates the performance of information criteria for selecting latent class analysis models. Nine information criteria are compared under various sample sizes and model dimensionalities. We also propose an application of the criteria to select the best model for a breast cancer data set.
    Keywords: Latent class analysis; Model selection; Information criteria; Classification methods.
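For readers unfamiliar with the criteria being compared, the sketch below computes AIC, BIC and CAIC from a model's log-likelihood and picks the class count each criterion prefers. The log-likelihood values are invented purely for illustration and do not come from the paper's breast cancer data.

```python
import math

def information_criteria(loglik, n_params, n_obs):
    """AIC, BIC and CAIC from a fitted model's log-likelihood."""
    return {
        "AIC": -2 * loglik + 2 * n_params,
        "BIC": -2 * loglik + n_params * math.log(n_obs),
        "CAIC": -2 * loglik + n_params * (math.log(n_obs) + 1),
    }

def lca_n_params(n_classes, n_items):
    # For an LCA model with C classes and J binary items:
    # C*J item-response probabilities plus C-1 class weights.
    return n_classes * n_items + (n_classes - 1)

# Hypothetical log-likelihoods for 1..4 latent classes fitted to
# n = 500 respondents on 6 binary items (illustrative numbers only).
fits = {1: -2050.0, 2: -1880.0, 3: -1872.0, 4: -1869.5}
aic = {c: information_criteria(ll, lca_n_params(c, 6), 500)["AIC"]
       for c, ll in fits.items()}
bic = {c: information_criteria(ll, lca_n_params(c, 6), 500)["BIC"]
       for c, ll in fits.items()}
best_aic = min(aic, key=aic.get)   # class count preferred by AIC
best_bic = min(bic, key=bic.get)   # class count preferred by BIC
```

With these invented numbers the criteria disagree (AIC keeps three classes, BIC keeps two), which is exactly the kind of situation the paper's comparison of nine criteria addresses.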

  • Sentiment classification of review data using sentence significance score optimization   Order a copy of this article
    by Ketan Kumar Todi, Muralikrishna SN, Ashwath Rao B 
    Abstract: A significant amount of work has been done in the field of sentiment analysis of textual data using the concepts and techniques of Natural Language Processing (NLP). In this work, unlike existing techniques, we present a novel method that considers the significance of individual sentences in formulating the opinion. Often, sentences in a review correspond to different aspects that are irrelevant in deciding whether the sentiment on a topic is positive or negative. Thus, we assign a sentence significance score to evaluate the overall sentiment of the review. We employ a clustering mechanism followed by a neural network approach to determine the optimal significance scores for the review. The proposed supervised method shows higher accuracy than the state-of-the-art techniques. We further determine the subjectivity of sentences and establish a relationship between sentence subjectivity and the significance score. We show experimentally that the significance scores found by the proposed method identify the subjective and objective sentences in reviews: sentences with low significance scores correspond to objective sentences and sentences with high significance scores correspond to subjective sentences.
    Keywords: Aspect; Sentiment Classification; Clustering; Neural Network; Optimization; Significance score.
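The idea of weighting sentences by significance can be sketched as a weighted average of per-sentence polarity scores. The scores and weights below are invented; in the paper the weights are learned via clustering and a neural network.

```python
def review_sentiment(sentence_scores, significance):
    """Significance-weighted average of per-sentence polarity scores
    (scores in [-1, 1], non-negative weights)."""
    total = sum(significance)
    if total == 0:
        return 0.0
    return sum(w * s for w, s in zip(significance, sentence_scores)) / total

# Three sentences: an objective aspect mention (low weight) and two
# subjective opinions (high weight); all numbers are invented.
scores = [0.1, -0.8, -0.6]     # per-sentence polarity
weights = [0.1, 0.9, 0.8]      # per-sentence significance
overall = review_sentiment(scores, weights)
label = "positive" if overall > 0 else "negative"
```

The low-weight objective sentence barely moves the overall score, so the review is classified by its subjective sentences.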

  • Towards Knowledge Warehousing: Application to Smart Housing   Order a copy of this article
    by Hadjer Moulai, Habiba Drias 
    Abstract: The terms data, information and knowledge should not be treated as synonyms in any context. In fact, a hierarchical order exists between these entities: data become information and information becomes knowledge. Massive amounts of data are analysed every day in order to extract valuable knowledge to support decision making. However, the size of the extracted knowledge compromises the speed with which it can be reasoned over and exploited. In this paper, we propose the paradigm of knowledge warehousing to store and analyse large amounts of knowledge through online knowledge processing and knowledge mining techniques. Our proposal is supported by an original knowledge warehouse framework and a case study on smart housing technology. We illustrate a multi-agent system built on a knowledge warehouse architecture in which each agent has a knowledge base about its assigned task. The paradigm is expected to be applicable to other knowledge tasks and domains as well.
    Keywords: knowledge warehouse; knowledge management; knowledge mining; warehousing technology; smart housing; agent technology.

  • Road signs recognition: State-of-the-art and perspectives   Order a copy of this article
    by Btissam Bousarhane, Saloua Bensiali, Driss Bouzidi 
    Abstract: Traffic accidents are a global problem that enormously affects many countries. Morocco is one of the countries that pay a heavy price each year in terms of human lives lost and economic costs. Making cars safer is a crucial element of saving lives on roads. In case of inattention or distraction, drivers need a high-performance system capable of assisting and alerting them when a road sign appears in their field of vision. To create such systems, we first need to know the specificities of traffic signs and the major difficulties that still hinder their recognition, which is the object of the first and second sections of this paper. We should also study the different methods proposed by researchers to overcome each of these challenges. This study helps us identify the strengths and weaknesses of each method, as presented in the third section (classical vs. machine learning approaches). The evaluation metrics and criteria used to prove the effectiveness of these approaches are also an important element presented in section three. Improving the existing methods is crucial to ensure the effectiveness of the recognition process, especially by using deep learning algorithms and optimization techniques, as discussed in the last section of this paper.
    Keywords: Road signs recognition; detection; classification; tracking; machine learning; deep learning; evaluation datasets; evaluation metrics; hardware optimization; algorithmic optimization; CNN.

  • Combining Planning and Learning for Context Aware Service Composition   Order a copy of this article
    by Tarik Fissaa, Mahmoud Elhamlaoui, Hatim Guermah, Hatim Hafiddi, Mahmoud NASSAR 
    Abstract: The computing vision introduced by Mark Weiser in the early 90s defined the basis of what is now called ubiquitous computing. This discipline results from the convergence of powerful, small and affordable computing devices with networking technologies that connect them all together. Thus, ubiquitous computing has brought a new generation of service-oriented architectures (SOA) based on context-aware services. These architectures provide users with personalized and adapted behaviors by composing multiple services according to their contexts. In this context, the objective of this paper is to propose an approach for context-aware semantic-based service composition. Our contributions are built around the following axes: (i) a semantic-based context model and a specification of context-aware semantic composite services, (ii) an architecture for context-aware semantic-based service composition using Artificial Intelligence planning, and (iii) an intelligent mechanism based on reinforcement learning for context-aware selection, in order to deal with the dynamic and uncertain character of modern ubiquitous environments.
    Keywords: Context Awareness; Ontology; Service Composition; Semantic Web; AI Planning; Reinforcement Learning.
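Axis (iii), reinforcement-learning-based service selection, can be illustrated with a single-state (bandit-style) Q-learning loop that learns which of several candidate services for the same abstract task is most reliable. The services, success probabilities and hyperparameters below are invented.

```python
import random

random.seed(7)

# Three candidate services realise the same abstract task; each succeeds
# with a different probability, unknown to the agent (invented numbers).
success_prob = [0.2, 0.9, 0.5]
Q = [0.0, 0.0, 0.0]            # value estimate per service
alpha, epsilon = 0.1, 0.2      # learning rate, exploration rate

for _ in range(3000):
    # epsilon-greedy selection among the candidate services
    if random.random() < epsilon:
        a = random.randrange(3)
    else:
        a = max(range(3), key=lambda i: Q[i])
    reward = 1.0 if random.random() < success_prob[a] else 0.0
    # single-state Q-update (no bootstrap term: a one-step bandit)
    Q[a] += alpha * (reward - Q[a])

best_service = max(range(3), key=lambda i: Q[i])
```

After enough interactions the agent's value estimates single out the most reliable service, which is the mechanism that lets composition adapt under uncertainty.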

  • Bayesian Consensus Clustering with LIME for Security in Big Data   Order a copy of this article
    by Balamurugan Selvarathinam 
    Abstract: Malware creates huge noise in the current data era, and new security questions arise every day with new malware created by intruders. Malware protection remains one of the trending areas of research on the Android platform. Malware is routed through SMS/MMS in the subscriber's network, and an SMS, once read, is forwarded to other users. This affects the device once the intruders gain access to its data. Device and user data theft includes credit card credentials, login credentials and card information stored on the Android device. This paper works towards detecting the various kinds of malware carried in SMS messages, in order to protect mobile users from potential risks, using multiple data sources. A single data source is not very effective for spam detection, as it will not contain all the up-to-date malware and spam. This work uses two methods: Bayesian Consensus Clustering (BCC) for spam clustering and LIME for malware classification. The significance of these methods is their ability to work with unstructured data from different sources. After the two-step classification, a set of unique malware samples is identified, and all further malware is grouped according to its category.
    Keywords: Bayesian Consensus Clustering; LIME; Classification; Big Data security.
    DOI: 10.1504/IJDATS.2021.10023272
  • Efficient Data Clustering Algorithm Designed Using Heuristic Approach   Order a copy of this article
    by POONAM NANDAL, DEEPA BURA, Meeta Singh 
    Abstract: Retrieving information from the large amount of information available in a database is a major issue these days. The extraction of relevant information from the voluminous information available on the web is done using various techniques such as Natural Language Processing, lexical analysis, clustering and categorization. In this paper, we discuss clustering methods used for clustering large amounts of data using different features to classify the data. In today's era, various problem-solving techniques make use of heuristic approaches for designing and developing efficient algorithms. In this paper, we propose a clustering technique that uses a heuristic function to select the centroids so that the clusters formed are as per the need of the user. The heuristic function designed in this paper is based on conceptually similar data points, so that they are grouped into accurate clusters. The k-means clustering algorithm, which is widely used to cluster data, is also the focus of this paper. We have found empirically that the clusters formed, and the data points that belong to each cluster, are closer to human analysis than those of existing clustering algorithms.
    Keywords: Clustering; Natural Language Processing; k-means; Concept; Heuristic.
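One common family of heuristics for centroid selection, shown here as an invented stand-in for the paper's concept-similarity heuristic, is farthest-first seeding: pick initial centroids that cover distinct regions of the data before running k-means.

```python
import numpy as np

rng = np.random.default_rng(3)
# Three planted groups of conceptually similar points (toy data).
X = np.vstack([rng.normal(c, 0.4, (20, 2)) for c in ((0, 0), (5, 0), (0, 5))])

def heuristic_centroids(X, k):
    """Farthest-first heuristic: start from the point nearest the data
    mean, then repeatedly pick the point farthest from all chosen
    centroids, so the initial centres cover distinct regions."""
    chosen = [int(np.argmin(((X - X.mean(0)) ** 2).sum(1)))]
    while len(chosen) < k:
        d = np.min(((X[:, None] - X[chosen][None]) ** 2).sum(-1), axis=1)
        chosen.append(int(np.argmax(d)))
    return X[chosen].copy()

def kmeans(X, centres, iters=50):
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        centres = np.array([X[labels == j].mean(0)
                            for j in range(len(centres))])
    return labels, centres

labels, centres = kmeans(X, heuristic_centroids(X, 3))
```

Because each seed lands in a different region, k-means starts near the true group structure instead of depending on a lucky random initialisation.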

  • Semantic Integration of Traditional and Heterogeneous Data Sources (UML, XML and RDB) in OWL 2 Triplestore   Order a copy of this article
    by Oussama EL Hajjamy, Hajar Khallouki, Larbi Alaoui, Mohamed Bahaj 
    Abstract: With the success of the internet and the expansion of the amount of data on the web, the exchange of information between various heterogeneous and classical data sources has become a critical need. In this context, researchers must propose integration solutions that allow applications to access several data sources simultaneously. In this perspective, it is necessary to find a solution for integrating data from classical data sources (UML, XML and RDB) into richer systems based on ontologies, using the semantic web language OWL. In this work, we propose a semi-automatic approach for integrating classical data sources via a global schema located in a database management system for RDF or OWL data, called a triplestore. The goal is to combine several classical and heterogeneous data sources according to the same schema and a unified semantics. Our contribution is subdivided into three axes. The first aims to establish an automatic solution that converts classical data sources such as UML, XML and relational databases (RDB) into local ontologies based on the OWL2 language. The second consists of semantically aligning the local ontologies based on syntactic, semantic and structural similarity measures, in order to increase the probability of finding real correspondences and real differences. Finally, the third axis merges the pre-existing local ontologies into a global ontology based on the alignment found in the previous step. A tool based on our approach has also been developed and tested to demonstrate the power of our strategy and validate the theoretical concepts.
    Keywords: data integration; UML; XML; RDB; semantic web; OWL2; triplestore; aligning ontologies; merging ontologies.
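The first axis, converting an RDB schema into ontology terms, conventionally maps a table to a class, a row to an individual and a column to a datatype property. The sketch below emits such triples for a toy table; the namespace, table and helper function are hypothetical and far simpler than the paper's OWL2 mapping.

```python
# Hypothetical namespace for the illustration only.
NS = "http://example.org/onto#"

def table_to_triples(table_name, rows, key):
    """Map one relational table to RDF-style (subject, predicate, object)
    triples: table -> class, row -> individual, column -> property."""
    triples = [(f"{NS}{table_name}", "rdf:type", "owl:Class")]
    for row in rows:
        subject = f"{NS}{table_name}_{row[key]}"
        triples.append((subject, "rdf:type", f"{NS}{table_name}"))
        for column, value in row.items():
            if column != key:
                triples.append((subject, f"{NS}{column}", repr(value)))
    return triples

employees = [
    {"id": 1, "name": "Alice", "dept": "R&D"},
    {"id": 2, "name": "Bob", "dept": "Sales"},
]
triples = table_to_triples("Employee", employees, key="id")
```

The resulting triples can then be loaded into a triplestore, where the alignment and merging steps of the second and third axes operate.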

  • A Novel Homophone-Based Text Compression for Secure Transmission   Order a copy of this article
    by Baritha Begum 
    Abstract: The Internet has been widely used for communication in recent years. In the last decade, there has been an increase in the amount of data transmitted via the Internet, representing text, images, speech, video, sound and computer data. Hence there is a need for efficient compression algorithms that can make effective use of the existing network bandwidth. Data secrecy is one of the most important concerns in the security of any network. The Homophone-based Encryption with Compression (HEC) algorithm proposed here is a viable way to maintain data confidentiality. The HEC algorithm reduces the quantum of data used for representation. Homophones are words that sound alike but have different meanings and spellings. The proposed scheme enhances the security and compression ratio of the input information in three steps. First, each input word is transformed into an existing homophone using a built-in dictionary, which enhances security. Then compression is done by the BWT, a modified RLE and Huffman coding. Finally, array-reduction-based encryption-cum-compression is applied, which further increases the compression ratio and improves security. This scheme has been tested on a number of text files from standard corpora. The results indicate that the HEC scheme achieves a higher compression ratio and better security than many widely used dictionary-based and statistical compression schemes.
    Keywords: Homophone; data compression; encryption; data security; bits per character; compression ratio; unicity distance; compression efficiency.
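Two of the compression stages named in the abstract, the Burrows-Wheeler transform (which groups similar characters together) and run-length encoding, can be sketched naively as follows. The homophone substitution, modified RLE, Huffman and array-reduction stages are omitted; this is not the authors' implementation.

```python
def bwt(s, sentinel="\x00"):
    """Naive Burrows-Wheeler transform via sorted rotations."""
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def ibwt(t, sentinel="\x00"):
    """Invert the BWT by repeatedly sorting partial rotations."""
    table = [""] * len(t)
    for _ in range(len(t)):
        table = sorted(c + row for c, row in zip(t, table))
    return next(row for row in table if row.endswith(sentinel))[:-1]

def rle_encode(s):
    """Plain run-length encoding into (char, run_length) pairs."""
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

def rle_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

encoded = rle_encode(bwt("banana"))        # BWT clusters the repeats
restored = ibwt(rle_decode(encoded))       # lossless round trip
```

The BWT output "annb\x00aa" contains longer runs than "banana" itself, which is what makes the subsequent RLE and entropy-coding stages effective.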

  • Improving Social Media Engagements on paid and nonpaid advertisements: A Data Mining Approach   Order a copy of this article
    by Jen-peng Huang, Genesis Sembiring Depari 
    Abstract: The purpose of this research is to develop a strategy to improve the number of social media engagements on Facebook, for both paid and non-paid publications, through a data mining approach. Several Facebook post characteristics were weighted in order to rank the importance of the input variables. The performance of Support Vector Machines, Deep Learning and Random Forests, along with dynamic parameters, was compared in order to obtain a robust algorithm for assessing the importance of several input factors. Random Forest was found to be the most powerful algorithm, with 79% accuracy, and was therefore used to analyse the importance of the input factors in order to improve the number of engagements of social media posts. Eventually, we found that total page likes (the number of page followers) of a company's Facebook page is the most important factor for obtaining more social media engagements, for both paid and non-paid publications. To show that engagements are also beneficial for reaching more people, we also examined the correlation of shares, likes, comments and other post characteristics with reaching more people through company Facebook pages. In the final part, we propose managerial implications for improving the number of engagements in social media for both paid and non-paid publications.
    Keywords: Social Media; Data Mining; Paid Advertisement; Non-Paid Advertisement; Social Media Engagements.
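Ranking input variables by Random Forest feature importance, as described in the abstract, can be sketched with scikit-learn on synthetic data. The feature names and the planted relationship (page likes driving engagement) are invented to mirror the paper's finding, not taken from its data set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 400

# Synthetic stand-in for the post data: "page_likes" drives engagement,
# the other two features are pure noise (all values are invented).
page_likes = rng.normal(size=n)
hour_posted = rng.normal(size=n)
post_length = rng.normal(size=n)
X = np.column_stack([page_likes, hour_posted, post_length])
y = (page_likes + 0.1 * rng.normal(size=n) > 0).astype(int)  # high/low engagement

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importance = dict(zip(["page_likes", "hour_posted", "post_length"],
                      model.feature_importances_))
top_feature = max(importance, key=importance.get)
```

The `feature_importances_` attribute (impurity-based importance, which sums to 1 across features) provides the ranking used to decide which post characteristics are worth optimising.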