These articles have been peer-reviewed and accepted for publication but are pending final changes and are not yet published; they may not appear here in their final order of publication until they are assigned to issues. The content therefore conforms to our standards, but the presentation (e.g. typesetting and proofreading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.
Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.
International Journal of Data Science (8 papers in press)
State Space and Box-Jenkins Approaches: A Comparison of Model Prediction Performance in Finance by Obinna Adubisi, John Ikwuoche, Ogbaji Eka, Erinma Uduma Abstract: This paper describes a study that used data from the Central Bank of Nigeria's statistical web database to evaluate and compare the forecasting performance of a nonstationary linear state space model and a Box-Jenkins (ARIMA) model over different historical periods. The comparison uses data series on inflation rates (core and non-core) in Nigeria for a specified period. Performance was evaluated on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE) and root mean square percentage error (RMSPE). The one-year forecast evaluation indicated that predictions from the nonstationary linear state space model outperformed those of the seasonal ARIMA model across the time periods considered. Furthermore, the proposed nonstationary linear state space model captured the dynamic structure of the inflationary series reasonably well and requires no new cycle of identification and model estimation when new data become available. Keywords: ARIMA; filtering; inflation rate; smoothing; state space model.
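As an editorial illustration of the three metrics named in this abstract, the sketch below computes MAE, MAPE and RMSPE for two competing one-year forecasts; the inflation figures and model outputs are invented, not the authors' data.

```python
import math

def mae(actual, pred):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    """Mean absolute percentage error (%)."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def rmspe(actual, pred):
    """Root mean square percentage error (%)."""
    return 100 * math.sqrt(
        sum(((a - p) / a) ** 2 for a, p in zip(actual, pred)) / len(actual))

# Hypothetical inflation-rate series and two competing forecasts
actual = [11.4, 11.2, 11.6, 12.0]
ss_forecast = [11.3, 11.3, 11.5, 11.9]     # state space model (illustrative)
arima_forecast = [11.0, 11.5, 11.1, 12.4]  # seasonal ARIMA (illustrative)

print(round(mae(actual, ss_forecast), 3))     # → 0.1
print(round(mae(actual, arima_forecast), 3))  # → 0.4
```

Lower values indicate better forecasts; MAPE and RMSPE express errors as percentages of the actual series, and RMSPE is never smaller than MAPE for the same forecast.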
Most Preferable Combination of Explicit Drift Detection Approaches with Different Classifiers for Mining Concept Drifting Data Streams by Ritesh Srivastava, Veena Tayal Abstract: Sensors in real-world applications are major sources of big data streams with varying underlying data distributions. Continuously generated, time-varying data streams are commonly referred to as concept drifting, dynamic or non-stationary data streams. Unlike a learner of stationary data, a learner of a concept drifting data stream must quickly forget outdated concepts in order to learn the new concept whenever a drift occurs. Many concept drifting data mining algorithms explicitly use drift detection algorithms to trigger this forgetting, so the accuracy of the learner depends on the accuracy of the drift detection algorithm, which is generally characterised by both its accuracy and its promptness in detecting drifts. Usually, a significant drop in the average accuracy of the stream classifier signals the occurrence of a concept drift. To achieve and maintain consistently high accuracy in classifying concept drifting data streams, it is important to understand which combinations of drift detection algorithms and classification algorithms are preferable. To explore such combinations, this work presents an empirical evaluation of some popular drift detection methods with some state-of-the-art classification algorithms, using standard real-world benchmark datasets. Keywords: concept drifts; online learning; data stream mining; big data; machine learning; classification; drift detection methods.
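As a hedged illustration of explicit drift detection driven by a drop in classifier accuracy, the sketch below implements a detector in the spirit of the well-known Drift Detection Method (DDM) of Gama et al.; it is not the authors' experimental setup, and the synthetic error stream is invented.

```python
import math
import random

class DDM:
    """Sketch of the Drift Detection Method: monitors a classifier's
    online error rate and signals drift when the rate rises well above
    its historical minimum (the classic 3-sigma rule)."""
    def __init__(self):
        self.n = 0          # examples seen
        self.errors = 0     # misclassifications seen
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, misclassified):
        self.n += 1
        self.errors += int(misclassified)
        p = self.errors / self.n
        s = math.sqrt(p * (1 - p) / self.n)
        if self.n > 30 and p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = p, s       # new best operating point
        if self.n > 30 and p + s > self.p_min + 3 * self.s_min:
            return "drift"
        return "stable"

# Synthetic stream: the learner is right 95% of the time, then a
# concept drift at t = 300 makes it wrong 60% of the time.
random.seed(0)
detector = DDM()
status = [detector.update(random.random() < (0.05 if t < 300 else 0.60))
          for t in range(600)]
print("drift" in status[300:])
```

The detector flags the simulated drift shortly after the error rate jumps; in a real pipeline the classifier would be retrained at that point.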
Addressing Uncertainty in Buyer-Supplier Interfaces by Supply Chain Phase and Decision-Making Level: A Fuzzy Goal-Fitting Approach by Margaret Shipley, Ray (Qing) Cao, G. Jonathan Davis Abstract: This exploratory study addresses uncertainty in the supply chain management interfaces required for effective buyer-supplier partnerships. Such partnerships require sharing of knowledge throughout the phases of the supply chain, where impact may be more critical at different organisational levels of managerial decision making. The study considered the Plan, Source, Make and Deliver phases and the Operational, Tactical and Strategic levels of decision making as the points of interface importance. A method using fuzzy probabilities of the degree of fit to goals set at statistical confidence intervals is detailed, with an application based on input from over 400 buyers comparing seven suppliers in the electronics industry. Survey questions are mapped to seminal works on performance criteria and, where possible, to phase and decision-making level. Results showed that the Source, Plan and Deliver phases are, to varying degrees and at different levels, important in buyer-supplier interfaces. Interestingly, the Make phase was less important overall for interfacing. The results suggest a heuristic that managers can use to maximise supply chain performance gains through limited attention to the buyer-supplier partnership. Keywords: multi-criteria decision making; MCDM; data analytics; fuzzy sets; supply chain partnership; SCM; supplier selection; data science.
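To illustrate the general idea of a fuzzy degree of fit to a goal (the paper's exact formulation is not given in the abstract), the sketch below scores hypothetical suppliers against a fuzzy goal using a triangular membership function; the supplier names, scores and goal interval are all invented.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, peaking at 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical goal for a supplier's on-time delivery score: the fuzzy
# goal "about 90%", modelled on the interval [80, 100] peaking at 90.
suppliers = {"S1": 88.0, "S2": 95.0, "S3": 72.0}
fit = {name: triangular(score, 80, 90, 100) for name, score in suppliers.items()}
best = max(fit, key=fit.get)
print(best, round(fit[best], 2))  # → S1 0.8
```

A crisp cut-off would treat 88 and 95 as equally "close to 90"; the fuzzy membership grades them, which is the kind of information a goal-fitting comparison across many criteria can aggregate.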
An application of the logic of explanatory power in rough set analysis: implications for the classification of decision rules by Anthony T. Odoemena Abstract: This paper uses the logic of explanatory power to address the question of uncertain decision rule classification and interpretation in rough set data analysis. A set-theoretic configuration of the measure of explanatory power is introduced. The usefulness of the measure is then examined in the context of two datasets: one related to car evaluation and the other to the provision of extra educational supports. It is found that the explanatory power measure has some interesting properties that enhance the informativeness and interpretation of non-deterministic decision rules. The numerical analysis shows that the explanatory power index is unique. The index can also facilitate the establishment of an objective threshold that determines whether the explanatory relevance of the premise in a given decision rule is positive, negative or neutral. Keywords: rough sets; explanatory power; data analysis; decision rules. DOI: 10.1504/IJDS.2019.10022074
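The abstract does not state the paper's exact formula; one standard formalisation from the literature on the logic of explanatory power is the Schupbach-Sprenger measure, sketched below on a hypothetical 2x2 contingency table of decision-rule outcomes. Whether this is the measure the paper adopts is an assumption made purely for illustration.

```python
def explanatory_power(p_h_given_e, p_h_given_not_e):
    """Schupbach-Sprenger measure of explanatory power, in [-1, 1]:
    positive when the premise e raises the probability of hypothesis h,
    negative when it lowers it, and zero when e is irrelevant to h."""
    num = p_h_given_e - p_h_given_not_e
    den = p_h_given_e + p_h_given_not_e
    return num / den

# Estimate the conditional probabilities from a hypothetical 2x2
# contingency table of (premise satisfied, decision class) counts.
n_e_h, n_e_noth = 40, 10        # premise holds: 40 in the class, 10 not
n_note_h, n_note_noth = 20, 30  # premise fails: 20 in the class, 30 not
p_h_e = n_e_h / (n_e_h + n_e_noth)               # 0.8
p_h_note = n_note_h / (n_note_h + n_note_noth)   # 0.4
print(round(explanatory_power(p_h_e, p_h_note), 3))  # → 0.333
```

The sign of the measure gives exactly the positive/negative/neutral trichotomy the abstract mentions for the explanatory relevance of a rule's premise.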
A retrospective data analysis of Legionella pneumophila diagnostic procedures and their impact on patients' management: the experience of a rapid point-of-care test by Eliona Gkika, Dimosthenis Chochlakis, Yannis Tselentis, Constantin Zopounidis, Vassilis S. Kouikoglou, Kitsos Gkikas, Anna Psaroulaki Abstract: We compare a conventional test and a rapid point-of-care test (POCT) for the diagnosis of Legionella pneumophila, considering various performance criteria. We used data from patients with a positive test for L. pneumophila (confirmed cases), registered by the microbiology laboratories of two hospitals in Crete, Greece. Hospital A adopts a conventional indirect fluorescent-antibody technique and Hospital B uses a urinary antigen POCT. The mean laboratory turnaround time was 4.45 days for the conventional test and 0.11 days for the POCT. A total of 24 laboratory-positive cases (11 inpatients, 13 outpatients) were identified out of 905 samples taken from 751 people. The mean daily hospitalisation cost per inpatient was €79.86 for Hospital B and €127.45 for Hospital A; for the latter, a much higher antibiotic treatment cost per patient was recorded. The analysis suggests that a rapid POCT for L. pneumophila could significantly decrease time to diagnosis, improve treatment and reduce hospitalisation charges. Keywords: Legionella pneumophila; point of care testing; turnaround time; length of stay; cost reduction. DOI: 10.1504/IJDS.2019.10022085
Analysis of weather data using various regression algorithms by Yeturu Jahnavi Abstract: Weather forecasting is a vital application in meteorology and has been one of the most challenging problems around the world. Data mining is a process that uses a variety of data analysis tools to discover patterns and relationships in data that may be used to make valid predictions. This paper focuses on weather analysis using various regression algorithms in data mining: linear regression, classification and regression trees, a multilayer perceptron neural network and a support vector machine (SVM). For the weather analysis, primary atmospheric parameters such as average temperature, average pressure and relative humidity are considered. Performance is measured using the evaluation criteria of root mean square error, mean absolute error, relative absolute error and root relative squared error. Keywords: data mining; weather prediction; regression algorithms. DOI: 10.1504/IJDS.2019.10022087
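As an editorial sketch of the four evaluation criteria listed in the abstract, the following computes RMSE, MAE, RAE and RRSE for invented temperature data (not the paper's dataset). RAE and RRSE compare a model's error against the naive baseline that always predicts the mean of the observations, so values below 1 mean the model beats that baseline.

```python
import math

def rmse(y, yhat):
    """Root mean square error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(y, yhat)) / len(y))

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(y, yhat)) / len(y)

def rae(y, yhat):
    """Relative absolute error: total error relative to predicting the mean."""
    mean = sum(y) / len(y)
    return (sum(abs(a - p) for a, p in zip(y, yhat)) /
            sum(abs(a - mean) for a in y))

def rrse(y, yhat):
    """Root relative squared error: squared error relative to the mean predictor."""
    mean = sum(y) / len(y)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(y, yhat)) /
                     sum((a - mean) ** 2 for a in y))

# Hypothetical average-temperature observations vs. one model's predictions
y = [30.1, 31.4, 29.8, 32.0, 30.7]
yhat = [30.0, 31.0, 30.2, 31.5, 30.9]
print(round(rmse(y, yhat), 3), round(mae(y, yhat), 3))
```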
Sentiment analysis on organisational resilience by Tiffany Maldonado, Ray Qing Cao, Lila L. Carden Abstract: Applying sentiment analysis, we examine how firms can achieve organisational resilience by focusing on two different operational strategies in their responses to adverse events: anticipatory responses or reactionary responses. We examined 210 firms and found that firms focusing on an anticipatory strategy of investing in corporate social responsibility benefited from increased organisational resilience. We also found that firms focusing on a reactionary strategy of risk management practice in their daily operations likewise benefited from increased organisational resilience. Furthermore, our study revealed that firms that focus on the economic and environmental aspects of corporate social responsibility and on the risk assessment process benefited from higher levels of organisational resilience. Keywords: sentiment analysis; text mining; big data; data analytics; organisational resilience; corporate social responsibility. DOI: 10.1504/IJDS.2019.10022106
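As a minimal, hypothetical illustration of lexicon-based sentiment scoring (the paper's actual text-mining pipeline is not described in the abstract), the sketch below scores a made-up disclosure fragment against a tiny invented lexicon; studies of corporate text typically use much larger dictionaries.

```python
# A tiny, hypothetical sentiment lexicon (invented for illustration).
LEXICON = {"resilient": 1, "growth": 1, "recovery": 1, "strong": 1,
           "loss": -1, "risk": -1, "disruption": -1, "decline": -1}

def sentiment_score(text):
    """Net sentiment: (positives - negatives) / number of matched words."""
    words = text.lower().split()
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

report = "strong recovery and growth despite disruption"
print(round(sentiment_score(report), 2))  # → 0.5
```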
Analysis of co-authorship network based on some betweenness centrality concepts by Kannan Balakrishnan, Divya Sindhu Lekha, R. Sunil Kumar Abstract: Connector nodes, which help establish a strongly connected network, are critical components of a network, and the betweenness centrality of a node captures this connecting capability well. We suggest some new betweenness centrality measures that could be useful in analysing the structural connectivity of a network. In this paper we study collaboration behaviour in a co-authorship network, namely the NetScience network, from the perspective of these measures. We analyse the network from a micro perspective, considering small groups of scientists doing research in a common subdiscipline. We show that each group is formed under the influence of only one or two highly collaborative authors. We also observe that, even though these authors are highly influential in smaller groups, they do not contribute notably to the overall research of the main discipline. Keywords: complex networks; network centrality; graph theory; betweenness centrality; collaboration network; co-authorship network. DOI: 10.1504/IJDS.2019.10022088
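Classical betweenness centrality, which the proposed measures build on, can be computed with Brandes' algorithm; the sketch below runs it on an invented toy co-authorship graph in which one author bridges two clusters (the paper's own measures and the NetScience data are not reproduced here).

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for (unnormalised) betweenness centrality on an
    unweighted, undirected graph given as an adjacency dict."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, preds = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                                    # BFS from source s
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                                # back-propagate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # each undirected pair is counted from both endpoints
    return {v: c / 2 for v, c in bc.items()}

# Toy co-authorship graph: the cluster {A, B, C} is bridged to {D, E} by C
adj = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"],
       "D": ["C", "E"], "E": ["D"]}
scores = betweenness(adj)
print(max(scores, key=scores.get))  # → C, the bridging author
```

In the toy graph, author C sits on every shortest path between the two clusters, mirroring the abstract's finding that one or two highly collaborative authors hold each group together.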