Forthcoming articles

International Journal of Grid and Utility Computing (IJGUC)

These articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Register for our alerting service, which notifies you by email when new issues are published online.

Articles marked with this Open Access icon are freely available and openly accessible to all, without any restriction except those stated in their respective CC licences.
We also offer RSS feeds, which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Grid and Utility Computing (81 papers in press)

Regular Issues

  • Research on regression test method based on multiple UML graphic models   Order a copy of this article
    by Mingcheng Qu, Xianghu Wu, Yongchao Tao, Guannan Wang, Ziyu Dong 
    Abstract: Most existing graph-based regression testing schemes target a single given UML diagram type and are not flexible in regression testing. This paper proposes a method that is universal across a variety of UML graphical models and their modifications. Through domain-of-influence analysis of the effect of a UML modification on the graphical model, it determines which parts of the modified UML graphic module structure must be retested, and finally regenerates test cases automatically for the affected range. The method is shown to achieve a high logical coverage rate. Because it fully considers all kinds of dependency, it does not limit the types of modification to the UML graphics, and it offers greater openness and comprehensiveness.
    Keywords: regression testing; multiple UML graphical models; domain analysis.

  • An improved energy efficient multi-hop ACO-based intelligent routing protocol for MANET   Order a copy of this article
    by Jeyalaxmi Perumaal, Saravanan R 
    Abstract: A Mobile Ad-hoc Network (MANET) consists of a group of mobile nodes that communicate without any supporting centralised structure. Routing in a MANET is difficult because of its dynamic features, such as high mobility, constrained bandwidth and link failures due to energy loss. The objective of the proposed work is to implement an intelligent routing protocol. Selection of the best hops is mandatory to provide good throughput in the network; therefore, Ant Colony Optimisation (ACO) based intelligent routing is proposed. Selecting the best intermediate hop for intelligent routing uses the ACO technique, which greatly reduces network delay and link failures by validating the co-ordinator nodes. The best co-ordinator nodes are selected as intermediate hops in the intelligent routing path. The performance is evaluated using the NS2 simulation tool, and the metrics considered are delivery and loss rate of sent data, throughput, network lifetime, delay and energy consumption.
    Keywords: ant colony optimisation; intelligent routing protocol; best co-ordinator nodes; MANET.
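
    ACO-based next-hop selection of the kind described above is easy to illustrate. The sketch below is a minimal, generic ACO step, not the authors' protocol; the pheromone and heuristic values (e.g. residual energy or link quality of candidate co-ordinator nodes) are invented for the example.

    ```python
    import random

    # Hypothetical tables for the neighbours of the current node.
    pheromone = {'B': 0.8, 'C': 0.5, 'D': 0.2}   # learned desirability (tau)
    heuristic = {'B': 0.9, 'C': 0.6, 'D': 0.7}   # e.g. residual energy/link quality (eta)

    ALPHA, BETA = 1.0, 2.0  # usual ACO weighting exponents

    def choose_next_hop(pheromone, heuristic):
        """Roulette-wheel selection proportional to tau^alpha * eta^beta."""
        weights = {n: (pheromone[n] ** ALPHA) * (heuristic[n] ** BETA)
                   for n in pheromone}
        r = random.uniform(0, sum(weights.values()))
        acc = 0.0
        for node, w in weights.items():
            acc += w
            if r <= acc:
                return node
        return node  # fallback for floating-point edge cases

    print(choose_next_hop(pheromone, heuristic))
    ```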

  • A survey on fog computing and its research challenges   Order a copy of this article
    by Jose Dos Santos Machado, Edward David Moreno, Admilson De Ribamar Lima Ribeiro 
    Abstract: This paper reviews fog computing, a new paradigm of distributed computing, and presents its concept, characteristics and areas of application. It reviews the literature on the problems of its implementation and analyses its research challenges, such as security issues, operational issues and standardisation. We show and discuss that many questions still need to be researched in academia before implementation becomes a reality, but it is clear that the adoption of fog computing is inevitable for the internet of the future.
    Keywords: fog computing; edge computing; cloud computing; IoT; distributed computing; cloud integration to IoT.

  • Analysis of spectrum handoff schemes for cognitive radio networks considering secondary user mobility   Order a copy of this article
    by K.S. Preetha, S. Kalaivani 
    Abstract: There has been a gigantic spike in the usage and development of wireless devices since wireless technology came into existence. This has contributed to a very serious problem of spectrum unavailability or spectrum scarcity. The solution to this problem comes in the form of cognitive radio networks, where secondary users (SUs), also known as unlicensed users, make use of the spectrum in an opportunistic manner. The SU uses the spectrum in such a manner that the primary (licensed) user (PU) does not face interference above a threshold level of tolerance. Whenever a PU comes back to reclaim its licensed channel, the SU using it needs to perform a spectrum handoff (SHO) to another channel that is free of PUs. This way of functioning is termed spectrum mobility, which can be achieved by means of SHO. Initially, the SUs continuously sense the channels to identify an idle channel; errors in channel sensing are possible. Detection theory is applied to analyse the spectrum sensing errors with the receiver operating characteristic (ROC), considering false alarm probability, miss detection and detection probability. In this paper, we meticulously investigate and analyse the probability of spectrum handoff (PSHO), and hence the performance of spectrum mobility, with Lognormal-3 and Hyper-Erlang distribution models, considering SU call duration and residual time of availability of spectrum holes as measurement metrics designed for tele-traffic analysis.
    Keywords: cognitive radio networks; detection probability; probability of a miss; SNR; false alarm probability; primary users; secondary users.
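
    For reference, the detection-theoretic quantities named in this abstract have standard textbook definitions (background, not a result of the paper):

    ```latex
    P_d = \Pr(\text{decide } H_1 \mid H_1), \qquad
    P_{fa} = \Pr(\text{decide } H_1 \mid H_0), \qquad
    P_m = 1 - P_d
    ```

    where H_1 denotes "PU signal present" and H_0 "channel idle"; the ROC curve traces P_d against P_fa as the sensing threshold varies.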

  • Hybrid coherent encryption scheme for multimedia big data management using cryptographic encryption methods   Order a copy of this article
    by Stephen Dass, J. Prabhu 
    Abstract: In today's world of technology, data play an imperative role in many different technical areas. Ensuring data confidentiality, integrity and security over the internet, across different media and applications, is a challenging task. Data generation from multimedia and IoT is another huge source of big data on the internet. When sensitive and confidential data are reached by attackers, serious security and privacy problems arise. Data encryption is the mechanism to forestall this issue. Many encryption techniques are used for multimedia and IoT, but massive data volumes pose additional computational challenges. This paper designs and proposes a new coherent encryption algorithm that addresses the issue of IoT and multimedia big data. The proposed system achieves a strong cryptographic effect without holding much memory and with easy performance analysis, and uses GPUs to handle huge data and make processing more efficient. The proposed algorithm is compared with other symmetric cryptographic algorithms, such as AES, DES, 3-DES, RC6 and MARS, with respect to architecture, flexibility, scalability and security level, as well as computational running time and throughput for both encryption and decryption. The avalanche effect of the proposed system is calculated to be 54.2%. The proposed framework secures multimedia better against real-time attacks than the existing systems.
    Keywords: big data; symmetric key encryption; analysis; security; GPU; IoT; multimedia big data.
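
    The avalanche effect quoted above (54.2%) measures how many output bits flip when a single input bit changes; 50% is the ideal. A minimal sketch of the measurement, using SHA-256 purely as a stand-in for the authors' cipher (a real evaluation would substitute the proposed encryption function):

    ```python
    import hashlib

    def bits(b: bytes) -> str:
        return ''.join(f'{byte:08b}' for byte in b)

    def avalanche(msg: bytes, bit_index: int = 0) -> float:
        """Percentage of output bits that differ after flipping one input bit."""
        flipped = bytearray(msg)
        flipped[bit_index // 8] ^= 1 << (bit_index % 8)
        out1 = bits(hashlib.sha256(msg).digest())
        out2 = bits(hashlib.sha256(bytes(flipped)).digest())
        diff = sum(a != b for a, b in zip(out1, out2))
        return 100.0 * diff / len(out1)

    print(f"avalanche effect: {avalanche(b'multimedia block'):.1f}%")
    ```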

  • A study on data deduplication schemes in cloud storage   Order a copy of this article
    by Priyan Malarvizhi Kumar, Usha Devi G, Shakila Basheer, Parthasarathy P 
    Abstract: Digital data is growing at an immense rate day by day, and finding efficient storage and security mechanisms is a challenge. Cloud storage has already gained popularity because of the huge data storage capacity that cloud service providers make available to users through storage servers. When many users upload data to the cloud, there can be a great deal of redundant data, which wastes storage space and transmission bandwidth. To promise efficient storage, handling this redundant data is very important, and this is done through deduplication. The major challenge for deduplication is that most users upload data in encrypted form for privacy and security. There are many prevailing mechanisms for deduplication, some of which handle encrypted data as well. The purpose of this paper is to survey the existing deduplication mechanisms in cloud storage and to analyse the methodologies used by each of them.
    Keywords: deduplication; convergent encryption; cloud storage.
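
    Convergent encryption, listed in the keywords, is the standard trick that makes deduplication of encrypted data possible: the key is derived from the plaintext itself, so identical plaintexts always yield identical ciphertexts that the server can deduplicate. A toy sketch (the XOR keystream stands in for a real block cipher and is not secure):

    ```python
    import hashlib

    def keystream(key: bytes, n: int) -> bytes:
        out, counter = b'', 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(8, 'big')).digest()
            counter += 1
        return out[:n]

    def convergent_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
        key = hashlib.sha256(plaintext).digest()    # key = H(plaintext)
        ct = bytes(p ^ k for p, k in
                   zip(plaintext, keystream(key, len(plaintext))))
        return key, ct

    k1, c1 = convergent_encrypt(b'same file content')
    k2, c2 = convergent_encrypt(b'same file content')
    assert c1 == c2   # identical plaintexts -> identical ciphertexts -> dedupable
    ```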

  • How do checkpoint mechanisms and power infrastructure failures impact on cloud applications?   Order a copy of this article
    by Guto Leoni Santos, Demis Gomes, Djamel Sadok, Judith Kelner, Elisson Rocha, Patricia Takako Endo 
    Abstract: With the growth of cloud computing usage by commercial companies, providers of this service are looking for ways to improve and estimate the quality of their services. Failures in the power subsystem represent a major risk of cloud data centre unavailability at the physical level. At the same time, software-level mechanisms (such as application checkpointing) can be used to maintain application consistency after a downtime and also improve availability. However, understanding how failures at the physical level impact application availability, and how software-level mechanisms can improve data centre availability, is a challenge. This paper analyses the impact of power subsystem failures on cloud application availability, as well as the impact of checkpoint mechanisms for recovering the system from software-level failures. For that, we propose a set of stochastic models to represent the cloud power subsystem, the cloud application, and the checkpoint-based recovery mechanisms. To evaluate data centre performance, we also model request arrivals and processing time as a queue, and feed this model with real data acquired from experiments performed on a real testbed. To verify which components of the power infrastructure have the greatest impact on data centre availability, we perform a sensitivity analysis. The results of the stationary analysis show that the choice of a given type of checkpoint mechanism does not have a significant impact on the observed metrics. On the other hand, improving the power infrastructure yields performance and availability gains.
    Keywords: cloud data centre; checkpoint mechanisms; availability; performance; stochastic models.
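
    The paper's stochastic models are far richer than this, but two of the base quantities such models build on can be sketched in a few lines: steady-state availability from mean time to failure/repair, and the M/M/1 mean response time commonly used when requests are modelled as a queue. The figures below are placeholders, not data from the authors' testbed.

    ```python
    def availability(mttf_h: float, mttr_h: float) -> float:
        """Two-state steady-state availability."""
        return mttf_h / (mttf_h + mttr_h)

    def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
        """Mean response time of an M/M/1 queue (requires lambda < mu)."""
        assert arrival_rate < service_rate, "queue is unstable"
        return 1.0 / (service_rate - arrival_rate)

    A = availability(mttf_h=2000.0, mttr_h=4.0)
    print(f"availability: {A:.5f}, downtime/year: {(1 - A) * 8760:.1f} h")
    print(f"mean response time: {mm1_response_time(80.0, 100.0) * 1000:.1f} ms")
    ```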

  • Link survivability rate-based clustering for QoS maximisation in VANET   Order a copy of this article
    by D. Kalaivani, P.V.S.S.R. Chandra Mouli 
    Abstract: The clustering technique is used in VANETs to manage and stabilise topology information. The major requirements of this technique are data transfer through the group of nodes without disconnection, node coordination, minimised interference between nodes, and reduction of the hidden terminal problem. Data communication among the nodes in a cluster is performed by a cluster head (CH). The major issues in clustering approaches are the improper definition of the cluster structure and the maintenance of that structure in a dynamic network. To overcome these issues, a link- and weight-based clustering approach is developed along with a distributed dispatching information table (DDIT) that reuses significant information to avoid data transmission failure. In this paper, the clustering algorithm is designed on the basis of the relative velocity of two same-directional vehicles, forming a cluster with a number of nodes in a VANET. The CH is then selected based on the link survival rate of the vehicle to deliver emergency messages to different vehicles in the cluster, with the data packet information stored in the DDIT table used for fault prediction. Finally, an efficient medium access control (MAC) protocol is used to prioritise messages, avoiding spectrum shortage for emergency messages in the cluster. A comparative analysis of the proposed link-based CH selection with DDIT (LCHS-DDIT) against existing methods, such as clustering-based cognitive MAC (CCMAC), multichannel CR ad-hoc network (MCRAN), and dedicated short-range communication (DSRC), proves the effectiveness of LCHS-DDIT regarding throughput, packet delivery ratio, and routing control overhead with minimum transmission delay.
    Keywords: vehicular ad-hoc networks; link survival rate; control channel; service channel; medium access control; roadside unit; on-board unit.

  • Evaluation of navigation based on system optimal traffic assignment for connected cars   Order a copy of this article
    by Weibin Wang, Minoru Uehara, Haruo Ozaki 
    Abstract: Recently, many cars have become connected to the internet. In the near future, almost all cars will be connected cars. Such a connected car will automatically drive according to a navigation system. Conventional car navigation systems are based on user equilibrium (UE) traffic assignment. However, system optimal (SO) traffic assignment is better than UE traffic assignment. To realise SO traffic assignment, complete traffic information is required. When connected cars become ubiquitous, all traffic information will be gathered into the cloud. Therefore, a cloud-based navigation system can provide SO-based navigation to connected cars. An SO-based navigation method in which the cloud collects traffic information from connected cars, computes SO traffic assignments, and recommends SO routes to users was recently proposed. In this paper, we evaluate this SO-based navigation method in detail.
    Keywords: system optimal traffic assignment; connected cars; intelligent transportation system.
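
    The UE/SO distinction this evaluation rests on is captured by the classical Beckmann formulations, with x_a the flow and t_a(.) the travel-time function of link a (standard background, not notation from the paper):

    ```latex
    \text{UE:}\;\; \min_{x} \sum_{a} \int_{0}^{x_a} t_a(s)\, ds
    \qquad\qquad
    \text{SO:}\;\; \min_{x} \sum_{a} x_a\, t_a(x_a)
    ```

    Under UE each driver minimises their own travel time (Wardrop's first principle); SO minimises total system travel time, which is why it needs the complete traffic information that connected cars can supply to the cloud.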

  • Assessing distributed collaborative recommendations in different opportunistic network scenarios   Order a copy of this article
    by Lucas Nunes Barbosa, Jonathan Gemmell, Miller Horvath, Tales Heimfarth 
    Abstract: Mobile devices are common throughout the world, even in countries with limited internet access and even when natural disasters disrupt access to a centralised infrastructure. This access allows for the exchange of information at an incredible pace and across vast distances. However, this wealth of information can frustrate users as they become inundated with irrelevant or unwanted data. Recommender systems help alleviate this burden. In this work, we propose a recommender system where users share information via an opportunistic network. Each device is responsible for gathering information from nearby users and computing its own recommendations. An exhaustive empirical evaluation was conducted on two different datasets. Scenarios with different node densities, velocities and data exchange parameters were simulated. Our results show that in a relatively short time when a sufficient number of users are present, an opportunistic distributed recommender system achieves results comparable to that of a centralised architecture.
    Keywords: opportunistic networks; recommender systems; mobile ad hoc networks.

  • Research on the relationship between geomantic omen and housing choice in the big data era   Order a copy of this article
    by Lin Cheng 
    Abstract: In order to make optimal housing-choice decisions based on geomantic omen, modern information technology of the big data era is applied to establish the relationship between geomantic omen and housing choice. Firstly, geomantic theory and residential district planning decisions are discussed. The function, core content and goal of geomantic theory are analysed, along with its importance for the site selection, orientation, spacing and indoor environment of a residential region. The environment of an urban residence includes the following elements: roads, water bodies, plants and other environmental elements. Geomantic theory can humanise the distribution of the road system based on four principles, and the flow direction of water, the distribution of dynamic and static water, the water area, and the layout and composition of water bodies should all be designed on its basis. Secondly, a big data-mining algorithm based on grey relational theory is studied: linear big data are pre-processed, the grey relational theory is used to construct the data-mining algorithm, and the weight selection procedure is designed. Thirdly, a big data relational analysis algorithm is put forward, whose procedure comprises three steps: preprocessing of the original data, processing of environmental parameters, and calculation of the relational degree. Finally, three residential districts are used as examples for grey relational analysis of geomantic theory and housing choice, and the results verify the effectiveness of the big data-mining algorithm. In addition, geomantic culture matters more to residents' satisfaction than housing choice alone, and good commercialised residential development can be achieved based on geomantic theory.
    Keywords: big data; housing choice; grey relational analysis.
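
    The grey relational analysis named in the keywords follows Deng's standard formulation: after normalising each indicator series, relational coefficients against a reference series are computed and averaged into a grade. A compact sketch with invented indicator values for three districts:

    ```python
    def grey_relational_grades(reference, series_dict, rho=0.5):
        """Deng's grey relational grades of several series vs. a reference.

        The min/max deltas are taken over all series and all points, as in
        the classical formulation; rho is the distinguishing coefficient.
        """
        deltas = {k: [abs(r - s) for r, s in zip(reference, v)]
                  for k, v in series_dict.items()}
        all_d = [d for ds in deltas.values() for d in ds]
        d_min, d_max = min(all_d), max(all_d)
        return {k: sum((d_min + rho * d_max) / (d + rho * d_max)
                       for d in ds) / len(ds)
                for k, ds in deltas.items()}

    # Normalised district scores on road/water/plant indicators (invented).
    reference = [1.0, 1.0, 1.0]   # ideal geomantic profile
    districts = {'A': [0.9, 0.7, 0.8], 'B': [0.6, 0.9, 0.5], 'C': [0.8, 0.8, 0.9]}
    print(grey_relational_grades(reference, districts))
    ```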

  • Study on NVH robustness evaluation method of high mileage automobile based on systematic sampling   Order a copy of this article
    by Jianqiang Xiong, Le Yuan, Dingding Liao, Jun Wu 
    Abstract: At present, research on automobile riding comfort focuses primarily on the NVH performance of new automobiles, with less research on how to analyse and evaluate the NVH characteristics of high-mileage automobiles. Based on statistical principles, this paper presents a robust evaluation method, based on systematic sampling, for the stability of high-mileage automobile NVH characteristics, and expounds its basic idea and main implementation steps. The paper focuses on analysing the NVH characteristics of high-mileage automobiles and how to evaluate their robustness, providing a new direction for research into automobile riding comfort.
    Keywords: automobile vibration and noise; evaluation method; high mileage automobile; systematic sampling.

  • Data access control algorithm based on fine-grained cloud storage   Order a copy of this article
    by Qiaoge Xu 
    Abstract: With the development of network storage and cloud computing, cloud storage security has become a critical problem in cloud security technology. Customer data confidentiality must be ensured in an untrusted storage environment, and customers' legitimate data must be protected from tampering. In order to ensure cloud storage security and achieve fine-grained data access control, a new fine-grained data access control algorithm is established based on the CP-ABE algorithm. The basic theory of CP-ABE is studied in depth, and a flowchart of the algorithm is presented. A fine-grained cloud storage control scheme based on digital envelopes is then put forward, and its structure is designed: a trusted third party generates the public parameters and master key of the system, the data owner holds the client's original plaintext data, normal users can read digital envelopes stored on the cloud storage server, and the cloud service provider (CSP) offers data storage to users. The construction process of the new scheme is given, and the corresponding algorithm is designed. The new scheme reduces the user-management complexity of the CSP while keeping the fine-grained access control and flexibility of the original scheme. A fine-grained access privilege tree is also designed to improve the robustness of the access control algorithm and to describe the encryption strategy. Simulation results show that the proposed algorithm effectively improves ciphertext search efficiency and achieves fine-grained access control in a cloud storage environment.
    Keywords: data access control algorithm; fine-grained cloud storage; searching efficiency.
    DOI: 10.1504/IJGUC.2020.10027929
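
    The digital-envelope construction described above has a simple shape: the data are encrypted once under a random symmetric key, and that key is in turn encrypted under a CP-ABE access policy. The sketch below shows only the envelope structure; cpabe_encrypt is a placeholder standing in for a real CP-ABE library, and the XOR stream cipher is illustrative, not secure.

    ```python
    import os, hashlib

    def stream(key: bytes, n: int) -> bytes:
        out, ctr = b'', 0
        while len(out) < n:
            out += hashlib.sha256(key + ctr.to_bytes(8, 'big')).digest()
            ctr += 1
        return out[:n]

    def cpabe_encrypt(key: bytes, policy: str) -> dict:
        # Placeholder: a real implementation encrypts `key` so that only
        # users whose attributes satisfy `policy` can recover it.
        return {'policy': policy, 'wrapped_key': key}

    def make_envelope(data: bytes, policy: str) -> dict:
        dek = os.urandom(32)                       # data-encryption key
        ct = bytes(d ^ k for d, k in zip(data, stream(dek, len(data))))
        return {'ciphertext': ct, 'header': cpabe_encrypt(dek, policy)}

    env = make_envelope(b'customer record',
                        policy='(manager AND finance) OR auditor')
    ```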
     
  • Multi-objective optimisation of traffic signal control based on particle swarm optimisation   Order a copy of this article
    by Jian Li 
    Abstract: In order to relieve traffic jams, an improved particle swarm optimisation is applied to the multi-objective optimisation of traffic signal control. Firstly, a multi-objective optimisation model of traffic signals is constructed considering queue length, vehicle delay and exhaust emission; the multi-objective function is transformed into a single objective through three weighted indexes. The vehicle delay and queue length models under traffic signal control are constructed by combining the Webster model and the Highway Capacity Manual delay model, and a vehicle exhaust emission model under signal control is also constructed. The objective function and constraint conditions are then specified. Secondly, an improved particle swarm optimisation algorithm is established by combining the traditional particle swarm algorithm with a genetic algorithm. The mathematics of the particle swarm algorithm is studied in depth: particles are endowed with a hybrid probability, which is random and independent of fitness value. In every iteration, a number of particles are selected according to the hybrid probability and placed into a pool; the locations of offspring particles are calculated from the weighted locations of the parent particles, and the inertia factor is regulated by a nonlinear inertia-weight decrement function. Finally, a simulation analysis is carried out on an intersection in which the flow of the straight lanes ranges from 300 to 450 pcu and the flow of the left-turn lanes ranges from 250 to 380 pcu. The optimal performance index is obtained, and the new multi-objective optimisation model gives better results than the traditional multi-objective optimisation algorithm, yielding a better traffic control effect.
    Keywords: particle swarm optimisation; traffic signal control; intersection.
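
    A minimal particle swarm update for the weighted single-objective form derived in the paper (queue length, delay and emissions folded into one cost). The cost function and the green-split bounds below are stand-ins for the Webster/HCM-based models, not the paper's actual formulas.

    ```python
    import random

    def cost(green_split):   # stand-in for the weighted delay/queue/emission model
        return (green_split - 0.62) ** 2 + 0.1 * abs(green_split - 0.5)

    W, C1, C2 = 0.7, 1.5, 1.5    # inertia and acceleration coefficients
    particles = [{'x': random.uniform(0.2, 0.8), 'v': 0.0} for _ in range(20)]
    for p in particles:
        p['best_x'], p['best_f'] = p['x'], cost(p['x'])
    g_best = min(particles, key=lambda p: p['best_f'])['best_x']

    for _ in range(100):
        for p in particles:
            r1, r2 = random.random(), random.random()
            p['v'] = (W * p['v'] + C1 * r1 * (p['best_x'] - p['x'])
                                 + C2 * r2 * (g_best - p['x']))
            p['x'] = min(0.8, max(0.2, p['x'] + p['v']))   # keep split feasible
            f = cost(p['x'])
            if f < p['best_f']:
                p['best_x'], p['best_f'] = p['x'], f
                if f < cost(g_best):
                    g_best = p['x']

    print(f"best green split: {g_best:.3f}")
    ```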

  • Performance analysis of StaaS on IoT devices in fog computing environment using embedded systems   Order a copy of this article
    by José Dos Santos Machado, Danilo Souza Silva, Raphael Fontes, Adauto Menezes, Edward Moreno, Admilson Ribeiro 
    Abstract: This work presents the concept of fog computing, its theoretical contextualisation, and related works, and performs an analysis of fog computing to provide StaaS (Storage as a Service) on IoT devices using embedded systems platforms, comparing its results with those obtained by a high-performance server. In this article, we use OwnCloud and the SmashBox benchmark (for data transfer analysis). The results showed that implementing this service on embedded systems devices can be a good alternative for mitigating one of the problems currently affecting IoT devices, in this case data storage.
    Keywords: fog computing; cloud computing; IoT; embedded systems; StaaS.

  • Predicting students' academic performance: Levy search of cuckoo-based hybrid classification   Order a copy of this article
    by Deepali R. Vora, Kamatchi Iyer 
    Abstract: Nowadays, Educational Data Mining (EDM) is a novel trend in the Knowledge Discovery in Databases (KDD) and Data Mining (DM) fields, concerned with mining valuable patterns and extracting practical knowledge from educational systems. However, evaluating the educational performance of students is challenging, as their academic performance pivots on varied constraints. Hence, this paper intends to predict the educational performance of students based on socio-demographic information. To attain this, a performance prediction architecture is introduced with two modules. One module handles the big data via the MapReduce (MR) framework, whereas the second is an intelligent module that predicts student performance using intelligent data processing stages. Here, a hybridisation of classifiers, namely the Support Vector Machine (SVM) and the Deep Belief Network (DBN), is adopted to obtain better results; in the DBN, the Levy Search of Cuckoo (LC) algorithm is adopted for weight computation. Hence, the SVM-LCDBN prediction model is proposed, which makes a deep connection with the hybrid classifier to attain more accurate output. Moreover, the adopted scheme is compared with conventional algorithms and the results are reported.
    Keywords: data mining; educational data mining; MapReduce framework; support vector machine; deep belief network; cuckoo search algorithm; Levy flight.
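
    The Levy-flight step that cuckoo search contributes to the weight search is usually generated with Mantegna's algorithm; a sketch (beta and the usage pattern are typical choices, not values from the paper):

    ```python
    import math, random

    def levy_step(beta: float = 1.5) -> float:
        """One Levy-distributed step via Mantegna's algorithm."""
        sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                   / (math.gamma((1 + beta) / 2) * beta
                      * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = random.gauss(0.0, sigma_u)
        v = random.gauss(0.0, 1.0)
        return u / abs(v) ** (1 / beta)

    # In cuckoo search, a candidate weight vector w is typically perturbed as
    #   w_new = w + step_scale * levy_step() * (w - w_best)
    print(levy_step())
    ```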

  • Combined interactive protocol for lattice-based group signature schemes with verifier-local revocation   Order a copy of this article
    by Maharage Nisansala Sevwandi Perera, Takeshi Koshiba 
    Abstract: In group signature schemes, the signer is required to prove to the signature verifier his validity to generate signatures on behalf of the group. However, since the signer's identity should be anonymous to the verifier, the proving mechanism should not reveal any information related to the signer. Thus, signers should follow a zero-knowledge proof system when interacting with verifiers. In group signature schemes with the verifier-local revocation (VLR) mechanism, group members have another secret attribute, called a revocation token, in addition to the secret signing key. Thus, the signer has to prove that his revocation token is not in the revoked member list without revealing the token to the verifier. Even though the first lattice-based group signature scheme with verifier-local revocation (Langlois et al., PKC 2014) proved the validity of the signer using an underlying interactive protocol, that scheme relied on weaker security because revocation tokens were derived from the secret signing keys. This paper discusses situations where the secret signing keys and revocation tokens are generated separately to achieve strong security, and provides a new combined interactive protocol, satisfying zero knowledge, to prove the validity of the signer. Moreover, the new interactive protocol covers situations where the group manager prefers an explicit tracing algorithm, rather than the implicit one, to identify a signer. As a result, this work presents a combined interactive protocol with which a signer can prove his validity to sign, that his separately generated revocation token is not in the revocation list, and that his index is correctly encrypted to support the explicit tracing algorithm.
    Keywords: lattice-based group signatures; verifier-local revocation; zero-knowledge proof; interactive protocol.

  • Chronological and exponential based Lion optimisation for optimal resource allocation in cloud   Order a copy of this article
    by J. Devagnanam, N.M. Elango 
    Abstract: Cloud computing is a service-oriented architecture of prominent importance to the development of enterprises and markets. The main intention of cloud computing is to maximise the effectiveness of the shared resources according to demand, while maintaining the profit of the cloud provider. Accordingly, this paper introduces an optimisation scheme for allocating suitable resources in cloud computing. Previously, a resource allocation scheme was developed by introducing the EWMA-based Lion algorithm (E-Lion). In this work, the E-Lion algorithm is extended with the chronological concept to develop a novel algorithm, named Chronological E-Lion. Also, to further refine the resource allocation scheme, the proposed Chronological E-Lion algorithm uses a fitness function with parameters such as cost, profit, CPU allocation, memory allocation, MIPS, and frequency scaling. The proposed scheme is implemented on three different problem instances and evaluated with metrics such as profit, CPU allocation rate, and memory allocation rate. From the simulation results, it can be concluded that the proposed Chronological E-Lion algorithm achieved improved performance, with values of 45.925, 0.1555, and 0.0093 for profit, CPU utilisation rate, and memory utilisation rate, respectively.
    Keywords: cloud computing; resource allocation; E-Lion; chronological concept; CPU utilisation rate; memory utilisation rate.

  • An empirical study of alternating least squares collaborative filtering recommendation for MovieLens on Apache Hadoop and Spark   Order a copy of this article
    by Jung-Bin Li, Szu-Yin Lin, Yu-Hsiang Hsu, Ying-Chu Huang 
    Abstract: In recent years, both consumers and businesses have faced the problem of information explosion, and recommendation systems provide a possible solution. This study implements a movie recommendation system that provides recommendations to consumers in an effort to increase consumer spending while reducing the time spent selecting films. The study builds a prototype collaborative filtering recommendation system based on the Alternating Least Squares (ALS) algorithm. The advantage of collaborative filtering is that it can avoid possible violations of the Personal Data Protection Act and reduce the possibility of errors due to poor-quality personal data. Our research addresses ALS's limited scalability by using a platform that combines Spark with Hadoop YARN, using this combination to compute movie recommendations and store data separately. Based on the results of this study, our proposed system architecture provides recommendations with satisfactory accuracy while maintaining acceptable computational time with limited resources.
    Keywords: recommendation system; alternating least square; collaborative filtering; MovieLens; Hadoop; Spark; content-based filtering.
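
    A minimal PySpark rendering of the pipeline described above, assuming a MovieLens ratings file with its usual userId/movieId/rating columns; the parameter values are illustrative, not the paper's tuned settings.

    ```python
    from pyspark.sql import SparkSession
    from pyspark.ml.recommendation import ALS
    from pyspark.ml.evaluation import RegressionEvaluator

    spark = SparkSession.builder.appName("MovieLensALS").getOrCreate()
    ratings = spark.read.csv("ratings.csv", header=True, inferSchema=True)
    train, test = ratings.randomSplit([0.8, 0.2], seed=42)

    als = ALS(rank=10, maxIter=10, regParam=0.1,
              userCol="userId", itemCol="movieId", ratingCol="rating",
              coldStartStrategy="drop")      # drop unseen users/items at test time
    model = als.fit(train)

    rmse = RegressionEvaluator(metricName="rmse", labelCol="rating",
                               predictionCol="prediction"
                               ).evaluate(model.transform(test))
    print(f"RMSE = {rmse:.3f}")
    top10 = model.recommendForAllUsers(10)   # 10 movie recommendations per user
    ```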

  • Evaluating and modelling solutions for disaster recovery   Order a copy of this article
    by Júlio Mendonça, Ricardo Lima, Ermeson Andrade 
    Abstract: System outages can have disastrous effects on businesses, such as data loss, customer dissatisfaction, and subsequent revenue loss. Disaster recovery (DR) solutions have been adopted by companies to minimise the effects of these outages. However, selecting an optimal DR solution is difficult, since no single solution suits the requirements of every company (e.g., availability and costs). In this paper, we propose an integrated model-experiment approach to evaluate DR solutions. We perform experiments on different real-world DR solutions and propose analytic models to evaluate them with respect to the key DR metrics: steady-state availability, recovery time objective (RTO), recovery point objective (RPO), downtime, and costs. The results reveal that DR solutions can significantly improve availability and minimise costs. A sensitivity analysis also identifies the parameters that most affect the RPO and RTO of the adopted DR solutions.
    Keywords: disaster recovery; disaster tolerance; cloud computing; Petri nets.
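
    The DR metrics compared in the paper relate to each other through simple arithmetic; a sketch with placeholder figures (not results from the paper). RTO bounds how long recovery may take, while RPO bounds how much data, measured in time, may be lost, so it is tied to the backup or replication interval.

    ```python
    def annual_downtime_hours(availability: float) -> float:
        """Expected downtime per year implied by a steady-state availability."""
        return (1.0 - availability) * 8760.0

    for label, A in [("two nines", 0.99), ("three nines", 0.999),
                     ("four nines", 0.9999)]:
        print(f"{label}: {annual_downtime_hours(A):7.2f} h/year of downtime")
    ```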

  • Big data inconsistencies and incompleteness: a literature review   Order a copy of this article
    by Olayinka Johnny, Marcello Trovati 
    Abstract: The analysis and integration of big data highlight issues in the identification and resolution of data inconsistencies and knowledge incompleteness. This paper presents an overview of data inconsistencies and a review of approaches to resolving them at various levels. Moreover, we discuss issues related to the incompleteness and stability of known knowledge over specific time periods, and their implications for the decision-making process. In addition, the use of the Bayesian network model for inconsistency resolution in data analysis and knowledge engineering is also considered.
    Keywords: big data; data inconsistencies; NLP; Bayesian networks.

  • Detection of fatigue on gait using accelerometer data and supervised machine learning   Order a copy of this article
    by Dante Arias-Torres, José Adan Hernández-Nolasco, Miguel A. Wister, Pablo Pancardo 
    Abstract: In this paper, we aim to detect fatigue from accelerometer data of human gait using traditional machine learning classifiers. First, we compare widely used classifiers to determine which detects fatigue with the fewest errors, observing that the best results are obtained with a Support Vector Machine (SVM). We then propose a new approach to the feature selection problem, to determine which features are most relevant for detecting fatigue in healthy people based on their gait patterns. Finally, we use the relevant gait features discovered in the previous step as input to the earlier classifiers, to assess their impact on the classification process. Our results indicate that, using only the gait features selected by our proposed method, it is possible to improve fatigue detection from human gait data. We conclude that a person with normal gait and a person with fatigued gait can be distinguished with high accuracy.
    Keywords: gait; fatigue; detection; accelerometer; supervised learning.
    DOI: 10.1504/IJGUC.2020.10028761
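
    A compact scikit-learn rendering of the classifier-evaluation step, assuming accelerometer gait features have already been extracted into a matrix X with fatigue labels y; the synthetic data below stand in for the real dataset.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 12))     # 12 gait features per window (synthetic)
    y = rng.integers(0, 2, size=200)   # 0 = normal gait, 1 = fatigued gait

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
    ```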
     
  • Towards an encompassing maturity model for the management of higher education institutions   Order a copy of this article
    by Rui Humberto Pereira, Joao Vidal Carvalho, Alvaro Rocha 
    Abstract: Maturity models have been adopted in organisations from different sectors of activity as guides and references for information systems (IS) management. In the educational field, these models have also been used to deal with the enormous complexity and demands of educational information systems management. Higher education institution (HEI) IS require varied expertise, in areas such as academic management and integration, pedagogy, support for research activities, and the digital technologies associated with education, to allow the accurate development and implementation of these systems. This article presents a research project that aims to develop a new, comprehensive maturity model for HEIs that helps them address the complexity of their IS, as a useful tool for the demanding role of managing these systems (and the institutions as well). Maturity models in the education area are therefore discussed, with special attention to those focused on IS and technologies (IST). These models are identified, along with the characteristics and gaps they present in the specific HEI area, justifying the need and the opportunity for the development of a new, comprehensive maturity model. Finally, the methodology for developing maturity models that will be adopted for the design of the new model (called HEIMM) is discussed, together with the underlying reasons for its choice. We are currently developing the HEIMM following the chosen methodology.
    Keywords: stages of growth; maturity models; higher education institutions; management.

  • An efficient greedy task scheduling algorithm for heterogeneous inter-dependent tasks on computational grids   Order a copy of this article
    by Srinivas D B, Sujay N. Hegde, M.A. Rajan, H.K. Krishnappa 
    Abstract: Computational grids are interconnected assortments of distributed and heterogeneous resources working together in a coordinated manner to meet the computational needs of different users. Grid services housed in data centres aim to achieve optimal grid utilisation so as to serve more customers and maximise profits; grid service users, in turn, always want to minimise the turnaround time of their applications. Generally, user applications are represented by precedence-constrained task graphs, and scheduling these tasks on computational grids is a key enabler of better turnaround time and higher throughput. Many researchers have focused on developing task scheduling algorithms for fully dependent or independent task graphs, based on genetic, game-theoretic, heuristic and bio-inspired approaches, among others. Designing efficient task scheduling algorithms remains difficult owing to the problem's complexity (NP-complete), so there is always scope for algorithms that achieve better turnaround time and grid utilisation for precedence-constrained task graphs. In this direction, we propose an efficient greedy task scheduling algorithm for precedence-constrained task graphs with varied dependencies (none, partial and full) on computational grids. The correctness of the proposed algorithm is verified by comparing it with a proposed backtracking brute-force scheduling algorithm. Its performance, in terms of turnaround time (TAT) and grid utilisation, is compared against the Hungarian, Partial Precedence Constrained Scheduler (P_PCS) and AND scheduling algorithms. Simulation results show that its performance is on a par with these algorithms, while its running time is better than the Hungarian algorithm's and equivalent to that of P_PCS.
    Keywords: partial dependency; task scheduling; turnaround time; grid utilisation; greedy approach; DAG.
    DOI: 10.1504/IJGUC.2020.10026377
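
    The greedy idea, walking the task graph in precedence order and placing each ready task on the machine that finishes it earliest, can be sketched briefly. This is generic list scheduling under assumed zero communication costs, not the authors' exact algorithm; the task graph and machine count are invented.

    ```python
    from collections import deque

    # DAG: task -> (processing cost, list of successors); two identical machines.
    tasks = {'A': (3, ['C']), 'B': (2, ['C', 'D']), 'C': (4, []), 'D': (1, [])}
    preds = {t: 0 for t in tasks}
    for _, (_, succs) in tasks.items():
        for s in succs:
            preds[s] += 1

    ready = deque(t for t, c in preds.items() if c == 0)
    machine_free = [0.0, 0.0]
    finish = {}
    while ready:
        t = ready.popleft()
        # Earliest start: all predecessors must have finished.
        est = max([finish[p] for p, (_, ss) in tasks.items() if t in ss] or [0.0])
        m = min(range(len(machine_free)), key=lambda i: max(machine_free[i], est))
        start = max(machine_free[m], est)
        finish[t] = start + tasks[t][0]
        machine_free[m] = finish[t]
        for s in tasks[t][1]:
            preds[s] -= 1
            if preds[s] == 0:
                ready.append(s)

    print(f"turnaround time: {max(finish.values())}")
    ```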
     
  • On a user authentication method to realise an authentication system using s-EMG   Order a copy of this article
    by Hisaaki Yamaba, Shotaro Usuzaki, Kayoko Takatsuka, Kentaro Aburada, Tetsuro Katayama, Mirang Park, Naonobu Okazaki 
    Abstract: In our present era, mobile devices such as tablet-type personal computers (PCs) and smartphones have penetrated deeply into our daily lives. This has resulted in situations where malicious strangers have found ways to spy on our touchscreen authentication operations and steal passwords that allow them to access our mobile devices and steal important information and data. In response, designers are working to develop new authentication methods that can prevent this sort of crime, which is commonly called shoulder-surfing. Herein, we report on a new user authentication method for mobile devices that uses surface electromyogram (s-EMG) signals rather than screen-touch operations. These s-EMG signals, which are generated by the electrical activity of muscle fibres during contraction, can be used to identify who generated the signals and which gestures were made. Our method uses a technique called pass-gesture, which refers to a series of hand gestures, to achieve s-EMG-based authentication. However, while human beings can recognise gestures from the s-EMG signals they produce by viewing their charts, it is necessary to introduce computer programs that can do this automatically. In this paper, we propose two methods that can be used to compare s-EMG signals and determine whether they were made by the same gesture. One uses support vector machines (SVMs), and the other uses dynamic time warping (DTW). We also introduce an appropriate method for selecting the validation data used to train SVMs using correlation coefficients and cross-correlation functions, and report on a series of experiments that were carried out to confirm the performance of those proposed methods. From the obtained experimental results, the effectiveness of the two proposed methods was confirmed.
    Keywords: user authentication; surface electromyogram; support vector machines; correlation coefficient; cross-correlation; dynamic time warping.
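
    Of the two signal-comparison methods reported, DTW is compact enough to sketch: the classic dynamic programme aligns two s-EMG sequences of possibly different lengths, and a small distance suggests the same gesture. The arrays below are invented placeholders for rectified/smoothed s-EMG envelopes.

    ```python
    def dtw_distance(a, b):
        """Classic O(len(a)*len(b)) dynamic time warping distance."""
        INF = float('inf')
        n, m = len(a), len(b)
        d = [[INF] * (m + 1) for _ in range(n + 1)]
        d[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                c = abs(a[i - 1] - b[j - 1])
                d[i][j] = c + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
        return d[n][m]

    enrolled = [0.1, 0.5, 0.9, 0.4, 0.1]       # stored pass-gesture template
    attempt  = [0.1, 0.4, 0.8, 0.9, 0.3, 0.1]  # freshly captured signal
    print(dtw_distance(enrolled, attempt))      # accept if below a tuned threshold
    ```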

  • An integrated framework of generating quizzes based on linked data and its application in the medical education field   Order a copy of this article
    by Wei Shi, Chenguang Ma, Hikaru Yamamura, Kosuke Kaneko, Yoshihiro Okada 
    Abstract: E-learning has developed greatly in recent years because of the popularity of smartphones and tablets. With the improvement of device performance and the data transmission capability of the internet, more and more complex e-learning materials are provided to learners. Quiz games are an e-learning format that can both test and train learners, and how to automatically generate good quizzes has been discussed by many researchers. In this paper, we propose a new framework that supports users in creating their own quiz games based on linked data. Compared with other methods, our framework effectively exploits the distinguishing feature of linked data, which stores both values and the linkages among them. The quizzes generated by our framework are easy to improve and extend, offer more variety, and support learning analytics of users' activities. To obtain better educational effects, we further extend our framework to support the generation of quizzes in 3D environments. In particular, we discuss how to apply our framework to medical education.
    Keywords: linked data; quiz; serious game; e-learning.
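
    A tiny rdflib illustration of the linked-data idea: because the triples carry both values and the links between them, a multiple-choice question can be assembled by querying the correct answers and sampling distractors from sibling entities. The miniature anatomy graph is invented for the example and is not the authors' dataset.

    ```python
    import random
    from rdflib import Graph

    TTL = """
    @prefix ex: <http://example.org/> .
    ex:heart   ex:partOf ex:circulatorySystem .
    ex:aorta   ex:partOf ex:circulatorySystem .
    ex:lung    ex:partOf ex:respiratorySystem .
    ex:trachea ex:partOf ex:respiratorySystem .
    """
    g = Graph()
    g.parse(data=TTL, format="turtle")

    q = """
    SELECT ?organ WHERE {
        ?organ <http://example.org/partOf> <http://example.org/respiratorySystem> .
    }
    """
    correct = {str(row.organ).rsplit('/', 1)[-1] for row in g.query(q)}
    subjects = {str(s).rsplit('/', 1)[-1] for s in g.subjects()}
    distractors = sorted(subjects - correct)

    answer = random.choice(sorted(correct))
    options = random.sample(distractors, 2) + [answer]
    random.shuffle(options)
    print("Which organ is part of the respiratory system?", options)
    ```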

  • Efficient variant transaction injection protocols and adaptive policy optimisation for decentralised ledger systems   Order a copy of this article
    by Bruno Andriamanalimanana, Chen-Fu Chiang, Sam Sengupta, Jorge Novillo, Ali Tekeoglu 
    Abstract: For any decentralised (or distributed) cryptocurrency system, it is important to provide users with an efficient network, and one of the major performance bottlenecks is latency. To address this issue, we provide four protocols that use resources according to network traffic to alleviate latency. Network lag is characterised by the backlog of unverified transactions; these are expected to pass the verification process quickly enough that the underlying decentralised system can offer an appealing alternative to current credit card systems. To facilitate the verification process, we discuss three variant injection protocols: Periodic Injection of Transaction via Evaluation Corridor (PITEC), Probabilistic Injection of Transactions (PIT), and Adaptive Semi-synchronous Transaction Injection (ASTI). The injection protocols vary according to the assumptions made about the network, such as its consumption rate; the ultimate goal is to inject unverified transactions dynamically into the verification pool to enhance network performance. The adaptive policy optimisation (APO) protocol aims at optimising a cryptocurrency system's own house policy (such as maximising or minimising some objective function concerning the system). We translate house-policy optimisation into a 0/1 knapsack problem, and the APO protocol is a fully polynomial-time approximation scheme for the decentralised ledger system.
    Keywords: blockchain; optimisation; decentralised ledger system architecture.
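
    The translation into a 0/1 knapsack is straightforward to sketch: each unverified transaction has a value (e.g. its fee) and a weight (e.g. its size or verification cost), and the verification pool has a fixed capacity. Below is the standard exact, pseudo-polynomial dynamic programme for that problem; the paper's APO uses an FPTAS variant of the same formulation, and the fees/sizes are invented.

    ```python
    def knapsack(values, weights, capacity):
        """0/1 knapsack: returns (best total value, chosen item indices)."""
        n = len(values)
        best = [[0] * (capacity + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for c in range(capacity + 1):
                best[i][c] = best[i - 1][c]
                if weights[i - 1] <= c:
                    take = best[i - 1][c - weights[i - 1]] + values[i - 1]
                    best[i][c] = max(best[i][c], take)
        chosen, c = [], capacity
        for i in range(n, 0, -1):          # trace back the chosen items
            if best[i][c] != best[i - 1][c]:
                chosen.append(i - 1)
                c -= weights[i - 1]
        return best[n][capacity], chosen[::-1]

    fees  = [5, 8, 3, 11]   # value of injecting each transaction (invented)
    sizes = [2, 4, 1, 5]    # verification cost of each transaction
    print(knapsack(fees, sizes, capacity=7))
    ```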

  • Usage of DTNs for low cost IoT application in smart cities: performance evaluation of spray and wait routing protocol and its enhanced versions   Order a copy of this article
    by Evjola Spaho 
    Abstract: Delay Tolerant Networks (DTNs) can be used as a low cost solution to implement different applications of the Internet of Things (IoT) in a smart city. An issue that needs to be solved when this approach is used is the efficient transmission of data. In this paper, we create a DTN for a smart city IoT application and enhance the Binary Spray and Wait (B-S&W) routing protocol to improve delivery probability and average delay. We evaluate and compare the B-S&W routing protocol and our two enhanced versions of spray and wait (S&W-V1 and S&W-V2). The simulation results show that the proposed versions S&W-V1 and S&W-V2 improve the delivery probability and average latency.
    Keywords: IoT; smart cities; delay tolerant networks; wireless sensor networks; routing protocols; spray and wait protocol.
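
    The Binary Spray and Wait mechanics that the two enhanced versions build on fit in a few lines: a node holding L copies of a message hands half to each encountered relay until one copy remains, then waits for direct delivery. The paper's S&W-V1 and S&W-V2 adjust this forwarding logic; the sketch shows only the baseline rule.

    ```python
    def on_encounter(my_copies: int, met_destination: bool) -> tuple[int, int]:
        """Return (copies kept, copies handed over) for one encounter."""
        if met_destination:
            return 0, my_copies       # deliver the message
        if my_copies > 1:             # spray phase: binary split
            handed = my_copies // 2
            return my_copies - handed, handed
        return 1, 0                   # wait phase: only direct delivery

    copies = 8
    while copies > 1:
        copies, given = on_encounter(copies, met_destination=False)
        print(f"kept {copies}, handed over {given}")
    ```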

  • Applying REECHD to non-uniformly distributed heterogeneous devices   Order a copy of this article
    by Diletta Cacciagrano, Flavio Corradini, Matteo Micheletti, Leonardo Mostarda 
    Abstract: Heterogeneous Wireless Sensor Networks (WSNs) are essential to implementing the IoT vision. They are a virtual skin that allows the gathering of environmental data, which can be used to enhance human life and build innovative applications such as smart environments, advanced healthcare systems and smart cities. WSNs are composed of nodes that can have different initial energy, transmission rates and hardware. Energy efficiency is one of the most important challenges when building WSNs, since nodes are battery powered. Clustering is an extensively used approach to extend heterogeneous WSN lifetime: it partitions the WSN nodes into a set of clusters, each containing a cluster head (CH) that collects data from its member nodes and forwards them to a centralised Base Station (BS). Although a plethora of clustering approaches have been proposed, CH selection is usually based on node residual energy alone; some approaches take the node transmission rate into account, but only for cluster formation. Rotating Energy Efficient Clustering for Heterogeneous Devices (REECHD) is our novel clustering algorithm that considers both node residual energy and node transmission rate for cluster head election. REECHD also introduces the intra-traffic rate limit (ITRL), which bounds the amount of intra-cluster traffic a CH can receive and can improve energy efficiency. In this work we apply REECHD to a WSN where devices are not uniformly distributed; more precisely, we consider WSNs where some sub-areas generate a higher volume of traffic. We show how the use of ITRL improves energy efficiency by adaptively generating different numbers of clusters in different WSN sub-areas. REECHD outperforms state-of-the-art clustering protocols by 220% when the first-node-death lifetime measure is considered, and our results show that it enhances the average network lifespan compared with state-of-the-art protocols.
    Keywords: energy efficiency; clustering; heterogeneous wireless sensor networks.

  • Online information bombardment! How does eWOM on social media lead to consumer purchase intentions?   Order a copy of this article
    by Muddasar Ghani Khwaja, Saqib Mahmood, Ahmad Jusoh 
    Abstract: Social media networks have increased electronic word of mouth (eWOM) conversations, which has created numerous opportunities for online businesses. People using social media networks have been able to discuss, evaluate and argue on different products and services with their friends, family and peers. This study aims to determine the influence of these social media conversations on the consumer purchase intentions. In this regard, Erkan and Evans' (2016) study was extended as an integrated framework of Theory of Reasoned Action (TRA) and Information Adoption Model (IAM). The proposed framework was estimated using Structural Equation Modelling (SEM) on AMOS. A survey method was deployed in which data were taken from 342 respondents who have been involved in online purchasing due to social media influence. The results attained affirmed the established theoretical foundations.
    Keywords: eWOM; IAM; TAM; purchase intentions.

  • Public key encryption with equality test for vehicular system based on near-ring   Order a copy of this article
    by Muthukumaran Venkatesan, Ezhilmaran Devarasaran 
    Abstract: In recent years, vehicles have been increasingly integrated with intelligent transport systems (ITS). This has led to the development of Vehicular Ad hoc Networks (VANETs), through which vehicles communicate with each other in an effective manner. Since VANETs assist in both vehicle-to-vehicle and vehicle-to-infrastructure communication, security and privacy have become major concerns. In this context, this work presents a public key encryption scheme with equality test based on the DLP with decomposition problems over a near-ring. The proposed method is highly secure and solves the problem of quantum-algorithm attacks on VANET systems. Further, the proposed system prevents chosen-ciphertext attacks by a type-I adversary and is indistinguishable in the random oracle model against a type-II adversary. The proposed scheme is highly secure, and its security analysis measures are stronger than those of existing techniques.
    Keywords: near-ring; Diffie-Hellman; vehicular ad hoc networks.

  • An old risk in the new era: SQL injection in cloud environment   Order a copy of this article
    by Fu Xiao, Wang Zhijian, Wang Meiling, Chen Ning, Zhu Yue, Zhang Lei, Wang Pei, Cao Xiaoning 
    Abstract: After haunting software engineers for more than 26 years since it was discovered, and classified in 2002, SQL injection still poses a most serious threat to the developers, maintainers and users of web applications, even in the brand new cloud era. The SaaS, PaaS and IaaS virtualisation technologies widely used in cloud computing seem to have failed to enhance security against such attacks. We study the mechanism and principles of the SQL injection attack in order to help information security personnel understand and manage such risks.
    Keywords: SQL injection; virtualisation; SaaS; PaaS; cloud computing.
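
    The mechanism is easy to demonstrate: string-concatenated SQL lets an attacker rewrite the query, while a parameterised query does not, whether the application runs on-premises or on a SaaS/PaaS stack. A self-contained sqlite3 illustration (the schema and values are invented):

    ```python
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    attacker_input = "nobody' OR '1'='1"

    # Vulnerable: attacker input is pasted into the SQL text.
    rows = db.execute(
        f"SELECT * FROM users WHERE name = '{attacker_input}'").fetchall()
    print("vulnerable query leaked:", rows)       # returns alice's row

    # Safe: the driver passes the value out-of-band as a bound parameter.
    rows = db.execute(
        "SELECT * FROM users WHERE name = ?", (attacker_input,)).fetchall()
    print("parameterised query returned:", rows)  # returns nothing
    ```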

  • A survey on resolving security issues in SaaS through software defined networks   Order a copy of this article
    by Gopal Krishna Shyam, Reddy SaiSindhuTheja 
    Abstract: The key ingredient in the success of Software-as-a-Service (SaaS) is client satisfaction. Organisations are adopting SaaS solutions that offer several advantages, mostly in terms of minimising cost and time. Sensitive data acquired from organisations are processed by SaaS applications and stored at the SaaS provider's end, and all data flowing over the network need to be secured to avoid leakage of sensitive data. However, on preliminary investigation, security is found to be the foremost issue hampering entrepreneurs' confidence in data deployment. This paper focuses on the different security issues in SaaS. Further, we analyse the security issues arising from the use of software defined networking (SDN) and elaborate on how SDN helps to improve security in SaaS; comparisons between current solutions and SDN solutions are also made. Hence, this work aims to give researchers new directions, specifically in the SaaS domain, for understanding security issues and planning possible countermeasures.
    Keywords: software-as-a-service; security issues; software defined networking.

  • Towards automation of fibre to the home planning with consideration of optic access network model   Order a copy of this article
    by Abid Naeem, Sheeraz Ahmed, Yousaf Ali, Nadeem Safwan, Zahoor Ali Khan 
    Abstract: To meet the increasing demand of future higher-bandwidth applications, fibre-based access is considered the best solution for delivering triple-play services (voice, data, video), so there is a great need to migrate from traditional copper-based networks to fibre-based access. Owing to rapid technological evolution, tough competition and budget limitations, service providers are struggling to provide a cost-effective solution that minimises their operational cost while maintaining excellent customer satisfaction. One factor that increases the overall cost of fibre to the home (FTTH) networks is unplanned deployment, which results in the use of extra components and resources. It is therefore imperative to find a suitable technique that streamlines the planning process and reduces the required time and deployment cost through optimisation; automation-based planning is one possible way to automate network design at the lowest probable cost. This research proposes a technique for migration from copper to a fibre-access network with a scalable and optimised Passive Optical Network (PON-FTTx) infrastructure, identifying the need for this technology in developing countries.
    Keywords: fibre to the home; passive optical networks; GPON; triple play.

  • FOGSYS: a system for the implementation of StaaS service in fog computing using embedded platforms   Order a copy of this article
    by José Dos Santos Machado, Danilo Souza Silva, Raphael Silva Fontes, Adauto Cavalcante Menezes, Edward David Moreno, Admilson De Ribamar Lima Ribeiro 
    Abstract: This work presents the concept of fog computing, its theoretical contextualisation and related works, and develops the FogSys system, whose main objective is simulating, receiving, validating and storing data from IoT devices to be transferred to cloud computing, with fog computing providing the StaaS (Storage as a Service) service. The results showed that implementing this service on embedded systems devices can be a good alternative for mitigating one of the problems currently faced by IoT devices, in this case data storage.
    Keywords: fog computing; cloud computing; IoT; embedded systems; StaaS.

  • BFO-based firefly algorithm for multi-objective optimal allocation of generation by integrating renewable energy sources   Order a copy of this article
    by Swaraj Banerjee, Dipu Sarkar 
    Abstract: With the rapid evolution and modernisation of alternative energy, the electric power system can be built from several Renewable Energy Resources (RES). Economic Load Dispatch (ELD), one of the most complicated optimisation problems, occupies a noteworthy place in power system operation and control. Traditionally, the aim is to identify the optimal combination of generation levels of all power-producing units so as to minimise the total fuel cost while satisfying the loads and the losses in the power transmission system. This paper presents a modern and proficient technique for solving the ELD problem. To resolve it, we amalgamate two meta-heuristic optimisation algorithms, namely the Bacterial Foraging Optimisation (BFO) algorithm and the Firefly Algorithm (FA), while incorporating both solar and wind renewable energies. The quality of the proposed methodology is tested and validated on the standard IEEE 3-, 6- and 10-unit systems by solving cases such as fuel cost minimisation, whole generation cost minimisation, emission minimisation, and, at the same time, system transmission loss. The attained results are compared with the MOPSO and hybrid BOA algorithms, and show that the proposed methodology gives an accurate solution for several categories of objective function.
    Keywords: economic load dispatch; solar energy; wind power; fuel and total generation cost; bacterial foraging optimisation; firefly optimisation algorithm.
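
    For context, the classical ELD objective that such methods extend is, for N thermal units with quadratic fuel-cost curves (standard textbook form; the paper adds emission terms and wind/solar units to this):

    ```latex
    \min_{P_1,\dots,P_N} \; \sum_{i=1}^{N} \left( a_i + b_i P_i + c_i P_i^2 \right)
    \quad \text{s.t.} \quad
    \sum_{i=1}^{N} P_i = P_D + P_L, \qquad
    P_i^{\min} \le P_i \le P_i^{\max}
    ```

    where P_D is the total demand and P_L the transmission loss.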

  • Cloud infrastructure planning: models considering an optimisation method, cost and performance requirements   Order a copy of this article
    by Jamilson Dantas, Rubens Matos, Carlos Melo, Paulo Maciel 
    Abstract: Over the years, many companies have employed cloud computing systems as the best choice of infrastructure to support their services while keeping high availability and performance levels. Assuring the availability of resources, considering the occurrence of failures and the desired performance metrics, is a significant challenge when planning a cloud computing infrastructure. The dynamic behaviour of virtualised resources requires special attention to the effective amount of capacity available to users, so the system can be sized correctly. Planning computational infrastructure is therefore an important activity for cloud infrastructure providers when analysing the cost-benefit trade-off among distinct architectures and deployment sizes. This paper proposes a methodology and models to support the planning and selection of a cloud infrastructure according to availability, COA, performance and cost requirements. An optimisation model based on the GRASP meta-heuristic is used to generate a cloud infrastructure with a number of physical machine and Virtual Machine (VM) configurations. Such a system is represented using an SPN model and closed-form equations to estimate cost and dependability metrics. The proposed method is applied in a case study of a video transcoding service hosted in a cloud environment, demonstrating the selection of cloud infrastructures with the best performance and dependability metrics, considering the VP9, VP8 and H264 video codecs as well as distinct VM setups. The results show the best configuration choice for a six-user profile, as well as the computation of the probability of finishing a set of video transcoding jobs by a given time.
    Keywords: cloud computing; performance; availability modelling; GRASP; COA; stochastic Petri nets; cost requirements.
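    Since the optimisation rests on the GRASP meta-heuristic, a minimal GRASP sketch for a simplified sizing decision is shown below; the VM flavours, capacities and prices are invented for illustration and are not the paper's case-study values.

      import random

      # Hypothetical VM flavours: (name, capacity in jobs/hour, cost/hour).
      FLAVOURS = [('small', 10, 0.05), ('medium', 25, 0.11), ('large', 55, 0.22)]
      REQUIRED = 200   # capacity demanded by the workload (assumed)

      def cost(sol):
          return sum(FLAVOURS[i][2] for i in sol)

      def capacity(sol):
          return sum(FLAVOURS[i][1] for i in sol)

      def construct(alpha=0.7):
          """Greedy randomised construction: repeatedly pick a flavour from a
          restricted candidate list (RCL) ranked by capacity-per-cost."""
          sol = []
          ranked = sorted(range(len(FLAVOURS)),
                          key=lambda i: FLAVOURS[i][1] / FLAVOURS[i][2],
                          reverse=True)
          rcl = ranked[:max(1, int(alpha * len(ranked)))]
          while capacity(sol) < REQUIRED:
              sol.append(random.choice(rcl))
          return sol

      def local_search(sol):
          """Swap-only neighbourhood: replace any VM with a cheaper flavour
          while the solution stays feasible."""
          improved = True
          while improved:
              improved = False
              for k in range(len(sol)):
                  for i in range(len(FLAVOURS)):
                      cand = sol[:k] + [i] + sol[k+1:]
                      if capacity(cand) >= REQUIRED and cost(cand) < cost(sol):
                          sol, improved = cand, True
          return sol

      best = min((local_search(construct()) for _ in range(50)), key=cost)
      print('VMs:', [FLAVOURS[i][0] for i in best], 'cost/hour:', round(cost(best), 2))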

  • Crowdsensing campaigns management in smart cities   Order a copy of this article
    by Carlos Roberto De Rolt, Julio Dias, Eliza Gomes, Marcelo Buosi 
    Abstract: The growth of cities is accompanied by a large number of problems in the urban environment that make effective management of public services a hard task. The use of information technology is one way to help solve urban problems, aiming at the development of smart cities. Crowdsensing is an important tool in this process, exploiting collective intelligence and organising the collaboration of large groups of people. This work focuses on the management of crowdsensing campaigns and contributes to the theoretical framework on the theme. Through a crowdsensing system, collaborative data collection and sensor monitoring campaigns were executed, which allowed learning about the management of crowdsensing campaigns, with results such as adjustments to the computational platform through the insertion of new types of campaign and the inclusion of feedback elements. The work reports the implementation and improvement of a crowdsensing system that was initially developed from theoretical knowledge and deployed at the University of Bologna, where students participated in campaigns managed through a computational platform entitled ParticipACT, resulting in several studies on the subject. Based on this pioneering experience and an international cooperation agreement, ParticipACT was transferred to LabGES, the Management Technologies Laboratory of UDESC (Santa Catarina State University). The campaigns carried out there in a monitored way enabled further learning about crowdsensing campaign management, resulting in significant contributions to the improvement of the system and proposed adjustments to the theoretical framework of campaign management models.
    Keywords: crowdsensing; smart cities; campaign management; ParticipACT.

  • A knowledge- and intelligence-based strategy for resource discovery on IaaS cloud systems   Order a copy of this article
    by Mohammad Samadi Gharajeh 
    Abstract: Resource discovery selects appropriate computing resources in cloud systems to accomplish users' jobs. This paper proposes a knowledge- and intelligence-based strategy for resource discovery in IaaS cloud systems, called KINRED. It uses a fuzzy system, a multi-criteria decision making (MCDM) controller and an artificial neural node to discover suitable resources under various changes in network metrics. The fuzzy system works on the hardware specifications of the computing resources, taking CPU speed, CPU cores, memory, disk, the number of virtual machines and usage rate as inputs and producing hardware type as output. The MCDM controller makes decisions based on users' requirements, taking CPU speed, CPU cores, memory and disk as inputs and producing job type as output. Finally, the artificial neural node selects the computing resource with the highest success rate from the outputs of both the fuzzy system and the MCDM controller. Simulation results show that the proposed strategy surpasses some existing related works in terms of the number of successful jobs, system throughput and service price. An illustrative multi-criteria scoring sketch follows this entry.
    Keywords: cloud computing; resource discovery; knowledge-based system; intelligent strategy; artificial neural node.
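    The abstract does not specify which MCDM technique the controller uses, so the sketch below illustrates the decision step with simple additive weighting (SAW) over the four criteria the abstract names; the resource values and weights are invented.

      # Candidate resources described by the four criteria named in the abstract:
      # CPU speed (GHz), CPU cores, memory (GB), disk (GB). Values are made up.
      resources = {
          'node-A': (2.4, 8, 32, 500),
          'node-B': (3.2, 4, 16, 250),
          'node-C': (2.0, 16, 64, 1000),
      }
      weights = (0.3, 0.3, 0.2, 0.2)   # assumed importance of each criterion

      def saw_rank(resources, weights):
          """Simple Additive Weighting: normalise each (benefit) criterion to
          [0, 1] and rank candidates by the weighted sum."""
          cols = list(zip(*resources.values()))
          maxima = [max(c) for c in cols]
          scores = {name: sum(w * v / m for w, v, m in zip(weights, vals, maxima))
                    for name, vals in resources.items()}
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      for name, score in saw_rank(resources, weights):
          print(f'{name}: {score:.3f}')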

  • Performance impact of the MVMM algorithm for virtual machine migration in data centres   Order a copy of this article
    by Nawel Kortas, Habib Youssef 
    Abstract: Virtual machine (VM) migration mechanisms and the design of data centres for cloud computing have a significant impact on energy cost and on the negotiated Service Level Agreement (SLA). Recent work focuses on using VM migration to achieve stable physical machine (PM) usage, with the objective of reducing energy consumption under stated SLA constraints. This paper presents and evaluates a new scheduling algorithm, called MVMM (Minimisation of Virtual Machine Migration), for VM migration within a data centre. MVMM uses a Dynamic Bayesian Network (DBN) to decide where and when a particular VM migrates: the DBN takes the data centre parameters as input and computes a score for each VM candidate for migration, reducing energy consumption by decreasing the number of future migrations according to the probabilistic dependencies between the data centre parameters. Our performance study further shows that the choice of data centre scheduling algorithm and network architecture in cloud computing significantly affects energy cost and application performance under resource and service demand variations. To evaluate the proposed algorithm, we integrated the MVMM scheduler into the GreenCloud simulator, taking into consideration key data centre characteristics such as the scheduling algorithm, the Data Centre Network (DCN) architecture, links, load and communication between VMs. The performance results show that using the MVMM scheduler within a three-tier debug architecture can reduce energy consumption by over 35% compared with five well-known schedulers, namely Round Robin, Random, Heros, Green and Dens.
    Keywords: MVMM algorithm; virtual machine; cloud computing; dynamic Bayesian networks; SLA; scheduler algorithm; data centre network architectures; VM migration.

  • SDSAM: a service-oriented approach for descriptive statistical analysis of multidimensional spatio-temporal big data   Order a copy of this article
    by Weilong Ding, Zhuofeng Zhao, Jie Zhou, Han Li 
    Abstract: With the expansion of the Internet of Things, spatio-temporal data has been widely generated and used, and the rise of big data in space and time has led to a flood of new applications characterised by statistical analysis. Such applications must deal with the large volume, diversity and frequent change of the data, as well as with its querying, integration and visualisation, which makes developing them a challenging and time-consuming task. To simplify the statistical analysis of spatio-temporal data, a service-oriented method is proposed in this paper. The method defines models of spatio-temporal data services and functional services; defines a process-based application of spatio-temporal big data statistics that invokes the basic data services and functional services; and proposes an implementation of the spatio-temporal data services and functional services on the Hadoop environment. The validity and applicability of the method are verified by a case study of expressway big data analysis.
    Keywords: spatio-temporal data; RESTful; web service.

  • Personality-aware recommendations: an Empirical study in education   Order a copy of this article
    by Yong Zheng, Archana Subramaniyan 
    Abstract: Recommender systems have been developed to deliver item recommendations tailored to user preferences, and the impact of human personality on user decision-making is now recognised. Several personality-aware recommendation models incorporate personality traits into the recommendations and have been demonstrated to improve recommendation quality in several domains, including movies, music and social networks; their impact in education, however, is still under investigation. In this paper, we discuss and summarise state-of-the-art personality-based collaborative filtering techniques and perform an empirical study on educational data. In particular, we collect personality traits in two ways: through a user survey and through a natural language processing system. We examine the effectiveness of the recommendation models using these subjective and inferred personality traits, respectively. Our experimental results reveal that students with different personality traits may make different choices, and that the inferred personality traits are more reliable and effective in the recommendation process. An illustrative code sketch follows this entry.
    Keywords: personality; recommender systems; education; empirical study.
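    As one hedged illustration of personality-based collaborative filtering, the sketch below blends a rating-based neighbour similarity with a cosine similarity over Big Five personality vectors. The blending weight, the toy data and the choice of cosine similarity are assumptions; the paper surveys several variants rather than a single fixed model.

      import math

      # Toy ratings and Big Five personality vectors per student (invented).
      ratings = {
          'u1': {'course-A': 5, 'course-B': 3},
          'u2': {'course-A': 4, 'course-B': 2, 'course-C': 5},
          'u3': {'course-B': 5, 'course-C': 1},
      }
      personality = {  # openness, conscientiousness, extraversion, agreeableness, neuroticism
          'u1': (0.8, 0.6, 0.3, 0.7, 0.2),
          'u2': (0.7, 0.5, 0.4, 0.8, 0.3),
          'u3': (0.2, 0.9, 0.8, 0.3, 0.6),
      }

      def cosine(a, b):
          dot = sum(x*y for x, y in zip(a, b))
          return dot / (math.sqrt(sum(x*x for x in a)) * math.sqrt(sum(y*y for y in b)))

      def rating_sim(u, v):
          common = set(ratings[u]) & set(ratings[v])
          if not common:
              return 0.0
          return cosine([ratings[u][i] for i in common],
                        [ratings[v][i] for i in common])

      def hybrid_sim(u, v, lam=0.5):
          """Blend rating-based and personality-based similarity."""
          return lam * rating_sim(u, v) + (1 - lam) * cosine(personality[u],
                                                             personality[v])

      def predict(u, item):
          neigh = [(hybrid_sim(u, v), ratings[v][item])
                   for v in ratings if v != u and item in ratings[v]]
          den = sum(abs(s) for s, _ in neigh)
          return sum(s * r for s, r in neigh) / den if den else None

      print('predicted rating of course-C for u1:', round(predict('u1', 'course-C'), 2))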

  • Big data analytics: an improved method for large-scale fabrics detection based on feature importance analysis from cascaded representation   Order a copy of this article
    by Minghu Wu, Song Cai, Chunyan Zeng, Zhifeng Wang, Nan Zhao, Li Zhu, Juan Wang 
    Abstract: To address the curse of dimensionality and data imbalance in large-scale fabric data, this paper proposes a classification method for fabric images based on feature fusion and feature selection. A representation-learning model using the transfer-learning idea is first established to extract semantic features from fabric images. The features generated by the different models are then cascaded so that they complement each other. Furthermore, extremely randomised trees (Extra-Trees) are used to analyse the importance of the cascaded representation and to reduce the computation time of the classification model under big, high-dimensional data. Finally, a multilayer perceptron classifies the selected features. Experimental results demonstrate that the method detects fabrics with high accuracy, and feature-importance analysis effectively accelerates detection when the model processes big data. An illustrative scikit-learn sketch follows this entry.
    Keywords: big data; representation learning; feature fusion; feature selection.
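    The pipeline the abstract describes (feature cascade, Extra-Trees importance-based selection, MLP classification) can be sketched with scikit-learn as below; synthetic data stands in for the cascaded fabric-image features, and all hyperparameters are illustrative.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import ExtraTreesClassifier
      from sklearn.feature_selection import SelectFromModel
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      # Synthetic stand-in for two blocks of deep features that are cascaded
      # (concatenated); real inputs would come from the backbone models.
      X1, y = make_classification(n_samples=1000, n_features=256,
                                  n_informative=40, random_state=0)
      X2, _ = make_classification(n_samples=1000, n_features=256,
                                  n_informative=40, random_state=1)
      X = np.hstack([X1, X2])          # feature cascade

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      # Extra-Trees ranks the cascaded features by importance ...
      extra = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      selector = SelectFromModel(extra, prefit=True, threshold='median')
      X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

      # ... and a multilayer perceptron classifies the selected subset.
      mlp = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500,
                          random_state=0).fit(X_tr_sel, y_tr)
      print('kept features:', X_tr_sel.shape[1], 'of', X.shape[1])
      print('test accuracy:', round(mlp.score(X_te_sel, y_te), 3))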

  • Research on design method of manoeuvring targets tracking generator based on LabVIEW programming   Order a copy of this article
    by Caiyun Gao, Shiqiang Wang, Huiyong Zeng, Juan Bai, Binfeng Zong, Jiliang Cai 
    Abstract: Aiming at the issues of poor visual display and non-real-time status output when describing a manoeuvring target track with data, a new design method for a target track generator is proposed based on the Laboratory Virtual Instrument Engineering Workbench (LabVIEW). Firstly, the motion model of the manoeuvring target is built. Secondly, the design requirements of the track generator are discussed. Finally, target tracks for multiple targets and multiple manoeuvring models are produced through visual panel and code design in LabVIEW. Simulation results indicate that the proposed method can output target status in real time at different data rates while directly displaying the manoeuvring tracks of multiple targets with good visibility, and that the generated track parameters are accurate and the data valid.
    Keywords: LabVIEW; virtual instrument; target track simulation; situation display of radar.

  • Finite state transducer based light-weight cryptosystem for data confidentiality in cloud computing   Order a copy of this article
    by Basappa Kodada, Demian Antony D'Mello 
    Abstract: Cloud computing is derived from parallel, cluster, grid and distributed computing and is becoming one of the most advanced and fastest-growing technologies. With the rapid growth of internet technology and its speed, the number of cloud computing users is growing enormously and huge amounts of data are being generated. With this growth of data in the cloud, the security and safety of data, such as data confidentiality and privacy, become paramount issues. This paper proposes a new type of cryptosystem based on a finite state transducer to provide data confidentiality for cloud computing. The paper presents the protocol communication process and gives an insight into the security analysis of the proposed scheme, which the proof-of-concept results indicate to be stronger and more secure than existing schemes. A toy transducer-cipher sketch follows this entry.
    Keywords: security; confidentiality; encryption; decryption; automata; finite state machine; finite state transducer; cryptography; data safety.
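    For intuition only, the sketch below shows a toy Mealy-machine (finite state transducer) cipher in which a keyed state evolves with the plaintext. It is not the paper's construction and offers no real security guarantees; the key and transition function are invented.

      # Toy finite-state-transducer cipher: output = input XOR state, with the
      # state advanced by a keyed transition over the plaintext.
      KEY = 0x5A  # shared secret seeding the transducer (assumption)

      def step(state, byte):
          """Keyed state transition; both sides must compute it identically."""
          return (state * 31 + byte + KEY) & 0xFF

      def encrypt(plaintext: bytes) -> bytes:
          state, out = KEY, bytearray()
          for p in plaintext:
              out.append(p ^ state)      # output function of the transducer
              state = step(state, p)     # next state depends on the plaintext
          return bytes(out)

      def decrypt(ciphertext: bytes) -> bytes:
          state, out = KEY, bytearray()
          for c in ciphertext:
              p = c ^ state              # invert the output function
              out.append(p)
              state = step(state, p)     # re-derive the same state sequence
          return bytes(out)

      msg = b'confidential cloud record'
      assert decrypt(encrypt(msg)) == msg
      print(encrypt(msg).hex())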

  • FastGarble: an optimised garbled circuit construction framework   Order a copy of this article
    by A. Anasuya Innocent, G. Prakash, K. Sangeeta 
    Abstract: In the emerging field of cryptography, secure computation can be used to solve a number of distributed computing applications without loss of privacy of sensitive/private data. The applications can be as simple as coin tossing or agreement between parties, or as complex as e-auctions, e-voting or private data retrieval for carrying out research on sensitive data, private editing on the cloud, etc., without the help of a trusted third party. Confidentiality can be achieved by conventional cryptographic techniques, but these require the data to be available for processing; working on sensitive data calls for another technique, and that is where secure computation comes in. Any secure computation protocol starts with the construction of a garbled circuit of the underlying functionality, and the efficiency of the protocol and that of the circuit construction are directly proportional to each other. Hence, as the complexity of an application increases, the circuit size increases, resulting in poor protocol efficiency, which in turn prevents secure computation from finding use in day-to-day applications. In this paper, an optimised garbled circuit construction framework named FastGarble is proposed, which is shown to improve the time complexity of garbled circuit construction.
    Keywords: secure computation; garbled circuit; performance; secure two-party computation; time complexity.

  • Fine-grained access control of files stored in cloud storage with traceable and revocable multi-authority CP-ABE scheme   Order a copy of this article
    by Bharati Mishra, Debasish Jena, Srikanta Patnaik 
    Abstract: Cloud computing is gaining popularity among enterprises, universities, government departments and end-users, and geographically distributed users can collaborate by sharing files through the cloud. Ciphertext-Policy Attribute-Based Encryption (CP-ABE) provides an efficient technique for the data owner to enforce fine-grained access control. Single-authority CP-ABE schemes create a bottleneck for enterprise applications, whereas multi-authority CP-ABE schemes distribute attribute registration and key distribution across multiple attribute authorities. Existing multi-authority schemes are designed with Type I pairings and are vulnerable to several reported attacks. This paper proposes a multi-authority CP-ABE scheme that supports attribute and policy revocation and is designed with Type III pairings, which offer higher security, faster group operations and smaller element storage. The proposed scheme has been implemented using the Charm framework, which uses the PBC library, with the OpenStack cloud platform providing the computing and storage services. The scheme is proved to be collusion-resistant, traceable and revocable, and the AVISPA tool has been used to verify that it is secure against replay and man-in-the-middle attacks.
    Keywords: cloud storage; access control; CP-ABE; attribute revocation; blockchain; multi-authority.

  • On generating Pareto optimal set in bi-objective reliable network topology design   Order a copy of this article
    by Basima Elshqeirat, Ahmad Aloqaily, Sieteng Soh, Kwan-Wu Chin, Amitava Datta 
    Abstract: This paper considers the following NP-hard network topology design (NTD) problem, called NTD-CB/R: given (i) the location of network nodes, (ii) the connecting links, and (iii) each link's reliability, cost and bandwidth, design a topology with minimum cost (C) and maximum bandwidth (B) subject to a pre-defined reliability (R) constraint. A key challenge in this bi-objective optimisation problem is simultaneously minimising C while maximising B. Existing solutions aim to obtain a single topology with the largest bandwidth-cost ratio; this paper instead aims to generate the best set of non-dominated feasible topologies, also known as the Pareto Optimal Set (POS). It formally defines a dynamic programming (DP) formulation for NTD-CB/R and proposes two alternative Lagrange relaxations to compute a weight for each link from its reliability, bandwidth and cost. The paper further proposes a DP approach, called DPCB/R-LP, to generate the POS with maximum weight, and describes a heuristic that enumerates only k of the n possible paths (k <= n) to reduce the computational complexity. Extensive simulations on hundreds of networks of various sizes containing up to 2^99 paths show that DPCB/R-LP can generate 70.4% of the optimal POS while using only up to 984 paths and 27.06 CPU seconds. With respect to the widely used overall-Pareto-spread (OS) metric, DPCB/R-LP produces 94.4% of POS with OS = 1, measured against the optimal POS. Finally, every generated POS contains a topology with the largest bandwidth-cost ratio, significantly above the 88% achieved by existing methods. An illustrative non-dominated filtering sketch follows this entry.
    Keywords: bi-objective optimisation; dynamic programming; Lagrange relaxation; Pareto optimal set; network reliability; topology design.
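    The core of generating a POS is the non-dominated filter over (cost, bandwidth) pairs, sketched below; reliability screening and the DP/Lagrangian machinery of DPCB/R-LP are omitted, and the candidate topologies are invented.

      # Minimal non-dominated (Pareto) filter for candidate topologies described
      # by (cost, bandwidth); reliability feasibility is assumed already checked.
      def pareto_set(topologies):
          """Keep topologies not dominated by any other: a dominates b when a is
          no worse in both objectives (cost <=, bandwidth >=) and strictly
          better in at least one."""
          front = []
          for a in topologies:
              dominated = any(
                  b['cost'] <= a['cost'] and b['bw'] >= a['bw'] and
                  (b['cost'] < a['cost'] or b['bw'] > a['bw'])
                  for b in topologies)
              if not dominated:
                  front.append(a)
          return front

      candidates = [
          {'name': 'T1', 'cost': 10, 'bw': 100},
          {'name': 'T2', 'cost': 12, 'bw': 100},   # dominated by T1
          {'name': 'T3', 'cost': 14, 'bw': 160},
          {'name': 'T4', 'cost': 9,  'bw': 70},
      ]
      print([t['name'] for t in pareto_set(candidates)])   # ['T1', 'T3', 'T4']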

  • Dynamic quality of service for different flow types in SDN networks   Order a copy of this article
    by Alessandro Lima, Eduardo Alchieri 
    Abstract: The structure of the internet makes it difficult to provide Quality of Service (QoS) for the different flows generated by many different applications, ranging from an e-commerce application with light demand to real-time applications such as VoIP or videoconferencing, which make heavy use of the network. One challenge is the lack of technical knowledge and the difficulty of configuring network equipment built on many proprietary technologies. Software Defined Networks (SDN) are a good alternative for mitigating these problems: by separating the control plane from the data plane, network administrators can use network resources efficiently and more easily provide new services and applications tailored to the needs of the network. However, SDN technology itself still lacks solid QoS mechanisms, especially for flows classified as elephant (large data volume), cheetah (high throughput) and alpha (large bursts). To fill this gap, this work proposes a new SDN service, called QoS-Flux, which receives network information from the data plane through the OpenFlow protocol and applies different QoS algorithms and filters to deal dynamically with different flows. Experimental results show that QoS-Flux significantly improves the QoS metrics of delay, jitter, packet loss and bandwidth in an SDN network. An illustrative flow-classification sketch follows this entry.
    Keywords: SDN; quality of service; elephant flow; alpha flow; cheetah flow.
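    A minimal sketch of threshold-based classification of the three flow types the paper targets is given below; the thresholds and suggested actions are illustrative assumptions, not QoS-Flux's actual rules.

      # Classify flows as elephant / cheetah / alpha from simple statistics.
      ELEPHANT_BYTES = 10 * 1024 * 1024   # large total volume (B), assumed
      CHEETAH_RATE   = 5 * 1024 * 1024    # sustained high throughput (B/s), assumed
      ALPHA_BURST    = 1 * 1024 * 1024    # large burst in a short window (B), assumed

      def classify(flow):
          """flow: dict with total_bytes, duration (s) and max_burst (B)."""
          rate = flow['total_bytes'] / max(flow['duration'], 1e-6)
          if flow['max_burst'] >= ALPHA_BURST:
              return 'alpha'       # bursty flow -> e.g. a dedicated queue
          if rate >= CHEETAH_RATE:
              return 'cheetah'     # high-rate flow -> e.g. rate limiting
          if flow['total_bytes'] >= ELEPHANT_BYTES:
              return 'elephant'    # high-volume flow -> e.g. re-routing
          return 'mice'

      print(classify({'total_bytes': 64 * 1024**2, 'duration': 120,
                      'max_burst': 100e3}))   # -> 'elephant'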

  • Optimal controller design for an islanded microgrid during load change   Order a copy of this article
    by Bineeta Soreng, Raseswari Pradhan 
    Abstract: Voltage and frequency vary much more in islanded mode than in grid-connected mode. This paper focuses on developing a technique for the optimal regulation of voltage and frequency in a Microgrid (MG); the studied microgrid is a Photovoltaic (PV) based MG (PVMG). The proposed technique is a Sliding Mode Controller (SMC) optimised using the Whale Optimisation Algorithm (WOA), named SMC-WOA. Its effectiveness is validated through the dynamic response of the studied PVMG during changes of operating mode and load. Two loops, a voltage loop and a current loop, control the studied PVMG, and a droop controller handles power sharing. To establish the efficacy of SMC-WOA, the dynamic responses of the system are compared with those of the Grey Wolf Optimisation based SMC (SMC-GWO) and the Sine Cosine Algorithm based SMC (SMC-SCA). Analysis of the simulation results shows that SMC-WOA yields better results than the SMC-GWO and SMC-SCA techniques in terms of a faster solution with minimum voltage and frequency overshoot, along with minimum output current and total harmonic distortion. The proposed technique is also validated by comparison with a PI controller optimised with the same WOA, GWO and SCA.
    Keywords: microgrids; PI controller; sliding mode; whales optimisation algorithm; grey wolf optimisation; sine cosine algorithm; total harmonic distortion.

  • Research on hardware-in-the-loop simulation of single point suspension system based on fuzzy PID   Order a copy of this article
    by Jie Yang, Tao Gao, Shengli Yuan, Heng Shi, Zhenli Zhang 
    Abstract: The stability control of maglev trains is one of the core problems in maglev train technology, and achieving it is of great scientific value in the field of magnetic levitation. On this basis, the suspension control strategy of a single-point maglev system is studied and the control strategy is verified by experiments on a Magnetic Levitation Ball System (MLBS). Considering the multi-group independent control structure of the maglev train suspension frame, a suspension-response deviation compensator is introduced to establish the coupling relationship between the subsystems and improve the overall stability control of the suspension frame. Finally, the effect of the single-point suspension control system is discussed and the cooperative control of the suspension frame is simulated in MATLAB. Simulation and analysis show that each subsystem has good anti-jamming ability and that the suspension system achieves balanced, stable control under different interference signals, providing a reference for further study of maglev train suspension control. An illustrative fuzzy-PID sketch follows this entry.
    Keywords: magnetic suspension; fuzzy PID; maglev train bogie; composite control.
    DOI: 10.1504/IJGUC.2020.10028885
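    A compact gain-scheduling approximation of a fuzzy PID controller is sketched below: triangular memberships of the levitation error adjust the proportional and derivative gains. The membership shapes, rule weights and gain values are invented for illustration and are not tuned to any maglev plant.

      def memberships(e, span=0.002):
          """Triangular fuzzification of |error| into small/medium/large degrees."""
          x = min(abs(e) / span, 1.0)
          small = max(1 - 2*x, 0.0)
          large = max(2*x - 1, 0.0)
          medium = 1.0 - small - large
          return small, medium, large

      class FuzzyPID:
          def __init__(self, kp=8.0, ki=2.0, kd=1.5, dt=0.001):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral, self.prev_e = 0.0, 0.0

          def update(self, e):
              s, m, l = memberships(e)
              # Rule table collapsed into per-label gain multipliers (assumed):
              kp = self.kp * (0.8*s + 1.0*m + 1.4*l)   # push harder on big errors
              kd = self.kd * (1.2*s + 1.0*m + 0.7*l)   # damp more near the setpoint
              self.integral += e * self.dt
              de = (e - self.prev_e) / self.dt
              self.prev_e = e
              return kp*e + self.ki*self.integral + kd*de

      pid = FuzzyPID()
      for e in (0.002, 0.0012, 0.0005, 0.0001):   # decaying gap error (m)
          print(round(pid.update(e), 4))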
     
  • A study on fog computing architectures and energy consumption approaches regarding QoS requirements   Order a copy of this article
    by Amel Ksentini, Maha Jebalia, Sami Tabbane 
    Abstract: In the Internet of Things (IoT) paradigm, data is gathered for processing from local environments, machines, vehicles, etc. Cloud computing provides suitable hardware and software for data processing, so integrating IoT with cloud capabilities benefits many applications. However, challenges persist for some use-cases, such as delay-sensitive services, owing to the huge amount of information collected by IoT devices that must be processed by cloud servers. Fog computing is expected to overcome several limits and challenges of cloud computing concerning Quality of Service (QoS) requirements such as latency, bandwidth and location awareness. Nevertheless, researchers still have to deal with several issues, notably at the architectural level and regarding energy consumption. In this paper, we survey fog system architectures and energy consumption approaches in the literature, considering QoS requirements in the synthesis. A system model is then introduced, with a potential solution for QoS management in a fog computing environment.
    Keywords: fog computing; IoT; architecture; QoS; energy consumption; delay-sensitivity; real-time processing; application-oriented architecture; management-oriented architecture; resource management; cloud computing.
    DOI: 10.1504/IJGUC.2020.10028886
     
  • Success factor analysis for cloud services: a comparative study on software as a service   Order a copy of this article
    by Dietmar Nedbal, Mark Stieninger 
    Abstract: The emergence of cloud computing has been triggering fundamental changes in the information technology landscape for years. The proliferation of cloud services gave rise to novel types of business model, whose complexity results from numerous factors critical to successful adoption. When it comes to improvement activities by cloud service providers, this multifacetedness makes it hard to figure out where to start, and the urgency of the actions to be taken varies across settings. We therefore propose success factor analysis as an approach to prioritise improvement activities according to their urgency, indicated by the gap between the priority and the actual performance of a particular factor. Results show that the factors with the highest overall gaps are security and safety, trust, and costs. Overall, the strengths of cloud services are seen in technical features leading to good ease of use, positively perceived usefulness and broad availability.
    Keywords: success factor analysis; cloud computing; software as a service; SaaS; cloud services; survey.
    DOI: 10.1504/IJGUC.2020.10028887
     
  • Implementing the software defined management framework   Order a copy of this article
    by Maxwell Eduardo Monteiro, Rodolfo S. Villaça, Káio Simonassi, Renan Tavares, Cássio Reginato 
    Abstract: Software Defined Infrastructure (SDI) has become a relevant topic for the computing and communication industry. Despite this huge technological movement, network and systems management has been disregarded as one of the main themes in this ecosystem, and SDI has been managed by semi-software-defined management solutions. To reduce this gap, this paper presents SDMan, a Software Defined Management framework. SDMan's proof of concept uses the OpenStack cloud platform and demonstrates the feasibility of the proposed solution.
    Keywords: software defined infrastructure; software defined networks; network management; cloud computing.
    DOI: 10.1504/IJGUC.2020.10028895
     
  • Policies and mechanisms for enhancing the resource management in cloud computing: a performance perspective   Order a copy of this article
    by Mitali Bansal, Sanjay Kumar Malik, Sanjay Kumar Dhurandher, Isaac Woungang 
    Abstract: Resource management is among the critical challenges in cloud computing, since it affects performance, cost and functionality. In this paper, a survey of policies and mechanisms for enhancing resource management in cloud computing is presented. From a performance perspective, several resource management schemes are investigated and qualitatively compared in terms of parameters such as performance, response time, scalability, pricing, throughput and accuracy, providing a fundamental knowledge base for researchers in the cloud computing area. We also classify cloud computing techniques according to policies such as capacity allocation, admission control, load balancing and energy optimisation, and further divide the described techniques according to parameters such as time span (low, medium or high), reliability, performance and availability, to name a few.
    Keywords: cloud computing; resource management; load balancing; policies and mechanisms; performance perspective.
    DOI: 10.1504/IJGUC.2020.10028888
     
  • A hybrid collaborative filtering recommendation algorithm: integrating content information and matrix factorisation   Order a copy of this article
    by Jing Wang, Arun Kumar Sangaiah, Wei Liu 
    Abstract: Matrix factorisation is one of the most popular techniques in recommendation systems. However, it still suffers from the cold-start problem and requires complicated computation. In this paper, we present a hybrid recommendation algorithm that integrates user and item content information with matrix factorisation. First, based on user or item content information, the bias of each user or item is evaluated in advance. Incorporating the user and item biases into the matrix factorisation model yields the final prediction model. Finally, a momentum stochastic gradient descent method is used to optimise the remaining parameters. Experimental results on a real data set show the best performance of our algorithm in terms of MAE and RMSE when compared with other classical matrix factorisation recommendation algorithms. An illustrative code sketch follows this entry.
    Keywords: recommender system; collaborative filtering; matrix factorisation; momentum stochastic gradient descent.
    DOI: 10.1504/IJGUC.2020.10028889
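    The model the abstract describes (global mean plus user/item biases plus latent factors, trained with momentum SGD) can be sketched as below. The content-derived biases are represented by stand-in constants held fixed, and the toy ratings and hyperparameters are illustrative assumptions.

      import random

      # Minimal biased matrix factorisation with momentum SGD.
      ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 4.0)]
      n_users, n_items, k = 3, 3, 4
      mu = sum(r for _, _, r in ratings) / len(ratings)
      b_u = [0.1, 0.0, -0.1]   # stand-ins for content-derived user biases
      b_i = [0.2, 0.0, -0.2]   # stand-ins for content-derived item biases

      random.seed(0)
      P = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
      Q = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
      vP = [[0.0]*k for _ in range(n_users)]   # momentum buffers
      vQ = [[0.0]*k for _ in range(n_items)]

      lr, beta, reg = 0.01, 0.9, 0.02
      for epoch in range(200):
          for u, i, r in ratings:
              pred = mu + b_u[u] + b_i[i] + sum(P[u][f]*Q[i][f] for f in range(k))
              err = r - pred
              for f in range(k):
                  gP = -err*Q[i][f] + reg*P[u][f]   # gradients of squared error
                  gQ = -err*P[u][f] + reg*Q[i][f]
                  vP[u][f] = beta*vP[u][f] - lr*gP  # momentum update
                  vQ[i][f] = beta*vQ[i][f] - lr*gQ
                  P[u][f] += vP[u][f]
                  Q[i][f] += vQ[i][f]

      u, i = 2, 0   # predict an unseen user-item pair
      print(round(mu + b_u[u] + b_i[i] + sum(P[u][f]*Q[i][f] for f in range(k)), 2))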
     
  • Classification of cognitive algorithms for managing services used in cloud computing   Order a copy of this article
    by Lidia Ogiela, Makoto Takizawa, Urszula Ogiela 
    Abstract: This paper presents a new idea of cognitive systems dedicated to cloud computing. In particular, it presents the background to, and a description of, service management procedures and algorithms dedicated to cloud computing and infrastructure. The cognitive methods are based on semantic description and interpretation procedures, and the described idea is dedicated to secure service management, especially at the cloud and fog stages. The proposed algorithms for cognitive service management are presented and described through their semantic aspects. Semantic analysis is used to extract the meaning of the analysed data, and meaning can likewise be analysed in management processes; such analyses can be applied in different areas. The paper presents service management protocols in the cloud and in the fog, shows how the management procedures at both stages can be realised through the application of secure methods and protocols, and presents sharing techniques for data security in cloud computing.
    Keywords: cognitive algorithms; fog and cloud computing; service management protocols; cognitive data security.
    DOI: 10.1504/IJGUC.2020.10028890
     
  • Model for generation of social network considering human mobility and interaction   Order a copy of this article
    by Naoto Fukae, Hiroyoshi Miwa, Akihiro Fujihara 
    Abstract: The structure of an actual network in the real world often has the scale-free property, whereby the degree distribution follows a power law. A generation mechanism for a human-relations network must consider human mobility and interactions because, in general, a person moves around, meets other people and forms relations stochastically. However, few models so far consider human mobility. In this paper, we propose a mathematical model that generates a human-relations network, as fundamental research on usage models for utility computing. We show by numerical experiments that a network generated by the proposed model has the scale-free property, that its clustering coefficient follows a power law, and that its average distance is small. This means the proposed model can explain the mechanism that generates actual human-relations networks. An illustrative code sketch follows this entry.
    Keywords: network; scale-free; human mobility; interaction; homesick Lévy walk; network generation model.
    DOI: 10.1504/IJGUC.2020.10028891
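    In the spirit of the model (the keywords mention a homesick Lévy walk), the toy generator below moves agents with heavy-tailed steps, returns them home with some probability, and creates an edge whenever two agents come within an interaction radius. All parameter values are assumptions for illustration, not the paper's settings.

      import math, random

      N, STEPS, RADIUS, HOME_P, SIZE = 50, 600, 2.0, 0.2, 50.0

      random.seed(1)
      home = [(random.uniform(0, SIZE), random.uniform(0, SIZE)) for _ in range(N)]
      pos = [list(h) for h in home]
      edges = set()

      def levy_step(alpha=1.5):
          """Heavy-tailed step length via the inverse CDF of a Pareto law."""
          return random.random() ** (-1.0 / alpha)

      for _ in range(STEPS):
          for a in range(N):
              if random.random() < HOME_P:
                  pos[a] = list(home[a])            # homesick return
              else:
                  ang, r = random.uniform(0, 2*math.pi), levy_step()
                  pos[a][0] = min(max(pos[a][0] + r*math.cos(ang), 0), SIZE)
                  pos[a][1] = min(max(pos[a][1] + r*math.sin(ang), 0), SIZE)
          for a in range(N):                        # meetings create relations
              for b in range(a+1, N):
                  if math.dist(pos[a], pos[b]) <= RADIUS:
                      edges.add((a, b))

      deg = [0] * N
      for a, b in edges:
          deg[a] += 1
          deg[b] += 1
      print('edges:', len(edges), 'max degree:', max(deg))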
     
  • A proposal for a healthcare environment with a real-time approach   Order a copy of this article
    by Eliza H.A. Gomes, Mario A.R. Dantas, Patricia D.M. Plentz 
    Abstract: The increased use of IoT has contributed to the popularisation of environments that monitor the daily activities and health of the elderly, children or people with disabilities. The requirements of these environments, such as low latency and rapid response, make fog computing a natural fit for healthcare, since providing low latency is one of its main advantages. We therefore propose a hardware and software infrastructure, based on the fog computing paradigm, capable of storing, processing and presenting monitoring data in real time. Additionally, we propose the structuring of sensors for the implementation of a simulated healthcare environment, as well as the processing logic for presenting results on the health of the user.
    Keywords: IIoT platform; time constraint; fog computing; healthcare application.
    DOI: 10.1504/IJGUC.2020.10028892
     
  • Design and implementation of broadcasting system for selective contents considering interruption time   Order a copy of this article
    by Takuro Fujita, Yusuke Gotoh 
    Abstract: Owing to the recent popularisation of digital broadcasting, selective contents broadcasting has attracted much attention. Although the server delivers contents based on users' preferences, users may experience interruptions while playing their selected contents. To reduce this interruption time, many scheduling methods have been proposed; however, these methods have been evaluated only in simulation environments, and evaluation in network environments is needed. In this paper, we propose a broadcasting system for selective contents and evaluate its effectiveness in network environments.
    Keywords: broadcasting; interruption time; scheduling; selective contents; waiting time.
    DOI: 10.1504/IJGUC.2020.10028893
     
  • An algorithm to optimise the energy distribution of data centre electrical infrastructures   Order a copy of this article
    by Joao Ferreira, Gustavo Callou, Paulo Maciel, Dietmar Tutsch 
    Abstract: Given the overriding need to store all the data produced by social networks, and the rising energy consumption of e-commerce and cloud computing, energy use demands critical attention; alongside this surge in electricity use there is a long-held hope for reducing both financial and environmental costs. To optimise the energy distribution infrastructures of data centres, this paper presents the power-balancing algorithm PLDA-D, which builds on the Bellman-Ford and Ford-Fulkerson flow algorithms to analyse energy flow models (EFMs). EFM computation covers the power-efficiency, sustainability and cost metrics of data centre infrastructures. A case study analysing four power infrastructures demonstrates practical applications of the power-reduction strategy, with the results showing a 3.8% reduction in sustainability impact and operational costs.
    Keywords: sustainability; energy flow model; dependability; optimisation; data centre power architectures.
    DOI: 10.1504/IJGUC.2020.10028894
     

Special Issue on: ICTSCI-2019 Swarm Intelligence Techniques for Optimum Utilisation of Resources in Grid and Utility Computing

  • A vector-based watermarking scheme for 3D models using block rearrangements   Order a copy of this article
    by Modigari Narendra, M.L. Valarmathi, L. Jani Anbarasi, L. Prasanna 
    Abstract: Watermarking schemes help to protect the ownership, authenticity and copyright of electronic data. An efficient, computationally secure wavelet-based watermarking scheme is proposed for 3D OBJ models. Digital watermarking enhances the copyright protection of 3D models and counters copyright and integrity violations. The watermark is embedded into the higher bands of the 3D model vertices, yielding high invisibility and robustness against various attacks. Embedding the watermark in the 3D models was tested to provide high robustness and imperceptibility, and tampering of the 3D OBJ models was performed to locate the tampered areas of the watermarks. Different kinds of geometric and non-geometric attack are analysed, demonstrating the robustness of the watermarking scheme, and the experimental results show that the scheme supports reliable authentication of 3D models.
    Keywords: 3D mesh; mesh watermarking; block rearrangements.

  • Towards self-optimisation in fog computing environments   Order a copy of this article
    by Danilo Silva, Jose Machado, Admilson Ribamar, Edward Moreno 
    Abstract: In recent years, the number of smart devices (e.g. smartphones, sensors, autonomous vehicles) has grown exponentially. The computational demand of latency-sensitive applications in domains such as IoT, Industry 4.0 and smart cities has grown with it, and the traditional cloud computing model can no longer meet all the needs of this type of application on its own. A new computing paradigm, fog computing, was therefore introduced: it defines an architecture that extends the computational and storage capacity of the cloud to the edge of the network. Many challenges still need to be overcome, however, especially regarding security, power consumption, high latency in communication with critical IoT applications, and quality of service (QoS). In this work, we present a container migration mechanism between fog and cloud nodes that supports optimisation strategies for resource allocation problems in environments integrating IoT and fog computing. In addition, our work emphasises performance and latency optimisation through an autonomic architecture based on the MAPE-K control loop, providing a foundation for the analysis and design of optimisation architectures that support IoT applications.
    Keywords: fog computing; resource management; autonomic; IoT; service migration.
    DOI: 10.1504/IJGUC.2020.10029710
     
  • Improved African buffalo optimisation algorithm for petroleum product supply chain management   Order a copy of this article
    by Chinwe Peace Igiri, Yudhveer Singh, Deepshikha Bhargava, Samuel Shikaa 
    Abstract: Designing an efficient supply chain network for a real-world optimisation problem is complex, owing to the highly constrained, large problem size. An optimal petroleum products schedule not only influences the distribution cost but can also induce scarcity or surplus, the consequences of which are borne above all by the poor in the case of shortage. In practice, bio-inspired methods are the preferred alternative to conventional exact algorithms, which are gradient-based and therefore require an initial guess to obtain a reasonable solution. The African Buffalo Optimisation (ABO) algorithm belongs to the class of swarm intelligence algorithms and has shown significant performance in the literature; it models the grazing and defending lifestyle of African buffaloes in the savannah. The chaotic ABO (CABO) and chaotic-Levy ABO (CLABO) are improved variants of the standard ABO that have also demonstrated outstanding performance in recent studies. An active research methodology, PICO (Problem, Intervention, Comparison, Outcome), is employed to carry out this investigation. The present study applies the three algorithms to design an optimal petroleum distribution scheduling system and compares the outcomes to identify the best, so as to help management and logistics make informed decisions. The results show that CLABO, CABO and the standard ABO reduce the original total cost by 24.79%, 24.85% and 9.35%, respectively, demonstrating the robustness of CLABO and CABO on real-world optimisation problems.
    Keywords: supply chain network; computational intelligence; petroleum product scheduling; bio-inspired algorithm; swarm intelligent; African buffalo optimisation algorithm; chaotic African buffalo optimisation algorithm; chaotic–Levy flight African buffalo optimisation algorithm.

  • Towards an effective approach for architectural knowledge management considering global software development   Order a copy of this article
    by Muneeb Ali Hamid, Yaser Hafeez, Bushra Hamid, Mamoona Humayun, N.Z. Jhanjhi 
    Abstract: Architectural design is expected to deliver good-quality software products that satisfy customer requirements, and a foremost concern of the customer is to obtain a better-quality product within a minimal time span. The evaporation of architectural knowledge harms the quality of the system being developed. This research study proposes and validates a framework that bridges the gaps in architectural knowledge management. A mixed research approach is employed to describe and evaluate the method: to align closely with industry practice, action research was adopted as the research methodology, while evaluation was performed through a multiple case study, with data collected from participants via semi-structured interviews. We explored both existing theory and industrial practice to build the proposed framework. The results show that the framework enables architects to cope with complex designs in a distributed software development environment. The developed tool shifts the theory into practice by assisting in the creation of the system architecture, the survival of knowledge, and support for architectural evolution under changing requirements. The framework has implications for the effective management of architectural knowledge in a distributed environment. Our contribution is to map the essential knowledge management activities for storing architectural knowledge, to enhance the sharing of architectural experience, and to enable the reuse of architectural knowledge through a case-based reasoning cycle. The experiences from the case study provide directions for future research.
    Keywords: knowledge management; architectural knowledge; global software development; design decision.

  • Entropy-based classification of trust factors for cloud computing   Order a copy of this article
    by Ankita Sharma, Puja Munjal, Hema Banati 
    Abstract: Cloud computing has now been introduced in organisations all around the globe, and with the growing prevalence of grid and distributed computing it has become extremely important to maintain security and trust. Researchers have begun mining information in cloud computing and identifying the basic factors of ethical trust, whose aspects in the cloud depend on the application and the prevailing conditions. Data mining is a procedure for extracting the most significant information from large amounts of raw data. In this paper, a three-phase methodology involving machine learning techniques is adopted to discover the most important parameter on which trust is based in the cloud environment. The methodology was implemented on a data set, showing that privacy is the most important factor for establishing ethical trust in cloud computing. The results can be employed in real cloud environments to establish trust, as service providers now consider privacy the main issue in this relatively new distributed computing environment. An illustrative entropy computation follows this entry.
    Keywords: cloud computing; data mining; classification; decision tree; trust; entropy; multivariate regression.
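    The entropy and information-gain computations behind a decision-tree ranking of trust factors can be sketched as below; the tiny data set is invented and deliberately constructed so that privacy carries the larger gain, mirroring the abstract's conclusion.

      import math
      from collections import Counter

      def entropy(labels):
          """Shannon entropy of a list of class labels."""
          total = len(labels)
          return -sum((c/total) * math.log2(c/total)
                      for c in Counter(labels).values())

      def information_gain(rows, attr, target='trusted'):
          """Entropy reduction achieved by splitting the rows on one attribute."""
          base = entropy([r[target] for r in rows])
          rem = 0.0
          for v in set(r[attr] for r in rows):
              sub = [r[target] for r in rows if r[attr] == v]
              rem += len(sub) / len(rows) * entropy(sub)
          return base - rem

      data = [
          {'privacy': 'high', 'security': 'high', 'trusted': 'yes'},
          {'privacy': 'high', 'security': 'low',  'trusted': 'yes'},
          {'privacy': 'low',  'security': 'high', 'trusted': 'no'},
          {'privacy': 'low',  'security': 'low',  'trusted': 'no'},
          {'privacy': 'high', 'security': 'low',  'trusted': 'yes'},
      ]
      for factor in ('privacy', 'security'):
          print(factor, round(information_gain(data, factor), 3))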

  • Hardware implementation of OLSR and improved OLSR for AANETs   Order a copy of this article
    by Pardeep Kumar, Seema Verma 
    Abstract: The emerging Airborne Ad hoc Networks (AANETs) are a subclass of Vehicular Ad hoc Networks (VANETs). The features of AANETs, in particular the highly dynamic topology caused by the high cruising speed of aircraft, make them unique compared with typical MANETs and VANETs, and the major challenge in AANET implementation is the frequent route breaks caused by very high mobility. To deal with these routing challenges, we have designed a new protocol named Airborne OLSR (AOLSR), an improved version of the OLSR protocol with a more optimised MPR selection technique. Hardware implementation makes it possible to achieve quicker call setup, low execution time and immediate response to any topological change, so a hardware implementation of the OLSR and AOLSR protocols in Verilog is presented in this paper. The architecture is simulated in Vivado 2018, and the simulation analysis shows that the proposed AOLSR outperforms OLSR in terms of power consumption and execution time. An illustrative MPR-selection sketch follows this entry.
    Keywords: AANETs; VANETs; AOLSR; OLSR; Verilog.
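    OLSR's MPR selection, which AOLSR further optimises, follows the well-known greedy heuristic sketched below: choose the smallest set of 1-hop neighbours that covers the whole 2-hop neighbourhood. The AOLSR refinement itself is not shown, and the toy neighbourhood is invented.

      def select_mprs(one_hop, two_hop_via):
          """one_hop: set of 1-hop neighbours; two_hop_via: neighbour -> set of
          strict 2-hop nodes reachable through it. Returns a covering MPR set."""
          uncovered = set().union(*two_hop_via.values())
          mprs = set()
          # Neighbours that are the only path to some 2-hop node are forced.
          for n in one_hop:
              only = two_hop_via[n] - set().union(
                  *(two_hop_via[m] for m in one_hop if m != n))
              if only & uncovered:
                  mprs.add(n)
                  uncovered -= two_hop_via[n]
          # Then greedily add the neighbour covering most remaining 2-hop nodes.
          while uncovered:
              best = max(one_hop - mprs,
                         key=lambda n: len(two_hop_via[n] & uncovered))
              mprs.add(best)
              uncovered -= two_hop_via[best]
          return mprs

      two_hop_via = {
          'A': {'x', 'y'},
          'B': {'y', 'z'},
          'C': {'z'},
          'D': {'w'},        # only D reaches w, so D is forced
      }
      print(select_mprs({'A', 'B', 'C', 'D'}, two_hop_via))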

Special Issue on: Current Trends in Ambient Intelligence-Enabled Internet of Things and Web of Things Interface Vehicular Systems

  • Hybrid energy-efficient and QoS-aware algorithm for intelligent transportation system in internet of things   Order a copy of this article
    by N.N. Srinidhi, G.P. Sunitha, S. Raghavendra, S.M. Dilip Kumar, Victor Chang 
    Abstract: The Internet of Things (IoT) consists of a large number of energy-constrained devices that are configured to improve the operational efficiency of several industrial applications. It is essential to reduce the energy use of every device deployed in the IoT network without compromising the quality of service (QoS) required by intelligent transportation systems. Here, the difficulty of balancing QoS provisioning against energy efficiency for an intelligent transportation system is considered. To achieve this objective, a multi-objective optimisation problem is devised that estimates the outage performance of the clustering process and the network lifetime. Subsequently, a Hybrid Energy-Efficient and QoS-Aware (HEEQA) algorithm is proposed, combining quantum particle swarm optimisation (QPSO) with an improved non-dominated sorting genetic algorithm (NSGA) to achieve energy balance among the devices; the MAC layer parameters are then tuned to reduce the energy consumption of the devices further. NSGA is applied to solve the multi-objective optimisation problem, and QPSO is used to find the optimal cooperative nodes and cluster heads in the clusters. The simulation results show that HEEQA achieves a better balance between energy efficiency and QoS provisioning in the clustering process by minimising energy consumption, delay and transmission overhead while maximising network lifetime, throughput and delivery ratio, making it well suited to intelligent transportation applications.
    Keywords: energy efficiency; intelligent transportation system; IoT; network lifetime; QoS.

  • Analysing control plane scalability issue of software-defined wide area network using simulated annealing technique   Order a copy of this article
    by Kshira Sahoo, Somula Ramasubbareddy, B. Balamurugan, B. Vikram Deep 
    Abstract: In Software Defined Networks (SDN), decoupling the control logic from the data plane enables vendor-independent policies and programmability, among numerous other advantages. However, since its inception, SDN has faced a wide range of criticism, mainly related to the scalability of the control plane. To address these limitations, recent architectures support multiple SDN controllers, and the use of multiple controllers gives rise to the controller placement problem (CPP). Placement is a major issue for wide area networks because significant strategies must be considered when locating the controllers; most placement strategies focus on propagation latency, as it is a critical factor in real networks. In this paper, the placement problem is formulated as an optimisation problem and analysed using the Simulated Annealing (SA) technique, a probabilistic single-solution search method inspired by the annealing process of metallurgical engineering. We further investigate the behaviour of SA with four different neighbouring-solution techniques. The effectiveness of the algorithms is evaluated on the TataNld topology, implemented in the MATLAB simulator. An illustrative SA sketch follows this entry.
    Keywords: software-defined networks; scalability; controller placement problem; simulated annealing.
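    A minimal simulated-annealing sketch for the placement problem is given below, using the average switch-to-nearest-controller distance as a proxy for propagation latency. Random coordinates stand in for the TataNld topology, and the neighbourhood move and cooling parameters are illustrative assumptions.

      import math, random

      random.seed(0)
      NODES = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
      K = 3   # number of controllers to place (assumed)

      def avg_latency(placement):
          """Mean distance from every node to its nearest controller."""
          return sum(min(math.dist(n, NODES[c]) for c in placement)
                     for n in NODES) / len(NODES)

      def neighbour(placement):
          """Move one randomly chosen controller to a currently unused node."""
          cand = placement.copy()
          i = random.randrange(K)
          cand[i] = random.choice([j for j in range(len(NODES)) if j not in cand])
          return cand

      cur = random.sample(range(len(NODES)), K)
      best, T = cur, 50.0
      while T > 0.01:
          nxt = neighbour(cur)
          delta = avg_latency(nxt) - avg_latency(cur)
          if delta < 0 or random.random() < math.exp(-delta / T):
              cur = nxt                 # accept improving or, sometimes, worse moves
              if avg_latency(cur) < avg_latency(best):
                  best = cur
          T *= 0.995                    # geometric cooling schedule
      print('controllers at nodes:', sorted(best),
            'avg latency:', round(avg_latency(best), 2))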

  • Energy-aware multipath routing protocol for Internet of Things using network coding techniques   Order a copy of this article
    by S. Sankar, P. Srinivasan, Somula Ramasubbareddy, B. Balamurugan 
    Abstract: Energy conservation is a significant challenge in the Internet of Things (IoT), as it connects resource-constrained devices, and routing plays a vital role in transferring data packets from source to destination. In Low Power and Lossy Networks (LLN), the existing routing protocols use a single routing metric, a composite routing metric or an opportunistic routing technique to select the parent for data transfer; however, packet loss occurs owing to the bottleneck at nodes near the sink and to data traffic during transfer. In this paper, we propose an energy-aware multipath routing protocol (EAM-RPL) to prolong the network lifetime. The multipath model establishes multiple paths from the source node to the sink. In EAM-RPL, the source node applies randomised linear network coding to encode the data packets and transmits them to the next level of cluster nodes; the intermediate nodes receive the encoded packets and forward them to the next cluster of nodes; finally, the sink node receives the packets and decodes the original data sent by the source. The simulation is conducted using the COOJA network simulator, and the effectiveness of EAM-RPL is compared with the RPL protocol. The results show that the proposed EAM-RPL improves the packet delivery ratio by 3-5% and prolongs the network lifetime by 5-10%. An illustrative network-coding sketch follows this entry.
    Keywords: Internet of Things; network coding; IPv6 routing protocol; low power and lossy networks; multipath routing.
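    Randomised linear network coding can be sketched over GF(2) as below: the source sends random XOR combinations of the original packets and the sink decodes by Gaussian elimination once enough independent combinations arrive. Practical RLNC typically uses GF(2^8); GF(2) keeps the sketch short, and nothing here reproduces EAM-RPL's routing logic.

      import random

      PACKETS = [0b10110010, 0b01101100, 0b11100001]   # three 8-bit payloads
      N = len(PACKETS)

      def encode():
          """One coded packet: random non-zero coefficient vector + XOR payload."""
          coeffs = [random.randint(0, 1) for _ in range(N)]
          if not any(coeffs):
              coeffs[random.randrange(N)] = 1
          payload = 0
          for c, p in zip(coeffs, PACKETS):
              if c:
                  payload ^= p
          return coeffs, payload

      def decode(coded):
          """Gauss-Jordan elimination over GF(2) on [coeffs | payload] rows."""
          rows = [list(c) + [p] for c, p in coded]
          for col in range(N):
              pivot = next((r for r in range(col, len(rows)) if rows[r][col]), None)
              if pivot is None:
                  return None                     # not yet full rank
              rows[col], rows[pivot] = rows[pivot], rows[col]
              for r in range(len(rows)):
                  if r != col and rows[r][col]:
                      rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
          return [rows[i][N] for i in range(N)]

      coded, recovered = [], None
      while recovered is None:                    # keep receiving coded packets
          coded.append(encode())
          if len(coded) >= N:
              recovered = decode(coded)
      print('recovered original packets:', recovered == PACKETS)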

  • Dynamic group key management scheme for clustered wireless sensor networks   Order a copy of this article
    by Vijaya Saraswathi Redrowthu, L. Padma Sree, K. Anuradha 
    Abstract: Group key management is a technique for establishing a shared group key between the cluster head and the sensor nodes for multiple sessions in a clustered network environment. The common use of such an established group key (also termed a conference key) is to permit users to encrypt and decrypt a particular broadcast message meant for the whole user group. In this work, we propose a cluster-based dynamic group key management protocol based on public key cryptography. The cluster head initiates the establishment of a group key for the sensor nodes efficiently and achieves secure communication, after which each sensor node computes the common group key. Group members can join and leave particular communications; in addition, a threshold number of other nodes can compute a new conference key without the involvement of the cluster head. The proposed protocol is investigated in terms of security and complexity using the NS-2 network simulator.
    Keywords: key management; group key management; wireless networks; privacy; public key cryptography; network simulator.

  • Intrusion detection technique using coarse Gaussian SVM   Order a copy of this article
    by Bhoopesh Singh Bhati, C.S. Rai 
    Abstract: In the new era of internet technology, everybody transfers data from place to place through the internet, and as internet technology improves, different types of attack have also increased. To protect transmitted information, it is important to detect these attacks, and here the role of an Intrusion Detection System (IDS) is imperative. Researchers have proposed numerous theories and methods in the area of IDS, and research in intrusion detection is ongoing. In this paper, an intrusion detection technique based on a Coarse Gaussian Support Vector Machine (CGSVM) is proposed. The proposed method has four major steps: data collection; preprocessing and studying the data; training and testing using the CGSVM; and decision making. In the implementation, the KDD Cup 99 dataset is used as a benchmark and MATLAB is used as the programming environment. The simulation results are presented through Receiver Operating Characteristic (ROC) curves and confusion matrices. The proposed method achieved high detection rates: 99.99%, 99.95%, 99.53%, 99.19% and 90.57% for DoS, normal, probe, R2L and U2R, respectively. An illustrative coarse-Gaussian-SVM sketch follows this entry.
    Keywords: information security; intrusion detection; machine learning; CGSVM.
    DOI: 10.1504/IJGUC.2020.10026645
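    A coarse Gaussian SVM can be sketched with scikit-learn as below, following MATLAB's "Coarse Gaussian" preset of kernel scale 4*sqrt(P) for P features, which maps to gamma = 1/(4*sqrt(P))^2 in scikit-learn's RBF parameterisation. Synthetic data stands in for the KDD Cup 99 records.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Synthetic, imbalanced two-class data with 41 features (KDD-like shape).
      X, y = make_classification(n_samples=2000, n_features=41, n_informative=15,
                                 weights=[0.8, 0.2], random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

      scaler = StandardScaler().fit(X_tr)
      X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

      kernel_scale = 4 * np.sqrt(X.shape[1])        # the "coarse" setting
      clf = SVC(kernel='rbf', gamma=1.0 / kernel_scale**2, C=1.0).fit(X_tr, y_tr)
      print('detection accuracy:', round(clf.score(X_te, y_te), 3))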
     
  • Investigation of multi-objective optimisation techniques to minimise the localisation error in wireless sensor networks   Order a copy of this article
    by Harriet Puvitha, Saravanan Palani, V. Vijayakumar, Logesh Ravi, V. Subramaniyaswamy 
    Abstract: Wireless Sensor Networks (WSN) play a major role in remote sensing environments. Sensors are now used across wireless technologies owing to their small size, low cost and ability to communicate with each other to form a network; the sensor network sits at the convergence of microelectronic and electromechanical technologies. The localisation process determines the location of each node in the network, and mobility-assisted localisation, which uses a mobility anchor, is an effective technique for node localisation; the mobility anchor also optimises path planning for the location-aware mobile node. In the proposed system, a multi-objective method minimises the distance between the source and target nodes using the Dijkstra algorithm with obstacle avoidance. Grasshopper Optimisation Algorithm (GOA) and Butterfly Optimisation Algorithm (BOA) based multi-objective models are implemented along with obstacle avoidance and path planning. The proposed system maximises localisation accuracy, and it minimises the localisation error and the computation time compared with existing systems. An illustrative Dijkstra sketch follows this entry.
    Keywords: localisation models; grasshopper optimisation; butterfly optimisation; Dijkstra; path planning.
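    The shortest-path building block the paper combines with GOA/BOA is Dijkstra's algorithm, sketched below on an invented weighted graph; obstacle avoidance would simply remove blocked nodes or edges before the search.

      import heapq

      def dijkstra(graph, source, target):
          """graph: node -> list of (neighbour, weight). Returns (cost, path)."""
          dist, prev = {source: 0.0}, {}
          heap = [(0.0, source)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == target:
                  break
              if d > dist.get(u, float('inf')):
                  continue                  # stale queue entry
              for v, w in graph.get(u, []):
                  nd = d + w
                  if nd < dist.get(v, float('inf')):
                      dist[v], prev[v] = nd, u
                      heapq.heappush(heap, (nd, v))
          path, node = [], target
          while node != source:             # walk back through predecessors
              path.append(node)
              node = prev[node]
          return dist[target], [source] + path[::-1]

      graph = {
          'S': [('A', 2), ('B', 4)],
          'A': [('B', 1), ('T', 7)],
          'B': [('T', 3)],
      }
      print(dijkstra(graph, 'S', 'T'))   # (6.0, ['S', 'A', 'B', 'T'])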

Special Issue on: Data-Intensive Services Advanced Applications and Infrastructure Research

  • Research on integrated energy system planning method considering wind power uncertainty   Order a copy of this article
    by Yong Wang, Yongqiang Mu, Jingbo Liu, Yongyi Tong, Hongbo Zhu, Mingfeng Chen, Peng Liang 
    Abstract: With the development of energy technology, the planning and operation of integrated energy systems coupling electricity, gas and heat has become an important research topic for the future energy field. To handle the influence of wind power uncertainty on the unified planning of integrated energy systems, this paper constructs a quantitative model of wind energy uncertainty based on intuitionistic fuzzy sets. On this basis, an integrated energy system planning model with optimal economic and environmental costs is established and solved with the harmony search algorithm. Finally, the proposed method is validated by simulation examples: it can improve the grid's capacity to host wind power and reduce the system's CO2 emissions, and it offers guidance for the long-term planning of integrated energy systems.
    Keywords: wind power uncertainty; planning method; electricity-gas-heat energy.

  • Research on modelling analysis and maximum power point tracking strategies for distributed photovoltaic power generation systems based on adaptive control technology   Order a copy of this article
    by Yan Geng, Jianwei Ji, Bo Hu, Yingjun Ju 
    Abstract: Distributed photovoltaic power generation technology has developed rapidly in recent years, but its cost remains much higher than that of traditional generation modes, so improving the effective use of photovoltaic cells has become a popular research direction. Based on an analysis of the characteristics of photovoltaic cells, this paper presents a mathematical model of photovoltaic cells and a maximum power point tracking algorithm that combines hysteresis control with an adaptive variable-step perturb-and-observe method. The algorithm balances the control precision and control speed of the perturb-and-observe method and significantly improves the tracking results. Finally, the feasibility of the algorithm and its tracking performance are verified by simulation in Matlab/Simulink. An illustrative code sketch follows this entry.
    Keywords: distributed photovoltaic; adaptive control technology; maximum power point tracking strategies.
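
    The variable-step perturb-and-observe idea can be sketched as follows: the operating voltage is perturbed, the perturbation direction reverses when power drops, and the step size is scaled by the observed power change. The toy PV curve and the gain/limit constants are assumptions for the example, not the paper's hysteresis-plus-adaptive controller.

        def pv_power(v, v_oc=21.0, i_sc=3.5):
            """Toy PV curve with a single maximum (stand-in for a real module)."""
            if v <= 0 or v >= v_oc:
                return 0.0
            return i_sc * v * (1.0 - (v / v_oc) ** 8)

        def perturb_and_observe(measure, v=12.0, step=0.5, k=0.2, iters=60):
            """Variable-step P&O: the step shrinks as the power change shrinks,
            trading tracking speed against oscillation near the maximum."""
            p_prev = measure(v)
            direction = 1.0
            for _ in range(iters):
                v += direction * step
                p = measure(v)
                dp = p - p_prev
                if dp < 0:
                    direction = -direction            # overshot: reverse perturbation
                step = max(0.01, min(0.5, k * abs(dp)))  # adapt step to |dP|
                p_prev = p
            return v, p_prev

        v_mpp, p_mpp = perturb_and_observe(pv_power)
        print(f"approx MPP at {v_mpp:.2f} V, {p_mpp:.2f} W")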

Special Issue on: IoT Integration in Next-Generation Smart City Planning

  • Internet of things based architecture for additive manufacturing interface   Order a copy of this article
    by Swee King Phang, Norhijazi Ahmad, Chockalingam Vaithilingam Aravind, Xudong Chen 
    Abstract: This paper addresses the current challenges in managing multiple additive manufacturing units (i.e., 3D printers) without an online system. The traditional process of selecting free printers and monitoring printing status manually reveals a major flaw: it requires physical interaction between human and machine. To date there are few, if any, online management systems for 3D printers; most printing still requires human monitoring, and the job to be printed must be fed to the printer physically via external drives. In this paper, a solution requiring zero physical interaction with additive manufacturing units is proposed, built on well-established IoT technologies. A web server hosts a page for uploading files, requesting approval and checking printing status, while a back-end server stores the files, the slicing software, the file queueing system and temporary status information for each manufacturing unit. Cameras on the 3D printers act as sensors to monitor job progress visually. In the final IoT-based 3D printing system, the user can upload files, request superior approval (optional), queue a job to a specific manufacturing unit according to the algorithm on the cloud server, receive data from the server such as time estimates, progress percentage and extruder temperature, and receive notifications of errors, should any arise, and of completion. The proposed system is implemented and verified in the Additive Manufacturing Lab at Taylor's University, Malaysia (a minimal web-service sketch follows this entry).
    Keywords: additive manufacturing units; 3D printing; online printing; printer management; cloud printing; printing networking; IoT printer; printing monitoring; heat monitor.
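
    As one plausible shape for the web service described above, the sketch below uses Flask (a common Python web framework) to expose an upload endpoint and a per-printer status endpoint. The endpoint names, the in-memory queue and the status fields are invented for illustration; the paper does not specify its implementation.

        from collections import deque
        from flask import Flask, jsonify, request

        app = Flask(__name__)
        print_queue = deque()          # file names waiting for a free printer
        printer_status = {"printer-1": {"state": "idle", "progress": 0}}

        @app.route("/upload", methods=["POST"])
        def upload():
            job = request.files["model"]          # sliced model file from the user
            job.save(f"/tmp/{job.filename}")      # server-side staging area
            print_queue.append(job.filename)      # queue until a unit is free
            return jsonify(queued=len(print_queue)), 201

        @app.route("/status/<printer_id>")
        def status(printer_id):
            # Progress/temperature would be fed by the printers' cameras and sensors.
            return jsonify(printer_status.get(printer_id, {"state": "unknown"}))

        if __name__ == "__main__":
            app.run(port=8080)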

  • Enhanced authentication and access control in internet of things: a potential blockchain-based method   Order a copy of this article
    by Syeda Mariam Muzammal, Raja Kumar Murugesan 
    Abstract: With the rapid growth of the Internet of Things (IoT), it can be foreseen that IoT services will influence several use cases. IoT brings along security and privacy issues that may hinder its wide-scale adoption. The scenarios in IoT applications are quite dynamic compared with the traditional internet, and it is vital that only authenticated and authorised users gain access to the services provided. Hence, there is a need for a novel authentication and access control technique that is compatible with, and practically applicable to, diverse IoT scenarios, providing adequate security for devices and the data communicated. This article highlights the potential of blockchain for enhanced and secure authentication and access control in IoT. The proposed method relies on blockchain technology, which eliminates the limitations of intermediaries for authentication and access control. Compared with existing systems, it has the advantages of decentralisation, secured authentication, authorisation, adaptability and scalability (a toy ledger sketch follows this entry).
    Keywords: internet of things; security; authentication; access control; blockchain.
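
    To make the tamper-evidence idea concrete, the toy ledger below chains authorisation records with SHA-256 hashes so that any modification of an earlier grant invalidates the chain. It is a single-node sketch only; the class name, record fields and API are assumptions, and a real deployment would rely on a distributed blockchain with consensus rather than this in-memory list.

        import hashlib, json, time

        class AccessLedger:
            """Toy hash-chained ledger of device access grants (illustrative)."""
            def __init__(self):
                self.chain = [{"index": 0, "payload": "genesis", "prev": "0" * 64}]

            def _hash(self, block):
                data = json.dumps(block, sort_keys=True).encode()
                return hashlib.sha256(data).hexdigest()

            def grant(self, device_id, resource):
                # Append a new grant linked to the hash of the previous block.
                block = {"index": len(self.chain),
                         "payload": {"device": device_id, "resource": resource,
                                     "ts": time.time()},
                         "prev": self._hash(self.chain[-1])}
                self.chain.append(block)

            def is_authorised(self, device_id, resource):
                # Verify chain integrity first, then look for a matching grant.
                for i in range(1, len(self.chain)):
                    if self.chain[i]["prev"] != self._hash(self.chain[i - 1]):
                        return False              # tampering detected
                return any(b["payload"]["device"] == device_id and
                           b["payload"]["resource"] == resource
                           for b in self.chain[1:])

        ledger = AccessLedger()
        ledger.grant("sensor-42", "temperature-feed")
        print(ledger.is_authorised("sensor-42", "temperature-feed"))  # True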

Special Issue on: CONIITI 2019 Intelligent Software and Technological Convergence

  • A computer-based decision support system for knowledge management in the swine industry   Order a copy of this article
    by Johanna Trujillo-Díaz, Milton M. Herrera, Flor Nancy Díaz-Piraquive 
    Abstract: The swine industry contributes to food security around the world. However, the industry's most vulnerable point is the pig production cycle, which generates an imbalance between supply and demand that affects profitability. This paper describes a computer-based decision support system for knowledge management (KM) that contributes to improving profitability performance management in the swine industry. The system allows the assessment of decision alternatives along the dimensions of KM capacity and profitability performance. The tool helps generate integration strategies for the swine industry; four simulation scenarios were designed to represent a pig company in the Colombian case (a stock-and-flow sketch follows this entry).
    Keywords: decision support system; simulation; swine; knowledge management; system dynamics.
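
    A flavour of the underlying system dynamics can be given with a minimal stock-and-flow loop in which price responds to the supply-demand imbalance and breeding responds to price, producing the cyclical behaviour the abstract describes. All stocks, rates and parameter values below are invented for illustration and are not taken from the paper.

        def simulate(weeks=104, dt=1.0):
            herd = 1000.0        # pigs in production (stock)
            demand = 95.0        # pigs demanded per week
            price = 1.0          # normalised market price
            history = []
            for _ in range(weeks):
                supply = herd * 0.10                 # weekly offtake rate
                price += 0.02 * (demand - supply)    # price reacts to imbalance
                breeding = 80.0 + 40.0 * price       # producers chase high prices
                herd += dt * (breeding - supply)     # delayed capacity response
                history.append((supply, price))
            return history

        for supply, price in simulate()[::26]:       # sample every half year
            print(f"supply={supply:6.1f}  price={price:5.2f}")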

Special Issue on: Novel Hybrid Artificial Intelligence for Intelligent Cloud Systems

  • QoS-driven hybrid task scheduling algorithm in a cloud computing environment   Order a copy of this article
    by Sirisha Potluri, Sachi Mohanty, Sarita Mohanty 
    Abstract: Cloud computing is a growing distributed computing technology. Cloud services are typically deployed to individuals or organisations to allow the sharing of resources, services and information on demand over the internet, and CloudSim is a simulator tool used to model cloud scenarios. This paper proposes a QoS-driven hybrid task scheduling architecture and algorithm for dependent and independent tasks in a cloud computing environment. The results are compared against the Min-Min task scheduling algorithm and a QoS-driven independent task scheduling algorithm. With time and cost as the QoS parameters, the proposed hybrid algorithm gives better results for both (a Min-Min baseline sketch follows this entry).
    Keywords: cloud computing; task scheduling; quality of service.
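
    For readers unfamiliar with the Min-Min baseline used in the comparison, the sketch below implements it: at each step the pending task with the smallest achievable completion time is assigned to the VM that achieves it. The task lengths and VM speeds are illustrative; the paper's QoS-driven hybrid algorithm itself is not reproduced.

        def min_min(task_lengths, vm_speeds):
            """Min-Min heuristic: repeatedly schedule the task whose minimum
            completion time across all VMs is smallest."""
            ready = {vm: 0.0 for vm in range(len(vm_speeds))}   # VM ready times
            pending = dict(enumerate(task_lengths))
            schedule = []
            while pending:
                best = None
                for t, length in pending.items():
                    for vm, speed in enumerate(vm_speeds):
                        finish = ready[vm] + length / speed     # completion time
                        if best is None or finish < best[0]:
                            best = (finish, t, vm)
                finish, t, vm = best
                ready[vm] = finish
                schedule.append((t, vm, finish))
                del pending[t]
            return schedule, max(ready.values())                # makespan

        sched, makespan = min_min([40, 10, 25, 5], [1.0, 2.0])
        print(sched, f"makespan={makespan}")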

Special Issue on: WETICE-2019 Novel Approaches to the Management and Protection of Emerging Distributed Computing Systems

  • Benchmarking management techniques for massive IIoT time series in a fog architecture   Order a copy of this article
    by Sergio Di Martino, Adriano Peron, Alberto Riccabone, Vincenzo Norman Vitale 
    Abstract: Within the Industrial Internet of Things (IIoT) scenario, the online availability of a growing number of assets in factories is enabling the collection of huge amounts of data, which can be used for big data analytics, with great possibilities for efficiency improvements and business growth. Each asset produces collections of time series, namely data streams, that must be handled with techniques providing effective ingestion and retrieval performance within complex network architectures, while complying with company and infrastructure boundaries. In this paper, we describe an industrial experience in the management of massive time series from instrumented machinery, conducted in a plant of Avio Aero (part of General Electric Aviation). As a first step, we propose a fog-based architecture to ease the collection of these massive datasets, supporting local and remote data analytics tasks. Then, we present the results of an empirical comparison of four database management systems, namely PostgreSQL, Cassandra, MongoDB and InfluxDB, in the ingestion and retrieval of gigabytes of real IIoT data collected from an instrumented dressing machine. In more detail, we tested different settings and indexing features offered by these DBMSs under different types of query. Results show that, in the investigated context, InfluxDB provides very good performance, but PostgreSQL can still be a very interesting alternative when more advanced settings are exploited. MongoDB and Cassandra, on the other hand, cannot match the performance of the other two DBMSs (a timing-harness sketch follows this entry).
    Keywords: big data; time series; IIoT; fog architecture; TSMS; NoSQL database; relational database; benchmarking.
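
    The shape of such a benchmark can be sketched with a small timing harness. Here sqlite3 (from the Python standard library) stands in for the four evaluated DBMSs so the example stays self-contained; the row count, batch size and schema are illustrative only and do not reflect the paper's workloads.

        import sqlite3, time, random

        def bench_ingest(conn, rows, batch=1000):
            """Time batched ingestion of synthetic sensor readings."""
            cur = conn.cursor()
            cur.execute("CREATE TABLE ts (sensor INT, t REAL, value REAL)")
            data = [(i % 8, i * 0.01, random.random()) for i in range(rows)]
            start = time.perf_counter()
            for i in range(0, rows, batch):               # batched inserts
                cur.executemany("INSERT INTO ts VALUES (?, ?, ?)",
                                data[i:i + batch])
                conn.commit()
            return time.perf_counter() - start

        def bench_range_query(conn, t0, t1):
            """Time a typical time-range retrieval."""
            start = time.perf_counter()
            n = conn.execute("SELECT COUNT(*) FROM ts WHERE t BETWEEN ? AND ?",
                             (t0, t1)).fetchone()[0]
            return n, time.perf_counter() - start

        conn = sqlite3.connect(":memory:")
        print(f"ingest: {bench_ingest(conn, 100_000):.3f}s")
        print("range query:", bench_range_query(conn, 100.0, 500.0))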

  • DIOXIN: runtime security policy enforcement of Fog Computing applications   Order a copy of this article
    by Enrico Russo, Luca Verderame, Alessandro Armando, Alessio Merlo 
    Abstract: Fog Computing is an emerging distributed computational paradigm that moves computation towards the edge (i.e., where data are produced). Although Fog operating systems provide basic security mechanisms, security controls over the behaviour of applications running on Fog nodes are limited, leaving applications prone to a variety of attacks. We show how current Fog operating systems (with a specific focus on Cisco IOx) are unable to prevent these attacks. We then propose a runtime policy enforcement mechanism that allows for the specification and enforcement of user-defined security policies on the communication channels adopted by interacting Fog applications. We prove that the proposed technique reduces the attack surface of Fog Computing with respect to malicious applications, and we demonstrate its effectiveness through an experimental evaluation against a realistic Fog-based IoT scenario for smart irrigation (a schematic policy-check sketch follows this entry).
    Keywords: Fog Computing; security assessment; Cisco IOx; runtime monitoring.
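
    The general idea of enforcing user-defined policies on inter-application channels can be sketched as a wrapper that vets every message before delivery. The policy signature, channel API and the smart-irrigation example policy below are invented for illustration and are far simpler than the mechanism the paper implements.

        class PolicyViolation(Exception):
            pass

        def make_enforced_send(raw_send, policies):
            """Wrap a channel's send() so every message is checked first."""
            def send(src_app, dst_app, message):
                for policy in policies:
                    if not policy(src_app, dst_app, message):
                        raise PolicyViolation(f"{src_app} -> {dst_app} blocked")
                return raw_send(dst_app, message)
            return send

        # Example user-defined policy: the irrigation controller may only
        # receive numeric commands, and only from the scheduler app.
        def irrigation_policy(src, dst, msg):
            if dst == "irrigation-ctl":
                return src == "scheduler" and isinstance(msg, (int, float))
            return True

        send = make_enforced_send(lambda dst, msg: f"delivered to {dst}",
                                  [irrigation_policy])
        print(send("scheduler", "irrigation-ctl", 30))       # allowed
        try:
            send("weather-app", "irrigation-ctl", "open")    # blocked by policy
        except PolicyViolation as e:
            print(e)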