Forthcoming articles

International Journal of Grid and Utility Computing (IJGUC)

These articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles may be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.


International Journal of Grid and Utility Computing (80 papers in press)

Regular Issues

  • An integrated framework of generating quizzes based on linked data and its application in the medical education field
    by Wei Shi, Chenguang Ma, Hikaru Yamamura, Kosuke Kaneko, Yoshihiro Okada 
    Abstract: E-learning has developed rapidly in recent years with the popularity of smartphones and tablets. With improving device performance and internet data transmission capability, increasingly complex e-learning materials can be provided to learners. Quiz games are an e-learning format that can both test and train learners, and how to automatically generate good quizzes has been discussed by many researchers. In this paper, we propose a new framework that supports users in creating their own quiz games based on linked data. Compared with other methods, our framework effectively uses a defining feature of linked data: it stores both values and the linkages among values. The quizzes generated by our framework are easy to improve and extend, offer more variety, and support learning analytics of users' activities. To obtain better educational effects, we further extend our framework to support the generation of quizzes in 3D environments. In particular, we discuss how to apply our framework to medical education.
    Keywords: linked data; quiz; serious game; e-learning.

  • Usage of DTNs for low cost IoT application in smart cities: performance evaluation of spray and wait routing protocol and its enhanced versions
    by Evjola Spaho 
    Abstract: Delay Tolerant Networks (DTNs) can be used as a low-cost solution to implement different applications of the Internet of Things (IoT) in a smart city. An issue that must be solved with this approach is the efficient transmission of data. In this paper, we create a DTN for a smart city IoT application and enhance the Binary Spray and Wait (B-S&W) routing protocol to improve delivery probability and average delay. We evaluate and compare the B-S&W routing protocol and our two enhanced versions of spray and wait (S&W-V1 and S&W-V2). The simulation results show that the proposed versions S&W-V1 and S&W-V2 improve the delivery probability and reduce the average latency.
    Keywords: IoT; smart cities; delay tolerant networks; wireless sensor networks; routing protocols; spray and wait protocol.
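The binary spray-and-wait scheme that the enhanced versions build on can be summarised in a few lines. The following is a minimal sketch of the standard B-S&W forwarding rule only; the paper's S&W-V1 and S&W-V2 modifications are not specified in the abstract, and the function name is illustrative.

```python
def on_encounter(my_copies: int, peer_is_destination: bool) -> tuple[int, int]:
    """One B-S&W contact decision: returns (copies_kept, copies_handed_over)."""
    if peer_is_destination:
        # Direct delivery: hand everything to the destination.
        return (0, my_copies)
    if my_copies > 1:
        # Spray phase: hand over half of the remaining copies to the peer.
        handed = my_copies // 2
        return (my_copies - handed, handed)
    # Wait phase: with a single copy left, wait to meet the destination itself.
    return (1, 0)
```

Bounding the number of copies this way is what keeps the protocol cheap compared with epidemic flooding, at the price of longer delays when contacts are rare.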

  • Public key encryption with equality test for vehicular system based on near-ring
    by Muthukumaran Venkatesan, Ezhilmaran Devarasaran 
    Abstract: In recent years, vehicles have been increasingly integrated with intelligent transport systems (ITS). This has led to the development of Vehicular Ad hoc Networks (VANETs), through which vehicles communicate with each other effectively. Since VANETs assist in both vehicle-to-vehicle and vehicle-to-infrastructure communication, security and privacy have become major concerns. In this context, this work presents public key encryption with equality test based on the discrete logarithm problem (DLP) with decomposition problems over a near-ring. The proposed method is highly secure, and it addresses the threat of quantum algorithm attacks on VANET systems. Further, the proposed system resists chosen-ciphertext attacks by a Type-I adversary and is indistinguishable in the random oracle model against a Type-II adversary. The security analysis shows the proposed scheme to be stronger than existing techniques.
    Keywords: near-ring; Diffie-Hellman; vehicular ad hoc networks.

  • An old risk in the new era: SQL injection in cloud environment
    by Fu Xiao, Wang Zhijian, Wang Meiling, Chen Ning, Zhu Yue, Zhang Lei, Wang Pei, Cao Xiaoning 
    Abstract: Decades after it was first discovered, and formally classified in 2002, SQL injection still poses a most serious threat to developers, maintainers and users of web applications, even in the cloud era. The SaaS, PaaS and IaaS virtualisation technologies widely used in cloud computing have seemingly failed to enhance security against such attacks. We study the mechanism and principles of SQL injection attacks in order to help information security personnel understand and manage these risks.
    Keywords: SQL injection; virtualisation; SaaS; PaaS; cloud computing.
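As a concrete illustration of the mechanism the authors study, the classic injection and its standard mitigation can be shown in a few lines of Python with SQLite. This is an illustrative sketch, not code from the paper.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name):
    # Vulnerable: attacker input is concatenated into the SQL text itself.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterised query: the driver keeps data separate from SQL code.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks every row: [('alice',)]
print(find_user_safe(payload))    # matches nothing: []
```

The same tautology-based payload works against any layer (cloud or on-premises) that builds SQL strings from untrusted input, which is why virtualisation by itself does not remove the risk.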

  • A survey on resolving security issues in SaaS through software defined networks
    by Gopal Krishna Shyam, Reddy SaiSindhuTheja 
    Abstract: The key ingredient in the success of Software-as-a-Service (SaaS) is client satisfaction. Organisations are adopting SaaS solutions that offer several advantages, mostly in terms of minimising cost and time. Sensitive data acquired from organisations are processed by SaaS applications and stored at the SaaS provider's end. All data flowing over the network needs to be secured to avoid leakage of sensitive data. However, upon preliminary investigation, security is found to be the foremost issue hampering entrepreneurs' confidence in data deployment. This paper mainly focuses on the different security issues in SaaS. Further, we analyse security issues arising from the use of software defined networking (SDN) and elaborate on how SDN helps to improve security in SaaS. Additionally, comparisons between current solutions and SDN solutions are made. Hence, this work aims at giving new directions to researchers, specifically in the domain of SaaS, in understanding security issues and planning possible countermeasures.
    Keywords: software-as-a-service; security issues; software defined networking.
    DOI: 10.1504/IJGUC.2021.10032066

  • Towards automation of fibre to the home planning with consideration of optic access network model
    by Abid Naeem, Sheeraz Ahmed, Yousaf Ali, Nadeem Safwan, Zahoor Ali Khan 
    Abstract: To meet the increasing demand of future higher-bandwidth applications, fibre-based access is considered the best solution for delivering triple-play services (voice, data, video). There is therefore a great need to migrate from traditional copper-based networks to fibre-based access. Owing to rapid technological evolution, tough competition and budget limitations, service providers are struggling to provide a cost-effective solution that minimises their operational cost while maintaining excellent customer satisfaction. One factor that increases the cost of overall fibre to the home (FTTH) networks is unplanned deployment, which results in the use of extra components and resources. It is therefore imperative to find a suitable technique that helps reduce the planning process, the required time and the deployment cost through optimisation. Automation-based planning is one possible way to automate the network design at the lowest probable cost. In this research, a technique for migration from a copper to a fibre-access network with a scalable and optimised Passive Optical Network (PONFTTx) infrastructure is proposed, identifying the need for new technology in developing countries.
    Keywords: fibre to the home; passive optical networks; GPON; triple play.

  • FOGSYS: a system for the implementation of StaaS service in fog computing using embedded platforms
    by José Dos Santos Machado, Danilo Souza Silva, Raphael Silva Fontes, Adauto Cavalcante Menezes, Edward David Moreno, Admilson De Ribamar Lima Ribeiro 
    Abstract: This work presents the concept of fog computing, its theoretical contextualisation and related works, and develops the FogSys system, whose main objective is to receive, validate and store data from simulated IoT devices before transferring them to cloud computing. Fog computing is used here to provide the Storage as a Service (StaaS) service. The results showed that implementing this service on embedded-system devices can be a good alternative for alleviating one of the problems currently faced by IoT devices: data storage.
    Keywords: fog computing; cloud computing; IoT; embedded systems; StaaS.

  • BFO-based firefly algorithm for multi-objective optimal allocation of generation by integrating renewable energy sources
    by Swaraj Banerjee, Dipu Sarkar 
    Abstract: Amid the rapid evolution and modernisation of alternative energy, the electric power system can be composed of several Renewable Energy Sources (RES). Economic Load Dispatch (ELD), which stands out among the most complicated optimisation problems, occupies a noteworthy place in power system operation and control. Traditionally, its goal is to find the optimal combination of the generation levels of all power-producing units so as to minimise the total fuel cost while satisfying the loads and the deficit in the power transmission system. This paper presents a modern and proficient technique for solving the ELD problem. To resolve this issue, we amalgamate two meta-heuristic optimisation algorithms, namely the Bacterial Foraging Optimisation (BFO) algorithm and the Firefly Algorithm (FA), while incorporating both solar and wind renewable energies. The quality of the proposed methodology is tested and validated on the standard IEEE 3-, 6- and 10-unit systems by solving cases such as fuel cost minimisation, whole generation cost minimisation, emission minimisation and, at the same time, system transmission loss. The attained results are compared with the MOPSO and hybrid BOA algorithms, and show that the proposed methodology gives an accurate solution for several categories of objective functions.
    Keywords: economic load dispatch; solar energy; wind power; fuel and total generation cost; bacterial foraging optimisation; firefly optimisation algorithm.

  • Research on modelling analysis and maximum power point tracking strategies for distributed photovoltaic power generation systems based on adaptive control technology
    by Yan Geng, Jianwei Ji, Bo Hu, Yingjun Ju 
    Abstract: Distributed photovoltaic power generation technology has developed rapidly in recent years, yet its cost remains much higher than that of traditional generation modes. Therefore, how to improve the effective use of photovoltaic cells has become a popular research direction. Based on an analysis of the characteristics of photovoltaic cells, this paper presents a mathematical model of photovoltaic cells and a variable-step perturb-and-observe maximum power point tracking algorithm based on hysteresis control and adaptive control technology. This algorithm balances the control precision and control speed of the perturb-and-observe method and improves the tracking results significantly. Finally, the feasibility of the algorithm and its tracking effects are simulated using Matlab/Simulink software.
    Keywords: distributed photovoltaic; adaptive control technology; maximum power point tracking strategies.
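The variable-step perturb-and-observe idea can be sketched as follows. The step-size law, the constants and the toy P-V curve are illustrative assumptions, not the paper's exact hysteresis/adaptive rule.

```python
def po_step(v, p, v_prev, p_prev, base_step=0.1, gain=0.05, max_step=0.5):
    """One variable-step perturb-and-observe iteration; returns next voltage."""
    dp, dv = p - p_prev, v - v_prev
    # Variable step: proportional to |dP/dV|, so the perturbation is large on
    # the steep flanks of the P-V curve and shrinks near the maximum power point.
    step = min(max_step, gain * abs(dp / dv)) if dv else base_step
    # Standard P&O sign logic: keep direction if power rose, otherwise reverse.
    direction = 1 if (dp >= 0) == (dv >= 0) else -1
    return v + direction * step

# Toy P-V curve with its maximum power point at v = 5.
P = lambda v: 25 - (v - 5.0) ** 2

v_prev, v = 3.0, 3.2
p_prev, p = P(v_prev), P(v)
for _ in range(300):
    v_next = po_step(v, p, v_prev, p_prev)
    v_prev, p_prev, v, p = v, p, v_next, P(v_next)
# The operating point settles close to the maximum power point at v = 5.
```

Scaling the step with the local power gradient is what lets such schemes trade off the fixed-step dilemma: fast tracking far from the peak, small oscillation around it.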

  • Cloud infrastructure planning: models considering an optimisation method, cost and performance requirements
    by Jamilson Dantas, Rubens Matos, Carlos Melo, Paulo Maciel 
    Abstract: Over the years, many companies have adopted cloud computing systems as the best choice of infrastructure to support their services while keeping high availability and performance levels. Assuring the availability of resources under failures while meeting desired performance metrics is a significant challenge when planning a cloud computing infrastructure. The dynamic behaviour of virtualised resources requires special attention to the effective amount of capacity available to users, so that the system can be correctly sized. Planning computational infrastructure is therefore an important activity for cloud infrastructure providers when analysing the cost-benefit trade-off among distinct architectures and deployment sizes. This paper proposes a methodology and models to support the planning and selection of a cloud infrastructure according to availability, COA, performance and cost requirements. An optimisation model based on the GRASP meta-heuristic is used to generate a cloud infrastructure with a number of physical machine and Virtual Machine (VM) configurations. Such a system is represented using a Stochastic Petri Net (SPN) model and closed-form equations to estimate cost and dependability metrics. The proposed method is applied in a case study of a video transcoding service hosted in a cloud environment. The case study demonstrates the selection of cloud infrastructures with the best performance and dependability metrics, considering the use of the VP9, VP8 and H264 video codecs as well as distinct VM setups. The results show the best configuration choice considering six user profiles, together with the computation of the probability of finalising a set of video transcoding jobs by a given time.
    Keywords: cloud computing; performance; availability modelling; GRASP; COA; stochastic Petri nets; cost requirements.

  • Crowdsensing campaigns management in smart cities
    by Carlos Roberto De Rolt, Julio Dias, Eliza Gomes, Marcelo Buosi 
    Abstract: The growth of cities is accompanied by a large number of problems in the urban environment that make effective management of public services a hard task. The use of information technology is one way to help solve urban problems, aiming at the development of smart cities. Crowdsensing is an important tool in this process, exploiting collective intelligence and organising the collaboration of large groups of people. This work focuses mainly on the management of crowdsensing campaigns, contributing to the theoretical framework on the theme. It reports the implementation and improvement of a crowdsensing system that was initially developed from theoretical knowledge and deployed at the University of Bologna, where students participated in campaigns managed through a computational platform entitled ParticipACT, resulting in several studies on the subject. Based on this pioneering experience and an international cooperation agreement, the ParticipACT platform was transferred to LabGES, the Management Technologies Laboratory of UDESC (Santa Catarina State University). Collaborative data collection and sensor monitoring campaigns were executed and monitored, which allowed learning about the management of crowdsensing campaigns and resulted in significant contributions: adjustments to the computational platform through the insertion of new types of campaign, the inclusion of feedback elements, and proposed adjustments to the theoretical framework of campaign management models.
    Keywords: crowdsensing; smart cities; campaign management; ParticipACT.

  • A knowledge- and intelligence-based strategy for resource discovery on IaaS cloud systems
    by Mohammad Samadi Gharajeh 
    Abstract: Resource discovery selects appropriate computing resources in cloud systems to accomplish users' jobs. This paper proposes a knowledge- and intelligence-based strategy for resource discovery in IaaS cloud systems, called KINRED. It uses a fuzzy system, a multi-criteria decision making (MCDM) controller and an artificial neural node to discover suitable resources under various changes in network metrics. The suggested fuzzy system uses the hardware specifications of the computing resources, with CPU speed, CPU cores, memory, disk, the number of virtual machines and usage rate as inputs, and hardware type as output. The suggested MCDM controller makes decisions based on users' requirements, with CPU speed, CPU cores, memory and disk as inputs, and job type as output. Furthermore, the artificial neural node selects the computing resource with the highest success rate based on the outputs of both the fuzzy system and the MCDM controller. Simulation results show that the proposed strategy surpasses some existing related works in terms of the number of successful jobs, system throughput and service price.
    Keywords: cloud computing; resource discovery; knowledge-based system; intelligent strategy; artificial neural node.
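The matching step behind such a strategy can be illustrated with a simple score-based sketch. The weights and the capped-ratio scoring below are illustrative stand-ins for the paper's fuzzy system, MCDM controller and neural node, whose internals the abstract does not specify.

```python
def score(resource, requirement, weights=(0.4, 0.2, 0.2, 0.2)):
    """Illustrative closeness score between a resource and a job requirement
    over (cpu_speed, cpu_cores, memory, disk); higher is better."""
    total = 0.0
    for w, have, need in zip(weights, resource, requirement):
        # Ratio capped at 1: exceeding a requirement earns no extra credit.
        total += w * min(have / need, 1.0)
    return total

def discover(resources, requirement):
    # Pick the resource whose capabilities best satisfy the requirement.
    return max(resources, key=lambda r: score(r, requirement))

# (cpu GHz, cores, memory GB, disk GB)
resources = [(2.0, 4, 8, 100), (3.0, 8, 16, 200), (1.5, 2, 4, 50)]
print(discover(resources, (2.5, 4, 8, 100)))  # → (3.0, 8, 16, 200)
```

A fuzzy or learned scorer replaces the fixed weights here, but the overall shape (score every candidate against the requirement, pick the best) is the same.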

  • Performance impact of the MVMM algorithm for virtual machine migration in data centres
    by Nawel Kortas, Habib Youssef 
    Abstract: Virtual machine (VM) migration mechanisms and the design of data centres for cloud computing have a significant impact on energy cost and on the negotiated Service Level Agreement (SLA). Recent work focuses on how to use VM migration to achieve stable physical machine (PM) usage with the objective of reducing energy consumption under stated SLA constraints. This paper presents and evaluates a new scheduling algorithm called MVMM (Minimisation of Virtual Machine Migration) for VM migration within a data centre environment. MVMM uses a Dynamic Bayesian Network (DBN) to decide where and when a particular VM migrates. The DBN takes the data centre parameters as input, then computes a score for each VM candidate for migration in order to reduce energy consumption by decreasing the number of future migrations according to the probabilistic dependencies between the data centre parameters. Furthermore, our performance study shows that the choice of data centre scheduling algorithm and network architecture in cloud computing significantly impacts energy cost and application performance under resource and service demand variations. To evaluate the proposed algorithm, we integrated the MVMM scheduler into the GreenCloud simulator while taking into consideration key data centre characteristics such as the scheduling algorithm, Data Centre Network (DCN) architecture, links, load and communication between VMs. The performance results show that the use of the MVMM scheduler within a three-tier debug architecture can reduce energy consumption by over 35% compared with five well-known schedulers, namely Round Robin, Random, Heros, Green and Dens.
    Keywords: MVMM algorithm; virtual machine; cloud computing; dynamic Bayesian networks; SLA; scheduler algorithm; data centre network architectures; VM migration.

  • SDSAM: a service-oriented approach for descriptive statistical analysis of multidimensional spatio-temporal big data
    by Weilong Ding, Zhuofeng Zhao, Jie Zhou, Han Li 
    Abstract: With the expansion of the Internet of Things, spatio-temporal data has been widely generated and used. The rise of big data in space and time has led to a flood of new applications with statistical analysis characteristics. Applications based on statistical analysis of these data must deal with their large volume, diversity and frequent changes, as well as with the query, integration and visualisation of the data. Developing such applications is essentially a challenging and time-consuming task. In order to simplify the statistical analysis of spatio-temporal data, a service-oriented method is proposed in this paper. The method defines models of spatio-temporal data services and functional services, defines a process-based application of spatio-temporal big data statistics that invokes the basic data services and functional services, and proposes an implementation of both kinds of service on the Hadoop environment. The validity and applicability of the method are verified by a case study on expressway big data analysis.
    Keywords: spatio-temporal data; RESTful; web service.

  • Personality-aware recommendations: an empirical study in education
    by Yong Zheng, Archana Subramaniyan 
    Abstract: Recommender systems have been developed to deliver item recommendations tailored to user preferences. The impact of human personality on user decision making is well recognised, and several personality-aware recommendation models incorporate personality traits into the recommendation process. They have been demonstrated to improve the quality of recommendations in several domains, including movies, music and social networks. However, their impact in the area of education is still under investigation. In this paper, we discuss and summarise state-of-the-art personality-based collaborative filtering techniques for recommendations, and perform an empirical study on educational data. In particular, we collect personality traits in two ways: through a user survey and through a natural language processing system. We examine the effectiveness of the recommendation models using these subjective and inferred personality traits, respectively. Our experimental results reveal that students with different personality traits may make different choices, and that the inferred personality traits are more reliable and effective in the process of recommendation.
    Keywords: personality; recommender systems; education; empirical study.

  • Research on integrated energy system planning method considering wind power uncertainty
    by Yong Wang, Yongqiang Mu, Jingbo Liu, Yongyi Tong, Hongbo Zhu, Mingfeng Chen, Peng Liang 
    Abstract: With the development of energy technology, the planning and operation of integrated energy systems coupling electricity, gas and heat have become an important research topic in the energy field. In order to address the influence of wind power uncertainty on the unified planning of integrated energy systems, this paper constructs a quantitative model of wind energy uncertainty based on intuitionistic fuzzy sets. On this basis, an integrated energy system planning model with optimal economic and environmental costs is established and solved by the harmony search algorithm. Finally, the proposed method is validated by simulation examples. The planning method can improve the grid's hosting capacity for wind power and reduce the system's CO2 emissions, and it provides guidance for the long-term planning of integrated energy systems.
    Keywords: wind power uncertainty; planning method; electricity-gas-heat energy.

  • Big data analytics: an improved method for large-scale fabrics detection based on feature importance analysis from cascaded representation
    by Minghu Wu, Song Cai, Chunyan Zeng, Zhifeng Wang, Nan Zhao, Li Zhu, Juan Wang 
    Abstract: To address the curse of dimensionality and data imbalance in large-scale fabric data, this paper proposes a classification method for fabric images based on feature fusion and feature selection. A representation-learning model using the transfer learning idea is first established to extract semantic features from fabric images. Then, the features generated by the different models are cascaded so that they complement one another. Furthermore, extremely randomised trees (Extra-Trees) are used to analyse the importance of the cascaded representation and to reduce the computation time of the classification model under big data and high-dimensional representation. Finally, a multilayer perceptron completes the classification of the selected features. Experimental results demonstrate that the method can detect fabrics with high accuracy. Moreover, feature importance analysis effectively accelerates detection when the model processes big data.
    Keywords: big data; representation learning; feature fusion; feature selection.

  • Research on design method of manoeuvring targets tracking generator based on LabVIEW programming
    by Caiyun Gao, Shiqiang Wang, Huiyong Zeng, Juan Bai, Binfeng Zong, Jiliang Cai 
    Abstract: Aiming at the issues of poor visual display and non-real-time status output when describing a manoeuvring target track with data, a new design method for a target track generator is proposed based on the Laboratory Virtual Instrument Engineering Workbench (LabVIEW). Firstly, the motion model of a manoeuvring target is built. Secondly, the design requirements of the track generator are discussed. Finally, target tracks for multiple targets and multiple manoeuvring models are produced through visual panel and code design with LabVIEW. Simulation results indicate that the proposed method can output the target status in real time at different data rates while directly displaying the manoeuvring tracks of multiple targets with good visibility, and that the generated track parameters are accurate and effective.
    Keywords: LabVIEW; virtual instrument; target track simulation; situation display of radar.

  • Finite state transducer based light-weight cryptosystem for data confidentiality in cloud computing
    by Basappa Kodada, Demian Antony D'Mello 
    Abstract: Cloud computing is derived from parallel, cluster, grid and distributed computing and is becoming one of the most advanced and fastest-growing technologies. With the rapid growth of internet technology and its speed, the number of cloud computing users is growing enormously, and huge amounts of data are being generated. With the growth of data in the cloud, the security and safety of data, such as data confidentiality and privacy, are paramount issues because data plays a vital role in the current trend. This paper proposes a new type of cryptosystem based on a finite state transducer to provide data confidentiality in cloud computing. The paper presents the protocol's communication process and gives an insight into the security analysis of the proposed scheme. The results provide proof of concept that the scheme is stronger and more secure than existing schemes.
    Keywords: security; confidentiality; encryption; decryption; automata; finite state machine; finite state transducer; cryptography; data safety.

  • FastGarble: an optimised garbled circuit construction framework
    by A. Anasuya Innocent, G. Prakash, K. Sangeeta 
    Abstract: In the emerging field of cryptography, secure computation can be used to solve a number of distributed computing applications without loss of privacy of sensitive/private data. The applications can be as simple as coin tossing or agreement between parties, or as complex as e-auctions, e-voting, or private data retrieval for the purpose of carrying out research on sensitive data, private editing on the cloud, etc., without the help of a trusted third party. Confidentiality can be achieved by conventional cryptographic techniques, but they require the data to be available in order to work. Working on sensitive data calls for another technique, and this is where secure computation comes in. Any secure computation protocol starts with the construction of a garbled circuit of the underlying functionality, and the efficiency of the protocol and of the circuit construction are directly proportional to each other. Hence, as the complexity of an application increases, the circuit size increases, resulting in poor efficiency of the protocol, which in turn restricts secure computation from finding its use in day-to-day applications. In this paper, an optimised garbled circuit construction framework, named FastGarble, is proposed and shown to improve the time complexity of garbled circuit construction.
    Keywords: secure computation; garbled circuit; performance; secure two-party computation; time complexity.

  • Fine-grained access control of files stored in cloud storage with traceable and revocable multi-authority CP-ABE scheme
    by Bharati Mishra, Debasish Jena, Srikanta Patnaik 
    Abstract: Cloud computing is gaining popularity among enterprises, universities, government departments and end-users. Geographically distributed users can collaborate by sharing files through the cloud. Ciphertext-policy attribute-based encryption (CP-ABE) provides an efficient technique for the data owner to enforce fine-grained access control. Single-authority CP-ABE schemes create a bottleneck for enterprise applications, whereas multi-authority CP-ABE schemes use multiple attribute authorities to perform attribute registration and key distribution. Existing multi-authority schemes are designed with Type I pairings and are vulnerable to some reported attacks. This paper proposes a multi-authority CP-ABE scheme that supports attribute and policy revocation. The scheme is designed with Type III pairings, which offer higher security, faster group operations and smaller element storage. The proposed scheme has been implemented using the Charm framework, which uses the PBC library, with the OpenStack cloud platform providing computing and storage services. It is proved that the proposed scheme is collusion resistant, traceable and revocable. The AVISPA tool has been used to verify that the proposed scheme is secure against replay and man-in-the-middle attacks.
    Keywords: cloud storage; access control; CP-ABE; attribute revocation; blockchain; multi-authority.

  • On generating Pareto optimal set in bi-objective reliable network topology design
    by Basima Elshqeirat, Ahmad Aloqaily, Sieteng Soh, Kwan-Wu Chin, Amitava Datta 
    Abstract: This paper considers the following NP-hard network topology design (NTD) problem, called NTD-CB/R: given (i) the location of network nodes, (ii) connecting links, and (iii) each link's reliability, cost and bandwidth, design a topology with minimum cost (C) and maximum bandwidth (B) subject to a pre-defined reliability (R) constraint. A key challenge when solving the bi-objective optimisation problem is to simultaneously minimise C while maximising B. Existing solutions aim to obtain one topology with the largest bandwidth-cost ratio. In contrast, this paper aims to generate the best set of non-dominated feasible topologies, also known as the Pareto Optimal Set (POS). It formally defines a dynamic programming (DP) formulation for NTD-CB/R. Then, it proposes two alternative Lagrange relaxations to compute a weight for each link from its reliability, bandwidth, and cost. The paper further proposes a DP approach, called DPCB/R-LP, to generate POS with maximum weight. It also describes a heuristic that enumerates only k ≤ n paths to reduce the computational complexity for a network with n possible paths. Extensive simulations on hundreds of various-sized networks that contain up to 2^99 paths show that DPCB/R-LP can generate 70.4% of the optimal POS while using only up to 984 paths and 27.06 CPU seconds. With respect to a widely used metric, called overall-Pareto-spread (OS), DPCB/R-LP produces 94.4% of POS with OS = 1, measured against the optimal POS. Finally, every generated POS contains a topology that has the largest bandwidth-cost ratio, significantly higher than the 88% obtained by existing methods.
    Keywords: bi-objective optimisation; dynamic programming; Lagrange relaxation; Pareto optimal set; network reliability; topology design.
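The core of generating a POS, keeping only topologies that no other topology beats on both cost and bandwidth, can be illustrated with a simple filter (an illustrative sketch; the function name and sample data are not taken from the paper):

```python
def pareto_optimal_set(topologies):
    """Return the non-dominated (Pareto optimal) topologies.

    Each topology is a (cost, bandwidth) tuple. Topology A dominates B
    when A costs no more AND offers at least as much bandwidth, with at
    least one strict inequality (guaranteed here by a != b on tuples).
    """
    def dominates(a, b):
        return a[0] <= b[0] and a[1] >= b[1] and a != b

    return [t for t in topologies
            if not any(dominates(o, t) for o in topologies)]

# Example: minimise cost, maximise bandwidth.
candidates = [(10, 100), (12, 100), (8, 80), (15, 150), (9, 90)]
print(pareto_optimal_set(candidates))
# -> [(10, 100), (8, 80), (15, 150), (9, 90)]  ((12, 100) is dominated by (10, 100))
```

A real NTD-CB/R solver would additionally enforce the reliability constraint R on each candidate topology before applying such a filter.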

  • Dynamic quality of service for different flow types in SDN networks   Order a copy of this article
    by Alessandro Lima, Eduardo Alchieri 
    Abstract: The structure of the internet makes it difficult to implement Quality of Service (QoS) for the different flows generated by many different applications, ranging from an e-commerce application with light demand to real-time applications such as VoIP or videoconferencing, which make heavy use of the internet. One of the challenges is the lack of technical knowledge and the difficulty of configuring network equipment with many proprietary technologies. Software Defined Networks (SDN) are a good alternative for mitigating these problems. By separating the control plane from the data plane, network administrators can use network resources efficiently and, moreover, it is easier to provide new services and applications tailored to the needs of the network. However, SDN technology itself still lacks solid QoS mechanisms, especially for flows classified as elephant (large data volume), cheetah (high throughput) and alpha (large bursts). Aiming to fill this gap, this work proposes a new SDN service, called QoS-Flux, that receives network information from the data plane through the OpenFlow protocol and applies different QoS algorithms and filters to dynamically deal with different flows. Experimental results show that QoS-Flux significantly improves the QoS metrics of delay, jitter, packet loss, and bandwidth in an SDN network.
    Keywords: SDN; quality of service; elephant flow; alpha flow; cheetah flow.

  • Optimal controller design for an islanded microgrid during load change   Order a copy of this article
    by Bineeta Soreng, Raseswari Pradhan 
    Abstract: Voltage and frequency tend to vary more in islanded mode than in grid-connected mode. This paper focuses on developing a technique for optimal regulation of voltage and frequency for a Microgrid (MG). Here, the studied microgrid is a Photovoltaic (PV) based MG (PVMG). The proposed technique is a Sliding Mode Controller (SMC) optimised using the Whale Optimisation Algorithm (WOA), named SMC-WOA. The effectiveness of the proposed technique is validated by the dynamic response of the studied PVMG during operation mode and load changes. For controlling the studied PVMG, two loops, namely a voltage loop and a current loop, are used. In addition, a droop controller is used for power sharing in the studied PVMG. To assess the efficacy of the proposed SMC-WOA technique, the dynamic responses of the studied system with SMC-WOA are compared with those of the Grey Wolf Optimisation (GWO) based SMC (SMC-GWO) and the Sine Cosine Algorithm (SCA) based SMC (SMC-SCA). Analysis of the simulation results shows that the proposed SMC-WOA yields better results than the SMC-GWO and SMC-SCA techniques in terms of a faster solution with minimum voltage and frequency overshoot, along with minimum output current and total harmonic distortion. The proposed technique is also validated by comparison with a PI controller optimised with the same WOA, GWO and SCA.
    Keywords: microgrids; PI controller; sliding mode; whales optimisation algorithm; grey wolf optimisation; sine cosine algorithm; total harmonic distortion.

  • HyperGuard: on designing out-VM malware analysis approach to detect intrusions from hypervisor in cloud environment   Order a copy of this article
    by Prithviraj Singh Bisht, Preeti Mishra, Pushpanjali Chauhan, R.C. Joshi 
    Abstract: Cloud computing provides delivery of computing resources as a service on a pay-as-you-go basis. It represents a shift from products being purchased to products being subscribed to as a service, delivered to consumers over the internet from a large-scale data centre. The main issue with cloud services is security from attackers who can easily compromise the Virtual Machines (VMs) and the applications running on them. In this paper, we present HyperGuard, a mechanism to detect malware that hides its presence by sensing the analysis environment or the security tools installed in VMs. Such malware may attach itself to legitimate processes. Hence, HyperGuard is deployed at the hypervisor, outside the monitored VMs, to detect such evasive attacks. It employs open-source introspection libraries, such as DRAKVUF and LibVMI, to capture VM behaviour from the hypervisor in the form of syscall logs, and extracts features in the form of n-grams. It makes use of Recursive Feature Elimination (RFE) and a Support Vector Machine (SVM) to learn and detect the abnormal behaviour of evasive malware. The approach has been validated with a publicly available dataset (Trojan binaries) and a dataset obtained on request from the University of New California (evasive malware binaries). The results seem to be promising.
    Keywords: cloud security; intrusion detection; virtual machine introspection; system call traces; machine learning; anomaly behaviour detection; spyder.

  • Method for determining cloth simulation filtering threshold value based on curvature value of fitting curve   Order a copy of this article
    by Jialong Sun 
    Abstract: Cloth simulation filtering (CSF) is an algorithm that is effectively applied to mobile 3D laser point cloud filtering. The classification threshold in cloth simulation is a key parameter for separating ground points from non-ground points, and its selection directly affects the point cloud filtering effect. In this paper, building on the CSF algorithm, a method is proposed that determines the cloth simulation classification threshold from the curvature of the threshold versus total filtering error fitting curve. First, according to the relationship between the classification threshold and the filtering error in CSF, a least-squares optimal fitting function is established. Then, the curvature of the fitted curve is calculated to determine the optimal classification threshold. Finally, the final filtering errors for ground points and non-ground points are analysed through examples, verifying the effectiveness of the method in CSF filtering and obtaining a good filtering effect.
    Keywords: cloth simulation filtering; laser point cloud; filtering; threshold.
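As a rough illustration of the curvature criterion (assuming, hypothetically, that a quadratic y = a*x^2 + b*x + c has already been least-squares fitted to the (threshold, total error) pairs; the function names are illustrative, not from the paper), the classification threshold can be chosen where the fitted curve's curvature peaks:

```python
def curvature(a, b, x):
    """Curvature of y = a*x**2 + b*x + c at x:
    kappa = |y''| / (1 + y'**2)**1.5, with y' = 2*a*x + b and y'' = 2*a."""
    return abs(2 * a) / (1 + (2 * a * x + b) ** 2) ** 1.5

def pick_threshold(a, b, candidates):
    """Return the candidate threshold where the fitted curve is most curved."""
    return max(candidates, key=lambda x: curvature(a, b, x))

# For y = x^2 - 2x + 1 the slope vanishes at x = 1, so curvature peaks there.
print(pick_threshold(1, -2, [0.0, 0.5, 1.0, 1.5, 2.0]))  # -> 1.0
```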

  • Cloud workflow scheduling algorithm based on multi-objective hybrid particle swarm optimisation   Order a copy of this article
    by Baomin Xu 
    Abstract: Particle swarm optimisation has been widely used in solving scheduling problems. This paper proposes a hybrid algorithm, Hill Climbing with Multi-objective Particle Swarm Optimisation (HCMOPSO), based on heuristic local search and a multi-objective particle swarm optimisation algorithm. HCMOPSO introduces hill climbing techniques into the particle swarm optimisation algorithm to improve its local search ability. Experimental results show that HCMOPSO is an effective cloud workflow scheduling algorithm, with faster convergence and better optimisation ability.
    Keywords: hill climbing algorithm; task scheduling; particle swarm optimisation; cloud workflow.

  • Dynamic Bayesian network based prediction of performance parameters in cloud computing   Order a copy of this article
    by Priyanka Bharti, Rajeev Ranjan 
    Abstract: Resource prediction is an important task in cloud computing environments. It can become more effective and practical for large Cloud Service Providers (CSPs) with a deeper understanding of the key characteristics of their Virtual Machine (VM) workloads. Resource prediction is also influenced by several factors including (but not limited to) data centre resources, types of user application (workloads), network delay and bandwidth. Given the increasing number of users of cloud systems, if these factors can be accurately measured and predicted, improvements in resource prediction could be even greater. Existing prediction models have not explored how to capture the complex and uncertain (dynamic) relationships between these factors, owing to the stochastic nature of cloud systems. Further, they are based on score-based Bayesian network (BN) algorithms, which have limited prediction accuracy when dependency exists between multiple variables. This work considers time-dependent factors in cloud performance prediction. It applies a Dynamic Bayesian Network (DBN) as an alternative model for dynamic prediction of cloud performance, extending the static capability of a BN. The developed model is trained using standard datasets from Microsoft Azure and Google Compute Engine. It is found to be effective in predicting application workloads and their resource requirements, with enhanced accuracy compared with existing models. Further, it leads to better decision-making with regard to response time and scalability in dynamic cloud environments.
    Keywords: cloud computing; dynamic Bayesian network; resource prediction; response time; scalability.

  • A privacy-aware and fair self-exchanging self-trading scheme for IoT data based on smart contract   Order a copy of this article
    by Yuling Chen, Hongyan Yin, Yaocheng Zhang, Wei Ren, Yi Ren 
    Abstract: With the development of the era of big data, the demand for data sharing and usage is increasing, especially in the era of the Internet of Things, creating a strong demand for data exchanging and data trading. However, existing data exchanging and trading platforms are usually centralised, and users have to trust the platforms. This paper proposes a secure and fair exchanging and trading protocol based on blockchain and smart contracts that is, in particular, self-governing, without relying on centralised trust. The protocol guarantees fairness, defending against trade cheating, and security, ensuring data confidentiality. It also guarantees efficiency by transferring data links instead of data between data owners and data buyers. Extensive analysis justifies that the proposed scheme can facilitate self-exchanging and self-trading of big data in a secure, fair and efficient manner.
    Keywords: big data; IoT; fair exchanging; blockchain; smart contract; oblivious protocol; fair trading.

  • Robust and secure authentication protocol protecting privacy for roaming mobile users in global mobility networks   Order a copy of this article
    by R.K. Madhusudhan, K.S. Suvidha 
    Abstract: With the advent of new 5G technology, there is a need to develop a security architecture that is integrated into the new 5G system and works in compliance with 5G paradigms. In this paper, a two-factor authentication scheme is developed to address security features such as user anonymity and privacy preservation in roaming scenarios in the GLObal MObility NETwork (GLOMONET). While roaming, mobile users (MUs) need to access the services of the foreign agent (FA), and the FA grants the service request only to authenticated MUs. To verify the authenticity of an MU, the FA sends the MU's service request to the home agent (HA). The HA verifies the authenticity of the MU, after which the FA allows the MU to access the services. The entire communication during roaming is carried over an insecure channel, hence a security concern is raised. The main objective of the proposed protocol is to secure the channel and to withstand all active and passive security attacks. Because the protocol is designed for mobile networks, it should be lightweight with a low communication cost, and one such protocol is proposed in this article. The proposed protocol is simulated using the NS2.35 simulator, and performance metrics such as throughput, end-to-end delay and packet delivery ratio are computed. Additionally, the proposed protocol addresses the active and passive security attacks that exist in 5G cellular networks, which is formally verified using the AVISPA tool. The protocol is efficient in terms of computational and communication cost. The proposed scheme is robust and practically implementable.
    Keywords: GLOMONET; security; smartcard; ECC; timestamp; NS2.

  • An agent-based mechanism to form cloud federations and manage their requirements changes   Order a copy of this article
    by Nassima Bouchareb, Nacer Eddine Zarour 
    Abstract: Cloud computing is a business paradigm, where cloud providers offer computing resources (software and hardware) and cloud consumers use them. Forming cloud federations and managing them is a challenging problem. In this paper, we propose an agent-based mechanism to automatically manage cloud federations and their requirements changes, so as to accept the maximum number of requests with minimum cost and energy consumption by soliciting the best clouds. First, we present two strategies, an offer strategy and an acceptance strategy, which allow the formation of federations. Then, we describe how to manage the requirements changes of these strategies. Finally, this paper presents a case study to illustrate our federation management mechanism in cloud computing. We evaluate the proposed strategies by comparing them with other related works. Simulation results indicate that the proposed policies enhance the providers' profit.
    Keywords: cloud computing; federation; requirements change; multi-agent systems; trust; utility; green computing.

  • Clustering structure-based multiple measurement vectors model and its algorithm   Order a copy of this article
    by Tijian Cai, Xiaoyu Peng, Xin Xie, Wei Liu, Jia Mo 
    Abstract: Most current multiple measurement vector models are based on the ideal assumption of a shared sparse structure. However, owing to the time-varying nature and multiple focuses of complex data, this assumption is often difficult to meet in reality. Researchers have therefore explored various sparse structures to address the problem. In this paper, we take the cluster sparsity of signals into account and propose a Cluster Sparsity-based MMV (CS-MMV) model, which not only uses the shared sparse structure between coefficients but also considers the cluster characteristics within coefficients. Furthermore, we extend a classic algorithm to implement the new model. Experiments on simulated data and two face benchmarks show that the new model is more suitable for complex data with clustered structure, and the extended algorithm can effectively improve the performance of sparse recovery.
    Keywords: compressed sensing; sparse recovery; multi-measurement vectors; structured sparsity.

  • Micro-PaaS fog: container based orchestration for IoT applications using SBC   Order a copy of this article
    by Walter D.O. Santo, Rubens De Souza Matos Júnior, Admilson De Ribamar Lima Ribeiro, Danilo Souza Silva, Reneilson Yves Carvalho Santos 
    Abstract: The Internet of Things (IoT) is an emerging technology paradigm in which ubiquitous sensors monitor physical infrastructures, environments, and people in real time to help in decision making and improve the efficiency and reliability of systems, adding comfort and quality of life to society. In this context, limited computational resources, high latency and the diverse QoS requirements of IoT push cloud technologies towards fog computing and the adoption of lightweight virtualisation solutions, such as container-based technologies, to attend to the needs of different domains. This work therefore proposes and implements a micro-PaaS architecture for fog computing, on a cluster of single-board computers (SBCs), for the orchestration of applications using containers, applied to IoT and meeting QoS criteria such as high availability, scalability, load balancing, and latency. From this proposed model, the micro-PaaS fog was implemented with container-based virtualisation using orchestration services in a cluster built with Raspberry Pi devices to monitor water and energy consumption, at a total cost of ownership equivalent to 23% of a public platform as a service (PaaS).
    Keywords: fog computing; cluster; orchestration; containers; single board computing.

  • A review on data replication strategies in cloud systems   Order a copy of this article
    by Riad Mokadem, Jorge Martinez-Gil, Abdelkader Hameurlain, Joseph Kueng 
    Abstract: Data replication constitutes an important issue in cloud data management. In this context, a significant number of replication strategies have been proposed for cloud systems. Most of the studies in the literature have classified these strategies into static vs. dynamic or centralised vs. decentralised strategies. In this paper, we propose a new classification of data replication strategies in cloud systems. It takes into account several other criteria, specific to cloud environments: (i) the orientation of the profit, (ii) the considered objective function, (iii) the number of tenant objectives, (iv) the nature of the cloud environment and (v) the consideration of economic costs. Dealing with the last criterion, we focus on the provider's economic profit and the consideration of energy consumption by the provider. Finally, the impact of some important factors is investigated in a simulation study.
    Keywords: cloud systems; data replication; data replication strategies; classification; service level agreement; economic profit; performance.

  • Anomaly detection against mimicry attacks based on time decay modelling   Order a copy of this article
    by Akinori Muramatsu, Masayoshi Aritsugi 
    Abstract: Because cyberattackers attempt to cheat anomaly detection systems, such systems must be made robust against these attempts. In this paper, we focus on mimicry attacks and propose a system to detect them. Mimicry attacks make use of ordinary operations in order not to be detected. We take account of time decay in modelling operations, giving lower priorities to older operations and thereby enabling us to detect mimicry attacks. We empirically evaluate our proposal with varying time decay rates to demonstrate that it can detect mimicry attacks that could not be detected by a state-of-the-art anomaly detection approach.
    Keywords: anomaly detection; mimicry attacks; time decay modelling; stream processing.
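The idea of time-decayed modelling, where older operations carry exponentially less weight than recent ones, can be sketched as follows (an illustrative sketch; the class and parameter names are not from the paper):

```python
import math

class DecayedCounter:
    """Frequency counter whose counts decay exponentially with time, so
    recent operations dominate the behaviour model and an attacker cannot
    hide behind operations observed long ago."""

    def __init__(self, decay_rate):
        self.decay_rate = decay_rate  # larger -> faster forgetting
        self.scores = {}              # operation -> decayed count
        self.last_seen = {}           # operation -> time of last update

    def observe(self, op, t):
        prev = self.scores.get(op, 0.0)
        dt = t - self.last_seen.get(op, t)
        # Decay the accumulated score before adding the new observation.
        self.scores[op] = prev * math.exp(-self.decay_rate * dt) + 1.0
        self.last_seen[op] = t

    def score(self, op, t):
        """Current weight of op at time t (0.0 if never seen)."""
        dt = t - self.last_seen.get(op, t)
        return self.scores.get(op, 0.0) * math.exp(-self.decay_rate * dt)

c = DecayedCounter(decay_rate=0.1)
c.observe('open', 0)
c.observe('read', 50)
# At t = 100 the old 'open' has decayed far more than the recent 'read'.
```

Sweeping `decay_rate` corresponds to the varying time decay rates evaluated in the paper: a rate of zero reduces to a plain frequency model, while larger rates forget old behaviour faster.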

  • A cloud-based spatiotemporal data warehouse approach   Order a copy of this article
    by Georgia Garani, Nunziato Cassavia, Ilias Savvas 
    Abstract: The arrival of the big data era introduces new necessities for accommodating data access and analysis by organisations. The evolution of data is threefold: increases in volume, variety, and complexity. The majority of data nowadays is generated in the cloud, and cloud data warehouses enhance the benefits of the cloud by facilitating the integration of cloud data. A data warehouse that supports both spatial and temporal dimensions is developed in this paper. The research focuses on proposing a general design for spatio-bitemporal objects implemented by nested dimension tables using the starnest schema approach. Experimental results show that parallel processing of such data in the cloud can answer OLAP queries efficiently, and that increasing the number of computational nodes significantly reduces query execution time. The feasibility, scalability, and utility of the proposed technique for querying spatiotemporal data are demonstrated.
    Keywords: cloud computing; big data; hive; business intelligence; data warehouses; cloud based data warehouses; spatiotemporal data; spatiotemporal objects; starnest schema; OLAP; online analytical processing.

  • A truthful mechanism for crowdsourcing-based tourist spots detection in smart cities   Order a copy of this article
    by Anil Bikash Chowdhury, Vikash Kumar Singh, Sajal Mukhopadhyay, Abhishek Kumar, Meghana M. D 
    Abstract: With the advent of new technologies and the internet around the globe, many cities in different countries are involving local residents (or city dwellers) in making decisions on various government policies and projects. In this paper, the problem of detecting tourist spots in a city with the help of city dwellers, in a strategic setting, is addressed. The city dwellers vote on the different locations that are potential candidates for tourist spots. For the purpose of voting, the concept of single-peaked preferences is used, where each city dweller reports a privately held single-peaked value that signifies a location in the city. Given the above scenario, the goal is to determine a location in the city as a tourist spot. For this purpose, we design mechanisms (one of which is truthful) and measure their efficacy through simulations.
    Keywords: tourism; smart cities; crowdsourcing; city dwellers; voting; single-peaked preferences; truthful.
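Although the abstract does not specify the mechanisms used, the classic truthful rule for single-peaked preferences is to select the median of the reported peaks (Moulin's median voter scheme); a minimal sketch of that rule, not necessarily the paper's mechanism:

```python
def median_location(peaks):
    """Select the median of the reported single-peaked values.

    Under single-peaked preferences this rule is strategy-proof: no voter
    can move the chosen location closer to their own peak by misreporting.
    """
    s = sorted(peaks)
    return s[len(s) // 2]  # upper median when the count is even

# Five dwellers report their preferred positions along the city axis.
print(median_location([2.0, 7.5, 3.0, 9.0, 4.5]))  # -> 4.5
```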

  • An efficient greedy task scheduling algorithm for heterogeneous inter-dependent tasks on computational grids   Order a copy of this article
    by D.B. Srinivas, Sujay N. Hegde, M.A. Rajan, H.K. Krishnappa 
    Abstract: Designing a task scheduling algorithm for precedence-constrained task graphs is still a challenge owing to its complexity (NP-complete). Hence, the majority of research in this area is devoted to designing optimal schedulers based on a plethora of techniques, such as heuristic, greedy, genetic, game-theoretic, bio-inspired and machine learning approaches, for fully dependent or independent task graphs. Motivated by these works, we propose an efficient greedy task scheduling algorithm for precedence-constrained task graphs with varied dependencies (none, partial and full) on computational grids. The performance of the proposed algorithm is compared, with respect to Turn Around Time (TAT) and grid utilisation, against the Hungarian, Partial Precedence Constrained (P_PCS) and AND scheduling algorithms. Simulation results show that the performance of the proposed scheduling algorithm is on a par with the Hungarian, P_PCS and AND scheduling algorithms, and its running time is better than the Hungarian algorithm's and on a par with the P_PCS algorithm's.
    Keywords: task scheduling; computational grids; partial dependency; turnaround time; grid utilisation; standard task graphs; greedy task scheduling algorithm; brute force scheduling algorithm; fragmentation; random task graph.
    DOI: 10.1504/IJGUC.2020.10026377
  • Chronological and exponential-based Lion optimisation for optimal resource allocation in cloud   Order a copy of this article
    by J. Devagnanam, N.M. Elango 
    Abstract: Cloud computing is a service-oriented architecture, which has a pool of resources. This paper introduces the resource allocation scheme for cloud computing by introducing an optimisation algorithm named Chronological E-Lion algorithm. The proposed Chronological E-Lion algorithm is developed by integrating the chronological concept to the EWMA-based Lion algorithm. The fitness function of the proposed Chronological E-Lion algorithm considers various parameters, like cost, profit, CPU allocation, memory allocation, MIPS, and frequency scaling, for optimally allocating the resources. The performance of the proposed method is analysed using three different problem instances based on the evaluation metrics, profit, CPU allocation rate, and memory allocation rate. From the simulation results, it can be concluded that the proposed Chronological E-Lion algorithm achieved improved performance with the values of 45.93, 0.16, and 0.0093, for profit, CPU utilisation rate, and memory utilisation rate.
    Keywords: cloud computing; resource allocation; E-Lion; chronological concept; CPU utilisation rate; memory utilisation rate.
    DOI: 10.1504/IJGUC.2020.10029884
  • How do checkpoint mechanisms and power infrastructure failures impact on cloud applications?   Order a copy of this article
    by Guto Leoni Santos, Demis Gomes, Djamel Sadok, Judith Kelner, Élisson Rocha, Patricia Takako Endo 
    Abstract: With the growth of cloud computing usage by commercial companies, service providers are looking for ways to improve the quality of their services. Failures in the power subsystem represent a major risk of cloud data centre unavailability at the physical level. At the same time, software-level mechanisms (such as application checkpointing) can be used to maintain application consistency after a downtime and also improve availability. This paper analyses the impact of power subsystem failures on cloud application availability, as well as the impact of checkpoint mechanisms that recover the system from software-level failures. We propose stochastic models to represent the cloud power subsystem, the cloud application, and the checkpoint-based retrieval mechanisms. The results show that the selection of a given type of checkpoint mechanism does not have a significant impact on the observed metrics. On the other hand, improving the power infrastructure yields gains in performance and availability.
    Keywords: cloud data centre; checkpoint mechanisms; availability; performance; stochastic models; power subsystem; stochastic Petri nets; reliability block diagrams; hardware failures; sensitivity analysis; queue system.
    DOI: 10.1504/IJGUC.2020.10030943
  • Applying REECHD to non-uniformly distributed heterogeneous devices   Order a copy of this article
    by Diletta Cacciagrano, Flavio Corradini, Matteo Micheletti, Leonardo Mostarda 
    Abstract: Heterogeneous Wireless Sensor Networks (WSNs) include nodes with different initial energy, transmission rates and hardware. Clustering is an energy-efficient approach to collecting data in a WSN. Clustering partitions the WSN nodes into clusters, each with a cluster head (CH) that gathers data from its member nodes and forwards them to a base station. Rotating Energy Efficient Clustering for Heterogeneous Devices (REECHD) is our novel clustering algorithm, which introduces the concept of the intra-traffic limit rate (ITLR). This defines a limit on the intra-traffic communication that all WSN clusters must comply with. In this paper, we apply REECHD to a WSN where devices are not uniformly distributed. We show how the use of the ITLR improves energy efficiency by adaptively generating different numbers of clusters in different WSN subareas. Our results show that REECHD extends the network lifespan on average when compared with state-of-the-art protocols.
    Keywords: energy efficiency; clustering; heterogeneous wireless sensor networks.
    DOI: 10.1504/IJGUC.2020.10030297
  • Assessing distributed collaborative recommendations in different opportunistic network scenarios   Order a copy of this article
    by Lucas Nunes Barbosa, Jonathan F. Gemmell, Miller Horvath, Tales Heimfarth 
    Abstract: Mobile devices are common throughout the world, even in countries with limited internet access and even when natural disasters disrupt access to a centralised infrastructure. This access allows for the exchange of information at an incredible pace and across vast distances. However, this wealth of information can frustrate users as they become inundated with irrelevant or unwanted data. Recommender systems help to alleviate this burden. In this work, we propose a recommender system where users share information via an opportunistic network. Each device is responsible for gathering information from nearby users and computing its own recommendations. An exhaustive empirical evaluation was conducted on two different data sets. Scenarios with different node densities, velocities and data exchange parameters were simulated. Our results show that in a relatively short time when a sufficient number of users are present, an opportunistic distributed recommender system achieves results comparable to that of a centralised architecture.
    Keywords: opportunistic networks; recommender systems; mobile ad hoc networks; decentralised recommender systems; user-based collaborative filtering; device-to-device communications; machine learning.
    DOI: 10.1504/IJGUC.2020.10030944
  • Combined interactive protocol for lattice-based group signature schemes with verifier-local revocation   Order a copy of this article
    by Maharage Nisansala Sevwandi Perera, Takeshi Koshiba 
    Abstract: In group signature schemes, signers are required to prove their validity of representing the group while being anonymous to the verifiers and traceable to the authority. On the other hand, group signature schemes with verifier-local revocation require signers to prove their revocation token is not in the revocation list. This paper presents a combined interactive protocol that supports group signature schemes with explicit tracing mechanism and verifier-local revocation, where keys and revocation tokens are generated separately to achieve stronger security. This new protocol enables signers to prove that their signing is valid, their revocation tokens are not in the revocation list, and their index is correctly encrypted.
    Keywords: lattice-based group signatures; verifier-local revocation; zero-knowledge proof; interactive protocol.
    DOI: 10.1504/IJGUC.2020.10030945
  • An empirical study of alternating least squares collaborative filtering recommendation for Movielens on Apache Hadoop and Spark   Order a copy of this article
    by Jung-Bin Li, Szu-Yin Lin, Yu-Hsiang Hsu, Ying-Chu Huang 
    Abstract: In recent years, both consumers and businesses have faced the problem of information explosion, and recommendation systems provide a possible solution. This study implements a movie recommendation system that provides recommendations to consumers in an effort to increase consumer spending while reducing the time spent selecting films. The study presents a prototype of a collaborative filtering recommendation system based on the Alternating Least Squares (ALS) algorithm. The advantage of collaborative filtering is that it avoids possible violations of the Personal Data Protection Act and reduces the possibility of errors due to poor quality of personal data. Our research addresses ALS's limited scalability by using a platform that combines Spark with Hadoop YARN, using this combination to compute movie recommendations and store data separately. Based on the results of this study, the proposed system architecture provides recommendations with satisfactory accuracy while maintaining acceptable computational time with limited resources.
    Keywords: recommendation system; alternating least squares; collaborative filtering; Movielens; Hadoop; Spark; content-based filtering.
    DOI: 10.1504/IJGUC.2020.10030946
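To illustrate the alternating least squares idea behind the abstract above, here is a minimal rank-1 sketch in pure Python. This is not the authors' Spark/Hadoop implementation: the ratings matrix, the rank, and the iteration counts are invented for the example, and production systems (e.g. Spark MLlib) use rank-k factors with regularisation.

```python
import random

# Toy user x movie ratings matrix; 0 marks a missing rating (invented data).
R = [
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
]

def als_rank1(R, iters=50):
    """Rank-1 alternating least squares: approximate R[i][j] ~ u[i] * v[j].

    Each half-step solves its least-squares subproblem in closed form over
    the observed entries only, so the fitting error never increases."""
    random.seed(0)                       # same init for every call
    n, m = len(R), len(R[0])
    u = [1.0] * n
    v = [random.random() + 0.5 for _ in range(m)]
    for _ in range(iters):
        # Fix v, solve each u[i] exactly.
        for i in range(n):
            num = sum(R[i][j] * v[j] for j in range(m) if R[i][j])
            den = sum(v[j] ** 2 for j in range(m) if R[i][j])
            u[i] = num / den
        # Fix u, solve each v[j] exactly.
        for j in range(m):
            num = sum(R[i][j] * u[i] for i in range(n) if R[i][j])
            den = sum(u[i] ** 2 for i in range(n) if R[i][j])
            v[j] = num / den
    return u, v

def rmse(R, u, v):
    """Root-mean-square error over the observed ratings."""
    errs = [(R[i][j] - u[i] * v[j]) ** 2
            for i in range(len(R)) for j in range(len(R[0])) if R[i][j]]
    return (sum(errs) / len(errs)) ** 0.5

u, v = als_rank1(R, iters=1)
u50, v50 = als_rank1(R, iters=50)       # more iterations, error can only shrink
```

Spark's distributed ALS parallelises exactly these per-row and per-column solves across the cluster, which is the scalability lever the study exploits.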
  • Evaluating and modelling solutions for disaster recovery   Order a copy of this article
    by Júlio Mendonça, Ricardo Lima, Ermeson Andrade 
    Abstract: System outages can have disastrous effects on businesses, such as data loss, customer dissatisfaction, and subsequent revenue loss. Disaster recovery (DR) solutions have been adopted by companies to minimise the effects of these outages. However, selecting an optimal DR solution is difficult, since no single solution suits the requirements of every company (e.g., availability and costs). In this paper, we propose an integrated model-experiment approach to evaluate DR solutions. We perform experiments on different real-world DR solutions and propose analytic models to evaluate these solutions with respect to the key DR metrics: steady-state availability, recovery time objective (RTO), recovery point objective (RPO), downtime, and costs. The results reveal that DR solutions can significantly improve availability and minimise costs. A sensitivity analysis also identifies the parameters that most affect the RPO and RTO of the adopted DR solutions.
    Keywords: backup-as-a-service; cloud computing; disaster recovery; disaster tolerance; fault-tolerance; Petri nets; stochastic modelling.
    DOI: 10.1504/IJGUC.2020.10030947
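The key metrics named in the abstract above can be illustrated with the standard steady-state formulas. The MTTF/MTTR figures below are invented for the sketch, not the paper's measurements:

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability: A = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

def annual_downtime_hours(a):
    """Expected downtime per (non-leap) year for availability a."""
    return (1.0 - a) * 8760.0

# Hypothetical DR solution: fails every 1000 h on average, recovers in 2 h.
A = availability(1000, 2)
downtime = annual_downtime_hours(A)

# Worst-case RPO for periodic backups equals the backup interval:
# e.g. hourly backups bound data loss to at most one hour of work,
# while RTO is bounded by the measured recovery (MTTR) path.
```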
  • Big data inconsistencies and incompleteness: a literature review   Order a copy of this article
    by Olayinka Johnny, Marcello Trovati 
    Abstract: The analysis and integration of big data highlight several issues in the identification and resolution of data inconsistencies and knowledge incompleteness. This paper presents an overview of data inconsistencies and a review of approaches to resolving various levels of data inconsistency. Moreover, we discuss issues related to the incompleteness and stability of known knowledge over specific time periods, and their implications for the decision-making process. We also consider the use of a Bayesian network model for inconsistency resolution in data analysis and knowledge engineering.
    Keywords: big data; data inconsistencies; NLP; Bayesian networks.
    DOI: 10.1504/IJGUC.2020.10030948
  • Towards an encompassing maturity model for the management of higher education institutions   Order a copy of this article
    by Rui Humberto Pereira, João Vidal De Carvalho, Álvaro Rocha 
    Abstract: Maturity models (MMs) have been adopted in organisations from different sectors of activity as guides and references for Information Systems (IS) management. In the educational field, these models have also been used to deal with the enormous complexity and demands of IS management. This paper presents a research project that aims to develop a new MM for Higher Education Institutions (HEIs) that helps them address those problems, as a useful tool for the management of their IS (and of the institutions themselves). The MMs in this area are identified, along with the characteristics and gaps they present, justifying the need and the opportunity for a new and comprehensive MM. Finally, we discuss the methodology for the development of MMs that will be adopted for the design of the new model (called HEIMM) and the underlying reasons for its choice. The HEIMM is currently under development.
    Keywords: stages of growth; maturity models; higher education institutions; management.
    DOI: 10.1504/IJGUC.2020.10030949
  • On a user authentication method to realise an authentication system using s-EMG   Order a copy of this article
    by Hisaaki Yamaba, Shotaro Usuzaki, Kayoko Takatsuka, Kentaro Aburada, Tetsuro Katayama, Mirang Park, Naonobu Okazaki 
    Abstract: To prevent shoulder-surfing attacks, we proposed a user authentication method using surface electromyogram (s-EMG) signals, which can be used to identify who generated the signals and which gestures were made. Our method uses a technique called 'pass-gesture', which refers to a series of hand gestures, to achieve s-EMG-based authentication. However, it is necessary to introduce computer programs that can recognise gestures from the s-EMG signals. In this paper, we propose two methods that can be used to compare s-EMG signals and determine whether they were made by the same gesture. One uses support vector machines (SVMs), and the other uses dynamic time warping. We also introduced an appropriate method for selecting the validation data used to train SVMs using correlation coefficients and cross-correlation functions. A series of experiments was carried out to confirm the performance of those proposed methods, and the effectiveness of the two methods was confirmed.
    Keywords: user authentication; surface electromyogram; SVMs; support vector machines; correlation coefficient; cross-correlation; DTW; dynamic time warping.
    DOI: 10.1504/IJGUC.2020.10030950
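One of the two comparison methods named in the abstract above, dynamic time warping, can be sketched with the standard dynamic-programming recurrence. The signals below are invented toy sequences, not s-EMG recordings:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D signals.

    D[i][j] = cost(a[i-1], b[j-1]) + min(insertion, deletion, match)."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # skip a sample of a
                                 D[i][j - 1],      # skip a sample of b
                                 D[i - 1][j - 1])  # align the two samples
    return D[n][m]

# A time-stretched copy of the same "gesture" warps to distance 0,
# while a different waveform does not.
same = dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0])
diff = dtw_distance([0, 1, 2, 1, 0], [2, 0, 2, 0, 2])
```

This warping invariance is exactly why DTW suits gestures performed at slightly different speeds, which fixed-length distance measures penalise.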
  • Efficient variant transaction injection protocols and adaptive policy optimisation for decentralised ledger systems   Order a copy of this article
    by Bruno Andriamanalimanana, Chen-Fu Chiang, Jorge Novillo, Sam Sengupta, Ali Tekeoglu 
    Abstract: For decentralised cryptocurrency systems, it is important to provide users with an efficient network. One performance bottleneck is latency. To address this issue, we provide four protocols that utilise resources according to network traffic to alleviate latency. To facilitate the verification process, we discuss three variant injection protocols: Periodic Injection of Transactions via Evaluation Corridor (PITEC), Probabilistic Injection of Transactions (PIT), and Adaptive Semi-synchronous Transaction Injection (ASTI). The injection protocols are variants based on the given assumptions about the network. The goal is to provide dynamic injection of unverified transactions to enhance the performance of the network. The Adaptive Policy Optimisation (APO) protocol aims at optimising a cryptocurrency system's own house policy. The house policy optimisation is translated into a 0/1 knapsack problem, and the APO protocol is a fully polynomial-time approximation scheme for the decentralised ledger system.
    Keywords: blockchain; optimisation; decentralised ledger system architecture.
    DOI: 10.1504/IJGUC.2020.10032064
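The abstract above reduces house-policy optimisation to a 0/1 knapsack problem solved by an FPTAS. Below is a generic textbook FPTAS sketch (profit rounding followed by a minimum-weight dynamic programme), not the authors' APO protocol; the item values, weights and capacity are invented:

```python
def knapsack_fptas(values, weights, capacity, eps=0.1):
    """FPTAS for 0/1 knapsack: round profits down to multiples of
    K = eps * vmax / n, then DP on rounded profit. The returned value is
    guaranteed to be at least (1 - eps) times the optimum."""
    items = [(v, w) for v, w in zip(values, weights) if w <= capacity]
    if not items:
        return 0.0
    n = len(items)
    vmax = max(v for v, _ in items)
    K = eps * vmax / n                      # rounding granularity
    scaled = [int(v / K) for v, _ in items]
    total = sum(scaled)
    INF = float("inf")
    minw = [0.0] + [INF] * total            # min weight reaching scaled profit p
    realv = [0.0] * (total + 1)             # true value of that same item set
    for (v, w), s in zip(items, scaled):
        for p in range(total, s - 1, -1):   # reverse order => each item used once
            if minw[p - s] + w < minw[p]:
                minw[p] = minw[p - s] + w
                realv[p] = realv[p - s] + v
    return max(realv[p] for p in range(total + 1) if minw[p] <= capacity)

# Invented policy items: values, weights, and a capacity of 50.
best = knapsack_fptas([60, 100, 120], [10, 20, 30], 50, eps=0.1)
```

The DP runs in O(n^2 * vmax / (eps * vmax)) = O(n^3 / eps) time, which is what makes the scheme "fully polynomial" in both the input size and 1/eps.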
  • Online information bombardment! How does eWOM on social media lead to consumer purchase intentions?   Order a copy of this article
    by Muddasar Ghani Khwaja, Saqib Mahmood, Ahmad Jusoh 
    Abstract: Social media networks have increased Electronic Word of Mouth (eWOM) conversations, which have created numerous opportunities for online businesses. People using social media networks can discuss, evaluate and argue about different products and services with their friends, family and peers. This study aimed to determine the influence of these social media conversations on consumer purchase intentions. To this end, Erkan and Evans' (2016) study was extended into an integrated framework combining the Theory of Reasoned Action (TRA) and the Information Adoption Model (IAM). The proposed framework was estimated using Structural Equation Modelling (SEM) in AMOS. A survey method was deployed, collecting data from 342 respondents who had made online purchases influenced by social media. The results affirmed the established theoretical foundations.
    Keywords: eWOM; IAM; information adoption model; TAM; technology acceptance model; purchase intentions.
    DOI: 10.1504/IJGUC.2020.10032065

Special Issue on: ICTSCI-2019 Swarm Intelligence Techniques for Optimum Utilisation of Resources in Grid and Utility Computing

  • A vector-based watermarking scheme for 3D models using block rearrangements   Order a copy of this article
    by Modigari Narendra, M.L. Valarmathi, L. Jani Anbarasi, D.R.L. Prasanna 
    Abstract: Watermarking schemes help to secure the ownership, authenticity and copyright of electronic data. Digital watermarking is used to enhance the copyright protection of 3D models, thus preventing integrity violations. An efficient, robust, semi-blind and computationally secure DWT-based watermarking scheme is proposed for 3D models. First, block segmentation of the vertices is performed on the horizontally or vertically scrambled host 3D model. The secret watermark is embedded into the lower frequency band of each segment using the Discrete Wavelet Transform (DWT). The proposed scheme employs block rearrangement to scramble the host 3D model so as to enhance the security and fidelity of the embedded watermark. Reverse scrambling and the inverse DWT are applied to construct the watermarked model. The semi-blind extraction process successfully retrieves the secret watermark from the watermarked 3D model. A simulation study with a variety of 3D models demonstrated the performance of the proposed watermarking scheme in terms of resistance to geometrical attacks, robustness and degradation effects.
    Keywords: 3D watermarking; block segmentation; discrete wavelet transform; geometrical attacks.
    DOI: 10.1504/IJGUC.2020.10029974
  • Entropy-based classification of trust factors for cloud computing   Order a copy of this article
    by Ankita Sharma, Puja Munjal, Hema Banati 
    Abstract: Cloud computing has now been introduced in organisations all around the globe. With the growing prevalence of grid and distributed computing, it has become extremely important to maintain security and trust. Researchers have begun concentrating on mining information in cloud computing and identifying the basic factors of ethical trust. Ethical aspects in the cloud depend on the application and the current conditions. Data mining is a procedure for distinguishing the most significant information from a large amount of irregular data. In this paper, a three-phase methodology is adopted, involving machine learning techniques to discover the most important parameter on which trust is based in the cloud environment. The methodology was implemented on data sets, showing that privacy is the most important factor in calculating ethical trust in cloud computing. The results can be employed in real cloud environments to establish trust, as service providers can now consider privacy as the main issue in this relatively new distributed computing environment.
    Keywords: cloud computing; data mining; classification; decision tree; trust; entropy; multivariate regression.
    DOI: 10.1504/IJGUC.2020.10029811
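The entropy-based ranking of trust factors named in the abstract above works like attribute selection in decision-tree induction: the factor with the highest information gain is the most important. A sketch on invented toy data (the trust factors and labels are illustrative, not the paper's data sets):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a class-label list, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, labels):
    """Entropy reduction from splitting the data on attribute index `attr`."""
    gain = entropy(labels)
    n = len(rows)
    for value in set(r[attr] for r in rows):
        subset = [l for r, l in zip(rows, labels) if r[attr] == value]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Invented records: (privacy level, availability level) -> trusted?
rows = [("high", "high"), ("high", "low"), ("low", "high"),
        ("low", "low"), ("high", "high"), ("low", "high")]
labels = ["yes", "yes", "no", "no", "yes", "no"]

g_priv = information_gain(rows, 0, labels)   # privacy predicts trust perfectly
g_avail = information_gain(rows, 1, labels)  # availability carries no signal
```

A decision-tree learner would therefore split on privacy first, which mirrors the paper's finding that privacy dominates the other trust factors.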
  • Towards self-optimisation in fog computing environments   Order a copy of this article
    by Danilo Souza Silva, José Dos Santos Machado, Admilson De Ribamar Lima Ribeiro, Edward David Moreno Ordonez 
    Abstract: In recent years, the number of smart devices has grown exponentially. The computational demand of latency-sensitive applications has grown with it, and the traditional cloud computing model is no longer able to meet all the needs of this type of application on its own. In this direction, a new computing paradigm called fog computing was introduced. Many challenges still need to be overcome, especially regarding security, power consumption, high latency in communication with critical IoT applications, and the need for QoS. We present a container migration mechanism for fog and cloud computing that supports the implementation of optimisation strategies to achieve different objectives and to solve resource allocation problems. In addition, our work emphasises performance and latency optimisation through an autonomic architecture based on the MAPE-K control loop, and provides a foundation for the analysis and design of optimisation architectures supporting IoT applications.
    Keywords: fog computing; resource management; autonomic; IoT; service migration.
    DOI: 10.1504/IJGUC.2020.10029710
  • Improved African buffalo optimisation algorithm for petroleum product supply chain management   Order a copy of this article
    by Chinwe Peace Igiri, Yudhveer Singh, Deepshikha Bhargava, Samuel Shikaa 
    Abstract: Real-world supply chain networks are complex owing to large problem sizes and constraints. An optimal petroleum product schedule influences not only the distribution cost but also the overall product scheduling. Bio-inspired methods are a preferred alternative to exact algorithms because, unlike the latter, they do not require prior knowledge of the initial solution. This study proposes an improved African Buffalo Optimisation (ABO) algorithm for petroleum supply chain distribution. The ABO is a swarm-intelligence-based bio-inspired algorithm with a significant performance track record. It models the grazing and defending lifestyle of African buffaloes in the savannah. The chaotic ABO and chaotic-Levy ABO are improved ABO variants with outstanding performance in recent studies. The present study applies the standard ABO and its improved variants to obtain a near-optimal petroleum distribution scheduling solution. The comparative results show that the proposed approach outperforms existing exact algorithms.
    Keywords: supply chain network; computational intelligence; petroleum product scheduling; bio-inspired algorithm; swarm intelligence; African buffalo optimisation algorithm; chaotic African buffalo optimisation algorithm; chaotic-Levy flight African buffalo optimisation algorithm.
    DOI: 10.1504/IJGUC.2020.10032061
  • Towards an effective approach for architectural knowledge management considering global software development   Order a copy of this article
    by Muneeb Ali Hamid, Yaser Hafeez, Bushra Hamid, Mamoona Humayun, Noor Zaman Jhanjhi 
    Abstract: Architectural design is expected to deliver quality software products by satisfying customer requirements. A foremost concern of the customer is to have a better-quality product within a minimal time span. The evaporation of architectural knowledge creates problems for the quality of the system being developed. This research study aims to propose and validate a framework bridging the gaps in architectural knowledge management. A mixed research approach has been employed: to align closely with industry practices, action research was chosen as the research methodology, while the evaluation was performed using a multiple case study approach. The results show that the framework enables architects to cope with complex designs in distributed software development environments. The developed tool shifts the theory into practice by assisting in the creation of system architecture, supporting knowledge survival, and supporting architectural evolution under changing requirements.
    Keywords: knowledge management; architectural knowledge; GSD; global software development; design decision.
    DOI: 10.1504/IJGUC.2020.10032062
  • Hardware implementation of OLSR and improved OLSR (AOLSR) for AANETs   Order a copy of this article
    by Pardeep Kumar, Seema Verma 
    Abstract: Airborne Ad-hoc Networks (AANETs) are a subclass of Vehicular Ad-hoc Networks (VANETs). The major challenge in AANET implementation is frequent route breaks due to the very high mobility of aircraft. To deal with this routing challenge, we have designed a new protocol named Airborne OLSR (AOLSR), based on further optimisation of the MPR selection technique used in the existing OLSR protocol. A hardware implementation of any protocol is crucial for the validation of its design, so this manuscript also provides a hardware implementation of both the OLSR and AOLSR protocols using Verilog. The node architectures for both protocols have been implemented in Vivado. The implementation area, execution time and total power consumption of the node's sub-components have been used as parameters to compare their performance. The simulation analysis shows that the proposed AOLSR performs better than OLSR in terms of total power consumption and execution time.
    Keywords: AANETs; VANETs; AOLSR; OLSR; MPR selection; hardware implementation; power consumption; execution time.
    DOI: 10.1504/IJGUC.2020.10032068
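OLSR's MPR selection, which the abstract above says AOLSR further optimises, is essentially a greedy set cover over the two-hop neighbourhood. The sketch below shows the standard heuristic only (the paper's refinement is not reproduced, and the topology is invented):

```python
def select_mprs(two_hop):
    """Greedy MPR selection. `two_hop` maps each 1-hop neighbour to the set
    of 2-hop neighbours reachable through it; the result is a small set of
    1-hop neighbours covering every 2-hop neighbour."""
    uncovered = set().union(*two_hop.values())
    mprs = set()
    # Rule 1: a neighbour that is the sole path to some 2-hop node is mandatory.
    for node in set(uncovered):
        covers = [n for n, reach in two_hop.items() if node in reach]
        if len(covers) == 1:
            mprs.add(covers[0])
    for m in mprs:
        uncovered -= two_hop[m]
    # Rule 2: greedily add the neighbour covering the most remaining nodes.
    while uncovered:
        best = max(two_hop, key=lambda n: len(two_hop[n] & uncovered))
        mprs.add(best)
        uncovered -= two_hop[best]
    return mprs

# Invented neighbourhood: "x" is reachable only through "A", so "A" is forced.
two_hop = {"A": {"x", "y"}, "B": {"y", "z"}, "C": {"z"}}
mprs = select_mprs(two_hop)
```

Fewer MPRs mean fewer topology-control retransmissions, which is why refining this selection is the natural lever for reducing power consumption.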

Special Issue on: Current Trends in Ambient Intelligence-Enabled Internet of Things and Web of Things Interface Vehicular Systems

  • Intrusion detection technique using coarse Gaussian SVM   Order a copy of this article
    by Bhoopesh Singh Bhati, C.S. Rai 
    Abstract: In the new era of internet technology, everybody transfers data from place to place through the internet. As internet technology has improved, different types of attack have also increased. To protect transmitted information, it is important to detect these attacks. The role of an Intrusion Detection System (IDS) is imperative for detecting various types of attack. Although researchers have proposed numerous theories and methods in the area of IDS, research on intrusion detection is still ongoing. In this paper, a Coarse Gaussian Support Vector Machine (CGSVM) based intrusion detection technique is proposed. The proposed method has four major steps: data collection, preprocessing and studying the data, training and testing using the CGSVM, and decision making. In the implementation, the KDDcup99 datasets are used as a benchmark and the MATLAB programming environment is used. The simulation results are represented by Receiver Operating Characteristic (ROC) curves and a confusion matrix. The proposed method achieved high detection rates: 99.99%, 99.95%, 99.53%, 99.19%, and 90.57% for DOS, normal, probe, R2L, and U2R, respectively.
    Keywords: information security; intrusion detection; machine learning; CGSVM.
    DOI: 10.1504/IJGUC.2020.10026645
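The "coarse Gaussian" naming appears to follow MATLAB's Classification Learner presets, where (as an assumption about the abstract's setup) the coarse Gaussian SVM uses an RBF kernel with kernel scale sqrt(P)*4 for P predictors, versus sqrt(P)/4 for the fine preset. A sketch of the kernel itself on invented feature vectors:

```python
from math import exp, sqrt

def rbf_kernel(x, z, scale):
    """Gaussian (RBF) kernel with an explicit kernel scale:
    K(x, z) = exp(-||x - z||^2 / scale^2)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return exp(-sq / scale ** 2)

P = 41                     # e.g. the KDDcup99 feature count
coarse = sqrt(P) * 4       # wide "coarse" scale -> smoother decision boundary
fine = sqrt(P) / 4         # narrow "fine" scale, for comparison

# Two hypothetical preprocessed records differing by 1 in every feature.
x, z = [0.0] * P, [1.0] * P
k_coarse = rbf_kernel(x, z, coarse)    # stays close to 1: points look similar
k_fine = rbf_kernel(x, z, fine)        # collapses towards 0
```

The coarse scale makes distant points retain kernel similarity, trading boundary flexibility for robustness, which suits noisy intrusion data.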
  • Investigation of multi-objective optimisation techniques to minimise the localisation error in wireless sensor networks   Order a copy of this article
    by Harriet Puvitha, Saravanan Palani, V. Vijayakumar, Logesh Ravi, V. Subramaniyaswamy 
    Abstract: Wireless Sensor Networks (WSNs) play a major role in remote sensing environments. In recent trends, sensors are used in various wireless technologies owing to their small size, low cost and ability to communicate with each other to create a network. Sensor networks represent the convergence of microelectronic and electromechanical technologies. The localisation process determines the location of each node in the network. Mobility-assisted localisation is an effective technique for node localisation using a mobility anchor, which is also used to optimise path planning for the location-aware mobile node. In the proposed system, a multi-objective method minimises the distance between the source and the target node using the Dijkstra algorithm with obstacle avoidance. Grasshopper Optimisation Algorithm (GOA) and Butterfly Optimisation Algorithm (BOA) based multi-objective models are implemented along with obstacle avoidance and path planning. The proposed system maximises localisation accuracy while minimising localisation error and computation time compared with existing systems.
    Keywords: localisation models; grasshopper optimisation; butterfly optimisation; Dijkstra; path planning.
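The Dijkstra-with-obstacle-avoidance step mentioned in the abstract above can be sketched as a shortest-path search that simply skips blocked nodes. The graph and the blocked node below are invented for illustration; the paper plans paths for a mobility anchor over sensor positions:

```python
import heapq

def dijkstra(graph, src, dst, blocked=frozenset()):
    """Shortest path from src to dst; nodes in `blocked` (obstacles) are
    skipped. `graph` maps a node to a list of (neighbour, edge_cost) pairs."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, w in graph.get(u, []):
            if v in blocked:
                continue                      # obstacle avoidance
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None, float("inf")
    path, node = [dst], dst
    while node != src:                        # walk the predecessor chain back
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

graph = {
    "S": [("A", 2.0), ("B", 4.0)],
    "A": [("B", 1.0), ("T", 7.0)],
    "B": [("T", 3.0)],
}
path, cost = dijkstra(graph, "S", "T")                    # best route via B
path2, cost2 = dijkstra(graph, "S", "T", blocked={"B"})   # obstacle at B
```

Blocking B forces the planner onto the longer direct route through A, which is exactly the trade-off the multi-objective models then optimise against localisation error.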

  • Dynamic group key management scheme for clustered wireless sensor networks   Order a copy of this article
    by R. Vijaya Saraswathi, L. Padma Sree, K. Anuradha 
    Abstract: Group key management is a technique for establishing a shared group key between the cluster head and sensor nodes for multiple sessions in a clustered network environment. The common use of this established group key (also termed a conference key) is to permit users to encrypt and decrypt a particular broadcast message meant for the whole user group. In this work, we propose a cluster-based dynamic group key management protocol based on public key cryptography. The cluster head efficiently initiates the establishment of the group key among the sensor nodes and achieves secure communication. The computation of the common group key is then performed by each sensor node. Group members can join and leave a particular communication; in addition, any set of nodes meeting the threshold can compute a new conference key without the involvement of the cluster head. The proposed protocol is investigated in terms of security and complexity analysis using the network simulator NS-2.
    Keywords: key management; group key management; wireless networks; privacy; public key cryptography; network simulator.
    DOI: 10.1504/IJGUC.2020.10029963
  • Hybrid energy-efficient and QoS-aware algorithm for intelligent transportation system in IoT   Order a copy of this article
    by N.N. Srinidhi, G.P. Sunitha, S. Raghavendra, S.M. Dilip Kumar, Victor Chang 
    Abstract: The Internet of Things (IoT) consists of a large number of energy-consuming devices that are pre-configured to improve the effectiveness of several industrial applications. It is essential to reduce the energy use of every device deployed in the IoT network without compromising the Quality of Service (QoS) of an intelligent transportation system. To achieve this objective, a multi-objective optimisation problem is devised to estimate the outage performance of the clustering process and the network lifetime. Subsequently, a Hybrid Energy Efficient and QoS Aware (HEEQA) algorithm, a combination of Quantum Particle Swarm Optimisation (QPSO) and an improved Non-dominated Sorting Genetic Algorithm (NSGA), is proposed to achieve energy balance among the devices. NSGA is applied to solve the multi-objective optimisation problem, and the QPSO algorithm is used to find the optimal cooperative nodes and cluster heads in the clusters.
    Keywords: energy efficiency; intelligent transportation system; IoT; network lifetime; QoS.
    DOI: 10.1504/IJGUC.2020.10032054
  • Analysing control plane scalability issue of software defined wide area network using simulated annealing technique   Order a copy of this article
    by Kshira Sagar Sahoo, Somula Ramasubbareddy, Balamurugan Balusamy, B. Vikram Deep 
    Abstract: In Software Defined Networks (SDN), decoupling the control logic from the data plane provides numerous advantages. Since its inception, however, SDN has been the subject of a wide range of criticism, mainly related to the scalability of the control plane. To address these limitations, recent architectures have supported the implementation of multiple controllers, which leads to the Controller Placement Problem (CPP), particularly in wide area networks. Most placement strategies focus on propagation latency, because it is a critical factor in real networks. In this paper, the placement problem is formulated as an optimisation problem on the basis of propagation latency, and the Simulated Annealing (SA) technique is used to analyse it. Further, we investigate the behaviour of SA with four different neighbouring-solution techniques. The effectiveness of the algorithms is evaluated on the TataNld topology, implemented using the MATLAB simulator.
    Keywords: software defined networks; scalability; controller placement problem; simulated annealing; wide area network; controller; switches; propagation delay; topology zoo; greedy; inversion; translation.
    DOI: 10.1504/IJGUC.2020.10032055
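The simulated-annealing formulation in the abstract above can be sketched as follows. This is a generic SA sketch on invented random coordinates with a worst-case-distance cost, not the paper's propagation-latency model on TataNld, and it shows only one of the four neighbouring-solution techniques the paper compares (replace one controller at random):

```python
import math
import random

random.seed(42)

def cost(placement, nodes):
    """Worst-case switch-to-nearest-controller distance (latency proxy)."""
    return max(min(math.dist(n, c) for c in placement) for n in nodes)

def anneal(nodes, k, T0=1.0, alpha=0.995, steps=2000):
    """Simulated annealing for placing k controllers among `nodes`."""
    current = random.sample(nodes, k)
    best = list(current)
    T = T0
    for _ in range(steps):
        # Neighbouring solution: swap one controller for a random node.
        cand = list(current)
        cand[random.randrange(k)] = random.choice(nodes)
        delta = cost(cand, nodes) - cost(current, nodes)
        # Accept improvements always; accept worse moves with prob e^(-delta/T).
        if delta < 0 or random.random() < math.exp(-delta / T):
            current = cand
            if cost(current, nodes) < cost(best, nodes):
                best = list(current)
        T *= alpha                     # geometric cooling schedule
    return best

# Invented 30-node topology in the unit square; place 3 controllers.
nodes = [(random.random(), random.random()) for _ in range(30)]
placement = anneal(nodes, k=3)
```

The cooling schedule lets early iterations escape local optima (accepting worse placements), while late, cold iterations behave greedily.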
  • Energy-aware multipath routing protocol for internet of things using network coding techniques   Order a copy of this article
    by S. Sankar, P. Srinivasan, Somula Ramasubbareddy, B. Balamurugan 
    Abstract: Energy conservation is a significant challenge for prolonging the network lifetime in the Internet of Things (IoT). In this paper, we propose an energy-aware multipath routing protocol (EAM-RPL) to extend the network lifetime. The multipath model establishes multiple paths from the source node to the sink. In EAM-RPL, the source node applies randomised linear network coding to encode the data packets and transmits them to the next cluster level of nodes. The intermediate nodes receive the encoded data packets and forward them to the next cluster of nodes. Finally, the sink node receives the data packets and decodes the original data packets sent by the source node. The performance of the proposed EAM-RPL is compared with the RPL protocol. The simulation results show that the proposed EAM-RPL improves the packet delivery ratio by 3-5% and prolongs the network lifetime by 5-10%.
    Keywords: internet of things; network coding; IPv6 routing protocol; low power; lossy networks; multipath routing.
    DOI: 10.1504/IJGUC.2020.10032056
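The encode/decode cycle of randomised linear network coding, as used by the abstract above, can be sketched over GF(2), where coding coefficients are bits and combining is XOR. Practical RLNC typically uses a larger field such as GF(2^8); the packets below are invented small integers standing in for payload words:

```python
import random

random.seed(1)

def encode(packets, n_coded):
    """Each coded packet is the XOR of a random non-empty subset of the
    source packets; the subset is carried as a coefficient bitmask."""
    coded = []
    for _ in range(n_coded):
        coeffs = 0
        while coeffs == 0:
            coeffs = random.getrandbits(len(packets))
        payload = 0
        for i in range(len(packets)):
            if coeffs >> i & 1:
                payload ^= packets[i]
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    """Recover the k source packets by Gaussian elimination over GF(2).
    Returns None until k linearly independent (innovative) packets arrive."""
    pivot = {}                               # leading-bit column -> (coeffs, payload)
    for coeffs, payload in coded:
        while coeffs:
            top = coeffs.bit_length() - 1
            if top not in pivot:
                pivot[top] = (coeffs, payload)
                break
            pc, pp = pivot[top]              # eliminate the leading bit
            coeffs ^= pc
            payload ^= pp
        # coeffs reduced to 0 => packet was linearly dependent; discard it.
    if len(pivot) < k:
        return None
    # Back-substitution: reduce every pivot row to a single coefficient bit.
    for col in sorted(pivot):
        pc, pp = pivot[col]
        for hi in list(pivot):
            if hi > col and pivot[hi][0] >> col & 1:
                c2, p2 = pivot[hi]
                pivot[hi] = (c2 ^ pc, p2 ^ pp)
    return [pivot[i][1] for i in range(k)]

packets = [0b1010, 0b0111, 0b1100]           # invented source payloads
k = len(packets)
coded, recovered = [], None
while recovered is None:                     # sink keeps collecting until decodable
    coded += encode(packets, 1)
    recovered = decode(coded, k)
```

The point of the scheme for multipath routing is that the sink needs any k innovative packets, from any mix of paths, rather than specific retransmissions.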

Special Issue on: IoT Integration in Next-Generation Smart City Planning

  • Internet of things based architecture for additive manufacturing interface   Order a copy of this article
    by Swee King Phang, Norhijazi Ahmad, Chockalingam Vaithilingam Aravind, Xudong Chen 
    Abstract: This paper addresses the current challenges in managing multiple additive manufacturing units (i.e., 3D printers) without an online system. Managing multiple 3D printers is troublesome and difficult. The traditional process of selecting free printers and monitoring printing statuses manually reveals a major flaw in the system, as it requires physical interaction between machine and human. As of today, there are few, if any, online management systems for 3D printers. Most printing still requires human monitoring, and the work to be printed must be fed physically to the printer via external drives. In this paper, a solution requiring zero physical interaction with additive manufacturing units is proposed. The objective is achieved by using mature IoT technologies. A web server hosts a webpage for uploading files, obtaining approval, and checking printing status. A server stores the files, the slicing software, the file queueing system and temporary information on each manufacturing unit's status. Cameras on multiple 3D printers are used as sensors to monitor project progress visually. In the final IoT-based 3D printing system, the user is able to upload files, request superior approval (optional), queue jobs to a specific manufacturing unit according to the algorithm set on the cloud server, receive important data from the server such as time estimates, progress percentage and extruder temperature, and receive notifications of errors, if any issues arise, and of completion. The proposed system is implemented and verified in the Additive Manufacturing Lab at Taylor's University, Malaysia.
    Keywords: additive manufacturing units; 3D printing; online printing; printer management; cloud printing; printing networking; IoT printer; printing monitoring; heat monitor.

  • Enhanced authentication and access control in internet of things: a potential blockchain-based method   Order a copy of this article
    by Syeda Mariam Muzammal, Raja Kumar Murugesan 
    Abstract: With the rapid growth of Internet of Things (IoT), it can be foreseen that IoT services will be influencing several use-cases. IoT brings along the security and privacy issues that may hinder its widescale adoption. The scenarios in IoT applications are quite dynamic compared with the traditional internet. It is vital that only authenticated and authorised users get access to the services provided. Hence, there is a need for a novel authentication and access control technique that is compatible and practically applicable in diverse IoT scenarios to provide adequate security to devices and data communicated. This article aims to highlight the potential of blockchain for enhanced and secured authentication and access control in IoT. The proposed method relies on blockchain technology, which tends to eliminate the limitations of intermediaries for authentication and access control. Compared with existing systems, it has advantages of decentralisation, secured authentication, authorisation, adaptability, and scalability.
    Keywords: internet of things; security; authentication; access control; blockchain.

  • Control and monitoring of air-conditioning units through cloud storage and control operations   Order a copy of this article
    by Chockalingam Aravind Vaithilingam, Mohsen Majrani 
    Abstract: Temperature control and monitoring of air conditioning units is critically important for energy savings. The purpose of this work is to design and develop an air conditioner monitoring system for monitoring and control using the Internet of Things. The developed system uses an integrated mobile app with a cloud service that enables users to monitor and control the unit's operations. The system consists of three subsystems: a micro-controller, cloud storage and a mobile app. The micro-controller collects data from a pressure transducer, differential pressure sensor, current transformer, accelerometer, and temperature and humidity sensor. The data collected by the Arduino is sent to the cloud storage platform using a REST API. The cloud storage can store the data, display it graphically, and send notifications to specific users when a rule is activated. A hybrid mobile app is also developed with the Ionic Framework; it displays the data stored in cloud storage, fetched via the REST API. The system is able to monitor several critical parameters of the air conditioner: differential air pressure, refrigerant pressure, power, angle of vibration on the x-axis and y-axis, temperature and humidity. With the data collected, an algorithm to monitor and control the performance of such an air conditioning system through this embedded module is envisioned as part of future energy-efficient systems.
    Keywords: condition monitoring; internet of things; mobile app; cloud storage.

  • Forecasting of solar potential and investigation of voltage stability margin using FACTs device: a synopsis from Geography of Things perspective   Order a copy of this article
    by Masum Howlader, Khandaker Sultan Mahmood, Md.Golam Zakaria, Kazi Mahtab Kadir, Mirza Mursalin Iqbal 
    Abstract: The uncertain and erratic nature of solar renewable energy is quite distinct from traditional, dispatchable generation fuels and is laborious to integrate into conventional system operation. In the first part of this work, a machine-learning algorithm is used to train models on solar irradiance data and different meteorological weather information to predict solar irradiance for different cities and validate the forecasting model. The data for modelling purposes is taken from publicly available Geographical Information System (GIS) data. It could realistically be collected using Internet of Things (IoT) devices and sensors, which, if based on a GIS approach, transform the system into a Geography of Things (GoT). The intermittent and inertia-less nature of photovoltaic systems can also produce significant power oscillations that cause problems for the dynamic stability of the power system and limit the penetration capacity of photovoltaics into the grid. In the second part, it is shown that a residue-based power oscillation damping (POD) controller significantly improves inter-area oscillation damping. The validity and effectiveness of the proposed controller are demonstrated by simulation on a three-machine, two-area test system that combines conventional synchronous generators and Flexible AC Transmission System (FACTS) devices. Overall, this report provides an in-depth analysis of the challenges of integrating solar resources with the planning, operation and, particularly, the stability of the rest of the power grid, including existing generation resources, customer requirements and the transmission system itself, leading to improved decision making in resource allocation and grid stability.
    Keywords: solar forecasting; static var compensator; support vector machine; power oscillation damping; geography of things.
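The abstract names a support vector machine for irradiance forecasting; as a minimal self-contained illustration of the same regression-from-weather-features idea, the sketch below uses RBF kernel ridge regression (a closely related kernel method, swapped in here for brevity) on hypothetical meteorological features. The feature names and the synthetic irradiance formula are invented for the example, not taken from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict(X_train, y_train, X_test, lam=1e-2, gamma=0.5):
    """Kernel ridge regression: solve (K + lam*I) alpha = y, then
    predict with the cross-kernel between test and training points."""
    K = rbf_kernel(X_train, X_train, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_test, X_train, gamma) @ alpha

# Toy meteorological features: [temperature, cloud cover, humidity] (hypothetical)
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(40, 3))
y = 800 * X[:, 0] * (1 - X[:, 1])   # synthetic "irradiance" target, W/m^2
pred = fit_predict(X[:30], y[:30], X[30:])
print(pred.shape)
```

A real pipeline would replace the synthetic features with GIS/IoT measurements and hold out entire cities for validation, as the paper describes.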

Special Issue on: CONIITI 2019 Intelligent Software and Technological Convergence

  • A computer-based decision support system for knowledge management in the swine industry   Order a copy of this article
    by Johanna Trujillo-Díaz, Milton M. Herrera, Flor Nancy Díaz-Piraquive 
    Abstract: The swine industry contributes to food security around the world. However, the most vulnerable point in the industry is the pig production cycle, which generates an imbalance between supply and demand that affects profitability. This paper describes a computer-based decision support system for knowledge management (KM) that contributes to improving profitability performance management in the swine industry. The system allows decision makers to assess alternatives along the dimensions of KM capacity and profitability performance. The tool helps generate integration strategies for the swine industry; four simulation scenarios were designed to represent a pig company in the Colombian case.
    Keywords: decision support system; simulation; swine; knowledge management; system dynamics.

  • Computational intelligence system applied to plastic microparts manufacturing process   Order a copy of this article
    by Andrés Felipe Rojas Rojas, Miryam Liliana Chaves Acero, Antonio Vizan Idoipe 
    Abstract: In the search for knowledge and technological development, there has been an increase in new analysis and processing techniques that are closer to human reasoning. With the growth of computational systems, hardware production needs have also increased. Parts with millimetric to micrometric features are required for optimal system performance, so the demand for injection moulding is also increasing. Injection moulding is a complex manufacturing process because its mathematical modelling is not yet established; therefore, computational intelligence can address the selection of correct values for the injection variables. This article presents the development of a computational intelligence system that integrates fuzzy logic and neural network techniques with a CAE modelling system to support injection machine operators in selecting optimal machine process parameters to produce good-quality microparts in fewer process cycles. Tests carried out with this computational intelligence system have shown a 30% improvement in the efficiency of the injection process cycles.
    Keywords: computational intelligence; neural networks; fuzzy logic; micro-parts; plastic parts; computer vision; expert systems; injection processes; CAD; computer-aided design systems; CAE; computer-aided engineering.

Special Issue on: Novel Hybrid Artificial Intelligence for Intelligent Cloud Systems

  • QoS-driven hybrid task scheduling algorithm in a cloud computing environment   Order a copy of this article
    by Sirisha Potluri, Sachi Mohanty, Sarita Mohanty 
    Abstract: Cloud computing is a growing technology of distributed computing. Typically, cloud services are deployed to individuals or organisations to allow sharing of resources, services and information over the internet, on user demand. CloudSim is a simulator tool used to simulate cloud scenarios. This paper proposes a QoS-driven hybrid task scheduling architecture and algorithm for dependent and independent tasks in a cloud computing environment. The results are compared against the Min-Min task scheduling algorithm and a QoS-driven independent task scheduling algorithm. The QoS-driven hybrid task scheduling algorithm is evaluated with time and cost as QoS parameters and gives better results for both.
    Keywords: cloud computing; task scheduling; quality of service.
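As an illustration of the Min-Min baseline that the paper compares against, the toy sketch below (not the authors' CloudSim implementation; all task/VM timings are hypothetical) repeatedly assigns the task with the smallest earliest-completion time to the VM that achieves it:

```python
def min_min_schedule(etc, n_vms):
    """Min-Min heuristic. etc[t][v] = expected execution time of task t
    on VM v. Returns a task->VM mapping and the resulting makespan."""
    ready = [0.0] * n_vms              # when each VM becomes free
    unscheduled = set(range(len(etc)))
    assignment = {}
    while unscheduled:
        # pick the (task, VM) pair with the smallest completion time
        t, v, finish = min(
            ((t, v, ready[v] + etc[t][v])
             for t in unscheduled for v in range(n_vms)),
            key=lambda x: x[2],
        )
        assignment[t] = v
        ready[v] = finish
        unscheduled.remove(t)
    return assignment, max(ready)

etc = [[3, 5], [2, 4], [6, 1]]         # 3 tasks, 2 VMs (toy times)
mapping, makespan = min_min_schedule(etc, 2)
print(mapping, makespan)
```

A QoS-driven variant would extend the `key` function to weight cost alongside completion time, which is the trade-off the paper evaluates.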

Special Issue on: ICIMMI 2019 Emerging Trends in Multimedia Processing and Analytics

  • Handwritten Odia numeral recognition using combined CNN-RNN   Order a copy of this article
    by Abhishek Das, Mihir Narayan Mohanty 
    Abstract: Detection and recognition are major tasks in current research; almost every area of signal processing, including speech and image processing, contains them as sub-tasks. Data compression is mainly used in multimedia communication, where recognition is a major challenge. Keeping these facts in view, the authors have taken an approach to handwritten numeral recognition. To augment the available data, a generative adversarial network is used to generate synthetic samples, which are considered along with the original data. The database is collected from IIT Bhubaneswar and used in a GAN model to generate a large amount of data. Further, a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) are combined for the recognition task. Though Odia numerals are somewhat complex, the recognition task proved very interesting; little work has been done in this direction, and deep learning approaches have been absent. Long Short Term Memory (LSTM) cells are used as the recurrent units in this approach. We added 1000 images generated by a Deep Convolutional Generative Adversarial Network (DCGAN) to the IIT-BBSR dataset. The Adam optimisation algorithm is used to minimise the error, and the network is trained with supervised learning. The method achieves 98.32% accuracy.
    Keywords: character recognition; Odia numerals; deep learning; CNN; RNN; LSTM; DCGAN; Adam optimisation.
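The LSTM recurrent unit named in the abstract can be illustrated with a single forward step in NumPy. This is a generic textbook LSTM cell with random toy weights, not the authors' trained CNN-RNN model; the gate ordering and dimensions are assumptions of the sketch.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order in the stacked weights: input, forget, cell, output."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = 1 / (1 + np.exp(-z[:H]))        # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))     # forget gate
    g = np.tanh(z[2*H:3*H])             # candidate cell state
    o = 1 / (1 + np.exp(-z[3*H:]))      # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(1)
D, H = 8, 4                             # e.g. D features per image row
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
h = c = np.zeros(H)
for row in rng.normal(size=(8, D)):     # feed 8 "rows" as a sequence
    h, c = lstm_step(row, h, c, W, U, b)
print(h.shape)
```

In a CNN-RNN pipeline of the kind described, the CNN's feature maps would play the role of the input sequence fed to these recurrent steps.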

  • An optimal channel state information feedback design for improving the spectral efficiency of device-to-device communication   Order a copy of this article
    by Prabakar Dakshinamoorthy, Saminadan Vaitilingam 
    Abstract: This article introduces a regularised zero-forcing (RZF) based channel state information (CSI) feedback design for improving the spectral efficiency of device-to-device (D2D) communication. The proposed method combines a conventional feedback design with optimised CSI to regulate communication flows in the communicating environment. The codebook-dependent precoder design improves the feedback rate by streamlining time/frequency-dependent scheduling. Incoming communication traffic is scheduled across the available channels by pre-estimating their adaptability and capacity across the underlying network. This lets the communicating devices exchange partial channel information without relying on base station services. These features reduce transmission error rates and achieve a better sum rate irrespective of the distance and transmit power of the devices.
    Keywords: CSI; D2D; feedback design; precoding; zero-forcing.
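A regularised zero-forcing precoder of the kind the abstract builds on can be sketched as follows (toy dimensions and regularisation constant; the notation W = H^H (H H^H + alpha I)^(-1) is the standard RZF form, assumed here rather than taken from the paper):

```python
import numpy as np

def rzf_precoder(H, alpha):
    """Regularised zero-forcing: W = H^H (H H^H + alpha I)^(-1),
    with columns normalised to unit power. H is (users x tx_antennas)."""
    K = H.shape[0]
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
    return W / np.linalg.norm(W, axis=0, keepdims=True)

rng = np.random.default_rng(2)
# 2 users, 4 transmit antennas; Rayleigh-like complex channel entries
H = (rng.normal(size=(2, 4)) + 1j * rng.normal(size=(2, 4))) / np.sqrt(2)
W = rzf_precoder(H, alpha=0.1)
# The effective channel H @ W should be close to diagonal,
# i.e. low cross-user leakage, which is what boosts spectral efficiency.
eff = H @ W
print(np.round(np.abs(eff), 2))
```

Larger `alpha` trades interference suppression for robustness to noisy or partial CSI, which is the regime relevant to limited feedback designs.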

Special Issue on: WETICE-2019 Novel Approaches to the Management and Protection of Emerging Distributed Computing Systems

  • Benchmarking management techniques for massive IIoT time series in a fog architecture   Order a copy of this article
    by Sergio Di Martino, Adriano Peron, Alberto Riccabone, Vincenzo Norman Vitale 
    Abstract: Within the Industrial Internet of Things (IIoT) scenario, the online availability of a growing number of assets in factories is enabling the collection of huge amounts of data. These data can be used for big data analytics, with great potential for efficiency improvements and business growth. Each asset produces collections of time series, namely data streams, that must be handled with specific techniques providing effective ingestion and retrieval performance in complex network architectures, while maintaining compliance with company and infrastructure boundaries. In this paper, we describe an industrial experience in the management of massive time series from instrumented machinery, conducted in a plant of Avio Aero (part of General Electric Aviation). As a first step, we propose a fog-based architecture to ease the collection of these massive datasets, supporting local and remote data analytics tasks. Then, we present the results of an empirical comparison of four database management systems, namely PostgreSQL, Cassandra, MongoDB and InfluxDB, in the ingestion and retrieval of gigabytes of real IIoT data collected from an instrumented dressing machine. In more detail, we tested different settings and indexing features offered by these DBMSs under different types of query. Results show that, in the investigated context, InfluxDB provides very good performance, but PostgreSQL can still be a very interesting alternative when more advanced settings are exploited. MongoDB and Cassandra, on the other hand, do not match the performance of the other two DBMSs.
    Keywords: big data; time series; IIoT; fog architecture; TSMS; NoSQL database; relational database; benchmarking.
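The ingestion-vs-retrieval benchmarking pattern the paper applies to PostgreSQL, Cassandra, MongoDB and InfluxDB can be sketched with an embedded SQLite database as a stand-in (the schema, sensor name and data volumes below are invented for the example; only the measurement pattern is illustrated):

```python
import sqlite3, time, random

def benchmark(conn, n_points):
    """Time bulk ingestion and a range-retrieval query over a toy
    (timestamp, sensor, value) time series."""
    cur = conn.cursor()
    cur.execute("CREATE TABLE ts (t INTEGER, sensor TEXT, value REAL)")
    rows = [(i, "dresser-1", random.random()) for i in range(n_points)]
    t0 = time.perf_counter()
    cur.executemany("INSERT INTO ts VALUES (?, ?, ?)", rows)
    conn.commit()
    ingest_s = time.perf_counter() - t0
    cur.execute("CREATE INDEX idx_t ON ts (t)")  # indexing feature under test
    t0 = time.perf_counter()
    cur.execute("SELECT avg(value) FROM ts WHERE t BETWEEN ? AND ?",
                (n_points // 4, n_points // 2))
    avg = cur.fetchone()[0]
    query_s = time.perf_counter() - t0
    return ingest_s, query_s, avg

conn = sqlite3.connect(":memory:")
ingest_s, query_s, avg = benchmark(conn, 10_000)
print(f"ingest {ingest_s:.4f}s, query {query_s:.4f}s, avg={avg:.3f}")
```

Repeating such a harness across DBMSs, index configurations and query types yields comparison tables of the kind the paper reports.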

  • DIOXIN: runtime security policy enforcement of Fog Computing applications   Order a copy of this article
    by Enrico Russo, Luca Verderame, Alessandro Armando, Alessio Merlo 
    Abstract: Fog Computing is an emerging distributed computational paradigm that moves the computation towards the edge (i.e., where data are produced). Although Fog operating systems provide basic security mechanisms, security controls over the behaviour of applications running on Fog nodes are limited. For this reason, applications are prone to a variety of attacks. We show how current Fog operating systems (with a specific focus on Cisco IOx) are actually unable to prevent these attacks. We propose a runtime policy enforcement mechanism that allows for the specification and enforcement of user-defined security policies on the communication channels adopted by interacting Fog applications. We prove that the proposed technique reduces the attack surface of Fog Computing w.r.t. malicious applications. We demonstrate the effectiveness of the proposed technique by carrying out an experimental evaluation against a realistic Fog-based IoT scenario for smart irrigation.
    Keywords: Fog Computing; security assessment; Cisco IOx; runtime monitoring.
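The idea of enforcing user-defined security policies on the communication channels between Fog applications can be illustrated with a toy interposition filter (a hedged sketch: the channel API, app names and topic scheme are hypothetical, not the DIOXIN or Cisco IOx interfaces):

```python
def make_channel(policy):
    """Wrap a channel send() so that every message is checked against a
    user-defined policy, a predicate over (sender, topic, payload)."""
    log = []
    def send(sender, topic, payload):
        if not policy(sender, topic, payload):
            log.append(("blocked", sender, topic))
            return False
        log.append(("delivered", sender, topic))
        return True
    return send, log

# Hypothetical policy for the smart-irrigation scenario: only the
# irrigation app may publish actuator commands.
policy = lambda sender, topic, payload: not (
    topic.startswith("actuator/") and sender != "irrigation-app"
)
send, log = make_channel(policy)
send("irrigation-app", "actuator/valve1", "open")
send("rogue-app", "actuator/valve1", "open")
print(log)
```

Enforcing the check inside the channel itself, rather than in each application, is what shrinks the attack surface exposed to malicious apps.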

  • Black-box load testing to support autoscaling web applications in the cloud   Order a copy of this article
    by Marta Catillo, Luciano Ocone, Massimiliano Rak, Umberto Villano 
    Abstract: One of the most interesting features of cloud environments is the possibility of deploying scalable applications, which can automatically modulate the amount of leased resources to adapt to load variations and guarantee the desired level of quality of service. As autoscaling has severe implications for execution costs, making optimal choices is of paramount importance. This paper presents a method based on off-line black-box load testing that makes it possible to obtain performance indexes of a web application in multiple configurations under realistic load. These indexes, along with available resource cost information, can be exploited by autoscaler tools to implement the desired scaling policy, trading off cost against user-perceived performance.
    Keywords: autoscaling; cloud computing; load testing.
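A minimal black-box load test of the kind described, which treats the application as opaque and measures only externally observable latency and throughput, can be sketched as follows (the request function is a simulated stub standing in for real HTTP calls to the application under test):

```python
import time, random, statistics
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for an HTTP request to the web application under test."""
    latency = random.uniform(0.001, 0.005)
    time.sleep(latency)
    return latency

def load_test(n_clients, n_requests):
    """Fire requests from concurrent clients; report throughput and
    latency percentiles, the indexes an autoscaler policy would consume."""
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_clients) as pool:
        latencies = list(pool.map(lambda _: fake_request(), range(n_requests)))
    wall = time.perf_counter() - t0
    latencies.sort()
    return {
        "throughput_rps": n_requests / wall,
        "p50_ms": 1000 * latencies[len(latencies) // 2],
        "p95_ms": 1000 * latencies[int(0.95 * len(latencies))],
    }

report = load_test(n_clients=8, n_requests=100)
print(report)
```

Running this off-line for each candidate configuration, then joining the resulting indexes with per-configuration resource costs, is the trade-off table the paper proposes feeding to the autoscaler.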

  • LISA: a lean information service architecture for SLA management in multi-cloud environments   Order a copy of this article
    by Nicola Sfondrini, Gianmario Motta 
    Abstract: Cloud computing emerged as a disruptive technology for managing IT services over the Internet, evolving from grid computing, utility computing and Software-as-a-Service (SaaS). After an initial scepticism, international companies are widely migrating their IT workload to private and public clouds to optimise geographical coverage and launch new digital services. Currently, hybrid and multi-cloud environments introduce additional complexity in managing the Quality of Service (QoS), therefore requiring more sophisticated Service Level Agreements (SLAs). To deal with these issues, our research developed an SLA-aware Lean Information Service Architecture (LISA) for managing multi-cloud environments that supports users throughout the whole service lifecycle. LISA's performance was tested in the Innovation Lab of a global telco operator, by deploying services on private cloud and public cloud providers. Experimental results prove not only LISA's effectiveness but also its efficiency in various aspects, such as preventing SLA violations and service performance degradations, optimising the QoS, and controlling the service components deployed across multiple public cloud providers.
    Keywords: multi-cloud; SLA; service level management; QoS; cloud broker; cloud SLM framework; SLA-aware resource allocation.

Special Issue on: Intelligent Evaluations of Resource Management Techniques in Fog Computing

  • Web data mining algorithm based on cloud computing environment   Order a copy of this article
    by Yunpeng Liu, Xiaolong Gu, Jie Zhang 
    Abstract: With the rapid development of the internet, information grows exponentially every day, and single-node computation and storage have become a bottleneck for analysing it. Data mining technology is used to quickly extract valuable rules and patterns from massive, noisy data and to make them easy to understand and apply directly. Cloud computing is selected for web data mining processing because of its low cost, large throughput, good fault tolerance and strong stability. This paper studies the K-means clustering algorithm and improves it to overcome its inherent shortcomings, then ports the improved algorithm to the Hadoop cloud computing platform, where it is parallelised and optimised. Experimental results on effectiveness and speed-up ratio show that the improved and optimised algorithm solves the problem of insufficient speed and efficiency in the clustering process.
    Keywords: cloud computing; data mining; clustering algorithm; K-means algorithm.
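The baseline K-means algorithm that the paper improves and ports to Hadoop can be sketched in a few lines. The assign step is embarrassingly parallel, which is what makes a MapReduce port natural: the "map" phase emits (nearest centroid, point) pairs and the "reduce" phase averages each bucket. This is a generic toy implementation, not the authors' improved variant.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain K-means over tuples. Comments mark the phases that a
    Hadoop/MapReduce implementation would distribute."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in points:                      # "map": nearest-centroid assignment
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[j])))
            buckets[j].append(p)
        for j, bucket in enumerate(buckets):  # "reduce": recompute means
            if bucket:
                centroids[j] = tuple(sum(xs) / len(bucket)
                                     for xs in zip(*bucket))
    return centroids

pts = [(0.1, 0.0), (0.0, 0.2), (5.0, 5.1), (5.2, 4.9)]
cents = sorted(kmeans(pts, 2))
print(cents)
```

Typical improvements of the kind the abstract alludes to target the weak spots visible here: sensitivity to the random initial sample and the fixed choice of k.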

  • Intelligent manufacturing system based on data mining algorithm   Order a copy of this article
    by Xiaoya Liu, Qiongjie Zhou 
    Abstract: How to reasonably apply data mining methods to intelligent manufacturing systems is a major issue facing the manufacturing industry. This article focuses on an evaluation model for an intelligent manufacturing system based on a data mining algorithm. Combining the data mining algorithm with the intelligent manufacturing system, an evaluation model of the intelligent manufacturing system is successfully established. A neural network is selected for the final evaluation. After training, error analysis is performed, and the problems that arise in the optimisation algorithm, feature selection or data collection are analysed. The highest accuracy rate of the training group was 69%, and the highest accuracy rate of the test group was 32.5%. The results show that using data mining algorithms for recognition can effectively cluster control chart patterns and improve recognition efficiency.
    Keywords: data mining algorithm; intelligent manufacturing system; evaluation model; error analysis.

  • Visualisation technology in digital intelligent warehouse management system   Order a copy of this article
    by GuangHai Tang, Hui Zeng 
    Abstract: This study introduces visualisation technology into digital intelligent warehouse management, combining RFID technology and Web GIS technology. Pressure and performance tests show that user waiting time is short and system performance is stable; the designed system meets the needs of business operations, providing warehouse management personnel with real-time information on goods location and inventory and generating various reports and data. The results show that the system can reduce storage management costs by up to 40.3%, cut management time by nearly half, and greatly improve management efficiency. At the same time, owing to the use of intelligent information tools, it can also reduce mistakes caused by manual operation and improve the competitiveness of enterprises.
    Keywords: visualisation technology; intelligent warehouse management; RFID technology; web GIS technology.

  • Image recognition technology based on neural network in robot vision system   Order a copy of this article
    by Yinggang He 
    Abstract: Robot vision systems have great research value and broad application prospects in robot navigation and positioning, human-computer interaction, unmanned driving, disaster rescue and other fields, in which image recognition technology plays an important role. The purpose of this study is to analyse the application of neural-network-based image recognition technology in a robot vision system. The model is first trained with a decoder on the CamVid dataset, then its parameters are fine-tuned on the collected data; the manually collected data are labelled with the LabelMe annotation tool, and images and scenes are cross-validated using a neural network algorithm and image recognition techniques. After five training cycles, the neural network achieves more than 90% recognition accuracy, and it converges after about 10 cycles. On the test dataset, the recognition accuracy reaches more than 95%. Within the robot's visual recognition range, the maximum measurement deviation is only 2.54 cm and the error is less than 2%. It can be concluded that the method has fast convergence, high recognition accuracy, small error, and good practicability and effectiveness, improving the robot's recognition efficiency, its handling of complex environments and its precise positioning of objects.
    Keywords: neural network; image recognition; machine vision; recognition system.

  • Mechanical fault detection method for weighing device sensors based on the internet of things   Order a copy of this article
    by Yan Dong, Shiying Bi 
    Abstract: With the advancement of science and technology, electronic scales containing load sensors have been widely used in various industries to achieve fast and accurate material weighing. Especially with the advent of microprocessors and the continuous improvement of automation in industrial production processes, load sensors have become a necessary device in weighing process control, but there is currently no method for the mechanical fault diagnosis of load sensors. This work samples the zero-point output signal of the weight sensor, takes n consecutive values with a sliding window, and computes their standard deviation. The ratio of this standard deviation to the normal output standard deviation is used as the test statistic: when the ratio is greater than a set threshold, the sensor is faulty; otherwise there is no fault. In the experiments, 20 normal output values are randomly selected from the zero-point test data of the weighing sensor, the standard deviations of sequences drawn from these 20 values are calculated, and their average is used as the sensor's zero-drift reference. This method can monitor the running status of multiple devices in real time, predict the time of equipment failure, and detect creep faults as early as possible. By setting a critical value, the system can indicate possible faults before an absolute limit is reached, ensuring maintenance in advance so that machinery and equipment continue normal operation.
    Keywords: load sensor; fault diagnosis; signal sampling; creep fault.
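The sliding-window test described above, flagging a fault when the window's standard deviation exceeds a multiple of the normal zero-point standard deviation, can be sketched as follows (the sample values, window size and threshold are invented for illustration):

```python
from collections import deque
from statistics import pstdev

def detect_fault(samples, window, baseline_std, threshold):
    """Slide a window of n consecutive zero-point samples; flag a fault
    when std(window) / baseline_std exceeds the set threshold."""
    buf = deque(maxlen=window)
    for i, s in enumerate(samples):
        buf.append(s)
        if len(buf) == window and pstdev(buf) / baseline_std > threshold:
            return i                     # sample index where the fault is flagged
    return None                          # no fault detected

normal = [0.01, -0.02, 0.00, 0.02, -0.01] * 4    # stable zero-point output
drift = [0.1 * k for k in range(10)]             # creep-like drift begins
baseline = pstdev(normal)                        # reference from normal data
idx = detect_fault(normal + drift, window=5, baseline_std=baseline, threshold=3.0)
print(idx)
```

Because the window mixes the last normal samples with the first drifting ones, the alarm fires a couple of samples into the drift, which is the early-warning behaviour the method aims for.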