Forthcoming and Online First Articles


International Journal of Grid and Utility Computing (IJGUC)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.


Online First articles are published online here, before they appear in a journal issue. Online First articles are fully citeable, complete with a DOI. They can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked with the Open Access icon are Online First articles. They are freely available and openly accessible to all, without any restriction except those stated in their respective CC licenses.


International Journal of Grid and Utility Computing (51 papers in press)

Regular Issues

  • Research on modelling analysis and maximum power point tracking strategies for distributed photovoltaic power generation systems based on adaptive control technology
    by Yan Geng, Jianwei Ji, Bo Hu, Yingjun Ju 
    Abstract: Distributed photovoltaic power generation has developed rapidly in recent years, yet its cost remains much higher than that of traditional generation modes, so improving the effective use of photovoltaic cells has become a popular research direction. Based on an analysis of the characteristics of photovoltaic cells, this paper presents a mathematical model of photovoltaic cells and a maximum power point tracking (MPPT) algorithm that combines hysteresis control with an adaptive, variable-step perturb-and-observe method. The algorithm balances the control precision and tracking speed of the classical perturb-and-observe method and improves tracking results significantly. Finally, the feasibility of the algorithm and its tracking performance are demonstrated by simulation in Matlab/Simulink. (A short illustrative sketch follows this entry.)
    Keywords: distributed photovoltaic; adaptive control technology; maximum power point tracking strategies.
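
    A minimal Python sketch of the variable-step perturb-and-observe loop described above (an editorial illustration, not the authors' implementation; the gain k, the step limits and the hysteresis band are assumed values):

        # One P&O iteration: perturb the duty cycle in the direction that
        # increased power; the step size scales with |dP/dV| (adaptive),
        # and a hysteresis band suppresses reactions to measurement noise.
        def mppt_step(v, p, state, k=0.05, step_min=0.001, step_max=0.02, band=0.5):
            dp, dv = p - state['p'], v - state['v']
            if abs(dp) > band:                      # hysteresis: ignore tiny changes
                step = min(max(k * abs(dp / dv), step_min), step_max) if dv else step_min
                state['duty'] += step if dp * dv > 0 else -step
                state['duty'] = min(max(state['duty'], 0.0), 1.0)
            state['v'], state['p'] = v, p
            return state['duty']

        state = {'v': 30.0, 'p': 120.0, 'duty': 0.5}
        duty = mppt_step(30.5, 124.0, state)        # larger dP -> larger corrective step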

  • Cloud infrastructure planning: models considering an optimisation method, cost and performance requirements
    by Jamilson Dantas, Rubens Matos, Carlos Melo, Paulo Maciel 
    Abstract: Over the years, many companies have adopted cloud computing systems as the infrastructure of choice to support their services while keeping availability and performance high. Assuring the availability of resources, considering the occurrence of failures and desired performance metrics, is a significant challenge when planning a cloud computing infrastructure. The dynamic behaviour of virtualised resources requires special attention to the effective amount of capacity available to users, so that the system can be sized correctly. Planning computational infrastructure is therefore an important activity for cloud infrastructure providers, who must analyse the cost-benefit trade-off among distinct architectures and deployment sizes. This paper proposes a methodology and models to support the planning and selection of a cloud infrastructure according to availability, capacity-oriented availability (COA), performance and cost requirements. An optimisation model based on the GRASP meta-heuristic generates a cloud infrastructure with a number of physical machine and virtual machine (VM) configurations. Such a system is represented by a stochastic Petri net (SPN) model and closed-form equations to estimate cost and dependability metrics. The method is applied in a case study of a video transcoding service hosted in a cloud environment, which demonstrates the selection of cloud infrastructures with the best performance and dependability metrics, considering the VP9, VP8 and H264 video codecs as well as distinct VM setups. The results show the best configuration choice considering six user profiles, and the computation of the probability of finishing a set of video transcoding jobs by a given time. (A sketch of a GRASP skeleton follows this entry.)
    Keywords: cloud computing; performance; availability modelling; GRASP; COA; stochastic Petri nets; cost requirements.
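
    The GRASP construction-plus-local-search pattern named in the abstract can be sketched as follows; a minimal Python illustration in which a toy cost function stands in for the paper's SPN and cost evaluations (all names and numbers are assumed):

        import random

        def grasp(candidates, score, neighbours, n_iter=100, alpha=0.3):
            """Greedy randomised construction + local search, repeated n_iter times."""
            best = None
            for _ in range(n_iter):
                ranked = sorted(candidates, key=score)
                rcl = ranked[:max(1, int(alpha * len(ranked)))]   # restricted candidate list
                sol = random.choice(rcl)                          # randomised greedy pick
                improved = True
                while improved:                                   # local search phase
                    improved = False
                    for nb in neighbours(sol):
                        if score(nb) < score(sol):
                            sol, improved = nb, True
                            break
                if best is None or score(sol) < score(best):
                    best = sol
            return best

        # Toy usage: choose a VM count trading rental cost against unavailability.
        score = lambda n_vms: 2.0 * n_vms + 100.0 * (0.05 ** n_vms)
        best = grasp(range(1, 20), score, lambda n: [m for m in (n - 1, n + 1) if m >= 1])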

  • SDSAM: a service-oriented approach for descriptive statistical analysis of multidimensional spatio-temporal big data
    by Weilong Ding, Zhuofeng Zhao, Jie Zhou, Han Li 
    Abstract: With the expansion of the Internet of Things, spatio-temporal data is being generated and used on a large scale, and the rise of spatio-temporal big data has led to a flood of new applications characterised by statistical analysis. Applications based on statistical analysis of these data must deal with their large volume, diversity and frequent changes, as well as with data query, integration and visualisation; developing such applications is essentially a challenging and time-consuming task. To simplify the statistical analysis of spatio-temporal data, a service-oriented method is proposed in this paper. The method defines models of spatio-temporal data services and functional services, defines process-based applications of spatio-temporal big data statistics that invoke the basic data services and functional services, and proposes an implementation of both service types on the Hadoop environment. The validity and applicability of the method are verified by a case study of expressway big data analysis.
    Keywords: spatio-temporal data; RESTful; web service.

  • Research on integrated energy system planning method considering wind power uncertainty
    by Yong Wang, Yongqiang Mu, Jingbo Liu, Yongji Tong, Hongbo Zhu, Mingfeng Chen, Peng Liang 
    Abstract: With the development of energy technology, the planning and operation of integrated energy systems coupling electricity, gas and heat have become an important research topic in the future energy field. To address the influence of wind power uncertainty on the unified planning of integrated energy systems, this paper constructs a quantitative model of wind energy uncertainty based on intuitionistic fuzzy sets. On this basis, an integrated energy system planning model with optimal economic and environmental costs is established, and the model is solved by the harmony search algorithm. Finally, the proposed method is validated by simulation examples. The results show that the planning method can improve the grid's capacity to accommodate wind power and reduce the system's CO2 emissions, and it offers guidance for the long-term planning of integrated energy systems. (A sketch of the search loop follows this entry.)
    Keywords: wind power uncertainty; planning method; electricity-gas-heat energy.
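
    A minimal harmony search loop of the kind the abstract relies on, in Python; the quadratic objective stands in for the paper's economic-plus-environmental cost model, and all parameter values are assumed:

        import random

        def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=500):
            dim = len(bounds)
            memory = [[random.uniform(*b) for b in bounds] for _ in range(hms)]
            for _ in range(iters):
                new = []
                for d in range(dim):
                    if random.random() < hmcr:              # pick from harmony memory
                        x = random.choice(memory)[d]
                        if random.random() < par:           # pitch adjustment
                            x += random.uniform(-1, 1) * 0.01 * (bounds[d][1] - bounds[d][0])
                    else:                                   # random consideration
                        x = random.uniform(*bounds[d])
                    new.append(min(max(x, bounds[d][0]), bounds[d][1]))
                worst = max(range(hms), key=lambda i: f(memory[i]))
                if f(new) < f(memory[worst]):
                    memory[worst] = new                     # replace the worst harmony
            return min(memory, key=f)

        # Toy usage: two dispatch variables, quadratic "cost" to minimise.
        best = harmony_search(lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2,
                              [(-5, 5), (-5, 5)])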

  • Fine-grained access control of files stored in cloud storage with traceable and revocable multi-authority CP-ABE scheme
    by Bharati Mishra, Debasish Jena, Srikanta Patnaik 
    Abstract: Cloud computing is gaining popularity among enterprises, universities, government departments and end-users, and geographically distributed users can collaborate by sharing files through the cloud. Ciphertext-policy attribute-based encryption (CP-ABE) provides an efficient technique for the data owner to enforce fine-grained access control. Single-authority CP-ABE schemes create a bottleneck for enterprise applications; multi-authority CP-ABE systems instead let multiple attribute authorities perform attribute registration and key distribution. Existing multi-authority systems are designed over Type I pairings and are vulnerable to several reported attacks. This paper proposes a multi-authority CP-ABE scheme that supports attribute and policy revocation and is designed over Type III pairings, which offer higher security, faster group operations and lower memory requirements for storing elements. The proposed scheme has been implemented using the Charm framework, which uses the PBC library, with the OpenStack cloud platform providing computing and storage services. The scheme is proved to be collusion resistant, traceable and revocable, and the AVISPA tool has been used to verify that it is secure against replay and man-in-the-middle attacks. (A usage sketch follows this entry.)
    Keywords: cloud storage; access control; CP-ABE; attribute revocation; blockchain; multi-authority.
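
    For flavour, the snippet below exercises the stock single-authority BSW07 CP-ABE scheme that ships with the Charm framework mentioned above (it assumes Charm-Crypto and the PBC library are installed, and it runs over a Type I curve; the paper's multi-authority, traceable, revocable Type III construction is not part of stock Charm):

        from charm.toolbox.pairinggroup import PairingGroup, GT
        from charm.schemes.abenc.abenc_bsw07 import CPabe_BSW07

        group = PairingGroup('SS512')
        cpabe = CPabe_BSW07(group)

        (pk, mk) = cpabe.setup()                                 # authority keys
        sk = cpabe.keygen(pk, mk, ['DOCTOR', 'CARDIOLOGY'])      # user attribute key
        msg = group.random(GT)                                   # session key to wrap a file
        ct = cpabe.encrypt(pk, msg, '(DOCTOR and CARDIOLOGY)')   # policy-bound ciphertext
        assert cpabe.decrypt(pk, sk, ct) == msg                  # attributes satisfy policy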

  • On generating Pareto optimal set in bi-objective reliable network topology design
    by Basima Elshqeirat, Ahmad Aloqaily, Sieteng Soh, Kwan-Wu Chin, Amitava Datta 
    Abstract: This paper considers the following NP-hard network topology design (NTD) problem, called NTD-CB/R: given (i) the location of network nodes, (ii) the connecting links, and (iii) each link's reliability, cost and bandwidth, design a topology with minimum cost (C) and maximum bandwidth (B) subject to a pre-defined reliability (R) constraint. A key challenge in solving this bi-objective optimisation problem is to minimise C while simultaneously maximising B. Existing solutions aim to obtain one topology with the largest bandwidth-cost ratio; this paper instead aims to generate the best set of non-dominated feasible topologies, a.k.a. the Pareto Optimal Set (POS). It formally defines a dynamic programming (DP) formulation for NTD-CB/R and proposes two alternative Lagrange relaxations that compute a weight for each link from its reliability, bandwidth and cost. The paper further proposes a DP approach, called DPCB/R-LP, to generate a POS with maximum weight, and describes a heuristic that enumerates only k ≤ n paths to reduce the computational complexity for a network with n possible paths. Extensive simulations on hundreds of networks of various sizes containing up to 2^99 paths show that DPCB/R-LP can generate 70.4% of the optimal POS while using only up to 984 paths and 27.06 CPU seconds. With respect to the widely used overall-Pareto-spread (OS) metric, DPCB/R-LP produces 94.4% of POS with OS = 1, measured against the optimal POS. Finally, every generated POS contains a topology with the largest bandwidth-cost ratio, significantly above the 88% obtained by existing methods. (A sketch of the Pareto-filtering idea follows this entry.)
    Keywords: bi-objective optimisation; dynamic programming; Lagrange relaxation; Pareto optimal set; network reliability; topology design.
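
    The notion of a non-dominated (cost, bandwidth) set that DPCB/R-LP builds is easy to state in code; a small Python helper for illustration, not the paper's DP algorithm:

        def pareto_front(topologies):
            """topologies: list of (cost, bandwidth); minimise cost, maximise bandwidth."""
            front = []
            for c, b in topologies:
                dominated = any(c2 <= c and b2 >= b and (c2, b2) != (c, b)
                                for c2, b2 in topologies)
                if not dominated:
                    front.append((c, b))
            return sorted(front)

        print(pareto_front([(10, 5), (12, 8), (11, 5), (12, 9), (15, 9)]))
        # -> [(10, 5), (12, 9)]; the other three pairs are dominated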

  • HyperGuard: on designing out-VM malware analysis approach to detect intrusions from hypervisor in cloud environment
    by Prithviraj Singh Bisht, Preeti Mishra, Pushpanjali Chauhan, R.C. Joshi 
    Abstract: Cloud computing delivers computing resources as a service on a pay-as-you-go basis, representing a shift from purchased products to services subscribed to and delivered to consumers over the internet from large-scale data centres. The main issue with cloud services is security against attackers who can easily compromise the Virtual Machines (VMs) and the applications running on them. In this paper, we present HyperGuard, a mechanism to detect malware that hides its presence by sensing the analysis environment or the security tools installed in VMs, and which may attach itself to legitimate processes. HyperGuard is therefore deployed at the hypervisor, outside the monitored VMs, to detect such evasive attacks. It employs open-source introspection libraries, such as DRAKVUF and LibVMI, to capture VM behaviour from the hypervisor in the form of syscall logs, extracts features in the form of n-grams, and uses Recursive Feature Elimination (RFE) with a Support Vector Machine (SVM) to learn and detect the abnormal behaviour of evasive malware. The approach has been validated with a publicly available dataset (Trojan binaries) and a dataset of evasive malware binaries obtained on request from the University of New California. The results are promising. (A sketch of the feature pipeline follows this entry.)
    Keywords: cloud security; intrusion detection; virtual machine introspection; system call traces; machine learning; anomaly behaviour detection; spyder.
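
    The described pipeline, syscall traces to n-gram counts to RFE-selected features to a linear SVM, can be sketched with scikit-learn; the traces and labels below are invented toy data, and the DRAKVUF/LibVMI capture step is out of scope:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.feature_selection import RFE
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline

        traces = ["open read read write close", "open write mmap execve",
                  "read read write close open", "mmap execve ptrace write"]
        labels = [0, 1, 0, 1]                    # 0 = benign, 1 = evasive malware

        ngrams = CountVectorizer(analyzer='word', ngram_range=(2, 3))   # syscall n-grams
        selector = RFE(SVC(kernel='linear'), n_features_to_select=5)    # recursive elimination
        model = make_pipeline(ngrams, selector, SVC(kernel='linear'))
        model.fit(traces, labels)
        print(model.predict(["open read write close read"]))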

  • Dynamic Bayesian network based prediction of performance parameters in cloud computing
    by Priyanka Bharti, Rajeev Ranjan 
    Abstract: Resource prediction is an important task in cloud computing environments, and it becomes more effective and practical for large Cloud Service Providers (CSPs) with a deeper understanding of the key characteristics of their Virtual Machine (VM) workloads. Resource prediction is also influenced by several factors, including (but not constrained to) data centre resources, types of user application (workloads), network delay and bandwidth. Given the increasing number of cloud users, if these factors can be measured and predicted accurately, improvements in resource prediction can be even greater. Existing prediction models have not explored how to capture the complex and uncertain (dynamic) relationships between these factors, owing to the stochastic nature of cloud systems; further, they are based on score-based Bayesian network (BN) algorithms, whose prediction accuracy is limited when dependencies exist between multiple variables. This work considers time-dependent factors in cloud performance prediction, applying a Dynamic Bayesian Network (DBN), which extends the static capability of a BN, as an alternative model for dynamic prediction of cloud performance. The developed model is trained using standard datasets from Microsoft Azure and Google Compute Engine and is found to predict application workloads and their resource requirements with enhanced accuracy compared with existing models. Further, it leads to better decision-making with regard to response time and scalability in dynamic cloud situations. (A toy sketch follows this entry.)
    Keywords: cloud computing; dynamic Bayesian network; resource prediction; response time; scalability.
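
    In its simplest form, a DBN over one discretised load variable reduces to rolling a transition matrix forward across time slices; the Python sketch below uses assumed numbers and is unrelated to the Azure/Google traces used in the paper:

        import numpy as np

        # P(load_t | load_{t-1}) for states {low, medium, high} -- assumed values
        T = np.array([[0.70, 0.25, 0.05],
                      [0.20, 0.60, 0.20],
                      [0.05, 0.35, 0.60]])

        belief = np.array([0.0, 1.0, 0.0])       # current belief: load is "medium"
        for step in range(3):                     # propagate three time slices ahead
            belief = belief @ T
            print(f"t+{step + 1}: P(low, med, high) = {belief.round(3)}")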

  • A privacy-aware and fair self-exchanging self-trading scheme for IoT data based on smart contract
    by Yuling Chen, Hongyan Yin, Yaocheng Zhang, Wei Ren, Yi Ren 
    Abstract: With the development of the big data era, the demand for data sharing and usage is increasing, especially in the era of the Internet of Things, creating a keen demand for data exchanging and data trading. However, existing data exchanging and trading platforms are usually centralised, and users have to trust the platforms. This paper proposes a secure and fair exchanging and trading protocol based on blockchain and smart contracts that, in particular, achieves self-governance without relying on centralised trust. The protocol guarantees fairness, to defend against trade cheating, and security, for data confidentiality; it also achieves efficiency by transferring data links instead of the data itself between data owners and data buyers. Extensive analysis justifies that the proposed scheme can facilitate self-exchanging and self-trading of big data in a secure, fair and efficient manner.
    Keywords: big data; IoT; fair exchanging; blockchain; smart contract; oblivious protocol; fair trading.

  • Micro-PaaS fog: container based orchestration for IoT applications using SBC
    by Walter D.O. Santo, Rubens De Souza Matos Júnior, Admilson De Ribamar Lima Ribeiro, Danilo Souza Silva, Reneilson Yves Carvalho Santos 
    Abstract: The Internet of Things (IoT) is an emerging technology paradigm in which ubiquitous sensors monitor physical infrastructures, environments and people in real time, to help decision making and improve the efficiency and reliability of systems, adding comfort and quality of life to society. In this context, limited computational resources, high latency and the diverse QoS requirements of IoT push cloud technologies towards fog computing and towards lightweight virtualised solutions, such as container-based technologies, that serve the needs of different domains. The goal of this work is therefore to propose and implement a micro-PaaS architecture for fog computing on a cluster of single-board computers (SBCs), orchestrating containerised IoT applications while meeting QoS criteria such as high availability, scalability, load balancing and latency. From this proposed model, the micro-PaaS fog was implemented with container virtualisation and orchestration services on a Raspberry Pi cluster that monitors water and energy consumption, at a total cost of ownership equivalent to 23% of a public platform as a service (PaaS).
    Keywords: fog computing; cluster; orchestration; containers; single board computing.

  • Anomaly detection against mimicry attacks based on time decay modelling
    by Akinori Muramatsu, Masayoshi Aritsugi 
    Abstract: Because cyberattackers attempt to cheat anomaly detection systems, such systems must be made robust against their attempts. In this paper we focus on mimicry attacks, which make use of ordinary operations in order not to be detected, and propose a system to detect them. We take account of time decay when modelling operations, giving lower priority to older operations and thereby enabling mimicry attacks to be detected. We empirically evaluate our proposal with varying time decay rates and demonstrate that it detects mimicry attacks that could not be caught by a state-of-the-art anomaly detection approach. (A sketch follows this entry.)
    Keywords: anomaly detection; mimicry attacks; time decay modelling; stream processing.
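
    One way to realise the time-decay idea is an exponentially aged frequency profile, so operations that were common long ago no longer vouch for a mimicry attack; a minimal Python sketch with an assumed decay rate:

        import math

        class DecayProfile:
            def __init__(self, lam=0.01):
                self.lam, self.weights, self.t = lam, {}, 0.0

            def observe(self, op, t):
                decay = math.exp(-self.lam * (t - self.t))      # age existing weights
                self.weights = {k: w * decay for k, w in self.weights.items()}
                self.weights[op] = self.weights.get(op, 0.0) + 1.0
                self.t = t

            def anomaly(self, op):
                total = sum(self.weights.values()) or 1.0
                return 1.0 - self.weights.get(op, 0.0) / total  # rare-now ops score high

        p = DecayProfile()
        for i, op in enumerate(["ls", "cat", "ls", "ls", "curl"]):
            p.observe(op, float(i))
        print(round(p.anomaly("curl"), 2), round(p.anomaly("ls"), 2))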

  • A cloud-based spatiotemporal data warehouse approach
    by Georgia Garani, Nunziato Cassavia, Ilias Savvas 
    Abstract: The arrival of the big data era introduces new necessities for data access and analysis by organisations. The evolution of data is three-fold: increases in volume, variety and complexity. The majority of data nowadays is generated in the cloud, and cloud data warehouses enhance the benefits of the cloud by facilitating the integration of cloud data. A data warehouse supporting both spatial and temporal dimensions is developed in this paper. The research proposes a general design for spatio-bitemporal objects, implemented as nested dimension tables using the starnest schema approach. Experimental results show that parallel processing of such data in the cloud can answer OLAP queries efficiently, and that increasing the number of computational nodes significantly reduces query execution time. The feasibility, scalability and utility of the proposed technique for querying spatiotemporal data are demonstrated.
    Keywords: cloud computing; big data; hive; business intelligence; data warehouses; cloud based data warehouses; spatiotemporal data; spatiotemporal objects; starnest schema; OLAP; online analytical processing.

  • Authentication and authorisation in service oriented grid architecture
    by Arbër Beshiri, Anastas Mishev 
    Abstract: Applications (services) nowadays request access to resources that are mostly distributed over a wide-area network. These applications usually rely on mediums such as a Grid Computing Infrastructure (GCI) to be executed. GCIs are heterogeneous in nature, and security is an essential part of grid systems, with the Grid Security Infrastructure (GSI) as the technology standard for grid security. Authentication and authorisation remain security challenges for grids. This paper discusses authentication and authorisation infrastructures in the grid, including the technologies that cover these two major parts of the domain; it surveys the challenges that security encounters, namely grid authentication mechanisms and grid authorisation mechanisms and models. Combining grid authorisation and authentication technologies with authorisation infrastructures enables role-based and fine-grained authorisation. Such technologies provide promising solutions for authentication and authorisation in service-oriented grid architectures.
    Keywords: grid; service oriented grid architecture; authorisation; authentication; security.

  • Recommendation system based on space-time user similarity
    by Wei Luo, Zhihao Peng, Ansheng Deng 
    Abstract: With the advent of 5G, the way people obtain information and the means of information transmission have become more and more important. As the main platform of information transmission, social media not only brings convenience to people's lives but also generates huge amounts of redundant information owing to the speed of information updating. Recommendation systems have emerged to meet users' personalised needs and to help them find interesting information in large volumes of data; as an important tool for filtering internet information, they play an extremely important role in both academia and industry. Traditional recommendation systems assume that all users are independent. In this paper, to improve prediction accuracy, a recommendation system based on space-time user similarity is proposed. Experimental results on a Sina Weibo dataset show that, compared with a traditional collaborative filtering recommendation system based on user similarity, the proposed method performs better in precision, recall and F-measure. (A toy sketch follows this entry.)
    Keywords: time-based user similarity; space-based user similarity; recommendation system; user preference; collaborative filtering.
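
    A toy version of a space-time-weighted user similarity, where co-interactions count more when they are close in both time and location (the scales tau and rho are assumed; this is not the paper's measure):

        import math

        def st_similarity(u, v, tau=3600.0, rho=1.0):
            """u, v: {item: (timestamp, (x, y))}; returns a similarity in [0, 1]."""
            common, score = set(u) & set(v), 0.0
            for item in common:
                (t1, p1), (t2, p2) = u[item], v[item]
                dt = abs(t1 - t2) / tau
                ds = math.dist(p1, p2) / rho
                score += math.exp(-dt) * math.exp(-ds)   # closer in space-time -> higher
            return score / math.sqrt(len(u) * len(v)) if common else 0.0

        alice = {'post1': (0.0, (0.0, 0.0)), 'post2': (100.0, (1.0, 1.0))}
        bob   = {'post1': (60.0, (0.1, 0.0)), 'post3': (500.0, (5.0, 5.0))}
        print(round(st_similarity(alice, bob), 3))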

  • Design and analysis of novel hybrid load-balancing algorithm for cloud data centres
    by Ajay Dubey, Vimal Mishra 
    Abstract: The recent pandemic scenario has accelerated the paradigm shift from traditional computing to internet-based computing, and data are now stored and computed in cloud environments. Cloud Service Providers (CSPs) establish and maintain huge shared pools of computing resources that provide scalable, on-demand services around the clock without geographical restrictions, and cloud customers access these services and pay according to the resources they use. When millions of users across the globe connect to the cloud for their storage and computational needs, issues such as service delay may arise; this problem is associated with load balancing in cloud computing, hence the need for effective load-balancing algorithms. The Novel Hybrid Load Balancing (NHLB) algorithm proposed in this paper manages the load of virtual machines in the data centre, focusing on performance optimisation, maximum throughput, minimisation of makespan and efficient resource use. The NHLB algorithm is more efficient than conventional load-balancing algorithms, reducing completion time (makespan) and response time by distributing tasks equally among the virtual machines on the basis of the current state of each machine and the required task time. The paper compares the proposed NHLB algorithm with the dynamic load-balancing algorithm and the honeybee algorithm; the results show that the proposed algorithm outperforms both. (A sketch of the greedy idea follows this entry.)
    Keywords: cloud computing; data centre; load balancing; virtual machine; makespan; performance optimisation.
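
    The flavour of state-aware task distribution can be sketched with a greedy least-loaded assignment, which tends to even out makespan; a Python illustration, not the published NHLB algorithm:

        import heapq

        def balance(task_times, n_vms):
            heap = [(0.0, vm) for vm in range(n_vms)]       # (current load, vm id)
            heapq.heapify(heap)
            schedule = {vm: [] for vm in range(n_vms)}
            for task, t in sorted(enumerate(task_times), key=lambda x: -x[1]):
                load, vm = heapq.heappop(heap)              # least-loaded VM first
                schedule[vm].append(task)
                heapq.heappush(heap, (load + t, vm))
            makespan = max(load for load, _ in heap)
            return schedule, makespan

        schedule, makespan = balance([4, 2, 7, 1, 3, 5], n_vms=3)
        print(schedule, makespan)                           # tasks spread; makespan 8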

  • Virtual traditional craft simulation system in mixed reality environment
    by Rihito Fuchigami, Tomoyuki Ishida 
    Abstract: In a previous study, we implemented a high-presence virtual traditional craft system using a head-mounted display (HMD) and a mobile traditional craft presentation system using augmented reality (AR). The virtual system realised a simulation experience of traditional crafts in virtual spaces of different cultural styles, but the different cultural architectures had to be constructed in advance, and the amount of work was enormous. The mobile system realised an AR application that allows users to easily arrange 3DCG (three-dimensional computer graphics) models of traditional crafts in real space using mobile terminals, but it lacks a sense of presence because users experience the crafts on the mobile terminal's flat display. In this study, we therefore focused on mixed reality (MR) technology and developed an MR virtual traditional craft simulation system using an HMD. With MR technology, we overcome the work-cost and low-presence issues associated with constructing a virtual reality space; as a result, our system provides a simulation experience with a high sense of presence and intuitive operation. We conducted a comparative evaluation experiment with 30 subjects, obtaining high ratings for the system's presence and applicability, although several operability issues were identified.
    Keywords: mixed reality; augmented reality; interior simulation; Japanese traditional crafts; immersive system.

  • The role of smartphone-based social media capabilities in building social capital, trust, and credibility to engage consumers in eWOM: a social presence theory perspective
    by Saqib Mahmood, Ahmad Jusoh, Khalil MD Nor 
    Abstract: Smartphone-based social media has become a well-established channel through which users develop and maintain intimate social relations that enable them to engage in brand-related information exchange, regardless of time and location, such as eWOM. Nevertheless, little is known about the essential elements of smartphone-based social media that help consumers develop intimate social relationships and engage in eWOM. To this end, drawing on social presence theory, the current study develops a research model proposing that interactivity and media richness enhance social presence, giving consumers a sense of psychological proximity, which in turn leads to the development of trust and of social capital bonding and bridging. As a result of this bonding and bridging, consumers' perceived credibility is expected to enable them to engage in eWOM. To investigate the theoretical model empirically, a survey of 407 smartphone-based social media users was conducted in Pakistan. The empirical results reveal that interactivity and media richness enhance social presence, which gives consumers the psychological proximity to develop trust and social capital, further enhancing their perceived credibility to engage in eWOM. Discussion, implications and future directions are described in the final section of the study.
    Keywords: interactivity; media richness; social presence; psychological proximity; social capital; eWOM; trust; smartphone; social media.

  • Cloud infrastructure planning considering the impact of maintenance and self-healing routines over cost and dependability attributes
    by Carlos Melo, Jean Araujo, Jamilson Dantas, Paulo Pereira, Felipe Oliveira, Paulo Maciel 
    Abstract: Cloud computing is the main trend in internet service provision. The paradigm, which emerged from distributed computing, gains more adherents every day. For those who provide, or aim to provide, a service or a private infrastructure, much has to be done: costs related to acquisition and implementation are common, and outsourcing resource maintenance is one alternative for reducing expenses. Outsourcing tends to be a better choice for providers of small infrastructures than paying employees monthly to keep the service life cycle running. This paper evaluates infrastructure reliability and the impact of outsourced maintenance on the availability of private infrastructures. Our baseline environments focus on blockchain as a service; however, because both the service and the maintenance routines are modelled, the study can be applied to most cloud services. The maintenance routines evaluated encompass a set of service level agreements and particularities of reactive, preventive and self-healing methods. The goal is to point out which has the best cost-benefit for those who run small infrastructures but still plan to provide services over the internet. Preventive and self-healing repair routines provided a better cost-benefit than traditional reactive maintenance routines, although this scenario may change with the number of resources available to the service provider. (A worked micro-example follows this entry.)
    Keywords: maintenance; reliability; availability; modelling; cloud computing; blockchain; container; services; SLA.
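
    The kind of trade-off the paper quantifies can be previewed with the textbook steady-state availability formula A = MTTF / (MTTF + MTTR); all rates below are assumed figures, not the paper's SPN results:

        mttf_h = 1500.0                  # mean time to failure, hours (assumed)
        mttr_reactive_h = 24.0           # outsourced technician travels and repairs
        mttr_selfheal_h = 0.5            # automated container/VM restart

        for name, mttr in [("reactive", mttr_reactive_h), ("self-healing", mttr_selfheal_h)]:
            A = mttf_h / (mttf_h + mttr)                    # steady-state availability
            downtime_h_year = (1 - A) * 8760
            print(f"{name:>12}: A = {A:.5f}, downtime = {downtime_h_year:.1f} h/yr")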

  • Toward stance parameter algorithm with aggregate comments for fake news detection
    by YinNan Yao, ChangHao Tang, Kun Ma 
    Abstract: In fake news detection, the stance of comments usually contains evidence that can corroborate the detected result for a piece of fake news. However, owing to the misleading content of fake news, comments themselves may be fake. By analysing the stance of comments while accounting for their possible falseness, comments can be used more effectively to detect fake news. To this end, we propose Bipolar Argumentation Frameworks of Reset Comments Stance (BAFs-RCS) and Average Parameter Aggregation of Comments (APAC) to use the stance of comments to correct the predictions of a RoBERTa model. We use the Fakeddit dataset for experiments. Our macro-F1 results on the 2-way and 3-way tasks improve by 0.0029 and 0.0038 over the baseline RoBERTa model's macro-F1 results on Fakeddit. The results show that our method can effectively use the stance of comments to correct erroneous model predictions.
    Keywords: fake news detection; BAFs-RCS; APAC; RoBERTa.

  • Enhancing the 5G V2X reliability using turbo coding for short frames
    by Costas Chaikalis, Dimitros Kosmanos, Kostas Anagnostou, Ilias Savvas, Dimitros Bargiotas 
    Abstract: For 5th-generation (5G) vehicle-to-everything (V2X) communication, it would be desirable to build a dynamically reconfigurable system that takes different parameters into account. Turbo codes had a great impact on the realisation and success of 3G and 4G; despite their complexity, their use for 5G V2X with short frames remains a challenging issue. At the physical layer, the choice of decoding iterations and decoding algorithm are two important parameters for achieving low latency and high performance, increasing the reliability of packet delivery; this is particularly useful in traffic emergencies under strong interference or radio-frequency jamming. For the geometry-based efficient propagation model (GEMV) for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication, our simulation results suggest a constant number of three iterations. We then investigate the three main turbo decoding algorithms for GEMV and flat Rayleigh fading; our analysis does not recommend the soft-output Viterbi algorithm (SOVA), owing to its worse performance, and proposes either log-MAP (better performance) or max-log-MAP (lower complexity) instead of the far more complex MAP algorithm. (A small worked example follows this entry.)
    Keywords: 5G; V2V; turbo codes; GEMV channel.
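
    The difference between the two recommended decoders comes down to the max* (Jacobian logarithm) operation: log-MAP keeps the correction term, max-log-MAP drops it; a small worked example in Python:

        import math

        def max_star_exact(a, b):
            # log-MAP: ln(e^a + e^b) = max(a, b) + ln(1 + e^(-|a-b|))
            return max(a, b) + math.log1p(math.exp(-abs(a - b)))

        def max_star_approx(a, b):
            # max-log-MAP: drop the correction term -> cheaper, slightly worse BER
            return max(a, b)

        a, b = 1.2, 0.9
        print(max_star_exact(a, b), max_star_approx(a, b))   # ~1.754 vs 1.2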

  • An efficient privacy-preservation algorithm for incremental data publishing
    by Torsak Soontornphand, Mizuho Iwaihara, Juggapong Natwichai 
    Abstract: Data can be collected continuously and grow all the time, and privacy protection designed for static data might not cope with this situation effectively. In this paper, we present an efficient privacy-preservation approach based on (k, l)-anonymity for incremental data publishing. We first illustrate three privacy attacks: similarity, difference and joint attacks. Then, three characteristics of incremental data publishing are analysed and exploited to detect privacy violations efficiently. With the studied characteristics, similarity- and joint-attack detection can be skipped for stable releases; in addition, only a subtype of the similarity attack needs to be detected, and only against the latest previously released dataset. Experimental results show that the proposed method is highly efficient, with an average execution time eleven times lower than that of a comparable static algorithm, while also maintaining better data quality than the compared methods in every setting.
    Keywords: privacy preservation; incremental data publishing; privacy attack; full-domain generalisation.

  • Jaya-based CAViaR: Hadamard product and key matrix for privacy preservation and data sharing in cloud computing environment
    by Shubangini Patil, Rekha Patil 
    Abstract: The cloud computing system is a powerful computing resource mainly used for data publication and data subscription. Because the cloud environment handles a large volume of information, preserving privacy during data sharing is a challenging task. Hence, an effective method for data privacy preservation is designed using the proposed Jaya-based CAViaR model. The privacy factor is measured using the Database Difference Ratio (DBDR), and the utility factor is designed using the Tanimoto similarity measure. The data to be shared are protected using a key matrix and the Hadamard product, with the optimal matrix generated by the proposed Jaya-based CAViaR model. Data sharing is accomplished among cloud storage, the user and third-party verification and validation (TPVV), and the protected data are effectively retrieved using the key matrix. The proposed model obtained better performance in terms of accuracy, fitness, privacy and utility, with values of 0.9737, 0.8691, 0.8976 and 0.8407, respectively.
    Keywords: privacy preservation; data protection; data sharing; Tanimoto similarity measure; Jaya algorithm.

  • Improved identity-based proxy-oriented outsourcing with public auditing for secure cloud storage
    by Yuanyou Cui, Yunxuan Su, Zheng Tu, Jindan Zhang 
    Abstract: With the advent of cloud computing, verifying by proxy the integrity of data stored with cloud service providers, while ensuring the proxy's credibility, has become an important problem. In 2019, Zhang proposed an identity-based proxy-oriented outsourcing scheme with public auditing for cloud-based medical cyber-physical systems, which outsources medical data to cloud storage and thereby facilitates the processing and access of medical data, with a proxy cleverly added to improve practicability and the real-time performance of data processing. Unfortunately, this paper finds some flaws in their approach: the signature generated by the proxy can be forged to modify the original data, invalidating the original data-integrity auditing protocol. We make adjustments to the original data transmission protocol to remedy these shortcomings. Finally, security analysis shows that the security of the new protocol is enhanced.
    Keywords: secure cloud storage; data integrity; proxy-oriented.

  • Edge computing and its boundaries to IoT and Industry 4.0: a systematic mapping study
    by Matheus Silva, Vinícius Meyer, Cesar De Rose 
    Abstract: In the last decade, cloud computing transformed the IT industry, allowing companies to execute many services that require on-demand availability of computational resources, with more flexible provisioning and cost models, including the processing of ever-growing data volumes. In the past few years, however, other technologies such as the internet of things and the digitised industry known as Industry 4.0 have emerged, increasing data generation even more. The large amounts of data produced by user devices and manufacturing machinery have led both industry and academia to search for new approaches to process them. Alternatives to the centralised cloud processing model and its inherent high latencies have been studied, and edge computing has been proposed as a solution to these problems. This study presents a preliminary mapping of the edge computing field, focusing on its boundaries with the internet of things and Industry 4.0. We began with 219 studies from different academic databases and, after the classification process, mapped 90 of them into eight distinct edge computing sub-areas and nine categories based on their main contributions. We present an overview of the studies, which evidences the main sub-areas of concentration, and we clarify the remaining research gaps and the main challenges faced by the field, considering the demands of the internet of things and Industry 4.0.
    Keywords: edge computing; internet of things; Industry 4.0; systematic mapping.

  • Cloud workflow scheduling algorithm based on multi-objective particle swarm optimisation
    by Hongfeng Yin, Baomin Xu, Weijing Li 
    Abstract: Owing to the characteristics of market-oriented cloud computing, the objective function of a cloud workflow scheduling algorithm should consider not only running time but also running cost. The essence of cloud workflow scheduling is to map each task of a workflow instance to appropriate computing resources, and the temporal and causal dependencies between tasks make the scheduling of a workflow instance complex. The main contribution of this paper is a cloud workflow scheduling algorithm based on multi-objective particle swarm optimisation that takes makespan and total cost as its two objectives, providing users with a set of Pareto optimal solutions from which to select a scheduling scheme according to their own preferences. The performance of our algorithm is compared with state-of-the-art multi-objective meta-heuristics and classical single-objective scheduling algorithms. Simulation results show that our solution delivers better convergence and optimisation capability than the alternatives; hence it is applicable to solving multi-objective optimisation problems when scheduling workflows over a cloud platform.
    Keywords: multi-objective optimisation; cloud computing; particle swarm optimisation; workflow scheduling.

  • Data collection in underwater wireless sensor networks: performance evaluation of FBR and epidemic routing protocols for node density and area shape
    by Elis Kulla 
    Abstract: Data collection in Underwater Wireless Sensor Networks (UWSNs) is not a trivial problem because of unpredictable delays and unstable links between underwater devices; moreover, when nodes are mobile, continuous connectivity is not guaranteed. Node scarcity and movement patterns thus create different environments for data collection in underwater communication. In this paper, we investigate the impact of area shape and node density in UWSNs by comparing the Focused Beam Routing (FBR) and Epidemic Routing (ER) protocols, and we analyse the correlation between different performance metrics. Simulation results show that, with FBR, delay and delivery probability decrease slightly (2.1%) while the overhead ratio decreases noticeably (46.9%). The correlation between performance metrics is stronger for the square area shape and is not noticeable for the deep area shape.
    Keywords: underwater wireless sensor networks; focused beam routing; delay tolerant network; area shape; node density; data collection.

  • Joint end-to-end recognition deep network and data augmentation for industrial mould number recognition
    by RuiMing Li, ChaoJun Dong, JiaCong Chen, YiKui Zhai 
    Abstract: With the booming manufacturing industry, the significance of mould management is increasing. Manual management is gradually being eliminated owing to the large amount of labour it requires, while radio-frequency identification (RFID) systems perform poorly on metal, limited by characteristics such as rust and erosion. Fortunately, the rise of convolutional neural networks (CNNs) opens a path to mould management from the image perspective: managing moulds by identifying the digit number marked on each mould. Yet there is no public database for mould recognition, nor a dedicated recognition method in this field. To address this problem, this paper first presents a novel dataset to support CNN training. The images in the database are collected in real scenes and finely labelled by hand, so an effective recognition model can be trained and generalised to actual scenarios. In addition, we combine a mainstream text spotter with data augmentation designed specifically for the real world, and find that this has a considerable effect on mould recognition.
    Keywords: mould recognition database; text spotter; mould recognition; data augmentation.

  • Intrusion detection and prevention with machine learning algorithms
    by Victor Chang, Sreeja Boddu, Qianwen Ariel Xu, Le Minh Thao Doan 
    Abstract: In recent decades, computer networks have played a key role in modern life, while the number of new attacks on internet traffic has escalated. An intrusion detection system (IDS) is imperative alongside firewalls and anti-virus tools for identifying intrusions (bad connections). Many researchers are working to overcome the challenges of IDS, focusing on better accuracy in automatically distinguishing normal from abnormal data connections, and apply traditional machine learning and deep learning algorithms to detect internal and external network connections automatically. In this paper, the authors adopt different machine learning and deep learning techniques and compare their performance and accuracy along with the respective execution times. The experiments use the KDDcup-1999 dataset, a reliable dataset that covers a wide range of network environments.
    Keywords: machine learning; deep learning; security dataset.

  • Privacy-aware trajectory data publishing: an optimal efficient generalisation algorithm
    by Nattapon Harnsamut, Juggapong Natwichai 
    Abstract: With the spread of location-aware technologies providing positioning services, it is easy to collect users' trajectory data. Such data often contain sensitive information, i.e., private locations, private trajectories or paths, and other sensitive attributes. The privacy issue must therefore be addressed properly when data are released to a business collaborator for analytics, especially when an adversary may hold partial trajectory information about a target, i.e., previous locations could be known. The LKC-privacy model is a well-known privacy-preservation model that protects against privacy attacks in trajectory data publishing and requires the data to be anonymised. Data generalisation, which transforms data values into more general forms, can preserve privacy with usually low impact on data utility, i.e., the anonymised data remain useful; however, it can be computationally time-consuming, especially for high-dimensional trajectory data such as in the focused scenario. In this paper, we propose an enhanced look-up-table brute-force algorithm, Enhanced-LT, to maintain data utility while preserving privacy under the LKC-privacy model efficiently; it reduces the computing time of look-up-table creation, one of the longest computational processes. Our proposed algorithm is evaluated with extensive experiments, in which Enhanced-LT outperforms the baseline LT algorithm in execution time by 48.5% on average. The results show that our algorithm returns not only the optimal solution but also highly efficient computation times.
    Keywords: LKC-privacy; trajectory data publishing; optimal algorithm; generalisation technique.

  • An SMIM algorithm for reduction of energy consumption of virtual machines in a cluster
    by Dilawaer Duolikun, Tomoya Enokido, Makoto Takizawa 
    Abstract: Applications can take advantage of virtual computation services, independently of the heterogeneity and locations of servers, by using virtual machines in clusters. Here, a virtual machine on an energy-efficient host server has to be selected to perform an application process. In this paper, we propose a new SMI (Simple Monotonically Increasing) estimation algorithm to estimate the energy a server consumes to perform application processes and the total execution time of the processes on the server. We also propose an SMIM (SMI Migration) algorithm, in which a virtual machine migrates from a host server to a guest server to reduce the total energy consumption of the servers, using the energy estimates of the SMI algorithm. In the evaluation, we show that the SMIM algorithm reduces the energy consumption of servers in a cluster compared with other algorithms. (A sketch follows this entry.)
    Keywords: server selection algorithm; migration of virtual machines; green computing systems; SMI algorithm; SMIM algorithm.
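
    The estimation-driven migration decision can be sketched as follows, with a simple linear power model whose coefficients are assumed (the actual SMI estimation model is the paper's contribution):

        def energy_j(idle_w, per_proc_w, n_procs, exec_time_s):
            """Monotonically increasing power model: more processes, more watts."""
            return (idle_w + per_proc_w * n_procs) * exec_time_s

        host  = {"idle_w": 120.0, "per_proc_w": 6.0, "n_procs": 14, "time_s": 900.0}
        guest = {"idle_w": 80.0,  "per_proc_w": 9.0, "n_procs": 3,  "time_s": 950.0}

        e_host  = energy_j(host["idle_w"],  host["per_proc_w"],  host["n_procs"],  host["time_s"])
        e_guest = energy_j(guest["idle_w"], guest["per_proc_w"], guest["n_procs"], guest["time_s"])
        # Migrate the VM only if the guest server's estimated energy is lower.
        print("migrate" if e_guest < e_host else "stay", int(e_host), int(e_guest))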

  • FIAC: fine-grained access control mechanism for cloud-based IoT framework
    by Bhagwat Prasad Chaudhury, Kasturi Dhal, Srikant Patnaik, Ajit Kumar Nayak 
    Abstract: Cloud computing technology provides various computing resources on demand, on a pay-per-use basis, so users consume services without bearing establishment and maintenance costs. Adoption nevertheless suffers from confidentiality and privacy concerns. Access control mechanisms are the tools that prevent unauthorised access to remotely stored data, and Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is a widely used tool for giving authorised users fine-grained access to remotely stored encrypted data. In the proposed model, FIAC (Fine-grained Access Control Mechanism for Cloud-based IoT Framework), the access control mechanism is embedded in a cloud-based application that measures and reports on the air quality in a city, and only authorised users can view the report and take appropriate action. The major contribution of this work is the design of three attribute-based algorithms: key generation, encryption and decryption. Carbon dioxide concentration, dust, temperature and relative humidity are the air-quality parameters considered, and the attribute-based security scheme is embedded to strengthen the cloud-based monitoring system. The computation time of the model is found to be encouraging, so it can be used on low-power devices; the experimental outcomes establish the usability of our model.
    Keywords: air pollution; carbon dioxide concentrations; dust density; internet of things; FIAC; access control.

  • New principles of finding and removing elements of mathematical model for reducing computational and time complexity
    by Yaroslav Matviychuk, Natalia Kryvinska, Nataliya Shakhovska, Aneta Poniszewska-Maranda 
    Abstract: An original principle for removing elements of a mathematical model, based on parametric identification with a neural network, is proposed in this paper. The essence of the method is to find a functional subset with lower result variability and higher accuracy than the initial functional set of the model, which reduces the computational and time complexity of applications built on the model. Comparison with the dropout technique shows a 1.1-fold decrease in root mean squared error. In addition, reducing the complexity increases the accuracy of neural network models; reducing the number of parameters is therefore an essential data pre-processing step used in almost all modern systems. However, known dimension-reduction methods depend on the problem area, making them impossible to use in ensemble models.
    Keywords: regularisation; reduction; identification procedure; incorrectness; neural network.

  • University ranking approach with bibliometrics and augmented social perception data
    by Kittayaporn Chantaranimi, Rattasit Sukhahuta, Juggapong Natwichai 
    Abstract: Typically, universities aim to achieve a high position in ranking systems for their reputation. However, self-evaluating rankings can be costly, because the indicators draw not only on bibliometrics but also on the results of over a thousand surveys. In this paper, we propose a novel approach to estimating university rankings based on traditional data, i.e., bibliometrics, and non-traditional data, i.e., the Altmetric Attention Score and Sustainable Development Goals indicators. Our approach estimates subject-area rankings in Arts & Humanities, Engineering & Technology, Life Sciences & Medicine, Natural Sciences, and Social Sciences & Management. Using Spearman rank-order correlation and the overlapping rate, our results are evaluated against the QS subject ranking. The approach, particularly for the top-10 rankings, performs effectively and could assist stakeholders in estimating a university's position when survey data are not available. (A small worked example follows this entry.)
    Keywords: university ranking; rank similarity; bibliometrics; augmented social perception data; sustainable development goals; Altmetrics.
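
    The evaluation step is straightforward to reproduce in Python with SciPy: Spearman rank-order correlation plus a top-k overlapping rate (the rankings below are toy numbers, not the paper's results):

        from scipy.stats import spearmanr

        estimated_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
        qs_rank        = [2, 1, 3, 5, 4, 6, 8, 7, 10, 9]

        rho, pvalue = spearmanr(estimated_rank, qs_rank)
        overlap = len(set(estimated_rank[:10]) & set(qs_rank[:10])) / 10   # top-10 overlap
        print(f"Spearman rho = {rho:.3f} (p = {pvalue:.4f}), top-10 overlap = {overlap:.0%}")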

  • Layered management approach to cyber security of IoT solutions
    by Erdal Ozdogan, Resul Das 
    Abstract: The Internet of Things consists of devices, each with its own vulnerabilities, interconnected by various technologies. This broad spectrum, a combination of old and new technologies, shows that there are many aspects to consider in ensuring IoT security: besides the vulnerabilities inherited from pre-IoT technologies, there is a wide variety of IoT-specific vulnerabilities. Such large attack surfaces expose IoT to a variety of attacks, from simple to complex, and developing defence strategies against this wide range of attacks calls for a systematic approach. In this study, threats to IoT are examined from the perspective of the IoT layered architecture, an IoT Security Management Model is proposed as a comprehensive solution strategy, and countermeasures related to IoT security are proposed on the basis of the developed model.
    Keywords: IoT security; IoT security architecture; IoT attacks; IoT attack surfaces; layered management approach.

  • Research on the extraction of accounting multi-relationship information based on cloud computing and multimedia
    by Yu Ning 
    Abstract: The rapid development of the internet and web data continues, producing large amounts of semi-structured data. The abundance of related multimedia description information on the internet makes it possible to derive links between multiple items of accounting information in the multimedia domain. To obtain multimedia catalogues of accounting information, this paper takes internet resources and the automatic extraction of multi-relationship accounting multimedia information as its research objective. The paper first studies the general classification of web information extraction technology and the general implementation of web information extraction systems. On this basis, considering the characteristics of multi-relationship accounting multimedia catalogue information, an automatic extraction system for networked accounting multimedia information (NMPIES) is proposed, and the problem of identifying event-description sentences and extracting event roles from Chinese texts is studied.
    Keywords: multimedia; accounting field; event extraction; multivariate relationship; SVM.

  • Deer-based chicken swarm optimisation algorithm: a hybrid optimisation algorithm for output domain testing
    by R. Ramgouda, V. Chandraprakash 
    Abstract: An effective combinatorial test case generation framework is devised in this research using a hybrid optimisation algorithm, Deer-based Chicken Swarm Optimisation (DCSO), to generate test cases for output-domain testing of embedded systems. The developed DCSO is an amalgamation of the Deer Hunting Optimisation (DHO) algorithm and the Chicken Swarm Optimisation (CSO) algorithm. Three worst-case resource usage scenarios are considered as objectives: Worst-Case Execution Time (WCET), Worst-Case Suite Size (WCSS) and Worst-Case Stack Usage (WCSU). The developed DCSO algorithm is used for output-domain testing of embedded systems by generating combinatorial test cases, and is analysed using metrics such as fitness and test suite size. The proposed DCSO algorithm obtained a minimum test suite size of 84 and a minimum fitness of 3.66 in comparison with existing test case generation methods.
    Keywords: deer CSO; combinatorial testing; test case generation; embedded systems; output domain testing.

  • Performance comparison of various machine learning classifiers using a fusion of LBP, intensity and GLCM feature extraction techniques for thyroid nodules classification
    by Rajshree Srivastava, Pardeep Kumar 
    Abstract: Machine learning (ML) and feature extraction techniques have shown great potential in the medical imaging field. This work presents an effective approach for the identification and classification of thyroid nodules. In the proposed model, features are extracted using the grey-level co-occurrence matrix (GLCM), local binary patterns (LBP) and an intensity-based matrix, and are fed to various ML classifiers: k-nearest neighbour (KNN), decision tree (DT), artificial neural network (ANN), naive Bayes, extreme gradient boosting (XGBoost), random forest (RF), linear regression (LR) and support vector machine (SVM). The result analysis shows that the proposed Model-4 performs better than the other seven proposed models and the reported literature, with an improvement of 4% to 5% in the model's performance evaluation. (A sketch follows this entry.)
    Keywords: machine learning; LBP; GLCM; intensity; noise removal; feature extraction.
    DOI: 10.1504/IJGUC.2023.10055313
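
    The feature-fusion idea is easy to make concrete. The sketch below concatenates GLCM properties, an LBP histogram and intensity statistics into one vector and feeds an SVM; the random toy patches, labels, patch size and classifier settings are illustrative assumptions, not the paper's pipeline.

    # Minimal sketch: GLCM + LBP + intensity features fused for classification.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)

    def features(img):
        glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        glcm_feats = [graycoprops(glcm, p)[0, 0]
                      for p in ("contrast", "homogeneity", "energy", "correlation")]
        lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        intensity = [img.mean(), img.std(), img.min(), img.max()]
        return np.concatenate([glcm_feats, lbp_hist, intensity])

    # Toy random "ultrasound patches" stand in for real thyroid images.
    X = np.array([features(rng.integers(0, 256, (32, 32), dtype=np.uint8))
                  for _ in range(40)])
    y = rng.integers(0, 2, 40)          # 0 = benign, 1 = malignant (placeholders)
    clf = SVC().fit(X[:30], y[:30])
    print("toy accuracy:", clf.score(X[30:], y[30:]))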
     
  • An event-driven and lightweight proactive autoscaling architecture for cloud applications   Order a copy of this article
    by Uttom Akash, Partha Protim Paul, Ahsan Habib 
    Abstract: The cloud environment is used by application providers (APs) to host their applications in order to reduce the procurement and management costs of computing resources. Moreover, the variation in the traffic load of client applications and the appealing auto-scaling capability of cloud resources have prompted application providers to seek ways to reduce the cost of their rented services. This paper describes a proactive, event-driven auto-scaling mechanism for cloud systems fitted with heuristic predictors. The predictor examines historical data using these approaches: (1) Double Exponential Smoothing (DES), (2) Triple Exponential Smoothing (TES), (3) Weighted Moving Average (WMA), and (4) WMA with Fibonacci numbers (the first of these is sketched below). The outcomes of simulating this model in CloudSim indicate that it can decrease the application provider's cost while preserving application user satisfaction.
    Keywords: cloud computing; autoscaling; elastic computing; resource provisioning; exponential smoothing; triple exponential smoothing.
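
    Of the four predictors listed, double exponential smoothing is the easiest to make concrete. The sketch below is a minimal, self-contained illustration, not the paper's implementation; the request series, the smoothing parameters alpha and beta, and the per-VM capacity of 50 requests/min are assumptions made for the example.

    # Minimal sketch: Holt's double exponential smoothing forecasts the
    # next-interval load, which a proactive autoscaler maps to a VM count.
    def des_forecast(series, alpha=0.5, beta=0.3, horizon=1):
        level, trend = series[0], series[1] - series[0]
        for y in series[1:]:
            prev_level = level
            level = alpha * y + (1 - alpha) * (level + trend)
            trend = beta * (level - prev_level) + (1 - beta) * trend
        return level + horizon * trend

    requests = [120, 135, 150, 170, 160, 185, 210]    # hypothetical requests/min
    pred = des_forecast(requests)
    vm_capacity = 50                                   # assumed requests/min per VM
    print(f"forecast={pred:.1f} req/min -> provision "
          f"{max(1, round(pred / vm_capacity))} VMs")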

  • SMGSAF: a secure multi-geocasting scheme for opportunistic networks   Order a copy of this article
    by Jagdeep Singh, S.K. Dhurandher 
    Abstract: This paper proposes a novel Secure Multi-Geocasting Scheme for Opportunistic Networks, called SMGSAF, which uses secret sharing and disjoint path routing to protect the privacy of messages. In multi-geocasting, routing aims to deliver a given geomessage to all the nodes, or to as many as possible, located inside defined geographic areas within a given time interval. It is desired that, as long as a message is outside its destination casts, it cannot be read by intermediate nodes; key distribution and scarcity of resources are the major challenges in this regard. We therefore use secret sharing and disjoint path routing to protect the privacy of messages (see the sketch below). Mathematical analysis shows that the proposed SMGSAF protocol provides the intended security at the expense of performance, but within acceptable limits. Notably, the SMGSAF protocol outperforms benchmark geocasting schemes in terms of delivery probability.
    Keywords: opportunistic networks; multi-geocasting; disjoint paths; security; ONE simulator; routing protocol; geomessage; synthetic mobility data; real mobility data trace.
    DOI: 10.1504/IJGUC.2023.10056149
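
    The secret-sharing building block can be illustrated briefly. The sketch below implements a plain (k, n) Shamir scheme over a prime field, the general technique named in the abstract rather than the paper's exact construction; the modulus, the (3, 5) threshold and the numeric secret are illustrative assumptions. Each share would be forwarded along a node-disjoint path, so fewer than k colluding intermediate nodes learn nothing about the geomessage.

    # Minimal sketch: (k, n) Shamir secret sharing over a prime field.
    import random

    P = 2**127 - 1                      # a Mersenne prime as the field modulus

    def make_shares(secret, k, n):
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):            # Lagrange interpolation at x = 0
        total = 0
        for j, (xj, yj) in enumerate(shares):
            num = den = 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            total = (total + yj * num * pow(den, P - 2, P)) % P
        return total

    shares = make_shares(secret=42424242, k=3, n=5)
    print(reconstruct(shares[:3]))      # any 3 of the 5 shares recover the secret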
     

Special Issue on: CONIITI 2019 Intelligent Software and Technological Convergence

  • Computational intelligence system applied to plastic microparts manufacturing process   Order a copy of this article
    by Andrés Felipe Rojas Rojas, Miryam Liliana Chaves Acero, Antonio Vizan Idoipe 
    Abstract: In the search for knowledge and technological development, there has been an increase in new analysis and processing techniques closer to human reasoning. With the growth of computational systems, hardware production needs have also increased: parts with millimetric to micrometric features are required for optimal system performance, so the demand for injection moulding is increasing as well. Injection moulding is a complex manufacturing process because its mathematical modelling is not yet established; computational intelligence can therefore address the selection of correct values for the injection variables. This article presents the development of a computational intelligence system that integrates fuzzy logic and neural network techniques with a CAE modelling system to support injection machine operators in selecting optimal machine process parameters, so as to produce good-quality microparts using fewer process cycles (a simplified fuzzy-rule sketch follows below). The tests carried out with this computational intelligence system have shown a 30% improvement in the efficiency of the injection process cycles.
    Keywords: computational intelligence; neural networks; fuzzy logic; micro-parts; plastic parts; computer vision; expert systems; injection processes; CAD; computer-aided design systems; CAE; computer-aided engineering.
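
    The fuzzy-logic ingredient can be illustrated with a toy rule base. In the sketch below, triangular memberships over a short-shot defect ratio drive a corrective tweak to injection pressure; the membership ranges, the rule base and the pressure deltas are illustrative assumptions, not the paper's system.

    # Minimal sketch: fuzzy rules mapping a defect ratio to a pressure change.
    def tri(x, a, b, c):
        """Triangular membership function over [a, c] peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def suggest_pressure_delta(short_shot_pct):
        low, med, high = (tri(short_shot_pct, -1, 0, 3),
                          tri(short_shot_pct, 1, 5, 9),
                          tri(short_shot_pct, 7, 12, 100))
        # Rule base: larger defect ratio -> larger pressure increase (bar).
        deltas = {0.0: low, 5.0: med, 15.0: high}
        total = low + med + high
        return sum(d * w for d, w in deltas.items()) / total if total else 0.0

    print(f"suggested pressure increase: {suggest_pressure_delta(6.0):+.1f} bar")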

Special Issue on: Novel Hybrid Artificial Intelligence for Intelligent Cloud Systems

  • QoS-driven hybrid task scheduling algorithm in a cloud computing environment   Order a copy of this article
    by Sirisha Potluri, Sachi Mohanty, Sarita Mohanty 
    Abstract: Cloud computing is a growing distributed computing technology. Typically, cloud computing deploys services to individuals or organisations and allows the sharing of resources, services, and information over the internet, based on user demand. CloudSim is a simulator tool used to simulate cloud scenarios. This paper proposes a QoS-driven hybrid task scheduling architecture and algorithm for dependent and independent tasks in a cloud computing environment. The results are compared against the Min-Min task scheduling algorithm (sketched below) and a QoS-driven independent task scheduling algorithm. Using time and cost as QoS parameters, the QoS-driven hybrid task scheduling algorithm gives better results for both.
    Keywords: cloud computing; task scheduling; quality of service.
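
    The Min-Min baseline the paper compares against is simple to state: repeatedly pick the task whose earliest possible completion time across all VMs is smallest and assign it there. A minimal sketch follows; the ETC (expected time to compute) matrix is a made-up example.

    # Minimal sketch of the Min-Min task scheduling heuristic.
    def min_min(etc):
        """etc[i][j] = execution time of task i on VM j."""
        n_tasks, n_vms = len(etc), len(etc[0])
        ready = [0.0] * n_vms                      # VM ready times
        unassigned, schedule = set(range(n_tasks)), []
        while unassigned:
            ct, i, j = min((ready[j] + etc[i][j], i, j)
                           for i in unassigned for j in range(n_vms))
            ready[j] = ct
            unassigned.remove(i)
            schedule.append((i, j, ct))
        return schedule, max(ready)                # makespan = latest ready time

    etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19], [7, 8, 17]]
    schedule, makespan = min_min(etc)
    print(schedule, "makespan:", makespan)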

Special Issue on: ICIMMI 2019 Emerging Trends in Multimedia Processing and Analytics

  • Handwritten Odia numeral recognition using combined CNN-RNN   Order a copy of this article
    by Abhishek Das, Mihir Narayan Mohanty 
    Abstract: Detection and recognition are major tasks in current research, and almost all areas of signal processing, including speech and images, involve them; in multimedia communication, where data compression is mainly used, recognition is the major challenge. Keeping these facts in view, the authors present an approach for handwritten numeral recognition. To enlarge the limited training data, a generative adversarial network is used to generate samples that are considered along with the original data. The database is collected from IIT, Bhubaneswar, and used in a GAN model to generate a large amount of data. A Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) are then combined for the recognition task (see the sketch below). Although Odia numerals are somewhat complex, the recognition task proved very interesting: little work has been done in this direction, and a deep learning based approach has been absent. Long Short-Term Memory (LSTM) cells are used as the recurrent units, and 1000 images generated with a Deep Convolutional Generative Adversarial Network (DCGAN) are added to the IIT-BBSR dataset. The network is trained by supervised learning, with the Adam optimisation algorithm minimising the error. The method achieves 98.32% accuracy.
    Keywords: character recognition; Odia numerals; deep learning; CNN; RNN; LSTM; DCGAN; Adam optimisation.
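
    One common way to combine a CNN with an LSTM is sketched below in Keras: convolutional feature maps are read row by row as an LSTM sequence, then classified into ten numeral classes. The 28x28 input size, layer widths and dummy batch are assumptions for illustration; the paper's exact architecture and the IIT-BBSR images are not reproduced here.

    # Minimal sketch: CNN features fed row-wise into an LSTM classifier.
    import numpy as np
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),                      # -> (14, 14, 32)
        layers.Reshape((14, 14 * 32)),               # each row = one LSTM step
        layers.LSTM(64),
        layers.Dense(10, activation="softmax"),      # ten numeral classes
    ])
    model.compile(optimizer="adam",                  # Adam, as in the abstract
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])

    x = np.random.rand(8, 28, 28, 1).astype("float32")   # dummy batch
    y = np.random.randint(0, 10, 8)
    model.fit(x, y, epochs=1, verbose=0)
    print(model.predict(x[:1]).argmax())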

  • An optimal channel state information feedback design for improving the spectral efficiency of device-to-device communication   Order a copy of this article
    by Prabakar Dakshinamoorthy, Saminadan Vaitilingam 
    Abstract: This article introduces a regularised zero-forcing (RZF) based channel state information (CSI) feedback design for improving the spectral efficiency of device-to-device (D2D) communication. The proposed method combines a conventional feedback design with optimised CSI to regulate the communication flows in the environment (an RZF precoder sketch follows below). The codebook-dependent precoder design improves the feedback rate by streamlining time/frequency-dependent scheduling. Incoming communication traffic is scheduled across the available channels by pre-estimating their adaptability and capacity in the underlying network. This lets communicating devices exchange partial channel information without relying on base station services. These features reduce transmission error rates and achieve a better sum rate irrespective of the distance between the devices and their transmit power.
    Keywords: CSI; D2D; feedback design; precoding; zero-forcing.
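
    The RZF precoder at the heart of the design can be written in a few lines. The sketch below is a generic regularised zero-forcing precoder, not the paper's full feedback scheme; the antenna/user dimensions and the common regulariser choice alpha = K/SNR are illustrative assumptions.

    # Minimal sketch: regularised zero-forcing (RZF) precoding.
    import numpy as np

    rng = np.random.default_rng(2)
    K, M = 4, 8                                   # receivers, transmit antennas
    H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)

    alpha = K / 10.0                              # regulariser, e.g. K / SNR
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
    W /= np.linalg.norm(W)                        # total power constraint

    # Effective channel is close to diagonal -> low inter-user interference.
    print(np.round(np.abs(H @ W), 2))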

Special Issue on: Green Network Communication for Sustainable Smart Grids: Current Uses and Future Applications

  • Design and fabrication of dual band AMC-backed monopole antenna for WLAN and WiFi applications   Order a copy of this article
    by B. Madhavi, M. V. Siva Prasad 
    Abstract: This paper presents the design and fabrication of a coplanar waveguide (CPW) fed dual-band antenna with a complementary split ring resonator (CSRR), with and without an artificial magnetic conductor (AMC). The proposed model exhibits two operational bands with increased gain. The gains obtained at the resonances of the working bands are 2.6 dB and 3.6 dB. The antenna with AMC operates in the bands 2.33-2.57 GHz and 4.85-5.14 GHz, and the gains attained at the resonances of the working bands for the AMC-backed monopole antenna are 5.15 dB and 6.12 dB. The measured results show that both bandwidth and gain increase for the AMC-backed design compared with the no-AMC condition. In addition, both antenna designs are analysed in the HFSS workbench for return loss, 3D gain, current distributions, and radiation patterns. Given the observed gain values, the proposed antenna is suitable for WiFi and WLAN applications.
    Keywords: artificial magnetic conductor; monopole antenna; dual band; gain; coplanar wave guide antenna.

  • Cloud computing data privacy protection method based on blockchain   Order a copy of this article
    by Yingjun He, Wenhui X. Ouyang, Shaolong Li, Lin Wang, Jing Zhou, Wenwei Su, Shenzhang Li, Donghui Mei, Yan Shi, Yanxu Jin, Chenglin Li, Yonghui Ren 
    Abstract: Unlike traditional data storage, with its uniqueness and tamper resistance, data stored in cloud computing networks exposes users to a greater risk of leakage. To ensure the security of users' private data, a cloud computing data privacy protection method based on blockchain is proposed. After clarifying the blockchain infrastructure and the cloud computing storage structure, the privacy protection method is designed with the support of blockchain technology, building on an analytic hierarchy process model of cloud computing user privacy protection and on the definition of blockchain messages and IPFS storage parameter settings. Privacy protection is realised through three parts: identification of cloud computing data privacy disclosure paths, a cloud computing data aggregation algorithm using privacy homomorphism, and a data privacy protection algorithm based on blockchain technology (a tamper-evidence sketch follows below). The experimental results show that the proposed method enhances the security of private cloud computing data and frees it from the risk of disclosure during storage, thereby realising the privacy protection of cloud computing data.
    Keywords: blockchain technology; cloud computing network; user data; privacy data; privacy protection.
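
    The tamper-evidence property the method builds on can be shown in miniature. The sketch below hash-chains records so that any modification of stored data is detectable; it is a bare illustration, omitting consensus, signatures and the IPFS storage the paper mentions, and the record contents are placeholders.

    # Minimal sketch: hash-chained records make tampering detectable.
    import hashlib, json

    def add_block(chain, payload):
        prev = chain[-1]["hash"] if chain else "0" * 64
        block = {"prev": prev, "payload": payload}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        chain.append(block)

    def verify(chain):
        for i, b in enumerate(chain):
            body = {"prev": b["prev"], "payload": b["payload"]}
            ok = b["hash"] == hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if not ok or (i and b["prev"] != chain[i - 1]["hash"]):
                return False
        return True

    chain = []
    add_block(chain, {"user": "u1", "record": "encrypted-blob-1"})
    add_block(chain, {"user": "u1", "record": "encrypted-blob-2"})
    print(verify(chain))                  # True
    chain[0]["payload"]["record"] = "tampered"
    print(verify(chain))                  # False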

  • Performance enhancement of MIMO-OFDM using hybrid equaliser-based ICI mitigation with channel estimation in time-varying channels   Order a copy of this article
    by Madhavi Latha Pandala, Samanthapudi Swathi, Abdul Hussain Sharief, Suresh Penchala, Ganga Rama Koteswara Rao, Pala Mahesh Kumar 
    Abstract: High spectrum efficiency and resistance to interference make multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) an exceptionally good choice for realising Long-Term Evolution-Advanced (LTE-A) and other advanced mobile communication technologies. Despite these merits, MIMO-OFDM suffers from high inter-carrier interference (ICI) and bit error rate (BER) values. In practice, many techniques are available for evaluating the accuracy of a communication system and reducing ICI and BER further; nevertheless, influential hybrid methods are needed to evaluate and mitigate ICI and improve BER performance with the fewest undesirable side effects, and it remains difficult to achieve high spectral efficiency while improving BER over fast-fading channels. This article therefore proposes both time-variant training and time-invariant channel estimation (CE) in MIMO-OFDM, together with a combination of three equalisation techniques, termed ZF-MMSE-SIC, to improve BER and ICI in the MIMO-OFDM system: zero forcing (ZF) is used for the time-variant CE scheme, minimum mean square error (MMSE) for the time-invariant CE scheme, and successive interference cancellation (SIC) reduces the residual ICI to a minimum by enhancing the carrier-to-interference ratio (CIR) (a ZF/MMSE sketch follows below). Extensive simulations show that the proposed hybrid methodology outperforms conventional ICI mitigation algorithms over different time-varying channels while improving both BER and spectral efficiency.
    Keywords: MIMO-OFDM; ICI cancellation; zero-forcing; minimum mean square error; carrier-to-interference ratio; successive interference cancellation.
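
    The ZF and MMSE building blocks of the hybrid can be contrasted in a few lines: MMSE's noise-variance term curbs the noise amplification ZF suffers on ill-conditioned channels. The sketch below uses an assumed 4x4 channel, QPSK symbols and noise level; SIC and the CE schemes are omitted.

    # Minimal sketch: ZF vs MMSE linear equalisation of a MIMO channel.
    import numpy as np

    rng = np.random.default_rng(3)
    Nt = Nr = 4
    H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)
    s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], Nt)   # QPSK symbols
    sigma2 = 0.1
    y = H @ s + np.sqrt(sigma2 / 2) * (rng.normal(size=Nr) + 1j * rng.normal(size=Nr))

    W_zf = np.linalg.pinv(H)                                 # (H^H H)^-1 H^H
    W_mmse = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(Nt)) @ H.conj().T

    for name, W in (("ZF", W_zf), ("MMSE", W_mmse)):
        print(name, "MSE:", np.mean(np.abs(W @ y - s) ** 2).round(4))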

Special Issue on: AMLDA 2022 Applied Machine Learning and Data Analytics: Applications, Challenges, and Future Directions

  • Fuzzy forests for feature selection in high-dimensional survey data: an application to the 2020 US Presidential Election   Order a copy of this article
    by Sreemanti Dey, R. Michael Alvarez 
    Abstract: An increasingly common methodological issue in the social sciences is high-dimensional, highly correlated datasets that are not amenable to the traditional deductive framework of study. Analysis of candidate choice in the 2020 Presidential Election is one area in which this issue presents itself: to test the many theories explaining the outcome of the election, it is necessary to use data such as the 2020 Cooperative Election Study Common Content, with hundreds of highly correlated features. We present the fuzzy forests algorithm, a variant of the popular random forests ensemble method, as an efficient way to reduce the feature space in such cases with minimal bias, while maintaining predictive performance on par with common algorithms such as random forests and logit (a screening sketch follows below). Using fuzzy forests, we isolate the top correlates of candidate choice and find that partisan polarisation was the strongest factor driving the 2020 Presidential Election.
    Keywords: fuzzy forests; machine learning; ensemble methods; dimensionality reduction; American elections; candidate choice; correlation; partisanship; issue voting; Trump; Biden.
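
    The screening idea behind fuzzy forests can be sketched on synthetic data: cluster correlated features into modules, keep each module's top features by random-forest importance, then fit a final forest on the survivors. KMeans on the correlation matrix stands in for the WGCNA module detection of the original algorithm, and all sizes and data are illustrative assumptions.

    # Minimal sketch: module-wise random-forest screening, fuzzy-forests style.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(4)
    X = rng.normal(size=(300, 60))
    y = (X[:, 0] + X[:, 25] > 0).astype(int)   # only features 0 and 25 matter

    # Module assignment from the feature-correlation structure.
    modules = KMeans(n_clusters=4, n_init=10,
                     random_state=0).fit_predict(np.corrcoef(X.T))

    survivors = []
    for m in range(4):                          # screening step, module by module
        idx = np.where(modules == m)[0]
        rf = RandomForestClassifier(n_estimators=100,
                                    random_state=0).fit(X[:, idx], y)
        survivors += list(idx[np.argsort(rf.feature_importances_)[-3:]])

    final = RandomForestClassifier(n_estimators=200,
                                   random_state=0).fit(X[:, survivors], y)
    top = np.array(survivors)[np.argsort(final.feature_importances_)[::-1][:5]]
    print("survivors:", sorted(survivors), "top:", top)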

  • An efficient intrusion detection system using unsupervised learning AutoEncoder   Order a copy of this article
    by N.D. Patel, B.M. Mehtre, Rajeev Wankar 
    Abstract: As attacks on networks have rapidly become more sophisticated and intelligent in recent years, the limitations of existing signature-based intrusion detection systems have become more evident. For new attacks such as Advanced Persistent Threats (APT), signature patterns generalise poorly. Research on intrusion detection systems based on machine learning is being actively conducted to solve this problem. However, in real network environments far fewer attack samples than normal samples are collected, causing a class imbalance problem: a supervised anomaly detection model trained on such data is biased towards normal samples. In this paper, an AutoEncoder (AE) is used to perform single-class anomaly detection to overcome this imbalance (see the sketch below). The experimental evaluation was conducted using the CIC-IDS2017 dataset, and the performance of the proposed method was compared with supervised models.
    Keywords: intrusion detection system; advanced persistent threat; CICIDS2017; AutoEncoder; machine learning; data analytics.
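
    Single-class anomaly detection with an autoencoder follows a standard recipe: train on normal traffic only, then flag records whose reconstruction error exceeds a threshold. The sketch below is illustrative, not the paper's model; synthetic Gaussian features stand in for CIC-IDS2017, and the architecture, epochs and 99th-percentile threshold are assumptions.

    # Minimal sketch: autoencoder anomaly detection via reconstruction error.
    import numpy as np
    from tensorflow.keras import layers, models

    rng = np.random.default_rng(5)
    normal = rng.normal(0, 1, (2000, 20)).astype("float32")
    attack = rng.normal(3, 1, (50, 20)).astype("float32")   # shifted distribution

    ae = models.Sequential([
        layers.Input(shape=(20,)),
        layers.Dense(8, activation="relu"),      # bottleneck forces compression
        layers.Dense(20),
    ])
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(normal, normal, epochs=10, batch_size=64, verbose=0)

    def errors(x):
        return np.mean((ae.predict(x, verbose=0) - x) ** 2, axis=1)

    threshold = np.percentile(errors(normal), 99)            # tolerate ~1% FPR
    print("fraction of attacks flagged:", np.mean(errors(attack) > threshold))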

  • Optimal attack detection using an enhanced machine learning algorithm   Order a copy of this article
    by Reddy Saisindhutheja, Gopal K. Shyam, Shanthi Makka 
    Abstract: As computer network and internet technologies advance, the importance of network security is widely acknowledged, and it remains a substantial challenge in cyberspace. The Software-as-a-Service (SaaS) layer describes cloud applications to which users connect using web protocols such as HTTP over Transport Layer Security. Worms, spam, Denial-of-Service (DoS) attacks and botnets occur frequently in networks. In this light, this research introduces a new security platform for the SaaS framework comprising two major phases: (1) optimal feature selection and (2) classification. Initially, the optimal features are selected from the dataset, since each dataset includes many features that add complexity. A novel algorithm named the Accelerator-updated Rider Optimisation Algorithm (AR-ROA), a modified form of ROA, is combined with a Deep Belief Network (DBN) based attack detection system: the optimal features selected by AR-ROA are subjected to DBN classification, which determines the presence of attacks (a DBN-style sketch follows below). The proposed work is compared against existing models using a benchmark dataset and obtains improved results, outperforming traditional models in terms of accuracy (95.3%), specificity (98%), sensitivity (86%), precision (92%), negative predictive value (97%), F1-score (86%), false positive ratio (2%), false negative ratio (10%), false detection ratio (10%), and Matthews correlation coefficient (0.82).
    Keywords: software-as-a-service framework; security; ROA; optimisation; DBN; attack detection system.
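
    The classification stage can be approximated with off-the-shelf parts. The sketch below stacks two restricted Boltzmann machines before a logistic classifier, a common DBN stand-in since scikit-learn ships no full DBN; the AR-ROA feature-selection step is omitted, and the toy features, labels and layer sizes are assumptions.

    # Minimal sketch: stacked-RBM "DBN-style" attack classifier.
    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    rng = np.random.default_rng(6)
    X = rng.random((500, 30))                     # selected features in [0, 1]
    y = (X[:, :3].sum(axis=1) > 1.5).astype(int)  # 1 = attack (toy rule)

    dbn = Pipeline([
        ("rbm1", BernoulliRBM(n_components=16, learning_rate=0.05,
                              n_iter=20, random_state=0)),
        ("rbm2", BernoulliRBM(n_components=8, learning_rate=0.05,
                              n_iter=20, random_state=0)),
        ("clf", LogisticRegression(max_iter=500)),
    ])
    dbn.fit(X[:400], y[:400])
    print("toy accuracy:", dbn.score(X[400:], y[400:]))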

Special Issue on: Cloud and Fog Computing for Corporate Entrepreneurship in the Digital Era

  • Enhanced speculative approach for big data processing using BM-LOA algorithm in cloud environment   Order a copy of this article
    by Hetal A. Joshiara, Chirag S. Thaker, Sanjay M. Shah, Darshan B. Choksi 
    Abstract: If one of its many tasks is allocated to an undependable or overloaded machine, a hugely parallel processing job can be delayed considerably. Hence, most parallel processing methodologies, notably MapReduce (MR), have espoused diverse strategies to conquer this issue, called the straggler problem: the scheme may speculatively launch extra copies of a task whose progress is unnaturally slow when an idling resource is available. In the proposed strategy-centred process, dead nodes are detected, and the remaining time (RT) and backup time of each slow task are assessed. The slow task is then rerun with the aid of BM-LOA after this evaluation (an EWMA-based straggler detection sketch follows below). The proposed approach is evaluated in both heterogeneous and homogeneous environments, and its performance is scrutinised experimentally using standard performance metrics. Weighed against the other approaches, the proposed technique achieves superior performance.
    Keywords: modified exponentially weighted moving average; speculative execution strategy; Hadoop supreme rate performance; big data processing; rerun.
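
    The progress-monitoring ingredient named in the keywords can be shown in miniature: an exponentially weighted moving average of per-task progress rates flags stragglers whose rate falls well below the fleet average. The task samples, smoothing factor and 0.5 threshold below are illustrative assumptions, not the paper's parameters.

    # Minimal sketch: EWMA-based straggler detection for speculative reruns.
    def ewma(values, alpha=0.3):
        avg = values[0]
        for v in values[1:]:
            avg = alpha * v + (1 - alpha) * avg
        return avg

    # Progress-rate samples (fraction/sec) per task; "t3" is the slow one.
    tasks = {"t1": [0.010, 0.011, 0.010], "t2": [0.009, 0.010, 0.011],
             "t3": [0.004, 0.003, 0.002]}
    rates = {t: ewma(v) for t, v in tasks.items()}
    fleet = sum(rates.values()) / len(rates)
    stragglers = [t for t, r in rates.items() if r < 0.5 * fleet]
    print("speculatively re-launch:", stragglers)   # candidates for backup copies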

Special Issue on: ITT 2019 Advances in Next-Generation Communications and Networked Applications

  • Collaborative ambient intelligence based demand variation prediction model   Order a copy of this article
    by Munir Naveed, Yasir Javed, Muhammed Adnan, Israr Ahmed 
    Abstract: The inventory control problem is faced by corporations on a daily basis in optimising the supply chain process and predicting optimal pricing for item sales or services. The problem depends heavily on one key factor: demand variation. Inventories must be aligned with demand variations to avoid overheads or shortages. This work explores various machine learning algorithms for solving demand variation problems in real time. Predicting demand variation is a complex and non-trivial problem, particularly in the presence of open orders, and this work addresses it with use-cases characterised by open orders. It also presents a novel prediction model that hybridises learning domains with domain-specific parameters: the Internet of Things (IoT) is exploited to extract domain-specific knowledge, while a reinforcement learning technique predicts the variations in these domain-specific parameters, which depend on demand variations (a tabular Q-learning sketch follows below). The new model is explored and compared with state-of-the-art machine learning algorithms using the Grupo Bimbo case study. The results show that the new model predicts demand variations with significantly higher accuracy than other models.
    Keywords: inventory management; reinforcement learning; IoT devices; Grupo Bimbo inventory demand variation.
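
    The reinforcement-learning ingredient can be illustrated with tabular Q-learning on a toy inventory, where an agent learns how much to order as demand varies. The state space, demand model, costs and hyperparameters below are illustrative assumptions, not the paper's model.

    # Minimal sketch: tabular Q-learning for order-quantity decisions.
    import random

    random.seed(7)
    CAP, ACTIONS = 10, [0, 2, 4, 6]                 # stock capacity, order sizes
    Q = {(s, a): 0.0 for s in range(CAP + 1) for a in ACTIONS}
    alpha, gamma, eps = 0.1, 0.9, 0.1

    def step(stock, order):
        demand = random.choice([1, 2, 3, 4])        # stand-in demand variation
        stock = min(CAP, stock + order)
        sold = min(stock, demand)
        # margin on sales, minus holding cost, minus shortage penalty
        reward = 5 * sold - 1 * (stock - sold) - 10 * (demand - sold)
        return stock - sold, reward

    stock = 5
    for _ in range(20000):
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda x: Q[(stock, x)]))
        nxt, r = step(stock, a)
        Q[(stock, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS)
                                  - Q[(stock, a)])
        stock = nxt

    # Learned policy: best order size for each stock level.
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(CAP + 1)})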