Forthcoming and Online First Articles

International Journal of Grid and Utility Computing (IJGUC)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Online First articles are published online here, before they appear in a journal issue. Online First articles are fully citeable, complete with a DOI. They can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked with the Open Access icon are Online First articles. They are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.

Register for our alerting service, which notifies you by email when new issues are published online.

We also offer RSS feeds, which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Grid and Utility Computing (107 papers in press)

Regular Issues

  • Public key encryption with equality test for vehicular system based on near-ring
    by Muthukumaran Venkatesan, Ezhilmaran Devarasaran 
    Abstract: In recent years, vehicles have been increasingly integrated with intelligent transport systems (ITS). This has led to the development of Vehicular Ad hoc Networks (VANETs), through which vehicles communicate with each other effectively. Since VANETs assist in both vehicle-to-vehicle and vehicle-to-infrastructure communication, security and privacy have become major concerns. In this context, this work presents a public key encryption scheme with equality test based on the discrete logarithm problem (DLP) with decomposition problems over a near-ring. The proposed method is highly secure and addresses the threat of quantum algorithm attacks on VANET systems. Further, the proposed system prevents the chosen-ciphertext attack for a type-I adversary and is indistinguishable under the random oracle model for a type-II adversary. The security analysis shows that the scheme is stronger than existing techniques.
    Keywords: near-ring; Diffie-Hellman; vehicular ad hoc networks.

  • Research on modelling analysis and maximum power point tracking strategies for distributed photovoltaic power generation systems based on adaptive control technology
    by Yan Geng, Jianwei Ji, Bo Hu, Yingjun Ju 
    Abstract: Distributed photovoltaic power generation technology has developed rapidly in recent years. Because the cost of distributed photovoltaic generation remains much higher than that of traditional generation modes, improving the effective use of photovoltaic cells has become a popular research direction. Based on an analysis of the characteristics of photovoltaic cells, this paper presents a mathematical model of photovoltaic cells and a maximum power point tracking algorithm that combines hysteresis control and adaptive control technology with a variable-step perturb-and-observe method. The algorithm balances the control precision and control speed of the perturb-and-observe method and improves tracking results significantly. Finally, the feasibility of the algorithm and its tracking performance are demonstrated by simulation in Matlab/Simulink.
    Keywords: distributed photovoltaic; adaptive control technology; maximum power point tracking strategies.
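The variable-step perturb-and-observe idea can be sketched in a few lines. This is an illustrative toy, not the paper's controller: the P-V curve, the gain `n` and the step bounds are invented for the example.

```python
def pv_power(v):
    # Toy P-V curve with a single maximum power point near v = 25.8 V.
    return max(0.0, 8.0 * v - 0.004 * v ** 3)

def mppt_po_adaptive(v=20.0, n=0.3, step=1.0, iters=60):
    """Variable-step perturb-and-observe: the step tracks |dP/dV|."""
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:                   # power fell: reverse the perturbation
            direction = -direction
        slope = abs(p - p_prev) / step   # finite-difference estimate of |dP/dV|
        step = min(1.0, max(0.05, n * slope))
        p_prev = p
    return v, p_prev
```

The step shrinks as |dP/dV| falls near the maximum power point, which is exactly the trade-off between control precision and control speed that the abstract describes.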

  • Cloud infrastructure planning: models considering an optimisation method, cost and performance requirements
    by Jamilson Dantas, Rubens Matos, Carlos Melo, Paulo Maciel 
    Abstract: Over the years, many companies have adopted cloud computing systems as the infrastructure of choice to support their services while keeping high availability and performance levels. Assuring the availability of resources under failures, together with desired performance metrics, is a significant challenge when planning a cloud computing infrastructure. The dynamic behaviour of virtualised resources requires special attention to the effective amount of capacity available to users, so that the system can be correctly sized. Planning computational infrastructure is therefore an important activity for cloud infrastructure providers when analysing the cost-benefit trade-off among distinct architectures and deployment sizes. This paper proposes a methodology and models to support the planning and selection of a cloud infrastructure according to availability, COA, performance and cost requirements. An optimisation model based on the GRASP meta-heuristic generates a cloud infrastructure with a number of physical machine and Virtual Machine (VM) configurations. Such a system is represented using a Stochastic Petri Net (SPN) model and closed-form equations to estimate cost and dependability metrics. The proposed method is applied in a case study of a video transcoding service hosted in a cloud environment. The case study demonstrates the selection of cloud infrastructures with the best performance and dependability metrics, considering the VP9, VP8 and H264 video codecs as well as distinct VM setups. The results show the best configuration choice for six user profiles, along with the computation of the probability of finishing a set of video transcoding jobs by a given time.
    Keywords: cloud computing; performance; availability modelling; GRASP; COA; stochastic Petri nets; cost requirements.
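As a rough illustration of the GRASP meta-heuristic named above, the sketch below builds candidate infrastructures from a restricted candidate list and prunes them with a simple local search. The machine types, costs and single-capacity demand model are invented for the example and are far simpler than the paper's SPN-based evaluation.

```python
import random

def grasp_plan(machines, demand, alpha=0.3, iters=40, seed=1):
    """GRASP sketch: randomised greedy construction + simple local search.

    machines: list of (name, capacity, cost); demand: required total capacity.
    """
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        # --- construction: pick from a restricted candidate list (RCL) ---
        plan, cap = [], 0
        while cap < demand:
            ranked = sorted(machines, key=lambda m: m[2] / m[1])  # cost per unit
            cut = ranked[0][2] / ranked[0][1] * (1 + alpha)
            rcl = [m for m in ranked if m[2] / m[1] <= cut]
            pick = rng.choice(rcl)
            plan.append(pick)
            cap += pick[1]
        # --- local search: drop any machine that is not needed ---
        for m in sorted(plan, key=lambda m: -m[2]):
            if cap - m[1] >= demand:
                plan.remove(m)
                cap -= m[1]
        cost = sum(m[2] for m in plan)
        if best is None or cost < best[0]:
            best = (cost, plan)
    return best
```

Each iteration is one greedy-randomised construction followed by local improvement; the best plan over all iterations is returned, which is the standard GRASP loop.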

  • Performance impact of the MVMM algorithm for virtual machine migration in data centres
    by Nawel Kortas, Habib Youssef 
    Abstract: Virtual machine (VM) migration mechanisms and the design of data centres for cloud computing have a significant impact on energy cost and on negotiated Service Level Agreements (SLAs). Recent work focuses on using VM migration to achieve stable physical machine (PM) usage with the objective of reducing energy consumption under stated SLA constraints. This paper presents and evaluates a new scheduling algorithm called MVMM (Minimisation of Virtual Machine Migration) for VM migration within a data centre environment. MVMM makes use of a Dynamic Bayesian Network (DBN) to decide where and when a particular VM migrates. The DBN takes the data centre parameters as input and computes a score for each VM candidate for migration, reducing energy consumption by decreasing the number of future migrations according to the probabilistic dependencies between the data centre parameters. Furthermore, our performance study shows that the choice of data centre scheduling algorithm and network architecture in cloud computing significantly impacts energy cost and application performance under resource and service demand variations. To evaluate the proposed algorithm, we integrated the MVMM scheduler into the GreenCloud simulator, taking into consideration key data centre characteristics such as the scheduling algorithm, Data Centre Network (DCN) architecture, links, load and communication between VMs. The performance results show that the MVMM scheduler within a three-tier debug architecture can reduce energy consumption by over 35% compared with five well-known schedulers, namely Round Robin, Random, Heros, Green and Dens.
    Keywords: MVMM algorithm; virtual machine; cloud computing; dynamic Bayesian networks; SLA; scheduler algorithm; data centre network architectures; VM migration.
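The per-VM scoring step can be illustrated with a hand-coded conditional probability table. A real DBN would learn these probabilities and chain them across time slices over many data centre parameters, so the table, thresholds and parameter set below are purely illustrative.

```python
# P(host overloaded at t+1 | cpu level, network level at t) — illustrative CPT.
CPT = {
    ("low", "low"): 0.05, ("low", "high"): 0.15,
    ("high", "low"): 0.55, ("high", "high"): 0.85,
}

def level(x, threshold=0.7):
    """Discretise a utilisation in [0, 1] into 'low'/'high'."""
    return "high" if x >= threshold else "low"

def rank_migration_candidates(vms):
    """Rank VMs by the overload probability their host is heading toward.

    vms: list of (name, host_cpu_utilisation, host_net_utilisation).
    The best migration candidate comes first.
    """
    scored = [(CPT[(level(cpu), level(net))], name) for name, cpu, net in vms]
    return [name for _, name in sorted(scored, reverse=True)]
```

Migrating the top-ranked VM first is what lets the scheduler avoid a cascade of future migrations, which is the stated goal of MVMM.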

  • SDSAM: a service-oriented approach for descriptive statistical analysis of multidimensional spatio-temporal big data
    by Weilong Ding, Zhuofeng Zhao, Jie Zhou, Han Li 
    Abstract: With the expansion of the Internet of Things, spatio-temporal data is widely generated and used. The rise of big data in space and time has led to a flood of new applications with statistical analysis characteristics. Applications based on statistical analysis of these data must deal with large volume, diversity and frequent changes, as well as with the query, integration and visualisation of data. Developing such applications is essentially a challenging and time-consuming task. To simplify the statistical analysis of spatio-temporal data, this paper proposes a service-oriented method. The method defines models of spatio-temporal data services and functional services, defines a process-based application of spatio-temporal big data statistics that invokes the basic data services and functional services, and proposes an implementation of both service types on a Hadoop environment. The validity and applicability of the method are verified by a case study of expressway big data analysis.
    Keywords: spatio-temporal data; RESTful; web service.
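The descriptive-statistics core of such a service can be as small as a grouped aggregation. The record layout below is an invented stand-in for the paper's highway data services, which would be backed by Hadoop rather than a plain list.

```python
from collections import defaultdict
from statistics import mean

def hourly_stats(records):
    """Group (region, hour, value) records and compute count/mean/max.

    records: iterable of (region, hour, value) tuples, e.g. traffic volumes
    observed at a tollgate during a given hour (illustrative layout).
    """
    buckets = defaultdict(list)
    for region, hour, value in records:
        buckets[(region, hour)].append(value)
    return {key: {"count": len(vs), "mean": mean(vs), "max": max(vs)}
            for key, vs in buckets.items()}
```

A RESTful data service in the paper's sense would expose such aggregates per spatial region and time window instead of returning raw rows.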

  • Personality-aware recommendations: an empirical study in education
    by Yong Zheng, Archana Subramaniyan 
    Abstract: Recommender systems deliver item recommendations tailored to user preferences. The impact of human personality on user decision making is well recognised, and several personality-aware recommendation models incorporate personality traits into the recommendation process. They have been demonstrated to improve the quality of recommendations in several domains, including movies, music and social networks. However, their impact in education is still under investigation. In this paper, we discuss and summarise state-of-the-art personality-based collaborative filtering techniques and perform an empirical study on educational data. In particular, we collect personality traits in two ways: through a user survey and through a natural language processing system. We examine the effectiveness of the recommendation models using these subjective and inferred personality traits, respectively. Our experimental results reveal that students with different personality traits may make different choices, and that inferred personality traits are more reliable and effective for use in the recommendation process.
    Keywords: personality; recommender systems; education; empirical study.
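A minimal sketch of personality-aware collaborative filtering: user similarity blends rating similarity with Big Five personality similarity through a weight `alpha`. The blending rule and data layout are assumptions for illustration, not the exact models surveyed in the paper.

```python
from math import sqrt

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def personality_aware_predict(target, others, item, alpha=0.5):
    """Predict target's rating for `item` with a personality-blended similarity.

    Each user is a dict with a 'personality' vector (e.g. Big Five scores)
    and a 'ratings' dict. alpha weights personality vs. rating similarity.
    """
    num = den = 0.0
    for user in others:
        if item not in user["ratings"]:
            continue
        common = [i for i in target["ratings"] if i in user["ratings"]]
        r_sim = cosine([target["ratings"][i] for i in common],
                       [user["ratings"][i] for i in common]) if common else 0.0
        p_sim = cosine(target["personality"], user["personality"])
        sim = alpha * p_sim + (1 - alpha) * r_sim
        num += sim * user["ratings"][item]
        den += abs(sim)
    return num / den if den else 0.0
```

With `alpha = 0` this degenerates to plain user-based collaborative filtering; raising `alpha` lets personality pull the prediction toward like-minded users.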

  • Research on integrated energy system planning method considering wind power uncertainty
    by Yong Wang, Yongqiang Mu, Jingbo Liu, Yongji Tong, Hongbo Zhu, Mingfeng Chen, Peng Liang 
    Abstract: With the development of energy technology, the planning and operation of integrated energy systems coupling electricity, gas and heat has become an important research topic in the future energy field. To address the influence of wind power uncertainty on the unified planning of integrated energy systems, this paper constructs a quantitative model of wind energy uncertainty based on intuitionistic fuzzy sets. On this basis, an integrated energy system planning model with optimal economic and environmental costs is established and solved with the harmony search algorithm. Finally, the proposed method is validated by simulation examples. The planning method can improve the grid's hosting capacity for wind power and reduce the system's CO2 emissions, and it offers guidance for the long-term planning of integrated energy systems.
    Keywords: wind power uncertainty; planning method; electricity-gas-heat energy.
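The harmony search solver mentioned above can be sketched for a one-dimensional objective. `hmcr`, `par` and `bw` are the usual harmony-memory-considering rate, pitch-adjusting rate and bandwidth; the values and the toy objective are chosen only for the example, not taken from the paper.

```python
import random

def harmony_search(f, lo, hi, hm_size=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=500, seed=7):
    """Minimal harmony search minimising a 1-D objective f on [lo, hi]."""
    rng = random.Random(seed)
    memory = [[x, f(x)] for x in (rng.uniform(lo, hi) for _ in range(hm_size))]
    for _ in range(iters):
        if rng.random() < hmcr:              # recall a stored harmony
            x = rng.choice(memory)[0]
            if rng.random() < par:           # pitch adjustment around it
                x += rng.uniform(-bw, bw)
        else:                                # random improvisation
            x = rng.uniform(lo, hi)
        x = min(hi, max(lo, x))
        fx = f(x)
        worst = max(memory, key=lambda m: m[1])
        if fx < worst[1]:                    # replace the worst harmony
            worst[0], worst[1] = x, fx
    return min(memory, key=lambda m: m[1])
```

In the paper's setting the decision vector would hold the sizing variables of the electricity-gas-heat system and `f` would be the combined economic and environmental cost.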

  • Research on design method of manoeuvring targets tracking generator based on LabVIEW programming
    by Caiyun Gao, Shiqiang Wang, Huiyong Zeng, Juan Bai, Binfeng Zong, Jiliang Cai 
    Abstract: Aiming at the poor visual display and non-real-time status output of describing a manoeuvring target track with raw data, a new design method for a target track generator is proposed based on the Laboratory Virtual Instrument Engineering Workbench (LabVIEW). Firstly, the motion model of the manoeuvring target is built. Secondly, the design requirements of the track generator are discussed. Finally, tracks for multiple targets and multiple manoeuvring models are produced through visual panel and code design in LabVIEW. Simulation results indicate that the proposed method can output the target status in real time at different data rates while directly displaying the manoeuvring tracks of multiple targets with good visibility, and that the generated track parameters are accurate and effective.
    Keywords: LabVIEW; virtual instrument; target track simulation; situation display of radar.
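Outside LabVIEW, the underlying track model is easy to sketch: piecewise legs with a constant speed and a per-leg turn rate (zero for a straight leg, non-zero for a coordinated turn). The sampling scheme below is an assumption for illustration.

```python
from math import cos, sin, radians

def generate_track(x, y, speed, heading_deg, segments, dt=1.0):
    """Piecewise constant-velocity / constant-turn-rate track generator.

    segments: list of (duration_s, turn_rate_deg_per_s); a zero turn rate
    gives a straight leg. Returns sampled (x, y) points, one per dt.
    """
    heading = radians(heading_deg)
    points = [(x, y)]
    for duration, turn_rate in segments:
        omega = radians(turn_rate)
        for _ in range(int(duration / dt)):
            heading += omega * dt
            x += speed * cos(heading) * dt
            y += speed * sin(heading) * dt
            points.append((x, y))
    return points
```

Varying `dt` per consumer is what "different data rates" amounts to: the same motion model sampled at different intervals.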

  • Finite state transducer based light-weight cryptosystem for data confidentiality in cloud computing
    by Basappa Kodada, Demian Antony D'Mello 
    Abstract: Cloud computing is derived from parallel, cluster, grid and distributed computing and is becoming one of the most advanced and fastest-growing technologies. With the rapid growth of internet technology and its speed, the number of cloud computing users is growing enormously, and huge amounts of data are being generated. With the growth of data in the cloud, the security and safety of data, such as data confidentiality and privacy, are paramount issues. This paper proposes a new type of cryptosystem based on a finite state transducer to provide data confidentiality for cloud computing. The paper presents the protocol communication process and gives an insight into the security analysis of the proposed scheme. The results serve as a proof of concept that the scheme is stronger and more secure than existing schemes.
    Keywords: security; confidentiality; encryption; decryption; automata; finite state machine; finite state transducer; cryptography; data safety.
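A toy Mealy-machine transducer shows the flavour of FST-based encryption. This is NOT the paper's cryptosystem and offers no real security; the state size, transition function and output function are invented. Because the state transition depends on the plaintext byte, one machine serves for both directions.

```python
def fst_transduce(data: bytes, key: int, decrypt: bool = False) -> bytes:
    """Toy finite-state-transducer cipher (illustrative Mealy machine).

    Output function: keystream byte derived from the 16-bit state.
    Transition function: next state depends on state and plaintext byte,
    so decryption replays the same state sequence as encryption.
    """
    state = key & 0xFFFF
    out = bytearray()
    for b in data:
        k = (state >> 8) ^ (state & 0xFF)        # output function
        o = b ^ k
        plain = o if decrypt else b              # recover the plaintext byte
        state = (state * 31 + plain + 7) & 0xFFFF  # transition function
        out.append(o)
    return bytes(out)
```

The lightweight appeal for cloud clients is that each byte costs only a few integer operations, with no modular exponentiation.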

  • Fine-grained access control of files stored in cloud storage with traceable and revocable multi-authority CP-ABE scheme
    by Bharati Mishra, Debasish Jena, Srikanta Patnaik 
    Abstract: Cloud computing is gaining increasing popularity among enterprises, universities, government departments and end-users. Geographically distributed users can collaborate by sharing files through the cloud. Ciphertext-policy attribute-based encryption (CP-ABE) provides an efficient technique for the data owner to enforce fine-grained access control. Single-authority CP-ABE schemes create a bottleneck for enterprise applications. Multi-authority CP-ABE schemes deal with multiple attribute authorities performing attribute registration or key distribution. Type I pairing is used in designing the existing multi-authority schemes, which are vulnerable to several reported attacks. This paper proposes a multi-authority CP-ABE scheme that supports attribute and policy revocation. Type III pairing is used in designing the scheme, which has higher security, faster group operations and requires less memory to store the elements. The proposed scheme has been implemented using the Charm framework, which uses the PBC library. The OpenStack cloud platform is used for computing and storage services. It has been proved that the proposed scheme is collusion resistant, traceable and revocable. The AVISPA tool has been used to verify that the proposed scheme is secure against replay and man-in-the-middle attacks.
    Keywords: cloud storage; access control; CP-ABE; attribute revocation; blockchain; multi-authority.

  • On generating Pareto optimal set in bi-objective reliable network topology design
    by Basima Elshqeirat, Ahmad Aloqaily, Sieteng Soh, Kwan-Wu Chin, Amitava Datta 
    Abstract: This paper considers the following NP-hard network topology design (NTD) problem, called NTD-CB/R: given (i) the location of network nodes, (ii) connecting links, and (iii) each link's reliability, cost and bandwidth, design a topology with minimum cost (C) and maximum bandwidth (B) subject to a pre-defined reliability (R) constraint. A key challenge when solving the bi-objective optimisation problem is to simultaneously minimise C while maximising B. Existing solutions aim to obtain one topology with the largest bandwidth-cost ratio. To this end, this paper aims to generate the best set of non-dominated feasible topologies, aka the Pareto Optimal Set (POS). It formally defines a dynamic programming (DP) formulation for NTD-CB/R. Then, it proposes two alternative Lagrange relaxations to compute a weight for each link from its reliability, bandwidth and cost. The paper further proposes a DP approach, called DPCB/R-LP, to generate POS with maximum weight. It also describes a heuristic that enumerates only k of the n possible paths, with k much smaller than n, to reduce the computational complexity. Extensive simulations on hundreds of various-sized networks that contain up to 2^99 paths show that DPCB/R-LP can generate 70.4% of the optimal POS while using only up to 984 paths and 27.06 CPU seconds. With respect to a widely used metric, the overall-Pareto-spread (OS), DPCB/R-LP produces 94.4% of POS with OS = 1, measured against the optimal POS. Finally, every generated POS contains a topology with the largest bandwidth-cost ratio, significantly higher than the 88% obtained by existing methods.
    Keywords: bi-objective optimisation; dynamic programming; Lagrange relaxation; Pareto optimal set; network reliability; topology design.
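The notion of a non-dominated set under minimum cost and maximum bandwidth can be made concrete with a small filter; the topology tuples below are invented placeholders for the topologies the DP approach would actually enumerate.

```python
def pareto_set(topologies):
    """Non-dominated (min cost, max bandwidth) topologies.

    topologies: list of (name, cost, bandwidth).
    a dominates b if a is no worse in both objectives and strictly
    better in at least one.
    """
    def dominates(a, b):
        return (a[1] <= b[1] and a[2] >= b[2]
                and (a[1] < b[1] or a[2] > b[2]))
    return [t for t in topologies
            if not any(dominates(o, t) for o in topologies if o is not t)]
```

The returned set is exactly what a POS is: no member can be improved in cost without losing bandwidth, or vice versa.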

  • HyperGuard: on designing out-VM malware analysis approach to detect intrusions from hypervisor in cloud environment
    by Prithviraj Singh Bisht, Preeti Mishra, Pushpanjali Chauhan, R.C. Joshi 
    Abstract: Cloud computing provides delivery of computing resources as a service on a pay-as-you-go basis. It represents a shift from products being purchased to products being subscribed to as a service, delivered to consumers over the internet from large-scale data centres. The main issue with cloud services is security against attackers, who can easily compromise the Virtual Machines (VMs) and the applications running over them. In this paper, we present HyperGuard, a mechanism to detect malware that hides its presence by sensing the analysis environment or the security tools installed in VMs, or by attaching itself to legitimate processes. HyperGuard is deployed at the hypervisor, outside the monitored VMs, to detect such evasive attacks. It employs open-source introspection libraries, such as DRAKVUF and LibVMI, to capture VM behaviour from the hypervisor in the form of syscall logs, and extracts features in the form of n-grams. It makes use of Recursive Feature Elimination (RFE) and a Support Vector Machine (SVM) to learn and detect the abnormal behaviour of evasive malware. The approach has been validated with a publicly available dataset (Trojan binaries) and a dataset of evasive malware binaries obtained on request from the University of New California. The results seem promising.
    Keywords: cloud security; intrusion detection; virtual machine introspection; system call traces; machine learning; anomaly behaviour detection; spyder.
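The n-gram feature-extraction step over syscall logs can be sketched directly; the trace below is invented, and in the paper these frequency vectors would then feed RFE and an SVM rather than being inspected by hand.

```python
from collections import Counter

def syscall_ngrams(trace, n=3):
    """Sliding-window n-gram frequency vector from a syscall trace.

    trace: ordered list of syscall names captured via introspection.
    Returns a Counter mapping each n-gram tuple to its frequency.
    """
    return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))
```

Because the counting is order-sensitive, evasive behaviour that reorders or injects syscalls shifts mass between n-grams, which is what the downstream classifier picks up.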

  • Dynamic Bayesian network based prediction of performance parameters in cloud computing
    by Priyanka Bharti, Rajeev Ranjan 
    Abstract: Resource prediction is an important task in cloud computing environments. It becomes more effective and practical for large Cloud Service Providers (CSPs) with a deeper understanding of the key characteristics of their Virtual Machine (VM) workloads. Resource prediction is also influenced by several factors, including (but not limited to) data centre resources, types of user application (workloads), network delay and bandwidth. Given the increasing number of users of cloud systems, if these factors can be accurately measured and predicted, improvements in resource prediction could be even greater. Existing prediction models have not explored how to capture the complex and uncertain (dynamic) relationships between these factors, owing to the stochastic nature of cloud systems. Further, they are based on score-based Bayesian network (BN) algorithms, which have limited prediction accuracy when dependency exists between multiple variables. This work considers time-dependent factors in cloud performance prediction. It applies a Dynamic Bayesian Network (DBN) as an alternative model for dynamic prediction of cloud performance, extending the static capability of a BN. The developed model is trained using standard datasets from Microsoft Azure and Google Compute Engine, and is found to predict application workloads and their resource requirements with enhanced accuracy compared with existing models. Further, it leads to better decision-making with regard to response time and scalability in dynamic situations of the cloud environment.
    Keywords: cloud computing; dynamic Bayesian network; resource prediction; response time; scalability.

  • A privacy-aware and fair self-exchanging self-trading scheme for IoT data based on smart contract
    by Yuling Chen, Hongyan Yin, Yaocheng Zhang, Wei Ren, Yi Ren 
    Abstract: With the development of the big data era, the demand for data sharing and usage is increasing, especially in the era of the Internet of Things, creating a keen demand for data exchanging and data trading. However, existing data exchanging and trading platforms are usually centralised, and users have to trust the platforms. This paper proposes a secure and fair exchanging and trading protocol based on blockchain and smart contracts that is self-governing, without reliance on centralised trust. The protocol guarantees fairness, to defend against trade cheating, and security, for data confidentiality. It also achieves efficiency by transferring data links instead of data between data owners and data buyers. Extensive analysis justifies that the proposed scheme can facilitate the self-exchanging and self-trading of big data in a secure, fair and efficient manner.
    Keywords: big data; IoT; fair exchanging; blockchain; smart contract; oblivious protocol; fair trading.

  • Clustering structure-based multiple measurement vectors model and its algorithm
    by Tijian Cai, Xiaoyu Peng, Xin Xie, Wei Liu, Jia Mo 
    Abstract: Most current Multiple Measurement Vector (MMV) models are based on the ideal assumption of a shared sparse structure. However, owing to the time-varying nature and multiple focuses of complex data, this assumption is often hard to meet in practice, and various sparse structures have been explored to address the problem. In this paper, we take the cluster sparsity of signals into account and propose a Cluster Sparsity-based MMV (CS-MMV) model, which not only uses the shared sparse structure between coefficients but also considers the cluster characteristic within coefficients. Furthermore, we extend a classic algorithm to implement the new model. Experiments on simulated data and two face benchmarks show that the new model is more suitable for complex data with clustered structure, and that the extended algorithm can effectively improve the performance of sparse recovery.
    Keywords: compressed sensing; sparse recovery; multi-measurement vectors; structured sparsity.

  • Micro-PaaS fog: container based orchestration for IoT applications using SBC
    by Walter D.O. Santo, Rubens De Souza Matos Júnior, Admilson De Ribamar Lima Ribeiro, Danilo Souza Silva, Reneilson Yves Carvalho Santos 
    Abstract: The Internet of Things (IoT) is an emerging technology paradigm in which ubiquitous sensors monitor physical infrastructures, environments and people in real time, to help in decision making and to improve the efficiency and reliability of systems, adding comfort and quality of life to society. Limited computational resources, high latency and differing QoS requirements in IoT push cloud technologies towards fog computing and the adoption of lightweight virtualised solutions, such as container-based technologies, to meet the needs of different domains. This work therefore proposes and implements a micro-PaaS architecture for fog computing on a cluster of Single-Board Computers (SBCs) for the orchestration of containerised applications, applied to IoT and meeting QoS criteria such as high availability, scalability, load balancing and latency. From this proposed model, the micro-PaaS fog was implemented with container virtualisation and orchestration services on a Raspberry Pi cluster to monitor water and energy consumption, at a total cost of ownership equivalent to 23% of that of a public Platform as a Service (PaaS).
    Keywords: fog computing; cluster; orchestration; containers; single board computing.

  • A review on data replication strategies in cloud systems
    by Riad Mokadem, Jorge Martinez-Gil, Abdelkader Hameurlain, Joseph Kueng 
    Abstract: Data replication constitutes an important issue in cloud data management. In this context, a significant number of replication strategies have been proposed for cloud systems. Most of the studies in the literature have classified these strategies into static vs. dynamic or centralised vs. decentralised strategies. In this paper, we propose a new classification of data replication strategies in cloud systems. It takes into account several other criteria, specific to cloud environments: (i) the orientation of the profit, (ii) the considered objective function, (iii) the number of tenant objectives, (iv) the nature of the cloud environment and (v) the consideration of economic costs. Dealing with the last criterion, we focus on the provider's economic profit and the consideration of energy consumption by the provider. Finally, the impact of some important factors is investigated in a simulation study.
    Keywords: cloud systems; data replication; data replication strategies; classification; service level agreement; economic profit; performance.

  • Anomaly detection against mimicry attacks based on time decay modelling
    by Akinori Muramatsu, Masayoshi Aritsugi 
    Abstract: Because cyberattackers attempt to cheat anomaly detection systems, it is required to make an anomaly detection system robust against such attempts. We focus on mimicry attacks and propose a system to detect such attacks in this paper. Mimicry attacks make use of ordinary operations in order not to be detected. We take account of time decay in modelling operations to give lower priorities to preceding operations, thereby enabling us to detect mimicry attacks. We empirically evaluate our proposal with varying time decay rates to demonstrate that our proposal can detect mimicry attacks that could not be detected by a state-of-the-art anomaly detection approach.
    Keywords: anomaly detection; mimicry attacks; time decay modelling; stream processing.
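A minimal sketch of time-decayed operation modelling: each observation's weight halves every `half_life` seconds, so ordinary operations replayed long ago stop vouching for present behaviour. The decay rule and scoring function are illustrative assumptions, not the paper's exact model.

```python
from math import exp, log

class DecayProfile:
    """Operation-frequency profile in which old observations decay over time."""

    def __init__(self, half_life=60.0):
        self.decay = log(2.0) / half_life   # ln 2 / half-life
        self.weights = {}                   # operation -> decayed count
        self.last = {}                      # operation -> last update time

    def _current(self, op, t):
        dt = t - self.last.get(op, t)
        return self.weights.get(op, 0.0) * exp(-self.decay * dt)

    def observe(self, op, t):
        self.weights[op] = self._current(op, t) + 1.0
        self.last[op] = t

    def score(self, op, t):
        """Anomaly score in (0, 1]: high when the operation is rare or stale."""
        return 1.0 / (1.0 + self._current(op, t))
```

A mimicry attack that pads itself with once-common operations gains little here: unless those operations keep occurring recently, their decayed weight, and hence their "normality", fades.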

  • A cloud-based spatiotemporal data warehouse approach
    by Georgia Garani, Nunziato Cassavia, Ilias Savvas 
    Abstract: The arrival of the big data era introduces new necessities for data access and analysis by organisations. The evolution of data is three-fold: increase in volume, variety and complexity. The majority of data nowadays is generated in the cloud, and cloud data warehouses enhance its benefits by facilitating the integration of cloud data. A data warehouse that supports both spatial and temporal dimensions is developed in this paper. The research focuses on proposing a general design for spatiobitemporal objects, implemented by nested dimension tables using the starnest schema approach. Experimental results show that parallel processing of such data in the cloud can answer OLAP queries efficiently, and that increasing the number of computational nodes significantly reduces query execution time. The feasibility, scalability and utility of the proposed technique for querying spatiotemporal data are demonstrated.
    Keywords: cloud computing; big data; hive; business intelligence; data warehouses; cloud based data warehouses; spatiotemporal data; spatiotemporal objects; starnest schema; OLAP; online analytical processing.

  • A truthful mechanism for crowdsourcing-based tourist spots detection in smart cities
    by Anil Bikash Chowdhury, Vikash Kumar Singh, Sajal Mukhopadhyay, Abhishek Kumar, Meghana M. D 
    Abstract: With the advent of new technologies and the internet around the globe, many cities in different countries are involving local residents (or city dwellers) in making decisions on various government policies and projects. In this paper, the problem of detecting tourist spots in a city with the help of city dwellers, in a strategic setting, is addressed. The city dwellers vote on the different locations that are potential candidates for tourist spots. For the purpose of voting, the concept of single-peaked preferences is used, where each city dweller reports a privately held single-peaked value that signifies a location in the city. Given this scenario, the goal is to determine the location in the city to designate as a tourist spot. For this purpose, we design mechanisms, one of which is truthful, and measure their efficacy through simulations.
    Keywords: tourism; smart cities; crowdsourcing; city dwellers; voting; single-peaked preferences; truthful.
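For single-peaked preferences on a line, the classic truthful choice rule is to pick the median of the reported peaks (Moulin's result): no dweller can move the outcome closer to their true peak by lying. This is the textbook mechanism, shown for illustration; it is not necessarily the exact mechanism designed in the paper.

```python
def median_mechanism(peaks):
    """Select the median reported peak; strategy-proof for single-peaked
    preferences along a line."""
    return sorted(peaks)[len(peaks) // 2]

def misreport_never_helps(true_peaks, lies):
    """Check that, for each dweller, the given misreport does not move the
    outcome closer to their true peak than honest reporting does."""
    honest = median_mechanism(true_peaks)
    for i, lie in enumerate(lies):
        reports = list(true_peaks)
        reports[i] = lie
        outcome = median_mechanism(reports)
        if abs(outcome - true_peaks[i]) < abs(honest - true_peaks[i]):
            return False
    return True
```

Intuitively, a dweller left of the median can only push the median further right by exaggerating, never closer to themselves, and symmetrically for the right side.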

  • A proactive population dynamics load-balancing algorithm in cloud computing for QoS optimisation
    by Shahbaz Afzal, G. Kavitha 
    Abstract: Load unbalancing is currently a concern among Cloud Service Providers (CSPs), with adverse consequences for both deliverable Quality of Service (QoS) and profit. Load balancing tries to overcome load imbalances by ensuring proper task deployment among cloud resources, yielding productive resource use, throughput, profit, estimated deadlines and other QoS metrics. However, load balancing is an NP-hard optimisation problem, which makes the desired QoS results difficult to achieve and calls for an efficient scheduling mechanism and load-balancing policy in cloud computing systems. A priority-based scheduling technique is proposed with a proactive load-balancing feature based on set partitioning, parallel queues and a population dynamics model. Tasks are scheduled on Virtual Machines (VMs) using priority scheduling. The set partitioning process partitions tasks and VMs into eight distinct classes, assisted by filtering and sorting. The filtered and sorted tasks pass through parallel queues to avoid waiting times. Finally, the Proactive Population Dynamics Load Balancing (PPDLB) model is applied to keep tasks within the maximum carrying capacity of each VM. Experimental results showed that the PPDLB approach avoids load unbalancing in advance with higher resource use. The PPDLB algorithm was compared with the existing Improved Weighted Round Robin (IWRR), Harris Hawks Optimisation (HHO) and Spider Monkey Optimisation (SMO) algorithms and found to be more efficient in terms of makespan, resource use, degree of balance and number of task migrations. PPDLB improves makespan by 3.29%, 5.15% and 10.85% with respect to IWRR, HHO and SMO, respectively; improves the degree of balance by 4.68%, 5.74% and 6.26%; and improves resource use by 2.17%, 3.55% and 3.99%. The dominance of the proposed algorithm over existing load-balancing techniques lies in the fact that it eliminates the need to deal with migration-associated metrics such as migration time, migration cost, the number of VMs required for migration and the number of task migrations. Moreover, it has a greater convergence rate as the search space grows, which makes it feasible for large-scale scheduling problems.
    Keywords: cloud computing; scheduling; load unbalancing; proactive load balancing; set partitioning; parallel queues; population dynamics; Quality of Service.
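The carrying-capacity idea at the heart of a population-dynamics balancer — never load a VM beyond what it can carry, deciding before the imbalance occurs — can be sketched in a few lines. This is an illustrative Python sketch with invented names and structures, not the authors' PPDLB implementation:

```python
def assign_tasks(tasks, vms):
    """Greedy proactive assignment: a task goes to the least-loaded VM
    that can absorb it without exceeding its carrying capacity; tasks
    that fit nowhere are reported instead of overloading a VM."""
    overflow = []
    # Stand-in for the paper's priority scheduling + parallel queues:
    # lower priority value = more urgent, served first.
    for task in sorted(tasks, key=lambda t: t["priority"]):
        candidates = [v for v in vms
                      if v["load"] + task["load"] <= v["capacity"]]
        if candidates:
            vm = min(candidates, key=lambda v: v["load"])
            vm["load"] += task["load"]
            vm["tasks"].append(task["id"])
        else:
            overflow.append(task["id"])  # would trigger rebalancing
    return overflow
```

Because the capacity check happens before placement, no migration-related costs arise afterwards — mirroring the advantage the abstract claims.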

  • Don't hurry, be green: scheduling server shutdowns in grid computing with deep reinforcement learning   Order a copy of this article
    by Mauricio Pillon, Lucas Casagrande, Guilherme Koslovski, Charles Miers, Nelson Gonzales 
    Abstract: Grid computing platforms dissipate massive amounts of energy. Energy efficiency, therefore, is an essential requirement that directly affects their sustainability. Resource management systems deploy rule-based approaches to mitigate this cost. However, these strategies do not consider the patterns of the workloads being executed. In this context, we demonstrate how a solution based on deep reinforcement learning can be used to formulate an adaptive power-efficient policy. Specifically, we implement an off-reservation approach to overcome the disadvantages of an aggressive shutdown policy and minimise the frequency of shutdown events. Through simulation, we train the algorithm and evaluate it against commonly used shutdown policies using real traces from GRID5000. Based on the experiments, we observed a 46% reduction in average energy waste with an equivalent frequency of shutdown events compared with a soft shutdown policy.
    Keywords: deep reinforcement learning; grid computing; energy-aware scheduling; shutdown strategy; Markov decision process; resource management.
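The paper trains a deep RL agent on real traces; a toy tabular Q-learning analogue of the same shutdown decision conveys the idea. Everything below — the idle-time states, arrival probabilities, rewards and hyperparameters — is invented for illustration, not taken from the paper:

```python
import random

def train_shutdown_policy(episodes=3000, alpha=0.05, gamma=0.9, seed=1):
    """Toy tabular Q-learning stand-in for a deep RL shutdown agent.
    State s = idle-time bucket (0..4); actions: 0 = keep server on,
    1 = shut it down (ends the episode).  Jobs tend to arrive shortly
    after the last one (arrival probability falls with idle time), so a
    good policy keeps the server on early and shuts it down late."""
    arrival_p = [0.8, 0.5, 0.3, 0.1, 0.05]
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
    for _ in range(episodes):
        s, steps = 0, 0
        while steps < 50:
            steps += 1
            # epsilon-greedy action selection
            a = (rng.choice((0, 1)) if rng.random() < 0.2
                 else max((0, 1), key=lambda x: q[(s, x)]))
            arrived = rng.random() < arrival_p[s]
            if a == 1:                     # shutdown: boot penalty if a job shows up
                r = -5.0 if arrived else 0.0
                q[(s, a)] += alpha * (r - q[(s, a)])
                break
            r = 2.0 if arrived else -1.0   # serve a job vs. waste idle energy
            s2 = 0 if arrived else min(s + 1, 4)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
            s = s2
    return q
```

The learned Q-values end up preferring "keep on" when a job is imminent — the "don't hurry" part of the title — while penalising indefinite idling.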

  • Authentication and authorisation in service oriented grid architecture   Order a copy of this article
    by Arbër Beshiri, Anastas Mishev 
    Abstract: Applications (services) nowadays request access to resources that are mostly distributed over a wide-area network. These applications usually rely on mediums such as the Grid Computing Infrastructure (GCI) to be executed. GCI is heterogeneous in nature and treats security as an essential part of grid systems. The Grid Security Infrastructure (GSI) is a technology standard for grid security. Authentication and authorisation remain security challenges for grids. This paper discusses authentication and authorisation infrastructures in the grid, including the technologies that cover these two major parts of the domain. We survey the challenges that security encounters, namely grid authentication mechanisms and grid authorisation mechanisms and models. The combination of grid authorisation technologies and grid authentication technologies with authorisation infrastructures enables role-based and fine-grained authorisation. Such technologies provide promising solutions for authentication and authorisation in service-oriented grid architectures.
    Keywords: grid; service oriented grid architecture; authorisation; authentication; security.

  • FastIoT: an efficient and very fast compression model for displaying a huge volume of IoT data in web environments   Order a copy of this article
    by Mateus Melchiades, Cesar Crovato, Rodrigo Da Rosa Righi, Everton Nedel, Lincoln Schreiber 
    Abstract: The widespread adoption of Industry 4.0 concepts, including the Internet of Things and Big Data, in industries worldwide leads to the generation of massive datasets that supervisors must appropriately analyse for an effective decision-making process. A dataset, however, can be excessively large, causing trouble when trying to visualise its content in its entirety. Furthermore, while efficient for local data analysis, traditional compression systems slow down when dealing with more than a few thousand points. Analysing the state of the art, we did not find initiatives that combine real-time retrieval of data with a good user experience when sliding through and analysing extensive datasets. The present work introduces FastIoT, a novel compression model that focuses on the visual representation of Industry 4.0 data through web environments. As a client-server proposal, FastIoT brings the ideas of: (i) speed in data preparation at the server side, since the proposed method is very simple, and (ii) efficiency, because we consider the target client's plotting area and generate an optimised dataset fitted especially to that visual region. We focus on client-server web deployments where network latency is usually high, mainly when addressing data access in multi-branch companies. FastIoT reduces file sizes by more than 6000 times, which is crucial for large queries over the web. Even with a short additional time at the server side for data preparation, FastIoT makes data ready on the client's display up to 97% faster than traditional plotting methods.
    Keywords: Industry 4.0; Big Data; IoT; compression; data visualisation; web environment.
    DOI: 10.1504/IJGUC.2021.10041927
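The abstract does not spell out FastIoT's algorithm. A common way to fit a huge series to a target plotting area, as the abstract describes, is per-pixel min/max bucketing, sketched below; this is a hypothetical illustration, not the paper's method:

```python
def downsample_for_plot(values, plot_width):
    """Reduce a long series to at most 2 points per horizontal pixel
    (the min and max of each bucket), preserving visual spikes while
    shrinking the payload sent to the browser."""
    n = len(values)
    if n <= 2 * plot_width:
        return list(values)
    out = []
    bucket = n / plot_width
    for i in range(plot_width):
        chunk = values[int(i * bucket):int((i + 1) * bucket)]
        lo, hi = min(chunk), max(chunk)
        # keep the two extremes in their original temporal order
        out.extend([lo, hi] if chunk.index(lo) <= chunk.index(hi) else [hi, lo])
    return out
```

For a 10,000-point sensor trace plotted into a 100-pixel-wide area, this yields 200 points — a 50x reduction — with the same visible envelope.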
  • IoT service distributed management architecture and service discovery method for edge-cloud federation   Order a copy of this article
    by Dongju Yang, Weida Zhang 
    Abstract: With the continuously increasing number of IoT (Internet of Things) services, distributed management of IoT services becomes an inevitable trend. Under an IoT and edge-cloud federation framework, the primary issues to solve in IoT service management are how to design a suitable distributed management architecture for IoT services, reduce network bandwidth overhead, reduce system latency, and support dynamic awareness of service status and rapid discovery of services. In this paper, distributed management of IoT services is implemented by constructing a layer-ring collaboration architecture across the cloud and edge; the service addressing channel is established on the cloud and edge using master and slave node clusters. The slave nodes are close to the services and are dynamically aware of their status. At the same time, a master-chord is constructed among the master nodes based on the Chord protocol to enable collaborative addressing by multiple master nodes on the cloud. A strong service routing network spanning cloud and edge is thus established to enable the distributed management of IoT services. This paper focuses on the service registration and discovery methods under this framework, and finally verifies the effectiveness of the method through a highway emergency scenario.
    Keywords: IoT service; service discovery; service registration; distributed management architecture; edge-cloud federation.

  • Maximising the availability of an internet of medical things system using surrogate models and nature-inspired approaches   Order a copy of this article
    by Guto Leoni Santos, Demis Gomes, Francisco Airton Silva, Patricia Takako Endo, Theo Lynn 
    Abstract: The emergence of new computing paradigms such as fog and edge computing provides the internet of things with needed connectivity and high availability. In the context of e-health systems, wearable sensors are being used to continuously collect information about our health, and forward it for processing by the Internet of Medical Things (IoMT). E-health systems are designed to assist subjects in real-time by providing them with a range of multimedia-based health services and personalised treatment with the promise of reducing the economic burden on health systems. Nonetheless, any service downtime, particularly in the case of emergency services, can lead to adverse outcomes and in the worst case, loss of life. In this paper, we use an interdisciplinary approach that combines stochastic models with surrogate-assisted optimisation algorithms to maximise e-health system availability considering the budget to acquire redundant components as a constraint, comparing three nature-inspired meta-heuristic optimisation algorithms.
    Keywords: internet of medical things; availability; surrogate models; nature-inspired approaches.

  • Recommendation system based on space-time user similarity
    by Wei Luo, Zhihao Peng, Ansheng Deng 
    Abstract: With the advent of 5G, the way people obtain information and the means of information transmission have become more and more important. As the main platform of information transmission, social media not only brings convenience to people's lives, but also generates huge amounts of redundant information because of the speed at which information is updated. Recommendation systems emerged to meet the personalised needs of users and to enable them to find interesting information in a large volume of data. As an important tool to help users filter internet information, recommendation systems play an extremely important role in both academia and industry. The traditional recommendation system assumes that all users are independent. In this paper, in order to improve prediction accuracy, a recommendation system based on space-time user similarity is proposed. Experimental results on a Sina Weibo dataset show that, compared with the traditional collaborative filtering recommendation system based on user similarity, the proposed method performs better in precision, recall and F-measure.
    Keywords: time-based user similarity; space-based user similarity; recommendation system; user preference; collaborative filtering.
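A space-time user similarity of the kind the abstract describes can be illustrated by blending a conventional rating similarity with a spatio-temporal proximity term. The combination rule, field names and decay functions below are assumptions for illustration, not the paper's formula:

```python
from math import sqrt, exp

def cosine(u, v):
    """Standard cosine similarity between two rating vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def space_time_similarity(u, v, alpha=0.5):
    """Weighted blend of rating similarity and spatio-temporal
    proximity; each user carries a rating vector plus a (time, x, y)
    context.  alpha trades off the two components."""
    rating_sim = cosine(u["ratings"], v["ratings"])
    dt = abs(u["time"] - v["time"])
    dx = sqrt((u["x"] - v["x"]) ** 2 + (u["y"] - v["y"]) ** 2)
    context_sim = exp(-dt) * exp(-dx)   # decays with time gap and distance
    return alpha * rating_sim + (1 - alpha) * context_sim
```

Users with identical tastes who post at the same time and place score near 1; the score falls as their contexts diverge, which is what lets the recommender exploit space-time locality.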

  • A graphical front-end interface for React.js considering state-transition diagrams   Order a copy of this article
    by Shotaro Naiki, Masaki Kohana, Michitoshi Niibori, Shusuke Okamoto, Yasuhiro Ohtaki, Masaru Kamada 
    Abstract: We present a graphical front-end interface for creating dynamic web pages by means of React.js. Users do not have to write JavaScript code specifying the dynamic behaviour of the web components; they only have to draw state-transition diagrams in the supplied graphical editor. Using the editor, the user draws a state-transition diagram that specifies the dynamic behaviour of each web component, with circles representing the states of the component and arrows representing the conditional transitions among the states. The accompanying translator then converts the state-transition diagrams into React.js web components in JavaScript that constitute the target web page. This combination of graphical editor and translator enables general users without programming knowledge or experience to create dynamic web pages. Would-be programmers may start learning JavaScript and React.js by comparing their diagrams with the translated JavaScript code.
    Keywords: React.js; graphical programming; JavaScript.

  • An integrity control model for mass data transmission under big data cloud storage   Order a copy of this article
    by Zhengguo Zhu 
    Abstract: In order to improve the integrity of massive data transmission and the space utilisation efficiency under cloud storage, an integrity control model for mass data transmission under big data cloud storage is constructed. Firstly, the basic principle of big data cloud storage is studied. Secondly, a mass data feature extraction model is established. Thirdly, a massive data integrity control algorithm is designed based on the ant algorithm. Finally, a simulation analysis is carried out, and the effectiveness of the proposed method is verified.
    Keywords: integrity control; mass data transmission; big data cloud storage.

  • Securing utility computing using enhanced elliptic curve cryptography and attribute-based access control policy   Order a copy of this article
    by Saira Varghese, S. Maria Celestin Vigila 
    Abstract: This paper proposes a data security model for utility computing that integrates securely mapped plain text using Elliptic Curve Cryptography (ECC) without certified keys, attribute-based access control using a Reduced Ordered Binary Decision Diagram (ROBDD), cryptographically secure 256-bit pseudorandom numbers and fingerprint security. ECC uses standard elliptic curves over a large prime, and the ROBDD is built with positive and negative attributes of data users and 256-bit pseudorandom secrets along its paths. The generated secret key is based on the attributes along the valid path of the policy to enhance security. The proposed model allows secure key exchange by using the elliptic curve's property of converting numbers into points, and secure data storage by associating authenticated secrets of end-users with the original secret key, which is beneficial in decentralised architectures. The results reveal that the proposed model achieves high security with lower space and time complexity for securing cloud data.
    Keywords: secure utility computing; ROBDD; reduced ordered binary decision diagram; ECC; elliptic curve cryptography; attribute based access control; cryptographically secure pseudorandom numbers.

  • Design and analysis of novel hybrid load-balancing algorithm for cloud data centres   Order a copy of this article
    by Ajay Dubey, Vimal Mishra 
    Abstract: The recent pandemic scenario has driven a paradigm shift from traditional computing to internet-based computing, and data are now stored and computed in the cloud environment. Cloud Service Providers (CSPs) establish and maintain a huge shared pool of computing resources that provides scalable and on-demand services around the clock without geographical restrictions, and cloud customers access these services and pay according to the resources they use. When millions of users across the globe connect to the cloud for their storage and computational needs, issues such as service delays can arise. This problem is associated with load balancing in cloud computing; hence, there is a need to develop effective load-balancing algorithms. The Novel Hybrid Load Balancing (NHLB) algorithm proposed in this paper manages the load of the virtual machines in the data centre. The paper focuses on problems such as performance optimisation, maximum throughput, makespan minimisation, and efficient resource use in load balancing. The NHLB algorithm is more efficient than conventional load-balancing algorithms, with reduced completion time (makespan) and response time. It distributes tasks equally among the virtual machines on the basis of the current state of the virtual machines and the task time required. The paper compares the results of the proposed NHLB algorithm with the dynamic load-balancing and honeybee algorithms, and shows that the proposed algorithm outperforms both.
    Keywords: cloud computing; data centre; load balancing; virtual machine; makespan; performance optimisation.

  • Virtual traditional craft simulation system in mixed reality environment   Order a copy of this article
    by Rihito Fuchigami, Tomoyuki Ishida 
    Abstract: In a previous study, we implemented a high presence virtual traditional craft system using a head-mounted display (HMD) and a mobile traditional craft presentation system using augmented reality (AR). The high presence virtual traditional craft system realised a simulation experience of traditional crafts in virtual space comprising different cultural styles. However, this system had to construct the different cultural architectures in advance, and the amount of work was enormous. The mobile traditional craft presentation system realised an AR application that allows users to easily arrange 3DCG (Three-Dimensional Computer Graphics) of traditional crafts in real space using mobile terminals. However, this system lacks a sense of presence because users experience traditional crafts on the mobile terminal's flat display. Therefore, in this study, we focused on mixed reality (MR) technology and developed an MR virtual traditional craft simulation system using an HMD. With MR technology, we have overcome the work cost and low presence issues relative to the construction of a virtual reality space. As a result, our system provides a simulation experience that realises a high sense of presence and intuitive operation. We conducted a comparative evaluation experiment with 30 subjects to evaluate the constructed MR virtual traditional craft simulation system. We obtained high evaluations of the system's presence and applicability; however, several operability issues were identified.
    Keywords: mixed reality; augmented reality; interior simulation; Japanese traditional crafts; immersive system.

  • The role of smartphone-based social media capabilities in building social capital, trust, and credibility to engage consumers in eWOM: a social presence theory perspective   Order a copy of this article
    by Saqib Mahmood, Ahmad Jusoh, Khalil MD Nor 
    Abstract: Smartphone-based social media has become a well-established channel for users to develop and maintain intimate social relations that enable them to engage in brand-related information exchange, regardless of their time and location, such as eWOM. Nevertheless, little is known about the essential elements of smartphone-based social media that help consumers to develop intimate social relationships and engage them in eWOM. To this end, drawing on the theory of social presence, the current study develops a research model that proposes that interactivity and media richness enhance social presence, giving consumers a sense of psychological proximity. Subsequently, it leads to the development of trust and social capital bonding and bridging. As a result of the bridging and bonding of social capital, consumers' perceived credibility is expected to enable them to engage in eWOM. To empirically investigate the theoretical model, a survey of 407 smartphone-based social media users was conducted in Pakistan. Empirical results reveal that interactivity and media richness enhance the social presence that proffers consumers' psychological proximity to developing trust and social capital, further enhancing their perceived credibility to engage in eWOM. Discussions, implications, and future directions are described in the final section of the study.
    Keywords: interactivity; media richness; social presence; psychological proximity; social capital; eWOM; trust; smartphone; social media.

  • A cloud-based system for distance learning supported by fog-cloud cooperation   Order a copy of this article
    by Lucas Larcher, Victor Ströele, Mário Dantas 
    Abstract: New IT approaches are enhancing the cloud computing paradigm, making this model more pervasive and on-demand for sharing computing resources on the web. These resources can be used readily by anyone, anywhere. This fosters the distance learning modality, where students do not need to be physically present at the course location and the teaching and learning process occurs through a Virtual Learning Environment (VLE). A centralised information unit (a university, for example) provides outside units with the necessary equipment and knowledge. In this common structure, the cloud paradigm is powerful and widely used, but it has limits. The challenge is to make this paradigm scalable, considering that the users' infrastructure is mainly characterised by poor internet connectivity. A fog-cloud model can be conceived in these cases to tackle this problem, using a local pre-processing stage before sending data to the cloud. The present research study addresses a situation that occurs in Brazilian universities offering distance education courses. This paper proposes a low-cost solution using the fog-cloud computing paradigm as an environment to support a VLE. In the evaluation process, an experiment using real data was carried out, and the results obtained point to the viability of the proposal.
    Keywords: fog-cloud cooperation; distance learning; virtual learning environment; application performance; fog computing.
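The local pre-processing stage that makes a fog-cloud model viable over poor links typically amounts to summarising raw data at the fog node before upload. A minimal sketch, assuming simple windowed aggregation (the paper's actual pipeline is not detailed in the abstract):

```python
def fog_preprocess(readings, window):
    """Aggregate raw sensor/usage readings at the fog node, sending
    only (mean, min, max) per window to the cloud instead of every
    sample - cutting upload volume roughly by the window size."""
    out = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        out.append((sum(chunk) / len(chunk), min(chunk), max(chunk)))
    return out
```

With a window of 60, a minute of per-second readings becomes a single triple, so a VLE site on a slow link uploads 60x fewer records.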

  • Novel crowd synthesis approach and GLCM-derived descriptor for crowd behaviour detection   Order a copy of this article
    by Yu Hao, Ying Liu, Jiulun Fan, Yuquan Gan 
    Abstract: Unlike an individual's behaviour, the dynamic nature of the crowd makes its behaviour difficult to define and classify. Techniques and approaches for crowd behaviour analysis therefore usually encounter challenges different from those of individual behaviour analysis. This paper tackles several key issues in the crowd behaviour analysis procedure. Firstly, a novel taxonomy is proposed to provide criteria for the explicit definition of different crowd behaviours. By adapting personal space, relative velocity and group force into the conventional social force model, a crowd behaviour synthesis approach is devised to provide visually realistic data for model training. Secondly, this paper introduces an effective entropy-based motion texture extraction algorithm to accurately obtain spatio-temporal motion information from the crowd. Furthermore, this paper proposes a novel visual descriptor based on GLCM-derived patterns to describe the visual essence of crowd anomalies. Experimental results indicate that the proposed descriptor outperforms other mainstream patterns in the detection of panic dispersal and congestion.
    Keywords: crowd behaviour; feature extraction; image classification; image texture analysis; information entropy.
    DOI: 10.1504/IJGUC.2021.10041931
  • Decentralised priority-based shortest job first queue model for IoT gateways in fog computing   Order a copy of this article
    by Jayashree Narayana Swamy, B. Satish Babu, Basavraj Talwar 
    Abstract: Increased growth in time-critical IoT applications has led to a rise in real-time resource requirements. Stringent latency deadlines have caused IoT applications to move from far-away cloud servers to a locally available, distributed fog computing infrastructure. In order to meet deadline and processing-time requirements, job scheduling through the IoT gateways to appropriate fog devices needs to be prioritised. Studies have shown that queuing models exhibit uncertainties in choosing suitable computing devices, applying priorities to jobs, achieving deadlines, and meeting minimum-latency constraints. In this paper, we propose a decentralised priority-based shortest-job-first queuing model for IoT gateways in a fog computing infrastructure, which uses priority-based job sorting to achieve better performance and to overcome most of the uncertainties in queuing.
    Keywords: queuing model; IoT; fog computing; M/M/c queues; M/M/c/1 queues.
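A priority-based shortest-job-first queue of the kind the abstract describes orders jobs first by priority class and then by expected processing time. A minimal sketch using Python's `heapq` (illustrative, not the authors' gateway implementation):

```python
import heapq
import itertools

class PrioritySJFQueue:
    """Jobs are ordered first by priority (lower value = more urgent),
    then by expected processing time (shortest job first), with an
    insertion counter breaking remaining ties FIFO."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO tie-breaker

    def push(self, job_id, priority, length):
        heapq.heappush(self._heap, (priority, length, next(self._counter), job_id))

    def pop(self):
        """Return the id of the next job to dispatch to a fog device."""
        return heapq.heappop(self._heap)[-1]
```

Because the heap key is the tuple `(priority, length, arrival)`, a short low-priority job never overtakes a longer high-priority one, which is the property that protects deadline-critical IoT traffic.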

  • Cloud infrastructure planning considering the impact of maintenance and self-healing routines over cost and dependability attributes   Order a copy of this article
    by Carlos Melo, Jean Araujo, Jamilson Dantas, Paulo Pereira, Felipe Oliveira, Paulo Maciel 
    Abstract: Cloud computing is the main trend regarding internet service provision. This paradigm, which emerged from distributed computing, gains more adherents every day. For those who provide or aim to provide a service or a private infrastructure, much has to be done: costs related to acquisition and implementation are common, and one alternative to reduce expenses is to outsource the maintenance of resources. Outsourcing tends to be a better choice for those who run small infrastructures than paying employees monthly to keep the service life cycle running. This paper evaluates infrastructure reliability and the impact of outsourced maintenance on the availability of private infrastructures. Our baseline environments focus on blockchain as a service; however, by modelling both service and maintenance routines, this study can be applied to most cloud services. The maintenance routines evaluated by this paper encompass a set of service level agreements and some particularities related to reactive, preventive, and self-healing methods. The goal is to point out which one has the best cost-benefit for those with small infrastructures who still plan to provide services over the internet. Preventive and self-healing repair routines provided a better cost-benefit solution than traditional reactive maintenance routines, but this scenario may change according to the number of available resources that the service provider has.
    Keywords: maintenance; reliability; availability; modelling; cloud computing; blockchain; container; services; SLA.

  • Two-phase hybridisation using deep learning and evolutionary algorithms for stock market forecasting   Order a copy of this article
    by Raghavendra Kumar, Pardeep Kumar, Yugal Kumar 
    Abstract: In this paper, a two-phase hybrid model is proposed for stock market forecasting using a deep learning approach and evolutionary algorithms. In the first phase of hybridisation, Auto Regressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM) are combined to compose the linear and non-linear features of the dataset. In the second phase, an improved Artificial Bee Colony (ABC) algorithm using Differential Evolution (DE) is used for the hyperparameter selection of the proposed hybrid LSTM-ARIMA model. Experiments are performed over 10 years of data from the Oil Drilling & Exploration and Refineries sectors of the National Stock Exchange (NSE) and Bombay Stock Exchange (BSE), from 1 September 2010 to 31 August 2020. The results demonstrate that the proposed LSTM-ARIMA hybrid model with the improved ABC algorithm outperforms the ARIMA, LSTM and hybrid ARIMA-LSTM benchmark models.
    Keywords: hybrid model; ARIMA; LSTM; ABC.
    DOI: 10.1504/IJGUC.2021.10042116
  • A competition-based pricing strategy in cloud markets using regret minimisation techniques   Order a copy of this article
    by Safiye Ghasemi, M.R. Meybodi, M. Dehghan, A.M. Rahmani 
    Abstract: Cloud computing, a fairly new commercial paradigm widely investigated by different researchers, already faces a great range of challenges. Pricing is a major problem in the cloud computing marketplace, as providers compete to attract more customers without knowing each other's pricing policies. To overcome this lack of knowledge, we model their competition as an incomplete-information game. This work proposes a pricing policy based on the regret minimisation algorithm and applies it to the considered incomplete-information game. Reflecting the competition-based marketplace of the cloud, providers update the distribution of their strategies using the experienced regret. Iteratively applying the algorithm to update strategy probabilities causes the regret to be minimised faster. The experimental results show a considerable increase in providers' profits in comparison with other pricing policies. Besides, the efficiency of a variety of regret minimisation techniques in a simulated cloud marketplace is discussed, which has not been observed in the studied literature. Moreover, the return on investment of providers in the considered organisations is studied, with promising results.
    Keywords: application; cloud computing marketplace; game theory; pricing; regret minimisation.
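The paper's exact regret-minimisation variant is not given in the abstract; standard regret matching illustrates the update the abstract describes, where a provider shifts probability mass toward prices it regrets not having played. A hedged sketch with illustrative names:

```python
def regret_matching_step(regrets):
    """Convert cumulative regrets into a mixed pricing strategy:
    probability mass proportional to positive regret, uniform when no
    strategy has positive regret yet."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    n = len(regrets)
    return [p / total for p in positive] if total > 0 else [1.0 / n] * n

def update_regrets(regrets, payoffs, chosen):
    """After observing what each price strategy would have earned,
    accumulate the regret for not having played it instead of the
    chosen one."""
    return [r + (p - payoffs[chosen]) for r, p in zip(regrets, payoffs)]
```

Repeating these two steps each round is what drives the empirical regret down; the faster it shrinks, the sooner the provider's mixed pricing strategy stabilises against its competitors.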

  • Toward stance parameter algorithm with aggregate comments for fake news detection   Order a copy of this article
    by YinNan Yao, ChangHao Tang, Kun Ma 
    Abstract: In fake news detection, the stance of comments usually contains evidence that can be used to corroborate the detected results. However, owing to the misleading content of fake news, the comments themselves may also be fake. By analysing the stance of comments and considering their possible falseness, comments can be used more effectively to detect fake news. In response to this problem, we propose Bipolar Argumentation Frameworks of Reset Comments Stance (BAFs-RCS) and Average Parameter Aggregation of Comments (APAC) to use the stance of comments to correct the prediction results of the RoBERTa model. We use the Fakeddit dataset for experiments. Our macro-F1 results on the 2-way and 3-way tasks are improved by 0.0029 and 0.0038 compared with the baseline RoBERTa model's macro-F1 results on the Fakeddit dataset. The results show that our method can effectively use the stance of comments to correct model prediction errors.
    Keywords: fake news detection; BAFs-RCS; APAC; RoBERTa.

  • Solving on-shelf availability by a creative inference of data science imputation models   Order a copy of this article
    by Ashok Mahapatra, Srikanta Patnaik, Manoranjan Dash, Ananya Mahapatra 
    Abstract: In retail data science, avenues outside the supply chain, specifically inventory and stocks, are barely explored in the context of on-shelf availability (OSA). In order to explore and propose reliable solutions for estimating the implications of OSA in retail, particularly the impact of missing sales, we leverage our domain experience in both retail and data science. We present a holistic perspective that first unearths OSA occurrences reliably and then employs modern data science techniques to estimate their impact on overall sales. Drawing from a wide range of experience in data science and machine learning, we explore and establish a correlation between missing-value imputation in data science and missing-sale scenarios in retail.
    Keywords: OOS; OSA; out of stock; missing sales; missing value imputation; on-shelf availability; data science framework in retail; data analytics.
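The core analogy — treating sales lost to empty shelves as missing values to be imputed — can be illustrated with the simplest imputation model. Mean imputation below is a stand-in; the paper alludes to a broader family of data science imputation models:

```python
def impute_missing_sales(daily_sales):
    """Replace None entries (days where an item was off-shelf, so its
    true demand went unrecorded) with the mean of the observed days.
    The sum of the filled-in values then estimates the sales lost to
    the OSA gap."""
    observed = [s for s in daily_sales if s is not None]
    fill = sum(observed) / len(observed) if observed else 0.0
    return [fill if s is None else s for s in daily_sales]
```

Richer imputers (seasonal, kNN, or model-based) follow the same pattern: estimate what each off-shelf day would have sold, then aggregate the estimates into an OSA impact figure.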

  • Enhancing the 5G V2X reliability using turbo coding for short frames   Order a copy of this article
    by Costas Chaikalis, Dimitros Kosmanos, Kostas Anagnostou, Ilias Savvas, Dimitros Bargiotas 
    Abstract: For 5th Generation (5G) Vehicle-to-Everything (V2X) communication it would be desirable to build a dynamically changing reconfigurable system, considering different parameters. Turbo codes had a great impact on the realisation and success of 3G and 4G. Despite their complexity, their use for 5G V2X and short frames represents a challenging issue. Therefore, for the physical layer the choice of decoding iterations and algorithm represent two important parameters to achieve low latency and high performance, increasing the reliability of packet delivery. This is particularly useful for traffic emergency situations under strong interference or radio frequency jamming. For the geometry-based, efficient propagation model (GEMV) for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication, our simulation results propose a constant number of three iterations. Subsequently, we investigate the main three turbo decoding algorithms for GEMV and flat Rayleigh fading, and our analysis does not recommend soft output Viterbi algorithm (SOVA) owing to its worse performance. We propose either log-Maximum a Posteriori (MAP) (better performance), or max-log-MAP (lower complexity), in comparison to the far more complex MAP algorithm.
    Keywords: 5G; V2V; turbo codes; GEMV channel.

  • Rotating machinery fault diagnosis using a quadratic neural unit   Order a copy of this article
    by Ricardo Rodriguez-Jorge 
    Abstract: In this work, a quadratic neural unit is implemented for rotating machinery fault diagnosis of an industrial machine, with input data taken from a vibration test on an alternating current motor. The data obtained from the vibrometer were the frequency and the average of the vibration, which were used to train the neural unit. The output of this unit is a value that can be used to categorise the severity level of an engine, according to the severity table provided by the ISO 10816 standard for industrial machines.
    Keywords: quadratic neural unit; industrial machine; predictive maintenance; vibration test; alternating current motor.

  • An efficient privacy-preservation algorithm for incremental data publishing   Order a copy of this article
    by Torsak Soontornphand, Mizuho Iwaihara, Juggapong Natwichai 
    Abstract: Data can be collected continuously and grows all the time. Privacy protection designed for static data might not cope with this situation effectively. In this paper, we present an efficient privacy-preservation approach based on (k, l)-anonymity for incremental data publishing. We first illustrate three privacy attacks, i.e. similarity, difference and joint attacks. Then, three characteristics of incremental data publishing are analysed and exploited to detect privacy violations efficiently. With the studied characteristics, similarity and joint attack detection can be skipped for stable releases. In addition, only a subtype of the similarity attack and the latest previously released dataset need to be checked. Experimental results show that the proposed method is highly efficient, with an average execution time eleven times lower than that of a comparable static algorithm. The proposed method also maintains better data quality than the compared methods in every setting.
    Keywords: privacy preservation; incremental data publishing; privacy attack; full-domain generalisation.
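    The kind of (k, l)-anonymity check the abstract builds on can be sketched as follows. This is an illustrative sketch only, not the paper's detection algorithm; the function name, field names and example records are all invented for the illustration.

    ```python
    from collections import defaultdict

    def satisfies_kl_anonymity(records, quasi_ids, sensitive, k, l):
        """(k, l)-anonymity: every quasi-identifier group must contain at
        least k records and at least l distinct sensitive values."""
        groups = defaultdict(list)
        for rec in records:
            key = tuple(rec[a] for a in quasi_ids)   # quasi-identifier group
            groups[key].append(rec[sensitive])
        return all(len(vals) >= k and len(set(vals)) >= l
                   for vals in groups.values())

    # Illustrative generalised release: two groups of size 2,
    # each with 2 distinct sensitive values -> satisfies (2, 2)-anonymity.
    release = [
        {"age": "3*", "zip": "120**", "disease": "flu"},
        {"age": "3*", "zip": "120**", "disease": "cold"},
        {"age": "4*", "zip": "130**", "disease": "flu"},
        {"age": "4*", "zip": "130**", "disease": "asthma"},
    ]
    print(satisfies_kl_anonymity(release, ["age", "zip"], "disease", k=2, l=2))  # True
    ```

    An incremental publisher would run such a check against each new release; the paper's contribution is showing when the check can be skipped or restricted to the latest previous release.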

  • Energy-based cost model of container provisioning on clouds   Order a copy of this article
    by Mauricio Pillon, Aline Moreira, Charles Miers, Guilherme Koslovski, Nelson Gonzalez 
    Abstract: Cloud computing has revolutionised the development and execution of distributed applications by providing on-demand access to virtual resources. Containerisation simplifies management and support of the cloud infrastructure and applications. Clouds are typically consumed in a pay-as-you-go pricing model. However, when applied to containerised environments, such traditional models do not consider resource utilisation, leading to inaccurate estimates. Moreover, these models do not consider energy consumption, a dominant component of a data centre's total cost of ownership. This paper proposes Energy Price Cloud Containers (EPCC), a cost model based on energy consumption that accounts for containers' effective resource utilisation. By comparing the cost of an application running on Amazon Web Services (AWS) Fargate with the estimated cost of the same application under EPCC, it is possible to identify the benefits of an energy-based pricing model. The weekly costs estimated when running computational resources on EPCC vary between US$ 2.31 and US$ 10.59. In contrast, when estimating the same amount of resources on AWS Fargate, the costs vary between US$ 2.71 and US$ 29.94. EPCC resulted in a cost reduction of up to 35%.
    Keywords: Pricing Model; Containers; Cloud Computing; Energy Consumption.
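    A minimal sketch of what an energy-based container pricing formula can look like, assuming a common linear idle-to-peak power model. All constants (idle/peak draw, electricity price, margin, utilisation weights) are illustrative assumptions, not EPCC's actual parameters.

    ```python
    def energy_cost(cpu_util, mem_util, hours,
                    p_idle_w=50.0, p_peak_w=150.0,
                    kwh_price=0.12, margin=1.2):
        """Estimate a container's cost from its effective resource utilisation.

        Power draw is modelled linearly between idle and peak, a common
        first-order approximation for servers; the utilisation weights
        and all constants here are illustrative only.
        """
        util = 0.7 * cpu_util + 0.3 * mem_util          # effective utilisation
        power_w = p_idle_w + (p_peak_w - p_idle_w) * util
        kwh = power_w * hours / 1000.0                  # energy consumed
        return kwh * kwh_price * margin                 # billed amount

    # A container at 40% CPU / 20% memory utilisation for one week (168 h)
    weekly = energy_cost(cpu_util=0.4, mem_util=0.2, hours=168)
    print(f"US$ {weekly:.2f}")  # → US$ 2.03
    ```

    The contrast with a flat pay-as-you-go model is that an idle container (utilisation near zero) is billed close to its idle power only, rather than at the full reserved-capacity rate.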

Special Issue on: IoT Integration in Next-Generation Smart City Planning

  • Internet of things based architecture for additive manufacturing interface   Order a copy of this article
    by Swee King Phang, Norhijazi Ahmad, Chockalingam Vaithilingam Aravind, Xudong Chen 
    Abstract: This paper addresses the current challenges in managing multiple additive manufacturing units (i.e. 3D printers) without an online system. Managing multiple 3D printers is troublesome and difficult. The traditional process of selecting free printers and monitoring printing status manually reveals a major flaw: it requires physical interaction between human and machine. To date, there is hardly any online management system for 3D printers. Most printing still requires human monitoring, and the job to be printed must be fed physically to the printer via external drives. In this paper, a solution requiring zero physical interaction with additive manufacturing units is proposed. The objective is achieved using mature IoT technologies. A web server hosts a webpage for uploading files, requesting approval and checking printing status. A server stores the files, the slicing software and the file queueing system, and holds temporary information on each manufacturing unit's status. Cameras on multiple 3D printers are used as sensors to monitor job progress visually. In the resulting IoT-based 3D printing system, the user is able to upload files, ask for superior approval (optional), queue a job to a specific manufacturing unit selected by the algorithm on the cloud server, receive important data from the server such as the time estimate, progress percentage and extruder temperature, and receive notifications of errors, if any arise, and of completion. The proposed system is implemented and verified in the Additive Manufacturing Lab at Taylor's University, Malaysia.
    Keywords: additive manufacturing units; 3D printing; online printing; printer management; cloud printing; printing networking; IoT printer; printing monitoring; heat monitor.

  • Enhanced authentication and access control in internet of things: a potential blockchain-based method   Order a copy of this article
    by Syeda Mariam Muzammal, Raja Kumar Murugesan 
    Abstract: With the rapid growth of the Internet of Things (IoT), it can be foreseen that IoT services will influence several use-cases. IoT brings along security and privacy issues that may hinder its wide-scale adoption. The scenarios in IoT applications are quite dynamic compared with the traditional internet. It is vital that only authenticated and authorised users get access to the services provided. Hence, there is a need for a novel authentication and access control technique that is compatible and practically applicable in diverse IoT scenarios, providing adequate security to devices and the data communicated. This article aims to highlight the potential of blockchain for enhanced and secure authentication and access control in IoT. The proposed method relies on blockchain technology, which tends to eliminate the limitations of intermediaries for authentication and access control. Compared with existing systems, it has the advantages of decentralisation, secure authentication, authorisation, adaptability and scalability.
    Keywords: internet of things; security; authentication; access control; blockchain.

  • Control and monitoring of air-conditioning units through cloud storage and control operations   Order a copy of this article
    by Chockalingam Aravind Vaithilingam, Mohsen Majrani 
    Abstract: Temperature control and monitoring of air-conditioning units is critically important for energy savings. The purpose of this work is to design and develop an air-conditioner monitoring and control system based on the internet of things. The developed system uses a mobile app integrated with a cloud service that enables users to monitor and control the unit's operation. The system consists of three subsystems: a micro-controller, cloud storage and a mobile app. The micro-controller collects data from a pressure transducer, a differential pressure sensor, a current transformer, an accelerometer, and a temperature and humidity sensor. The data collected by the Arduino are sent to the cloud storage platform using a REST API. The cloud storage stores the data, displays them graphically, and sends notifications to specific users when a rule is triggered. A hybrid mobile app is also developed with the Ionic Framework; it displays the data stored in cloud storage, fetched via the REST API. The system monitors several critical parameters of the air conditioner: differential air pressure, refrigerant pressure, power, angle of vibration on the x-axis and y-axis, temperature and humidity. With the data collected, an algorithm to monitor and control the performance of such an air-conditioning system through this embedded module is envisioned to be part of energy-efficient systems.
    Keywords: condition monitoring; internet of things; mobile app; cloud storage.

  • Forecasting of solar potential and investigation of voltage stability margin using FACTs device: a synopsis from Geography of Things perspective   Order a copy of this article
    by Masum Howlader, Khandaker Sultan Mahmood, Md.Golam Zakaria, Kazi Mahtab Kadir, Mirza Mursalin Iqbal 
    Abstract: The uncertain and erratic nature of solar energy is quite distinct from traditional, dispatchable fuels for generation, and solar is laborious to integrate into conventional system operation. In the first part of this work, a machine-learning algorithm is used to train models on solar irradiance data and different meteorological weather information to predict solar irradiance for different cities and validate the forecasting model. The data used for modelling are taken from publicly available Geographical Information System (GIS) data. Such data can realistically be collected using Internet of Things (IoT) devices and sensors which, when based on a GIS approach, transform the system into a Geography of Things (GoT). Again, the intermittent and inertia-less nature of photovoltaic systems can produce significant power oscillations that cause problems for the dynamic stability of the power system and limit the penetration capacity of photovoltaics into the grid. In the second part, it is shown that a residue-based power oscillation damping (POD) controller significantly improves inter-area oscillation damping. The validity and effectiveness of the proposed controller are demonstrated through simulations on a three-machine, two-area test system that combines conventional synchronous generators and Flexible AC Transmission Systems (FACTS) devices. Overall, this work presents an in-depth analysis of the challenges of solar integration for the planning, operation and, particularly, the stability of the rest of the power grid, including existing generation resources, customer requirements and the transmission system itself, leading to improved decision-making in resource allocation and grid stability.
    Keywords: solar forecasting; static var compensator; support vector machine; power oscillation damping; geography of things.

  • An analytical approach to real-time cloud services on IoT-based applications for smart city planning   Order a copy of this article
    by M.D. Shahrukh Adnan Khan, Kazi Mahtab Kadir, Md. Khairul Alam, Shoaib Mahmud, Shah Reza Mohammad Fahad Ul Hossain, Md. Pabel Sikder, Fiza Jefreen, Ainun Kamal 
    Abstract: This paper illustrates cloud-based services for next-generation smart living technology, first presenting the concept of next-generation smart technology, followed by its wide areas of application. Next, the current generation of technological enhancement, the Internet of Things (IoT), is brought into smart living. The IoT-based smart application space is then divided into five categories: the power and energy, transport, healthcare, retail and education sectors. Each sector is analysed in detail with respect to possible real-time cloud services that can be incorporated into the respective area. Existing cloud services, current trends, limitations and future scope are then discussed, followed by recommendations for each section. For example, for IoT-based applications in the power and energy sector, limitations of cloud services were found, such as unoptimised communication schemes, data complexity, interoperability, cyber-security risk and data integrity issues. To address these limitations, a policy to ensure backward and horizontal compatibility, a proposal to increase local processing, data compression and prediction, the implementation of big data techniques, authentication and encryption, and planning and redundancy are included as recommendations. For the other four categories, a similarly intensive analysis has been carried out. Finally, recommendations are added for each category as scope for future research.
    Keywords: cloud service; cloud storage; smart city; IoT application; real-time system.

Special Issue on: CONIITI 2019 Intelligent Software and Technological Convergence

  • Computational intelligence system applied to plastic microparts manufacturing process   Order a copy of this article
    by Andrés Felipe Rojas Rojas, Miryam Liliana Chaves Acero, Antonio Vizan Idoipe 
    Abstract: In the search for knowledge and technological development, there has been an increase in new analysis and processing techniques closer to human reasoning. With the growth of computational systems, hardware production needs have also increased: parts with millimetric to micrometric features are required for optimal system performance, so the demand for injection moulding is also increasing. Injection moulding is a complex manufacturing process because its mathematical modelling is not yet established; therefore, to address the selection of correct values of the injection variables, computational intelligence can be the solution. This article presents the development of a computational intelligence system integrating fuzzy logic and neural network techniques with a CAE modelling system to support injection machine operators in selecting optimal machine process parameters to produce good-quality microparts in fewer process cycles. The tests carried out with this computational intelligence system have shown a 30% improvement in the efficiency of the injection process cycles.
    Keywords: computational intelligence; neural networks; fuzzy logic; micro-parts; plastic parts; computer vision; expert systems; injection processes; CAD; computer-aided design systems; CAE; computer-aided engineering.

Special Issue on: Novel Hybrid Artificial Intelligence for Intelligent Cloud Systems

  • QoS-driven hybrid task scheduling algorithm in a cloud computing environment   Order a copy of this article
    by Sirisha Potluri, Sachi Mohanty, Sarita Mohanty 
    Abstract: Cloud computing is a growing technology of distributed computing. Using cloud computing, services are deployed for individuals or organisations, allowing the sharing of resources, services and information based on users' demand over the internet. CloudSim is a simulator tool used to simulate cloud scenarios. A QoS-driven hybrid task scheduling architecture and algorithm for dependent and independent tasks in a cloud computing environment is proposed in this paper. The results are compared against the Min-Min task scheduling algorithm, a QoS-driven independent task scheduling algorithm, and the proposed QoS-driven hybrid task scheduling algorithm. The QoS-driven hybrid task scheduling algorithm is evaluated with time and cost as QoS parameters and gives better results for these parameters.
    Keywords: cloud computing; task scheduling; quality of service.
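    The Min-Min baseline the abstract compares against is a classic heuristic and can be sketched compactly; the execution-time matrix and all names below are illustrative, not from the paper.

    ```python
    def min_min_schedule(exec_times):
        """Min-Min task scheduling heuristic.

        exec_times[t][m] is the execution time of task t on machine m.
        Repeatedly pick the (task, machine) pair whose completion time
        (execution time + machine ready time) is smallest, assign it,
        and update that machine's ready time.
        """
        n_tasks, n_machines = len(exec_times), len(exec_times[0])
        ready = [0.0] * n_machines            # when each machine becomes free
        unassigned = set(range(n_tasks))
        assignment = {}
        while unassigned:
            ct, task, machine = min((ready[m] + exec_times[t][m], t, m)
                                    for t in unassigned
                                    for m in range(n_machines))
            assignment[task] = machine
            ready[machine] = ct               # machine busy until ct
            unassigned.remove(task)
        return assignment, max(ready)         # task->machine map and makespan

    times = [[4, 6], [3, 8], [5, 2]]          # 3 tasks x 2 machines (illustrative)
    plan, makespan = min_min_schedule(times)
    print(plan, makespan)                     # → {2: 1, 1: 0, 0: 0} 7.0
    ```

    A QoS-driven variant, as the abstract describes, would replace the pure completion-time criterion with a weighted combination of QoS parameters such as time and cost.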

  • SD-6LN: improved existing internet of things framework by incorporating software defined network approach   Order a copy of this article
    by Rohit Das, Arnab Maji, Goutam Saha 
    Abstract: The Internet of Things (IoT) is a prominent technology in today's world, with real-time applications in various areas. Although the advances made by IoT are remarkable, the current IoT foundation experiences problems of availability, reliability, scalability, resiliency, interoperability, security and privacy. Software-Defined Networking (SDN) is an approach that can help resolve many of these IoT limitations. SDN uses a controller that can improve system performance. In this paper, a novel SD-6LN architecture is proposed for existing IPv6 over Low-Power Wireless Personal Area Network (6LoWPAN) based IoT infrastructure. The SD-6LN architecture incorporates the network layer of IoT and SDN to address some of the challenges of traditional resource-constrained IoT network systems, such as availability, reliability, scalability and resiliency. The experimentation was carried out in simulation. The experimental results indicated improved performance with respect to round-trip time, jitter, packet drop, latency and throughput.
    Keywords: Architecture; Internet of Things; OpenFlow; Software-Defined Network; 6LoWPAN.

  • A security analysis of lightweight consensus algorithm for wearable kidney   Order a copy of this article
    by Saurabh Jain, Adarsh Kumar 
    Abstract: Blockchain is a distributed ledger-based technology and provides a solution to many data-centric problems. In recent times, this area has encouraged innovations to handle challenges in many useful applications in which traditional approaches have not been successful. A smart healthcare system is one such application, where blockchain can play a vital role in combining technologies such as security, data storage, data retrieval, a patient-centric approach, and data visualisation. This work proposes a game theory-based approach for consensus building in a distributed network. This approach builds consensus in a trustworthy environment where technologies are explored to provide a problem-centric solution. In this work, a wearable kidney model is analysed to understand the working of the game theory-based consensus model. This example shows how blockchain technology can be used for consensus building in a healthcare system. The lightweight consensus algorithm consumes fewer resources (suitable for resource-constrained devices) and provides an efficient solution for simulating the functionality of the wearable kidney model. The comparative analysis of the results shows that the proposed approach is efficient in fast bit-matching and quick consensus establishment. Results show that the kidney's blood filtering and urine production are mapped to almost ideal conditions, and variations in delay for bit-matching and algorithm execution are evaluated thereafter. The comparative analysis shows that algorithm 1 outperforms algorithm 2 by at least 2.1% in delay analysis because of its less distributed functionality. Both algorithms are found to be efficient compared with state-of-the-art algorithms for trust establishment.
    Keywords: game theory; blockchain; cryptocurrency; lightweightness; hash rate; bit-exchange; challenge-response; attacks.

Special Issue on: ICIMMI 2019 Emerging Trends in Multimedia Processing and Analytics

  • Handwritten Odia numeral recognition using combined CNN-RNN   Order a copy of this article
    by Abhishek Das, Mihir Narayan Mohanty 
    Abstract: Detection and recognition are major tasks in current research; almost all areas of signal processing, including speech and image processing, include them as sub-tasks. Recognition is also the major challenge in data compression, which is mainly used in multimedia communication. Keeping these facts in view, the authors take an approach to handwritten numeral recognition. To address the scarcity of training data, a generative adversarial network is used to generate synthetic data, which are considered along with the original data. The database was collected from IIT Bhubaneswar and used in a GAN model to generate a large amount of data. Further, a Convolutional Neural Network (CNN) combined with a Recurrent Neural Network (RNN) is used for recognition. Though Odia numerals are somewhat complex, the recognition task was found very interesting; little work has been done in this direction, and deep learning based approaches have been absent. Long Short-Term Memory (LSTM) cells are used as the recurrent units in this approach. We added 1,000 images generated by a Deep Convolutional Generative Adversarial Network (DCGAN) to the IIT-BBSR dataset. We used the Adam optimisation algorithm to minimise the error, and the network was trained with supervised learning. The method achieves 98.32% accuracy.
    Keywords: character recognition; Odia numerals; deep learning; CNN; RNN; LSTM; DCGAN; Adam optimisation.

  • An optimal channel state information feedback design for improving the spectral efficiency of device-to-device communication   Order a copy of this article
    by Prabakar Dakshinamoorthy, Saminadan Vaitilingam 
    Abstract: This article introduces a regularised zero-forcing (RZF) based channel state information (CSI) feedback design for improving the spectral efficiency of device-to-device (D2D) communication. The proposed method exploits a conventional feedback design along with optimised CSI to regulate communication flows in the environment. The codebook-dependent precoder design improves the feedback rate by streamlining time/frequency-dependent scheduling. Incoming communication traffic is scheduled across the available channels by pre-estimating their adaptability and capacity across the underlying network. This helps the communicating devices exchange partial channel information without the help of base station services. These features reduce transmission error rates and achieve a better sum rate irrespective of the distance and transmit power of the devices.
    Keywords: CSI; D2D; feedback design; precoding; zero-forcing.

Special Issue on: ITT 2019 Advances in Next-Generation Communications and Networked Applications

  • Comparing the performance of supervised machine learning algorithms when used with a manual feature selection process to detect Zeus malware   Order a copy of this article
    by Mohamed Ali Kazi, Steve Woodhead, Diane Gan 
    Abstract: The Zeus banking malware is one of the most prolific banking malware variants ever to be discovered, and this paper compares and analyses the performance of several supervised machine learning (ML) algorithms when used to detect the Zeus banking malware (Zeus). The key to this paper is that the features that are used for the analysis and detection of Zeus are manually selected, providing the researcher better control over which features can and should be selected. This also helps the researcher to understand the features and the impact that the various feature combinations have on the accuracy of the algorithms when used to detect Zeus. The empirical analysis showed that the decision tree and random forest algorithms produced the best results as they detected all the Zeus samples. The empirical analysis also showed that selecting the feature combinations manually produces varying results, allowing the researchers to understand how the features impact the detection accuracy.
    Keywords: Zeus banking malware; machine learning; binary classification algorithms; supervised machine learning; manual feature selection.

  • Collaborative ambient intelligence based demand variation prediction model   Order a copy of this article
    by Munir Naveed, Yasir Javed, Muhammed Adnan, Israr Ahmed 
    Abstract: The inventory control problem is faced by corporations on a daily basis in optimising the supply chain process and predicting optimal pricing for item sales or services. The problem depends heavily on one key factor: demand variation. Inventories must be aligned with demand variations to avoid overheads or shortages. This work explores various machine learning algorithms to solve the demand variation problem in real time. Predicting demand variation is a complex and non-trivial problem, particularly in the presence of open orders. In this work, prediction of demand variation is addressed for use-cases characterised by open orders. This work also presents a novel prediction model that is a hybrid of learning domains and domain-specific parameters. It exploits the Internet of Things (IoT) to extract domain-specific knowledge, while a reinforcement learning technique is used to predict the variations in those domain-specific parameters that depend on demand variation. The new model is explored and compared with state-of-the-art machine learning algorithms using the Grupo Bimbo case study. The results show that the new model predicts demand variations with significantly higher accuracy than other models.
    Keywords: inventory management; reinforcement learning; IoT devices; Grupo Bimbo inventory demand variation.

Special Issue on: 3PGCIC Cloud and Edge Systems and Applications

  • Quality of service prediction model in cloud computing using adaptive dynamic programming parameter tuner   Order a copy of this article
    by Monika Rd, Om Prakash Sangwan 
    Abstract: With the continuous proliferation of cloud services, recommending the optimal cloud service according to user requirements has become an important and critical issue. It is highly infeasible for a single user, who wants to use cloud services for a specific application with QoS requirements, to try all the available cloud services; such a user therefore depends on information collected by other users about the QoS of various cloud services. These collected QoS values are highly nonlinear, complex and uncertain. To deal with this scenario, a recommender system is required for the prediction of unknown QoS values using optimisation techniques. In this paper, we develop two models: (i) an optimised matrix factorisation prediction model and (ii) an optimised fuzzy C-means prediction model. Matrix factorisation and fuzzy C-means are basic traditional techniques used with static model parameters for the prediction of missing values. With static parameters, however, these techniques cannot handle the significant changes arising under unpredictable internet conditions and the sparsity of available historical QoS data. To overcome this problem, we apply a novel backpropagation-based adaptive dynamic programming (ADP) parameter tuning strategy to these two basic prediction techniques, where backpropagation is an important mathematical tool of neural networks. To the best of our knowledge, this is the first time backpropagation has been applied with an ADP parameter tuner to develop a self-adaptive intelligent system; this system provides an automatic parameter tuning capability to our proposed QoS prediction models. To evaluate the proposed approach, we simulated it on a real QoS dataset, and the experimental results show that it yields better prediction accuracy than traditional approaches.
    Keywords: cloud computing; QoS prediction; ADP parameter tuner; fuzzy C-means clustering; matrix factorisation; backpropagation neural network.
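    The static-parameter matrix factorisation baseline the abstract starts from can be sketched as plain SGD over the observed entries of a sparse user x service QoS matrix. This is an illustrative sketch with fixed learning rate and regularisation; the paper's contribution is precisely to tune such parameters adaptively, which is not shown here. All names and data are invented for the illustration.

    ```python
    import random

    def mf_predict(R, rank=2, steps=5000, lr=0.01, reg=0.02, seed=0):
        """Factorise a sparse user x service QoS matrix R (None = missing)
        into P.Q^T by stochastic gradient descent, then return the dense
        P.Q^T matrix so missing entries can be read off as predictions."""
        rnd = random.Random(seed)
        n_u, n_s = len(R), len(R[0])
        P = [[rnd.gauss(0, 0.1) for _ in range(rank)] for _ in range(n_u)]
        Q = [[rnd.gauss(0, 0.1) for _ in range(rank)] for _ in range(n_s)]
        obs = [(u, s, R[u][s]) for u in range(n_u) for s in range(n_s)
               if R[u][s] is not None]
        for _ in range(steps):
            u, s, r = rnd.choice(obs)
            e = r - sum(P[u][f] * Q[s][f] for f in range(rank))
            for f in range(rank):
                puf, qsf = P[u][f], Q[s][f]          # cache before update
                P[u][f] += lr * (e * qsf - reg * puf)
                Q[s][f] += lr * (e * puf - reg * qsf)
        return [[sum(P[u][f] * Q[s][f] for f in range(rank))
                 for s in range(n_s)] for u in range(n_u)]

    # Response times (s) of 3 services observed by 3 users; None = not invoked
    R = [[0.9, 0.8, None], [1.0, 0.9, 0.4], [None, 0.8, 0.3]]
    pred = mf_predict(R)
    ```

    An ADP-style tuner, as the abstract describes, would adjust `lr` and `reg` on-line from the prediction error instead of fixing them for the whole run.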

  • Towards a cloud model choice evaluation: comparison between cost/features and ontology-based analysis   Order a copy of this article
    by Pasquale Cantiello, Beniamino Di Martino, Michele Mastroianni, Luigi Colucci Cante, Mariangela Graziano 
    Abstract: In academic institutions, there is frequently the need to provide new services, in a cloud model, for use in teaching or research activities. One of the main decisions to be addressed is which cloud model to adopt (private, public or hybrid), and which mix of functionalities to use in the hybrid case. In this paper, two different methodologies (cost/features and semantic-based) are tested in order to identify the cloud model best suited to a specific problem. The long-term perspective is to build a methodology that serves as a decision-support tool for the ICT manager. The comparison between the two methodologies shows the strengths and weaknesses of both approaches.
    Keywords: cloud model; decision support system; SWRL; OWL; cloud evaluation; cloud cost analysis.

  • An evaluation environment for high-performance computing combining supercomputing and cloud   Order a copy of this article
    by Yusuke Gotoh, Toshihiro Kotani 
    Abstract: Based on the characteristics of the supercomputer and the cloud system installed at the Hokkaido University Information Initiative Center, Japan, we aim to construct a high-performance computing environment by linking the two types of system. In this paper, we propose a high-performance computing environment for deep reinforcement learning that links the supercomputer and cloud systems. Our proposed system can construct a high-performance computing environment scaled to the computing process through the cooperation of the supercomputing and cloud systems, which are separated by short physical and network distances. In our evaluation of deep reinforcement learning using the proposed system, we confirmed that computing resources can be used effectively by allocating suitable processing to the supercomputer and the cloud according to the usage of the CPU, the GPU and the memory.
    Keywords: cloud service; high-performance computing; processing time; supercomputer.

  • A survey on auto-scaling: how to exploit cloud elasticity   Order a copy of this article
    by Marta Catillo, Massimiliano Rak, Umberto Villano 
    Abstract: Elasticity plays an essential role in the wide diffusion of cloud computing. It enables a cloud application deployment to 'scale' automatically, adapting to workload changes and guaranteeing the performance requirements with minimum infrastructure leasing costs. However, auto-scaling poses challenging problems. This paper gives a detailed overview of the current state of the art on auto-scaling. Firstly, the key design points for auto-scaling tools are presented and discussed. Then, literature proposals and on-going research are dealt with. Finally, existing auto-scaling implementations, including those used by commercial cloud providers, are reviewed.
    Keywords: auto-scaling; elasticity; cloud computing.

  • An effort to characterise I/O enhancements of storage environments   Order a copy of this article
    by Laercio Pioli, Victor Ströele, Mario A. R. Dantas 
    Abstract: Data management and storage are becoming challenging owing to the huge amount of data created, processed and stored. The growing gap between processing power and storage latency increases this performance disparity. Aiming to reduce the I/O bottleneck in storage environments, researchers have proposed interesting improvements to I/O architectures. High-Performance Computing (HPC) and Data-Intensive Scalable Computing (DISC) applications are systems that face data challenges owing to the many parameters involved in managing data. This study describes our characterisation model for classifying research works on I/O performance improvements for storage systems and devices that improve overall HPC and DISC application performance. This paper presents a set of experiments using a synthetic I/O benchmark performed on Grid'5000. We demonstrate that the latency of I/O operations can vary considerably depending on the factors evaluated in the experiments.
    Keywords: I/O characterisation; I/O performance; I/O improvement; I/O model; HPC; DISC; Big Data; GRID5000; storage system; storage environments.

  • A vision about lifelong learning and its barriers   Order a copy of this article
    by Jordi Conesa 
    Abstract: Around 25 years ago, some researchers argued for moving towards innovative learning models characterised by being more personalised, in which students would have a more active role in deciding what, when and how to learn. Nowadays, there is a need for flexible, efficient, universal and lifelong education. Lifelong learning is fully integrated into our society and, from the student's point of view, is very different from regular learning. Among the differences are the maturity of the students, the much broader domains of interest, the way learning occurs at different depths, the fact that the topics studied may relate to work, family or leisure, and the students' limited availability owing to the need to reconcile home, work, leisure and learning. Lifelong learning requires personalised models that adapt to students' needs and constraints, but lifelong learners keep suffering from models adapted neither to their necessities nor to the needs of society. This paper reflects on the current situation of lifelong learning, analyses some of the relevant literature and discusses the challenges of conceptualising, from a transdisciplinary point of view, innovative e-learning models that promote the self-determination of students.
    Keywords: lifelong learning; heutagogy; self-determined learning; e-learning.

Special Issue on: Intelligent Evaluations of Resource Management Techniques in Fog Computing

  • Web data mining algorithm based on cloud computing environment   Order a copy of this article
    by Yunpeng Liu, Xiaolong Gu, Jie Zhang 
    Abstract: With the rapid development of the internet, the amount of information has grown exponentially, and single-node computation and storage have become a bottleneck for analysing useful information from it. To extract valuable rules and patterns quickly from massive, noisy data and make them easy to understand and apply directly, we use data mining technology. Given its low cost, large throughput, good fault tolerance and strong stability, cloud computing is selected for web data mining processing. This paper studies and analyses the K-means clustering algorithm; to overcome the shortcomings of the K-means algorithm itself, we improve it, port it to the Hadoop cloud computing platform, and parallelise and optimise the improved algorithm. Finally, the experimental results in terms of effectiveness and acceleration ratio show that the improved and optimised algorithm solves the problem of insufficient speed and efficiency in the clustering process.
    Keywords: cloud computing; data mining; clustering algorithm; K-means algorithm.
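The abstract above parallelises an improved K-means but gives no algorithmic detail; as background, here is a minimal single-node sketch of the plain Lloyd's K-means iteration that such systems distribute over Hadoop nodes (the data and parameters are illustrative, not the paper's):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's K-means over a list of equal-length tuples."""
    rng = random.Random(seed)
    centroids = list(rng.sample(points, k))
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = tuple(sum(xs) / len(c) for xs in zip(*c))
    return centroids

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.8)]
print(sorted(kmeans(pts, 2)))  # two centroids, one per cluster
```

On Hadoop/MapReduce, the assignment step is typically carried out by mappers and the centroid update by reducers; the loop body above corresponds to one such map-reduce round.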

  • Intelligent manufacturing system based on data mining algorithm   Order a copy of this article
    by Xiaoya Liu, Qiongjie Zhou 
    Abstract: How to apply data mining methods reasonably to intelligent manufacturing systems is a major issue facing the manufacturing industry. This article focuses on an evaluation model for an intelligent manufacturing system based on a data mining algorithm. Combining the data mining algorithm with the intelligent manufacturing system, an evaluation model of the intelligent manufacturing system is established. A neural network is selected for the final evaluation. After training, error analysis is performed, and problems arising in the optimisation algorithm, feature selection or data collection are analysed. The highest accuracy rate of the training group was 69%, and the highest accuracy rate of the test group was 32.5%. The results show that using data mining algorithms for recognition can effectively cluster control chart patterns and improve recognition efficiency.
    Keywords: data mining algorithm; intelligent manufacturing system; evaluation model; error analysis.

  • Visualisation technology in digital intelligent warehouse management system   Order a copy of this article
    by GuangHai Tang, Hui Zeng 
    Abstract: This study introduces visualisation technology into digital intelligent warehouse management and combines RFID technology with Web GIS technology. Pressure and performance tests of the system show that user waiting time is short and system performance is stable; the designed system can meet the needs of business operations, providing warehouse management personnel with real-time information on goods location and inventory and generating various reports and data. The results show that the system can reduce the cost of storage management by up to 40.3%, cut management time by nearly half, and greatly improve management efficiency. At the same time, owing to the use of intelligent information tools, it can also reduce the mistakes caused by manual operation and improve the competitiveness of enterprises.
    Keywords: visualisation technology; intelligent warehouse management; RFID technology; web GIS technology.

  • Image Recognition Technology Based on Neural Network in Robot Vision System   Order a copy of this article
    by Yinggang He 
    Abstract: Robot vision systems have great research value and broad application prospects in robot navigation and positioning, human-computer interaction, unmanned driving, disaster rescue and other fields, in which image recognition technology plays an important role. The purpose of this study is to analyse the application of neural-network-based image recognition technology in a robot vision system. The research trains the decoder on the CamVid dataset, then fine-tunes the parameters on the collected data, labels the manually collected data with the LabelMe annotation tool, and cross-validates images and scenes using a neural network algorithm and image recognition techniques. After five training cycles, the neural network in this study achieves more than 90% recognition accuracy, and converges after about 10 cycles. Finally, the recognition accuracy on the test dataset reaches more than 95%. Within the range of robot vision recognition, the maximum measurement deviation is only 2.54 cm and the error is less than 2%. It can be concluded that the method used in this study has fast convergence, high recognition accuracy, small error, and good practicability and effectiveness. It improves the robot's recognition efficiency, its ability to handle complex environments, and the precise positioning of objects.
    Keywords: neural network; image recognition; machine vision; recognition system.

  • Mechanical fault detection method of weighing device sensor faced on internet of things   Order a copy of this article
    by Yan Dong, Shiying Bi 
    Abstract: With the advancement of science and technology, electronic scales incorporating load sensors have been widely used in various industries to achieve fast and accurate material weighing. Especially with the advent of microprocessors and the continuous improvement of automation in industrial production processes, load sensors have become a necessary device in weighing process control, but there is currently no method for the mechanical fault diagnosis of load sensors. This experiment samples the zero-point output signal of the weight sensor, takes n consecutive values with a sliding window, and computes the standard deviation of those n values. The ratio of this standard deviation to the normal output standard deviation is then used as the test statistic: when the ratio is greater than a set threshold, the sensor is faulty; otherwise there is no fault. In the experiment, 20 normal output values are randomly selected from the zero-point test data of the weighing sensor, the standard deviations of 10 sequences are calculated from these 20 values, and the average of the 10 standard deviations is taken as the sensor's normal zero-drift standard deviation. This method can monitor the running status of multiple devices in real time, predict the time of equipment failure, and detect creep faults as early as possible. By setting a critical value, the system can indicate possible faults before the absolute limit is reached, allowing maintenance in advance so that machinery and equipment continue to operate normally.
    Keywords: load sensor; fault diagnosis; signal sampling; creep fault.
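The detection statistic described above — the ratio of a sliding-window standard deviation of the zero-point output to the normal output standard deviation, compared against a threshold — can be sketched as follows (the function name, sample values and threshold are illustrative, not the paper's):

```python
import statistics

def is_faulty(samples, window, normal_std, threshold=3.0):
    """Slide a length-`window` frame over the zero-point samples; flag a
    fault when any window's standard deviation exceeds `threshold` times
    the sensor's normal output standard deviation."""
    for start in range(len(samples) - window + 1):
        win = samples[start:start + window]
        if statistics.stdev(win) / normal_std > threshold:
            return True
    return False

normal = [0.01, -0.02, 0.00, 0.01, -0.01, 0.02, 0.00, -0.01]   # healthy zero drift
drift  = [0.01, 0.02, 0.10, 0.40, 0.90, 1.60, 2.50, 3.60]      # creep-like fault
print(is_faulty(normal, 4, normal_std=0.015))  # → False
print(is_faulty(drift, 4, normal_std=0.015))   # → True
```

A slowly growing creep fault inflates the within-window spread long before the output reaches any absolute limit, which is why a ratio test on the windowed standard deviation can warn early.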

  • Rapid analysis and detection algorithm and prevention countermeasures of urban traffic accidents under artificial intelligence   Order a copy of this article
    by Zhao Yang, Yingjie Qi 
    Abstract: This article studies rapid analysis and detection algorithms and preventive countermeasures for urban traffic accidents from an artificial intelligence perspective. It analyses the characteristics of artificial intelligence technology and uses its flexibility, comprehensiveness and practicality to build a set of rapid analysis and detection models for accidents. The experimental data show that the probability of an accident warning is highest when vehicle density is 50 vehicles/km, and detection accuracy is highest when vehicle density is around 20 vehicles/km. The experiments show that the artificial-intelligence-based urban traffic accident risk prediction model constructed in this paper can effectively predict possible and potential accidents.
    Keywords: artificial intelligence visual threshold; urban traffic accidents; particle filter lane detection; traffic accident black spots.

  • Deep learning-based comprehensive monitor for smart power station   Order a copy of this article
    by Yerong Zhong 
    Abstract: With the wider distribution of power substations, monitoring and controlling substations at large scale becomes more difficult by relying solely on human inspection. Smart monitoring systems are increasingly important to realise fast response, low-cost maintenance and autonomous control. In this paper, we develop a novel inspection system based on deep learning and edge computing techniques. First, on-site video acquisition is carried out by drones only when abnormal situations are detected, realising flexible and low-cost inspection. Using deep Q-learning, we design an efficient and reliable navigation algorithm that guides drones to the target location with minimum human intervention. To reduce response latency and support large-scale data processing, we take advantage of edge computing and build a high-performance edge system. Moreover, several strategies, from algorithms to hardware, are proposed to optimise the processing pipeline of the constructed edge computing system. The experiment and simulation results demonstrate the reliability and efficiency of the proposed system for autonomous substation monitoring.
    Keywords: UAV; deep reinforcement learning; power substation control; edge computing.
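The deep Q-learning navigation above is not specified in the abstract; the underlying Q-learning update it builds on can be illustrated with a tabular toy problem (a 1-D corridor standing in for drone way-points; the states, rewards and hyper-parameters are invented for illustration):

```python
import random

def q_learning_corridor(n=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a 1-D corridor: states 0..n-1, actions 0=left,
    1=right; reward 1 on reaching the terminal state n-1, else 0."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n)]
    for _ in range(episodes):
        s = 0
        while s != n - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: Q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n - 1 else 0.0
            # Q-learning update: Q(s,a) += alpha * (r + gamma * max Q(s',.) - Q(s,a))
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning_corridor()
policy = [max((0, 1), key=lambda x: Q[s][x]) for s in range(4)]
print(policy)  # greedy action per non-terminal state (1 = move right)
```

A deep Q-network replaces the table `Q` with a neural network over sensor observations, but the same temporal-difference target drives the learning.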

  • Power transmission line anomaly detection scheme based on CNN-transformer model   Order a copy of this article
    by Min Gao, Wenfei Zhang 
    Abstract: Anomalies in power transmission lines can cause failures of the power delivery system, bringing tremendous losses to the economy and industry. The wide distribution of the power delivery system imposes huge challenges on monitoring anomalies and responding to them in a short time. In this work, we introduce an autonomous anomaly detection system that exploits computer vision (CV) and Internet of Things (IoT) techniques. As a first step, we design and develop an IoT sensor that detects and reports physical conditions around the power tower. Once an anomaly occurs, on-site image acquisition is carried out by drones. To simplify the construction of image analysis pipelines while maintaining high accuracy, we adopt a state-of-the-art (SOTA) cascaded convolutional neural network (CNN)-transformer model. According to our experimental results, the CNN-transformer model provides promising performance for anomaly detection on power lines, achieving higher average precision while consuming almost the same computing resources. The proposed anomaly detection scheme is important for realising large-scale, autonomous anomaly detection for power lines.
    Keywords: IoT sensor; power automation; anomaly detection; computer vision; neural network.

  • High-performance polar decoder for wireless sensor networks   Order a copy of this article
    by Sufang Wang 
    Abstract: The Internet of Things (IoT) has enabled many advanced applications and become a hot topic in the development of next-generation networks. As an important part of the IoT, wireless sensor networks (WSN) must be designed for low power and high reliability, two crucial metrics. In this paper, we propose a high-performance decoder for polar codes to improve link reliability and transmission efficiency. We modify and optimise the original polar belief propagation decoding algorithm by investigating approximations and several types of factors. Moreover, systematic coding of polar codes is used to further improve the error-correction performance at the cost of an acceptable increase in encoding complexity. We present detailed experimental results to demonstrate the low complexity and high performance of the proposed decoder. The improved robustness and low complexity of communication reduce energy consumption and improve information transmission reliability, which is suitable for battery-operated, low-complexity WSN applications.
    Keywords: wireless sensor network; forward error correction; polar codes; internet of things.
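The belief propagation decoder itself is beyond a short sketch, but the polar transform that both the encoder and decoder in such schemes operate on is compact; a generic recursive implementation (not the paper's code) looks like this:

```python
def polar_encode(u):
    """Arikan's polar transform x = u F^(⊗n), F = [[1,0],[1,1]], in natural
    order (no bit reversal). `u` is a list of bits whose length is a power of 2."""
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    # x = ( encode(u1 XOR u2), encode(u2) ) for the two halves u1, u2
    left = polar_encode([u[i] ^ u[i + half] for i in range(half)])
    right = polar_encode(u[half:])
    return left + right

print(polar_encode([0, 0, 0, 1]))  # → [1, 1, 1, 1] (last row of F⊗F is all ones)
print(polar_encode([0, 1, 0, 0]))  # → [1, 1, 0, 0]
```

In a real polar code, the information bits occupy the most reliable positions of `u` while the rest are frozen to zero; the systematic coding mentioned in the abstract re-encodes so that the information bits appear directly in the codeword.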

  • A medical specialty outpatient clinic recommendation system based on text mining   Order a copy of this article
    by Qing-Chang Li, Xiao-Qi Ling, Hsiu-Sen Chiang, Kai-Jui Yang 
    Abstract: Many prospective medical patients have difficulty determining which type of outpatient specialist to consult for their complaint, and their resulting enquiries impose an additional administration cost on hospitals. This research collects illness control data from various hospitals to establish a database, fronted by a chatbot interface, for a medical specialty outpatient clinic recommendation system using speech recognition and text mining. The proposed system integrates speech recognition, the Jieba word segmentation algorithm and the conditional random field algorithm to retrieve keywords during the dialogue. Based on a C4.5 decision tree, the analysis results are used to provide clinical department referrals for the patient's reported symptoms. Results are tracked and feedback is sent to the cloud database to gradually correct errors and improve decision performance. The decision tree reduces the error rate of outpatient recommendations, freeing medical staff from having to make such referrals or redirect patients to the correct department, thus reducing medical staff workload and the amount of time patients spend in the hospital.
    Keywords: chatbot; text-mining; medical department; recommendation system; decision tree.

  • Neural network classifier based on genetic algorithm image segmentation of subject robot optimisation system   Order a copy of this article
    by Hongbo Ji, Mingyue Wang, Mingwei Sun, Qiang Liu 
    Abstract: A robot optimisation system is a complex, nonlinear, strongly coupled system with serious uncertainty. The quality of image segmentation has become an important index for judging the merits of many algorithms. The purpose of this study is to explore the effect of a neural network based on a genetic algorithm on image segmentation in the optimisation system of a classifier subject robot. The method uses the pretrained VGG16 Net model as the pretraining model within the genetic algorithm framework. The resolution of the training pictures is 640 × 480, the learning rate is 10^-5, the batch size is 1, and the number of iterations is set to 12,000; the trained model is then used to detect images. The results show that the average error of group B of the SNN trained by the BP algorithm is 11.62%, the SNN trained by SGA reduces this to 9.75%, and the genetic algorithm in this study reduces the error to 7.75%. Moreover, the genetic algorithm performs better in feature point extraction: its detection rate reaches 94.62%, higher than the 77.53% and 88.74% of the other methods, and its miss rate is only 3.04%, far lower than 12.49% and 7.36%. The conclusion is that our genetic algorithm has obvious advantages: small error, high efficiency and good applicability. The neural network based on a genetic algorithm has clear value for image segmentation technology.
    Keywords: genetic algorithm; neural network classifier; robot optimisation system; image segmentation; feature point extraction.
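The abstract uses a genetic algorithm to tune a VGG16-based segmenter; the genetic loop itself (selection, crossover, mutation, elitism) can be sketched on a toy objective. Everything here — population size, operators, the sphere objective — is illustrative, not the paper's setup:

```python
import random

def genetic_minimise(f, dim, pop=30, gens=60, seed=2):
    """Toy real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism (the best individual survives)."""
    rng = random.Random(seed)
    P = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        fit = [f(ind) for ind in P]

        def tournament():
            i, j = rng.randrange(pop), rng.randrange(pop)
            return P[i] if fit[i] < fit[j] else P[j]

        elite = min(P, key=f)
        children = [elite[:]]                 # elitism
        while len(children) < pop:
            a, b = tournament(), tournament()
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            child = [x + rng.gauss(0, 0.1) for x in child]       # Gaussian mutation
            children.append(child)
        P = children
    return min(P, key=f)

best = genetic_minimise(lambda p: sum(x * x for x in p), dim=2)
print(best)  # a point near the origin, the minimum of the sphere objective
```

When the "individual" encodes network hyper-parameters or weights, the same loop applies with fitness given by validation error instead of the sphere function.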

Special Issue on: Edge Computing Based Formal Modelling, Verification and Testing

  • Goal recognition method using intelligent analysis of basketball images under 6G internet of things technology   Order a copy of this article
    by Guangyu Qin, Yang Yu 
    Abstract: Goal recognition is an important technology for automated basketball skills testing. Accurate and timely judgement of basketball goals is very helpful during basketball games. Therefore, an intelligent basketball goal recognition and analysis system based on the 6G internet of things is proposed. First, data mining technology is used to explore the factors affecting the goal rate, and the basketball images are preprocessed. Second, the background-difference and three-frame difference methods are fused to detect the basketball during image preprocessing. Finally, basketball features are extracted by image calibration technology and compared with the parameters in the system to judge whether a goal is scored. The experimental results show that the frame rate of the proposed algorithm is 58 frames per second, which meets the real-time processing requirement of 25 frames per second. The false detection rate of the algorithm is low whether the shot is a layup from the left or the right, so the algorithm can be used in basketball games or examinations.
    Keywords: object detection; data mining; internet of things; Hough circle transform.
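The three-frame difference step mentioned in the abstract can be sketched on toy grey-scale frames (pure-Python lists stand in for image arrays; the threshold and frames are illustrative):

```python
def frame_diff3(f1, f2, f3, thresh=10):
    """Three-frame difference: a pixel of the middle frame is marked moving
    when it differs from BOTH the previous and the next frame by more than
    `thresh` (logical AND of the two absolute-difference masks)."""
    h, w = len(f2), len(f2[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d12 = abs(f2[y][x] - f1[y][x]) > thresh
            d23 = abs(f3[y][x] - f2[y][x]) > thresh
            mask[y][x] = 1 if (d12 and d23) else 0
    return mask

# A bright one-pixel "ball" moving left to right across a dark background.
f1 = [[0, 0, 0], [200, 0, 0], [0, 0, 0]]
f2 = [[0, 0, 0], [0, 200, 0], [0, 0, 0]]
f3 = [[0, 0, 0], [0, 0, 200], [0, 0, 0]]
print(frame_diff3(f1, f2, f3))  # → [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
```

The AND of the two difference masks localises the object at its middle-frame position; fusing this with a background-difference mask, as the paper does, suppresses ghosting left by the moving object.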

  • Optimizing Enterprise Financial Sharing Process using Cloud Computing and Big Data Approaches   Order a copy of this article
    by Yimin Deng 
    Abstract: With the rise and application of new information technology led by big data and cloud computing, traditional enterprise financial work faces pressure to transform and upgrade. To improve the efficiency of enterprise management and reduce the cost of financial processing, this study compares the differences, advantages and disadvantages of the traditional financial management mode and the sharing model, and analyses the basic situation of Company H and the operating status of its financial shared service centre. Company H's total assets exceed 10 billion Chinese yuan, and its annual construction capacity exceeds 40 billion Chinese yuan. As its development scale continues to increase, the drawbacks of its decentralised financial management model have gradually become prominent. For the problems of Company H's current financial management mode, such as high management cost, complex processes, low efficiency and personnel redundancy, a solution to optimise the financial shared service centre is proposed. Against the background of cloud computing, the reasons for constructing a financial shared service centre for Company H are analysed, the design scheme is optimised, and the information system is constructed. The application effect of the financial shared service centre supported by cloud computing is analysed on this basis. Moreover, the sharing model's management system and the corresponding personnel allocation are introduced through the financial process optimisation and re-engineering of the financial shared service centre. An example further verifies the effectiveness of the proposed model. The results show that the financial management mode improves financial processing efficiency and reduces financial processing costs. After the reform of the financial sharing service process, the cost of the financial sharing service is significantly reduced, and enterprise management efficiency is further improved. This exploration can provide a theoretical basis and research ideas for the financial and operational management of enterprises.
    Keywords: cloud computing; big data; enterprise finance; shared services; process optimisation; management efficiency.

  • Human resource analysis and management using mobile edge computing   Order a copy of this article
    by Changlin Wang 
    Abstract: The purpose is to process large-scale data, classify unclear information in the human resource management system, and improve the efficiency of enterprise human resource management. A new system for human resource analysis and management based on mobile edge computing (MEC) technology is proposed. First, MEC technology is analysed. Then, MEC technology is integrated into the human resource management system through network protocols, according to the distributed computing function of MEC and the requirements of human resources management. Subsequently, the B/S and C/S data models are combined. Finally, a new human resource analysis and management model is proposed based on MEC and the combination of the B/S and C/S data models. A series of analyses shows that the proposed system can manage human resources and business operations locally, saving transportation costs and increasing calculation speed, thus improving system performance. Hence, the performance of the proposed human resource management system is improved, and enterprises can reasonably allocate personnel and arrange work appropriately. The proposed system can improve the efficiency of human resource management, strengthen the innovation ability of enterprises, and improve enterprises' talent competitiveness. Meanwhile, human resource management becomes more scientific, convenient and efficient, realising the enterprise's strategy and increasing revenue. Therefore, the proposed MEC-based human resource management system is key to the construction of an innovative human resource management system and is of strategic significance for enhancing enterprise strength.
    Keywords: mobile edge computing; MEC platform; human resource system; human resource management.

  • The application of BIM technology in variation control of construction material of expressway asphalt pavement under machine vision   Order a copy of this article
    by Hanyao Sun 
    Abstract: In order to improve the working life and service quality of asphalt pavement, the material properties during the construction of asphalt pavement are investigated. First, the performance of construction material variation of asphalt pavement is studied and analysed. Second, the screening in the asphalt mixture is monitored using BIM (Building Information Modeling) technology. Meanwhile, the method of machine vision is used to study the screening composite gradation of aggregates. Finally, BIM technology is used to estimate the permanent deformation and fatigue life of the asphalt mixture layer. The results show that the gradation, oil-rock ratio, gradation segregation and temperature segregation of the asphalt mixture mixing process have different effects on the asphalt mixture performance. The use of BIM technology improves the accuracy of the composite gradation ratio, thereby improving the performance of the asphalt mixture layer. The prediction of the permanent deformation and fatigue life of the asphalt mixture layer can provide control of the construction quality of the asphalt pavement, and facilitate the maintenance of asphalt pavement by relevant personnel in the later period. The use of BIM technology greatly facilitates the study of material variability in asphalt pavement construction and is more conducive to relevant personnel to make reasonable construction decisions. Finally, the service quality and working life of asphalt pavement are improved.
    Keywords: BIM technology; machine vision; composite gradation ratio; asphalt mixture; material variation.

  • Big data acquisition of parallel battery pack state and energy management system using edge computing   Order a copy of this article
    by Xianglin Zhang 
    Abstract: The purpose of this work is to ensure big data acquisition of the parallel battery pack state and the safe, effective operation of the energy management system. Edge devices are combined with cloud computing to achieve a big data acquisition and processing model based on edge computing, which speeds up big data acquisition of the parallel battery pack state and avoids data over-fitting. The algorithm is optimised by combining the Lyapunov method with the distributed Alternating Direction Method of Multipliers (ADMM). The optimised edge computing improves the performance of the energy management system for the parallel battery pack state. The experimental results show that the two methods can effectively avoid over-fitting in data acquisition, and the distributed method can reduce the complexity of data processing and minimise the energy consumed by the energy management system. Big data acquisition of the parallel battery pack state based on the improved edge computing is faster, and battery energy management is more effective, which is of great significance for widening the application of parallel battery packs.
    Keywords: edge computing; parallel battery; Lyapunov method; big data.

  • The use of path planning algorithm using edge computing technology in college football training   Order a copy of this article
    by Jun Zhou, Yingying Zheng 
    Abstract: The purpose of this work is to study the role and influence of football robots on actual football training, given the current popularity of football robot competitions. The theoretical basis and key technologies of the football robot system are studied in depth. The design of the wireless communication subsystem, vision subsystem, decision subsystem and fuzzy PID (proportional-integral-derivative) control system of the football robot is completed. The application of edge computing in the wireless communication subsystem is discussed. The immune algorithm (IA), particle swarm optimisation (PSO) algorithm and immune PSO (IPSO) algorithm are compared and analysed, and the IPSO algorithm is used for path planning of football robots. In the comparison of the three algorithms, the convergence stability of the IPSO algorithm is significantly higher than that of the IA and PSO algorithms. This research can arouse the common concern and thinking of scientific and technological circles and traditional football circles. It is a new field and a challenge to traditional football, and it plays an important role in the research and development of intelligent robot technology and traditional football training, especially for college football teams.
    Keywords: edge computing; path planning algorithm; football vision system; football training.
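The IA/PSO/IPSO comparison above rests on the basic particle swarm update; a minimal PSO minimising a toy objective shows the velocity/position rule that IPSO augments with immune operators (the parameters and the sphere objective are illustrative, not the paper's planner):

```python
import random

def pso(f, dim, n=20, iters=100, seed=1):
    """Minimise f with a basic particle swarm: inertia w, cognitive c1,
    social c2; each particle tracks a personal best, the swarm a global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:          # update personal best
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:         # update global best
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best, val = pso(lambda p: sum(x * x for x in p), dim=2)
print(f"best value: {val:.2e}")
```

For path planning, `f` would score a candidate path (length plus obstacle penalties) instead of the sphere function; the immune step in IPSO injects diversity to avoid the premature convergence plain PSO suffers from.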

  • Tourism scene monitoring using mobile edge computing and business-driven action learning   Order a copy of this article
    by Hong Yu 
    Abstract: In order to improve the environment of domestic tourism scenes, strengthen monitoring means and capability, and improve the ability to respond to emergencies, the monitoring of domestic tourism scenes under big data is discussed based on MEC (Mobile Edge Computing) and BDAL (Business-Driven Action Learning). First, the concept and construction of edge computing are analysed, covering the monitoring of tourist numbers, tourist density and the security of tourist scenes; these monitoring data are processed in time to produce a reasonable tourist diversion scheme. Second, BDAL is analysed to explore its data analysis and processing functions. Finally, a tourism scene monitoring model is constructed. The study of the tourism scene monitoring system reveals that edge computing can improve monitoring ability and efficiency, with an error rate below 3%; the SVM (Support Vector Machine) model in BDAL can effectively improve monitoring accuracy, with an accuracy rate as high as 96%. The study provides a reference for improving the existing tourism monitoring system, expands the application field of edge computing, and enables tourism scene monitoring to meet the requirements of the big data era, contributing new operable schemes and means for the sustainable development of tourism scenes.
    Keywords: edge computing; business-driven action learning; data analysis; tourism management; scene monitoring.

  • Application of intelligent grammar error correction system following deep learning algorithm in English teaching   Order a copy of this article
    by Yang Zhang 
    Abstract: The purpose is to explore the application of deep learning technology to English grammar error correction. An English grammar error correction model based on seq2seq (sequence-to-sequence) learning is established, and its ability is then improved for application in English teaching. In addition, edge computing, which has advanced in recent years, is used to improve the training efficiency and error-correction ability of the model. The TensorFlow framework is used to implement the model, which verifies the effectiveness of the error-correction algorithm and greatly improves the accuracy of the error-correction system. The results show that the application of artificial intelligence to grammar error correction has gradually attracted the attention of researchers. The edge computing model performs better than the plain seq2seq model on a variety of data types, and the accuracy of spelling error correction is improved by nearly 44.21%. The technology can not only effectively reduce the workload of teachers but also help students' autonomous learning; in short, it improves work efficiency across many sectors of society. The attention mechanism is introduced into the seq2seq-based deep learning model, which ensures the accuracy of grammar error correction and improves the efficiency of the model. Edge computing reduces the pressure on the central processor, thereby shortening system delay, and the feedback recommendation module helps to discover and remedy the system's deficiencies in time. This study provides a useful reference for English grammar teaching and promotes the application of intelligent grammar error correction systems in English teaching.
    Keywords: deep learning; intelligent error correction; English grammar; English teaching; edge computing.
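The attention mechanism added to the seq2seq model above is, at its core, a softmax-weighted average of encoder states; a minimal scaled dot-product sketch (toy vectors, not the paper's model) makes the computation concrete:

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract the max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(query, keys, values):
    """Scaled dot-product attention for one query: a softmax(q·k / sqrt(d))
    weighted average of the value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    w = softmax(scores)
    return [sum(wi * v[j] for wi, v in zip(w, values)) for j in range(len(values[0]))]

# The query matches the first key, so the first value dominates the output.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])
print(out)
```

In the seq2seq corrector, the query is the decoder state and the keys/values are encoder states, letting each output token focus on the source tokens most relevant to the correction.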

  • Design of intelligent classroom teaching scheme using artificial intelligence   Order a copy of this article
    by Yingying Su 
    Abstract: The study aims to improve the efficiency and quality of teaching and to improve students' interest in learning and their performance. A design scheme for an intelligent classroom is proposed based on artificial intelligence technology, divided into an online part for personalised learning recommendation and an offline part for classroom quality evaluation. First, a multi-layer back propagation (BP) neural network combined with a genetic algorithm and an actual entropy model is designed to predict students' performance and analyse their learning behaviour. Second, in the offline system, the YOLOv3 (you only look once) model is used to design a student concentration detection model. Finally, the online and offline systems are combined to realise the functions of learning behaviour feature extraction, performance prediction, learning rule analysis, and personalised learning recommendation. The experimental results show that students whose learning behaviour is strongly regular and periodic obtain better results in the corresponding examinations. When the object of students' attention is the blackboard content, the accuracy rate of YOLOv3 is 93.35%, the recall rate is 88.22%, and the F1 score is 90.23%, which proves that the network can judge whether students' attention is focused and help teachers improve their teaching methods and teaching state. The proposed system not only effectively improves the teaching efficiency of teachers and the learning efficiency of students, but also provides a reference for applying artificial intelligence technology to the construction of intelligent classrooms.
    Keywords: artificial intelligence; neural network; intelligent classroom; teaching method design.

  • The Use of Interactive Data Visualization Technology in Cultural Digital Heritage Display Using Edge Computing   Order a copy of this article
    by Xinjie Zhao 
    Abstract: The purpose is to solve the survival dilemma of intangible cultural heritage caused by economic growth and changes in the social and ecological environment in recent years, so as to protect and pass on intangible cultural heritage scientifically. Taking Chinese folk dance as an example, the display and inheritance of cultural heritage are studied. Virtual reality, interactive data visualisation, and edge computing are used to analyse the characteristics of digital folk dance activities. Then, the design of realistic face generation and body-interaction structures for folk dance inheritors is completed. In the visual analysis and inheritance of folk dance, considering that face matching is an important step, a new face matching algorithm based on edge computing is proposed. The simulation results show that the accuracy of the new face matching algorithm is as high as 98%, and the average matching time is reduced to 1.79 s. The optimisation of the face matching algorithm is of great value for realising the visualisation and inheritance of traditional folk dance cultural heritage. This exploration promotes the development and application of digital technology for intangible cultural heritage in China, and is of great significance to the innovation and comprehensive protection of intangible cultural heritage.
    Keywords: cultural heritage digitisation; data visualisation; edge computing; realistic 3D digitization.

  • Internet of things and its applications to marathon events: from the perspective of sport tourism and urban development   Order a copy of this article
    by Eusebio C. Leou, Rachel M. Ruan, Runze Yu 
    Abstract: As a consumer-oriented technology for individuals, the Internet of Things (IoT) is a new frontier and one of the most rapidly growing areas in healthcare, sports and wellness, as well as sports events and recreational activities. Although IoT technology has developed widely, its application is still at an early stage and has rarely been investigated. Beyond the application of IoT technology in individual equipment for healthcare and wellness, there is considerable room for its application in urban sports events and recreational activities. From the perspective of sport tourism, this study describes the application of IoT technology in urban sports events. The authors adopted urban marathon events as an example and illustrated how the IoT can be accommodated in sports events, how it can improve participants' sport involvement and satisfaction, and its impact on participants' willingness to revisit. Using the approach of interpretative phenomenological analysis (IPA), the authors observed marathon event participants by understanding the innermost deliberation of the 'lived experience' stories of the research participants. As a result, the application of IoT in marathon events was indeed a stimulus that enhanced the satisfaction and sport involvement of participants. Moreover, marathon participants are willing to revisit urban sports events in the coming year because IoT technology is an advantageous element for them.
    Keywords: internet of things; marathon events; interpretative phenomenological analysis; satisfaction; revisit.

  • Assessing information security performance of enterprise internal financial sharing in cloud computing environment using analytic hierarchy process   Order a copy of this article
    by Xiuying Zhou, Huaqun Weng 
    Abstract: The present work aims to improve the efficiency of enterprises by solving a series of problems such as high financial costs, low financial management efficiency, and structural redundancy. A maturity model subdivides four indicators: information security strategy, information security technology, information security organisation, and information security operation. An information security evaluation system is constructed for financial sharing. The Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE) are combined to evaluate financial sharing capabilities in protecting information security, and the proposed model is validated using example data. The results demonstrate that cloud computing-based financial sharing management can effectively reduce financial costs and improve the financial management efficiency of enterprises. The AHP-based model for evaluating information security performance can effectively protect financial information and guarantee enterprises' financial information security. The experiments provide significant, practical references for promoting the development of large enterprises.
    Keywords: cloud computing; financial sharing; information security.
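
    AHP derives priority weights for the four indicators from a pairwise comparison matrix. A minimal sketch using the standard column-mean approximation; the comparison values below are hypothetical, not taken from the paper:

```python
def ahp_weights(matrix):
    """Approximate AHP priority weights: normalise each column of the
    pairwise comparison matrix, then average each row."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    norm = [[matrix[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(norm[i]) / n for i in range(n)]

# Hypothetical pairwise comparisons among the four indicators:
# strategy, technology, organisation, operation
A = [
    [1,     2,     3,   4],
    [1/2,   1,     2,   3],
    [1/3,   1/2,   1,   2],
    [1/4,   1/3,   1/2, 1],
]
w = ahp_weights(A)
```

    The weights sum to one and rank the indicators in the order implied by the pairwise judgements; a full AHP would also check the consistency ratio of the matrix.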

  • Data acquisition and management of wind farms using edge computing   Order a copy of this article
    by Xin Cao, Zhiwei Wu, Xiaoliang Qin, Fang Ye, Yifei Xu 
    Abstract: The purpose is to deal with the increasingly serious energy crisis and prominent environmental problems of recent years, to collect and manage the power data of wind farms, and to ensure the normal transmission of data in the wide, weakly covered signal areas of wind farms. First, the principle and advantages of edge computing technology are described. Then, the application of this technology to the data acquisition system of wind farms is proposed, enabling data acquisition and management to work normally. Finally, the application of edge computing technology in wind farm data acquisition and management is simulated, and the results are compared. The results show that applying edge computing to wind farm data acquisition and management can alleviate the difficulties of data transmission caused by the geographical situation and weak signal coverage of wind farms. Moreover, edge computing technology makes data acquisition and management more efficient and accurate. This exploration provides a new direction for the application of emerging computing technology in the power Internet of Things and sets an example for future related research.
    Keywords: edge computing; power internet of things; wind farms; data acquisition and management.

  • Exploring the role of edge computing on the legal effect of secure collaborative download protocol   Order a copy of this article
    by Peifang Zhong 
    Abstract: To make people's car-driving experience safer and more comfortable, an intrusion recognition model architecture based on the K-DNN (K-nearest neighbour classification deep neural network) is proposed to classify and identify various network intrusion factors, thereby strengthening the security level of the IoV (Internet of Vehicles) and alleviating the scarcity of IoV hardware resources. Then, a secure collaborative download system based on edge computing is proposed, which can accurately and promptly collect information on roads, vehicles, and nearby infrastructure during driving and enable people to download the required content safely and quickly. In the proposed system, the vehicle encrypts the content download request and sends it to the RSU (roadside unit) and the content server, respectively. The content server sends the content corresponding to the download request to the vehicle through the RSU. Concurrently, after a number of ciphertext requests have been collected, the RSU analyses the high-frequency access requests and caches the corresponding content on the edge computing vehicle. Afterwards, data download requests can be served directly from nearby edge computing vehicles. Security analysis shows that a single vehicle can download the required content privately and safely without suspicious attacks. Network analysis of packet loss rate and transmission delay shows that collaborative download by edge computing vehicles achieves low packet loss and low transmission delay. Still, China's current laws and regulations on the legal effect of collaborative download documents are imperfect, so the legal effect of a collaborative download protocol based on edge computing urgently needs to be resolved.
    Keywords: edge computing; collaborative download; legal effect; internet of vehicles; roadside unit.

  • Application of artificial intelligence in 6G internet of things communication in interactive installation art   Order a copy of this article
    by Xiaolong Wang, Ling Cai 
    Abstract: The study aims to further analyse the application of intelligent Internet of Things (IoT) technology and artificial intelligence (AI) technology in the art field and to promote the intelligent development of this field. The study explores the application of AI technology in art management and art exhibition in the IoT era, designs an intelligent management system for IoT art exhibits, and discusses the practical application effect of the system. It is found that the art management system performs well in practice, with a shortest response time of only 0.379 seconds, which meets the practical needs of museums. Meanwhile, a multi-touch system based on back-propagation neural network (BPNN) technology is designed, and the BPNN is optimised. The results suggest that the multi-touch system achieves a gesture recognition accuracy of up to 99.4%, which is better than hidden Markov models (HMM). The results can provide theoretical support and a reference for the development and research of intelligent IoT technology in the field of interactive art.
    Keywords: artificial intelligence; data mining; interactive installation art.

  • Application of Edge Computing to the Design and Planning of Urban Sculpture Space   Order a copy of this article
    by Xiaohua Huang 
    Abstract: The purpose of this study is to support the construction of urban culture and reduce the cost of urban sculpture space design. First, edge computing technology is combined with urban sculpture space design and planning. Second, the services, architecture, advantages, and characteristics of urban sculpture, as well as the key difficulties in its construction, are briefly discussed, and a hierarchical architecture for urban sculpture space based on edge computing is put forward. Then, cloud computing and edge computing are combined to analyse the specific functions of urban sculpture, and on this basis the architecture platform of a security monitoring system for urban sculpture is constructed. The energy consumption of the machines within the monitoring range is modelled mathematically; an optimal path planning algorithm based on reinforcement learning is proposed and compared with previous algorithms in terms of energy consumption; and the actual energy consumption required is predicted and evaluated, with a specific monitoring system established. The results show that when seven monitoring devices cover fewer than 800 detection points, energy consumption grows linearly; when they cover more than 800 detection points, energy consumption stabilises between 10,000 and 12,000. That is, about seven monitoring devices suffice for about 800 monitoring points. When the number of detection points is fixed and the number of monitoring devices in the cell increases, the total energy consumption is reduced. The optimal path planning algorithm based on reinforcement learning can obtain an approximately optimal solution. The application of edge computing technology to the design of urban sculpture can optimise the function of urban sculpture and make it serve human beings better.
    Keywords: edge computing; urban sculpture; space design; optimisation strategies.

  • Construction and implementation of music recommendation model using deep learning artificial neural network and mobile edge computing   Order a copy of this article
    by Juan Xia 
    Abstract: The purpose is to better develop China's music industry, facilitate users' online queries of music works, and encourage the creation of excellent music works. The music recommendation system is analysed, and a new music recommendation system is constructed by combining hybrid deep learning (DL) artificial neural network (ANN) and mobile edge computing (MEC) technologies. First, the principle of MEC is analysed. Then, the combination of DLANN (deep learning artificial neural network) technology and MEC technology is discussed. Afterwards, DLANN and MEC are fused to implement the music recommendation model, and the proposed model is evaluated experimentally. The results show that combining DLANN and MEC technologies can increase the DL efficiency of the computer, the storage capacity, and the overall efficiency of the servers, which proves the feasibility of the proposed music recommendation system. User satisfaction with the proposed system exceeds that of the mainstream music recommendation algorithms on the market. Thus, this research can provide a reference for the future improvement of music recommendation algorithms and is of great significance.
    Keywords: deep learning; artificial neural network; mobile edge computing; music recommendation model.

  • Application of crack detection algorithm using convolutional neural network in concrete pavement construction   Order a copy of this article
    by Wuqiang Wei, Xiaoyan Xu 
    Abstract: Concrete pavement crack detection is studied based on the application principles of CNNs (convolutional neural networks) in large-scale image recognition and processing. Firstly, the principles of feature extraction and selection for concrete pavement crack sample data are introduced, and the sample image processing (IP) method is expounded. Secondly, the preprocessing of concrete pavement crack images is proposed. By establishing the functional relationship between the CR (crack rate) and the CRI (crack rate index), the samples are trained, and a crack detection model based on a CNN is implemented. The CNN model is evaluated by comparing its accuracy and loss rate with those of the traditional AlexNet model when processing different numbers of images. Afterwards, a concrete pavement crack detection platform is developed with cross-platform Python, OpenCV, and the QT framework, combining deep learning (DL), graphical interface development, and image processing. A concrete pavement health evaluation method is then proposed; regression analysis shows that this method can reasonably evaluate concrete pavement cracking.
    Keywords: convolutional neural network; crack detection; Gaussian filtering algorithm; Canny algorithm; concrete pavement.
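
    The CR referenced in the abstract is the fraction of crack pixels in a binarised pavement image. The paper's CR-CRI functional relationship is not given here, so the index mapping below is a hypothetical linear placeholder, included only to show the shape of such a calculation:

```python
def crack_rate(binary_image):
    """Crack rate: fraction of pixels flagged as crack (value 1)
    in a binarised pavement image (list of rows of 0/1 values)."""
    total = sum(len(row) for row in binary_image)
    crack = sum(sum(row) for row in binary_image)
    return crack / total

def crack_rate_index(cr, cr_max=0.05):
    """Hypothetical linear mapping of crack rate to a 0-100 index:
    100 means no cracking; 0 means cr >= cr_max."""
    return max(0.0, 100.0 * (1.0 - cr / cr_max))

# Toy 3x4 binarised patch with a vertical crack in column 2
img = [
    [0, 0, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
]
cr = crack_rate(img)   # 4 crack pixels out of 12
```

    In practice the binary image would come from preprocessing steps such as Gaussian filtering and Canny edge detection, as the keywords suggest.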

  • The use of optimised SVM method in human abnormal behaviour detection   Order a copy of this article
    by Dongxing Gao, Helong Yu 
    Abstract: The purpose of this work is to improve the accuracy of human behaviour detection methods. After video surveillance is used to track and detect human behaviours, an improved SVM (support vector machine) behaviour detection method based on dynamic and static characteristics is studied. In behaviour detection, various background modelling methods are compared, and then the background difference and dual-mode background modelling methods are used to optimise the original background modelling method. In terms of video frames, the average background method is used to model the static background, and optical flow is used to model the dynamic background. For target tracking, a multi-feature particle filter is used for nonlinear, non-Gaussian target tracking. The results show that integrating the dynamic and static characteristics of human behaviour can express behavioural characteristics more comprehensively.
    Keywords: abnormal behaviour detection; support vector machine; target detection; multi-feature fusion.

Special Issue on: Smart Living Technology and AIoT Innovations

  • The use of intelligent remote monitoring system in ship energy efficiency management based on internet of things   Order a copy of this article
    by Jianhua Deng, Songyan Mai, Zeng Ji, He Zhang, Bowen Jin, Longyu Bu, Chaochun Huang, Hui Jiang 
    Abstract: The aim of this work is to carry out remote monitoring of ships and realise intelligent recording of ship energy efficiency. Big data technology, 6G communication technology, and embedded technology are used to build a remote monitoring system for ship operation energy efficiency at both the hardware and software levels. The specific research work covers three aspects. First, the existing ship operation energy efficiency indicators are analysed, and the Energy Efficiency Operational Indicator (EEOI), which offers the most balanced performance, is chosen as the evaluation indicator of ship operation energy efficiency. Then, a requirement analysis is carried out, the overall framework is designed according to the analysed requirements, and the hardware module selection and peripheral circuit design are completed based on the framework parameters. Finally, the software part of the system is built. The test results prove that the functionality and stability of the platform meet actual needs. The system is then tested on board for 20 days. The results show that, over 20 days of data collection, the experimental ship was in the range of level 1 operation energy efficiency only 15 times, in the range of level 2 operation energy efficiency 40 times, and in the range of level 3 operation energy efficiency 5 times. According to these classification results, ship handling is continuously optimised, and the ship's energy efficiency level is continuously improved. The proposed system has reference value for helping ships in China achieve energy saving and emission reduction. Meanwhile, to a certain extent, it addresses the lack of effective monitoring means and management methods for the operation energy efficiency of inland ships in China.
    Keywords: big data; ship energy efficiency management; ship remote monitoring; internet of things.
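
    The EEOI used as the evaluation indicator is defined by the IMO (MEPC.1/Circ.684) as CO2 emitted per unit of transport work. A minimal sketch with hypothetical voyage data; the carbon factor 3.114 t CO2 per tonne of fuel corresponds to heavy fuel oil:

```python
def eeoi(legs, carbon_factor=3.114):
    """Energy Efficiency Operational Indicator: grams of CO2 per
    tonne-nautical-mile over a set of voyage legs.
    Each leg: (fuel_consumed_tonnes, cargo_tonnes, distance_nm)."""
    co2 = sum(fuel * carbon_factor for fuel, _, _ in legs)          # tonnes CO2
    transport_work = sum(cargo * dist for _, cargo, dist in legs)   # tonne-nm
    return co2 * 1e6 / transport_work                               # g CO2 / t-nm

# Hypothetical two-leg voyage
legs = [(10.0, 5000.0, 300.0), (8.0, 4000.0, 250.0)]
value = eeoi(legs)
```

    A lower EEOI means less CO2 per unit of cargo moved, which is why the abstract's level classification can drive continuous optimisation of ship handling.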

  • Analysis of electronic bill authentication and security storage performance using machine learning algorithm   Order a copy of this article
    by Jingcheng Tian, Lingbo Yang, Yutao Zhang, Wei Qian 
    Abstract: The study aims to ensure the security and authentication efficiency of bill images; the encryption and decryption methods and security protection of electronic bills are studied experimentally. First, to address the low security performance of traditional electronic bills, a method of embedding a watermark into a binary image with edge information is proposed. Second, to address the weak compression robustness of electronic bill images, chaotic encryption of the digital watermark through a wavelet coefficient matrix algorithm combined with a binary sequence is proposed. Then, machine learning algorithms and image feature matching methods are used to apply multiple watermark encryption and anti-counterfeiting authentication to bills, both visible and invisible. Finally, a deep learning algorithm combined with a convolution algorithm is used to assess the quality of electronic bill watermark images. The results show that embedding a watermark with edge information effectively improves the confidentiality of electronic bills, and that chaotic encryption of digital watermarks by the wavelet coefficient matrix algorithm combined with a binary sequence improves the anti-compression ability of digital watermarks. The multi-watermark encryption method enhances the tamper-proofing of electronic bills and improves their security, and the deep convolution algorithm improves the security and efficiency of electronic bill processing. Overall, the multi-watermark encryption method and the bill feature extraction algorithm used in the experiments can greatly improve the security of electronic bills and the efficiency of their authentication.
    Keywords: electronic bills; machine learning algorithms; deep convolution algorithm; multiple watermark encryption; binary image; chaotic encryption.
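
    Chaotic encryption of a binary watermark is commonly done by XOR-ing it with a keystream generated from the logistic map. A minimal sketch of that idea; the paper's wavelet-domain step is omitted, and the map parameters below are illustrative, not the authors' values:

```python
def logistic_keystream(x0, r, n):
    """Binary keystream from the logistic map x -> r*x*(1-x):
    each iterate is thresholded at 0.5 to produce one bit."""
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

def xor_encrypt(watermark_bits, x0=0.3, r=3.99):
    """Chaotically encrypt (or decrypt -- XOR is its own inverse)
    a binary watermark sequence with the keyed keystream."""
    ks = logistic_keystream(x0, r, len(watermark_bits))
    return [b ^ k for b, k in zip(watermark_bits, ks)]

wm = [1, 0, 1, 1, 0, 0, 1, 0]
cipher = xor_encrypt(wm)
plain = xor_encrypt(cipher)   # same (x0, r) key recovers the watermark
```

    The key is the pair (x0, r): tiny changes to either produce a completely different keystream, which is what gives the scheme its sensitivity to the secret key.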

  • Comparison of static and dynamic characteristics of electromagnetic bearing using machine learning algorithm   Order a copy of this article
    by Xiangxi Du, Yanhua Sun 
    Abstract: To improve the performance and service life of bearings and the overall performance of mechanical systems, the characteristics of the electromagnetic bearing and the elastic foil gas bearing are analysed based on machine learning algorithms. First, the bearing capacity of the electromagnetic bearing is analysed, including its nonlinear stiffness, the influence of the air gap on the electromagnetic force, the determination of the optimal linear range, and the improvement of PID control based on a support vector machine. At the same time, the characteristics of the elastic foil gas bearing are analysed, including the aeroelastic coupling calculation process for its static characteristics and the calculation of its dynamic stiffness and damping coefficients. The results show that as the current gradually increases, the radial electromagnetic force of the electromagnetic bearing also increases, and at an increasing rate; when the current is constant, the electromagnetic force decreases as the air gap increases. When the frequency is low, the response curve of the electromagnetic force fluctuates greatly with changes in the control square wave; as the frequency increases, the response curve tends towards a straight line, and when the current frequency of the radial bearing coil exceeds 1000 Hz, the amplitude fluctuation of the electromagnetic force becomes stable. When the eccentricity is zero, the force between node 1 of the elastic foil gas bearing and its adjacent nodes is non-zero, while the force at the other nodes is zero. As the eccentricity increases, the force at node 1 first increases and then decreases; the stiffness coefficient and main damping coefficient in the y-direction increase with foil thickness, while the stiffness coefficient and cross stiffness coefficient in the x-direction remain close to zero. When the eccentricity is 1.1, the main stiffness coefficient increases with foil thickness, and the main damping coefficient first decreases and then increases; when the foil thickness is 0.2 mm, the main damping coefficient is largest.
    Keywords: aeroelastic coupling; machine learning; support vector machine; electromagnetic bearing; elastic foil gas bearing.

  • Design and planning of urban ecological landscape using machine learning   Order a copy of this article
    by Yajuan Zhang, Tongtong Zhang 
    Abstract: The purpose of this work is to improve the air quality of the urban ecological environment and increase the green rate of the urban garden ecological landscape. Machine learning (ML) algorithms are used to analyse and calculate the dust retention outcomes of different plants. The dust retention capabilities and spectral characteristics of several plants are researched; Pearson correlation analysis and different ML algorithms are adopted to analyse their dust retention effects. Experiments obtain the dust retention capabilities and characteristic spectral data of different plant leaves. The results demonstrate a significant correlation between plants and dust retention rate. Different plants require different bands and approaches for constructing high-precision inversion models. Red sandalwood has 150 inversion bands, and the optimal inversion algorithm is random forest (RF). Zhu jiao has 74 inversion bands, and the optimal inversion algorithm is the support vector machine (SVM). Ficus microcarpa has 80 inversion bands, and the optimal inversion algorithms are SVM and RF, with little difference between them. ML algorithms provide better accuracy than correlation analysis and are more suitable for calculating plants' dust retention capabilities. To sum up, ML algorithms can calculate the dust retention amounts of plants to better plan and design regional ecological landscapes, thereby reducing dust pollution in the air and improving urban air quality.
    Keywords: dust retention effect; spectral characteristics; correlation analysis method; machine learning algorithm.
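
    The Pearson correlation between a spectral band and measured dust retention can be computed as below. The reflectance and dust values are hypothetical illustrations, not the paper's data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical leaf reflectance in one band vs measured dust retention (g/m^2):
# dust deposits typically lower leaf reflectance, so expect a negative r
reflectance = [0.42, 0.38, 0.35, 0.30, 0.27]
dust = [1.1, 1.6, 2.0, 2.6, 3.1]
r = pearson(reflectance, dust)
```

    Bands whose |r| is high are candidates for the inversion models; the ML algorithms (RF, SVM) then fit the nonlinear band-to-retention mapping that plain correlation cannot capture.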

  • Exploration of new community fitness mode using intelligent life technology and AIoT   Order a copy of this article
    by Li Cao, Chongjiang Zhan 
    Abstract: The purpose of this work is to better mine the fitness motion data of intelligent wearable devices and promote the development of the new community fitness mode. First, the defects of the traditional fitness motion recognition system are analysed. Then, software engineering technology and deep learning (DL) technology are used to build a multi-layer fitness motion monitoring system. Finally, the data for running, riding, race walking, and rope skipping in the PAMAP2 dataset are used for system evaluation. The results show that the proposed motion data monitoring system achieves an average accuracy of 97.622%, an average precision of 96.322%, and a recall rate of 96.021% for fitness data recognition. The experimental results suggest that intelligent wearable devices equipped with the proposed monitoring system can effectively mine wearers' motion data and promote the development of the new community fitness mode.
    Keywords: AIoT; motion recognition; intelligent life technology; intelligent wearable device.

  • Exploring the role of sports app in (campus fitness) intelligent solutions using data fusion algorithm and Internet of Things   Order a copy of this article
    by Chao Zhu 
    Abstract: The purpose is to study how a multi-sensor IoT (Internet of Things) data fusion algorithm performs its calculations in data fusion systems and improves a system's fusion efficiency. An improved WLS (weighted least-squares) algorithm is proposed for IoT data fusion, and how it processes massive data in a multi-sensor system is studied. Accordingly, a multi-sensor system for sports fitness is designed based on the data fusion algorithm. First, the purposes of and demand for a sports app are analysed to understand the problems with, and necessary improvements to, such an app. Then, the implementation of the IoT and the classification of IoT data fusion algorithms are explored. The WLS method is selected through comparison, and its implementation process and data processing are analysed and explained. Finally, the sports fitness system based on the IoT data fusion algorithm is designed and analysed. The results show that the wireless communication of the multi-sensor data fusion system is feasible and reliable. The variances calculated through the WLS method and the AM (arithmetic mean) method in data fusion are compared: the former is about one-thousandth of the latter, indicating that data fusion based on WLS is advantageous over traditional data fusion.
    Keywords: internet of things; multi-sensors; data fusion; weighted least-squares; intelligent solutions.
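
    The core WLS fusion idea is to weight each sensor reading by the inverse of its noise variance, so the fused estimate has a smaller variance than any single sensor. A minimal sketch with hypothetical readings:

```python
def wls_fuse(readings, variances):
    """Weighted least-squares fusion of independent sensor readings of
    the same quantity: weight each reading by 1 / (its noise variance)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    estimate = sum(w * x for w, x in zip(weights, readings)) / total
    fused_variance = 1.0 / total   # always below the smallest input variance
    return estimate, fused_variance

# Hypothetical: three sensors measuring the same quantity with
# different noise levels
readings = [10.2, 9.8, 10.5]
variances = [0.4, 0.1, 0.9]
est, var = wls_fuse(readings, variances)
```

    The most precise sensor (variance 0.1) dominates the estimate, and the fused variance is lower than 0.1, which is the property behind the abstract's comparison of WLS against the arithmetic-mean method.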

  • Construction method of knowledge graph under machine learning   Order a copy of this article
    by Peifu Han, Junjun Guo, Hua Lai, Qianli Song 
    Abstract: With the growth of economic ties and trade between China and Southeast Asian countries, cultural exchanges have intensified, and convenient language communication constitutes an important part of the cooperation channels among different countries. To explore named entity recognition for knowledge graph construction, Vietnamese grammar and word formation are analysed in depth in this study, aiming to solve the low recognition precision and low computational efficiency of Vietnamese named entity recognition. Firstly, the Vietnamese person names, location names, and institution names in a Vietnamese corpus are collected statistically to build a corresponding entity database to assist Vietnamese named entity recognition. Then, a Vietnamese named entity recognition model is proposed based on a residual dense block convolutional neural network.
    Keywords: Vietnamese grammar; named entity recognition; residual network; knowledge graph.

  • Implementation of fitness and health management system using deep learning neural network and Internet of Things technology   Order a copy of this article
    by Xiaojun Zhang 
    Abstract: People's health-indicating data are analysed, and a fitness and health management system is established using deep learning and Internet of Things (IoT) technology, to manage fitness and health more scientifically. The proposed system aims to enhance people's health awareness and prevent diseases caused by sub-health. Meanwhile, the system also tries to provide an all-around fitness and health management plan, covering the understanding, maintenance, and improvement of health status, thus protecting the overall health of the public. Data on people's fitness and body indicators are collected by IoT technology. A neural network algorithm from deep learning is used to analyse and calculate these data: the various types of detected data are integrated and classified, characteristic data are extracted for modelling and analysis, and the results are input into the constructed backpropagation (BP) neural network model. Finally, Dempster-Shafer (DS) evidence theory is used to optimise the analysis results. Tests on sample data show that the established composite model is more accurate than a health management service built on a traditional single model, so the fitness and health status of users can be accurately evaluated and managed. Through the proposed intelligent fitness and health management system composed of IoT devices, users can attain better health through self-monitoring, self-control, self-discovery, self-analysis, and self-search.
    Keywords: health management; backpropagation neural network; Dempster-Shafer evidence theory; composite prediction model.
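
    DS evidence theory combines the mass functions of independent sources while renormalising away conflicting mass. A minimal sketch of Dempster's rule of combination, with hypothetical health assessments rather than the paper's actual outputs:

```python
def ds_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets; conflicting (empty-intersection) mass is
    removed and the remainder renormalised."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict   # normalisation constant
    return {s: v / k for s, v in combined.items()}

# Hypothetical: two models assess whether a user is healthy or sub-healthy
H, S = frozenset({'healthy'}), frozenset({'sub-healthy'})
m1 = {H: 0.7, S: 0.3}
m2 = {H: 0.6, S: 0.4}
fused = ds_combine(m1, m2)
```

    When both sources lean the same way, combination sharpens the verdict: the fused belief in 'healthy' here exceeds either source's individual belief, which is how DS fusion refines the BP network's raw outputs.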

  • Application of de-noising automatic coding method in freight volume prediction under intelligent logistics   Order a copy of this article
    by Zheng Tang 
    Abstract: With the advent of the information age, many problems have emerged in cargo transportation, such as traffic jams, delayed information transmission, and low freight efficiency. The purpose of this study is to make freight transportation better adapt to intelligent logistics and to study the application of de-noising auto-coding networks based on deep learning in freight volume prediction. The de-noising auto-coding network and the stacked de-noising auto-coding network are discussed, and a freight volume prediction model based on the stacked de-noising auto-coding network is constructed. The de-noising auto-coding prediction method is compared with the traditional prediction method and deep-learning prediction methods of the same kind. The comparative analysis shows that the average error of the stacked de-noising auto-coding prediction method is 5.96% for 2019 and 2020, smaller than that of traditional prediction methods.
    Keywords: intelligent logistics; deep learning; de-noising auto-coding; cargo volume prediction.
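The building block of the stacked network the abstract describes is a single de-noising autoencoder: corrupt the input, then train the network to reconstruct the clean input. The layer sizes, noise level and data below are illustrative, as the paper's configuration is not given; this is a minimal tied-weight sketch in NumPy:

```python
# A one-layer denoising autoencoder with tied weights, trained by
# gradient descent on a toy low-rank matrix standing in for freight
# indicator data (all shapes and hyperparameters are illustrative).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_dae(X, hidden=8, noise=0.1, lr=0.3, epochs=500):
    """Return mean squared reconstruction error (before, after) training."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, hidden))
    b = np.zeros(hidden)  # encoder bias
    c = np.zeros(d)       # decoder bias

    def recon_err(Xc):
        R = sigmoid(sigmoid(Xc @ W + b) @ W.T + c)
        return float(np.mean((R - Xc) ** 2))

    before = recon_err(X)
    for _ in range(epochs):
        Xn = X + rng.normal(0.0, noise, X.shape)  # corrupt the input
        H = sigmoid(Xn @ W + b)                   # encode
        R = sigmoid(H @ W.T + c)                  # decode
        dR = (R - X) * R * (1 - R)                # target is the *clean* X
        dH = (dR @ W) * H * (1 - H)
        W -= lr * (Xn.T @ dH + dR.T @ H) / n      # tied-weight gradient
        b -= lr * dH.mean(axis=0)
        c -= lr * dR.mean(axis=0)
    return before, recon_err(X)

# Toy low-rank "freight indicator" matrix scaled to [0, 1].
Z = rng.random((64, 2))
X = Z @ rng.random((2, 12))
X = (X - X.min()) / (X.max() - X.min())
err_before, err_after = train_dae(X)
```

Stacking means training one such layer, then training the next on the hidden codes of the first; a regression head on top then yields the freight volume forecast.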

  • The use of edge computing-based internet of things big data in the design of power intelligent management and control platform   Order a copy of this article
    by Xin Ju 
    Abstract: The purpose is to apply Internet of Things (IoT) and big data technology to the power management and control platform, and to improve the intelligence of power management and control. The power intelligent management and control platform is designed, focusing on the online monitoring and management of equipment. A lightning location system based on IoT technology is proposed, including an application model that combines edge computing with the cloud platform. With the Shaanxi regional power grid as an example, the power consumption data of 8000 users are selected as the data source for cluster analysis. The results show that the power intelligent management and control platform has good performance and can monitor power equipment online. Edge computing is superior to the traditional centralised algorithm in classification accuracy and platform running time. In different fault environments, the proposed platform outperforms the traditional power management and control platform in both the total time for fault repair and defect elimination and the time consumed for fault location. This shows that the proposed power intelligent management and control platform can realise the intelligent analysis of multi-source data, and its good management and control performance makes it usable in practice.
    Keywords: edge computing; internet of things; big data; power intelligent management and control platform; cloud platform.
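The abstract mentions cluster analysis of user power-consumption data but does not name the algorithm. A common choice for grouping daily load profiles is k-means; the sketch below uses deterministic farthest-first initialisation and synthetic 24-hour load curves (two invented user types, "daytime" and "evening-peak"), purely for illustration:

```python
# Minimal k-means over synthetic daily load profiles. The load shapes
# and cluster count are illustrative; the paper's actual data and
# clustering method are not specified.
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=20):
    # farthest-first initialisation: each new centroid is the point
    # farthest from all centroids chosen so far
    centroids = [X[0]]
    for _ in range(k - 1):
        dmin = np.min([np.linalg.norm(X - c, axis=1) for c in centroids],
                      axis=0)
        centroids.append(X[np.argmax(dmin)])
    centroids = np.array(centroids)
    for _ in range(iters):
        # assign each profile to its nearest centroid, then re-average
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Synthetic 24-hour load curves for two user types.
t = np.arange(24)
day = np.exp(-((t - 12) ** 2) / 18.0)       # midday peak
evening = np.exp(-((t - 20) ** 2) / 8.0)    # evening peak
X = np.vstack([day + 0.05 * rng.normal(size=(40, 24)),
               evening + 0.05 * rng.normal(size=(40, 24))])
labels, _ = kmeans(X, k=2)
```

On real smart-meter data the same loop scales to the 8000-user setting; the cluster centroids then serve as representative consumption profiles for the platform's analysis.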

  • The use of deep learning and AIoT technology in loan word translation   Order a copy of this article
    by Xiaoyan Xu, Zhenhuan Yang 
    Abstract: The purpose is to translate loan words accurately, standardise their use, and better integrate them with Chinese culture. The role of intelligent translation in loan-word translation is studied based on deep learning and the intelligent internet of things. Moreover, a new neural translation mechanism with a divide-and-conquer strategy is proposed on the basis of existing artificial intelligence (AI) translation technology. Its translation quality is tested and compared with manual translation. The new neural translation mechanism can approach the accuracy of everyday human language: the system's MNP (mobile number portability) identification accuracy is 74.67%, the recall rate is 71.42% and the F value is 74.01%, with an average time of 0.01 s/character. The results therefore suggest that the constructed AI translation can complete translation tasks more efficiently, save considerable time and labour costs, and provide a crucial reference for the intelligent development of the translation industry.
    Keywords: intelligent translation; loan word translation; deep learning; intelligent internet of things.

Special Issue on: Computational Intelligence Methods for Smart Connectivity in IoT

  • Machine learning for cloud, fog, edge and serverless computing environments: comparisons, performance evaluation benchmark and future directions   Order a copy of this article
    by Parminder Singh, Avinash Kaur, Sukhpal Singh Gill 
    Abstract: Compute-intensive and latency-sensitive Internet of Things (IoT) applications need to use services from various computing paradigms, but they face challenges such as high latency, energy consumption and network bandwidth usage. To analyse and understand these challenges, we designed a performance evaluation benchmark that integrates cloud, fog, edge and serverless computing to conduct a comparative study of IoT-based healthcare applications. It gives developers a platform for designing IoT applications, based on user guidelines, that run various workloads concurrently on different paradigms. Furthermore, we used recent machine learning techniques to optimise resources, energy, cost and overheads, identifying the best technique according to key Quality of Service (QoS) parameters. Experimental results show that serverless computing outperforms non-serverless computing in energy, latency, bandwidth, response time and scalability by 3.8%, 3.2%, 4.3%, 1.5% and 2.7%, respectively. Finally, various promising future directions are highlighted.
    Keywords: artificial intelligence; fog computing; edge computing; internet of things; machine learning; serverless computing; cloud computing.

  • Deep learning based object detection between train and rail transit platform door   Order a copy of this article
    by Fen Cheng, Hao Cai 
    Abstract: This paper proposes a deep learning model and designs a wide-gap outdoor platform foreign-object detection system to detect foreign objects between rail transit platform doors and trains. The system monitors the gap through video under various environmental conditions and uses image detection algorithms to determine whether it contains foreign objects. The main lines of the research are as follows: explore the real environment of the subway, create a virtual experimental environment, design the hardware of the detection system, and obtain video images taken under various weather and lighting conditions for image analysis and processing. The method reliably detects pedestrians getting on and off the train, as well as objects left in the gap while the train is stopped at the platform, and its detection accuracy for large objects reaches 100%.
    Keywords: internet of things; passenger safety; deep learning; machine vision; foreign object detection.
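The abstract does not give the network architecture, but the classical baseline for gap monitoring against which such deep models are usually compared is background subtraction: compare each frame of the empty gap against a reference image and flag any sufficiently large changed region. All image sizes and thresholds below are invented for illustration:

```python
# Background-subtraction baseline for foreign-object detection in the
# platform gap: a frame is flagged when the total area of pixels that
# differ from the empty-gap reference exceeds a size threshold.
# Thresholds and image dimensions are illustrative, not from the paper.
import numpy as np

def detect_foreign_object(background, frame, diff_thresh=30, min_area=25):
    """Return True if `frame` differs from the empty-gap `background`
    over at least `min_area` pixels (greyscale uint8 images)."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    mask = diff > diff_thresh          # per-pixel change map
    return int(mask.sum()) >= min_area

bg = np.full((60, 80), 100, dtype=np.uint8)  # empty gap, uniform grey
frame = bg.copy()
frame[20:30, 30:40] = 200                    # a 10x10-pixel "object"
```

A learned detector replaces the fixed thresholds with features robust to the weather and lighting variation the paper emphasises, which is where the deep model earns its keep over this baseline.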