Forthcoming articles


International Journal of Internet Technology and Secured Transactions


These articles have been peer-reviewed and accepted for publication in IJITST, but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.


Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.


Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.


Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.


Register for our alerting service, which notifies you by email when new issues of IJITST are published online.


We also offer RSS feeds which provide timely updates of tables of contents, newly published articles and calls for papers.


International Journal of Internet Technology and Secured Transactions (58 papers in press)


Regular Issues


  • Design and Efficient Implementation of a Chaos-based Stream Cipher   Order a copy of this article
    by Mohammed Abutaha, Safwan EL ASSAD, Audrey Queudet, Olivier Deforges 
    Abstract: We designed and implemented a stream cipher cryptosystem based on an efficient chaotic generator of finite computing precision (N = 32). The proposed structure of the chaotic generator comprises a key-setup, an IV-setup, a non-volatile memory, an output function and an internal state function. The chaotic generator operates in internal feedback mode, and the generated keystream is used for secure stream ciphering. The cryptographic complexity lies mainly in the internal state, which contains two recursive filters with one, two or three delays. Each recursive filter includes a perturbation technique using a linear feedback shift register (LFSR). The first recursive filter includes a discrete skew tent map, and the second a discrete piecewise linear chaotic map (PWLCM). The chaotic generator is implemented in sequential and parallel versions using the Pthread library. The proposed stream ciphers achieve very good performance in terms of security and execution time. The parallel version of the proposed chaos-based stream cipher is faster than the eSTREAM project ciphers and other known chaos-based stream ciphers when the data size is large. The security of the chaos-based stream cipher is analysed: cryptanalytic analysis and statistical tests, such as the histogram with chi-square test, correlation analysis and the NIST test suite, are applied. Experimental results highlight the robustness of the proposed system. The security of the implemented stream ciphers is further investigated by applying several software security tools.
    Keywords: Stream cipher; Chaotic generator; Chaotic multiplexing; Parallel computing.
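As a rough illustration of the keystream idea in the abstract, the sketch below iterates a (floating-point) skew tent map and XORs the extracted bytes with the plaintext. This is a minimal sketch only: the paper's generator works in 32-bit finite precision and adds a PWLCM, recursive filters and LFSR perturbation, all omitted here, and the seed and control parameter below are arbitrary.

```python
def skew_tent(x, p):
    """Skew tent map on (0, 1): piecewise linear, chaotic for 0 < p < 1."""
    return x / p if x < p else (1.0 - x) / (1.0 - p)

def keystream(seed, p, nbytes):
    """Toy keystream: iterate the map and take 8 bits per step."""
    x = seed
    out = bytearray()
    for _ in range(nbytes):
        x = skew_tent(x, p)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

# Stream encryption is just XOR with the keystream:
cipher = bytes(k ^ m for k, m in zip(keystream(0.123456, 0.4, 5), b"hello"))
```

Decryption applies the same keystream again, since XOR is its own inverse.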

  • Blockchain and Bitcoin as a way to lift a country out of poverty: Tourism 2.0 and e-governance in the Republic of Moldova   Order a copy of this article
    by Marc Pilkington 
    Abstract: In this article, we explore the formidable yet untapped capabilities of Blockchain technology and Bitcoin to alleviate poverty. We focus on the Republic of Moldova, which has been plagued by endemic corruption and persistently high poverty levels since its independence in 1991 following the collapse of the Soviet Union. The transformative power of Blockchain technology and Bitcoin is then evidenced through a dual analysis of tourism 2.0 (with a real-world case study) and e-governance, which can contribute to increased inward capital investment flows and help fight corrupt practices. Finally, we conclude that these new technologies constitute a significant step in the right direction towards breaking away from twenty-five years of disappointing socio-economic development performance.
    Keywords: Blockchain technology; tourism 2.0; e-governance; corruption; Republic of Moldova.

  • An Assessment of the Application of IT Security Mechanisms to Industrial Control Systems   Order a copy of this article
    by Allan Cook, Helge Janicke, Leandros Maglaras, Richard Smith 
    Abstract: Industrial control systems (ICS) are increasingly becoming the target of cyber attacks. In order to counter this threat, organisations are turning to traditional IT security mechanisms to protect their operations. However, ICS includes a range of technologies which are often unfamiliar to contemporary IT security professionals or the tools they deploy. This paper explores the applicability of these tools within an ICS and critically analyses contemporary ICS architectures. The contribution of this paper is a clear identification of the areas of ICS to which IT security mechanisms can be applied and the challenges that are faced in the others. The paper continues to explore what mechanisms may be considered in these non-traditional areas of technology.
    Keywords: ICS; cyber; security; SCADA.

  • A Simulation-Based Correlation Power Analysis Attack to Hardware Implementation of KASUMI Block Cipher   Order a copy of this article
    by Massoud Masoumi 
    Abstract: A power analysis attack involves measuring the supply current of a cipher circuit in an attempt to uncover part of a cipher key. Cryptographic security is compromised if current waveforms obtained during the execution of the cipher correlate with those from a hypothetical power model of the circuit. Correlation Power Analysis (CPA) is a powerful kind of power analysis attack that is able to break ciphers using the correlation between the power consumption of the device and the Hamming weight of the key-dependent values of the algorithm. This paper describes a CPA attack against an FPGA implementation of KASUMI, a block cipher used in the confidentiality and integrity algorithms of 3GPP (3rd Generation Partnership Project) mobile communications, which is also very suitable for hardware implementation. Another contribution of this paper is that it presents a simulation-based CPA attack. The main advantage of a simulation-based attack is that it does not need any experimental setup, which leads to considerable savings in time and cost. To the best of our knowledge, few articles present a simulation-based power analysis of block ciphers in detail, and specifically, there is no report in the open literature of mounting a simulation-based or measurement-based power analysis attack against a hardware implementation of KASUMI.
    Keywords: KASUMI Block Cipher; FPGA Implementation; Correlation Power Analysis.
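The core of a CPA attack, as described above, is correlating a Hamming-weight model against the traces for every key guess. The self-contained sketch below uses a toy intermediate value `p ^ k` and synthetic noisy traces; it is not the KASUMI attack itself, whose intermediate values and leakage points differ.

```python
import random

def hw(v):
    """Hamming weight of a byte."""
    return bin(v).count("1")

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def cpa_recover(plain, traces):
    """Return the key byte whose HW model best correlates with the traces.
    The intermediate p ^ k stands in for a real KASUMI round value."""
    return max(range(256),
               key=lambda k: abs(pearson([hw(p ^ k) for p in plain], traces)))

# Simulated acquisition: leakage = HW of the true intermediate + noise.
random.seed(1)
SECRET = 0x3C
plain = [random.randrange(256) for _ in range(300)]
traces = [hw(p ^ SECRET) + random.gauss(0, 0.5) for p in plain]
```

With enough traces, the correct guess dominates because only it produces a model aligned with the actual leakage.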

  • A Forward Compatible IoT Protocol and Framework Addressing Concerns due to Internet-outage   Order a copy of this article
    by Gourinath Banda, Krishna Chaitanya Bommakanti, Harsh Mohan, Abhay Chandra 
    Abstract: The Internet of Things (IoT) paradigm is becoming ubiquitous in everyday life. IoT-based systems are increasingly being deployed in household, health-care, public-utility and related applications. The reason for such rapid acceptance is largely the Internet's connect-from-anywhere capability. However, due to some weaknesses inherent in the internet itself, IoT deployments also face certain challenges. Though there is a growing number of IoT-related reference architectures, frameworks, guidelines, platforms and standards, the majority of them are yet to address the concerns arising from internet outage. Furthermore, such growing numbers of architectures, frameworks, etc. also mean bountiful amounts of both opportunities and risks for IoT vendors and original equipment manufacturers (OEMs). Of course, IoT-device consumers also run risks if their purchased products' frameworks are incompatible. We present OneIoT version 2 (v2), an IoT framework and protocol that addresses the internet-outage scenario. Like version 1 (v1), OneIoT v2 is also forward compatible.
    Keywords: Forward compatible; Internet of Things; Internet Outage; One IoT; Safe-mode of operation.

  • WebRTC Security Measures and Weaknesses
    by Ben Feher, Lior Sidi, Asaf Shabtai, Rami Puzis, Leonardas Marozas 
    Abstract: WebRTC is a technology that enables real-time communication between Web browsers for information streaming, including text, sound or direct data transfer. WebRTC is supported by all major browsers and has a flexible underlying infrastructure. In this study, we review the current state of WebRTC and analyse its security shortcomings during acts of communication disruption, modification and eavesdropping. In addition, we examine WebRTC security in experimental scenarios.
    Keywords: WebRTC; attack patterns; browser streaming; telecommunication services; lawful interception; security; weaknesses; mitigation

Special Issue on: Approaching Security Challenges and their Remedies

  • TAACS-FL: Trust Aware Access Control System using Fuzzy Logic for Internet of Things   Order a copy of this article
    by Thirukkumaran Raman, Muthukannan P 
    Abstract: The Internet of Things (IoT) is a technological revolution that has recently become more important to the real world because of the growth of smart devices and of embedded and ubiquitous communication technologies, combined with the cyber world to provide many smart services. These services in particular create new challenges for security and privacy. To address this issue, trust management and access control must be the focus in the IoT. This paper presents a Trust Aware Access Control System using Fuzzy Logic (TAACS-FL) for the IoT. Access control is an important mechanism to ensure that only trusted users/devices access data from a sensor device or command actuators to perform a task in the IoT context. First, we monitor the devices and gather trust parameters: Successful Forward Ratio (SFR), Data Integrity (DI) and Energy Consumption Rate (ECR). Second, the trust parameters are combined using a fuzzy engine and an overall trust value is calculated. Third, the access control method is defined based on the trust value. NS-2 based simulation results show that TAACS-FL guarantees scalability and energy efficiency.
    Keywords: Internet of Things; Trust; Access control; Security; Fuzzy; Cluster.
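A minimal sketch of combining the three trust inputs (SFR, DI, ECR) with fuzzy membership functions is shown below. The membership shape, the min-based AND rule and the access threshold are illustrative assumptions, not the paper's rule base.

```python
def mu_high(x):
    """Membership of x in the fuzzy set 'high': a ramp from 0.3 to 0.8."""
    return max(0.0, min(1.0, (x - 0.3) / 0.5))

def trust_value(sfr, di, ecr):
    """AND the three antecedents with min(); low energy consumption is
    desirable, so ECR enters inverted. Illustrative rule only."""
    return min(mu_high(sfr), mu_high(di), mu_high(1.0 - ecr))

def grant_access(sfr, di, ecr, threshold=0.5):
    """Admit a device only if its fuzzy trust clears the threshold."""
    return trust_value(sfr, di, ecr) >= threshold
```

A well-behaved node (high SFR and DI, low ECR) passes; a node that forwards reliably but drains energy fast does not.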

  • A dependable and lightweight trust proliferation approach for the collaborative IoT systems   Order a copy of this article
    by Hayet Benkerrou, Mawloud Omar, Fatah Bouchebbah, Younes Ait-Mouhoub 
    Abstract: Trust and reputation evaluation in the Internet of Things (IoT) nowadays constitutes a major challenge that still attracts the research community to work on and investigate. The IoT, with its widely heterogeneous objects, requires collaborative computation to perform heavy cryptographic operations on the most resource-constrained devices. Indeed, the effectiveness of trust and reputation assessment plays an important role in selecting the best collaborators. In this paper, we propose a dependable and lightweight trust proliferation approach for collaborative IoT systems. By recording the malicious tasks which could be performed during service execution, our approach introduces a new derivation of direct trust. Furthermore, it dynamically combines direct and indirect trust evaluation to ensure more trustworthiness and reliability in the selection of the best collaborator objects. Through simulations, we show that our trust derivation approach achieves high accuracy in the selection of efficient and trustworthy collaborators.
    Keywords: Internet of Things; Trust; Reputation; Security; Collaboration.

Special Issue on: ICRTCCM'17 Intelligent Machine Learning Algorithms for High Performance Computing

  • Design and Analysis of Smart Card based Authentication Scheme for Secure Transactions   Order a copy of this article
    by Akshat Pradhan, Marimuthu Karuppiah, Niranchana R, Asha Jerlin M, Rajkumar S 
    Abstract: Remote authentication schemes utilizing smart cards have become prevalent due to their convenience and simplicity. Recently, Lee et al. proposed a low-cost authentication scheme without verifier tables. However, in this paper we show that Lee et al.'s scheme is susceptible to various attacks and fails to provide essential security properties. We then present our own scheme and perform an informal analysis to substantiate the claim that our scheme resists the weaknesses of the previous scheme.
    Keywords: User Anonymity; Offline password guessing attack; Smart Card; User impersonation attack; Forgery attack.
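To illustrate the general flavour of smart-card authentication without verifier tables, here is a minimal hash-based challenge-response exchange. It is a generic sketch, not Lee et al.'s scheme or the scheme proposed in the paper, and the names are illustrative.

```python
import hashlib

def h(*parts):
    """Hash a sequence of byte strings with a separator."""
    return hashlib.sha256(b"|".join(parts)).hexdigest()

def card_response(secret, nonce):
    """The card proves knowledge of its secret without transmitting it:
    it returns H(secret, server-supplied nonce)."""
    return h(secret, nonce)

def server_verify(stored_secret, nonce, response):
    """The server recomputes the same hash and compares."""
    return response == h(stored_secret, nonce)
```

A fresh nonce per login prevents replay; schemes like the one above still need extra measures (timestamps, anonymity, key agreement) to resist the attacks the paper discusses.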

    by Karthi Shankar, Prabu S 
    Abstract: Advancements in satellite remote sensing technology have led to exponential growth in data. Managing such complex data has been made simpler by Geospatial Information Systems (GIS) through cloud deployment. Processed geospatial data is stored in the public cloud via a secured platform. The main objective of this paper is to propose an efficient way of storing geospatial data using the spatial Hadoop mechanism in a cloud environment. Spatial Hadoop GIS gives the platform better scalability, so that a high volume of requests can be served.
    Keywords: Geospatial Data Storage; Geospatial Information Systems (GIS); Cloud Environment; Authentication; Authorization.

  • QOS Routing Protocol to Detect Maximum Available Bandwidth in WMCs   Order a copy of this article
    by Manikandan Ramamurthy 
    Abstract: Wireless mesh communities (WMCs) have become an important access network for providing internet connectivity to underserved domains and wireless links in metropolitan environments. This paper targets the problem of identifying the maximum available bandwidth path, an essential issue in supporting quality of service in WMCs. Because of interference, bandwidth is a challenging metric in such networks, so a new path weight is validated that captures the available path bandwidth information. It is also proved that hop-by-hop routing based on the new path weight fully satisfies the consistency and loop-freeness requirements. The consistency property ensures that each node makes a proper packet-forwarding choice, so that a data packet traverses the intended path. Experimental results show that the proposed path weight outperforms existing path metrics in identifying high-throughput paths.
    Keywords: Distributed algorithm; Wireless mesh networks; proactive hop-by-hop routing; QOS routing.
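Finding a maximum-available-bandwidth path can be sketched as a "widest path" variant of Dijkstra's algorithm, where a path's weight is the minimum bandwidth of its links. This sketch ignores the interference effects that the paper's metric accounts for.

```python
import heapq

def widest_path(graph, src, dst):
    """Return the maximum bottleneck bandwidth from src to dst.
    graph: {node: [(neighbor, link_bandwidth), ...]}."""
    best = {src: float("inf")}
    heap = [(-float("inf"), src)]          # max-heap via negation
    while heap:
        bw, u = heapq.heappop(heap)
        bw = -bw
        if u == dst:
            return bw                      # widest route found
        if bw < best.get(u, 0):
            continue                       # stale heap entry
        for v, link_bw in graph.get(u, []):
            cand = min(bw, link_bw)        # path weight = bottleneck link
            if cand > best.get(v, 0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v))
    return 0
```

On a small topology the bottleneck rule picks the route whose weakest link is strongest, not the shortest one.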

    by Santhosh Kumar P 
    Abstract: Cloud computing is an emerging technology which supports the storage of data via the internet. The communication takes place in an open-access environment, and this in turn creates security and privacy issues which are a real challenge for cloud users. This raises the need for a secure data auditing mechanism over outsourced data. Most existing schemes lack security features that can withstand collusion attacks between the cloud server and unauthorized users. Another important problem in existing systems is data integrity. This paper presents a technique to thwart collusion attacks, with the data auditing mechanism achieved by means of vector commitment and backward-unlinkable verifier-local revocation group signatures. The proposed work involves a double encryption technique to address the privacy measures on the cloud server. To extend the security measures, a single file is split into different blocks stored under different file names, which makes it ambiguous for attackers to trace the original data. The performance of the proposed work is analysed and compared with existing techniques, and the experimental results are observed to be satisfactory in terms of computational and time complexity.
    Keywords: Cloud computing; Data auditing; Vector Commitment; Double Encryption; Privacy.
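The block-splitting idea described above can be sketched as follows. The naming rule is an illustrative assumption, and the paper's double encryption and vector commitments are omitted entirely.

```python
import hashlib

def split_blocks(data, block_size=4):
    """Split a file into fixed-size blocks stored under derived names that
    do not reveal their position or content ordering to an observer."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    # One opaque name per block (illustrative derivation, not the paper's):
    return {hashlib.sha256(bytes([i]) + blk).hexdigest()[:12]: blk
            for i, blk in enumerate(blocks)}

def reassemble(named_blocks):
    """The owner, who knows the insertion order, can rejoin the blocks."""
    return b"".join(named_blocks.values())
```

An attacker who sees only the stored names cannot tell which blocks belong together or in what order, which is the ambiguity the abstract mentions.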

  • Propels in Compiler Construction for Versatile Figuring   Order a copy of this article
    by Desurkannadasanr Rajendran 
    Abstract: This paper presents a compiler framework for adaptive computing. Our technique builds in flexibility and convenience in a way that allows the system to be ported to different cores with minimal effort. Building on an existing design flow, we attempt to achieve a new level of functionality by analysing and partitioning programs written in C at the highest feasible representation level. We show that analysis at this level is more effective than at lower ones because of the more expressive programming constructs available. The improved analysis results, combined with a new static single assignment (SSA) based algorithm for datapath creation, can lead to higher overall quality of the final system design.
    Keywords: hardware/software partitioning; adaptive systems; compiler frameworks; reconfiguration scheduling.

  • Automatic Segmentation of Pathological Region (Tumor and Edema) in High Grade Glioma Multi-sequence MR Images through Voted Prediction from Pixel Level Feature Sets   Order a copy of this article
    by Geetha Ramani R, Sivaselvi Krishnamoorthy 
    Abstract: Automatic region segmentation of the brain from neuroimages is an active research area in the medical domain. Currently, different kinds of Magnetic Resonance Imaging acquisition are performed such that each technique highlights a specific region of the brain, making multi-sequence images a better candidate for investigation than single sequences. The abnormal regions (tumor and edema) in glioma images are segmented through a hybrid technique involving preprocessing, feature extraction and classification. The extracted features are grouped, the random forest procedure is applied to each set, and a prediction is obtained that minimizes the randomization. The final prediction for a pixel is obtained by aggregating the individual predictions from each feature set through maximum voting, which strengthens the ensembling and improves the outcome appreciably. The average Dice coefficient of tumor and edema segmentation is 0.96 and 0.94 respectively with 3-fold cross validation. The results show significant improvement when compared to earlier methodologies.
    Keywords: Magnetic Resonance Imaging; Brain Tumor Segmentation; Image Analysis; Data Mining; Classification; MICCAI BRATS 2012 Challenge Dataset; Random Forest; Glioma; Tumor; Edema.
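The maximum-voting aggregation across feature-set predictions can be sketched directly; the label names and the per-feature-set classifiers themselves are stand-ins for the paper's random forests.

```python
from collections import Counter

def voted_label(predictions):
    """Majority vote over the per-feature-set predictions for one pixel."""
    return Counter(predictions).most_common(1)[0][0]

def segment(per_set_predictions):
    """per_set_predictions[s][i] = label from feature set s for pixel i.
    Returns the voted label for every pixel."""
    return [voted_label(column) for column in zip(*per_set_predictions)]
```

Each pixel's final label is whichever class most of the feature-set models agreed on.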

  • Improving Performance an Artificial Bee Colony Optimization on Cloud sim   Order a copy of this article
    Abstract: The major aim of this proposal is to identify an accurate data search and to handle data that comes from anywhere. Furthermore, the data itself may be too large to store on a single machine, so computers are interconnected with each other through massive internet storage technologies. This approach mainly focuses on the design of search engines and their infrastructure. Improved micro-partitioning is a modularized approach to cloud computing, framed mainly to overcome the pitfalls of traditional search engines and of manipulating large information stored on a single computer. The Artificial Bee Colony (ABC) algorithm is an optimization algorithm that simulates the careful foraging behaviour of honey bees. In this work, the ABC algorithm is applied to improve the scheduling of Virtual Machines (VMs) in cloud computing, pre-emptively and for heterogeneous tasks. The essential contribution of this work is to analyse the behaviour of virtual machine load-balancing algorithms and to reduce the makespan of data processing time, that is, the length of the schedule. The scheduling framework was simulated using the CloudSim toolkit. Experimental results demonstrate that the combination of the proposed ABC algorithm, scheduling based on the size of tasks, and the Longest Job First (LJF) scheduling algorithm delivers good scheduling performance in a changing environment, balancing the workload and reducing the makespan of data processing time.
    Keywords: Artificial Bee Colony; Virtual Machine; heterogeneous; framework; Cloud computing.
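The Longest Job First placement mentioned in the abstract can be sketched as greedy list scheduling: sort tasks by length, then repeatedly hand the next task to the least-loaded VM. This is a sketch of the LJF component only; the paper combines it with ABC optimisation in CloudSim.

```python
def ljf_makespan(tasks, vms):
    """Longest-Job-First list scheduling: returns the makespan (the load
    of the busiest VM) after all tasks are placed.
    tasks: list of task lengths; vms: number of identical VMs."""
    loads = [0.0] * vms
    for t in sorted(tasks, reverse=True):   # longest job first
        i = loads.index(min(loads))         # least-loaded VM
        loads[i] += t
    return max(loads)
```

Placing long tasks first tends to even out the final loads and thereby shorten the makespan relative to arbitrary-order placement.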

  • Enhanced Efficient SYN spoofing Detection and Mitigation Scheme for DDoS attack   Order a copy of this article
    by Kavisankar Leelasankar, Chellappan C, Venkatesan S, Sivasankar P 
    Abstract: Protection of critical servers from cyber attacks is vital, especially in the case of active attacks like Distributed Denial of Service (DDoS). Attackers often start with a Denial of Service (DoS) attack, since, unlike a DDoS attack, it does not need a distributed infrastructure: a number of attack packets are generated from a single attacking system and sent to the victim server to cause denial of service to legitimate users. Generally, DoS is an action that prevents or impairs the authorized use of networks, systems or applications by exhausting resources such as central processing units (CPU), memory, bandwidth and disk space. Constant availability of the server provides seamless services and is an important factor in giving customers a good Quality of Service (QoS). Monitoring and rate-limiting the flow of packets protects victim systems by allowing only trusted users during a DDoS attack. The job of security professionals becomes complex when the attacks are launched from trusted IP addresses using Synchronization (SYN) spoofing. The work presented in this paper experiments with an Efficient Spoofed Mitigation Scheme (ESMS), which uses a TCP probing method along with a bloom filter trust model. The experiments are carried out in both IPv4 and IPv6 environments on the SSE (Smart and Secure Environment) real-time test bed, and the proposed scheme provides accurate and robust information for the detection and control of spoofed packets during DDoS attacks.
    Keywords: DDoS; ESMS; IP Spoofing; SYN Spoofing; TCP SYN flooding; Trust value.
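A Bloom filter trust store, as used in ESMS alongside TCP probing, can be sketched as follows. The filter size, hash construction and the idea of keying it on trusted source IPs are illustrative assumptions, not the paper's parameters.

```python
import hashlib

class BloomFilter:
    """Compact membership test for trusted source IPs: false positives are
    possible (and tunable via m and k), false negatives are not."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        # k independent positions derived from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, ip):
        for p in self._positions(ip):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, ip):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(ip))
```

During an attack, SYN packets from sources that pass the probe are added; later packets can be admitted or rate-limited with a constant-time lookup.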

  • An Efficient Probabilistic Authentication Scheme for Converging VANET   Order a copy of this article
    by HEMAMALINI , Zayaraz , Susmitha Vasanthakumar, Saranya Vadivelu 
    Abstract: A Vehicular Ad Hoc Network (VANET) is a subgroup of the Mobile Ad Hoc Network. A VANET interconnects nodes for transferring secure information between them; here, each vehicle acts as a node. In ACPN, public-key cryptography (PKC) is applied to pseudonym generation, which ensures that legitimate third parties can achieve non-repudiation of vehicles by obtaining the vehicles' real IDs. The self-generated PKC-based pseudonyms are also used as identifiers instead of vehicle IDs for privacy-preserving authentication, while the update of the pseudonyms depends on vehicular demands. The Convergence Point Access (CPA) is used to enable communication; when a vehicle moves out of a particular range, a handover scheme takes over, providing seamless connectivity and secure message transmission. However, adversarial nodes can provide false position information or disrupt the acquisition of such information. Thus, in VANETs, the discovery of neighbor positions should be performed in a secure manner. In spite of a multitude of security protocols in the literature, there is no secure discovery protocol for neighbors' positions. We address this problem in this paper: we design a distributed protocol that relies solely on information exchange among one-hop neighbors, we analyze its security properties in the presence of one or multiple (independent or colluding) adversaries, and we evaluate its performance in a VANET environment using realistic mobility traces. We show that our protocol can be highly effective in detecting falsified position information while maintaining a low rate of false positive detections.
    Keywords: Public Key Cryptography(PKC); Converging Point Access(CPA); Neighbor Discovery.
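A basic secure neighbour position check can be sketched as a consistency test between a claimed position and an independent ranging measurement (e.g. from RSSI or time of flight). The tolerance value and the check itself are illustrative, not the paper's protocol.

```python
import math

def plausible(claimed_pos, my_pos, measured_dist, tolerance=5.0):
    """Accept a neighbour's claimed (x, y) position only if it is
    consistent with the ranging measurement, within a tolerance that
    absorbs measurement noise."""
    dx = claimed_pos[0] - my_pos[0]
    dy = claimed_pos[1] - my_pos[1]
    return abs(math.hypot(dx, dy) - measured_dist) <= tolerance
```

A node claiming to be 100 m away while the ranging says 30 m is flagged; cross-checking such tests among one-hop neighbours is what lets colluding liars be caught.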

Special Issue on: High-Performance Computing Technologies and Emerging Services for IoT Systems

  • A Review of Testing Cloud Security   Order a copy of this article
    by Eric Zenker, Maryam Shahpasand 
    Abstract: The cloud computing paradigm is a nascent technology with many benefits for organisations. The adoption process is constantly advancing, as global revenue for SaaS increased by 14.8% in 2016. On the other side, the security of clouds is still one of the major concerns of clients in adopting and using the new computing paradigm. The security of a cloud environment underlies three key IT principles: availability, integrity and confidentiality. Likely attacks to cause security breaches are, for example, SQL injections or DoS attacks. Among the most important and effective security measures to prevent such occurrences are encryption and authentication/authorisation. To ensure a high level of security of cloud services and applications, testing is an appropriate approach to detect possible vulnerabilities before real case scenarios occur. In terms of clouds, testing is distinguished into TaaS and testing the cloud. Many academic papers have been published to identify and address challenges in cloud security, vulnerabilities and threats. However, most researchers have focused on TaaS rather than on testing the cloud, which has left a gap in the academic literature. This paper presents a systematic literature review of testing cloud security. The authors elucidate a general and consistent topic overview, beginning by defining and introducing key terms. Furthermore, gaps in recent related publications are revealed, and prospective research implications are pointed out. This survey addresses challenges, vulnerabilities and threats regarding cloud security to foster the understanding and relations of current research fields.
    Keywords: Cloud Computing; SaaS; Security; Testing as a Service; Threat; Vulnerability.

  • Rearranging links: A Cost-Effective Approach to Improve the Reliability of Multistage Interconnection Networks   Order a copy of this article
    by Fathollah Bistouni, Mohsen Jahanshahi 
    Abstract: One of the main ways to achieve high computational power is the use of multiprocessor systems. Multistage interconnection networks (MINs) are widely used in parallel multiprocessor systems to connect processors and memory modules. Therefore, the design of an efficient MIN is critical to the construction of high-performance multiprocessor systems. On the other hand, reliability is one of the most important performance parameters in the context of interconnection networks, while hardware cost is a limitation in the design of highly reliable interconnection networks. In this paper, a new approach to improve the reliability of MINs, called link rearranging, is proposed. The proposed approach is implemented on two common MINs, namely the extra-stage shuffle-exchange network (SEN+) and the augmented shuffle-exchange network (ASEN). Meticulous analysis of terminal reliability proves that the proposed approach is an efficient method to improve the reliability of MINs. In addition, the cost analysis performed confirms that applying it leads to cost-effective MINs.
    Keywords: Multiprocessor systems; Multistage interconnection network; Reliability; Cost-effectiveness; Reliability block diagrams.
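Terminal reliability of an MIN is commonly evaluated from a reliability block diagram by combining series and parallel stages. The sketch below shows the two combinators on illustrative numbers, not the SEN+/ASEN block diagrams from the paper.

```python
def series(*rs):
    """Reliability of components in series: all must work."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs):
    """Reliability of redundant branches: at least one must work."""
    fail = 1.0
    for r in rs:
        fail *= 1.0 - r
    return 1.0 - fail

# Two redundant paths, each of three switches (illustrative numbers):
r_switch = 0.99
single_path = series(r_switch, r_switch, r_switch)
two_paths = parallel(single_path, single_path)
```

Adding a redundant path (what rearranged links effectively buy) raises terminal reliability without improving any individual switch.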

  • Distributed algorithm to fight the state explosion problem   Order a copy of this article
    by Lamia Allal, Ghalem Belalem, Dhaussy Philippe, Ciprian Teodorov 
    Abstract: Model checking, introduced 20 years ago, combines several fully automatic techniques in which the property to be checked is tested exhaustively on all possible executions of the system. It is an automated approach to verifying that a system meets its specifications. The main limit to the use of model checking is the state explosion problem, which occurs when the number of states increases exponentially with the complexity of the system. In this article, we present a distributed exploration algorithm executed on two different architectures to fight this problem: the first uses two real machines interconnected across the network, and the second uses two virtual machines in a cloud computing environment. We carry out a comparative study between these two distributed approaches as well as a parallel approach studied in Allal et al. (2016). The aim of this paper is to give the advantages and drawbacks of each solution.
    Keywords: Model checking; state explosion problem; parallel exploration; distributed exploration; execution time; memory space.
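Distributed state-space exploration can be sketched (sequentially) as hash-partitioned reachability analysis, where each worker owns the states that hash to it; the real algorithm distributes these partitions across machines or VMs so that no single node must hold the whole state space.

```python
def explore(initial, successors, workers=2):
    """Breadth-first reachability with the visited set partitioned by a
    state hash, as a distributed scheme would shard it. Returns the total
    number of distinct states reached."""
    visited = [set() for _ in range(workers)]   # one shard per worker
    frontier = [initial]
    while frontier:
        nxt = []
        for s in frontier:
            w = hash(s) % workers               # owning worker for s
            if s in visited[w]:
                continue
            visited[w].add(s)
            nxt.extend(successors(s))
        frontier = nxt
    return sum(len(v) for v in visited)
```

On a toy system whose only transition is incrementing a counter modulo 5, exploration terminates after visiting all five states.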

  • Voice Over IP on Windows IoT Core   Order a copy of this article
    by Maryam Shahpasand, Ramlan Mahmod, Nur Izura Udzir 
    Abstract: The tremendous growth of the Internet of Things integrated with VoIP applications poses a serious challenge to digital forensic researchers. This integration complicates the investigation process because there are various types of VoIP applications with different design and implementation features. The diversity of VoIP applications used by criminals causes difficulties in the identification, acquisition and preservation of evidential data. This paper uses Skype to determine the data remnants on Windows 7 and Windows 8.1 when users run Skype functions, including installing and uninstalling, instant messaging, voice and video calls, video messaging, sending and receiving files, screen sharing and taking pictures. The link where the victim and suspect may have been in contact on Skype is provable by the proposed methods. Potential evidence can be found in captured memory, captured network traffic, user application data and the data remnants available on the user's device.
    Keywords: Voice over IP; Digital Forensics; VoIP Application; Skype; Application investigation; Internet of Thing (IoT).

  • Fuzzy Based Dynamic Packet Priority Determination and Queue Management method For Wireless Sensor Network   Order a copy of this article
    by Maya Shelke, Akshay Malhotra, Parikshit Mahalle 
    Abstract: The IoT era, in which the world expects to connect 50 billion devices to the internet by 2020, has increased the significance of Wireless Sensor Networks (WSNs) manifold. A WSN is made up of a large number of resource-constrained sensor nodes, and a network becomes more valuable when it can be used to its full potential. Hence the scheduling of different types of packets, for example emergency real-time, real-time and non-real-time data packets, at sensor nodes in WSNs is of crucial importance to reduce sensors' energy consumption, packet loss and end-to-end data transmission delays. The majority of existing packet scheduling algorithms, such as First Come First Serve (FCFS), preemptive priority scheduling and non-preemptive priority scheduling, are not adaptive to changes in the data traffic; they also have more processing overhead and large communication delays. In this paper, we propose a Fuzzy based Dynamic Packet priority determination and queue management mechanism (FDMP). In the proposed scheme, each node dynamically determines the priority of packets as Gold, Silver or Bronze. Gold packets are placed in the high priority queue, silver packets in the medium priority queue and bronze packets in the low priority queue. Higher priority packets can preempt medium and lower priority packets, and lower priority packets are preempted by medium priority packets. The proposed scheme increases fairness in scheduling packets and energy efficiency by delivering packets before their expiry (expired packets will not roam around the network), thus reducing network load. The results, obtained from Network Simulator version 2 (NS-2), demonstrate that the enhanced approach outperforms FCFS and dynamic multilevel priority queue scheduling algorithms in terms of lower end-to-end delay, throughput, packet delivery ratio and average residual energy.
    Keywords: Wireless Sensor Networks; fuzzy logic; packet scheduling; FCFS; preemptive priority scheduling; non-preemptive priority scheduling; data waiting time; end-to-end delay; network lifetime.
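As an illustration of the kind of fuzzy priority classification the abstract describes, the sketch below assigns Gold/Silver/Bronze labels from triangular membership functions over a packet's remaining deadline and hop distance. The membership breakpoints, the inputs and the `packet_priority` helper are illustrative assumptions, not the paper's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def packet_priority(deadline_ms, hops_to_sink):
    """Classify a packet as Gold/Silver/Bronze from fuzzy memberships.

    Breakpoints and inputs are hypothetical, chosen only to show the shape
    of a fuzzy priority-determination step.
    """
    urgent   = tri(deadline_ms, -1, 0, 50)     # tight deadline -> urgent
    moderate = tri(deadline_ms, 25, 75, 150)
    relaxed  = tri(deadline_ms, 100, 200, 10**9)
    far      = min(1.0, hops_to_sink / 10.0)   # distant packets age in transit

    scores = {
        'Gold':   min(1.0, urgent + 0.3 * far),
        'Silver': moderate,
        'Bronze': relaxed,
    }
    return max(scores, key=scores.get)
```

A node would run such a classifier per packet and enqueue it into the matching priority queue.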

  • An improved prediction based strategy for target tracking in wireless sensor networks   Order a copy of this article
    by Hanen Ahmadi, Ridha Bouallegue, Federico Viani, Andrea Massa 
    Abstract: The indoor localization of a moving target in Wireless Sensor Networks using the Received Signal Strength Indicator (RSSI) is addressed in this paper. A novel location-tracking algorithm that combines an ensemble learning method with a Kalman filter is proposed. An ensemble of regression trees driven by received signal strength has previously been proposed to localize static sensor nodes; in this paper, that approach is employed to capture the complex relation between RSSI behaviour and target position. The estimated location is then fed to the Kalman filter as the observation, leading to a more accurate state estimate of the moving target. Experimental results show that the adopted solution achieves high accuracy compared with localization algorithms currently available in the literature.
    Keywords: target tracking; localization; WSN; machine learning; Kalman filter.
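The fusion step the abstract describes (RSSI-derived position fixes fed to a Kalman filter as observations) can be sketched with a standard constant-velocity filter. The state layout, noise parameters `q` and `r`, and the `kalman_track` helper are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def kalman_track(positions, dt=1.0, q=0.01, r=1.0):
    """Smooth noisy 2-D position fixes with a constant-velocity Kalman
    filter (state vector: x, y, vx, vy)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], float)       # motion model
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)        # we observe position only
    Q, R = q * np.eye(4), r * np.eye(2)
    x = np.array([positions[0][0], positions[0][1], 0.0, 0.0])
    P = np.eye(4)
    smoothed = []
    for z in positions:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the RSSI-derived fix as the observation
        y = np.asarray(z, float) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        smoothed.append(x[:2].copy())
    return smoothed
```

On a noisy straight-line track, the filtered positions have visibly lower error than the raw fixes.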

  • Energy-efficient Adaptive Distributed Data Collection method for Periodic Sensor Networks   Order a copy of this article
    by Ali Kadhum IDREES, Ali Al-Qurabat 
    Abstract: This article suggests an energy-efficient adaptive distributed data collection method (EADiDaC), which periodically collects sensor readings and prolongs the lifetime of a periodic sensor network (PSN). The lifetime of the EADiDaC method is divided into cycles, each composed of four stages: first, data collection; second, dimensionality reduction using the adaptive piecewise constant approximation (APCA) technique; third, frequency reduction using the symbolic aggregate approximation (SAX) approach; and fourth, sampling rate adaptation based on dynamic time warping (DTW) similarity. EADiDaC allows each sensor to remove redundant collected data and to adapt its sampling rate in accordance with the monitored environmental conditions. Simulation experiments on real sensor data using the OMNeT++ simulator demonstrate the effectiveness of the EADiDaC method in comparison with two other existing methods.
    Keywords: periodic sensor networks; PSNs; data collection; adaptive sampling rate; adaptive piecewise constant approximation; APCA; dynamic time warping; DTW similarity; symbolic aggregate approximation; SAX; network lifetime.
    DOI: 10.1504/IJITST.2017.10007731
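The SAX stage the abstract mentions reduces a numeric series to a short symbol string. A minimal sketch, assuming a series length divisible by the segment count and the standard equiprobable Gaussian breakpoints; this is textbook SAX, not the paper's exact pipeline.

```python
import numpy as np

# Breakpoints splitting N(0,1) into equiprobable bins, by alphabet size.
BREAKPOINTS = {3: [-0.43, 0.43], 4: [-0.67, 0.0, 0.67]}

def sax(series, n_segments, alphabet='abcd'):
    """Symbolic Aggregate approXimation: z-normalize, reduce with
    Piecewise Aggregate Approximation (PAA), then map each segment
    mean to a symbol via Gaussian breakpoints."""
    x = np.asarray(series, float)
    x = (x - x.mean()) / (x.std() + 1e-12)          # z-normalize
    paa = x.reshape(n_segments, -1).mean(axis=1)    # len must divide evenly
    cuts = BREAKPOINTS[len(alphabet)]
    return ''.join(alphabet[i] for i in np.searchsorted(cuts, paa))
```

A rising ramp of 16 samples reduced to 4 segments yields the string 'abcd', a falling ramp 'dcba'; identical strings across periods indicate redundant readings a node could suppress.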

Special Issue on: Wireless Networks and the Internet of Things

  • TRAM Based VM Handover with Dynamic Scheduling for Improved QoS of Cloud Environment   Order a copy of this article
    by Nadesh R.K, Aramudhan M 
    Abstract: The generic view of the cloud has changed with the advent of the cloudlet, which combines servers located in a local area network that can be accessed through wireless communication. Modern cloud computing has opened avenues for resource and service providers to deploy their resources irrespective of location; the remaining issue is the choice of the most relevant service provider for the cloud client. A number of scheduling algorithms have been discussed previously, and resources have been grouped on the basis of various constraints, yet these methods are unable to achieve the required quality-of-service parameters. To overcome the shortcomings of previous methods, an efficient TRAM (Throughput-Resource Availability-Makespan) based clustering, VM handover and scheduling scheme is discussed in this paper. The TRAM-based approach maintains a list of the requests processed at each moment by different cloudlets and groups the cloudlets according to the TRAM measure. The same measure is used to make handover decisions and to schedule requests. The recommended approach produces satisfactory results on the various quality-of-service parameters.
    Keywords: Cloud Computing; Cloudlet; VM Handover; Clustering; Scheduling; TRAM.

  • Improved Scrum Method through Staging Priority and Cyclomatic Complexity to enhance Software Process and Quality.   Order a copy of this article
    by Vijayanand Rajasekaran, Dinakaran Muruganandam 
    Abstract: Software development has been inevitable in the modern era. Previously, organizations followed intensive traditional software processes, but the focus has now turned largely towards agile methodologies, among which Scrum is the most widely followed. Scrum, however, comes with a number of technical and generic issues: for instance, assigning, prioritizing and integrating product backlog items prove difficult to deal with, and it poses several other generic issues ranging from environment problems due to idle team participants to developer-tester conflicts. This paper concentrates mainly on overcoming the technical issues mentioned above with the assistance of a carefully refined framework, and introduces RScrum, an extension of Scrum that greatly helps to overcome these glitches.
    Keywords: Agile; Scrum; staging priority; cyclomatic complexity; product backlog item; sprint.

  • Intelligent Intrusion Detection System Using Temporal Analysis and Type-2 Fuzzy Neural Classification   Order a copy of this article
    by Rama Prabha Krishnamoorthy Pakkirisamy, Jeyanthi N 
    Abstract: Cyber-attack detection is an important and challenging area of research in the field of information technology. Intruders introduce new mechanisms, applying polymorphic techniques in order to escape intrusion detection systems, which leads to data loss and increased security vulnerabilities. In the past, many soft computing techniques from the field of machine learning were used to enhance the efficiency of intrusion detection systems (IDSs) in computer networks. Among them, fuzzy logic played a major role in making effective decisions, and neural networks have also contributed to this area by training on datasets to form rules that can be used to develop an effective intrusion detection system. In this paper, we propose a new intrusion detection system that combines neural networks with temporal and type-2 fuzzy logic to perform effective classification of the dataset. In addition, a new feature selection algorithm is proposed that uses the information gain of attributes together with fuzzy decision making to select the optimal number of features from the dataset. The work has been tested on the NSL-KDD dataset, and the experiments conducted show that the proposed system increases intrusion detection accuracy and reduces the false positive rate compared with other existing systems.
    Keywords: Neural Networks; Type-2 Fuzzy Logic; Intrusion detection System; NSL-KDD dataset; Feature Selection; Classification.

  • Wireless Camera Network with enhanced SIFT Algorithm for Human Tracking Mechanism   Order a copy of this article
    by Ushadevi G, Priyan M K, Gokulnath C 
    Abstract: Wireless Camera Networks (WCNs) combine the sensing power of conventional camera networks with the elasticity, reconfigurability and simple deployment of Wireless Sensor Networks (WSNs), efficiently addressing the significant responsibilities of cluster-based human (object) tracking, such as integrating measurements, including or excluding nodes in the cluster, and cluster head rotation. The WCN effectively uses division-friendly representations and methods in which every node contributes to the estimation without requiring any prior information about the remaining nodes. These methods are integrated into two different schemes so that they can be deployed with the same mean time regardless of cluster size. Observation and practical evaluation show that the proposed schemes and methodology drastically reduce energy consumption and computational burden compared with the existing methodology.
    Keywords: cluster; human frames; sensors; SIFT.

  • BMAQR: Balanced Multi Attribute QoS Aware Replication in HDFS   Order a copy of this article
    by Kumar PJ, Ilango P 
    Abstract: The Hadoop Distributed File System (HDFS) replicates data to ensure availability in case of a failure caused by events such as a data node crash, disk failure, switch/rack failure or corruption of a data block. The evolution of big data leads to large volumes of data stored and managed in cloud clusters. The degree of replication is directly proportional to the availability of data, but comes with an increase in the replication and update costs of data blocks in the cloud. Applications executed on the data nodes demand various QoS guarantees while a block of their data is replicated, such as disk access latency, constant bandwidth, delay and jitter. Existing replication algorithms replicate data based on the replication factor and the specified QoS needs of the application. At any given time, the types of replication request from different applications vary widely, and there is a need to allocate replicas based on the request type and the replication factor so as to balance replication cost and data availability against the block space available across the cluster. We propose a multi-attribute QoS-aware replica allocation algorithm that considers the different types of replica request, the replication factor and the total available space to achieve balanced replica allocation. The proposed algorithm satisfies the different QoS needs of applications and reduces the number of QoS-violated replicas when the requests comprise different QoS types. We measure the performance of the proposed algorithm in allocating replicas, and the reduction in the QoS-violated replica count, against existing algorithms such as random replication. Simulation results show better performance than the existing algorithms, with a slight increase in computational time.
    Keywords: HDFS; Replication; Multi Attribute QoS aware replica allocation; QoS Violation.

  • Generating Various Kolam Patterns using New Kolam Picture Grammar   Order a copy of this article
    by Ramya Govindaraj, Anand Mm 
    Abstract: Kolam is an artistic creation and a ubiquitous art form predominant in South India, also seen in a few places in northern India and South East Asia. Kolam holds a rich tradition of cultural and medicinal significance. Kolams are generated using a kolam grammar. This paper presents a set of rules for manipulating kolam patterns under defined rules using an axiom, enclosed within a defined alphabet used for creating kolam patterns. Many kolams with n pullis (dots) can be generated with a finite number of rules.
    Keywords: formal theory; picture languages; kolam pattern; kolam grammar; kolam picture language.

  • Hilbert Fast-SAMP with Different Channel Estimation Schemes of BER Analysis in MIMO-OFDM system   Order a copy of this article
    by Kumutha Duraiswamy, Amutha Prabha N 
    Abstract: In an OFDM system, the large number of subcarriers and the high dynamic range of the signal produce a very high peak-to-average power ratio (PAPR), which degrades the signal and affects the overall Bit Error Rate (BER) performance. Channel estimation techniques are used to avoid excessive training overhead, which is an issue in coherent detection. In the existing Sparsity Adaptive Matching Pursuit (SAMP) algorithm, the backward pursuit iteration may be repeated many times if the support set expands; this iteration increases the BER, introduces delay and adds computational complexity. To avoid the backward pursuit iteration, Hilbert fast sparsity adaptive matching pursuit (HF-SAMP) is proposed, which reduces the computational complexity and the BER to a great extent. Zero padding is also appended, resulting in a further improvement of BER performance. The BER, SNR, MSE and PAPR performance of the channel estimation (CE) and pilot design (PD) techniques is analyzed, and the proposed technique performs better than the other existing conventional algorithms. MATLAB is used to perform the simulations.
    Keywords: CE; HF-SAMP; OFDM; BER; PAPR; Pilot Estimation; ZP and SOMP.

Special Issue on: Cloud Computing, Big Data and Data Science

  • A Scalable Fine-Grained Analytic Model for Container Cloud Data Centers   Order a copy of this article
    by Bingwei Liu, Yu Chen 
    Abstract: Cloud computing is today's mainstream computing paradigm because of its many attractive features. Although cloud service providers have deployed numerous large-scale cloud data centers around the world, research in performance modeling for cloud data centers is still in its infancy. A precise model of a cloud data center can help service providers improve service quality, capacity planning and load balance, and reduce operating costs. Most studies in the literature have focused on modeling hypervisor-based clouds, typically IaaS. With the growing popularity of containers among cloud service providers, there is a need to develop performance models specifically for these systems. A novel Cloud Analytic Model (CAM) for container-based cloud data centers is proposed. In the model, all schedulers in the cloud's logical hierarchy are modeled as unified Markov chains with different model inputs. We identify the process at a scheduler as a Quasi-Birth-Death (QBD) process and provide algorithmic solutions using matrix-geometric analytic methods for the infinite and finite cases. Physical machines (PMs) are modeled differently due to their underlying characteristics. CAM is able to capture the critical features of the cloud, and we utilize these interacting stochastic models to analyze system performance in terms of mean job delay and probability of job rejection. Finally, a container emulation framework, ConSim, was developed and tested against the analytic model: ConSim runs on actual container cloud hardware and measures desired performance metrics such as the number of rejected jobs and the delay of each job. Experimental results using real data were compared with theoretical calculations, and show promise in using the proposed analytic model to help with service planning in container clouds.
    Keywords: Cloud Computing; container; virtualization; performance modeling; quasi-birth-death process.

    by Kamali Gupta, Vijay Katiyar 
    Abstract: Cloud computing as a paradigm has led to its adoption in large-scale parallel processing and distributed computing. The servicing of consumers' computational needs by providers has resulted in a significant rise in demand, as the latest services can be accessed with different pricing models, value-added features and instance types. Resource selection is a tedious task and places momentous resource-management challenges before consumers and service providers. As a remedy to this vanguard issue, brokers stipulate resource provisioning options to ease the task of selecting the best resource to match submitted requests, by facilitating a standardized management interface across global boundaries. With this in mind, a Deadline-Aware Sufferage (DSufferage) algorithm is proposed and implemented at the platform level in this research work. The algorithm is an improvement on the existing sufferage heuristic: a deadline parameter is incorporated to assign precedence levels to the jobs submitted to the machines, in addition to minimum completion time. The novelty of the current study is that the heuristic is centered on both user and provider goals, in comparison with the existing batch-mode heuristics. The efficacy of the algorithm has been verified using the CloudSim tool, and it is concluded that it is proficient enough to allocate resources to users' tasks within the constraints of deadlines, resource utilization maximization and SLA violation avoidance.
    Keywords: Cloud Computing; Resource Management; Scheduling; Heuristic; Makespan.
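One plausible reading of a deadline-aware sufferage heuristic is sketched below: tasks with tighter deadlines are scheduled first, with the classic sufferage value (second-best minus best completion time) breaking ties. The ETC-matrix interface and the `dsufferage` helper are illustrative assumptions, not the paper's code.

```python
def dsufferage(etc, deadlines):
    """Deadline-aware sufferage sketch.

    etc[i][j]: expected time to compute task i on machine j.
    deadlines[i]: deadline of task i (earlier deadline = higher precedence).
    Returns (task -> machine mapping, final machine-ready times).
    """
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines            # when each machine becomes free
    unassigned = set(range(n_tasks))
    schedule = {}
    while unassigned:
        best = None
        for i in unassigned:
            # completion times on every machine, cheapest first
            ct = sorted((ready[j] + etc[i][j], j) for j in range(n_machines))
            sufferage = ct[1][0] - ct[0][0] if n_machines > 1 else 0.0
            # earlier deadline wins; larger sufferage breaks ties
            key = (deadlines[i], -sufferage)
            if best is None or key < best[0]:
                best = (key, i, ct[0][1], ct[0][0])
        _, i, j, finish = best
        schedule[i] = j
        ready[j] = finish
        unassigned.remove(i)
    return schedule, ready
```

With distinct deadlines the result is deterministic: the tightest-deadline task is placed on its fastest machine first.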

  • Financial Default Payment Predictions Using A Hybrid of Simulated Annealing Heuristics and Extreme Gradient Boosting Machines   Order a copy of this article
    by Bichen Zheng 
    Abstract: Online Peer-to-Peer (P2P) lending platforms face multiple challenges in today's e-commerce, one of the most outstanding of which is evaluating loan risk based on borrowers' financial status and histories. Traditionally, financial experts assess borrowers' risk of default payments manually, but this process is tedious and time-consuming, making it poorly suited to online P2P platforms. This paper proposes a hybrid of Simulated Annealing and Extreme Gradient Boosting Machine models to predict the likelihood of default payments based on users' financial histories. Experimental results demonstrate the predictive power of the proposed model, with high recall scores. The proposed model not only outperforms conventional algorithms, including Logistic Regression, Support Vector Machines, Random Forests and Artificial Neural Networks, but also provides an efficient method for optimizing the hyper-parameters of the machine learning algorithms. Through the utilization of the proposed data-driven models, the need for tedious and potentially inaccurate human labor can be significantly reduced, and service level agreements (SLAs) can be further improved by the time savings made possible by the introduction of advanced data mining approaches.
    Keywords: Big Data; Data Mining; Extreme Gradient Boosting Machines; Credit Risk; Credit Scoring; Simulated Annealing.
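The simulated-annealing half of the hybrid can be sketched as a generic maximizer over hyper-parameter settings, where `score` would wrap something like the cross-validated recall of a gradient-boosting model. The function names, cooling schedule and neighbour interface are illustrative scaffolding, not the paper's tuner.

```python
import math
import random

def simulated_annealing(score, neighbours, start, t0=1.0, cooling=0.9,
                        steps=100, seed=0):
    """Generic simulated-annealing search maximizing score(params).

    neighbours(params, rng) proposes a perturbed parameter setting.
    Worse moves are accepted with probability exp(delta / temperature),
    which decays geometrically so the search ends near-greedy.
    """
    rng = random.Random(seed)
    cur, cur_s = start, score(start)
    best, best_s = cur, cur_s
    t = t0
    for _ in range(steps):
        cand = neighbours(cur, rng)
        cand_s = score(cand)
        if cand_s >= cur_s or rng.random() < math.exp((cand_s - cur_s) / t):
            cur, cur_s = cand, cand_s
        if cur_s > best_s:
            best, best_s = cur, cur_s
        t *= cooling
    return best, best_s
```

On a toy one-parameter objective with an optimum at x = 3, the search converges close to the optimum; with a real model, `score` would instead fit and evaluate the booster.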

  • Agile Polymorphic Software-Defined Fog Computing Platform for Mobile Wireless Controllers and Sensors   Order a copy of this article
    by Haymanot Gebre-Amlak, Abdoh Jabbari, Yu Chen, Baek-Young Choi, Chin-Tser Huang, Sejun Song 
    Abstract: Softwarization approaches in networks, storage systems and smart devices aim to optimize costs and processes and to bring new infrastructure definitions and functional value. The recent integration of wireless and mobile cyber-physical systems with dramatically growing numbers of smart sensors enables new types of pervasive smart and mobile urban surveillance infrastructure, opening up new opportunities for boosting the accuracy, efficiency and productivity of uninterrupted target tracking and situational awareness. Wireless sensors provide the tools for communications and security applications, offering low-power, multi-functional and computational capabilities. In this paper, we present the design and prototype of an efficient and effective fog system using lightweight, agile software-defined control for mobile wireless nodes. Fog computing, or edge computing, a recently proposed extension and complement to cloud computing, enables computing at the network edge in a smart device without outsourcing jobs to a remote cloud. We investigate an effective softwarization approach in the fog environment for dynamic, big-data-driven, real-time urban surveillance tasks of uninterrupted target tracking, and address the key technical challenges of node mobility to improve system awareness. We built a preliminary proof-of-concept lightweight controller architecture on both Android- and Linux-based smart devices and tested various collaborative scenarios among the mobile nodes.
    Keywords: Software-Defined Network (SDN); Internet of Things (IoT); Fog Computing; Cloud Computing; Wireless Sensors; Network Softwarization.

  • Outlier Detection Techniques for Big Data Streams: Focus on Cyber Security   Order a copy of this article
    by Fatima-Zahra Benjelloun, Ayoub AIT LAHCEN, Samir Belfkih 
    Abstract: In recent years, detecting outliers in big data streams has become a main challenge in several domains (e.g., medical monitoring, government security, information security, natural disasters and online financial fraud). Unlike regular static data, streams raise many issues such as high multidimensionality, dynamic data distribution, unpredictable relationships, data sequences, uncertainty and transiency. Most of the proposed approaches can handle some of these issues, but not all, and they give limited consideration to scalability and performance. Real-world applications require high performance, resource optimization and real-time responsiveness when detecting outliers; this is useful for extracting knowledge, detecting incidents and predicting changes in patterns. In this paper, we review and compare recent studies on detecting outliers in streams, and investigate how researchers have improved the outcomes of different models and monitoring systems, especially in the context of cyber security.
    Keywords: Outlier Detection; Data Streams; Streaming; Big Data; High Dimension; Cyber Security.
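As a baseline for the streaming setting the survey covers, the sketch below flags points whose z-score against a sliding window exceeds a threshold. It is a minimal single-pass detector with assumed window and threshold values; the surveyed methods additionally handle concept drift and high-dimensional data.

```python
import math
from collections import deque

class StreamOutlierDetector:
    """Flag a point as an outlier when its z-score against a sliding
    window of recent inliers exceeds k standard deviations."""

    def __init__(self, window=50, k=3.0):
        self.buf = deque(maxlen=window)
        self.k = k

    def push(self, x):
        is_outlier = False
        if len(self.buf) >= 10:                 # warm-up before judging
            m = sum(self.buf) / len(self.buf)
            var = sum((v - m) ** 2 for v in self.buf) / len(self.buf)
            sd = math.sqrt(var)
            is_outlier = sd > 0 and abs(x - m) > self.k * sd
        if not is_outlier:                      # keep the model clean of outliers
            self.buf.append(x)
        return is_outlier
```

A steady signal around 10 passes silently, while a spike to 100 is flagged without contaminating the window statistics.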

  • Improving Cloud Computing Services Indexing based on BCloud-Tree with Users Preferences   Order a copy of this article
    by Ahmed Khalid Yassine SETTOUTI, Fedoua DIDI, Mohammed HADDAD 
    Abstract: Wireless Sensor Networks (WSNs) and cloud computing are different but complementary. On the one hand, wireless nodes are resource-limited and battery-constrained; on the other, cloud computing is effectively unlimited in terms of computing, storage, network and power resources. Integrating such different concepts naturally raises some difficulties, especially for WSN owners who want to pick the most suitable cloud computing provider. In addition, we assume that both the clients (WSN owners) and the services are heterogeneous, varied and dissimilar. In this paper, we propose a method for indexing public IaaS virtual machines in an AVL tree. We employ a Z-order function to arrange services in the structure and make the search more efficient. Experiments demonstrate the superior performance of the proposed approach in comparison with similar works in the literature.
    Keywords: Cloud Computing; Service Selection; User Preference; Quality Measure; Public IaaS; Wireless Sensor Networks; Service Ranking; Indexing; BCloud-tree.

  • Autonomic Resource Management Framework for Virtualized Environments   Order a copy of this article
    by Raman Bane, Annappa B. 
    Abstract: Virtualization enables multiple Virtual Machines (VMs), in effect multiple operating systems, to be co-located on the same physical machine with total isolation. Hence, using VMs to launch web services or applications is a common trend nowadays in enterprise Information Technology (IT). The data center provides the infrastructure to create, configure and manage VMs, and embraces autonomic computing, a scenario in which the data center environment manages itself based on perceived activity and demands for different resources; it is seen as a utility that clients pay for only as needed. The growing complexity of modern networked computer systems with virtualization technology necessitates analysis of the performance of these systems with respect to resource management. We have developed an intelligent resource manager that uses statistical learning methods to control resource allocation in a Xen virtualized environment, dynamically allocating resources to individual VMs. Our resource management architecture comprises fuzzy-logic-based local and global controllers. Experimental results show that with the proposed system the data center can efficiently allocate CPU resources to customer-created VMs. The scaling of CPU resources is done automatically in accordance with the dynamically changing workload, at a minimum granularity of 2 seconds, and improves resource utilization by 30% compared with the ideal method while maintaining throughput equivalent to the ideal workload allocation.
    Keywords: Autonomic Computing; Resource Management; Virtualization; Fuzzy Logic; Kalman filter.

Special Issue on: Sustainable Network Architectures

  • Efficient Storage Management Framework for Software Defined Cloud   Order a copy of this article
    by Sonika Shrivastava, R.K. Pateriya 
    Abstract: The exponential growth of data is a matter of concern for every organization, and storing huge mountains of data is only possible through adoption of the cloud. Nowadays the popularity of software-defined systems is increasing, and virtualized cloud data centres are moving towards the software-defined data centre. This change is possible only because of advances in software-defined networking, software-defined storage and related technologies. The main characteristics of software-defined systems are abstraction layers, or interfaces, that hide complexity and provide support for service management. Software-defined storage allows users to properly communicate their storage needs and enables automated mobility and management of data, which can reduce storage cost and enhance data reliability. This paper presents a framework for developing a data management interface for software-defined storage using the well-known redundancy techniques of replication and erasure coding. The work focuses on solving the two issues of reliability and cost of data storage in the cloud by continuous monitoring and scanning of the storage system. This new framework for software-defined storage decreases the total cost of ownership and provides an efficient technique for storage management in the cloud, propelling the development of the software-defined cloud.
    Keywords: data; cloud; erasure codes; replication; Reed-Solomon codes; software-defined network (SDN); software-defined storage (SDS).

  • A Conceptual Comparison of NSGA-II, OMPSO and AbYss Algorithms   Order a copy of this article
    by Rajani Kumari, Dinesh Kumar, Vijay Kumar 
    Abstract: The optimization of multi-objective problems is currently an important area of research and development. The importance gained by this type of problem has given rise to the development of multi-objective metaheuristics to obtain solutions for such problems. In this paper, an experimental comparison of NSGA-II (Non-dominated Sorting Genetic Algorithm II), AbYSS (Archive-Based hYbrid Scatter Search) and OMOPSO (Optimized Multi-Objective Particle Swarm Optimization) on the ZDT benchmark has been carried out to determine which multi-objective metaheuristic performs best on a given problem. The results obtained are compared and analyzed on the basis of three performance metrics, namely hypervolume, Generational Distance (GD) and Inverted Generational Distance (IGD), which evaluate the dispersion of the solutions on the Pareto front and their proximity to it.
    Keywords: Multiobjective optimization; Nondominated sorting genetic algorithm-II; Archive-based hybrid scatter search; Optimized multiobjective particle swarm optimization.
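Of the three metrics the abstract names, hypervolume has a particularly simple form in two objectives: sort the nondominated points by the first objective and sum the rectangles each dominates up to a reference point. The sketch below assumes minimization and a user-supplied reference point; it illustrates the metric, not the jMetal-style implementation the study presumably used.

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D minimization front w.r.t. a reference point.

    front: iterable of (f1, f2) points; ref: (r1, r2) worse than all points.
    Dominated points contribute nothing and are skipped.
    """
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):        # ascending f1, so f2 should descend
        if f2 < prev_f2:                # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

Larger hypervolume means a front that is both closer to the true Pareto front and better spread, which is why it serves as a single-number comparison between metaheuristics.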

  • MDI-SS: Matched Filter Detection with Inverse covariance matrix based Spectrum Sensing in Cognitive Radio   Order a copy of this article
    by Budati Anil Kumar, P. Trinatha Rao 
    Abstract: Spectrum sensing is a major issue in Cognitive Radio (CR) networks. A situation frequently arises in which noise is falsely interpreted as a primary user signal; this type of misinterpretation is measured by the probability of false alarm (Pfa). According to the available literature, Matched Filter Detection (MFD) and the Neyman-Pearson (NP) observer are among the methods used to characterize the Pfa. The presence of a user is estimated based on the parameters Pfa, probability of detection (PD), threshold and Receiver Operating Characteristics (ROC). The existing MFD method measures the ROC with different algorithms and suggests using the NP observer to improve the PD and minimize the Pfa, which leads to the identified problem of determining for which samples the Pfa is affected. The presence or absence of the user is identified by evaluating the threshold against the Pfa and PD, and the Generalized Log-Likelihood Ratio Test (GLRT) is used to measure the threshold. To estimate an accurate threshold value, an inverse covariance matrix approach is added to MFD and framed as Matched Filter Detection with Inverse covariance matrix based Spectrum Sensing (MDI-SS). From the threshold values, the PD and Pfa are evaluated for the NP observer and MDI-SS, and MDI-SS is shown to improve the PD compared with the existing MFD algorithm. The Pfa-affected samples are identified with reference to the NP observer: PD, Pfa and thresholds are measured at different SNR levels, and a comparative analysis between MDI-SS and the NP observer is presented. The PD, Pfa and ROC results are obtained through implementation in MATLAB.
    Keywords: Cognitive Radio Networks; Matched Filter Detection; Neyman Pearson Observer; GLRT.
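The baseline the abstract builds on, matched filter detection with a Neyman-Pearson threshold, can be sketched as follows: correlate the received samples with a known pilot and declare the primary user present when the statistic exceeds a threshold set so that P(false alarm) equals a target Pfa. The pilot, noise level and empirical-quantile thresholding here are illustrative assumptions (Python/NumPy rather than the paper's MATLAB), and the sketch omits the inverse-covariance refinement that distinguishes MDI-SS.

```python
import numpy as np

def matched_filter_threshold(s, sigma2, pfa, trials=200_000, seed=1):
    """Empirical NP threshold for the matched-filter statistic T = y . s
    under H0 (noise only), chosen so that P(T > gamma | H0) ~= pfa."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, np.sqrt(sigma2), size=(trials, len(s)))
    t0 = noise @ s                     # H0 samples of the test statistic
    return np.quantile(t0, 1.0 - pfa)

def matched_filter_detect(y, s, gamma):
    """Declare the primary user present when the correlation exceeds gamma."""
    return float(np.dot(y, s)) > gamma
```

A quick Monte-Carlo check confirms the false-alarm rate sits near the target while a modest signal is still detected most of the time.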

  • Prevention of a SYNflood attack using Extreme XOS Modular Operating System   Order a copy of this article
    Abstract: Network security has become a critical concern for protecting systems from attackers. Even in secured systems, a skilled attacker may use certain techniques to launch a sophisticated attack. The SYN flood attack is a commonly evolving form of Denial of Service (DoS) attack, in which an attacker sends a succession of SYN requests to the target system, rendering it unresponsive to legitimate requests. The attack thereby causes bandwidth congestion and reduces the processing capability of network systems and servers. In this paper, we propose a scheme for preventing SYN flood attacks based on statistical parameters of the incoming traffic, using a hard threshold method implemented with the ExtremeXOS software, which is embedded in the switch for monitoring network traffic statistics.
    Keywords: SYN flood attack; DoS attack; Hping3; switch; ExtremeXOS.
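The hard-threshold idea the abstract describes can be illustrated offline: count half-open connections (a SYN with no matching ACK) per source within a time window and raise an alarm when a source exceeds the threshold. The event format, window and threshold below are illustrative assumptions, not the ExtremeXOS configuration itself.

```python
from collections import defaultdict

def detect_syn_flood(events, window=1.0, threshold=100):
    """Hard-threshold SYN-flood check over a time-ordered event stream.

    events: iterable of (timestamp, source, flag) with flag 'SYN' or 'ACK'.
    A source is flagged when its count of half-open connections within
    one window exceeds the threshold.
    """
    half_open = defaultdict(int)
    alarms = set()
    win_start = None
    for t, src, flag in events:
        if win_start is None or t - win_start >= window:
            win_start, half_open = t, defaultdict(int)   # reset per window
        if flag == 'SYN':
            half_open[src] += 1
        elif flag == 'ACK' and half_open[src] > 0:
            half_open[src] -= 1                          # handshake completed
        if half_open[src] > threshold:
            alarms.add(src)
    return alarms
```

A host completing its handshakes stays near zero half-open connections, while a flooding source crosses the threshold and is flagged.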

  • Research on Na   Order a copy of this article
    by Muralishankar R., A.M.J.Md.Zubair Rahman 
    Abstract: With increasing research in cluster computing, data processing has become simpler to manage and access. The available methods, mapping and parallel processing, fail on very large data. Complexity increases as data processing capability increases, and maintaining the production of data plays a vital role in data processing, related to the infrastructure of the environment; this needs to be monitored in order to develop the technologies along with control management. Hadoop is the proposed data processing platform in the cluster computing environment, giving efficient, highly accurate data processing with no error. Cluster computing has a tradition of processing data using random field models, whose computation time is currently much greater. This proposed model utilizes the Na
    Keywords: Cluster Computing; Bayesian; Markov model; Hadoop.

  • Incorporating privacy and security in military application based on opportunistic sensor network   Order a copy of this article
    by Mohammed Salman Arafath, Khaleel Ur Rahman Khan, K.V.N. Sunitha 
    Abstract: Wireless ad-hoc and sensor networks (WASNs), or opportunistic sensor networks (Oppnets), have been improving the availability of secure communication between military personnel in military applications, an issue that existing work has not addressed effectively. This paper proposes secure route selection and three different encryption algorithms for Commando-Commando (CC), Soldier-Soldier (SS) and Commando-Soldier (CS) communication. Routing is based on the path-protected hop-by-hop routing protocol (PPHH), by which a secure path is selected with authentication by a certificate authority (CA). The cryptography algorithms used are code-based cryptography, hyperelliptic curve cryptography and design-based cryptography. The strategy proposed in this paper shows improvements in security, energy consumption, throughput, packet overhead and network reliability.
    Keywords: routing; military application; secure communication; cryptography techniques; security in Oppnets; confidentiality in Oppnets; sensor network security; confidentiality in sensor networks.

Special Issue on: ICCE 2017 Recent Advances on Consumer Electronics

  • An intelligent emergency rescue assistance system for mountaineers   Order a copy of this article
    by Shih-Hsiung Lee, Cheng-Yao Hsu, Chu-Sing Yang 
    Abstract: Mountaineering is one of the main recreational activities of modern people. Mountain accidents include mountain sickness, hypothermia, getting lost, accidental falls and so on. This paper proposes an intelligent emergency rescue assistance system for mountaineering, including a wearable device and a management platform. Physiological signals are monitored for mountain sickness and hypothermia, alerting the user to take precautionary measures in advance. In the lost and accidental fall scenarios, the search and rescue team is provided with timely information, and the wearable device switches to an emergency beacon signal automatically, which helps to increase the efficiency of search and rescue. In addition, as the mobile device is integrated with the wearable device via Bluetooth, it can send the geographic information and the information on the wearable device to the management system by means of its communication capability. When an accident happens, the mobile device and wearable device send the beacon signal alternately, so the available battery power is used effectively and the search and rescue window is prolonged. This system architecture effectively prevents mountain accidents, indicates the user's location and informs the related units to provide emergency rescue as soon as possible.
    Keywords: Hiking; Search and Rescue; Wearable Device; Internet of Things; Beacon Signal.

  • Blind Spot Monitoring at Night-time using Rear-View Camera   Order a copy of this article
    by Seon-Geol Kim, Kang Yi, Kyeong-Hoon Jung 
    Abstract: Blind spot monitoring (BSM) is one of the key functions of an advanced driver assistance system (ADAS). In this paper, we propose a vision-based blind spot detection and tracking algorithm applicable at night-time. Because the appearance-based approach usable at daytime is no longer appropriate, the feature of highly bright blobs due to the headlights of vehicles behind is employed for detecting vehicles in the blind spot area. We calculate the motion vectors of the detected blobs to find an approaching vehicle, and pair the headlights to estimate its position by use of a projection map. We can successfully generate an alarm signal by detecting and tracking the overtaking vehicle in the blind spot area at night-time.
    Keywords: ADAS; BSM; vehicle detection; projection map.

  • An Auto-Configuring Mesh Protocol with Proactive Source Routing for Bluetooth Low Energy   Order a copy of this article
    by Julio León, Abel Dueñas, Cibele Makluf, Frank Cabello, Guillermo Kemper, Yuzo Iano 
    Abstract: The Internet of Things (IoT) is spreading rapidly towards creating smart environments. Wireless sensor networks (WSN) are among the most popular applications discussed in the IoT literature, with most of them considering some form of wireless mesh communication. One of the most available and popular wireless technologies for short-range operation (yet not designed for mesh) is Bluetooth. The literature reports some studies on mesh networks over BLE, based on Bluetooth 4.1 (which supports master/slave multirole); those approaches require more powerful hardware than a simple wireless sensor peripheral. Nonetheless, none addresses dynamic address allocation and topology mapping for BLE. We propose a new auto-configuring dynamic address allocation scheme for a BLE ad-hoc network, and a network map discovery mechanism that does not require role changing, compatible with BLE 4.0 or later versions.
    Keywords: Bluetooth Low Energy; Mesh; WSN; Proactive Source Routing; Auto-configuring.

  • Light Field Compression on Sliced Lenslet Array   Order a copy of this article
    by Cristian Perra, Daniele Giusto 
    Abstract: Recent advances in light field technologies are fostering the research and development of novel imaging applications. Such applications process the light field information to create new visual effects such as refocusing, perspective change and colour adjustment. Light field imaging is very data intensive compared with usual digital photographic imaging, and novel compression algorithms are needed to address the problem of light field storage and transmission. Raw digital light field images exhibit low spatial correlation compared to regular images, and hence the rate-distortion performance of current state-of-the-art image encoders can be surpassed by devising novel image compression architectures. In this paper, an architecture for lossy compression of unfocused light field images is proposed. Raw light fields are preprocessed by demosaicing, devignetting and slicing of the raw lenslet array image. The slices are then compressed with the JPEG 2000 image coding standard. The performance of the proposed method is compared against direct application of JPEG 2000 compression on the 4D light field. The experimental analysis has been conducted under a set of different compression ratios, and the obtained results show that the proposed method outperforms direct application of the reference architecture.
    Keywords: light field compression; plenoptic compression; JPEG 2000; objective evaluation.

  • Variable Switching Frequency Control for Active Cell Balancing Systems   Order a copy of this article
    Abstract: A new control method is presented for improving the balancing speed of an active cell balancing system for a Li-ion battery pack. The proposed method calculates the switching frequency based on the sensed charging/discharging current of the battery pack. It then modulates the balancing power of the active cell balancing circuit within the charging and discharging current limits. The method can achieve high balancing performance without significant computational work or additional power electronic components. A multi-cell balancing algorithm is also introduced for using the proposed method in an actual battery management system, and was verified by PSIM simulations with an irregular profile of the battery load current. The proposed method shows a faster balancing speed than a conventional cell balancing method with a fixed switching frequency.
    Keywords: Li-ion battery; Buck-boost converter; Cell balancing; Variable switching frequency.
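As a rough sketch of the frequency-modulation idea above (the function name, frequency range and linear mapping are all illustrative assumptions, not taken from the paper): with a fixed on-time, the energy moved per switching cycle is roughly constant, so average balancing power scales with switching frequency, and the frequency can be raised in proportion to the remaining current headroom below the pack's charge/discharge limit.

```python
def switching_frequency(i_pack_a, i_limit_a, f_min_hz=20e3, f_max_hz=200e3):
    """Map the sensed pack current to a balancing switching frequency.

    Illustrative assumption: energy per cycle is fixed by the on-time,
    so average balancing power scales with frequency; more current
    headroom below the charge/discharge limit allows a higher frequency.
    """
    headroom = max(0.0, min(1.0, 1.0 - abs(i_pack_a) / i_limit_a))
    return f_min_hz + headroom * (f_max_hz - f_min_hz)
```

With zero pack current the full frequency range (and thus full balancing power) is available; at the current limit the balancer falls back to the minimum frequency.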

Special Issue on: Security, Privacy and Trust in Computing and Secured Transactions

  • A Novel Dyadic Multiresolution Wavelet Image Steganography using N-ary   Order a copy of this article
    by Chandrasekaran Vanmathi, S. Prabu 
    Abstract: Steganography plays an important role in information sharing. In this paper, a wavelet-based image steganography using the Haar wavelet and an N-ary notation is proposed. The high- and middle-frequency coefficients are selected based on the Shannon entropy value to identify the best sub-band for data hiding. Peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values are computed from the stego image. The empirical results show that the proposed method provides good imperceptibility and data capacity, with an average PSNR of 53.60 dB for 359,112 embedded bits. To demonstrate this, several experiments were conducted and the results compared with existing works.
    Keywords: Steganography; Haar; IWT; N-ary.
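The Haar decomposition underlying the scheme can be illustrated in a few lines. This is a plain one-level Haar transform and its inverse as a simplification; the paper's integer wavelet transform, entropy-driven sub-band selection and N-ary embedding are not reproduced here.

```python
def haar_1d(signal):
    """One level of the Haar wavelet transform: the low-pass sub-band holds
    pairwise averages, the high-pass sub-band pairwise half-differences."""
    assert len(signal) % 2 == 0
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

def inverse_haar_1d(low, high):
    """Perfect reconstruction of the original samples from the two sub-bands."""
    out = []
    for a, d in zip(low, high):
        out.extend([a + d, a - d])
    return out
```

In a scheme like the one described, the payload would be hidden in selected high/middle-frequency coefficients before applying the inverse transform to produce the stego image.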

  • Event Detection in Sports Video based on Audio-Visual Features and Support Vector Machine. Case Study: Football   Order a copy of this article
    Abstract: In this paper we propose a novel audio-visual feature-based framework for event detection in field sports broadcast video. The system is evaluated via a case study involving MPEG-encoded football video. Specifically, the features gathered by various feature detectors are combined by means of a support vector machine, which infers the occurrence of an event based on a model generated during a training phase utilizing a corpus of 2.5 hours of content. The system is evaluated using 2.5 hours of separate test content.
    Keywords: Event Detection; Field sports video; MPEG; Support vector machine.
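At inference time, the fused audio-visual feature vector is scored against a trained hyperplane. The sketch below shows only the linear decision rule; the weights and bias are placeholders, not a model trained on the football corpus.

```python
def svm_score(features, weights, bias):
    """Linear SVM decision value for a fused feature vector:
    positive -> event inferred, non-positive -> no event."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def is_event(features, weights, bias):
    """Threshold the decision value to raise an event flag."""
    return svm_score(features, weights, bias) > 0
```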

  • Risk - based availability modelling and reputation management on fault tolerant cloud computing systems   Order a copy of this article
    by Deepa M, Anand Mahendran 
    Abstract: This paper presents a risk-based methodology in cloud computing to estimate the ideal quality and maintenance that maximise the availability of a cloud administration infrastructure. The procedure comprises two stages: 1. availability modelling of the cloud frameworks; 2. risk-based assessment and maintenance estimation. The technique is simple to apply and needs only readily accessible information in a cloud environment. The approach can be further improved by comparing current inspections with the consequences of previous reviews. The proposed procedure is applied to a health care scheme using a Markov chain process. The proposed system uses risk-based inspection and the Markov chain process to predict, mitigate and eliminate risk, efficiently identify the repairs and failures of each virtual machine, and reduce the energy consumption rate and cost accordingly. It is emerging as a dynamic innovation to modernise and rebuild health care service organisations to give the best administration to customers. Finally, the proposed model is verified with the Nagios and OPTIMIS toolkits. The results obtained bound the optimum solutions and demonstrate the success of efficient risk-based availability modelling with minimal cost and energy consumption.
    Keywords: Risk-based approach; Risk-based Inspection; Mitigation; Markov Process; Revenue; Energy consumption.
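The simplest instance of the Markov-chain availability modelling described above is a two-state (up/down) chain per virtual machine, whose steady-state availability follows directly from the failure and repair rates; the rates used below are illustrative.

```python
def steady_state_availability(failure_rate, repair_rate):
    """Steady-state availability of a two-state continuous-time Markov
    chain (up/down): A = mu / (lambda + mu), where lambda is the failure
    rate and mu the repair rate."""
    return repair_rate / (failure_rate + repair_rate)

def expected_downtime_hours_per_year(failure_rate, repair_rate):
    """Unavailability (1 - A) scaled to hours in a year."""
    return (1.0 - steady_state_availability(failure_rate, repair_rate)) * 8760.0
```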

  • Cost Based Constrained Task Scheduling in Cloud Environment   Order a copy of this article
    Abstract: In recent years, the most important requirement in the cloud computing environment has been task scheduling, which plays the key role in the efficient sharing of cloud resources among multiple users. The cloud provider is assumed to own a large data centre with significant computational resources. In the cloud environment, all tasks are allocated and executed using the available services in order to achieve higher performance, least total computation time, shortest response time, utilization of resources and so on. A serious challenge is that existing models allocate tasks to resources without a profit maximization scheme. The main motivation behind this paper is to design and develop a cloud manager (CM) to efficiently manage cloud resources and complete the jobs on the resources allocated for a given task in the cloud environment. It is implemented using a prior resource optimization (PRO) based allocation algorithm that takes into account execution time, transmission time and cost. A desirable feature of this work is that resources are allocated as per the cloud consumer's request. Its potency is demonstrated in the cloud environment by finding optimal and suboptimal resource allocation schemes that maximize the profit of the cloud provider and improve the quality of the scheduling solution.
    Keywords: Cloud Computing; Cloud Manager; Resources Scheduling.
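A greedy stand-in for the PRO idea, choosing the resource that minimises price-weighted execution-plus-transmission time, can be sketched as follows. The resource fields (mips, bw_mbps, price) and the cost formula are assumptions for illustration, not the paper's algorithm.

```python
def pick_resource(task_len_mi, data_mb, resources):
    """Return the name of the resource with the lowest price-weighted
    completion time (execution time + transmission time)."""
    def cost(r):
        exec_t = task_len_mi / r["mips"]   # execution time
        tx_t = data_mb / r["bw_mbps"]      # transmission time
        return (exec_t + tx_t) * r["price"]
    return min(resources, key=lambda name: cost(resources[name]))
```

Note how price enters the objective: a cheaper, slower VM can beat a faster, expensive one once cost is weighed against completion time, which is exactly the provider-profit/consumer-QoS trade-off the abstract describes.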

  • An Efficient Spectrum Handoff Decision Making Scheme for Cognitive Radio Networks   Order a copy of this article
    by Preetha K S, Kalaivani S 
    Abstract: There has been a gigantic spike in the usage and development of wireless devices since wireless technology came into existence. This has contributed to a very serious problem of spectrum unavailability, or spectrum scarcity. The solution to this problem comes in the form of cognitive radio networks, where secondary (unlicensed) users make use of the spectrum in an opportunistic manner. A secondary user uses the spectrum in such a manner that the primary (licensed) user does not face interference above a threshold level of tolerance. Whenever a primary user comes back to reclaim its licensed channel, the secondary user occupying it needs to perform a spectrum handoff to another channel that is free of primary users. Our primary focus is on spectrum handoff decision making. The secondary user selects a staying or changing policy based on the average extended delivery time. The spectrum handoff decision, i.e. the choice of optimal channel for handoff, is made only if the changing policy is adopted, and is based on multiple attribute decision making (MADM). This spectrum handoff decision-making scheme is later extended using artificial neural networks (ANN) and a probabilistic Markov model.
    Keywords: secondary user; spectrum handoff; Cognitive Radio Networks; Multiple Attribute Decision Making (MADM); Artificial Neural Networks (ANN); Markov Model.
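The channel-selection step under the changing policy can be illustrated with simple additive weighting (SAW), one common MADM method. The attribute names and weights below are illustrative, and the benefit attributes are assumed to be pre-normalised to [0, 1].

```python
def best_handoff_channel(channels, weights):
    """Rank candidate channels by a weighted sum of normalised benefit
    attributes and return the best spectrum handoff target."""
    def saw_score(attrs):
        return sum(weights[k] * attrs[k] for k in weights)
    return max(channels, key=lambda ch: saw_score(channels[ch]))
```

Here a channel with a high probability of staying free of primary users can outrank a wider but more contested channel, depending on the weights the decision maker assigns.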

  • Dynamic High Bandwidth (DHB) Nodes for Routing in MANETs   Order a copy of this article
    by Jayalakshmi Periyasamy, Saravanan R 
    Abstract: High-bandwidth routing is the most desired capability in networks with portable nodes. The main objective of this work is to obtain a high-bandwidth route that increases the delivery rate in MANETs. Dynamic high bandwidth (DHB) nodes are picked for data transmission. A measure derived from the packets currently received by a node, the packet delay and the bandwidth is used to calculate the total transmission efficiency of that node. The routes identified by this dynamic estimation thereby improve routing capability in MANETs. The performance of the protocol is analysed using the network simulator (NS-2).
    Keywords: Routing; bandwidth; mobile ad hoc networks; performance analysis.
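The per-node measure combines the three quantities named in the abstract. The exact combination and the weights below are assumptions for illustration; delay enters as a reciprocal so that lower delay scores higher.

```python
def transmission_efficiency(pkts_received, delay_ms, bw_mbps, w=(0.3, 0.3, 0.4)):
    """Illustrative total transmission efficiency of a node: weighted sum
    of packets received, inverse packet delay and bandwidth."""
    return w[0] * pkts_received + w[1] * (1.0 / delay_ms) + w[2] * bw_mbps

def pick_next_hop(neighbors):
    """neighbors: {node_id: (pkts_received, delay_ms, bw_mbps)}.
    Choose the neighbour with the highest efficiency score as next hop."""
    return max(neighbors, key=lambda n: transmission_efficiency(*neighbors[n]))
```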

Special Issue on: IoT Services for Trustworthy Secured Crowd Sourcing Applications

  • An Integrated Approach For Network Traffic Analysis Using Unsupervised Clustering And Supervised Classification   Order a copy of this article
    by Chokkanathan Kothandapani 
    Abstract: Traffic classification and analysis is a significant task for controlling network traffic in a heterogeneous environment. An unsupervised learning system alone fails to extend to a supervised classification model for network analysis. Several data mining techniques have identified network traffic patterns and classified network traffic accurately using unsupervised learning approaches. However, the continuous evaluation of network traffic over multi-dimensional data is a difficult task for real-time traffic. To overcome this problem in traffic analysis, an integrated K-means unsupervised clustering and supervised C4.5 classification (KUC-SC) technique is introduced. The integrated technique evaluates network traffic conditions to classify patterns of real-time and non-real-time traffic, and performs two processing steps: clustering and classification. First, the K-means unsupervised learning algorithm is applied to form k clusters from the input data points according to the nearest mean; this clustering approach groups the given data. After that, C4.5 classifies the data as real-time or non-real-time traffic through the construction of a decision tree. At every node of the tree, the C4.5 algorithm chooses the attribute that most efficiently divides the set of samples into subsets with similar characteristics. This in turn improves the classification accuracy of network traffic data analysis. Experimental results show that the proposed KUC-SC technique obtains better performance in terms of classification accuracy, classification time, true positive rate and communication overhead compared with state-of-the-art works.
    Keywords: K-means Unsupervised Clustering; Supervised C4.5 Classification; real time and non-real time traffic analysis.
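A toy version of the two-stage pipeline: plain 1-D k-means groups flows by packet rate, and a single threshold test stands in for the C4.5 decision tree that labels the traffic. The data, k = 2 and the decision stump are illustrative simplifications of the KUC-SC technique.

```python
import random

def kmeans_1d(points, k=2, iters=20, seed=0):
    """Stage 1: plain 1-D k-means; returns the sorted cluster centres."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centre
            clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        # recompute centres; keep the old centre if a cluster empties
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

def classify_flow(rate, centers):
    """Stage 2 (decision-stump stand-in for C4.5): flows nearer the
    high-rate centre are labelled real-time."""
    near_high = abs(rate - centers[-1]) < abs(rate - centers[0])
    return "real-time" if near_high else "non-real-time"
```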

  • Response Time Based Resource Allocation According to Service Level Agreements in Cloud Computing   Order a copy of this article
    by G. Hemanth Kumar Yadav, K. Madhavi 
    Abstract: Cloud computing is a technology that offers various services, as and when required by the user, through various cloud providers. The scalable nature of the cloud has enabled it to reach various domains and take strong root in every organisation. Resource provisioning has become a challenging task for many cloud providers. This work proposes an efficient framework for handling storage, application and computation services, offering service-level agreement (SLA) backed performance and uptime promises in cloud computing. Further, it tries to benefit both the users and the cloud providers, by enhancing the features for the customers and by gaining profit for the providers. The proposed SLA-based resource provisioning system is found to perform better than other existing resource provisioning systems in terms of response time and other QoS parameters.
    Keywords: Cloud computing; Resource Allocation; Service level agreement.

Special Issue on: Security for Smart Grid and Internet of Things Applications

  • Managing Incident Response in Industrial Internet of Things   Order a copy of this article
    by Allan Cook, Leandros Maglaras, Richard Smith, Helge Janicke 
    Abstract: Industrial control systems are an essential element of critical national infrastructure, often managing processes and utilities that are essential to a nation's well-being and prosperity. These systems are increasingly becoming the target of cyber attacks, and as a result are required to adopt a stronger cyber defense posture. In the coming years, the Internet of Things revolution will dramatically alter manufacturing, energy, agriculture, transportation and other industrial sectors of the economy. The Industrial Internet of Things (IIoT) will bring new opportunities to business and society, along with new threats and security risks. However, much of the technology within a control system is based on proprietary devices optimized for performance at the expense of security, and does not necessarily integrate well into an overall defense-in-depth architecture. Similarly, control system technology does not readily support the processes and procedures found in IT incident response plans. In this paper we explore the characteristics of an industrial control system and consider them within an established incident response framework by Prosise et al. We conclude that existing incident response processes are applicable to industrial control systems, but their nature must be modeled, especially in the pre-incident planning phase, in order to accommodate the idiosyncrasies of such technologies. We recommend that these models be developed and tested within synthetic environments to quantify cyber attack impacts and drive architectural improvements and incident response investment.
    Keywords: Incident Response; Industrial Control Systems; Industrial Internet of Things.