Forthcoming articles


International Journal of Information and Communication Technology


These articles have been peer-reviewed and accepted for publication in IJICT, but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.


Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.


Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.


Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.


Register for our alerting service, which notifies you by email when new issues of IJICT are published online.


We also offer RSS feeds which provide timely updates of tables of contents, newly published articles and calls for papers.


International Journal of Information and Communication Technology (58 papers in press)


Regular Issues


  • Improving Multidimensional Point Query Search using Multiway Peer-to-Peer Tree Network   Order a copy of this article
    by Shivangi Surati, Devesh Jinwala, Sanjay Garg 
    Abstract: Nowadays, Peer-to-Peer (P2P) networks are widely accepted in multidimensional applications like social networking, multiplayer games, P2P e-learning, P2P mobile ad-hoc networks, etc. Various P2P overlay networks combining Multidimensional Indexing (MI) methods are preferable for efficient multidimensional point or range search in a distributed environment. However, point query search in existing P2P networks has limitations, viz. (i) it either does not support MI or uses replication to support MI, or (ii) the point query search cost is limited to O(log2 N). Hence, traditional MI techniques based on the multiway tree structure (having larger fanout) can be employed to enhance multidimensional point query search capabilities. Based on our observations, a hybrid model combining an m-ary (m = fanout of the tree, > 2) P2P tree network and MI based on space containment relationships is preferred to reduce the point query search performance bound to O(logm N) using a single overlay network. The present paper shows how this model improves the search performance of point queries to O(logm N) steps, independent of the dimensionality of the objects.
    Keywords: Peer-to-Peer overlay networks; Distributed computing; Multidimensional Indexing; Point query search; Multiway trees.
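    The O(logm N) bound in the abstract above follows directly from the fanout: each hop in an m-ary tree discards all but one of m subtrees. A minimal single-machine sketch (a balanced m-ary search tree over sorted integer keys; the node layout and function names are illustrative, not the authors' P2P overlay):

```python
import bisect

def build(keys, m):
    """Balanced m-ary search tree over sorted keys (a B-tree-like layout)."""
    if len(keys) < m:
        return {"seps": keys, "children": None}          # leaf node
    n = len(keys)
    bounds = [round(i * n / m) for i in range(m + 1)]    # m near-equal runs
    runs = [keys[bounds[i]:bounds[i + 1]] for i in range(m)]
    return {"seps": [r[0] for r in runs[1:]],            # m - 1 separators
            "children": [build(r, m) for r in runs]}

def search(node, key, hops=0):
    """Each hop picks one of m children, so depth is about log_m(N)."""
    if node["children"] is None:
        return key in node["seps"], hops
    i = bisect.bisect_right(node["seps"], key)           # O(log m) per hop
    return search(node["children"][i], key, hops + 1)
```

    With N = 1000 keys and fanout m = 8, a lookup takes about log8 1000, i.e. roughly 3 hops, versus about 10 for a binary (m = 2) structure, which is the improvement the abstract attributes to the larger fanout.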

  • Low-Complexity LDPC-Convolutional Codes based on Cyclically Shifted Identity Matrices   Order a copy of this article
    by Fotios Gioulekas, Constantinos Petrou, Athanasios Vgenis, Michael Birbas 
    Abstract: In this study, a construction methodology for ensembles of Low-Density Parity-Check Convolutional Codes (LDPC-CCs) based on cyclically shifted identity matrices is proposed. The proposed method directly generates the syndrome former matrices according to the specified code parameters and constraints, i.e. code rate, degree distribution, constraint length, period and memory, in contrast to the majority of the available approaches, which produce relevant error-correcting codes based on either block codes, protographs or spatially-coupled codes. Simulation results show that the constructed ensembles demonstrate an advanced error-correcting capability of up to 0.2 dB in terms of frame-error and bit-error rates at the convergence region, when compared with the performance of error-correcting schemes adopted by various communication standards, with equivalent hardware complexity even at short codeword lengths. Specifically, the constructed LDPC-CCs have been assessed against the corresponding error-correcting codes used in WiMAX and in other standards for wireless and wireline telecommunications.
    Keywords: FEC; LDPC-Convolutional Codes; Complexity; error-correction; WiMAX; cyclically shifted identity matrices; LDPC-Block Codes; Schedulable memory; syndrome-former.
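    The construction primitive named in the title can be sketched concretely: a quasi-cyclic parity-check matrix is assembled by replacing each entry of a small base matrix of shift values with a z x z cyclically shifted identity matrix (a circulant). The `-1` zero-block convention and the function names below are a common QC-LDPC idiom, assumed here for illustration rather than taken from the paper:

```python
def circulant(z, shift):
    """z x z identity matrix cyclically shifted right by `shift` columns;
    shift = -1 denotes the all-zero block (a common QC-LDPC convention)."""
    if shift < 0:
        return [[0] * z for _ in range(z)]
    return [[1 if c == (r + shift) % z else 0 for c in range(z)]
            for r in range(z)]

def expand(shift_matrix, z):
    """Expand a base matrix of shift values into the full binary
    parity-check matrix H by replacing each entry with a z x z block."""
    H = []
    for row in shift_matrix:
        blocks = [circulant(z, s) for s in row]
        for r in range(z):
            # row r of every block, concatenated left to right
            H.append([b[r][c] for b in blocks for c in range(z)])
    return H
```

    Each non-negative shift contributes exactly one 1 per row and column of its block, so the expanded matrix stays sparse with a degree distribution read directly off the base matrix.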

  • A Big Data and Cloud Computing Specification, Standards and Architecture: Agricultural and Food Informatics   Order a copy of this article
    by N.P. Mahalik, Na Li 
    Abstract: Big data has gone from an emerging to a widely used technology in industrial, commercial, research, and database applications. It is used for processing and analysing massive sets of data to derive useful patterns, inferences, and relations. The real-time data storage and management architecture plays an important role. This paper introduces big data, covering the background and definitions, characteristics, related technologies, and challenges associated with implementing big data-based application technologies. This paper also introduces cloud computing, a related yet independent emerging technology, covering the modern technologies and standards, definitions, service and deployment models, advantages and challenges, and development prospects. The paper considers computing specifications, standardised procedures, and system architecture with regard to big data systems and cloud computing.
    Keywords: Big data; KDD; data mining; cloud computing; Virtualization; Architecture.

  • Wi-Fi Received Signal Strength Based Hyperbolic Location Estimation for Indoor Positioning Systems   Order a copy of this article
    by Anvar Narzullaev, MOHD Hasan Selamat, Khaironi Yatim Sharif, Zahriddin Muminov 
    Abstract: Nowadays, Wi-Fi fingerprinting-based positioning systems provide enterprises with the ability to track their various resources more efficiently and effectively. The main idea behind fingerprinting is to build a signal strength database of the target area prior to location estimation. This process is called calibration, and the positioning accuracy highly depends on calibration intensity. Unfortunately, the calibration procedure requires a huge amount of time and effort, and makes large-scale deployments of Wi-Fi-based indoor positioning systems non-trivial. In this research we present a novel location estimation algorithm for Wi-Fi-based indoor positioning systems. The proposed algorithm combines signal sampling and hyperbolic location estimation techniques to estimate the location of mobile users. The algorithm achieves cost-efficiency by reducing the number of fingerprint measurements while providing reliable location accuracy. Moreover, it does not require any hardware upgrades to the existing network infrastructure. Experimental results show that the proposed algorithm with an easy-to-build signal strength database performs more accurately than conventional signal strength-based methods.
    Keywords: indoor positioning; hyperbolic location estimation; Wi-Fi fingerprinting; TDOA; trilateration; received signal strength.
    DOI: 10.1504/IJICT.2019.10013126
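    Hyperbolic location estimation places the user at the intersection of hyperbolas, each defined by a measured range difference between an anchor access point and another AP. A rough brute-force illustration (grid search over a 10 m x 10 m area with hypothetical AP coordinates; the authors' actual algorithm combines this with signal sampling and is not reproduced here):

```python
import math

def hyperbolic_locate(aps, rdiffs, step=0.25):
    """Grid-search the point whose range differences to the anchor aps[0]
    best match the measurements (one hyperbola per non-anchor AP).
    aps: [(x, y), ...]; rdiffs[i] = d(p, aps[i+1]) - d(p, aps[0])."""
    def err(x, y):
        d0 = math.dist((x, y), aps[0])
        return sum((math.dist((x, y), ap) - d0 - rd) ** 2
                   for ap, rd in zip(aps[1:], rdiffs))
    # candidate grid covering 0..10 m in both axes
    pts = [(i * step, j * step) for i in range(41) for j in range(41)]
    return min(pts, key=lambda p: err(*p))
```

    A practical system would replace the grid search with a closed-form or least-squares solver, but the residual being minimised is the same.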
  • Target coverage algorithm with energy constraint for wireless sensor networks   Order a copy of this article
    by Liandong Lin, Chengjun Qiu 
    Abstract: As wireless sensor networks are made up of low-cost and low-power tiny sensor nodes, it is of great importance to study how to both cover targets and save energy. In this paper, we propose a novel target coverage algorithm with energy constraint for wireless sensor networks. A wireless sensor network can be described as a graph model, in which nodes and edges represent sensors and maximum signal transmission ranges respectively. In particular, three types of sensor nodes are utilised: 1) base stations, 2) gateways, and 3) sensors. The main innovations of this paper lie in that we organise the network lifetime in a cyclic mode, and divide the network lifetime into rounds of equal period. At the beginning of each round, sensors independently determine which sensing units should be turned on in the working step. Afterwards, the status of each sensing unit is determined by integrating the sensing ability and remaining energy together. Finally, we construct a simulation environment to test the performance of our algorithm. Experimental results demonstrate that the proposed algorithm performs better than the remaining-energy-first and max-lifetime target coverage schemes under various numbers of sensors and attributes, and the performance of our proposed algorithm is next only to integer programming. Furthermore, we also find that the proposed algorithm is able to effectively cover targets with low energy consumption.
    Keywords: Wireless sensor networks; Target coverage; Energy constraint; Network lifetime.

  • Safety Message Data Transmission Model and Congestion Control Scheme in VANET   Order a copy of this article
    by Zhixiang Hou, Jiakun Gao 
    Abstract: When a VANET encounters large traffic density, beacons produced by periodic safety messages may occupy the whole channel bandwidth, resulting in link congestion. In order to ensure safety in message data transmission and promote the effectiveness of congestion control, in this paper we propose an active safety-information congestion control framework via analysing VANET features, covering channel detection, load estimation, congestion control, sending restoration, and so on. We then propose a novel congestion control mechanism based on adjusting the beacon frequency and a vehicle communication model. Based on control theory and the features of VANET, an active safety-information congestion structure is put forward first. Then the CACP algorithm is adopted to estimate the link bandwidth for congestion prediction. For periodic status messages and security messages, a channel assignment algorithm is also proposed to ensure there is enough channel resource to transfer emergency messages. As a consequence, the status message accuracy and system safety can be ensured and the number of accommodated users can be increased, which avoids network congestion and improves channel utilisation. From the simulation results, it can be observed that the proposed algorithm can effectively and accurately detect the link load degree, improve throughput, decrease delay, reduce network energy consumption and thus guarantee data fidelity. Finally, it can be concluded that the proposal achieves a safe and efficient transmission mechanism for VANET.
    Keywords: VANET; Congestion control; Channel; Beacon; D-FPAV.

  • Design and Implementation of ITS Information Acquisition System under IoS Environment   Order a copy of this article
    by Ying Zhang, Jiajun Li, Baofei Xia 
    Abstract: The problems and efficiency of existing intelligent transportation system (ITS) models are studied, and the significance of the model and architecture of ITS under the Internet of Things (IoS) is also discussed. From the research status at home and abroad on the relationship between IoS and ITS, the necessity of introducing IoS technology into ITS is explained first. Then, based on the logical structure and physical model of ITS, the ITS structure model under the IoS environment is established. Furthermore, the system architecture of a complete and comprehensive ITS information acquisition system is formed. The design principles of system requirements, overall design, function module design and database design are described in detail. The key modules of the system are introduced and tested. The test results show that the acquisition system can effectively monitor vehicle speed and the traffic of road vehicles in real time, and obtain effective information on the vehicle and road environment. It can also send the acquired information via the Internet or a GPRS network to the data processing centre for further processing and intelligent decision-making in ITS.
    Keywords: ITS; IoS; information acquisition; GPRS; communication.

  • Traffic Route Optimization Based on Cloud Computing Parallel ACS   Order a copy of this article
    by Changyu Li 
    Abstract: Intelligent traffic demands a massive data environment and high-performance processing, which requires a cloud computing platform to process massive data and distributed parallel guidance algorithms to improve system efficiency. Therefore, this paper proposes an improved scheme based on a cloud computing ACS algorithm. It first adopts MapReduce to parallelise the traditional ACS, to process the solving problem in a distributed parallel mode, and to improve the defects in ACS. The improved ACS applies the Map function to parallelise the most time-consuming part, that is, the independent solving process of each ant. Then the Reduce function is used to describe the processes of pheromone updating and obtaining better solutions. Simultaneously, for the defects of ACS, namely long searching time and premature convergence to a non-optimal solution, we integrate a simulated annealing algorithm into ACS and provide the corresponding realisation process. The experiments construct a Hadoop cloud computing platform, and the improved algorithm is run and tested on this platform. From the analysis of the experimental results, we find that the parallel ACS designed by us improves the query efficiency of the shortest path, and also has an advantage in running time and speedup ratio compared to classic algorithms.
    Keywords: cloud computing; ACS; MapReduce; Traffic network; Pheromone.
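    The Map/Reduce split described in the abstract, where each ant's tour construction is the independent Map work and the pheromone update is the Reduce aggregation, can be sketched sequentially. This simplified single-process sketch on a small TSP omits Hadoop and the simulated annealing component; all names and parameters are illustrative:

```python
import random

def tour_len(tour, dist):
    """Length of a closed tour over the distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def ant_tour(dist, pher, rng, alpha=1.0, beta=2.0):
    """Map step: one ant builds a complete tour on its own - this is the
    independent, parallelisable work a Map task would perform."""
    n = len(dist)
    tour, left = [0], list(range(1, n))
    while left:
        cur = tour[-1]
        # standard ACO transition weights: pheromone ** alpha * (1/d) ** beta
        w = [(pher[cur][j] ** alpha) * (1.0 / dist[cur][j]) ** beta
             for j in left]
        nxt = rng.choices(left, weights=w)[0]
        tour.append(nxt)
        left.remove(nxt)
    return tour

def aco_step(dist, pher, n_ants, rng, rho=0.5, q=1.0):
    """Reduce step: gather all tours, evaporate pheromone, reinforce the
    best tour found in this round, and return it."""
    tours = [ant_tour(dist, pher, rng) for _ in range(n_ants)]  # map phase
    best = min(tours, key=lambda t: tour_len(t, dist))
    for i in range(len(dist)):
        for j in range(len(dist)):
            pher[i][j] *= (1 - rho)                 # evaporation
    for i in range(len(best)):
        a, b = best[i], best[(i + 1) % len(best)]
        pher[a][b] += q / tour_len(best, dist)      # reinforcement
        pher[b][a] = pher[a][b]
    return best
```

    In the paper's setting, the `ant_tour` calls would run as Hadoop map tasks and `aco_step`'s aggregation as the reducer; the sequential loop stands in for that distribution.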

  • Optimal configuration of M-for-N shared spare-server systems   Order a copy of this article
    by Hirokazu Ozaki 
    Abstract: In this study, we investigate the user-perceived availability of M-for-N shared spare-server systems. We assume that there are N identical working servers, each serving a single user group, and M identical shared spare servers in the system. We also assume that the time to failure of a server follows an exponential distribution, and the time needed to repair a failed server follows an Erlang type-k distribution. Under these assumptions, our numerical computation shows that there exists an optimal size (M + N) for shared spare-server systems with respect to availability and cost for a given condition.
    Keywords: Cloud computing; user-perceived reliability; shared protection systems; probability distribution; availability.

  • Image analysis by efficient Gegenbauer Moments Computation for 3D Objects Reconstruction   Order a copy of this article
    by Bahaoui Zaineb, Hakim El Fadili, Khalid Zenkouar, Hassan Qjidaa 
    Abstract: In this paper, we suggest a new technique for the fast computation of Gegenbauer orthogonal moments for the reconstruction of 3D images/objects. A typical comparison of the proposed method with the conventional ZOA methods shows significant improvements in terms of error reduction, image quality and computation time. We then compare our new approach with existing methods using Legendre and Zernike moments in the case of 3D images/objects. The obtained results show that although Legendre and Zernike moments are slightly better than Gegenbauer moments, the latter are still very efficient and give very good results in terms of MSE and PSNR, while Zernike moments have a higher computational cost than Gegenbauer moments.
    Keywords: Gegenbauer Moments computation; Legendre Moments; Zernike moments; 3D images/object; Computation time.

  • Integration of a quantum scheme for key distribution and authentication within EAP-TLS protocol   Order a copy of this article
    by GHILEN AYMEN, Mostafa AZIZI 
    Abstract: The extensive deployment of wireless networks has led to significant progress in security approaches that aim to protect confidentiality. The current methods for exchanging a secret key within the Extensible Authentication Protocol-Transport Layer Security (EAP-TLS) protocol are based on a Public Key Infrastructure (PKI). Although this technique remains one of the most widely implemented solutions to authenticate users and to ensure secure data transmission, its security is only computational. In other words, with the emergence of the quantum computer, the existing cryptosystems will become completely insecure. Improving contemporary cryptographic schemes by integrating quantum cryptography becomes a much more attractive prospect, since its technology does not rely on difficult mathematical problems such as factoring large integers or computing discrete logarithms. Thus, we propose a quantum extension of EAP-TLS that allows exchanging a cryptographic key and authenticating a remote client with unconditional security, ensured by the laws of quantum physics. The PRISM tool is applied as a probabilistic model checker to verify specific security properties of the new scheme.
    Keywords: EAP-TLS; Quantum Cryptography; Authentication; Key Agreement; Entanglement; PRISM; Model Checking.

  • A rapid detection method of earthquake infrasonic wave based on decision-making tree and the BP neural network   Order a copy of this article
    by Yun Wu, Zuoxun Zeng 
    Abstract: In this paper, a rapid detection method for earthquake infrasonic waves combining a decision-making tree and a neural network is proposed. This method is designed for an automated monitoring system for earthquake occurrence and advance forecasting. Firstly, different kinds of signal data are collected and analysed to find the most meaningful attributes, which are important for describing the features of the signal. Then, in the decision part, two decision lines are designed to supply a final result. In the first part, the important attributes are chosen to be the nodes of the decision-making tree. In the second part, many previously stored signals are analysed using the neural network to build the mapping model between the attributes of a signal and its classification. Lastly, the most suitable node sequence and thresholds are determined in this process according to two experiments. An experimental analysis is also presented in this paper to discuss some important issues in the decision tree, and to determine the final decision system.
    Keywords: Infrasonic Wave; Earthquake; Decision-making Tree; Neural Network.

  • Data Dissemination on MANET by repeatedly transmission via CDN nodes   Order a copy of this article
    by Nattiya Khaitiyakun, Teerapat Sanguankotchakorn, Kanchana Kanchanasut 
    Abstract: Recently, much research on MANETs (mobile ad hoc networks) has been carried out due to their various applications in information exchange. Efficient data dissemination in an infrastructure-less environment such as a MANET is considered one of the challenging issues. This paper proposes to adopt the concept of the CDN (Content Delivery Network) technique, normally used in the Internet, for disseminating information in a MANET. The source node disseminates data to surrounding nodes by repeatedly transmitting batches of packets via a set of CDN nodes acting as relay nodes. Our proposed data dissemination technique via CDN nodes is developed based on the OLSR (Optimised Link State Routing) protocol on MANET. A limited number of CDN nodes is selected from the MPRs (Multi-Point Relays) in OLSR in order to optimally cover all subscriber nodes and to avoid the interference problem as well. The packets are transmitted from the CDN nodes to destination nodes using the same broadcasting technique as the one adopted in MPR. In this work, the performance of our proposed technique is evaluated in terms of the probability of successful transmission by simulation using NS3. The performance is compared with typical OLSR and the recent work called the clustering-based data transmission algorithm in VANET (Vehicular Ad Hoc Network). It is apparent that our proposed algorithm drastically improves the overall probability of successful transmission compared with typical OLSR. Additionally, it achieves a higher probability of successful transmission at high node density compared with the clustering-based data transmission algorithm in VANET. Finally, a closed-form mathematical expression for the probability of successful transmission of our proposed algorithm in a multi-hop network environment is derived and verified.
    Keywords: Content Delivery Network (CDN); MANET; OLSR.

  • Suboptimal Joint User Equipment Pairing and Power Control for Device-to-Device Communications Underlaying Cellular Networks   Order a copy of this article
    by Chaoping Guo, Xiaoyan Li, Wei Li, Hongyang Li 
    Abstract: In device-to-device (D2D) communications underlaying cellular networks, the cellular interference to D2D user equipment (DUE) is larger than the D2D interference to cellular user equipment (CUE). A joint resource allocation scheme is presented to perform both user equipment pairing and power allocation so as to minimise the total interference to DUEs and CUEs. The scheme is composed of two parts: in the first, the base station assigns power to each CUE and each D2D transmitter by a graphical method; in the second, it selects the optimal CUE to pair with each D2D pair by a modified Hungarian algorithm in order to minimise the total interference. The simulation results show that the proposed scheme can not only decrease the interference caused by D2D pairs and that caused by CUEs, but also increase the number of permitted D2D connections.
    Keywords: device-to-device communications; power control; resources allocation; Suboptimal Joint; Cellular Networks; User Equipment.
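    The CUE-to-D2D pairing step is a classic assignment problem. The paper uses a modified Hungarian algorithm; purely for illustration, a brute-force search over all assignments makes the objective (minimum total interference) explicit. The interference matrix in the example is hypothetical:

```python
from itertools import permutations

def pair_min_interference(interf):
    """interf[c][d]: total interference when CUE c shares its resource
    with D2D pair d. Exhaustively try every one-to-one assignment and
    return the pairing with minimum total interference. (The paper uses
    a modified Hungarian algorithm, which scales polynomially; this
    O(n!) search only illustrates the objective on tiny inputs.)"""
    n = len(interf)
    best = min(permutations(range(n)),
               key=lambda p: sum(interf[c][p[c]] for c in range(n)))
    return list(best), sum(interf[c][best[c]] for c in range(n))
```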

  • A Cognitive Approach of Collaborative Requirements Validation based on Action Theory   Order a copy of this article
    by Sourour MAALEM, Nacereddine ZAROUR 
    Abstract: Requirements must be validated at an early stage of analysis. Requirements validation usually involves natural language, which is often inaccurate and error-prone, or formal models, which are difficult for non-technical stakeholders to understand and use. The majority of existing approaches for validating requirements are used in a heterogeneous process, employing a variety of relatively independent techniques without any methodological or cognitive approach in which mechanisms of human or artificial thought are used. In this work we present a cognitive approach to collaborative requirements validation based on the theory of action, through a set of steps that should increase the involvement of the client at this stage of the engineering cycle and bring the customer's mental model closer to the analyst's. In the proposed process, the analyst starts by extracting needs one by one from requirements documents. Each need goes through a step of formulating the intention, which results in a transformation of needs into requirements. This transformation is performed by respecting a new syntax and generating a checklist (quality attributes) from the viewpoint of each stakeholder, followed by a step of specification of actions which, on the basis of the intentions, the analyst perceives and verifies in order to build a rapid prototype of the software interface, executable on a machine. The customer perceives the prototype, interprets it and validates the needs. A database of valid needs is created; needs that remain invalid must be negotiated, and if conflicts persist, an error base is created.
At the end of this collaborative process of requirements validation, decisions will be made concerning who must participate in needs validation meetings with respect to the Mental Effort metric, which classifies, according to the mental difficulty of executing the prototype, the articulatory and semantic problems, and measures the commitment and motivation of stakeholders.
    Keywords: Requirements engineering; Requirements validation; Action Theory; Prototype; Cognitive Approach.

  • Approach to Terrain Pretreatment for the Yazidang Reservoir Based on Image Processing   Order a copy of this article
    by Lingxiao Huang 
    Abstract: Terrain pretreatment based on image processing for the Yazidang Reservoir is presented. The image processing is based on image stitching and image edge detection. The SURF algorithm is used to extract feature points from partial images of the reservoir. To match feature points and stitch the partial images, the wide-view reservoir image is stitched using the KD-tree algorithm and a linear gradient fusion algorithm. Improved mathematical morphology is adopted, and structural elements of five different shapes and four different scales are implemented to obtain a precise waterfront edge of the reservoir image. A *.dxf file is obtained by using CAD software to depict the reservoir shore contours; it is transformed into *.kml to extract the latitude and longitude information of the reservoir waterfront edge. A satisfactory mesh is generated by using the Delaunay triangulation algorithm. The results show that accurate and detailed reservoir terrain can be obtained using the image processing method and the GE software, which provides a prerequisite for the numerical simulation of sedimentation in the Yazidang Reservoir.
    Keywords: terrain pretreatment; image processing; SURF algorithm; KD-tree algorithm; linear gradient fusion algorithm; mathematical morphology; Delaunay triangulation.

  • A novel imbalanced data classification algorithm based on fuzzy rule   Order a copy of this article
    by Zhiying Xu, YiJiang Zhang 
    Abstract: The classification of imbalanced data can increase the comprehensibility and expansibility of data and improve the efficiency of data classification. The accuracy of classification is poor when the data are classified by current methods for imbalanced data analysis of big data. To this end, this paper presents an imbalanced data classification algorithm based on fuzzy rules. The algorithm firstly collects the imbalanced data, selects the features of the imbalanced data, and optimises the imbalanced data classification using a fuzzy rule classification algorithm. The experimental results show that, when the classifier maintains a certain number of weak classifiers, the classification accuracy of the proposed algorithm gradually improves as the training time increases and gradually stabilises within a certain range. This method can thus improve the accuracy of imbalanced data classification.
    Keywords: imbalanced data; data feature selection; data classification.
    DOI: 10.1504/IJICT.2019.10015386
  • A Personalized Recommendation Algorithm Based on Probabilistic Neural Networks   Order a copy of this article
    by Long Pan, Jiwei Qin, Liejun Wang 
    Abstract: Collaborative filtering is widely used in recommendation systems. Our work is motivated by the observation that users are caught in an attention relationship network, and their opinions about items are directly or indirectly affected by others through such a network. Based on the behaviours of users with similar interests, the technique focuses on the use of their opinions to recommend items. Therefore, the quality of the similarity measure between users or items has a great impact on the accuracy of recommendation. This paper proposes a new recommendation algorithm with a graph-based model. The similarity between two users (or two items) is computed from the connections on a graph whose nodes are users and items. The computed similarity measure is then used by probabilistic neural networks to generate predictions. The model is evaluated on a recommendation task which suggests which videos users should watch based on what they watched in the past. Our experimental results on the YouKu and Epinions datasets demonstrate the effectiveness of the presented approach in comparison with both collaborative filtering with traditional similarity measures and simplex graph-based methods; by further improving precision, recall and coverage, our approach can better improve the overall recommendation performance and user satisfaction.
    Keywords: recommendation system; similarities; graphs-based approach; collaborative filtering; probabilistic neural networks.

  • Temporal Impact Analysis and Adaptation for Service-Based Systems   Order a copy of this article
    by Sridevi Saralaya, Rio D'Souza, Vishwas Saralaya 
    Abstract: Temporality is an influential aspect of Service-Based Systems (SBSs). The inability of a service to meet time requirements may lead to violation of Service-Level Agreements (SLAs) in an SBS. Such non-conformity by a service may introduce temporal inconsistency between dependent services and the composition. The temporal impact of the anomaly on related services and on the composition needs to be identified if SLA violations are to be rectified. Existing studies concentrate on impact analysis due to web service evolution or changes to a web service. There is a huge lacuna regarding studies on the impact of time delay on the temporal constraints of dependent services and the obligations of the business process. Although reconfiguration of an SBS to overcome failures is extensively addressed, reconfiguration triggered by temporal delay is not well explored. In this study we try to fill the gap between reconfiguration and impact analysis invoked due to temporal violations. Once the impacted region and the amount of temporal deviation of the business process are known, we attempt recovery by localising the reconfiguration of services to the impacted zone.
    Keywords: Service-Based Systems; Impact-Analysis; Proactive Adaptation; Reactive Adaptation; Reconfiguration; Cross-Layer Adaptation; SLA violation handling; Anomaly handling; Service-Based Applications.

  • Considering the environment's characteristics in wireless networks simulations: case of the simulator NS2 and the WSN   Order a copy of this article
    by Abdelhak EL MOUAFFAK, Abdelbaki EL BELRHITI EL ALAOUI 
    Abstract: Recently, wireless networks, particularly wireless sensor networks (WSNs), have occupied an important place in several application areas due to progress in the microelectronics and wireless communications domains. Thus, a body of research has addressed this issue in order to broaden the possibilities offered by these networks and circumvent the problems encountered. Testing any new solution is an essential phase to validate its performance. This phase is done in network simulators, of which NS is the most used. The impact of the physical layer and the criteria of the radio signal propagation environment on simulation results is indisputable. In this context, after presenting and classifying the radio propagation models, we study in detail the models implemented in NS-2. The focus is on the ability of these models to consider the characteristics of the wireless network deployment environment (e.g. the nature, position and mobility of obstacles). To consider the specificities of WSNs, the effect of other parameters (e.g. antenna height) is also discussed.
    Keywords: Wireless network; Wireless sensor network; Simulation; Network Simulator; NS-2; Radio propagation model; Deployment environment.

  • Improved Biometric Identification System Using a New Scheme of 3D Local Binary Pattern   Order a copy of this article
    by KORICHI Maarouf, Meraoumia Abdallah, Aiadi Kamal Eddine 
    Abstract: In any computer vision application, the integration of a relevant feature extraction module is vital to help in making accurate classification decisions. In the literature, several methods that have achieved promising results and high accuracies are based on texture analysis. Thus, there exist various feature extraction techniques to describe texture information; among them, the Local Binary Pattern (LBP) is widely used to characterise an image sufficiently. Generally, the LBP descriptor and its variants are applied to grey-scale images. In this paper, we propose a new method that can be applied to any type of image, whether grey-scale, colour, multispectral or hyperspectral: a new scheme of 3D Local Binary Pattern. We have developed a biometric system for person identification and an edge detection technique to evaluate it. The obtained results show that it has higher performance compared to other methods developed in the literature in terms of identification rates.
    Keywords: Feature extraction; Local Binary Pattern (LBP); Biometrics; Person identification; Palmprint; Data fusion.
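    The classical grayscale LBP that the proposed 3D scheme generalizes admits a compact sketch (illustrative code, not the authors' implementation): each pixel is compared with its eight neighbours, and the comparison bits are packed into an 8-bit code.

```python
import numpy as np

def lbp_image(img):
    """Classical 8-neighbour LBP on a grayscale image (interior pixels only).

    Each pixel is compared with its 8 neighbours; a neighbour >= centre
    contributes a 1-bit, and the 8 bits are packed into a code in [0, 255].
    """
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]  # centre pixels
    # Neighbour offsets in clockwise order starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy: img.shape[0] - 1 + dy,
                    1 + dx: img.shape[1] - 1 + dx]
        codes |= (neigh >= c).astype(np.int32) << bit
    return codes

# A uniform patch: every neighbour equals the centre, so all 8 bits are set.
flat = np.full((3, 3), 7)
print(lbp_image(flat))  # [[255]]
```

    A texture descriptor is then typically the histogram of these codes over the image or over local blocks.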

  • An Agricultural Data Storage Mechanism Based on HBase   Order a copy of this article
    by Changyun Li, Qingqing Zhang, Pinjie He, Zhibing Wang, Li Chen 
    Abstract: With the development of agricultural spatio-temporal localization, sensor networks and cloud computing, the amount of agricultural data is increasing rapidly, and its structure is becoming more complicated and changeable. Currently, the most widely used agricultural database is the relational database, which handles large amounts of data with very limited throughput and is not suitable for organizing and managing distributed data. HBase is a non-relational, distributed database built on the Hadoop platform; it is well suited to storing unstructured data and can handle large volumes of data with high scalability. To better store agricultural big data in HBase, we propose a special agricultural data buffer structure that stores data by category, together with a two-level indexing memory-organization strategy on HBase. The proposed method saves more than a quarter of the time required by traditional buffering methods. Experimental results confirm the higher efficiency of the agricultural data buffer structure and memory-organization strategy.
    Keywords: agricultural big data; data buffer structure; HBase; two-level indexing strategy.

  • A Real-time Multi-Agent System for Cryptographic Key Generation using Imaging-based Textural Features   Order a copy of this article
    by Jafar Abukhait, Ma'en Saleh 
    Abstract: Traditional network security protocols depend on exchanging security keys between network nodes, thus exposing the network to different classes of security threats. In this paper, a multi-agent system for cryptographic key generation is proposed for real-time networks. The proposed technique generates a 256-bit security key for the Advanced Encryption Standard (AES) algorithm using the textural features of digital images. By implementing this key-generation technique at both the sender and receiver nodes, the process of exchanging security keys over the network is eliminated, making communication between network nodes robust against different security threats. Simulation results over a real-time network show the efficiency of the proposed system in reducing the overhead of the security associations performed by the IPsec protocol. The proposed agent-based system is also highly effective in guaranteeing the quality of service (QoS) of real-time requests, in terms of miss ratio and total average delay, by applying the best scheduling algorithm.
    Keywords: Cryptography; Textural Features; Gray Level Co-occurrence Matrix (GLCM); Advanced Encryption Standard (AES); QoS; Security Key; Network Node.
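    As an illustration of the general idea only (the paper's exact feature set and key mapping are not specified here), a few Haralick-style GLCM statistics can be computed from an image and hashed into a 256-bit key; two nodes holding the same image then derive identical keys without any key exchange. The function names and the SHA-256 step are assumptions of this sketch.

```python
import hashlib
import numpy as np

def glcm(img, levels=8):
    """Gray-level co-occurrence matrix for the horizontal (0, 1) offset."""
    q = (np.asarray(img, dtype=np.float64) * levels / 256).astype(int)
    q = np.clip(q, 0, levels - 1)
    m = np.zeros((levels, levels))
    left, right = q[:, :-1], q[:, 1:]
    for i, j in zip(left.ravel(), right.ravel()):
        m[i, j] += 1  # count each horizontal gray-level pair
    return m / m.sum()

def key_256(img):
    """Illustrative 256-bit key: hash a few GLCM texture statistics."""
    p = glcm(img)
    i, j = np.indices(p.shape)
    feats = np.array([
        (p ** 2).sum(),                    # energy
        (p * (i - j) ** 2).sum(),          # contrast
        (p / (1.0 + (i - j) ** 2)).sum(),  # homogeneity
    ])
    return hashlib.sha256(feats.tobytes()).hexdigest()

img = np.tile(np.arange(256, dtype=np.uint8), (8, 1))
print(len(key_256(img)) * 4)  # 256 bits
```

    Because the key is recomputed deterministically at each endpoint, it never travels over the network, which is the property the abstract relies on.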

  • Modeling of learner behavior in massive open online video-on-demand services   Order a copy of this article
    by Ji-Wei Qin, Xiao Liu 
    Abstract: Video-on-demand (VoD), a popular Internet application, provides lively learning resources in massive open online education: learners can freely select and watch the videos that interest them. Learner VoD behavior, as feedback that reveals learner preferences, can help video providers design, deploy and manage learning videos in massive open online VoD services. In this paper, we collected learner VoD behavior reports over 875 days and, on the basis of this real-world data, present a model of learner VoD behavior in massive open online VoD services. Three main findings are reported: 1) educational video popularity matches the stretched exponential model better than the Zipf model; 2) long-session educational videos tend to be less popular; 3) the Poisson distribution is the best fit for learner arrivals in massive open online VoD services. The popularity distribution can help determine the number of copies of each educational video file to deploy on VoD servers, while the session and arrival patterns can help in designing the content of educational videos in massive open online VoD services.
    Keywords: learner behavior; video-on-demand; massive open online.
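    Finding 1 can be illustrated with a toy fit: under the stretched exponential model, log popularity is linear in rank^c, so an ordinary least-squares line fit in transformed coordinates recovers the parameters. The data below is synthetic and the parameter values are illustrative; in practice the stretch factor c would be scanned or fitted as well.

```python
import numpy as np

# Stretched-exponential popularity law: log(y_r) = a - b * r**c,
# where y_r is the access count of the rank-r video.  With the stretch
# factor c fixed, a and b follow from a least-squares fit of log(y)
# against r**c.
def fit_stretched_exp(ranks, counts, c):
    x = ranks ** c
    slope, intercept = np.polyfit(x, np.log(counts), 1)
    return intercept, -slope  # a, b

ranks = np.arange(1, 501, dtype=float)
a_true, b_true, c_true = 10.0, 0.5, 0.4
counts = np.exp(a_true - b_true * ranks ** c_true)

a_hat, b_hat = fit_stretched_exp(ranks, counts, c_true)
print(round(a_hat, 3), round(b_hat, 3))  # ≈ 10.0 0.5
```

    A Zipf fit would instead regress log(y) on log(r); comparing the residuals of the two regressions is one way to decide which law fits the observed popularity better.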

  • Research on equalization scheduling algorithm for network channel under the impact of big data   Order a copy of this article
    by Zheng Yu, Dangdang Dai, Zhiyong Zha, Yixi Wang, Hao Feng 
    Abstract: In order to improve the equalization scheduling ability of network channels, an equalization scheduling algorithm for big-data network communication based on baud-spaced equalization and decision-feedback modulation technology is proposed in this paper. A model of the network communication channel under the impact of big data is constructed to analyze the multipath characteristics of the channel. A coherent multipath channel modulation method is used to filter intersymbol interference, and adaptive baud-spaced equalization technology is used to design the channel equalization; a tap-delay-line channel model is used to suppress multipath in the network channel, and decision-feedback modulation is used for equalization scheduling, overcoming the phase shift caused by the impact of big data on the channel and improving channel equalization. Simulation results show that when the proposed algorithm is used for network channel equalization scheduling, the fidelity of the symbols output by the network communication is good, the bit error rate is low, and the equalization scheduling performance under big-data and multipath impact is good, which improves the robustness of the network channel.
    Keywords: big data; multipath effect; network channel; equalization scheduling; baud space; modulation.

  • Study on high accurate localization using multipath effects via physical layer information   Order a copy of this article
    by Yajun Zhang, Yawei Chen, Hongjun Wang, Meng Liu, Liang Ma 
    Abstract: Device-free passive localization (DFL) has shown great potential for localizing targets that carry no device in an area of interest (AoI). It is especially useful for applications such as hostage rescue, wildlife monitoring, elder care and intrusion detection. Current approaches based on RSS (received signal strength), however, work only under the prerequisite that the collected signal travels mainly along a direct line-of-sight (LOS) path, and cannot perform well in a typical indoor building with multipath effects. This paper presents a fine-grained CSI-based localization system that is effective in multipath and non-line-of-sight scenarios. The intuition underlying our design is that CSI (Channel State Information) benefits from the multipath effect, because the received signal measurements at different sampling positions are combinations of different CSI measurements. We adapt an improved maximum-likelihood method to pinpoint a single target's location. Finally, we implement a prototype of our design using commercial IEEE 802.11n NICs. Experiments in a lobby and a laboratory of our university demonstrate that, compared with RSS-based and prior CSI-based schemes, our design locates a single target best, with an accuracy of 0.95 m.
    Keywords: device-free passive localization; channel state information; improved maximum likelihood method.

  • A fast particle filter pedestrian tracking method based on color, texture and corresponding space information   Order a copy of this article
    by Yang Zhang, Dongrong Xin 
    Abstract: A fast particle-filter pedestrian tracking method based on color, texture and the corresponding spatial information is proposed. The algorithm first extracts the spatial information of the target pedestrian and partitions it into three local regions. It then employs an improved texture and color extraction algorithm to obtain joint texture and color information from each sub-region. Finally, it determines the position of the target using a color-texture similarity indicator based on the spatial division, yielding an accurate track. Because the multi-thread information fusion algorithm needs a large number of particles, which reduces computational efficiency, a wave integral histogram algorithm is proposed to improve the processing speed. Experiments carried out on videos indicate the effectiveness and efficiency of the proposed method, which achieves higher accuracy than two other state-of-the-art algorithms in real traffic scenes while also considerably improving real-time performance.
    Keywords: pedestrian tracking; particle filter; integral histogram; texture information.
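    The integral-histogram idea the speed-up builds on can be sketched as follows (illustrative code, not the paper's wave variant): after one cumulative pass over per-bin indicator images, the histogram of any rectangle follows from four array lookups per bin, independent of the region size, which is what makes evaluating many particle regions cheap.

```python
import numpy as np

def integral_histogram(img, bins=16):
    """Per-bin integral images: ih[y, x, b] counts bin-b pixels in img[:y, :x]."""
    q = np.clip((np.asarray(img) * bins) // 256, 0, bins - 1)
    onehot = (q[..., None] == np.arange(bins)).astype(np.int64)
    ih = onehot.cumsum(axis=0).cumsum(axis=1)
    return np.pad(ih, ((1, 0), (1, 0), (0, 0)))  # zero row/col simplifies queries

def region_hist(ih, y0, x0, y1, x1):
    """Histogram of img[y0:y1, x0:x1] in O(bins), independent of region size."""
    return ih[y1, x1] - ih[y0, x1] - ih[y1, x0] + ih[y0, x0]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
ih = integral_histogram(img)
# Matches a direct histogram of the same patch.
direct = np.bincount(
    np.clip((img[10:30, 5:25] * 16) // 256, 0, 15).ravel(), minlength=16)
print(np.array_equal(region_hist(ih, 10, 5, 30, 25), direct))  # True
```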

  • Velocity Monitoring Signal Processing Method of Track Traffic Based on Doppler Effect   Order a copy of this article
    by Xiaojuan Hu, Tie Chen, Nan Zhao 
    Abstract: In order to improve the efficiency of track traffic speed monitoring, a signal processing method based on the Doppler effect is proposed to meet the accurate velocity measurement requirements of high-speed trains. Accordingly, a Doppler-based signal processing method for track traffic radar monitoring is studied. First of all, the Doppler effect and its principles are analyzed. Then, the signal processing algorithm of the rail traffic radar system is investigated, and an improvement of the real-sequence FFT (fast Fourier transform) imaginary-part arithmetic algorithm is put forward, simplifying the FFT computation. Finally, the spectrum analysis performance of the improved FFT algorithm is verified through MATLAB simulation. The test results show that the improved FFT algorithm maintains measurement accuracy while speeding up the Doppler frequency calculation, and that it can meet the processing requirements of rail traffic radar velocity measurement systems for velocity monitoring signals.
    Keywords: Signal processing; Doppler effect; Track traffic; FFT algorithm.
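    The velocity estimate ultimately rests on the Doppler relation f_d = 2·v·f0/c and on locating the Doppler peak in an FFT of the real echo signal. A minimal sketch (the carrier frequency, sampling rate and target speed below are illustrative values, not taken from the paper):

```python
import numpy as np

C = 3e8          # speed of light, m/s
F0 = 24.15e9     # radar carrier frequency, Hz (illustrative K-band value)
FS = 100e3       # sampling rate of the baseband Doppler signal, Hz

def doppler_speed(signal):
    """Estimate target speed from a real Doppler signal via an FFT peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0  # ignore the DC component
    fd = np.fft.rfftfreq(len(signal), d=1.0 / FS)[spectrum.argmax()]
    return fd * C / (2.0 * F0)  # invert f_d = 2 * v * f0 / c

v_true = 83.3  # ~300 km/h train
t = np.arange(4096) / FS
echo = np.cos(2 * np.pi * (2 * v_true * F0 / C) * t)
print(doppler_speed(echo))  # close to v_true, limited by the FFT bin width
```

    The paper's improvement exploits the fact that the input is a real sequence, so the imaginary part of the FFT input is free; the sketch above uses NumPy's real-input FFT for the same reason.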

  • Research and Application on Logistics Distribution Optimization Problem using Big Data Analysis   Order a copy of this article
    by Yuming Duan, Hai-tao FU 
    Abstract: The optimization of logistics distribution center location is discussed in a big-data environment. The features of logistics distribution and the algorithm design idea are presented on the MapReduce platform, integrated with a data-mining clustering algorithm. A clustering analysis algorithm based on geodesic distance is proposed, combining the features of logistics with a MapReduce-based parallel design and improvement scheme. Since in real situations there is no straight-line path between nodes, while the Dijkstra distance can measure the actual distance between two points, a GDK-means clustering algorithm is put forward. The improved clustering algorithm is parallelized to process the large amounts of unstructured data in logistics big data; the parallel design takes both algorithmic complexity and time efficiency into account. The clustering algorithm can be applied to the logistics distribution center location problem, and it is verified to provide a decision scheme for logistics route optimization in the distribution chain according to the granularity of the spatial division.
    Keywords: logistics distribution; center location; k-means; geodesic distance; big data.

  • Efficient Scheduling of Power Communication Resources Based on Fuzzy Weighted Constraint Equalization   Order a copy of this article
    by Jinkun Sun, Kang Yang 
    Abstract: Resources in a power communication system are affected by crosstalk between power grid subnets during scheduling, resulting in poorly balanced resource allocation. In order to improve the balanced allocation of resources in the power communication system, an efficient resource scheduling method based on fuzzy weighted constraint equalization is proposed. In this method, nonlinear time-series analysis is used to construct an information flow model of the system's resource data, and channel equalization control is carried out on the transmission links; the fuzzy weighted constraint equalization method implements adaptive baud-spaced equalization during resource scheduling, and a fuzzy mesh-based clustering algorithm pairs the classification attribute weights of power resources, so as to achieve efficient clustering of resources and improve link balance during scheduling. Simulation results show that with this method the average data recall rate during resource scheduling reaches about 90% and the equalization curve changes very smoothly, which indicates that the data recall rate and scheduling efficiency are high and the equalization capability of the communication channels is strong.
    Keywords: power communication system; resource scheduling; data clustering; channel equalization.

  • Mechanical fault diagnosis based on digital image processing technology   Order a copy of this article
    by Hengqiang Gao, HongJuan CAI 
    Abstract: While modern mechanical equipment continues to deliver high productivity, it has also brought many new challenges and problems. Detecting and diagnosing mechanical faults in time, and then raising alarms, is of great importance. In this paper, we therefore propose a novel mechanical fault diagnosis method based on digital image processing technology. Firstly, the overall structure of a dynamic motion detection system for mechanical equipment faults is illustrated, in which infrared image segmentation is the key part. Secondly, we convert the infrared image segmentation problem into a clustering problem and provide a novel immune neural network clustering algorithm. Finally, experimental results demonstrate that the proposed algorithm can detect mechanical faults with high accuracy.
    Keywords: Mechanical fault diagnosis; Infrared image; Image segmentation; Immune neural network; Fitness value.

  • Performance Analysis of Distributed Transmit Beamforming in Presence of Oscillator Phase Noise   Order a copy of this article
    by Ding Yuan, Chuanbao Du, Houde QUAN 
    Abstract: This paper analyses the performance of distributed transmit beamforming (DTBF) in the presence of oscillator phase noise. The average beampattern of an arbitrary array is adopted as the performance index. In the analysis, the phase noise process of each node's oscillator is described by a stationary model that takes the shape of the single-sideband (SSB) spectrum into account. Using a non-parametric kernel method, the average beampattern in the presence of phase noise is derived and the corresponding beampattern characteristics are evaluated. Theoretical analysis and simulation results show that the accumulated phase noise degrades DTBF performance, which is also reflected in changes to beampattern characteristics such as the 3 dB width, the 3 dB sidelobe region and the average directivity. As the time duration increases, the cumulative phase noise grows and the degradation becomes more obvious.
    Keywords: distributed transmit beamforming; average beampattern; phase noise; non-parametric kernel method; oscillator; performance analysis.

  • The Research of Image Super-Resolution Algorithm Using Convolutional Sparse Coding   Order a copy of this article
    by Bin Wang, Jun Deng, YanJing Sun 
    Abstract: For super-resolution image reconstruction with convolutional sparse coding, a novel reconstruction algorithm named the four-channel convolutional sparse coding method is proposed, improving on the convolutional sparse coding method. In the proposed method, a test image is split into four channels by rotating it ninety degrees four times. The high-frequency and low-frequency parts are then reconstructed by convolutional sparse coding and cubic interpolation, respectively. Finally, the reconstructed high-resolution image is obtained by weighting the four images. The proposed method not only overcomes the consistency problem of overlapping patches, but also improves the detail contours of the reconstructed image and enhances its stability. Experimental results show that the proposed method achieves better PSNR, SSIM and noise immunity than several classical super-resolution reconstruction methods.
    Keywords: Image Reconstruction; Super-Resolution; Convolutional Sparse Coding; Four-Channel; Stability.

  • Three-dimensional Dynamic Tracking Learning Algorithm for Pedestrians on Indefinite Shape Base Based on Deep Learning   Order a copy of this article
    by Yaomin Hu 
    Abstract: In order to improve three-dimensional dynamic tracking and recognition of pedestrians, a three-dimensional dynamic pedestrian tracking learning algorithm on an indefinite shape base, based on deep learning, is proposed in this paper. First, the indefinite-shape-base mesh of body imaging is segmented to extract three-dimensional dynamic similarity features of pedestrians, and the three-dimensional feature points are marked; the deep learning method is adopted to fuse gray pixel values and extract difference features from images during three-dimensional dynamic tracking. A motion vector library is then constructed from the extraction results, and the template matching equation of the pedestrians' three-dimensional dynamic feature points is obtained. Simulation results show that this method can accurately track moving bodies in three-dimensional dynamic tracking and recognition and provides good robustness in moving-body target extraction, with accuracy of up to 100% and a detection time of at most 48.83 ms.
    Keywords: indefinite shape base; pedestrian; three-dimensional dynamic tracking; deep learning; image.
    DOI: 10.1504/IJICT.2019.10013628
  • A 3D model retrieval method based on multi-feature fusion   Order a copy of this article
    by Hong Tu 
    Abstract: 3D model retrieval is a hot topic in information retrieval, and fusing multiple features of 3D models is essential to achieving high-quality retrieval. In this paper, we therefore propose a novel 3D model retrieval method based on multi-feature fusion. The motivation is to convert the 3D model retrieval problem into a discriminative feature-space mapping problem. The framework of the multi-feature-fusion-based 3D model retrieval system contains two main modules: 1) model normalization and 2) multi-feature fusion. The method is designed around multiple-feature fusion and online projection learning. To effectively fuse multiple features, we train a model to learn a low-dimensional, discriminative feature space from multiple views of 3D models. In particular, to effectively retrieve newly added samples, we propose an online projection learning algorithm that learns a projection matrix by solving a least-squares regression model. Experimental results show that the proposed method achieves higher precision at a given recall than other methods; that is, it obtains higher-quality 3D model retrieval results than state-of-the-art methods.
    Keywords: 3D model retrieval; Multi-feature fusion; Visual feature; Eigenvalue decomposition; Projection matrix.
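    The projection-learning step can be sketched as a closed-form ridge least-squares problem: learn W minimizing ||XW − Y||² + λ||W||², then embed any new sample as xW without retraining. This is an illustrative reading of the abstract, not the authors' exact formulation; the matrices and dimensions below are made up for the sketch.

```python
import numpy as np

def learn_projection(X, Y, lam=1e-2):
    """Closed-form ridge solution of min_W ||X W - Y||^2 + lam * ||W||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 32))          # high-dimensional fused features
W_true = rng.normal(size=(32, 8))
Y = X @ W_true                          # target low-dimensional embedding
W = learn_projection(X, Y)

# A new (out-of-sample) model is embedded without re-training:
x_new = rng.normal(size=(1, 32))
print(np.allclose(x_new @ W, x_new @ W_true, atol=1e-2))  # True
```

    Retrieval then reduces to nearest-neighbour search in the learned low-dimensional space, which is why out-of-sample projection matters for newly added models.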

  • Dual control algorithms for fault diagnosis of stochastic systems with unknown parameters   Order a copy of this article
    by Jinkun Sun, Kang Yang 
    Abstract: This paper studies the problem of fault diagnosis for stochastic systems characterized by slowly changing, unknown parameters. It puts forward the concept of a threshold-based logic parameter decay rate and designs a rolling control algorithm with a learning control law based on Kalman filtering theory and the LQG control law. While estimating the parameters, and alongside the parameter learning, the algorithm gives the system good fault tolerance and thus more accurate fault detection and isolation. Simulation results verify the effectiveness of the proposed method.
    Keywords: Dual control; Parameter decay rate; Kalman filter; LQG control; Rolling control algorithm.

  • The Research of Multimedia Dynamic Image 3d Adaptive Rendering Method   Order a copy of this article
    by Su-ran KONG, Jun-ping YIN 
    Abstract: 3D adaptive rendering of multimedia dynamic images helps to improve image quality. Current methods render multimedia dynamic images through geometric-information scenario modeling, which suffers from low rendering efficiency. To solve this problem, a 3D adaptive rendering method based on OGRE is presented in this paper. The method first uses the compressed domain to correct the segmentation of the image, then classifies image characteristics using the SIFT and Forstner operators, and finally completes 3D adaptive image rendering according to the image array. Experimental results show that this method has obvious advantages over other methods in classification time, rendering energy consumption, segmentation efficiency and other respects, fully demonstrating that it improves image rendering efficiency.
    Keywords: multimedia; dynamic image; 3D adaptive; rendering methods.

  • Automated Random Color Keypad   Order a copy of this article
    by Kumar Abhishek, Manish Kumar Verma, M.P. Singh 
    Abstract: In the early 1970s the automated teller machine (ATM) came into existence as a replacement for cash counters at banks, letting people carry out transactions 24x7 with ease. But as the ATM expanded its reach into everyday life, crimes related to ATM theft and fraud increased exponentially. Criminals exploit flaws in ATM security: card cloning, card skimming and PIN theft are among the most common ATM-related crimes, resulting in losses measured in billions, and newspapers are full of such crime reports and hacks. In this paper, we propose an automated random color (ARC) keypad by which ATM security can be enhanced, making these attacks and frauds very difficult. The ARC keypad acts as a secure replacement for the traditional ATM keypad. Several experimental results included in this paper show that our mechanism enhances ATM PIN security and greatly reduces the chances of fraud.
    Keywords: ATM; ATM keypad; ATM cloning; ATM skimming.
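    The core of a randomized keypad can be sketched in a few lines (illustrative only, not the authors' ARC design): each session draws a fresh digit-to-position assignment and colour labels with a cryptographic RNG, so observed press positions, smudges or pad overlays reveal nothing about the PIN across sessions.

```python
import secrets

COLORS = ["red", "green", "blue", "yellow", "orange"]

def new_keypad():
    """One session's keypad: digits shuffled over positions, random colors.

    An observer who sees only the pressed positions learns nothing about
    the PIN, because the digit behind each position changes every session.
    """
    layout = []
    pool = list("0123456789")
    while pool:
        d = pool.pop(secrets.randbelow(len(pool)))  # unbiased shuffle step
        layout.append((d, secrets.choice(COLORS)))
    return layout

pad = new_keypad()
print([d for d, _ in pad])  # the 10 digits in a fresh random order
```

    `secrets` rather than `random` is used here because keypad layouts are security-sensitive and must not be predictable from earlier sessions.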

  • NFK: a Novel Fault-tolerant K-Mutual Exclusion Algorithm for Mobile and Opportunistic Ad hoc Networks   Order a copy of this article
    by Tahar Allaoui, Mohamed Bachir Yagoubi, Chaker Abdelaziz Kerrache, Carlos T. Calafate 
    Abstract: This paper presents a fault-tolerant algorithm for sharing multiple resources in Mobile Ad hoc Networks (MANETs) that handles the well-known k-mutual exclusion problem in such mobile environments. The proposed algorithm relies on a token-based strategy and carries information about resources and their use in routing-protocol control messages; in this way, our solution avoids any additional exchange of messages. Furthermore, experimental results show that it offers a fast response time. Moreover, we introduce a dual-layer fault-tolerance mechanism that tolerates the faults of several sites at the same time without affecting the proper functioning of the system. Simulation results also evidence the high efficiency of our proposal, which achieves reduced overhead and response delay even in critical situations where multiple simultaneous faults occur.
    Keywords: NFK; Resource sharing; K-Mutual Exclusion; Fault tolerance; MANETs.

  • GIS Information Feature Estimation Algorithm Based on Big Data   Order a copy of this article
    by Chunyang Lu, Feng WEN 
    Abstract: In order to improve the data mining and information scheduling capabilities of geographic information systems (GIS), GIS information feature estimation and feature extraction must be optimized, so a GIS information feature estimation algorithm based on big-data analysis is proposed. In this algorithm, piecewise linear estimation is adopted to reconstruct the feature data in the GIS information database in groups, associated-information fusion is performed on the GIS data, and adaptive scheduling of the GIS information feature database is carried out through a cascaded distributed scheduling method. According to the spatial distribution of the geographic information, the cluster centers are vector-adjusted, frequent-item mining is adopted to extract GIS information features, and the extracted feature quantities are processed sequentially; finally, regularized power-density-spectrum estimation is adopted to perform unbiased estimation on the GIS information feature data. Simulation results show that the proposed method provides estimates with low bias and high accuracy, giving it good GIS information scheduling capability and precision.
    Keywords: big data; GIS; information feature estimation; associated information fusion.

  • Uncertain chance-constrained programming based on optimistic and pessimistic values: models and solutions   Order a copy of this article
    by Yao Qi, Ying Wang, Xiangfei Meng, Ning Wang 
    Abstract: To handle the uncertainty in real decisions and overcome the limitations of stochastic programming and fuzzy programming in applications, we propose two novel uncertain chance-constrained programming models based on the optimistic and pessimistic values of uncertain variables. Firstly, the optimistic and pessimistic values of uncertain variables are introduced as the objective functions, and the chance constraints of uncertain programming are defined as the constraint functions; the optimistic-value model and the pessimistic-value model are then established. Secondly, two lemmas are proposed and proved to transform each uncertain chance-constrained programming model into an equivalent deterministic programming model. Finally, the feasibility and effectiveness of the proposed models and solutions are verified by a numerical example.
    Keywords: uncertain programming; chance-constrained; optimistic value; pessimistic value; equivalent deterministic model.
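    The optimistic and pessimistic values used as objectives are presumably the standard α-optimistic and α-pessimistic values of Liu's uncertainty theory; for an uncertain variable ξ with uncertain measure M they read:

```latex
% alpha-optimistic and alpha-pessimistic values of an uncertain
% variable \xi with uncertain measure \mathcal{M}:
\xi_{\sup}(\alpha) = \sup \{ r \mid \mathcal{M}\{\xi \ge r\} \ge \alpha \},
\qquad
\xi_{\inf}(\alpha) = \inf \{ r \mid \mathcal{M}\{\xi \le r\} \ge \alpha \},
\qquad \alpha \in (0, 1].
```

    Maximizing the optimistic value yields a risk-seeking model, while maximizing the pessimistic value yields a risk-averse one, which is what distinguishes the paper's two formulations.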

  • Multilingual Named Entity Recognition Based on the BiGRU-CNN-CRF Hybrid Model   Order a copy of this article
    by Maimaiti AYIFU, Silamu WUSHOUER, Muhetaer PALIDAN 
    Abstract: Uyghur, Kazak and Kyrgyz (the UKK languages) are agglutinative languages belonging to the Altaic family, mainly spoken in the Xinjiang Uyghur Autonomous Region of China and in Central Asia; they are low-resource languages with rich morphological features. How to obtain a better general entity recognition method without relying on hand-crafted features and resources remains an open problem. In this paper, a hybrid neural network model based on a bidirectional GRU (BiGRU)-CNN-CRF architecture is proposed. This model takes as input the prefix and suffix character-level feature vectors captured by the convolutional neural network (CNN) layer, part-of-speech vectors and concatenated word vectors, and constructs a BiGRU-CRF deep neural network suited to recognizing UKK named entities. The output of the BiGRU layer is decoded by the conditional random field (CRF) layer, which takes the dependencies between output tags into account, and the globally optimal labeling sequence is produced. The experimental results show that this model solves the problem of automatic named entity recognition, achieving the best results to date on the UKK data set provided by our laboratory, and that it is robust. The F1 values for named entity recognition reached 93.11%, 90.29% and 89.22% for Uyghur, Kazak and Kyrgyz, respectively. The model has been verified on named entity recognition tasks in these three languages and can be extended to other agglutinative languages.
    Keywords: Recurrent neural network; convolutional neural network; conditional random field; named entity recognition; Uyghur; Kazak; Kyrgyz.

  • Latent Semantic Text Classification Method Research based on Support Vector Machine   Order a copy of this article
    by Qingmei Lu, Yulin Wang 
    Abstract: Text classification, an important step in network public opinion analysis, directly affects the judgment of textual public opinion, and its accuracy is an important prerequisite for such analysis. Commonly used text classification methods currently focus on clustering and machine learning, and in general their accuracy is not ideal. Text classification based on latent semantics is insensitive to the feature dimension and uses simple classification procedures, so it has become the focus of extensive research. However, as the number of text categories increases, local semantic analysis occurs, causing the classification accuracy to drop. In this paper, a latent semantic classification method based on the support vector machine (LR-LSA) is proposed to solve the local semantic analysis problem brought about by having too many text categories; it can better mitigate the impact of a surge in feature dimension on classification performance.
    Keywords: LSA; Semantics; Vector machine; Machine learning.

  • Autonomous Multi-target Tracking Technology of Unmanned Surface Vessel Based on Navigation Radar Image   Order a copy of this article
    by JiaWei Xia, DeChao Zhou, Xufang Zhu 
    Abstract: Traditional multi-target tracking technologies for unmanned surface vessels face three prominent problems: repeated or omitted observations due to the unstable reference frame of the vessel, the lack and low utilization of radar observation point features, and intermittent loss of radar image sequence signals due to environmental interference. To solve these problems, a radar image stitching algorithm based on interpolation, an improved multi-feature target extraction algorithm and a multi-target track management model based on multi-feature matching are introduced within the architecture of the vessel's multi-target tracking system, improving the timeliness and accuracy of target tracking. The feasibility of this technology has been tested through field experiments on a lake. The results reveal that the proposed method tracks multiple targets better, with lower prediction error, than traditional target tracking methods.
    Keywords: unmanned surface vessel; navigation radar; multi-target tracking; autonomous system.

  • MEM: A New Mixed Ensemble Model for Identifying Frauds   Order a copy of this article
    by Chen Zhenhua, Jiang WeiLi, Lei Ma, Zhang JunPeng, Hu JinShan, Xiang Yan, Shao DangGuo 
    Abstract: Wilful insurance fraud still exists in the social security system. In this paper, to address the insufficient stability and the randomness of traditional insurance fraud evaluation models, we propose a new classifier called MEM (Mixed Ensemble Model). Based on the principle of ensemble learning, MEM combines several different individual learners and uses Q-statistic methods to evaluate their diversity. MEM has been tested on two fraud-related datasets against three state-of-the-art classifiers: neural networks, naive Bayes and logistic regression. The experimental results show that MEM outperforms the other three classifiers on both datasets under four measures: accuracy, recall, F-value and kappa. MEM can thus be a useful method for detecting social insurance fraud.
    Keywords: Detection of social insurance frauds; Measurement of social insurance frauds; Social insurance identification techniques; Mixed ensemble model.
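The abstract above mentions using the Q statistic to evaluate the diversity of the individual learners in the ensemble. The pairwise Yule Q statistic for two classifiers can be sketched as follows; this is an illustrative sketch, not the authors' code, and the boolean correctness vectors are hypothetical inputs:

```python
def q_statistic(correct_a, correct_b):
    """Yule's Q statistic for a pair of classifiers.

    correct_a / correct_b: iterables of booleans, True where that
    classifier predicted the sample correctly.  Q ranges from -1
    (negatively dependent errors) to 1 (identical error behaviour);
    values near 0 indicate high diversity.
    """
    n11 = n00 = n10 = n01 = 0
    for a, b in zip(correct_a, correct_b):
        if a and b:
            n11 += 1          # both correct
        elif not a and not b:
            n00 += 1          # both wrong
        elif a:
            n10 += 1          # only the first correct
        else:
            n01 += 1          # only the second correct
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)
```

In an ensemble setting, Q would typically be averaged over all learner pairs to score the diversity of a candidate combination.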

  • Research on Security Assessment Algorithm of Navigation control system based on Big Data   Order a copy of this article
    by Jianxin Ge, Jiaomin Liu 
    Abstract: Navigation control systems show a trend towards integration and modularisation, characterised by a high degree of resource sharing, fast information transmission and the integration of software and hardware; this places high demands on their safety. In view of the low accuracy of traditional algorithms, a new big-data-based security assessment algorithm for navigation control systems is proposed. The structure of the navigation control system is introduced, and its realisation is analysed. In a big data environment, the Common Criteria (CC) are used to determine the safety assessment objectives of the navigation control system. According to the different safety weights among the areas in the criterion layer, pilots determine the weights of the assessment objectives by the analytic hierarchy process, based on their evaluation of the navigation control system and the corresponding safety rating indices. The GRAP algorithm is then used to obtain the security level of the navigation control system, realising its safety assessment. Experimental results show that, thanks to its comprehensive ability, the proposed algorithm effectively improves assessment accuracy and reduces the misclassification rate.
    Keywords: Big data; Navigation control system; Safety; Assessment; Aircraft.
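The abstract above derives assessment-objective weights with the analytic hierarchy process. A minimal sketch of the standard AHP weighting step, taking a pairwise comparison matrix to its normalised principal eigenvector, is shown below; this is an illustration of the generic technique, not the authors' exact procedure:

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights from an AHP pairwise comparison matrix:
    the principal eigenvector, normalised to sum to 1."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    w = np.abs(principal)           # eigenvector sign is arbitrary
    return w / w.sum()
```

For a perfectly consistent matrix such as `[[1, 2], [0.5, 1]]` (criterion 1 judged twice as important as criterion 2), this returns weights of roughly 2/3 and 1/3.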

  • Decision of Mechanical Allocation in Subgrade Earthwork Construction under Uncertainty   Order a copy of this article
    by Bo Wang, Ying Wang, Lijie Cui 
    Abstract: Two phenomena are common in subgrade earthwork construction: queuing caused by conflicts over mechanical resources, and lag in the construction schedule caused by uncertainties. Both have a large impact on mechanical allocation. Based on Petri nets, queuing theory and uncertainty theory, we first establish a model of the whole construction process under uncertain conditions. Secondly, we solve the mechanical configuration problem for a given construction schedule. Thirdly, we introduce uncertain factors into the mechanical allocation. Finally, through a numerical application, we verify and compare the resulting decisions.
    Keywords: subgrade earthwork construction; mechanical allocation; uncertain conditions.

  • Study of Big Data Mining Based on Cloud Computing   Order a copy of this article
    by Jiangyi Du, Fu-ling Bian 
    Abstract: In the era of big data, discovering knowledge from massive data of various types is an important research direction in big data processing, and this is where the value of big data mining lies. Starting from a comparison of conventional data mining and big data mining, this paper discusses typical data mining algorithms, in particular their parallel implementations. It also analyses the architecture of a big data mining system and the framework of a cloud-computing-based big data mining platform, providing a reference for users' understanding and application of big data mining.
    Keywords: Big Data; Data Mining; Cloud Computing.

  • Part-Based Pyramid Loss for Person Re-Identification   Order a copy of this article
    by Yuanyuan Wang, Zhijian Wang, Mingxin Jiang 
    Abstract: Person re-identification (ReID), which aims to identify the same person across multiple cameras, is a challenging problem in computer vision that has also attracted industrial attention. A key under-addressed problem is learning a good metric for measuring the similarity between images. Recently, deep networks with a metric learning loss, such as the triplet loss and its variants, have become a common framework for person ReID. However, these methods mainly use distance to measure similarity, and distance measures are sensitive to scale changes. In this paper, we propose a part-based pyramid loss that learns a better similarity metric for person ReID, taking batches of quadruplet samples as input. Specifically, we simultaneously use the distance and angle relationships among samples to learn local body-part features of person images. Our approach uses the pyramid relationship in triangles as a measure of similarity, minimising the angle at the negative point of the triangle. The pyramid loss learns a better similarity metric and achieves higher performance on person ReID benchmark datasets. The experimental results show that our method yields accuracy competitive with state-of-the-art methods.
    Keywords: Person Re-identification; Metric Learning; Pyramid Loss; Part-based.
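The abstract above describes minimising the angle at the negative point of the triangle formed by anchor, positive and negative samples. One published formulation of such an angle-based constraint is the angular loss of Wang et al. (ICCV 2017), sketched here for illustration; this is not necessarily the authors' pyramid loss, and the margin `alpha_deg` is a hypothetical parameter:

```python
import math
import numpy as np

def angular_triplet_loss(xa, xp, xn, alpha_deg=45.0):
    """Angle-based triplet constraint (after Wang et al., 2017):
    keep the angle at the negative point below alpha by requiring
    ||xa - xp||^2 <= 4 * tan^2(alpha) * ||xn - xc||^2,
    where xc is the midpoint of anchor and positive."""
    xa, xp, xn = (np.asarray(v, dtype=float) for v in (xa, xp, xn))
    xc = (xa + xp) / 2.0
    t2 = math.tan(math.radians(alpha_deg)) ** 2
    # hinge: zero loss once the negative is far enough from the midpoint
    return max(0.0, np.sum((xa - xp) ** 2) - 4.0 * t2 * np.sum((xn - xc) ** 2))
```

Unlike a pure distance hinge, this constraint is invariant to a global rescaling of the embedding, which is the scale-sensitivity issue the abstract raises against plain triplet losses.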

  • Feature Extraction Algorithm for Fast Moving Pedestrians with Frame Drop Constraint based on Deep Learning   Order a copy of this article
    by Yaoming Hu 
    Abstract: When existing methods extract information about fast-moving pedestrians, frames may be dropped, resulting in low extraction precision. A deep-learning-based feature extraction algorithm for fast-moving pedestrians with a frame drop constraint is therefore proposed. The pedestrian image is first block-matched and denoised. A contour feature extraction method is used to reconstruct adjacent frames, and the reconstructed image frame vectors are fused sub-block by sub-block. A deep learning algorithm then extracts grey-level pixel features from the dropped-frame parts of the image, improving feature extraction for pedestrians under the frame drop constraint. The simulation results show a standard deviation of 8.235 for the extraction results with dropped frames and 4.353 without, demonstrating that the algorithm has a low frame loss rate and a high extraction and recognition ability.
    Keywords: pedestrian; frame drop; feature extraction; tracking and identification; deep learning.

  • Building Energy Consumption Forecasting Algorithm Based on Piecewise Linear Fusion and Exponential Spectrum Analysis   Order a copy of this article
    by Chenqiang Zhan 
    Abstract: To address the large errors of traditional statistical prediction methods, a big data prediction method based on piecewise linear fusion and exponential spectrum analysis is proposed. The method establishes a target model of building energy consumption prediction and carries out nonlinear exponential sequence analysis. For the game analysis of building energy consumption, the piecewise linear fusion method is used to decompose the features of the building energy consumption map, followed by statistical analysis. According to the evolution of the feature decomposition and the learned trends, analysis and accurate prediction of building energy consumption big data is realised. The simulation results show that the method reduces energy consumption, supports building energy conservation, emission reduction and green building, and provides both a new idea and scientific support for the development of energy-efficient, environmentally friendly buildings.
    Keywords: big data environment; building energy consumption; forecasting algorithm; map feature analysis.

  • CNN-based Text Multi-Classifier using Filters Initialized by N-gram Vector   Order a copy of this article
    by Yan Xiang, Ying Xu, Zhengtao Yu, Hongbin Wang, Yantuan Xian 
    Abstract: Text classification based on Convolutional Neural Networks (CNN) has received increasing attention recently. This paper presents an improved CNN-based text multi-classifier. First, word vectors are trained on the corpus to be classified. Then, the n-grams most important to a particular category are selected and clustered into groups. Finally, the centroid vectors of the groups are used to initialise the centre weights of the convolutional filters. These initialised weights enable the CNN to extract n-gram features more effectively and ultimately improve text classification results. Multi-classification experiments against several advanced models were performed on different datasets, showing that the proposed model is more accurate and stable than the baseline models.
    Keywords: Convolutional Neural Networks; Text Classification; n-gram; Word Embedding; Clustering.
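The initialisation step described above maps each cluster of important n-grams to one convolutional filter. A minimal NumPy sketch of that mapping is shown below, assuming each n-gram is represented by the concatenation of its n word vectors; the function name, the `scale` factor and the clustering input are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ngram_centroid_filters(ngram_vecs, group_labels, n, d, scale=1.0):
    """Build CNN filter weights from n-gram clusters.

    ngram_vecs  : array of shape (num_ngrams, n * d), each row the
                  concatenated word vectors of one n-gram.
    group_labels: cluster label per n-gram (e.g. from k-means).
    Returns an array of shape (num_groups, n, d): each group's
    centroid, reshaped to filter-window shape, as initial weights.
    """
    labels = np.asarray(group_labels)
    filters = []
    for g in sorted(set(group_labels)):
        centroid = ngram_vecs[labels == g].mean(axis=0)   # (n*d,)
        filters.append(scale * centroid.reshape(n, d))
    return np.stack(filters)
```

The resulting tensor would then be copied into the centre of a CNN's convolution kernel parameters before training, in place of a purely random initialisation.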

  • Source Code Based Context-Sensitive Dynamic Slicing of Web Applications   Order a copy of this article
    by Jagannath Singh, D.P. Mohapatra 
    Abstract: Web applications are widely used for spreading business around the globe, and to meet clients' requirements they must offer better quality and robustness than other applications. Web slicing improves the understanding of important information, which in turn improves the quality of a web application. The system dependence graph is the most popular intermediate representation for explicitly capturing all dependencies that must be considered in slicing; here it is extended to a Web Dependence Graph (WDG), and a partial tool has been developed for automatic generation of the WDG. We propose a Context-sensitive Web Slicing (CSWS) algorithm that computes slices using the WDG. Our literature survey showed that most automatic graph generation tools are based on byte-code, whereas our tool performs dependency analysis on the source code of the given program. Using the WDG tool, we compared the performance of the proposed CSWS algorithm with other closely related slicing techniques.
    Keywords: Program Slicing; JSP Application; Source Code Analysis; Context sensitive; Dynamic Slicing.

  • An Item Recommendation Model with Content Semantic   Order a copy of this article
    by Yunpeng Jiang, Liejun Wang, Ji-Wei Qin 
    Abstract: Current recommender service providers offer items of interest to users based on user behaviour (e.g. ratings, trust values) and manually tagged video features, ignoring the content semantics of the items. Since item semantics accurately reflect item content, they should be taken into account to avoid the subjectivity of user-assigned feature tags. We therefore present a recommender model that leverages both content semantics and user ratings. In this model, item similarity is first calculated from content semantics with the Word2vec method: item content is mapped as words into a vector space, the Euclidean distance between vectors describes the similarity between items, and the distances are sorted in ascending order to form one item list recommended by content semantics. Next, user ratings are used to model user preference and build a second item list recommended by a traditional method such as SVD. The two lists are then merged into the final recommendation list. Comparing this algorithm with traditional collaborative filtering on rating matrices of different sparsity (Movielens, Filmtrust and Online_Retail), our experiments show that accuracy improves by an average of 25.32% to 31.41% over the traditional algorithms, and that the presented user preference model scales well.
    Keywords: recommender model; semantic feature; similarities; Word2vec; data sparsity.
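The content-semantic list described above ranks items by ascending Euclidean distance between their Word2vec content vectors. That ranking step can be sketched as follows; the function name is illustrative and the item vectors are assumed to come from a separately trained Word2vec model:

```python
import numpy as np

def content_ranked_list(item_vecs, query_idx):
    """Rank items by Euclidean distance to the query item's content
    vector, ascending (most similar first), excluding the query itself."""
    dists = np.linalg.norm(item_vecs - item_vecs[query_idx], axis=1)
    return [int(i) for i in np.argsort(dists) if i != query_idx]
```

In the full model, a list produced this way would be merged with the rating-based list from the SVD recommender to form the final recommendation.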

  • Stargan Based Camera Style Transfer for Person Retrieval   Order a copy of this article
    by Yuanyuan Wang, Zhijian Wang, Mingxin Jiang 
    Abstract: Person retrieval, also known as person re-identification (ReID), aims to match persons across cameras. Although person ReID performs well on small datasets, scenarios with large numbers of identities or many cameras have not been fully investigated. As an image retrieval task across the multiple cameras of intelligent video surveillance, person ReID is affected by image style changes caused by differing camera illumination and viewing angles. The number of cameras in the latest datasets keeps growing, so more camera transfer models need to be trained, yet traditional generative adversarial network (GAN) methods can only transfer between two domains. To address these problems, we use star generative adversarial networks (StarGAN) to transfer images from one camera to another in the latest large benchmark datasets, training multiple transfer models simultaneously and minimising the bias among different cameras. A label smooth regularisation (LSR) algorithm is utilised to mitigate the effect of noise in the model, and part-based descriptors are learned from pedestrian samples to generate a robust feature representation. Our results are competitive with the state of the art.
    Keywords: StarGAN; Person retrieval; LSR.
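The label smooth regularisation mentioned above replaces one-hot training targets with smoothed distributions so that noisy (e.g. GAN-generated) samples contribute less confidently to the loss. A minimal sketch of the common uniform-smoothing variant follows; the smoothing factor `eps` is an illustrative assumption, and the authors' exact LSR scheme may differ:

```python
def smooth_labels(true_class, num_classes, eps=0.1):
    """Label smoothing: give the true class weight 1 - eps and spread
    eps uniformly over all classes, so the targets still sum to 1."""
    base = eps / num_classes
    q = [base] * num_classes
    q[true_class] += 1.0 - eps
    return q
```

The smoothed vector `q` is then used in place of the one-hot label when computing the cross-entropy loss.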

  • Fast Mining Algorithm for Multi-level Association Rule Data under Temporal Constraints   Order a copy of this article
    by Yicheng Mu 
    Abstract: Redundant interference between frames of multi-level association rule data under temporal constraints degrades the clustering and anti-interference performance of data mining. To improve the mining of such data, this paper proposes a fast mining algorithm for multi-level association rule data based on temporal constraints. It constructs a fitting state model of the multi-level association data distribution and uses a reorganisation method for multi-level association rules to rearrange the data structure and extract the average mutual information feature. It then constructs detection statistics for multi-level linear programming of the association rule data, applies autocorrelation detection for de-interference processing, and applies fuzzy directional clustering to the data, realising fast mining of multi-level association rule data under temporal constraints. The simulation results show that, compared with traditional methods, the proposed method reduces the execution time of multi-level association rule data mining by 12.77% and improves mining accuracy by 23.34%; the high accuracy and strong anti-interference ability improve overall mining efficiency.
    Keywords: temporal constraints; association rules; data mining; feature extraction.

  • Fuzzy Judgment of Boundary Features under Dynamic Constraints in Pedestrian Tracking   Order a copy of this article
    by Yaoming Hu 
    Abstract: Pedestrian tracking and recognition is influenced by the pedestrian environment and by the edge factors of dynamic features, which easily cause tracking errors; improving tracking and recognition therefore requires fuzzy judgment of edge features. A fuzzy judgment method for edge features under dynamic constraints in pedestrian tracking, based on local motion planning and edge contour segmentation, is proposed. A geometric mesh area model for pedestrian tracking and recognition is constructed, and a fuzzy dynamic feature segmentation method reconstructs the dynamic edge feature points to extract the set of greyscale pixels under dynamic constraints. Edge features are then fused according to the distribution intensity of the greyscale pixels to achieve image fusion and information enhancement. A three-dimensional dynamic constraint method is adopted for local motion planning, and fuzzy judgment of the edge features is carried out based on the edge contour segmentation results. The simulation results show that the method has a strong fuzzy judgment ability for edge features, yields errors below 10 mm with relatively stable fluctuation, and thus provides high recognition accuracy and good robustness.
    Keywords: pedestrian tracking; dynamic constraint; boundary feature; recognition; image fusion.

  • Design of Cloud Computing-based Foreign Language Teaching Management System Based on Parallel Computing   Order a copy of this article
    by Kanmanli Maimaiti 
    Abstract: In view of the long response time of traditional foreign language teaching management systems and their inability to stimulate students' interest in learning, a foreign language teaching management system based on parallel computing is proposed. A cloud architecture for the foreign language teaching system is designed in a cloud computing environment, on which parallel computing is used to design the system hardware, making it scalable and flexible. The parallel algorithm is designed, and communication between the system modules is implemented in C#. The experimental results show that whereas the response time for each of the four score queries is 1 s in reference [3], it is only 0.6 s with the proposed system. The system shortens query response times and speeds up the development of web-based foreign language teaching, effectively promoting both such teaching and students' interest in foreign language learning.
    Keywords: parallel computing; cloud environment; foreign language teaching; management system.