Forthcoming articles

International Journal of Information and Communication Technology (IJICT)

These articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Register for our alerting service, which notifies you by email when new issues are published online.

Open Access: Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.
We also offer RSS feeds which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Information and Communication Technology (62 papers in press)

Regular Issues

  • A Personalized Recommendation Algorithm Based on Probabilistic Neural Networks   Order a copy of this article
    by Long Pan, Jiwei Qin, Liejun Wang 
    Abstract: Collaborative filtering is widely used in recommendation systems. Our work is motivated by the observation that users are embedded in an attention relationship network, and their opinions about items are directly or indirectly affected by others through such a network. Based on the behavior of users with similar interests, the technique uses their opinions to recommend items. The quality of the similarity measure between users or items therefore has a great impact on the accuracy of recommendation. This paper proposes a new recommendation algorithm with a graph-based model. The similarity between two users (or two items) is computed from the connections of a graph whose nodes are users and items, and the computed similarity measure is fed into probabilistic neural networks to generate predictions. The model is evaluated on a recommendation task that suggests which videos users should watch based on what they watched in the past. Our experimental results on the YouKu and Epinions datasets demonstrate the effectiveness of the presented approach in comparison with both collaborative filtering using traditional similarity measures and simple graph-based methods; the approach further improves user satisfaction and the overall recommendation performance in precision, recall and coverage.
    Keywords: recommendation system; similarities; graphs-based approach; collaborative filtering; probabilistic neural networks.

  • Temporal Impact Analysis and Adaptation for Service-Based Systems   Order a copy of this article
    by Sridevi Saralaya, Rio D'Souza, Vishwas Saralaya 
    Abstract: Temporality is an influential aspect of Service-Based Systems (SBS). The inability of a service to meet its time requirements may lead to violation of Service-Level Agreements (SLAs) in an SBS. Such non-conformity by a service may introduce temporal inconsistency between dependent services and the composition. The temporal impact of the anomaly on related services and on the composition needs to be identified if SLA violations are to be rectified. Existing studies concentrate on impact analysis due to web service evolution or changes to a web service. There is a huge lacuna regarding studies on the impact of time delays on the temporal constraints of dependent services and the obligations of the business process. Although reconfiguration of SBS to overcome failures is extensively addressed, reconfiguration triggered by temporal delay is not well explored. In this study we try to fill the gap between reconfiguration and impact analysis invoked due to temporal violations. Once the impacted region and the amount of temporal deviation to the business process are known, we attempt recovery by localizing the reconfiguration of services to the impacted zone.
    Keywords: Service-Based Systems; Impact-Analysis; Proactive Adaptation; Reactive Adaptation; Reconfiguration; Cross-Layer Adaptation; SLA violation handling; Anomaly handling; Service-Based Applications.

  • Considering the environment's characteristics in wireless networks simulations: case of the simulator NS2 and the WSN   Order a copy of this article
    by Abdelhak EL MOUAFFAK, Abdelbaki EL BELRHITI EL ALAOUI 
    Abstract: Recently, wireless networks, particularly Wireless Sensor Networks (WSN), have come to occupy an important place in several application areas thanks to progress in microelectronics and wireless communications. A body of research has addressed this issue in order to broaden the possibilities offered by these networks and circumvent the problems encountered. Testing any new solution is an essential phase in validating its performance. This phase is carried out in network simulators, of which NS is the most widely used. The impact of the physical layer and of the radio signal propagation environment on simulation results is indisputable. In this context, after presenting and classifying radio propagation models, we study in detail the models implemented in NS-2. The focus is on the ability of these models to take into account the characteristics of the wireless network deployment environment (e.g. the nature, position and mobility of obstacles). To consider the specificities of WSN, the effect of other parameters (e.g. antenna height) is also discussed.
    Keywords: Wireless network; Wireless sensor network; Simulation; Network Simulator; NS-2; Radio propagation model; Deployment environment.

  • Improved Biometric Identification System Using a New Scheme of 3D Local Binary Pattern   Order a copy of this article
    by KORICHI Maarouf, Meraoumia Abdallah, Aiadi Kamal Eddine 
    Abstract: In any computer vision application, the integration of a relevant feature extraction module is vital to help make an accurate classification decision. In the literature, several methods that have achieved promising results and high accuracies are based on texture analysis. There exist various feature extraction techniques to describe texture information; among them, the Local Binary Pattern (LBP) is widely used to characterize the image sufficiently. Generally, the LBP descriptor and its variants are applied to grayscale images. In this paper, we propose a new method that can be applied to any type of image, whether grayscale, color, multispectral or hyperspectral: a new scheme of 3D Local Binary Pattern. We have developed a biometric system for person identification and an edge detection technique to evaluate it. The obtained results show that it achieves higher performance than other methods in the literature in terms of identification rates.
    Keywords: Feature extraction; Local Binary Pattern (LBP); Biometrics; Person identification; Palmprint; Data fusion.
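
    Since the abstract does not spell out the 3D LBP formulation, a minimal NumPy sketch of the classical 8-neighbor LBP descriptor that such schemes extend may help fix ideas; the histogram feature below is the usual grayscale baseline, not the authors' 3D variant.

    import numpy as np

    def lbp_8(image):
        """Classical 8-neighbor Local Binary Pattern on a grayscale image.

        Each interior pixel is encoded by thresholding its 8 neighbors against
        the center value and packing the comparison bits into one code.
        """
        img = np.asarray(image, dtype=np.float64)
        c = img[1:-1, 1:-1]                                  # center pixels
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]         # clockwise neighbors
        codes = np.zeros(c.shape, dtype=np.int32)
        for bit, (dy, dx) in enumerate(offsets):
            neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
            codes += (neigh >= c).astype(np.int32) << bit
        # the usual LBP feature is the normalized histogram of the codes
        hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
        return hist / hist.sum()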

  • An Agricultural Data Storage Mechanism Based on HBase   Order a copy of this article
    by Changyun Li, Qingqing Zhang, Pinjie He, Zhibing Wang, Li Chen 
    Abstract: With the development of agricultural space-time localization, sensor networks and cloud computing, the amount of agricultural data is increasing rapidly and the data structure is becoming more complicated and changeable. Currently, the widely used agricultural database is the relational database. This type of database handles large amounts of data with very limited throughput and is not suitable for the organization and management of distributed data. HBase is a non-relational, distributed file storage database built on the Hadoop platform. HBase is suitable for storing unstructured data and can handle large volumes of data with high scalability. To better store agricultural big data in HBase, we propose a special agricultural data buffer structure, which stores data based on the data category, and a two-level indexing memory organization strategy on HBase. The proposed method saves more than a quarter of the time compared to traditional buffering methods. Experimental results show the higher efficiency of the agricultural data buffer structure and memory organization strategy.
    Keywords: agricultural big data; data buffer structure; HBase; two-level indexing strategy.
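
    The abstract does not detail the buffer layout or the two-level index, so the following is only a hypothetical Python sketch of the general idea: a category-prefixed row key acting as the first index level, and a per-category write buffer that flushes records in batches; all names and thresholds are invented for illustration, not taken from the paper.

    from collections import defaultdict
    import time

    def make_row_key(category, sensor_id, ts=None):
        """First key level = data category, second level = timestamp + sensor id.

        Prefixing the row key with the category keeps rows of one category
        contiguous in HBase, so per-category scans stay cheap.
        """
        ts = int(ts if ts is not None else time.time() * 1000)
        return f"{category}|{ts:013d}|{sensor_id}"

    class CategoryBuffer:
        """Buffers records per category and flushes each category as one batch."""
        def __init__(self, flush_size=500):
            self.flush_size = flush_size
            self.buckets = defaultdict(list)

        def put(self, category, sensor_id, value):
            self.buckets[category].append((make_row_key(category, sensor_id), value))
            if len(self.buckets[category]) >= self.flush_size:
                self.flush(category)

        def flush(self, category):
            batch = self.buckets.pop(category, [])
            # a real system would issue one HBase batch put here (e.g. via the
            # Thrift client); printing keeps the sketch self-contained
            print(f"flush {len(batch)} rows for category '{category}'")

    buf = CategoryBuffer(flush_size=2)
    buf.put("soil_moisture", "sensor-07", 0.31)
    buf.put("soil_moisture", "sensor-09", 0.28)   # reaches the threshold and flushes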

  • A Real-time Multi-Agent System for Cryptographic Key Generation using Imaging-based Textural Features   Order a copy of this article
    by Jafar Abukhait, Ma'en Saleh 
    Abstract: Traditional network security protocols depend on exchanging security keys between network nodes, thus exposing the network to different classes of security threats. In this paper, a multi-agent system for cryptographic key generation is proposed for real-time networks. The proposed technique generates a 256-bit security key for the Advanced Encryption Standard (AES) algorithm from the textural features of digital images. By implementing this key generation technique at both the sender and receiver nodes, the process of exchanging security keys through the network is eliminated, making communication between network nodes robust against different security threats. Simulation results over a real-time network show the efficiency of the proposed system in reducing the overhead of the security associations performed by the IPsec protocol. In addition, the proposed agent-based system shows high efficiency in guaranteeing the quality of service (QoS) of real-time requests, in terms of miss ratio and total average delay, by applying the best scheduling algorithm.
    Keywords: Cryptography; Textural Features; Gray Level Co-occurrence Matrix (GLCM); Advanced Encryption Standard (AES); QoS; Security Key; Network Node.
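
    As an illustration of how both endpoints could derive the same 256-bit key from image texture without exchanging it, the sketch below computes a few standard GLCM statistics with NumPy and hashes them with SHA-256; the displacement set, feature choice and quantization step are assumptions, not the paper's exact procedure.

    import hashlib
    import numpy as np

    def glcm(img, dx=1, dy=0, levels=16):
        """Normalized gray-level co-occurrence matrix for one displacement (dx, dy)."""
        q = (img.astype(np.float64) / 256.0 * levels).astype(np.int64)   # quantize gray levels
        h, w = q.shape
        a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
        b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
        m = np.zeros((levels, levels), dtype=np.float64)
        np.add.at(m, (a.ravel(), b.ravel()), 1.0)
        return m / m.sum()

    def texture_features(img):
        """Contrast, energy and homogeneity over four standard displacements."""
        i, j = np.indices((16, 16))
        feats = []
        for dx, dy in [(1, 0), (0, 1), (1, 1), (1, -1)]:
            p = glcm(img, dx, dy)
            feats += [np.sum(p * (i - j) ** 2),           # contrast
                      np.sum(p ** 2),                      # energy
                      np.sum(p / (1.0 + np.abs(i - j)))]   # homogeneity
        return np.array(feats)

    def derive_aes256_key(img):
        # rounding before hashing keeps the key bit-exactly reproducible
        return hashlib.sha256(np.round(texture_features(img), 6).tobytes()).digest()

    img = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
    print(derive_aes256_key(img).hex())   # 32 bytes = one AES-256 key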

  • Modeling of learner behavior in massive open online video-on-demand services   Order a copy of this article
    by Ji-Wei Qin, Xiao Liu 
    Abstract: Video-on-demand (VoD), a popular Internet application, provides lively learning resources in massive open online education: learners can freely select and watch the videos that interest them. Learner video-on-demand behavior, as feedback that reveals learner preferences, can help video providers design, deploy and manage learning videos in massive open online VoD services. In this paper, we collected learner video-on-demand behavior reports over 875 days and, on the basis of this real-world data, present a learner video-on-demand model for massive open online VoD services. Three main findings are reported: 1) educational video popularity matches the Stretched Exponential model better than the Zipf model; 2) long-session educational videos tend to be less popular; 3) the Poisson distribution is the best fit for learner arrivals in massive open online VoD services. The educational video popularity distribution helps to define the number of copies of each educational video file to deploy on the VoD server, and the session and arrival patterns help to design the content of educational videos in massive open online VoD services.
    Keywords: learner behavior; video-on-demand; massive open online.
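
    A minimal sketch of how the two popularity models can be compared on a trace: the view counts below are synthetic stand-ins for the 875-day dataset, and the stretched-exponential parameterization (log views linear in rank**c) is one common form, not necessarily the one fitted in the paper.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)
    views = np.sort(rng.pareto(1.2, 2000) + 1.0)[::-1] * 100.0    # most-viewed first
    rank = np.arange(1, views.size + 1, dtype=np.float64)
    log_v = np.log(views)

    # Zipf: log(views) is linear in log(rank)
    zipf_fit = np.polyval(np.polyfit(np.log(rank), log_v, 1), np.log(rank))

    # Stretched exponential: log(views) is linear in rank**c for some stretch factor c
    def se_model(r, a, b, c):
        return a - b * r ** c

    se_params, _ = curve_fit(se_model, rank, log_v, p0=[log_v[0], 0.1, 0.5], maxfev=20000)
    se_fit = se_model(rank, *se_params)

    for name, fit in [("Zipf", zipf_fit), ("stretched exponential", se_fit)]:
        rmse = np.sqrt(np.mean((fit - log_v) ** 2))
        print(f"{name}: RMSE in log domain = {rmse:.4f}")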

  • Research on equalization scheduling algorithm for network channel under the impact of big data   Order a copy of this article
    by Zheng Yu, Dangdang Dai, Zhiyong Zha, Yixi Wang, Hao Feng 
    Abstract: In order to improve the equalization scheduling ability of network channels, an equalization scheduling algorithm for big data network communication based on baud-spaced equalization and decision feedback modulation technology is proposed in this paper. With this algorithm, a model of the network communication channel under the impact of big data is constructed to analyze the multipath characteristics of the channel; a coherent multipath channel modulation method is used to filter intersymbol interference, and adaptive baud-spaced equalization technology is used to design the channel equalization; a tap delay line channel model is used for multipath suppression, and decision feedback modulation is used for equalization scheduling of the network channel to overcome the phase shift caused by the impact of big data on the channel and to improve channel equalization. Simulation results show that when the proposed algorithm is used for network channel equalization scheduling, the fidelity of the symbols output by the network communication is good, the bit error rate is low, and the equalization scheduling performance under the impact of big data and multipath is good, which improves the robustness of the network channel.
    Keywords: big data; multipath effect; network channel; equalization scheduling; baud space; modulation.

  • Study on high accurate localization using multipath effects via physical layer information   Order a copy of this article
    by Yajun Zhang, Yawei Chen, Hongjun Wang, Meng Liu, Liang Ma 
    Abstract: Device-free passive localization (DFL) has shown great potential for localizing targets that do not carry any device in an area of interest (AoI). It is especially useful for many applications, such as hostage rescue, wildlife monitoring, elder care and intrusion detection. Current RSS (received signal strength) based DFL approaches, however, only work under the prerequisite that the collected signal mainly propagates along a direct line-of-sight (LOS) path, and cannot perform well in a typical indoor building with multipath effects. This paper presents a fine-grained CSI-based localization system that is effective in multipath and non-line-of-sight scenarios. The intuition underlying our design is that CSI (Channel State Information) benefits from the multipath effect, because the received signal measurements at different sampling positions are combinations of different CSI measurements. We adapt an improved maximum likelihood method to pinpoint a single target's location. Finally, we implement a prototype of our design using commercial IEEE 802.11n NICs. Results from experiments in a lobby and a laboratory of our university, compared with RSS-based and CSI-based schemes, demonstrate that our design can locate a single target with an accuracy of 0.95 m.
    Keywords: device-free passive localization; channel state information; improved maximum likelihood method.

  • A fast particle filter pedestrian tracking method based on color, texture and corresponding space information   Order a copy of this article
    by Yang Zhang, Dongrong Xin 
    Abstract: A fast particle filter pedestrian tracking method based on color, texture and corresponding space information is proposed. In this algorithm, we first extract the space information of the target pedestrian and divide it into three local regions. We then employ an improved texture and color information extraction algorithm to obtain joint texture and color information from the corresponding sub-regions. Finally, we determine the position of the object using a color-texture similarity indicator based on the space division and obtain accurate tracking results. Since the multi-thread information fusion algorithm needs a larger number of particles, which reduces computational efficiency, a wave integral histogram algorithm is proposed to improve the processing speed. Experiments carried out on videos indicate the effectiveness and efficiency of the proposed method, which achieves higher accuracy than two other state-of-the-art algorithms in actual traffic scenes, while its real-time performance is also improved considerably.
    Keywords: pedestrian tracking; particle filter; integral histogram; texture information.

  • Velocity Monitoring Signal Processing Method of Track Traffic Based on Doppler Effect   Order a copy of this article
    by Xiaojuan Hu, Tie Chen, Nan Zhao 
    Abstract: In order to improve the efficiency of track traffic speed monitoring, a signal processing method based on the Doppler effect is proposed to meet the need for accurate velocity measurement of high-speed trains. Accordingly, the track traffic radar monitoring signal processing method based on the Doppler effect is studied. First, the Doppler effect and Doppler principles are analyzed. Then the signal processing algorithm of the rail traffic radar system is examined, and an improved real-sequence FFT (fast Fourier transform) imaginary-part arithmetic algorithm is put forward, simplifying the FFT processing. Finally, the spectrum analysis effect of the improved FFT algorithm is verified through Matlab simulation. The test results show that the improved FFT algorithm preserves measurement accuracy while speeding up the Doppler frequency calculation, and that it can meet the processing requirements of the rail traffic radar velocity measurement system for track traffic velocity monitoring signals.
    Keywords: Signal processing; Doppler effect; Track traffic; FFT algorithm.
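
    The velocity measurement itself rests on the standard Doppler relation f_d = 2*v*f_c/c followed by a spectral peak search; the sketch below illustrates that chain on a simulated echo with illustrative radar parameters (the paper's improved real-sequence FFT variant is not reproduced here).

    import numpy as np

    c = 3e8                       # propagation speed, m/s
    f_carrier = 24.15e9           # assumed radar carrier frequency, Hz
    fs = 50e3                     # sampling rate of the mixed-down echo, Hz
    v_true = 83.3                 # train speed, m/s (about 300 km/h)

    # Doppler shift of a target approaching the radar: f_d = 2 * v * f_c / c
    f_d = 2.0 * v_true * f_carrier / c
    t = np.arange(4096) / fs
    echo = np.cos(2 * np.pi * f_d * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

    # Real-input FFT of the windowed echo; the peak bin gives the Doppler frequency
    spectrum = np.abs(np.fft.rfft(echo * np.hanning(echo.size)))
    freqs = np.fft.rfftfreq(echo.size, d=1.0 / fs)
    f_meas = freqs[np.argmax(spectrum[1:]) + 1]        # skip the DC bin

    v_meas = f_meas * c / (2.0 * f_carrier)
    print(f"Doppler {f_meas:.1f} Hz -> estimated speed {v_meas:.1f} m/s")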

  • Research and Application on Logistics Distribution Optimization Problem using Big Data Analysis   Order a copy of this article
    by Yuming Duan, Hai-tao FU 
    Abstract: The optimization of logistics distribution center location is discussed in the big data environment. The features of logistics distribution and the algorithm design idea are presented using the basic MapReduce platform integrated with a data mining clustering algorithm. A clustering analysis algorithm based on geodesic distance is proposed, combined with the features of logistics and with a parallel algorithm design and improvement scheme based on MapReduce. In real situations there is no straight-line distance between nodes, while the Dijkstra distance can measure the actual distance between two points, so a GDK-means clustering algorithm is put forward. The improved clustering algorithm is parallelized to process large amounts of unstructured logistics data. In the parallel design, the algorithm is further improved by taking into account its complexity and time efficiency. Our clustering algorithm can be applied to the logistics distribution center location problem. It is verified to provide a decision scheme for logistics route optimization in the logistics distribution chain according to the granularity of the space division.
    Keywords: logistics distribution; center location; k-means; geodesic distance; big data.
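
    The abstract does not give the GDK-means pseudocode, so the sketch below shows one plausible reading of the idea: all-pairs road distances from Dijkstra's algorithm feed a medoid-based k-clustering (on a graph there is no meaningful "mean", so each center is the member minimizing total geodesic distance); the toy network and the update rule are illustrative assumptions.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import dijkstra

    # Toy road network: 6 demand points, edge weights are road distances in km
    edges = {(0, 1): 2.0, (1, 2): 2.5, (2, 3): 1.5, (3, 4): 3.0, (4, 5): 2.0, (0, 5): 6.0}
    n = 6
    w = np.zeros((n, n))
    for (i, j), d in edges.items():
        w[i, j] = w[j, i] = d
    dist = dijkstra(csr_matrix(w), directed=False)       # all-pairs geodesic distances

    def graph_kmedoids(dist, k, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        medoids = rng.choice(dist.shape[0], size=k, replace=False)
        for _ in range(iters):
            labels = np.argmin(dist[:, medoids], axis=1)             # nearest medoid
            new_medoids = medoids.copy()
            for c in range(k):
                members = np.where(labels == c)[0]
                if members.size:
                    # new center = member minimizing total geodesic distance in its cluster
                    costs = dist[np.ix_(members, members)].sum(axis=1)
                    new_medoids[c] = members[np.argmin(costs)]
            if np.array_equal(new_medoids, medoids):
                break
            medoids = new_medoids
        return medoids, np.argmin(dist[:, medoids], axis=1)

    centers, labels = graph_kmedoids(dist, k=2)
    print("distribution centers at nodes:", centers, "assignment:", labels)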

  • Efficient Scheduling of Power Communication Resources Based on Fuzzy Weighted Constraint Equalization   Order a copy of this article
    by Jinkun Sun, Kang Yang 
    Abstract: Resources in the power communication system are affected by crosstalk between power grid subnets during scheduling, resulting in poor equalization of resource allocation. In order to improve the equalization of resource allocation in the power communication system, an efficient scheduling method for power communication system resources based on fuzzy weighted constraint equalization is proposed. In this method, a nonlinear time series analysis method is used to construct an information flow model of power communication system resource data, and channel equalization control is carried out in the transmission links of the power communication system; the fuzzy weighted constraint equalization method is used to implement adaptive baud-spaced equalization during resource scheduling, and a fuzzy mesh-based clustering algorithm is used to pair the classification attribute weights of power resources, so as to achieve efficient clustering of resources and to improve link equalization in resource scheduling. Simulation results show that with this method the average data recall rate during resource scheduling reaches about 90% and the equalization curve changes very smoothly, which indicates that the data recall rate and scheduling efficiency are high and the equalization capability of the communication channels is strong.
    Keywords: power communication system; resource scheduling; data clustering; channel equalization.

  • Mechanical fault diagnosis based on digital image processing technology   Order a copy of this article
    by Hengqiang Gao, HongJuan CAI 
    Abstract: Modern mechanical equipment continues to deliver high productivity, but at the same time it has brought many new challenges and problems. It is of great importance to detect and diagnose mechanical faults in time and then provide alarm messages. In this paper we therefore propose a novel mechanical fault diagnosis method based on digital image processing technology. Firstly, the overall structure of the mechanical equipment fault dynamic motion detection system is illustrated, in which infrared image segmentation is the key part. Secondly, we convert the infrared image segmentation problem into a clustering problem and provide a novel immune neural network clustering algorithm. Finally, experimental results demonstrate that the proposed algorithm can detect mechanical faults with high accuracy.
    Keywords: Mechanical fault diagnosis; Infrared image; Image segmentation; Immune neural network; Fitness value.

  • Performance Analysis of Distributed Transmit Beamforming in Presence of Oscillator Phase Noise   Order a copy of this article
    by Ding Yuan, Chuanbao Du, Houde QUAN 
    Abstract: This paper analyzes the performance of distributed transmit beamforming (DTBF) in the presence of oscillator phase noise. The average beampattern of an arbitrary array is adopted as the performance index. In the analysis, the phase noise process of each node oscillator is described by a stationary model that takes into account the shape of the single-sideband (SSB) spectrum. Using a non-parametric kernel method, the average beampattern in the presence of phase noise is derived and the corresponding beampattern characteristics are evaluated. Theoretical analysis and simulation results show that the accumulated phase noise degrades DTBF performance, which is also reflected in changes of beampattern characteristics such as the 3 dB width, the 3 dB sidelobe region and the average directivity. As the time duration increases, the cumulative phase noise becomes greater and the degradation more obvious.
    Keywords: distributed transmit beamforming; average beampattern; phase noise; non-parametric kernel method; oscillator; performance analysis.

  • The Research of Image Super-Resolution Algorithm Using Convolutional Sparse Coding   Order a copy of this article
    by Bin Wang, Jun Deng, YanJing Sun 
    Abstract: Building on super-resolution image reconstruction with convolutional sparse coding, a novel super-resolution reconstruction algorithm, the four-channel convolutional sparse coding method, is proposed by improving the convolutional sparse coding method. In the proposed method, a test image is expanded into four channels by rotating the image by ninety degrees four times. The high-frequency part and the low-frequency part are then reconstructed by convolutional sparse coding and cubic interpolation, respectively. Finally, the reconstructed high-resolution image is obtained by weighting the four images. The proposed method not only overcomes the consistency problem of overlapping patches, but also improves the detail contours of the reconstructed image and enhances its stability. Experimental results show that the proposed method achieves better PSNR, SSIM and noise immunity than several classical super-resolution reconstruction methods.
    Keywords: Image Reconstruction; Super-Resolution; Convolutional Sparse Coding; Four-Channel; Stability.

  • Three-dimensional Dynamic Tracking Learning Algorithm for Pedestrians on Indefinite Shape Base Based on Deep Learning   Order a copy of this article
    by Yaomin Hu 
    Abstract: In order to improve three-dimensional dynamic tracking and recognition of pedestrians, a three-dimensional dynamic pedestrian tracking learning algorithm on an indefinite shape base, based on deep learning, is proposed in this paper. First, the indefinite shape base mesh of body imaging is segmented to extract three-dimensional dynamic similarity features of pedestrians, and the three-dimensional feature points are marked; the deep learning method is adopted for fusion of gray pixel values and extraction of difference features from images during three-dimensional dynamic tracking. A motion vector library is then constructed from the extraction results, and the template matching equation of the three-dimensional dynamic feature points of pedestrians is obtained. The simulation results show that this method can accurately track moving bodies in three-dimensional dynamic tracking and recognition and provides good robustness in moving body target extraction, with accuracy up to 100% and detection time of at most 48.83 ms.
    Keywords: indefinite shape base; pedestrian; three-dimensional dynamic tracking; deep learning; image.
    DOI: 10.1504/IJICT.2019.10013628
     
  • A 3D model retrieval method based on multi-feature fusion   Order a copy of this article
    by Hong Tu 
    Abstract: 3D model retrieval is a hot topic in information retrieval, and fusing multiple features of 3D models is important for achieving high-quality retrieval. In this paper we therefore propose a novel 3D model retrieval method based on multi-feature fusion. The motivation for the proposed method lies in converting the 3D model retrieval problem into a discriminative feature space mapping problem. The framework of the multi-feature fusion based 3D model retrieval system contains two main modules: 1) model normalization and 2) multi-feature fusion. The proposed method is designed around multiple feature fusion and online projection learning. In order to effectively fuse multiple features, we train a model to learn a low-dimensional and discriminative feature space from the multiple views of 3D models. In particular, to effectively retrieve newly added samples, we propose an online projection learning algorithm, which learns a projection matrix by solving a least squares regression model. Experimental results show that the proposed method achieves higher precision for a given recall than other methods; that is, it obtains higher-quality 3D model retrieval results than state-of-the-art methods.
    Keywords: 3D model retrieval; Multi-feature fusion; Visual feature; Eigenvalue decomposition; Projection matrix.

  • Dual control algorithms for fault diagnosis of stochastic systems with unknown parameters   Order a copy of this article
    by Jinkun Sun, Kang Yang 
    Abstract: This paper studies the problem of fault diagnosis for stochastic systems characterized by slowly changing, unknown parameters. It puts forward the concept of a threshold-based logical parameter decay rate and designs a rolling control algorithm with a learning control law based on Kalman filtering theory and the LQG control law. The algorithm estimates the parameters while learning them, giving the system good fault tolerance and thus more accurate fault detection and isolation. Simulation results verify the effectiveness of the proposed method.
    Keywords: Dual control; Parameter decay rate; Kalman filter; LQG control; Rolling control algorithm.
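
    A minimal sketch of the parameter-tracking part of the idea: a scalar Kalman filter follows a slowly decaying system parameter and a fault is declared once the estimate falls below a threshold; the noise levels, drift profile and threshold are invented for illustration, and the LQG/dual-control side of the method is not shown.

    import numpy as np

    rng = np.random.default_rng(0)
    T = 200
    true_param = np.concatenate([np.full(120, 1.0), 1.0 - 0.01 * np.arange(80)])  # decays after t = 120
    meas = true_param + 0.05 * rng.standard_normal(T)

    q, r = 1e-4, 0.05 ** 2        # process / measurement noise variances
    x, p = 1.0, 1.0               # initial estimate and its variance
    threshold = 0.7
    for t in range(T):
        p = p + q                  # predict: random-walk model for the slow parameter
        k = p / (p + r)            # Kalman gain
        x = x + k * (meas[t] - x)  # update with the new measurement
        p = (1.0 - k) * p
        if x < threshold:
            print(f"fault declared at step {t}, estimated parameter {x:.3f}")
            break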

  • The Research of Multimedia Dynamic Image 3d Adaptive Rendering Method   Order a copy of this article
    by Su-ran KONG, Jun-ping YIN 
    Abstract: 3D adaptive rendering of multimedia dynamic images helps to improve image quality. Current methods render multimedia dynamic images by geometric information scenario modeling, which suffers from low rendering efficiency. To solve this problem, a 3D adaptive rendering method based on OGRE is presented in this paper. The method first uses the compressed domain to correct the segmentation of the image, then uses the SIFT operator and Forstner operator to classify image characteristics, and finally completes 3D adaptive image rendering according to the image array. The experimental results show that this method has obvious advantages in classification time, rendering energy consumption, segmentation efficiency and other aspects in comparison with other methods, which fully demonstrates that it improves the rendering efficiency of images.
    Keywords: multimedia; dynamic image; 3D adaptive; rendering methods.

  • Automated Random Color Keypad   Order a copy of this article
    by Kumar Abhishek, Manish Kumar Verma, M.P. Singh 
    Abstract: In the early 1970s, the automated teller machine (ATM) came into existence as a replacement for cash counters at banks. People could now carry out transactions 24x7 with ease. But as ATMs expanded their reach into everyday life, crimes related to ATM theft and fraud increased exponentially. There are flaws in ATM security that are exploited by criminals: ATM card cloning, card skimming and ATM PIN theft are among the most common ATM-related crimes. These crimes result in losses measured in billions, and the newspapers are full of such crime reports and hacks. In this paper, we propose an automated random color keypad by which ATM security can be enhanced, making these ATM attacks and frauds very difficult. This ARC keypad acts as a secure replacement for the traditional ATM keypad. Several experimental results for the ARC keypad are included in this paper, which prove that our mechanism enhances ATM PIN security and greatly reduces the chances of fraud.
    Keywords: automated teller machine; ATM; ATM keypad; ATM cloning; ATM skimming.
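
    The abstract does not describe the internal scheme of the ARC keypad, so the snippet below only illustrates the general idea of a per-session randomized keypad: both the digit layout and a digit-to-color mapping are reshuffled with a cryptographically strong RNG each session, breaking the fixed position/appearance association that skimmers rely on; the mechanism shown is hypothetical, not the authors' design.

    import secrets

    COLORS = ["red", "green", "blue", "yellow", "orange",
              "purple", "cyan", "magenta", "white", "black"]

    def new_session_keypad():
        digits = list("0123456789")
        rng = secrets.SystemRandom()            # cryptographically strong shuffles
        layout = digits[:]
        rng.shuffle(layout)                     # where each digit appears on the pad
        colors = COLORS[:]
        rng.shuffle(colors)
        color_of = dict(zip(digits, colors))    # color shown for each digit this session
        return layout, color_of

    layout, color_of = new_session_keypad()
    print("keypad layout this session:", layout)
    print("digit 7 is shown in:", color_of["7"])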
    DOI: 10.1504/IJICT.2019.10018383
     
  • NFK: a Novel Fault-tolerant K-Mutual Exclusion Algorithm for Mobile and Opportunistic Ad hoc Networks   Order a copy of this article
    by Tahar Allaoui, Mohamed Bachir Yagoubi, Chaker Abdelaziz Kerrache, Carlos T. Calafate 
    Abstract: This paper presents a fault-tolerant algorithm ensuring multiple-resource sharing in Mobile Ad hoc Networks (MANETs) that is able to handle the known k-mutual exclusion problem in such mobile environments. The proposed algorithm relies on a token-based strategy and requires information about resources and their use to be carried in routing protocol control messages. This way, our solution avoids any additional exchange of messages. Furthermore, experimental results show that it offers a fast response time. Moreover, we introduce a dual-layer fault-tolerance mechanism that tolerates the faults of several sites at the same time without affecting the proper functioning of the system. Simulation results also evidence the high efficiency of our proposal, which achieves reduced overhead and response delay even in critical situations where multiple simultaneous faults occur.
    Keywords: NFK; Resource sharing; K-Mutual Exclusion; Fault tolerance; MANETs.

  • GIS Information Feature Estimation Algorithm Based on Big Data   Order a copy of this article
    by Chunyang Lu, Feng WEN 
    Abstract: In order to improve the data mining and information scheduling capabilities of geographic information systems (GIS), it is necessary to optimize GIS information feature estimation and perform GIS information feature extraction, so a GIS information feature estimation algorithm based on big data analysis is proposed. In this algorithm, a piecewise linear estimation method is adopted to reconstruct feature data in the GIS information database in groups, associated information fusion is performed on the GIS data in the database, and adaptive scheduling is applied to the GIS information feature database through a cascaded distributed scheduling method; according to the spatial distribution of geographic information, vector adjustment is performed on the cluster centers, the frequent item mining method is adopted to extract GIS information features, and sequential processing is applied to the extracted feature quantities; a regularized power density spectrum estimation method is then used to perform unbiased estimation of the GIS information feature data. Simulation results show that the proposed method provides estimation with low bias and high accuracy, so it has good GIS information scheduling capability and precision.
    Keywords: big data; GIS; information feature estimation; associated information fusion.

  • Uncertain chance-constrained programming based on optimistic and pessimistic values: models and solutions   Order a copy of this article
    by Yao Qi, Ying Wang, Xiangfei Meng, Ning Wang 
    Abstract: To deal with uncertainty in real decisions and overcome the limitations of stochastic programming and fuzzy programming in application, we propose two novel uncertain chance-constrained programming models based on the optimistic and pessimistic values of uncertain variables. Firstly, the optimistic value and pessimistic value of uncertain variables are introduced as the objective functions, the chance constraints of uncertain programming are defined as constraint functions, and the optimistic value model and pessimistic value model are established. Secondly, two lemmas are proposed and proved to transform the uncertain chance-constrained programming models into equivalent deterministic programming models. Finally, the feasibility and effectiveness of the proposed models and solutions are verified by a numerical example.
    Keywords: uncertain programming; chance-constrained; optimistic value; pessimistic value; equivalent deterministic model.
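
    For readers unfamiliar with the terminology: in Liu-style uncertainty theory with uncertain measure \mathcal{M}, the alpha-optimistic and alpha-pessimistic values of an uncertain variable \xi are usually defined as
    \[
    \xi_{\sup}(\alpha)=\sup\{r \mid \mathcal{M}\{\xi\ge r\}\ge\alpha\},\qquad
    \xi_{\inf}(\alpha)=\inf\{r \mid \mathcal{M}\{\xi\le r\}\ge\alpha\},\qquad \alpha\in(0,1],
    \]
    and an optimistic-value (maximax) chance-constrained model then takes the form
    \[
    \max_{x}\ \max_{\bar f}\ \bar f \quad \text{s.t.}\quad \mathcal{M}\{f(x,\xi)\ge\bar f\}\ge\beta,\qquad \mathcal{M}\{g_j(x,\xi)\le 0\}\ge\alpha_j,\ j=1,\dots,p,
    \]
    with the pessimistic-value model obtained analogously; the exact models and lemmas in the paper may differ in detail.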

  • Multilingual Named Entity Recognition Based on the BiGRU-CNN-CRF Hybrid Model   Order a copy of this article
    by Maimaiti AYIFU, Silamu WUSHOUER, Muhetaer PALIDAN 
    Abstract: Uyghur, Kazak and Kyrgyz (UKK languages) are agglutinative languages belonging to the Altaic language family, spoken mainly in the Xinjiang Uyghur Autonomous Region of China and in Central Asia; they are low-resource languages with rich morphological features. How to obtain a good general entity recognition method without relying on hand-crafted features and resources is a problem that remains to be solved. In this paper, a hybrid neural network model based on bidirectional GRU (BiGRU)-CNN-CRF is proposed. This model takes as input, for each word, the concatenation of the prefix and suffix character-level feature vectors captured by the convolutional neural network (CNN) layer, part-of-speech vectors and word vectors, and constructs a BiGRU-CRF deep neural network suitable for the recognition of UKK named entities. The output of the BiGRU layer is decoded by the conditional random field (CRF) layer, so that the dependencies between output tags are considered, and the globally optimal labeling sequence is output. The experimental results show that this model can solve the problem of automatic recognition of named entities, achieving the best results to date on the UKK dataset provided by our laboratory. In addition, the model has good robustness. The F1 value of named entity recognition reached 93.11%, 90.29% and 89.22% for the Uyghur, Kazak and Kyrgyz languages, respectively. The model has been verified on named entity recognition tasks in three languages and can be extended to named entity recognition in other agglutinative languages.
    Keywords: Recurrent neural network; convolutional neural network; conditional random field; named entity recognition; Uyghur; Kazak; Kyrgyz.

  • Latent Semantic Text Classification Method Research based on Support Vector Machine   Order a copy of this article
    by Qingmei Lu, Yulin Wang 
    Abstract: Text classification, as an important step in network public opinion analysis, directly affects the judgment of textual public opinion, and its accuracy is an important prerequisite for textual public opinion analysis. At present, commonly used text classification methods mainly rely on clustering and machine learning, and in general their accuracy is not ideal. Text classification based on latent semantics is insensitive to the feature dimension and simple to apply, so it has become a focus of extensive research. However, as the number of text categories increases, local semantic analysis occurs, causing the classification accuracy of text to drop. In this paper, a latent semantic classification method based on support vector machines (LR-LSA) is proposed to solve the local semantic analysis problem brought about by a large number of text categories; it can better handle the impact of the surge in feature dimension on classification performance.
    Keywords: LSA; Semantics; Vector machine; Machine learning.
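
    A generic latent-semantic-plus-SVM pipeline of the kind described above can be assembled with scikit-learn as follows; the corpus, the 100-dimensional latent space and the linear kernel are placeholders rather than the paper's LR-LSA configuration.

    from sklearn.datasets import fetch_20newsgroups
    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics import accuracy_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import Normalizer
    from sklearn.svm import LinearSVC

    train = fetch_20newsgroups(subset="train",
                               categories=["sci.space", "rec.autos", "talk.politics.misc"])
    test = fetch_20newsgroups(subset="test", categories=train.target_names)

    clf = make_pipeline(
        TfidfVectorizer(stop_words="english", max_features=20000),
        TruncatedSVD(n_components=100, random_state=0),   # the latent semantic analysis step
        Normalizer(copy=False),
        LinearSVC(C=1.0),                                 # linear SVM on the latent features
    )
    clf.fit(train.data, train.target)
    print("test accuracy:", accuracy_score(test.target, clf.predict(test.data)))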

  • Autonomous Multi-target Tracking Technology of Unmanned Surface Vessel Based on Navigation Radar Image   Order a copy of this article
    by JiaWei Xia, DeChao Zhou, Xufang Zhu 
    Abstract: There are three prominent problems in traditional multi-target tracking technologies for unmanned surface vessels: repeated observations and observation omissions due to the unstable reference system of the vessel, the lack of data concerning radar observation point features and the low utilization of such data, and the intermittent loss of radar image sequence signals due to environmental interference. In order to solve these problems, a radar image stitching algorithm based on interpolation, an improved target multi-feature extraction algorithm and a multi-target track management model based on multi-feature matching are introduced within the architecture of a multi-target tracking system for unmanned surface vessels, to improve the timeliness and accuracy of target tracking. The feasibility of this technology has been tested through field experiments on a lake. The results reveal that the proposed method can track multiple targets with lower prediction error than traditional target tracking methods.
    Keywords: unmanned surface vessel; navigation radar; multi-target tracking; autonomous system.

  • MEM: A New Mixed Ensemble Model for Identifying Frauds   Order a copy of this article
    by Chen Zhenhua, Jiang WeiLi, Lei Ma, Zhang JunPeng, Hu JinShan, Xiang Yan, Shao DangGuo 
    Abstract: In the social security system, wilful insurance fraud still exists. In this paper, to address the insufficient stability and the randomness of the traditional insurance fraud evaluation model, we propose a new classifier called MEM (Mixed Ensemble Model). Based on the principle of ensemble learning, MEM combines several different individual learners and uses Q statistical methods to evaluate their diversity. MEM has been tested on two fraud-related datasets against three state-of-the-art classifiers: Neural Network, Naive Bayes and Logistic Regression. The experimental results show that MEM performs better than the other three classifiers on both datasets under the four measures of accuracy, recall, F-value and Kappa. MEM can be a useful method for the detection of social insurance fraud.
    Keywords: Detection of social insurance frauds; Measurement of social insurance frauds; Social insurance identification techniques; Mixed ensemble model.
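
    As a rough illustration of combining heterogeneous learners and checking their pairwise diversity with the Q statistic, the sketch below uses scikit-learn on synthetic imbalanced data; the member models, soft-voting rule and data are stand-ins, since the abstract does not specify MEM's exact composition.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

    members = [("lr", LogisticRegression(max_iter=1000)),
               ("nb", GaussianNB()),
               ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0))]
    ensemble = VotingClassifier(estimators=members, voting="soft").fit(Xtr, ytr)
    print("ensemble accuracy:", ensemble.score(Xte, yte))

    def q_statistic(correct_i, correct_j):
        """Yule's Q over two members' correct/incorrect indicator vectors."""
        n11 = np.sum(correct_i & correct_j)            # both correct
        n00 = np.sum(~correct_i & ~correct_j)          # both wrong
        n10 = np.sum(correct_i & ~correct_j)
        n01 = np.sum(~correct_i & correct_j)
        return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

    correct = {name: est.fit(Xtr, ytr).predict(Xte) == yte for name, est in members}
    print("Q(lr, nb) =", q_statistic(correct["lr"], correct["nb"]))
    print("Q(lr, mlp) =", q_statistic(correct["lr"], correct["mlp"]))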

  • Research on Security Assessment Algorithm of Navigation control system based on Big Data   Order a copy of this article
    by Jianxin Ge, Jiaomin Liu 
    Abstract: There is a trend towards integration and modularization in navigation control systems, which are characterized by high resource sharing, fast information transmission and the integration of software and hardware, and which therefore have high safety requirements. In view of the low accuracy of traditional algorithms, a new big-data-based security assessment algorithm for navigation control systems is proposed. The structure of the navigation control system is introduced, and its realization is analyzed. In the big data environment, the CCS (Common Criteria) is used to determine the safety assessment objectives of the navigation control system. According to the different safety weights among the areas in the criterion layer, pilots determine the weights of the assessment objectives by the analytic hierarchy process, based on their evaluation of the navigation control system and the corresponding safety rating indices. The GRAP algorithm is used to derive the security level of the navigation control system, so as to realize its safety assessment. Experimental results show that the proposed algorithm can effectively improve assessment accuracy and reduce the misclassification rate thanks to its comprehensive capability.
    Keywords: Big data; Navigation control system; Safety; Assessment; Aircraft.

  • Decision of Mechanical Allocation in Subgrade Earthwork Construction under Uncertainty   Order a copy of this article
    by Bo Wang, Ying Wang, Lijie Cui 
    Abstract: Two phenomena are common in subgrade earthwork construction: queuing caused by conflicts over mechanical resources, and lags in the construction schedule caused by uncertainties. Both have a large impact on mechanical allocation. Based on Petri nets, queuing theory and uncertainty theory, we first establish a model of the whole construction process under uncertain conditions. Secondly, we solve the mechanical configuration problem according to a designed construction schedule. Thirdly, we add uncertain factors into the mechanical allocation. Finally, through a numerical application, we verify and compare the decisions above.
    Keywords: subgrade earthwork construction; mechanical allocation; uncertain conditions.

  • Study of Big Data Mining Based on Cloud Computing   Order a copy of this article
    by Jiangyi Du, Fu-ling Bian 
    Abstract: In the era of big data, how to discover knowledge from various types of massive data is an important research direction for big data processing technology, and this is where the value of big data mining lies. Based on a comparison of conventional data mining and big data mining, this paper discusses typical data mining algorithms, especially their parallel implementation, and analyzes the architecture of a big data mining system and the framework of a big data mining platform based on cloud computing, providing a reference for users' understanding and application of big data mining.
    Keywords: Big Data; Data Mining; Cloud Computing.

  • Part-Based Pyramid Loss for Person Re-Identification   Order a copy of this article
    by Yuanyuan Wang, Zhijian Wang, Mingxin Jiang 
    Abstract: Person re-identification (ReID) is a challenging problem in computer vision that has also attracted the attention of industry. Person ReID focuses on identifying a person across multiple cameras. A key under-addressed problem is learning a good metric for measuring the similarity among images. Recently, deep learning networks with metric learning losses, such as the triplet loss and its variants, have become a common framework for person ReID. However, previous methods mainly use distance to measure similarity, and distance measures are sensitive to scale changes. In this paper, we propose a part-based pyramid loss to learn a better similarity metric for person ReID, with batches of quadruplet samples as input. Specifically, we simultaneously use the distance and angle relationships among samples to learn the local body-part features of person images. Our approach uses the pyramid relationship in triangles as a measure of similarity, minimizing the angle at the negative point of the triangle. The pyramid loss learns a better similarity metric and achieves higher performance on person ReID benchmark datasets. The experimental results show that our method yields accuracy competitive with state-of-the-art methods.
    Keywords: Person Re-identification; Metric Learning; Pyramid Loss; Part-based.

  • Feature Extraction Algorithm for Fast Moving Pedestrians with Frame Drop Constraint based on Deep Learning   Order a copy of this article
    by Yaoming Hu 
    Abstract: When existing methods extract information about fast-moving pedestrians, frame dropping may occur, resulting in low extraction precision. A feature extraction algorithm with a frame-drop constraint for fast-moving pedestrians, based on deep learning, is therefore proposed. Block matching and denoising are performed on the pedestrian image. The contour feature extraction method is used to reconstruct adjacent frames, and the reconstructed image frame vectors are fused sub-block by sub-block. The deep learning algorithm is used to extract gray-pixel features from the frame-dropped part of the image, improving feature extraction for pedestrians under frame-drop constraints. The simulation results show that the standard deviation of the extraction result is 8.235 with frame drops and 4.353 without, demonstrating that the algorithm has a low frame-drop rate and high extraction and recognition ability.
    Keywords: pedestrian; frame drop; feature extraction; tracking and identification; deep learning.

  • Building Energy Consumption Forecasting Algorithm Based on Piecewise Linear Fusion and Exponential Spectrum Analysis   Order a copy of this article
    by Chenqiang Zhan 
    Abstract: In order to solve the problem of the large errors of traditional statistical prediction methods, a big data prediction method for building energy consumption based on piecewise linear fusion and exponential spectrum analysis is proposed. The method establishes a target model of building energy consumption prediction and carries out nonlinear exponential sequence analysis, together with a game analysis of building energy consumption; the piecewise linear fusion method is used to decompose the features of the building energy consumption map, and statistical analysis is carried out. According to the evolution of the feature decomposition and the learning trends, analysis and accurate prediction of building energy consumption big data are realized. The simulation results show that the method reduces energy consumption, is conducive to building energy saving, emission reduction and green building, provides a new idea for building energy conservation, and offers scientific support for the development of energy-efficient and environmentally friendly buildings.
    Keywords: big data environment; building energy consumption; forecasting algorithm; map feature analysis.

  • CNN-based Text Multi-Classifier using Filters Initialized by N-gram Vector   Order a copy of this article
    by Yan Xiang, Ying Xu, Zhengtao Yu, Hongbin Wang, Yantuan Xian 
    Abstract: Text classification based on Convolutional Neural Networks (CNN) has attracted increasing attention recently. This paper presents an improved CNN-based text multi-classifier. First, word vectors are trained on the corpus to be classified. Then, the most important n-grams for a particular category are selected and clustered into different groups. Finally, the centroid vectors of the different groups are used to initialize the center weights of the filters. This weight initialization enables the CNN to extract n-gram features more effectively and ultimately improves the text classification results. Multi-classification experiments using multiple advanced models were performed on different datasets. The experiments show that the proposed model is more accurate and stable than the other baseline models.
    Keywords: Convolutional Neural Networks; Text Classification; n-gram; Word Embedding; Clustering.
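
    The central trick, initializing convolution filters from the centroid vectors of n-gram clusters instead of random weights, can be sketched in PyTorch as follows; the embedding size, n-gram length, filter count and the randomly drawn "centroid" word ids are placeholders for the quantities the paper derives from the corpus.

    import torch
    import torch.nn as nn

    emb_dim, n, num_filters, vocab = 50, 3, 8, 1000
    embedding = nn.Embedding(vocab, emb_dim)
    conv = nn.Conv1d(in_channels=emb_dim, out_channels=num_filters, kernel_size=n)

    # one centroid n-gram (as word ids) per filter; random ids stand in for the
    # centroids of the selected n-gram clusters
    centroid_ngrams = torch.randint(0, vocab, (num_filters, n))
    with torch.no_grad():
        # embed each centroid n-gram -> (num_filters, n, emb_dim), then swap axes
        # to match the Conv1d weight layout (out_channels, in_channels, kernel_size)
        conv.weight.copy_(embedding(centroid_ngrams).transpose(1, 2))
        conv.bias.zero_()

    tokens = torch.randint(0, vocab, (4, 30))                 # a mini-batch of token ids
    x = embedding(tokens).transpose(1, 2)                     # (batch, emb_dim, seq_len)
    features = torch.relu(conv(x)).max(dim=2).values          # max-over-time pooling
    print(features.shape)                                     # torch.Size([4, 8])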

  • Source Code Based Context-Sensitive Dynamic Slicing of Web Applications   Order a copy of this article
    by Jagannath Singh, D.P. Mohapatra 
    Abstract: Web applications are broadly utilized for spreading business around the globe. To meet the needs of clients, web applications must have better quality and robustness than other applications. Web slicing improves the understanding of important information, which in turn improves the quality of the web application. The system dependence graph is the most popular intermediate representation for explicitly representing all the dependencies that have to be considered in slicing; it has been extended here to the Web Dependence Graph (WDG), and a partial tool has been developed for automatic generation of the WDG. We propose a Context-Sensitive Web Slicing (CSWS) algorithm for computing slices using the WDG. During our literature survey, we noticed that the majority of automatic graph generation tools are based on byte-code, whereas our tool performs dependency analysis on the source code of the given program. Using our WDG tool, we compare the performance of the proposed CSWS algorithm with other closely related slicing techniques.
    Keywords: Program Slicing; JSP Application; Source Code Analysis; Context sensitive; Dynamic Slicing.

  • An Item Recommendation Model with Content Semantic   Order a copy of this article
    by Yunpeng Jiang, Liejun Wang, Ji-Wei Qin 
    Abstract: Current recommender service providers offer interesting items to users based on user behavior (e.g. user ratings and trust values) and manually tagged video features, ignoring the content semantics of items. As an accurate reflection of the item content, the item semantics should be taken into account to avoid the subjective feelings of users who tag item features. We therefore present a recommender model that leverages content semantics and user ratings. In this model, the item similarity is first calculated from the content semantics using the Word2vec method, where the item content is mapped as words into a vector space, the Euclidean distance between vectors describes the similarity between items, and the distances are sorted in ascending order to form one item list recommended by content semantics. Next, the user ratings are used to model user preference and build another item list recommended by a traditional recommendation method, such as SVD. The two lists are then mixed together as the final item list recommended to the user. Comparing this algorithm with traditional collaborative filtering on rating matrices of different sparsity (Movielens, Filmtrust and Online_Retail), our experiments show that the presented algorithm is greatly improved in accuracy: compared to the traditional algorithms, the accuracy of the model increases by an average of 25.32% to 31.41%, and the presented user preference model has good scalability.
    Keywords: recommender model; semantic feature; similarities; Word2vec; data sparsity.
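
    A minimal sketch of the content-semantic half of the model: item descriptions are embedded with Word2vec, each item is represented by the mean of its word vectors, and candidates are ranked by ascending Euclidean distance; the toy catalogue, the gensim 4 API and the mean-pooling step are assumptions for illustration, not details taken from the paper.

    import numpy as np
    from gensim.models import Word2Vec   # gensim >= 4 assumed (uses `vector_size`)

    items = {
        "i1": "romantic comedy movie set in paris",
        "i2": "french romance film with comedy elements",
        "i3": "documentary about deep sea marine life",
        "i4": "underwater nature documentary ocean animals",
    }
    sentences = [desc.split() for desc in items.values()]
    w2v = Word2Vec(sentences, vector_size=32, window=3, min_count=1, sg=1, seed=1, epochs=200)

    def item_vector(desc):
        # an item is represented by the mean of its description word vectors
        return np.mean([w2v.wv[w] for w in desc.split()], axis=0)

    vecs = {name: item_vector(desc) for name, desc in items.items()}

    def recommend_similar(name, k=2):
        d = {other: float(np.linalg.norm(vecs[name] - v))
             for other, v in vecs.items() if other != name}
        return sorted(d, key=d.get)[:k]        # smallest distance = most similar

    print("items most similar to i1:", recommend_similar("i1"))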

  • Stargan Based Camera Style Transfer for Person Retrieval   Order a copy of this article
    by Yuanyuan Wang, Zhijian Wang, Mingxin Jiang 
    Abstract: Person retrieval, also known as person re-identification (ReID), aims to match persons across cameras. Although person ReID performs well on small datasets, the issues raised by the large number of identities in real scenarios, or by larger numbers of cameras, have not been fully investigated. As an image retrieval task across the multiple cameras of intelligent video security, person ReID is influenced by the image style changes caused by different camera illumination and view angles. The number of cameras in the latest datasets is increasing, so more camera transfer models need to be trained, while traditional generative adversarial network (GAN) methods can only handle transfer between two domains. To address these problems, we use star generative adversarial networks (StarGAN) to transfer images from one camera to another in the latest large benchmark datasets, training multiple transfer models simultaneously and minimizing the bias among different cameras. A label smooth regularization (LSR) algorithm is utilized to mitigate the effects of noise in the model. We learn part-based descriptors from pedestrian samples to generate robust feature representations. Our work is competitive with the state-of-the-art.
    Keywords: StarGAN; Person retrieval; LSR.

  • Fast Mining Algorithm for Multi-level Association Rule Data under Temporal Constraints   Order a copy of this article
    by Yicheng Mu 
    Abstract: Redundant interference occurs between frames of multi-level association rule data under temporal constraints, which degrades the clustering and anti-interference performance of data mining. In order to improve the ability to mine multi-level association rule data, this paper proposes a fast mining algorithm for multi-level association rule data based on temporal constraints. It constructs a fitting state model of the multi-level association data distribution and uses the reorganization method of multi-level association rules to rearrange the data structure and extract the average mutual information feature; it constructs detection statistics to carry out multi-level linear programming for the association rule data, and uses autocorrelation detection for de-interference processing and fuzzy directional clustering for fuzzy clustering of the multi-level association rule data, realizing fast mining of multi-level association rule data under temporal constraints. The simulation results show that, compared with traditional methods, the proposed method reduces the execution time of multi-level association rule data mining by 12.77% and improves the mining accuracy by 23.34%; the high mining accuracy and strong anti-interference ability improve data mining efficiency.
    Keywords: temporal constraints; association rules; data mining; feature extraction.

  • Fuzzy judgment of edge features under dynamic constraints in pedestrian tracking   Order a copy of this article
    by Yaomin Hu 
    Abstract: Pedestrian tracking and recognition is influenced by the pedestrian environment and by the edge factors of dynamic features, which easily leads to tracking errors; in order to improve pedestrian tracking and recognition, fuzzy judgment of edge features is required. Therefore, a fuzzy judgment method for edge features under dynamic constraints in pedestrian tracking, based on local motion planning and edge contour segmentation, is proposed. In this method, a geometric mesh area model for pedestrian tracking and recognition is constructed, and a fuzzy dynamic feature segmentation method is adopted to reconstruct dynamic edge feature points in pedestrian tracking and extract the greyscale pixel set under dynamic constraints; edge feature quantities are fused based on the distribution intensity of greyscale pixels to realise pedestrian tracking image fusion and information enhancement; the three-dimensional dynamic constraint method is adopted for local motion planning of pedestrian tracking, and fuzzy judgment is then carried out on the edge features in pedestrian tracking based on the edge contour segmentation results. The simulation results show that in pedestrian tracking and recognition this method has strong fuzzy judgment ability for edge features and provides results with an error below 10 mm and relatively stable fluctuation, so it offers relatively high recognition accuracy and good robustness.
    Keywords: pedestrian tracking; dynamic constraint; edge feature; recognition; image fusion.

  • Design of Cloud Computing-based Foreign Language Teaching Management System Based on Parallel Computing   Order a copy of this article
    by Kanmanli Maimaiti 
    Abstract: In view of the long response time of traditional foreign language teaching management systems and their inability to guide students towards greater interest in learning, a foreign language teaching management system based on parallel computing is proposed. The cloud architecture of the foreign language teaching system is given for the cloud computing environment, on which the parallel computing method is adopted to design the hardware of the foreign language teaching management system, which is scalable and flexible. The parallel algorithm is designed and the communication between the modules of the system is implemented in C#. The experimental results show that while the response time for each of the four scores of Zhang is 1 s with the method in reference [3], it is only 0.6 s with the system in this paper. The system shortens the query response time and improves the development speed of web-based foreign language teaching, effectively promoting web-based foreign language teaching and students' interest in foreign language learning.
    Keywords: parallel computing; cloud environment; foreign language teaching; management system.

  • Study on the Subway Transfer Recognition during Rush Hour Based on Big Data   Order a copy of this article
    by Shushen YAO, Xiaoxiong WENG 
    Abstract: With the development of subway networks, the coexistence of multiple paths has become very common in big cities. It follows that the ticket-clearing problem, which relies on accurate identification of transfer paths, is of great concern to co-investors. Different from the Logit models commonly used for subway transfer recognition, we adopt the Adaptive Gauss Cloud Transformation (A-GCT) model, which transforms the distribution of passengers' trip times into multiple concepts of different granularity and evaluates the maturity of each concept with a parameter named the Confusion Degree (CD). The case study in this paper shows that the A-GCT model has higher accuracy in dealing with uncertain problems such as subway transfer recognition.
    Keywords: Gaussian Cloud Transformation (GCT); subway transfer recognition; big data.
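    To illustrate the idea of decomposing passengers' trip-time distribution into concepts of different granularity, a minimal sketch is given below; an ordinary Gaussian mixture (scikit-learn) stands in for the Adaptive Gauss Cloud Transformation, the trip times are synthetic, and the confusion-degree evaluation is not reproduced.

        # Illustrative sketch: a plain Gaussian mixture stands in for the A-GCT
        # concept extraction; trip times are synthetic.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Hypothetical trip times (minutes) generated by two transfer paths.
        trip_times = np.concatenate([rng.normal(34, 3, 500),
                                     rng.normal(45, 4, 300)]).reshape(-1, 1)

        gmm = GaussianMixture(n_components=2, random_state=0).fit(trip_times)
        labels = gmm.predict(trip_times)            # concept (path) assigned to each trip
        shares = np.bincount(labels) / len(labels)  # estimated share of each path

        for k in range(2):
            print(f"path {k}: mean={gmm.means_[k, 0]:.1f} min, "
                  f"std={gmm.covariances_[k, 0, 0] ** 0.5:.1f}, share={shares[k]:.2f}")

    Each recovered component plays the role of one trip-time concept; the contribution of the paper lies in choosing the number and maturity of such concepts adaptively.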

  • Intelligent Monitoring System for Thermal Energy Consumption of Buildings under the IoT Technology   Order a copy of this article
    by Lu Wang, Difei Jiang 
    Abstract: In view of the poor accuracy of single-point measurements of thermal energy consumption and the weak real-time monitoring of energy consumption in buildings, an intelligent monitoring system for the thermal energy consumption of buildings under Internet of Things (IoT) technology is designed. The overall structure of the monitoring system is constructed, and a DSP integrated signal processor is used for data acquisition and real-time processing of building thermal energy information. A wireless intelligent gateway for monitoring building thermal energy consumption is designed using IoT technology, a wireless sensor network model is constructed, and the VME bus is used as the information transmission channel to realise intelligent monitoring of thermal energy consumption. The test results show that the real-time monitoring accuracy of the system is better than that of the traditional method and that it has good application prospects.
    Keywords: Internet of Things (IoT) technology; thermal energy of buildings; energy consumption monitoring; system design.

  • An Algorithm of Decomposition and Combinatorial Optimization Based on 3-Otsu   Order a copy of this article
    by Liejun Wang, Junhui Wu, Ji-Wei Qin 
    Abstract: Recently, 3-Otsu (the three-dimensional maximum between-class variance algorithm) has drawn great attention in image segmentation. However, the time consumption and computational cost of 3-Otsu are large, so this paper provides a decomposition-and-combination 3-Otsu algorithm. Firstly, the three-dimensional histogram of 3-Otsu is decomposed into three two-dimensional histograms by projection, the projection planes being the three coordinate planes of its own space. Secondly, each two-dimensional histogram is segmented using 2-Otsu, yielding three segmentation results. Finally, the three segmentation results are combined linearly, and the combination is taken as the output segmentation result. Experiments are conducted under the ideal noise-free condition and under Gaussian noise, salt noise, pepper noise and mixed salt-and-pepper noise, respectively. The results show that the proposed algorithm consumes nearly 30 times less time than 3-Otsu; although it is slightly slower than 2-Otsu, its time consumption remains small. Meanwhile, its anti-noise performance, especially for mixed noise, is better than that of the other two algorithms.
    Keywords: three-dimensional Otsu; decomposition and reduction; linear combination.
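    The decomposition step can be sketched as follows under the common 3-Otsu convention that the three histogram axes are the pixel grey level, the neighbourhood mean and the neighbourhood median; this is an illustration rather than the authors' code, and the subsequent 2-Otsu searches and linear combination are only indicated in the closing comment.

        # Illustrative sketch: build a grey/mean/median 3D histogram and project it
        # onto the three coordinate planes (64 bins keep the example light).
        import numpy as np
        from scipy.ndimage import uniform_filter, median_filter

        def project_histograms(img, bins=64):
            g = img.astype(np.uint8)
            m = uniform_filter(g, size=3)          # neighbourhood mean
            d = median_filter(g, size=3)           # neighbourhood median
            hist3d, _ = np.histogramdd(
                np.stack([g.ravel(), m.ravel(), d.ravel()], axis=1),
                bins=(bins, bins, bins), range=[(0, 256)] * 3)
            # Projections onto the (g, m), (g, d) and (m, d) coordinate planes.
            return hist3d.sum(axis=2), hist3d.sum(axis=1), hist3d.sum(axis=0)

        img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # toy image
        h_gm, h_gd, h_md = project_histograms(img)
        # Each projected histogram would then be thresholded with 2-Otsu and the
        # three binary results combined linearly (e.g. by majority vote).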

  • Video Encryption Based on Chaotic Array System: Working with Image Directly   Order a copy of this article
    by Hongyan Zang, Jing Yang, Guodong Li 
    Abstract: A new three-dimensional discrete chaotic system is proposed in this paper according to the Marotto theorem. On this basis, a coupled array chaotic system is given as the drive system. Moreover, with the help of the bidirectional generalized synchronization theorem, the response system is constructed and proved to be chaotic. Based on the chaotic matrices generated by these systems, a video encryption scheme is proposed. The experiments show that the encryption scheme has a large key space and an approximately uniform distribution of ciphertext entropy; the main contribution of the newly constructed system is its higher speed compared with a vector system, since once encryption starts, the chaotic matrix generated by the array system can be operated on directly with an image of the same size.
    Keywords: generalized synchronization; video encryption; coupled array chaotic system; computing simulation.
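    The frame-level operation, in which a chaotic matrix of the same size as the image is applied directly, can be illustrated with the sketch below; a logistic map merely stands in for the coupled array chaotic system of the paper, so the key stream it produces is only a placeholder.

        # Illustrative sketch: XOR a frame with a chaotic key matrix of equal size.
        # The logistic map is a placeholder for the paper's coupled array system.
        import numpy as np

        def chaotic_key_matrix(shape, x0=0.3141, mu=3.9999):
            n = int(np.prod(shape))
            x, seq = x0, np.empty(n)
            for i in range(n):
                x = mu * x * (1.0 - x)              # logistic map iteration
                seq[i] = x
            return (seq * 256).astype(np.uint8).reshape(shape)

        def xor_frame(frame, key_matrix):
            return np.bitwise_xor(frame, key_matrix)    # the same operation decrypts

        frame = np.random.randint(0, 256, (120, 160), dtype=np.uint8)  # dummy frame
        key = chaotic_key_matrix(frame.shape)
        cipher = xor_frame(frame, key)
        assert np.array_equal(xor_frame(cipher, key), frame)           # round trip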

  • User Information Intrusion Prediction Method Based on Empirical Mode Decomposition and Spectrum Feature Detection   Order a copy of this article
    by Zheng Ma, Yan Ma, Xiaohong Huang, Manjun Zhang, Bo Su 
    Abstract: In a distributed intelligent computing environment, user information is vulnerable to plaintext intrusion, resulting in information leakage. To ensure the security of user information, a user information intrusion prediction method based on empirical mode decomposition and spectrum feature detection in distributed intelligent computing is proposed. Firstly, a model of the user information and the intrusion signal in distributed intelligent computing is established; an intrusion detection model is then built with signal processing methods. Finally, time-frequency analysis and feature decomposition of the intrusion information are carried out with the empirical mode decomposition method, and accurate prediction of user intrusion information is achieved from the joint probability density distribution of the spectrum features. The simulation results show that when the signal-to-noise ratio is 12.4 dB, the detection probability of the proposed method reaches 1 while the false alarm probability is 0, indicating that the method provides a high intrusion detection probability and a low false alarm probability even at relatively low signal-to-noise ratios. Therefore, the proposed method has good intrusion interception and prediction ability.
    Keywords: distributed intelligent computing; user information; intrusion prediction; feature extraction; empirical mode decomposition.
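    The spectrum-feature detection step can be illustrated with a simple energy detector: a statistic formed from the observed spectrum is compared with a threshold calibrated on noise-only trials, which is also how detection and false-alarm probabilities of the kind quoted above are estimated. The signal model, frequency band and threshold rule below are illustrative assumptions, not the authors' statistic.

        # Illustrative sketch: spectral-energy detection statistic with a threshold
        # set from noise-only trials; estimates detection / false-alarm probabilities.
        import numpy as np

        rng = np.random.default_rng(0)
        N, TRIALS = 512, 2000
        t = np.arange(N)

        def statistic(x, band=slice(40, 60)):
            spec = np.abs(np.fft.rfft(x)) ** 2
            return spec[band].sum() / spec.sum()      # fraction of energy in the band

        noise_stats = np.array([statistic(rng.normal(size=N)) for _ in range(TRIALS)])
        thresh = np.quantile(noise_stats, 0.99)       # target false-alarm rate ~1%

        intrusion = 0.5 * np.sin(2 * np.pi * 50 / N * t)   # assumed intrusion signature
        signal_stats = np.array([statistic(intrusion + rng.normal(size=N))
                                 for _ in range(TRIALS)])

        print("false-alarm prob:", np.mean(noise_stats > thresh))
        print("detection prob:  ", np.mean(signal_stats > thresh))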

  • Deep Forest-based Hypertension and OSAHS Patient Screening Model   Order a copy of this article
    by Pingping Wang, Lei Ma, Yun-Hui Lv, Yang Xiang, Dang-guo Shao, Xin Xiong 
    Abstract: The incidence of OSAHS is high among hypertension patients. To make OSAHS diagnosis more precise and simple, an OSAHS screening model is built with the deep forest algorithm using information on hypertension and OSAHS patients collected from the Sleep and Respiration Centre of a hospital. Firstly, differences in index dimensions and inter-class imbalance in the sample dataset are resolved by normalisation and the SMOTE method; then, after redundant features are removed with a modified chi-square-test single-feature selection, the OSAHS screening model is built with the deep forest method (gcForest). The results show that the modified chi-square-test feature selection effectively removes redundant features and improves classifier performance, and that the deep-forest-based OSAHS screening model is superior to other classification models in classification performance, effectively improving the precision of OSAHS patient screening and reducing missed diagnoses of OSAHS.
    Keywords: Hypertension; OSAHS; Unbalanced data; Feature selection; Deep forest; Screening model.
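    A minimal sketch of the screening pipeline, assuming the scikit-learn and imbalanced-learn packages and synthetic data, is shown below; a random forest stands in for gcForest, and scikit-learn's plain chi-square scoring approximates the modified chi-square single-feature selection.

        # Illustrative sketch of the screening pipeline: normalisation, SMOTE
        # oversampling, chi-square feature selection, then a forest classifier.
        # A RandomForest stands in for gcForest; the data are synthetic.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.preprocessing import MinMaxScaler
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from imblearn.over_sampling import SMOTE

        X, y = make_classification(n_samples=600, n_features=20, weights=[0.85, 0.15],
                                   random_state=0)         # imbalanced toy dataset
        X = MinMaxScaler().fit_transform(X)                 # chi2 needs non-negative inputs
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # fix class imbalance
        selector = SelectKBest(chi2, k=10).fit(X_bal, y_bal)           # drop redundant features
        clf = RandomForestClassifier(random_state=0).fit(selector.transform(X_bal), y_bal)

        print("screening accuracy:", clf.score(selector.transform(X_te), y_te))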

  • Design of Automatic Detection System for Vehicle networking Communication Abnormal Data based on CAN Bus   Order a copy of this article
    by Qun Le, Kun Jiang, Feng Zhang 
    Abstract: Aiming at the low detection rate and accuracy, and the high false-detection and missed-detection rates, caused by the proportion of abnormal data when current systems detect abnormal data in vehicle networking communication, an automatic detection system for abnormal vehicle networking communication data based on the CAN bus is proposed and designed. The system consists of two parts: a vehicle networking communication subsystem and an anomaly detection subsystem for vehicle networking communication data. The communication subsystem includes a data acquisition unit and a C/S structure, and the real-time interaction process between client and server is given. The experimental results and discussion show that the designed system can automatically detect abnormal data in vehicle networking communication, is little affected by the proportion of abnormal data, and offers a higher detection rate and accuracy with lower false-detection and missed-detection rates. A preliminary conclusion is also drawn that the sampling ratio hardly affects the detection results.
    Keywords: CAN Bus; Vehicle networking; Communication; Abnormal Data; Automatic Detection; System Design.

  • Slices Reconstructing Algorithm for Single Image Dedusting   Order a copy of this article
    by Haiyan Zhang, Shangbing Gao, Mingxin Jiang 
    Abstract: To address image degradation in non-uniform dust environments with multiple scattered light, a slice-reconstruction algorithm for single-image dust removal is proposed. Firstly, slices along the depth direction are produced based on the McCartney model of the dust environment. Secondly, a combined dust-detection algorithm is used to detect dust patches in each slice: non-dust areas are retained, while dust zones are marked as candidate detection areas for the next slice. The image is then reconstructed by combining the non-dust areas of each slice with the dust zone of the last slice. Finally, a fast guided filter is applied to the reconstructed area. The experimental results show that the reconstruction algorithm removes dust from the object image both effectively and quickly. This work lays a foundation for computer-vision-based object detection and recognition in dust environments.
    Keywords: Image Restoration; Dust Detection; Image Reconstruction; Multiple Scattering; Single Image.

  • A Magnetic Resonance Imaging Denoising Technique Using Non-Local Means and Unsupervised Learning   Order a copy of this article
    by Tao Wu, Lei Xie 
    Abstract: We propose a new non-local means (NLM) algorithm using unsupervised learning and k-means clustering for denoising magnetic resonance (MR) images. Our technique improves image processing speed with enhanced denoising performance on multiple types of images; calculating the similarity weights at the cluster level improves computational efficiency. We conducted experiments with brain MR images of various sizes, including three T1- and T2-weighted images. Three quality metrics show that our algorithm achieves moderate improvements in denoising accuracy with significant reductions in execution time: the proposed method processed the sample data in one-fifth of the time of the original NLM method. Compared with several state-of-the-art methods, our method offers improved peak signal-to-noise ratios (PSNRs) for samples with large amounts of noise.
    Keywords: Magnetic resonance imaging; Image denoising; Non-local mean; k-means; Unsupervised Learning.
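    The cluster-level weighting idea can be sketched roughly as follows: patches are grouped with k-means, and each pixel is averaged only against patches from its own cluster rather than against the whole search window. The patch size, bandwidth h and clustering settings are assumed for illustration; this is not the authors' implementation.

        # Simplified sketch of cluster-accelerated non-local means: patches are grouped
        # by k-means and each pixel is averaged only over patches in its own cluster.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.image import extract_patches_2d

        def cluster_nlm(img, patch=5, n_clusters=8, h=0.3, seed=0):
            pad = patch // 2
            patches = extract_patches_2d(np.pad(img, pad, mode='reflect'), (patch, patch))
            flat = patches.reshape(len(patches), -1)           # one patch row per pixel
            centres = flat[:, flat.shape[1] // 2]               # centre pixel of each patch
            labels = KMeans(n_clusters=n_clusters, n_init=4,
                            random_state=seed).fit_predict(flat)
            out = np.empty(len(flat))
            for c in range(n_clusters):
                idx = np.where(labels == c)[0]
                block, vals = flat[idx], centres[idx]
                sq = (block ** 2).sum(axis=1)
                d2 = (sq[:, None] + sq[None, :] - 2.0 * block @ block.T) / block.shape[1]
                w = np.exp(-np.maximum(d2, 0.0) / (h * h))      # weights within the cluster only
                out[idx] = (w @ vals) / w.sum(axis=1)
            return out.reshape(img.shape)

        noisy = np.random.rand(32, 32)                           # toy noisy image in [0, 1]
        denoised = cluster_nlm(noisy)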

  • Research on Equilibrium scheduling of Airborne Network Resource based on Load Gini Coefficient   Order a copy of this article
    by Jin Guo, Shengbing Zhang 
    Abstract: Aiming at the long running time and serious data loss of traditional equilibrium scheduling methods for airborne network resources, a new equilibrium scheduling method based on the load Gini coefficient is proposed. Following the principle of the Gini coefficient of income distribution in economics, the Gini coefficient of the network load distribution is monitored, and a genetic algorithm is used to find an optimal allocation scheme that satisfies the load-change constraints while effectively avoiding dynamic migration. The experimental results show that the data loss rate of this method is between 0 and 2, and that its average running time is 6 ms and 12.5 ms shorter than those of the other two methods, so it effectively reduces the data loss rate and running time of the system and improves overall system efficiency.
    Keywords: load Gini coefficient; airborne network resources; equilibrium scheduling.
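    The load Gini coefficient that drives the scheduling decision can be computed exactly as the income Gini coefficient; the sketch below uses the standard discrete formula over a hypothetical vector of per-node loads and is not taken from the paper.

        # Illustrative sketch: Gini coefficient of a load distribution, computed with
        # the standard discrete formula used for income inequality.
        import numpy as np

        def load_gini(loads):
            x = np.sort(np.asarray(loads, dtype=float))
            n = x.size
            cum = np.cumsum(x)
            # Equivalent to the mean absolute difference divided by twice the mean.
            return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

        balanced   = [10, 10, 11, 9, 10]     # hypothetical per-node loads
        unbalanced = [2, 3, 5, 8, 32]
        print(load_gini(balanced), load_gini(unbalanced))   # small vs. large inequality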

  • Fault Diagnosis of Fan Gearboxes Based on EEMD Energy Entropy and SOM Neural Networks   Order a copy of this article
    by Biao Ma, Gang Li, Guping ZHENG 
    Abstract: Aiming at the difficulty of feature extraction for gear fault diagnosis and the inability of traditional classification methods to diagnose wind turbine gearbox faults adaptively, a new fault diagnosis method based on Ensemble Empirical Mode Decomposition (EEMD) energy entropy and SOM neural networks (SOM-NN) is proposed. Firstly, the EEMD method is used to decompose the original vibration signal of the gear under each condition into several Intrinsic Mode Functions (IMFs), and the energy of each IMF and the energy entropy of the signal are calculated. The IMF energy proportions and the signal energy entropy are then selected to form a feature set that reflects the fault vibration signal, and these feature values are input to the SOM neural network for classification. The numerical simulation results show that the accuracy of the method reaches 100% in the fault diagnosis of a wind turbine gearbox.
    Keywords: ensemble empirical mode decomposition; energy entropy; self-organizing feature mapping (SOM); wind turbine; Gearboxes; fault diagnosis.
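    Given the IMFs produced by EEMD, the energy-entropy feature vector is straightforward to form; the sketch below assumes the IMFs are already available as rows of an array (the EEMD decomposition itself is not shown) and illustrates the feature construction only. The resulting vector is what would be fed to the SOM for classification.

        # Illustrative sketch: IMF energy proportions and signal energy entropy,
        # formed from IMFs that are assumed to be given (EEMD step not shown).
        import numpy as np

        def energy_entropy_features(imfs):
            """imfs: array of shape (n_imfs, n_samples), one IMF per row."""
            energies = np.sum(imfs ** 2, axis=1)          # energy of each IMF
            p = energies / energies.sum()                 # energy proportions
            entropy = -np.sum(p * np.log(p + 1e-12))      # EEMD energy entropy
            return np.concatenate([p, [entropy]])         # feature vector for SOM input

        # Toy example with three synthetic "IMFs".
        t = np.linspace(0, 1, 1024)
        imfs = np.vstack([np.sin(2 * np.pi * 50 * t),
                          0.5 * np.sin(2 * np.pi * 12 * t),
                          0.1 * np.random.randn(t.size)])
        print(energy_entropy_features(imfs))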

  • Design of Art Interactive Teaching System Based on Multiple Intelligence Theory   Order a copy of this article
    by Junliang Dong, Zhuomin Huang 
    Abstract: Studying interactive teaching systems for art is of great practical significance. Traditional art teaching systems analyse the information interaction model in a way that leads to poor interaction performance, so an art interactive teaching system based on multiple intelligence theory is proposed in this paper. The system mainly consists of an interactive teaching service information transmitter, a student mark database and a web page layering server. On the basis of this hardware design, the entire art interactive teaching system is implemented through an information interaction algorithm based on multiple intelligence theory. A simulation experiment verifies the effectiveness of the designed system: the results show that the system has a high output signal-to-noise ratio (SNR) and an interaction efficiency of over 80%, demonstrating good information interaction performance and good application prospects.
    Keywords: Multiple intelligence theory; art teaching; information interaction; teaching system.

  • Clustering based Word Segmentation from Off-line Handwritten Uyghur Text-line Images   Order a copy of this article
    by Askar Hamdulla, Aysadet Abliz, Abdusalam Dawut, Kamil Moydin, Palidan Tuerxun 
    Abstract: For word segmentation of handwritten Uyghur text images, this paper proposes a segmentation method based on a clustering algorithm. The preprocessed text-line images are first projected in the vertical direction to obtain initial candidate segmentation points and to record the blank spaces and text lengths between connected components. A clustering algorithm then classifies the blank spaces into two categories, within-word gaps and between-word gaps, and a first merging step is completed according to the clustering results. To deal with the remaining over-segmentation, a threshold-based merging method combining text-region length and blank-space length is proposed, from which the final segmentation points are obtained. The experimental results show that this method can effectively solve the word segmentation problem in handwritten text images.
    Keywords: Uyghur handwritten text; Word segmentation; Clustering; Coloring process.
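    The gap-classification step at the core of the method can be illustrated with a one-dimensional k-means over blank-space widths taken from the vertical projection; the widths below are invented, and scikit-learn's KMeans stands in for whichever clustering variant the authors use.

        # Illustrative sketch: classify blank-space widths from the vertical projection
        # into within-word gaps and between-word gaps using 1-D k-means.
        import numpy as np
        from sklearn.cluster import KMeans

        gap_widths = np.array([2, 3, 2, 9, 3, 11, 2, 10, 4, 12]).reshape(-1, 1)  # pixels (invented)

        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(gap_widths)
        between_word = int(np.argmax(km.cluster_centers_.ravel()))   # wider cluster = word boundary

        for width, label in zip(gap_widths.ravel(), km.labels_):
            kind = "between-word" if label == between_word else "within-word"
            print(f"gap {width:2d}px -> {kind}")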

  • The Image Classification Algorithm Research using Class Information Loss and Joint Structural Similarity   Order a copy of this article
    by Shian Wang 
    Abstract: For the supervised training of convolutional neural networks, a training method with a weighted joint loss combining structural similarity and class information is proposed. Firstly, a convolutional neural network that can extract high-level information from small images is designed. Secondly, the network is trained with a loss function that is a weighted combination of structural similarity and class information loss. Finally, image classification experiments on handwritten digits from the Mnist dataset and on Cifar10 images validate the effectiveness of the proposed network. The experimental results show that the image classification error rates of the improved network on the Mnist handwritten digits and the Cifar10 dataset are 0.23% and 10%, respectively. Without any data augmentation on the Mnist dataset, the proposed network outperforms all single networks on that dataset, and on the Cifar10 dataset it achieves higher image classification accuracy with less computational effort. At the same time, supervision with the joint structural similarity and class information loss speeds up the training process of the proposed network.
    Keywords: Convolutional Neural Network; Image Classification; Structure Similarity; Deep Learning; Metric Learning.

  • Color Image Encryption Algorithm Based on Hyperchaos and DNA Sequences   Order a copy of this article
    by Ji-xian Cui, Guodong Li, Le-le Wang, Cong Ma 
    Abstract: To overcome the shortcomings of single chaotic encryption, namely its simple structure and low security, a hyperchaotic colour image encryption algorithm based on DNA sequences is proposed. Firstly, the colour image is separated into layers and encoded with DNA; then a hyper-chaotic sequence is used to scramble the DNA matrix, the hyper-chaos is used to generate a natural DNA matrix, and DNA addition operations are dynamically selected to perform the DNA rule operations; finally, the ciphertext image is obtained. Performance tests on the ciphertext image show that the grey-scale distribution of the histogram is relatively uniform, and the measured values of the related index parameters NPCR and UACI are 99.65% and 33.51%, respectively, which are close to the theoretical values. Simulation experiments show that the algorithm effectively improves the anti-attack capabilities of the encryption system and has high security.
    Keywords: Image encryption; Hyperchaotic Lorenz systems; DNA coding; Chaotic encryption.
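    One common way to realise the DNA coding and addition steps is to map each 2-bit pair of a pixel to a base under one of the eight standard DNA rules and to add base-wise modulo 4; the sketch below fixes a single rule and uses a random matrix as a placeholder for the hyper-chaotic key, so it illustrates only the encode / add / decode round trip, not the paper's full scheme.

        # Illustrative sketch: DNA-encode one image layer under a fixed coding rule,
        # add a key matrix base-wise (mod 4), and decode back. The random key stands
        # in for the hyper-chaotic key stream of the paper.
        import numpy as np

        BASES = np.array(list("ACGT"))          # coding rule 1: A=00, C=01, G=10, T=11
        SHIFTS = np.array([6, 4, 2, 0])         # each pixel byte -> four 2-bit bases

        def to_vals(bases):                     # base symbols -> integers 0..3
            return np.select([bases == b for b in "ACGT"], [0, 1, 2, 3])

        def dna_encode(channel):
            return BASES[(channel[..., None] >> SHIFTS) & 0b11]

        def dna_decode(bases):
            return (to_vals(bases) << SHIFTS).sum(axis=-1).astype(np.uint8)

        def dna_add(x, y):                      # base-wise addition modulo 4
            return BASES[(to_vals(x) + to_vals(y)) % 4]

        channel = np.random.randint(0, 256, (4, 4), dtype=np.uint8)      # toy image layer
        key = BASES[np.random.randint(0, 4, channel.shape + (4,))]       # placeholder chaotic key
        cipher = dna_add(dna_encode(channel), key)
        # Decryption adds the modulo-4 complement of the key, then decodes.
        recovered = dna_decode(dna_add(cipher, BASES[(4 - to_vals(key)) % 4]))
        assert np.array_equal(recovered, channel)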

  • Research on Image Denoising Algorithm Based on Non-local Block Matching   Order a copy of this article
    by Ying Yang, Dongrui LI, Xiaofeng Huang 
    Abstract: To study how to better suppress image noise, improve image resolution and enhance the visual effect of images, the block-matching and three-dimensional filtering (BM3D) algorithm is used to explore image denoising and the preservation of image detail, and sparse representation and low-rank recovery theory are introduced into the denoising process. The results show that the BM3D algorithm performs well in removing Gaussian white noise but is weaker in suppressing impulse noise and mixed noise. The non-local BM3D algorithm is superior to sparse representation and low-rank recovery in removing Gaussian white noise, whereas sparse representation and low-rank recovery perform well in removing impulse noise.
    Keywords: non-local block matching; image denoising; BM3D; sparse expression; low rank recovery.

  • Optimization of Surrounding Layout of Enterprise Building using an Improved Genetic Algorithm   Order a copy of this article
    by Lin Cheng 
    Abstract: To achieve the optimal design of the environment and the overall layout of an enterprise building, an improved genetic algorithm is applied. Firstly, the relationship between the environment and the layout of the enterprise building is analysed. Secondly, an optimisation model of the environment and overall layout is constructed, and the objective function and boundary conditions are specified. Thirdly, the procedure of the improved genetic algorithm is designed, and mathematical models of the crossover operation and niche computation are established. Finally, a simulation analysis of the environment and overall layout of the enterprise building is carried out, and the simulation results show that the improved genetic algorithm obtains a better environment and overall plan for the enterprise building.
    Keywords: Optimization; Environment; Overall layout; enterprise building; Improved genetic algorithm.
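    A minimal genetic-algorithm loop of the kind described above, applied to a toy layout objective (spreading a few building blocks across a site), might look as follows; the fitness function, site size and operator rates are invented for illustration, and the paper's niche computation is omitted.

        # Minimal GA sketch for a toy layout problem: place 4 blocks on a 100 x 100 site
        # so that the minimum pairwise distance is maximised. Parameters are invented.
        import numpy as np

        rng = np.random.default_rng(0)
        N_BLOCKS, POP, GENS, MUT = 4, 40, 200, 0.1

        def fitness(layout):                         # layout: (N_BLOCKS, 2) coordinates
            d = np.linalg.norm(layout[:, None] - layout[None, :], axis=-1)
            return d[np.triu_indices(N_BLOCKS, 1)].min()

        pop = rng.uniform(0, 100, (POP, N_BLOCKS, 2))
        for _ in range(GENS):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[-POP // 2:]]          # keep the better half
            children = []
            while len(children) < POP - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                mask = rng.random((N_BLOCKS, 1)) < 0.5              # uniform crossover
                child = np.where(mask, a, b)
                child += MUT * rng.normal(size=child.shape)         # Gaussian mutation
                children.append(np.clip(child, 0, 100))
            pop = np.concatenate([parents, np.stack(children)])

        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print("best minimum spacing:", round(fitness(best), 2))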

  • Research on Fault Diagnosis Technology based on FD-GT method   Order a copy of this article
    by Weijie Kang, Jiyang Xiao, Mingqing Xiao, Xilang Tang, Bin Hu 
    Abstract: Fault diagnosis can be divided into two main tasks: fault feature extraction and fault data classification. Firstly, to address the problem that extracted fault features are not sufficiently salient, this paper proposes a fault feature extraction method based on fuzzy distance. The concept of feature separation is introduced, an initial fuzzy distance calculation is established from expert knowledge, and the key parameters of the fuzzy distance are optimised with a differential evolution (DE) algorithm combined with historical test data. This method effectively distinguishes fault data from normal data and extracts more salient fault features. Secondly, to address the low accuracy of fault data classification, this paper proposes a classification method based on grey target decision. The concept of the grey scale of a fault type is introduced, the grey number decay matrix is updated with the current test data, and fault data classification is realised by calculating the target distance between each fault type and the current input signal set. The method effectively improves the accuracy of fault data classification and has a certain advantage in time cost. Finally, the FD-GT method is verified by an example. The results show that it effectively improves the saliency of fault feature extraction and the accuracy of fault type classification, thereby achieving more efficient and reliable fault diagnosis.
    Keywords: Fuzzy Reasoning; Grey Target Decision; Fault Diagnosis; Differential Evolution Algorithm; Grey Number.

  • Research on highway vehicle detection algorithm based on video image   Order a copy of this article
    by Yucai Zhou, Zhimin Lv, Yuelin Li, Jiangchun Mo 
    Abstract: To address the problem that the vehicle detection rate is degraded in complex scenarios, the authors put forward an adaptive GMM-based method that builds and updates the background, and use an average-neighbourhood fast shadow elimination algorithm based on HSV, which improves the speed of shadow elimination. For occluded vehicles, the authors use a Kalman-filter-based recognition algorithm to identify the occluded vehicles and then adopt a pyramid hierarchical search algorithm based on template matching to segment them. The experimental results show that the algorithm is simple and effective, and the vehicle detection rate is 97%, which fully meets the requirements of vehicle detection.
    Keywords: vehicle occlusion; GMM; shadow removal; Kalman filter; vehicle detection.
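    The background-modelling stage can be sketched with OpenCV's built-in Gaussian-mixture background subtractor, which also marks shadow pixels; it stands in here for the paper's adaptive GMM and HSV shadow-elimination steps, and the video path is purely illustrative (OpenCV 4.x API assumed).

        # Illustrative sketch: adaptive GMM background subtraction with shadow marking,
        # using OpenCV's MOG2 as a stand-in for the paper's adaptive GMM + HSV steps.
        import cv2

        cap = cv2.VideoCapture("highway.avi")                # hypothetical video file
        mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                  detectShadows=True)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = mog2.apply(frame)                         # 255 = foreground, 127 = shadow
            mask[mask == 127] = 0                            # drop shadow pixels
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 400]
            # `boxes` would feed the Kalman-filter tracking / template-matching stages.
        cap.release()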

  • Traffic incident detection based on the width characteristic of the moving object marker   Order a copy of this article
    by Shangbing Gao 
    Abstract: In view of the complexity and cumbersomeness of existing traffic incident detection algorithms, a traffic incident detection method based on the width characteristics of the moving object is proposed. Firstly, the foreground object is extracted with ViBe (visual background extractor), and hole removal and smoothing are applied to the foreground target. Then the moving target is marked with a label, the width-change characteristic of the moving-target marker frame is analysed, and a traffic incident detection model is introduced, thereby achieving detection of traffic incidents. The experimental results show that the method can effectively detect rear-end events, crossing events and collisions with fixed objects.
    Keywords: ViBe (visual background extractor); Classification of vehicle types; Traffic incident detection; Image preprocessing.
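    The width-characteristic idea can be reduced to a very small sketch: the width of a tracked object's marker box is recorded frame by frame, and an abrupt relative change is flagged as a candidate incident. The threshold and the width sequence below are invented, and the paper's full detection model is not reproduced.

        # Illustrative sketch: flag frames where the tracked object's marker-box width
        # changes abruptly, as a crude stand-in for the width-based incident model.
        import numpy as np

        widths = np.array([40, 41, 40, 42, 41, 43, 70, 72, 71, 69])  # invented per-frame widths
        rel_change = np.abs(np.diff(widths)) / widths[:-1]

        THRESH = 0.3                                   # assumed relative-change threshold
        for frame, r in enumerate(rel_change, start=1):
            if r > THRESH:
                print(f"frame {frame}: width jump {r:.0%} -> candidate incident")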

  • Design of orthogonal phase encoding for basis function based on genetic algorithm   Order a copy of this article
    by Kui Li, Ting Zhang, Hao Guo, Hua Wang 
    Abstract: A new basis function generation algorithm based on a genetic algorithm is proposed to overcome the drawbacks of the traditional basis-function phase-encoding algorithm in the transform domain communication system (TDCS). Firstly, by analysing the requirements and characteristics of the basis function in a TDCS, an evaluation criterion for the basis function is presented, and the corresponding genetic algorithm is designed according to this criterion. The algorithm is then used to generate the orthogonal basis function set. The proposed algorithm not only improves the correlation performance and randomness of the basis functions, but also improves the multiple-access performance and security of the system. Furthermore, the length and number of basis functions can be selected flexibly, which improves the system's applicability. Analysis of the simulation results verifies the feasibility and validity of the algorithm.
    Keywords: Basis Function; Phase Encoding; Genetic Algorithm; Multiple Access.
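    The phase-encoded construction that the genetic algorithm optimises can be sketched as follows: a spectral amplitude mask (ones on usable bins, zeros on interfered bins) is combined with a phase code and transformed to the time domain by an inverse FFT to yield one basis function. The mask, phase source and normalisation below are illustrative assumptions, and the GA search over phase codes is not reproduced.

        # Illustrative sketch: build a TDCS basis function from a spectral mask and a
        # phase code via an inverse FFT. Mask and phases are invented placeholders for
        # the GA-optimised phase encoding described in the paper.
        import numpy as np

        N = 256
        rng = np.random.default_rng(1)
        mask = np.ones(N)
        mask[60:90] = 0.0                                    # assumed interfered sub-bands

        def basis_function(phase_code):
            spectrum = mask * np.exp(1j * phase_code)        # amplitude mask + phase encoding
            b = np.fft.ifft(spectrum)
            return b / np.linalg.norm(b)                     # unit-energy basis function

        # Two users with different phase codes; low cross-correlation is what the GA
        # in the paper searches for.
        b1 = basis_function(rng.uniform(0, 2 * np.pi, N))
        b2 = basis_function(rng.uniform(0, 2 * np.pi, N))
        print("autocorrelation peak:", np.abs(np.vdot(b1, b1)).round(3))
        print("cross-correlation:   ", np.abs(np.vdot(b1, b2)).round(3))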