International Journal of Information and Communication Technology (54 papers in press)
Image analysis by efficient Gegenbauer Moments Computation for 3D Objects Reconstruction
by Bahaoui Zaineb, Hakim El Fadili, Khalid Zenkouar, Hassan Qjidaa
Abstract: In this paper, we suggest a new technique for the fast computation of Gegenbauer orthogonal moments for the reconstruction of 3D images/objects. A typical comparison of the proposed method with the conventional ZOA methods shows significant improvements in terms of error reduction, image quality and computation time. We then compare our new approach with existing methods using Legendre and Zernike moments in the case of 3D images/objects. The obtained results show that, although Legendre and Zernike moments are slightly better than Gegenbauer moments, the latter are still very efficient and give very good results in terms of MSE and PSNR. Moreover, Zernike moments have a higher computational cost than Gegenbauer moments.
Keywords: Gegenbauer Moments computation; Legendre Moments; Zernike moments; 3D images/object; Computation time.
Integration of a quantum scheme for key distribution and authentication within EAP-TLS protocol
by GHILEN AYMEN, Mostafa AZIZI
Abstract: The extensive deployment of wireless networks has led to significant progress in security approaches that aim to protect confidentiality. The current method for exchanging a secret key within the Extensible Authentication Protocol-Transport Layer Security (EAP-TLS) protocol is based on Public Key Infrastructure (PKI). Although this technique remains one of the most widely implemented solutions to authenticate users and to ensure secure data transmission, its security is only computational. In other words, with the emergence of the quantum computer, the existing cryptosystems will become completely insecure. Improving contemporary cryptographic schemes by integrating quantum cryptography becomes a much more attractive prospect, since this technology does not rely on difficult mathematical problems such as factoring large integers or computing discrete logarithms. Thus, we propose a quantum extension of EAP-TLS that allows exchanging a cryptographic key and authenticating a remote client with unconditional security, ensured by the laws of quantum physics. The PRISM tool is applied as a probabilistic model checker to verify specific security properties of the new scheme.
Keywords: EAP-TLS; Quantum Cryptography; Authentication; Key Agreement; Entanglement; PRISM; Model Checking.
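The basis-sifting step at the heart of quantum key distribution schemes such as the one integrated here can be illustrated with a toy BB84 simulation (a hedged sketch of the generic protocol, not the authors' EAP-TLS extension; a noiseless channel and no eavesdropper are assumed):

```python
import random

def bb84_sift(n_bits, seed=0):
    """Simulate ideal (noiseless, eavesdropper-free) BB84 key sifting.

    Alice sends each bit in a random basis; Bob measures in a random
    basis.  Bits where the bases match form the sifted shared key.
    """
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("+x") for _ in range(n_bits)]
    bob_bases   = [rng.choice("+x") for _ in range(n_bits)]
    # With matching bases Bob reads Alice's bit; otherwise the result
    # is random and is discarded during basis reconciliation.
    return [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)
            if ab == bb]

key = bb84_sift(256)
print(len(key))  # roughly half of the transmitted bits survive sifting
```

In the real protocol a sample of the sifted bits would additionally be compared in the clear to estimate the eavesdropping-induced error rate before the remainder is used as key material.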
A rapid detection method of earthquake infrasonic wave based on decision-making tree and the BP neural network
by Yun Wu, Zuoxun Zeng
Abstract: In this paper, we propose a rapid automatic detection method for earthquake infrasonic waves based on a decision-making tree combined with a BP neural network. Three factors of seismic infrasonic waves, frequency (F), duration period (P) and amplitude (A), were selected as the input parameters of the three-layered BP neural network. A total of 30 different infrasonic waves were tested in this model. The results indicate that the successful decision rate can reach 0.8 with input parameters F, P and A. When using proper thresholds for the input parameters, such as F = 0.005 Hz, P = 500 s and A = 5 Pa, the detection results are very close to the true input signals, and the infrasonic sources as well as their main characteristics can be effectively recognised and classified rapidly. This new method could provide clues and ideas for short-term earthquake infrasonic wave detection.
Keywords: infrasonic wave; earthquake; decision-making tree; BP neural network.
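A minimal three-layer BP network of the kind the abstract describes, taking the three inputs F, P and A, can be sketched as follows (the network size, training data, scaling and learning rate are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def train_bp(X, y, hidden=6, lr=0.5, epochs=5000, seed=0):
    """Train a minimal three-layer BP network (inputs -> sigmoid hidden
    layer -> sigmoid output) by batch gradient descent on MSE loss."""
    rng = np.random.default_rng(seed)
    ones = np.ones((X.shape[0], 1))
    Xb = np.hstack([X, ones])                       # append bias input
    W1 = rng.normal(0.0, 1.0, (Xb.shape[1], hidden))
    W2 = rng.normal(0.0, 1.0, (hidden + 1, 1))      # +1 for hidden bias
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(Xb @ W1)                            # hidden activations
        hb = np.hstack([h, ones])
        out = sig(hb @ W2)                          # network output
        d_out = (out - y) * out * (1 - out)         # output-layer delta
        d_h = (d_out @ W2[:-1].T) * h * (1 - h)     # backpropagated delta
        W2 -= lr * hb.T @ d_out
        W1 -= lr * Xb.T @ d_h
    return W1, W2, sig

# Toy training set: label 1 when F, P and A are all large, echoing the
# thresholds F = 0.005 Hz, P = 500 s, A = 5 Pa (features scaled to [0, 1]).
X = np.array([[0.9, 0.8, 0.7], [0.1, 0.2, 0.1],
              [0.8, 0.9, 0.9], [0.2, 0.1, 0.3]])
y = np.array([[1.0], [0.0], [1.0], [0.0]])
W1, W2, sig = train_bp(X, y)
ones = np.ones((X.shape[0], 1))
h = sig(np.hstack([X, ones]) @ W1)
pred = (sig(np.hstack([h, ones]) @ W2) > 0.5).astype(int).ravel()
print(pred.tolist())
```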
Data Dissemination on MANET by repeated transmission via CDN nodes
by Nattiya Khaitiyakun, Teerapat Sanguankotchakorn, Kanchana Kanchanasut
Abstract: Recently, much research on MANETs (Mobile Ad Hoc Networks) has been carried out due to their various applications in information exchange. Efficient data dissemination in an infrastructure-less environment such as a MANET is considered one of the challenging issues. This paper proposes to adopt the concept of the CDN (Content Delivery Network) technique, normally used on the Internet, for disseminating information in a MANET. The source node disseminates data to surrounding nodes by repeatedly transmitting batches of packets via a set of CDN nodes acting as relay nodes. Our proposed data dissemination technique via CDN nodes is developed based on the OLSR (Optimized Link State Routing) protocol on MANET. A limited number of CDN nodes is selected from the MPRs (Multi-Point Relays) in OLSR in order to optimally cover all subscriber nodes and to avoid the interference problem as well. The packets are transmitted from the CDN nodes to the destination nodes using the same broadcasting technique as the one adopted in MPR. In this work, the performance of our proposed technique is evaluated in terms of probability of successful transmission by simulation using NS3. The performance is compared with typical OLSR and with a recent work called the clustering-based data transmission algorithm in VANETs (Vehicular Ad Hoc Networks). It is apparent that our proposed algorithm drastically improves the overall probability of successful transmission compared with typical OLSR. Additionally, it achieves a higher probability of successful transmission at high node density compared with the clustering-based data transmission algorithm in VANETs. Finally, a closed-form mathematical expression for the probability of successful transmission of our proposed algorithm in a multi-hop network environment is derived and verified.
Keywords: Content Delivery Network (CDN); MANET; OLSR.
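The MPR-style selection of a limited set of CDN relay nodes that covers all subscriber nodes is essentially a greedy set-cover computation; a minimal sketch (the topology and node names below are hypothetical):

```python
def select_relays(one_hop, two_hop_of):
    """Greedy MPR-style relay selection: repeatedly pick the one-hop
    neighbour that covers the most still-uncovered two-hop nodes."""
    uncovered = set().union(*two_hop_of.values()) if two_hop_of else set()
    relays = []
    while uncovered:
        best = max(one_hop,
                   key=lambda n: len(two_hop_of.get(n, set()) & uncovered))
        gain = two_hop_of.get(best, set()) & uncovered
        if not gain:                   # remaining nodes are unreachable
            break
        relays.append(best)
        uncovered -= gain
    return relays

# Hypothetical topology: the source's one-hop neighbours A..D and the
# two-hop subscriber nodes each of them can reach.
two_hop_of = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {6}}
print(select_relays(["A", "B", "C", "D"], two_hop_of))
```

Greedy cover is the standard OLSR MPR heuristic; it does not guarantee the minimum relay set, but covers all subscribers with few nodes in practice.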
Suboptimal Joint User Equipment Pairing and Power Control for Device-to-Device Communications Underlaying Cellular Networks
by Chaoping Guo, Xiaoyan Li, Wei Li, Hongyang Li
Abstract: In device-to-device (D2D) communications underlaying cellular networks, the cellular interference to D2D user equipment (DUE) is larger than the D2D interference to cellular user equipment (CUE). A joint resource allocation scheme is presented that performs both user equipment pairing and power allocation to minimize the total interference of DUE and CUE. The scheme is composed of two parts: in the first, the base station assigns power to each CUE and each D2D transmitter by a graphic method; in the second, it selects the optimal CUE to pair with each D2D pair by a modified Hungarian algorithm in order to minimize total interference. The simulation results show that the proposed scheme can not only decrease the interference caused by D2D pairs and that caused by CUEs, but also increase the number of permitted D2D connections.
Keywords: device-to-device communications; power control; resources allocation; Suboptimal Joint; Cellular Networks; User Equipment.
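The CUE/D2D pairing step is an assignment problem; the modified Hungarian algorithm in the paper solves it in polynomial time, but for a toy instance the optimal pairing can be found by brute force (the cost matrix below is hypothetical):

```python
from itertools import permutations

def min_interference_pairing(cost):
    """Exhaustively find the CUE/D2D pairing with minimum total
    interference.  For realistic problem sizes a Hungarian-type
    algorithm does this in O(n^3); brute force is only a
    small-instance stand-in."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[d][p[d]] for d in range(n)))
    return list(best), sum(cost[d][best[d]] for d in range(n))

# cost[d][c] = hypothetical total interference if D2D pair d reuses
# the resource block of CUE c.
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
pairing, total = min_interference_pairing(cost)
print(pairing, total)
```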
A Cognitive Approach of Collaborative Requirements Validation based on Action Theory
by Sourour MAALEM, Nacereddine ZAROUR
Abstract: Requirements must be validated at an early stage of analysis. Requirements validation usually involves natural language, which is often inaccurate and error-prone, or requirements translated into formal models, which are difficult for non-technical stakeholders to understand and use. The majority of existing approaches validate requirements in a heterogeneous process, using a variety of relatively independent techniques, without any methodological or cognitive approach in which mechanisms of human or artificial thought are used. In this work we present a cognitive approach to collaborative requirements validation based on action theory, through a set of steps intended to increase the involvement of the client at this stage of the engineering cycle and to bring the customer's mental model closer to the analyst's. In the proposed process, the analyst starts by extracting needs one by one from requirements documents. Each need goes through a step of formulating the intention, which results in a transformation of needs into requirements. This transformation is performed by respecting a new syntax and by generating a checklist of quality attributes from the viewpoint of each stakeholder, followed by a step of specification of actions which, on the basis of the intentions, engenders actions that the analyst perceives and verifies in order to build a rapid prototype of the software interface, executable on a machine. The customer perceives the prototype, interprets it and validates his needs. A database of valid needs is created; needs that remain invalid must be negotiated, and if conflicts persist, an error base is created.
At the end of this collaborative process of requirements validation, decisions will be made concerning who must participate in needs validation meetings, with respect to a mental effort metric that classifies stakeholders according to the mental difficulty of executing the prototype and the articulatory and semantic problems, and measures their commitment and motivation.
Keywords: Requirements engineering; Requirements validation; Action Theory; Prototype; Cognitive Approach.
Approach to Terrain Pretreatment for the Yazidang Reservoir Based on Image Processing
by Lingxiao Huang
Abstract: Terrain pretreatment based on image processing for the Yazidang Reservoir is presented. The image processing is based on image stitching and image edge detection. The SURF algorithm is used to extract feature points of partial reservoir images. To match feature points and stitch the partial images, the wide-view reservoir image is stitched using the KD-tree algorithm and a linear gradient fusion algorithm. Improved mathematical morphology is adopted, and structural elements of five different shapes and four different scales are implemented to obtain a precise waterfront edge of the reservoir image. A *.dxf file is obtained by using CAD software to depict the reservoir shore contours; it is transformed into *.kml to extract the latitude and longitude information of the reservoir waterfront edge. A satisfactory mesh is generated by using the Delaunay triangulation algorithm. The result shows that accurate and detailed reservoir terrain can be obtained using the image processing method and the GE software, which provides a prerequisite for the numerical simulation of sedimentation in the Yazidang Reservoir.
Keywords: terrain pretreatment; image processing; SURF algorithm; KD-tree algorithm; linear gradient fusion algorithm; mathematical morphology; Delaunay triangulation.
A novel imbalanced data classification algorithm based on fuzzy rule
by Zhiying Xu, YiJiang Zhang
Abstract: The classification of imbalanced data can increase the comprehensibility and expansibility of data and improve the efficiency of data classification. The accuracy of classification is poor when imbalanced big data are classified by current methods. To this end, this paper presents an imbalanced data classification algorithm based on fuzzy rules. The algorithm first collects the imbalanced data, selects the features of the imbalanced data, and optimises the imbalanced data classification by using a fuzzy rule classification algorithm. The experimental results show that when the classifier maintains a weak classifier of a certain size, the classification accuracy of the proposed algorithm gradually improves as the training time increases and gradually stabilises within a certain range; thus the method can improve the accuracy of imbalanced data classification.
Keywords: imbalanced data; data feature selection; data classification.
A Personalized Recommendation Algorithm Based on Probabilistic Neural Networks
by Long Pan, Jiwei Qin, Liejun Wang
Abstract: Collaborative filtering is widely used in recommendation systems. Our work is motivated by the observation that users are caught in an attention relationship network, and their opinions about items are directly or indirectly affected by others through such a network. Based on the behaviours of users with similar interests, the technique focuses on the use of their opinions to recommend items. Therefore, the quality of the similarity measure between users or items has a great impact on the accuracy of recommendation. This paper proposes a new recommendation algorithm with a graph-based model. The similarity between two users (or two items) is computed from the connections on a graph whose nodes are users and items. The computed similarity measure is then fed to probabilistic neural networks to generate predictions. The model is evaluated on a recommendation task which suggests which videos users should watch based on what they watched in the past. Our experimental results on the YouKu and Epinions datasets demonstrate the effectiveness of the presented approach in comparison with both collaborative filtering with traditional similarity measures and simple graph-based methods; by further improving user satisfaction, our approach can better improve the overall recommendation performance in precision, recall and coverage.
Keywords: recommendation system; similarities; graphs-based approach; collaborative filtering; probabilistic neural networks.
Temporal Impact Analysis and Adaptation for Service-Based Systems
by Sridevi Saralaya, Rio D'Souza, Vishwas Saralaya
Abstract: Temporality is an influential aspect of Service-Based Systems (SBSs). The inability of a service to meet time requirements may lead to violation of Service-Level Agreements (SLAs) in an SBS. Such non-conformity by a service may introduce temporal inconsistency between dependent services and the composition. The temporal impact of the anomaly on related services and on the composition needs to be identified if SLA violations are to be rectified. Existing studies concentrate on impact analysis due to web service evolution or changes to a web service. There is a huge lacuna regarding studies on the impact of time delays on the temporal constraints of dependent services and the obligations of the business process. Although reconfiguration of an SBS to overcome failures is extensively addressed, reconfiguration triggered by temporal delay is not well explored. In this study we try to fill the gap between reconfiguration and impact analysis invoked due to temporal violations. Once the impacted region and the amount of temporal deviation to the business process are known, we attempt recovery by localizing the reconfiguration of services to the impacted zone.
Keywords: Service-Based Systems; Impact-Analysis; Proactive Adaptation; Reactive Adaptation; Reconfiguration; Cross-Layer Adaptation; SLA violation handling; Anomaly handling; Service-Based Applications.
Considering the environment's characteristics in wireless networks simulations: case of the simulator NS2 and the WSN
by Abdelhak EL MOUAFFAK, Abdelbaki EL BELRHITI EL ALAOUI
Abstract: Recently, wireless networks, particularly Wireless Sensor Networks (WSNs), have come to occupy an important place in several application areas due to progress in the microelectronics and wireless communications domains. Thus, a number of studies have addressed this issue in order to broaden the possibilities offered by these networks and circumvent the problems encountered. Testing any new solution is an essential phase to validate its performance. This phase is done in network simulators, of which NS is the most used. The impact of the physical layer and of the radio signal propagation environment on simulation results is indisputable. In this context, and after presenting and classifying the radio propagation models, we study in detail the models implemented in NS-2. The focus is on the ability of these models to consider the characteristics of the wireless network deployment environment (e.g. nature, position and mobility of obstacles). To account for the specificities of WSNs, the effect of other parameters (e.g. antenna height) is also discussed.
Keywords: Wireless network; Wireless sensor network; Simulation; Network Simulator; NS-2; Radio propagation model; Deployment environment.
Improved Biometric Identification System Using a New Scheme of 3D Local Binary Pattern
by KORICHI Maarouf, Meraoumia Abdallah, Aiadi Kamal Eddine
Abstract: In any computer vision application, the integration of a relevant feature extraction module is vital to help in making an accurate classification decision. In the literature, several methods that have achieved promising results and high accuracies are based on texture analysis. Thus, there exist various feature extraction techniques to describe the texture information; among them, the Local Binary Pattern (LBP) is widely used to characterize the image sufficiently. Generally, the LBP descriptor and its variants are applied to gray-scale images. Thus, in this paper, we propose a new method that can be applied to any type of image, whether grayscale, color, multispectral or hyperspectral. It is a new scheme of 3D Local Binary Pattern. We have developed a biometric system for person identification and an edge detection technique to evaluate it. The obtained results show that it has higher performance compared to other methods developed in the literature in terms of identification rates.
Keywords: Feature extraction; Local Binary Pattern (LBP); Biometrics; Person identification; Palmprint; Data fusion.
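For reference, the classic 2-D LBP that the proposed 3D scheme generalises computes, for each pixel, a byte of threshold comparisons against its eight neighbours; a minimal sketch:

```python
import numpy as np

def lbp_image(img):
    """Classic 8-neighbour LBP on a 2-D gray-scale array: each interior
    pixel gets a byte whose bits record which neighbours are >= the
    centre.  (The paper's 3-D scheme extends this idea across bands.)"""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]                      # interior (centre) pixels
    # neighbour window offsets, enumerated clockwise from the top-left
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= (nb >= c).astype(np.int32) << bit
    return code

img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]])
print(lbp_image(img))  # single interior pixel, centre value 50
```

The histogram of these codes over an image (or image block) is the texture descriptor normally fed to the classifier.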
An Agricultural Data Storage Mechanism Based on HBase
by Changyun Li, Qingqing Zhang, Pinjie He, Zhibing Wang, Li Chen
Abstract: With the development of agricultural space-time localization, sensor networks and cloud computing, the amount of agricultural data is increasing rapidly and the data structure is becoming more complicated and changeable. Currently, the widely used agricultural database is the relational database. This kind of database handles a large amount of data with very limited throughput and is not suitable for the organization and management of distributed data. HBase is a non-relational, distributed-file-storage database built on the Hadoop platform; it is suitable for storing unstructured data and can handle large volumes of data with high scalability. To better store agricultural big data in HBase, we propose a special agricultural data buffer structure which stores data based on the data category, and a two-level indexing memory organization strategy on HBase. The proposed method saves more than a quarter of the time compared to traditional buffering methods. Experimental results show the higher efficiency of the agricultural data buffer structure and memory organization strategy.
Keywords: agricultural big data; data buffer structure; HBase; two-level indexing strategy.
A Real-time Multi-Agent System for Cryptographic Key Generation using Imaging-based Textural Features
by Jafar Abukhait, Ma'en Saleh
Abstract: Traditional network security protocols depend on exchanging security keys between the network nodes, thus exposing the network to different classes of security threats. In this paper, a multi-agent system for cryptographic key generation is proposed for real-time networks. The proposed technique generates a 256-bit security key for the Advanced Encryption Standard (AES) algorithm using the textural features of digital images. By implementing this key generation technique at both the sender and receiver nodes, the process of exchanging security keys through the network is eliminated, thus making communication between network nodes robust against different security threats. Simulation results over a real-time network show the efficiency of the proposed system in reducing the overhead of the security associations performed by the IPsec protocol. Furthermore, the proposed agent-based system shows high efficiency in guaranteeing the quality of service (QoS) of real-time requests in terms of miss ratio and total average delay by applying the best scheduling algorithm.
Keywords: Cryptography; Textural Features; Gray Level Co-occurrence Matrix (GLCM); Advanced Encryption Standard (AES); QoS; Security Key; Network Node.
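The idea of deriving a shared 256-bit AES key from image texture, so that no key ever crosses the network, can be sketched by hashing a few GLCM statistics (the feature set and encoding here are assumptions, not the paper's exact construction):

```python
import hashlib
import numpy as np

def glcm(img, levels=8):
    """Gray level co-occurrence matrix for a horizontal offset of 1."""
    m = np.zeros((levels, levels), dtype=np.int64)
    for row in img:
        for a, b in zip(row[:-1], row[1:]):
            m[a, b] += 1
    return m

def key_from_image(img):
    """Derive a 256-bit key by hashing simple GLCM texture statistics."""
    p = glcm(img).astype(float)
    p /= p.sum()
    i, j = np.indices(p.shape)
    contrast = float(((i - j) ** 2 * p).sum())
    energy   = float((p ** 2).sum())
    homogen  = float((p / (1.0 + np.abs(i - j))).sum())
    feats = f"{contrast:.6f}|{energy:.6f}|{homogen:.6f}"
    return hashlib.sha256(feats.encode()).digest()   # 32 bytes = 256 bits

img = np.array([[0, 1, 2, 3], [3, 2, 1, 0], [0, 0, 1, 1]])
key = key_from_image(img)
print(len(key) * 8)  # 256
```

Any two nodes holding the same image derive the same 32-byte digest, so no key exchange is needed; robustness then hinges on both sides computing identical feature values.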
Modeling of learner behavior in massive open online video-on-demand services
by Ji-Wei Qin, Xiao Liu
Abstract: Video-on-demand (VoD) service, as a popular Internet application, provides lively learning resources: learners can freely select and watch the videos that interest them in massive open online education. Learner video-on-demand behaviour, as feedback showing preferences among learners, can help video providers design, deploy and manage learning videos in massive open online VoD services. In this paper, we collected learner video-on-demand behaviour reports over 875 days; on the basis of these real-world data, a learner video-on-demand model for massive open online VoD services is presented. Three main findings are proposed: 1) educational video popularity matches the stretched exponential model better than the Zipf model; 2) long-session educational videos tend to be less popular; 3) the Poisson distribution is the best fit for learner arrivals in massive open online VoD services. The educational video popularity distribution is helpful for deciding the number of copies of an educational video file to deploy on a video-on-demand server. The session and arrival patterns are helpful for designing the content of educational videos in massive open online VoD services.
Keywords: learner behavior; video-on-demand; massive open online.
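Finding 1) can be checked on synthetic data by fitting both rank laws on their linearising axes and comparing goodness of fit (the stretch factor and popularity curve below are illustrative, not the paper's data):

```python
import numpy as np

def fit_r2(x, y):
    """Least-squares line y = a + b*x, returning the R^2 of the fit."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    return 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

# Synthetic popularity following a stretched exponential rank law:
# views(r)^c is linear in log r (stretch factor c assumed to be 0.5).
c = 0.5
rank = np.arange(1, 201, dtype=float)
views = (30.0 - 3.0 * np.log(rank)) ** (1.0 / c)

r2_zipf = fit_r2(np.log(rank), np.log(views))   # Zipf: log-log line
r2_se   = fit_r2(np.log(rank), views ** c)      # SE: views^c vs log r
print(round(r2_se, 4), round(r2_zipf, 4))
```

On real traces the comparison works the same way: whichever transformation yields the straighter line (higher R^2) is the better-fitting popularity model.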
Research on equalization scheduling algorithm for network channel under the impact of big data
by Zheng Yu, Dangdang Dai, Zhiyong Zha, Yixi Wang, Hao Feng
Abstract: In order to improve the equalization scheduling ability of network channels, an equalization scheduling algorithm for big data network communication based on baud-spaced equalization and decision feedback modulation technology is proposed in this paper. With this algorithm, a model of the network communication channel under the impact of big data is constructed to analyse the multipath characteristics of the channel. A coherent multipath channel modulation method is used to filter intersymbol interference, and adaptive baud-spaced equalization technology is used to design the channel equalizer. A tap delay line model of the channel is used for multipath suppression, and decision feedback modulation is used for network channel equalization scheduling to overcome the phase shift caused by the impact of big data on the channel and to improve channel equalization. Simulation results show that when the proposed algorithm is used for network channel equalization scheduling, the fidelity of the symbols output through network communication is good, the bit error rate is low, and the performance of network channel equalization scheduling under the impact of big data and multipath is good, which improves the robustness of the network channel.
Keywords: big data; multipath effect; network channel; equalization scheduling; baud space; modulation.
Study on high accurate localization using multipath effects via physical layer information
by Yajun Zhang, Yawei Chen, Hongjun Wang, Meng Liu, Liang Ma
Abstract: Device-free passive localization (DFL) has shown great potential for localizing target(s) that do not carry any device in an area of interest (AoI). It is especially useful for many applications, such as hostage rescue, wildlife monitoring, elder care and intrusion detection. Current RSS (received signal strength) based DFL approaches, however, work only under the prerequisite that the collected signal mainly travels along a direct line-of-sight (LOS) path, and cannot perform well in a typical indoor building with multipath effects. This paper presents a fine-grained CSI-based localization system that is effective in multipath environments and non-line-of-sight scenarios. The intuition underlying our design is that CSI (Channel State Information) benefits from the multipath effect, because the received signal measurements at different sampling positions are combinations of different CSI measurements. We adapt an improved maximum likelihood method to pinpoint a single target's location. Finally, we propose a prototype of our design utilising commercial IEEE 802.11n NICs. Results from experiments in a lobby and a laboratory of our university, compared to RSS-based and CSI-based schemes, demonstrate that our design locates a single target best, with an accuracy of 0.95 m.
Keywords: device-free passive localization; channel state information; improved maximum likelihood method.
A fast particle filter pedestrian tracking method based on color, texture and corresponding space information
by Yang Zhang, Dongrong Xin
Abstract: A fast particle filter pedestrian tracking method based on color, texture and corresponding space information is proposed. In this algorithm, we first extract the space information of the target pedestrian and decompose it into three local regions. We then employ an improved texture and color information extraction algorithm to obtain the joint texture and color information from the corresponding sub-regions. Finally, we determine the position of the target by a color-texture similarity indicator based on space division, and obtain an accurate tracking result. Since the multi-thread information fusion algorithm needs a larger number of particles, which reduces computational efficiency, a wave integral histogram algorithm is proposed to improve the arithmetic speed. Experiments carried out on videos indicate the effectiveness and efficiency of the proposed method, which achieves higher accuracy than two other state-of-the-art algorithms in actual traffic scenes, while real-time performance is also improved considerably.
Keywords: pedestrian tracking; particle filter; integral histogram; texture information.
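An integral histogram in the spirit of the speed-up above precomputes one integral image per bin, so the histogram of any rectangular sub-window costs only four lookups per bin, regardless of window size; a minimal sketch:

```python
import numpy as np

def integral_histogram(img, bins=4, vmax=256):
    """Per-bin integral images: H[b] holds the 2-D cumulative count of
    pixels falling in bin b (with a zero top row/left column pad)."""
    img = np.asarray(img)
    idx = (img * bins // vmax).clip(0, bins - 1)    # quantise to bins
    H = np.zeros((bins, img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    for b in range(bins):
        H[b, 1:, 1:] = np.cumsum(np.cumsum(idx == b, axis=0), axis=1)
    return H

def region_hist(H, top, left, bottom, right):
    """Histogram of img[top:bottom, left:right] via 4 corner lookups/bin."""
    return (H[:, bottom, right] - H[:, top, right]
            - H[:, bottom, left] + H[:, top, left])

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(6, 8))
H = integral_histogram(img)
h = region_hist(H, 1, 2, 5, 7)    # histogram of a 4x5 sub-window
print(h, h.sum())                 # bin counts sum to the 20 pixels
```

In tracking, this lets every candidate particle window query its color/texture histogram in constant time instead of rescanning pixels.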
Velocity Monitoring Signal Processing Method of Track Traffic Based on Doppler Effect
by Xiaojuan Hu, Tie Chen, Nan Zhao
Abstract: In order to improve the monitoring efficiency of track traffic speed, a signal processing method based on the Doppler effect is proposed to meet the accurate velocity measurement needs of high-speed trains. Accordingly, the track traffic radar monitoring signal processing method based on the Doppler effect is studied. First of all, the Doppler effect and Doppler principles are analysed. Then, the research focuses on the signal processing algorithm of the rail traffic radar system, and an improved real-sequence FFT (fast Fourier transform) imaginary-part arithmetic algorithm is put forward, simplifying the FFT computation. Finally, through MATLAB simulation, the spectrum analysis effect of the improved FFT algorithm is further verified. The test results show that the improved FFT algorithm maintains measurement accuracy while increasing the Doppler frequency calculation speed. In addition, it can meet the processing requirements of the rail traffic radar velocity measurement system for track traffic velocity monitoring signals.
Keywords: Signal processing; Doppler effect; Track traffic; FFT algorithm.
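One standard real-sequence FFT economy of the kind the abstract alludes to packs two real sequences into the real and imaginary parts of a single complex FFT and separates the two spectra by conjugate symmetry (a generic sketch, not necessarily the paper's exact algorithm):

```python
import numpy as np

def two_real_ffts(x, y):
    """Compute the FFTs of two real sequences with one complex FFT by
    packing them as z = x + i*y and separating via conjugate symmetry:
    X[k] = (Z[k] + Z*[N-k]) / 2,  Y[k] = (Z[k] - Z*[N-k]) / (2i)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    Z = np.fft.fft(x + 1j * y)
    Zr = np.conj(Z[(-np.arange(n)) % n])   # Z*[N-k], index 0 maps to 0
    X = 0.5 * (Z + Zr)
    Y = -0.5j * (Z - Zr)
    return X, Y

rng = np.random.default_rng(0)
x, y = rng.normal(size=16), rng.normal(size=16)
X, Y = two_real_ffts(x, y)
print(np.allclose(X, np.fft.fft(x)), np.allclose(Y, np.fft.fft(y)))
```

For a Doppler radar this roughly halves the transform work per pair of real sample blocks, which is the kind of saving an embedded velocity processor benefits from.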
Research and Application on Logistics Distribution Optimization Problem using Big Data Analysis
by Yuming Duan, Hai-tao FU
Abstract: The optimization of logistics distribution center location is discussed in a big data environment. The features of logistics distribution and the algorithm design idea are presented using the MapReduce platform integrated with a data mining clustering algorithm. A clustering analysis algorithm based on geodesic distance is proposed, combined with the features of logistics, along with a parallel algorithm design and an improved scheme based on MapReduce. Considering that in real situations there is no straight-line distance between nodes, while the Dijkstra distance can measure the actual distance between two points, a GDK-means clustering algorithm is put forward. The improved clustering algorithm is parallelized to process the large amount of unstructured data in logistics big data. In the parallel design, the parallelized algorithm is further improved, taking into account algorithm complexity and time efficiency. Our clustering algorithm can be applied to the logistics distribution center location problem. It is verified to provide a decision scheme for logistics route optimization in the logistics distribution chain according to the granularity of the space division.
Keywords: logistics distribution; center location; k-means; geodesic distance; big data.
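A serial (non-MapReduce) sketch of GDK-means-style clustering, using Dijkstra shortest-path distance in place of straight-line distance and k-medoids as a stand-in for k-means (the road network, seeding and sizes below are illustrative assumptions):

```python
import heapq

def dijkstra(graph, src):
    """Shortest road distances from src over a weighted adjacency dict."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def k_medoids(graph, k, iters=10):
    """k-medoids under Dijkstra (geodesic) distance: medoids play the
    role of candidate distribution centres on the road network."""
    nodes = sorted(graph)
    D = {u: dijkstra(graph, u) for u in nodes}   # all-pairs road distance
    medoids = nodes[:k]                          # deterministic seeding
    for _ in range(iters):
        clusters = {m: [] for m in medoids}
        for u in nodes:                          # assign to nearest medoid
            clusters[min(medoids, key=lambda m: D[m][u])].append(u)
        new = [min(c, key=lambda cand: sum(D[cand][u] for u in c))
               for c in clusters.values()]       # recentre each cluster
        if sorted(new) == sorted(medoids):
            break
        medoids = new
    return medoids, clusters

# Hypothetical road network: two town clusters joined by one long road.
g = {"a": {"b": 1, "c": 1}, "b": {"a": 1, "c": 1},
     "c": {"a": 1, "b": 1, "d": 10},
     "d": {"c": 10, "e": 1, "f": 1}, "e": {"d": 1, "f": 1},
     "f": {"d": 1, "e": 1}}
medoids, clusters = k_medoids(g, 2)
print(sorted(medoids))
```

Under straight-line distance the long connecting road would be invisible; the geodesic metric correctly splits the network into its two road-connected towns.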
Efficient Scheduling of Power Communication Resources Based on Fuzzy Weighted Constraint Equalization
by Jinkun Sun, Kang Yang
Abstract: Resources in a power communication system are affected by crosstalk between power grid subnets during scheduling, resulting in poorly equalized resource allocation. In order to improve the equalization of resource allocation in the power communication system, an efficient scheduling method for power communication system resources based on fuzzy weighted constraint equalization is proposed. In this method, a nonlinear time series analysis method is used to construct an information flow model of power communication system resource data, and channel equalization control is carried out on the transmission links of the power communication system. The fuzzy weighted constraint equalization method is used to implement adaptive baud-spaced equalization during resource scheduling, and a fuzzy mesh-based clustering algorithm is used to pair the classification attribute weights of power resources, so as to achieve efficient clustering of resources and to improve link equalization in resource scheduling. Simulation results show that with this method the average data recall rate during resource scheduling reaches about 90% and the equalization curve changes very smoothly, which indicates that the data recall rate and scheduling efficiency are high and the equalization capability of the communication channels is strong.
Keywords: power communication system; resource scheduling; data clustering; channel equalization.
Mechanical fault diagnosis based on digital image processing technology
by Hengqiang Gao, HongJuan CAI
Abstract: Modern mechanical equipment continues to deliver high productivity, but at the same time it has brought many new challenges and problems. It is of great importance to detect and diagnose mechanical faults in time and then provide alarm messages. Therefore, in this paper, we propose a novel mechanical fault diagnosis method based on digital image processing technology. First, the overall structure of the mechanical equipment fault dynamic motion detection system is illustrated; infrared image segmentation is the key part of this system. Second, we convert the infrared image segmentation problem into a clustering problem and provide a novel immune neural network clustering algorithm. Finally, experimental results demonstrate that the proposed algorithm can detect mechanical faults with high accuracy.
Keywords: Mechanical fault diagnosis; Infrared image; Image segmentation; Immune neural network; Fitness value.
Performance Analysis of Distributed Transmit Beamforming in Presence of Oscillator Phase Noise
by Ding Yuan, Chuanbao Du, Houde QUAN
Abstract: This paper analyses the performance of distributed transmit beamforming (DTBF) in the presence of oscillator phase noise. The average beampattern of an arbitrary array is adopted as the performance index. In the analysis, the phase noise process of each node's oscillator is described by a stationary model that takes the shape of the single-sideband (SSB) spectrum into account. Using a non-parametric kernel method, the average beampattern in the presence of phase noise is derived and the corresponding beampattern characteristics are evaluated. Theoretical analysis and simulation results show that the accumulated phase noise degrades DTBF performance, which is also reflected in changes of beampattern characteristics such as the 3 dB width, the 3 dB sidelobe region and the average directivity. As the time duration increases, the cumulative phase noise becomes greater and the degradation more obvious.
Keywords: distributed transmit beamforming; average beampattern; phase noise; non-parametric kernel method; oscillator; performance analysis.
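The degradation mechanism described above can be illustrated with a small Monte Carlo sketch. This is not the paper's kernel-based derivation: it assumes a simplified i.i.d. zero-mean Gaussian phase error per node and estimates the normalized mainlobe gain, which is 1 under perfect coherence and decays as the phase noise grows.

```python
import cmath
import random

def avg_beamforming_gain(n_nodes, phase_std, trials=2000, seed=1):
    """Monte Carlo estimate of the normalized average mainlobe gain
    E[|sum_i e^{j phi_i}|^2] / N^2, where each node's carrier phase is
    perturbed by zero-mean Gaussian noise with std `phase_std` (rad)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = sum(cmath.exp(1j * rng.gauss(0.0, phase_std)) for _ in range(n_nodes))
        total += abs(s) ** 2
    return total / (trials * n_nodes ** 2)

# With no phase noise the gain is exactly 1 (perfect coherence);
# it decays toward 1/N as the phase noise grows.
print(avg_beamforming_gain(16, 0.0))         # 1.0
print(avg_beamforming_gain(16, 0.3) < 1.0)   # True
```

Under this Gaussian assumption the expected un-normalized gain is N + N(N-1)e^{-sigma^2}, which the simulation approaches as the number of trials grows.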
The Research of Image Super-Resolution Algorithm Using Convolutional Sparse Coding
by Bin Wang, Jun Deng, YanJing Sun
Abstract: For super-resolution image reconstruction with convolutional sparse coding, a novel super-resolution reconstruction algorithm, the four-channel convolutional sparse coding method, is proposed by improving the convolutional sparse coding method. In the proposed method, a test image is fed into four channels by rotating the image ninety degrees four times. Then, the high-frequency and low-frequency parts are reconstructed by the convolutional sparse coding method and cubic interpolation, respectively. Finally, the reconstructed high-resolution image is obtained by weighting the four images. The proposed method not only overcomes the consistency problem of overlapping patches, but also improves the detail contours of the reconstructed image and enhances its stability. The experimental results show that the proposed method achieves better PSNR, SSIM and noise immunity than some classical super-resolution reconstruction methods.
Keywords: Image Reconstruction; Super-Resolution; Convolutional Sparse Coding; Four-Channel; Stability.
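The four-channel idea, stripped of the sparse-coding step, can be sketched as follows. The `enhance` callback is a placeholder for the per-channel reconstruction (convolutional sparse coding in the paper); it defaults to the identity here, so the sketch only shows the rotate, process, unrotate, average pipeline.

```python
def rot90(img):
    """Rotate a 2D list-of-lists image 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

def four_channel_average(img, enhance=lambda p: p):
    """Rotate the image by 0/90/180/270 degrees, apply a per-channel
    enhancement (identity placeholder here), undo each rotation, and
    average the four aligned results pixel by pixel."""
    h, w = len(img), len(img[0])
    acc = [[0.0] * w for _ in range(h)]
    for k in range(4):
        chan = img
        for _ in range(k):            # rotate into channel k's orientation
            chan = rot90(chan)
        chan = [[enhance(p) for p in row] for row in chan]
        for _ in range((4 - k) % 4):  # rotate back to the original frame
            chan = rot90(chan)
        for i in range(h):
            for j in range(w):
                acc[i][j] += chan[i][j] / 4.0
    return acc

print(four_channel_average([[1, 2], [3, 4]]))   # [[1.0, 2.0], [3.0, 4.0]]
```

With the identity enhancement the pipeline reproduces the input, confirming the four channels are re-aligned correctly before weighting.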
Three-dimensional Dynamic Tracking Learning Algorithm for Pedestrians on Indefinite Shape Base Based on Deep Learning
by Yaomin Hu
Abstract: In order to improve three-dimensional dynamic tracking and recognition of pedestrians, a three-dimensional dynamic tracking learning algorithm for pedestrians on an indefinite shape base, based on deep learning, is proposed in this paper. First, the indefinite shape base mesh of body imaging is segmented to extract the three-dimensional dynamic similarity features of pedestrians, and the three-dimensional feature points are marked; the deep learning method is adopted to fuse grey pixel values and extract difference features from images during three-dimensional dynamic tracking. Then a motion vector library is constructed from the extraction results, and the template matching equation of the three-dimensional dynamic feature points of pedestrians is obtained. The simulation results show that this method can accurately track moving bodies in three-dimensional dynamic tracking and recognition, and provides good robustness in moving body target extraction, with accuracy of up to 100% and a maximum detection time of 48.83 ms.
Keywords: indefinite shape base; pedestrian; three-dimensional dynamic tracking; deep learning; image.
A 3D model retrieval method based on multi-feature fusion
by Hong Tu
Abstract: 3D model retrieval is a hot topic in information retrieval, and fusing multiple features of 3D models is of great importance for high-quality retrieval. Therefore, in this paper, we propose a novel 3D model retrieval method based on multi-feature fusion technology. The motivation for the proposed method lies in converting the 3D model retrieval problem into a discriminative feature space mapping problem. The framework of the multi-feature fusion based 3D model retrieval system contains two main modules: 1) model normalization, and 2) multi-feature fusion. The proposed method is designed based on multiple feature fusion and online projection learning. To effectively fuse multiple features, we train a model to learn a low-dimensional and discriminative feature space from the multiple views of 3D models. In particular, to effectively retrieve newly added samples, we propose an online projection learning algorithm, which learns a projection matrix by solving a least squares regression model. Experimental results show that the proposed method achieves higher precision for a given recall than other methods; that is, it obtains higher quality 3D model retrieval results than state-of-the-art methods.
Keywords: 3D model retrieval; Multi-feature fusion; Visual feature; Eigenvalue decomposition; Projection matrix.
Dual control algorithms for fault diagnosis of stochastic systems with unknown parameters
by Jinkun Sun, Kang Yang
Abstract: This paper investigates fault diagnosis for stochastic systems characterized by slowly changing, unknown parameters. It puts forward the concept of a logic parameter decay rate based on a threshold, and designs a rolling control algorithm that learns the control law based on Kalman filtering theory and the LQG control law. While this algorithm estimates the parameters, the parameter learning gives the system good fault tolerance, enabling more accurate fault detection and isolation. The simulation results verify the effectiveness of the proposed method.
Keywords: Dual control; Parameter decay rate; Kalman filter; LQG control; Rolling control algorithm.
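As a minimal illustration of the estimation side of such a scheme (a sketch only; the paper couples this with an LQG control law and a rolling horizon), a scalar Kalman filter can track a slowly varying unknown parameter from noisy observations:

```python
import random

def kalman_step(x_est, p_est, z, a=1.0, q=0.01, r=0.1):
    """One predict/update cycle of a scalar Kalman filter for the model
    x' = a*x + w (process noise var q), z = x + v (measurement var r)."""
    x_pred = a * x_est
    p_pred = a * a * p_est + q
    k = p_pred / (p_pred + r)           # Kalman gain
    x_new = x_pred + k * (z - x_pred)   # correct with the innovation
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Track a constant parameter theta = 2.0 from noisy measurements.
rng = random.Random(0)
x, p = 0.0, 1.0
for _ in range(200):
    z = 2.0 + rng.gauss(0.0, 0.1)
    x, p = kalman_step(x, p, z)
print(round(x, 1))   # converges near 2.0
```

The residual `z - x_pred` is the same innovation a fault detector would threshold: a persistent large innovation signals that the assumed parameter model no longer fits.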
The Research of Multimedia Dynamic Image 3d Adaptive Rendering Method
by Su-ran KONG, Jun-ping YIN
Abstract: 3d adaptive rendering of multimedia dynamic images is conducive to improving image quality. The current method renders multimedia dynamic images by geometric-information scenario modelling, whose rendering efficiency is low. To solve this problem, a 3d adaptive rendering method based on OGRE is presented in this paper. The method first uses the compressed domain to correctly segment the image, then classifies image characteristics using the SIFT and Forstner operators, and finally completes 3d adaptive image rendering according to the image array. The experimental results show that this method has obvious advantages in classification time, rendering energy consumption, segmentation efficiency and other aspects in comparison with other methods, which fully demonstrates that it improves the rendering efficiency of images.
Keywords: multimedia; dynamic image; 3d adaptive; rendering methods.
Automated Random Color Keypad
by Kumar Abhishek, Manish Kumar Verma, M.P. Singh
Abstract: In the early 1970s, the automated teller machine came into existence as a replacement for cash counters at banks. People could now make transactions 24x7 with ease. But as the ATM expanded its reach into human life, crimes related to ATM theft and fraud increased exponentially. There are flaws in ATM security which are exploited by criminals. ATM card cloning, card skimming and ATM pin theft are a few of the most common ATM-related crimes, and they result in losses measured in billions. The newspapers are full of these crime reports and hacks. In this paper, we propose an automated random colour (ARC) keypad by which ATM security can be enhanced, making these ATM attacks and frauds very difficult. The ARC keypad acts as a secure replacement for the traditional ATM keypad. Several experimental results for the ARC keypad are included in this paper, which prove that our mechanism enhances ATM pin security and greatly reduces the chances of fraud.
Keywords: automated teller machine; ATM; ATM keypad; ATM cloning; ATM skimming.
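One plausible reading of a random colour keypad (a sketch under our own assumptions, not necessarily the authors' exact design) is that each session deals a fresh random digit-to-colour mapping, so an observer who sees only key positions, or only colours, learns nothing stable about the PIN:

```python
import random

DIGITS = "0123456789"
COLORS = ["red", "green", "blue", "yellow", "orange",
          "purple", "cyan", "magenta", "brown", "grey"]

def deal_keypad(rng=random):
    """Assign each digit a random, non-repeating colour for this session.
    Because the mapping changes every session, a shoulder-surfer who
    memorizes the pressed positions cannot replay the PIN later."""
    colors = COLORS[:]
    rng.shuffle(colors)
    return dict(zip(DIGITS, colors))

layout = deal_keypad(random.Random(42))
print(sorted(layout.values()) == sorted(COLORS))   # True: every colour used once
```

The ten colours and the session-based reshuffle are our illustrative assumptions; the key property is only that the mapping is a fresh random permutation each time.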
NFK: a Novel Fault-tolerant K-Mutual Exclusion Algorithm for Mobile and Opportunistic Ad hoc Networks
by Tahar Allaoui, Mohamed Bachir Yagoubi, Chaker Abdelaziz Kerrache, Carlos T. Calafate
Abstract: This paper presents a fault-tolerant algorithm ensuring multiple-resource sharing in Mobile Ad hoc Networks (MANETs) that is able to handle the well-known k-mutual exclusion problem in such mobile environments. The proposed algorithm relies on a token-based strategy, and requires information about resources and their use to be carried in routing protocol control messages. This way, our solution avoids any additional exchange of messages. Furthermore, experimental results show that it offers a fast response time. Moreover, we introduce a dual-layer fault-tolerance mechanism that tolerates the faults of several sites at the same time without affecting the proper functioning of the system. Simulation results also evidence the high efficiency of our proposal, which achieves reduced overhead and response delay even in critical situations where multiple simultaneous faults occur.
Keywords: NFK; Resource sharing; K-Mutual Exclusion; Fault tolerance; MANETs.
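The k-mutual exclusion invariant itself is simple: at most k sites hold the resource at once, and further requesters queue until a token is released. The toy manager below shows only that invariant as a centralized sketch; the paper's contribution is maintaining this state in a distributed, fault-tolerant way by piggybacking it on routing messages.

```python
from collections import deque

class KMutex:
    """Toy k-mutual-exclusion manager: k tokens, a holder set, and a
    FIFO queue of waiting sites (centralized sketch only)."""
    def __init__(self, k):
        self.free = k
        self.holders = set()
        self.waiting = deque()

    def request(self, site):
        if self.free > 0:
            self.free -= 1
            self.holders.add(site)
            return True               # token granted immediately
        self.waiting.append(site)
        return False                  # queued until a release

    def release(self, site):
        self.holders.discard(site)
        if self.waiting:              # hand the token straight to the next waiter
            self.holders.add(self.waiting.popleft())
        else:
            self.free += 1

m = KMutex(2)
print(m.request("A"), m.request("B"), m.request("C"))   # True True False
m.release("A")
print("C" in m.holders)                                 # True
```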
GIS Information Feature Estimation Algorithm Based on Big Data
by Chunyang Lu, Feng WEN
Abstract: In order to improve the data mining and information scheduling capabilities of geographic information systems (GIS), it is necessary to optimize GIS information feature estimation and perform GIS information feature extraction, so a GIS information feature estimation algorithm based on big data analysis is proposed. In this algorithm, piecewise linear estimation is adopted to reconstruct feature data in the GIS information database in groups, associated information fusion is applied to the GIS data in the database, and adaptive scheduling of the GIS information feature database is performed through a cascaded distributed scheduling method. According to the spatial distribution of geographic information, the cluster centres are adjusted by vectors, frequent item mining is adopted to extract features of the GIS information, and the extracted GIS information features are then processed sequentially; regularized power density spectrum estimation is adopted to perform unbiased estimation of the GIS information feature data. Simulation results show that the proposed method provides GIS information feature estimation with low bias and high accuracy, and therefore good GIS information scheduling capability and precision.
Keywords: big data; GIS; information feature estimation; associated information fusion.
Uncertain chance-constrained programming based on optimistic and pessimistic values: models and solutions
by Yao Qi, Ying Wang, Xiangfei Meng, Ning Wang
Abstract: To handle the uncertainty in real decisions and overcome the limitations of random programming and fuzzy programming in application, we propose two novel uncertain chance-constrained programming models based on the optimistic and pessimistic values of uncertain variables. First, the optimistic and pessimistic values of uncertain variables were introduced as the objective functions, and the chance constraints of uncertain programming were defined as constraint functions; the optimistic value model and pessimistic value model were then established. Second, two lemmas were proposed and proved to transform the uncertain chance-constrained programming model into an equivalent deterministic programming model. Finally, the feasibility and effectiveness of the proposed models and solutions were verified by a numerical example.
Keywords: uncertain programming; chance-constrained; optimistic value; pessimistic value; equivalent deterministic model.
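For a concrete instance of the objective functions involved, consider the standard closed forms (from uncertainty theory, not reproduced from the paper) of the optimistic and pessimistic values of a linear uncertain variable L(a, b):

```python
def optimistic_value(a, b, alpha):
    """alpha-optimistic value of a linear uncertain variable L(a, b):
    sup{ r : M{xi >= r} >= alpha }  =  alpha*a + (1 - alpha)*b."""
    return alpha * a + (1 - alpha) * b

def pessimistic_value(a, b, alpha):
    """alpha-pessimistic value of L(a, b):
    inf{ r : M{xi <= r} >= alpha }  =  (1 - alpha)*a + alpha*b."""
    return (1 - alpha) * a + alpha * b

# For xi ~ L(0, 10) at confidence 0.9:
print(round(optimistic_value(0, 10, 0.9), 6))    # 1.0 (90% sure xi exceeds this)
print(round(pessimistic_value(0, 10, 0.9), 6))   # 9.0 (90% sure xi stays below this)
```

Maximizing the optimistic value gives an aggressive model, while maximizing the pessimistic value gives a conservative one; this is the gap the paper's two models formalize.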
Multilingual Named Entity Recognition Based on the BiGRU-CNN-CRF Hybrid Model
by Maimaiti AYIFU, Silamu WUSHOUER, Muhetaer PALIDAN
Abstract: Uyghur, Kazak and Kyrgyz (the UKK languages) are agglutinative languages belonging to the Altaic language family, mainly spoken in the Xinjiang Uyghur Autonomous Region of China and in Central Asia; they are low-resource languages with rich morphological features. How to obtain a better general entity recognition method without relying on hand-crafted features and external resources remains an open problem. In this paper, a hybrid neural network model based on a bidirectional GRU (BiGRU)-CNN-CRF architecture is proposed. The model takes as input the concatenation of the prefix/suffix character-level feature vectors captured by the convolutional neural network (CNN) layer, part-of-speech vectors, and word vectors, and constructs a deep BiGRU-CRF network suitable for recognizing UKK named entities. The output of the BiGRU layer is then decoded by the conditional random field (CRF) layer, which takes the dependencies between output tags into account, and the globally optimal labelling sequence is output. The experimental results show that this model solves the problem of automatic recognition of named entities, achieving the best results to date on the UKK data set provided by the laboratory. In addition, the model has good robustness. The F1 values for UKK named entity recognition reached 93.11%, 90.29% and 89.22% for the Uyghur, Kazak and Kyrgyz languages, respectively. The model has been verified on named entity recognition tasks in these three languages and can be extended to other agglutinative languages.
Keywords: Recurrent neural network; convolutional neural network; conditional random field; named entity recognition; Uyghur; Kazak; Kyrgyz.
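The CRF decoding step mentioned above is a Viterbi search over tag sequences. A toy, framework-free sketch (the scores are made up; real emission scores would come from the BiGRU layer and transitions from the trained CRF):

```python
def viterbi(emissions, transitions):
    """Decode the best tag sequence from per-token emission scores
    (list of dicts tag -> score) and pairwise transition scores
    (dict (prev, cur) -> score), as a CRF layer does at inference."""
    tags = list(emissions[0])
    # best[t] = (score of the best path ending in tag t, that path)
    best = {t: (emissions[0][t], [t]) for t in tags}
    for em in emissions[1:]:
        new = {}
        for cur in tags:
            prev = max(tags, key=lambda p: best[p][0] + transitions[(p, cur)])
            score = best[prev][0] + transitions[(prev, cur)] + em[cur]
            new[cur] = (score, best[prev][1] + [cur])
        best = new
    return max(best.values())[1]

# Toy example: an "I" tag must not follow "O"; the transitions enforce it.
em = [{"O": 0.1, "B": 1.0, "I": 0.0},
      {"O": 0.2, "B": 0.1, "I": 0.9},
      {"O": 1.0, "B": 0.0, "I": 0.2}]
tr = {(p, c): (-5.0 if (p == "O" and c == "I") else 0.0)
      for p in "OBI" for c in "OBI"}
print(viterbi(em, tr))   # ['B', 'I', 'O']
```

This is exactly why a CRF on top of the BiGRU helps: the transition scores rule out tag sequences that are individually plausible but jointly invalid.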
Latent Semantic Text Classification Method Research based on Support Vector Machine
by Qingmei Lu, Yulin Wang
Abstract: Text classification, as an important step in network public opinion analysis, directly affects the judgement of textual public opinion, and its accuracy is an important prerequisite for textual public opinion analysis. At present, the commonly used text classification methods mainly rely on clustering and machine learning, and their accuracy is generally not ideal. Text classification based on latent semantics is insensitive to feature dimensionality and simple to apply, so it has become the focus of extensive research. However, as the number of text categories increases, local semantic analysis occurs, causing text classification accuracy to drop. In this paper, a latent semantic classification method based on a support vector machine (LR-LSA) is proposed to solve the local semantic analysis problem caused by too many text categories; it also better mitigates the impact of a surge in feature dimensionality.
Keywords: LSA; Semantics; Vector machine; Machine learning.
Autonomous Multi-target Tracking Technology of Unmanned Surface Vessel Based on Navigation Radar Image
by JiaWei Xia, DeChao Zhou, Xufang Zhu
Abstract: There are three prominent problems in traditional multi-target tracking technologies for unmanned surface vessels: repeated observations and observation omissions due to the unstable reference system of the vessel, the scarcity and low utilization of radar observation point features, and intermittent loss of radar image sequence signals due to environmental interference. To solve these problems, a radar image stitching algorithm based on interpolation, an improved target multi-feature extraction algorithm, and a multi-target track management model based on multi-feature matching are introduced into the architecture of the unmanned surface vessel's multi-target tracking system to improve the timeliness and accuracy of target tracking. The feasibility of this technology has been tested through field experiments on a lake. The results reveal that the proposed method provides better multiple-target tracking ability with lower prediction error than traditional target tracking methods.
Keywords: unmanned surface vessel; navigation radar; multi-target tracking; autonomous system.
MEM: A New Mixed Ensemble Model for Identifying Frauds
by Chen Zhenhua, Jiang WeiLi, Lei Ma, Zhang JunPeng, Hu JinShan, Xiang Yan, Shao DangGuo
Abstract: In the social security system, willful insurance fraud still exists. In this paper, to address the insufficient stability and the randomness of traditional insurance fraud evaluation models, we propose a new classifier called MEM (Mixed Ensemble Model). Based on the principle of ensemble learning, MEM combines several different individual learners and uses the Q statistic to evaluate their diversity. MEM has been tested on two fraud-related datasets against three state-of-the-art classifiers: Neural Network, Naive Bayes and Logistic Regression. The experimental results show that MEM performs better than the other three classifiers on both datasets under four measures: Accuracy, Recall, F-value and Kappa. MEM can be a useful method for the detection of social insurance fraud.
Keywords: Detection of social insurance frauds; Measurement of social insurance frauds; Social insurance identification techniques; Mixed ensemble model.
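The Q statistic used to evaluate diversity is Yule's Q computed over the joint correct/incorrect counts of a classifier pair; a minimal sketch:

```python
def q_statistic(pred_a, pred_b, truth):
    """Yule's Q diversity between two classifiers: n11 = both correct,
    n00 = both wrong, n10/n01 = exactly one correct. Q near +1 means
    the pair makes similar errors; Q near 0 means independent errors;
    negative Q means complementary errors (good for an ensemble)."""
    n11 = n00 = n10 = n01 = 0
    for a, b, y in zip(pred_a, pred_b, truth):
        ca, cb = (a == y), (b == y)
        if ca and cb:
            n11 += 1
        elif ca:
            n10 += 1
        elif cb:
            n01 += 1
        else:
            n00 += 1
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

truth  = [1, 1, 0, 0, 1, 0, 1, 0]
pred_a = [1, 1, 0, 1, 1, 0, 0, 0]   # wrong on positions 3 and 6
pred_b = [1, 0, 0, 0, 1, 1, 1, 0]   # wrong on positions 1 and 5
print(q_statistic(pred_a, pred_b, truth))   # -1.0 (fully complementary errors)
```

Note the denominator is zero when the pair never disagrees and never fails together, so in practice degenerate pairs must be filtered out first.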
Research on Security Assessment Algorithm of Navigation control system based on Big Data
by Jianxin Ge, Jiaomin Liu
Abstract: There is a trend towards integration and modularization in navigation control systems, which are characterized by a high degree of resource sharing, fast information transmission and the integration of software and hardware; these characteristics impose high safety requirements on navigation control systems. In view of the low accuracy of traditional algorithms, a new big-data-based security assessment algorithm for navigation control systems is proposed. The structure of the navigation control system is introduced, and its realization is analysed. In a big data environment, the Common Criteria (CC) are used to determine the safety assessment objectives of the navigation control system. According to the different safety weights among the areas in the criterion layer, pilots determine the weights of the assessment objectives by the analytic hierarchy process, based on their evaluation of the navigation control system and the corresponding safety rating indices. The GRAP algorithm is used to obtain the security level of the navigation control system, thereby realizing its safety assessment. Experimental results show that, owing to its comprehensive ability, the proposed algorithm effectively improves the accuracy of assessment and reduces the misclassification rate.
Keywords: Big data; Navigation control system; Safety; Assessment; Aircraft.
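The analytic hierarchy process step can be sketched with the common geometric-mean approximation for deriving priority weights from a pairwise-comparison matrix (the numbers below are illustrative, not the paper's data):

```python
def ahp_weights(pairwise):
    """Approximate AHP priority weights from a pairwise-comparison
    matrix: take the geometric mean of each row, then normalize so
    the weights sum to 1."""
    n = len(pairwise)
    gm = []
    for row in pairwise:
        prod = 1.0
        for v in row:
            prod *= v
        gm.append(prod ** (1.0 / n))
    total = sum(gm)
    return [g / total for g in gm]

# Toy comparison of three assessment areas: the first is judged
# 3x as important as the second and 5x as important as the third.
M = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]
w = ahp_weights(M)
print([round(x, 3) for x in w])   # largest weight for the first area; sums to 1
```

A full AHP treatment would also check the consistency ratio of the matrix; this sketch covers only the weight derivation.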
Decision of Mechanical Allocation in Subgrade Earthwork Construction under Uncertainty
by Bo Wang, Ying Wang, Lijie Cui
Abstract: There are two common phenomena in subgrade earthwork construction: one is the queuing problem caused by mechanical resource conflicts, and the other is the lag in the construction schedule caused by uncertainties. Both phenomena have a huge impact on mechanical allocation. Based on Petri nets, queuing theory and uncertainty theory, we first establish a model of the whole construction process under uncertain conditions. Second, we solve the mechanical configuration problem according to a designed construction schedule. Third, we add uncertain factors into the mechanical allocation. Finally, through a numerical application, we verify and compare the above decisions.
Keywords: subgrade earthwork construction; mechanical allocation; uncertain conditions.
Study of Big Data Mining Based on Cloud Computing
by Jiangyi Du, Fu-ling Bian
Abstract: In the era of big data, how to discover knowledge from various types of massive data is an important research direction of big data processing technology, and this is where the value of big data mining lies. Based on a comparison of conventional data mining and big data mining, this paper discusses typical data mining algorithms, especially their parallel implementations. It also analyses the architecture of a big data mining system and the framework of a cloud-computing-based big data mining platform, which provides a reference for users' understanding and application of big data mining.
Keywords: Big Data; Data Mining; Cloud Computing.
Part-Based Pyramid Loss for Person Re-Identification
by Yuanyuan Wang, Zhijian Wang, Mingxin Jiang
Abstract: Person re-identification (ReID) is a challenging problem in computer vision that has also attracted the attention of industry. Person ReID focuses on identifying a person across multiple cameras. A key under-addressed problem is learning a good metric for measuring the similarity among images. Recently, deep networks with a metric learning loss, such as the triplet loss and its variants, have become a common framework for person ReID. However, previous methods mainly use distance to measure similarity, and distance measures are sensitive to scale changes. In this paper, we propose a part-based pyramid loss to learn a better similarity metric for person ReID, which takes batches of quadruplet samples as input. Specifically, we simultaneously use the distance and angle relationships among samples to learn the local body-part features of person images. Our approach uses the pyramid relationship in triangles as a measure of similarity, minimizing the angle at the negative point of the triangle. The pyramid loss learns a better similarity metric and achieves higher performance on person ReID benchmark datasets. The experimental results show that our method yields accuracy competitive with state-of-the-art methods.
Keywords: Person Re-identification; Metric Learning; Pyramid Loss; Part-based.
Feature Extraction Algorithm for Fast Moving Pedestrians with Frame Drop Constraint based on Deep Learning
by Yaoming Hu
Abstract: When existing methods extract information about fast-moving pedestrians, frames may be dropped, resulting in low extraction precision. A feature extraction algorithm with a frame drop constraint for fast-moving pedestrians, based on deep learning, is therefore proposed. Block matching and denoising are performed on the pedestrian image. The contour feature extraction method is used to reconstruct adjacent frames, and the reconstructed image frame vectors are fused sub-block by sub-block. The deep learning algorithm is used to extract grey pixel features from the frame-dropped part of the image, improving feature extraction for pedestrians under frame drop constraints. The simulation results show that the standard deviation of the extraction results is 8.235 with dropped frames and 4.353 without, which proves that the algorithm has a low frame drop rate and high extraction and recognition ability.
Keywords: pedestrian; frame drop; feature extraction; tracking and identification; deep learning.
Building Energy Consumption Forecasting Algorithm Based on Piecewise Linear Fusion and Exponential Spectrum Analysis
by Chenqiang Zhan
Abstract: In order to solve the problem of large errors in traditional statistical prediction methods, a big data prediction method based on piecewise linear fusion and exponential spectrum analysis is proposed. The method establishes a target model of building energy consumption prediction and carries out nonlinear exponential sequence analysis. For the game analysis of building energy consumption, the piecewise linear fusion method is used to decompose the features of the building energy consumption map, followed by statistical analysis. According to the evolution of the feature decomposition and learning trends, accurate analysis and prediction of building energy consumption big data is realized. The simulation results show that the method reduces energy consumption, is conducive to energy-saving, emission-reducing and green building, and provides a new idea and scientific support for the development of building energy conservation and environmental protection.
Keywords: big data environment; building energy consumption; forecasting algorithm; map feature analysis.
CNN-based Text Multi-Classifier using Filters Initialized by N-gram Vector
by Yan Xiang, Ying Xu, Zhengtao Yu, Hongbin Wang, Yantuan Xian
Abstract: Text classification based on Convolutional Neural Networks (CNN) has attracted increasing attention recently. This paper presents an improved CNN-based text multi-classifier. First, word vectors are trained on the corpus to be classified. Then, the most important n-grams for a particular category are selected and clustered into groups. Finally, the centroid vectors of the groups are used to initialize the centre weights of the filters. These initialization weights enable the CNN to extract n-gram features more effectively and ultimately improve text classification results. Multi-classification experiments with multiple advanced models were performed on different data sets. The experiments show that the proposed model is more accurate and stable than the baseline models.
Keywords: Convolutional Neural Networks; Text Classification; n-gram; Word Embedding; Clustering.
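As an illustration of the n-gram selection stage, frequency-based for simplicity (the paper selects the n-grams most important to a category before clustering their vectors and seeding the filter weights with the centroids):

```python
from collections import Counter

def top_ngrams(docs, n=2, k=3):
    """Collect the k most frequent word n-grams from a list of
    tokenized documents -- candidate n-grams whose embedding vectors
    could seed the CNN filter centre weights."""
    counts = Counter()
    for doc in docs:
        for i in range(len(doc) - n + 1):
            counts[tuple(doc[i:i + n])] += 1
    return [g for g, _ in counts.most_common(k)]

docs = [["the", "stock", "market", "rose"],
        ["the", "stock", "price", "fell"],
        ["stock", "market", "news"]]
print(top_ngrams(docs, n=2, k=2))
```

A production version would score n-grams per category (e.g. by chi-square or mutual information) rather than by raw corpus frequency.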
Source Code Based Context-Sensitive Dynamic Slicing of Web Applications
by Jagannath Singh, D.P. Mohapatra
Abstract: Web applications are broadly utilized for spreading business around the globe. To meet the necessities of clients, web applications must have better quality and robustness than other applications. Web slicing improves the understanding of important information, which transitively improves the quality of the web application. The system dependence graph is the most popular intermediate representation for explicitly representing all dependencies that have to be considered in slicing. The system dependence graph has been extended to the Web Dependence Graph (WDG), and a partial tool has been developed for automatic generation of the WDG. We propose a Context-sensitive Web Slicing (CSWS) algorithm for computing slices using the WDG. During our literature survey, we noticed that the majority of automatic graph generation tools are mainly based on byte-code, whereas our tool performs dependency analysis on the source code of the given program. Using our WDG tool, we compared the performance of our proposed CSWS algorithm with another closely related slicing algorithm.
Keywords: Program Slicing; JSP Application; Source Code Analysis; Context-sensitive; Dynamic Slicing.
An Item Recommendation Model with Content Semantic
by Yunpeng Jiang, Liejun Wang, Ji-Wei Qin
Abstract: Current recommender service providers offer interesting items for users based on user behaviour (e.g., user ratings and trust values) and manually tagged item features, ignoring the content semantics of the items. As an accurate reflection of item content, item semantics should be taken into account to avoid the subjectivity of manually tagged item features. We therefore present a recommender model that leverages content semantics and user ratings. In this model, the item similarity is first calculated from content semantics by the Word2vec method: the item content, as words, is mapped into a vector space, the Euclidean distance between vectors describes the similarity between items, and the distances are sorted in ascending order to form one item list recommended by content semantics. Next, user ratings are used to model user preferences and build another item list recommended by a traditional recommendation method, such as SVD. Then, the two lists are merged into the final item list recommended to the user. Comparing this algorithm with traditional collaborative filtering on rating matrices of different sparsity (Movielens, Filmtrust and Online_Retail), our experiments show that the presented algorithm greatly improves accuracy: compared with the traditional algorithms, the accuracy of the model increases by an average of 25.32% to 31.41%, and the presented user preference model has good scalability.
Keywords: recommender model; semantic feature; similarities; Word2vec; data sparsity.
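The content-semantic ranking step reduces to a nearest-neighbour search in the embedding space. A sketch with hypothetical 2-d item vectors standing in for Word2vec embeddings of item descriptions:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def recommend_by_content(query_id, embeddings, top_k=2):
    """Rank the other items by ascending Euclidean distance from the
    query item's content vector and return the top_k nearest ids."""
    q = embeddings[query_id]
    others = [(euclidean(q, vec), item)
              for item, vec in embeddings.items() if item != query_id]
    return [item for _, item in sorted(others)[:top_k]]

# Hypothetical 2-d embeddings for four items.
emb = {"film_a": [0.0, 0.0], "film_b": [0.1, 0.1],
       "film_c": [0.9, 0.8], "film_d": [1.0, 1.0]}
print(recommend_by_content("film_a", emb))   # ['film_b', 'film_c']
```

In the full model this content-based list would then be merged with the rating-based (e.g. SVD) list to produce the final recommendation.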
Stargan Based Camera Style Transfer for Person Retrieval
by Yuanyuan Wang, Zhijian Wang, Mingxin Jiang
Abstract: Person retrieval, also known as person re-identification (ReID), aims to match persons across cameras. Although person ReID methods have performed well on small datasets, scenarios with large numbers of identities or more cameras have not been fully investigated. As an image retrieval task across the multiple cameras of intelligent video security systems, person ReID is affected by image style changes caused by different camera illumination and view angles. The number of cameras in the latest datasets is increasing, so more camera transfer models need to be trained, while traditional generative adversarial network (GAN) methods can only handle transfer between two domains. To address these problems, we use star generative adversarial networks (StarGAN) to transfer images from one camera to another in the latest large benchmark datasets, training multiple transfer models simultaneously and minimizing the bias among different cameras. The label smoothing regularization (LSR) algorithm is utilized to mitigate the effects of noise in the model. We learn part-based descriptors from pedestrian samples to generate robust feature representations. Our work is competitive with the state-of-the-art.
Keywords: StarGAN; Person retrieval; LSR.
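Label smoothing regularization, in its standard formulation (we assume the common definition; the paper applies it to down-weight noise from GAN-generated samples), replaces a one-hot target with a slightly softened distribution:

```python
def smooth_labels(one_hot, epsilon=0.1):
    """Label smoothing regularization: scale the one-hot target by
    (1 - eps) and spread eps uniformly over all K classes, so the
    model is penalized for becoming over-confident on noisy samples."""
    k = len(one_hot)
    return [(1 - epsilon) * y + epsilon / k for y in one_hot]

target = smooth_labels([0, 0, 1, 0], epsilon=0.1)
print(target)   # ~ [0.025, 0.025, 0.925, 0.025]; still sums to 1
```

Training against the smoothed target with cross-entropy then acts as the regularizer; only the target construction is shown here.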
Fast Mining Algorithm for Multi-level Association Rule Data under Temporal Constraints
by Yicheng Mu
Abstract: Redundant interference occurs between frames of multi-level association rule data under temporal constraints, which leads to poor clustering and anti-interference performance in data mining. To improve multi-level association rule data mining, this paper proposes a fast mining algorithm for multi-level association rule data based on temporal constraints. It constructs a fitting state model of the multi-level association data distribution, and uses the reorganization method of multi-level association rules to rearrange the data structure and extract the average mutual information feature; it constructs detection statistics to perform multi-level linear programming on the association rule data, uses autocorrelation detection for de-interference processing, and applies fuzzy directional clustering to the multi-level association rule data, thereby realizing fast mining of multi-level association rule data under temporal constraints. The simulation results show that, compared with traditional methods, the proposed method reduces the execution time of multi-level association rule data mining by 12.77% and improves mining accuracy by 23.34%; the high mining accuracy and strong anti-interference ability improve data mining efficiency.
Keywords: temporal constraints; association rules; data mining; feature extraction.
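The frequent-pattern side of such mining can be sketched with a single Apriori-style counting pass (support only; the paper's temporal constraints and fuzzy clustering are beyond this sketch):

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """One level-wise pass: count every 1- and 2-item combination and
    keep those whose support (fraction of transactions containing the
    itemset) reaches min_support."""
    n = len(transactions)
    counts = {}
    for t in transactions:
        items = sorted(set(t))
        for size in (1, 2):
            for combo in combinations(items, size):
                counts[combo] = counts.get(combo, 0) + 1
    return {s: c / n for s, c in counts.items() if c / n >= min_support}

tx = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
freq = frequent_itemsets(tx, min_support=0.6)
print(freq[("a", "b")])   # 0.6 -- 3 of 5 transactions contain {a, b}
```

Association rules are then read off the frequent itemsets via confidence, e.g. conf(a => b) = support({a, b}) / support({a}).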
Fuzzy judgment of edge features under dynamic constraints in pedestrian tracking
by Yaomin Hu
Abstract: Pedestrian tracking and recognition is influenced by the pedestrian environment and the edge factors of dynamic features, which easily lead to tracking errors; to improve pedestrian tracking and recognition, fuzzy judgement of edge features is therefore required. A fuzzy judgement method for edge features under dynamic constraints in pedestrian tracking, based on local motion planning and edge contour segmentation, is proposed. In this method, a geometric mesh area model for pedestrian tracking and recognition is constructed, and fuzzy dynamic feature segmentation is adopted to reconstruct the dynamic edge feature points in pedestrian tracking and extract the greyscale pixel set under dynamic constraints; the edge features are fused according to the distribution intensity of greyscale pixels to realise pedestrian tracking image fusion and information enhancement; the three-dimensional dynamic constraint method is adopted for local motion planning of pedestrian tracking, and fuzzy judgement of the edge features is then carried out based on the edge contour segmentation results. The simulation results show that, in pedestrian tracking and recognition, this method has a strong fuzzy judgement ability for edge features and provides results with errors below 10 mm and relatively stable fluctuation, so it offers relatively high recognition accuracy and good robustness.
Keywords: pedestrian tracking; dynamic constraint; edge feature; recognition; image fusion.
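As a loose illustration of fuzzy judgment applied to edge features, the sketch below computes a Sobel gradient magnitude at a pixel and maps it to a fuzzy edge membership with a trapezoidal function. The thresholds `low` and `high`, and this whole two-step pipeline, are assumptions for illustration, not the authors' method.

```python
import math

def sobel_gradient(img, x, y):
    """Sobel gradient magnitude at (x, y) of a 2-D greyscale image (list of rows)."""
    gx = (img[y - 1][x + 1] + 2 * img[y][x + 1] + img[y + 1][x + 1]
          - img[y - 1][x - 1] - 2 * img[y][x - 1] - img[y + 1][x - 1])
    gy = (img[y + 1][x - 1] + 2 * img[y + 1][x] + img[y + 1][x + 1]
          - img[y - 1][x - 1] - 2 * img[y - 1][x] - img[y - 1][x + 1])
    return math.hypot(gx, gy)

def edge_membership(mag, low=50.0, high=200.0):
    """Trapezoidal fuzzy membership: 0 below `low`, 1 above `high`, linear between."""
    if mag <= low:
        return 0.0
    if mag >= high:
        return 1.0
    return (mag - low) / (high - low)
```

A graded membership value, rather than a hard edge/non-edge decision, is what lets downstream fusion weight ambiguous edge pixels instead of discarding them.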
Design of Cloud Computing-based Foreign Language Teaching Management System Based on Parallel Computing
by Kanmanli Maimaiti
Abstract: In view of the long response time of traditional foreign language teaching management systems and their inability to guide students towards greater interest in learning, a foreign language teaching management system based on parallel computing is proposed. The cloud architecture of the foreign language teaching system is first specified for the cloud computing environment, on top of which the parallel computing method is adopted to design the system hardware, which is scalable and flexible. The parallel algorithm is designed, and the communication between the system's modules is implemented in C#. The experimental results show that, whereas the response time for each of the four score queries for the student Zhang is 1 s with the reference method, it is only 0.6 s with the proposed system. The system shortens query response time and speeds up the development of web-based foreign language teaching, effectively promoting both web-based foreign language teaching and students' interest in foreign language learning.
Keywords: parallel computing; cloud environment; foreign language teaching; management system.
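The system described is implemented in C#; as a language-neutral illustration of the parallel idea behind the reported speed-up, the sketch below issues the four score queries concurrently through a thread pool instead of sequentially. The student name, subject list, and in-memory score table are hypothetical stand-ins for the system's database.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the teaching system's score database.
SCORES = {"Zhang": {"listening": 85, "reading": 90, "writing": 78, "speaking": 88}}

def query_score(student, subject):
    """Stand-in for one database round trip."""
    return SCORES[student][subject]

def query_all(student, subjects):
    """Issue all score queries concurrently and collect the results."""
    with ThreadPoolExecutor() as pool:
        futures = {s: pool.submit(query_score, student, s) for s in subjects}
        return {s: f.result() for s, f in futures.items()}
```

When each query spends most of its time waiting on I/O, running them in parallel bounds the total latency by the slowest single query rather than the sum of all four.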
Study on the Subway Transfer Recognition during Rush Hour Based on Big Data
by Shushen YAO, Xiaoxiong WENG
Abstract: With the development of subway networks, the coexistence of multiple paths has become very common in big cities. It follows that the ticket clearing problem, which relies on accurate identification of transfer paths, is of great concern to co-investors. Departing from the Logit models commonly used for the subway transfer recognition problem, we adopt the Adaptive Gauss Cloud Transformation (A-GCT) model, which transforms the distribution of passengers' trip times into multiple concepts of different granularity and evaluates the maturity of each concept by a parameter named the Confusion Degree (CD). The case study in this paper shows that the A-GCT model achieves higher accuracy on uncertain problems such as subway transfer recognition.
Keywords: Gaussian Cloud Transformation (GCT); subway transfer recognition; big data.
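A full Adaptive Gauss Cloud Transformation is beyond the scope of an abstract, but the underlying idea of representing each transfer path as a Gaussian "concept" over trip times and assigning a trip to its most likely concept can be sketched as follows. The path labels and trip-time data are invented for illustration; the actual A-GCT additionally adapts the number of concepts and scores their maturity with the Confusion Degree.

```python
import math
import statistics

def fit_concepts(groups):
    """Fit a Gaussian (mean, stdev) to each labelled group of trip times."""
    return {label: (statistics.mean(times), statistics.stdev(times))
            for label, times in groups.items()}

def classify(trip_time, concepts):
    """Assign a trip to the path whose Gaussian gives it the highest density."""
    def density(t, mu, sigma):
        return (math.exp(-((t - mu) ** 2) / (2 * sigma ** 2))
                / (sigma * math.sqrt(2 * math.pi)))
    return max(concepts, key=lambda label: density(trip_time, *concepts[label]))
```

Because each path's trip-time distribution overlaps its neighbours', a probabilistic assignment of this kind is what makes the clearing shares between co-investors defensible.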
Intelligent Monitoring System for Thermal Energy Consumption of Buildings under the IoT Technology
by Lu Wang, Difei Jiang
Abstract: In view of the poor accuracy of single-point thermal energy consumption readings and the weak real-time monitoring of energy consumption in buildings, an intelligent monitoring system for the thermal energy consumption of buildings based on Internet of Things (IoT) technology is designed. The overall structure of the monitoring system is constructed, and a DSP integrated signal processor is used for data acquisition and real-time processing of building thermal energy information. A wireless intelligent gateway for monitoring building thermal energy consumption is designed using IoT technology, and a wireless sensor network model is constructed. The VME bus serves as the information transmission channel to realize intelligent monitoring of thermal energy consumption. The test results show that the real-time monitoring accuracy of the system exceeds that of the traditional method, giving it good application prospects.
Keywords: Internet of Things (IoT) technology; thermal energy of buildings; energy consumption monitoring; system design.
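The abstract does not give the monitoring logic itself; as a minimal sketch of real-time consumption monitoring at the gateway, the class below keeps a rolling mean of meter readings and raises an alarm when the mean exceeds a limit. The window size, limit, and class interface are illustrative assumptions, not the paper's design.

```python
from collections import deque

class EnergyMonitor:
    """Rolling-mean filter with an over-consumption alarm (illustrative)."""

    def __init__(self, window=5, limit=100.0):
        self.readings = deque(maxlen=window)  # only the last `window` samples
        self.limit = limit

    def push(self, kwh):
        """Add a reading; return (smoothed value, alarm flag)."""
        self.readings.append(kwh)
        avg = sum(self.readings) / len(self.readings)
        return avg, avg > self.limit
```

Smoothing over a window before comparing against the limit is one simple way to address the single-reading accuracy problem the abstract mentions.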
An Algorithm of Decomposition and Combinatorial Optimization Based on 3-Otsu
by Liejun Wang, Junhui Wu, Ji-Wei Qin
Abstract: Recently, 3-Otsu (the three-dimensional maximum between-class variance algorithm) has drawn great attention in image segmentation. However, the time consumption and amount of calculation of 3-Otsu are large, so this paper provides a decomposition-and-combination algorithm based on 3-Otsu. Firstly, the three-dimensional histogram of 3-Otsu is resolved into three two-dimensional histograms by projection onto the three coordinate planes of its own space. Secondly, each of the resulting two-dimensional histograms is segmented using 2-Otsu, yielding three segmentation results. Finally, the three segmentation results are combined linearly, and the combined result is output as the segmentation result. Experiments were conducted under ideal noise-free conditions and under Gaussian, salt, pepper, and mixed salt-and-pepper noise, respectively. The results show that the proposed algorithm consumes nearly 30 times less time than 3-Otsu; although slightly slower than 2-Otsu, its time consumption remains small. Meanwhile, its anti-noise performance, especially for mixed noise, is better than that of the other two algorithms.
Keywords: three-dimensional Otsu; decomposition and reduction; linear combination.
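The building block of both 2-Otsu and the decomposed 3-Otsu is the classic one-dimensional Otsu criterion, which picks the threshold maximising between-class variance over a grey-level histogram; a minimal version is sketched below (the 2-D and 3-D variants extend the same criterion to joint histograms).

```python
def otsu_threshold(hist):
    """Classic 1-D Otsu: return the grey level t that maximises the
    between-class variance when pixels <= t form the background class."""
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, 0.0
    w0, sum0 = 0, 0.0
    for t, h in enumerate(hist):
        w0 += h          # weight of the background class
        sum0 += t * h    # first moment of the background class
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (total_sum - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

For an L-level histogram this runs in O(L) per threshold sweep, which is why the exhaustive 3-D search the paper decomposes is so much more expensive than its 1-D and 2-D counterparts.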
Video Encryption Based on Chaotic Array System: Working with Image Directly
by Hongyan Zang, Jing Yang, Guodong Li
Abstract: A new three-dimensional discrete chaotic system is proposed in this paper according to the Marotto theorem. On this basis, a coupled array chaotic system is given as the drive system. Moreover, with the help of the bidirectional generalized synchronization theorem, the response system is constructed and proved to be chaotic. Using the chaotic matrices generated by these systems, a dedicated video encryption scheme is proposed. While the experiments show that the encryption scheme occupies a large key space and yields an approximately uniform distribution of ciphertext entropy, the main contribution of the newly constructed system is its higher speed compared with a vector system: once encryption starts, the chaotic matrix generated by the array systems can be operated on directly with an image of the same size as the matrix.
Keywords: generalized synchronization; video encryption; coupled array chaotic system; computing simulation.
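The paper's coupled three-dimensional array system is not reproduced here; as a simplified stand-in, the sketch below derives a chaotic keystream from the one-dimensional logistic map and XORs it with the pixel values, so the same call both encrypts and decrypts. The map, its parameter, and the byte quantisation are illustrative assumptions, not the authors' system.

```python
def logistic_stream(seed, n, r=3.99):
    """Generate n keystream bytes from the logistic map x -> r*x*(1-x),
    an illustrative 1-D stand-in for the paper's coupled array system."""
    x, out = seed, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_image(pixels, seed):
    """Encrypt or decrypt a flat pixel list by XOR with the chaotic stream.

    XOR is its own inverse, so applying the function twice with the same
    seed recovers the original pixels.
    """
    stream = logistic_stream(seed, len(pixels))
    return [p ^ k for p, k in zip(pixels, stream)]
```

The seed plays the role of the key: chaotic sensitivity to initial conditions means a slightly different seed produces an entirely different keystream.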
User Information Intrusion Prediction Method Based on Empirical Mode Decomposition and Spectrum Feature Detection
by Zheng Ma, Yan Ma, Xiaohong Huang, Manjun Zhang, Bo Su
Abstract: In a distributed intelligent computing environment, user information is vulnerable to plaintext intrusion, resulting in information leakage. To ensure the security of user information, this paper proposes a user information intrusion prediction method for distributed intelligent computing based on empirical mode decomposition and spectrum feature detection. Firstly, a model of the user information and intrusion signals in distributed intelligent computing is established; an intrusion detection model is then built using signal processing methods; finally, time-frequency analysis and feature decomposition of the intrusion information are carried out with the empirical mode decomposition method, and accurate prediction of user intrusion information is achieved from the joint probability density distribution of the spectrum features, completing the algorithm design. The simulation results show that at a signal-to-noise ratio of 12.4 dB the detection probability of the proposed method reaches 1 while the false alarm probability is 0, indicating that the method provides a high intrusion detection probability and a low false alarm probability even at relatively low signal-to-noise ratios. The proposed method therefore has good intrusion interception and prediction ability.
Keywords: distributed intelligent computing; user information; intrusion prediction; feature extraction; empirical mode decomposition.
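Empirical mode decomposition itself is too long to sketch here, but the downstream spectrum-feature test can be illustrated: compute the DFT magnitudes of a (decomposed) signal and flag an intrusion when the dominant spectral peak stands well above the average bin. The peak-to-mean statistic and its threshold are illustrative assumptions, not the paper's detector.

```python
import cmath
import math

def dft_magnitudes(signal):
    """Magnitudes of the discrete Fourier transform (naive O(n^2) version)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

def detect_intrusion(signal, threshold):
    """Flag an intrusion when the strongest non-DC spectral peak
    dominates the mean non-DC bin by more than `threshold`."""
    mags = dft_magnitudes(signal)
    peak = max(mags[1:])
    mean = sum(mags[1:]) / (len(mags) - 1)
    return peak / mean > threshold
```

A narrowband intrusion signature concentrates its energy in a few bins and so produces a large peak-to-mean ratio, while broadband noise spreads energy across the spectrum and stays below the threshold.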