International Journal of Computational Intelligence Studies (29 papers in press)
Air pollution prediction through Internet of Things technology and Big Data Analytics
by Safae Sossi Alaoui, Brahim Aksasse, Yousef Farhaoui
Abstract: Air pollution is one of the biggest and most serious challenges facing our planet today, and developing models to predict it is therefore crucial. Our work aims to build an accurate model to predict air quality in the United States using a dataset collected from connected Internet of Things (IoT) devices, namely wireless sensor networks (WSNs). The huge amount of data captured by these sensors (approximately 1.4 million observations) yields highly complex data that requires a new form of advanced analytics: Big Data Analytics. In this paper, we examine the possibility of fusing the two new concepts, Big Data and the Internet of Things, in the context of predicting air pollution, which occurs when harmful substances such as NO2, SO2, CO and O3 are introduced into Earth's atmosphere.
Keywords: Internet of Things (IoT); Wireless sensor networks (WSN); Air pollution; Air Quality Index (AQI); Big Data Analytics; Apache Spark.
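The AQI named in the keywords is conventionally computed by linear interpolation between breakpoint concentrations. A minimal sketch in Python, using illustrative breakpoints rather than the official EPA tables:

```python
# Sketch of the standard AQI linear-interpolation formula. The breakpoint
# table below is illustrative, NOT the official US EPA table.
ILLUSTRATIVE_BREAKPOINTS = [
    # (conc_lo, conc_hi, aqi_lo, aqi_hi, category)
    (0.0,   50.0,    0,  50, "Good"),
    (50.1,  100.0,  51, 100, "Moderate"),
    (100.1, 150.0, 101, 150, "Unhealthy for Sensitive Groups"),
]

def aqi_from_concentration(c, table=ILLUSTRATIVE_BREAKPOINTS):
    """Linearly interpolate a pollutant concentration into an AQI value."""
    for c_lo, c_hi, i_lo, i_hi, category in table:
        if c_lo <= c <= c_hi:
            aqi = (i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo
            return round(aqi), category
    raise ValueError("concentration outside table range")
```

A predicted pollutant concentration from the model can then be mapped to an AQI category for reporting.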
Handling the Crowd Avoidance Problem in Job Recommendation Systems Integrating FoDRA
by Nikolaos Almalis
Abstract: In this article, we present the basic principles and approaches of Job Recommender Systems (JRSs). Furthermore, we describe the four different relation types of the job seeking and recruiting problem, derived directly from the formal definition of JRSs. We use our already published Four Dimensions Recommendation Algorithm (FoDRA) to calculate the suitability of a person for a job, and then model a job seeking and recruiting problem with many candidates and many jobs (the N-N case). Finally, we execute the algorithm and present the results, proposing a solution (the minimum acceptable suitability level) for the crowd avoidance problem that occurred. Our study produces satisfying results and shows that this approach can be considered an important asset in the domain of job seeking and recruiting.
Keywords: Recommendation system; Job seeking and recruiting; Job recommender; Matching people and jobs; Constraint-based; Information filtering.
Creating classification rules using Grammatical Evolution
by Ioannis Tsoulos
Abstract: A genetic programming based method is introduced for data classification. The fundamental element of the method is the well-known technique of Grammatical Evolution. The method constructs classification programs in a C-like programming language in order to classify the input data, producing simple if-else rules. The paper introduces the method as well as experiments conducted on a series of datasets against other well-known classification methods.
Keywords: Genetic algorithm; Data classification; Grammatical evolution; Stochastic methods.
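The Grammatical Evolution mapping at the heart of such a method can be sketched in a few lines of Python; the toy grammar, variable names and codon scheme below are illustrative, not the paper's actual grammar:

```python
# Each nonterminal expands by choosing a production with codon % n_choices,
# decoding an integer genome into an if-else rule condition.
GRAMMAR = {
    "<cond>":  [["<var>", "<op>", "<const>"],
                ["<cond>", "and", "<cond>"]],
    "<var>":   [["x0"], ["x1"]],
    "<op>":    [["<"], [">"]],
    "<const>": [["0.5"], ["1.0"]],
}

def decode(genome, start="<cond>"):
    """Map an integer genome to a rule condition (leftmost derivation)."""
    seq, i = [start], 0
    while any(tok in GRAMMAR for tok in seq):
        if i >= len(genome):
            raise ValueError("genome exhausted before derivation finished")
        j = next(k for k, tok in enumerate(seq) if tok in GRAMMAR)
        choices = GRAMMAR[seq[j]]
        seq[j:j + 1] = choices[genome[i] % len(choices)]
        i += 1
    return " ".join(seq)
```

The decoded condition can then drive a simple if-else classifier, e.g. by evaluating it against a sample's feature values.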
Big data: A distributed storage and processing for online learning systems
by Karim DAHDOUH, Ahmed DAKKAK, Lahcen OUGHDIR
Abstract: The new information and communication technologies have changed the way of teaching and learning. In particular, big data technology has recently been developed to overcome the limitations of traditional systems of storage, processing, and analysis. In fact, big data has been used in several fields including health care, public services, and online services such as social media and online learning. It offers a rich set of new technologies in terms of data integration, distributed storage, parallel processing, and data visualization. Furthermore, big data provides many techniques for solving various educational problems such as course recommendation engines, the prediction of learner behaviour, the exponential growth of learners and pedagogical resources, etc. Today, thanks to the big data ecosystem, it is possible to greatly improve the effectiveness and performance of online learning services. This article presents the big data paradigm, its components, technologies, and characteristics. It proposes an approach for incorporating big data, online learning systems, and cloud computing in order to enhance the efficiency of the distance learning environment. It also provides a methodology to store and process the data produced by online learning platforms using advanced big data technologies and tools. Moreover, it explores the advantages and benefits that big data offers to students, teachers and online learning professionals.
Keywords: computing environments for human learning; big data; cloud computing; e-learning; Online learning; Learner; Learning Management Systems (LMS); NoSQL database; Hadoop; MapReduce; Spark; Cassandra; Hive; Apache Flume; Apache Sqoop.
A Domain-independent Natural Language Interface for Multimodel Databases
by Bais Hanane
Abstract: Databases are gaining prime importance in the world of modern computing. Retrieving information stored in databases requires knowledge of database query languages such as Structured Query Language (SQL). However, learning this language can be difficult for non-expert users. Hence, using natural language is a very easy and convenient method that can provide powerful improvements to the use of data stored in databases. In this paper, we present the architecture of an intelligent natural language interface for a multimodel database. This interface functions independently of database domain, language and model. The use of a machine learning approach helps our system automatically improve its knowledge base through experience.
Keywords: Databases; Natural Language Processing (NLP); Intermediate XML Logical Query (IXLQ); Extended Context Free Grammar (ECFG); intelligent interface.
Diabetes Risk Stratification Method Based On Fuzzy Logic And Bio-Inspired Meta-Heuristics
by Deme Andreea, Chifu Viorica Rozina, Pop Cristina Bianca, Chifu Emil Stefan, Salomie Ioan
Abstract: This paper presents a system for diabetes risk stratification that combines fuzzy logic with two bio-inspired algorithms. The developed system takes as input a set of patients described by numerical and categorical features and generates fuzzy rules to classify them into groups according to their risk of having diabetes. To take into consideration the uncertainty in the input data set, our system combines fuzzy logic techniques with bio-inspired algorithms and hierarchical classification. The system has been evaluated on the Pima Indians dataset from the UCI machine learning repository.
Keywords: CLONALG algorithm; fuzzy logic; ant clustering; patient risk stratification.
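A hedged sketch of the kind of fuzzy-rule evaluation such a system performs; the membership functions, thresholds and the single rule below are illustrative inventions, not the rules the system generates:

```python
# Triangular fuzzy membership plus one fuzzy rule, using min as fuzzy AND.
# All feature names and breakpoints here are illustrative assumptions.
def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set with support [a, c], peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def rule_high_risk(glucose, bmi):
    # IF glucose is high AND bmi is high THEN risk is high
    glucose_high = triangular(glucose, 120.0, 180.0, 240.0)
    bmi_high = triangular(bmi, 25.0, 35.0, 45.0)
    return min(glucose_high, bmi_high)  # fuzzy AND
```

A real rule base would aggregate many such rule activations before assigning a patient to a risk group.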
Stopping rules for a parallel genetic algorithm
by Ioannis Tsoulos, Alexandros Tzallas, Markos Tsipouras, Vasileios Christou, Dimitrios Tsalikakis
Abstract: A novel method for the implementation of parallel genetic algorithms is introduced to locate the global minimum of a multidimensional function inside a rectangular hyperbox. The algorithm relies on a client-server model and incorporates an enhanced stopping rule. A number of experiments were conducted in order to measure the effect on termination of applying the termination rule either on the server machine or on the clients. The method is tested on a series of well-known test functions as well as on neural network training, and the results were compared against another parallel genetic algorithm method. The results from the experiments are reported in terms of test error and number of generations.
Keywords: Genetic algorithm; parallel algorithms; stopping rules; optimization.
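One common family of stopping rules such a study compares is stagnation-based termination; a minimal Python sketch (function name, window and tolerance are assumptions, not the paper's rule):

```python
# Terminate when the best fitness has not improved by more than `tol`
# over the last `window` generations (minimisation).
def should_stop(best_history, window=10, tol=1e-6):
    """best_history: best (minimum) fitness recorded at each generation."""
    if len(best_history) <= window:
        return False
    recent = best_history[-window:]
    improvement = best_history[-window - 1] - min(recent)
    return improvement <= tol
```

In a client-server setting, the rule can be evaluated on the server over the merged history, or locally on each client, which is the trade-off the experiments measure.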
Six Phase Transmission Line Protection against Open Conductor, Phase to Ground and Simultaneous Faults using Fuzzy Inference System
by A. Naresh Kumar, Ch Sanjay, M. Chakravarthy
Abstract: Conventional techniques fail to handle the classification and location of simultaneous phase-to-ground and open conductor faults. In this regard, a novel classification and location strategy based on fuzzy logic is proposed in this paper. The main contribution of the fuzzy strategy is to improve six phase transmission line (SPTL) protection schemes under phase-to-ground and open conductor faults. Moreover, the fuzzy strategy identifies and locates the simultaneous occurrence of open conductor and phase-to-ground faults. MATLAB software is used to simulate the SPTL. The measured instantaneous values of current in each phase at one end of the SPTL are processed by a low-pass Butterworth filter, a sampler and the discrete Fourier transform. The obtained fundamental components of the currents are selected as inputs to the proposed classification and location strategy. An advanced soft computing toolbox, i.e., the fuzzy toolbox, is used to implement the fuzzy inference systems. The simulation test results show that the fuzzy strategies not only classify faults correctly, but also predict the location of faults precisely. The effectiveness and feasibility of the fuzzy strategy is validated by the obtained test results, considering the effect of variation in different fault parameters such as fault type, fault resistance, fault location and fault inception angle.
Keywords: Six phase transmission line; fuzzy inference system; phase to ground faults; open conductor faults; simultaneous faults.
Multi Objective Optimization of Thickness and Strain Distribution for Automotive Component in Forming Process
by Ganesh Kakandikar, Vilas Nandedkar
Abstract: The automotive manufacturing industry has emerged as an important facet of economic growth in developing countries like India. Globalization, with the competition it invites, demands the best quality products from manufacturers. Most automotive parts that contribute to safety and aesthetics, i.e. body parts, are manufactured from sheet metal. Metal forming is a complex strain-distribution process that produces a formed part from a flat blank, involving a range of processes from simple bending to deep drawing. Ideally the volume of the blank and the formed component must remain constant, but decreases/increases in sheet metal thickness are observed along with strains in the major and minor directions. This results in various failures such as wrinkling and fracture. The paper presents an innovative methodology to distribute thickness uniformly, preventing thinning/thickening. A multi-objective optimization problem has been framed with two contradictory objectives, thinning and thickening, correlating the process variables. A sealing cover, an automotive component from Vishwadeep Enterprises, Pune, has been selected for the study and numerical experimentation. A multi-objective genetic algorithm has been applied for process optimization. The results obtained are encouraging: thinning/thickening is avoided and thickness is distributed uniformly in all sections of the sealing cover.
Keywords: Optimization; Artificial Intelligence; Genetic Algorithm; Metal Forming.
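At the core of any multi-objective genetic algorithm is the Pareto-dominance test over the competing objectives (here, thinning and thickening deviations, both minimised). A minimal sketch:

```python
# Pareto dominance for minimisation: a dominates b if a is no worse in
# every objective and strictly better in at least one.
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

The multi-objective GA keeps and refines such a non-dominated front of process-parameter settings rather than a single best solution.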
Special Issue on: Intelligent Systems for Cyber Security Current Trends, Applications and New Challenges
Intrusion Detection using Data Mining
by Shubha Puthran, Ketan Shah
Abstract: Intrusion detection plays a very important role in securing information servers. Classification and clustering data mining algorithms are very effective for intrusion detection. However, classification (supervised) suffers from false negative detections and clustering (unsupervised) from false positive detections. This paper introduces a unique framework consisting of a pre-processing unit, intrusion detection using quad split (IDTQS), intrusion detection using correlation-based quad split (IDTCA) and intrusion detection using clustering (IDTC). In the proposed framework, IDTQS and IDTCA show accuracy improvements on the University of New South Wales (UNSW) dataset in the range of 4%-34% for the DoS, Probe, R2L, U2R and Normal classes. The IDTC clustering algorithm performs with 97% accuracy. Training and testing time is improved by 14% for IDTCA in comparison with IDTQS.
Keywords: Quad split; Decision Tree; Correlated Attributes; UNSW dataset.
An Integrated Approach for Multimodal Biometric Recognition System using Pearson Type-II (Beta) Distribution
by Naga Jagadesh Bommagani, A.V.S.N Murty
Abstract: Biometric recognition plays an important role in personal identity authentication. Usually, biometric recognition protocols that involve a single source of information are called unimodal systems. Such systems suffer from problems such as noisy sensor data, performance, collectability and non-universality. To achieve accurate recognition, a system with multimodal biometrics is needed. Hence, in this paper a new approach is proposed that combines multiple biometric traits, namely face, fingerprint and palm vein. A region of interest (ROI) is used to extract the valuable information from the images. The 2D discrete cosine transform is used to extract the feature vectors from face, fingerprint and palm vein, with fusion at the feature extraction level. The feature vector is modelled with a Pearson Type-II distribution and the model parameters are estimated using the EM algorithm. The model parameters are initialised using the method of moments and the K-means algorithm. The performance of the proposed algorithm is evaluated by experimentation with the CASIA biometric database. The experiments show that the proposed model performs more effectively than the algorithm based on the Gaussian mixture model.
Keywords: Multimodal biometric recognition; Discrete Cosine Transform; EM algorithm; Pearson mixture model.
IbPaKdE: Identity-Based Pairing free Authenticated Key and Data Exchange Protocol for Wireless Sensor Networks
by Lakshmana Rao Kalabarige
Abstract: The security vulnerabilities in key distribution approaches of WSN
reveals important credentials. The secure distribution of keys without having
permanent storage of important credentials in the permanent memory part
(ROM) of a sensor node is a challenging task. Further, the design of a key
distribution approach with less computational complexity, energy efficient, low
communication overhead, and low memory overhead are some challenging
tasks for a resource constrained sensor nodes. This piece of work addresses
all these challenges by combining Identity-Based Cryptography(IBC) with
Symmetric Key Cryptography(SKC). The proposed Identity-Based Pairing free
Authenticated Key and Data Exchange Protocol(IbPaKdE) avails advantages
of both IBC and SKC to address the above challenges. This approach does
not require prior communication for the establishment of secret keys and it
supports pairing free key distribution. The proposed IbPaKdE uses IBC strategy
for secure exchange of keys and SKC to provide security to the data to be
transmitted. The on demand establishment of keys eliminates the permanent
storage of important credentials in the sensor nodes. Simulation results of the
proposed approach is compared with hashed identity based secure key and data
exchange(HISKDE). The results show that the IbPaKdE incur better results than
HISKDE in terms of energy efficiency, reduces memory, communication and
computational, overheads of a sensor node.
DCT statistics and pixel correlation based blind image steganalysis for identification of covert communication
by Madhavi B. Desai, S.V. Patel, Vipul H. Mistry
Abstract: In the last decade, the interconnection of systems through networks, access to information, different computer technologies and the combination of all these aspects have increased the use of image steganography techniques for illegal acts. The advancement of image steganography techniques exploited to send secret information on social networks creates the need for blind image steganalysis. Blind image steganalysis is steganalysis in which no prior information is available about the data hiding method used to embed the message. Existing image steganalysis methods are either domain specific or require a very high dimensional feature set. Considering the types of image steganography methods, embedding rates, image types and feature dimensionality, there is an utmost need for a low-dimensional blind image steganalysis method. This paper proposes a blind steganalysis method with a 32-D feature set comprising a DCT Statistics and Pixel Correlation (DSPC) algorithm, with the aim of reducing the time complexity of feature extraction as well as the complexity of the classifier. The experimental results show that the proposed feature set gives better results than state-of-the-art high-dimensional image steganalysis methods. The performance of the proposed algorithm is evaluated using experiments with varying embedding message sizes, message types and image formats, using an ensemble classifier. The algorithm is implemented in MATLAB and all the experiments are performed on standard image datasets, i.e. BSDS300 and CorelDraw.
Keywords: Blind Image Steganalysis; Binary Similarity Measures; DCT Transform; Ensemble Classifier; Feature Extraction; Statistical Features; SVM.
Adaptive QoS Constraint-based Service Differentiated Routing In Wireless Sensor Network
by Yogita Patil
Abstract: Achieving the best quality of service (QoS) as per user requirements is one of the important challenges. Time-critical applications in Wireless Sensor Networks (WSNs) demand energy-efficient transmission of data with limited resource availability. To resolve these issues, AQSDR has been proposed. The proposed protocol supports packet differentiation and the selection of sensor nodes based on energy, delay and congestion for path establishment, transferring normal data packets while conserving energy. Multipath routing is chosen for the transmission of emergency packets, satisfying the delay requirements of non-delay-tolerant applications. AQSDR supports adaptive path selection according to application requirements. The proposed technique outperforms the existing protocol in terms of minimised energy consumption, delay, control overhead and packet drop ratio, and higher throughput.
Keywords: Clustering; Congestion Index; Delay; Energy Efficiency; Multipath; Service differentiation; WSN.
Special Issue on: BDCA'18 Data Science and Applications
Efficiency of Bitmap Join Indexes for optimizing Star Join Queries in Relational Data Warehouses
by Mohammed YAHYAOUI, Souad AMJAD, Lamia BENAMEUR, Ismail JELLOULI
Abstract: Data warehouses are dedicated to analysis and decision-making applications. They are often schematized as star relational models or variants for on-line analysis. Typically, the analysis process is conducted via OLAP (On-Line Analytical Processing) queries. These queries are usually complex, characterized by multiple selection operations, joins, groupings and aggregations on large tables, which require a lot of computation time and thus a very high response time. The performance of these queries depends directly on the use of secondary memory; indeed, each disk input-output can require up to ten milliseconds. In order to reduce and minimize the cost of executing these queries, the data warehouse administrator must make a good physical design during the physical design and tuning phase by optimizing access to secondary memory. We focus on bitmap join indexes that share the same resource, that is, the selection attributes extracted from the business intelligence queries, in order to optimize star join queries.
Keywords: Data Warehouse; OLAP; Indexes; Optimization Query; Star join query; Bitmap join indexes.
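The mechanism that makes bitmap join indexes attractive for star joins, conjunctive selections reduced to bitwise ANDs over per-value bitmaps, can be sketched as follows (table contents are illustrative):

```python
# One bitmap per selection-attribute value marks the qualifying fact-table
# rows (bit i = fact row i); a conjunctive star-join selection then reduces
# to a bitwise AND. The tiny fact table and values here are illustrative.
fact_rows = 8  # number of rows in the fact table

bitmaps = {
    ("region", "EU"):    0b10110100,
    ("year", "2018"):    0b11010110,
    ("product", "book"): 0b10010111,
}

def star_join_rows(predicates):
    """Rows satisfying all predicates, via bitwise AND of their bitmaps."""
    acc = (1 << fact_rows) - 1  # start with all rows qualifying
    for pred in predicates:
        acc &= bitmaps[pred]
    return [i for i in range(fact_rows) if acc >> i & 1]
```

Because the AND touches only compact bitmaps, the expensive joins and disk reads are deferred until the qualifying row set is known.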
Information Technology performance management by Artificial Intelligence in Microfinance Institutions: An overview
by Kaicer Mohammed
Abstract: This paper presents an overview of the use of new information technology to improve the management of microfinance institutions (MFIs), which face a gap due to the growth of the microfinance sector and the diversity of products and services they offer to target populations. We show that artificial intelligence can play a role in ensuring reliable management information systems in MFIs.
Keywords: Management of Information Technology; Artificial intelligence; Microfinance Institution; Central risk.
Intelligent intrusion detection system using multilayer perceptron optimized by genetic algorithm
by Mehdi Moukhafi, Khalid El Yassini, Bri Seddik
Abstract: This paper presents a neural network-based intrusion detection method for attacks on a computer network. Neural networks are used to predict unusual activities in the system. In particular, feedforward neural networks with the backpropagation training algorithm were employed in this study. We propose a method of intrusion detection based on a combination of a genetic algorithm (GA) and a multilayer perceptron (MLP) neural network to develop a model for an intrusion detection system. All tests were realized with the KDD99 data set. The performance of the proposed intrusion detection method was evaluated on the full KDD99 data set; 10% of the KDD99 data set was used for training the GA-MLP model. This system achieves a top accuracy of up to 93.05%.
Keywords: Machine Learning Based Intrusion Detection; Parameters optimization; Genetic algorithm; Multilayer Perceptron Neural Network.
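A miniature of the GA-plus-MLP idea, with XOR data standing in for KDD99 and all GA settings (population size, operators, network shape) assumed rather than taken from the paper:

```python
import math
import random

# A genetic algorithm evolves the weights of a tiny 2-2-1 feed-forward
# network. Elitism keeps the best third unchanged, so the best fitness
# can never worsen between generations.
random.seed(0)  # deterministic illustration
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR stand-in
N_W = 9  # 4 hidden weights + 2 hidden biases + 2 output weights + 1 bias

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    h0 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h0 + w[7] * h1 + w[8])

def fitness(w):
    # mean squared error over the dataset (lower is better)
    return sum((forward(w, x) - y) ** 2 for x, y in DATA) / len(DATA)

def evolve(pop_size=30, generations=40, sigma=0.5):
    pop = [[random.uniform(-2, 2) for _ in range(N_W)] for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        pop.sort(key=fitness)
        history.append(fitness(pop[0]))
        elite = pop[: pop_size // 3]           # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_W)     # one-point crossover
            child = a[:cut] + b[cut:]
            k = random.randrange(N_W)          # single-gene Gaussian mutation
            child[k] += random.gauss(0, sigma)
            children.append(child)
        pop = elite + children
    pop.sort(key=fitness)
    return pop[0], history
```

In the paper's setting the genome would instead encode MLP parameters trained against KDD99 features; the evolutionary loop is the same shape.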
QoE in Video Streaming over Ad-hoc Networks: Comparison and Analysis of AODV and OLSR Routing Protocols
by Hind ZIANI, Nourddine ENNEYA
Abstract: Video streaming services are among the most consumed services on the internet; indeed, they are single-handedly accountable for up to 85% of overall internet traffic. And yet, despite the multiple modern network infrastructures and high-end technologies in constant evolution, network operators still assess quality levels through dependent and independent factors. Furthermore, new marketing strategies are constantly witnessing the emergence of novel quality horizons that centre on human perception. And so, in addition to the Quality of Service (QoS), which is built upon network-oriented metrics, we are now faced with stakes bearing on the Quality of Experience (QoE). In a MANET, guaranteeing good quality and performance, be it objective or subjective, is a challenge to be reckoned with. In fact, the extant routing protocols are generally network-oriented and are, as such, chiefly dependent upon objective quality parameters, so they seldom correlate with QoE standards as averred by the users' perception of the received service. This article analyzes and experiments on video transmission through an ad-hoc network based on two emblematic routing protocols (AODV and OLSR), in view of identifying the one most relevant to, and optimal for, the subjective quality (QoE) we focus on.
Keywords: Ad-hoc networks; Video Streaming; Routing protocols; Quality of Experience.
Special Issue on: CMDM 2017 Computational Intelligence and Data Mining
MC4.5 decision tree algorithm: An improved use of continuous attributes
by Anis Cherfi, Kaouther Nouira, Ahmed Ferchichi
Abstract: C4.5 is one of the top ten data mining algorithms and the most widely used decision tree construction technique. Although effective, it suffers from the problem of complexity when dealing with continuous attributes. It also leads to a certain level of information loss. Therefore, minimizing such loss and reducing the time complexity are the main goals of this paper. With the intention of alleviating these problems, this paper presents a novel algorithm, namely MC4.5, which proposes the statistical mean as an alternative to the C4.5 threshold selection process. To demonstrate the effectiveness of the new algorithm, a complete evaluation was launched to prove that MC4.5 complies with the objectives mentioned above. From the theoretical perspective, we develop an analysis of complexity to compare the algorithms. Empirically, we conduct an experimental study using 30 data sets to prove that, in most cases, the proposed algorithm leads to smaller decision trees with better accuracy compared to the C4.5 algorithm.
Keywords: Decision tree; MC4.5; C4.5; Statistical mean; Continuous attributes; Classification; Information gain.
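The paper's central substitution, the attribute mean in place of C4.5's exhaustive threshold scan, can be sketched as follows (a simplified reconstruction, not the authors' code):

```python
import collections
import math

# MC4.5-style split of a continuous attribute: use the statistical mean
# as the single candidate threshold and score it by information gain.
def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in collections.Counter(labels).values())

def mean_split_gain(values, labels):
    """Information gain of splitting a continuous attribute at its mean."""
    threshold = sum(values) / len(values)
    left = [y for x, y in zip(values, labels) if x <= threshold]
    right = [y for x, y in zip(values, labels) if x > threshold]
    gain = (entropy(labels)
            - (len(left) / len(labels)) * entropy(left)
            - (len(right) / len(labels)) * entropy(right))
    return threshold, gain
```

C4.5 would instead sort the values and evaluate the gain at every boundary between adjacent distinct values, which is the complexity MC4.5 avoids.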
Solving flexible job-shop problem with sequence dependent setup time and learning effects using an adaptive genetic algorithm
by Ameni Azzouz, Meriem Ennigrou, Lamjed Ben SAID
Abstract: For most scheduling problems studied in the literature, job processing times are assumed to be known and constant over time. However, this assumption is not appropriate for many realistic situations where employees and machines execute the same task in a repetitive manner: they learn how to perform more efficiently. As a result, the processing time of a given job is shorter if it is scheduled later, rather than earlier, in the sequence. In this paper, we consider the Flexible Job Shop Problem (FJSP) with two kinds of constraints, namely sequence-dependent setup times (SDST) and learning effects. Makespan is specified as the objective function to be minimized. To solve this problem, an Adaptive Genetic Algorithm (AGA) is proposed. Our algorithm uses an adaptive strategy based on: (1) the current specificity of the search space, (2) the preceding results of already used operators and (3) their associated parameter settings. We adopt this strategy in order to maintain the balance between exploration and exploitation. Experimental studies are presented to assess and validate the benefit of incorporating the learning process into the SDST-FJSP over the original problem.
Keywords: scheduling problem; Genetic algorithm; Adaptive strategy; Learning effects.
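The position-based learning-effect model commonly used in this literature makes the abstract's claim concrete: a job in sequence position r takes p_j * r^a time, with a = log2(learning rate) <= 0. The paper may use a different learning curve; this is the textbook form:

```python
import math

# Position-based learning effect: scheduling a job later shrinks its
# actual processing time. The 0.8 default (a "80% learning curve") is an
# illustrative assumption, not a value from the paper.
def actual_processing_time(base_time, position, learning_rate=0.8):
    """position r >= 1; learning_rate 0.8 means doubling r cuts time by 20%."""
    a = math.log2(learning_rate)  # a <= 0 whenever learning_rate <= 1
    return base_time * position ** a
```

Under this model a job's contribution to the makespan depends on where the GA places it, which is exactly why the learning effect changes the scheduling problem.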
Contributions to the Automatic Processing of the User-Generated Tunisian Dialect on the Social Web
by Jihene Younes, Hadhemi Achour, Emna Souissi, Ahmed Ferchichi
Abstract: With the growing use of social media in the Arab world, Arabic dialects are rapidly spreading on the web, leading to growing interest from NLP researchers. These dialects are, however, still under-resourced languages, and the lack of available dialectal resources is a major obstacle to their study and processing. In this paper, we focus on the automatic processing of the user-generated Tunisian dialect (TD) on the social web and propose an approach that helps to automatically generate TD language resources (LRs), useful for any NLP research work dealing with this dialect. This approach exploits the large amounts of textual productions on the social web in order to extract and generate dialectal content. It is based on two main NLP components, namely TD identification and TD transliteration. A machine learning approach using Conditional Random Fields (CRF) is proposed for implementing these two components; it reached an accuracy of 87.45% for TD identification and 90.49% for the automatic generation of dialectal content by transliteration.
Keywords: Tunisian Dialect; language resources; corpora; lexica; identification; transliteration; natural language processing; machine learning.
A Co-evolutionary Decomposition-based Algorithm for the Bi-level knapsack optimization problem
by Abir Chaabani, Lamjed Ben Said
Abstract: Bi-level optimization problems (BOPs) are a class of challenging problems with two levels of optimization tasks. These problems allow the modelling of a large number of real-life situations in which a first decision maker, hereafter the leader, optimizes his objective by taking the follower's response to his decisions explicitly into account. In this way, evaluating a solution at the upper level requires finding an optimal solution to the lower-level problem. This fact makes BOPs difficult to handle and has kept researchers and practitioners alike busy. Recently, a new research field called EBO (Evolutionary Bi-level Optimization) has appeared, thanks to the promising results obtained by the use of EAs (Evolutionary Algorithms) to solve such problems. In this context, two EBO approaches, CODBA and CODBA-II, were recently proposed to solve combinatorial BOPs. These approaches improved the quality of generated bi-level solutions relative to other recent methods in this research area. In fact, a wide range of applications fit the bi-level programming framework, yet real-life implementations are still scarce. For this reason, we propose in this paper a co-evolutionary decomposition-based bi-level algorithm for the bi-level knapsack optimization problem. The computational performance of the proposed algorithm turned out to be quite efficient in both computation time and solution quality compared to other competitive EAs.
Keywords: Bi-level combinatorial optimization; evolutionary methods; bi-level knapsack problem.
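The bi-level structure described, where evaluating a leader decision requires solving the follower's problem to optimality, can be illustrated with a toy brute-force knapsack (the items, profits and the leader's decision variable are invented for illustration):

```python
from itertools import product

# Toy bi-level knapsack: the leader chooses a capacity, the follower packs
# the knapsack to maximise its OWN profit, and the leader's objective is
# then evaluated on the follower's chosen items.
ITEMS = [  # (leader_profit, follower_profit, weight) -- illustrative data
    (4, 1, 3), (3, 4, 2), (5, 2, 4), (1, 5, 1),
]

def follower_best(capacity):
    """Follower's optimal item mask within the given capacity."""
    best, best_fp = (), -1
    for mask in product([0, 1], repeat=len(ITEMS)):
        w = sum(m * it[2] for m, it in zip(mask, ITEMS))
        fp = sum(m * it[1] for m, it in zip(mask, ITEMS))
        if w <= capacity and fp > best_fp:
            best, best_fp = mask, fp
    return best

def leader_value(capacity):
    """Leader's profit induced by the follower's optimal response."""
    mask = follower_best(capacity)
    return sum(m * it[0] for m, it in zip(mask, ITEMS))
```

Even this tiny example shows the cost structure that makes EBO attractive: every upper-level evaluation hides a full lower-level optimization.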
Web service selection based on QoS and user profile
by Ilhem Feddaoui, Faîçal Felhi, Jalel Akaichi
Abstract: Web services come from different sources, are heterogeneous, and exist in large volumes. The user is thus in a difficult position when selecting the best Web services. The Web service selection process aims to discover the desired Web services and select the best ones for the user's query. In particular, various Web services offer the same functionality, so another factor is needed to select among them: the quality of service (QoS). QoS plays an important role in the Web service selection process, as it ranks Web services that have the same functionality. This paper focuses on different concepts of QoS. We present a new approach composed of two services whose role is primarily to select the best Web service with respect to the user's query and profile. In our approach, better knowledge of user behaviour is important because users can participate in the design and construction of the search. Experiments show that our method can accurately recommend the needed Web services in less time.
Keywords: Web service; Query; User profile; QoS.
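A hedged sketch of the QoS-based ranking step for functionally equivalent services; the attribute names and weights below stand in for a real user profile and are assumptions, not the paper's model:

```python
# Score each candidate service as a profile-weighted sum of its QoS
# attributes and return the top scorer. Cost-like attributes (e.g.
# latency) carry negative weights so that lower values score higher.
def select_service(services, weights):
    """services: {name: {attr: value}}; weights: {attr: profile weight}."""
    def score(qos):
        return sum(weights.get(attr, 0.0) * value
                   for attr, value in qos.items())
    return max(services, key=lambda name: score(services[name]))
```

A fuller implementation would normalise each attribute to a common scale before weighting, so no single unit dominates the score.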
An effective Genetic Algorithm for solving the Capacitated Vehicle Routing Problem with Two-dimensional Loading Constraint
by Ines Sbai, Olfa Limam, Saoussen Krichen
Abstract: In this article, we focus on the symmetric capacitated vehicle routing problem where customer demand is composed of two-dimensional weighted items. The objective consists in designing a set of trips, starting and terminating at a central depot, that minimise the total transportation cost with a homogeneous fleet of vehicles based at a depot node. Items in each vehicle trip must satisfy the two-dimensional orthogonal packing constraint. The capacitated vehicle routing problem with two-dimensional loading constraints is an NP-hard problem of high complexity. Given the importance of this problem, many solution approaches have been developed. However, it remains a challenging problem. We therefore propose a new heuristic based on an adaptive genetic algorithm in order to find better solutions. Our algorithm is tested on 150 benchmark instances and compared with state-of-the-art approaches. Results show that our proposed approach is competitive in terms of the quality of the solutions found.
Keywords: Capacitated Vehicle Routing Problem; Loading; Genetic Algorithm; 2L-CVRP.
A Multi-Level Study for Trust Management Models Assessment in VANETs
by Ilhem Souissi, Nadia Ben Azzouna, Lamjed Ben Said
Abstract: Nowadays, trust management is one of the key elements for ensuring a high security level in ad hoc networks. Trust assessment can be perceived at three levels. First, data perception trust needs to be assessed in order to ensure a high quality of raw sensed data. Second, trust relationship assessment is required to detect selfish and malicious entities and to maintain data integrity. Finally, data fusion trust is essential to preserve the performance of the fusion process. In this paper, we point out the need to integrate data perception trust, communication trust and data fusion trust in order to preserve information trustworthiness in VANETs. We further browse the literature to identify recent advancements with regard to each type of trust.
Keywords: Data Perception Trust; Communication Trust; Data Fusion Trust; VANETs.
Special Issue on: Applications of Hybrid Bio Inspired Algorithms
Fuzzy Knowledge Based Fractional Order PID Control Implementation with Nature Inspired Algorithms
by Ambreesh Kumar, Rajneesh Sharma
Abstract: In this paper, we hybridise nature inspired optimisation techniques with fuzzy knowledge based proportional integral derivative (PID) control for application to fractional order systems. Two nature inspired approaches, namely the genetic algorithm (GA) and the ant colony optimisation algorithm, are employed to tune the parameters of the fuzzy knowledge based fractional order PID controller offline. In the next stage, we fine tune the PID controller parameters using a fuzzy knowledge based formulation. In our proposed nature inspired fractional fuzzy PID (NIFFPID) framework, the GA is used to optimise the inputs to the ant colony controller. We illustrate the effectiveness of our methodology through simulation results on four plants: one integer order plant and three fractional order plants of different orders. Simulation results and a comparison of our approach against other approaches, viz., fractional order PID-ANT, fractional order PID-GA, fuzzy fractional PID-ANT and fuzzy fractional PID-GA, show the feasibility and effectiveness of our approach for fractional order systems.
Keywords: Integer order plant; fractional order plants; fuzzy knowledge based control; NIFFPID approach.
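The offline tuning step described in this abstract can be illustrated by a deliberately reduced sketch: a real-coded GA searches PID gains that minimise the integral of squared error on a simulated plant. This is not the authors' NIFFPID design; the plant here is a hypothetical integer order first-order system, the fuzzy layer and fractional dynamics are omitted, and all ranges and GA settings are invented for the sketch:

```python
import random

def simulate(kp, ki, kd, setpoint=1.0, dt=0.05, steps=200):
    """Discrete PID loop on a toy first-order plant dy/dt = -y + u.
    Returns the integral of squared error (ISE) as the tuning cost."""
    y, integral, prev_err, ise = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        y += dt * (-y + u)            # Euler step of the plant
        if not -1e6 < y < 1e6:        # bail out on unstable gain sets
            return 1e9
        ise += err * err * dt
        prev_err = err
    return ise

def ga_tune(pop_size=20, gens=40, seed=1):
    """Toy real-coded GA: tournament selection, blend crossover,
    Gaussian mutation; individuals are [kp, ki, kd] gain vectors."""
    random.seed(seed)
    pop = [[random.uniform(0, 10) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a = min(random.sample(pop, 3), key=lambda g: simulate(*g))
            b = min(random.sample(pop, 3), key=lambda g: simulate(*g))
            child = [(x + y) / 2 + random.gauss(0, 0.3) for x, y in zip(a, b)]
            nxt.append([max(0.0, g) for g in child])
        pop = nxt
    return min(pop, key=lambda g: simulate(*g))
```

In the paper's framework the search space would instead include the fractional orders of the integral and derivative terms, and a fuzzy rule base would adjust the tuned gains online.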
Human Activity Recognition from Histogram of Spatiotemporal Depth Features
by Naresh Kumar
Abstract: Recent advances in sensor based depth imaging have opened a promising avenue for human activity recognition from depth image sequences. Human activities are of great interest in every domain of real life in which humans are the principal actors. Activity recognition is of key importance owing to its applications in several domains, such as airport surveillance systems, patient monitoring and care of elderly people. Variations in spatial and temporal parameters can represent any activity efficiently. Conventional colour vision cannot capture complete scene information, since it yields flat images and occluded points in every portion of the image. This work proposes and evaluates an approach to recognising daily life human activities from spatiotemporal depth information. Several varieties of actions may be performed by a single person or by more than one person at a time. For this purpose, a Kinect sensor is used to collect data on single activities performed by multiple persons at a time. Spatiotemporal depth features are computed for activity recognition, and a support vector machine is used in the classification phase. The database contains nine classes of human actions for RGB-D human activity recognition; it is reconfigured from the Cornell human activity and Berkeley multimodal human action databases. For multiple human action recognition, 91.38% accuracy is achieved on the synthetic dataset. This approach achieves performance that is hard to attain from normal video frames of human activities.
Keywords: Human Action Recognition (HAR); Principal Component Analysis (PCA); spatiotemporal descriptors; Histogram of Gradient (HOG); Support Vector Machine (SVM); Histogram of Oriented Feature (HOOF).
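The classification phase described in this abstract can be sketched with scikit-learn's SVC on histogram-style feature vectors. The synthetic histograms, bin count and three-class structure below are stand-ins for illustration, not the reconfigured Cornell/Berkeley data or the authors' feature pipeline:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-in data: each sample is a 32-bin histogram of
# spatiotemporal depth features (HOG/HOOF-style bins); 3 activity classes.
n_per_class, n_bins = 60, 32
X_parts, y = [], []
for label in range(3):
    centre = rng.random(n_bins)                                  # class "signature"
    hists = np.abs(centre + 0.1 * rng.standard_normal((n_per_class, n_bins)))
    hists /= hists.sum(axis=1, keepdims=True)                    # normalise to unit mass
    X_parts.append(hists)
    y += [label] * n_per_class

X, y = np.vstack(X_parts), np.array(y)
Xtr, Xte, ytr, yte = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# RBF-kernel SVM, as in the abstract's classification phase.
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(Xtr, ytr)
acc = clf.score(Xte, yte)
```

In practice the histograms would be computed from depth frames (e.g. gradient and flow orientations over spatiotemporal volumes) rather than sampled synthetically.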
Stock Price Trend Prediction with Long Short Term Memory Neural Networks
by Varun Gupta, Mujahid Ahmad
Abstract: The stock market is an immensely complex, chaotic and dynamic environment, which makes accurately predicting changes in it a challenging task. A number of approaches have been adopted to take on that challenge, and machine learning has been the crux of many of them. There are plenty of examples of machine learning algorithms yielding satisfactory results for this type of prediction. This paper presents the use of Long Short Term Memory (LSTM) networks in this scenario to predict future trends of stock market prices based on patterns in the price history, paired with technical analysis indicators. To this end, a model was built and a series of experiments was conducted across a range of parameters, and the results were analysed against predefined metrics to assess whether this algorithm offers any improvement over other machine learning methods and strategies. A comparative study is also presented that analyses popularly used optimisers and error schemes to determine which optimiser yields the best results. The results obtained are promising, providing a reasonably accurate prediction of the rise or fall of a particular stock in the near future.
Keywords: Stock market prediction; LSTM; Recurrent neural networks; artificial neural networks; machine learning; deep learning; artificial intelligence; soft computing.
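The data preparation implied by this abstract, pairing price-history windows with a technical indicator and labelling the next-step trend, might be sketched as follows. The look-back length, the choice of a simple moving average as the indicator, and the rise/fall labelling rule are assumptions for illustration, not the paper's exact setup:

```python
import numpy as np

def sma(prices, window):
    """Simple moving average, a basic technical analysis indicator."""
    kernel = np.ones(window) / window
    return np.convolve(prices, kernel, mode="valid")

def make_windows(prices, lookback=10, sma_window=5):
    """Build (lookback, 2) feature windows of [price, SMA] plus 0/1
    rise/fall labels for the next time step, ready for an LSTM whose
    input shape is (samples, timesteps, features)."""
    indicator = sma(prices, sma_window)
    aligned = prices[sma_window - 1:]          # align prices with the SMA
    X, y = [], []
    for t in range(lookback, len(aligned) - 1):
        feats = np.stack([aligned[t - lookback:t],
                          indicator[t - lookback:t]], axis=1)
        X.append(feats)
        y.append(1 if aligned[t + 1] > aligned[t] else 0)  # next-step trend
    return np.array(X), np.array(y)
```

Each window would then be fed to an LSTM layer followed by a sigmoid output for the binary trend prediction; the optimiser and loss would be the subject of the comparative study the abstract mentions.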
Prediction of Air Pollution Using LSTM Based Recurrent Neural Networks
by Varun Gupta, Akshat Jain, Ashim Bhasin
Abstract: This paper proposes a system that predicts the pollution level at a given hour and location. It also discusses the various parameters associated with increasing pollution across the globe, its ill effects and future scenarios. An air quality dataset reporting the pollution level and weather conditions every hour over five years is used, and Long Short Term Memory (LSTM) based recurrent neural networks, implemented with the Keras library and TensorFlow as the back-end, are applied in a Python environment. The paper studies all 13 parameters affecting the weather and air pollution conditions and forecasts the pollution for any hour given the weather conditions and the pollution value for the previous hour.
Keywords: Air Pollution Prediction; LSTM; Recurrent Neural Networks; Artificial Neural Networks; deep learning; machine learning; soft computing; artificial intelligence.
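The supervised reframing implied by "forecasts the pollution for any hour given the weather conditions and the pollution value for the previous hour" can be sketched as a lag-1 framing of the multivariate series. The column layout (pollution in column 0, weather parameters in the rest) is an assumption for illustration, not the dataset's actual schema:

```python
import numpy as np

def to_supervised(data, n_lag=1):
    """Frame a multivariate hourly series as a supervised problem:
    all readings at hour t-1 predict the pollution level at hour t.
    `data` has shape (hours, n_features); column 0 is pollution."""
    X = data[:-n_lag]        # every parameter at the previous hour
    y = data[n_lag:, 0]      # pollution level at the current hour
    return X, y
```

For an LSTM, `X` would then be reshaped to (samples, timesteps, features), e.g. `X[:, None, :]` for a single-step look-back, before being fed to a Keras `LSTM` layer with a linear output unit.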