International Journal of Intelligent Systems Technologies and Applications (33 papers in press)
Efficient Blind Nonparametric Dependent Signal Extraction Algorithm for Determined and Underdetermined Mixtures
by Fasong WANG
Abstract: Blind extraction or separation of statistically independent source signals from linear mixtures has been well studied over the last two decades by searching for local extrema of certain objective functions, such as non-Gaussianity (NG) measures. In this paper, a blind source extraction (BSE) algorithm for extracting statistically dependent source signals from underdetermined and determined linear mixtures is derived using a nonparametric NG measure. After showing that maximization of the NG measure can also separate or extract weakly dependent source signals, the nonparametric NG measure is defined by statistical distances between different distributions of the separated signals based on the cumulative distribution function (CDF) instead of the traditional probability density function (PDF), which can be estimated efficiently from quantiles and order statistics (OS) using the norm. The nonparametric NG measure is optimized by a deflation procedure to extract or separate the dependent source signals. Simulation results for synthetic and real-world data show that the proposed nonparametric extraction algorithm can extract the desired dependent source signals and yield ideal performance.
Keywords: blind source separation; non-Gaussianity measure; independent component analysis; probability density function; dependent component analysis; underdetermined blind source extraction.
Indic Script Identification from Handwritten Document Images
by Pawan Kumar Singh, Ram Sarkar, Mita Nasipuri
Abstract: Script identification plays an important role in document image processing, especially in multilingual environments. This paper employs two conventional textural methods for recognizing the scripts of handwritten documents inscribed in different Indic scripts. The first method extracts the well-known Haralick features from the Spatial Gray-Level Dependence Matrix (SGLDM), and the second computes the fractal dimension using Segmentation-Based Fractal Texture Analysis (SFTA). Finally, a 104-element feature vector is constructed from the features produced by these two methods. The proposed technique is then evaluated on a dataset comprising 360 handwritten document pages written in 12 official Indian scripts, namely Bangla, Devanagari, Gujarati, Gurumukhi, Kannada, Malayalam, Manipuri, Oriya, Tamil, Telugu, Urdu and Roman. Experiments with multiple classifiers reveal that the Multi Layer Perceptron (MLP) achieves the highest identification accuracy of 96.94%. This encouraging outcome confirms the efficacy of conventional textural features for handwritten Indic script identification.
Keywords: Script Identification; Handwritten Indic documents; Textural Features; Spatial Gray-Level Dependence Matrix; Segmentation-Based Fractal Texture Analysis; Statistical Significance Tests.
Enhancement based Background Separation Techniques for Fruit Grading and Sorting
by Jasmeen Gill
Abstract: Image processing plays a remarkable role in the automation of fruit grading and sorting. When grading fruit, accurate extraction of the fruit object from the image (background separation) is the chief concern. An appropriate segmentation technique is employed to extract the fruit, and to accomplish this accurately, enhancement must be performed prior to segmentation. However, the majority of researchers have emphasized fruit segmentation alone. This communication is intended to show the potential of enhancement techniques when combined with fruit image segmentation. Besides, it presents a comparative analysis of enhancement-based background separation techniques for fruit grading and sorting. For this purpose, four main enhancement techniques, namely Contrast Limited Adaptive Histogram Equalization (CLAHE), the Gaussian filter, the Median filter and the Wiener filter, were utilized, and Basic Global thresholding, Adaptive thresholding, Otsu thresholding and Otsu-HSV thresholding were applied for segmentation. Sixteen sub-models were developed by combining each enhancement method with every segmentation technique. Afterwards, the image quality of the sub-models was validated using quantitative as well as qualitative analyses. Test results demonstrate that the CLAHE/Otsu-HSV model outperformed the others for fruit grading and sorting.
Keywords: Digital image processing; Segmentation; Enhancement; Fruit grading and sorting; Background separation; Otsu-HSV segmentation.
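As an illustration of the Otsu segmentation step named in this entry, a minimal pure-Python sketch might look as follows. The function names and the list-of-rows image representation are ours, and the paper's full pipeline additionally applies CLAHE enhancement and an HSV variant, both omitted here:

```python
def otsu_threshold(gray):
    """Return the Otsu threshold for a grayscale image (list of rows, values 0-255)."""
    hist = [0] * 256
    n = 0
    for row in gray:
        for v in row:
            hist[v] += 1
            n += 1
    total_sum = sum(i * hist[i] for i in range(256))
    sum_b = 0.0      # cumulative intensity sum of the background class
    w_b = 0          # background pixel count
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = n - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mu_b = sum_b / w_b
        mu_f = (total_sum - sum_b) / w_f
        # maximise the between-class variance over all candidate thresholds
        var_between = w_b * w_f * (mu_b - mu_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def segment(gray):
    """Binary foreground mask: a pixel is object if above the Otsu threshold."""
    t = otsu_threshold(gray)
    return [[1 if v > t else 0 for v in row] for row in gray]
```

On a toy two-class image such as `[[10, 10, 200, 200], [10, 10, 200, 200]]`, the threshold lands between the two intensity clusters and the bright pixels form the foreground mask.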
Soft Neural Network based Block Chain Risk Estimation
by Ganglong Duan, Wenxiu Hu, Yu Tian
Abstract: Financial risk refers to the uncertainty caused by changes in economic and financial conditions. As an economic phenomenon, financial risk is objective and cannot be eliminated. At present, there are still some imperfect aspects in research on financial risk assessment. To achieve a comprehensive evaluation of financial risks, this paper analyses the methodology of soft computing and neural networks. The basic function of a financial risk monitoring and evaluation system is to forecast the trend of financial activities and risk status; this is also the fundamental function and objective of the assessment system. We use BP neural network theory to establish a logistics finance risk evaluation model, using the BP neural network structure and training principles to train the sample data. The soft computing method accommodates uncertainty and irrationality, which breaks through the limitations of traditional hard computing. There is a consistency between the fuzzy reasoning principle of soft computing and the attributes and structure of the objective world; therefore, soft computing can be used in the field of financial risk assessment.
Keywords: Block chain; financial risk; assessment; neural network; soft computing.
Data Flow Tracking based Block Chain Modelling
by Ganglong Duan, Wenxiu Hu, Yu Tian
Abstract: This article analyses blockchain, a recent focus of the global financial industry. It introduces the concept and scope of the technology and, taking financial payment services as an example, analyses four characteristics of blockchain: decentralization, trustlessness, collective maintenance and a secure database. Reviewing the current state of blockchain applications and research in the financial field, and taking international payment as an example, it analyses the differences between blockchain-based and traditional payment modes, and argues that the international financial industry's attention to blockchain is, in essence, an attempt to build a flat, globally integrated settlement system. Finally, we summarize the development trends of blockchain technology innovation and raise some key issues that deserve attention. Blockchain is a decentralized infrastructure and distributed computing paradigm that has gradually risen with the popularity of Bitcoin and other digital cryptocurrencies; at present it has attracted great attention from government departments, financial institutions, enterprises and capital markets. Blockchain technology is decentralized, time-sequenced, collectively maintained, programmable, safe and reliable, making it especially suitable for constructing programmable monetary systems, financial systems and macro social systems. The rapid development of blockchain applications in finance, such as digital currency and smart contracts, creates new financial risks, which brings a series of challenges to the existing financial supervision system in China.
Keywords: data stream; tracking algorithm; finance; block chain; technical framework.
AN EFFICIENT MEDICAL IMAGE WATERMARKING TECHNIQUE USING INTEGER WAVELET TRANSFORM AND QUICK/FAST RESPONSE CODES
by K.J. Kavitha
Abstract: Securing medical images to make them tamper-free is a very difficult task. This challenge is efficiently handled with the assistance of digital watermarking techniques. With the help of this growing technology, we are able to evaluate the validation, dependability, privacy and integrity of medical images. Several algorithms have been implemented using this technology. Digital Watermarking (DWM) is implemented in two main domains: transform and spatial. DWM is mostly implemented using transform techniques such as Singular Value Decomposition (SVD), Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), the combination of DCT and DWT, and the combination of DWT and SVD. Recently, this work has been extended with the Integer Wavelet Transform (IWT). One of the foremost challenges in these technologies is the information embedding capacity, that is, the number of bits hidden in the cover information. This parameter is considered in evaluating the system since, as the number of embedded bits grows, the distortion of the medical information also grows, and vice versa. Distortion of the information must be strictly avoided in medical and military applications. To overcome this drawback, current research aims to implement digital watermarking with a smaller embedding payload. One possible way to reduce the number of embedded bits is to use a Quick/Fast Response (QR) code, which consumes less space than other existing formats such as the barcode. In this paper, an approach is proposed to implement a digital watermarking technique for medical images that combines the Integer Wavelet Transform (IWT), the bit-plane method and the QR code. The watermarked image is evaluated against a number of parameters to gauge the efficiency of the technique. The experiment is carried out for two different bit planes, and the results are compared to show which bit-plane range yields greater efficiency; finally, conclusions are drawn.
Keywords: Watermarking; DWT; QR Code.
Automatic Identification of Rhetorical Relations Among Intra-sentence Discourse Segments in Arabic
by Samira Lagrini, Nabiha Azizi, Mohammed Redjimi, Montheer Al Dwairi
Abstract: Identifying discourse relations, whether implicit or explicit, has seen renewed interest and remains an open challenge. We present the first model that automatically identifies both explicit and implicit rhetorical relations among intra-sentence discourse segments in Arabic text. We build a large discourse-annotated corpus following the Rhetorical Structure Theory framework. Our list of rhetorical relations is organized into a three-level hierarchy of 23 fine-grained relations grouped into seven classes.
To automatically learn these relations, we evaluate and reuse features from the literature, and contribute three additional features: the accusative of purpose, specific connectives and the number of antonym words. We perform experiments on identifying fine-grained and coarse-grained relations. The results show that, compared with all the baselines, our model achieves the best performance in most cases, with an accuracy of 91.05%.
Keywords: Discourse relations; Rhetorical structure theory; Arabic language.
Unsupervised Generation of Arabic Words
by Ahmed Khorsi, Abeer Alsheddi
Abstract: Automated word generation might be seen as the reverse of morphology learning: the aim is to automatically coin valid words in the target language. As with many other challenges in Natural Language Processing (NLP), the generation engine can be built using a supervised or an unsupervised approach. The former requires a clean learning dataset of a decent size, whereas the latter needs no more than plain text. Nonetheless, unsupervised approaches are usually blamed for their low accuracy. The present article reports the results of an investigation of context-free generation of classical Arabic words. Although unsupervised and relatively simple, the proposed approach easily reached an accuracy of 90%.
Keywords: Arabic language; classical vocabulary; computational linguistics; corpus expansion; linguistic corpora; morphology learning; natural language processing; unsupervised learning; statistical linguistics; word generation.
MODELLING AND SIMULATION OF FUZZY BASED MPPT CONTROL OF GRID CONNECTED PV SYSTEM UNDER VARIABLE LOAD AND IRRADIANCE
by Subhashree Choudhury, Pravat Kumar Rout
Abstract: A Photovoltaic (PV) based distributed generation system has a nonlinear power characteristic curve under random variation in solar irradiance, ambient temperature and electric load. As a result, accurate detection and tracking of the maximum power points (MPPs) requires an optimal controller with dynamic control capability. As a solution to this issue, this paper presents an intelligent Mamdani-based Fuzzy Logic Controller (MFLC) for maximum power point tracking (MPPT) of a PV system. Different test cases covering possible load and irradiance variations in grid-connected operation are investigated. To confirm that the power quality indices remain within IEEE standard specifications, Fast Fourier Transform (FFT) analysis of the voltage and current at the point of common coupling has been performed. A detailed comparison has been made between PV without MPPT, with Incremental Conductance, and with the proposed Fuzzy Logic Control (FLC). The results show enhanced efficiency of PV energy production and reflect the effectiveness of the proposed scheme, justifying its real-time application.
Keywords: Photovoltaic (PV) array; Maximum power point Tracking (MPPT); dc-dc boost converter; Incremental Conductance (IC); Mamdani based fuzzy logic controller (MFLC); Fast Fourier Transform (FFT).
STATISTICAL DATA MINING TECHNIQUE FOR SALIENT FEATURE EXTRACTION
by Jahnavi Reddy
Abstract: Internet-based news documents are an important source of information transmission. Large numbers of news documents from various newswire sources are available on the internet, and it is almost impossible to view all the news documents returned by a user's search. Term weighting is a useful technique that extracts important features from textual documents, thereby providing a basis for different text mining approaches. While several term weighting algorithms based on manifold statistical measures have been proposed in the past, they are inaccurate in extracting salient terms from internet-based digitized news documents. The objective of this work is to study the existing term weighting algorithms for feature extraction and to develop an efficient term weighting algorithm for mining salient features from internet-based newswire sources. TF*PDF (Term Frequency * Proportional Document Frequency) is the most popular term weighting algorithm for extracting influential features from news archives. TF*PDF satisfies the basic property of features in news documents, i.e., frequency, and thus increases accuracy compared to other term weighting algorithms such as Binary, TF (Term Frequency), TF-IDF (Term Frequency-Inverse Document Frequency) and its variants. However, the frequency property alone is not sufficient for salient topic extraction. To overcome this problem, this paper presents an innovative and effective term weighting algorithm that considers Position, Scattering and Topicality along with Frequency for extracting salient events. Frequency counts the number of occurrences of a term; Position focuses on the location of the term; Scattering focuses on the distribution of a term within the entire document; and Topicality is the variation in the frequency of usage of a term over a period of time.
Experimental evaluation shows that the proposed term weighting algorithm performs better than the existing term weighting algorithms in terms of Coverage Rate.
Keywords: Term Weighting; TF*PDF; FPST.
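For context, the TF*PDF baseline this entry builds on can be sketched in a few lines of Python. This is an illustrative reading of the standard formulation (channel-normalized term frequency, boosted exponentially by the proportion of the channel's documents containing the term); the data layout and variable names are ours:

```python
import math
from collections import Counter

def tf_pdf(channels):
    """TF*PDF term weights.
    channels: list of channels; each channel is a list of documents,
    each document a list of term tokens. Returns {term: weight}."""
    weights = Counter()
    for docs in channels:
        n_docs = len(docs)
        freq = Counter()          # term frequency within this channel
        doc_count = Counter()     # number of docs in the channel containing the term
        for doc in docs:
            freq.update(doc)
            doc_count.update(set(doc))
        norm = math.sqrt(sum(f * f for f in freq.values())) or 1.0
        for term, f in freq.items():
            # normalised frequency times exp(proportion of documents
            # in this channel that mention the term)
            weights[term] += (f / norm) * math.exp(doc_count[term] / n_docs)
    return dict(weights)
```

A term that appears frequently and in many documents of a channel receives a sharply higher weight than one with the same raw frequency concentrated in a single document, which is the "frequency property" the abstract refers to.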
Neural Network Based Adaptive Selection CFAR for Radar Target Detection in Various Environments
by Budiman Putra Asmaur Rohman, Dayat Kurniawan
Abstract: Constant False Alarm Rate (CFAR), a target detection method commonly used in radar systems, performs inconsistently across different environments. To improve radar detectability, this paper proposes a novel scheme for radar target detection using a neural network based adaptive selection CFAR. The proposed method employs the Cell-Averaging, Ordered-Statistic, Greatest-of and Smallest-of CFAR thresholds as reference bases. The pattern of those threshold values, combined with the Cell Under Test signal value, is identified and classified by the neural network to compute a raw threshold. Then, the final threshold is selected according to the nearest value between the raw threshold and the four reference CFARs. The performance of the proposed method is examined against three possible radar-system cases: a homogeneous background, multiple targets and a clutter boundary. The results show that the proposed method outperforms the classical CFARs because the adaptive selection algorithm can properly select among the reference CFARs for the given cases, particularly in the homogeneous and multiple-target environments.
Keywords: neural network; adaptive selection; CFAR; radar; target detection.
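As a reference point, the Cell-Averaging CFAR that serves as one of the four baseline thresholds in this entry can be sketched as follows. This is a minimal one-dimensional version using the standard scaling factor for a desired false-alarm rate; parameter names are ours:

```python
def ca_cfar(signal, num_train, num_guard, rate_fa):
    """Cell-Averaging CFAR: return indices of detected cells.
    signal: list of power samples; num_train / num_guard: cells per side."""
    n = len(signal)
    k = 2 * num_train                              # total training cells
    # threshold scaling factor for the desired false-alarm probability
    alpha = k * (rate_fa ** (-1.0 / k) - 1.0)
    detections = []
    for i in range(num_train + num_guard, n - num_train - num_guard):
        # training windows on both sides of the cell under test,
        # skipping the guard cells adjacent to it
        lead = signal[i - num_guard - num_train : i - num_guard]
        lag = signal[i + num_guard + 1 : i + num_guard + num_train + 1]
        noise = sum(lead) + sum(lag)
        threshold = alpha * noise / k
        if signal[i] > threshold:
            detections.append(i)
    return detections
```

On a flat noise floor with a single strong return, only the spike's cell exceeds the adaptive threshold; the inconsistency the abstract mentions appears exactly when a second target or a clutter edge contaminates the training windows.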
Local Search-Based Recommender System for Computing the Similarity Matrix
by Yousef Kilani, Ayoub Alsarhan, Mohammad Bsoul, Subhieh El-Salhi
Abstract: Recommender systems (RSs) reduce the users' effort in finding their favourite items among a great number of items. In collaborative-based RSs, there are different similarity measures for computing the similarity values between every two users or two items. These measures include genetic algorithms, Pearson correlation and cosine-based similarity techniques. The number of items and personal attributes (e.g., environment, sex, job, religion, age, country, education) used by the similarity metric algorithms is increasing significantly, which makes the recommendation task more difficult.
In this work, we introduce a new RS, named LSRS, that uses local search algorithms to compute the similarity matrix. To the best of our knowledge, no work in the RS literature has used local search algorithms for this task. We consider part of the dataset as training data (e.g., 80%) to calculate the similarity between every two users by LSRS; the remaining dataset is the testing data (e.g., 20%). LSRS finds the similarity among users by initializing the similarity value between every two users to a random value between 0 and 1, and then using local search to adjust this value by training the recommender system on the training data. We experimentally show that LSRS computes the similarity matrix and outperforms other techniques such as Pearson correlation, cosine similarity and some recent genetic-based recommender systems.
Keywords: Collaborative filtering-based recommender systems; similarity matrix; Recommender Systems; local search algorithms; similarity measures.
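For comparison with the local-search approach, the cosine-based similarity matrix that LSRS is benchmarked against can be sketched as follows. This is an illustrative pure-Python version over rating dictionaries; function and variable names are ours:

```python
import math

def cosine_sim(ratings_u, ratings_v):
    """Cosine similarity between two users' rating dicts {item_id: rating},
    computed over the co-rated items only."""
    common = set(ratings_u) & set(ratings_v)
    if not common:
        return 0.0
    dot = sum(ratings_u[i] * ratings_v[i] for i in common)
    nu = math.sqrt(sum(ratings_u[i] ** 2 for i in common))
    nv = math.sqrt(sum(ratings_v[i] ** 2 for i in common))
    return dot / (nu * nv)

def similarity_matrix(users):
    """users: {user_id: {item_id: rating}} -> {(u, v): similarity}
    for every unordered pair of users."""
    ids = sorted(users)
    return {(u, v): cosine_sim(users[u], users[v])
            for i, u in enumerate(ids) for v in ids[i + 1:]}
```

LSRS replaces this closed-form computation with randomly initialized entries refined by local search against the training ratings.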
Similarity searching in ligand based virtual screening using different fingerprints and different similarity coefficients
by BERRHAIL Fouaz, BELHADEF HACENE, HENTABLI HAMZA, FAISSAL SAEED
Abstract: Similarity searching plays an increasingly important role in virtual screening. It is a screening technique that works by comparing the features of the target compound with the features of each compound in a database of compounds. This comparison can be described in three steps. The first step represents the target compound and the database compounds in an equivalent form: a set of binary elements describing the presence or absence of compound attributes (a fingerprint). The second step uses a similarity coefficient to calculate the similarity score between two compound representations. The third step ranks the database compounds in order of similarity score, in order to determine the active compounds. Many approaches and techniques have been introduced in the literature to enhance and improve similarity-based virtual screening. In this work, our primary interest is to investigate the effect of using different combinations of fingerprint and similarity coefficient in ligand-based virtual screening (LBVS). We use the MDDR (MDL Drug Data Report) database to evaluate the different descriptor-coefficient combinations. The results obtained with some coefficients demonstrate performance superior to that obtained with the Tanimoto coefficient.
Keywords: Ligand Based; Virtual Screening; Similarity Searching; Similarity Coefficients; Molecular Descriptors; Fingerprint; Drug Discovery.
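The three-step similarity-searching loop described in this entry reduces to a few lines once fingerprints are represented as sets of on-bits. A minimal sketch using the Tanimoto coefficient the paper takes as its baseline (function names are ours):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two binary fingerprints given as
    sets of on-bit positions: T = c / (a + b - c),
    where c is the number of shared on-bits."""
    c = len(fp_a & fp_b)
    denom = len(fp_a) + len(fp_b) - c
    return c / denom if denom else 0.0

def rank_database(query_fp, database):
    """Step 3 of the search: rank database compounds (name -> fingerprint)
    by descending Tanimoto similarity to the query compound."""
    return sorted(database,
                  key=lambda name: tanimoto(query_fp, database[name]),
                  reverse=True)
```

Swapping `tanimoto` for another coefficient (Dice, cosine, Forbes, ...) while keeping the ranking loop fixed is exactly the descriptor-coefficient comparison the paper carries out.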
Collaborative Approach to Secure Agents in Ubiquitous Healthcare Systems
by Nardjes BOUCHEMAL
Abstract: In a sensitive domain such as healthcare, people's lives are monitored, and fast access to health information is very important, especially in emergencies (e.g., allergies and chronic diseases). For this, the agent paradigm is very promising for ubiquitous healthcare systems. However, the inherent complexity of information security is greater because agents are characterized by autonomy and intelligence and are not under the control of a single entity. Indeed, the big challenge in agent-based ubiquitous healthcare systems is to ensure that emergency workers and doctors can access personal information quickly whenever needed, but with a high level of security. Heavy cryptography saturates agents embedded in resource-limited devices and obstructs healthcare workers. The idea is to lighten agents with simple cryptographic concepts while strengthening surveillance and making it collaborative. Consequently, all agents of the system are concerned with security and collaborate to maintain it. This paper addresses security challenges in agent-based ubiquitous healthcare systems and presents a collaborative approach. The proposed agents are implemented in the JADE-Leap platform designed for restricted devices.
Keywords: Security; U-healthcare; Collaboration; Collective Decision; Ubiquitous Agents.
Statistical Assessment of Nonlinear Manifold Detection Based Software Defect Prediction Techniques
by Soumi Ghosh, Ajay Rana, Vineet Kansal
Abstract: Prediction of software defects is immensely important for obtaining improved and desired outcomes at minimized cost and in less time. Defect prediction in software systems has attracted researchers to apply various techniques, but those techniques have not been fully effective. Software datasets comprise redundant or undesired features that hinder the effective application of techniques, resulting in more time-consuming and inappropriate prediction of defective areas of software. Hence, proper techniques are required for accurate software defect prediction. A newer application of Nonlinear Manifold Detection Techniques (Nonlinear MDTs) has been examined for accurate prediction of defects in less time and at lower cost using different classification techniques. In this work, we analyzed and tested the effect of Nonlinear MDTs to find the best classification technique with the highest accuracy for all software datasets. A comparison has been made between the results obtained without and with Nonlinear MDTs to estimate the improvement in classifier performance from dimensionality reduction. A paired two-tailed t-test has been performed to statistically test and verify the performance of classifiers using Nonlinear MDTs on all datasets. The outcome revealed that, among all Nonlinear MDTs, FastMVU makes the most accurate prediction of software defects for most of the classification techniques.
Keywords: Dimensionality Reduction; FastMVU; Machine Learning; Manifold Detection; Nonlinear; Promise Repository; Software Defect Prediction.
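The paired two-tailed t-test used for the statistical assessment in this entry boils down to the familiar statistic t = d̄ / (s_d / √n) over the matched per-dataset score differences of two classifiers. A minimal sketch (the significance lookup against the t distribution with n−1 degrees of freedom is omitted):

```python
import math

def paired_t_statistic(a, b):
    """Paired t statistic for two matched samples (e.g. accuracies of two
    classifiers on the same datasets). Requires non-constant differences."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance of differences
    return mean / math.sqrt(var / n)
```

The resulting statistic is compared against the two-tailed critical value (e.g. from `scipy.stats.t`) to decide whether the performance gap between the with-MDT and without-MDT runs is significant.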
A game-based virtual machine pricing mechanism in federated clouds
by Ying Hu
Abstract: In a federated cloud environment, diverse pricing schemes among different IaaS service providers (ISPs) form a complex economic landscape that nurtures the market of cloud brokers. Although pricing mechanisms have been proposed in the past few years, few of them address the issue of competitive and cooperative behaviours among different ISPs. In this paper, we employ the learning curve to model the operation cost of ISPs, and introduce a novel algorithm that determines the cooperative pricing mechanism among different ISPs. The cooperation decision algorithm uses the operation cost computed from the learning curve model and the price policies obtained from the competition part as parameters to calculate the final revenue when outsourcing or locally satisfying users' resource requests. Extensive experiments are conducted on a real-world federated cloud platform, and the experimental results are compared with three existing pricing mechanisms. Our experimental results show that the proposed pricing mechanism is effective in improving resource utilization as well as reducing the profit loss caused by request rejection.
Keywords: cloud computing; pricing mechanism; resource market; game theory.
Real Time Path Planning for High Speed UGVs
by Ajith Gopal, Elsmari Wium
Abstract: The application of a modified A-Star (A*) global search algorithm and a trajectory planner based on the tentacles algorithm approach are investigated for real-time path and trajectory planning on an unmanned ground vehicle operating at a speed of 40 km/h. The fundamental assumption made is that for high-speed applications, the requirement for an optimal path is secondary to the requirement for short processing times, provided that a solution, if it exists, is found. The proposed solution is benchmarked against the original A* algorithm and shows a reduction in search space of up to 84% and a reduction in processing time of up to 97%. Results for the trajectory planner are also presented, though no direct comparative evaluation against the original tentacles algorithm was performed. The combined path and trajectory processing time of the proposed solution translates to less than 2 mm of travel distance before a reaction to a change in the environment can be processed.
Keywords: Path Planning; Trajectory Planning; UGV; Real Time; A-Star.
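For reference, the unmodified A* baseline against which the proposed planner is benchmarked can be sketched on an occupancy grid as follows. This is the standard 4-connected formulation with an admissible Manhattan heuristic, not the authors' modified variant:

```python
import heapq
from itertools import count

def a_star(grid, start, goal):
    """Standard A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).
    Returns the shortest path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance: admissible on a unit-cost 4-grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    tie = count()                       # tie-breaker so the heap never compares parents
    open_set = [(h(start), 0, next(tie), start, None)]
    came_from = {}                      # cell -> parent, fixed when the cell is expanded
    best_g = {start: 0}
    while open_set:
        _, g, _, cur, parent = heapq.heappop(open_set)
        if cur in came_from:            # already expanded at an equal or better cost
            continue
        came_from[cur] = parent
        if cur == goal:                 # reconstruct by walking parents back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, next(tie), nxt, cur))
    return None
```

The paper's modification trades this guaranteed-optimal search for a smaller expanded search space (up to 84% smaller) to meet the real-time budget at 40 km/h.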
Detection of Glaucoma based on Cup-to-Disc Ratio using Fundus Images
by Imran Qureshi, Muhammad Attique, Muhammad Sharif, Tanzila Saba
Abstract: Glaucoma is permanent damage to the optic nerve that causes partial or complete vision loss. This work presents a glaucoma detection scheme that measures the cup-to-disc ratio (CDR) from fundus photographs. The proposed system consists of image acquisition, feature extraction and glaucoma assessment steps. Image acquisition covers the transformation of an RGB fundus image into grayscale and enhancement of the contrast of fundus features. The boundaries of the optic disc and cup are segmented in the feature extraction step. Finally, the cup-to-disc ratio of the processed image is computed to assess glaucoma in the image. The proposed system is tested on 398 fundus images from four publicly available datasets, obtaining an average sensitivity of 90.6%, specificity of 97% and accuracy of 96.1% in glaucoma diagnosis. The achieved results show the suitability of the proposed method for glaucoma detection.
Keywords: Cup-to-disc ratio (CDR); fundus images; glaucoma; image processing; optic disc; segmentation.
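Once the disc and cup masks are segmented, the CDR computation described in this entry is a simple ratio of the masks' extents. A minimal sketch over binary masks represented as lists of rows; the vertical-diameter choice and the 0.5 suspect cut-off mentioned in the comment are illustrative assumptions, not taken from the paper:

```python
def vertical_diameter(mask):
    """Vertical extent (in pixels) of a binary segmentation mask (list of rows)."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    return rows[-1] - rows[0] + 1 if rows else 0

def cup_to_disc_ratio(cup_mask, disc_mask):
    """CDR = cup diameter / disc diameter. A CDR above roughly 0.5 is
    commonly treated as a glaucoma suspect (illustrative, not a clinical rule)."""
    disc = vertical_diameter(disc_mask)
    return vertical_diameter(cup_mask) / disc if disc else 0.0
```

In the full system this ratio, computed from the segmented boundaries, drives the final glaucoma assessment step.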
DESIGN OF GENERALIZED PREDICTIVE CONTROLLER FOR DYNAMIC POSITIONING SYSTEM OF SURFACE SHIPS
by Werneld Ngongi, Jialu Du, Rui Wang
Abstract: This paper presents a generalized predictive control algorithm (GPCA) for a ship dynamic positioning (DP) controller, using a Controlled Autoregressive Integrated Moving Average (CARIMA) model to describe the controlled object. The proposed control system is capable of making the position and heading of the ship converge to the desired values by choosing the error correction coefficient and applying parameter adaptation and feedback correction techniques. Firstly, the basic principle of the generalized predictive control algorithm is introduced. Secondly, the algorithm is used to design the ship dynamic positioning controller. Finally, simulations of the designed controller are presented. Simulation results prove the effectiveness and robustness of the controller.
Keywords: Dynamic positioning; Generalized Predictive Controller; feedback correction; Rolling Optimization; performance index; surface ships.
Meta-Heuristic Techniques for Path Planning: Recent Trends and Advancements
by Monica Sood, Vinod Kumar Panchal
Abstract: Path planning is a promising research domain with extensive application areas. It is the procedure of constructing a collision-free path from a specified source to a destination point. Earlier, classical techniques were widely implemented to solve path planning problems. Classical techniques are very easy to implement, but they are time-consuming and are not effective under uncertainty. Meta-heuristic techniques, by contrast, can perform even in approximate and uncertain environments, which motivates their focused use in optimal path planning research. This paper presents an overview of recent trends and advancements from 2001 to 2017 in the field of optimal path planning using meta-heuristic techniques. In this study, different meta-heuristic algorithms are analyzed and classified into three categories: swarm-based meta-heuristic techniques, non-swarm-based techniques and combinational meta-heuristic techniques. In addition, the basic workings and applicability of specific algorithms for path planning are discussed, along with their strengths and downsides.
Keywords: Path Planning; Meta-Heuristic Techniques; Optimization; Swarm Intelligence; Artificial Intelligence; Machine Learning; Computational Intelligence.
A Novel and Improved Developer Rank Algorithm for Bug Assignment
by Asmita Yadav, Sandeep Kumar Singh
Abstract: Analytical studies on automatic bug triaging have the main objective of recommending an appropriate developer for a bug report while reducing bug tossing length, time and effort in bug resolution. In the bug triaging process, if the first recommended developer cannot fix a bug, it is tossed to another developer, and the tossing continues until the bug is assigned and resolved. To the best of our knowledge, existing approaches have not considered developers' contributions and performance assessment metrics in the bug triaging process. In this paper, we propose a novel and improved two-phase bug triager comprising developer profile creation and assignment phases. A developer profile is built using individual contribution (IC) and performance assessment (PA) metrics. A developer's contribution and performance on previously fixed bug reports are analyzed to calculate the developer's weighted score, which indicates the level of expertise to fix and resolve a newly reported bug. This approach is tested on two open source projects, Eclipse and Mozilla. Empirical results show that the proposed approach achieves a significantly higher F-score, up to 90% for both projects, and effectively reduces bug tossing length by up to 11.8% compared to existing approaches.
Keywords: Bug Repository; Bug Triaging; Developer Expertise; Bug Assignment; Bug Reports; Bug Tossing; Developer Contribution Assessment.
Biased Face Patching Approach for Age Invariant Face Recognition using Convolutional Neural Network
by Mrudula Nimbarte, Kishor K. Bhoyar
Abstract: In recent years, considerable interest has been observed among researchers in the domain of age invariant face recognition, driven by its commercial applications in many real-world scenarios. Many researchers have proposed innovative approaches to this problem, but a significant gap still remains. In this paper, we propose a novel technique to fill this gap: instead of using the whole face of a person, we use horizontal and vertical face patches. Two different feature vectors are obtained from these patches using Convolutional Neural Networks (CNNs), and the two feature vectors are then fused using a weighted average of the features of both patches. Lastly, an SVM is used as a classifier on the fused vector. Two publicly available datasets, FGNET and MORPH (Album 2), are used to test the performance of the system. This novel approach outperforms other contemporary approaches with a very good Rank-1 recognition rate on both datasets.
Keywords: Face Recognition; AIFR; Aging Model; Deep Learning; CNN; Weighted Average.
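The weighted-average fusion step described in the abstract could look like the sketch below. The function name, the equal weight `w=0.5` and the L2 normalisation are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def fuse_features(f_horizontal, f_vertical, w=0.5):
    """Fuse two CNN feature vectors by weighted averaging: weight w goes to
    the horizontal-patch features, (1 - w) to the vertical-patch features.
    Each vector is L2-normalised first so neither dominates by magnitude."""
    fh = np.asarray(f_horizontal, dtype=float)
    fv = np.asarray(f_vertical, dtype=float)
    fh /= np.linalg.norm(fh) + 1e-12
    fv /= np.linalg.norm(fv) + 1e-12
    return w * fh + (1.0 - w) * fv

# Toy stand-ins for the two patch-level CNN feature vectors.
fused = fuse_features([1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0])
```

The fused vector would then be passed to the SVM classifier in place of either patch vector alone.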
Automatic Sizing of CMOS based Analog Circuits Using Cuckoo Search Algorithm
by Pankaj Prajapati
Abstract: The increasing complexity of physical models of MOSFETs and of process variations with the downscaling of CMOS technology has made the manual design of analog circuits challenging and time-consuming. Therefore, the development of efficient automatic analog circuit design techniques is very attractive. In this work, the Cuckoo Search (CS) algorithm has been tested for the optimum design of CMOS-based analog circuits with high optimization fitness. The CS algorithm has been implemented in the C language and interfaced with the Ng-spice circuit simulator. In this work, the CS algorithm is used as a search tool for transistor sizing, and Ng-spice is used as a fitness evaluator. Various analog circuits, such as a CMOS common-source amplifier, a CMOS cascode amplifier and a CMOS differential amplifier with a current-mirror load, have been optimized using this automatic optimization tool with BSIM3v3 MOSFET models in a 180 nm CMOS technology. This technique gives more accurate results and consumes less time than manual circuit design.
Keywords: Cuckoo Search algorithm; Optimization; Fitness; Simulator.
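A minimal sketch of the Cuckoo Search loop the abstract relies on is given below, in Python rather than the paper's C implementation. All parameter values are generic textbook choices, and a simple quadratic stands in for the fitness that, in the paper's flow, would come from an Ng-spice simulation of the circuit with the candidate transistor sizes:

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

def levy_step(dim, beta=1.5):
    """Lévy-distributed step lengths via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(fitness, lb, ub, n_nests=15, pa=0.25, iters=200):
    """Minimise `fitness` over the box [lb, ub] with basic Cuckoo Search:
    Lévy flights propose new nests, greedy replacement keeps improvements,
    and a fraction `pa` of nests is abandoned each iteration."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    nests = rng.uniform(lb, ub, (n_nests, dim))
    fit = np.array([fitness(x) for x in nests])
    for _ in range(iters):
        best = nests[np.argmin(fit)]
        for i in range(n_nests):
            cand = np.clip(nests[i] + 0.01 * levy_step(dim) * (nests[i] - best),
                           lb, ub)
            f = fitness(cand)
            if f < fit[i]:                      # greedy replacement
                nests[i], fit[i] = cand, f
        abandon = rng.random(n_nests) < pa      # "discovered" eggs are replaced
        nests[abandon] = rng.uniform(lb, ub, (int(abandon.sum()), dim))
        fit[abandon] = [fitness(x) for x in nests[abandon]]
    i = int(np.argmin(fit))
    return nests[i], float(fit[i])

# Stand-in fitness: a quadratic with its minimum at (3, 3).
best_x, best_f = cuckoo_search(lambda x: float(np.sum((x - 3.0) ** 2)),
                               lb=[0.0, 0.0], ub=[10.0, 10.0])
```

In the automatic sizing tool, the two optimisation variables would instead be transistor widths/lengths, and `fitness` would invoke the simulator.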
Product Service Model Construction Method for Intelligent Home Based on Positive Creative Design Thinking
by Weiwei Wang, Ting Wei
Abstract: With the booming market economy, companies need to maintain competitive advantage through positive and creative design thinking, and building a service model is an indispensable part of this approach. The design aims to improve the competitiveness of the enterprise by extracting effective user value, establishing a product service system, satisfying users' needs, and analyzing product form. In this paper, the researcher first selects the target users, draws a user journey map, analyzes the users' psychological activities, and uses creative design thinking to extract user value. Secondly, according to the positive value elements, a three-dimensional human-object ecological circle is created and, at the same time, AHP (Analytic Hierarchy Process) analysis software is used to analyze product modeling and build the product service model. Finally, the reliability of the model construction method is verified with an intelligent air-housekeeper product service system, which was shown to meet users' needs. The method can also provide a reference for other product service designs, reflecting the market competitive advantage of the product.
Keywords: Product Service Model; User Value; Positive Creative Design Thinking; Intelligent Air-housekeeper Product Service.
Local and Global Feature Fusion to Estimate Expression-Invariant Human Age
by Subhash Chand Agrawal, Anand Singh Jalal, Rajesh Kumar Tripathi
Abstract: Human beings can easily estimate the age or age group of a person from a facial image, whereas this capability is not prominent in machines. The problem becomes more complex in the presence of facial expressions and with age progression. In this paper, we introduce a novel method for age prediction using a combination of local and global features. After detecting the face in the image, we partition the facial image into 16×16 non-overlapping blocks and apply the Grey-Level Co-occurrence Matrix (GLCM) to these blocks. Then, four facial regions (eyes, forehead, left cheek and right cheek) are extracted from the facial image, and a second set of local features is obtained from them using Gabor filters. The Histogram of Oriented Gradients (HOG), a global feature, is used to extract features from the complete face image. Experimental results show that the fusion of local and global features performs better than existing approaches, with a mean absolute error (MAE) of 6.31 years on the PAL dataset.
Keywords: GLCM; Local feature; Global feature; Facial Expression.
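The block-wise GLCM step above can be illustrated with a minimal sketch that computes one Haralick-style feature (contrast) on a 16×16 block. The 8-bin grey-level quantisation and the horizontal one-pixel offset are assumptions for illustration; the paper's exact settings are not specified here:

```python
import numpy as np

def glcm(block, levels=8, dy=0, dx=1):
    """Grey-level co-occurrence matrix of one image block: counts how often
    grey level i co-occurs with level j at pixel offset (dy, dx), then
    normalises the counts into a joint probability table."""
    q = np.clip((block.astype(float) / 256.0 * levels).astype(int),
                0, levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_contrast(p):
    """Haralick contrast: expected squared grey-level difference."""
    i, j = np.indices(p.shape)
    return float(np.sum((i - j) ** 2 * p))

# Sanity check on two 16x16 blocks: a flat block has zero contrast, while
# alternating extreme grey levels maximise it.
flat = np.full((16, 16), 100, dtype=np.uint8)
checker = (np.indices((16, 16)).sum(axis=0) % 2 * 255).astype(np.uint8)
c_flat = haralick_contrast(glcm(flat))
c_checker = haralick_contrast(glcm(checker))
```

In the proposed pipeline, such statistics would be computed per block and concatenated with the Gabor and HOG features before regression.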
Improving English-Arabic statistical machine translation with morpho-syntactic and semantic word class
by Ines Turki
Abstract: In this paper, we present a new method for the extraction and integration of morpho-syntactic and semantic word classes in a Statistical Machine Translation (SMT) context to improve the quality of English-Arabic translation. It can be applied to different statistical machine translation systems and to languages with complicated morphological paradigms. In our method, we first identify morpho-syntactic word classes to build our statistical language model. Then, we apply a semantic word clustering algorithm to English. The obtained semantic word classes are projected from the English side to the Arabic side. This projection is based on the word alignment provided by the alignment step using the GIZA++ tool. Finally, we apply a new process to incorporate semantic classes in order to improve SMT quality. We show its efficacy on small and larger English-to-Arabic translation tasks. The experimental results show that introducing morpho-syntactic and semantic word classes achieves a 7.7% relative improvement in BLEU score.
Keywords: Morpho-syntactic word classes; semantic word classes; alignment; Statistical machine translation.
A QoS-aware virtual resource pricing service based on game theory in federated clouds
by Tienan Zhang
Abstract: Recently, the federated cloud platform has become a promising paradigm for providing cloud services to various kinds of users in a distributed manner. To compete for cloud users, it is critically important for each cloud provider to select an optimal price that best corresponds to its service quality while remaining attractive to cloud users. In this paper, we first formulate the pricing strategy of an individual cloud provider as a constrained optimization problem to analyse the behaviours of both cloud users and cloud providers. Then, we present a game-based model that introduces a set of virtual resource agents to help providers adjust their prices with the aim of achieving a globally optimal solution. A theoretical analysis is presented to prove the validity and effectiveness of the proposed game model, and extensive experiments are conducted on a real-world cloud platform to evaluate its performance. The experimental results show that the proposed pricing model can significantly improve resource revenue for cloud providers and provide desirable quality-of-service (QoS) for user tasks in terms of various performance metrics.
Keywords: cloud computing; pricing strategy; virtual machine; quality-of-service.
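The flavour of such a pricing game can be sketched with a best-response iteration over a discrete price grid. The linear demand model, its coefficients and the grid are purely illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def demand(i, prices, base=100.0, a=2.0, b=0.5):
    """Hypothetical linear demand for provider i: decreases in its own price
    and increases in the mean price of its rivals."""
    rivals = np.delete(prices, i)
    return max(0.0, base - a * prices[i] + b * rivals.mean())

def best_response(i, prices, cost, grid):
    """Provider i's profit-maximising grid price, rivals' prices held fixed."""
    profits = []
    for p in grid:
        trial = prices.copy()
        trial[i] = p
        profits.append((p - cost) * demand(i, trial))
    return grid[int(np.argmax(profits))]

def price_game(n=3, cost=10.0, rounds=50):
    """Iterate best responses; an unchanged price vector is a Nash
    equilibrium of this discretised toy game."""
    grid = np.linspace(10.0, 60.0, 501)   # candidate prices, step 0.1
    prices = np.full(n, 30.0)
    for _ in range(rounds):
        old = prices.copy()
        for i in range(n):
            prices[i] = best_response(i, prices, cost, grid)
        if np.allclose(prices, old):
            break
    return prices

eq = price_game()
```

In the paper's setting, the virtual resource agents would play the role of the iteration above, steering providers' prices toward a stable profile.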
On Collaborative Filtering Model Optimized with Multi-Item Attribute Information Space for Enhanced Recommendation Accuracy
by Folasade Isinkaye, Yetunde Folajimi, Adesesan Adeyemo
Abstract: A recommender system is a type of information filtering system designed to curtail the difficulties of information overload by automatically suggesting relevant items to users, tailored to their preferences. The Bayesian Personalized Ranking Sparse Linear Method (BPRSLIM) is a variant of the item-based collaborative filtering techniques used in information filtering systems. Although this algorithm has shown outstanding performance in a range of applications, it suffers from a serious limitation: it cannot provide accurate and reliable recommendations when the user-item matrix contains insufficient rating information, which reduces its accuracy. In this paper, we propose a framework that integrates multi-item attribute information, besides the classic information on users and items, into the BPRSLIM model in order to ease its sparsity problem and hence improve its accuracy. The enhanced model is expected to outperform the original BPRSLIM model.
Keywords: BPRSLIM; Sparsity Problem; Recommender System; Collaborative Filtering; Item Attribute Information; Optimization.
Image Matching Technique Based on SURF Descriptors for Offline Handwritten Arabic Word Segmentation
by Maamar Kef, Leila Chergui
Abstract: Image matching is an important task with many applications in computer vision and robotics. Recently, several scale-invariant features have been proposed in the literature; one of them is the local descriptor known as Speeded-Up Robust Features (SURF). These features are scale- and rotation-invariant descriptors and have the advantage of being computed quickly and efficiently. In this paper we present a new segmentation system for handwritten Arabic words based on SURF descriptors. Firstly, a set of Arabic character images is used to build 106 character patterns, which are then used by a segmentation process based on an image matching technique. Tests were performed on our new database of handwritten Arabic words, and a high correct segmentation rate was reported.
Keywords: Image Matching; SURF Descriptors; Arabic Handwriting Recognition; Keypoints; Segmentation.
Special Issue on: IRICT 2017 Reliable and Intelligent Information and Communication Technology
Interest emotion recognition approach using self-organizing map and motion estimation
by Kenza Belhouchette, Mohamed Berkane, Hacene Belhadef
Abstract: Recognizing human facial expressions and emotions by computer is an interesting and challenging problem, useful in various fields such as e-learning. Although several approaches have been proposed to recognize emotions based on facial expressions, the recognition rate, the amount of resources used and the computation time remain open to improvement. Our work presents a new approach for recognizing the basic emotions (joy, sadness, anger, disgust, surprise and fear) in image sequences. We introduce the interest emotion and create its corresponding action units (AUs) based on psychological foundations. Our approach is mainly characterized by minimizing the data used, thereby optimizing the computing time and improving the recognition rate. The proposed approach is divided into three steps. The first step is face detection using the method developed by Viola and Jones. The second step concerns the extraction of facial features. At this level, we exploit the Facial Action Coding System proposed by Paul Ekman, which is based on AUs. To detect AUs, we extract strategic facial points (inner, outer and centre points of the eyebrow; centre points of the lower and upper eyelids; right, left, top and bottom corners of the mouth; and left and right external nose wings) using an active appearance model and a block-matching approach. In the last step, we classify the results using the Kohonen self-organizing map.
Keywords: Emotion; Interest; neural network; Kohonen; action units; facial expression; block matching.
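The block-matching idea used above for tracking facial points between frames can be sketched in its generic exhaustive-search form (a standard formulation, not necessarily the authors' exact variant; block size and search range are illustrative):

```python
import numpy as np

def block_match(prev, curr, y, x, bs=8, search=4):
    """Exhaustive block matching: find the displacement (dy, dx) within a
    +/- `search` window that minimises the sum of absolute differences (SAD)
    between the block at (y, x) in `prev` and the shifted block in `curr`."""
    ref = prev[y:y + bs, x:x + bs].astype(int)
    h, w = curr.shape
    best_sad, best_dy, best_dx = None, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + bs > h or xx + bs > w:
                continue   # candidate block falls outside the frame
            sad = int(np.abs(curr[yy:yy + bs, xx:xx + bs].astype(int) - ref).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx

# Toy frame pair: a bright 8x8 patch moves 1 px down and 2 px right.
prev = np.zeros((32, 32), dtype=np.uint8)
prev[8:16, 8:16] = 200
curr = np.zeros((32, 32), dtype=np.uint8)
curr[9:17, 10:18] = 200
motion = block_match(prev, curr, 8, 8)
```

Applied to the block around each strategic facial point, such displacements reveal the motion of eyebrows, eyelids and mouth corners that define the AUs.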
Arabic sign language recognition using vision and hand tracking features with HMM
by Ala Addin Sidig, Hamzah Luqman, Sabri Mahmoud
Abstract: Sign language employs signs made with the hands and facial expressions to convey meaning. Sign language recognition facilitates communication between the hearing community and hearing-impaired people. This work proposes a recognition system for Arabic sign language using four types of features, namely the Modified Fourier Transform, Local Binary Patterns, Histogram of Oriented Gradients, and a combination of Histogram of Oriented Gradients and Histogram of Optical Flow. These features are evaluated using a Hidden Markov Model on two databases. The best performance is achieved with the Modified Fourier Transform and Histogram of Oriented Gradients features, with 99.11% and 99.33% accuracy, respectively. In addition, two algorithms are proposed: one for segmenting sign video streams captured by a Microsoft Kinect V2 into signs, and one for hand detection in video streams. The obtained results show that our algorithms are efficient in segmenting sign video streams and detecting hands in them.
Keywords: Arabic Sign language; Sign language recognition; video segmentation; Histogram of Oriented Gradients; Hands detection; Hidden Markov Model.
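HMM-based sign recognition of the kind evaluated above amounts to scoring each observed feature sequence against one HMM per sign and picking the likeliest model. The sketch below uses the standard scaled forward algorithm; the two-state left-to-right models, the 3-symbol codebook and all probability values are toy assumptions, not the paper's trained parameters:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm (pi: initial state
    probabilities, A: transition matrix, B: emission matrix)."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_lik += np.log(s)
        alpha = alpha / s
    return float(log_lik)

# Two toy 2-state, left-to-right sign models over a 3-symbol codebook of
# quantised frame features; recognition picks the likelier model.
pi = np.array([1.0, 0.0])
A = np.array([[0.7, 0.3],
              [0.0, 1.0]])
B1 = np.array([[0.9, 0.05, 0.05],     # model for "sign 1"
               [0.05, 0.9, 0.05]])
B2 = np.array([[0.05, 0.05, 0.9],     # model for "sign 2"
               [0.9, 0.05, 0.05]])
obs = [0, 0, 1, 1]                     # observed feature sequence
pred = 1 if forward_loglik(obs, pi, A, B1) > forward_loglik(obs, pi, A, B2) else 2
```

In the full system, `obs` would be the per-frame MFT, LBP, HOG or HOG+HOF features of one segmented sign, quantised to a codebook.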
Quality of Service (QoS) Task Scheduling Algorithm for the Time-Cost Trade-off Scheduling Problem in a Cloud Computing Environment
by DANLAMI GABI, Abdul Samad Ismail, Anazida Zainal, Zailmiyah Zakaria
Abstract: As the cloud computing environment evolves, managing the trade-off between time and cost when executing large-scale tasks, so as to guarantee customers minimal running time and computation cost, is not always feasible. Many heuristics and metaheuristics have been proposed to resolve this problem. The metaheuristics are considered promising, since they can schedule large-scale tasks, optimise the best-known trade-offs among conflicting objectives, and return a solution in a single run. However, they have certain limitations that need to be resolved, including local trapping, poor convergence and an imbalance between global and local search. In this paper, we first present a multi-objective task scheduling model, and then propose a dynamic Multi-Objective Orthogonal Taguchi-Based Cat Swarm Optimisation (dMOOTC) algorithm to solve it. In the proposed algorithm, the Taguchi orthogonal method is incorporated into the local search of conventional Cat Swarm Optimisation (CSO) to overcome local trapping and ensure diversity, and a Pareto-optimisation strategy is incorporated to balance the solutions of the global and local searches. The efficiency of the proposed algorithm is studied by simulation with the CloudSim tool. Thirty independent simulation runs were conducted, and the results are evaluated on the following metrics: execution time, execution cost and Performance Improvement Rate percentage (PIR%). The simulation results show that the proposed dMOOTC algorithm selects the best-known optimal trade-off values, minimising execution time and execution cost better than the single-objective conventional Cat Swarm Optimisation (CSO), Multi-Objective Particle Swarm Optimisation (MOPSO), Enhanced Parallel CSO (EPCSO) and Orthogonal Taguchi-Based Cat Swarm Optimisation (OTB-CSO) algorithms.
Keywords: Multi-Objective; Quality of Service; Task Scheduling; Cat Swarm Optimisation; Pareto-Optimisation.
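The Pareto-optimisation strategy mentioned above rests on the standard dominance test between (time, cost) pairs, sketched here on hypothetical candidate schedules (the sample values are illustrative, not from the paper's experiments):

```python
def dominates(a, b):
    """True if schedule a Pareto-dominates b on (time, cost): no worse in
    either objective and strictly better in at least one (both minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Hypothetical (execution time, execution cost) pairs for candidate schedules.
solutions = [(10.0, 3.0), (8.0, 6.0), (12.0, 7.0), (9.0, 4.0)]

# The Pareto front keeps every schedule not dominated by another one.
front = [s for s in solutions
         if not any(dominates(o, s) for o in solutions if o != s)]
```

Here (12.0, 7.0) is dominated (e.g. by (10.0, 3.0)) and is discarded, while the remaining three schedules form the best-known time-cost trade-off set.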
Data stream management system for video on demand hybrid storage server
by Ola Al-Wesabi, Nibras Abdullah
Abstract: The storage device is one of the main components of a video on demand (VOD) server. The VOD storage system is responsible for storing and streaming large videos; hence, the VOD server requires a large storage capacity and rapid video retrieval from this storage to stream videos to users quickly. Hybrid storage systems, which combine hard disk drive (HDD) and solid-state drive (SSD) components in the server, have become popular because of these requirements. HDDs are economical and provide a high storage capacity for numerous videos, while the SSD can act as a buffer for fast retrieval and streaming of videos to users. However, existing combinations of the two storage modes are relatively weak at optimising fast access and at supporting a high number of simultaneous streams. This paper presents a proposed VOD storage server system, namely an enhanced hybrid storage system (EHSS) based VOD server, to improve the performance of the VOD server. The design of the EHSS and its streaming management scheme yield high performance and satisfy the performance requirements of a VOD server in terms of I/O throughput and access latency. The experimental results show that the proposed VOD server based on the EHSS with the proposed data stream controller (DSC) scheme provides better performance than the FADM-based VOD server, enhancing the average response times under various scales of intensive workload by 69.89%.
Keywords: Data stream controller (DSC); Hard disk drives (HDDs); Cache hit ratio; Hybrid storage; I/O response time; Solid-state-drive (SSD); Throughput; Video on demand (VOD).