International Journal of Computational Vision and Robotics (27 papers in press)
Obstacle Detection System Based on Colour Segmentation Using Monocular Vision for an Unmanned Ground Vehicle
by Auday Al-Mayyahi, Weiji Wang, Phil Birch, Alaa Hussien
Abstract: An obstacle detection system based on a vision approach is introduced for an indoor unmanned ground vehicle (UGV). Coloured solid obstacles were placed randomly in an indoor field. These obstacles are then captured in images using a monocular camera to develop an obstacle detection algorithm. The obstacles are detected by analysing and processing the captured images using computer vision and image processing techniques. A camera calibration is conducted to determine the relative position and orientation of the UGV with respect to the obstacles. The calibration is used to find the intrinsic and extrinsic matrices, which are then combined to produce the perspective projection matrix. Based on the calibration process, the relative position, the offset distance and the steering angle of the UGV with respect to the obstacles were derived. The field geometry was used to obtain a mapped environment in world coordinates. In this paper, the proposed algorithm identifies the presence of obstacles in the field using bounding boxes around each detected obstacle, which allows the obstacle locations to be determined in a pixel coordinate frame. Depth perception is then obtained using the pixel coordinates and the camera projection matrix. Real-time experiments in an indoor environment are carried out using a four-wheeled UGV to demonstrate the validity and efficiency of the proposed algorithm. The outcome shows that the actual distances between the camera and the obstacles can be obtained using this technique.
Keywords: Unmanned Ground Vehicle (UGV); Obstacle Detection; Colour Segmentation; Depth Perception; Camera Calibration.
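The calibration pipeline the abstract describes (intrinsic and extrinsic matrices combined into a perspective projection matrix, then used for depth-related geometry) can be sketched minimally as follows; all numbers (focal lengths, principal point, camera height, obstacle position) are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative intrinsic matrix K (focal lengths fx, fy; principal point cx, cy)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Illustrative extrinsic matrix [R | t]: identity rotation, 0.5 m translation along Z
Rt = np.hstack([np.eye(3), np.array([[0.0], [0.0], [0.5]])])

# Perspective projection matrix P = K [R | t]
P = K @ Rt

# Project a world point (an obstacle 2 m in front of the camera) to pixel coordinates
Xw = np.array([0.0, 0.0, 2.0, 1.0])      # homogeneous world point
uvw = P @ Xw
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]  # normalise by the projective depth
```

Because the toy point lies on the optical axis, it projects exactly onto the assumed principal point (320, 240), which is a quick sanity check on the matrices.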
Applications of Hyperspectral and Optical Scattering Imaging Techniques in the Detection of Food Microorganisms
by Xu Jing, Ma Long, Wu Jie, Xu Xiaomeng, Sun Ye, Pan Leiqing, Tu Kang
Abstract: Food is easily contaminated by microorganisms during production, processing, storage and transportation. The mass propagation of microorganisms causes food deterioration, leading to food-borne pollution and food poisoning, which are a serious threat to human health. However, traditional methods for microorganism detection are complicated, poor in timeliness or low in sensitivity, and can hardly meet the increasing requirements for rapid and accurate detection, becoming a bottleneck in food quality and safety inspection. By collecting the relevant information, processing it algorithmically and finally building the relevant models, modern optical imaging techniques can achieve rapid detection of food quality information. This article reviews in detail the latest developments of hyperspectral imaging and optical scattering techniques in the non-destructive detection of food microbial contamination, and discusses the advantages and deficiencies of the various techniques.
Keywords: hyperspectral imaging; optical scattering; food microorganism; detection.
Weighted Feature Voting Technique for Content Based Image Retrieval
by Walaa Elhady, Abdulwahab Alsammak, Shady Elmashad
Abstract: A content-based image retrieval process is used to retrieve the images most similar to a query from a large database of images on the basis of extracted features. Matching measures find similar images by measuring how close the query features are to the features of the other images in the database. In this paper, a multi-feature system is proposed which incorporates more than one feature in the retrieval process. The weights of these features are calculated from the precision of each feature, to reflect its importance in the retrieval process. These weights are used in a weighted feature voting technique to incorporate the role of each feature in extracting the relevant images. Different distance measures are also used to obtain the highest precision for each feature. The results of applying the multi-feature, multi-distance-measure technique outperform other existing methods, with an accuracy of 86.5% for the Wang database, 86.5% for the UW database and 85% for
Keywords: content based image retrieval; computational vision; feature extraction; hierarchical annular histogram; weighted average; matching measures; weighted feature voting.
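The weighted feature voting idea can be sketched as follows; the per-feature precisions, distances and vote counts are invented for illustration and are not the paper's actual scheme:

```python
import numpy as np

# Hypothetical per-feature precisions (e.g. colour, texture, shape features)
precisions = np.array([0.80, 0.60, 0.40])
weights = precisions / precisions.sum()   # weight reflects each feature's reliability

# Hypothetical per-feature distances from the query to 4 database images
# (rows: features, columns: images); smaller distance means more similar
dist = np.array([[0.2, 0.9, 0.5, 0.1],
                 [0.3, 0.8, 0.4, 0.2],
                 [0.7, 0.1, 0.6, 0.9]])

# Each feature votes for its k nearest images; votes are scaled by the weight
k = 2
votes = np.zeros(dist.shape[1])
for f in range(dist.shape[0]):
    nearest = np.argsort(dist[f])[:k]
    votes[nearest] += weights[f]

ranking = np.argsort(-votes)              # images ordered by weighted vote
```

Here the two high-precision features agree on images 0 and 3, so those images accumulate the largest weighted votes and lead the ranking.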
Fusion Strategy based multimodal human-computer interaction
by Shu Yang, Ye-peng Guan
Abstract: Human-computer interaction (HCI) has great potential for applications in many fields. The diversity of interaction habits and low recognition rates are the main factors limiting its development. In this paper, a framework for multi-modality based human-computer interaction is constructed. The interactive target can be determined by different modalities, including gaze, hand pointing and speech, in a non-contact and non-wearable way. The corresponding response is fed back to users promptly in audio-visual form for an immersive experience. Besides, a decision-matrix-based fusion strategy is proposed to improve the system's accuracy and adapt to different interaction habits, using ordinary hardware in a crowded scene without any hypothesis that the interactive user and his corresponding actions are known in advance. Experimental comparisons have highlighted that the proposed method has better robustness and real-time performance in actual scenes.
Keywords: human-computer interaction (HCI); multi-modality; audio-visual feedback; interaction habits; fusion strategy.
Channel Estimation for High Speed Unmanned Aerial Vehicle (UAV) with STBC in MIMO Radio Links
by Amirhossein Fereidountabar, Luca Di Nunzio, Rocco Fazzolari, GianCarlo Cardarilli
Abstract: This paper proposes a channel estimation method based on the Kalman filter and adaptive estimation with space time block codes (STBC) and multiple antenna systems (multiple input single output, MISO, and multiple input multiple output, MIMO) for high-speed UAVs. Simulations have been carried out in time-varying Rayleigh-faded channels for BPSK and QPSK. The proposed technique appears to achieve an error performance close to the known-channel-information case under severe fading. Applying Alamouti STBCs with multiple-antenna diversity improves performance in faded wireless channels; the Alamouti transmit diversity scheme, however, relies on the availability of accurate channel state information (CSI) for unmanned aerial vehicles (UAVs). The simulation results show that our proposed method achieves accurate estimation of the SNR and Doppler shift over a wide range of velocities and SNRs.
Keywords: Doppler Shift; Radio Propagation; SNR; MIMO; STBC; Adaptive Estimation.
New Spatiotemporal Method for Assessing Video Quality
by David Bong, Woei-Tan Loh
Abstract: The existence of temporal effects and temporal distortions in a video differentiates how it is assessed compared with an image. Temporal effects and distortions can enhance or depress the visibility of spatial effects in a video; thus, the temporal part of a video plays a significant role in determining its quality. In this study, a spatiotemporal video quality assessment (VQA) method is proposed in view of the importance of temporal effects and distortions in assessing video quality. Instead of measuring quality frame by frame, the quality of several averaged frames is measured. The proposed spatiotemporal VQA method is significantly improved compared with image quality assessment (IQA) methods applied on a frame basis. When combined with IQA methods, the proposed spatiotemporal VQA method has performance comparable to state-of-the-art VQA methods, with lower computational complexity than current VQA methods.
Keywords: video; frames; video quality; spatial effects; temporal effects; temporal distortions; spatiotemporal; average; video quality assessment; image quality assessment; computational complexity.
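The core idea of scoring averaged groups of frames rather than individual frames can be sketched roughly as follows; PSNR is used here as a stand-in IQA measure on a synthetic toy video, so the group size, noise level and frame content are assumptions, not the authors' actual method or data:

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Frame-level PSNR, used here as the stand-in IQA measure."""
    mse = np.mean((ref.astype(float) - dist.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def spatiotemporal_quality(ref_frames, dist_frames, group=3):
    """Average every `group` consecutive frames before scoring,
    instead of scoring each frame independently."""
    scores = []
    for i in range(0, len(ref_frames) - group + 1, group):
        ref_avg = np.mean(ref_frames[i:i + group], axis=0)
        dist_avg = np.mean(dist_frames[i:i + group], axis=0)
        scores.append(psnr(ref_avg, dist_avg))
    return np.mean(scores)

# Toy video: 6 identical reference frames, distorted copy with Gaussian noise
rng = np.random.default_rng(0)
ref = [np.full((32, 32), 128.0) for _ in range(6)]
dist = [f + rng.normal(0, 5, f.shape) for f in ref]
score = spatiotemporal_quality(ref, dist, group=3)
```

Averaging frames suppresses temporally uncorrelated noise, so the averaged-group score is higher than a per-frame PSNR would be on the same noisy frames, illustrating why the temporal pooling choice matters.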
A new method for three-dimensional magnetic resonance images denoising
by Feriel Romdhane, Faouzi Benzarti, Hamid Amiri
Abstract: Removing noise from magnetic resonance images (MRI) is a crucial issue in medical image processing. These images are corrupted by Rician noise, a non-additive noise that reduces image contrast and causes random fluctuations. This paper proposes a new method for 3D MRI denoising based on a new combination of the non-local means filter and the diffusion tensor, with an adaptive MAD estimator for the Rician noise. The performance of the proposed algorithm was evaluated with respect to different quantitative measures and compared with other denoising methods; the results illustrate that the proposed algorithm efficiently removes noise while preserving more details.
Keywords: 3D MRI; 3D denoising method; non-local mean filter; diffusion tensor; Rician noise.
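The MAD-style noise estimation mentioned above can be illustrated with a minimal sketch. This version applies the robust median-absolute-deviation estimate to finite differences of a synthetic Gaussian-noise volume, which is a simplification: it is not the paper's actual adaptive estimator for Rician noise, and the volume size and noise level are invented:

```python
import numpy as np

def mad_sigma(volume):
    """Robust noise estimate: median absolute deviation of finest-scale
    differences, scaled by 1/0.6745 for consistency with a Gaussian sigma."""
    detail = np.diff(volume, axis=-1).ravel() / np.sqrt(2)  # suppress signal, keep noise
    mad = np.median(np.abs(detail - np.median(detail)))
    return mad / 0.6745

# Toy 3D volume: constant signal plus Gaussian noise of known sigma = 4
rng = np.random.default_rng(1)
vol = 100.0 + rng.normal(0, 4.0, (16, 16, 16))
sigma_hat = mad_sigma(vol)
```

The median-based estimate is insensitive to the (here constant) underlying signal, which is why MAD estimators are popular for setting filter strengths in denoising pipelines.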
A novel incremental topological mapping using global visual features
by Nabila Zrira, El Houssine Bouyakhf
Abstract: Mapping is fundamental to the navigation task of autonomous mobile robots. In appearance-based mapping, the process of detecting visual loop closures determines whether the current observation comes from a previously visited location or a new one. The purpose of this paper is to present a new method for exploring indoor environments with an autonomous mobile robot, as well as for building topological maps based on global visual attributes. This method takes advantage of the small size of GIST descriptors and the ease of their calculation. We also make use of omnidirectional images to build a single global visual descriptor representing an entire room. Furthermore, in order to handle the problem of visual loop closing, we employ a formula that correctly assigns each global descriptor to its location.
Keywords: topological mapping; GIST descriptor; visual loop closing; omnidirectional images; navigation; mobile robots; indoor environments.
Car manufacturer and model recognition based on scale invariant feature transform
by Yongbin Gao, Hyo Jong Lee
Abstract: Vehicle analysis involves licence plate recognition, vehicle type recognition, and car manufacturer and model recognition. Car manufacturer and model recognition plays an important role in providing supplementary information to licence plate recognition for the unique identification of a car. In this paper, we propose a framework to recognise a car's manufacturer and model based on the scale invariant feature transform (SIFT). We first detect a moving car using frame differences; the resulting binary image is used to detect the frontal view of the car with a symmetry filter. The detected frontal view is then used to identify the car with the SIFT algorithm. Experimental results show that our proposed framework achieves favourable recognition accuracy.
Keywords: moving car detection; car model recognition; scale invariant feature transform; SIFT.
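The frame-differencing step of the pipeline can be sketched as follows; the frame sizes, the threshold and the moving "car" block are toy assumptions, and the subsequent symmetry filter and SIFT matching are not shown:

```python
import numpy as np

def moving_mask(prev, curr, thresh=25):
    """Binary motion mask from the absolute frame difference."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return (diff > thresh).astype(np.uint8)

# Toy frames: a bright 'car' block shifts 3 pixels to the right between frames
prev = np.zeros((40, 60), dtype=np.uint8)
curr = np.zeros((40, 60), dtype=np.uint8)
prev[10:20, 10:25] = 200
curr[10:20, 13:28] = 200

mask = moving_mask(prev, curr)
cols = np.where(mask.any(axis=0))[0]     # columns containing motion
```

Only the leading and trailing edges of the moving block survive the differencing (columns 10-12 and 25-27 in this toy case), which is exactly the binary evidence a symmetry filter could then scan for a frontal view.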
Image compression-based multiple description transform coding using NSCT and OMP approximation
by Amina Naimi, Kamel Belloulata
Abstract: In this paper, we present a novel multiple description transform image coding architecture which uses an attractive transform called the non-subsampled contourlet transform (NSCT). It combines the NSCT with the orthogonal matching pursuit (OMP) algorithm to give a sparse representation of images, aiming to solve the compression problem caused by the redundancy of the NSCT; OMP provides a way to remove these redundancies. We evaluate the performance of our image coder in the case of four descriptions dispatched over different channels. The experiments show that the proposed method is efficient; the potential of using the NSCT rather than the DWT in multiple description image coding is evaluated by PSNR in each case of packet loss. Every description can reconstruct the image with acceptable fidelity, and the reconstruction is much better if all descriptions are available.
Keywords: multiple description coding; MDC; non-subsampled contourlet transform; NSCT; discrete wavelet transform; DWT; orthogonal matching pursuit; OMP.
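OMP itself, the greedy sparse-approximation routine the coder relies on, can be sketched in a few lines. The tiny hand-built dictionary below is an illustration only (the paper's dictionary is the NSCT):

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedily pick the dictionary atom most
    correlated with the residual, then re-fit all chosen atoms by least squares."""
    residual = y.astype(float).copy()
    support = []
    coef = np.array([])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Small redundant dictionary with unit-norm columns (4 atoms in R^3)
D = np.array([[1.0, 0.0, 0.0, 0.6],
              [0.0, 1.0, 0.0, 0.8],
              [0.0, 0.0, 1.0, 0.0]])
y = np.array([2.0, 0.0, 1.0])            # = 2*atom0 + 1*atom2
x_hat = omp(D, y, n_nonzero=2)           # recovers the 2-sparse code exactly
```

The least-squares re-fit over the whole selected support at every step is what distinguishes OMP from plain matching pursuit and lets it cancel redundancy between chosen atoms.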
Original strategy for avoiding over-smoothing in SFS problem resolution
by Rocco Furferi, Lapo Governi, Yary Volpe, Luca Puggelli, Monica Carfagni
Abstract: With the aim of retrieving 3D surfaces from single shaded images, i.e. for solving the widely known shape from shading problem, an important class of methods is based on minimisation techniques, where the expected surface is assumed to coincide with the one that minimises a properly constructed functional consisting of several contributions. Although several different contributions can be explored to define such a functional, the so-called 'smoothness constraint' is a cornerstone, since it is the most relevant contribution for guiding the convergence of the minimisation process towards an accurate solution. Unfortunately, when the input shaded image contains areas where the actual brightness changes rapidly, this constraint introduces an undesired over-smoothing effect in the retrieved surface. The present work proposes an original strategy for avoiding this typical over-smoothing effect in the image regions where it is particularly undesired, for instance zones where surface details are to be preserved in the reconstruction. The proposed strategy is tested against a set of case studies and compared with other traditional SFS-based methods to prove its effectiveness.
Keywords: shape from shading; SFS; variational approach; 3D model; smoothing; minimisation; smoothness constraint.
Efficient holistic feature basis learning for pedestrian detection
by Kyaw Kyaw Htike
Abstract: Pedestrian detection is an important research area in computer vision and artificial intelligence owing to its potential applications in pedestrian safety, elderly monitoring and care, surveillance, image retrieval and video compression. Many pedestrian detection systems have been proposed, and state-of-the-art research has pointed out that feature extraction is one of the most significant factors in improving the performance of a pedestrian detector. Much work has therefore focused on novel feature extraction schemes to improve pedestrian detection. Moreover, most proposals are end-to-end pedestrian detection systems, leaving the contribution of the classifier in the detection pipeline unclear. In this paper, we fill some of this gap by focusing on the classification process, and propose feature basis learning for the holistic high-dimensional feature vectors that are common in pedestrian detection. We show experimentally that our proposed feature basis learning algorithms can obtain superior performance even on high-dimensional datasets.
Keywords: pedestrian detection; feature extraction; classifiers; computer vision; feature learning.
Road traffic sign recognition algorithm based on computer vision
by Huiming Dai, Xin Zhang, Dacheng Yang
Abstract: As road traffic sign recognition is a crucial component of automatic driver assistance systems, it is also a key problem in computer vision. In this paper, we therefore study the problem of road traffic sign recognition using computer vision technology. The main innovation of this paper is an improved convolutional neural network, which we use to tackle the road traffic sign recognition problem. A convolutional neural network can learn features from a training data set and contains alternating layers of convolution and pooling. In particular, RGB traffic images are transformed to grey-scale images, which are then input to the improved convolutional neural network. Furthermore, fixed layers are used to discover regions of interest, and learnable layers are used to extract features. The outputs of the two proposed learnable layers are input to the classifier separately, and the parameters of the learnable layers and the classifier are trained at the same time. Finally, the GTSDB data set is chosen for performance evaluation, with 600 images used for training and 300 images for testing. Experimental results demonstrate that the improved CNN-based traffic sign recogniser performs better than the traditional CNN.
Keywords: road traffic sign; object recognition; computer vision; convolutional neural network.
Special Issue on: Recent Advances and Emerging Topics in Computer Vision Methods and Image Analytics
Exploring Necessity and Utility of Lightweight Android Chatting Application
by Ekbal Rashid
Abstract: This paper elaborates on the methods and results of a field study which led to the understanding that a lightweight Android chatting application would be extremely useful for people using social networking apps. The paper details how the app was developed, its salient features, its use cases and, finally, what people thought about it once they began using it. It also discusses the usability testing and the resulting iteration. The paper attempts to highlight how such an app can make better use of the resources of an Android device.
Keywords: Android; application; chatting; lightweight; mobile; communication.
Effective Image Retrieval Based on Hybrid Features with Weighted Similarity Measure and Query Image Classification
by Vibhav Prakash Singh, Rajeev Srivastava
Abstract: Content-based image retrieval (CBIR) is a wide research area in computer vision, in which an unknown query image yields similar images according to the query content. An effective CBIR system needs efficient extraction of low-level features, and for this many different methods have recently been proposed using colour, texture and shape features. Most of these methods use the histogram, or some variation of it, to represent colour and other descriptors, so these features may require a significant amount of space and more similarity calculation. Also, CBIR performance is not so encouraging owing to the gap between low-level visual features and high-level understanding. Here, an efficient CBIR system is proposed, based on the fusion of chromaticity colour moments and colour co-occurrence based small-dimension features using an inverse-variance weighted similarity measure. In this measure, the weight of a feature with high variance is low, while the weight of a feature with low variance is high; this property of the varying weights effectively retrieves relevant images. In addition, this paper also proposes a supervised query image classification and retrieval model which filters out irrelevant class images using a multiclass support vector machine (SVM) classifier. Basically, this model categorises a query image based on its visual content, and this successful categorisation significantly enhances the performance and the search time of the retrieval system. Descriptive comparative analyses confirm the effectiveness of this work. We obtained 83.83% and 76.9% average precision for 12 and 20 retrieved images, respectively, using the weighted similarity measure, together with 85.6% average precision and 84.4% recall for the query image classification framework.
Keywords: classification; chromaticity moment; colour co-occurrence; feature fusion; content based image retrieval; variance.
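The inverse-variance weighting described above can be sketched directly; the feature matrix and query below are invented toy values, not the paper's descriptors:

```python
import numpy as np

# Hypothetical feature matrix for a small database (rows: images, cols: features)
feats = np.array([[0.2, 5.0, 1.0],
                  [0.3, 9.0, 1.1],
                  [0.8, 2.0, 0.9],
                  [0.9, 7.0, 1.0]])
query = np.array([0.25, 6.0, 1.0])

# Inverse-variance weights: a feature with high variance gets a low weight
var = feats.var(axis=0)
w = 1.0 / var

# Weighted distance of the query to every database image
d = np.sqrt(((feats - query) ** 2 * w).sum(axis=1))
ranking = np.argsort(d)                  # most similar image first
```

Note how the high-variance second feature contributes little to the distance, so images that match the query on the stable features rank first.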
Facial Expression Recognition Based on Eigenspaces and Principal Component Analysis
by Ashim Saha
Abstract: Facial expression detection, or emotion recognition, is one of the rising fields of research on intelligent systems. Emotion plays a significant role in non-verbal communication. Efficient face and facial feature detection algorithms are required to detect emotion at a particular moment. In this work the authors implemented a system that recognises the user's facial expressions from input images using eigenspaces and principal component analysis (PCA). Eigenspaces are face images projected onto a feature space that encodes the variation among known face images. The authors used PCA for dimensionality reduction in order to obtain a reduced representation of the face images. The implementation was applied to three facial expression databases, the extended Cohn-Kanade facial expression database, the Japanese Female Facial Expression database and a self-made database, in order to determine the effectiveness of the proposed method.
Keywords: Facial expression; Eigenspaces; Principal component analysis; Emotion detection; Image Processing.
Content Based Image Retrieval with Pachinko Allocation Model and a Combination of Color, Texture and Text Features
by Ahmed Boulemden, Yamina Tlili, Hamid Jalab
Abstract: Probabilistic topic models are a set of algorithms which aim to learn and discover the hidden concepts responsible for generating the words of documents in large archives. These models have also been used for image processing tasks such as object recognition, image annotation and image retrieval. We present in this paper a content-based image retrieval (CBIR) system based on the pachinko allocation model (PAM), employing a combination of colour, texture and textual features. PAM has shown greater efficiency than other topic models through the way it captures correlations not only between the words in documents but also between the different topics (concepts) responsible for their generation. Despite this advantage, no work has explored its utility for content-based image retrieval with single and multimodal image features. We aim to evaluate the use of PAM for CBIR by implementing a system based on it, and we are also interested in evaluating PAM with single and multimodal image features (i.e. single and combined features). In this context, PAM was applied with two different modalities of features, image global features (colour and texture) and textual indexes (from texts associated with the images), separately and combined. Mean average precision is evaluated. Using PAM with the combination of features slightly improved the results over using a single modality, which opens further perspectives for enhancing the results. Images from the ImageCLEF IAPR 2012 dataset were used for the experiments.
Keywords: Pachinko allocation; image retrieval; colour moments; texture features; global feature extraction; textual modality; features combination.
An automatic natural feature selection system for indoor tracking - Application to Alzheimer patient support
by Mohamed Badeche, Frederic Bousefsaf, Abdelhak Moussaoui, Mohamed Benmohammed, Alain Pruski
Abstract: In this paper, we propose an automatic selection and natural feature tracking method that uses a monocular camera for path capturing and guides the user by showing him the path to be followed. The application targets Alzheimer patients, helping them with their indoor movements. By offering automatic selection of features, neither user intervention nor prior knowledge of the working environment is required to ensure the correct working of the system. The general principle of the proposed method is to record the path to be followed and then recognise it in real time using purely visual methods, with only a single camera as the acquisition sensor. The devised system could be implemented on augmented-reality glasses with a single built-in camera. The experimental results have shown that the proposed method is very promising and that the application can follow the required path accurately in real time, with satisfying robustness in a fully contrasted and static environment.
Keywords: natural features; matching; local descriptor; optical flow; Alzheimer disease; augmented reality glasses.
Combining Zernike moment and Complex wavelet transform for Human object Classification
by Manish Khare, Om Prakash, Rajneesh Kumar Srivastava
Abstract: Human object classification is an important problem in smart video surveillance, where we classify human objects in real scenes. Even though different features have been used for the human object classification task, most existing methods adopt a single feature to classify objects. In this paper, we propose a new method for human object classification, which classifies the objects present in a scene into one of two classes: human and non-human. The proposed method uses a combination of the Daubechies complex wavelet transform and the Zernike moment as object features. The motivation for combining these two features is that the shift-invariance and better edge representation of the Daubechies complex wavelet transform make it more suitable for locating objects than the real-valued wavelet transform, whereas the rotation invariance of the Zernike moment helps correct object identification. Combining these two features therefore brings significant synthesised benefits over each single feature and other widely used features. The proposed method matches Zernike moments of the Daubechies complex wavelet coefficients of objects, using AdaBoost as the classifier. The proposed method has been tested on a standard dataset, the INRIA person dataset. Quantitative experimental results show that the proposed method is better than other state-of-the-art methods and gives better performance for human object classification.
Keywords: Human object classification; feature selection; Daubechies complex wavelet transform; Zernike moment; AdaBoost classifier; video surveillance.
Stairways Detection Based on Approach Evaluation and Vertical Vanishing Point
by Md. Khaliluzzaman, Kaushik Deb
Abstract: Detecting the stair region and estimating the distance from a camera to the stairs in an image are fundamental steps in implementing autonomous stair-climbing navigation, as well as alarm systems for vision-impaired people. In this paper, a framework is proposed for detecting the stair region in a stair image using some natural properties of stairs. One unique property is that the beginning and ending horizontal edge points of every stair step intersect with two vertical edge points, creating three connected points; these vertical edges correspond to the step height and step width edges. Another property is that the steps of a stair appear in gradually increasing order from top to bottom in a parallel arrangement. Initially, a directional Gabor filter and the Canny edge detector are applied to the stair image to eliminate the influence of illumination and to detect stair edges. Non-candidate stair edges are eliminated by a filtering operation. The longest horizontal edges are then extracted using a proposed edge-linking method on the edge image. After that, a search method finds the stair step height and width edge points at the beginning and ending points of the longest horizontal edges; this detects the three connected points that validate the stair edge segments. In the next step, these validated edge segments are used to calculate the vertical vanishing point, which verifies that the edge segments are arranged in increasing parallel order from top to bottom of the stair. Finally, these increasing edge segments are distinguished from other stair-like patterns by using the y coordinate of the vanishing point, confirming the detection of the stair candidate region. In addition, triangular similarity is used to estimate the distance from the camera to the stairs.
The proposed framework is tested on various stair images under a variety of conditions, and results are presented to demonstrate its efficiency and effectiveness.
Keywords: autonomous stair climbing navigation; alarm system; Gabor filter; illumination; vanishing point; three connected point; triangular similarity.
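The vanishing-point computation at the heart of the verification step can be sketched with homogeneous coordinates; the two "stair edge" point pairs below are invented so that the lines provably meet at (100, -50):

```python
import numpy as np

def line(p, q):
    """Homogeneous line through two image points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(l1, l2):
    """Intersection of two homogeneous lines, dehomogenised to (x, y)."""
    v = np.cross(l1, l2)
    return v[:2] / v[2]

# Two hypothetical stair-step vertical edges converging above the image:
e1 = line((80, 400), (90, 175))    # left riser edge, extended
e2 = line((120, 400), (110, 175))  # right riser edge, extended
vp = vanishing_point(e1, e2)       # common intersection of the edge lines
```

In homogeneous coordinates both the joining line of two points and the meet of two lines reduce to a cross product, which keeps the computation free of special cases for near-vertical edges.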
Special Issue on: Recent Advances in Theory and Applications of Visual Intelligence
Modeling and Simulation of the In-wheel Motor Applied in Electric Vehicle
by Shanshan Peng, Xuejiao Wang, Shipei Cheng, Rongyun Zhang
Abstract: To study the ride comfort of an in-wheel motor electric vehicle, it is necessary to analyse the speed ripple and torque ripple of the motor. Thus, the differential equation of motion of the PMSM (permanent magnet synchronous motor) is presented in this paper. Based on the vector control principle and rotor-field-oriented control, a double closed-loop controlled PMSM simulation model is built with the Sim Power System toolbox. The running of the motor with and without load is then simulated in Matlab/Simulink, and the performance curves of the motor's speed, torque and stator current are obtained from the simulation analysis. Finally, an in-wheel motor test system is established to verify the simulation results. The results show that the proposed motor simulation model is reliable and can fully reflect the running status of the real motor, providing a basis for further research on the effect of motor torque ripple on the vertical vibration of the vehicle suspension.
Keywords: Electric vehicle; in-wheel motor; modeling; simulation.
Calibration and using a laser profile scanner for 3D robotic welding
by Michal Chalus, Jindrich Liska
Abstract: This paper describes the first functions of a cognitive module being developed for 3D robotic welding using TIG or laser technology. This area is constantly evolving with the need to solve complex problems in welding automation and robotisation, and also thanks to continuous advances in measurement technology and robotics. Besides the use of welding robots for serial production with tightly defined trajectories, systems are being developed for the automatic welding of previously undefined paths, which the operator cannot define manually because of their complexity. This paper covers the general description of the cognitive module and its required functions. The necessary background on pose representation and transformation is then presented. After that, procedures for calibrating a profile scanner and using it for 3D model construction based on a depth map are described in more detail. The cognitive module prototype is tested in the task of automatic cavity repair.
Keywords: 3D robotic welding; tungsten inert gas welding; TIG welding; laser welding; laser profile scanner; hand-eye calibration; 3D model construction; depth map; image processing; trajectory identification; cognitive robot.
Content-Based Image Retrieval Using Multiresolution Speeded-Up Robust Feature
by Prashant Srivastava, Ashish Khare
Abstract: The advent of numerous low-cost image capturing devices has led to the proliferation of a huge number of images in the present world. Images have grown more complex day by day, and in order to access them easily there is a need for efficient indexing and retrieval. The field of content-based image retrieval (CBIR) aims to achieve this goal. This paper proposes the concept of a multiresolution Speeded-Up Robust Feature (SURF) descriptor, which combines the discrete wavelet transform and the SURF descriptor to extract interest points at multiple resolutions of an image for CBIR. The feature vector is constructed through the grey-level co-occurrence matrix (GLCM). The advantage of this technique is that it exploits multiple resolutions of the image to extract interest points which single-resolution processing techniques fail to find. The performance of the proposed method is tested on two benchmark datasets, Corel-1K and GHIM-10K, measured in terms of precision and recall, and compared with other state-of-the-art feature descriptors. Experimental results demonstrate that the proposed method outperforms the other descriptors in terms of precision and recall.
Keywords: Content-Based Image Retrieval; Speeded-Up Robust Feature; Gray-Level Co-occurrence Matrix; Multiresolution SURF.
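The multiresolution idea can be illustrated by repeatedly taking the low-pass (LL) sub-band of a Haar wavelet transform and computing a GLCM-based descriptor at each level. The sketch below, assuming Python/NumPy, is a simplified stand-in: it omits the SURF interest-point stage (SURF is patented and not available in all library builds) and uses a single horizontal co-occurrence offset, so it shows the structure of the pipeline rather than the paper's exact descriptor:

```python
import numpy as np

def haar_ll(img):
    """One level of a 2-D Haar transform, returning the low-pass (LL)
    sub-band; the averaging variant keeps values in the original range."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    return (a + b + c + d) / 4.0

def glcm(img, levels=8):
    """Gray-Level Co-occurrence Matrix for the horizontal offset (0, 1),
    normalised to sum to 1."""
    q = np.minimum((img / (256 / levels)).astype(int), levels - 1)
    m = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (64, 64)).astype(float)
features = []
level = image
for _ in range(3):                    # three resolutions of the wavelet pyramid
    features.append(glcm(level).ravel())
    level = haar_ll(level)
feature_vector = np.concatenate(features)   # 3 x 8 x 8 = 192 values
```

In the paper's full method, SURF interest points would be detected at each resolution before the GLCM feature vector is built.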
An Image Encryption Algorithm Using Logarithmic Function and Hénon Chaotic Function
by Purushotham Reddy M
Abstract: This paper proposes a natural-logarithm- and chaos-based encryption algorithm for securing images. It has two important steps. In the first step, a natural logarithmic function reduces the intensity of the pixel values, and image fusion encrypts the image using a key image. In the second step, the Hénon chaotic function shuffles the pixel values. The logarithmic function scatters the pixel values, while the key image matrix is used to create the fusion. In the resulting matrix, neighbouring pixel values that were naturally close take on appreciably different values, making the encrypted image difficult to crack. The proposed method also provides resistance against differential attacks. Several tests and analyses have been performed to demonstrate the validity and security of the algorithm.
Keywords: Natural logarithmic function; Hénon chaotic function; image fusion; key image.
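The two-step scheme can be sketched as follows, assuming Python/NumPy. The map parameters, seeds, and the use of XOR as the fusion operator are illustrative assumptions; the abstract does not specify the exact fusion rule:

```python
import numpy as np

def henon_permutation(n, a=1.4, b=0.3, x0=0.1, y0=0.3):
    """Generate a pixel permutation from the Henon map
    x_{k+1} = 1 - a*x_k^2 + y_k,  y_{k+1} = b*x_k."""
    x, y = x0, y0
    seq = np.empty(n)
    for k in range(n):
        x, y = 1 - a * x * x + y, b * x
        seq[k] = x
    return np.argsort(seq)       # ranking the chaotic sequence gives a permutation

def encrypt(img, key_img, perm):
    """Log transform to compress intensities, fusion with a key image
    (XOR here), then chaotic shuffling of the flattened pixels."""
    scaled = np.log1p(img.astype(float))             # natural-log intensity reduction
    scaled = (scaled / np.log(256) * 255).astype(np.uint8)
    fused = scaled ^ key_img                         # fusion with the key image
    return fused.ravel()[perm].reshape(img.shape)    # Henon shuffling

def decrypt_shuffle(cipher, perm):
    """Invert the shuffling stage (the rounded log stage is not exactly invertible)."""
    flat = np.empty(cipher.size, dtype=cipher.dtype)
    flat[perm] = cipher.ravel()
    return flat.reshape(cipher.shape)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (16, 16), dtype=np.uint8)
key = rng.integers(0, 256, (16, 16), dtype=np.uint8)
perm = henon_permutation(img.size)
cipher = encrypt(img, key, perm)
```

Because the permutation depends sensitively on the map's initial conditions, the pair (x0, y0) acts as part of the secret key alongside the key image.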
Support Vector Machine Based Approach for Text Description from the Video
by Vishakha Wankhede, Ramesh M. Kagalkar
Abstract: Humans use language, written or spoken, to describe the visual world around them, and interest in generating text descriptions of video is therefore growing. In this paper, we present a framework that produces a natural-language description for a long video. The framework is divided into a training section and a testing section. The training section trains on videos together with their descriptions, such as the activities of the objects present in each video. The testing section tests a query video and retrieves a description by comparing it with the videos stored in the database built during training. Using natural language processing, sentences are generated from the objects and their activities. For the evaluation, videos of up to 50 seconds are used.
Keywords: natural-language processing; NLP; video processing; video recognition.
Improved Eigenspectrum Regularization for Human Activity Recognition
by Festus Osayamwen, Jules-Raymond Tapamo
Abstract: A within-class subspace regularization approach is proposed for eigenfeature extraction and regularization in human activity recognition. In this approach, the within-class subspace is modelled using more eigenvalues from the reliable subspace to obtain a four-parameter modelling scheme. This model enables a better, truer estimation of the eigenvalues that are distorted by the small-sample-size effect. The regularization is done in one piece, avoiding the undue complexity of modelling parts of the eigenspectrum differently. The whole eigenspace is used for performance evaluation because feature extraction and dimensionality reduction are performed at a later stage of the evaluation process. Results show that the proposed approach has better discriminative capacity than several other subspace approaches for human activity recognition.
Keywords: Feature extraction; human activity recognition; linear discriminant analysis.
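The core of eigenspectrum regularization is fitting a decay model to the reliable leading eigenvalues of the scatter matrix and replacing the noise-dominated tail with the model's extrapolation. The sketch below, assuming Python/NumPy, uses a simplified two-parameter model lambda_k = alpha/(k + beta) rather than the paper's four-parameter scheme, whose exact form the abstract does not give:

```python
import numpy as np

def regularize_eigenspectrum(eigvals, m):
    """Fit lambda_k = alpha / (k + beta) to the m reliable leading
    eigenvalues and replace the unreliable tail with the model's values."""
    eigvals = np.sort(eigvals)[::-1]          # descending spectrum
    k = np.arange(1, m + 1)
    # The model is linear in 1/lambda: 1/lambda_k = (k + beta) / alpha,
    # so a least-squares line fit on the reliable range recovers alpha, beta.
    A = np.column_stack([k, np.ones(m)])
    (slope, intercept), *_ = np.linalg.lstsq(A, 1.0 / eigvals[:m], rcond=None)
    alpha = 1.0 / slope
    beta = intercept * alpha
    kk = np.arange(1, eigvals.size + 1)
    model = alpha / (kk + beta)
    out = eigvals.copy()
    out[m:] = model[m:]                       # replace the noise-dominated tail
    return out
```

The regularized spectrum then replaces the raw eigenvalues before whitening or discriminant feature extraction, avoiding the inflation of directions whose eigenvalues were underestimated by the small sample size.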
Detection of Defective Printed Circuit Boards Using Image Processing
by Beant Kaur, Gurmeet Kaur, Amandeep Kaur
Abstract: Manufacturing of printed circuit boards involves three stages (printing, component fabrication on the board surface, and soldering of components), and inspection at every stage is very important for improving production quality. Image subtraction is widely used to find the difference between two images; with it, defects can be detected as the difference between a reference (defect-free) image and a test image (the one to be inspected). The major limitation of image subtraction is that both images must have the same size and orientation. The proposed method removes this limitation and also counts the total number of defects on a printed circuit board. It is tested on six test images. Experimental results show that the method is simple, economical, and easy to implement in small and medium-scale industries, where most inspection is still done by humans.
Keywords: printed circuit boards; inspection system; image registration; phase correlation; mean; image subtraction; connected components.
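The pipeline in the abstract (registration, subtraction, thresholding, then defect counting via connected components) can be sketched as below, assuming Python/NumPy. This sketch handles integer translation only, estimated by phase correlation, and the threshold value is an arbitrary illustration; the paper's actual registration also corrects orientation:

```python
import numpy as np

def phase_correlation_shift(ref, test):
    """Estimate the integer translation between two equal-size images
    from the peak of the normalised cross-power spectrum."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(test))
    cross /= np.abs(cross) + 1e-12
    peak = np.unravel_index(np.argmax(np.fft.ifft2(cross).real), ref.shape)
    # Map wrap-around peak positions to signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, ref.shape))

def count_defects(ref, test, thresh=30):
    """Register the test image to the reference, subtract, threshold,
    and count 4-connected defect blobs."""
    dy, dx = phase_correlation_shift(ref, test)
    aligned = np.roll(test, (dy, dx), axis=(0, 1))
    mask = np.abs(ref.astype(int) - aligned.astype(int)) > thresh
    labels = np.zeros(mask.shape, dtype=int)
    n = 0
    for i in range(mask.shape[0]):          # simple flood-fill labelling
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                n += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and labels[y, x] == 0):
                        labels[y, x] = n
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return n

rng = np.random.default_rng(2)
ref = rng.integers(0, 200, (32, 32))
test = ref.copy()
test[10:12, 10:12] = 255      # first simulated defect
test[20:22, 20:22] = 255      # second simulated defect
n_defects = count_defects(ref, test)
```

Registering before subtracting is what lifts the classical requirement that both images share the same size and orientation.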