Forthcoming and Online First Articles

International Journal of Innovative Computing and Applications (IJICA)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Online First articles are published online here, before they appear in a journal issue. Online First articles are fully citeable, complete with a DOI. They can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked with this Open Access icon are Online First articles. They are freely available and openly accessible to all, without any restrictions except those stated in their respective CC licenses.

Register for our alerting service, which notifies you by email when new issues are published online.

We also offer RSS feeds, which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Innovative Computing and Applications (34 papers in press)

Regular Issues

  • Implementation of Cloud Service Platform for Monitoring Charging Facility Status of Electric Vehicle based on MQTT   Order a copy of this article
    by Lei Li, Weidong Liu, Xiaohui Li, Guang Yang, Dan Li 
    Abstract: Charging facilities come from many manufacturers and are heterogeneous, communication and information interaction among these devices is difficult, and real-time information sharing is lacking. To address these problems, this paper makes full use of Message Queuing Telemetry Transport (MQTT), a lightweight message publish/subscribe protocol, to design a cloud service platform for monitoring the status of electric vehicle charging facilities, collecting their monitoring data and storing it in a cloud server. Firstly, the network topology of the cloud service platform is given. Then, the logical architecture and functions of the platform are designed, and the information collection, pushing process and security design of the cloud service platform are described in detail. Finally, the implementation and performance tests of the presented system are carried out. The MQTT-based Internet of Things middleware in the cloud platform can provide remote parameter configuration and information acquisition functions for charging facilities in a smart grid environment, shield the underlying hardware devices, realize data interaction between the sensor layer and the upper application, and integrate multi-source information from heterogeneous devices. The research results have important application value for improving the automation of electric vehicle charging facility monitoring and real-time information interaction among heterogeneous electric vehicle devices.
    Keywords: Electric Vehicle; Charging Facility Monitoring; Cloud Computing; MQTT; Middleware.
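The publish/subscribe pattern the abstract describes can be sketched as follows. The topic layout, payload fields and helper names here are illustrative assumptions, not the platform's actual design; a real deployment would publish through an MQTT client library such as paho-mqtt.

```python
import json

# Hypothetical topic layout: each charging station publishes its status on a
# per-station topic, and the middleware subscribes with a wildcard to collect
# data from all heterogeneous devices.
def status_topic(station_id: str) -> str:
    return f"ev/charging/{station_id}/status"

MIDDLEWARE_SUBSCRIPTION = "ev/charging/+/status"  # '+' matches one topic level

def encode_status(station_id: str, state: str,
                  voltage_v: float, current_a: float) -> bytes:
    """Serialise a status report as a compact JSON payload (field names are illustrative)."""
    payload = {"station": station_id, "state": state,
               "voltage_v": voltage_v, "current_a": current_a}
    return json.dumps(payload).encode("utf-8")

def topic_matches(subscription: str, topic: str) -> bool:
    """Minimal MQTT topic matcher supporting the single-level '+' wildcard."""
    sub_parts, top_parts = subscription.split("/"), topic.split("/")
    if len(sub_parts) != len(top_parts):
        return False
    return all(s == "+" or s == t for s, t in zip(sub_parts, top_parts))
```

With a real broker, the payload would be sent with something like paho-mqtt's `client.publish(status_topic("CS-001"), encode_status(...))`.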

  • New Architectural Optical Character Recognition Approach for Cursive Fonts: The Historical Maghrebian Font as an Example   Order a copy of this article
    by Ilyes OULED OMAR, Sofiene HABOUBI, Faouzi BENZARTI 
    Abstract: In this paper, we present several Maghrebian font databases that we built, along with the challenges facing the development of an optical character recognition (OCR) system able to process them. We also reveal the full architecture for processing the historical Maghrebian font and provide a complete design with the accuracy of each module. The novel OCR architecture includes a binarization module based on deep neural networks with an accuracy of 98.1%. Moreover, it involves three segmentation tasks based on deep learning approaches for text/non-text separation, column division and connected-component segmentation. The classification task is based on the DenseNet model with an accuracy of 98.95%. The post-processing module is also based on deep learning approaches using sequential modelling, with an accuracy of 81.3%. The architecture further includes a user-feedback stage with an accuracy of 94.7%. The total system accuracy is 89.06%.
    Keywords: OCR; Cursive Historical Documents; Maghrebian Font Database; Deep Learning.

  • New delay-independent exponential stability rule of delayed Cohen-Grossberg neural networks   Order a copy of this article
    by Cheng-De Zheng, Haorui Meng, Shengzhou Liu 
    Abstract: This manuscript studies the stability of a class of Cohen-Grossberg neural networks (CGNNs) with variable delays. By applying the Lyapunov function (LF) scheme, M-matrix (MM) theory, homeomorphism theory and the nonlinear measure (NM) method, a new sufficient condition is obtained that ensures the existence, uniqueness and global exponential stability (GES) of the equilibrium point (EP) of the studied network. As the condition is independent of the delay, it can be applied to networks with large delays. The result generalizes and improves on earlier publications. Finally, an example is supplied to exhibit the power of the results and their reduced conservativeness compared with some earlier publications.
    Keywords: stability; inequality; delay; homeomorphism.

  • Fuzzy improved Firefly based MapReduce for Association Rule Mining   Order a copy of this article
    by Lydia Nahla Driff, Habiba Drias 
    Abstract: Biology-inspired algorithms based on neighboring local optima are considered among the most powerful optimization algorithms, and their biggest challenge is ensuring balanced convergence. In this paper, we devise an improved version of the Firefly Algorithm (FF) for association rule mining, called IFF. To refine the rules generated from frequent patterns, we reduce or even eliminate the blind mating inherited from the design of genetic algorithms and replace it with mating between mature fireflies. Since such refinement is not possible with classic methods such as Apriori, the proposed approach uses more advanced mechanisms, such as controlled genetic operations to manipulate frequent patterns and fuzzy logic to control the IFF parameters and calibrate convergence based on data size, algorithm iterations and temporary local optima. We also execute IFF under Hadoop to obtain a MapReduce system and reduce execution time. To analyze the quality of our proposal, we run simulations on the MEDLINE collection, using statistical analysis and comparing with classical algorithms and recent evolutionary approaches. The results indicate that the proposed approach outperforms existing algorithms, improving accuracy by 10% to 50% and saving around 36% of execution time, while ensuring a good balance between the quality and variety of the obtained knowledge.
    Keywords: Swarm Intelligence; Firefly algorithm; Genetic algorithm; Fuzzy logic; Association Rules Mining; Frequent patterns; MapReduce; Hadoop; MEDLINE collection.

  • Fuzzy Modeling Techniques for Improving Multi-label Classification of Software Bugs   Order a copy of this article
    by Rama Ranjan Panda, Naresh Kumar Nagwani 
    Abstract: Software bug repositories store a wealth of information about the problems that occur during software development. Today's software development follows a modular approach, with multiple developers working in different locations all around the world. A software bug may belong to multiple categories and can be resolved by more than one developer. Understanding the multiple causes of software bugs and properly managing bug information in large bug repositories require better classification of software bugs. In this work, a multi-label fuzzy system-based classification (ML-FBC) is proposed. A fuzzy system is used to compute the membership of software bugs in multiple categories. A fuzzy C-means clustering algorithm is then used to create clusters, and once the clusters are created, a cluster-category mapping is built for the various software bugs. For a new bug, the fuzzy similarity values are computed and the cluster-category mappings are utilized to categorize it: using a user-defined threshold value, the new bug is classified into multiple categories. Experiments are carried out on available benchmark data sets to compare the F1 score, BEP score, Hamming loss (Hloss), accuracy, training time and testing time of various multi-label classifiers. The proposed ML-FBC outperforms the existing multi-label classifiers.
    Keywords: Mining Bug Repositories; Bug Information Management; Fuzzy Modeling; Multi-Class Categorization; Multi-Label Classification.
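The membership-plus-threshold step can be sketched as below: memberships follow the standard fuzzy C-means formula, and a bug is assigned every category whose membership clears the user-defined threshold. The distance measure, bug categories and threshold value are illustrative, not taken from the paper.

```python
def fuzzy_memberships(point, centers, m=2.0):
    """Fuzzy C-means style membership of `point` in each cluster centre
    (fuzzifier m > 1); memberships sum to 1 across clusters."""
    d = [sum((p - c) ** 2 for p, c in zip(point, centre)) ** 0.5
         for centre in centers]
    if any(di == 0.0 for di in d):               # point coincides with a centre
        return [1.0 if di == 0.0 else 0.0 for di in d]
    exp = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[j]) ** exp for j in range(len(d)))
            for i in range(len(d))]

def multilabel_assign(point, centers, labels, threshold=0.3):
    """Assign every category whose fuzzy membership clears the threshold."""
    u = fuzzy_memberships(point, centers)
    return {lab for lab, ui in zip(labels, u) if ui >= threshold}
```

A bug equidistant from two cluster centres receives comparable memberships in both, so with a moderate threshold it is classified into both categories at once, which is exactly the multi-label behaviour the abstract describes.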

  • HARDeep: Design and Evaluation of a Deep Ensemble Model for Human Activity Recognition   Order a copy of this article
    by R. Raja Subramanian, V. Vasudevan 
    Abstract: With the emergence of smart systems in various fields, including medical science, forensics and security, remote monitoring of human activities has attracted growing research interest. Ambulatory health monitoring services include monitoring the activities of mentally challenged and elderly people; these activities are tracked to detect abnormalities. In this research paper, we propose a novel framework for activity recognition from video sequences captured by static cameras and by UAVs. The proposed framework, HARDeep, comprises two models: a human detection model leveraging YOLOv3 to extract the set of video frames containing humans, and an activity recognition model leveraging an ensemble of three deep learning models: GoogleNet, ResNet-50 and ResNet-101. HARDeep is evaluated on three datasets: the Hollywood2 dataset, the KTH dataset and the UCF-ARG dataset, which consists of video sequences captured by UAVs. Empirical evaluations show that HARDeep achieves sound recognition accuracy compared with similar inference models.
    Keywords: Activity recognition; Deep ensemble model; UAV; Scene Stabilization; Fog Computing.

  • Automatic Speech Recognition of Gujarati Digits using Wavelet Coefficients in Machine Learning algorithms   Order a copy of this article
    by Purnima Pandit, Shardav Bhatt 
    Abstract: In today's world, Automatic Speech Recognition (ASR) is an important task implemented via Machine Learning (ML) to assist Artificial Intelligence (AI). It has diverse applications such as human-machine interaction, hands-free computing, voice search and domestic appliance control. Speech recognition in an Indian regional language is a necessary task to assist people who can communicate only in their mother tongue, as well as people with disabilities. In this article, we propose and perform speech recognition experiments for the Gujarati language, particularly for Gujarati digits. The recorded speech is preprocessed, and speech features are then extracted from it using Mel-Frequency Discrete Wavelet Coefficients (MFDWC). These features are used to train Artificial Neural Networks (ANN) for classification. Two ANN architectures, namely Multi-layer Perceptrons (MLP) and Radial Basis Function Networks (RBFN), are used for training and recognition. The experimental results obtained in this work are compared with our previous experimental results.
    Keywords: Automatic Speech recognition (ASR); Machine Learning (ML); Artificial Neural Networks (ANN); Radial Basis Function Networks (RBFN).

  • A New Dispersive Flies Optimisation Algorithm for the Sum of Three Cubes   Order a copy of this article
    by Boian Lazov, Tsvetan Vetsov 
    Abstract: By first solving the equation $x^3+y^3+z^3=k$ with fixed $k$ for $z$ and then considering the distance to the nearest integer function of the result, we turn the sum of three cubes problem into an optimisation one. To our knowledge, this is a novel approach. We then present a modification of the dispersive flies optimisation (DFO) algorithm and apply it to this function in the case with $k=2$. We have two goals: to show the viability of using optimisation when searching for integer solutions and to measure how efficient our modified DFO is. We have significantly improved the performance of DFO for very large and discrete search spaces by adding new mechanisms to increase the exploration behaviour of the flies. As a comparison we also use two implementations of simulated annealing. The efficiency of the algorithms is measured by their running times. We model the data by assuming two underlying probability distributions -- exponential and log-normal, and calculate relevant numerical characteristics for them. Finally, we evaluate the statistical distinguishability of our methods with respect to some standard parametric and non-parametric statistical tests.
    Keywords: dispersive flies optimisation; particle swarm optimisation; Diophantine equations; sum of three cubes; simulated annealing.
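The reformulation described above can be sketched directly: solving for z and measuring the distance to the nearest integer gives an objective that vanishes exactly at integer solutions. This is a plain-float sketch; for the very large search spaces the paper targets, exact integer arithmetic would be needed to avoid cube-root rounding error.

```python
def cbrt(v: float) -> float:
    """Real cube root, handling negative arguments."""
    return v ** (1.0 / 3.0) if v >= 0 else -((-v) ** (1.0 / 3.0))

def objective(x: int, y: int, k: int) -> float:
    """Distance from the real solution z of x^3 + y^3 + z^3 = k to the
    nearest integer: zero (up to rounding error) exactly when (x, y)
    extends to an integer solution of the sum-of-three-cubes equation."""
    z = cbrt(k - x ** 3 - y ** 3)
    return abs(z - round(z))
```

Minimising this function over integer pairs (x, y) is then a standard optimisation problem, which is what the modified DFO algorithm explores; for k = 2, the known solution (7, -5, -6) drives the objective to zero.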

  • Prioritizing Test Cases to Improve the Software Fault Detection using MCDM Methods   Order a copy of this article
    by Maryam Mohammadi Sarpiri, Keyvan Mohebbi, Ali Jamshidi 
    Abstract: To decrease the cost of software testing, we can run a subset of test cases, but this may result in residual faults. To keep the efficiency of testing, the most important test cases should be selected through a prioritization approach. Such prioritization requires the assessment of different criteria, so it can be formulated as a multi-criteria decision-making (MCDM) problem. This research proposes an approach to select the proper subset of test cases using the MCDM methods. Three MCDM methods, namely, Fuzzy SAW, Fuzzy VIKOR, and Fuzzy TOPSIS are applied to prioritize the test cases concerning various criteria. To select a subset of test cases, a threshold is determined for different pairs of the most important criteria. The proposed approach is applied to an actual e-government software system with two variants. The experimental evaluations indicate the efficiency of this approach with respect to both the failure rate and the average percentage of fault detection metrics.
    Keywords: Software Testing; Test Case Prioritization; Multi-Criteria Decision Making; Fault Failure Rate; and Average Percentage Fault Detection.
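As a sketch of the simplest of the three methods, crisp SAW (without the fuzzy extension used in the paper) normalises each criterion column, distinguishing benefit criteria from cost criteria, and ranks test cases by weighted sum. The criteria, weights and values below are illustrative only.

```python
def saw_rank(scores, weights, benefit):
    """Simple additive weighting (SAW): normalise each criterion column,
    then rank alternatives by weighted sum. `benefit[j]` is True when a
    higher value of criterion j is better (e.g. fault coverage) and False
    when a lower value is better (e.g. execution time)."""
    n_crit = len(weights)
    cols = list(zip(*scores))
    norm = []
    for j in range(n_crit):
        col = cols[j]
        if benefit[j]:
            m = max(col)
            norm.append([v / m for v in col])     # v / max for benefit criteria
        else:
            m = min(col)
            norm.append([m / v for v in col])     # min / v for cost criteria
    totals = [sum(weights[j] * norm[j][i] for j in range(n_crit))
              for i in range(len(scores))]
    return sorted(range(len(scores)), key=lambda i: -totals[i])
```

Fuzzy SAW replaces the crisp criterion values with fuzzy numbers, but the normalise-weight-rank structure is the same; the top of the ranking, cut at a threshold, gives the selected subset of test cases.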

  • Deep Learning Intelligence for Influencer-based Topological Classification for Online Social Networks   Order a copy of this article
    by Somya Jain, Adwitiya Sinha 
    Abstract: Social network analysis provides quantifiable methods and topological metrics to examine networked structure for several interdisciplinary applications. In our research, a social network of the GitHub community is constructed, forming a dense network of 37,700 developers with 289,003 associations amongst them. The research involves finding the central developers in the GitHub network using graph analytics and benchmark centrality metrics, including degree, betweenness, closeness, PageRank and eigenvector centrality, which are based on the network's structural information. Our research methodology provides a breakthrough in predicting the classification of GitHub users using an artificial intelligence-based learning model trained with derived topological network centrality metrics. The proposed approach performs feature extraction for the developers by computing the centrality score of each user, followed by building a correlation matrix from the centrality parameters based on the network topology. The derived topological centrality scores are then used as input features to train artificial intelligence-based classification models. Our experimentation shows better performance of an artificial neural network over autoencoders, logistic regression and a hyperparameter-tuned support vector machine. Intermediate outcomes include correlation analysis, principal component analysis and loss monitoring. Performance was evaluated in terms of macro and weighted F1-score, recall, precision, and accuracy.
    Keywords: Online social network; GitHub community; user influence computing; network centrality; artificial intelligence; topological classification.
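A minimal sketch of the first of these centrality features: degree centrality is the fraction of the other nodes each developer is directly connected to. The edge-list representation and function name are illustrative, not the paper's implementation.

```python
def degree_centrality(edges, n):
    """Degree centrality for an undirected graph given as an edge list:
    deg(v) / (n - 1), i.e. the fraction of other nodes that v touches."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return [d / (n - 1) for d in deg]
```

At GitHub-network scale one would normally use a graph library (e.g. NetworkX's `degree_centrality`), which computes the same quantity; the five centrality scores together form the feature vector fed to the classifiers.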

  • Convolutional neural networks applied in the detection of pneumonia by X-ray images   Order a copy of this article
    by Luan Silva, Leandro Araújo, Victor Ferreira, Raimundo Neto, Adam Santos 
    Abstract: According to the World Health Organization (WHO), pneumonia kills about 2 million children under the age of 5 and is consistently estimated to be the leading cause of child mortality, killing more children than AIDS, malaria, and measles together. The application of deep learning techniques to medical image classification has grown considerably in recent years. This research presents three implementations of convolutional neural networks (CNNs): ResNet50, VGG-16, and InceptionV3. These CNNs are applied to the classification of medical radiographs from people with pneumonia, to assist in diagnosing the disease. The three architectures used in this research obtained satisfactory results. ResNet50 outperformed InceptionV3 and VGG-16, achieving the highest training and testing precision, as well as superior recall and f1-score. For the normal class, the f1-score of ResNet50 was 88.42%, compared to 81.54% for InceptionV3 and 81.42% for VGG-16. For the pneumonia class, this metric was 95.10%, against 92.82% for InceptionV3 and 92.54% for VGG-16.
    Keywords: deep learning; pattern recognition; convolutional neural networks; CNNs; pneumonia; X-ray.
    DOI: 10.1504/IJICA.2022.10039108
     
  • An empirical evaluation of strategies based on the triangle inequality for accelerating the k-means algorithm   Order a copy of this article
    by Marcelo Kuchar Matte, Maria Do Carmo Nicoletti 
    Abstract: The k-means clustering algorithm has a long history of success in a wide range of applications in many different research areas. Part of its success is due both to the simplicity of the algorithm, which allows quick implementation, and to the good results it produces. Despite this success, however, the original k-means has some shortcomings. One of them is the processing time required for the algorithm to finish the iterative process that, given a set of data instances and an integer value k, induces a clustering with k clusters of the given data instances. This article presents an empirical evaluation of three strategies found in the literature that employ the triangle inequality to accelerate k-means. Experiments were conducted using two groups of datasets: seven real datasets and ten artificially created datasets. Besides empirically evaluating the impact of the variables involved in clustering processes that can interfere with acceleration, the article also discusses the different ways the strategies employ the triangle inequality.
    Keywords: k-means; optimisation; triangular inequality; clustering; machine learning; acceleration strategies.
    DOI: 10.1504/IJICA.2022.10050686
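One family of triangle-inequality strategies can be sketched as follows: since d(x, c_j) >= d(c_i, c_j) - d(x, c_i), a centre c_j cannot beat the current best centre c_i whenever d(c_i, c_j) >= 2 d(x, c_i), so that distance computation can be skipped. This is a minimal illustration of the principle, not the exact strategies evaluated in the article.

```python
def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def assign_with_bounds(points, centers):
    """k-means assignment step with triangle-inequality pruning: the
    centre-to-centre distances are precomputed, and the distance from a
    point to centre c_j is skipped whenever d(best, c_j) >= 2 * d(x, best),
    which guarantees c_j cannot be closer than the current best centre."""
    k = len(centers)
    cc = [[euclid(ci, cj) for cj in centers] for ci in centers]
    labels, skipped = [], 0
    for p in points:
        best, best_d = 0, euclid(p, centers[0])
        for j in range(1, k):
            if cc[best][j] >= 2.0 * best_d:
                skipped += 1            # pruned by the triangle inequality
                continue
            d = euclid(p, centers[j])
            if d < best_d:
                best, best_d = j, d
        labels.append(best)
    return labels, skipped
```

The assignment produced is identical to the naive step; only the number of distance evaluations drops, which is where the reported speed-ups come from.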
     
  • A high precision stereo-vision robotic system for large-sized object measurement and grasping in non-structured environment   Order a copy of this article
    by Guoyang Wan, Guofeng Wang, Kaisheng Xing, Tinghao Yi, Yunsheng Fan 
    Abstract: Handling and loading large-sized objects is a challenging task in industrial environments, especially when the object is metallic with reflective surface features. Because active stereo vision technology is poorly suited to measuring the pose of reflective metal objects, a high-precision pose measurement system based on passive stereo vision is proposed for the automatic measurement, handling and loading of large objects by industrial robots. The system adopts a coarse-to-fine stereo vision positioning strategy and achieves high-precision positioning of the measured target while ensuring stability. For coarse positioning, an improved multi-model template matching method based on machine learning is proposed for robust recognition of multiple objects against complex backgrounds. For fine positioning, a RANSAC-based method for ellipse fitting and multi-point plane fitting is proposed to accurately obtain the 6-DOF pose of the object. Compared with the classical CAD-views method, experiments show that the method proposed in this paper performs better in positioning accuracy and recognition robustness.
    Keywords: industrial robot; machine vision; stereo vision; 3D measurement; template matching; coordinate transform.
    DOI: 10.1504/IJICA.2022.10050687
     
  • A robot-soldering workstation combined with the deep learning and the template matching technology   Order a copy of this article
    by Guoyang Wan, Tinghao Yi, Guofeng Wang, Kaisheng Xing, Yunsheng Fan 
    Abstract: To improve the stability and precision of PCB soldering, an unmanned robot workstation is proposed to solve the problem of automatically soldering chips of different models. First, a novel 2D hand-eye calibration method is developed to acquire a high-precision transformation model from the coordinate system of the vision system to the robot working coordinate system. Then a robust classification and location method, based on a deep neural network combined with traditional template matching, is developed to achieve high-precision positioning of PCB objects. The proposed method solves the problems of poor positioning accuracy and low recognition rates that arise when traditional machine vision detects PCB objects with the same shape but different models, thereby automating chip soldering. The experimental results show that the proposed algorithm displays excellent robustness.
    Keywords: deep learning; template matching; hand-eye calibration; coordinate transformation; object detection.
    DOI: 10.1504/IJICA.2022.10050688
     
  • Optimisation of plagiarism detection using vector space model on CUDA architecture   Order a copy of this article
    by Jiffriya Mohamed Abdul Cader, Akmal Jahan Mohamed Abdul Cader, Hasindu Gamaarachchi, Roshan G. Ragel 
    Abstract: Plagiarism is a rapidly rising issue among students submitting assignments, reports and publications in universities and educational institutions, due to the easy accessibility of abundant e-resources on the internet. Existing tools become inefficient in terms of time consumption when dealing with a prolific number of documents with large content. Therefore, we focused on software-based acceleration of plagiarism detection using the CPU/GPU. Initially, a serial version of the vector space model was implemented on the CPU and tested with 1,000 documents, which consumed 1,641 s. As processing time was a performance bottleneck, we developed a parallel version of the model on graphics processing units (GPUs) using the compute unified device architecture (CUDA); tested on the same dataset, it consumed only 36 s, a 45x speed-up over the CPU. This version was then optimised further and took only 4 s for the same dataset, 389x faster than the serial implementation.
    Keywords: graphics processing units; GPUs; compute unified device architecture; CUDA; plagiarism detection; vector space model; CPU; VSM; parallel computing; speed up; acceleration; idf; web-based commercial tool; kernel; Google Cloud.
    DOI: 10.1504/IJICA.2022.10042480
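The serial vector space model baseline can be sketched in a few lines: tf-idf weighting followed by pairwise cosine similarity between document vectors. The paper's contribution is parallelising exactly this kind of all-pairs computation with CUDA; this sketch is the CPU reference, not the authors' code.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a sparse tf-idf vector (term -> weight) for each tokenised
    document, using idf = log(N / df)."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse vectors; near 1.0 flags likely plagiarism."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

On the GPU, each thread (or block) can score one document pair, which is what makes the reported 45x-389x speed-ups over this serial loop possible.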
     

Special Issue on: IWSSIP 2020 Latest Scientific and Theoretical Advances in Systems, Signals and Image Processing

  • Performance Evaluation of Energy Reconstruction Methods for the ATLAS Hadronic Calorimeter Using Collision Data   Order a copy of this article
    by Juan Marin 
    Abstract: Particle accelerators are machines that collide particle beams travelling at nearly the speed of light. The Large Hadron Collider (LHC), the largest and most powerful collider, operates at a center-of-mass energy of 13 TeV with a 40 MHz collision rate. In the ATLAS experiment, optimal filtering techniques can be applied to energy estimation even in the presence of noise and signal overlap. In this context, the present work compares the performance of energy estimation in the main hadronic ATLAS calorimeter for two methods: the baseline algorithm OF2 (Optimal Filter) and COF (Constrained Optimal Filter). The work also proposes an adaptive estimator of the energy signal pedestal in the ATLAS experiment at the LHC. For performance comparisons, statistics from the energy estimation distributions are employed. The results show that COF outperforms OF2 in terms of estimation error, and the proposed signal baseline estimator further improves the efficiency of COF.
    Keywords: High-energy physics; Signal estimation; Optimal filtering; Signal pile-up.
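A minimal sketch of the optimal-filtering idea: the amplitude (energy) is estimated as a weighted sum of pedestal-subtracted samples, with weights chosen so the estimate is insensitive to the pedestal. The pulse shape and weights below are illustrative assumptions; in OF2 they are derived from the measured pulse shape and the noise covariance.

```python
def of_energy(samples, weights, pedestal=0.0):
    """Optimal-filter style amplitude estimate: a weighted sum of
    pedestal-subtracted digitised samples."""
    return sum(w * (s - pedestal) for w, s in zip(weights, samples))

# Assumed (hypothetical) normalised pulse shape sampled at 5 points:
PULSE = [0.0, 0.5, 1.0, 0.5, 0.0]
# Weights solving sum(w) = 0 (pedestal insensitivity) and
# sum(w * PULSE) = 1 (unit gain on the pulse amplitude) for that shape:
WEIGHTS = [-4/7, 1/7, 6/7, 1/7, -4/7]
```

Because the weights sum to zero, a constant pedestal offset cancels out of the estimate, which is exactly why an accurate (or adaptive) pedestal model matters mainly when the pedestal drifts within the sampled window.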

  • Light Leaf Spots Segmentation Algorithm based on Color Difference Vectors   Order a copy of this article
    by Sheng Miao, Weili Kou, Lihui Zhong 
    Abstract: Segmenting and quantifying leaf spots in images is a crucial step in disease image recognition, which is very important for monitoring plant diseases and their subsequent diagnosis. Compared with dark spots, the segmentation of light leaf spots is hindered by their less obvious color characteristics, and they are easily confused with leaf veins. This study focuses specifically on light leaf spot segmentation and vein removal. First, a mask extraction model was constructed to obtain the leaf spot mask, comprising three steps: L*a*b color model mapping, color difference vector construction and color difference vector segmentation. Second, vein masks and spot masks were extracted from healthy and spotted leaves, respectively, using the mask extraction model, and the vein ratio of the vein mask was calculated to evaluate whether vein removal was needed. Third, area and roundness were calculated for both the vein mask and the spot mask, and regions of the spot mask with high similarity to vein features were removed. Leaves with light spots from 12 plant types were selected to test the proposed algorithm. The experimental results show that, compared with manual segmentation of light leaf spots, the presented algorithm has higher average accuracy (86%) than the traditional watershed (77%), Barbedo 2017 (82%) and CNN (81%) methods. Moreover, the algorithm is easy to apply to other plant species.
    Keywords: Light leaf spots; Image segmentation; Vein removal; Color difference vector.

  • Detecting Acute Leukemia in Blood Slides Images Using a CNNs Ensemble   Order a copy of this article
    by Maíla Claro, Rodrigo Veras, Luis Vogado, André Santana, Vinicius Machado, Justino Santos, João Tavares 
    Abstract: Leukemia is a disease with no defined etiology that affects the production of white blood cells in the bone marrow. Young cells, or blasts, are produced abnormally, replacing normal blood cells (white cells, red cells, and platelets); consequently, the patient suffers problems in transporting oxygen and fighting infections. Acute leukemia is a particular type of leukemia that causes abnormal cell growth in a short period, requiring a quick start of treatment. Classifying the types of acute leukemia in blood slide images is therefore a vital process, and a system to assist doctors in selecting treatment becomes necessary. This article presents an ensemble approach composed of three convolutional neural networks (CNNs): Alert Net-RWD, ResNet50 and InceptionV3. Individually, these CNNs have demonstrated effectiveness in differentiating blood slides with Acute Lymphoid Leukemia (ALL), Acute Myeloid Leukemia (AML), and Healthy Blood Slides (HBS). We verified that the union of these three well-known CNNs improves on the hit rates of current techniques from the literature. The experiments were carried out using 18 public data sets with 3,293 images, and the proposed CNN ensemble achieved an accuracy of 96.17% and a precision of 96.38%.
    Keywords: Acute leukemia diagnosis; model ensemble; convolutional neural network.

  • Evaluation of Banknote Identification Methodologies Based on Local and Deep Features   Order a copy of this article
    by Leonardo P. Sousa, Rodrigo M. S. Veras, Luis H. S. Vogado, Laurindo S. Britto Neto, Romuere R. V. Silva, Flávio H. D. Araujo, Fátima N. S. Medeiros 
    Abstract: There are many people with disabilities: it is estimated that 39 million people are blind and 246 million have limited vision, giving a total of 285 million visually impaired people. Information and communication technologies can help disabled people achieve greater independence, quality of life, and inclusion in social activities by increasing, maintaining, or improving their functional capacities. This paper presents a significant evaluation of local and deep features for an automatic banknote identification methodology. To determine the best local features, we evaluated a set of four point-of-interest detectors, two descriptors, seven ways of generating the image signature, and six classification methodologies. To define the deep features, we extracted features using three well-known pre-trained CNNs. Additionally, we evaluated a hybrid descriptor formed by combining local and deep features; in this case, the features were selected according to their gain ratios and used as input to the classifier. Experiments performed on US Dollar (USD), Euro (EUR), and Brazilian Real (BRL) banknotes achieved accuracy rates of 99.96%, 99.12%, and 96.92%, respectively.
    Keywords: accessibility; visually impaired; banknote recognition; assistive technologies.

  • Multilevel CNN for Anterior Chamber Angle Classification using AS-OCT Images   Order a copy of this article
    by Marcos Ferreira, Geraldo Braz, João Almeida, Anselmo Paiva, Rodrigo Veras 
    Abstract: Glaucoma is the second leading cause of blindness and the leading cause of irreversible blindness. The main types of the disease are open-angle and angle-closure glaucoma. In people with angle-closure glaucoma, the anterior chamber angle is narrow, which leads to rising intraocular pressure and, consequently, optic nerve damage, causing vision loss. Since the damage is irreversible, an early diagnosis is essential, and angle classification is fundamental for diagnosis. Anterior segment optical coherence tomography is one of the imaging tests used to diagnose the disease. In addition to requiring no eye contact, this test provides a fast way to analyze the anterior chamber angle and classify it as open or closed. In this work, we propose an anterior chamber angle classification method based on visual feature extraction using deep neural networks. In a multilevel architecture, different pre-trained CNNs are adjusted to extract deep features and train two classifiers. In the experiments, the best model extracted visual features from the anterior chamber angle and achieved a sensitivity of 1.000 as its best result.
    Keywords: Angle-Closure Glaucoma; Transfer Learning; Deep Features; Multilevel Networks.

  • Transfer Learning Based Lung Segmentation and Pneumonia Detection for Pediatric Chest X-Ray Images   Order a copy of this article
    by Vandecia Fernandes, Gabriel Bras, Lisle Faray De Paiva, Geraldo Braz Junior, Anselmo Cardoso De Paiva, Luis Rivero 
    Abstract: Pneumonia is the leading cause of morbidity and mortality in under-five children, especially in developing countries. According to UNICEF and the World Health Organization, a child dies of pneumonia every 39 seconds, and pneumonia kills more children than any other infectious disease, accounting for 15% of all deaths of children under five years old. In regions with a high prevalence, early detection and treatment of pneumonia can significantly reduce children's mortality rates. A chest x-ray is commonly used as the diagnostic exam; nevertheless, it is a difficult image to read and interpret, requiring an expert physician. It is therefore essential to provide computational methods that help exam interpretation or enhance important information. This paper proposes a transfer learning method to segment lung regions in chest x-ray images, extracting the ROI for pneumonia detection. The results are promising, reaching a Dice coefficient of 0.917 using U-Net combined with InceptionV3 on a chest x-ray dataset without lung annotations. For pneumonia detection, the method achieves a precision of 0.954.
    Keywords: Pneumonia; Transfer Learning; Deep Learning; Segmentation.
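
    For reference, the Dice coefficient reported above (0.917) measures the overlap between a predicted and a reference binary mask. A minimal sketch of the metric, with toy masks of our own (not the paper's code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# toy 4x4 masks: 3 pixels predicted, 2 of them correct, ground truth has 3
pred   = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]])
print(round(dice_coefficient(pred, target), 3))  # 2*2/(3+3) -> 0.667
```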

  • Wind Turbine Fault Detection: A Semi-Supervised Learning Approach With Two Different Dimensionality Reduction Techniques   Order a copy of this article
    by Fernando Sá, Rafaelli Coutinho, Eduardo Ogasawara, Diego Brandao, Rodrigo Toso 
    Abstract: The quest to save the environment has led many countries to change their mix of energy sources, with most focusing on wind energy as a clean alternative source. Since wind turbines are at the center of this revolution, ensuring their continuous operation by preemptively detecting and correcting faults is key to success. Towards that end, various data-driven methods for fault prediction based on traditional machine learning have been proposed that make use of live operational signals from the turbine. As with all traditional machine learning solutions, these require careful feature selection and hyperparameter tuning for each turbine, which is not simple to scale. This work adopts a novel, automatic, end-to-end AutoML approach covering aspects from feature and hyperparameter selection to fault prediction, using semi-supervised support vector machines guided by the multi-objective optimization framework Non-dominated Sorting Genetic Algorithm II (NSGA-II). Experiments were carried out on a dataset containing unlabelled records of five 2.0 MW wind turbines. In the results, we analyze and compare the approaches with respect to fault detection performance. We found that our AutoML approach using NSGA-II for feature selection offers up to a 9% improvement in solution quality over the state of the art while being fully automated and requiring no costly and time-consuming feature engineering.
    Keywords: Feature Selection; Fault Detection; Wind Turbine; Semi-Supervised Learning.
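
    NSGA-II, used here to guide feature and hyperparameter selection, is built around non-dominated sorting of the population into Pareto fronts. A compact (non-optimised) sketch of that sorting step; the toy objectives, e.g. (validation error, number of features), are illustrative only:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(points):
    """Split points into successive Pareto fronts, as in NSGA-II's first phase."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# hypothetical trade-off between validation error and feature count
pts = [(0.10, 5), (0.08, 9), (0.12, 4), (0.10, 9), (0.30, 2)]
print(nondominated_sort(pts))  # [[0, 1, 2, 4], [3]]
```

    NSGA-II then ranks individuals by front index and breaks ties with a crowding distance, which rewards solutions in sparsely populated regions of objective space.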

  • Heuristic-based approaches for fracture detection in borehole images   Order a copy of this article
    by Maira Moran, Eduardo Vasconcellos, Jordan Cuno, Mauro Biondi, Jose Riveaux, Maury Correia, Esteban Clua, Aura Conci 
    Abstract: The success of an oil well drilling process depends, among other factors, on borehole stability, which is closely related to fractures. Borehole imaging tools are commonly used to identify this kind of feature. Visually, fractures can be identified as sinusoidal curves in the 2D data obtained by the Ultrasonic Borehole Imager (UBI), but their automatic detection is not trivial. Previous works in the literature have proposed solutions for this problem, most of them based on extensive curve searches with a high computational cost. A recent work uses a more efficient method based on a heuristic algorithm; however, it produces many false fractures. In this work, we propose a set of 11 new heuristic-based methods, combining two different algorithms and three pre-processing techniques, to identify the sinusoidal curves related to fractures in UBI data. Moreover, we include a post-processing step to reduce the number of incorrect fractures. We evaluate the proposed methods on real data, comparing them to the only heuristic-based method previously presented in the literature. The achieved results are very promising, with the best methods presenting fewer false curves and a higher rate of correctly detected fractures.
    Keywords: Oil drilling; Ultrasonic Borehole Image; Fractures; Sinusoidal curves; Heuristics; Variable Neighborhood Descendent; Iterated Local Search; Detection.
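
    In an unrolled borehole image, a planar fracture intersecting the borehole traces a sinusoid, which is what the heuristic searches evaluate. A minimal sketch of scoring one candidate sinusoid against an edge map (our own illustration with synthetic data; the paper's actual heuristics and pre-/post-processing are not reproduced here):

```python
import numpy as np

def sinusoid_row(width, amplitude, phase, offset):
    """Row index of a candidate fracture sinusoid at each image column:
    row(x) = offset + amplitude * sin(2*pi*x/width + phase)."""
    x = np.arange(width)
    return offset + amplitude * np.sin(2 * np.pi * x / width + phase)

def sinusoid_score(edge_map, amplitude, phase, offset):
    """Heuristic fitness: mean edge strength along the candidate curve."""
    h, w = edge_map.shape
    rows = np.clip(np.round(sinusoid_row(w, amplitude, phase, offset)), 0, h - 1)
    return edge_map[rows.astype(int), np.arange(w)].mean()

# synthetic 64x128 edge map containing one sinusoidal fracture
h, w = 64, 128
edges = np.zeros((h, w))
true_rows = np.round(sinusoid_row(w, amplitude=10, phase=0.5, offset=32)).astype(int)
edges[true_rows, np.arange(w)] = 1.0

print(sinusoid_score(edges, 10, 0.5, 32))        # 1.0: perfect match
print(sinusoid_score(edges, 10, 0.5, 10) < 0.5)  # wrong offset scores low
```

    A heuristic such as iterated local search or variable neighbourhood descent then explores the (amplitude, phase, offset) space guided by this kind of score.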

Special Issue on: Recent Advances in Intelligent Systems

  • Optimization of Cost 231-Hata Model Based on Deep Learning   Order a copy of this article
    by Qinxia Huang, Cheng Zhang, Jing Liu, Shilin Wu 
    Abstract: Based on the dataset of question A in the 16th "Huawei Cup" mathematical modelling competition, this paper uses a deep learning algorithm to optimize the Cost 231-Hata model for wireless communication. Firstly, the feature parameters of the Cost 231-Hata model are analysed, and the corresponding features are found in the dataset. Secondly, two new reference features are extracted according to the geometric relationship between the base station and cell location. Then, principal component analysis is used to reduce the dimension of the dataset, and six features that are highly correlated with the target are extracted from the multiple reference features. Finally, these six features are taken as the input of a neural network, and a wireless propagation model based on deep learning is constructed using the error back-propagation algorithm. The results show that the prediction accuracy of this model is higher than that of the traditional Cost 231-Hata model.
    Keywords: wireless communication; Cost 231-Hata model optimization; feature engineering; deep learning.
    DOI: 10.1504/IJICA.2022.10048843
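
    For reference, the standard Cost 231-Hata median path-loss formula that serves as the baseline here can be computed directly. The parameter values in the example are arbitrary, not taken from the competition dataset:

```python
import math

def cost231_hata(f_mhz, h_base, h_mobile, d_km, metropolitan=False):
    """Cost 231-Hata median path loss (dB).
    Valid roughly for f 1500-2000 MHz, h_base 30-200 m, h_mobile 1-10 m, d 1-20 km."""
    # mobile antenna correction factor for small/medium cities
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile \
           - (1.56 * math.log10(f_mhz) - 0.8)
    c = 3.0 if metropolitan else 0.0   # metropolitan-centre correction
    return (46.3 + 33.9 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base) - a_hm
            + (44.9 - 6.55 * math.log10(h_base)) * math.log10(d_km) + c)

# example: 1800 MHz, 50 m base station, 1.5 m handset, 2 km distance
print(round(cost231_hata(1800, 50, 1.5, 2.0), 1))  # ~143.3 dB
```

    A learned propagation model replaces this closed form with a network trained on measured data, which is why its predictions can adapt to terrain and clutter the formula cannot capture.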
     
  • Multimodal Multi-objective Differential Evolution Algorithm Based on Spectral Clustering   Order a copy of this article
    by Shenwen Wang, Xiaokai Chu, Jiaxing Zhang, Na Gao, Yao Zhou 
    Abstract: In recent years, decision-makers facing the same problem in industrial production and daily life have often wished to have a variety of different solutions to choose from. In other words, the goal is to locate multiple distinct Pareto-optimal solutions while still finding the Pareto front. However, there has been little research in this area. For this reason, we propose a multimodal multi-objective differential evolution algorithm based on spectral clustering (SC-MMODE), which uses several mechanisms to divide the solutions in the decision space into mutually independent subpopulations. First, SC-MMODE uses a spectral clustering algorithm to partition the decision space and form multiple subpopulations with good neighbourhood relations. Second, a special crowding distance mechanism is used to balance the distribution of solutions in the decision space and the objective space. In addition, the classical differential evolution algorithm is used to effectively prevent premature convergence. The SC-MMODE algorithm and several recent multimodal multi-objective algorithms are then tested on 17 test problems. The analysis of the experimental data shows that SC-MMODE can find more Pareto-optimal sets in the decision space and can therefore effectively solve such problems.
    Keywords: Multimodal multi-objective; Spectral clustering; Decision space; Differential evolution algorithm; Special crowding distance.

  • Enhancing cuckoo search algorithm with complement strategy   Order a copy of this article
    by Hu Peng, Hua Lu, Changshou Deng 
    Abstract: The cuckoo search algorithm (CS) is a simple and effective swarm intelligence algorithm. It searches for new solutions via a global explorative random walk and a local random walk, and a greedy strategy is used to choose the better solution. However, CS uses a single strategy and fixed parameters, so its ability to balance exploitation and exploration is inadequate, resulting in poor convergence performance. To cope with this problem, we propose a novel CS variant with a complement strategy (CoCS), in which new solutions are generated by one of two strategies chosen at random. One strategy is an improved Lévy flight; the other adaptively determines the step size according to the fitness value and the current iteration number. The algorithm also uses an improved random walk. The proposed CoCS, the standard CS, and other excellent CS variants were tested on the CEC 2013 test suite. Experimental results show that CoCS is superior to these competitors.
    Keywords: Cuckoo search algorithm; Global optimization; Complement strategy; Dynamic parameter adjustment.
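
    The Lévy flight at the heart of CS is usually implemented with Mantegna's algorithm. A minimal sketch of the standard version (the paper's improved flight and adaptive step size are not detailed in the abstract, so they are not reproduced here):

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-distributed step via Mantegna's algorithm (standard CS choice)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def levy_move(x, best, alpha=0.01, beta=1.5):
    """Move solution x relative to `best` with a Lévy flight, per dimension."""
    return [xi + alpha * levy_step(beta) * (xi - bi) for xi, bi in zip(x, best)]

random.seed(0)
print(levy_move([1.0, 2.0, 3.0], [0.0, 0.0, 0.0]))  # mostly small steps, occasional long jumps
```

    The heavy tail of the Lévy distribution is what gives CS its occasional long exploratory jumps.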

  • Accelerating Artificial Bee Colony Algorithm Using Elite Information   Order a copy of this article
    by Xinyu Zhou, Yanlin Wu, Shuixiu Wu, Maosheng Zhong, Mingwen Wang 
    Abstract: In nature, the foraging behaviour of a bee colony is always guided by some elite honeybees, with the aim of maximizing the overall nectar amount. Inspired by this phenomenon, we propose an improved artificial bee colony (ABC) variant that uses elite information. In our approach, two novel solution search equations based on elite information are developed as the main way of generating new offspring, which helps accelerate the convergence rate. Moreover, to preserve the search experience in the scout bee phase, a new reinitialization method based on elite information is proposed. Extensive experiments are conducted on the CEC 2013 and CEC 2015 test suites, and four other relevant ABC variants are included in the comparison. The results show that our approach has better performance in terms of convergence speed and result accuracy.
    Keywords: Artificial bee colony; Solution search equation; Elite information; Exploration and exploitation.
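
    For context, the canonical ABC search equation perturbs one random dimension of a solution towards or away from a random neighbour. The sketch below shows that canonical form plus a hypothetical elite-guided variant; the paper's actual search equations are not given in the abstract, so this is illustrative only:

```python
import random

def abc_candidate(x, neighbor, elite=None):
    """Canonical ABC search equation v_j = x_j + phi*(x_j - x_k,j) applied to one
    random dimension j. If `elite` is given, use a hypothetical elite-guided
    form v_j = e_j + phi*(e_j - x_k,j) instead (illustrative, not the paper's)."""
    j = random.randrange(len(x))
    phi = random.uniform(-1, 1)
    base = elite if elite is not None else x
    v = list(x)
    v[j] = base[j] + phi * (base[j] - neighbor[j])
    return v

random.seed(1)
x = [0.5, -0.2, 1.3]
print(abc_candidate(x, [0.1, 0.0, 0.9]))
print(abc_candidate(x, [0.1, 0.0, 0.9], elite=[0.0, 0.0, 1.0]))
```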

  • Density peaks clustering algorithm based on kernel density estimation and minimum spanning tree   Order a copy of this article
    by Tanghuai Fan, Xin Li, Jiazhen Hou, Baohong Liu, Ping Kang 
    Abstract: Rodriguez et al. reported a clustering algorithm that can rapidly search for density peaks. This algorithm requires no iterative optimization of an objective function and has few parameters, resulting in simple yet effective clustering. However, it cannot automatically determine the local density of samples based on the data size, and its allocation process easily produces allocation errors that propagate, leading to a poor final clustering result. Aiming at these shortcomings of the density peak clustering (DPC) algorithm in local density estimation and allocation strategy, this paper proposes a density peak clustering algorithm based on kernel density estimation and a minimum spanning tree (MST). The proposed algorithm adopts Gaussian kernel density estimation to compute the local density of samples, coordinating the relationship between the part and the whole. It also proposes a new allocation strategy that uses the minimum spanning tree idea to generate a tree from the dataset according to the principle of high density and close distance; a degree of polymerization is defined and calculated before and after disconnecting each edge of the tree, and the edge yielding the larger degree of polymerization is disconnected, repeating until the expected number of clusters is reached. The experimental results show that the proposed algorithm achieves better clustering results.
    Keywords: density peak clustering algorithm; minimum spanning tree; local density; degree of polymerization.
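
    The Gaussian-kernel local density used in place of DPC's original cut-off density can be sketched as follows (the bandwidth and toy data are our own choices, not the paper's):

```python
import numpy as np

def gaussian_local_density(points, bandwidth=1.0):
    """Local density of each point as a Gaussian kernel density estimate:
    rho_i = sum_j exp(-||x_i - x_j||^2 / (2 h^2))."""
    diff = points[:, None, :] - points[None, :, :]
    d2 = (diff ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * bandwidth ** 2)).sum(axis=1)

# two tight clusters plus one outlier: cluster members get higher density
pts = np.array([[0, 0], [0.1, 0], [0, 0.1],
                [5, 5], [5.1, 5],
                [10, 0]])
rho = gaussian_local_density(pts, bandwidth=0.5)
print(rho.argmax())  # 0: a point inside the 3-member cluster
```

    Unlike the cut-off kernel, every point contributes to every density value, so the estimate varies smoothly with the data and is less sensitive to the choice of a single distance threshold.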

  • Elite subgroup guided particle swarm optimization algorithm with multi-strategy adaptive learning   Order a copy of this article
    by Runxiu Wu, Lulu Wang, Shuixiu Wu, Hui Sun 
    Abstract: To address the premature convergence of the standard particle swarm optimization (PSO) algorithm, this paper proposes an elite subgroup guided particle swarm optimization algorithm with multi-strategy adaptive learning (EGAPSO). To enhance the particles' ability to escape from local extremum points, the social cognitive component of the model, which originally learns only from the global optimal particle, is changed to adaptively choose between learning from the global optimal particle and learning from a particle in the elite subgroup. Meanwhile, to make the algorithm more universal, a variety of learning strategies with different search characteristics, such as elite particle opposition-based learning, subspace Gaussian learning, and mean center learning, are adaptively selected during the evolutionary process. The combination of these two improvements not only increases the universality of the algorithm but also enhances the diversity of the population, which effectively helps the algorithm escape from local optima and avoid premature convergence. Simulation results on a typical test function set, and comparisons with other classical and recently improved PSO algorithms, show that the proposed algorithm performs better in terms of optimization quality and stability.
    Keywords: PSO algorithm; elite subgroup; subspace; opposition-based learning; mean center.

  • Agent-based modeling approach for Internet rumor propagation and its empirical study   Order a copy of this article
    by Tinggui Chen, Bailu Jing, Jingtao Rong, Zepeng Wang 
    Abstract: This paper analyzes the current research status of Internet rumor propagation domestically and abroad, proposes an expanded SPNR rumor propagation model based on the SIR infectious disease model, carries out simulation analysis with an agent-based method, and verifies the SPNR model through both dataset validation and case validation. Firstly, the infection status of rumor spreaders is divided into positive and negative, and the positive and negative infection rates are set dynamically according to the proportion of infected neighbours; a forgetting mechanism is also added, yielding a more widely applicable SPNR rumor spreading model. Secondly, the rumor spreading process is simulated with the agent-based method, and the influence of the parameters affecting rumor spreading is analyzed by numerical simulation, providing a basis for formulating rumor control strategies. Finally, from the perspective of dataset validation and case validation, Sina Weibo data and SPNR model simulation results are compared to verify the applicability of the SPNR model to the rumor propagation process.
    Keywords: Network rumors; epidemic models; complex networks; information dissemination.

  • Simulation Research on Trajectory Tracking Control System of Manipulator Based on Fuzzy PID Control   Order a copy of this article
    by Ruyi Ma, Zeshen Li, Du Jiang, Juntong Yun, Ying Liu, Yibo Liu, Dongxu Bai, Gongfa Li 
    Abstract: Manipulator trajectory tracking is affected by external environmental interference, parameter changes, and random noise; in addition, fuzzy PID control has problems such as fuzzy rules that are not easy to set, and quantization intervals and parameters that are affected by subjective judgement, which leads to insufficient performance of the manipulator system. In this paper, firstly, in the MATLAB simulation environment, the linkage model of a four-degree-of-freedom manipulator is established based on the D-H parameter model, and the homogeneous transformation matrix of the manipulator is calculated. Then, applying fuzzy PID control in the Simulink toolbox, the optimal controller parameters are found using a proportional-factor self-adjusting strategy, and the corresponding trajectory tracking and forward and inverse kinematics simulations are studied, yielding the trajectory tracking curve of the manipulator. The research method and experimental results in this paper are of great significance for the simulation of articulated manipulators.
    Keywords: manipulator; trajectory tracking; fuzzy PID control; Simulink.
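
    The homogeneous transformation the abstract refers to follows the standard Denavit-Hartenberg convention. A minimal forward-kinematics sketch; the D-H parameter values below are placeholders for a hypothetical 4-DOF arm, not the paper's manipulator:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform (4x4 homogeneous matrix)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# forward kinematics: chain the per-joint transforms (theta, d, a, alpha)
joints = [(0.0, 0.1, 0.0, np.pi / 2),
          (np.pi / 4, 0.0, 0.3, 0.0),
          (-np.pi / 4, 0.0, 0.25, 0.0),
          (0.0, 0.0, 0.1, 0.0)]
T = np.eye(4)
for params in joints:
    T = T @ dh_transform(*params)
print(T[:3, 3])  # end-effector position in the base frame
```

    Inverse kinematics then solves for the joint angles that place `T[:3, 3]` (and the orientation block `T[:3, :3]`) at a desired pose, which is what trajectory tracking drives over time.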

  • A Novel Differential Evolution with Staged Diversity Enhancement Strategy   Order a copy of this article
    by Wei Li, Yafeng Sun, Ying Huang 
    Abstract: The differential evolution (DE) algorithm is a simple and efficient evolutionary computing technique. Although DE has achieved good results in many fields, inappropriate parameter combinations can easily lead to premature convergence. In response to this problem, this paper proposes an effective DE with a staged diversity enhancement strategy (SDESDE), which increases the diversity of the population. In the early stage of the evolutionary process, SDESDE emphasizes a balanced search strategy; in the middle stage, it uses the diversity enhancement strategy to avoid getting trapped in local optima; and in the later stage, a faster convergence strategy is adopted. Besides, an adaptive mechanism is added to control the population diversity at different stages, so as to approach the global optimum faster and improve search efficiency. The proposed SDESDE algorithm is compared with four representative DE variants, and the experimental results demonstrate that it not only maintains population diversity better but is also highly competitive in overall performance.
    Keywords: Differential Evolution; Staged Strategy; Diversity Enhancement; Adaptive Mechanism.
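
    For context, the canonical DE/rand/1/bin operator that staged DE variants such as this build on can be sketched as follows (this is the textbook operator, not the paper's staged strategies):

```python
import random

def de_rand_1(pop, i, f=0.5, cr=0.9):
    """Classic DE/rand/1/bin: mutate with three distinct random vectors,
    then binomially cross over with the target vector pop[i]."""
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    a, b, c = pop[r1], pop[r2], pop[r3]
    dim = len(pop[i])
    j_rand = random.randrange(dim)          # force at least one mutated gene
    trial = [a[j] + f * (b[j] - c[j]) if (random.random() < cr or j == j_rand)
             else pop[i][j] for j in range(dim)]
    return trial

random.seed(2)
pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(10)]
print(de_rand_1(pop, 0))
```

    The scale factor `f` and crossover rate `cr` are exactly the parameters whose fixed settings can cause premature convergence, which is what staged and adaptive schemes try to avoid.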

  • A hybrid firefly algorithm based on modified neighbourhood attraction   Order a copy of this article
    by Rongfang Chen, Jun Tang 
    Abstract: Some studies have reported that the firefly algorithm (FA) has high computational time complexity. To tackle this problem, different attraction models have been designed, including random attraction, probabilistic attraction, and neighbourhood attraction. This paper concentrates on improving the efficiency of neighbourhood attraction, and a hybrid FA based on modified neighbourhood attraction (called HMNaFA) is proposed. In our new approach, the best solution selected from the current neighbourhood is used for competition: if the best solution wins the competition, the current solution flies towards it; otherwise, a new neighbourhood search is employed to produce high-quality solutions. The approach is validated on several classical problems. Simulation results show that HMNaFA surpasses FA with neighbourhood attraction and several other FA variants.
    Keywords: Firefly algorithm; modified neighbourhood attraction; generalized opposition-based learning; neighbourhood search.

  • A genetic algorithm based robust approach for type-II U-shaped Assembly line balancing problem   Order a copy of this article
    by Guangyue Jia, Honghui Zhan, Yunfang Peng 
    Abstract: U-shaped assembly lines are widely used to implement just-in-time manufacturing, and balancing them is important for improving productivity. Most existing research ignores uncertainty in factors such as operation times. This study applies a robust optimization method to the type-II U-shaped assembly line balancing problem (UALBP-2) under uncertainty. A mathematical programming model with interval task operation times is proposed, and a genetic algorithm is developed to solve it. A robust solution is defined as the most frequent solution falling within a pre-specified percentage of the optimal solution across different sets of scenarios. The experimental results are compared with the expected-value solution to verify the feasibility and effectiveness of the robust method.
    Keywords: U-shaped assembly line; mathematical programming; robust solution; genetic algorithm.