Forthcoming articles

 


International Journal of Web Engineering and Technology

 

These articles have been peer-reviewed and accepted for publication in IJWET, but are pending final changes and are not yet published; they may not appear here in their final order of publication until they are assigned to issues. The content therefore conforms to our standards, but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

 

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

 


International Journal of Web Engineering and Technology (8 papers in press)

 

Regular Issues

 

  • Creating Entrepreneurial Education Programs and Ecosystems in Universities
    by Anna Závodská, Veronika Šramová, Dario Liberona 
    Abstract: Research has shown that the majority of start-ups fail [16], [18], [19]. Some of the reasons include a lack of entrepreneurial knowledge and experience, premature scaling, a poor product, and ignoring customer needs, among others that have been studied. One of the main causes of failure in emerging countries is the lack of entrepreneurial skills of founders and their co-workers. Creating a favorable start-up ecosystem requires the cooperation of many stakeholders, such as companies, public institutions (universities, self-governing regions, research centers, incubators, co-working spaces) as well as enthusiastic individuals. Universities play a crucial role in creating this ecosystem and in developing entrepreneurial knowledge and skills. They actively try to prepare students for an entrepreneurial career, for professional skills, and also for research, organizing events and offering courses that should help students be more successful with their start-ups. This paper describes the development of an entrepreneurial course at the University of Žilina in Slovakia and an entrepreneurship educational program at Santa María
    Keywords: Start-up; entrepreneurial course; start-up events; university entrepreneurial ecosystem; experiential learning; entrepreneurial skills.

  • Holistic Evaluation of Knowledge Management Practices in Large Indian Software Organizations   Order a copy of this article
    by Asish Oommen Mathew, Lewlyn L. R. Rodrigues 
    Abstract: This research analyses knowledge management (KM) implementation in Indian software organisations from the perspective of knowledge workers. A holistic KM evaluation was conducted by capturing perceptions of critical success factors, process capability, and the effectiveness of KM. The parameters of the study were developed using content analysis. A questionnaire survey captured knowledge workers' perceptions of eight generic dimensions of critical success factors, five dimensions of KM process capability, and five dimensions of KM effectiveness. The data were collected from 423 knowledge workers in 66 large software firms listed on the Bombay Stock Exchange (BSE). The perception of each factor was converted into a knowledge management index (KMI) score for interpretation. The results indicate that the overall implementation of KM in Indian software firms is heading in the right direction, with above-average KMI scores for all factors. The critical success factors, knowledge process capability factors, and knowledge management effectiveness parameters were ranked based on their perception scores.
    Keywords: Knowledge Management; Critical Success Factors; Process Capability; Effectiveness; Organizational Culture; Technology; Leadership; Knowledge Workers.
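
    The following minimal Python sketch illustrates the kind of index computation the abstract refers to, assuming 5-point Likert responses averaged and rescaled to 0-100; the dimension names, ratings and the study's actual KMI formula are assumptions, not taken from the paper.

      # Illustrative only: rescale mean Likert responses (1-5) to a 0-100 index,
      # loosely mirroring the idea of a knowledge management index (KMI) score.
      # Dimension names and ratings below are hypothetical, not from the study.

      def kmi_score(responses, scale_min=1, scale_max=5):
          """Average a list of Likert responses and rescale to 0-100."""
          mean = sum(responses) / len(responses)
          return 100 * (mean - scale_min) / (scale_max - scale_min)

      dimensions = {
          "leadership": [4, 5, 3, 4],               # hypothetical respondent ratings
          "organisational_culture": [3, 4, 4, 3],
          "technology": [5, 4, 4, 5],
      }

      for name, ratings in dimensions.items():
          print(f"{name}: KMI = {kmi_score(ratings):.1f}")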

  • A simple decision-making approach for information technology solution selection   Order a copy of this article
    by Yuri Zelenkov 
    Abstract: Information technology (IT) is an indispensable tool for any organization today, so the choice of adequate IT solutions is a critically important skill. Many methods for selecting IT solutions have been proposed in the literature, but they often rely on vague criteria that are very difficult to quantify and on complex methods for comparing alternatives. The application of these methods outside theoretical articles is therefore limited, since practitioners need simpler approaches. We propose a simple method for evaluating alternative IT solutions based on five criteria, namely the cost of ownership, the time needed for the change, security risks, acceptance by users, and confidence in the supplier's ability to implement the solution. In accordance with the theory of probabilistic mental models, a reference class is proposed for each criterion, and variables that can be measured quantitatively are chosen on that basis. To simplify the decision-making process, a weighted product model is used to compare the alternatives.
    Keywords: IT project selection; IT costs; IT intangible benefits; IT risks; IT sourcing; users’ adoption.
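
    The following minimal Python sketch illustrates a weighted product model over the five criteria named in the abstract; the weights, scales and example values are hypothetical, not the paper's.

      # Weighted product model (WPM) sketch: each alternative's score is the product
      # of its criterion values raised to the criterion weights. Cost-type criteria
      # (lower is better) are inverted first. All weights and values are hypothetical.

      CRITERIA = {
          # name: (weight, higher_is_better)
          "cost_of_ownership":   (0.30, False),
          "time_for_change":     (0.20, False),
          "security_risk":       (0.20, False),
          "user_acceptance":     (0.15, True),
          "supplier_confidence": (0.15, True),
      }

      def wpm_score(values):
          """values: dict mapping criterion name to a positive number on a common scale."""
          score = 1.0
          for name, (weight, higher_is_better) in CRITERIA.items():
              v = values[name] if higher_is_better else 1.0 / values[name]
              score *= v ** weight
          return score

      solution_a = {"cost_of_ownership": 120, "time_for_change": 6,
                    "security_risk": 2, "user_acceptance": 4, "supplier_confidence": 5}
      solution_b = {"cost_of_ownership": 90, "time_for_change": 9,
                    "security_risk": 3, "user_acceptance": 3, "supplier_confidence": 4}

      print("A:", wpm_score(solution_a), "B:", wpm_score(solution_b))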

  • Finding and Validating Medical Information Shared on Twitter: Experiences Using a Crowdsourcing Approach
    by Scott Duberstein, Derek Doran, Daniel Asamoah, Shu Schiller 
    Abstract: Social media provide users with a channel to share meaningful and insightful information with their network of connected individuals. Harnessing this public information at scale is a powerful notion, as social media are rife with public perceptions, signals, and data about a variety of topics. However, there is a common trade-off in collecting information from social media: the more specific the topic, the more challenging it is to extract reliable and truthful information. In this paper, we present an experience report describing our efforts in developing and applying a novel approach to identify, extract, and validate topic-specific information using the Amazon Mechanical Turk (AMT) crowdsourcing platform. The approach was applied in a use case where meaningful information about a medical condition (major depressive disorder) was successfully extracted from Twitter. Our approach, and the lessons learned, may serve as a generic methodology for extracting relevant and meaningful data from social media platforms and help researchers who are interested in harnessing Twitter, Amazon Mechanical Turk, and similar platforms for reliable information discovery.
    Keywords: Crowdsourcing; Amazon Mechanical Turk; Twitter; Social Media; Major Depressive Disorder.
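
    The following minimal Python sketch illustrates one plausible validation step of the kind the abstract describes: aggregating crowd workers' labels per tweet by majority vote. The labels, agreement threshold and example data are hypothetical, not the paper's actual AMT task design.

      # Hypothetical aggregation of crowd labels: each tweet is judged by several
      # workers as "relevant" or "not_relevant"; keep tweets whose majority label
      # meets a minimum agreement threshold.
      from collections import Counter

      # (tweet_id, worker_label) pairs -- illustrative data only
      judgements = [
          ("t1", "relevant"), ("t1", "relevant"), ("t1", "not_relevant"),
          ("t2", "not_relevant"), ("t2", "not_relevant"), ("t2", "relevant"),
      ]

      def aggregate(judgements, min_agreement=0.6):
          by_tweet = {}
          for tweet_id, label in judgements:
              by_tweet.setdefault(tweet_id, []).append(label)
          validated = {}
          for tweet_id, labels in by_tweet.items():
              label, count = Counter(labels).most_common(1)[0]
              if count / len(labels) >= min_agreement:
                  validated[tweet_id] = label
          return validated

      print(aggregate(judgements))   # {'t1': 'relevant', 't2': 'not_relevant'}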

  • Semantic Lifting and Reasoning on the Personalised Activity Big Data Repository for Healthcare Research
    by Hong Qing Yu 
    Abstract: The fast-growing market of smart health monitoring devices and mobile applications gives ordinary citizens the capability to understand and manage their own health. However, there are many challenges for data engineering and knowledge discovery research in efficiently extracting knowledge from data collected, at high volume and velocity, from heterogeneous devices and applications. This paper presents research that initially started within the EC MyHealthAvatar project and has been under continual improvement since the project's completion. The major contribution of the work is a comprehensive big data and semantic knowledge discovery framework that integrates data from varied data sources. The framework applies a hybrid database architecture of NoSQL and RDF repositories and introduces semantic-oriented data mining and knowledge-lifting algorithms. The activity stream data is collected through a Kafka-based big data processing component. The motivation of the research is to enhance knowledge management and discovery capabilities and efficiency, to support more accurate health risk analysis and lifestyle summarization.
    Keywords: Big Data; Knowledge Discovery; Semantic Web; Ontology; Data Engineering; Data processing; Healthcare.
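
    The following minimal Python sketch illustrates the kind of pipeline the abstract outlines, assuming the kafka-python and rdflib libraries, a hypothetical "activity-stream" topic and a made-up ontology namespace; it does not reproduce the MyHealthAvatar framework's actual schemas or components.

      # Hypothetical semantic lifting: read JSON activity events from a Kafka topic
      # and lift them into RDF triples. Topic name, ontology namespace and event
      # fields are assumptions for illustration only.
      import json
      from kafka import KafkaConsumer           # pip install kafka-python
      from rdflib import Graph, Literal, Namespace, RDF, URIRef

      ACT = Namespace("http://example.org/activity#")   # made-up ontology namespace
      g = Graph()

      consumer = KafkaConsumer("activity-stream",
                               bootstrap_servers="localhost:9092",
                               value_deserializer=lambda m: json.loads(m.decode("utf-8")))

      for message in consumer:                  # runs until interrupted
          event = message.value                 # e.g. {"user": "u1", "steps": 4200, "ts": "..."}
          subject = URIRef(f"http://example.org/event/{message.offset}")
          g.add((subject, RDF.type, ACT.Activity))
          g.add((subject, ACT.user, Literal(event["user"])))
          g.add((subject, ACT.steps, Literal(event["steps"])))
          g.add((subject, ACT.timestamp, Literal(event["ts"])))
          # further reasoning or SPARQL queries could then run over g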

  • Anomaly detection in web logs using user-behavior networks
    by Xiaojuan Wang 
    Abstract: With the rapid growth of web attacks, anomaly detection has become a necessary part of managing modern large-scale distributed web applications. As records of user behavior, web logs are a natural research object for anomaly detection, and many anomaly detection methods based on automated log analysis have been proposed. However, most research focuses on the content of individual logs while ignoring the connection between users and paths. To address this problem, we introduce graph theory into anomaly detection and establish a user behavior network model. Integrating the network structure and the characteristics of anomalous users, we propose five indicators to identify anomalous users and anomalous logs. Results show that the method performs well on four real web application log datasets, with a total of about 4 million log messages and 1 million anomalous instances. In addition, this paper integrates and improves a state-of-the-art anomaly detection method to further analyze the composition of the anomalous logs. We believe that our work will bring a new angle to the research field of anomaly detection.
    Keywords: graph theory; anomaly detection; user behavior.
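
    The following minimal Python sketch illustrates building a user-path network from web log records with networkx and computing two simple structural indicators; the five indicators proposed in the paper are not reproduced, and the log records and indicator definitions are hypothetical.

      # Hypothetical user-behavior network: a bipartite graph linking users to the
      # URL paths they request, built from parsed log records. The two indicators
      # below (path fan-out and rare-path ratio) are illustrative, not the paper's.
      import networkx as nx

      # (user, path) pairs parsed from web logs -- illustrative data only
      records = [
          ("alice", "/login"), ("alice", "/home"),
          ("bob", "/login"), ("bob", "/admin/config"), ("bob", "/admin/dump"),
      ]

      G = nx.Graph()
      for user, path in records:
          G.add_node(user, kind="user")
          G.add_node(path, kind="path")
          G.add_edge(user, path)

      path_popularity = {n: G.degree(n) for n, d in G.nodes(data=True) if d["kind"] == "path"}

      for user in (n for n, d in G.nodes(data=True) if d["kind"] == "user"):
          paths = list(G.neighbors(user))
          fan_out = len(paths)                      # how many distinct paths the user touches
          rare = sum(1 for p in paths if path_popularity[p] == 1) / fan_out
          print(f"{user}: fan_out={fan_out}, rare_path_ratio={rare:.2f}")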

  • DWSpyder: A New Schema Extraction Method for a Deep Web Integration System
    by Yasser Saissi 
    Abstract: The deep web is a huge part of the web that is not indexed by search engines; its sources are accessible only through their associated access forms. We wish to use a web integration system to access deep web sources and all of their information. To implement such a system, we need to know the schema description of each web source. The problem addressed in this paper is how to extract the schema describing an otherwise inaccessible deep web source. We propose the DWSpyder method, which extracts the schema describing a deep web source despite this inaccessibility. DWSpyder starts with a static analysis of the deep web source's access forms to extract the first elements of the associated schema description. The second step is a dynamic analysis of these access forms, using queries to enrich the schema description. DWSpyder also uses a clustering algorithm to identify the possible values of deep web form fields with undefined sets of values. All of the extracted information is used by DWSpyder to automatically generate deep web source schema descriptions.
    Keywords: Web Integration; Schema Extraction; Deep Web; Clustering.
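
    The following minimal Python sketch illustrates the static-analysis step described in the abstract: parsing an access form (here with BeautifulSoup, as an assumption) and listing its fields and predefined values as candidate schema elements; DWSpyder's dynamic querying and clustering steps are not covered.

      # Hypothetical static analysis of an access form: extract field names and the
      # predefined values of <select> fields as a first draft of the source schema.
      from bs4 import BeautifulSoup             # pip install beautifulsoup4

      html = """
      <form action="/search">
        <input name="title" type="text">
        <select name="country">
          <option>France</option><option>Morocco</option>
        </select>
        <input name="year" type="number">
      </form>
      """

      soup = BeautifulSoup(html, "html.parser")
      schema = {}
      for form in soup.find_all("form"):
          for field in form.find_all(["input", "select"]):
              name = field.get("name")
              if not name:
                  continue
              if field.name == "select":
                  # closed-domain field: its options are candidate attribute values
                  schema[name] = [opt.get_text(strip=True) for opt in field.find_all("option")]
              else:
                  schema[name] = field.get("type", "text")   # open-ended field: keep its type

      print(schema)   # {'title': 'text', 'country': ['France', 'Morocco'], 'year': 'number'}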

  • Impact of Replica Placement-based Clustering on Fault Tolerance in Grid Computing
    by Rahma Souli-Jbali 
    Abstract: Given the growing demand for very high computing power and storage capacity, data grids appear to be a good solution, since these architectures make it possible to add hardware and software resources offering virtually infinite storage and computation capacity. However, the design of distributed applications for data grids remains complex. The task is even harder when one considers that faults and disconnections of machines, leading to data loss, are common in data grids. It is therefore necessary to take into account the dynamic nature of grids, since grid nodes may disappear at any time. This paper focuses on the impact of replica placement-based clustering on the fault tolerance of nodes in grids. The main idea builds on two well-known fault tolerance protocols used in distributed systems. We propose a novel protocol that operates on two levels: inter-cluster and intra-cluster. At the inter-cluster level, a message-logging protocol is used. At the intra-cluster level, the same protocol is used, coupled with the non-blocking coordinated checkpointing of Chandy-Lamport. This ensures that, in case of failure, the impact of the fault remains confined to the nodes of the same cluster. The experimental results show the efficiency of the proposed protocol in terms of recovery time, number of processes used, and number of messages exchanged.
    Keywords: Data Grids; Fault Tolerance; Replica placement; Clustering; Job scheduling.
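
    The following minimal Python sketch illustrates the intra-cluster ingredient named in the abstract: the marker rules of Chandy-Lamport non-blocking coordinated checkpointing for a single node. The node and channel structure is hypothetical, and the inter-cluster message-logging level of the proposed protocol is only hinted at in a comment.

      # Illustrative Chandy-Lamport marker handling for one cluster node. A node
      # snapshots its local state on the first marker it sees, then records messages
      # arriving on channels that have not yet delivered a marker.

      MARKER = "MARKER"

      class Node:
          def __init__(self, name, incoming_channels):
              self.name = name
              self.state = {}                               # application state (hypothetical)
              self.snapshot = None
              self.recording = {ch: [] for ch in incoming_channels}
              self.marker_seen = {ch: False for ch in incoming_channels}

          def start_snapshot(self, send):
              """Initiator rule: save local state, then send a marker on every outgoing channel."""
              self.snapshot = dict(self.state)
              send(MARKER)

          def on_message(self, channel, msg, send):
              if msg == MARKER:
                  if self.snapshot is None:                 # first marker: snapshot now
                      self.snapshot = dict(self.state)
                      send(MARKER)
                  self.marker_seen[channel] = True          # stop recording this channel
              else:
                  # normal application message; an inter-cluster message would also be
                  # written to a message log here before being processed
                  if self.snapshot is not None and not self.marker_seen[channel]:
                      self.recording[channel].append(msg)   # in-flight message for the snapshot
                  self.state[len(self.state)] = msg         # toy state update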