Linear Kernel pattern matched discriminative deep convolutive neural network for dynamic web page ranking with big data
by P. Sujai; V. Sangeetha
International Journal of Critical Infrastructures (IJCIS), Vol. 20, No. 5, 2024

Abstract: Websites and online information are plentiful, and search engines return many pages for every user request, so unstructured web content compromises information retrieval. A new gestalt pattern matched linear kernel discriminant maxpooled deep convolutive neural network (GPMLKDMDCNN) is proposed to rank web pages according to the user query. First, the Szymkiewicz-Simpson coefficient with Gestalt pattern matching and the Paice-Husk method are applied during preprocessing to remove stop words and to stem words. Fisher kernelised linear discriminant analysis then selects keywords from the preprocessed data. Bivariate Rosenthal correlation is used to obtain page rank correlation results while saving time, and web pages are ranked against the user query with higher accuracy. The experiment uses parameters such as accuracy, false positive rate, ranking time, and memory consumption. The evaluation on the CACM dataset shows that the GPMLKDMDCNN method outperforms existing methods, with 5% higher ranking accuracy, a 39% lower false positive rate, 13% lower memory consumption, and 20% faster ranking time.
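The two preprocessing similarity measures named in the abstract are standard and can be sketched as follows. The snippet below is an illustration under assumed choices (whitespace tokenisation, a toy stop-word list, a 0.7 similarity threshold), not the authors' GPMLKDMDCNN implementation: the Szymkiewicz-Simpson (overlap) coefficient compares term sets, and Gestalt pattern matching (the Ratcliff/Obershelp scheme behind Python's difflib.SequenceMatcher) scores string similarity between document tokens and query terms.

```python
# Illustrative sketch only: not the authors' GPMLKDMDCNN code. The stop-word
# list, tokenisation, and 0.7 similarity threshold are assumptions made here.
from difflib import SequenceMatcher


def szymkiewicz_simpson(a: set, b: set) -> float:
    """Overlap coefficient |A intersect B| / min(|A|, |B|)."""
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))


def gestalt_similarity(s1: str, s2: str) -> float:
    """Gestalt (Ratcliff/Obershelp) pattern matching, via difflib."""
    return SequenceMatcher(None, s1, s2).ratio()


STOP_WORDS = {"the", "a", "an", "of", "and", "is", "to", "in", "on"}  # assumed list


def query_related_terms(document: str, query: str, threshold: float = 0.7) -> list:
    """Drop stop words, then keep tokens that resemble some query term."""
    tokens = [t.lower() for t in document.split() if t.lower() not in STOP_WORDS]
    query_terms = [q.lower() for q in query.split()]
    return [t for t in tokens
            if any(gestalt_similarity(t, q) >= threshold for q in query_terms)]


if __name__ == "__main__":
    doc = "The ranking of a web page depends on the query terms in the page"
    print(szymkiewicz_simpson(set(doc.lower().split()), {"web", "page", "ranking"}))
    print(query_related_terms(doc, "rank web pages"))
```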

Online publication date: Fri, 13-Sep-2024
