Fake content detection on benchmark dataset using various deep learning models
Online publication date: Mon, 09-Sep-2024
by Chetana Thaokar; Jitendra Kumar Rout; Himansu Das; Minakhi Rout
International Journal of Computational Science and Engineering (IJCSE), Vol. 27, No. 5, 2024
Abstract: The widespread use and growth of social media have provided a medium for the rapid propagation of fake content among the masses. Fake content frequently misleads individuals and leads to erroneous social judgements, and the dissemination of low-quality news on social media has harmed both individuals and society. In this paper, we work on a benchmark dataset of news content and propose an approach that combines basic natural language processing techniques with different deep learning models to categorise content as real or fake. The deep learning models employed are LSTM, bi-LSTM, and LSTM and bi-LSTM with an attention mechanism. We compared outcomes using one-hot word embeddings and the pre-trained GloVe technique. On the benchmark LIAR dataset, the LSTM achieved the better accuracy of 67.2%, while the bi-LSTM with GloVe word embeddings reached an accuracy of 67%. On the Real-Fake dataset, accuracies of 98.22% and 97.98% were achieved using bi-LSTM and LSTM, respectively. Fake news can be a menace to society, so detecting it early helps maintain social harmony and keeps individuals from being misled.
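To make the modelling setup concrete, the following is a minimal sketch of a bi-LSTM classifier with frozen pre-trained GloVe embeddings, in the spirit of the models compared in the abstract. The vocabulary size, sequence length, layer widths, and dropout rate are illustrative assumptions, not the authors' reported hyperparameters, and the random embedding matrix and dummy data merely stand in for the real GloVe vectors and tokenised LIAR/Real-Fake statements.

```python
# Sketch of a bi-LSTM binary classifier (real vs. fake) using Keras.
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 100        # assumed maximum tokens per news statement
EMBED_DIM = 100      # assumed GloVe dimensionality (e.g. glove.6B.100d)

# In real use this matrix would be filled from the downloaded GloVe file;
# a random matrix keeps the sketch self-contained and runnable.
embedding_matrix = np.random.normal(size=(VOCAB_SIZE, EMBED_DIM)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(
        VOCAB_SIZE,
        EMBED_DIM,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False,  # keep the pre-trained GloVe vectors frozen
    ),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # real vs. fake
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Dummy integer-encoded, padded sequences just to demonstrate the training
# call; real use would tokenise the news statements and pad them to MAX_LEN.
x_dummy = np.random.randint(0, VOCAB_SIZE, size=(32, MAX_LEN))
y_dummy = np.random.randint(0, 2, size=(32,))
model.fit(x_dummy, y_dummy, epochs=1, batch_size=8)
```

Swapping `Bidirectional(LSTM(64))` for a plain `LSTM(64)` layer, or replacing the GloVe initialiser with a trainable embedding layer, would give the one-hot/LSTM variants compared in the paper; the attention-augmented models would additionally require `return_sequences=True` and an attention layer over the LSTM outputs.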