Using latent semantic analysis to grade brief summaries: some proposals Online publication date: Thu, 16-Oct-2014
by Ricardo Olmos, Jose A. Leon, Inmaculada Escudero, Guillermo Jorge-Botana
International Journal of Continuing Engineering Education and Life-Long Learning (IJCEELL), Vol. 21, No. 2/3, 2011
Abstract: In this paper, we present several proposals to improve LSA tools for evaluating brief summaries (under 50 words) of narrative and expository texts. First, we analyse the quality of six different methods for assessing essays that have been widely employed before (Foltz et al., 2000). Second, we analyse how new algorithms inspired by some authors (Denhiere et al., 2007), which try to emulate human behaviour, improve the reliability of LSA with respect to human graders when assessing short summaries, compared with standard LSA use on expository text. Finally, we present an assessment method that combines LSA, a semantic computational linguistic model, with ROUGE-N, a lexical model, to show how combining different automatic evaluation systems (LSA and ROUGE) can improve the quality of assessments at different academic levels.
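The combination described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the cosine similarity here is computed over raw bag-of-words vectors as a stand-in for true LSA (which would first project the counts into a low-rank SVD space), the ROUGE-2 recall follows the standard n-gram-overlap definition, and the example texts and the 0.5 weighting are invented for demonstration.

```python
# Hedged sketch: combining a semantic score (stand-in for LSA cosine)
# with lexical ROUGE-N recall to grade a short summary against a reference.
from collections import Counter
import math

def ngrams(text, n):
    """Count the n-grams in a whitespace-tokenised, lowercased text."""
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(summary, reference, n=2):
    """ROUGE-N recall: overlapping n-grams / n-grams in the reference."""
    s, r = ngrams(summary, n), ngrams(reference, n)
    if not r:
        return 0.0
    overlap = sum(min(count, s[gram]) for gram, count in r.items())
    return overlap / sum(r.values())

def cosine(a, b):
    """Cosine similarity between two bag-of-words count vectors.
    (True LSA would first map counts into a truncated-SVD semantic space.)"""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def combined_score(summary, reference, w_semantic=0.5):
    """Weighted blend of semantic similarity and ROUGE-2 recall
    (the 0.5 weight is an illustrative choice, not from the paper)."""
    sem = cosine(Counter(summary.lower().split()),
                 Counter(reference.lower().split()))
    lex = rouge_n(summary, reference, n=2)
    return w_semantic * sem + (1 - w_semantic) * lex

reference = "the fox jumped over the lazy dog near the river"
summary = "the fox jumped over the dog"
print(round(combined_score(summary, reference), 3))
```

A higher combined score indicates a summary that is both semantically close to and lexically grounded in the source text; tuning the weight lets an assessor trade off meaning-level against word-level agreement.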