Integrating parallel analysis modules to evaluate the meaning of answers to reading comprehension questions
by Detmar Meurers, Ramon Ziai, Niels Ott, Stacey M. Bailey
International Journal of Continuing Engineering Education and Life-Long Learning (IJCEELL), Vol. 21, No. 4, 2011
Abstract: Contextualised, meaning-based interaction in the foreign language is widely recognised as crucial for second language acquisition. Correspondingly, current exercises in foreign language teaching generally require students to manipulate both form and meaning. For intelligent language tutoring systems to support such activities, they must be able to evaluate the appropriateness of the meaning of a learner response for a given exercise. We discuss such a content-assessment approach, focusing on reading comprehension exercises. We pursue the idea that a range of simultaneously available representations at different levels of complexity and linguistic abstraction provides a good empirical basis for content assessment. We show how an annotation-based NLP architecture implementing this idea can be realised, and that it performs successfully on a corpus of authentic learner answers to reading comprehension questions. To support comparison and sustainable development on content assessment, we also define a general exchange format for such exercise data.
Online publication date: Tue, 04-Oct-2011