Authors: Rod D. Roscoe; Laura K. Varner; Scott A. Crossley; Danielle S. McNamara
Addresses: Human and Environment Systems Department, Learning Sciences Institute, Arizona State University, 7271 E. Sonoran Arroyo Mall, 150D Santa Catalina, Mesa, AZ 85212, USA; Department of Psychology, Learning Sciences Institute, Arizona State University, P.O. Box 872111, Tempe, AZ 85287, USA; Department of Applied Linguistics/ESL, Georgia State University, 34 Peachtree St., Suite 1200, One Park Tower Building, Atlanta, GA 30303, USA; Department of Psychology, Learning Sciences Institute, Arizona State University, P.O. Box 872111, Tempe, AZ 85287, USA
Abstract: Various computer tools have been developed to support educators' assessment of student writing, including automated essay scoring and automated writing evaluation systems. Research demonstrates that these systems exhibit relatively high scoring accuracy but uncertain instructional efficacy. Students' writing proficiency does not necessarily improve as a result of interacting with the software. One question is whether these systems offer appropriate or sufficient formative feedback to students about their writing. To motivate further research in this area, we present a straightforward methodology for constructing automated feedback algorithms that are grounded in writing pedagogy and assessment. The resulting threshold algorithms are demonstrated to be meaningfully related to essay quality and informative regarding individualised, formative feedback for writers. Potential applications and extensions of this methodology are discussed.
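The abstract describes threshold algorithms that flag essays whose linguistic features fall outside benchmark ranges and map those flags to formative feedback. The paper does not specify an implementation; the following is a minimal hypothetical sketch of the general idea, in which the index names, cutoff values, and feedback messages are all invented for illustration (in practice, thresholds would be derived from a corpus of human-scored essays).

```python
# Hypothetical sketch of a threshold-based formative feedback algorithm.
# Index names, cutoffs, and messages below are illustrative assumptions,
# not values from the paper.

# Each entry maps a linguistic index to (minimum benchmark, feedback message).
THRESHOLDS = {
    "word_count": (200, "Consider elaborating your ideas in more detail."),
    "avg_sentence_length": (12.0, "Try combining short, choppy sentences."),
}


def extract_indices(essay: str) -> dict:
    """Compute simple linguistic indices for an essay (toy versions)."""
    words = essay.split()
    # Crude sentence segmentation on terminal punctuation.
    normalized = essay.replace("!", ".").replace("?", ".")
    sentences = [s for s in normalized.split(".") if s.strip()]
    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }


def threshold_feedback(essay: str) -> list:
    """Return feedback messages for every index below its benchmark."""
    indices = extract_indices(essay)
    return [
        message
        for name, (cutoff, message) in THRESHOLDS.items()
        if indices[name] < cutoff
    ]
```

A very short essay would fall below both benchmarks and receive both messages, while an essay above them would receive none; the same mechanism extends to any number of indices and message banks.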
Keywords: automated essay scoring; AES; automated writing evaluation; AWE; intelligent tutoring systems; ITS; formative feedback; natural language processing; NLP; learning technologies; writing pedagogy.
International Journal of Learning Technology, 2013 Vol.8 No.4, pp.362 - 381
Published online: 04 Feb 2014