Title: Determining item position effects in a computer-based test

Authors: Gary Skaggs

Addresses: Virginia Tech, 145 Dixon Drive, Hardy, VA 24101, USA

Abstract: If items become more or less difficult or discriminating depending on their position within a test form, and if different examinees respond to the same items in different positions, the fairness of test scoring is undermined. Such context effects have been investigated in the past with mixed results. This study investigates whether item position effects are present in a computer-administered certification test in which items are presented in random order. The results showed few consistent, significant position effects on either item difficulty or discrimination across the test as a whole. However, items tended to be slightly more difficult when administered in the last five positions of each test form. For individual items, there was no obvious position effect on either difficulty or discrimination. For practitioners working in computer-based testing programs, the findings support the use of random item ordering, but it is still recommended that each testing program conduct its own investigation of item position effects.

Keywords: item position effects; random item ordering; computer-based testing; certification tests.

DOI: 10.1504/IJQRE.2016.073673

International Journal of Quantitative Research in Education, 2016 Vol.3 No.1/2, pp.94-108

Received: 05 Jun 2014
Accepted: 30 May 2015

Published online: 15 Dec 2015
