Title: Comparing the normalised and 2PL IRT scoring methods on multi-form examinations
Authors: Aolin Xie; Ting-Wei Chiu; Keyu Chen; Gregory Camilli
Addresses: Law School Admission Council, 662 Penn Street, Newtown, PA 18940, USA; Princeton, NJ, USA; Prometric, 1501 South Clinton Street, Baltimore, MD 21224, USA; Law School Admission Council, 662 Penn Street, Newtown, PA 18940, USA
Abstract: This study compared candidates' scores based on the normalised model and the two-parameter item response theory (2PL IRT) model using simulated multi-form exam data. Candidates' calculated scores, rankings, qualification status and score ties from the two models were compared with their true values. The results suggest that the 2PL IRT model outperformed the normalised model when candidate ability distributions varied across forms. Candidate scores based on the 2PL model were more closely related to the true scores, and the qualification status of candidates in the top 10% group was more accurately classified by the 2PL model than by the normalised model when group abilities differed.
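For reference, the 2PL IRT model named in the abstract is the standard two-parameter logistic item response function; a minimal statement using conventional notation (not symbols taken from the paper):

P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp[-a_j(\theta_i - b_j)]}

where \theta_i is candidate i's latent ability, a_j is the discrimination parameter, and b_j is the difficulty parameter of item j. Because item parameters can be linked across forms, scores under this model are placed on a common scale; the normalised model, by contrast, rescales scores within each form, which is what makes the two approaches diverge when form-level ability distributions differ.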
Keywords: 2PL IRT model; normalised model; multi-form exam; equating; candidate classification.
DOI: 10.1504/IJQRE.2021.119812
International Journal of Quantitative Research in Education, 2021, Vol.5, No.3, pp.268-276
Accepted: 16 Sep 2020
Published online: 21 Dec 2021