Title
Item analyses of reading comprehension tests using classical and item response methods
Author
Soo, Kia Yong
Supervisor
Chew, Lee Chin
Abstract
In item analysis and test development work, classical test theory (CTT) and item response theory (IRT) are two competing measurement frameworks. There is current interest in contrasting the two frameworks to better understand several purported limitations of CTT on the one hand and theoretical advantages of IRT on the other. This study aimed to compare item and person statistics obtained under the two measurement frameworks. Five research issues were investigated: (a) How comparable are the person statistics from the CTT and IRT measurement frameworks? (b) How comparable are the item difficulty statistics from the CTT and IRT measurement frameworks? (c) How comparable are the item discrimination statistics from the CTT and IRT measurement frameworks? (d) How invariant are the item difficulty statistics of CTT and IRT across participant samples? (e) How invariant are the item discrimination statistics of CTT and IRT across participant samples? Test items from English Language reading comprehension tests were used in this empirical analysis. The results suggest that the CTT- and IRT-based person statistics were comparable, and so were the item statistics. Relatively high invariance estimates across samples were obtained with both the CTT and IRT measurement frameworks. Implications of the findings are discussed for item analyses and test development work by classroom teachers and for research purposes.
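For reference, the item statistics being compared can be sketched under standard definitions; the abstract does not state which IRT model the study fitted, so the two-parameter logistic form below is an illustrative assumption rather than the study's specified model. For item j answered by N examinees with scored responses x_{ij} in {0, 1}, the CTT item difficulty (proportion correct) and discrimination (point-biserial correlation with total score X) are typically

p_j = \frac{1}{N}\sum_{i=1}^{N} x_{ij}, \qquad r_{pb,j} = \frac{\bar{X}_{1j} - \bar{X}_{0j}}{s_X}\sqrt{p_j (1 - p_j)},

where \bar{X}_{1j} and \bar{X}_{0j} are the mean total scores of examinees answering item j correctly and incorrectly, and s_X is the standard deviation of total scores. Under a two-parameter logistic IRT model, the probability of a correct response given person ability \theta is

P_j(\theta) = \frac{1}{1 + \exp[-a_j(\theta - b_j)]},

with b_j the item difficulty and a_j the item discrimination parameter, the IRT counterparts of p_j and r_{pb,j} in the comparisons described above.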
Date Issued
2007
Call Number
LB3051 Soo
Date Submitted
2007