A Rasch analysis of an International English Language Testing System listening sample test
Paper presented at the 3rd Redesigning Pedagogy International Conference, Singapore, 1–3 June 2009
This study investigates the construct validity of an International English Language Testing System (IELTS) Listening sample test. The test was administered to 148 multinational participants, and Rasch modeling of the data was used to address the research objectives. Four major conclusions were drawn: 1) the Rasch differential item functioning analysis revealed that limited-production items behave differently across test taker groups, suggesting the presence of construct-irrelevant variance; 2) multiple-choice questions (MCQs) do not introduce construct-irrelevant variance unless test takers must make 'close paraphrases' to comprehend the item stem or the question demands more than one answer, which nominates the short MCQ as an optimal item format in listening tests; 3) evidence was found for 'lexical processing', which is distinct from top-down/bottom-up processing; and 4) the Wright map provided evidence of construct under-representation in the test. The findings thus provide different sorts of evidence both supporting and disputing the claim of construct validity of the test, although they should be investigated further in future studies with different samples. Implications of the findings for IELTS and item writers are also discussed.
Appears in Collections: CRPP - Conference Papers

Files in This Item:
CRPP_2009_Aryadoust_a.pdf (169.13 kB, Adobe PDF)
