A study of mathematics written assessment in secondary schools
Author
Wong, Lai Fong
Supervisor
Kaur, Berinderjeet
Abstract
When assessing students’ learning, teachers rely heavily on written assessments they construct themselves. To increase the validity of these teacher-made assessments, item-writing rules-of-thumb are available in the literature, but few have been tested experimentally. In light of the paucity of studies on this issue, the purpose of this study is to examine mathematics written assessments from three secondary schools in Singapore, using a set of assessment standards derived from existing taxonomies and approaches in the assessment literature that document the cognitive demands made on students’ mathematical knowledge and skills. Implications are drawn for teacher-practitioners specific to assessment items that are valid, that is, aligned to relevant dimensions of understanding and cognitive demands, and that are mathematically accurate and correct.
The six categories of cognitive demands examined in this study are: (A1) Factual Knowledge, (A2) Comprehension, (A3) Routine Procedures, (B1) Information Transfer, (C1) Justifying and Interpreting, and (C2) Implications, Conjectures and Comparisons; and the four dimensions of understanding are: (S) Skills, (P) Properties, (U) Uses, and (R) Representations. Test items are scrutinized for mathematical accuracy and correctness based on the following types of error: (CE1) Item with incorrect mathematical concept, (CE2) Item with implicit assumption(s), (CE3) Item with imbalanced testing objective(s), (LE1) Item with incorrect mathematical language/terms affecting mathematical sense, (LE2) Item with incorrect use of language/term(s), (DE1) Item with misleading diagram(s), and (PE1) Item with impractical scenario(s).
Findings of this study show that: (1) at least five out of the six categories of cognitive demands are represented in the mathematics written assessments, but the items are heavily biased towards assessing knowledge and skills in (A3) Routine Procedures; (2) all four dimensions of understanding are represented, but items are biased towards assessing the (S) Skills dimension; and (3) all seven types of error are present, in particular the language-related types, and at least 13% of the items in a written assessment are erroneous.
This study, though not extensive, has shed light on the principles and standards of constructing mathematics written assessments. Teachers must be mindful of the types of cognitive demands to make on students and the dimensions of understanding to assess as they write items in the different strands, so as to create a more balanced assessment that is capable of achieving the assessment objectives stipulated in the syllabus. Errors in the assessment items highlighted in this study also serve as signposts, informing teachers of the common oversights and mistakes made in item construction so that these can be avoided when they design test items in future. Recommendations are made to adopt the set of assessment standards in designing robust and balanced mathematics written assessments, and to look into the professional development of classroom teachers on the design and construction of good assessment items and on assessment procedures.
Some limitations of the study are discussed, and the study concludes with recommendations for further research on mathematics written assessments from more schools, including schools of varying characteristics, as well as on other forms of mathematics assessment, both formative and summative.
Date Issued
2014
Call Number
QA11.2 Won
Date Submitted
2014