Goh, Christine Chuen Meng
Preferred name
Goh, Christine Chuen Meng
Email
christine.goh@nie.edu.sg
Department
English Language & Literature (ELL)
Scopus Author ID
7202892316
- Publication (metadata only): Teacher-examiners’ explicit and enacted beliefs about proficiency indicators in national oral assessments
The test of English oral proficiency is important in high-stakes national examinations in which large numbers of teachers are involved as examiners. Although the literature shows that the reliability of oral assessments is often threatened by rater variability, to date the role of teacher beliefs in teacher-rater judgements has received little attention. This exploratory qualitative study, conducted in Singapore, identified teachers’ beliefs about the construct of oral proficiency for their assessment of secondary school candidates and examined the extent to which these beliefs were enacted in real-time assessment. Seven experienced national-level examiners participated in the study. They listened to audio recordings of four students performing an oral interview (conversation) task in a simulated examination and assessed the performance of each student individually. Data on teachers’ thinking, which revealed their underlying beliefs while assessing, were elicited through Concurrent Verbal Protocol (CVP) sessions. In addition, a questionnaire was administered a month later to elicit their explicit beliefs. Findings showed that teachers held a range of beliefs about the construct of oral proficiency, but only some of these formed the core of their expressed criteria when assessing student performance in real time. Implications for oral assessments and further research are discussed.
- Publication (open access): Understanding discrepancies in rater judgement on national-level oral examination tasks
The oral examination is an important component of the high-stakes ‘O’ level examination in Singapore, taken by 16- to 17-year-olds whose first language may or may not be English. In spite of this, there has been sparse research into the examination. This paper reports the findings of an exploratory study that sought to determine whether there were any discrepancies in rater judgements and, thereafter, to explore the nature and scope of the discrepancies identified. Five audio recordings were obtained from a simulated oral examination of five candidates conducted by a trained ‘O’ level oral examiner. Seven other trained ‘O’ level oral examiners were asked to rate four of the recordings individually and provide concurrent verbal reports. Questionnaires were also given to the raters after the verbalisation for data triangulation. The data were analysed through Verbal Protocol Analysis and descriptive statistics. Rater discrepancies detected in the scores were qualitatively determined to stem from differences in four areas: emphases on the factors assessed, constructs of oral proficiency, rater interpretations, and approaches to assessment. These findings provide valuable insights into raters’ perceptions of the construct of speaking and offer implications for rater training and the development of rating scales.