  • Publication
    Open Access
    EduTOOLS: A development of on-line tools for project work
    (2000-09)
    Ng, Connie Siew Ling; Sim, Wee Chee
    In an effort to spearhead the use of IT for education, the Centre for Learning Technologies, jointly set up by the Ministry of Education, Kent Ridge Digital Labs and the Infocomm Development Authority, has embarked on an R&D project named eduTOOLS. This project aims to provide teachers and students with a suite of on-line tools for project work. For teachers, these tools could help monitor and assess students' project work processes and products. For students, the tools could support on-line collaboration with other group members on their project work. Also available to students are a collection of recommended project resources and a list of experts. This paper reports on the development of eduTOOLS by examining the conceptual and technical aspects of each tool. The discussion bears implications for the use of IT in project work and raises some pertinent issues for consideration.
  • Publication
    Open Access
    Test reporting of English language proficiencies of teacher trainees: Towards a profiling assessment system
    (2000-09)
    Seow, Anthony; Luo, Guanzhong
    A single overall grade is the hallmark of test reporting practices in education. This assessment system has provided an efficient and convenient means of reporting test results for a summative purpose such as certifying students on completion of a course of study or selecting students who can benefit from the next level of education. However, it is less useful for reporting results for a formative purpose such as monitoring students' learning progress or diagnosing their learning difficulties. The National Institute of Education has developed a computerised assessment tool known as the "NIE Computerised English Language Test" (NIECELT) for testing the English language proficiencies of prospective students in pre-service teacher training programmes. NIECELT, however, can also be used to create appropriate tests for any other grade level – primary, secondary or college. This paper describes a profiling assessment system to be incorporated into the test reporting of language proficiencies. NIECELT, an interactive mode of computerised testing, presents a number of sub-tests for assessing a student's level of competence in English sentence structure, collocation, text cohesion, text meaning, editing skills and semantic awareness. By harnessing the computer's unique capabilities in data management, the student's performance in specific language skill areas can be profiled.
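    The abstract does not describe how NIECELT's profile reports are formatted or banded; purely as an illustration of profiling versus a single overall grade, the Python sketch below aggregates hypothetical sub-test scores into a per-skill profile. The sub-test names follow the skill areas listed above, while the band labels and cut-offs are invented for the example.
      # Illustrative only: report each skill area separately instead of
      # collapsing performance into one overall grade. Band cut-offs and
      # labels are assumptions, not NIECELT's actual reporting scheme.
      SUB_TESTS = ["sentence structure", "collocation", "text cohesion",
                   "text meaning", "editing skills", "semantic awareness"]

      def band(score, cutoffs=(50, 65, 80)):
          """Map a percentage score to a hypothetical proficiency band."""
          labels = ["developing", "adequate", "proficient", "advanced"]
          return labels[sum(score >= c for c in cutoffs)]

      def profile(scores):
          """Build a per-skill profile from sub-test scores."""
          return {skill: (scores[skill], band(scores[skill])) for skill in SUB_TESTS}

      if __name__ == "__main__":
          example = {"sentence structure": 82, "collocation": 58,
                     "text cohesion": 71, "text meaning": 66,
                     "editing skills": 45, "semantic awareness": 90}
          for skill, (score, label) in profile(example).items():
              print(f"{skill:18s} {score:3d}  {label}")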
  • Publication
    Open Access
    Student readiness for EduTOOLS use: An investigation of their IT-related skills and behaviours
    (2000-09)
    Ng, Connie Siew Ling; Sim, Wee Chee
    EduTOOLS, an R&D project of the Centre for Learning Technologies, which is jointly set up by the Ministry of Education, Kent Ridge Digital Labs and the Infocomm Development Authority, involves an innovative use of computer technology. With EduTOOLS, students can make use of a suite of on-line tools for project work. Also available to students are a collection of recommended project resources and a list of experts. The project will be piloted in three Singapore schools - two secondary schools and one junior college. For the implementation of a high-technology project such as EduTOOLS, one main concern is student readiness in terms of IT skills. This paper reports the findings of an investigation into students' experience with computers, namely their use of computers in and outside of school, the types of computer applications used, the activities engaged in with a computer, and their frequency of computer use. Self-reports of students' working behaviours on the computer as well as their affective characteristics were also captured. The implications of the findings for EduTOOLS implementation will be discussed.
  • Publication
    Restricted
    Development and validation of computerised adaptive tests with and without performance diagnosis
    This research has focused on an application, in actual classroom practice, of computerised adaptive testing (CAT) in assessing learning achievement. The concept of CAT, an application of item response theory, has become feasible with the advent of high-speed computers. Basically, the testing in CAT is tailored to an examinee's ability level: the computer first obtains an estimate of the examinee's ability level based on his or her responses to initial test items, and then makes subsequent item-selection decisions that are most appropriate for measuring his/her performance level.
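    The abstract does not state which IRT model or item-selection rule the thesis used; as a sketch of the general CAT loop it describes (estimate ability, pick the next most informative item, re-estimate), the Python example below assumes a two-parameter logistic model with maximum-information selection and a simulated examinee. All names and parameter values are hypothetical.
      import math
      import random

      # Hypothetical 2PL item bank: a = discrimination, b = difficulty.
      # A real bank would be IRT-calibrated from response data.
      ITEM_BANK = [{"a": random.uniform(0.8, 2.0), "b": random.uniform(-2.5, 2.5)}
                   for _ in range(100)]

      def p_correct(theta, item):
          """2PL probability of a correct response at ability theta."""
          return 1.0 / (1.0 + math.exp(-item["a"] * (theta - item["b"])))

      def information(theta, item):
          """Fisher information of an item at ability theta."""
          p = p_correct(theta, item)
          return item["a"] ** 2 * p * (1.0 - p)

      def estimate_theta(responses):
          """Maximum-likelihood ability estimate via a coarse grid search."""
          grid = [g / 10.0 for g in range(-40, 41)]          # theta in [-4, 4]
          def log_lik(theta):
              return sum(math.log(p_correct(theta, it) if ok
                                  else 1.0 - p_correct(theta, it))
                         for it, ok in responses)
          return max(grid, key=log_lik)

      def run_cat(true_theta, test_length=15):
          """Adaptive loop: pick the most informative unused item for the
          current ability estimate, record the response, re-estimate."""
          theta_hat, responses, pool = 0.0, [], list(ITEM_BANK)
          for _ in range(test_length):
              item = max(pool, key=lambda it: information(theta_hat, it))
              pool.remove(item)
              correct = random.random() < p_correct(true_theta, item)  # simulated examinee
              responses.append((item, correct))
              theta_hat = estimate_theta(responses)
          return theta_hat

      if __name__ == "__main__":
          print("Estimated ability:", run_cat(true_theta=1.2))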

    Based on the literature review, investigative directions for this study were conceptualised in terms of the development and validation of computerised adaptive tests with and without the provision of performance diagnosis. 'Item response theory' and the unitary validity concept formed the theoretical basis of the research.

    This research on CAT was thus partitioned into three closely related studies. Study 1 focused on the calibration of a test-item bank in biology under item response theory (IRT). It also examined relevant practical issues encountered in the development of a computerised item bank. In Study 2, an automated construction of tests using the calibrated pool of test items was performed. Whilst the IRT-calibrated item bank was used to construct both conventional fixed-length and adaptive variable-length tests, the functioning of the item pool was particularly investigated for computerised adaptive testing. Study 3 validated the computerised adaptive tests with students in a Singapore school. The effects of student factors - i.e. gender, ability, computer familiarity (viz. computer ownership, computing experience and frequency of computer use), and affective characteristics (viz. attitude towards computers and towards science learning) - on three measures of CAT outcomes, namely test performance, attitude towards the test administration and benefits derived from the testing, were investigated.

    Results of this study indicated that test scores obtained with the computerised adaptive tests were comparable to those obtained with a parallel paper-and-pencil test. In brief, ability was the best predictor of test performance in biology using CAT. Overall, the students were positive towards the CAT administration, but they were most bothered by their inability to review test items during the testing. Some differences were found in how students in subgroups defined by gender, computer familiarity and affective characteristics reacted to certain aspects of CAT; however, these differences did not appear to affect their test performance on CAT. Satisfactory benefits were derived from the use of CAT, including an average reduction of 10% in test length and 50% in test time, and the testing showed adequate adaptability to individual students' ability levels. Students who took the CAT with performance diagnosis indicated the usefulness of the feedback and diagnosis facility.

    With these insights into the use of CAT in a secondary school in Singapore, implications for the future implementation and use of computerised adaptive tests in school-based testing are discussed. The limitations of the research are also explained, and suggestions for further research are indicated.
  • Publication
    Open Access
    Developing computerized language proficiency tests at NIE
    (1998-11)
    Hsui, Victoria Y.; Seow, Anthony
  • Publication
    Open Access
    Using computer-based modelling for primary science learning and assessment
    (2006-05)
    Zhang, Baohui; Jacobson, Michael J.; Looi, Chee-Kit
    Computer-based modeling is not just a means for students to learn important scientific knowledge and skills, but also a technique for assessing student understanding of science. A software tool called Model-It allows young students to create their own models so that their learning becomes more interactive and engaging. However, there is a mismatch between how students learn and how they are assessed if conventional paper-administered tests are used. This paper argues for alternative assessments that are better aligned with curriculum and instruction. Forty 4th grade students in a local Singapore school participated in a science inquiry activity that involved learning with modeling as an alternative assessment. The students individually created models of food webs to illustrate their understanding of energy flows and photosynthesis. A scoring rubric based on four criteria ("focus and structure", "accuracy", "completeness" and "functionality") was used to evaluate the models, and the modeling scores were compared to students' scores on the school's paper-based assessments of science learning. In addition, 18 students were interviewed about their understanding of models and modeling. The data are currently being analyzed; the findings of this study and their potential implications for educational assessment will be presented in this paper.
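    The abstract names the four rubric criteria but not their scales or weighting; the short Python sketch below is a hypothetical scorer showing how criterion-level ratings might be combined into a single model score for comparison with paper-based test scores. The 0-4 scale and equal weighting are assumptions, not the authors' rubric.
      # Illustrative only: combine criterion-level rubric ratings into a
      # model score. The criteria come from the paper; the 0-4 scale and
      # equal weighting are assumed for the example.
      CRITERIA = ("focus and structure", "accuracy", "completeness", "functionality")

      def score_model(ratings, max_per_criterion=4):
          """Sum the criterion ratings; return raw and percentage scores."""
          missing = set(CRITERIA) - set(ratings)
          if missing:
              raise ValueError(f"unrated criteria: {missing}")
          raw = sum(ratings[c] for c in CRITERIA)
          return raw, 100.0 * raw / (max_per_criterion * len(CRITERIA))

      if __name__ == "__main__":
          raw, pct = score_model({"focus and structure": 3, "accuracy": 4,
                                  "completeness": 2, "functionality": 3})
          print(f"model score: {raw}/16 ({pct:.0f}%)")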
  • Publication
    Open Access
    Issues and practices of school-based testing and future challenges in innovative technological assessment in Singapore
    (2001-12)
    School-based testing is an integral part of the teaching and learning process, and it provides teachers with vital information about students' learning progress. What are the current practices in school testing in Singapore, and what are some of the issues arising therefrom? Do conventional testing practices measure up to the needs of recent changes in Singapore education, in the areas of IT, Thinking, Project Work, and the "School Excellence Model"? What are some innovations in educational assessment that may be more compatible with these changes? How can computer technology support and enhance these innovative assessments? This paper will first examine the assessment system in Singapore education and discuss some issues related to current testing practices. In the light of recent changes in education, the need for a rethink of these practices is proposed. The paper will then explore some innovations in educational assessment that may bring about a more meaningful assessment of student capabilities and potential in learning. A place for technology in educational assessment will also be discussed.