Conference Papers


Recent Submissions

Now showing 1 - 5 of 1732
  • Publication
    Open Access
    The role of peer feedback on the quality of students’ computer-supported collaborative argumentation
    (Global Chinese Society on Computers in Education, 2023)
    Ng, Eng Eng; Li, Xinyi; Chai, Aileen Siew Cheng; Lyu, Qianru
    The importance of peer feedback in collaborative argumentation has been well-established. However, little is known about the extent to which peer feedback is associated with the quality of collaborative argumentation. In particular, there is limited evidence for how specific types of feedback are related to argumentation quality. This study investigated peer feedback against four dimensions of collaborative argumentation quality (clarity, multiple perspectives, selection of evidence, and elaboration and depth). Collaborative argumentation quality was also compared against peer feedback types (appropriateness, specificity, and elaboration). In this design-based research (DBR), a class of 40 Secondary Three students in Singapore participated in three cycles of argumentation and peer feedback activities using the AppleTree online learning environment, each cycle consisting of five collaborative learning phases scripted by the Spiral Model of Collaborative Knowledge Improvement (SMCKI): individual ideation, group synergy, peer critique, group refinement, and individual achievement. Scaffolds of sentence openers and reflections were added in Cycles 2 and 3. Quantitative comparisons of argumentation and peer feedback quality across the three cycles revealed that, except for the multiple perspectives dimension of argumentation quality, students performed significantly better in forming their arguments and giving peer feedback. Additionally, the quality of argumentation improved significantly over the three cycles when accounting for peer feedback types as correlates, and vice versa.
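    The abstract reports significance tests on rubric scores across the three cycles. As a rough illustration of that kind of analysis (not the study's actual procedure), the sketch below runs a Friedman test on hypothetical repeated-measures rubric scores and correlates them with hypothetical feedback-quality scores; all values, group counts, and dimension names are invented placeholders.

```python
# A rough illustration, not the study's actual analysis: Friedman test on
# hypothetical repeated-measures rubric scores (1-5) for one argumentation
# dimension across the three cycles, plus a Spearman correlation between
# peer-feedback quality and argumentation quality. All values are invented.
import numpy as np
from scipy.stats import friedmanchisquare, spearmanr

rng = np.random.default_rng(0)
n_groups = 10  # hypothetical number of student groups

# Placeholder "clarity" scores for Cycles 1-3 (same groups each cycle).
cycle1 = rng.integers(1, 4, n_groups)
cycle2 = rng.integers(2, 5, n_groups)
cycle3 = rng.integers(3, 6, n_groups)

# Did scores differ significantly across the repeated measures?
stat, p = friedmanchisquare(cycle1, cycle2, cycle3)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.3f}")

# Placeholder feedback "specificity" scores, correlated with Cycle 3 quality.
specificity = rng.integers(1, 6, n_groups)
rho, p_rho = spearmanr(specificity, cycle3)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```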
  • Publication
    Open Access
    Prompt-based and fine-tuned GPT models for context-dependent and -independent deductive coding in social annotation
    (Association for Computing Machinery, 2024)
    Hou, Chenyu; Zheng, Juan; Zhang, Lishan; Huang, Xiaoshan; Zhong, Tianlong; Li, Shan; Du, Hanxiang; Ker, Chin Lee
    GPT has demonstrated impressive capabilities in executing various natural language processing (NLP) and reasoning tasks, showcasing its potential for deductive coding in social annotations. This research explored the effectiveness of prompt engineering and fine-tuning approaches for GPT in deductive coding of context-dependent and context-independent dimensions. Coding context-dependent dimensions (i.e., Theorizing, Integration, Reflection) requires a contextualized understanding that connects the target comment with reading materials and previous comments, whereas coding context-independent dimensions (i.e., Appraisal, Questioning, Social, Curiosity, Surprise) relies more on the comment itself. Utilizing strategies such as prompt decomposition, multi-prompt learning, and a codebook-centered approach, we found that prompt engineering can achieve fair to substantial agreement with expert-labeled data across various coding dimensions. These results affirm GPT’s potential for effective application in real-world coding tasks. Compared to context-independent coding, context-dependent dimensions had lower agreement with expert-labeled data. To enhance accuracy, GPT models were fine-tuned using 102 pieces of expert-labeled data, with an additional 102 cases used for validation. The fine-tuned models demonstrated substantial agreement with ground truth in context-independent dimensions and elevated the inter-rater reliability of context-dependent categories to moderate levels. This approach represents a promising path for significantly reducing human labor and time, especially with large unstructured datasets, without sacrificing the accuracy and reliability of deductive coding tasks in social annotation. The study marks a step toward optimizing and streamlining coding processes in social annotation. Our findings suggest the promise of using GPT to analyze qualitative data and provide detailed, immediate feedback for students to elicit deepening inquiries.
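    A minimal sketch of what codebook-centered, prompt-based deductive coding with an agreement check can look like in practice; the codebook wording, labels, comments, and model name below are assumptions for illustration, not the paper's actual prompts or data.

```python
# A minimal sketch of prompt-based deductive coding against a codebook,
# with Cohen's kappa computed against expert labels. Everything here
# (codebook text, labels, comments, model choice) is an invented example.
from openai import OpenAI
from sklearn.metrics import cohen_kappa_score

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CODEBOOK = (
    "Code the student comment as 'Questioning' if it asks a genuine question "
    "about the reading; otherwise code it as 'Other'. Reply with one label."
)

def code_comment(comment: str) -> str:
    """Ask the model for a single deductive code for one comment."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice, not the paper's model
        messages=[
            {"role": "system", "content": CODEBOOK},
            {"role": "user", "content": comment},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

# Hypothetical expert labels vs. model labels for a handful of comments.
comments = ["Why does the author say that?", "I agree with this point."]
expert = ["Questioning", "Other"]
model = [code_comment(c) for c in comments]

# Inter-rater reliability between the model and the expert coder.
print("Cohen's kappa:", cohen_kappa_score(expert, model))
```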
  • Publication
    Open Access
    Testing only
    (2024)
    Ow, Stephanie
  • Publication
    Open Access
    Ethnicity and intonational variation in Singapore English child-directed speech
    (Chinese and Oriental Languages Information Processing Society, 2023)
    Chong, Adam J.; Post, Brechtje
  • Publication
    Open Access
    MATHDIAL: A dialogue tutoring dataset with rich pedagogical properties grounded in math reasoning problems
    (Association for Computational Linguistics, 2023)
    Macina, Jakub; Daheim, Nico; Pal Chowdhury, Sankalan; Kapur, Manu; Gurevych, Iryna; Sachan, Mrinmaya
    While automatic dialogue tutors hold great potential in making education personalized and more accessible, research on such systems has been hampered by a lack of sufficiently large and high-quality datasets. Collecting such datasets remains challenging, as recording tutoring sessions raises privacy concerns and crowdsourcing leads to insufficient data quality. To address this, we propose a framework to generate such dialogues by pairing human teachers with a Large Language Model (LLM) prompted to represent common student errors. We describe how we use this framework to collect MATHDIAL, a dataset of 3k one-to-one teacher-student tutoring dialogues grounded in multi-step math reasoning problems. While models like GPT-3 are good problem solvers, they fail at tutoring because they generate factually incorrect feedback or are prone to revealing solutions to students too early. To overcome this, we let teachers provide learning opportunities to students by guiding them using various scaffolding questions according to a taxonomy of teacher moves. We demonstrate that MATHDIAL and its extensive annotations can be used to fine-tune models to be more effective tutors (and not just solvers). We confirm this by automatic and human evaluation, notably in an interactive setting that measures the trade-off between student solving success and telling solutions. The dataset is released publicly.
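    As a rough sketch of the data-collection idea described here (an LLM role-playing a student with a seeded error while a human teacher supplies the tutor turns), the snippet below shows one way such a loop could be wired up; the persona prompt, model name, and example problem are invented, not the paper's actual setup.

```python
# A rough sketch, not the paper's pipeline: an LLM is prompted to play a
# student who defends a seeded incorrect solution, while a human teacher
# types the tutor ("user") turns. Model name and prompts are assumptions.
from openai import OpenAI

client = OpenAI()

STUDENT_PERSONA = (
    "You are a middle-school student working on the problem below. "
    "You believe the (incorrect) solution shown and defend it until the "
    "teacher's questions lead you to find the mistake yourself.\n"
    "Problem: {problem}\nYour incorrect solution: {wrong_solution}"
)

def student_reply(problem: str, wrong_solution: str, dialogue: list[dict]) -> str:
    """One simulated student turn, conditioned on the dialogue so far."""
    messages = [{"role": "system",
                 "content": STUDENT_PERSONA.format(problem=problem,
                                                   wrong_solution=wrong_solution)}]
    messages += dialogue  # alternating teacher ("user") / student ("assistant") turns
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content

# The human teacher supplies each "user" turn, e.g. a scaffolding question:
dialogue = [{"role": "user",
             "content": "Can you walk me through how you got the total cost?"}]
print(student_reply("A pen costs $2 and a book costs 3 times as much. "
                    "What do both cost together?", "2 + 3 = $5", dialogue))
```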