Peer feedback feature analysis with large language models: An exploratory study
Citation
Lyu, Q., Lin, Z., & Chen, W. (2024). Peer feedback feature analysis with large language models: An exploratory study. In A. Kashihara, B. Jiang, M. M. Rodrigo, & J. O. Sugay (Eds.), Proceedings of the 32nd International Conference on Computers in Education (Volume 1). Asia-Pacific Society for Computers in Education. https://doi.org/10.58459/icce.2024.4861
Abstract
Peer feedback is a pedagogical strategy for peer learning. Despite recent indications of Large Language Models' (LLMs) potential for content analysis, there is limited empirical exploration of their application in supporting the peer feedback process. This study enhances the analytical approach to peer feedback activities by employing state-of-the-art LLMs for automated peer feedback feature detection. It critically compares three models—GPT-3.5 Turbo, Gemini 1.0 Pro, and Claude 3 Sonnet—to evaluate their effectiveness in automated peer feedback feature detection. The study involved 69 engineering students from a university in Singapore participating in peer feedback activities on the online platform Miro. A total of 535 peer feedback instances were collected and human-coded for eleven features, resulting in a dataset of 5,885 labeled samples. These features included various cognitive and affective dimensions, elaboration, and specificity. The results indicate that GPT-3.5 Turbo is the most effective model, offering the best combination of performance and cost-effectiveness. Gemini 1.0 Pro is also a viable option with its higher throughput and larger context window, making it particularly suitable for educational contexts with smaller sample sizes. Conversely, Claude 3 Sonnet, despite its larger context window, is less competitive due to higher costs and lower performance, and its lack of support for training and fine-tuning with researchers' data limits its adaptability. This research contributes to the fields of AI in education and peer feedback by exploring the use of LLMs for automated analysis. It highlights the feasibility of employing and fine-tuning existing LLMs to support pedagogical design and evaluation from a process-oriented perspective.
Date Issued
2024
ISBN
9786269689040 (online)
Publisher
Asia-Pacific Society for Computers in Education
Grant ID
021799-00001
Funding Agency
Nanyang Technological University, Singapore