A human-centric automated essay scoring and feedback system for the development of ethical reasoning

URI
https://hdl.handle.net/10497/24836
Type
Article
Files
 ETS-26-1-147.pdf (374.23 KB)
Citation
Lee, A. V. Y., Luco, A. C., & Tan, S. C. (2023). A human-centric automated essay scoring and feedback system for the development of ethical reasoning. Educational Technology & Society, 26(1), 147-159. https://doi.org/10.30191/ETS.202301_26(1).0011
Author
Lee, Alwyn Vwen Yen
Luco, Andres Carlos
Tan, Seng Chee
Abstract
Although artificial intelligence (AI) is prevalent and impacts many facets of daily life, there is limited research on the responsible and humanistic design, implementation, and evaluation of AI, especially in education. After all, learning is inherently a social endeavor involving human interactions, so AI designs need to be approached from a humanistic perspective, or human-centered AI (HAI). This study focuses on essays as a principal means of assessing learning outcomes, through students’ writing in subjects that require arguments and justifications, such as ethics and moral reasoning. We considered AI with a human- and student-centric design for formative assessment, using an automated essay scoring (AES) and feedback system to address the challenges of running an online course with a large enrolment and to provide efficient feedback to students with substantial time savings for the instructor. The AES system was developed over four phases of an iterative design cycle. A mixed-method approach was used: instructors qualitatively coded subsets of data to train a machine learning model based on the Random Forest algorithm, and the model was then used to automatically score more essays at scale. Findings show substantial inter-rater agreement before model training and acceptable training accuracy. The AES system was slightly less accurate than human raters, but its performance can improve over further iterations of the design cycle. The system has allowed instructors to provide formative feedback, which was not possible in previous runs of the course.
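The workflow the abstract describes — instructors hand-code a subset of essays, a Random Forest model is trained on those codes, and the model then scores further essays at scale — can be sketched roughly as follows. This is a minimal illustration using scikit-learn with invented toy data; the paper does not specify its feature representation, score categories, or hyperparameters, so everything below (TF-IDF features, a 0/2 score scale, 100 trees) is an assumption, not the authors' implementation.

```python
# Minimal sketch of the AES workflow from the abstract: a hand-coded
# training subset, a Random Forest model, then scoring at scale.
# All essays, scores, features, and parameters here are illustrative.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical instructor-coded training subset.
coded_essays = [
    "Utilitarianism justifies the act because overall welfare increases.",
    "The action is wrong since it treats persons merely as means.",
    "I think it is fine because people do it all the time.",
    "Kantian ethics forbids lying regardless of the consequences.",
    "It depends on feelings, so nobody can really say.",
    "Rule utilitarians would reject this policy for eroding trust.",
]
instructor_scores = [2, 2, 0, 2, 0, 2]  # e.g. 0 = weak, 2 = strong reasoning

# Bag-of-words TF-IDF features (an assumed representation).
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(coded_essays)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, instructor_scores)

# Score new, uncoded essays automatically.
new_essays = [
    "Lying here treats the customer merely as a means to profit.",
    "Whatever, people can do what they want.",
]
predicted = model.predict(vectorizer.transform(new_essays))
print(list(predicted))
```

In the study this step sits inside an iterative design cycle: each round of human coding and model retraining is meant to narrow the gap between model scores and human raters.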
Keywords
  • Automated essay gradi...
  • Human-centric AI
  • Formative feedback
  • Machine learning
  • Ethics education

Date Issued
2023
Publisher
International Forum of Educational Technology & Society
Journal
Educational Technology & Society
DOI
10.30191/ETS.202301_26(1).0011
Funding Agency
Nanyang Technological University, Singapore

NTU Reg No: 200604393R. Copyright National Institute of Education, Nanyang Technological University (NIE NTU), Singapore
