Title:
Humanizing Automated Programming Feedback: Fine-Tuning Generative Models with Student-Written Feedback
Language:
English
Source:
International Educational Data Mining Society. 2025.
Availability:
International Educational Data Mining Society. e-mail: admin@educationaldatamining.org; Web site: https://educationaldatamining.org/conferences/
Peer Reviewed:
Y
Page Count:
8
Publication Date:
2025
Document Type:
Speeches/Meeting Papers
Reports - Research
Education Level:
Higher Education
Postsecondary Education
Entry Date:
2025
Accession Number:
ED675665
Database:
ERIC

Abstract:
The growing need for automated and personalized feedback in programming education has led to recent interest in leveraging generative AI for feedback generation. However, current approaches tend to rely on prompt engineering techniques in which predefined prompts guide the AI to generate feedback. This can result in rigid and constrained responses that fail to accommodate the diverse needs of students and do not reflect the style of human-written feedback from tutors or peers. In this study, we explore learnersourcing as a means to fine-tune language models for generating feedback that is more similar to that written by humans, particularly peer students. Specifically, we asked students to act in the flipped role of a tutor and write feedback on programs containing bugs. We collected approximately 1,900 instances of student-written feedback on multiple programming problems and buggy programs. To establish a baseline for comparison, we analyzed a sample of 300 instances based on correctness, length, and how the bugs are described. Using this data, we fine-tuned open-access generative models, specifically Llama3 and Phi3. Our findings indicate that fine-tuning models on learnersourced data not only produces feedback that better matches the style of feedback written by students, but also improves accuracy compared to feedback generated through prompt engineering alone, even though some student-written feedback is incorrect. This surprising finding highlights the potential of student-centered fine-tuning to improve automated feedback systems in programming education. [For the complete proceedings, see ED675583.]

Abstractor:
As Provided
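
The abstract describes supervised fine-tuning of open models (Llama3, Phi3) on learnersourced feedback pairs, but the record contains no implementation details. Below is a minimal sketch of what such fine-tuning could look like with the Hugging Face Trainer; the checkpoint name, data field names, prompt template, and hyperparameters are illustrative assumptions, not details from the paper.

# Illustrative sketch only; the paper's actual training setup is not
# published in this record. Model id, field names, prompt format, and
# hyperparameters are assumptions.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"  # stand-in for "Llama3"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without one
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Toy stand-ins for the ~1,900 learnersourced (buggy program, feedback) pairs.
pairs = [
    {"code": "def mean(xs):\n    return sum(xs) / len(xs) + 1",
     "feedback": "You add 1 after dividing, so every result is off by one."},
]

def to_example(ex):
    # Frame each pair as prompt + target so the model learns to write
    # feedback in the style of the student-written instances.
    prompt = f"Buggy program:\n{ex['code']}\n\nFeedback for the student:\n"
    return tokenizer(prompt + ex["feedback"] + tokenizer.eos_token,
                     truncation=True, max_length=512)

ds = Dataset.from_list(pairs).map(to_example,
                                  remove_columns=["code", "feedback"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="feedback-sft",
                           per_device_train_batch_size=1,
                           num_train_epochs=3),
    train_dataset=ds,
    # mlm=False yields causal-LM labels (inputs shifted by one token).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

In practice a model of this size would more likely be fine-tuned with a parameter-efficient method such as LoRA, and on the full collected dataset rather than the toy pair shown here.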