Evaluation of LLM Tools for Feedback Generation in a Course on Concurrent Programming

Title:
Evaluation of LLM Tools for Feedback Generation in a Course on Concurrent Programming
Language:
English
Authors:
Iria Estévez-Ayres (ORCID 0000-0002-1047-5398), Patricia Callejo (ORCID 0000-0001-6124-6213), Miguel Ángel Hombrados-Herrera (ORCID 0000-0002-8254-8795), Carlos Alario-Hoyos (ORCID 0000-0002-3082-0814), Carlos Delgado Kloos (ORCID 0000-0003-4093-3705)
Source:
International Journal of Artificial Intelligence in Education. 2025 35(2):774-790.
Availability:
Springer. Available from: Springer Nature. One New York Plaza, Suite 4600, New York, NY 10004. Tel: 800-777-4643; Tel: 212-460-1500; Fax: 212-460-1700; e-mail: customerservice@springernature.com; Web site: https://link.springer.com/
Peer Reviewed:
Y
Page Count:
17
Publication Date:
2025
Document Type:
Journal Articles
Reports - Evaluative
Education Level:
Higher Education
Postsecondary Education
DOI:
10.1007/s40593-024-00406-0
ISSN:
1560-4292
1560-4306
Entry Date:
2025
Accession Number:
EJ1488271
Database:
ERIC

Abstract:

The emergence of Large Language Models (LLMs) has marked a significant change in education. These LLMs and their associated chatbots offer several advantages for both students and educators, including their use as teaching assistants for content creation or summarisation. This paper evaluates the capacity of LLM chatbots to provide feedback on student exercises in a university programming course. The complexity of the programming topic studied here (concurrency) makes feedback to students all the more important. The authors first assessed the exercises submitted by students. Then, ChatGPT (from OpenAI) and Bard (from Google) were used to evaluate each exercise, looking for typical concurrency errors such as starvation, deadlocks, or race conditions. Compared against the ground-truth evaluations performed by expert teachers, neither tool could accurately assess the exercises, despite the generally positive reception of LLMs within the educational sector. All attempts resulted in an accuracy rate of 50%, showing that both tools are limited in their ability to evaluate these particular exercises effectively, specifically in finding typical concurrency errors.

As Provided
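
Editor's note: the record does not include the paper's exercises or state which programming language the course uses. For readers unfamiliar with the error classes listed in the abstract, the following is a minimal, purely illustrative sketch of one of them, a race condition; Java is assumed here only for illustration and is not drawn from the record.

// Two threads increment a shared counter without synchronization.
// The read-modify-write in count++ can interleave, so updates are lost:
// the classic race condition the abstract refers to.
public class Counter {
    private int count = 0;

    public void increment() {
        count++; // not atomic: read, add, write
    }

    public int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                c.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but typically prints less because of lost updates.
        System.out.println(c.get());
    }
}

Making increment() synchronized (or using java.util.concurrent.atomic.AtomicInteger) removes the race; detecting such defects in student submissions is the kind of judgement the paper asked ChatGPT and Bard to make.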