Exploring the Potential of Large Language Models to Generate Formative Programming Feedback

Kiesler N, Lohr D, Keuning H (2023)


Publication Language: English

Publication Type: Conference contribution

Publication year: 2023

Conference Proceedings Title: Proceedings of the 2023 IEEE ASEE Frontiers in Education Conference

Event location: Texas, US

Abstract

Ever since the emergence of large language models (LLMs) and related applications, such as ChatGPT, their performance and error analysis for programming tasks have been subject to research. In this work-in-progress paper, we explore the potential of such LLMs for computing educators and learners, as we analyze the feedback ChatGPT generates in response to input containing program code. In particular, we aim at (1) exploring how an LLM like ChatGPT responds to students seeking help with their introductory programming tasks, and (2) identifying the feedback types in its responses. To achieve these goals, we used students' programming sequences from a dataset gathered within a CS1 course as input for ChatGPT, along with the questions required to elicit feedback and correct solutions. The results show that ChatGPT performs reasonably well for some of the introductory programming tasks and student errors, which means that students can potentially benefit. However, educators should provide guidance on how to use the provided feedback, as it can contain misleading information for novices.


How to cite

APA:

Kiesler, N., Lohr, D., & Keuning, H. (2023). Exploring the Potential of Large Language Models to Generate Formative Programming Feedback. In Proceedings of the 2023 IEEE ASEE Frontiers in Education Conference. Texas, US.

MLA:

Kiesler, Natalie, Dominic Lohr, and Hieke Keuning. "Exploring the Potential of Large Language Models to Generate Formative Programming Feedback." Proceedings of the 2023 IEEE ASEE Frontiers in Education Conference, Texas, 2023.
