[FADB+22] James Finnie-Ansley, Paul Denny, Brett A. Becker, Andrew Luxton-Reilly, and James Prather. The robots are coming: Exploring the implications of OpenAI Codex on introductory programming. In Australasian Computing Education Conference, ACE '22, pages 10–19, New York, NY, USA, 2022. Association for Computing Machinery. [ bib | DOI | http ]
Recent advances in artificial intelligence have been driven by an exponential growth in digitised data. Natural language processing, in particular, has been transformed by machine learning models such as OpenAI's GPT-3 which generates human-like text so realistic that its developers have warned of the dangers of its misuse. In recent months OpenAI released Codex, a new deep learning model trained on Python code from more than 50 million GitHub repositories. Provided with a natural language description of a programming problem as input, Codex generates solution code as output. It can also explain (in English) input code, translate code between programming languages, and more. In this work, we explore how Codex performs on typical introductory programming problems. We report its performance on real questions taken from introductory programming exams and compare it to results from students who took these same exams under normal conditions, demonstrating that Codex outscores most students. We then explore how Codex handles subtle variations in problem wording using several published variants of the well-known “Rainfall Problem” along with one unpublished variant we have used in our teaching. We find the model passes many test cases for all variants. We also explore how much variation there is in the Codex generated solutions, observing that an identical input prompt frequently leads to very different solutions in terms of algorithmic approach and code length. Finally, we discuss the implications that such technology will have for computing education as it continues to evolve, including both challenges and opportunities.
Keywords: CS1, AI, code writing, GitHub, code generation, machine learning, introductory programming, neural networks, artificial intelligence, OpenAI, GPT-3, Codex, academic integrity, deep learning, copilot, novice programming
[SDHL22] Sami Sarsa, Paul Denny, Arto Hellas, and Juho Leinonen. Automatic generation of programming exercises and code explanations using large language models. In Proceedings of the 2022 ACM Conference on International Computing Education Research - Volume 1, ICER '22, pages 27–43, New York, NY, USA, 2022. Association for Computing Machinery. [ bib | DOI | http ]
This article explores the natural language generation capabilities of large language models with application to the production of two types of learning resources common in programming courses. Using OpenAI Codex as the large language model, we create programming exercises (including sample solutions and test cases) and code explanations, assessing these qualitatively and quantitatively. Our results suggest that the majority of the automatically generated content is both novel and sensible, and in some cases ready to use as is. When creating exercises we find that it is remarkably easy to influence both the programming concepts and the contextual themes they contain, simply by supplying keywords as input to the model. Our analysis suggests that there is significant value in massive generative machine learning models as a tool for instructors, although there remains a need for some oversight to ensure the quality of the generated content before it is delivered to students. We further discuss the implications of OpenAI Codex and similar tools for introductory programming education and highlight future research streams that have the potential to improve the quality of the educational experience for both teachers and students alike.
Keywords: Code explanations, OpenAI Codex, Programming exercises, Exercise generation, Robosourcing, Natural language generation, Resource generation, GPT-3, Automated feedback, CS1, Large language models
[Nie22] Therese Nieva. Automatic programming exercise generation. Bachelor's thesis, Information Technology and Electrical Engineering, Brisbane, AU, 2022. Supervisor: Richard Thomas. [ bib ]
[BDFA+22] Brett A. Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, James Prather, and Eddie Antonio Santos. Programming is hard -- or at least it used to be: Educational opportunities and challenges of AI code generation, 2022. [ bib | DOI | http ]
Keywords: Human-Computer Interaction (cs.HC), Artificial Intelligence (cs.AI), Computers and Society (cs.CY), Machine Learning (cs.LG), Software Engineering (cs.SE), FOS: Computer and information sciences, K.3.2; I.2; H.5.2
[Deg22] Jonas Degrave. Building a virtual machine inside ChatGPT. https://www.engraved.blog/building-a-virtual-machine-inside/, December 2022. [ bib ]
[HFK22] Thilo Hagendorff, Sarah Fabi, and Michal Kosinski. Machine intuition: Uncovering human-like intuitive decision-making in GPT-3.5, 2022. [ bib | DOI | http ]
Keywords: Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences
[LWSD22] Bin Li, Yixuan Weng, Qiya Song, and Hanjun Deng. Artificial text detection with multiple training strategies. 2022. [ bib | DOI | http ]
Keywords: Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences
[Joy23] David A. Joyner. ChatGPT in education: Partner or pariah? XRDS, 29(3):48–51, April 2023. [ bib | DOI | http ]
ChatGPT has taken the world by storm, with educators reeling from its implications for curricula and assessment. This article examines how ChatGPT resembles earlier technologies and predicts how we can expect it to impact education going forward.
Continues the argument that generative AI tools such as ChatGPT will follow the same path in education as the calculator or the search engine, i.e. elevate students' capacity to learn more complex subjects. Suggests that the career of prompt engineer may be part of the answer, but stops regrettably short of examining what those more complex skills might look like in the realm of CSEd. Does not address the big question of which key skills, if any, CS students must maintain even though ChatGPT can perform them.

This file was generated by bibtex2html 1.99.