
I've been using Github Copilot for a while now, hacking on sample solutions for assignments in my 3rd-year computer graphics class as well as on a large Javascript development project, and I have found it incredibly useful.  If you aren't familiar with it, Github has trained an OpenAI-based system on all the publicly available code on Github, resulting in autocomplete-on-steroids.  It works on code, comments, text files (I've actually had it suggest useful English in a document I was writing), and anything else that's on Github.

Since it takes a lot of context into account, some of the suggestions it makes are uncannily accurate, especially if you use descriptive variable names and good comments.  It's really taken some of the drudgery out of coding (suggesting the structure of loops, filling in initializers for variables, etc.), and while some of the code suggestions it makes are poor, they are often close enough that it can be faster to accept a large for loop and then edit it than to write it myself.
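
To make that concrete, here's the kind of thing I mean (a hypothetical Javascript sketch of my own, not a captured Copilot transcript): after typing a descriptive comment and a function signature like these, the tool will often propose the loop and the accumulator essentially whole, and you accept it and move on.

    // compute the average perceived brightness of an array of {r, g, b} pixels
    function averageBrightness(pixels) {
      if (pixels.length === 0) return 0;
      let totalBrightness = 0;
      for (let i = 0; i < pixels.length; i++) {
        const p = pixels[i];
        // standard Rec. 601 luminance weights
        totalBrightness += 0.299 * p.r + 0.587 * p.g + 0.114 * p.b;
      }
      return totalBrightness / pixels.length;
    }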

The question here is: what does this mean for CS education?  As I was writing sample solutions for assignments in my graphics class, I was impressed with how helpful it was.  Since this class isn't about simple programming basics, I actually found that the suggestions it made were "mechanical" enough that it would be fine for students to use them.  Many of the suggestions that started to stray into the parts of the code students should be writing themselves were "not quite right" enough that you had to know what you wanted in order to accept and fix what it gave you.

From a pedagogical perspective, in upper-division classes like this, using Copilot should be fine, IF professors don't reuse the same problems year after year.  I give out significant sample code to start projects, with descriptive comments, so I would assume it would eventually start filling in more and more of the code as students made their solutions public on Github.  I've heard of professors forbidding students to share their code on Github because they wanted to keep reusing problems, but I think that's unfair to the students; if you have created an elegant solution to a class assignment, sharing it on your personal Github as part of your portfolio can be an important part of how you get a job down the line.

But what about lower-division classes, where the problems are much simpler and often reused?  My colleague Kyle Johnsen (CS professor at UGA) and I were chatting about it, and he sent me this video he'd found, which discusses the issue.

There are good reasons to have students work on small, carefully designed problems when they are learning to program, largely centered on "learning to think about processes and how to break down a problem."  On the other hand, learning to do this is tangled up with "learning the syntax," which is a necessary-but-annoying part of programming.

I usually tell students to think about programming as two related things, using an analogy to painting.  First, you need to be able to effortlessly execute the mechanics of programming (from syntax to libraries to common patterns in your language/environment), akin to manipulating paint and using tools (brushes, knives, etc.) to put paint on canvas in the shapes and blends that create the visual structures you want.  Second, you need to be able to think computationally: to break down large problems into smaller steps, and to know what to actually program to solve those smaller steps, akin to knowing how to take the idea for a painting and translate it into a collection of actions that work with your media.

In both cases, if step 1 is hard, it can make step 2 impossible.

Imagine how a tool like Copilot might make step 1 easier (by suggesting syntax and library usage/patterns) while letting students focus on step 2.  Certainly, the examples in the video demonstrate some of that (e.g., suggesting loops and initializers, while not necessarily having the loops do the right things for the exact problem).  Copilot doesn't deliberately draw that line today, but it could.
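
As a sketch of what that separation might look like (my own illustrative Javascript, not something Copilot currently enforces): the loop, the result array, and the object construction below are the "step 1" mechanics a tool could safely hand you, while the two lines computing the rotation are the "step 2" thinking the student should bring.

    // rotate every vertex of a 2D polygon around the origin by `angle` radians
    function rotatePolygon(vertices, angle) {
      const rotated = [];
      for (let i = 0; i < vertices.length; i++) {  // step 1: mechanical scaffolding
        const v = vertices[i];
        // step 2: knowing that a rotation by `angle` is exactly this pair of expressions
        const x = v.x * Math.cos(angle) - v.y * Math.sin(angle);
        const y = v.x * Math.sin(angle) + v.y * Math.cos(angle);
        rotated.push({ x: x, y: y });
      }
      return rotated;
    }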

For me, the upshot is this:

  • As a learning tool, Copilot can actually be quite helpful if the assignments are not reused year after year and don't rely on standard CS tropes. Say goodbye to foo, bar, buzz, and fizz (see the sketch after this list). Since Copilot does a good job of suggesting solutions to the problems in chapters of popular textbooks, it can be useful for students working through those problems for practice, but they probably shouldn't be used for graded work. And consider using Copilot yourself while creating solutions to assignments, so you can see what suggestions it gives and avoid situations where it outright suggests an answer.
  • Confronting this upfront and telling students when they are and are not allowed to use Copilot will be essential.  I can see new additions to the "Collaboration and Allowable Resources" sections of syllabi.  But this can't be the only thing we do.
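
To make the "standard tropes" point concrete: given nothing more than a comment like the one below, Copilot will almost certainly offer something very close to this complete solution, because thousands of copies of it live on Github (this is my own illustrative version, not a captured suggestion).

    // FizzBuzz: print 1 to 100, replacing multiples of 3 with "Fizz",
    // multiples of 5 with "Buzz", and multiples of both with "FizzBuzz"
    for (let i = 1; i <= 100; i++) {
      if (i % 15 === 0) {
        console.log("FizzBuzz");
      } else if (i % 3 === 0) {
        console.log("Fizz");
      } else if (i % 5 === 0) {
        console.log("Buzz");
      } else {
        console.log(i);
      }
    }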

At the end of the day, students who want to learn will learn, and those who want to cheat will try to.  Assuming the existence of these tools, and making sure all students (not just the cheaters and the ones who already have experience) know about them, will be essential.

What would be really good is a "Github Copilot Classroom," akin to "Github Classroom" (which I use, and love): a version of Copilot that only suggests certain kinds of code, or that gives the instructor control over what it may or may not suggest.  This would be hard, and starts to sound like another facet of Explainable AI (a hot research topic right now).  Perhaps someone will be interested in doing this?
