How do we teach computing science in the brave new world of GitHub Copilot and ChatGPT?
We have pretty good programming problems to use with students. No. We HAD pretty good problems. We don’t have them anymore because almost all of them can be solved by AI.
We should get past the denial phase as soon as possible. We can’t ignore the fact that our curricula don’t work anymore.
We can’t say, “Don’t use Copilot.” Some students will comply, some won’t, and we’ll never be able to tell, because AI writes pretty darn good code with enough variability to evade detection.
So what do we do?
It seems we have two strategic directions.
First, we need to acknowledge that the brave new world IS here, and that our responsibility as educators is to teach students within it. Natural language has become a more powerful tool than any programming language, so the new desired skill is how to properly describe the problem to the large language model so that it can generate the code to solve it. We dreamed of making programming languages look more like English. Our dreams have come true. English has become the most powerful language for programming, and we should teach students to formulate problems in English in the clearest and most precise way.
But that’s not enough. AI-generated code will always be “almost right,” so debugging and testing skills are more important than ever, and students can’t debug or test code if they couldn’t have written it themselves in the first place. This means we should still teach these increasingly low-level skills, and in the new reality, the only way to do that is to take learning low-tech. Tracing code on paper, offline workstations in computer labs – here we go again!
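To make “almost right” concrete, here is a hypothetical illustration (my own, not from any particular AI assistant): a binary search that compiles, reads plausibly, and works on many inputs, but contains a classic off-by-one bug that only a deliberate test exposes. Spotting why the loop condition is wrong is exactly the kind of low-level skill the paragraph above argues we still need to teach.

```python
def buggy_binary_search(xs, target):
    """Plausible AI-style output: uses `lo < hi` with an inclusive
    upper bound, so the final converged position is never checked."""
    lo, hi = 0, len(xs) - 1
    while lo < hi:  # bug: should be lo <= hi for inclusive bounds
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def binary_search(xs, target):
    """Corrected version: inclusive bounds require `lo <= hi`."""
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# The buggy version passes a casual test...
print(buggy_binary_search([1, 3, 5], 3))  # 1, looks fine
# ...but fails when the search converges on the answer.
print(buggy_binary_search([1, 3, 5], 5))  # -1, wrong
print(binary_search([1, 3, 5], 5))        # 2, correct
```

A student who can only prompt, but not trace this loop by hand, has no way to tell which of the two functions the assistant handed them.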