The way we (teach) code is wrong

As a senior in high school who has been programming for the past 8 years, I have an interesting and often aggravating perspective watching my classmates take their first programming courses. In this article, I discuss the problems with the way my classmates are taught and suggest solutions.

By Aaron Cruz

I notice someone turn to me in my peripheral vision. “Hey, can you help me with this?” they ask politely. Happy to help, I lean over and look at their code. They have an off-by-one error, so I point to the screen and tell them to subtract one there, explaining why their math was off. They look at me, hoping for further comments. I try again. “So, that variable there,” I look back to make sure I’m getting my point across, “is one higher than it’s supposed to be in order to get the substring.” “Mmhmm,” they say. I continue, “so, you need to subtract one in order to get the right range.” They still remain motionless, so I point as close as I can to the computer screen without actually touching it. “You need to put a subtract one, right here.” Their hands finally reach out to the keyboard and click where I was trying to get them to type. They finally understand! “subtractedby…”

“No! Wait—you have to use the subtraction sign.”

“Well, why can’t you put the word subtraction?”

“Because that’s not how it works.”

“Well why not?”

“…Because.”

For those lucky enough to have avoided writing code thus far, this is what code looks like:

#include <iostream>

using namespace std;

int main() {
    cout << "Hello, World!" << endl;
    return 0;
}

This is a piece of code that prints “Hello, world!” to the screen. It’s about the simplest C++ program possible, besides one that does absolutely nothing. Despite its simplicity, it looks complicated to students who haven’t seen any code before. It was also the first code shown in the intro to programming class I took this semester. Watching teachers explain hello world to first-time code students is about as satisfying as watching golf if the ball was made of gelatin. There are a lot of pieces here: iostream, std, cout, endl, return, int, and main, and each of them is an idea that can go really deep. Just the concept of how an “endl” interacts with text, and how text is stored at all, can be explored for hours. All the professor could do was vaguely gesture at each bit, try to explain what it did, and eventually give up when they realized explaining would take too long. To make things worse, our C++ professor often showed us this code snippet:

#include <iostream>

using namespace std;

int main() {
    cout << "Hello, World!\n";
    return 0;
}

This code does the exact same thing, and the professor never explained why “\n” and “endl” do the same thing, why one has to be in quotes and the other doesn’t, what “<<” is, or why, in order to show something to the screen, we needed to write the Australian slang term for a butt. Of course, if you’re used to programming and C++, like my professor was, you can see how the code works. But new programmers don’t see the signal; they just see noise.

Imagine, for a moment, walking into kindergarten. You climb into your comically small seat and stare at the whiteboard. Your greatest academic achievement so far has been memorizing the entire alphabet. Nonchalantly, the teacher throws this at you: “1 + 1.” Instantly, your mind is thrown into a spiral. What is 1? What is +? What kind of moon runes is this tall human throwing at you? It is too much for your neurons to handle, and you pass out due to the sheer complexity of it all.

Of course, this isn’t how kindergarten goes. Primarily because young children don’t pass out very often, but also because young children are taught math from the ground up. When I was in kindergarten, I was taught each digit one by one. Then, they taught us how to count up. Then, and only then, they taught us about “1 + 1.” From there, it was really easy to get to “1 + 2”, and “1 + 3”, and even the fabled “2 + 2.”

Programming class never teaches you the very, very basics (working with text that’s being sent to a computer) before you get to the more complicated stuff. When people tell things to computers, they’re used to autocomplete and semantic search: tools that are designed to forgive massive mistakes.

Code isn’t as forgiving. This is by design: programmers want to be exact about what they want their computers to do. Unfortunately, it’s an unusual way to interact with computers for people who don’t do it every day, and so it’s hard to do while learning the basics of procedural programming, like data types, functions, and variables. So it’s no wonder that when helping my classmates, I constantly see them making syntax errors. Most people can understand the logic behind programming; it’s just hard for them to learn how to actually express their ideas in text. It’s like trying to teach people math without teaching them numbers first.

So, what’s the solution? Well, first of all, no one should learn C++ as their first language. That “cout <<” syntax is (with a few exceptions) only used when printing, which led a lot of people to throw << and endl in places where they didn’t belong. C++ in general has a lot of awkward features that are hard to explain without a “you need that there because you do.” The simplest solution to my problem here is to teach people a simpler language. (I don’t think that’s the best solution, but if you really want to learn how to code now, check out Python, or Swift, which has some incredible tutorial software that’s unfortunately exclusive to iPad and Mac.) The other solution, and my personal favorite way to teach code, is to avoid text-based code entirely in intro programming classes. Life’s too short for syntax errors.

MIT Scratch uses colorful blocks with drop-downs to write logic. It’s simple, but it contains many of the ideas found in a higher-level language like Python. It’s just much, much easier to use. (It’s also a lot less practical for real-world applications, at the moment.) For education, it’s fantastic. And it’s not the only one like it. Check out Media Molecule’s Dreams, or Epic Games’ Blueprint for Unreal. Even Apple trusts its users to learn how to use logic to control Shortcuts, and Apple doesn’t even trust its users to change their app icons. These “visual scripting languages” work, and they work really well.

Unfortunately, colleges are never going to teach Scratch to grown adults. You might think that clearly, visual languages are just for people who aren’t smart enough to program properly. I hate to break it to you, but that’s how some people who use C++ feel about people who use Python. It’s also how some people who use Python feel about people who use C++. Sometimes, it’s even how people who use visual languages feel about people who use Python and C++. Programmers hate each other! People should just learn how to put the blocks together.

For some reason, people always want to jump straight to writing “real” code, whatever that means. As I’ve seen with a lot of my classmates, they fail to really understand what they’re doing, and then they give up, which is the worst outcome of any kind of education. I don’t have any advice for people looking to understand C++ on its own from scratch, but if you haven’t started yet and you want to, maybe try learning how to write your numbers before you do addition.