I like coding in C. I like manually allocating memory, opening the registers window in Visual Studio to watch the value of the eax register, blitting graphics to the screen, and all the stuff Dr. Dobb's wrote about in the 90s. My programming friends seem to believe that understanding this level of programming is good in a hand-wavy, theoretical sense, but when you consider all the web development, Java frameworks, and existing libraries most programmers rely on today, it’s hard to pin down a solid answer to the question ‘Why Learn C?’. This is my attempt to answer that question, and I believe it comes down to the basic programming concept of abstraction.
For this article, my definition of abstraction is a way to hide the parts of a library, a piece of code, or a technology that you do not need to think about while using it in your own work. Abstraction is a tool that has lowered the barrier of entry to programming for everyone, because it gives us a way to tuck away the details of the hardware itself. Learning to think the way a computer operates is a hard task. Computers are meticulous and unforgiving. Put the tiniest flaw in your logic and they will either give you horribly wrong answers or refuse to work altogether. There have always been programmers, people who love logic, who drifted naturally in the direction of computer thinking and were always going to be excellent engineers, whether they were working with Python or punch cards. These people do not need any special guidance. They will study for hours on end to wrap their minds around computer architecture, and will emerge as brilliant developers on their own. That is a small subset of the general population, though.
The modern economy needs more people than just these zealous souls, which makes easing that barrier of entry to the world of computer manipulation a Good Thing. It gets more people developing software and decreases technophobia in society. We’ve seen articles for years now – decades even – saying that computer jobs are going to grow by a double-digit percentage over the next ten years. We need more people in the industry than just those go-getters who are going to end up there regardless of how hard it is to get in at first. Even beyond people who want to program as a career, the world is increasingly technical. The level of technology in a teenager’s daily life is so different from mine 20 years ago that even I can have a hard time relating. I got the internet for the first time in my early teen years! I didn’t have a cell phone until I was in my 20s. My high school had a beeper policy! We need people to be as prepared as possible to work with this technology, and understanding simple code manipulation will make that preparation easier.
This barrier is currently being lowered by popular languages that are object-oriented, meaning they encourage the programmer to think using a model based on objects, a model of the real world. That model takes no account of programmatic efficiency, cache behavior, or effective use of virtual memory. Nor should it, as one of its aims is to move the programmer away from having to think the way the computer operates and closer to a human way of thinking. Does someone brand new to programming, who’s excited about making changes to the HTML on a page or setting up a blog to customize later, or who just wants a slow introduction to more complex topics, really need to worry about cache coherency? Or threading? Clearly that’s a problem for down the road. Maybe that’s a road some developers will never need to walk down at all, depending on their focus.
I’ve been working on Handmade Quake for about a month now, and have emailed with dozens of people following the series. Many people want to understand how Quake worked because they loved the game when they played it, which I expected, but I’ve seen even more interest in how the game itself works at a low level. Just saying the phrase ‘low-level programming’ is enough to pique the interest of a lot of programmers. The fact that Quake was written in C makes the idea even more appealing. C operates close to the hardware, to the point that with some practice, it’s not difficult to take a piece of C code and estimate how it will be translated into assembly by the compiler – at least a non-optimizing compiler. You can’t really get any closer to the hardware without getting into hardware specifics, which makes C about as generically low-level as you can get.
I believe the benefit of understanding how to code in C is that you can consider it the lowest-level programming language there is, other than assembly. All abstraction at this point goes upwards, and any features at all, be they concepts like object-oriented or functional programming, or runtimes like the virtual machines for Java or Python, can be built upwards from the level you’re on. If you learn Python, you’re inside its black box, and you do not have the ability to move outside of that box. You’re part of the abstraction itself. If you learn C, you can download the source of the Python interpreter itself and see how the hardware actually runs a Python program. Before, you were programming with kid gloves on, leaky abstractions and all. Now, you’re using abstraction the way it was meant to be used. You’re choosing to hide away unnecessary complexity, with the option of removing it and looking into the C code whenever you choose.