The Programming Language Everybody Should Know: A Junior Engineer’s Perspective
As the author of this article, I am deeply committed to maintaining the highest standards of integrity and transparency in my work. For that reason, I want to clarify that no part of the article below was generated or written by ChatGPT or any other AI tool.
I believe in the power of human creativity and intellect, and this article is a testament to that belief. Thank you for taking the time to read, and I hope you find the insights provided both informative and thought-provoking.
"C is quirky, flawed, and an enormous success." - Dennis Ritchie
The Handcuff of Abstraction
My first programming language was R, then Python, then C#. These were all great (and still are), but I wish I had learned C or C++. Rust might seem like a good candidate, but even with a time machine I wouldn't choose it, because its main "advantage" over C and C++ is that it abstracts away the very knowledge I wish I had gained at step one: memory management, garbage collection, and everything in between.
In an interview, Dr. Chuck, who holds a Ph.D. from Michigan State University and created the "Programming For Everybody" course, dismisses Rust as a successor to C. He states that "It doesn't look all that likely that Rust is going to be the successor of C." He goes on to say that "Rust has a number of C flaws but has modern affordances," so he sees no point in the language. Obviously, that's a high-level summary, but I'm sure his reasoning has deep technical roots that were abstracted away in his explanation. He continues, "If [Rust] was like a really safe C, I could get behind that, but it's not." Personally, I trust Dr. Chuck's opinion. His knowledge spans decades, and his love for the history of the technologies we use today is incredible.
If you're a C# (Java for Windows), Java, or Python developer like myself, you know that memory management has been put behind guard rails for us, and that comfort can become a crutch if we ever want to venture into cybersecurity, embedded software engineering, or even just learn how to optimize our software for efficiency. In a previous article, I discussed how efficient code is going to become increasingly important, especially given the limits of Moore's Law and the slowing growth of CPU processing power as process nodes struggle to shrink to 3 nm and below.
Nevertheless, as we run into lower-level problems in our development, a good base of knowledge in memory management is vital. For devs who want to get into robotics and work with embedded software, it's practically a requirement. I'm not the only one saying it, either. Many experienced developers say that learning C is the most important thing you can do as a programmer, since it's the closest (besides assembly) that we're going to get to the metal.
The Big Three
In the world of programming, there have been three major changes that have immensely increased developer productivity. A while back, I listened to a freeCodeCamp podcast featuring Joel Spolsky, co-founder of Stack Overflow and Trello. In the episode, Joel talked about AI, business, and best practices when programming, and he also had some insight into what he called the three major innovations in programming: the development of C was the first, languages with automatic memory management were the second, and the introduction of AI was the third.
In the podcast, Joel said that since the era of the C programming language, there have been two serious innovations in developer productivity: inventions that, as he put it, have "sped up developer productivity by a lot." The baseline is C-style manual memory management, where you have to manage the memory all by yourself; the first innovation was automatic memory management. Nowadays, as Spolsky put it, "That (manual memory allocation) turns out to be a big waste of time because we've got garbage collection and reference-counted languages. The programming languages of today (Java, C#, Python, etc.) will manage memory for you in a way that is good enough for almost all use cases. The exception is microcontrollers and really, really tiny computers—maybe operating systems," he says. Other than at the lowest level, he continues, "It’s fine to go ahead and let the programming language and operating systems manage your memory for you. It will be fine. It’s amazing to see how much time people spent on memory management in the era before automatic memory management came out. It was the source of all bugs." In fact, security flaws were often directly tied to memory management bugs. Spolsky went on to say, "This was such a huge improvement in developer productivity." He recalled that the first language to do this was Java, but quickly corrected himself, recounting that it was actually Visual Basic that pioneered it. "Visual Basic and then Java," he clarified.
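To make Spolsky's point concrete, here is a minimal C++ sketch of the bookkeeping that manual memory management forces on you; the duplicate() helper is my own illustration, not code from the podcast.

#include <cstdio>
#include <cstdlib>
#include <cstring>

// Hypothetical helper: copy a string onto the heap.
// In C and old-style C++, the caller owns the result: forget to free it
// and you leak memory; free it twice and you corrupt the heap.
char* duplicate(const char* src) {
    char* copy = static_cast<char*>(std::malloc(std::strlen(src) + 1));
    if (copy == nullptr) return nullptr;   // allocation can fail
    std::strcpy(copy, src);
    return copy;
}

int main() {
    char* greeting = duplicate("Hello, World.");
    if (greeting != nullptr) {
        std::printf("%s\n", greeting);
        std::free(greeting);  // the bookkeeping a garbage collector does for you
    }
    return 0;
}

In Java, C#, or Python, the last line simply doesn't exist: the runtime notices when nothing references the copy anymore and reclaims it.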
He then shifted to the second major innovation since the inception of the C programming language (the third overall): AI. Initially, even Joel was skeptical of AI being able to write functioning code. "I was pretty confident that the quality of code generated by ChatGPT-4 could not possibly... (laughing) could not possibly ever work—it’s gonna be a nightmare. We are at a point where a smart, knowledgeable coder using ChatGPT-4 can kind of accelerate the work that they're doing, but they have to know what they're doing." The introduction of AI into the lives of programmers is what he deems the third big innovation in programming, but he still recognizes the need for an understanding of the basics. He says, "You still have to know the C stuff because at this point I'm literally having people show me code and saying 'Why doesn't this work?' and it was obviously generated by ChatGPT, it's got an error, and they don't know how the original code was supposed to work, but I do, so I can find the bug for them in a millionth of a second and they can't. So these tools accelerate you, but if you don't know what they're accelerating, then you run into trouble and you’re very likely to hit a wall. Which is why I say go back to starting with principles and let the tools speed you up afterward, but you really gotta know what they're speeding up."
This comes from the great Joel Spolsky, and he’s not the only one saying it.
From Microcontrollers Onward
Alan Kay, another great computer scientist, said it best: "People who are really serious about software should make their own hardware." While I'm not exactly sure what he meant by this, I take it to mean that the more you know about the computer's hardware, the better you will understand, and be able to maximize, your software's efficiency. One way we can get close to the hardware is by working with microcontrollers, such as the Arduino. Over the past month, I dipped my toes into the world of microcontrollers and followed along with a YouTube course where I learned some solid-state physics, some high-level EE concepts, and some simplified C++ (the language Arduino uses). My projects were very simple, so I never reached the point where I had to manage memory (which was my initial goal), but once projects start growing, so too will my need to understand memory allocation.
If you are new to programming or trying to figure out how to start, I think jumping into the world of microcontrollers is great. It's hands-on, it's engaging, and that makes learning a lot more fun. In my experience, I was able to construct different LED patterns and work with potentiometers to model things like the dimmer-style light switches in our homes and the volume knobs in our cars. In the end, when we change the volume or dim the lights, we're just modifying the voltage being passed to the outputs (speakers or LEDs, in this case). I was also able to control the on and off state of an LED based on the level of light in the room with a photoresistor; a sketch along those lines is shown below. Seeing that work was pretty cool.
Regardless, the point is that it's engaging. Backend development (APIs, databases, networking, etc.) can be super dry because a lot has to happen before you see any output. By contrast, front-end development provides instantaneous output, which I think is important for an entry-level developer to see. Working with the Arduino was like front-end development on steroids. Not only could I see virtually instantaneous output from my program, but I could touch and play with the LEDs, the Arduino, the wires, the resistors, and the breadboard, which kept me actively engaged. Long story short, I think being introduced to programming with a microcontroller like an Arduino is very beneficial. It's highly interactive, it's engaging, and as projects get more complex, it provides an opportunity to learn about the features that low-level languages like C and C++ provide.
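For a sense of how small these programs are, here is a minimal Arduino-style C++ sketch in the spirit of my photoresistor project; the pin numbers and the threshold are assumptions that depend on how you wire the voltage divider and which board you use.

// Turn an LED on when the room gets dark, using a photoresistor in a voltage divider.
const int SENSOR_PIN = A0;        // analog input from the photoresistor divider (assumed wiring)
const int LED_PIN = 13;           // built-in LED on most Arduino boards
const int DARK_THRESHOLD = 400;   // 0-1023 reading below which we call it "dark" (tune for your sensor)

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  int lightLevel = analogRead(SENSOR_PIN);  // read the divider voltage as a value from 0 to 1023
  if (lightLevel < DARK_THRESHOLD) {
    digitalWrite(LED_PIN, HIGH);  // dark room: turn the LED on
  } else {
    digitalWrite(LED_PIN, LOW);   // bright room: turn the LED off
  }
  delay(100);  // small pause so we don't poll faster than we need to
}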
History of C
Now, onto the history (you know I couldn’t pass up the history on this stuff). Before C there was B, which appeared in 1969. B had a number of problems, and these were highlighted by the exponentially growing chip industry. With the rise of computing power came the need to 1) maximize memory efficiency at the system level and 2) take advantage of the growing computational power of processors. B was limited in this respect, in part because its typeless, word-oriented design made pointer arithmetic awkward. C changed that. C also introduced primitive types (int, char, float, double) as well as structs, which allowed for the more complex data structures needed for system-level engineering. Beyond B's shortcomings, there was also the problem of software being tied to specific hardware. Dialects of languages like Fortran and COBOL varied from machine to machine, so without extensive modification, software was effectively tied to the computer it was written for, which obviously has huge limitations. C was designed so the same program could be compiled for different underlying hardware architectures. These are just a couple of the massive improvements that came with C. In 1972, C was invented by one of the greatest computer scientists to ever live: Dennis Ritchie. At the time, Ritchie worked at Bell Labs (AT&T). Since its inception, many programming languages have come and gone, but C has stood the test of time. In fact, it has seen somewhat of a resurgence lately thanks to the growing field of embedded systems.
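To make those two features concrete, here is a minimal sketch of structs and typed pointer arithmetic; the Point struct and the path array are my own toy example, and the syntax shown compiles as both C and C++.

#include <stdio.h>

/* A struct groups related fields into one type, something B could not express. */
struct Point {
    int x;
    int y;
};

int main(void) {
    /* Typed pointer arithmetic: incrementing a Point pointer advances the address
       by sizeof(struct Point), so the compiler does the scaling for us. */
    struct Point path[3] = { {0, 0}, {1, 2}, {3, 5} };
    struct Point *p = path;

    for (int i = 0; i < 3; ++i) {
        printf("(%d, %d)\n", p->x, p->y);
        ++p;  /* step to the next struct in the array */
    }
    return 0;
}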
History of C++
Roughly a decade after the inception of C, another gentleman (and by gentleman I mean computer science god) jumped onto the scene: Bjarne Stroustrup, who also worked at Bell Labs. His "C with Classes" project, begun in 1979, became C++ in 1983. C++, or "incremented" C, extended C with object-oriented programming, which offered a new way to model the real world with classes and objects. In addition to OOP, C++ introduced structured exception handling (try, throw, catch); in C, error handling is done manually with return codes. Furthermore, C++ introduced templates, which let you write functions and classes parameterized by type; templates are conceptually similar to generics in C#. All in all, C++ introduced features that enabled multi-paradigm programming, allowing developers to design their programs in a functional, procedural, or object-oriented style. With C, engineers were largely limited to procedural programming, where an application is built up as a sequence of functions. With OOP, many of those functions become methods that live inside the class where their presence makes sense.
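As a quick illustration of those three additions (classes, exceptions, and templates), here is a minimal C++ sketch; the Stack class and its fixed capacity are my own toy example, not anything from the standard library.

#include <iostream>
#include <stdexcept>

// A class template works for any element type, similar in spirit to C# generics.
template <typename T>
class Stack {
public:
    void push(const T& value) {
        if (count_ == Capacity) {
            throw std::overflow_error("stack is full");   // structured error handling
        }
        items_[count_++] = value;
    }

    T pop() {
        if (count_ == 0) {
            throw std::underflow_error("stack is empty");
        }
        return items_[--count_];
    }

private:
    static const int Capacity = 8;  // fixed size keeps the example short
    T items_[Capacity];
    int count_ = 0;
};

int main() {
    Stack<int> numbers;   // the same class, instantiated for int
    numbers.push(42);
    std::cout << numbers.pop() << "\n";

    try {
        numbers.pop();    // popping an empty stack throws
    } catch (const std::exception& e) {
        std::cout << "caught: " << e.what() << "\n";
    }
    return 0;
}

In C, the equivalent code would return an error code from pop() and rely on every caller remembering to check it.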
Moving Forward
From my perspective, it drives me crazy that there’s another abstracted layer I’m not familiar with. I know about the stack and how it holds a function's local variables for the duration of that function's call. The heap, meanwhile, is for dynamic memory: it holds objects instantiated from classes, and in garbage-collected languages those objects live there until nothing references them anymore. Besides a few more pieces here and there, that’s pretty much what I know, and I still don't actually know what’s happening under the hood. Low-level languages like C and C++ take years to master, but an understanding of them is undoubtedly worth it. For me, it will probably be another year or so before I really start getting into C. For anybody who's new to programming, getting acquainted with the features of low-level languages through something like a microcontroller can be very engaging. It provides immediate feedback, similar to the first time a developer sees “Hello, World.” show up in their browser window.
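To put my current mental model in code, here is a minimal C++ sketch of the stack/heap distinction; the Account struct and the use of a smart pointer (standing in for the bookkeeping a garbage collector does in C# or Java) are assumptions I chose for the example.

#include <iostream>
#include <memory>

struct Account {
    double balance = 0.0;
};

void demo() {
    // Stack: 'local' exists only for the duration of this function call and
    // is reclaimed automatically when the function returns.
    Account local;
    local.balance = 10.0;

    // Heap: 'shared' is allocated dynamically. In C++ we decide when it dies;
    // here a smart pointer frees it once the last reference goes away, which is
    // roughly the job a garbage collector does for you in C# or Java.
    std::shared_ptr<Account> shared = std::make_shared<Account>();
    shared->balance = 20.0;

    std::cout << local.balance + shared->balance << "\n";
}   // 'local' is popped off the stack here; 'shared' is freed when its
    // reference count drops to zero.

int main() {
    demo();
    return 0;
}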
I was hoping to get into the differences between C and C++ in terms of how they are actually used in industry projects and operations, but I realize that I’m not particularly well equipped to talk in depth on this subject just yet. In a couple of years, I'll get into the details, but until then, and as always, I hope you were able to take something from this read. Good luck.