Then and Now
Tom Kaczmarek
Former Director of Graduate Studies and Director of the Center for Cyber Security Awareness and Cyber Defense at Marquette University
Two events yesterday have left me wondering about then and now in computer science and computing.
A colleague sent me a note just as I was preparing an introduction to a paper by Niklaus Wirth for a graduate seminar that I conduct. The note and the preparation for the seminar were completely unrelated events, but my colleague expressed a thought that caught my attention in light of reading Wirth's paper.
As background: I am reviewing the history of programming language design as a way to provide insight into important concepts of computing, as revealed by the work of the early computer scientists. The early giants of computer science were engaged in defining the field's fundamental theories, and those theories were often introduced into programming languages. Studying the design principles of programming languages therefore exposes fundamental concepts about computation.
In the conclusion of his paper about Modula-2 and Oberon, Niklaus Wirth makes an observation about the shift in how we teach programming and how we view programs. He recalls the early emphasis on proving programs correct and on teaching programming through rules and logic rather than trial and error. He questions whether we have forgotten our purpose in developing computer science.
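To make the contrast concrete, here is a minimal sketch (my illustration, not an example from Wirth's paper) of reasoning with rules and logic: a loop annotated with a precondition, an invariant, and a postcondition, so that its correctness can be argued without running it. The assertions mirror the Hoare-style reasoning the early literature advocated.

    # A minimal sketch of invariant-based reasoning (my illustration, not Wirth's).
    # Precondition: n >= 0. Postcondition: the result equals 0 + 1 + ... + n.
    def triangular(n: int) -> int:
        assert n >= 0, "precondition: n must be non-negative"
        total, i = 0, 0
        while i < n:
            # Invariant: 0 <= i <= n and total == i * (i + 1) // 2
            assert total == i * (i + 1) // 2
            i += 1
            total += i
        # At exit i == n, so the invariant yields total == n * (n + 1) // 2
        assert total == n * (n + 1) // 2
        return total

The point is that the invariant, not a batch of test runs, justifies the claim that the loop is correct for every non-negative n.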
The connection I made to my colleague's comments was through his insistence that a “theoretical framework” be taught in classes.
My understanding of computer science is now almost ancient. I came from a math background: I was an MS student in mathematics in 1970 when I was introduced to computers, and I left that program to complete my graduate work in Computer and Information Sciences. The seminar has reintroduced me to the thoughts of the early geniuses (like Wirth) who defined computer science. Ideas from leaders like Hoare, Dijkstra, Dahl, Liskov, Parnas, Strachey, and Scott run through the papers I have the students reading. Automata theory, formal languages, formal semantics of programs, computability, and abstraction were concepts I learned in my early days in computer science. My enthusiasm for AI probably brought me over to the “scruffy side,” where satisficing solutions were the only practical solutions we could hope to produce. Later, engaged in commercial applications of computing and the software development process, I was further involved in verification and validation, which was mostly about testing for correctness.
When I was in research, we had algorithms with formal underpinnings that allowed rigorous analysis. I investigated frame-based knowledge representation (KR) and even contributed by supporting the implementation, and a very limited distribution, of a frame-based KR tool. Our effort was based on work by Ron Brachman, the father of KL-ONE. KL-ONE was a frame language incorporating rigorous semantics for the "IS-A" relationship, and it fostered the integration of a frame logic for KR, classification rules, and object-oriented programming. Owing to the formalism behind the knowledge representation, Ron and Hector Levesque were able to formally prove that the representation and logic implemented in our classification algorithms could not decide all classification questions. We were left with an incomplete solution. Reflecting back on Wirth's paper, it was our "logical, consistent framework for proving programs correct" that allowed us to discover and characterize our shortcomings.
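For readers who never met a frame language, here is a toy sketch of classification by subsumption, drastically simplified from KL-ONE; the representation and names are my own, for illustration only.

    # A toy sketch of frame-style classification (a drastic simplification of
    # KL-ONE; the representation and names here are mine, for illustration only).
    # A frame is modeled as a dict of role restrictions; a general frame
    # subsumes a specific one when the specific frame carries every restriction.
    def subsumes(general: dict, specific: dict) -> bool:
        # "specific IS-A general" holds when every restriction of the general
        # frame is satisfied by the specific frame.
        return all(specific.get(role) == filler
                   for role, filler in general.items())

    ANIMAL = {"animate": True}
    DOG = {"animate": True, "species": "canine"}

    assert subsumes(ANIMAL, DOG)        # DOG IS-A ANIMAL
    assert not subsumes(DOG, ANIMAL)    # the relationship is not symmetric

With only these simple equality restrictions, subsumption is trivially decidable; the Levesque and Brachman result is that as the representation language grows more expressive, complete classification of this kind becomes intractable, and eventually some questions cannot be decided at all.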
Later in the day, I spoke with an acquaintance at a professional meeting about the testing procedures for the software that his company uses as the basis for the service it provides. It occurred to me during this conversation that we are doing validation through what Wirth called "trial and error." We do not have the rigor that the founders of computer science called for when they ventured into proving program correctness. Having come from manufacturing in my professional life, I quickly recognized this as trying to achieve quality through inspection. That is costly and error-prone.
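A small sketch (my example, not from that conversation) of why inspection is error-prone: testing only samples behavior, so a finite suite can pass while a defect remains.

    # A sketch of quality through inspection (my example): the spot checks
    # below all pass, yet the function is wrong for century years like 1900.
    def is_leap_year(year: int) -> bool:
        return year % 4 == 0   # missing the century rules of the Gregorian calendar

    for year, expected in [(2020, True), (2021, False), (2024, True)]:
        assert is_leap_year(year) == expected   # every sampled case passes

    # is_leap_year(1900) returns True, but 1900 was not a leap year.
    # Inspection finds only the defects we happen to probe for; a specification
    # and proof would have forced the century rule into view.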
I am left with questions. If we are teaching programming through trial and error and creating quality in systems using trial and error, have we gone down the wrong path? The more expensive path? Was Wirth correct in questioning where we were headed? Where does agile foster rigor?
REFERENCES
Niklaus Wirth, "Modula-2 and Oberon," 2006.
Ronald Brachman, "What IS-A Is and Isn't: An Analysis of Taxonomic Links in Semantic Networks," 1983.
Hector Levesque and Ronald Brachman, "A Fundamental Tradeoff in Knowledge Representation and Reasoning," 1985.