The AI Countercategorical Imperative
In an earlier lifetime I was a philosophy student at Binghamton University, taking every class I could with Prof. Martin Dillon, who passed in 2005 after a decorated and productive life and career. His classes were a joy, and the subject matter of his expertise -- Phenomenology and Existential Philosophy -- was of special interest to me back then. A lot of the viewpoints I developed then have been cornerstones of my personal philosophy of life ever since, even if maybe I didn't quite get the source material right or forgot how it all went along the way. It was dense stuff then, and these days I'm also pretty dense, so here we are.
Anyway, one thing that I think I garbled, but that has worked well for me, is my malapropistic recollection of Kant's "Categorical Imperative". I read Kant, Husserl, Merleau-Ponty, Nietzsche, and all those other guys, and sort of munged their concepts of ontology, hermeneutics, and existentialism together, and I can no longer remember what came from Hume and whom.
Here's how I remember Kant's "Groundwork of the Metaphysics of Morals" and the foundation for the concept of the Categorical Imperative -- which in no way resembles what you will read on Wikipedia:
God knows all things on a first name basis. God knows every blade of grass as an individual; every grain of sand for its own unique identity. This is the omniscience of God -- to know every single thing as a single thing.
As people, we can't do this, so we have to come up with categories for things (this is where I think I jumbled up "categorical" but let's just roll with it). We don't know every blade of grass and never could. Instead we have a concept of "grass". If you're a grass biologist or something, you have more knowledge about the field and know of several types of grass. Some experts may know hundreds of variations of grass. If you're a groundskeeper or run a golf course, you probably know many grass variants but also the microbiomes of your golf course and what grows best in what area at what times of the year and under what circumstances.
So developing expertise is essentially the activity of taking big, broad categories of things and segmenting them into more specific, smaller clusters of things. A bad headmaster at a school may just know that there are students there. A good headmaster knows every student by name and how things are going at home.
Statistics is the math and science of clustering things. We try to sample a population (basically a big, unknown clump) and make guesses about how to break it into smaller clumps and how to make guesses about the characteristics of those clumps and predictions about how members of those clumps are likely to proceed. Andre's version of Kant's God wouldn't need statistics because that God knows every single member of the population and everything about them and therefore knows what they're likely to do. We don't have that kind of time or mental processing power, so we resort to stereotypes and heuristics to make our predictions.
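The gap the paragraph describes can be made concrete with a toy sketch. The `population` of grass blades, the type labels, and the height numbers below are all invented for illustration: the omniscient view is just a lookup of the individual, while the statistical view samples a small clump, clusters it by category, and predicts from the group average.

```python
import random
import statistics

random.seed(42)  # make the sketch reproducible

# A hypothetical population: each "blade of grass" is a unique individual
# with its own type and height. The omniscient view knows all of them.
population = [
    {"type": random.choice(["fescue", "bermuda", "rye"]),
     "height_cm": random.uniform(2.0, 12.0)}
    for _ in range(10_000)
]

# The statistical view: we can only afford to examine a small sample,
# so we cluster by category and predict from the category's mean.
sample = random.sample(population, 200)

def predict_height(grass_type, observed):
    """Predict an individual's height from its category's sample mean --
    a stereotype standing in for knowledge of the individual."""
    heights = [g["height_cm"] for g in observed if g["type"] == grass_type]
    return statistics.mean(heights)

individual = population[0]
estimate = predict_height(individual["type"], sample)
error = abs(estimate - individual["height_cm"])
print(f"true: {individual['height_cm']:.1f} cm, "
      f"category estimate: {estimate:.1f} cm, error: {error:.1f} cm")
```

The category estimate is often off by several centimeters for any given blade, which is exactly the trade the paragraph describes: we accept stereotype-level accuracy because we lack the time and capacity for individual-level knowledge.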
So Andre's version of Kant's Categorical Imperative was the name for the need for limited thinkers (all of us) to use short-cuts and mental clusters to understand our world. To name categories of things so we can operate in a world that would otherwise be limitlessly, fractally impossible to comprehend. Every color is different from every other color, but let's just call that tomato red and move on to eating it.
Daniel Kahneman explores a very similar concept, attribute substitution, throughout his work. When making a decision, we substitute a simple question for the infinitely complex one actually in front of us, answer the simple question, and use that answer as a proxy for the answer to the hard one. Effective decision makers are people who can do this quickly without oversimplifying the question into bad answers. Good metaphors are good heuristics; bad ones are deeply misleading.
So where does AI come into this?
Well, right now it doesn't. Seriously, read my other post on AI. It is a garbage product being tailored by garbage people to produce garbage results -- and to trick everybody into thinking it isn't garbage. I'm sorry if you're working in the field right now, trying to help AI detect tumors in radiology reports or sequence the human genome or something like that. If you are, though, you probably know better than anybody that most of the progress in AI is in bullshitting chatbots, hyper-personalized pornography, and advertising that borders on fraud.
But it could break the problem of the categorical imperative.
Hypothetically, AI could have the bandwidth, processing speed, and memory to handle every single thing as an individual thing. "Big Data" was all the news a few years ago, but it turned out that "Big Data" mostly meant doing pricing analysis via SQL on a Hadoop cluster instead of importing your data into Excel. Then it was "Blockchain," which ended up having nothing to do with anything. But AI really could address the problem of treating every single agent as a unique and different thing: understanding the vector it is on and its history, and projecting it forward, without guessing at what that agent is like from metaphor and comparison.
Statistics is a worthless field when the sample size is too small to provide any predictive probabilities -- and with the capacity to know every member of the population, nothing would need to be sampled at all. AI could make statistics a thing of the past. But I keep saying "could" instead of "will" because two things would have to happen:
Let's hope that AI is either another flash-in-the-pan concept that goes away as fast as it came, or, if it does stick around, that it somehow avoids the internet trend that leads to monetization through awfulness.