What's with all those hypothetical questions about AI?
Christian Lawaetz Halvorsen
Solutions Architect, AI & New Tech | Leading TopGPT, finance's first customer-facing GenAI assistant | Startup Advisor & Mentor
Maybe there is an AI out there that could help me explain why all these hypothetical AI questions are backward and counterproductive?
"I'm sorry, Christian, I'm afraid I can't do that."
Why are you reading a rant about my frustrations with hypothetical questions on what we should do with AIs in the far future or in obscure thought experiments? Well, it's because I think many of these efforts are a waste of time (not you, OpenAI, you guys are brilliant!), and the approach is, in my opinion, not beneficial for the actual advancement of AI and humankind.
Thought experiments on how to crash
We have a soft spot for applying "human" logic to problems that are not meant to be solved that way. A great example is MIT's Moral Machine, which is supposed to be a platform for collecting intelligence on whom we should tell our self-driving cars to hit.
Why do we need to run millions of thought experiments and gather information on which of two groups would be the smaller loss for society to hit in a hypothetical scenario? I am pretty sure none of them will ever happen in real life, at least not at the thought-experiment level where it's "just" a matter of choosing between two options. I know that these puzzles are engaging at an entertainment level. I might have gone a couple of rounds myself, but they should stay entertainment and not become something we teach to our machines!
Is the solution that an AI needs rules for killing, or are we approaching the problem from human bias and twisting it to a level that has no use in practice? I would argue that we build an AI in the first place because it has a better decision process than we do.
You could even argue that building any level of complexity on top of a simple and efficient rule, such as "if danger, then avoid or brake," would be a worse implementation. It runs the risk of our cars spending an additional unit of time processing every pedestrian in their radius and weighing who would make the most sense to hit. Worst case, we could get false positives where the car hits someone on purpose when it should simply have hit the brakes as the dangerous situation arose.
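To make the contrast concrete, here is a minimal sketch in plain Python (not real autonomous-driving code; the Obstacle class, the 10-metre danger radius, the moral_weight score and both rule functions are names I made up for illustration) of the simple "brake on danger" rule versus a hypothetical variant that first ranks everyone in its radius:

```python
# Minimal sketch: simple emergency rule vs. a hypothetical "moral ranking" rule.
# Everything here (Obstacle, the danger radius, the moral_weight score) is an
# illustrative assumption, not any vendor's actual API or logic.
from dataclasses import dataclass


@dataclass
class Obstacle:
    distance_m: float     # how far the obstacle is from the car
    moral_weight: float   # hypothetical "who to spare" score -- the problematic part


def simple_rule(obstacles: list[Obstacle], danger_radius_m: float = 10.0) -> str:
    """If anything is dangerously close, brake. No ranking of people involved."""
    if any(o.distance_m < danger_radius_m for o in obstacles):
        return "BRAKE"
    return "CONTINUE"


def moral_machine_rule(obstacles: list[Obstacle], danger_radius_m: float = 10.0) -> str:
    """Hypothetical variant: spends extra time ranking every pedestrian in radius,
    and can end up deliberately steering toward the lowest-ranked one."""
    in_radius = [o for o in obstacles if o.distance_m < danger_radius_m]
    if not in_radius:
        return "CONTINUE"
    target = min(in_radius, key=lambda o: o.moral_weight)  # the extra, ethically dubious step
    return f"STEER_TOWARD obstacle with weight {target.moral_weight}"


if __name__ == "__main__":
    scene = [Obstacle(distance_m=6.0, moral_weight=0.3),
             Obstacle(distance_m=8.0, moral_weight=0.9)]
    print(simple_rule(scene))         # BRAKE
    print(moral_machine_rule(scene))  # steers, after doing extra work
```

The point of the sketch is simply that the second function does strictly more work than the first before it can act, and that extra work is exactly the part nobody should want in the loop.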
Instead of our philosophical thought experiments, we should send our AIs to driving school. A great example is this virtual world for AIs to learn how to drive; obviously, this is a more productive way of approaching the challenge of teaching our AIs how to drive cars. With the added benefit that the AI will only risk running down virtual families while it's learning.
Can we trust a primate with such questions?
Another situation where I feel that our human perspective is a limitation is in approaching the highly complex subject of consciousness. As featured in the article "Who gets to decide if an AI is alive?", a lot of (human) mental energy is being spent on how we can determine if and when an AI is conscious. Yes, we had our hero of computing, Alan Turing, who had a revolutionary, albeit somewhat theoretical, approach to computers and consciousness. This was obviously a monumental step in the scientific effort to understand consciousness and AI. Still, it also became a limiting factor because of the "ruleset" that suggested how to identify whether a machine was conscious.
Today, our voice interfaces, such as Alexa, Hey Google, and Siri (maybe not my Siri, she seems a little incompetent), are already running circles around the Turing test. This suggests that such a test from the past had no realistic way of knowing how the future would turn out, or what it would take to define something as complex as consciousness. To put it crudely, we would never ask a monkey to design a test for human consciousness. I am definitely not confident enough to define parameters for concluding whether a new and potentially higher level of intelligence is conscious, especially when we still struggle to define whether a dolphin or a dog is really conscious.
Fear not, there are plenty of problems to solve!
But fear not, we do have many real problems on our plate right now regarding AI. We really should be careful about what they learn from us. A great example, one of many by now, was when the toxicity of Twitter made Microsoft's AI racist in less than a day.
Most times when we approach a problem where AI is potentially helpful, we look at the data, and especially its quality. It's pretty simple: the most basic algorithm can do wonders with a perfect data set, but sadly the reverse is also true. Our most sophisticated AI will fail if the data is terrible. Bad data in, bad results out. It's that simple. So back to the beginning: why are we collecting data on which pedestrian to hit? This seems, at best, like a biased and imperfect data set, something we should be careful about feeding to any intelligence, especially the artificial kind.
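Here is a quick toy demonstration of that point (a sketch only, assuming scikit-learn and NumPy are available; the synthetic data set, the logistic-regression choice and the decision to mislabel 60% of one class are arbitrary assumptions for the demo, not anything from a real system): the same basic algorithm is trained twice, once on clean labels and once on systematically skewed ones.

```python
# Toy demo of "bad data in, bad results out": identical algorithm, different data quality.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic, cleanly labelled data set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# The "basic algorithm" trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The same algorithm trained on skewed labels: most of one class is mislabelled,
# mimicking a systematically biased labelling process.
rng = np.random.default_rng(0)
skewed_y = y_train.copy()
flip = (skewed_y == 1) & (rng.random(len(skewed_y)) < 0.6)
skewed_y[flip] = 0
skewed_model = LogisticRegression(max_iter=1000).fit(X_train, skewed_y)

print("accuracy with clean labels: ", accuracy_score(y_test, clean_model.predict(X_test)))
print("accuracy with skewed labels:", accuracy_score(y_test, skewed_model.predict(X_test)))
# The second number is typically noticeably lower: the model didn't get worse, the data did.
```

Nothing about the model changes between the two runs; only the quality of what it is fed.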
I have recently found myself at the intersection of AI and data privacy. On a side note, I have noticed that those two fields are oddly similar in their approach to problems. I apologize to lawyers and engineers; I fear I might offend both groups with what I am saying. Still, we take equally systematic and pragmatic approaches to our problems… and, of course, share an obsessive use of subject-specific terminology (only dwarfed by the medical professions).
Anyway, enough time spent offending professions; I have a point with this detour, I swear. There is a lot of talk about ethics in data privacy, especially now that AI is getting involved, and we risk approaching it in a counterproductive way. We should not focus on teaching AIs how to act more like us. We should focus on building methodologies that keep our own biases out of what we feed them.
A scary example of human bias is when we try to apply AI to criminal cases, where the AI ends up being biased and deciding that some minorities or ethnic backgrounds are more likely to be criminal. Obviously, this is not an AI being evil. It's basically a newborn (a very smart one) that is taught how to deal with the data it interacts with. If the training material is already biased, then we will for sure repeat our societal mistakes all over again.
The quality of the input and the quality of the output are directly correlated
To conclude a somewhat messy rant about AI and the problems we choose to focus on as humans, I simply suggest we take a step back and evaluate the most crucial factor in any AI's development: the data that we feed it. The creation of artificial intelligence will require even more focus on human bias and how to minimize it. We should also consider which part of our humanity we want to transfer to the next generation of intelligence, which might be one of the most important decisions we will ever make as a species. It's crucial to remember that no matter what information processing system we are working with, human or machine, the input is key.