The Dangers of AI
The recent development of large language models like ChatGPT and of brilliant image generators like Midjourney and DALL-E has brought fears of AI back into cultural consciousness. For a long time science fiction has provided us the “evil” AI trope: HAL, David in the new Alien movies, the Red Queen in Resident Evil, the Terminator and Skynet, and even Superman III. I always wondered why they pictured AI as wanting to destroy humans: how would a system survive without humans to power it (still mostly through non-renewable fossil fuels) and maintain it? I much preferred the idea of Ghost in the Shell, where an AI is so feared that it is hunted down to be destroyed, and all it wants to do is have children. Aside from the stylistic choice, driven by story needs, of having a faceless villain, an evil AI is a bit of a stretch. Why would a superior intelligence seek to destroy? Isn’t that a reflection not of intelligence but of its opposite? Similarly, with these new artificial intelligence initiatives appearing, I think people are missing a key danger while focusing on one that is unlikely.
When I was in college I took a class called “Really Fantastic Fiction” for fun. We read a book a week and wrote a paper on it. I had to start reading in the summer, because as a slow reader I knew I’d never be able to keep up otherwise. We read books like Mary Shelley’s “Frankenstein,” “A Canticle for Leibowitz,” and “Riddley Walker.” The pace was intense: one book a week, one paper a week. But the professor did something very interesting with the papers. He wanted a two-page paper with “no introduction” and “no fluff”; you got to the point and presented your conclusion. Essentially it was two pages of substance without the BS you normally wrote to hit a page count. There were no phrases like “in this essay I will…”; there was no room for that. You had two pages, so no extraneous information, no wasted sentences. It was both brilliant and a relief: those two pages held the essence of a ten-page paper with all the BS cut out. It was liberating. So much of writing papers in college was about the fluff, the filler, the BS, rather than the part that showed original thought or your personal take. The class left a strong impression that all the BS people had developed and mastered was simply not necessary.
But what is BS anyway? Is it a lie? Nope, it’s not. A lie can be disproven. BS is something that sounds true, that is plausible and passes as factual, but may or may not be true, and preferably can NOT be determined to be true. If the professor can’t disprove it, they can’t mark it wrong, can they?
And that leads us to the danger of AI, which is that we’re focusing on the wrong thing. I would love to have an AI that can identify an image, but we don’t have that. We have the reverse: an AI that can produce an image from text. That’s useful, but it’s not intelligent. I still have to add ALT text to my Twitter images if I want a blind person to know what they show. Instead, it can create deepfakes that look plausible and are indistinguishable from real images. Then we have ChatGPT, which can create text without a writer but doesn’t know what it’s writing. It can’t identify misinformation, but it can generate the BS: truth-sounding but hollow.
The usefulness of ChatGPT and tools like Midjourney is overshadowed by the danger of making truth harder to distinguish. That’s the real danger: not that an evil AI will rise, but that an unintelligent one will be passed off as intelligent simply by producing better BS, to the point that we can’t tell it apart from the truth. If the reality of a situation is hard to distinguish, will we not find ourselves perpetually haunted by the ghosts in the machine of Theranos, rather than the Terminator?