AI is going to be better at everything. It's just a question of when.
Joshua Peskay
3CPO (CIO, CISO, CPO) CISSP, CISM - Helping nonprofits leverage technology to do more, do better and be more secure. Also, I collaborate with a potato.
I can't help but feel that all these articles pointing out the current weaknesses of ChatGPT and other generative AI tools, such as Midjourney, are completely missing the point.
We made this video in October of this year. Less than 60 days ago.
Perhaps the most disruptive AI tool thus far, ChatGPT, was released AFTER we made this video. I subscribe to a few AI newsletters, one of which is Ben's Bites. By my rough estimate, well over 1,000 new AI tools have been created just since we made this video. It's moving even faster than I thought just two months ago.
I am increasingly unable to see how AI will not be better, and MUCH better, than all humans at all things digital within the next few years, if not sooner.
So far as I can tell, ChatGPT is already a better writer than 90% or more of all people and, in terms of speed, better than 100% of people. For example, if we had a competition, right now, among a random 1,000 adults to write the best 100-word poem on the French Revolution in one hour or less, I would bet a lot of money on ChatGPT to win. And if the time limit was one minute or less, I would bet ALL my money on ChatGPT to win.
And if I asked both ChatGPT and a random 1,000 American adults to do something else, like "write a 100-word essay on the difference between liberalism and conservatism as reflected by American politics in the 21st century," gave everyone an hour, and then had a random 1,000 high school English teachers judge the results blind, I would bet a lot of money on ChatGPT to be in the top 10%, and I would, quite frankly, be surprised if ChatGPT was outside the top 5%.
Of course, that is if we did the contest right now. Today.
But as I said at the beginning of this article, that is not my point.
My point is that ChatGPT (and countless other tools like it) will be far better than ALL humans at these kinds of writing assignments soon. Probably before the summer of 2023. And the AI tools will keep improving at an incredible rate. In a year or two (or less), AI writing tools will be 10 or 20 times better than any human at writing of any kind: fiction, non-fiction, journalism, comedy, anything.
By the same token, if I asked a thousand visual artists to make a portrait of King Arthur in the style of Van Gogh, gave them all 24 hours to do it, gave the same assignment to Stable Diffusion and Midjourney, and then asked 1,000 random people to pick the best portraits, who would you bet on?
I see no reason why AI won't be better, and soon, at creating music, writing code, making slide decks, video games, movies and virtually anything that is 100% digital and learnable from digital data.
Here's an example: someone built a tool called TXT-2-IMG-MUSIC-VIDEO that takes a text prompt and generates a short music video within a few minutes. Is it good yet? I prompted it with the title of this article, so you be the judge. Will it be really good soon? I would bet yes.
And AI will be better at answering questions, from the simple to the complex. I am ALREADY using ChatGPT to generate answers to complex questions I get asked in real life. Yes, I still evaluate those responses, and some of them aren't great, but most of them are better than what I would have said. ChatGPT gives me a massive head start on crafting a better answer. But the time in which I have value to add to that process is dwindling. Fast.
Even if you don't agree with me that ChatGPT, Midjourney and Stable Diffusion are ALREADY better than most or all humans at writing and art right now, they will be inarguably better soon. AI is improving on an exponential scale while humans are improving on a linear scale. It's no competition at all, frankly.
What I am trying to wrap my head around right now (and I welcome dissent/thoughts on this), is what the world looks like when AI is not just better, but exponentially better than humans at all of these things.
Comparable progress in the kinetic world will be slower, but I'm honestly not sure by how much. Imagine, if you will, a Boston Dynamics robot powered by an AI with access to all the motor controls of the robot plus a ChatGPT-esque ability to interpret human language instructions and respond appropriately. Google already has prototypes of these in its offices with increasingly impressive capabilities. And MIT already has prototypes of general-purpose "assembler" robots. Think Midjourney + robot + 3D printer. Years away, perhaps, but not decades away.
It is not my intention, in writing this, to scare anyone. I have, literally, no idea whether I'm even close to correct in my thinking here. Even if I am right, on any level, I have no idea whether this will be the greatest thing to ever happen in human civilization, the end of human civilization, or something else entirely.
I do think, and more so each day, that this is a change whose scale and impact are unlike anything we've ever seen in human history.
3CPO (CIO, CISO, CPO) CISSP, CISM - Helping nonprofits leverage technology to do more, do better and be more secure. Also, I collaborate with a potato.
2 years ago
Just read through this post: https://twitter.com/BrianFOConnor/status/1603032772804857856 I don't have anywhere near the confidence to predict what kinds of jobs and industries will be disrupted first. There are just too many variables at play for me to have any confidence in predictions like this. That said, I am thinking about things like this all the time and trying to imagine how different scenarios play out.
3DPO (Digital Privacy Project and Program Officer) PMP, CIPP/US - Helping nonprofits navigate data, privacy, and secure technology with clarity and care
2 years ago
Even if we may not agree on words like "better," AI-generated content is proving itself to be "faster," "different," "competent," and most often (though not always) "correct." And it's becoming easier to try, and often free. But AI also scares me because it is new, truly understood by few, and moving SO fast, and, well, we've seen how these relatively recent Silicon Valley experiments like Facebook are going. That's why it's important that more people who care about the potential effects jump in to raise the needed questions about bias and ethics: where does the data come from? Who's teaching it?