Is ChatGPT going to replace my job?
Michael Kierstead
Visionary Engineering Leader and DevOps Coach | Driving Bionic AI Adoption, Scalable Solutions, Agile Transformation, & DevOps Practices | Leading Cross-Functional Teams & Architecting the Future of Technology
My thoughts on the advancement of AI as a tool for content creation and moderation.
Like many, over the past week or so, my work chats and social media have been filled with funny toy use cases of ChatGPT from OpenAI. Eventually, I started hearing people sarcastically remark that we are one step closer to kneeling before our AI overlords. I think we are a long way from that, but I do think that generative AI, and AI used to moderate social content at scale, will continue to push the limits of our current ethical framework around data ownership, model and training bias, and the next-order impacts of AI on the labor market. I ventured down the path of looking at these issues out of curiosity and wanted to share what I found.
AI is not new. At its core, it is advanced mathematics applied to finding patterns in data and then surfacing those patterns for other systems or people to act on. Credit card fraud detection is a familiar example. What has changed is that the cost of computing keeps getting exponentially cheaper, and access to AI for solving loosely defined questions keeps getting more democratized.
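To make the "patterns in data" framing concrete, here is a minimal, purely illustrative sketch of anomaly detection on transaction data using scikit-learn's IsolationForest. The features, amounts, and contamination rate are invented for the example; a real fraud system would look very different.

```python
# Illustrative only: flag unusual transactions by learning what "normal" looks like.
# The features and numbers below are made up, not a production fraud model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Hypothetical features per transaction: [amount in dollars, hour of day]
normal = np.column_stack([rng.normal(60, 20, 1000), rng.integers(8, 22, 1000)])
odd = np.array([[4800, 3], [5200, 4]])            # unusually large, late-night charges
transactions = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=42).fit(transactions)
flags = model.predict(transactions)               # -1 marks likely anomalies
print(transactions[flags == -1])                  # surface flagged rows for a human to review
```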
What could change as our lives become more AI-powered?
Not long ago, if I wanted to think through a logic bug in code or an abstract systems design question, I needed another human to help or a rubber duck to talk things out with. Now, with GitHub Copilot, I can get better feedback than my very silent duck ever gave. Copilot is just the next generational step beyond the IDE technology that would tell you that you missed a ";" on line 32. Now it can tell you that your innovative new algorithm is neither new nor innovative. It may even suggest a better way to do it.
I think of general AI as just the leveling up of what we would normally call automation today. We are still far from opening our eyes after 10 seconds and saying, "I know kung fu," like Neo in The Matrix. Still, I do think that within the next decade or so we could ask an AI something like, "Summarize the forecasted economic impact of the next Fed rate hike on Pacific trade lane shipping capacity."
To me, the next big jump for users of generative AI is decoupling ideas from the expression of those ideas. Both will still exist, but many expressions will become far easier to create for people who are not skilled in the particular medium they want to express them in.
For example, I would like to see tools like GPT applied to education and training content. The idea is that an instructor could create a single corpus of work, and then a GPT-like solution could refactor that work into a structure uniquely optimized for each student.
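As a rough sketch of what that could look like, assuming an OpenAI-style chat completions API: the model name, student profile fields, and prompt wording below are hypothetical placeholders, not a real product design.

```python
# Hypothetical sketch: ask a GPT-style model to refactor one lesson for one
# student profile. Assumes the openai Python SDK (>= 1.0); the model name,
# profile fields, and prompt are invented for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def personalize_lesson(lesson_text: str, student_profile: dict) -> str:
    """Return the instructor's lesson rewritten for a single student."""
    prompt = (
        f"Rewrite the lesson below for a student with this profile: {student_profile}.\n"
        "Keep every learning objective, but adjust reading level, examples, and pacing.\n\n"
        f"Lesson:\n{lesson_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a curriculum assistant."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# The same corpus, two different students, two different expressions:
# personalize_lesson(lesson, {"reading_level": "grade 6", "interest": "soccer"})
# personalize_lesson(lesson, {"reading_level": "college", "interest": "music"})
```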
I have seen some content creators focus on how terrible this is and how students will just have an AI write all their papers. To me, that sounds like arguing that using a calculator or a tool like Grammarly is cheating. Nope. I use tools like Grammarly all the time, and the net effect is not that I am 100% reliant on the tool; rather, I just get better at writing.
I also think that AI plus low-cost sensor data will drive efficiency transformations in many industries, including medical, logistics, and energy.
The future is not all sunshine and rainbows or doom and gloom.
We still have many challenges to work through in the areas of ethics and reasonable regulation. I expect the conversation about who owns generative AI content will be fun. If you write an essay using generative AI, do you own that content? Or does it belong to the owner of the AI model/tool, or to the people who own the data used to train that model?
I also think we will have no choice but to create a legal framework, at least in the US, that balances the need for fast innovation with privacy and transparency. One of the All-In podcast hosts suggested a way we could legislate reasonable guidelines for using AI to do content moderation at scale, focusing on transparency.
I, for one, look forward to worshipping at the feet of our new AI overlords, though I have a feeling that day won't come for some time, as the CEO of OpenAI has described the compute cost of ChatGPT as "eye-watering." This means the economics of replacing a call center of qualified customer support folks with AI chat are likely not there yet. I would be overjoyed if my preferred digital assistant could just answer the age-old question, "Why did I walk into this room again?"