How will genAI and Agentic AI affect human IQ?

I am sure that many of you have tried this, either out of curiosity or as a valuable productivity tool. The action I am referring to is the use of generative AI tools like ChatGPT (or one of the many similar options) or Copilot to improve your code. AI has been incorporated into so many search algorithms and smart assistants that you could do what I recently did. I needed to write a short technical article on work that I have been doing on methane emissions from orphan and idle wells for the Payne Institute for Public Policy at the Colorado School of Mines. Questions about P&A standards, and even how long cement lasts in a plugging job, have come up in conferences dealing with voluntary carbon market offset credits for plugging an orphan well, so we thought we would try to answer a few of the questions that were out there.

I framed the question, added several references that I knew were comprehensive and trustworthy, loaded them into the ChatGPT application interface, and asked the genAI tool to write me an article. Almost instantly a five-page article came back as a response. I read it over and, what do you know, it was very good. Now I need to go back, add the citations, and add more context to the article, but most of my work is done. I am not sure of the proper way to cite ChatGPT, but I can figure that out.
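For readers who would rather reproduce this workflow programmatically than through the chat interface, here is a minimal sketch using the OpenAI Python client. The model name, file names, and prompt wording are illustrative assumptions on my part, not the exact steps I took.

```python
# A minimal sketch: feed trusted reference material to a model and ask for a draft.
# File names and the model name are hypothetical placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment

# Load the reference documents you trust (hypothetical file names).
references = "\n\n".join(
    Path(p).read_text() for p in ["orphan_wells_report.txt", "pa_cement_study.txt"]
)

prompt = (
    "Using only the reference material below, draft a short technical article "
    "on methane emissions from orphan and idle wells, P&A standards, and how "
    "long cement lasts in a plugging job.\n\n"
    f"REFERENCES:\n{references}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; substitute whatever you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The point of the exercise is the same as in the chat window: the quality of the draft depends almost entirely on the quality of the references you supply.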

It seems that "Resistance is Futile," if you follow my Star Trek Borg metaphor. The larger question is: what will be the role of humans in an AI-dominated world? Have we already outsourced our brains to our smartphones? Many are seriously worried about AI replacing their jobs. I am not sure I can trust any video I see on social media anymore. OpenAI only released ChatGPT (using GPT-3.5) on November 30, 2022. In a very short time, we have begun to see AI not only as a tool but as a co-worker. So where is all this headed? Will robots and algorithms do all the hard work in the future? Will we delegate to AI the role of providing the truth, treating some version of generative AI, or the newer trend of agentic AI, as the authority without checking the data it was trained on? At least in my paper I knew that I fed ChatGPT good references, but are we ready to do that in all cases?

According to Nvidia: “The next frontier of artificial intelligence is agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems. And it’s set to enhance productivity and operations across industries. Agentic AI systems ingest vast amounts of data from multiple sources to independently analyze challenges, develop strategies and execute tasks like supply chain optimization, cybersecurity vulnerability analysis and helping doctors with time-consuming tasks.” https://blogs.nvidia.com/blog/what-is-agentic-ai/
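Stripped of the marketing language, the "iterative planning" Nvidia describes can be pictured as a loop: a model proposes the next action, a tool executes it, and the observation is fed back until the goal is met. The sketch below is a deliberately simplified, hypothetical illustration of that control flow, not any real agent framework's API.

```python
# A hypothetical agent loop: plan, act, observe, repeat.
# plan_next_step() stands in for a call to a reasoning model.

def plan_next_step(goal: str, history: list[dict]) -> dict:
    """Ask the 'model' for the next action given the goal and what happened so far."""
    # In a real system this would call an LLM; here it simply signals completion.
    return {"tool": "finish", "input": goal}

def run_agent(goal: str, tools: dict, max_steps: int = 10) -> list[dict]:
    history: list[dict] = []
    for _ in range(max_steps):
        step = plan_next_step(goal, history)
        if step["tool"] == "finish":
            break
        observation = tools[step["tool"]](step["input"])  # execute the chosen tool
        history.append({"action": step, "observation": observation})
    return history
```

The autonomy, and the risk, lives in that loop: every pass is a decision the system makes without a human reviewing it.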

If you want a few new ways of tagging a social media post on AI, try these: FOMO (Fear of Missing Out), JOMO (Joy of Missing Out), and my favorite, Fear of My Obsolescence (FOMO 2.0).

I did some research on my own (which meant that I had to read a few articles). The first topic was language translation and speech interfaces (smart assistants are getting smarter with AI agents, and we are beginning to talk to our AI co-workers). These observations came from a recent article in The Economist.

Translation apps are improving at a rapid pace. Why bother spending your time learning a new language when your phone is already fluent in it? According to one report by EF (Education First), an international language training firm, China ranks 91st among 116 countries in English proficiency; just four years ago it ranked 38th out of 100. The "English fever" that took hold after China opened up to global business 40 years ago has dimmed.

One study by Unbabel, a translation company in Portugal, suggested that while 95% of global translation is performed by humans today, in just three years' time almost none will be. Humans beat the algorithms only when they were fluent in both languages and also expert in the material being translated. Machine translation has become so reliable and ubiquitous, so fast, that many users do not notice the difference (remember Turing's test for machine intelligence?). The first computerized translations were attempted by IBM more than 70 years ago, with an application that had a vocabulary of 250 words of English and Russian and six grammatical rules. When Google Translate was launched in 2006, the approach was statistical rather than rules-based. The field exploded in 2016 when Google switched to a neural engine, and it has since evolved to large language models.

But how will the AI trend affect humans? These observations come from another article I picked up from The Economist in December 2024, on trends in human literacy and numeracy (are you smarter than a 10-year-old?).

The OECD tests adults about every ten years on trends in numeracy, literacy, and problem solving (the Survey of Adult Skills). The questions aim to mimic problems people aged 16-65 face in daily life, whether they are working in a factory or an office or simply trying to make sense of the news. The latest tests were carried out on 160,000 people in 31 rich countries. Finland, the Netherlands, Norway, and Japan were at the top. England has improved over the last decade. America is below the OECD average. Chile, Italy, Poland, and Portugal are at the bottom of the table.

The results suggest that a fifth of adults do no better in maths and reading than might be expected of a primary school child, despite the fact that adults now hold more, and higher, education qualifications (what is your degree worth?). Demographics are a factor: new immigrants (non-native speakers) may struggle with a new language, and natural aging plays a role (results suggest that skills peak at about 30; ouch, that one hurts), but even after adjusting for these factors, trends in many countries are down. Social media and video games are suspects, but so are education and training systems that are not working as planned. Are "soft skills" being emphasized over traditional reading, writing, and arithmetic lessons? Does all this make a difference? Most experts would say yes. People who perform best on the test boast wages that are 75% higher than those with the lowest scores.

So where are these trends going to lead us? If all human knowledge is digitized and fed into a large language model, what will be the effect on human intelligence? Will we have the critical thinking skills to recognize deepfakes and to notice when AI starts to hallucinate? Will intellectual property no longer be valuable? Many creative types are already worrying about this, and lawsuits are in the courts. Will citation and attribution in scientific work still be recognized, or will we just want the results from a ChatGPT-type application? Will we all want to work for Google (or OpenAI) because they know everything already? This is getting a little scary. I recommend that all of us investigate "human-in-the-loop" or "human-over-the-loop" systems, as well as checking the data that your large language model is trained on. One example, and there are others, is SPE's own Trey Fleming's iSPAI platform.
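As a concrete, purely hypothetical illustration of what "human-in-the-loop" means in practice: nothing the model produces is published until a person has reviewed, edited, or rejected it. A minimal sketch, with the function and prompts invented for this example:

```python
# A minimal, hypothetical human-in-the-loop gate: the model drafts, a person decides.
def human_in_the_loop(draft: str) -> str | None:
    """Show the AI draft to a reviewer; only approved (or edited) text goes out."""
    print("---- AI DRAFT ----")
    print(draft)
    decision = input("Approve, edit, or reject? [a/e/r]: ").strip().lower()
    if decision == "a":
        return draft                               # publish as-is
    if decision == "e":
        return input("Enter your revised text: ")  # the human rewrite wins
    return None                                    # rejected: nothing is published

# Usage: article = human_in_the_loop(model_output)
```

"Human-over-the-loop" systems relax this slightly: the AI acts on its own, but a person monitors the output and can intervene or roll it back.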

Or maybe we can ask ChatGPT what role it has in mind for its human co-worker. Anyone remember reading Asimov's I, Robot collection growing up? "I, Robot is a fixup collection made up of science fiction short stories by American writer Isaac Asimov. The stories originally appeared in the American magazines Super Science Stories and Astounding Science Fiction between 1940 and 1950 and were then compiled into a single publication by Gnome Press in 1950, in an initial edition of 5,000 copies." That citation is from Wikipedia, since I am from the old school.

Jim, recent experiences make me think there is trouble in AI paradise. I had started to write an article and typed the title into Word: The Dinosaurs of Broomfield County, Colorado. And since Copilot prompted me, I let it run without further constraints. I got a coherent article: a general intro paragraph, middle content, and a closing. It was drawing solely on fossils found at Rock Creek Farm, which I know is actually in Boulder County. And while the text was readable, it certainly was not written in my style. Most importantly, it read like a book report written by a fourth grader, with not much insight into the topic. I also have a theory that I can't prove: that we are seeing an AI-generated recursive knowledge death spiral. LinkedIn frequently sends me questions to comment on, which often come with a proposed structure for how to answer. I have noticed that many replies simply reuse the same words that appeared in the question, which leads me to believe that some people are using AI to respond to an AI-generated question, narrowing the understanding of the problem rather than exploring new aspects.

Najib Abusalbi, PhD

Independent Advisor (retired from SLB 2018)

1 month ago

Thanks for these insightful comments and thoughts. I was and will remain of the opinion that HCAI (human-centered AI), or human-in-the-loop systems, will (or should) dominate our society and work environments. Plus, I still would want to learn another language; total dependence on translating apps is denying us the experience of interacting with other humans!

Roger Nickie

Telecommunications Engineering/Project Management

1 month ago

Hey Jim, was this article written by an AI? There were a lot of grammar errors and some paragraphs didn't read like your normal writing. Just curious. Good questions in this article. For someone like me who haven't (or is it hasn't?) tried any of the AI apps directly (except what is integrated into my iPhone), I am somewhat skeptical about the whole topic. I am also a bit paranoid about the possibilities.
