AI and You

Welcome to a companion article to the AI and You podcast, where we explore the implications of the ever-evolving world of artificial intelligence and its applications.

Introduction to ChatGPT

ChatGPT is a generative AI technology with the potential to disrupt a number of industries, from generating written content to transforming the way we do our jobs.

Back in 2019, a non-profit research group called OpenAI created a software program called GPT-2 that could generate paragraphs of coherent text and perform rudimentary reading comprehension and analysis without task-specific instruction. OpenAI initially decided not to make its creation fully available to the public, out of fear that people with malicious intent could use it to generate massive amounts of disinformation and propaganda. Fast forward three years to late 2022 and the public release of ChatGPT (built on GPT-3.5), and now, in the past week or so, the new GPT-4: the capabilities have increased dramatically.

How ChatGPT Works

The first thing to explain is what ChatGPT is always fundamentally trying to do: produce a “reasonable continuation” of whatever text it has been given so far, where by “reasonable” we mean what one might expect someone to write, judging from what people have written on billions of webpages and in other accessible text such as digitised books and Wikipedia.

At the core of ChatGPT is what has been termed a “large language model”: a model that has been trained by example to estimate the probabilities with which word sequences occur. The remarkable thing is that the underlying structure of ChatGPT is sufficient to compute next-word probabilities well enough, and fast enough, to produce reasonable, coherent text.
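To make the idea of next-word probabilities concrete, here is a minimal sketch in Python using the openly available GPT-2 model from the Hugging Face transformers library. GPT-2 is an earlier, much smaller model than the one behind ChatGPT (whose weights are not public), and the prompt is purely illustrative, but it shows the same basic mechanism: score every possible next token and favour the most probable continuations.

```python
# A minimal sketch of next-word probability estimation, using the openly
# available GPT-2 model from the Hugging Face "transformers" library.
# This illustrates the general idea only - it is not ChatGPT itself.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The best thing about artificial intelligence is its ability to"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # a score for every vocabulary token, at every position

# Convert the scores at the final position into probabilities for the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Print the five continuations the model considers most "reasonable".
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob.item():.3f}")
```

Repeating this step, feeding each chosen word back in and asking for the next one, is essentially how such a model strings together whole paragraphs of text.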

ChatGPT Issues

Whilst it is impressive how human-like the results are, OpenAI itself doesn’t hold back about the new model’s potential to cause damage: “While less capable than humans in many real-world scenarios, GPT-4's capabilities and limitations create significant and novel safety challenges.”

There is now robust and heightened debate about whether the tech companies producing these so-called chatbots are being irresponsible by putting such powerful technology into the public domain despite its proven flaws and drawbacks. This leaves all of us in an extremely poor position to predict the consequences for society: we have no idea what is in the training set and no way of anticipating which problems the technology will handle well and which it will not.

There is also increasing debate about the carbon footprint of the generative AI boom, with concerns about the power and water consumed, as well as the cost of the computing resources required to train and operate these sophisticated software systems.

It should also be noted that the big players in social media, together with the big IT companies, are the leaders and major investors in artificial intelligence. Witness Microsoft's recent $10 billion investment in OpenAI, gaining more control over how the latest model, GPT-4, will be used, and possibly by whom it will, and will not, be used.

The disbanding of the AI ethics groups within some big tech companies is cause for concern. It raises questions as to whether the tech industry can be trusted to self-regulate when it comes to AI ethics and safety, and underlines why government regulation is urgently needed. An early sign that the need for regulation is being recognised comes from the U.S. government warning companies not to exaggerate their AI claims or face consequences. This is an inflection point. History teaches us that powerful new technologies like this can and will be used for good and for bad. How responsible and transparent will the tech companies that develop them be? How quickly will governments establish legal guardrails to guide the technological developments and prevent their misuse?

These are some of the questions that need to be asked – and answered: What are some of the ethical implications of releasing such powerful artificial intelligence tools? How can governments create legal guardrails to guide technological developments and prevent their misuse? What steps can companies take to ensure their products are used responsibly and for the greater good? What are the environmental implications of the generative AI boom?

Conclusion

ChatGPT has transformed the way in which written content (and, very soon, image content) can be created. While the technology has its flaws and limitations, with appropriate regulation and responsible development it can be used for the common good.

Mira Maralova

Investor-ready financial models and pitch decks that secure funding

1y

David, insightful! Especially about the carbon footprint of the gen AI - would be interesting to learn more on this.

Louise Madeley

Business Owner, Physical and Mental Health First Aid trainer, Nurse Practitioner at Madeleys First Aid +, BNI Achievers

1y

Great article David F George !!
