Marking ChatGPT’s One Year Anniversary: Safety and Regulation
Sandeep Sacheti
Elevating leaders and transforming large-scale processes and corporate governance structures with data, design, and domain experts | patent holder of multiple innovations
In my second post celebrating today’s ChatGPT anniversary, I wanted to delve into one of the biggest topics around generative AI – safety and regulation. Almost as soon as ChatGPT became a public tool, we realized that language models tend to make things up due to the probabilistic nature of next-word prediction. The world started calling this phenomenon hallucinations, a term that would have been unthinkable in computer science until last year, mostly as a joke. But hallucination creates real challenges for the use of large language models in critical functions like medicine and law.
History may assuage our fears. By the 1880s, as electricity-based street lighting and factory electrification became commonplace, people started to be harmed by electrocutions. To address the dangers, electrical installation codes were developed. The National Electrical Code was published in the US. Electrical safety guidelines were included in the Factory Law in the UK, and later the ground-fault circuit interrupter (GFCI) was invented at Berkeley, where I studied, to address a major cause of electrocution in commercial and domestic settings. Today, all buildings go through inspections before anyone can live in them, protecting life, commerce and the home. For most people, electricity has become so safe that we hardly think about protection but about aesthetics, lighting, décor and the mood of the room.
Similarly, governments, regulatory bodies, scientists, companies and the public are realizing the risks of GenAI. Laws, guidelines and best practices are starting to form. The European Union’s AI Act proposes that AI systems be analyzed and classified according to the risk they pose to users. The US, UK and many other jurisdictions are considering additional measures. Forward-looking companies are developing AI frameworks to create transparency and trust with stakeholders. Moreover, language models are also advancing at a rapid clip, with models getting smarter. One of the exciting techniques is for LLMs to call tools (internet browser, Python code, calculator, etc.) just like humans would when a special tool is required, rather than trying to do complex math in their head.
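To make the tool-calling idea concrete, here is a minimal sketch of the pattern in Python. All names here (`ToolCall`, `calculator`, `run_tool_call`) are hypothetical illustrations, not any vendor’s actual API: the model emits a structured request naming a tool, and the host application executes it deterministically instead of letting the model guess at the digits.

```python
# A minimal sketch of the "tool calling" pattern: rather than asking a
# language model to do arithmetic "in its head", the model emits a
# structured request naming a tool, and the host executes it exactly.
# All names below are hypothetical, for illustration only.

from dataclasses import dataclass


@dataclass
class ToolCall:
    """A structured request the model emits instead of a free-text answer."""
    tool: str
    argument: str


def calculator(expression: str) -> str:
    # Evaluate a simple arithmetic expression; reject anything that is
    # not plain arithmetic characters before calling eval().
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported expression")
    return str(eval(expression))  # safe here: input restricted to arithmetic


# Registry mapping tool names to host-side functions.
TOOLS = {"calculator": calculator}


def run_tool_call(call: ToolCall) -> str:
    """Dispatch a model-emitted tool call to the matching host function."""
    return TOOLS[call.tool](call.argument)


# Imagine the model, asked "What is 1234 * 5678?", responds with this
# structured call instead of a guessed number:
call = ToolCall(tool="calculator", argument="1234 * 5678")
print(run_tool_call(call))  # exact arithmetic, no hallucinated digits
```

The design point is the separation of concerns: the model only decides *which* tool to invoke and with what argument, while the host guarantees the computation itself is correct.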
Like electricity regulation, with guidelines, laws and certifications, the power of generative AI, and AI in general, will become safer. What do you think? Will laws and regulations move fast enough? Or will they smother innovation?
#WoltersKluwer #Berkeley #OpenAI #ChatGPT #hallucinations #GenAI #safety #regulation #AI