Corralling & Controlling ChatGPT

It had to happen – but how embarrassing that it was a lawyer who did it.

A New York lawyer is facing disciplinary proceedings for using generative AI to do legal research that threw up bogus cases. Steven Schwartz of Levidow, Levidow & Oberman admitted to using ChatGPT for a brief citing six non-existent court decisions in a personal injury case against Avianca Airlines. The attorney’s nightmare was created by a phenomenon known as “hallucinations”, where an AI reaches conclusions based on data it has dreamed up.

It is easy to blame the lawyer for cutting corners, and he clearly failed to live up to his professional responsibilities. But there is also a compelling case for businesses to rely increasingly on AI for its faster and, once the hallucinations are resolved, presumably more accurate results. Customers will expect to pay less for goods and services where AI deployment yields cheaper solutions.

Corralling AI’s Capabilities

AI is clearly here for the long haul, and businesses that do not learn to reap its benefits will become obsolete, much like a courier company with a sentimental attachment to horse-drawn carriages. While it still has teething problems, the day is not far off when AI will attain at least as high a rate of accuracy as the average second-year associate in a services company (with fewer spelling errors and zero complaints about work-life balance). Even if it does not, other software will be developed to correct the work of the first chatbot. That is the inexorable march of technology.

As AI evolves, human workers will have to as well.

Backbreaking tasks like trawling through voluminous data will sit squarely in technology’s domain. It will outperform humans in assignments informed by historical information, whether these are research briefs, opinions or marketing and news copy. Where technology is not yet adept is in interfacing with people – innately complex, unpredictable and not always rational – and in intelligently forecasting future human behaviour.

Businesses and, in particular, service providers would do well to focus their training on other areas. Productivity savings can be applied to equipping human workers with better people skills and enhanced critical thinking. Because technology looks backwards, humans will have to be forward thinkers, using validated AI-generated data while integrating an understanding of how human beings think and respond, to deliver a more sophisticated future-facing solution.

This is going to be an especially big challenge for the Singaporean worker, the product of an education system that traditionally tests the accuracy of examination answers rather than skills like analysis or persuasion. It is easier to score well in science-based subjects than in the humanities, so our top students tend to be science-trained, which means that many of our C-suite leaders – in business and elsewhere – have the very skills that technology is best at. And because human psychology predisposes decision-makers to promote and value their own abilities, a vicious circle results.

So the way we train our future workforce must change. Without detracting from core subjects, vocational education will also have to equip students with skills that enable them to leverage AI while focusing on what it cannot do well. Journalists’ value will lie in their commentary, not straight news reporting; lawyers’ in crafting solutions rather than expounding what the law is. Problem-solving, critical thinking and people skills will be prized by employers. Educators will have to teach students not only to arrive at correct answers but also to tackle questions that have no single accurate response. A constantly questioning mind will replace “meticulous and diligent” on CVs.

Controlling AI

But experts worry that AI poses a threat far graver than the loss of our jobs: an existential one.

In March, more than 1,000 technology leaders, researchers and others who work in AI signed an open letter warning that it presents “profound risks to society and humanity.” One of its signatories is Geoffrey Hinton, widely seen as the godfather of AI for his pioneering work on the neural networks that allow machines to learn. He was joined by Yoshua Bengio, who won the Turing Award alongside Hinton and Yann LeCun in 2018 for their work in AI.

Their concerns centre on the unknowns around AI: how it might, perhaps sooner than we think, overtake humanity, and the risks that prospect poses. But even before that dystopic day arrives, we are already facing serious dangers from AI.

Disinformation is the first concern. As AI becomes more widely relied upon, bad actors will have the opportunity to spread false narratives and propaganda. Historian and bestselling author Yuval Noah Harari warned in a recent Economist essay that “people may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion”, referring to fears and loathings created and nurtured by machines. As Hinton put it in remarks reported by the Guardian, “don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians.”

A decade ago, the Arab Spring uprisings were facilitated by viral social media posts. Imagine if those posts had contained inaccurate information or, worse, had been planted by bad actors intent on fomenting revolution.

AI also has the potential to do great harm to individuals. AI-generated fake videos are becoming more common and convincing. Deepfakes have been used to put women’s faces, without their consent, into pornographic videos, in a “depraved AI spin on the humiliating practice of revenge porn, with deepfake videos appearing so real it can be hard for female victims to deny it isn’t [sic] really them”, reports CNN. The technology will soon allow anyone with a computer to create a deepfake from just a few photos. The potential for harm is limitless, from discrediting politicians to sway an election, to extortion and simple revenge porn.

Unfortunately, these harms now overwhelmingly target women, meaning gender equality could be set back by generations. As Danielle Citron, a professor of law at Boston University, puts it: “Deepfake technology is being weaponised against women.” If the physical world is becoming an increasingly equal place for women, the internet represents the Dark Ages for gender equity.

Hidden bias is another concern. AI trawls the internet and produces commentary based on the information available to it. Unlike a Google search, where the user evaluates the veracity of the articles and posts the engine throws up, ChatGPT confidently presents its commentary as fact. Bias (whether conscious or not) could enter the system in two ways: first, in the programming of the large language model, and second, through bias embedded in the data it is trained on.
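To make that second channel concrete, here is a deliberately tiny, hypothetical sketch in Python. It is a toy frequency model with an invented corpus, not how any production chatbot actually works, but it shows the dynamic: when the training examples are skewed, the model’s single confident answer is skewed too.

```python
# A minimal, hypothetical sketch: a toy next-word "model" that simply
# mirrors word frequencies in its training corpus. If the corpus is
# skewed, the model's confident output is skewed too -- the second
# bias channel described above.
from collections import Counter

# Toy training corpus: invented sentences with an occupational gender skew.
corpus = [
    "the nurse said she was tired",
    "the nurse said she would help",
    "the nurse said she had arrived",
    "the nurse said he was tired",
    "the engineer said he was late",
]

def next_word(prompt: str) -> str:
    """Return the most frequent word following `prompt` in the corpus."""
    continuations = Counter()
    for sentence in corpus:
        if sentence.startswith(prompt + " "):
            continuations[sentence[len(prompt) + 1:].split()[0]] += 1
    # Like a chatbot, this returns one confident answer with no caveats,
    # even though it is only echoing the majority pattern in the data.
    word, _ = continuations.most_common(1)[0]
    return word

print(next_word("the nurse said"))     # -> "she" (3 of 4 examples)
print(next_word("the engineer said"))  # -> "he"  (the only example)
```

The toy model never lies deliberately; it simply reproduces, with unearned confidence, whatever pattern dominates its data. Scale that up to a model trained on the whole internet and the bias becomes both harder to see and harder to correct.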

AI could steadily destroy what we think of as truth and fact. Experts are concerned that people will rely on these systems for medical advice, emotional support and the raw information they use to make decisions.

Given these varied and serious risks, coordinated and decisive action must be taken at the governmental level to address them. Just as drugs and nuclear weapons are regulated and human cloning is banned, some form of control over the development of AI must be exercised. Clearly, action must be taken at a global level, to the extent possible, given the internet’s cross-jurisdictional reach.

Such boundaries, experts argue, could include limiting AI companies’ access to computing power, withholding sensitive information, and licensing developers. Large businesses are already requiring vendors to confirm how AI is used in the provision of goods and services, but that only addresses the downstream symptoms; governments and regulators must tackle the risks at their source. Even Sam Altman, chief executive of ChatGPT’s creator OpenAI, said in his testimony before the US Senate that the risks are serious enough to warrant government intervention.

The possible benefits of AI are legion, but its potential for harm in all areas of human existence is incalculable. The problem with technology is that human beings have come to accept its advancement as inevitable. While we still have the ability to control the development of AI, let us act to harness it and set some boundaries, before we no longer can.
