Recently, the Artificial Intelligence Institute of South Africa and the Department of Communication and Digital Technologies discussed the future of artificial intelligence in South Africa. Specifically, they produced a National AI Government Summit Discussion document covering, among other things, potential regulation strategies, interdisciplinary initiatives between data scientists and policy makers, and ethical guidelines. [1]
While the document serves as a respectable first step, igniting the discussion on the ethical use of large language models and artificial intelligence, its contents are underwhelming. In this article, I briefly summarise the positives and negatives of the fifty-three (53) page document.
Negatives
- Artificial intelligence is defined on page 1 as, “...systems that exhibit intelligent behavior which quickly analyse various activities and specific environments, then make independent decisions with the aim to achieve specific [socio-economic goals] {own emphasis}” (OECD, 2018). The word 'independent' is a poor choice. Artificial intelligence and LLMs (for convenience, herein referred to as "AI/LLM") are not sentient. They require humans for their maintenance, development and use. To regard them as independent may create a lacuna for liability in the future. Who is liable for damages in the event that a crane acting on pre-written programming injures a human worker? Is the program regarded as 'independent' and the programmer free from a damages claim? These questions raise policy concerns for future liability. Additionally, the definition provided is from 2018, when AI/LLM was in its infant stages, and is now outdated. More complete definitions are referenced elsewhere in the document, but this leads to uncertainty, not clarity.
- The document raises valid points about the effect of generative artificial intelligence (which includes ChatGPT) but fails to address the wider implications. Page 10 states that: "GAI tools can produce a wide variety of credible writing in seconds, then respond to a user’s critiques to make the writing more fit for purpose." Why is the writing produced by GAI automatically 'credible'? What 'purpose' does it become more 'fit' for? As I have outlined in my previous articles, 'fit for purpose' is unfortunately taken to mean how efficient and effective the AI/LLM is, not how transformative it is.
- The document ignores one of the most crucial problems with integrating AI/LLM into a nation: AI alignment research. This concept refers to using benchmarks, closed-loop boxes and experiments that assess whether an AI/LLM intends to deceive its users, attempts to escape into the wider world, or has a detrimental effect on the workforce or education as a whole. [2] The document briefly alludes to the concept on page 39: "Risks that AI designed for military purposes “escapes” into the wider world and leads to enhanced criminal behavior or other highly dangerous outcomes." Unfortunately, there is no mention of initiatives that tackle this phenomenon.
- It is stated that South Africa may be inspired by the EU's AI Act. [3] The EU Act fails to address privacy concerns and the extraterritorial selling of data, and has little to no accountability mechanisms against private corporations that conceal their training data. I strongly recommend that AI regulation in South Africa adopt its own approach, sensitive to the culture and history of our own country rather than that of the Global North.
- The document's contents repeatedly emphasise that the goal of AI regulation and implementation should be efficiency and intelligence. Surely the approach should be grassroots: incentivise growth at the poorest, most disadvantaged levels of society in a transformative manner first? Why are the tenets of Western and Eurocentric beliefs (individualism, efficiency, corporate growth) being injected into a sector that already has the inherent potential to widen the economic gap between the privileged and disadvantaged in society?
- Most alarmingly, the document is vague and broad in the risks it identifies. It does not mention that SARS is already using artificial intelligence to facilitate some of its operations, or that various African countries have rich initiatives from which we can learn: Tanzania's Silabu AI, which tutors and mentors students on a one-on-one level, and the concerns of Kenya's OpenAI moderators that reviewing passages submitted by users takes an overwhelming toll on one's mental health, among other examples. [4]
I understand that this is merely the beginning. It is a discussion document intended to galvanize a movement in the right direction. It is entitled to make mistakes, and I do not doubt or take away from the competent abilities of those who wrote it. However, as someone who has been researching artificial intelligence law for several years, I reiterate that South Africa needs better, and it needs it fast. AI/LLMs are advancing far quicker than the law can keep up. We cannot afford to design solutions that respond to the AI/LLMs of five (5) years ago; they need to be precisely worded to incorporate all future capabilities of AI/LLMs as well.
Positives
On a more positive note, I take the time here to point out the passionate and impressive strides the document has made.
- Pages 22 and 25-26 outline several objectives which I agree are important: promotion of South Africa's national character and identity, increased investment in AI research, legislation that caters to local businesses, improved service delivery, job creation and skills development. By incentivising these areas through coordinated task teams, South Africa can place itself on the world stage and proudly voice opinions unique to its context.
- There are specific targets set to be achieved by 2030: namely, twenty thousand (20,000) AI specialists in the workforce, five thousand (5,000) AI experts, and as many as three hundred (300) AI start-ups, among other targets. These are impressive, and if South Africa can achieve them, the effects on our future would be astronomically positive.
- Pages 37-38 point out collaborations with various other AI institutes from around the globe. Importantly, a strategic process is depicted of providing, integrating, foreseeing, developing and operating national research and regulations, alongside orchestrating, fostering, developing and building AI expertise, entrepreneurship and solutions. Assuming this can be implemented practically, both in universities across the country and within law firms that have started their own AI departments, this two-tiered approach is insightful.
- The document also goes on to address the concerns of deepfakes being used to manipulate elections, the potential for discrimination in training data, the widescale alleged copyright abuse by the current forerunners in AI/LLM, and the protection of privacy. It will be interesting to see what follow-ups are made to address these issues, considering that they involve multiple different constitutional rights. What balancing tests will be devised to weigh freedom of speech against free and fair elections? How will discrimination and the violation of privacy be eliminated when the data is not yet open source? I trust the AI Institute and its relevant experts to spearhead the problem.
Concluding remarks
Ultimately, I think this document serves as a positive step in the right direction. Rather than disappearing into stagnation, various private and public actors are acting ambitiously to mitigate the harms of AI/LLM before they become irreversible. I propose that the best possible way to discover the questions that need to be asked, and to speculate on solutions to those questions, is to do the following:
- Host regional moot courts involving South African universities, with a fact complex built around AI/LLM and hypothetical regulations, statistics and clients. By doing this, we ensure students engage now with important material that prepares them for the future. Further, it gives policy makers a taste of what to expect in real courts, and what kinds of policies to avoid.
- Encourage law firms to engage with computer scientists, alignment researchers and legal practitioners in open, publicly recorded forums on the most pressing issues of AI/LLM. This allows citizens across the country to watch and engage with high-level discussions that spark insights, questions and solutions in an otherwise untapped field.
I end by saying that this is the future. Historians will regard what we say and do now as either 'forward-thinking and imaginative' or, more bleakly, 'when our troubles began'. The time for action is now. We must act to shield ourselves from the harms of AI/LLMs, and act bravely to integrate them into our society in a meaningful and transformative manner. We cannot outsource all of our inspiration and talent to international actors; we must look within.
[1] DCDT & AIISA 'AI National Government Summit Discussion Document' (2024).
[2] J Ji & T Qiu 'AI Alignment: A Comprehensive Survey' (2023) arXiv 3.
[3] European Commission 'Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts' (2021).
[4] N Moodley 'How SARS used AI and proactive measures to claw back R210bn this tax year' Daily Maverick (accessed 5 April 2024); Daily News Reporter 'How SILABU app is addressing challenges facing slow learners' Daily News (accessed 5 April 2024); N Rowe '‘It’s destroyed me completely’: Kenyan moderators decry toll of training of AI models' The Guardian (accessed 5 April 2024).