Debunking Musk and co’s concerns over AI: Why pausing development is not the answer.


Ever since ChatGPT dropped into our unassuming laps in November of last year, the world has never been the same. The large language model behind OpenAI’s chatbot is leaps and bounds ahead of anything we have experienced before. Indeed, what it can do (and what still needs refining) has made ChatGPT a symbol of the AI Revolution quickly unfolding before us.


That has led to welcome, though long-overdue, discussions about what the full impact of that revolution will mean for humanity. Estimates of the number of jobs that will be displaced by AI vary wildly, from 85 million to as high as 800 million, though most economists predict that AI will ultimately create more jobs than it destroys. Other concerns relate to privacy, whether AI systems will help address racial and ethnic inequity or make things worse, and whether AI robots can be trusted to engage safely with humans on an ongoing basis, given that they can only emulate human behaviors and feelings and cannot be said to share the human experience. Some worry that AI will make humans obsolete or rob them of their sense of purpose, while others express great concern about what will happen to our culture if much of human art and literature is replaced by machines that are designed not so much to innovate and defy convention as to detect and replicate patterns.


Now Elon Musk, one of the original founders of OpenAI (he left its board in 2018), and other tech leaders, such as Apple co-founder Steve Wozniak, have signed an open letter calling for a six-month pause on the development of all AI systems more powerful than GPT-4.


GPT-4 is the newest version of OpenAI's language model systems, released this March. It is far more powerful than its predecessors: it can process more inputs, including images, and produce more complex and nuanced responses than ChatGPT could before it. It is so impressive that it can take a hand-drawn sketch and convert it into code for a functioning website. We’ve truly entered groundbreaking territory here.


In light of this, one can see why Musk and his peers are so alarmed.


Here are some concerns raised by their letter:


  • “Should we let machines flood our information channels with propaganda and untruth?”
  • “Should we automate away all the jobs, including the fulfilling ones?”
  • “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”
  • “Should we risk [the] loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”


Let’s try to unpack these points.


“Should we let machines flood our information channels with propaganda and untruth?”


While it is true that AI can generate far more believable fake media, be it text, images or audio, propaganda and untruth have always been part of the internet. Just because the lie is more believable does not absolve us of the duty to do our own research as critical thinkers in a world of misinformation.


Nonetheless, social media and the speed and efficiency with which misinformation spreads remain problems that have yet to be seriously addressed, despite numerous calls for tech companies to do so. Since many of the investors, technologists and companies who helped create today’s Internet are the same ones developing and launching AI technologies, I think it is smart to be skeptical that propaganda and misinformation will be addressed in a reasonable manner anytime soon.


At the same time, technology is merely a tool, and therefore neutral. If we genuinely want to tackle misinformation, AI could finally give us the upper hand against it. But the will must be there.


“Should we automate away all the jobs, including the fulfilling ones?”


A lot of people lost their jobs during the Industrial Revolution. Did this stop machines from taking over production and manufacturing?


Advancement and progress are unstoppable facts of life, and a reality of the human condition. We are constantly bettering ourselves and our lives, and making our habitats more hospitable. It’s human nature.


Most economists agree that in the long run, AI will create more jobs than it replaces, and that by 2030 it will add an estimated $15.7 trillion to global GDP, a 26% increase.


What has yet to be addressed, and what was missing both from the Future of Life Institute’s open letter and from conversations with AI proponents, is how businesses and governments will handle job displacement in the short term. How and when will we begin retraining and upskilling displaced workers? Will retraining be rolled out in phases or done all at once? Who will pay for it, and where will it take place?


While AI may indeed be the ideal solution for eliminating unfulfilling, automatable jobs, we must bear in mind that most people do not take or stay in unfulfilling jobs by choice. Community colleges, universities, federal and state government agencies and businesses MUST all come together to address the question of retooling and upskilling workers, particularly given estimates that as much as 50 per cent of the US workforce will need to be retrained. We need to ensure that some public or private entity is accountable for giving human workers the support and help they need so they don’t get left behind in this brave new world.


“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”


It goes without saying that science fiction and dystopian tales have had a major influence on public opinion of AI. But what most don’t realise is that AI is usually depicted as a character within these stories precisely so that readers or viewers can relate to it, whether positively or negatively. In truth, anthropomorphizing AI beyond the realm of fiction is a slippery slope.


OpenAI CEO Sam Altman talks about this in an insightful interview with ABC News. It is easy to project human characteristics onto AI just because an algorithm-powered conversation tricks our minds into connecting emotionally with it. At the end of the day, however, ChatGPT and other chatbots are just large language models (LLMs) that try to respond coherently to their users’ queries. Could there one day be an AI with human-like traits and sentience? Altman doesn’t dismiss the possibility in the far future, for systems with different architecture and programming, but as it currently stands, GPT-4 and its LLM peers are not designed to evolve in that direction.


“Should we risk [the] loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”


Among the points raised in the letter, this one perhaps holds the most credence. History has taught us how unfettered tech leaders tend to operate (proponents of the gig economy at Uber and Lyft are a recent example), and what the outcome of their unrestrained pursuit of profit and growth looks like.


Should the ethics and legal framework surrounding AI development be left to tech execs? Certainly not. Altman discussed this in the aforementioned interview, where he said that one course of action could see world governments coming together to draft a unified piece of legislation that sets the moral and ethical frameworks for large language models like ChatGPT to follow. However, this isn’t a foolproof plan either, as many regulators tend to reflexively make decisions based on ignorance or fear. History has plenty of examples of this, from TV to social media and beyond. The innovators in question need to provide full visibility and awareness about their tech and its implications, both to the public and to regulators, in order for a proper, well-laid-out framework to be drafted that will ensure sustained innovation while protecting the public interest.


Because AI will impact all aspects of our lives, cross national borders, and be used in ways we haven’t even envisioned yet, I agree that AI needs to be regulated, and on a global level. Already, we are seeing the makings of a technological AI race, drawing parallels to the nuclear arms race of the past, and such a rivalry needs to be regulated and managed properly lest it get out of hand.


I also believe humanity would be best served by not leaving so important a matter to AI innovators or lawmakers alone. There needs to be proactive conversation and dialogue drawing on a wide range of stakeholders, including business ethicists, academics, neuro-linguistic experts, legal professionals, mental health professionals, authors and artists, business anthropologists, social justice activists and consumers, to name just a few. Developing such a forum will take time, and its work certainly cannot be completed in six months.


While I support the spirit of the open letter, I also know that the topic of AI is immense and highly complex. Thus, a call for a six-month moratorium is not only disingenuous, it’s unrealistic, and its signers are surely aware that we can’t address all the issues that need to be resolved in the proposed time frame. As such, it makes me wonder about the true motivation behind the letter.


What I do know is that we need to keep talking about both the long-term and short-term effects of AI on humanity, and put pressure on both global governing bodies and AI innovators to develop a set of tenable, practical but meaningful rules and limitations, so that humans and machines can co-exist positively, working side by side. I am a strong believer in the power of technology, and AI is one of the greatest innovations we could have created. It will allow us to automate the menial so that we can focus on the meaningful: creativity and human connection, the true fundamentals of the human experience.



ABOUT THE AUTHOR, BRIAR PRESTIDGE:

Based between New York and Dubai, Briar Prestidge is a serial entrepreneur, CEO, trusted advisor and author.

Briar founded Prestidge Group, a leading personal branding, PR and speaker relations agency managing HNWIs, C-level executives, tech experts, celebrities, government officials and investors.

Her corporate power suit collection, BRIAR PRESTIDGE - The Label, is a premium line made from fine wool and cashmere. At the world’s first Metaverse Fashion Week, the label dropped its spin-off NFT power suits for avatars on Decentraland.

Briar sits on the Advisory Board of the Metaverse Fashion Council, and she is also a Strategic Advisor for Scopernia, which specializes in future-proofing organizations by preparing them for Web3. In 2022, she produced the documentary '48 Hours in the Metaverse', in which she travelled to 33 virtual worlds and interviewed 21 metaverse experts and VR builders.

Briar has been featured in Forbes, Entrepreneur, Emirates Woman, Grazia, The National, and Marie Claire (among others) in recognition of her work.
