LLMs aka Limited Liability Models?
Generative AI is all the rage. Not a day passes without new facts and opinions about the usefulness, or otherwise, of this technology. While Large Language Models like ChatGPT and Bard have captured the most attention, there are others like DALL-E. There is also criticism of some of these tools, and questions are being raised about whether we know where we are going, or rather, where generative AI is taking us.
In my view, this is a healthy debate. Issues around the use of generative AI technologies are not settled yet. To understand the issues and the opinions, let's use a simple 2×2 matrix, with one dimension being the technology (with two levels: utopia and dystopia) and the other being the impact on humanity (again with two levels: limited vs widespread). These two dimensions have been chosen because they are the ones on which the debate seems predominantly centred, but they are by no means holistic (as we will see later). Moreover, using only two dimensions helps to illustrate a few points about this debate.
Let's look at the four scenarios that this conceptualization leads to.
1. At the bottom right, the combination is Destructive Technology but Low Impact
2. At the bottom left, the combination is Destructive Technology with Widespread Impact
3. At the top left, the combination is Supportive Technology with Widespread Impact
4. At the top right, the combination is Supportive Technology with Low Impact
Scenario 1:
This represents the scenario where the technology may be potentially destructive, but its impact may be contained. Experts who believe this scenario will materialize advocate controlled experimentation. An example is the statement released by G7 ministers that risk-based regulation of AI was necessary to preserve an open and enabling environment for the development of AI technologies. I would also place the letter signed by thousands of eminent personalities, calling for a six-month moratorium on generative AI technologies until safety protocols are put into place, partially in this scenario (and partially in Scenario 2). How much training and testing of a technology to put safeguards in place is possible in six months? Is it sufficient time to initiate a debate? To me, it seems the letter has been successful in initiating that debate.
In this scenario, there is potential tech dystopia with limited potential for human dystopia.
Scenario 2:
This represents the scenario where the technology will prove destructive and have widespread impact. This is the dystopian, doomsday, Terminator-like scenario. Experts who believe that we do not have control over AI will advocate killing it before it acquires self-evolving capabilities. While they acknowledge that the probability of this scenario coming to pass is low, the potential impact is exponential if it materializes. They will advocate that the technology should include a provision to shut it down in case it turns rogue and enslaves us. Inventors of the technology may feel remorse over the use cases it is, or may be, adopted for. The Future of Life letter signed by thousands of eminent personalities (also mentioned in Scenario 1) would be an example of views representing this scenario, were it not for the six-month moratorium it proposes.
This scenario represents full-blown tech dystopia coupled with human dystopia.
Scenario 3:
This represents the scenario where the technology is helpful to mankind and there is widespread adoption. This will lead to increased productivity and hence mass unemployment, as work done by 'intelligent' humans no longer needs humans to get the job done; a software bot can do the job just as well, if not better. Goldman Sachs's research report says generative AI will bring sweeping changes to the global economy, with up to 300 million jobs exposed to automation.
In my opinion, this is the scenario to which we need to pay the most attention. For example, the entire debate about the impact of generative AI is top-down. It would be good to see some studies with a bottom-up approach that talk to people on the ground about how they expect their jobs to transform, or how they would like their jobs to transform, once generative AI is widely adopted. Governments that are starting to think about the future of work, reskilling, four-day work weeks and social security nets are thinking ahead. Contingency plans need to be created on a war footing if this scenario plays out.
This is tech utopia bundled with human dystopia, and it has the potential to take us by surprise due to the upheaval it creates. Tech leaders will advocate adoption; policy experts may advise caution.
Scenario 4:
This scenario results in the best case for mankind, where the technology is helpful but has limited impact on the world in terms of employment patterns or predictions of mass unemployment. Experts who think this scenario is likely to pan out will encourage the progress of the technology so that we can enjoy the gains in productivity. Examples cited will be the wheel, the car, computerization, etc. They will also say that there is nothing to fear from the machines, and that pausing AI will prove counterproductive. The introduction of new technology did not make mankind redundant; populations have grown and thrived. Many may feel that generative AI is just the next hype cycle, which will subside on its own and has limited potential for upheaval.
This is tech utopia with limited potential for human dystopia.
Pure technological solutions to these issues will prove insufficient. Key decision makers have to start thinking in terms of ethics, HR policies, and government/public policy. These factors are not adequately captured in the simple 2×2 matrix.
Many experts may claim that the scenarios foreseen have not materialized yet, and that action will be taken if they do start manifesting. The picture this paints is one of assuming limited responsibility now for the impact of these generative AI models, or for the direction the technology may take in the future: a Limited Liability approach to the adoption of these Models, where no responsibility is taken for risks that have not yet manifested. Acting later may be a luxury the world cannot afford, yet stopping AI may mean forgoing growth and development. Another related question is the degree of responsibility to be assumed by the makers of a model, and how much by the user who puts a potentially dual-purpose model to nefarious use. The ethics and legality of several such scenarios are unknown. There are no easy answers for us. Open and honest conversations are key. There is a need for a new Operating Model for GenAI.
Read more on AI at The Right Shift