Can AI developments lead to another “Hiroshima and Nagasaki”? Can we actually anticipate the moral scruples now?
Have you noticed the month flying by? Did you see “Barbie” and “Oppenheimer”? How is it even possible that two seemingly completely different movies got an almost equal amount of love and attention from the public? I want to ask so many questions about both movies and answer none of them. However, I must admit that the scientific appeal of “Oppenheimer” is closer to my current area of interest here. In a way, Oppenheimer’s “moral scruples” about the Manhattan Project resonated with me, drawing parallels between the invention of the atomic bomb and artificial intelligence (AI), as well as their potential impact on the future of humanity. It also sent me back to a topic I discussed earlier in my articles – the “evil side” of innovation. Can you see how that is possible? Let us unpack more questions and contemplate the power of innovative solutions, public perception, and the impact of the changes brought by such innovations.
In case you managed to avoid the hype around the “Oppenheimer” movie, I do not want to spoil it. Evidently, however, it is about one of the most influential physicists of the 20th century – J. Robert Oppenheimer – and the Manhattan Project that he directed, which aimed to develop an atomic bomb in an attempt to end World War II. So, in a nutshell, this is the backstory of the Hiroshima and Nagasaki bombings in 1945. Some call it a tragedy; for others it could be perceived as a success that brought an end to the war. One thing is for certain – it was a revolutionary solution that changed the history of humanity forever.
Without going into too much detail on the statistics, I only want to highlight that the real impact in terms of lives lost and damaged is still unknown to this day. To be clear, I do not mean only the official death tolls, but also the people who were poisoned by radiation and the mutations passed on to later generations. Within just four years of the bombing, the death toll in Hiroshima had doubled (https://thebulletin.org/2020/08/counting-the-dead-at-hiroshima-and-nagasaki/). One interesting side note: in one of the documents presented in that Bulletin of the Atomic Scientists piece (the link above), the Joint Commission describes the “peculiarities of the Japanese administrative methods” as having “no passion for accuracy”. Bear in mind, these cities were very well developed at the time. Allow me some generalisation just this once, but by the 1970s meticulous Japanese attention to detail (think of the Just-In-Time (JIT) concept) had turned all heads towards Japanese culture, and specifically towards the Toyota car manufacturer. How quickly things can change, right?!
Back to the impact discussion. What we need to understand here is that, much like an epidemic, an atomic bomb has effects – through radiation – that harm people in many ways over the long term, so it is not enough to look at the number of deaths and casualties “on the day”; the impact must be monitored over decades, or potentially centuries (?). Radiation also affects territory far wider than the radius of the explosion itself. Moreover, there is no simple answer to how long the residual radioactivity stays dangerous, or how it actually affects flora and fauna in general. And objective statistics are rarely disclosed to the general public, as this might affect the status quo of authorities and people in power. Indeed, there is still so much mystery around the story of Hiroshima and Nagasaki, presenting a vast range of subjects for research, education and, well, now even entertainment (?).
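As a small technical aside, at least the decay part of the “how long does it stay active?” question can be sketched with a simple formula: residual radioactivity falls off exponentially, with a half-life that varies enormously between isotopes. The minimal Python sketch below is purely illustrative – the two isotopes are commonly cited fission products, and the “fraction remaining” is relative to an arbitrary starting point, not measured Hiroshima or Nagasaki data:

```python
# Illustrative only: exponential decay of residual activity for two
# commonly discussed fission products. The starting activity is an
# arbitrary reference point, not a measurement from Hiroshima or Nagasaki.

HALF_LIVES_YEARS = {
    "iodine-131": 8 / 365,   # roughly 8 days
    "caesium-137": 30.1,     # roughly 30 years
}

def remaining_fraction(half_life_years: float, years_elapsed: float) -> float:
    """Fraction of the original activity left after a given time."""
    return 0.5 ** (years_elapsed / half_life_years)

if __name__ == "__main__":
    for isotope, half_life in HALF_LIVES_YEARS.items():
        for years in (1, 10, 50, 100):
            frac = remaining_fraction(half_life, years)
            print(f"{isotope}: after {years:>3} years, {frac:.2e} of the activity remains")
```

Even this toy calculation shows why there is no single answer: one isotope is essentially gone within a year, while the other still lingers decades later, and the biological and ecological consequences are a separate, far murkier question.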
Without too much dramatism, though, could we draw parallels between the effects of the atomic bomb and AI, or would that be going a bit too far? Just as the Manhattan Project had a long history of developments leading up to it, the same applies to AI: there is no single event or project that created what we now broadly call “artificial intelligence”, or “AI”. By now it exists in many shapes and forms, having gone through design evolution and numerous cycles of development. Even though we are nowhere close to realising the full potential of AI, it is clear that we needed to start asking the relevant questions “yesterday”, thinking about policies, ethics and potential impacts. Just as with the effects of atomic weapons, the whole world is now being shaken by the AI revolution. In the past month the volume of media coverage on this topic has increased dramatically, shouting about how big businesses and governments are chasing cutting-edge technologies to fulfil their own ambitious objectives.
For example, the UK government is investing billions of pounds into AI-related research and development, believing that AI will not disrupt current employment markets but will rather help with monotonous jobs like data input (https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper). One particularly interesting sentence from this White Paper caught my eye – “To ensure we [the UK] become an AI superpower, though, it is crucial that we do all we can to create the right environment to harness the benefits of AI and remain at the forefront of technological developments”. Now, do you remember how the story about “superpower” countries (that is, the fight to be, or to be called, a “superpower country”) usually goes? Well, this month the UK invested £100m of taxpayers’ money into AI chips in an attempt to regain leadership in the AI superpower race, a race that includes participants from both the public and private sectors. Does this remind you more of a gambling game or of a strategic investment? If the latter, then as a taxpayer myself I would like to know the estimated ROI or NPV, or at least a payback period (a rough sketch of what such a calculation would look like follows below). The point being, the money goes into various projects that the public has limited information about, with no clear picture of the bigger goal apart from the UK becoming the “AI superpower” (which I do not even know the definition of – the country that has the most chips, or the one that develops the biggest weapon using AI chips?). Despite the loud, omnipresent headlines on this topic, the level of secrecy around the actual plans and AI-related projects is high. Is this still about helping the NHS and giving back to the public, or is there something we are not aware of, yet again?

Possibly I could dive deeper into this topic later, as it would be interesting to explore how many people in the UK are in the category of “likely” to be out of their jobs, adding to the current levels of unemployment, which are even higher in remote locations, even in this developed country. For now, it seems that researchers and people in the industry might indeed become more engaged, whilst people whose primary, and possibly only, occupation relates to physical activities would be required to re-qualify (would the government provide for that?) or… (I am not even sure what other options there are at this point, and I really do not want to come across as a pessimist here, so I would rather leave this space blank). Can you see how this could be part of the “AI superpower” definition? In any case, just as with Hiroshima and Nagasaki after the atomic bombings, in the UK’s chase of AI dreams the number of “casualties” is not known, is hard to predict and is likely to increase over time. Sadly, I personally know people who just do not naturally get along with any type of technology; my attempts at any knowledge transfer failed miserably. I believe this to be normal, though, in the sense that all people are different and bring value to society in different ways; this contributes to the overall diversity (or so it used to be?). Do you think that in the next 20-50 years people will need to start learning the Python programming language, just like they learn English, for example?
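To make the ROI question concrete, here is a minimal, purely illustrative Python sketch of how an NPV and payback-period estimate would be put together. The £100m upfront figure comes from the news coverage mentioned above, but the yearly benefit cash flows and the discount rate are invented assumptions of mine, since no such figures have been published:

```python
# Purely illustrative: NPV and payback period for a hypothetical public
# AI investment. The cash flows and discount rate below are invented
# assumptions, NOT published government figures.

INITIAL_INVESTMENT = 100_000_000          # the reported £100m chip investment
ASSUMED_YEARLY_BENEFITS = [10_000_000, 20_000_000, 30_000_000, 40_000_000, 50_000_000]
DISCOUNT_RATE = 0.05                      # assumed 5% discount rate

def npv(rate: float, initial: float, cash_flows: list[float]) -> float:
    """Net present value: discounted future benefits minus the upfront cost."""
    return -initial + sum(cf / (1 + rate) ** year
                          for year, cf in enumerate(cash_flows, start=1))

def payback_period(initial: float, cash_flows: list[float]) -> float | None:
    """Years until cumulative (undiscounted) benefits cover the cost, or None."""
    cumulative = 0.0
    for year, cf in enumerate(cash_flows, start=1):
        cumulative += cf
        if cumulative >= initial:
            # interpolate within the year for a fractional answer
            return year - (cumulative - initial) / cf
    return None

if __name__ == "__main__":
    value = npv(DISCOUNT_RATE, INITIAL_INVESTMENT, ASSUMED_YEARLY_BENEFITS)
    print(f"NPV at {DISCOUNT_RATE:.0%}: £{value:,.0f}")
    print(f"Payback period: {payback_period(INITIAL_INVESTMENT, ASSUMED_YEARLY_BENEFITS)} years")
```

Even with toy numbers like these, the point stands: none of the inputs that would make such a calculation meaningful (expected benefits, their timing, the discount rate) have been shared with the taxpayers funding the investment.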
Finally, without diving deep into the technical and chemical structure of AI chips, and relying only on my knowledge of existing technologies, I must admit that I am already concerned about the health hazards and the safe disposal of the hardware. This is not covered in the White Paper from the UK government, and very limited information is available to me as a representative of the general public. If you have a curious mind like mine, please go ahead and do a little research to see for yourself what AI chips are made of and how their parts can be disposed of safely (or whether they can be at all). Even better, if you have a specialist around you, ask them the key questions: how long do AI chips last, and what happens after the end of their useful life? With that, I can almost guarantee there won’t be any doubt that humanity really is diving into the dangerous ocean of unknown unknowns, in terms of qualitative research on the potential impact of current AI-related initiatives. And this is the ocean we have to cross in order to pass from theory to practice, the way the scientists of the Manhattan Project did.
Overall, I really want to believe that I might find the answers to all, or at least most, of my questions and concerns in this and upcoming white papers from the UK government and other responsible organisations. There is definitely a need for more detail on specific AI-related projects, finances, public education and so on. Taking into account that, realistically, there is limited certainty about the future impacts of AI, the general public needs more hand-holding while facing this incredible, everlasting change.