Is AI safe?

What do “safe” and “intelligence” actually mean? Or, more directly, would you get into a fully autonomous vehicle? (And if you wouldn’t, why would you expect others to?)

These questions are very relevant at this particular point in our history. How we treat AI, and its seemingly exponential development, will determine the quality and sustainability of our lives going forward. So we need to address it now, not as stakeholders, promoters, or Luddites, but objectively and rationally, as professionals.

So, firstly, is it “intelligent”? We seem to be using its abilities to play chess, refer to vast volumes of data, recognise patterns and execute ever more abstruse algorithms as grounds for ascribing intelligence to these systems. But as we know, human “intelligence” is so much more. The brain does much more than just organise data from the outside world. No doubt current AI can augment our “perception” ability, but exactly how the brain enables us to “consciously” decide to act on these inputs is something of which we are currently ignorant. What confers real “consciousness” on individuals? Sea slugs seem to have all the basic abilities required of a Level 5 autonomous vehicle; are they intelligent?

So, let us agree that AI is not “consciously” intelligent. What about “safe”?

Another disturbing feature is that the current system for assuring the safety of even today’s dumber software systems (such as Horizon and NATS), not to mention the carefully crafted safety-critical functions added to provide even more protection, relies on qualitative assertions of fitness for purpose (safety cases). To an engineer brought up on the need to put numbers on even the most complicated systems (NASA, nuclear, etc.), this seems like a cop-out. To paraphrase Kelvin: no numbers, no confidence. (“Theatre”, prose and pontification alone don’t cut it!)
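To make that concrete, here is a minimal sketch (in Python, with entirely invented numbers) of the kind of explicit quantitative claim a safety case could make instead of relying on prose alone: given an assumed constant dangerous-failure rate, what is the probability of at least one failure over the exposure period, and does it meet a stated target?

import math

# All figures below are illustrative assumptions, not real system data.
failure_rate_per_hour = 1e-7   # assumed dangerous-failure rate
exposure_hours = 10_000        # assumed operating hours in the assessment period
target_probability = 1e-3      # assumed acceptable probability of failure

# Constant-rate (exponential) model: P(at least one failure in t hours) = 1 - e^(-lambda * t)
p_failure = 1 - math.exp(-failure_rate_per_hour * exposure_hours)

print(f"P(failure over {exposure_hours} h) = {p_failure:.2e}")
print("Meets target" if p_failure <= target_probability else "Does not meet target")

Whether the assumed failure rate can itself be justified is, of course, exactly where the argument gets hard; but at least the claim is then explicit and falsifiable.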

But in terms of systems theory, if we use the Cynefin framework, these systems are (merely?) very Complicated. The behaviours and outcomes of the mathematics of the algorithms, the machine’s interpretation of the code, and the intended interactions of the user interfaces are all predictable (and hence quantifiable?). The problem arises when we introduce the human into the system. Our reactions to this new AI age seem to be mirroring the advent of the machine age: trying to ensure “safety” through regulation to control the design of the machines, and through rules, laws, procedures, education, training and discipline to control human foibles.

But these systems, as currently operated with the human in the loop or in the environment, can be unpredictable, exhibiting emergent, unexpected behaviours (i.e., they are Complex). Hence we need more than complicated mathematical models if we are to control them or guarantee their safety.
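A toy Monte Carlo sketch (again Python, all parameters invented) illustrates the shift: a machine-only stopping manoeuvre is one fully predictable number, but put a variable human reaction in the loop and the outcome becomes a distribution, with a tail that overruns the available distance. This shows variability rather than true emergence, but it makes the point that “failure” stops being a single calculable fact.

import random

SPEED_M_S = 20.0     # assumed vehicle speed (m/s)
DECEL_M_S2 = 6.0     # assumed braking deceleration (m/s^2)
AVAILABLE_M = 60.0   # assumed distance to the hazard (m)
BRAKING_M = SPEED_M_S ** 2 / (2 * DECEL_M_S2)   # braking distance: v^2 / 2a

# Complicated: a fixed machine reaction time gives a single, predictable outcome.
machine_stop = SPEED_M_S * 0.1 + BRAKING_M
print(f"Machine-only stopping distance: {machine_stop:.1f} m (deterministic)")

# With a human in the loop, reaction time varies, so the outcome is a distribution.
random.seed(0)
trials = 100_000
overruns = 0
for _ in range(trials):
    reaction_s = max(0.2, random.gauss(1.0, 0.4))   # assumed human reaction time (s)
    if SPEED_M_S * reaction_s + BRAKING_M > AVAILABLE_M:
        overruns += 1

print(f"Human-in-the-loop overrun probability: {overruns / trials:.1%}")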

This is where we also need to recognise the limitations of concentrating solely on trying to prevent “failures” in these complex systems: as Perrow points out, failures are inevitable and normal. We need to keep the human in the loop, not just as a passive user, but to draw on that other intelligent attribute humans possess: resilience, the ability to adapt to emerging situations that are not in the book, but that we have learned to expect in real-life system operations.

Perhaps when we really understand this required resilience, and can add it “artificially” into these systems, we can then start to show that they can be “safe”.

But until then, we should only get into the auto-auto (the fully autonomous vehicle) if we are given the opportunity to provide the required resilience ourselves, adapting to its possibly unscripted behaviours.

Phil Douglas

MD at Oracle Safety Associates, Safety Consultant, Safety Speaker, Safety Training Course Designer, Managing Director.

9 months ago

AI expert and optimist Max Tegmark has switched to alarmed whistleblower. He says unchecked development of advanced AI could pose an existential threat to humanity, and argues that the “dangerous race” by companies to build ever more powerful AI without adequate oversight could have catastrophic consequences if the technology’s goals diverge from human values. He criticises the lack of incentives for prioritising safety and calls for government regulation. Back in 2019, Nick Bostrom was also well ahead of the curve in a vital way: as technologies like AI continue advancing rapidly, we would be wise to revisit his Vulnerable World Hypothesis and put its prudent precautionary guidance into practice. There is no doubt that inventors and developers often lack incentives to prioritise safety and alignment with human values; the potential for profit or cutting-edge achievement can override caution. Regulation helps counteract this, but I feel it will take real harms before we get that right, such is the urge to make fortunes. We in OSH know well that the profit incentive proves enormously challenging even when lives are at stake.
