Simple, Elegant, Convincing, and Wrong: The fallacy of ‘Explainable AI’ and how to fix it, part 3
Header image: phonlamaiphoto (https://www.123rf.com/profile_phonlamaiphoto), licensed from 123RF.


This week we have a series of blog posts written by our CEO & Founder Luuk van Dijk on the practicalities and impracticalities of certifying “AI”, in the run-up to the AUVSI panel discussion on April 25.

1. RUN! I’ll explain later?

2. What is this “AI” you speak of?

3. Certainty and Uncertainty

4. Traceability: Beware of what you wish for

5. The tragic case that was entirely explainable

6. General Izability to save the day

____________________________________________________

3. Certainty and Uncertainty

From the picture of machine learning we sketched in the previous episode, we can draw two important conclusions. First of all, the requirements that your hardware and software work as intended and are safe remain unchanged: your avionics system will still have to meet DO-254 and DO-178C, it will have to provide sufficient computing performance, its internal memory will have to be reliable enough not to spontaneously flip bits, and it will have to be CAST-32A compliant so that multicore processors don’t unpredictably stall each other and ruin any hard real-time guarantees.

We also require that the whole system sits in a proper enclosure, that the power supply is reliable (DO-160), that the Ethernet cables won’t catch fire, and so on. Much hay is made of assuring – i.e. providing certainty about – these things, since certifying them is what aerospace engineers do very well for a living, but they have no bearing on whether the neural network’s predictions are correct. The emergent property of the model’s predictions matching reality is not covered by any of these standards. They are necessary but not sufficient by themselves to guarantee the system’s overall safety and fitness.

Second: while the system produces an output deterministically, how well that output fits reality is a matter of statistics, and this is because of the nature of the problem, with its inherent uncertainties, not because of the nature of the solution. Calling a machine-learned system non-deterministic misattributes the source of uncertainty, which really lies in the environment from which the system gets its inputs.
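To make that distinction concrete, here is a minimal sketch in Python/PyTorch (a toy stand-in, not Daedalean’s actual network): once its weights are fixed, the network is just a function, so the same input always produces the same output. What is statistical is whether that output matches reality.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # freeze the (untrained) weights; after training they are constants anyway

# Toy stand-in for a binary image classifier: one 32x32, 8-bit crop -> P(aircraft)
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 1),
    nn.Sigmoid(),
).eval()

x = torch.randint(0, 256, (1, 1, 32, 32)).float() / 255.0  # one arbitrary 8-bit crop

with torch.no_grad():
    p1 = model(x)
    p2 = model(x)

assert torch.equal(p1, p2)  # deterministic: same input, same output, every time
# Whether that output is *correct* is a property of the world the image came from,
# not of the computation above.
```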

For example, one component of Daedalean’s Visual Traffic Detection is a neural network that decides whether an image contains an aircraft or not. Here are some examples:

[Image: example 32×32 image crops, some containing aircraft and some not]

Even if our training data set is 100% accurate, there may simply not be enough information in the 32×32 pixels of 8 bits each to make the call, just as height does not uniquely determine a person’s weight, even though there is a clear statistical relationship. At a great distance, an aircraft may be indistinguishable from an equally shiny ground vehicle. There are 2^(32×32×8) (approximately a 1 followed by 2,466 zeroes) possible images in our input space.
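As a quick sanity check on that number (an illustrative calculation, nothing more):

```python
import math

bits_per_image = 32 * 32 * 8              # 8,192 bits per 32x32, 8-bit crop
print(bits_per_image * math.log10(2))     # ~2466.0 -> roughly 10^2466 possible images
print(len(str(2 ** bits_per_image)) - 1)  # 2466: the count is a 1 followed by ~2466 more digits
```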

Between any image that we would clearly label ‘yes’ and one we would clearly label ‘no’, there are insanely many images that differ by only one bit in one pixel and sit on a decision boundary; testing exhaustively that we get them all right is completely impossible. It is not even possible to determine what ‘right’ means in every such case: the system requirements will have to be statistical in nature.

While this looks like a weakness of machine-learned systems, it only makes sense to apply them to exactly such problems. If a problem is easy to capture in crisp requirements, you can probably construct a traditional rule-based system, with all its guarantees, and translate that to code in a straightforward way. It is exactly for problems with this kind of unavoidable uncertainty that machine learning is an effective, and currently the only feasible, way to arrive at a solution.

As I have argued elsewhere, in the air these are the kinds of tasks that are currently handled almost exclusively by human pilots, and if we want to build systems that take over risk management in flight, we are going to have to learn how to deal with uncertainty. Rather than faulting the system for acting unpredictably in an unpredictable environment, we must establish statistical bounds on how well it works given a correct statistical description of the inputs. This will be one of the pillars of certifying machine-learned systems, to which we’ll return in the last post of this series.
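What could such a statistical bound look like in practice? Here is a minimal sketch with hypothetical numbers (not a Daedalean specification): given a representative test set drawn from the declared operating environment, bound the true error rate with an exact binomial (Clopper-Pearson) confidence interval.

```python
from scipy.stats import beta

n = 20_000   # hypothetical number of representative test images
k = 37       # hypothetical number of misclassifications observed
confidence = 0.99

# One-sided Clopper-Pearson (exact binomial) upper bound on the true error rate
upper = beta.ppf(confidence, k + 1, n - k)

print(f"observed error rate: {k / n:.4%}")
print(f"{confidence:.0%}-confidence upper bound: {upper:.4%}")
# A statistical requirement could then be phrased as: "with 99% confidence, the
# error rate on the declared input distribution is below X%".
```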

But first, we’ll look at the other big objection: lack of traceability. That is the subject of the next post.

Next: 4. Traceability: Beware of what you wish for
