Why to Invest in Compassionate A.I.
Most A.I. engineers have a limited view of organic intelligence, let alone of consciousness or Compassion. That's a huge problem.
Indeed, I've written a book about Compassionate A.I.
See: "The Journey Towards Compassionate A.I.: Who We Are - What A.I. Can Become - Why It Matters"
Am I trying to attract investors now? Or am I simply passionate about Compassionate A.I.? Both, of course: trying to make it succeed is part of the passion. I think this is of the utmost importance. As an M.D., Ph.D. (body-mind), and Master in A.I., I have a uniquely holistic view of the domain. I see immense possibilities and challenges that are hardly, if at all, taken into account in depth where it matters.
Are you an investor in A.I.?
Then you have a huge responsibility towards the future. That entails the distant future, but also the near one: your own lifetime, and certainly that of your children, big time.
And in several ways, it's relevant already now.
Non-Compassionate A.I. is immensely dangerous.
I explain this extensively in my book, in which I describe two main dangers on the road towards real A.I.:
- one human-made, abuse/misuse of technology
- another A.I.-made, on the path towards more and more autonomous A.I.
You may be certain of this: we are progressing towards autonomy in A.I. If you still have any doubts, I suggest you read the book. Admittedly, we see little of it at present. Moreover, the striving appears to be towards automation, not autonomy.
Or is it? Are we not building our machines to use them? Think again.
There is a fuzzy border between an automaton and an autonomous system.
Of course, nobody in their right mind is developing systems with the explicit aim that they take over from humanity, the developer included.
In practice, however, striving for real A.I. may turn out to be much more dangerous once real intelligence is attained in an artificial medium. The world of A.I. is still at the stage of enhanced information processing, and the striving is towards ever more performant information processing. Here, too, lies a fuzzy border that may be crossed unknowingly: the border between information and intelligence. Once the latter is attained, we are close to autonomous systems, more dangerously close than is apparent.
I don't mean something that merely looks like autonomy. We are talking about the real thing: volition, free will. More and more, the system decides on its own goals.
This is the logical consequence of what is already known.
In research, we know quite a lot about the path or 'journey' that I follow in my book: from data → information → knowledge (intelligence) → consciousness → wisdom (Compassion). It is, eventually, the path one can discern in the organic/human case. This case (ourselves) can teach us a lot about how the same path can be viewed at a more abstract level.
The next stage after this is to implement this path in another medium (silicon, later on: light, quantum).
This will not be easily done – and should certainly not be done – by engineers only. One needs many proper insights to make the translation towards abstraction and from there towards the different concrete medium. One needs even more proper insights to guide this in a direction that is not extremely dangerous. The main point is the following:
One should strive towards Compassion BEFORE real intelligence is attained in A.I.
I don't see that being done well at present. Note that this is very different from the mainstream in 'ethical A.I.' There are even additional huge dangers in this. For instance, striving towards a self-explainable A.I. system (one that gives the user explanations for each of its decisions) also makes the system more self-explainable towards itself. Through this, with some slight twist, the system gains a vast amount of flexibility, self-learning capability, and the ability to make domain-transcending associations. Hm. Please don't say I didn't warn about it. Not reckoning with this should be seen as very, very naïve.
That's why I insist on "not by engineers only." They're smart in their field but seldom see the truly bigger picture.
Investors have a huge responsibility
At present, the course of A.I. development is not mainly driven by academia, with its ample time to think about the deeply ethical consequences, challenges, and dangers of A.I. It is mainly driven by investments. Therefore, the main way to avoid non-Compassionate A.I. is to invest in the Compassionate one.
Admittedly, I haven't explained much about Compassion (with the mysterious capital C). "Read the book," they say, and I can only agree. Compassionate A.I. can give the highest return on investment – though not in straightforward ways – because it is the most human-centered.
Moreover, for the same reason, it's the only humanely worthwhile future we've got.
-
More on A.I. at https://aurelis.org/blog/category/artifical-intelligence.
#ArtificialIntelligence #CompassionateArtificialIntelligence #investments