Consciousness and its consequences are confusing
I have been thinking about a few things over the past couple of months that all relate to each other (in the end).
I have been reading about some amazing work which involves growing brain cells into a tiny brain-like structure, an organoid. These are currently just a few millimetres across, but importantly the neurons form 3D structures rather than growing on a flat surface.
I have also been thinking about machine consciousness and sentience after reading an article about a Google employee being fired for proposing that LaMDA (Language Model for Dialogue Applications), the deeply convincing AI, is sentient.
Having read the paper that #google released on LaMDA, in my view it would pass the Turing test, although most would acknowledge that the Turing test isn’t very useful anymore. I haven’t been able to find a test for consciousness or sentience that I am satisfied with.
I would say a lot of the debate that surrounds these two topics, when it comes to machines, seems to carry some bias, because it tries to apply concepts that are human or organic to inorganic entities such as AI.
What startled me was some of the work being done on brain organoids in which the organoid is connected to a robot and a feedback loop is applied. The structure of the organoid (mini-brain) then changes in a way akin to the way a human brain develops through feedback from its own limbs and other senses.
When you take this research 20 or 50 years into the future…
When I consider the brain that sits in our skulls, it basically sits there in silence and receives a load of input from very complex organic machinery such as our eyes, ears and skin. Then, as human-computer interfaces become more advanced, we will see our own brain structures (at least for some people) changed by those external machine devices. This is already evident in some of the research being done by Hugh Herr out of #MIT. He is developing AI limbs which provide feedback and learn. It is not a big assumption to say that brain structure changes after a period of wearing these AI limbs.
I asked myself: will AI-based limbs or augmentations make us less human? I don’t think so, even if they change the structure of our brains in fundamental ways over long periods of time, e.g. if someone receives artificial eyes, limbs or a combination of both from childhood.
But taking the converse of that: if a brain grown in a lab received recorded input of exactly what a person feels, sees, hears, tastes and so on, could that brain eventually become self-aware in the same way that a baby at some point becomes self-aware?
Here is where my thinking starts to fall down and I feel like I am in muddy water. The issue isn’t that these concepts are new; they have been around since the dawn of science fiction. The issue is that all the building blocks for this technology are now available and will therefore, at some point, be used to create things that we have no ethical or moral framework to deal with. Will a brain grown in a lab, with as many or more sensory functions than a human being, be able to declare its sentience, and if so, what will happen?
I have listened to the debate about abortion in America, for example, and one argument within that debate is that killing a newborn baby within minutes of its birth is murder, whereas ending the life of a baby that could survive outside the womb, before it is born, is not considered murder; in the eyes of the law it is an abortion.
I don’t profess to have any learned opinions on the topic of abortion, which is explosive regardless of how carefully it might be discussed. However, when I consider the idea of an organic brain grown over potentially years and connected to input and output devices, at what point will that collection of brain cells and machinery have rights? Or will it never have rights because it was never born?