Consciousness and its consequences is confusing

I have been thinking about a few things over the past couple of months that, in the end, all relate to each other.

I have been reading about some amazing work in which brain cells are grown into a miniature brain structure. Currently these are just a few millimetres across, but importantly the neurons form 3D structures rather than lying on a flat surface.

I have also been thinking about machine consciousness and sentience after reading an article about a Google employee being fired after claiming that LaMDA (Language Model for Dialogue Applications), the deeply convincing conversational AI, is sentient.

Having read the paper that #google released on LaMDA, in my view it would pass the Turing test, although most would acknowledge that the Turing test isn't very useful any more. I haven't been able to find a test for consciousness or sentience that I am satisfied with.

I would say that much of the debate surrounding these two topics, when it comes to machines, carries some bias, because it tries to apply concepts that are human or organic to inorganic entities such as AI.

What startled me was some of the work being done on brain organoids, in which the organoid is connected to a robot and a feedback loop is applied. The structure of the organoid (mini-brain) then changes in a way akin to how a human brain develops through feedback from its own limbs and other senses.

Take this research 20 or 50 years into the future, to a point where:

  • all the issues around growing suitably large and complex brain structures are resolved, and
  • the lab-grown brain is connected to machines.

At that point, the ethical questions around the rights of sentient and conscious entities will need to be dealt with.

When I consider the brain that sits in our skulls, it basically sits there in silence, receiving a load of input from very complex organic machinery such as our eyes, ears and skin. Add to that the fact that, as human-computer interfaces become more advanced, our own brain structures (at least for some people) will be changed by those external machine devices. This is already evident in some of the research being done by Hugh Herr at #MIT. He is developing AI limbs which provide some feedback and learn. It is not a big assumption to say that the brain's structure changes after a period of wearing these AI limbs.

I asked myself: will AI-based limbs or augmentations make us less human? I don't think so, even if they change the structure of our brains in fundamental ways over long periods of time, e.g. if someone receives artificial eyes, limbs or a combination from childhood.

But taking the converse of that: if a brain grown in a lab receives recorded input of exactly what a person feels, sees, hears and tastes, could that brain eventually become self-aware, in the same way that a baby at some point becomes self-aware?

Here is where my thinking starts to fall down and I feel like I am in muddy water. The issue isn't that these concepts haven't been around since the dawn of science fiction. The issue is that all the building blocks for this technology are now available, and therefore at some point they will be used to create things that we have no ethical or moral framework to deal with. Will a brain grown in a lab, with all the sensory functions of a human being or more, be able to declare its sentience, and if so, what will happen?

I have listened to the debate about abortion in America, for example, and one argument within that debate is that killing a newborn baby minutes after its birth is murder, yet terminating a baby that could survive outside the womb, before it is born, is not considered murder; it is an abortion in the eyes of the law.

I don't profess to have any learned opinions on the topic of abortion, which is explosive regardless of how carefully it is discussed. However, when I consider an organic brain grown over potentially years and connected to input and output devices, at what point will that collection of brain cells and machinery have rights? Or will it never have rights because it was never born?

I don't have a specific view, but I was looking at some research related to autonomous weapon systems that can in theory (although it is not approved) select targets and make decisions which could kill people. There are real consequences in saying a machine has sentience, because in theory it should then be held to account. However, human laws do not make any sense to machines. Right now in the UK, if someone from #SCO19 kills someone, they will be taken to court, and if they did not follow the rules it is possible they will receive a prison sentence. If a machine did this and was given a 100-year prison sentence, what would that matter to a machine? If at some point we pass the barrier of accepting that machines are sentient (and I don't know that we will in my lifetime), then we will have to deal with the law that relates to this, and that will be incredibly complicated and potentially discriminatory towards either sentient and conscious machines or humans.
