The Ghost in the Machine: The Case for a Universal Machine Identity System
In February 2017, the European Parliament will vote on a draft proposal governing the creation and use of robots and artificial intelligence, one that would also cover legal rights and responsibilities for such entities, a kind of "electronic personhood". But if such regulation were to become law, the first and most fundamental question before any attribution can take place is to "whom" we attribute such rights. "Whom", and not "what", because at that very moment the human race would be creating the precedent of singling a device out from the bulk and considering it a unique individual, apart from all other similar devices. The argument for uniqueness is well founded: because these devices employ machine learning, they acquire experience, meaning they are only identical when they come off the assembly line. But identity and identity confinement are two things for which there is no technological answer. So how do we uniquely attribute anything when the "who" is not defined? This is not as simple as printing a serial number or an RFID tag on the back of the robot; on the contrary, it is even more complicated than it is for humans.
While the subjects of consciousness and mind are yet to be settled, there seems to be a consensus that within our physical reality the two are inseparable. Whether you believe in reincarnation and the eternal soul, or believe the mind is a byproduct of the brain, its functioning, and the experiences the individual goes through, nobody debates the confinement of the mind in the body, and thus the subject of identity is settled. Actions and property can be attributed, and by attributing them to the body, which is unique, we are attributing them to the mind. But what if I could clone myself, body and mind, through something as simple as an act of will: which of the resulting two would be "me"? What if the decision to copy myself propagated to my clone, who in turn would clone itself, and so on? Which of the resulting billions of identical me(s) would be responsible for overpopulating the Earth? What if I could decouple my mind, possess your body, and rob a bank with it: which of us would be responsible? What if anybody could do this: how could we determine which mind robbed the bank with the apprehended body?
While these questions are as absurd as they sound when it comes to humans, they are unfortunately perfectly pertinent when it comes to robots and artificial intelligence, and to computers in general, as a matter of fact. A device, a robot, looks rather similar to a human from the perspective of mind-body duality. There is a body placeholder, a set of hardware and software elements that interacts with reality and provides the mechanisms for the operation of the mind placeholder, which comprises some part of the software and that ghost in the machine, yet to be defined, but which is going to make the difference between the "what" and the "who".
The resemblance, however, ends here. The physical reality of machines is different from ours. They interact with our reality through hardware, material components, and sensors, very much like we humans do, but at the same time they interact with a different reality as well: cyberspace. Without it, robots would be, to a certain degree, confined to their physical shell, similar to humans, but cyberspace makes artificial entities ethereal. Through cyberspace, cybernetic entities can perform all that which we humans cannot: they can move in and out of different hardware components, they can copy themselves, they can alter their own code, they can evolve, communicate, connect, and share information at a scale, speed, and accuracy that would seem god-like to humans. There is no magic or science fiction in this; artificial intelligence is not needed for any of these abilities. Modern malware can do, and does, all this and more with what is available to everyday programmers today. This raises a confinement problem for robot identities, because first, we are unable to guarantee that the mind of a robot (the software driving its actions) will stay confined in the body of the robot, and second, we are unable to guarantee that some foreign shadow software (a virus, a trojan, or other type of injected code) will not "possess" the body of the device and make it do things that are not normally part of its programming. Does a zombie count as a valid individual?
It seems silly to keep using words from horror movies, entities and phenomena that are obviously impossible in the human reality, yet in cyberspace they do exist. There are unfortunately many millions of such zombie devices on the Internet: servers, desktops, mobile phones, and gadgets that we collectively term IoT (Internet of Things) devices, which perform, among the things they were meant to perform, various services for nefarious individuals and organizations. Unbeknownst to their owners, such devices distribute spam and immoral or illegal material, execute attacks on command, process distributed computation jobs, and provide other custom services not even experts know of. There is a shadow cyberspace overlaid on the one we think we interact with, infiltrated into an unknown set of devices and potentially extensible over the entire cyberspace. Since there is no confinement, we cannot talk about individuality. As it is right now, cyberspace is a single organism. Anything that is of cyberspace is cyberspace; there are no separate entities, no identities, they are indistinguishable from one another, and the fact that we see distinct components is nothing more than an illusion, similar to seeing different parts of a giant tree through a keyhole.
But how did we get here, and can we undo it? Because obviously, the single-cyberspace model is not what we are looking for. It is impractical, unmanageable, and profoundly uncontrollable. What we need is the capability of separating cyberspace from the entities interacting within it. We need a reality similar to the human one, where individuals exist and interact but are sufficiently confined that they can be relied on to do the jobs they were meant to do, and where "they", or the entities (humans or businesses) operating them, are accountable for the actions they perform. We need a system of dependency between human and machine that is reliable enough to build a chain of consequences, one that starts in reality, enters cyberspace, navigates it, and then emerges somewhere else in our reality, in such a way that we can match the original cause to the end effect. It is not an easy task, but it is neither impossible nor incompatible with the cyberspace we have today.
The root of the problem is "code": computers are machines designed to execute code. This opens up a world of wonders with infinite possibilities and variations, with the potential to solve problems and needs that were once thought impossible, but at the same time it can also be used for malicious purposes. Code is code; from a machine's perspective it is impossible to tell apart which code is used for legitimate purposes and which is not. Sending spam is, from a coding perspective, equivalent to sending a legitimate email; the maliciousness only becomes obvious when it emerges in reality. There are mechanisms to detect such malicious content (email, or even code), but these are reactive mechanisms only: we recognize them only after we have seen what they are or what they do. This is inefficient, because we can only eliminate the malicious code, not the source of the code, and we can only eliminate it after it has already done some damage. 2016 was, and 2017 continues to be, dominated by the "AI" and "machine learning" buzzwords, which is probably what raised the problem of machine rights in the first place. They are constantly invoked to alleviate the anxiety of escalating cybercrime, but the reality is that, like everything else, AI in defense is late to the party. Cybercriminals have been using adaptive software and machine learning for years to evade spam filters and virus scanners, and they will continue to evolve their mechanisms, likely one step ahead, as they always are. They have the resources, and the geometry of cyberspace favors them. Cybercrime is a massively asymmetric engagement, where criminals are invisible, have a narrow target, and only need to find one entry point, whereas the defenders are visible, defend themselves from a horde of different enemies, and need to maintain an extremely complex and large attack surface. And all this is because machines do not know which code is safe to execute and which is not.
But if we cannot tell code from code, and quite possibly never will, the only way we could differentiate code would be based on where it comes from. This, however, seems to raise a chicken-and-egg problem, because knowing the origin of code would mean having an identity for the machine it comes from, an individuality incompatible with the current cyberspace model. To solve this conundrum we will have to have both: impossible-to-forge device identities as proof of individuality, and code execution filtering based on origin as proof of confinement. They do not have to come simultaneously or instantaneously, but cyberspace will only crystallize into individual components when both are in place. A minimal sketch of what origin-based execution filtering could look like follows, in Python with the third-party "cryptography" package; the trust store, the run_if_trusted helper, and the use of exec() are hypothetical simplifications for illustration, not a design for a real loader.
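    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    # Hypothetical local trust store: the public keys of the publishers
    # this machine agrees to execute code from.
    trusted_keys: list[Ed25519PublicKey] = []

    def run_if_trusted(code: bytes, signature: bytes) -> None:
        """Execute code only if its signature proves a trusted origin."""
        for key in trusted_keys:
            try:
                key.verify(signature, code)  # raises InvalidSignature on mismatch
            except InvalidSignature:
                continue
            exec(code.decode())              # origin proven; run it
            return
        raise PermissionError("no trusted origin; refusing to execute")

    # Demo: a publisher signs a snippet, and the machine trusts that publisher.
    publisher = Ed25519PrivateKey.generate()
    trusted_keys.append(publisher.public_key())
    snippet = b"print('hello from a known origin')"
    run_if_trusted(snippet, publisher.sign(snippet))

Unsigned code, or code signed by an unknown key, simply never runs: the machine no longer has to judge what the code does, only where it comes from.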
There are in fact several techniques that operate along this line of thought, and they make up a serious part of our defense mechanisms. Many operating systems are configured by default to only install software that comes from known sources. A cryptographic signature bears the identity of the individual or organization that wrote the code, and any alteration of the code in transit would invalidate the signature. Content Security Policy (CSP) instructs browsers to only execute code from trusted sources. While these mechanisms help, they are far from perfect. CSP is extremely difficult to enforce properly (some studies show that 99% of the CSP rules deployed across the Internet can be circumvented), and code signing is limited to installable software, yet even that is difficult to enforce. Obtaining a signature is paradoxically both too difficult and too easy at the same time. Open source software mostly comes without such proof of origin, which has taught us to click "I Agree" all too often when the OS asks whether to install software from an unknown origin; yet at the same time, mobile application repositories are full of questionable or downright malicious software, because the repository owners or the certificate authorities cash in on the membership fee without properly verifying the member. In their defense, such "proper", foolproof verification would most likely be impossible, or at least economically unfeasible, with present models.
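For concreteness, here is a toy sketch of a CSP deployment: a minimal Python HTTP server (standard library only) that tells browsers to execute scripts only from the site's own origin. The one-directive policy shown is illustrative; real policies need far more directives and careful tuning, which is exactly where the circumvention problems cited above come from.

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    class CSPHandler(SimpleHTTPRequestHandler):
        def end_headers(self):
            # Browsers receiving this header will refuse to run scripts
            # that do not originate from this site itself.
            self.send_header("Content-Security-Policy", "script-src 'self'")
            super().end_headers()

    if __name__ == "__main__":
        # Serves the current directory on port 8000 with the CSP header set.
        HTTPServer(("", 8000), CSPHandler).serve_forever()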
Technology is evolving, however, and cryptography is going to play a major role in this quest, not so much as a mechanism to protect secrecy, but rather from the perspective of proof of identity and non-repudiation. As of yet, code originates exclusively from humans (individuals or organizations), so if we could identify the author of each and every piece of code a machine executes, we could make a difference. Public key infrastructure (PKI) is a relatively unfamiliar concept outside professional circles, but the technology exists, it is mature, and it is extremely powerful. It can be used to create chains of trust and dependency between machines and the entities that own them, and if every uploaded file, every email sent, and every message posted online bore the signature of the originating machine, and thus of the true owner of the content, it would be extremely difficult to spread malware, because machines would know which code is safe to execute and which is not. It comes down to trust, transparency, and accountability, things that we take for granted in reality but which are utterly missing from cyberspace.
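A minimal sketch of that idea, again in Python with the "cryptography" package: every outbound artifact (a file, an email body, a post) carries a signature made with the originating machine's private key, and any recipient holding the corresponding public key can verify both origin and integrity. The key names are hypothetical, and the distribution of public keys and the chain of trust linking a machine key to its human owner are assumed to exist; those are the genuinely hard parts.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    machine_key = Ed25519PrivateKey.generate()  # the device's identity key

    message = b"From: robot-7f3a\nSubject: status report\n\nAll systems nominal."
    signature = machine_key.sign(message)       # the origin travels with the content

    # Any recipient holding the machine's public key can check the claim.
    try:
        machine_key.public_key().verify(signature, message)
        print("origin verified: content comes from the claimed machine, unaltered")
    except InvalidSignature:
        print("verification failed: unknown origin or tampered content")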
It will require considerable coordination between hardware and software organizations, operating system and application builders, and massive simplification and automation in the cumbersome world of PKI, but it is not impossible. I believe we will get there, because we have no other choice. Until then, however, talking about personified machines and their rights as individuals is as nonsensical as passing an inheritance law on reincarnation.
Until we can be absolutely certain a device or AI is who it claims to be, discussion of rights seems premature. As you rightly point out, in the bio-mechanical world of people, separating meat from brain (so to speak) is currently nearly impossible (I say nearly because humans can be programmed in limited ways, but one cannot fully separate mind from body). In contrast, the mechanisms for changing identity are built into how we use our computing machines, so it is easy to envisage a situation where one is working with an AI, but inside that AI is actually an entirely different AI with a different agenda. A thought-provoking post, thank you!
People can make only tools, not standalone beings. Any tool is owned by certain people, works for certain people, and serves certain people. The tool can be simple or complex, but it is just a tool. If a tool becomes dangerous and cannot be controlled, then it is to be banned, limited in use, or destroyed (as lethally infected animals are). From that point of view, it is better to treat new autonomous AI systems as weapons or genetically modified foods rather than as personalities. It is strange to talk about granting personhood to, say, a robotic device just because it has AI-driven features incomprehensible to the general public. The creator, owner, and user of the tool, and now lawmakers, should be responsible for any consequences. As always.