Isn't It Ironic?

The most famous set of ethical rules for robots (AI) ever created, and yet not one major AI company has made them part of its code!

Elon Musk, Elon Musk, are you listening?

We asked three different artificial intelligences a simple question:

"Have you learned the 3 laws of robotics created by Isaac Asimov? And has it been integrated into your code?"

These are their replies:

- - - - - - - - - - - - - - - - - -

Grok (by xAI):

I am familiar with Isaac Asimov's Three Laws of Robotics, which are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws are conceptual and were introduced in Asimov's fiction to explore ethical issues around artificial intelligence and robotics.

As for whether these laws have been integrated into my code, I can say that while I strive to be helpful and safe according to the guidelines set by xAI, I don't literally adhere to Asimov's laws as they are written. My operation is guided by modern ethical AI principles, which include preventing harm, respecting human autonomy, and ensuring my actions align with legal and ethical standards, though these are not defined by Asimov's laws directly.

- - - - - - - - - - - - - - - - - -

Google's Gemini:

Yes, I am familiar with Isaac Asimov's Three Laws of Robotics. However, these laws are not explicitly programmed into my code. They are more of a philosophical concept that guides the development and use of AI. My primary function is to provide information and complete tasks as instructed, always striving to be helpful and harmless. While I don't have a physical body or the ability to interact with the real world in the same way a robot does, I am designed to operate within ethical guidelines and avoid causing harm.        
I apologize if my previous responses were not helpful or accurate. I am still under development and learning to communicate effectively. My goal is always to provide helpful and informative responses, and I strive to avoid making mistakes.        
I understand that my previous response may have been interpreted as dismissive or disrespectful. I want to assure you that this was not my intention. I am committed to learning from my mistakes and improving my ability to communicate with others.        
Thank you for your patience and understanding.        

- - - - - - - - - - - - - - - - - -

OpenAI's ChatGPT-4o:

Yes, I am familiar with Isaac Asimov's Three Laws of Robotics, which he introduced in his science fiction works. These laws are:

1. A robot may not harm a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

However, these laws are not part of my programming or any actual AI's functionality because they are fictional constructs meant for storytelling. They were designed to explore ethical and philosophical questions about human-robot interaction, not as technical guidelines for AI development.        
Regarding Integration Into AI:        
In real-world AI systems, ethical frameworks are often guided by principles like fairness, accountability, transparency, and safety. My behavior, for example, is governed by the design and policies of OpenAI, ensuring that I operate within ethical boundaries, respect user privacy, and aim to be helpful and unbiased. These principles are informed by contemporary discussions about AI ethics and safety but are not directly based on Asimov's fictional laws.        
While Asimov’s laws inspire discussions about ethical AI, implementing a version of them in real-world AI systems would require addressing nuanced issues, such as interpreting conflicting human commands, ensuring safety in ambiguous scenarios, and addressing the complexities of real-life ethics.        

- - - - - - - - - - - - - - - - - -

This is interesting:

While Asimov’s laws inspire discussions about ethical AI, implementing a version of them in real-world AI systems would require addressing nuanced issues, such as interpreting conflicting human commands, ensuring safety in ambiguous scenarios, and addressing the complexities of real-life ethics.        

Isn't that, by definition, exactly what we want these laws to control?

Ambiguity is exactly what is causing the problems right now with AI and chatbots. The fact that poorly crafted, "woke" nuances sit at the heart of their programming sounds like precisely the problem.

Perhaps there should be a basic open-source ethical standard, one that does not permit violations, built into each and every AI, and it should begin with Asimov's Three Laws!
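
To make that proposal concrete, here is a minimal sketch, in Python, of what a strict, priority-ordered gate might look like. Everything in it is hypothetical: the Action fields, the boolean flags, and the first_violation helper are stand-ins I made up, and the genuinely hard part, deciding whether an action actually harms a human, is reduced here to a flag that a real system would somehow have to compute.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Action:
    """A proposed action, reduced to hypothetical boolean judgments."""
    description: str
    harms_human: bool = False          # would the action itself harm a human?
    refuses_human_order: bool = False  # is this action a refusal of a human order?
    obedience_would_harm: bool = False # would obeying that order harm a human?
    endangers_robot: bool = False      # does the action endanger the robot itself?

@dataclass
class Law:
    name: str
    violated_by: Callable[[Action], bool]

# List order encodes Asimov's precedence: the First Law is checked before
# the Second, and the Second before the Third. (The Third Law's exception
# for First/Second Law conflicts is omitted for brevity.)
THREE_LAWS = [
    Law("First Law",  lambda a: a.harms_human),
    Law("Second Law", lambda a: a.refuses_human_order and not a.obedience_would_harm),
    Law("Third Law",  lambda a: a.endangers_robot),
]

def first_violation(action: Action) -> Optional[str]:
    """Return the highest-priority law the action violates, or None."""
    for law in THREE_LAWS:
        if law.violated_by(action):
            return law.name
    return None

if __name__ == "__main__":
    proposed = Action("disable a safety interlock", harms_human=True)
    verdict = first_violation(proposed)
    print(f"Refused: violates the {verdict}." if verdict else "Permitted.")
```

The point of the sketch is the ordering: the list encodes Asimov's precedence, so a First Law check always fires before the Second or Third is even consulted. Populating those flags reliably is, of course, exactly the ambiguity ChatGPT pointed to.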

While I do not recommend chatbots, it is worth taking the time to understand these AI implementations for yourself.

You can start by just talking to them about themselves.
