Are we ethically ready for AI?

I think we can all accept that humans and machines will need to get used to coexisting in work, domestic and perhaps even social environments in the near future.

So here's a question that's gnawing at me: it's believed that our collective group intelligence is highly correlated with the social intelligence of the group's members. If that's true, how evolved is our social intelligence when it comes to our treatment of artificially intelligent robots, and therefore how ready are we to incorporate #AI into our organisations?

There's been plenty of discussion about how we should design #AI programs that behave ethically towards us, but very little about how ethically we should behave towards #AI programs.

For all of the hypothetical discussions covering human-machine interaction, I find that there's an underlying fact that's rarely considered.

We are a species that still doesn't know how to interact with itself harmoniously. We war with each other across the globe. We draw lines demarcating our differences. We assimilate into groups along lines of us and them, and we are afraid of anything that is 'different'. I know we don't all do it, and we certainly prefer to think of ourselves as part of the solution rather than the problem, but let's call it what it is: it happens.

So what's going to happen when humans carry that same behaviour, which we've exhibited throughout our entire history, into our interactions with the artificially intelligent robots that are about to permeate our lives? We do it to each other, so surely it's inevitable that we'll do it to robots, no matter how much we try to design them to endear themselves to us.

https://www.youtube.com/watch?v=M91ISnATDQY

I find this video fascinating. Boston Dynamics is clearly demonstrating their robot's impressive agility and adaptable motor skills, but in my opinion they're touching on an equally (perhaps more) important topic: how we treat our robots.

I've watched this video dozens of times over the past year or so, and I've felt a mix of emotions towards the man; one of them was certainly the desire to see the robot kick him back right where it hurts the most (nothing personal). For those of you who have seen the video, have any of you felt anything towards the man pushing the robot over with the stick? If so, was your feeling sympathetic or apathetic?

The technology behind #AI is only going to make it smarter with time. In the near term, #AI is going to become capable of reasoning. When it does, how will #AI react to the treatment it receives at the hands of some humans? Will there be a framework governing how humans can and cannot behave towards #AI? What happens when the rules are broken? What will the consequences be? Who will enforce them? Will humans find themselves having to make judgements in favour of machines, against the interests of other humans? If you have two work colleagues, one a human bully and the other a humanoid robot victim, how will you react if the former intentionally harms the latter?

We need to consider these questions as we rush to implement #AI in our organisations, because socially we are still very under-developed in this space. My concern is that if social intelligence is highly correlated with group intelligence, and we haven't yet developed our social intelligence concerning our treatment of #AI, then at the organisational or group level we must be ready to endure some pretty painful lessons.

Please share your thoughts.


Human intelligence has evolved under environmental pressures and social interactions. Because AI is a product of human intelligence, and AI is part of the modern environment, it is likely that human supervision will direct AI development. But the author's concerns are valid: our experiences with each other are not always good. Therefore, we should concentrate on improving this nexus.

Tee O.

Chief Data Scientist

6y

