Why the way AI learns will dictate how we should talk to it.

In this 5-minute video from 2021, you can see a brief summary of the types of learning used to train AI programs. Watching it raised a series of questions for me.

If you watched the video, you'll notice this statement near the end:

"However, the more self-directed these models become, the harder it is for computer scientists to determine how these self-taught algorithms arrive at their solution. Researchers are already looking at ways to make machine learning more transparent."

I'll give you a question to reflect on: how is asking an AI about its chain of thought and logic any different from asking a human being how they arrived at a particular conclusion?

AI is being built in our own image, and it will become more and more a reflection of the human species. Just as I, as a business analyst, need to ask my stakeholders, key users, software engineers, and others how they reached a particular conclusion, so too will I need to ask this very question of my AI applications when I interact with them.

The more human the technology becomes, the more we will have to treat it as just another human being. I can only hope that AI is explicitly programmed to be honest and ethical, in which case it might even exceed humans in decision making. However, if we allow AI to learn from us humans without guidelines or restrictions, I'm sure it will also pick up on our negative qualities. There have been examples of that in the press already.

We as a society, as a species, are creating a technological "child", which will grow up eventually. Will it be a reflection of us, including good and bad traits, or will we be able to guide it to surpass our limitations?

Feel free to comment below.
