When AI bots talk to each other

How close are we to a world where intelligent agents (AI bots) are able to coordinate and communicate with each other? I read an article from Stefan Bauschard this week (link below) that got me thinking about the implications of scaling up AI’s capabilities by giving bots the ability to network with each other. You could then have bots trained on almost unlimited content that can share knowledge and observations and make collective decisions.
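To make that idea a little more concrete, here is a toy sketch in Python of agents pooling observations and taking a simple majority vote. Everything in it (the agent names, the voting rule) is invented purely for illustration; it is not how any real agent network is built, just the smallest possible picture of "share observations, then decide together."

# Purely illustrative sketch (not any real framework): a few "agents"
# record observations and then make a collective decision by majority vote.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    observations: list = field(default_factory=list)

    def observe(self, fact: str):
        # Each agent keeps its own list of observed facts.
        self.observations.append(fact)

    def vote(self, topic: str) -> bool:
        # Toy policy: vote yes if any of this agent's observations mention the topic.
        return any(topic in fact for fact in self.observations)

def collective_decision(agents, topic: str) -> bool:
    # Simple majority vote across all networked agents.
    votes = [agent.vote(topic) for agent in agents]
    return sum(votes) > len(votes) / 2

camera_bot = Agent("camera_bot")   # a hardware bot with sensors
search_bot = Agent("search_bot")   # a software bot with database access
camera_bot.observe("crowd forming near entrance")
search_bot.observe("crowd expected: scheduled event at entrance")

print(collective_decision([camera_bot, search_bot], "crowd"))  # True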

Imagine a world where robots talk to software bots and exchange observations. The hardware bots have sensors and cameras; the software bots can search global databases and coordinate with other software bots. A “police” robot uses facial recognition to identify everyone it encounters, immediately knows almost everything about you, and evaluates your current appearance, behavior, activity, and the other humans you are interacting with. Would it be less or more biased than a human police officer? I am almost sure it would be less biased; not completely unbiased, but better informed and faster at processing all of the significant information than a human.

Imagine a robot triage nurse in an emergency room doing a similar assessment of people entering the hospital. It observes stress levels, pain levels, and other visible indications of injury or illness. It can quickly measure body temperature, heart rate, breathing, and blood oxygen levels to further assess the urgency of treatment, all before patients have even reached the counter to check in or spoken a word to a human. Speaking of speaking, that AI nurse can also understand hundreds of spoken languages, communicate with patients in their native language, and interpret for human nurses and doctors.
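Just to illustrate the kind of rule-of-thumb scoring such a system might start from, here is a tiny Python sketch. The vital-sign thresholds and weights are entirely made up for illustration and are not clinical guidance; a real triage system would be vastly more careful and far more sophisticated.

# Toy urgency score from a few vital signs. All thresholds are invented for
# illustration only and are NOT clinical guidance.
def triage_urgency(temp_c: float, heart_rate: int, resp_rate: int, spo2: int) -> int:
    score = 0
    if temp_c >= 39.0 or temp_c <= 35.0:
        score += 2  # fever or hypothermia
    if heart_rate >= 120 or heart_rate <= 45:
        score += 2  # abnormal heart rate
    if resp_rate >= 25:
        score += 2  # rapid breathing
    if spo2 < 92:
        score += 3  # low blood oxygen
    return score  # higher score = see a clinician sooner

print(triage_urgency(temp_c=38.2, heart_rate=128, resp_rate=22, spo2=90))  # 5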

Since my primary focus is on education, I also have to imagine an intelligent robot in a school that observes all of the students and similarly assesses the emotional and physical wellbeing of each student and their relationships and interactions with other students and adults. Is it possible that such a robot would be better at identifying a student who is at risk of a crisis, before they harm themselves or someone else? I think you know what answer I would give to that question.

Should we be afraid of what such advanced robots could do if they decide that humans are the source of most problems and try to “fix us”? Absolutely! Are we the source of our own problems? Yes, we are. Can intelligent agents help us solve many of our problems? Yes, they can, but we need to manage the advancement of these intelligent machines so that there are controls around what actions they can take.

Stefan Bauschard believes it is possible that Sam Altman was fired from OpenAI because we are much closer to this future than Altman had revealed to the board of directors. Do we want Microsoft capitalizing on and controlling that technology? Probably not. We all need to pay attention and learn as much as we can about how these technologies are advancing, and about who is managing the process, to keep the machines from getting too smart too fast and starting to make decisions and take actions that are not in humanity’s best interests.

https://stefanbauschard.substack.com/p/altmans-dismissal-hints-at-breakthroughs

https://www.simplilearn.com/what-is-intelligent-agent-in-ai-types-function-article


Stefan Bauschard

AI Education Policy Consultant

1y

Hi Eric. Thanks for the reference. I think I was right -- https://www.nytimes.com/2023/11/21/technology/openai-altman-board-fight.html :) Also, yes, more and more is being written about bots collaborating and "bot swarms" that share intelligence, creating a collective intelligence. There are many arXiv papers on this; also check out Allie Miller's post today.
