Artificial Trust – The Daily PPILL #275

I have been experimenting with ChatGPT lately, trying to find useful applications for it. As part of that process, I run some of my searches both through a search engine and through the bot, and compare the results.

In one of my latest tests, I asked ChatGPT to cite its sources, and it did, giving me a few research papers along with their authors. When I couldn't locate those papers, I asked where to search for them. Here is what I found: ChatGPT was wrong. No such article existed in the referenced publication.

This made me think: what happens when we trust our sources to the point where we no longer fact-check them?

With "deep fakes," we have already been given a view into the risks of AI fabricating images, audio, and even video footage. But those were made on purpose and properly flagged.

What if fabrications simply happen by accident and we take them at face value? What if important decisions are made based on the information one of these systems gives us? Will we need something like "AI malpractice" insurance? Do we even know who would be responsible for it?


As originally published at The ChannelMeister.

The Daily PPILL is my personal daily blog project. PPILL stands for #Purpose, #Process, #Innovation, #Leverage, and #Leadership: the themes that I write about and, in my view, indispensable ingredients of any great initiative.

Please consider subscribing HERE. And if for any reason you don't use LinkedIn, ON THIS LINK you will find alternate ways of reading it, including a weekly summary.

