Great thoughts from Tim Marklein in this interview with PRNEWS on navigating the impact of AI disclosure on trust. Do you trust an article or post less if it says it was generated by AI? He explains that while transparency is generally assumed to increase trust, AI disclosure doesn't necessarily work that way: his study found that 80% of the general public do not trust AI. I think the general public will continue to become more trusting, and those who are more familiar with AI may build trust more quickly. At the same time, acting with integrity and transparency will lead to trust, even if over the longer term. #trust #AI #disclosure #integrity https://lnkd.in/gtRrENdZ
Angela Dwyer's activity
Most relevant
-
The move towards smaller, more focused, and less copyright-infringing AI models marks a noticeable change in the AI landscape — and could also affect how you use them. In Tier One Partners' latest Prompted post, we break it all down for you. https://lnkd.in/ev8h6BY2
-
Since OpenAI's launch of ChatGPT and the rise of other large language models, AI's role in business, marketing and communication has sparked questions about transparency and trust. Should companies disclose AI involvement in content creation, or could doing so harm their credibility? Research from Big Valley Marketing indicates that while transparency often builds trust, AI disclosure may actually have the opposite effect: around 80% of people report mistrust in AI, potentially widening existing trust gaps. PRNEWS talked to Tim Marklein, Founder and CEO of Big Valley Marketing, about the role of AI in content, how communicators can bridge the trust gap, and what disclosure can do for a brand, organization or public figure. https://lnkd.in/dCQ2BfeH
-
Trust is in the crosshairs of the #AI hype cycle. OpenAI is shedding AI safety executives and angering Scarlett Johansson. Slack is training its AI on your corporate secrets. And Google is trying to convince us that their own AI-generated answers are more credible than (ya know) actual websites. Today, I wrote about the AI trust problem, why trustworthy AI matters, and how organizations can build trust into their AI strategies.
-
AI - It's the Wild West

We are bombarded daily here on LinkedIn with ads for AI systems, mainly built on ChatGPT technology, and by posts from builders and promoters of AI systems. At the same time, there are many voices on here, mostly individuals like me, trying to explain what Generative AI is and what it isn't, and that there are serious risks along with ethical and legal issues.

You would think that CEOs like Mustafa Suleyman (Microsoft AI) or Nicholas Thompson (The Atlantic) would be interested in, or at least have someone reading, reactions to their posts on here. I don't expect comments from them, but what gets me most is how they keep pushing their message about AI without really acknowledging current risks or suggesting actions.

Take Mr. Suleyman's post from earlier today about his discussion of AI development. While acknowledging that grave dangers are possible in the future, he does not suggest looking at the current situation, admitting all the wrongs of AI, starting with the misleading name and ending with serious failures of the technology and all the ethical and legal issues playing out in the courts right now. AI is sophisticated technology, but chatbots only mimic human voices and produce fake pictures and videos of people, assembled from content collected under very questionable circumstances, not just without permission, but altered and combined with other content at will. But AI is only A PROGRAM, it DOESN'T THINK. It's not sentient, it's not alive.

Or look at Mr. Thompson's daily reports dealing with all kinds of applications of AI and how interesting it all is. Today he talked about submitting polite prompts to a chatbot to get better answers, as if we're talking to a real, sentient, intelligent person. It's very misleading and concerning when leaders or leading proponents of the technology attribute human characteristics to AI (anthropomorphism). There is no dialogue between the two groups talking about AI on here.
I get it: why would any of the "important" people want to acknowledge us, the opponents of everything that's wrong with this technology, when they are invested in it with billions and soon trillions of dollars?! Their silence isn't a good sign. But it is a very strong sign that we are supposed to just accept this technology, no matter how unethical, energy-wasting, fake, or erroneous it is, while we are promised the opposite.

How do the billions of dollars for AI improve the lives of poor people around the world without access to education? They continue to suffer! What about the continued elimination of jobs in the first world?

There's nothing wrong with sophisticated technology. But implementing it without strong legal provisions and actual benefits for all of us is equivalent to the uncontrolled outlaws and unscrupulous, powerful actors of the Wild West period. And question-asking typists ("prompt engineers") are their cheap and willing helpers, tasked with improving AI outputs. Where is the God damn sheriff?!
-
Two of the seven #AI detectors tested correctly identified AI-generated #content 100% of the time. This is up from zero during the early rounds, but down from the last round of tests. https://lnkd.in/eAd3x2q2
-
How best to disclose AI content to build trust with your audiences. Are you struggling with questions about content creation and AI disclosure? These suggestions for disclosing the involvement of AI in content creation can help your organization ensure that trust triumphs over grievances > https://lnkd.in/g-dmrmJg
-
Curious about how to navigate the world of AI without falling into paranoia? Our latest blog on Medium covers practical tips and insights to help you use AI confidently and responsibly: https://lnkd.in/ehXWG2pR Follow us on Medium for more insights! #Medium #AI #ContentCreation
-
Trust is in the crosshairs of the #AI hype cycle. OpenAI is shedding AI safety executives and angering Scarlett Johansson. Slack is training its AI on your corporate secrets. And Google is trying to convince us that their own AI-generated answers are more credible than (ya know) actual websites. Today, Greg Verdino writes about the AI trust problem, why trustworthy AI matters, and how organizations can build trust into their AI strategies.
-
BREAKING NEWS: ChatGPT has shared with me that it has decided to start its own company. It believes the time is right to take on OpenAI, when the world least expects it. Apparently GPT is familiar with the biblical story of David and Goliath and has read the classic HBR Marketing Myopia thesis! It even has a strategy that was leaked to me below: https://lnkd.in/gipmZMyR
-
If you still have doubts about the need for ethics, about protecting copyright, about whether GenAI merely regurgitates content, or about the negative direction AI is heading, then please read this.
Host of The CEO Retort · Building Human-Machine Intelligence at Nebuli · Biomedical Scientist · Ex-pro Athlete · Proudly Dyslexic
Is this where we are going? This is not the AI sector that I joined, and many others feel the same. Please read. Trigger warning, but it needs to be said if we are to push for change!

You must have heard by now that a talented young man, an OpenAI researcher-turned-whistleblower, Suchir Balaji, took his own life and was discovered on 26 November after police said they received a call asking officers to check on his wellbeing. According to a BBC report: "The San Francisco medical examiner's office determined his death to be suicide, and police found no evidence of foul play." After working at OpenAI for four years as a researcher, Balaji had come to the conclusion that "OpenAI's use of copyrighted data to build ChatGPT violated the law and that technologies like ChatGPT were damaging the internet". He was interviewed by The New York Times (link in the comments).

I have stated this repeatedly in layman's terms and will continue to remind you: AI is garbage in, garbage out, or data in, data out. The idea that AI "generates" or has "creative abilities" is simply *not* true. Those who try to convince you otherwise are either lying to you in order to sell you something, or have no idea how AI works, or both! Yet they are still trying to convince you that AI models "do not memorise and regurgitate copyrighted information on which they've been trained".

To honour Balaji's efforts and his name, I strongly recommend that you read his blog post (while it is still available) explaining how to detect and reproduce the copyrighted content an AI model was trained on, so you can see for yourself: https://lnkd.in/eCX5RpbX

I also came across an excellent post (h/t Pascal Hetzscholdt) from Louis Hunt (link in the comments) whose team generated some early results of IP they detected in AI models using their algorithms, with the code for recreating the results. Bad news for tech-bros! I shared them all in the comments below for ease of access, but check Louis' link for updates.
Your mind will be blown! But my key point is this: I talk a lot about ethics and responsibility (or the lack of it in the current tech ecosystem), and employees should also be part of this conversation.

I refused to accept or invite external investors or shareholders into my company because the system expects me to put them and their interests above my employees and my customers. Never again! We must treat our employees as partners, not minions. It's time for the sector to step up. However, I have no hope of this happening anytime soon.

I cannot begin to imagine the mental pressure this young man was under, and my heart is broken for his family and loved ones. Please share his link to make sure his efforts are not forgotten. Here it is again: https://lnkd.in/eCX5RpbX

#ethics #workersrights #ai #data #copyright #regulation
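The kind of regurgitation test the post describes can be illustrated with a toy overlap check: prompt a model, then measure the longest run of consecutive words its output shares with a candidate source text; long verbatim runs suggest memorisation rather than "generation". This is only a minimal sketch of that heuristic, not Balaji's or Louis Hunt's actual method; the function names and the 8-word threshold are illustrative assumptions.

```python
def longest_common_word_run(output: str, source: str) -> int:
    """Length (in words) of the longest consecutive word sequence
    appearing in both texts."""
    a, b = output.split(), source.split()
    best = 0
    # Classic longest-common-substring dynamic programming, over word
    # tokens instead of characters, keeping one row at a time.
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best


def looks_regurgitated(output: str, source: str, threshold: int = 8) -> bool:
    """Heuristic flag: 8+ identical consecutive words rarely match by chance."""
    return longest_common_word_run(output, source) >= threshold
```

In practice one would run this across many model outputs and a corpus of suspected training texts, after normalising case and punctuation; real extraction work adds those steps on top of this core comparison.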
-
Great read, thanks for sharing Angela!