AI: The digital Wild West
Ross Monaghan
Former CEO now sharing his skills and knowledge as an educator, strategist, and trainer.
With little government regulation, the #generativeai landscape is looking like a digital wild west.
There’s the promise of gold in those AI hills, and those wanting a slice of the action, or simply to get rich quick, are pitching their start-up tents and staking their claims. Some of those claims are wild.
People buy into hype, especially those wanting to make quick and easy money.
PitchBook estimates that AI #startups raised more than $US1.6 billion in the first quarter of the 2023 calendar year, with an estimated $US10.7 billion raised in the second quarter. So expect more hype to come.
The rush for AI gold means that many apps are released with little testing or thought for security, privacy and ethics issues. Large organisations are rightly wary.
Apple has just announced bans on employees using #chatgpt and GitHub’s Copilot. The mandate is similar to decisions by other large organisations such as Samsung, Amazon, Bank of America and Citigroup.
Many of these organisations cite security as the reason for bans, so it’s not surprising that banks and tech companies with time-sensitive product development issues are the first to move. Other large organisations should follow suit. Data breaches have not been confined to big tech.
#corporatereputation is vital for every organisation. Data breaches are a clear and present financial risk, but just as important are the reputation risks associated with failing to consider the ESG risks of AI.
To manage the risks and maintain reputations, organisations need policies and procedures for employees who use new and untested AI services. They will also need regular communication on the issue.
Risk: Is AI the snake oil of 2023?
“Much of what’s being sold as ‘AI’ today is snake oil — it does not and cannot work,” Princeton University Professor Arvind Narayanan said in a presentation in 2019 that went viral.
Whilst there have been significant developments in the past four years, Professor Narayanan is now working on a book called ‘AI Snake Oil’.
In a preview of the book, he suggests that organisations that deploy AI need to carefully consider their public reporting standards, even allowing external audits to validate accuracy claims.
“The burden of proof needs to shift to the company to affirmatively show that their AI systems work before they can be deployed,” Professor Narayanan says.
Scams: Fake ChatGPT apps exploit users
Scammers are luring victims online by offering “enhanced ChatGPT features” then collecting sensitive personal information, including credit card details.
IT Voice reports that scammers are using “sophisticated techniques to mimic legitimate payment gateways”, often replicating the official ChatGPT interface with minor alterations that are difficult for unsuspecting users to spot. “These fake apps are then marketed aggressively through social media platforms, app stores, and online advertisements,” the story reports.
Brand: AI doesn’t understand the “why” of branding
Whilst #google, #meta and other organisations are talking about generative AI as the “basis of the next era in creative testing and performance”, VidMob’s Scott Hannan argues this may be somewhat true for small businesses, but it’s a different story for large brands.
Writing in VentureBeat, Hannan points out that the “why” is currently missing in creative development by AI. As an example, he points to brands wanting to ensure they represent diversity and don’t objectify models in “overly polished, idealized” images.
“AI can ingest information and spit out new assets. AI can also test creatives and optimize toward the creatives that are performing. But when it comes to knowing why a creative performs better than another, AI falls short. For any enterprise that highly values its brand, AI will play a different role,” Hannan says.
Standards: G7 establishes “Hiroshima AI process”
#reuters has reported that the leaders of the Group of Seven (G7) nations have called for global technical standards to keep AI “trustworthy”. The pace of growth is not being matched by corresponding governance measures, according to the leaders.
“Leaders agreed on Friday to create a ministerial forum dubbed the ‘Hiroshima AI process’ to discuss issues around generative AI, such as copyrights and disinformation,” Reuters reports.
You can read the full G7 Hiroshima Leaders’ Communiqué here.
Law: Should tech companies be off the hook for AI 'disinformation'?
Irish defamation lawyer Paul Tweed has said AI companies should be held liable for #ai disinformation.
An expert in the field of media law and reputation management, Tweed told the Irish Examiner he “has heard concerns from organisations, including news publishers, that AI chatbots are taking up the role of ‘news aggregator’ but a lot of what is being churned out is misleading information.”
According to the report, Tweed said that Irish law isn’t geared to deal with the problems emerging from the growing use of AI. “The million dollar question is, who’s accountable?” he said.
Diversity: Tackling AI’s diversity crisis
“If it isn’t addressed promptly, flaws in the working culture of AI will perpetuate biases that ooze into the resulting technologies, which will exclude and harm entire groups of people”, according to influential science magazine Nature.
In addition, any resulting ‘intelligence’ will be inaccurate and will lack varied social-emotional and cultural knowledge, the story suggests.
Change will be difficult because there is a culture of resistance, with the sector “dominated by mostly middle-aged white men from affluent backgrounds”. The article interviews five researchers leading efforts for change.
Podcast episode of the week
In brief
Reputation Week provides general advice only and should not be used as a basis for making decisions about your particular circumstances.