
AI: The digital Wild West

With little government regulation, the #generativeai landscape is looking like a digital Wild West.

There’s the promise of gold in those AI hills, and those hoping for a slice of the action and a quick fortune are pitching their start-up tents and staking their claims. Some of those claims are wild.

People buy into hype, especially those wanting to make quick and easy money.

PitchBook estimates that AI #startups raised more than $US1.6 billion in the first quarter of calendar 2023, with an estimated $US10.7 billion raised in the second quarter. So expect more hype to come.

The rush for AI gold means that many apps are released with little testing or thought for security, privacy and ethics issues. Large organisations are rightly wary.

Apple has just announced bans on employees using #chatgpt and GitHub’s Copilot. The mandate is similar to decisions by other large organisations such as Samsung, Amazon, Bank of America and Citigroup.

Many of these organisations cite security as the reason for their bans, so it’s not surprising that banks and tech companies with time-sensitive product development work are the first to move. Other large organisations should follow suit. Data breaches have not been confined to big tech.

#corporatereputation is vital for every organisation. Data breaches are a clear and present financial risk, but just as important are the reputation risks associated with failing to consider the ESG risks of AI.

To manage the risks and maintain reputations, organisations need policy and procedures for employees who use new and untested AI services. They will also need regular communication on the issue.

Risk: Is AI the snake oil of 2023?

“Much of what’s being sold as ‘AI’ today is snake oil — it does not and cannot work,” Princeton University Professor Arvind Narayanan said in a 2019 presentation that went viral.

Whilst there have been significant developments in the four years since, Professor Narayanan is now working on a book called ‘AI Snake Oil’.

In a preview of the book, he suggests that organisations deploying AI need to carefully consider their public reporting standards, even allowing external audits to validate accuracy claims.

“The burden of proof needs to shift to the company to affirmatively show that their AI systems work before they can be deployed,” Professor Narayanan says.

Scams: Fake ChatGPT apps exploit users

Scammers are luring victims online by offering “enhanced ChatGPT features” then collecting sensitive personal information, including credit card details.

IT Voice reports that scammers are using “sophisticated techniques to mimic legitimate payment gateways”.

“Often replicating the official ChatGPT interface, with minor alterations that are difficult for unsuspecting users to spot. These fake apps are then marketed aggressively through social media platforms, app stores, and online advertisements,” the story reports.

Brand: AI doesn’t understand the “why” of branding

#google, #meta and other organisations are talking about generative AI being the “basis of the next era in creative testing and performance”. According to VidMob’s Scott Hannan, this might be somewhat true for small businesses, but it’s a different story for large brands.

Writing in VentureBeat, Hannan points out that the “why” is currently missing from creative development by AI. As an example, he points to brands wanting to ensure they represent diversity and don’t objectify models in “overly polished, idealized” images.

“AI can ingest information and spit out new assets. AI can also test creatives and optimize toward the creatives that are performing. But when it comes to knowing why a creative performs better than another, AI falls short. For any enterprise that highly values its brand, AI will play a different role,” Hannan says.
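To make Hannan’s point concrete, here is a minimal, hypothetical sketch in Python of what “optimising toward the creatives that are performing” can look like: an epsilon-greedy bandit that mostly serves whichever ad creative has the best observed click-through rate. The creative names and metrics are invented for illustration; this is not VidMob’s or any platform’s actual system.

    import random

    # Hypothetical ad creatives; the names are illustrative only.
    creatives = ["lifestyle_video", "product_closeup", "testimonial"]
    impressions = {c: 0 for c in creatives}
    clicks = {c: 0 for c in creatives}

    def click_rate(creative):
        # Observed click-through rate; 0.0 until the creative has been shown.
        return clicks[creative] / impressions[creative] if impressions[creative] else 0.0

    def choose_creative(epsilon=0.1):
        # Explore a random creative 10% of the time; otherwise exploit the
        # best performer so far, i.e. "optimise toward what performs".
        if random.random() < epsilon:
            return random.choice(creatives)
        return max(creatives, key=click_rate)

    def record_result(creative, clicked):
        # Update counts after each impression.
        impressions[creative] += 1
        clicks[creative] += int(clicked)

Note that a loop like this learns which creative wins, but carries no notion of why it wins, which is exactly the gap Hannan describes for brand-conscious enterprises.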

Standards: G7 establishes “Hiroshima AI process”

#reuters has reported that the leaders of the Group of Seven (G7) nations have called for global technical standards to keep AI “trustworthy”. The pace of growth is not being matched by corresponding governance measures, according to the leaders.

“Leaders agreed on Friday to create a ministerial forum dubbed the ‘Hiroshima AI process’ to discuss issues around generative AI, such as copyrights and disinformation,” Reuters reports.

You can read the full G7 Hiroshima Leaders’ Communiqué here.

Law: Should tech companies be off the hook for AI 'disinformation'?

Irish defamation lawyer Paul Tweed has said AI companies should be held liable for #ai disinformation.

An expert in media law and reputation management, Tweed told the Irish Examiner he “has heard concerns from organisations, including news publishers, that AI chatbots are taking up the role of ‘news aggregator’, but a lot of what is being churned out is misleading information.”

According to the report, Tweed said that Irish law isn’t geared to deal with the problems emerging from the growing use of AI. “The million-dollar question is, who’s accountable?” he said.

Diversity: Tackling AI’s diversity crisis

“If it isn’t addressed promptly, flaws in the working culture of AI will perpetuate biases that ooze into the resulting technologies, which will exclude and harm entire groups of people,” according to influential science magazine Nature.

In addition, any resulting ‘intelligence’ will be inaccurate and will lack varied social-emotional and cultural knowledge, the story suggests.

Change will be difficult because of a culture of resistance in a sector “dominated by mostly middle-aged white men from affluent backgrounds”. The article interviews five researchers leading efforts for change.

In brief

  • The so-called “Godfather of AI”, Geoffrey Hinton, who recently warned of the “existential threat” of AI, called for a halt to the training of radiologists in 2016. “People should stop training radiologists now. It's just completely obvious that within five years, deep learning is going to do better than radiologists,” Hinton said.
  • Confused about the difference between a “hard takeoff” and a “fast takeoff” in AI safety jargon? CNBC has published a great primer on AI terms, including “foom”.
  • Chief Diversity Officers need to “lean in more” on AI #data, according to Forbes. Amongst the actions they should take are developing ethical guidelines and policies, continuous monitoring and evaluation, and ensuring stakeholder engagement and transparency.
  • MIT’s second annual “Day of AI” was held on 18 May and focussed on language arts, social studies, arts and humanities, and STEM across all classes in the K-12 sector. According to Education Week, “groups such as Code.org and the Educational Testing Service recently launched an effort to help schools and state education departments integrate artificial intelligence into curricula”.
  • And finally, spotted on #reddit:

https://www.reddit.com/r/WorkReform/comments/13mvrb1/this_is_not_the_future_we_wanted_it/

Reputation Week provides general advice only and should not be used as a basis for making decisions about your particular circumstances.

Joy Cusack GAICD, FGIA

Chair at Mountains Youth Services Team


Timely! The tech companies, banks et al appear to be wary of in-house use. Are they simply protecting their “patches” or are there more sinister issues? Since around 2012, with the advent of social media, the negative impact on our youth has been devastating, adding to mental health issues. Could AI add to our youth’s woes? Should regulators step in?
