Thursday Thoughts on AI + Law (4/20/23)
[Cover image: California Poppies, 4/20/23]

Lots to digest this week, from key questions and ‘state of the world’ reports to whether AI Drake is as good as the real Drake.

  1. The WSJ has a great set of 25 questions we should ask if we want to figure out how to think about AI.
  2. And AI made the cover of The Economist. (More here.)
  3. Azeem Azhar outlines how the move towards greater AI use might be signaling a Copernican-like paradigm shift in how we think about systems, creativity, etc.
  4. The AI Now Institute is arguing that we need to move beyond what it views as self-serving accountability audits for AI. Perhaps, as The Economist suggests, greater explainability and transparency could help? That might, for example, include greater development and deployment of open-source models. Of course, this raises the question of what happens when AI models go beyond human cognitive limits.
  5. The makers of celebrity deepfake tools are about to discover an area of the law known as ‘rights of publicity.’
  6. Garbage in, garbage out: the Washington Post dug into the training sets behind some LLMs and showed how biases and errors can be introduced through the training data.
  7. Not cool: Uber, Amazon, and others are using AI to ‘personalize’ pay for gig and contract workers.
  8. Axios attended the TED conference and watched the talks on AI so you didn’t have to.
  9. If LLMs are only as good as their training data and the training data is censored, what will China do?
  10. Tim O’Reilly, per usual, has a very good point: you can’t regulate what you don’t understand. Which is currently something of a problem.
  11. The EU is planning an AI Act, but German national regulators are also reportedly digging into how to regulate AI.
  12. Meanwhile, Japan’s government is using ChatGPT to make regulations easier for the Japanese people to understand. Pretty good idea!
  13. The potential wave of regulation in the EU might kill the continent’s AI industry before it takes off, according to entrepreneurs in the space.
  14. Canadian researchers are calling for support for the Artificial Intelligence and Data Act (AIDA).
  15. A common theme of proposed regulations: transparency about who makes the AI system.
  16. GPT-5 is a long way off, according to Sam Altman. Is this part of a transition from LLMs to URLs (Unsupervised Reinforcement Learning models)?
  17. Microsoft provided a primer on how it has approached responsible AI.
  18. Talk about growth: there were more articles about AI published in the week of 4/7/23-4/14/23 than there were in all of the 1990s.
  19. Many Google employees did not like Bard much at all when they were testing it.
  20. Daniel Miessler predicts that governments will start outright banning AI in the next few years in response to potential economic disruptions from job losses.
  21. Donald Rumsfeld did many bad things, but one good thing he gave the world was the quip about ‘known unknowns’ and ‘unknown unknowns.’ Here are some of the known unknowns related to AI and its impact on media.
  22. Drew Breunig has a good essay questioning whether LLMs are compatible with data protection laws and how to address their increasing role in our society.
  23. Very fascinating (and something to try to fix): AI development and operations consume tremendous amounts of fresh water.
  24. GPT-4 can create self-regenerating/self-fixing Python scripts. Fascinating.
  25. Perhaps Elon was yelling ‘pause’ just so he could try to catch up through x.ai, rather than because AI is going to ‘hit like an asteroid’? He’s now talking about a ‘TruthGPT,’ which will probably never live up to its name but sounds about on-brand for Musk. It doesn’t seem to be off to a great start, in any event.
  26. Interesting essay subject: what if explainability is a hindrance to progress? (Of course, what if you can’t explain a model…and it gets things very, very wrong?)
  27. The WSJ asks: should AI (and robots) have moral/legal rights?
  28. If you’re stress-testing ChatGPT, you might as well get paid: OpenAI announced a bug bounty.
  29. Algorithms matter, but so do chips and system designs. Relatedly: The Information reported that Microsoft might be getting into the chip game.
  30. Apple is offering high-yield bank accounts, but will it also offer a GPT rival?
  31. Applied correctly, AI can be used to help organizations with their compliance obligations.
  32. Adobe is rolling out new AI-powered features to help designers and creators.
  33. What kinds of creators and creative works might be at greatest risk of AI-driven disruption?
  34. Here’s how investors and venture capitalists are thinking about some of the different players in the AI space.
  35. Banks are getting very excited about generative AI.
  36. Sounds like Warren Buffett is getting excited about AI, too, since it can … translate songs for him?
  37. Scale.ai released their 2023 “AI Readiness Report.”
  38. If there are ‘AI Wars,’ then Google sounds very poised to…strike back?
  39. The FTC’s Lina Khan is highlighting the potential competition challenges that will impact the AI industry.
  40. AI-created art won the Sony World Photography Award, but the ‘artist’ refused the prize and used the opportunity to draw a distinction between art and AI.
  41. AI is already resulting in layoffs for video game illustrators in China.
  42. ChatGPT is threatening the livelihoods of millions of coders in India. But it wouldn’t be able to get into any IIT, it appears.
  43. Finance might be ripe for AI-driven disruption, too?
  44. Same with market research (since, apparently, GPT models can simulate market participant behavior)?
  45. The UK is thinking about how to address workforce displacement as a result of AI.
  46. Some creators are going all-in on AI.
  47. Insider will start experimenting with AI to draft articles.
  48. Reddit data might come at a cost for AI training.
  49. Snap’s AI bot is going viral, apparently.
  50. Project managers, rejoice: AI for JIRA is in early release stages.
  51. Google keeps advancing the ball on generative AI in healthcare.
  52. Interesting point from psychologists: since LLMs have no corporeal body, they will never understand everything they create.
  53. OpenAI should start a waitlist for all the regulators coming their way (the latest: Spain).
  54. Amazon is framing their deployment of AI tools for AWS customers as an effort to ‘democratize’ AI access.
  55. Tens of thousands of Facebook users fell victim to a fake ChatGPT scam known as ‘Lily Collins.’
  56. Speaking of scams, we’ll have to see how many people were victimized by tax-related GPT-powered scams. We’ll also hear about how many people saved $$$ through AI-powered tax loophole exploration.
  57. Cool! Google is trying to use AI to save coral reefs. Maybe?
  58. First AI came for Taylor Swift. Now, ‘ghostwriter’ came for Drake and the Weeknd. Then the rights-holders came for ghostwriter (not sure that UMG and others actually have rights in the fake voice, but that’s something for courts to explore!).
  59. Ben Thompson has a great argument for “zero trust authenticity” to help creators in an AI-oriented world.
  60. NewScientist is calling for a long-overdue scientific community conversation about AI risks.

