Thursday Thoughts on AI + Law (8/3/2023)
[Header image: Crater Lake]

It's been a busy two weeks, so I didn't get around to publishing a newsletter last week. But that just means I packed more into this edition. As always, I hope it's helpful/insightful, and feel free to share with others.

  1. The White House and leadership from seven major tech firms announced responsible AI commitments at the end of last week. Reportedly, the White House is hoping to do even more (and soon), since we’re obviously in the ‘early days’ of AI regulation. If you want to learn more about the direction regulations may take, Vox has you covered.
  2. Following that, Microsoft, Anthropic, Google, and OpenAI launched the Frontier Model Forum to coordinate on risk mitigation for powerful models. One way they’re working on that is by using AI labeling schemes (e.g., C2PA or even Unicode).
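To make the Unicode idea concrete: one much-discussed labeling trick is to tuck a provenance tag into invisible Unicode “tag characters” (the U+E0000 block, which mirrors printable ASCII but renders as nothing in most contexts). Here’s a toy Python sketch — purely illustrative, not any vendor’s actual scheme, and trivially stripped by anyone who wants to:

```python
# Invisible provenance labels via Unicode tag characters (U+E0020-U+E007E).
TAG_BASE = 0xE0000

def embed_label(text: str, label: str) -> str:
    """Append an invisible ASCII label to the text."""
    hidden = "".join(chr(TAG_BASE + ord(c)) for c in label)  # ASCII labels only
    return text + hidden

def extract_label(text: str) -> str:
    """Recover any tag-character payload hidden in the text."""
    return "".join(chr(ord(c) - TAG_BASE)
                   for c in text if TAG_BASE < ord(c) <= TAG_BASE + 0x7F)

marked = embed_label("This paragraph was machine-written.", "ai-generated")
print(extract_label(marked))  # -> ai-generated
```

Real provenance standards like C2PA instead bind cryptographically signed manifests to media metadata; invisible characters survive copy/paste but not much else, which is why they’re only one part of the labeling conversation.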
  3. Let’s be honest: most “open” AI isn’t really open-source software, or even all that open. Seriously. If you’re interested, here’s an epic Twitter thread on open-source model performance. Meanwhile, GitHub, Hugging Face, and others are calling for more open-source protections under the EU’s proposed AI Act (joining developers of proprietary systems in lobbying).
  4. Let’s throw out the Turing test: ChatGPT passed the test, and it’s clearly not actually intelligent.
  5. Lessons from Capitol Hill: if you want to get something passed, try to stick it in the big defense spending bill.
  6. Some are worried that Europe’s AI Act will kill AI innovation on the continent. Others are thinking that the EU’s sense of tech FOMO will ultimately help foster support for AI innovation.
  7. Stability AI scored a win in court. But the lawsuits keep spreading: now, Cigna is being sued for using AI to deny patient claims.
  8. Sam Altman is pushing Worldcoin as a solution to questions of authenticity in an age of AI, but perhaps we should instead be asking why, if we need such a terrible solution, we want to create this problem in the first place?
  9. Big question to resolve: when AI makes up something about a person and presents it as a fact, who is responsible for the damages?
  10. MIT Tech Review has a list of the ways in which AI might transform American politics.
  11. Since AI-augmented political ads are inevitable in this election cycle, here’s a useful primer for how to distinguish AI from reality.
  12. Banking interns are all in on ChatGPT it appears. So are hedge funds. And (American) banks. Which might cause the next financial crisis, according to the SEC’s Gary Gensler.
  13. Finance types are unlike lawyers, who remain skeptical and are mostly not using AI. Perhaps that’s because AI might crater firm profitability.
  14. AI could make health care so much better. For example, AI can improve breast cancer diagnosis, improve other diagnostic efforts, and help design appropriate hypertension treatment protocols. And AI can help make diagnostic errors a thing of the past. This is why companies like AWS are offering generative AI resources for health care companies. Related: Doctors probably shouldn’t use ChatGPT for patient notes.
  15. Generative AI might be damaging Stack Overflow, but it’s investing in its own AI tools to help developers.
  16. Valve is blocking developers from using generative AI unless they can demonstrate non-infringement. Meanwhile, other gaming groups are going gangbusters for generative AI.
  17. Protection of AI systems is apparently a hot business.
  18. Watch out, the glacier is moving way faster: Congress is advancing a bill calling for an 18-month study on AI accountability.
  19. The EEOC is warning about AI biases (and efforts to control them).
  20. If you are looking for a plain English explanation of how LLMs work, this might be the best one I’ve seen.
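If you’d rather see the core intuition behind those explainers in code: LLMs are, very roughly, next-token predictors. This deliberately tiny bigram sketch captures only the shape of the objective — real models use transformer networks with billions of parameters and learn far richer context than “the previous word”:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens tend to follow it."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent successor of `token`."""
    return counts[token].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ran".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # -> cat  ("the" is followed by "cat" twice, "mat" once)
```

Scale the context window up from one token to thousands, swap counting for gradient descent over a neural network, and sample from the predicted distribution instead of taking the argmax — that’s the one-paragraph version of how LLMs generate text.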
  21. A top policy advisor on the EU Parliament side argues for the Parliament draft of the AI Act as a mechanism for increasing competition for developers of foundational models (while also meeting civil society expectations regarding human rights).
  22. A few weeks ago, hundreds of business executives warned about the impact of the proposed AI Act. Hundreds of civil society advocates just responded.
  23. The American Chamber of Commerce issued their position on the proposed AI Act.
  24. European rights-holders and creators argued for appropriate transparency requirements for AI.
  25. The best use of AI is to get rid of boring, tedious work.
  26. We are seeing an unfortunate rise in the use of AI for interviewing employment candidates.
  27. And we’re also seeing some sketchy uses of AI to try to predict when employees might resign.
  28. Do AI developers have Oppenheimer moments?
  29. Medium is taking a stand: no AI-generated content is welcome there.
  30. The U.S. is leading on AI but continued success is not inevitable. Pablo Chavez published an incisive essay and analysis in Lawfare that outlines the geopolitical issues relating to AI development and argues that the U.S. needs to exert strong leadership here.
  31. The team at Modern Diplomacy is writing a series of essays on the impact of AI on economies and warfare. And Palantir’s CEO keeps arguing for AI-augmented weapons (why he’s the one beating that drum, I’m not entirely sure, but he’s being joined by other related Lord of the Rings-influenced companies).
  32. It’s still not clear whether generative AI is going to help or hurt hackers.
  33. What scares many about generative AI is that it’s a black box (sort of).
  34. Most companies want to “do something about AI” but a majority aren’t resourced to do it.
  35. Nathan Lambert kicks the tires on LLaMa 2 and points out some shortcomings.
  36. Very useful AI: Wayfair is offering AI to help people reimagine how their homes could look (of course it helps sell furniture on wayfair.com).
  37. AI training data is like a gene pool: when generative AI trains on synthetic content and its outputs are fed back in as training data, artifacts eventually get amplified through a self-consuming loop.
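That self-consuming loop is easy to simulate. Here’s a hedged sketch using a one-dimensional Gaussian as a stand-in for a generative model (not a real training pipeline): each “generation,” the model is refit to a small sample of its own output. Because each small sample slightly underestimates the true spread, diversity drains away:

```python
import random
import statistics

random.seed(0)

mu, sigma = 0.0, 1.0  # generation 0: the "real data" distribution
initial_sigma = sigma
for generation in range(200):
    # Draw a small synthetic dataset from the current model...
    data = [random.gauss(mu, sigma) for _ in range(10)]
    # ...then refit the model to its own output (population std, like an MLE).
    mu, sigma = statistics.fmean(data), statistics.pstdev(data)

# After 200 self-consuming generations, the spread has collapsed.
print(sigma < initial_sigma)  # -> True
```

This mirrors the “model collapse” dynamics researchers have reported for generative models trained on their own output, though the real-world dynamics are far messier than a Gaussian toy.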
  38. Meta will reportedly be embedding various AI helper bots throughout its platforms. It’s part of a broader strategy towards AI ubiquity.
  39. AI hiring is highly, highly concentrated in a few cities.
  40. Ethan Mollick wrote a good post re: the “strange tide of generative AI.”
  41. Generative AI is going to create lots of opportunities for AI consultants.
  42. And SAP spoke with Axios on the spending needs associated with going big on AI.
  43. McKinsey thinks that middle managers will hold the key to unlocking the value of AI.
  44. Many lower income, white collar occupations will be disproportionately impacted by AI.
  45. Alexa is going to receive a generative AI reboot.
  46. Not quite an A-lister salary, but Netflix is offering pretty high compensation for AI product managers.
  47. If you want to compete for the lead in AI, you have to be prepared to spend big. And work hard: at Google, Sergey Brin is jumping back into action. See also: Intel wants to put AI in everything.
  48. In the more mundane world of recommendation algorithms, there has been a series of articles published recently regarding how Facebook’s models work.
  49. Smart by Nvidia: invest heavily to help your customers’ businesses grow (so they’ll need more chips and all).
  50. Fascinating: AI2 unveiled its AI2 ImpACT license program.
  51. Photoshop’s AI tools now let you ‘uncrop’ photos.
  52. Oh boy. An AI-powered ‘news’ channel will produce news clips tailored to the viewers’ political perspectives.
  53. Google’s Assistant is getting AI updates.
  54. Axios published a deeper look at Apple’s moves into the generative AI landscape.
  55. Microsoft and Leidos are reportedly partnering to expand AI use in the public sector.
  56. MIT announced “PhotoGuard” to protect images from AI edits. Here’s more on how to use it. Good news! But on the other hand, OpenAI shut down its “AI detection” tool since it was pretty ineffective. Maybe Instagram’s tool will be better?
  57. IBM and Hugging Face are releasing a climate change-oriented foundation model, and Microsoft is using AI to help address wildfire risks.
  58. An AI startup is hoping to help diesel-powered trains clean up their act.
  59. The AI arms race means that, evidently, Nvidia is facing insane demand for its chips.
  60. Fast Company argues that the AI boom is saving San Francisco.
  61. Michael Dempsey published a long post on how to think about R&D and capital development in the AI industry.
  62. Someone put together a list of the best AI-related newsletters where you can read deeper dives on all of the above topics.
