Thursday Thoughts on AI + Law (5/11/23)
Coastal Trail, May 2023

It goes without saying that there is a lot to digest in the world of AI these days. Some of the interesting things this week: what role privacy plays in AI risk management, whether AI should play a role in human resource management, how to best govern AI to maximize societal value, whether open or closed AI models will dominate, and whether AI will replace creative workers while amplifying human disagreements.

But first, what does Snoop think?

  1. I think Snoop is channeling what many people are thinking about AI right now.
  2. Key committees in the European Parliament agreed on a draft AI Act proposal. The draft is largely–but not entirely–aligned with the Commission and Council drafts, and includes provisions relating to ‘foundation models’ (e.g., LLMs) that can serve a variety of functions. Once this draft passes out of the Parliament as a whole, the trilogue negotiation process between the Parliament, Council, and Commission will begin. This is a reminder that we’re really only at the beginning of the global effort to regulate AI, and people can’t agree on whether we need some sort of ‘super’ regulation to address AI risks or just, well, the plain ol’ U.S. Constitution. Threading this needle will be difficult and will require significant imagination.
  3. There are lots of layoffs occurring these days. Organizations are increasingly using AI to determine who loses their job. The use of AI to automate employment decision-making (i.e., fired or hired by AI) is definitely something that needs more regulation, particularly when you have influential magazines suggesting that AI will play a role similar to McKinsey’s historical role in corporate ‘right-sizing.’ (On a somewhat related note, thankfully, due to the ‘Cambrian explosion’ of AI tech, software engineer hiring is up again.)
  4. The Atlantic has an important essay about how AI has the potential to turbocharge the toxicity of social media. That will be part of the reason why the 2024 election will be heavily impacted by AI (and we–as a society–are probably unprepared for this).
  5. Hollywood writers went on strike and AI is a huge part of why. And it’s not just the writers: even actors and voice actors are encountering contract terms that enable simulation of their voices for future projects.
  6. I strongly agree with Eric Goldman that privacy advocates are trying to push square pegs into round holes with respect to AI governance. For example, the IAPP–a great organization in the privacy space–is now creating a center to focus on AI governance. This debate exploded in part as a result of Omer Tene’s LinkedIn post on the topic (with which I agree).
  7. The other trending debate is the closed vs. open approach to model development and deployment. Last week, I flagged SemiAnalysis’ leak of a Google white paper arguing that there are no moats in AI, and the topic gained steam this week with Meta releasing another openly available model under (non-commercial) CC BY-NC-SA terms.
  8. The future is here and it’s weird. There is growth in absurdist AI art, and the potential for AI-generated religions. AI governance is also an area ripe for positive disruption and cleverness: Anthropic’s effort to think about “AI constitutionalism” is a fascinating approach to responsible AI.
  9. Competition is usually a pretty good thing (provided there aren’t too many negative externalities). Now we are seeing increased competition between Google and Microsoft on how to bring generative AI tools to the world with Google using its I/O conference to trumpet a bunch of new features. Google announced a lot of cool tools but I’m particularly excited about the immersive Google Maps experience.
  10. Regulatory alert! Minnesota is on track to pass a law that criminalizes deployment of sexual or political deepfakes.
  11. Fresh off using AI to justify job opening reductions, and following a decade of arguably coasting, IBM is unveiling Watsonx to help enterprises with AI adoption. In theory, they are hitting the right notes for success (partnering with Hugging Face, deploying open source models in narrow lanes for automation, etc.), but we’ll have to see how it plays out in practice.
  12. Of potential interest to everyone who has ever worked at a major law firm: what will AI mean for billable hours?
  13. Sharing means caring, but it’s apparently total war in the AI space so sharing in AI research is over? In any event, some signs are pointing strongly to a future where open source models advance more expeditiously than controlled models. And, on the topic of ‘sharing,’ Pearson is crying foul over their data being ingested for AI model training (and is threatening legal action), while battles over IP in training data are heating up in Germany.
  14. Ethan Mollick has a good essay suggesting that we should stop thinking about AI as software and instead view it as ‘pretty good’ people. Which makes some sense.
  15. If you’re thinking about the nexus of office work and AI, read this: Microsoft’s Worklab has some very good data about how AI impacts the workspace. In short, many workers think AI will help save them from drudgery and burnout.
  16. No real business plan but you have an “AI company”? Someone might throw money your way. The shotgun VC strategy is operating at full capacity in the AI space. Billions will be made, billions will be lost. Insider published an op-ed claiming that AI is a “Hail Mary” for tech companies and tech investors (I disagree).
  17. OpenAI is investing in research to reverse engineer some of its models to advance explainability efforts.
  18. As Axios points out, yes, China is outpacing the U.S. on regulating AI. They even arrested someone recently for spreading AI-generated fake news. But this doesn’t mean that the Chinese regulations are good for China, AI development, or the Chinese people. And it doesn’t suggest that China will succeed in any sort of “AI race.”
  19. Speaking of China, companies there are being forced to be creative with their use of available technology as a result of sanctions in the chips space. Because if you don’t have chips, you don’t have great AI.
  20. People vs. Algorithms makes a good point: many companies–and industries–are at risk of being “Chegged” (i.e., disrupted by AI). One recent Chegging victim? Stack Overflow.
  21. After the White House events on AI, Congress is getting involved and bringing AI leaders in to testify.
  22. A slow stream of articles is starting to point out what might be somewhat obvious: standards bodies will have a role to play in the EU’s plan to regulate AI.
  23. As we head into 2024, many politicians indicate that they have learned nothing from the past few elections that might influence how they think about the impact AI will have.
  24. The Verge suggests that ChatGPT’s recent data protection troubles are the canary in the coal mine for regulatory issues facing generative AI tools.
  25. Runway is bringing AI moviemaking (and deepfakes) to the masses.
  26. Unlike crypto, scams are a bug, not a feature, of AI. The latest scam: fake Frank Ocean songs. I can understand why people would fall for it!
  27. Ah, there it is: Apple’s quarterly report was an opportunity for the company to (finally) discuss how it is thinking about AI. But, par for the course, it didn’t provide much in the way of details. Amazon is also opening up about its plans in this space.
  28. What do you do if you (1) are an industry leader in AI with a massive compute need to match and (2) are also aiming to become carbon negative within 20 years? If you’re Microsoft, that might mean getting in early on nuclear fusion. Reality is catching up with sci-fi, folks.
  29. Must feel pretty awesome: Ashton Kutcher raised $243 million in 5 weeks to invest in AI.
  30. I’m probably part of this: Alex Kantrowitz tackles the AI PR Industrial Complex.
  31. The British Medical Journal published a lengthy letter regarding the potential health implications arising from AI (when doctors start to speak of an ‘existential threat’ it might be worth listening!).
  32. In case you were wondering, here is how AI is currently transforming the media business.
  33. Hugging Face and ServiceNow are going after Codex/GitHub Copilot.
  34. According to SemiAnalysis, Meta is making some unusual architecture choices regarding AI.
  35. Spotify has culled thousands of AI-generated songs recently.
  36. Using current AI tools to manage your communications with friends, family, and coworkers is not a recipe for success. And those tools will give you pretty antiquated (or classic, depending on your perspective) fashion advice.
  37. If you want to get a sense of how people in tech are using ChatGPT in their lives, HackerNews is a great place to start.
  38. U.S. government cyber-defense leaders are ringing the alarm about the potential for LLMs to be used for cyber attacks.
  39. Speaking of war and defense, the UN is convening a conference on the use of AI in war.
  40. Elon Musk has one operating principle relating to the use of AI: he does what he wants. In this case, he wants to have AI play more of a role in driving Teslas.
  41. The question of why a fake Edward Hopper painting surfaced as the top “Edward Hopper” search result in Google illustrates that, 25 years on, some people still don’t understand search algorithms.
  42. Gulp. EASA is looking into how AI will play a role in air travel…
  43. Perhaps discomforting, but potentially the source of some beautiful innovations: AI-augmented architecture.
  44. How will AI impact business insurance? It’s not entirely clear. And the impact of AI on the world of finance could be huge. In any event, AI will probably make for some interesting Risk Factor disclosures in securities filings!
  45. Unsurprisingly, kids are far more likely than adults to use AI tools, even if much of the effort in AI tool development has been intended to make boring work easier. Automation should replace drudgery, in other words. Of course, some people have low opinions of the chatbot trend.
  46. AI doesn’t need to be everywhere. Like, it doesn’t need to be in a drive-thru for Wendy’s.
  47. If you like charts on AI, here are some good ones.
  48. If you like ChatGPT but you don’t like the new Twitter blue-check system, check this out.

Heidi Saas

Data Privacy and Technology Attorney | Licensed in CT, MD, & NY | AI Consultant | Speaker | Change Agent | Disruptor

1y

#snoop was the best!

Jonathan Adams

Independent Wealth Manager

1y

Again, another week filled with AI news. Microsoft’s push for fusion (#28) makes me wonder just how close it may be?

Erica Bacon

BS MPA CCT CRAT ACLS PALS

1y

Very interesting, thanks!
