Thursday Thoughts on AI + Law (12/21/23)
San Francisco from the Marin Headlands, December 2023

This is probably the last edition of the newsletter for 2023, but it's chock-full of interesting tidbits from the intersection of law and AI advances. I hope you all have a happy holiday season and a great start to 2024!

  1. Deepfakes at low cost and high scale are disrupting elections around the world. In the U.S., state regulators (including in California, South Carolina, and Florida) are working on the topic, and AI developers (like OpenAI) are working to combat the problem. More generally, certain social media platforms are already facing problems with fake images being deployed at a staggering scale.
  2. Huge: LLMs have now demonstrated that they can solve hard, novel math problems.
  3. Israel is reportedly using AI for weapons targeting. This is something that needs more discussion.
  4. Reminder: the EU has reached political (but not textual) agreement on the AI Act. Related: will the AI Act function as a quasi-global standard (similar to the GDPR)? Also related: Europe’s technology sector is not impressed with the high-level contours of the AI Act.
  5. Largely overshadowed by the AI Act, the Council of Europe’s AI treaty is advancing.
  6. Former Pakistani PM Imran Khan is using AI to campaign from jail.
  7. Janet Yellen confirmed that the U.S. government is looking at AI’s impact on financial stability. As are several senators.
  8. Other senators are working to ensure that federal agencies appropriately approach civil rights issues arising from AI (joining forces with civil rights advocates and experts).
  9. Meanwhile, Singapore’s Project MindForge is expected to analyze how best to incorporate generative AI into the banking sector while mitigating potential risks. Singapore also announced the latest iteration of its national AI strategy.
  10. If NIST is going to engage in standards development for AI, it should probably be funded to do so.
  11. OpenAI is researching whether weak supervisor models can effectively constrain more powerful models.
  12. Some powerful people think that optimization algorithms engage in cartel-like behavior.
  13. Oppenheimer Research has a really great report on trends in AI.
  14. Bias in AI can lead to bad clinical outcomes. And, more generally, general-purpose foundation models may need to be adapted to support healthcare needs.
  15. As expected, lots of vendor-supplied AI governance tools come with their own problems.
  16. Why model weights are so important (and of interest to regulators).
  17. It should be unsurprising that TikTok suppresses the spread of certain content and bolsters other content, and that the common factor is the interest of China’s government.
  18. AI is empowering more workers to apply to more jobs.
  19. With AI, you won’t need geotagging (AI can figure out where you are based on the photo).
  20. If lots of data is needed to train AI and novel data is a competitive advantage, then perhaps data is the new oil (again)?
  21. Not good: AI might introduce new challenges for African-Americans, or reinforce existing ones.
  22. EU… AI Act! EU… chip developers winning?
  23. Judges in England and Wales have been advised that AI should be approached cautiously.
  24. This is a very interesting comparison of various LLMs.
  25. Wild to see top tech leaders engaging in “oh, what could’ve been!” on X.
  26. Say AI could predict how long you’re likely to live. Would you want to know? And would you want insurers and other companies to know?
  27. Even before the final text appears, people are being advised to start preparing for AI Act compliance.
  28. NPR had a conversation with Yale Law’s Andrew Miller on the impact of AI in the U.S. legal system.
  29. Hugging Face declared 2023 the year of open LLMs.
  30. Airbnb is using AI to help predict who is looking to party hearty on New Year’s Eve.
  31. Wild to see a legaltech company valued at $700M, but apparently Harvey is.
  32. A good follow: a recent Substack post highlighted some of the best AI research papers from last month.
  33. Towards.ai published a great illustrated guide to RAG techniques (a toy sketch of the core retrieve-then-prompt loop appears after this list).
  34. Krutrim has launched a multilingual LLM for Indian languages.
  35. Dan Shapero, LinkedIn’s COO, explained to Business Insider how AI will make everyone’s lives easier.
  36. Wow: the Verge is reporting that ByteDance used OpenAI tech in efforts to build its own LLM, leading OpenAI to suspend ByteDance’s account.
  37. Sayash Kapoor and Arvind Narayanan wrote about the risks/benefits of open models.
  38. Dropbox’s reported sharing of data for AI purposes is getting it into potential hot water with customers.
  39. Rite Aid, having deployed facial recognition AI in decidedly wrong ways, is banned from using the technology for five years.
  40. The Hacking Policy Council is calling for clarity around legal protections for red teaming efforts.
  41. I agree with Axios: let’s stop saying “AI did this” and start saying “people using AI did this.” In other words, AI abuse is a symptom, not a cause.
  42. Bridgewater published a great analysis of what a world of zero/low marginal-cost cognitive work would mean for productivity.
  43. McKinsey issued a year-end report on AI trends.
  44. If you want to start analyzing trend data for investment opportunities, you can use an LLM for that (see the sketch after this list).
  45. Malaysia might be one of the next hubs for AI chipset manufacturers.
  46. Chile issued guidelines for public sector use of AI.
  47. Yikes: a massive, public, and frequently used image training dataset is alleged to contain lots of images of child abuse.
  48. AI companies keep negotiating deals with media companies. Here’s why.
  49. If you’re looking for a list of who the NYT excluded in its listing of important people in AI, you can start here.
  50. Speaking of, it’s important that broader media (i.e., outside of the tech ecosystem) is engaging with a broad range of AI experts.
  51. When everyone has an AI tool available to them, things might get very interesting.
  52. Bill Gates published his 2024 letter, which (of course) has a heavy focus on AI.
  53. The World Privacy Forum wrote a report on assessing responsible AI governance.
  54. Axios begs for a wider middle between the e/accels and decels in AI.
  55. Amazon is using generative AI to summarize reviews and has inadvertently made them seem more negative.
  56. MIT Technology Review published a list of the six questions that may shape the future of generative AI adoption, as well as a list of four game changers in 2023.
  57. The UK’s top court is in alignment with U.S. courts: AI can’t be an inventor for patent purposes. Korea appears primed to follow suit.
  58. Brazil’s ANPD discussed the balance between justice and innovation and between AI development and personal data protection. Related: the IAPP published a blog post about the balancing act of regulating AI in Latin America.
  59. It’s the end of the year, but Congress is still thinking hard about how AI regulations might work.
  60. Anthropic is looking to draw a huge valuation. Speaking of, here’s an analysis of Mistral’s rocket-ship growth.
  61. Google and the Leadership Conference on Civil and Human Rights are partnering on a new AI-focused policy center.
  62. Deloitte is reportedly using AI to help the firm avoid layoffs through employee reassignments.
  63. OpenAI is devoting funding to exploration of superalignment.
  64. There is geopolitical competition between the U.S. and China on chips, but maybe not on AI research?
  65. Perhaps the late twentieth century and the current century have more in common than thought.
  66. Microsoft’s Phi model is a pretty awesome small model.
  67. Semianalysis looks into the ‘race to the bottom’ for inference costs.
  68. OpenAI is being more public about its governance approach. And it released a guide to prompt engineering.
  69. Bruce Schneier makes some good arguments about the importance of trust in AI.
  70. We’re apparently not very good at recognizing satirical content.
  71. Answer.ai is trying to build consumer-oriented AI tools (I mean, ChatGPT is pretty consumer-friendly!). In particular, this may be driven by a seeming convergence around certain design typologies for AI assistants.
  72. As Brookings points out, the relative lack of federal regulatory/legislative action on AI has led states and municipalities to try to fill the gap.
  73. Of course GPT can be used for document review (see the sketch after this list).
  74. Unsurprisingly, other chipset manufacturers have thoughts on Nvidia and CUDA.
  75. As we end 2023, the NYT published a guest essay looking at how AI tools became thoroughly engrained in our consciousness over the past year.
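
A few of the items above lean technical enough to merit quick illustrations. On item 33: the core of RAG is simply "retrieve the most relevant documents, then prompt the model with them." Here is a minimal, self-contained Python sketch of that loop using a toy bag-of-words retriever; real systems swap in dense embeddings and a vector store, and the final prompt would go to your LLM of choice.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector.
    # Production RAG uses dense vectors from an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank all documents by similarity to the question; keep the top k.
    q = embed(question)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    # Ground the eventual answer in the retrieved context.
    context = "\n\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "The EU AI Act reached political agreement in December 2023.",
    "RAG pairs a retriever with a generator to ground LLM answers.",
    "Airbnb uses AI to screen for party risk on New Year's Eve.",
]
print(build_prompt("What did the EU agree to in December?", docs))
```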
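
On item 44: one way to hand trend data to an LLM for a first-pass read. This is a sketch, assuming the openai Python package (v1.x) with an API key in the environment; the model name and the toy mention-count data are illustrative, not a recommendation.

```python
# Assumes: pip install openai (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def summarize_trends(rows: list[dict]) -> str:
    # Flatten the series into the prompt; for large datasets,
    # aggregate first rather than pasting raw rows.
    table = "\n".join(f"{r['month']}: {r['mentions']} mentions" for r in rows)
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You analyze market trend data."},
            {"role": "user", "content": f"Identify notable trends:\n{table}"},
        ],
    )
    return response.choices[0].message.content

print(summarize_trends([
    {"month": "2023-10", "mentions": 120},
    {"month": "2023-11", "mentions": 310},
    {"month": "2023-12", "mentions": 540},
]))
```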
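
And on item 73: a first-pass document review sketch under the same openai v1.x assumption. The responsiveness criteria here are made up for illustration, and in any real review a human verifies every call.

```python
# Assumes the openai Python package (v1.x); criteria are hypothetical.
from openai import OpenAI

client = OpenAI()

REVIEW_PROMPT = (
    "You assist with first-pass document review. Classify the document as "
    "RESPONSIVE or NOT RESPONSIVE to the request 'communications about the "
    "2023 vendor contract', with one sentence of reasoning."
)

def review(document_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": document_text[:8000]},  # crude length cap
        ],
    )
    return response.choices[0].message.content

print(review("Email from A to B re: executing the 2023 vendor contract..."))
```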

Amir Towns

Investor looking to purchase businesses doing at least $200k in EBITDA

11 months ago

Can't wait to read it!
Dirceu Santa Rosa, CCEP-I, CIPM (IAPP)

Compliance, Data Protection and Ethics Counsel Manager (Data + AI Group), Accenture

11 months ago

Thanks for the great content, and have an awesome holiday season, Jon Adams!!!
Laura Lasher

Team Builder, Advisor, Coach, Consultant, Performance | Mortgage and Real Estate Industry

11 months ago

Wendy Lee, thank you for sharing; lots of information to sort through.

Will Jennings

Gretel | Synthetic Data | Sustainable AI

11 months ago

One of my favorite newsletters. Thanks as always.

Biswajit Tripathy

Engineering @ Google

11 months ago

That is a pretty detailed list. AI, in the short term, would create more problems than it solves, though, no doubt, it would solve some of the really hard problems with far-reaching consequences.
