Thursday Thoughts on AI + Law (3/9/23)
DALL·E 2 prompt: a black and white pencil sketch of two robots working on computers in an office

It’s hard to summarize all of the AI-related news from this past week. On the one hand, the story for AI in the world of business and technology is: full speed ahead for development. On the other hand, people continue to identify and speak to the risks that unfettered AI development may present, some of which are fairly remote, while others are much more real and cognizable. And it’s not clear when, or if, these two threads will align. If nothing else, the impetus for thoughtful conversations and risk-oriented regulation is increasing daily.

  1. Yesterday was International Women’s Day and everyone should be aware of how women have played (and continue to play) a very significant role in starting and advancing the fields of AI and computing more generally.
  2. Vox gets it: AI isn’t crypto, and this time the hype is (probably) justified.
  3. Bloomberg dives into the question Eliezer Yudkowsky raised of “will AI kill us all?”
  4. Noah Smith thinks it won’t (at least, LLMs won’t kill us all, and he’s probably right about that, though LLMs might replace jobs or create deepfakes that lead to political instability, or… you get the picture: people could misuse LLMs for very bad things). But the reason LLMs won’t directly kill people is that LLMs aren’t HAL.
  5. A new (and probably very important) paper out of MIT suggests that the use of ChatGPT is already transforming white collar work. The New York Times offers up some examples.
  6. Marc Andreessen, conversely, put forward a half-baked argument about why AI won’t transform the world of work and cause significant unemployment.
  7. If AI is going to transform white collar work, here’s where it’s most likely to have an impact (including the legal field).
  8. AI has already transformed coding but it isn’t always a smooth transition.
  9. Axel Springer thinks AI will completely transform the news industry.
  10. AI continues to show tremendous promise in improving medical services: in this case, it predicted cancer years before it emerged.
  11. I’ll keep beating on this drum: AI-powered deepfakes are highly likely to spark a political scandal, a geopolitical crisis, or worse.
  12. Deepfakes are already being used to scam people left and right: the latest scam involves ‘loved ones in distress.’
  13. The U.S. Chamber of Commerce released a report regarding the promise of AI and calling for a risk-based regulatory framework.
  14. So, where are the regulators (in the U.S., that is)???
  15. The New York Times headline generalizes but is on point: “As AI booms, lawmakers struggle to understand the technology.” Which means they’re less likely to push for meaningful regulation.
  16. China, on the other hand, is putting forward very strict regulations regarding deepfakes.
  17. YouTube is banned in China but, if it weren’t, I wonder if the regulations would apply to the new creator tools the platform is planning.
  18. What if, instead of perpetuating biases and inequities, AI could be used to foster equality and equity?
  19. On a related note, HBR has a good article that highlights how removing demographic data from datasets can result in increased algorithmic discrimination.
  20. Noam Chomsky gives a linguist/philosopher perspective on what he labels ‘the false promise of ChatGPT.’
  21. Demis Hassabis, CEO of DeepMind, gave an interview with Axios; some of his ideas, like citations for content, seem likely to become standard fare for LLM applications in the future.
  22. OpenAI team members gave MIT Technology Review a good rundown on how ChatGPT came into being.
  23. And the WSJ has a good story about how Google stalled out on bringing a chatbot to market.
  24. If OpenAI is creating a GPT-powered ecosystem, who wins and who loses? And, if you want to invest in the winners, how would you do that?
  25. As is the case with many things, Elon Musk has some…interesting views on AI.
  26. Meanwhile, Reid Hoffman is stepping away from OpenAI to invest more heavily in the AI space.
  27. The Atlantic queries whether increasing the parameters of LLMs will really be fruitful or if it’ll lead to bloat. Nature magazine has a related piece on the need for smaller models.
  28. Remember when everyone freaked out about ChatGPT in schools? Well, life moves on, and teachers and students are both finding ways to make it work for them.
  29. AI has entered the CRM market with Dynamics. And with SFDC (and Slack!).
  30. Last week I mentioned that Romania had hired an “AI” policy advisor. This week, that advisor is in trouble for copyright violation.
  31. Unofficial internet regulator Apple delayed the release of an email app that incorporated generative AI for message generation on the fears that it could suggest content ill-suited to minors.
  32. Missed this in the OpenAI API announcement last week: OpenAI will no longer default to using customers’ data submitted through the APIs for product improvement.
  33. And Microsoft has now released ChatGPT for the Azure OpenAI API service.
  34. ForHumanity has asked OpenAI to put ChatGPT into a regulatory sandbox to stress test the model for broader purposes.
  35. Gibson Dunn published a good top ten list of AI-related considerations for employers.
  36. In case you missed it last week, funding competition for AI startups is heating up.
  37. The Atlantic continues to beat the “generative AI will drive misinformation campaigns” drum. But others are not convinced that a misinformation apocalypse will befall us.
  38. The publishing industry is thinking of data traceability as a means for contesting AI models’ appetites for training data.
  39. On a related note, the news media is realizing that they may have inadvertently played a large role in training LLMs.
  40. Conversely, some researchers argue that the current ‘opt-out of training data’ approach is insufficient for visual artists.
  41. The headline makes it sound bad, but basically companies are pressing the New York City employment regulators to ensure that their regulations are actionable and effective.
  42. Meta’s LLaMA model was leaked online.
  43. Meta also released Casual Conversations v2 to help AI teams combat biases in training.
  44. Meta may be onto something…AI could promote privacy: if you can infer enough about who might visit a page, you may not need to track them.
  45. Taylor Wessing provided guidance on how to approach IP in AI model training under Czech law.
  46. Eversheds provides an overview of the commonalities between the EU AI Act and the regulatory initiatives relating to AI popping up across the U.S.
  47. U.S. lawmakers are re-introducing legislation that would limit the use of facial recognition and biometric technology tools.
  48. The European Parliament made incremental regulatory changes relating to AI (extending the ban on social scoring and reducing the scope of the authority of the AI Office).
  49. Speaking of the AI Act, the European Parliament has been grappling with how to define AI; they have now aligned on the OECD definition.
  50. Australia’s Human Rights Commission proposed a National Human Rights Act for Australia, which would include, among many other things, guardrails relating to AI in the workforce context.
  51. The FT dives into how Chinese firms interested in AI are using cloud services to evade U.S. sanctions on chip exports.
  52. Those Chinese firms will also have to start planning to evade Dutch controls as well, as the Netherlands is now restricting chip exports to China.
  53. Bing’s GPT integrations have new features.
  54. Thought-provoking: is AI-generated art fostering the emergence of a new artistic style?
  55. Stepping away from the generative AI space to focus on social feed relevancy algorithms, The Atlantic has a primer for how to ‘take back control’ of what you read on the Internet.
  56. The New Republic dives into generative AI’s ‘worst enemy’: the U.S. Copyright Office.
  57. Algorithmically selecting at-risk teens to funnel into a chatbot research program seems like a horrible idea but it happened.
  58. Artificial intelligence is being used as a tool for anti-money laundering compliance.
  59. This article is focused on ChatGPT as a tool for improving search ranking, but highlights the potential that ChatGPT could be used to train countless spin-off models.
  60. The Brookings Institution published a good thought piece on the policy options that exist for handling AI inventorship. (It’s already a very real question before the UK Supreme Court.)
  61. A bit late, but a good retrospective: here are the most-cited AI papers in 2022.

Thanks for Sharing! Jon Adams
