Thursday Thoughts on AI + Law (6/1/23)
From the San Pedro Summit on 5/31/2023

Memorial Day nearly made me forget that today is Thursday!

This was a week for worrying about ‘existential’ threats from AI, diving deeper into how AI will impact employment, watching continued inquiries into AI’s privacy impact, trying to understand how Nvidia rose (and who’ll rise next), and tracking a whole slew of other developments that might shape our future.

As always, thanks for reading, and feel free to share with anyone else who might find this interesting.

  1. The recent statement from the Center for AI Safety is short and to the point: 22 words summarizing why AI regulation is important and necessary. It differs from the ‘pause’ call in that it doesn’t seek to halt AI development more generally, but instead calls on political and governmental organizations to step up and join the conversation about how we use AI in our world. As for the statement itself (“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”), I (and others) worry that people are ascribing too much agency to AI systems here (i.e., like pandemics, out of our control), when the graver risk is probably more like nuclear war (i.e., humans weaponizing progress). The Hacker News debate on the topic is also worth reading. Related: a host of top-flight AI researchers published an article detailing why focusing on model-level risks is critically important (and, at a high level, how one might go about doing that). Not one to miss the risk-warning party, the Chinese government also issued a statement on the significant risks posed by AI (I suspect they’re thinking about it slightly differently).
  2. Gizmodo highlights how the employment process (from hiring to working to leaving) is being AI-augmented in various ways. And the Mercury News highlights that chatbots are increasingly being used for job interviews. Much of this should be subject to some degree of regulation. Related: here’s how American workers feel about AI. TL;DR: it’s a great tool, but they’re worried about replacement. And, unfortunately, research is starting to show that gender biases in white-collar occupations may place women at greater risk of AI-driven displacement.
  3. What to do if you’re concerned about AI displacement? Lean into what makes you human, Professor Po-Shen Loh argues.
  4. It’s pretty wild that ChatGPT only came out six months ago. Despite all of the excitement, only ~14% of Americans have tried it. We’re still very early in this process of exploring and understanding AI.
  5. Canada’s privacy regulator, as well as several provincial-level regulators, announced an inquiry into OpenAI.
  6. The Congressional Research Service published a short (but helpful) report on the nexus between generative AI and data privacy law. Not to be outdone, EPIC published a lengthy (but helpful) white paper on legal issues relating to generative AI.
  7. I will keep beating this drum: deepfakes could upend our political processes, and we aren’t prepared for this.
  8. Once upon a time, humanity could solve big problems. It’s not clear that we will rise to the challenge with AI (unless something forces our collective hand). In the meantime, regulators are reportedly pushing for a voluntary pact on AI governance, possibly involving private-sector parties in the process, and some in civil society are arguing (again) that enforcement of existing laws may suffice to curtail many of the potential AI risks we face.
  9. I did not expect this: the leading LLM on HuggingFace is Falcon, which was developed and open-sourced by…the UAE. AI discourse often focuses on the U.S., China, and the EU, but other countries will play a major role, and we are already seeing major tech companies launch country-specific AI tools to capitalize on this (e.g., Microsoft in India). (For a sense of what ‘open-sourced’ means in practice, see the loading sketch after this list.)
  10. The Australian government published a very thoughtful and thorough white paper on generative AI.
  11. I wonder whether this will launch in the EU: Amex is going to use AI to make creditworthiness determinations. Maybe less risky (or maybe more so?): JPMorgan is getting into the chatbot space.
  12. Apparently Japan is leaning towards a training data free-for-all?
  13. Evidently there are fissures within the Biden administration with regard to how the U.S. should approach the EU’s proposed AI Act.
  14. WPP is partnering with Nvidia to advance generative AI’s role in digital advertising. News media isn’t far behind…
  15. If Nvidia’s stock rises because it makes AI chips, then maybe Apple’s stock will rise because it collects a 30% tax/toll on every AI-powered app available in its ecosystem? (A back-of-the-envelope sketch of that toll follows the list.)
  16. ESG investing has been battered by perceived wishy-washiness around accountability. AI can change that.
  17. Humanloop has a good rundown of the current thinking about OpenAI’s plans.
  18. Pablo Chavez, writing for the Center for European Policy Analysis’ Bandwidth, put forward a good argument for developing a regulatory approach that can accommodate both open and closed AI systems.
  19. It’s subtextual in many of the conversations about chatbots and AI copilots, but AI may radically change the user experiences and interfaces of computing.
  20. The EU AI Act is likely going to have different impacts in different EU economies and some countries will fare far better than others as a result.
  21. Daniel Riedel published an interesting essay on how AI may shift economic activity to a more distributed Internet.
  22. Pretend you’re a VC and you’ve invested lots of money in start-ups, and then ChatGPT comes along. You’d probably be scouring your investments to see which are likely at risk, right?
  23. It’s impressive/scary when an LLM can create a better algorithm than people who have worked in the field for decades…
  24. AI is only as good as its training, and if you train AI weapons improperly, you might create deadly consequences. In other words, yet another reason AI-augmented lethal weapons should be banned outright. (h/t Dad!)
  25. Politico has a good point: the political squabbling in Congress over social media and the internet more generally could slow down efforts to regulate AI.
  26. The Verge also has a good point: if tech companies are blocking employees from using generative AI tools at work, consider why that is occurring and what you can learn from that fact. (Sounds like the European Commission got the message. As did the Office of the Privacy Commissioner of New Zealand.)
  27. As Bloomberg points out, AI-powered companies are carrying the rest of the S&P 500 on their back.
  28. Axios has a good summary of how much VC money is flowing into AI right now.
  29. Generative AI’s mistakes are especially costly when you’re using it to conduct legal research and draft court filings, as one lawyer discovered. Perhaps in light of this, one judge in the Northern District of Texas implemented new filing rules to account for AI tools. Despite all this, some law firms are continuing to lean into generative AI.
  30. Using radiology as an example of a field that could be disrupted by AI, various MDs and computer scientists debated the technology’s near-term disruptive impact.
  31. The Neuron has a great, curated list of useful AI tools for productivity.
  32. Are chatbots a ‘product’ or a ‘service’? The question probably seems overly philosophical, but it has legal implications. (Somewhat related: what is a ‘gpt’ app, and how will OpenAI approach the issue?)
  33. Very cool: CrowdStrike unveils AI-powered tools to reduce cybersecurity risks.
  34. Axios highlights how news and media companies are rushing to partner with AI service providers.
  35. Ben Thompson dove deep into the AI platform shift and what it means for the Windows OS ecosystem.
  36. Bloomberg Law has a deep dive into the post-Warhol fair use considerations for generative AI.
  37. The Information reported that Nvidia is supporting non-big tech cloud providers because it is aware that big-tech companies are developing competing chips.
  38. Miss buying Nvidia pre-earnings (i.e., before it became a $1 trillion company)? Motley Fool is offering alternatives. As we’re learning, there’s money in the gold mines, but selling pickaxes and jeans can also create huge windfalls. Take, for example, Lightmatter, which recently raised a significant Series C and focuses on energy efficiency in computing (huge in AI).
  39. When professors empower students to use generative AI for their work, interesting and (slightly) problematic things occur. Related: the Australian government is digging into the use of AI in education.
  40. Generative AI tools can be both Cliffs Notes and an editor for a book.
  41. Last week, the Washington Post profiled Microsoft’s Brad Smith. This week, the Guardian profiled Rumman Chowdhury, who offers a thoughtful take on the balance between technological progress and accountability.
  42. Chowdhury has been on a tear, as she was also one of the AI engineers discussing AI-related concerns with the NYT this week.
  43. Likewise, Jay Obernolte, a member of Congress from California (Washington Post profile here), is in the news arguing for establishing data guardrails relating to AI.
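A quick aside on item 9, for readers curious what ‘open-sourced’ means in practice: below is a minimal sketch of pulling Falcon down from HuggingFace. It assumes the transformers, torch, and accelerate libraries and a GPU with roughly 16 GB of memory; I’m using the smaller falcon-7b-instruct checkpoint for illustration rather than the leaderboard-topping 40B model.

```python
# Minimal sketch: loading an open-source LLM (Falcon) from the Hugging Face Hub.
# Assumes `pip install transformers torch accelerate`; swap in "tiiuae/falcon-40b"
# if you have the hardware for the model actually topping the leaderboard.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision, to fit on a single GPU
    trust_remote_code=True,      # Falcon shipped with custom modeling code
    device_map="auto",           # let accelerate place layers across devices
)

prompt = "In one sentence, why do open-source LLMs matter for AI policy?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point being: anyone with a capable GPU can run this, which is exactly why open models complicate the regulatory conversations in items 1 and 18.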

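And to make item 15’s toll concrete, here’s a back-of-the-envelope sketch. Every number below is invented for illustration (Apple’s standard commission is 30%, though it drops to 15% in some cases).

```python
# Back-of-the-envelope sketch of the App Store "toll" on a hypothetical AI app.
# All figures are invented for illustration, not any company's actual numbers.
monthly_price = 20.00      # hypothetical subscription price of an AI app
commission = 0.30          # Apple's standard App Store commission rate
subscribers = 1_000_000    # hypothetical subscriber count

apple_take = monthly_price * commission * subscribers
developer_net = monthly_price * (1 - commission) * subscribers

print(f"Apple's monthly cut: ${apple_take:,.0f}")     # $6,000,000
print(f"Developer's net:     ${developer_net:,.0f}")  # $14,000,000
```

Apple collects that margin without training a model or buying a single GPU.
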
Dirceu Santa Rosa, CCEP-I, CIPM (IAPP)

Compliance, Data Protection and Ethics Counsel Manager (Data + AI Group), Accenture

1 yr

Thanks !!!!

Soribel F.

Senior AI and Tech Policy Advisor @US Senate | Responsible Tech | Data Privacy | AI and Healthcare | AI and Education | AI and Labor

1 yr

#7 ??

Igor Portugal

Technology Innovator | Fractional CxO | AI | Cyber Security | Investor | Author | Empowering Businesses, Enhancing Lives: Uniting technology and human insight for a more prosperous, enjoyable, smarter and safer world.

1 yr

Corporations want to regulate AI to protect their monopoly. Regulation will push AI out of the hands of the open-source community and give monopoly power to large corporations. To that extent, regulation like licensing or patenting AI will have a disastrous effect. Imagine living in a world where the only people controlling AI are Elon Musk, Vladimir Putin and Kim Jong Un. This is why AI regulatory restriction is a bad idea and we must reject it at all costs. The only thing worth legislating is ensuring there is a human always liable for any action of AI. This thesis explores this further: https://liberty-by-ip.blogspot.com/2023/05/why-ai-regulation-is-bad-idea.html I am interested to hear your feedback!
