Thursday Thoughts on AI + Law (6/8/23)
Marin County Sky, June 6, 2023

This week’s edition of the newsletter really indexes pretty high on the “law” side of AI+Law. There is just so much happening in this space. Hopefully this summary is a good read and useful for you.

  1. Marc Andreessen has often proven correct about technology, from Mosaic to software eating the world. But I think his views regarding AI are unduly utopian (perhaps to combat the unduly dystopian views of others) and belied by trends we are seeing elsewhere in technology; if his prescription is followed, we might end up in a utopia, but we’re more likely to end up in what Gary Marcus calls a “bleak future.” Andreessen’s and Marcus’s essays highlight an ancillary–but important–point that Fortune magazine recently spotlighted: the narrative about AI (and the stories we tell about it) may shape the role of AI in our world more than AI technology does.
  2. As AI increasingly impacts our economy, news stories are starting to come out about AI-driven displacement of entry-level white-collar workers. For example, legal assistants and paralegals, and even lawyers. And maybe Hollywood writers (but not directors)? This trend is only likely to accelerate as technology advances faster than policy. BCG asks: what do workers think about all this?
  3. The EU is strongly pushing for generative AI transparency in advance of any AI Act or AI Pact (whatever that ends up looking like) coming into effect.
  4. Speaking of the AI Act, the Kluwer Competition Law Blog has a very good overview of what requirements will be incumbent upon deployers of ‘high risk’ AI as defined under the AI Act proposal. And DigitalEurope issued a report on what the AI Act might mean for start-ups and SMEs in the EU.
  5. The European Commission is also calling on tech firms to label AI-generated disinformation. How that will work–particularly when users generate content on one service to post on a different platform–is unclear.
  6. The AI Act (and Pact) has been getting lots of attention in the EU and it seems like perhaps everyone forgot about the proposed revisions to product liability rules relating to AI and software?
  7. Now, in the U.S., recent policy proposals relating to AI seem, well, somewhat scattershot. Rep. Torres has a plan that would require labeling of AI-generated content, which, in theory, is great but is going to be incredibly difficult to implement in practice. And Sen. Hawley is proposing a package of…sound-bytes? One group of senators is thinking critically about the importance of AI for defense and economic purposes. In any event, the Senate will be holding a series of hearings on the topic of AI in the coming weeks. (Oh, and of course different states are continuing to propose their own rules for AI.)
  8. It is rarely the case that I find myself agreeing with the Federalist Society but, while I disagree with their overall conclusion, they make some good arguments in favor of exercising thoughtfulness and thoroughly investigating the potential risks and rewards from AI when developing a regulatory program.
  9. Singapore is providing a great model for industry/government collaboration on standards development, with the recently announced AI Verify program working to set standards relating to responsible AI and AI governance.
  10. The UK is also working outside of the U.S./EU/China paradigms, but is focused on working with ‘like-minded’ countries. PM Sunak is convening a global forum for discussing AI safety this fall.
  11. Sam Altman suggests that AI regulations shouldn’t apply to SMBs. While that might reduce market entry costs, it’s a terrible idea that would almost certainly create a net-negative impact for society.
  12. If AI is ubiquitous in Google products, what does that mean for media and publishers?
  13. Wall Street is very, very bullish on AI. But that doesn’t mean that the market won’t pick winners and losers. One group that’s winning? People who work in AI and want to work for Wall Street.
  14. People are dunking on Apple for avoiding AI hype, but it makes sense. I rather like their measured approach and continued use of the term ‘machine learning’ to reflect the reality of what’s happening.
  15. Microsoft unveiled a set of commitments to customers regarding its approach to AI, which should help assure customers that they can use Microsoft AI tools in a responsible manner. Speaking of which, Microsoft will be supplying access to GPT tools to the U.S. government.
  16. If, say, you were working for a VC or PE firm and wanted to get an edge on investment targets, you might also try using AI for insights.
  17. The good ol’ “right of publicity” is coming back into vogue in an era of deepfakes. Same with “defamation” when AI creates false statements about real people.
  18. The CFPB issued a white paper on the impact of chatbots in the consumer finance sector.
  19. Here are some insights from media/publishers on the impact of AI at six months post-ChatGPT.
  20. Yep: AI models are expensive to run, so some of the best tech isn’t available to the general public.
  21. The NYT goes there: how does AI intertwine with CDA Section 230?
  22. Some say that the major players in AI want to have their cake (unfettered access to data from everywhere on the internet) and eat it too (by not enabling the data from their services to be used by others).
  23. People debate the open versus closed approach to models. One factor in that debate that is often unduly ignored is the impact of open/closed models on deployment safety. Which is why Meta moving heavily in favor of open models might give some cause for excitement on one hand but concern on the other.
  24. The team at LinkedIn published a great article on how to think about AI fairness in different contexts.
  25. Google is right on this: the USPTO should publish guidance on how it is thinking about AI/patents.
  26. More artists are leaning in on AI.
  27. HBR published a good (but very high-level) guide to what organizations should think about when approaching responsible use of generative AI.
  28. Perceived changes in GPT-4 effectiveness have some asking if AI models can decay (and whether ‘nerfing’ from safety-oriented restrictions is at the root of the change).
  29. Many AI start-ups are basically “GPT+ ____” and, as a result, many are staying in stealth mode to avoid being quickly commodified. Of course, some theoretically commodifiable businesses are still raking in huge investments.
  30. The privacy investigations into chatbots continue in Japan and Germany, with Japan issuing specific guidance to OpenAI regarding appropriate use of ChatGPT in the country.
  31. Israel, conversely, is leaning in heavily towards AI as a foundation of its future economic growth.
  32. Nassim Nicholas Taleb is suggesting (perhaps correctly) that, owing to data and parameter limitations, we’re at (or near) peak LLM performance. Related: researchers at MIT have built self-learning models that outperform larger LLMs.
  33. Stop trying to make “fetch” happen: crypto advocates are now touting crypto as a tool to help combat the ‘excesses’ of AI.
  34. There could be a whole series of articles on this: what is the impact of AI on Buddhism (and religions and philosophies more generally)?
  35. The EU Agency for Cybersecurity is working on addressing key issues relating to AI/security. Related: the United Nations Interregional Crime and Justice Research Institute published a toolkit for thinking about AI use in law enforcement.
  36. The FBI issued a warning about the increased use of AI deepfakes for sextortion and other criminal behavior.
  37. The race to apply generative AI to meetings is accelerating.
  38. (Not really) Tired: AI bias. Wired: concerns about neurotechnology bias?
  39. Evidently, you can start a (fake) design firm in a weekend using AI.
  40. “We can’t regulate AI because it’ll enable China to get ahead of us” is a common refrain. But is it based in reality?
  41. While on the topic of China, Dr. Kris Shrishak of the ICCL argues in the EuroNews that the EU could stand to learn a couple of things from China’s approach to regulating generative AI tools.
  42. China is seeing some companies race to lead in the AI space, but it is also facing a surge of AI-fueled scams. Everywhere else is, too (or will be soon), I assume. Recent research has shown, after all, that roughly one-third of people can’t distinguish between a person and an AI bot.
  43. Character.ai seems to have gained some traction (as sad as the idea of creating ‘characters’ with which to chat might be).
  44. Generative AI is going to transform QR codes in a major way.
  45. A recent Forbes article highlights several questionable claims and representations made by Emad Mostaque, the CEO and co-founder of Stability AI. He’s reportedly sad about the report.
  46. Jenna Burrell has a good essay on why a continual focus on the ‘proximate future’ with regard to AI is a cop-out for addressing existing risks and problems.
  47. Many educators are pushing back against AI. The professor of a popular CS course at Harvard, conversely, is leaning in.
  48. Could be helpful: a new startup is trying to apply AI to help people know whether something belongs in the trash or recycle bin.
  49. AI is driving a resurgence in people moving back to SF.
  50. Good! AI is helping break down barriers to information sharing relating to climate change.
  51. Clever but ill-advised: thieves are scraping code to steal OpenAI credits, apparently.
  52. How AI might be used by online media to reduce the clickbait effect.
  53. GoBubble serves a niche purpose but it’s a shame that it’s necessary.


#AI #artificialintelligence #aiml #machinelearning #data #tech #technology #openai #microsoft #apple #google #meta #stabilityai #gpt #chatgpt #llm #sf #characterai #eu #euaiact #us #congress #law #legal #regulation #legislation #china #uk #innovation #singapore #cda230 #copyright #privacy #dataprotection #cybersecurity

