Thursday Thoughts on AI + Law (10/26/23)
Lake Crescent, Washington, October 2023

  1. Politico hits the nail on the head: in an era of convincing generative AI-created content, how can anyone trust what they see? More importantly, what does that mean for epistemology and for how ordinary people trust media and content (e.g., the news)? It’s a sobering question, and it highlights how AI can be abused to erode societal values. But as Audrey Tang, Taiwan’s digital affairs minister, explains, the situation isn’t inherently dire, and AI can be used to support democratic norms.
  2. Worth a read, if you care about the intersection of data protection and AI: the EDPS opinion letter regarding the AI Act draft.
  3. MEP Dragoș Tudorache gave Euronews an update on the AI Act trilogue negotiations. The thorniest parts of the AI Act are, unsurprisingly, the hardest ones on which to reach alignment.
  4. A group of leading AI researchers published a good set of recommendations for mitigating risks presented by AI systems.
  5. The UN announced the creation of an Advisory Body on AI. Perhaps it’ll ultimately help lead to the equivalent of a CERN for AI?
  6. Making good on their promises, a group of major tech firms announced the creation of the Frontier Model Forum.
  7. Pablo Chavez makes the argument (and I’m quite inclined to agree with it) that the U.S. is actually making great strides in advancing AI regulations. In a separate post, he also highlighted the similarities and (often subtextual) distinctions between the G7 and Chinese AI governance principles.
  8. Much was made of the Foundation Model Transparency Index (FMTI) that a Stanford team released last week. As people have had time to digest it, more (lengthy) opinions on the utility and validity of the findings are circulating.
  9. The Senate held another hearing on AI on Tuesday, with more to come. Axios gave a rundown of the different voices and perspectives being brought to the table.
  10. Messaging matters. And headlines like “AI Could Spur an Economic Boom. Humans Are in the Way.” are not going to help build trust in AI developers.
  11. Anthropic is trying to establish a ‘constitutional’ approach to AI governance but is running into a unique challenge: what Anthropic thinks is important doesn’t line up closely with what most people find important.
  12. There’s one school of thought that holds that basically all generative AI models are (or should be) unlawful, or at least at great legal peril, due to IP, data protection, and other concerns. One group not in that school? Venture capitalists. (For what it’s worth, I’m of the school that Pandora’s box has been opened and is unlikely to be closed again.)
  13. Ethan Mollick makes a great point: if we have multiple powerful models available to nearly everyone on the planet, how do we focus on using those tools in the most useful or “best available” manner?
  14. Leica is releasing a camera with content authentication built into its image processing. More devices will likely follow and, maybe at some future point, non-authentication will be a mark of suspicion (as Adobe’s GC suggests). Google also unveiled some image authentication tools for images found on the internet. (A minimal sketch of the signing idea behind content credentials follows this list.)
  15. Someday, maybe Monday, maybe another day: the White House will eventually issue an EO regarding AI. It promises to be a grab bag of industry rules, national security restrictions, and more, but Congress is already suggesting it doesn’t go far enough. (And, to be fair, perhaps regulation via the legislative process is better than via executive order, but that would require Congress to function, something Sen. Amy Klobuchar believes is possible.)
  16. Interesting: if there’s an AI arms race between the U.S. and China, Scale AI is aiming to become the leader of the next version of the military-industrial complex. And I think we now know which side Hugging Face might support.
  17. I missed this last week: the U.S. and Singapore agreed on developing interoperable AI governance frameworks. Brookings is encouraging similar engagement with China.
  18. UK PM Sunak is pushing countries to force the labeling of AI capable of generating ‘catastrophic harms.’ Of course, he might want to look at the UK government’s own use of AI, which is potentially increasing efficiency but reportedly comes with some pretty significant negative consequences.
  19. On the topic of AI-created harms: Yoshua Bengio suggests that there needs to be an organization dedicated to defending humanity from AI. He also co-authored a recent paper outlining how to pair AI progress with support for democratic processes.
  20. But, hey, Yann LeCun keeps saying there’s no real reason to worry (yet).
  21. Proposals for regulations often (in democratic societies) offer an opportunity for stakeholders and the public to weigh in. But what if AI, rather than regulators, reviews the comments? Or, even worse, what if AI floods the commentary zone?
  22. If you’re interested in supporting organizations that are fighting for more transparent and ethical AI, then this article in ZDNet is for you.
  23. I guess if you run a social media platform and you don’t like the laws in a particular jurisdiction relating to content moderation, you might choose to leave the market?
  24. If you ask a sample of people to compare ethical advice from ChatGPT with the guidance given by the NYT’s Ethicist, you might be surprised by the results.
  25. Apple is reportedly working hard to catch up in the generative AI space, but spending $1 billion a year on generative AI is not keeping up with how much competitors are reportedly spending.
  26. Korea’s Personal Information Protection Commission has created an AI-oriented task force.
  27. Members of Congress say that a data privacy law is foundational to AI regulation in the U.S., but that makes me wonder whether we’ll have to wait decades for AI regulation (as we have for general privacy legislation at the federal level). More here.
  28. Massive AI startup valuations aren’t just a Western thing: China’s Baichuan raked in $300 million in its latest round, and Zhipu hit nearly $350 million.
  29. Cities are the labs for experiments with AI and governance.
  30. Kaggle published a cool report on the state of AI in 2023.
  31. AI can be used to develop novel proteins and drugs, but it’s not clear how well those candidates translate to practice.
  32. Let’s face it: AI healthcare tools get things wrong. Sometimes that happens as a result of hallucinations. Other times, it stems from human biases that have translated into racist views in the models.
  33. Sobering and worth a read: Wired took a look at the millions of people training AI models for pennies a day.
  34. Gary Marcus takes aim at the AI hype cycle.
  35. If, as reported, Jon Stewart walked away from Apple TV over differences regarding AI and China, that prompts a whole slew of questions.
  36. Mind-blowing: it’s faster and cheaper to train robots in a virtual environment, so that’s just what Nvidia and Meta are doing. (A toy rollout sketch illustrating the idea follows this list.)
  37. What’s in a name? A look at “AI”, “machine learning”, and how they’re used by different parties for different purposes in the AI value chain.
  38. Microsoft is investing significantly in Australia with a focus on AI, cloud computing, and information security.
  39. Gen Z workers have plenty of thoughts about the role of AI in the workplace.
  40. People talk about AI saving office workers from drudgery; robots might save warehouse workers from actual injury.
  41. MIT’s Lincoln Lab is figuring out ways to reduce the energy costs of AI development/deployment.
  42. Hoo boy, Amazon is bringing conversational AI to kids.
  43. A McGill team offered a critique of Canada’s Artificial Intelligence and Data Act, focused on its potential impact on prosperity.
  44. YELLING IN ALL CAPS is bad form on the internet but might be helpful when prompting LLMs (where ‘tone’ is part of the instruction); see the prompting sketch after this list.
  45. If you train AI models with images of attractive people, is it really surprising that they will provide attractive people as outputs?
  46. Stack Overflow’s blog has an interesting post relating to the management of data protection when building with generative AI.
  47. Good thought piece: what if there were a massive open training data set for robotics purposes?
  48. SemiAnalysis offered up a useful overview of the current state of AI chip manufacturing export controls/restrictions.
  49. Scientific American looks at how nearly all of the Internet became an AI data training ground.
  50. Want to understand Large Multimodal Models? Chip Huyen has you covered.
  51. Glad this got some good press: LinkedIn’s InfoSec team found a way to use generative AI to take care of more mundane tasks.
  52. Bill Gates evidently thinks that generative AI has hit a plateau.
  53. The WSJ looks at how generative AI will quickly reshape most American consumers’ experience on the internet.
  54. If you work at an AI startup (or in tech, more generally), odds are that Okta is being used for identity management.
  55. Nature published a great state-of-play article regarding the “AI revolution” in medicine.
  56. Andrew Ng is offering a free course on generative AI.
  57. Google has reportedly been developing a powerful multi-modal tool code-named “Stubbs.”
  58. As tech company earnings reports come in, it looks like AI bets are starting to pay off, resulting in major market gains. But convincing customers to pay for a company’s AI features isn’t always easy as competition heats up.
  59. Business Insider reports that lots of job candidates are using ChatGPT in their job searches, and often poorly.
  60. Reddit is evidently planning to fight back against generative AI developers.
  61. There’s evidently good money in trying to clean up AI training data.
  62. Politico dug into the AI lobbying industry swarming Washington these days.
  63. Yep: AI might finally kill the CAPTCHA.
  64. Checks out: there’s also good money in the “AI for insurance” market.
  65. Qualcomm is building chips to bring better generative AI capabilities to smartphones. It makes sense for a variety of reasons.
  66. The robotaxi fleet cruising SF will be a little smaller now.
  67. Generative AI tools are driving improvements in scam email quality.
  68. Yet another paper points out that “AI detection tools” don’t work.
  69. Just like every other tech boom, students are dropping out of school to get in on the action.
  70. When we look back at 2023 from the future, will we think of the explosion of AI tools as a ‘first flight’ moment?
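As promised in item 14, here is a minimal sketch of the signing-and-verification pattern that underlies content credentials. It is illustrative only: real C2PA-style credentials embed a signed manifest in the image file and chain up to a certificate authority, and the in-memory Ed25519 key below is a stand-in for the key a camera would keep in secure hardware.

```python
# Minimal sketch of the capture-time signing / later verification pattern
# behind content credentials. Illustrative only: real C2PA credentials
# embed a signed manifest in the file and chain to a certificate authority.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At capture: the camera hashes the image bytes and signs the digest.
# (Hypothetical in-memory key; a real camera uses a hardware-protected key.)
camera_key = Ed25519PrivateKey.generate()
image_bytes = b"...raw sensor data..."  # placeholder for the image payload
digest = hashlib.sha256(image_bytes).digest()
signature = camera_key.sign(digest)  # travels alongside the image

# Later: anyone with the manufacturer's public key can confirm the bytes
# they received are the bytes the camera signed at capture time.
public_key = camera_key.public_key()
try:
    public_key.verify(signature, hashlib.sha256(image_bytes).digest())
    print("Provenance verified: image unchanged since capture.")
except InvalidSignature:
    print("Image was altered after signing (or the signature is bogus).")
```

The interesting policy question in item 14 is the inverse: once verification like this is cheap and ubiquitous, the absence of a valid signature becomes the signal.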
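The toy rollout sketch referenced in item 36, using the open-source Gymnasium library rather than Nvidia's or Meta's actual tooling. The point is economics: simulated episodes cost CPU (or GPU) time, not robot hardware or lab hours.

```python
# Toy rollout loop in a simulated environment using the open-source
# Gymnasium library (pip install gymnasium). A random policy stands in
# for whatever learning algorithm you would actually train.
import gymnasium as gym

env = gym.make("CartPole-v1")  # stand-in for a far richer robot simulator
episodes = 100
total_reward = 0.0

for episode in range(episodes):
    observation, info = env.reset(seed=episode)
    done = False
    while not done:
        action = env.action_space.sample()  # random policy placeholder
        observation, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        done = terminated or truncated

env.close()
print(f"Ran {episodes} simulated episodes; "
      f"average reward {total_reward / episodes:.1f}")
```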
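And the prompting sketch from item 44, using the OpenAI Python client. Whether all-caps emphasis actually improves instruction-following varies by model and version, so treat it as a heuristic to test rather than a rule; the clause text and model name below are placeholders.

```python
# Sketch of emphasizing hard constraints in a prompt via ALL CAPS.
# The emphasis is a heuristic to test, not a guarantee of compliance.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

clause = "The Licensee shall indemnify the Licensor against all claims."
prompt = (
    f"Summarize this contract clause in plain English: {clause}\n"
    "IMPORTANT: DO NOT GIVE LEGAL ADVICE. "
    "RESPOND IN EXACTLY THREE BULLET POINTS."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```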
