This is probably the last edition of the newsletter for 2023, but it's chock-full of interesting tidbits from the intersection of law and AI advances. I hope you all have a happy holiday season and a great start to 2024!
- Deepfakes at low cost and high scale are disrupting elections around the world. In the U.S., state regulators, including in California, South Carolina, and Florida, are working on the topic, and AI developers (like OpenAI) are working to combat the problem. More generally, certain social media platforms are already facing problems with fake images being deployed at a staggering scale.
- Huge: LLMs have now demonstrated that they can solve hard, novel math problems.
- Israel is reportedly using AI for weapons targeting. This is something that needs more discussion.
- Reminder: the EU has reached political (but not textual) agreement on the AI Act. Related: will the AI Act function as a quasi-global standard (similar to the GDPR)? Also related: Europe’s technology sector is not impressed with the high-level contours of the AI Act.
- Largely overshadowed by the AI Act, the Council of Europe’s AI treaty is advancing.
- Former Pakistani PM Imran Khan is using AI to campaign from jail.
- Janet Yellen confirmed that the U.S. government is looking at AI’s impact on financial stability. As are several senators.
- Other senators are working to ensure that federal agencies appropriately approach civil rights issues arising from AI (joining forces with civil rights advocates and experts).
- Meanwhile, Singapore’s Project MindForge is expected to analyze how best to incorporate generative AI into the banking sector while mitigating potential risks. Singapore also announced the latest iteration of its national AI strategy.
- If NIST is going to engage in standards development for AI, it should probably be funded to do so.
- OpenAI is researching whether weak supervisor models can effectively constrain more powerful models (a toy sketch of the weak-to-strong idea appears just below).
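For readers curious what that “weak-to-strong” idea looks like in miniature, here is a toy sketch (decidedly not OpenAI’s actual setup; the models and dataset are stand-ins chosen for illustration): a small model is trained on ground-truth labels, its predictions then serve as “weak” labels for a more capable model, and we check how much of the stronger model’s ceiling is recovered.

```python
# Toy weak-to-strong sketch (illustrative only, not OpenAI's method):
# a weak model's labels train a stronger model; compare against the ceiling.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Weak supervisor": a simple linear model trained on ground truth.
weak = LogisticRegression(max_iter=1000).fit(X_train, y_train)
weak_labels = weak.predict(X_train)

# "Strong student": a more capable model trained only on the weak labels.
strong_from_weak = GradientBoostingClassifier(random_state=0).fit(X_train, weak_labels)

# Ceiling: the same strong model trained directly on ground truth.
strong_ceiling = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

for name, model in [("weak supervisor", weak),
                    ("strong on weak labels", strong_from_weak),
                    ("strong on ground truth", strong_ceiling)]:
    print(f"{name}: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```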
- Some powerful people think that optimization algorithms engage in cartel-like behavior.
- Oppenheimer Research has a great report on trends in AI.
- Bias in AI can lead to bad clinical outcomes. And, more generally, generalized foundation models may need to be adapted to support healthcare needs.
- As expected, lots of vendor-supplied AI governance tools come with their own problems.
- Why model weights are so important (and of interest to regulators).
- It should be unsurprising that TikTok suppresses the spread of certain content and bolsters that of others, and that the common factor is the interest of China’s government.
- AI is empowering more workers to apply to more jobs.
- With AI, you won’t need geotagging (AI can figure out where you are based on the photo).
- If lots of data is needed to train AI and novel data is a competitive advantage, then perhaps data is the new oil (again)?
- Not good: AI might introduce new challenges for African-Americans or reinforce existing ones.
- EU… AI Act! EU… chip developers winning?
- Judges in England and Wales have been advised that AI should be approached cautiously.
- This is a very interesting comparison between various LLMs.
- Wild to see top tech leaders engaging in “oh, what could’ve been!” on X.
- Say AI could predict how long you’re likely to live. Would you want to know? And would you want insurers and other companies to know?
- Even before the final text appears, people are being advised to start preparing for AI Act compliance.
- NPR had a conversation with Yale Law’s Andrew Miller on the impact of AI in the U.S. legal system.
- Hugging Face declared 2023 the year of open LLMs.
- Airbnb is using AI to help predict who is looking to party hearty on New Year’s Eve.
- Wild to see a legaltech company valued at $700m, but apparently Harvey is.
- A good follow: a recent Substack post highlighted some of the best research papers in AI last month.
- Towards.ai published a great illustrated guide to RAG techniques (a minimal sketch of the core retrieval loop follows below).
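If you haven’t dug into RAG yet, the core loop is simple: embed your documents, retrieve the passages most similar to the question, and paste them into the prompt. Here is a minimal, hedged sketch of that loop; the hash-based embed() is just a stand-in for a real embedding model, and the prompt format is my own illustration, not something from the guide.

```python
# Minimal RAG sketch: embed documents, retrieve the most similar ones,
# and build a grounded prompt. embed() is a toy stand-in for a real
# embedding model (e.g., a sentence-transformer or an embeddings API).
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy hashing "bag of words" embedding; swap in a real model in practice.
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[int(hashlib.md5(token.encode()).hexdigest(), 16) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "The EU reached political agreement on the AI Act in December 2023.",
    "RAG retrieves relevant passages and adds them to the model's prompt.",
    "Deepfakes are a growing concern for election integrity.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(question: str, k: int = 2) -> list[str]:
    scores = doc_vectors @ embed(question)  # cosine similarity (unit vectors)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

question = "What is RAG?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to whichever LLM you prefer
```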
- Krutrim has launched a multilingual LLM for Indian languages.
- Dan Shapero, LinkedIn’s COO, explained to Business Insider how AI will make everyone’s lives easier.
- Wow: the Verge is reporting that ByteDance used OpenAI tech in efforts to build its own LLM, leading OpenAI to suspend ByteDance’s account.
- Sayash Kapoor and Arvind Narayanan wrote about the risks/benefits of open models.
- Dropbox’s reported sharing of data for AI purposes is getting it into potential hot water with customers.
- Rite Aid, having deployed facial recognition AI in decidedly wrong ways, is banned from using the technology for five years.
- The Hacking Policy Council is calling for clarity around legal protections for red teaming efforts.
- I agree with Axios: let’s stop saying “AI did this” and start saying “people using AI did this.” In other words, AI abuse is a symptom, not a cause.
- Bridgewater published a great analysis of what a world of zero/low marginal-cost cognitive work would mean for productivity.
- McKinsey issued a year-end report on AI trends.
- If you want to analyze data about trends for investment opportunities, you can use an LLM for that (a rough sketch follows below).
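By way of example only, here is roughly what that could look like with the OpenAI Python client; the model name, the revenue figures, and the prompt are all placeholders I made up, and any chat-capable LLM would work just as well.

```python
# Rough sketch: hand an LLM some tabular trend data and ask for a summary
# of potential signals. Model name and data below are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

quarterly_revenue = {
    "AcmeChips":   [1.2, 1.5, 2.1, 3.4],  # hypothetical $B by quarter
    "LegacySteel": [4.0, 3.9, 3.8, 3.7],
}

prompt = (
    "You are a cautious financial analyst. Given the quarterly revenue "
    "figures below (in $B), describe the trends and flag anything worth "
    f"deeper research. Do not give investment advice.\n\n{quarterly_revenue}"
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```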
- Malaysia might be one of the next hubs for AI chipset manufacturers.
- Chile issued guidelines for public sector use of AI.
- Yikes: a massive, public, and frequently used image training dataset is alleged to contain lots of images of child abuse.
- AI companies keep negotiating deals with media companies. Here’s why.
- If you’re looking for a list of who the NYT excluded from its list of important people in AI, you can start here.
- Speaking of, it’s important that broader media (i.e., outside of the tech ecosystem) is engaging with a broad range of AI experts.
- When everyone has an AI tool available to them, things might get very interesting.
- Bill Gates published his 2024 letter, which (of course) has a heavy focus on AI.
- The World Privacy Forum wrote a report on assessing responsible AI governance.
- Axios begs for a wider middle between the e/accels and decels in AI.
- Amazon is using generative AI to summarize reviews and inadvertently made them seem more negative.
- MIT Technology Review published a list of the six questions that may shape the future of generative AI adoption, as well as a list of four game changers in 2023.
- The UK’s top court is in alignment with U.S. courts: AI can’t be an inventor for patent purposes. Korea appears primed to follow suit.
- Brazil’s ANPD discussed the balance between justice and innovation and between AI development and personal data protection. Related: the IAPP published a blog post about the balancing act of regulating AI in Latin America.
- It’s the end of the year, but Congress is still thinking hard about how AI regulations might work.
- Anthropic is looking to draw a huge valuation. Speaking of, here’s an analysis of Mistral’s rocket-ship growth.
- Google and the Leadership Conference on Civil and Human Rights are partnering on a new AI-focused policy center.
- Deloitte is reportedly using AI to help the firm avoid layoffs through employee reassignments.
- OpenAI is devoting funding to exploration of superalignment.
- There is geopolitical competition between the U.S. and China on chips, but maybe not on AI research?
- Perhaps the late twentieth century and the current century have more in common than thought.
- Microsoft’s Phi model is a pretty awesome small model.
- Semianalysis looks into the ‘race to the bottom’ for inference costs.
- OpenAI is being more public about its governance approach. And it released a guide to prompt engineering (a small illustration of a few common prompt-engineering themes follows below).
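To make that concrete, a few themes that show up in most prompt-engineering advice (clear instructions, delimiters around untrusted text, a worked example, an explicit output format) look something like the hedged sketch below; the model name and the prompts are my own illustrations, not excerpts from OpenAI’s guide.

```python
# Small prompt-engineering sketch: a clear system instruction, a few-shot
# example, delimiters around untrusted input, and an explicit output format.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

article = "The agency proposed new disclosure rules for AI-generated political ads..."

messages = [
    {"role": "system",
     "content": "You summarize legal news for busy lawyers in exactly two bullet points."},
    # One worked example to anchor the expected style (few-shot prompting).
    {"role": "user", "content": 'Text: """Court holds AI cannot be a patent inventor."""'},
    {"role": "assistant",
     "content": "- Court: inventors must be natural persons\n- AI-assisted inventions still need a human inventor of record"},
    # The actual request, with the untrusted text fenced by delimiters.
    {"role": "user", "content": f'Text: """{article}"""'},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```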
- Bruce Schneier makes some good arguments about the importance of trust in AI.
- We’re apparently not very good at recognizing satirical content.
- Answer.ai is trying to build consumer-oriented AI tools (I mean, ChatGPT is pretty consumer friendly!). In particular, this may be driven by a seeming convergence around certain design typologies for AI assistants.
- As Brookings points out, the relative lack of federal regulatory/legislative action on AI has led states and municipalities to try to fill the gap.
- Of course GPT can be used for document review.
- Unsurprisingly, other chipset manufacturers have thoughts on Nvidia and CUDA.
- As we end 2023, the NYT published a guest essay looking at how AI tools became thoroughly ingrained in our consciousness over the past year.