There’s never a dull moment. This week, we saw still more ways in which AI will be changing our future, from proliferating deep fakes and spam to saving Earth (thanks, NASA!) and improving health. Again, it becomes ever clearer that AI, like many other things in tech, can be a tool or a weapon, depending on how and why it’s being used.
In this time of great economic and technological progress for AI, regulators and lawmakers are starting to put the pieces together for how AI governance will work. This week we saw regulations on AI bias audits finalized, new requirements for generative AI in China, policy debates in Brussels and Washington, and new investigations and working groups launched in Europe and Africa.
As always, feel free to comment on the LinkedIn post, share with friends, and keep me posted on what’s happening in AI from your vantage point.
- The European data protection regulators announced they are digging into ChatGPT.
- Building on last week: are we headed toward a place where there are only a few major models (and everything else is just some form of implementation problem-solving)? Or not?
- New York City’s regulations implementing Local Law 144 (which requires bias audits in certain scenarios where automated employment decision tools are used) were finalized.
- Senator Schumer is spearheading the Congressional effort to regulate AI, focusing primarily on transparency, explainability, and ethical requirements.
- Companies are rushing to build generative AI into everything, and spending and app development in this space are growing exponentially. Bankers and others in finance are trying to figure out who’ll do AI best and bet on them.
- Forbes has released the “Forbes 50” in AI. (Though being on a Forbes list doesn’t quite have the cachet it once did.) Forbes is also pooh-poohing calls for AI regulation, while suggesting that legal liability standards might be the way to approach AI risk mitigation.
- The Ada Lovelace Institute published a thought-provoking white paper about the need for civil society input in the AI Act proposal.
- The Initiative for Applied Artificial Intelligence has a great report summarizing enterprise AI systems and how they would slot into the AI Act’s risk scheme.
- A group of civil society leaders is pushing the EU to impose stricter obligations on general purpose AI via the AI Act.
- Euractiv suggests that the EU Parliament will likely vote on their draft of the AI Act later this month.
- Responsible and ethical AI must also account for environmental and human costs associated with model development (in addition to risks associated with models once deployed).
- Wired magazine has an essay calling for international regulation of AI use. At first, it seems like a pipe dream (how could we get the EU, U.S., and China to agree on standards?), but then again, recall that we were able to develop international standards for nuclear weapon use during the Cold War.
- As the NYT points out, deep fakes are here and people struggle to identify them.
- Indigenous peoples from the U.S. to New Zealand are concerned about the use of their language and identity by audio AI systems.
- Axios has a good point about being a realist with regard to AI: development won’t stop, and we’re not going to destroy the world tomorrow as a result, so how should we approach it?
- Publishers are facing an important question: build AI tools, or sue AI developers?
- Almost hilarious: as AI tools become more powerful and more ‘human-like’, they become more prone to human-like errors.
- Brookings has a good commentary on why the ‘pause’ effort is problematic (it’s a mix of pragmatic and ethical considerations).
- While Elon was busy yelling ‘pause’, he was also buying 10,000 GPUs and setting up a secret AI project at Twitter.
- AI is bringing back the debate over the open source ethos. And the fair use question is returning to techie conversations.
- Lawyers at Ropes & Gray provided a good overview of IP (and other) risks associated with AI-generated content. And the team at Clark Hill highlighted the Three Cs of deploying AI: Contracts, Compliance, and Culture.
- Meet PassGAN, the (overly hyped?) AI-powered password cracker.
- The U.S. Department of Commerce (through the NTIA) is starting a conversation about how the U.S. might regulate AI. If you want to join that conversation, head over to the NTIA website.
- Meanwhile, states are filling the void on employment AI.
- Stuart Russell says that people expect the government to ‘step up’ and take meaningful efforts to regulate the tools (just like there are regulations around pharmaceuticals, airplanes, etc.).
- Political conservatives in the U.S. argue for a ‘soft’ approach to AI regulation.
- Andrew Ng and Yann LeCun had a good conversation on the state of AI and its likely future.
- Eric Schmidt is right: pressing ‘pause’ gives China time to catch up. Or maybe President Xi’s thin skin will prevent advances in visual generative AI? In any event, any generative AI launching in China is apparently subject to a full regulatory review. (More on that here.) Having made it through the gauntlet, Tongyi Qianwen is the latest entrant on the scene, courtesy of Alibaba.
- South Africa’s Information Regulator is apparently looking into ChatGPT.
- Noah Smith has a great point: no one knows how many jobs will be automated, or what ‘automated’ even means.
- As Paul Krugman notes, it might take a while for implementation of AI to occur at sufficient scale to increase productivity for workers. But as Azeem Azhar notes, AI adoption is moving quickly and what was true in the past may not be true now.
- Erik Brynjolfsson joined a Microsoft podcast to talk about how AI will transform productivity.
- LinkedIn discussed the ways in which it operationalized its Responsible AI Principles in recent generative AI-powered product launches.
- In case you’ve lost track of who is on what side (or what sides even exist) in the debate about Responsible AI and the path forward, the Washington Post has you covered.
- Let’s not jinx it, but the Atlantic is right that generative AI hasn’t (yet) become a flashpoint in the culture wars.
- Why do so many Americans fear generative AI? Maybe because the past five decades of inequality and slower growth might have made us less optimistic about scientific progress. Compare, say, Tom Swift books to The Circle.
- Maybe the way forward on AI governance is having Ron Conway get people in a room (and keep them there until they agree to self-regulate?).
- A paper out of Stanford/Google treats generative AI models as if they’re Sims, and sees some pretty human behavior come out of it.
- Video AI is going to blow everyone’s minds.
- Ben Thompson dove into the history and meaning of ChatGPT for Stratechery.
- Could AI technology eat its creators? Good question!
- It was only a matter of time before people started theorizing that chatbots could radicalize young men.
- AI developers are running into an interesting problem: server shortages.
- Amazon is jumping into the “AI everywhere” battle.
- If you’re Anthropic, one way to address shortages is to raise $5 billion.
- CAPTCHA and online proofs of ‘humanness’ will need to evolve to keep up with AI.
- Lawyers are pressing clients to require explainability when contracting for AI-powered services.
- Survey results are in and developers freaking love Copilot.
- In another survey, illustrating how question framing impacts respondent choices, YouGov finds that a majority of Americans support pressing pause on AI development.
- Wired magazine warns about the flood of robo-lawyers that will likely arrive in the coming years. (Of course, we’ve heard this before…)
- But don’t worry about robo-economists, at least not yet.
- ChatGPT has a huge lead on other chatbots and is increasing its userbase significantly!
- An AI-created industry: human (non-AI) content creators producing material to feed AI training.
- Reddit might be bracing for a GPT-powered spam storm.
- Wild: rumor has it that Stability.ai (makers of Stable Diffusion) is at risk of burning through their cash reserves.
- NASA is using AI to help save Planet Earth.
- And AI can help historians investigate the past (but let’s hope that hallucinations are detectable!).
- Can AI be used to promote transparency and accountability for politicians?
- Is zero-shot learning the future for LLMs and similar AI systems? (If the term is unfamiliar, see the minimal sketch of zero-shot vs. few-shot prompting after this list.)
- If you want to make photographs that adopt the look of famous photographers, and don’t want to write your own prompts, here’s a cool tool.
- Tools to take GPT-4 code straight to production are being built.
- Nvidia is saying we’re at the ‘iPhone moment’ for AI.
- Eric Topol gets it: general transformer models can be very helpful as a generalist tool in the healthcare space.
- I’m not always a fan of mnemonics, but “SEPARATE DB TABS” is a fun way to think about structuring key points related to AI ethics and governance.
- When AI feeds itself, the content around us might become very mediocre and homogenous.
- Sounds like there’ll be some kind of big AI conference in SF next month.
- What does it feel like to be working (or not working) in AI right now?
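Since zero-shot learning comes up in the list above, here’s a minimal sketch of what separates zero-shot from few-shot prompting. Everything in it is illustrative: the `build_prompt` helper and the classification task are hypothetical, and in practice you’d send the resulting string to whatever LLM API you happen to use.

```python
# Minimal, hypothetical sketch: zero-shot vs. few-shot prompting.
# Not tied to any particular LLM API; it just assembles the prompt text.

def build_prompt(task, examples=None):
    """Assemble a prompt string; with no examples, it's 'zero-shot'."""
    lines = [f"Task: {task}"]
    for text, label in examples or []:  # few-shot: show worked examples first
        lines.append(f"Input: {text}\nOutput: {label}")
    # The actual query the model is asked to complete:
    lines.append("Input: The EU finalized new rules on AI bias audits.\nOutput:")
    return "\n\n".join(lines)

# Zero-shot: the model gets only the task description.
print(build_prompt("Classify the sentence's topic as POLICY or PRODUCT."))

# Few-shot: the same task, plus labeled examples to imitate.
print(build_prompt(
    "Classify the sentence's topic as POLICY or PRODUCT.",
    examples=[
        ("The EU Parliament will vote on the AI Act draft.", "POLICY"),
        ("Alibaba launched its Tongyi Qianwen chatbot.", "PRODUCT"),
    ],
))
```

The point is simply that ‘zero-shot’ means the model gets nothing but an instruction, while ‘few-shot’ prepends worked examples for it to imitate; the bet behind the question above is that models are getting good enough to skip the examples.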