This week’s installment focuses more on the ties between the potential impacts of AI on society–increased productivity, job losses, disinformation, art disruption–and efforts to govern AI. As always, there is a good deal to digest as we head into the weekend.
- AI is not data protection, and the efforts to regulate AI in the EU will likely force politicians and regulators there to truly balance (or decide between) regulatory oversight and innovation and growth. For example, Google is releasing Bard pretty much everywhere…except for the EU and Canada.?
- The failure of the U.S. Congress to effectively legislate on the topic of AI and the hamstrung ability of the Biden administration to issue regulations are leading various states to fill in the gap. But the wheels of Congress are starting to turn, with bills proposed to train federal employees on AI, require parental consent before kids use AI, and bar the use of AI in political ads. Speaking of political ads, Turkey’s recent election showed how deepfakes could impact U.S. elections in 2024, so action by Congress is pretty important on this front!
- The EU, U.S., and China take up a lot of the oxygen in conversations about AI regulation, but Latin American countries are also highly focused on this space. Recently, for example, Brazil announced a government-supported AI sandbox, the Ibero-American Network for the Protection of Personal Data initiated a coordinated action relating to ChatGPT, and the Mexican government created a new AI-oriented agency.
- Pro-tip: if you’re going to claim a video is a deep-fake to rebut the validity of the video as evidence in court, you should be very certain that the video in question is definitely a deep-fake, and not a real video of Elon Musk saying things that Tesla’s lawyers wished he didn’t say.
- The rise of open AI models may threaten the leadership of large technology companies in the AI space, but the main reason the open source models are available is because of the largesse of large technology companies. Model development is very expensive, you know! More on the topic here.
- As we are learning more and more about the impact of AI on employment and how we work, it’s becoming clear that it is likely that there will be some fairly negative impacts on employment in certain fields. The Economist says your job is probably safe, but many people aren’t seeing it that way. If jobs are shaken up by AI, Barron’s published a list of which companies may be most impacted. But more importantly, policymakers really need to start planning for how society will be shaken up (and I’m not convinced UBI is the answer, but it’s better than nothing).
- This is a somewhat intuitive but neglected potential impact of AI: increased loneliness as a result of less social engagement, with chatbots filling in roles that fellow people have had for our species’ entire existence. What you might find particularly scary is the subtlety with which chatbots can shift human perspectives.
- People are scared of AI, it seems. But the fear of AI is mostly of the Skynet sort, and not the (at least currently) more realistic fears (e.g., job displacement, political havoc, etc.)
- There has been a great deal of focus on the role of privacy in AI governance. I’m hoping that the advocates for privacy having a greater role in this space will engage in the debate regarding DHS’s use of panopticon-style AI tools to gather and parse data about citizens and immigrants for law enforcement purposes.
- Writers are still striking to protect their livelihoods against, among other things, encroachment from AI-powered writing tools. What makes this possible? In large part: union membership, which is why more writers are trying to join unions. Illustrators–who are also at risk for AI displacement–might want to think about this as well. Meanwhile, the Center for Artistic Inquiry and Reporting published an open letter calling for pushing back on efforts to use AI to supplant artists.
- Google, in a recent filing, explained that it generally agrees with the USPTO’s perspective on inventorship: AI should not be treated as an inventor. On a different IP-related front, OpenAI leader Sam Altman, along with other AI leaders, is pushing for standardized licensing for AI training data, as well as pleading with Congress to take more meaningful action in the AI space more generally. More on that here and here.
- This is getting increasingly meta: Nvidia is talking about using generative AI to develop chips to…develop/deploy generative AI.
- MIT Technology Review dove into the world of AI and content moderation. It’s a pretty difficult problem to solve (i.e., while in the abstract ‘block all bad content’ may sound easy, there are lots of grey areas and over-indexing on blocking will result in takedowns of tons of constructive or positive content) and I’m not sure the article does sufficient justice to that challenge.
- Open AI models processing on-device will probably result in some huge advances in consumer tech and adoption of AI tooling, but come with limitations owing to hardware.
- If AI is going to turn into skynet, the New Yorker asks, what can be done (or, maybe, what will actually be done) to stop it?
- Last week the WSJ dug into the impact of AI on the billable hour. Bloomberg also dug into the impact of AI on law firms more generally.
- Ah, M&A. One Japanese 32 year-old is using AI to facilitate deals (and making a mint in the process). And Snowflake might buy Neeva?
- The genie is out of the bottle: students are using ChatGPT and other AI tools all the time for basically everything they can. Which is probably fine! But it’s more worrisome, to me at least, that admissions offices are using it as well.
- Everyone is really excited about GPT and its equivalents. Perhaps that’s because of the low expectations we have as a result of Siri? (One must assume Apple is working on some good AI embeddings into their newest products.)
- If Meta is going to remain relevant, it will likely need to lean on its AI teams. There is probably a great deal of growth potential for them here.
- Casey Newton published in the Platformer a helpful list of principles for how journalists should cover topics related to AI.
- If Bard replaces Google search, what does that mean for the Internet and how we organize information online?
- Anthropic upped the amount of data Claude can consume in a prompt, making for much more contextual answers.
- The market is not entirely convinced that Bard and Google’s efforts in generative AI are going to result in the same dominance that Google has experienced in other markets. That might change if Google is able to figure out how to move AI to personal devices.
- The Neuron, which has a good newsletter that skims various AI-related topics, is putting together a curated list of their favorite AI tools.
- There is often tension between security and transparency, and the generative AI space is no different.
- AI takes a lot of energy to run, so the UK is investigating ways to improve efficiency and reduce the environmental impact.
- McKinsey wrote a good summary of some of the ways in which generative AI might turbocharge sales channels.
- I’m not sure if this idea for an “AI Influence Level” (on how much a particular piece of content was influenced by AI) is the exact right approach but the development of some standardized sort of nutrition label for AI-generated content is probably a good idea to explore.
- We didn’t need this: Shell will use AI to try to improve deep sea oil drilling.
- But we do need this: pharma companies will use AI to discover new drugs faster.
- AI is dominating the up-and-comers segment of the enterprise IT stack.
- The litigation over Copilot/Codex will continue.
- AI music is becoming ubiquitous, but it’s kind of like mash-ups in the mid-2000s: the quality is quite varied.
- If you want to learn what prompt injection is, here’s where you can do that.
- Very cool: the similarities between machine learning and acupuncture.
Privacy + Data Policy at Meta
1 年Where was this week's cover pic taken from?
Senior AI and Tech Policy Advisor @US Senate | Responsible Tech | Data Privacy | AI and Healthcare | AI and Education | AI and Labor
1 年Thank you so much for this newsletter Jon Adams! There's a lot of good content here and I've already saved some of these articles for later.
Principal, Android Security & Privacy | Google | Veteran | Risk Management
1 年This is great!
Thanks for Sharing! ?? Jon Adams
Corporate AI Governance Consulting @Trace3: All Possibilities Live in Technology: Innovating with Responsible AI: I'm passionate about advancing business goals through AI governance, AI strategy, privacy & security.
1 年So much good content! Thanks for working through this for us. ????