There were so many developments at the intersection of AI and law this week. The AI Act draft passed out of the EU Parliament, and governments around the world stepped up other efforts to regulate AI. And there are more indications as to the impact of AI on jobs and economic activity. Feel free to share.
- The EU Parliament agreed on an AI Act text. But it wouldn’t have been interesting if there hadn’t been some last-minute drama, right?
- Wired Magazine has a good explanation as to why Marc Andreessen’s AI utopianism should be taken with a few grains of salt. And don’t get me started on the idea that we’re rapidly approaching ‘the Singularity.’
- To be clear, AI is a tool (and potentially a weapon) and the notion that geopolitical rivals will put aside differences to collaborate on issues relating to that tool is belied by pretty much all of human history. It might make the lives of certain AI developers easier, though.
- An interesting study out of Italy shows what lengths people will go to in order to get around a government block on generative AI tools.
- We can all agree that discriminatory bias in AI applications is, as a general concept, bad. It’s also bad for business. And some generative AI tools, if not policed, might exacerbate biases.
- I’m not saying I agree with this but: what if the hype, hope, fears, etc. over generative AI are all overblown?
- So the ICO is warning makers of generative AI apps to address privacy risks before coming to market. Related: Google delayed the launch of Bard in the EU due to data protection concerns.
- Some have proposed that the IAEA model might work for AI, but not everyone is convinced. Instead, it might be the case that every country is on its own. Related: here is a very good take on why the idea of licensing/non-proliferation of AI models is unlikely to be effective.
- The U.S. Senate held a hearing on AI and human rights. Lots to digest there.
- Sam Altman is ‘optimistic’ that we can align on global coordination on AI regulation.
- Watch out, frontline workers: McKinsey is predicting massive AI-related disruption in banking, retail, and other sectors. Read the report here. And McKinsey consultants are probably telling their clients how to effectuate that disruption. (The Big Four accounting firms certainly are…and Accenture is no slouch in this space, either.) Likewise, Bloomberg points out that there is a new construct between tech firms and employees. AI isn’t killing jobs yet, though (maybe?). In any event, hopefully AI can make the boring work easier!
- Meanwhile, Yann LeCun says we shouldn’t worry about AI eliminating jobs.
- Digital rights advocates are also flagging that our focus on existential threats is limiting our ability to focus on cognizable harms happening now.
- Post-truth political candidates are leaning heavily into using generative AI for political mischief.
- So it’s no wonder that German politicians are worried about AI’s risk to democracies.
- Politico has a perspective on the arms race to identify AI-generated content.
- Two of my favorite topics: AI and trail-running.
- Generative AI may reduce the need for coders but right now it’s leading to a hiring spree for certain firms in India.
- LSE has a good essay on the role that African countries can play in regulating AI.
- Musicians are concerned about AI in a general sense, but the Beatles are getting by with a little help from their AI.
- Shocking but maybe it shouldn’t be: doctors are using ChatGPT to help improve their bedside manner.
- Can algorithms be used to help antitrust compliance programs?
- FP has a good article on why governments should invest in AI development so it can better serve broader populations.
- AI is going to take off, but the financial expectations are below those for the iPhone. Which suggests that perhaps the metrics might need adjusting.
- Microsoft, Meta, and others are pushing for improved practices relating to synthetic media via the Partnership on AI.
- State AGs are encouraging the Biden administration to develop a risk-based framework for addressing potential AI-related harms.
- The IAPP digs into the Atlantic pact between the U.S. and UK on AI. And the Economist digs into Rishi Sunak’s plan to make the UK an AI superpower.
- Google announced an AI security framework.
- Oh, now Meta wants to put generative AI everywhere.
- Firefly for Adobe Photoshop is now GA for enterprises, and design may never be the same.
- Classical music seems ripe for an AI-driven renaissance.
- And, on the topic of art, AI is driving some really cool advances in how we even conceptualize what might be ‘art.’
- AI ETFs are coming. If you’d invested in one in mid-2022, you’d be sitting pretty right now.
- Humans might want AI. AI probably needs humans (at least, current AI does). And, as Rumman Chowdhury writes for The Hill, we should be aiming to ensure AI works for humans, after all.
- Texas, of all places, has established a task force to evaluate how to approach AI.
- Another U.S. judge is requiring attorneys to disclose any use of AI tools in preparing their filings.
- Salesforce is pitching its AI tools as safer and more trustworthy than much of what’s out there on the market (while also trying to bring ChatGPT integrations into its CRM offerings).
- OpenAI released some helpful guidance for how to get the most out of GPT-4.
- Hong Kong is increasingly being treated like Mainland China for purposes of rolling out new products/services.
- Clever: French tax authorities used AI to identify undeclared pools (and thus tax cheats).
- AI is expensive. Here’s a tool to help you keep track of spending.
- As a privacy lawyer, I find it funny that the U.S. Congress might act on AI rapidly when they’ve spent decades dithering on digital privacy.
- A Harvard-led institute is looking to tackle questions relating to AI fairness in job hiring.
- OpenAI and DeepMind are opening up their models to the UK government.
- Professor Ethan Mollick has some thoughts on how to use AI in the classroom.
- The White House is hoping to establish privacy norms with AI firms soon.
- The WSJ dove into the ‘power couple’ that is Microsoft and OpenAI. It sounds like it might be working well for Microsoft!
- YouTube might turn out to be a huge asset in Google’s AI portfolio.
- Meta is thinking about how to make money through open-source AI.
- AI is empowering better decision-making in workers’ compensation claims.
- What if AI could help the world see?
- Worth watching: can generative AI be useful in parsing Amazon reviews?
- Seems distracting: Mercedes is adding ChatGPT to its cars?
- Generative AI can fake voices, and faked voices can easily be used to scam people.
- I’ve flagged previously that we should avoid thinking of AI as a religion; we should also probably be conscientious about using AI tools to advance religious objectives.
- Sorting out what is and isn’t AI-generated is a drag on our brains.
- A survey says that many CEOs are likely unduly pessimistic about the long-term implications of AI.
- The Information digs into the impact of existing laws on AI, even before the AI Act comes into effect.
- Finally, if ChatGPT is a person, is it occupying a standard ‘dad persona’?