Lots to digest this week, from key questions and ‘state of the world’ reports to the question of whether AI Drake is as good as real Drake.
- The WSJ has a great set of 25 questions we should ask if we want to start thinking about how to think about AI.
- And AI made the cover of The Economist. (More here.)
- Azeem Azhar outlines how the move towards greater AI use might be signaling a Copernican-like paradigm shift in how we think about systems, creativity, etc.
- The AI Now Institute is arguing that we need to move beyond what they view as self-serving accountability audits for AI. Perhaps, as the Economist suggests, greater explainability and transparency could help? This might, for example, include greater development/deployment of open-source models. Of course, this raises the question of what happens when AI models go beyond human cognitive limits.
- Celebrity deep fake tools are going to discover an area of the law known as ‘rights of publicity.’
- Garbage in, garbage out: the Washington Post dug into training sets for some LLMs and revealed how some of the biases and errors might be introduced from training data.
- Not cool: Uber, Amazon, and others are using AI to ‘personalize’ pay for gig and contract workers.
- Axios attended the TED conference and watched the talks on AI so you didn’t have to.
- If LLMs are only as good as their training data and the training data is censored, what will China do?
- Tim O’Reilly, per usual, has a very good point: you can’t regulate what you don’t understand. Which is currently something of a problem.
- The EU is planning an AI Act, but German national regulators are also reportedly digging into how to regulate AI.
- Meanwhile, Japan’s government is using ChatGPT to make regulations more easily understood for the Japanese people. Pretty good idea!
- The potential wave of regulation in the EU might kill the continent’s AI industry before it takes off, according to entrepreneurs in the space.
- Canadian researchers are calling for support for the Artificial Intelligence and Data Act (AIDA).
- A common theme of proposed regulations: transparency about who makes the AI system.
- GPT-5 is a long way off, according to Sam Altman. Is this part of a transition from LLMs to URLs (Unsupervised Reinforcement Learning models)?
- Microsoft provided a primer as to how it has approached responsible AI.
- Talk about growth: there were more articles about AI published in the week of 4/7/23-4/14/23 than there were in all of the 1990s.
- Many Google employees did not like Bard much at all when they were testing it.
- Daniel Miessler predicts that governments will start outright banning AI in the next few years in response to potential economic disruptions from job losses.
- Donald Rumsfeld did many bad things but one good thing he gave the world was the quip about ‘known unknowns’ and ‘unknown unknowns.’ Here are some of the known unknowns related to AI and its impact on media.
- Drew Breunig has a good essay questioning whether LLMs are compatible with data protection laws and how to address their increasing role in our society.
- Very fascinating (and something to try to fix): AI development and operations consume tremendous amounts of fresh water.
- GPT-4 can create self-regenerating/fixing python scripts. Fascinating.
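The "self-fixing script" pattern behind that demo is simple to sketch: run the code, catch the traceback, hand code-plus-error back to the model, and retry with the patched version. A minimal offline sketch follows; `attempt_fix` is a hypothetical stand-in for the GPT-4 call (here it hard-codes one repair so the loop runs without an API).

```python
import traceback


def attempt_fix(code: str, error: str) -> str:
    """Stand-in for an LLM call: in the real demo, GPT-4 receives the
    failing source plus the traceback and returns a patched version.
    Hard-coded here so the example runs offline."""
    return code.replace("1 / 0", "1 / 1")


def run_with_self_repair(code: str, max_attempts: int = 3) -> str:
    """Execute `code`; on failure, request a patch and retry."""
    for _ in range(max_attempts):
        try:
            exec(code, {})
            return code  # ran cleanly; return the (possibly patched) source
        except Exception:
            code = attempt_fix(code, traceback.format_exc())
    raise RuntimeError("could not repair script within attempt budget")


repaired = run_with_self_repair("result = 1 / 0")
```

The loop terminates after a fixed attempt budget, which matters in practice: a model can keep returning broken patches indefinitely.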
- Perhaps Elon was yelling ‘pause’ just so he could try to catch up through x.ai, rather than because AI is going to ‘hit like an asteroid’? He’s now talking about a ‘TruthGPT,’ which will probably never live up to its name but sounds about on-brand for Musk. It doesn’t seem to be off to a great start, in any event.
- Interesting essay subject: what if explainability is a hindrance to progress? (Of course, what if you can’t explain a model…and it gets things very, very wrong?)
- The WSJ asks: should AI (and robots) have moral/legal rights?
- If you’re stress-testing ChatGPT, you might as well get paid: OpenAI announced a bug bounty.
- Algorithms matter, but so do chips and system designs. Relatedly: The Information reported that Microsoft might be getting into the chip game.
- Apple is offering high-yield bank accounts, but will it also offer a GPT rival?
- Applied correctly, AI can be used to help organizations with their compliance obligations.
- Adobe is rolling out new AI-powered features to help designers and creators.
- What kinds of creators and creative works might be at greatest risk for AI-driven disruption?
- Here’s how investors and venture capitalists are thinking about some of the different players in the AI space.
- Banks are getting very excited about generative AI.
- Sounds like Warren Buffett is getting excited about AI, too, since it can … translate songs for him?
- Scale.ai released their 2023 “AI Readiness Report.”
- If there are ‘AI Wars,’ then Google sounds very poised to…strike back?
- The FTC’s Lina Khan is highlighting the potential competition challenges that will impact the AI industry.
- AI-created art won the Sony World Photography Award, but the ‘artist’ refused the prize and used the opportunity to draw a distinction between art and AI.
- AI is already resulting in layoffs for video game illustrators in China.
- ChatGPT is threatening the livelihoods of millions of coders in India. But it wouldn’t be able to get into any IIT, it appears.
- Finance might be ripe for AI-driven disruption, too?
- Same with market research (since, apparently, GPT models can simulate market participant behavior)?
- The UK is thinking about how to address workforce displacement as a result of AI.
- Some creators are going all-in on AI.
- Insider will start experimenting with AI to draft articles.
- Reddit data might come at a cost for AI training.
- Snap’s AI bot is going viral, apparently.
- Project managers, rejoice: AI for JIRA is in early release stages.
- Google keeps advancing the ball on generative AI in healthcare.
- Interesting point from psychologists: if LLMs have no corporeal body, they will never understand all of what they create.
- OpenAI should start a waitlist for all the regulators coming their way (the latest: Spain).
- Amazon is framing their deployment of AI tools for AWS customers as an effort to ‘democratize’ AI access.
- Tens of thousands of Facebook users fell victim to a fake ChatGPT scam known as ‘Lily Collins.’
- Speaking of scams, we’ll have to see how many people were victimized by tax-related GPT-powered scams. We’ll also hear about how many people saved $$$ through AI-powered tax loophole exploration.
- Cool! Google is trying to use AI to save coral reefs. Maybe?
- First AI came for Taylor Swift. Now, ‘ghostwriter’ came for Drake and the Weeknd. Then the rights-holders came for ghostwriter (not sure that UMG and others actually have rights in the fake voice, but that’s something for courts to explore!).
- Ben Thompson has a great argument for “zero trust authenticity” to help creators in an AI-oriented world.
- NewScientist is calling for a long-overdue scientific community conversation about AI risks.