Issue #5
The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
Last month, the FTC warned AI vendors about making deceptive claims about their AI products. This month, it takes on AI used for deception. In its most recent letter, the FTC cautions AI vendors against making or selling AI that can be used to deceive, even if deception is not the product's sole intended purpose. The missive comes after a wave of stories about how generative AI is being misused to enhance deepfakes, perpetrate fraud, run voice-clone scams and more. Specifically, the FTC instructs businesses to mitigate the risk that these products could be misused for harm before they hit the market.
In this newsletter, you’ll find:
Top Articles
Bill Gates explains why AI is as revolutionary as personal computers, mobile phones, and the Internet, and he gives three principles for how to think about it.
Microsoft researchers released a paper on the arXiv preprint server titled “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” They declared that GPT-4 showed early signs of AGI, meaning capabilities at or above human level.
This eyebrow-raising conclusion largely contrasts with what OpenAI CEO Sam Altman has been saying about GPT-4; he has called the model “still flawed, still limited.” In fact, if you read the paper itself, the researchers appear to dial back their own splashy claim: the bulk of the paper is dedicated to cataloguing the limitations and biases of the large language model. This raises the question of how close to AGI GPT-4 really is, and whether “AGI” is instead being used as clickbait.
When used correctly, these types of tools will begin to significantly enhance your team's responsiveness, efficiency, and effectiveness. But do the benefits outweigh the risks?
State governments should address the challenge of artificial intelligence regulation by passing laws that cover any technology that supports critical decision-making, mandate algorithmic impact assessments, apply to both public and private use, and identify clear sectoral enforcement authority.
The Guidance on AI and Data Protection has been updated after requests from UK industry to clarify requirements for fairness in AI. It also delivers on a key ICO25 commitment: helping organisations adopt new technologies while protecting people and vulnerable groups.
Microsoft has warned some Bing-powered search engines that it will revoke access to the company’s search index if they continue to use it as the foundation for their AI tools.
Under pressure from its rivals, Google is updating the way we look for information by introducing a sidekick to its search engine.
A startup — and a community — that will build a trustworthy and independent open-source AI ecosystem. Mozilla will make an initial $30M investment in the company.
The vision for Mozilla.ai is to make it easy to develop trustworthy AI products. We will build things and hire / collaborate with people that share our vision: AI that has agency, accountability, transparency and openness at its core.
In these scenarios, Stanford evaluates all the models for accuracy, calibration, robustness, fairness, efficiency, bias, toxicity and more. You'll see that Cohere's models appear to outperform others in these testing categories.
Topics discussed between Sam Altman and Lex Fridman include GPT-4, political bias, AI safety, neural network size, AGI, competition, the shift from non-profit to capped-profit, political pressure, truth and misinformation, Microsoft, the SVB bank collapse, future applications, advice for young people, and the meaning of life.