The AI House That Paul Allen Built
In today’s issue of "The AI Economy," discover Ai2, a lesser-known AI research nonprofit based in Seattle, Washington, that has been at the forefront of the technology for over a decade. Founded by the late Microsoft co-founder Paul Allen, the institute has published more than 1,000 artificial intelligence papers and is helping evangelize the importance of open-source models.
I had the opportunity to sit down with Chief Executive Ali Farhadi, during which we discussed Ai2's mission, the push for openness, why he thinks we're in an evaluation crisis, and his thoughts about artificial general intelligence (AGI).
Below are snippets from our conversation:
From Researcher to Leader
A University of Washington professor, Farhadi joined Ai2 in its early years, where he built the company's computer vision team, now known as PRIOR. His work on XNOR networks for low-power, edge-based AI led to the creation of a spin-off called Xnor.ai. Years later, the startup was acquired by Apple, and Farhadi went on to work on its machine learning team.
In 2023, he became Ai2's CEO, succeeding inaugural leader Oren Etzioni. Under his leadership, Ai2 has adopted a narrower focus, shifting to areas where its work will have a more significant impact. "The way I think about it is that in the first decade of Ai2, we wanted to prove ourselves and show that we could actually be one of the best research institutes in AI," he tells me. "Now, we're trying to broaden [our] impact and to sort of take [it] to the next level."
Today, Ai2 has three main focuses. The first is the open AI ecosystem: "Humanity needs more openness in AI," Farhadi contends. "Without it, we're in big trouble." The company has released at least two open-source LLMs, OLMo and Molmo, among its contributions to this effort.
The second focus is an unreleased project called Nora, a research assistant agent for scientists that can converse with users, execute code, understand literature, summarize topics, and more. The last focus is conservation, an area of great interest to Allen, a well-respected philanthropist. Multiple efforts are under development on this front, including Earth System, a multimodal foundation model used worldwide for animal tracking and land monitoring; Skylight, which monitors activity at sea, such as illegal fishing and trafficking; and climate modeling.
'AI Is Born and Raised in the Open'
Why is it critical for AI vendors to make their models open-source? Because open development is what led AI to the state it's in today, Farhadi argues. The technology's achievements took time to happen, and they weren't the result of a single team. "It's just a communal effort, and it's going to be like that if you would like to keep innovating in the space of AI. And we are basically deploying these solutions at such a massive scale with a shallow understanding of what we're deploying as a whole community."
He warns that keeping AI closed will have a detrimental impact on the tech and on humanity. "How well can I actually build a cancer solution around these things? How else can I actually build a new model? How else can I ensure safety? How else can I empower others to build on top of these things?"
Don't Believe the Benchmarks You Read
"We are in an evaluation crisis," Farhadi proclaims. "These big tables that people put out, [Ai2] built half of those benchmarks that people put out there and evaluate those things. But they're using those benchmarks in such a ridiculous wrong way that you look at it and you're like, 'Wow, what are those datasets that we released?'"
He views evaluations as "bogus" and advises that we take them with "a grain of salt." However, he concedes that these benchmarks are the best tool today for judging a model's quality. "It's a hard problem," Farhadi acknowledges before saying he has no answer.
So, while we might compare one model to another to find out which one is superior, there won't be a single "God-given" LLM that handles everything we want. Farhadi believes generic models will tackle 85 percent of a task at most, and we'll need to enlist multiple models to finish the job. "There's going to be a ginormous ocean of models, each of which will be built to do certain things really well..."
What Does He Think of AGI?
"It doesn't make any freaking sense. Technically, it's marketing jargon." Farhadi jokes that if those letters are uttered by his students at the University of Washington, "they just delay their graduation by six months."
That being said, he is impressed by AI's progress over the past decade, calling the amount of investment made in the space "unheard of. I don't know any other sector that has received this much investment." He notes that the students he admits to his university program have more published papers than before. "It's just phenomenal...I'm just so happy to have a job and don't need to compete with these folks. They're impressive, well rounded, know how to talk, write data, good at coding [and] math. It's just phenomenal."
Farhadi predicts that the gap between open and closed models will shrink in the future, and smaller models will outperform larger models on the same task.
Today's Visual Snapshot
Slack has published its Fall 2024 Workforce Index, which shows that excitement around artificial intelligence is cooling among workers. This tempering is believed to be driven by a decrease in U.S. respondents saying they're excited about AI helping them complete tasks at work. "With so many businesses making AI investments right now, these findings are a real wakeup call to leaders," Christina Janzer, Slack's Workforce Lab lead, writes. "With sentiment around AI dropping, businesses need to help employees accelerate their AI journey and address the cultural and organizational blockers standing in their way."
Quote This
"The big novelty is that every student can now have access to a personalized AI tutor throughout their life and explore any subject, including the most inaccessible ones. Access to knowledge has no limits. Of course, we must be aware of AI's potential risks, but we must encourage our children to be more ambitious, more curious, and to use AI as a learning tool."
— Microsoft Chief Executive Satya Nadella responding to a question about how we teach children to prepare them for the AI world. (Le Point)
This Week’s AI News
AI Trends and Industry Impact
AI Models and Technologies
Generative AI and Content Creation
Funding and Investments
Enterprise AI Solutions
Hardware, Robotics, and Autonomous Systems
Science and Breakthroughs
Business, Marketing, Media, and Consumer Applications
Retail and Commerce
Legal, Regulatory, and Ethical Issues
Disruption, Misinformation, and Risks
Opinions, Analysis, and Editorials
Podcasts
Thanks for reading. Be sure to subscribe so you don’t miss any future issues of this newsletter.
Did you miss any AI articles this week? Fret not; I’m curating the big stories in my Flipboard Magazine, “The AI Economy.”
Connect with me on LinkedIn and check out my blog to read more insights and thoughts on business and technology.
Do you have a story you think would be a great fit for “The AI Economy”? Awesome! Shoot me a message – I’m all ears!
Until next time, stay curious!