The next paradigm shift in AI
Dr. Fei-Fei Li, Stanford Institute for Human-Centered Artificial Intelligence.

On last week’s episode of Exponential View, I had the pleasure of speaking to Dr. Fei-Fei Li, a computer scientist at Stanford and co-director of Stanford's Institute for Human-Centered Artificial Intelligence. She is the creator of ImageNet, a dataset widely credited with setting off some of the most significant paradigm shifts in artificial intelligence and machine learning.

We covered a lot in the discussion, from how the fight against Covid-19 can benefit from computer vision, to the next generation of AI researchers, to machine values. You can listen to the conversation in full here.

What would you say we need to do to construct systems that increase rather than diminish trust?

Fei-Fei Li: “Our algorithms are, by and large, black boxes – they're hard to interpret. They're not explainable. The robustness and the safety constraints are not well understood, and that erodes trust. That’s the shaky foundation that we need to solidify. So a lot of theorists, theoretical computer scientists, statisticians, and machine learning researchers are now working on that very problem.

The second bucket is the whole algorithm design – the human interface is another huge issue. The human issues cannot be afterthoughts. They have to be baked into the design of a technical system. That starts from where you get data, how you annotate data, how you use the data, how you interpret the results.

Everything I said should be embedded in ethical frameworks that technologists alone cannot come up with – we need scholars trained in social science, ethics, and philosophy to work with us.”

I think back to where image recognition was in 2011 and 2012: it was much better than it had been in 2009, but nowhere near as impressive as what we see today, which came from the combination of better data, better algorithms and optimizations, and more processing power. As a scientist in the field, how do you interpret what's gone on over the last six years?

Fei-Fei Li: “ImageNet was born out of this desire: we needed a radical shift. Our hypothesis at that time was not too different from a lot of scientific discoveries. We needed to establish a North Star that could truly drive the research of visual intelligence. That North Star was a combination of defining the right problem – object categorization at large scale – and creating the path to achieve it, which was through big data. I think we succeeded in creating that North Star. But we were standing on the shoulders of giants; we didn't pull that North Star out of thin air. There were 30 years of cognitive neuroscience research and computer vision research fueling that thinking. We did, in a way, define that North Star and establish a critical path – in order to reach that North Star, machine learning at that time needed to go through supervised learning with big data. With Moore's law carrying the chip advancement, in parallel with the internet creating data at a scale humanity had never seen, these three forces converged. That is what brought about the paradigm shift.”

What do you think we need from civil society in terms of understanding the future potential these technologies can create?

Fei-Fei Li: “As a technologist, I keep reminding myself how little I know about so many things. I'm not an ethics scholar, I'm not the person in the ICU bed experiencing the disease, so I need to listen and be humble. I do hope this kind of multi-stakeholder conversation is built upon that kind of mindset.”

My full discussion with Fei-Fei is available here.

David Gossett

Product Design and Development | Emerging Tech | A.I., NLP and Machine Learning | Researcher | Startups

4y

Human-centered A.I. is an oxymoron. I will make the argument that we need to completely remove humans from A.I. for it to work properly. The human part of the A.I. interface is whether we accept the A.I.'s (probabilistic) outputs. We are diners, not cooks. Early A.I.-ish technology has been highly exploitative. We don't want to win Jeopardy; we (humans) want to crush the opponent. Technology is exploitative by its very nature of having a keyboard. It's hard to let go of the steering wheel (literally). But to achieve computational trust, we have to get out of the rules business. In other words, leave the kitchen to the machines!

Michael L. Allen, BSIT, ADCLDCMP Certificate

I.T. degree professional pursuing Cloud Security Foundation | CISSP Certification-Univ of Maryland // UMGC

4y

Imaginative discussion topic. My take on the matter: AI that 'thinks' the way we perceive humans do... that is hard to grasp; the permutations are endless, and thus hard to fathom. A consequential question: how do you teach AI to perceive being mad or perplexed without being destructive? I admire Fei-Fei Li's perspective on the matter regarding AI. Thank you much, Azeem.

Dr Barney Gilbert

Psychiatrist & Healthcare Innovator

4y

Thanks as always, Azeem - the Fei-Fei podcast inspired our team at Pando, who couldn't agree more about the interdisciplinary collaboration required between developers, designers, clinicians and ethicists to make meaningful progress in healthcare. Thanks for moulding our thinking on this!

Tyrone Lobo

Director of Sales | HPC, AI, Storage, and Cloud solutions for Public Sector & Service Providers | Vision, Action, Results

4y

Azeem Azhar, I listened to this excellent podcast over the weekend. Then today I read Scientific American and saw the influence of Fei-Fei Li, John Etchemendy and the Stanford Institute for Human-Centered AI in a blog post by Stanford PhD students about the role of tech in advancing healthcare:
