The next paradigm shift in AI
On last week’s episode of Exponential View, I had the pleasure of speaking to Dr. Fei-Fei Li, a computer scientist at Stanford and one of the directors of Stanford's Institute for Human-Centered Artificial Intelligence. She is the creator of ImageNet, a database widely credited with bringing about some of the most significant paradigm shifts in artificial intelligence and machine learning.
We covered a lot in the discussion: from how the fight against Covid-19 can benefit from computer vision, to the next generation of AI researchers, to machine values. You can listen to the conversation in full here.
What would we need to do to construct systems that increase rather than diminish trust?
Fei-Fei Li: “Our algorithms are, by and large, black boxes – they're hard to interpret. They're not explainable. The robustness and the safety constraints are not well understood, and that erodes trust. That’s the shaky foundation that we need to solidify. So a lot of theorists, theoretical computer scientists, statisticians, and machine learning researchers are now working on that very problem.
The second bucket is algorithm design as a whole – the human interface is another huge issue. The human issues cannot be afterthoughts. They have to be baked into the design of a technical system. That starts from where you get data, how you annotate data, how you use the data, how you interpret the results.
Everything I said should be embedded in ethical frameworks that technologists alone cannot come up with – we need scholars trained in social science, ethics, and philosophy to work with us.”
I think back to where image recognition was in 2011 and 2012: it was much better than it had been in 2009, but it was nowhere near as impressive as what we see today, which comes from the combination of better data, better algorithms and optimizations, and more processing power. Now, as a scientist in the field, how do you interpret what's gone on over the last six years?
Fei-Fei Li: “ImageNet was born out of this desire: we needed a radical shift. Our hypothesis at that time was not too different from a lot of scientific discoveries. We needed to establish a North Star that could truly drive the research of visual intelligence. That North Star was a combination of defining the right problem – object categorization at large scale – and creating the path to achieve it, which was through big data. I think we succeeded in creating that North Star. We were standing on the shoulders of giants; we didn't pull that North Star out of thin air. There were 30 years of cognitive neuroscience research and computer vision research fueling that thinking. We did, in a way, define that North Star and establish a critical path – to reach it, machine learning at that time needed to go through supervised learning with big data. With Moore's law carrying the chip advancement, which was a parallel development to the internet creating data at a scale humanity had never seen, these three forces converged. This is what brought that paradigm shift.”
What do you think we need from civil society in terms of understanding the future potential that these technologies can create?
Fei-Fei Li: “As a technologist, I keep reminding myself how little I know about so many things. I'm not an ethics scholar, I'm not the person in the ICU bed experiencing the diseases, so I need to listen and be humble. I do hope this kind of multi-stakeholder conversation is built upon that kind of mindset.”