AI pioneer Fei-Fei Li sees a path for you in her field
Jessi Hempel
Host, Hello Monday with Jessi Hempel | Senior Editor at Large @ LinkedIn
The Stanford professor helped propel AI from the labs to the office. Now she's leading the charge to make sure it's good for people.
Stanford professor Fei-Fei Li is a pioneer in artificial intelligence. Her research helped lead to breakthroughs like allowing computers to recognize images. Now, AI has spread to every economic sector. This episode, hear Fei-Fei's thoughts on how humans can play a compassionate role in shaping AI's future. Plus, Caroline Fairchild brings reporting on some surprising jobs in this emerging industry.
JESSI HEMPEL: From the editorial team at LinkedIn, I’m Jessi Hempel, and this is Hello Monday, a show where I investigate the changing nature of work, and how that work is changing us.
Last year, I got to test-drive a self-driving car, which of course means I got to sit behind the wheel and not drive. It was really trippy. In this one test, a human-size dummy walked out onto the track, imitating a pedestrian, jaywalking.
SELF-DRIVING CAR TAPE: So here it comes...so we pass this trigger…do we see him? Do we see him? Yep, there he goes.
The car saw the pedestrian and slowed down to let him pass. This is just one of the many, many things that have become possible now that computers can recognize images.
That’s why this week, I wanted to talk to Fei-Fei Li. She’s a professor at Stanford and a pioneer in artificial intelligence. More than a decade ago, Fei-Fei set out to teach computers to read pictures the same way that small children learn to do this – by observing lots of objects over many years. She and her team paid researchers to tag millions of images so that computers would begin to recognize them. And by 2009, they had built a huge dataset called ImageNet.
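To make that concrete: a model pretrained on ImageNet's labeled images can be asked to name what is in a photo with just a few lines of code. This is only a hedged sketch of today's tooling, assuming PyTorch and torchvision and a hypothetical local file called photo.jpg; it is not the pipeline Fei-Fei's team used.

    # A modern illustration (not the original ImageNet pipeline):
    # classify one image with a network pretrained on ImageNet labels.
    # Assumes PyTorch/torchvision are installed and "photo.jpg" exists locally.
    import torch
    from torchvision import models
    from torchvision.io import read_image

    weights = models.ResNet50_Weights.DEFAULT        # weights trained on ImageNet
    model = models.resnet50(weights=weights).eval()  # load the pretrained classifier
    preprocess = weights.transforms()                # resize/normalize as the model expects

    image = read_image("photo.jpg")                  # hypothetical local image
    batch = preprocess(image).unsqueeze(0)           # add a batch dimension

    with torch.no_grad():
        probabilities = model(batch).softmax(dim=1)  # scores over the 1,000 ImageNet classes
    score, index = probabilities.max(dim=1)
    print(weights.meta["categories"][int(index)], float(score))

The labels such a model can predict come from the categories that human annotators tagged, which is why the dataset itself was the breakthrough.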
ImageNet was one of the things that helped catapult AI from the lab to industry. Already, AI is remaking sectors from transportation, to agriculture, to... the way I unlock my phone these days just by looking at it.
And Fei-Fei continues to drive the field forward. This spring, she helped launch the new Stanford Institute for Human-Centered AI – or, Stanford HAI as she calls it.
Fei-Fei’s view on AI’s future is both unique and uniquely important. She’s an optimist when it comes to the impact AI can have on humanity, but she’s a realist, too. She believes that if we’re going to benefit from it, we have to be thoughtful in how we are building it. And, Fei-Fei envisions a future in which people from many fields – lawyers, philosophers, doctors – play a hand in helping it evolve. In other words, you don’t need a CS degree to work in AI.
Here’s Fei-Fei.
JESSI: Fei-Fei, when you got to college, you went to Princeton. What did you think you wanted to do at first?
FEI-FEI LI: So I was very interested in physics, because ever since [I was] a kid, I just loved this very magical curiosity about how the world works – the science, the discovery, the big questions of where the universe comes from. But one thing I started to notice around the middle of my college experience is that the great physicists at the beginning of the 20th century – you know, the beginning of the 20th century is like this era of modern physics; we started to crack the structure of the atom, all this quantum physics was born, it is a beautiful era – the great physicists like Einstein and Schrödinger, who are the giants, all started asking about life toward the end of their careers or lives. They were interested in biology, they were interested in human intelligence. And that really caught my attention, because in addition to trying to understand the atomic, physical world, it's the biological and human world that seemed to have more meaning to them toward the end. So I had a shift of interest, from pure physical-world curiosity to the question of intelligence.
JESSI: And so you came out of Princeton. What was the next academic move that you made?
FEI-FEI: A PhD. So I applied to a bunch of graduate schools. The reason I chose Caltech was already telling of my future career: I found a very nice combination in a professor who did what we now call AI. At that time we didn't call it AI – that time was the so-called AI winter. So the name AI was not there.
JESSI: Stop there one second, because I think that's actually an important thing to remember. So you were coming into this next phase of your education in the early aughts, right?
FEI-FEI: Yeah. Yes.
JESSI: And if anybody is listening right now who is paying attention to, you know, the world that we live in – all the talk is AI, AI, AI. But if you were coming into your field in 2000 –
FEI-FEI: Zero zero, nobody talks about it.
JESSI: It was this period that we now call the AI winter, when research was just stalled.
FEI-FEI: Actually, this is where I fiercely protest. The public called it the AI winter, but from a research point of view, it was the most innovative, productive period of research.
JESSI: But still, when we got to the early aughts, you did not go into the popular field.
FEI-FEI: Oh, no, no, no, absolutely not. I mean, there's no popularity. It was something I wanted to do because my curiosity took me there.
JESSI: Well so fast-forward and we're here together this spring. You're a professor at Stanford, you have been at Stanford for a while. You spent years –
FEI-FEI: 10 years.
JESSI: You took a break for a little while to give industry a try – you went to Google for a while – and now you're somewhat newly back. And you opened up a huge, ambitious project this spring.
FEI-FEI: Right. So just this past Monday, Stanford launched our newly established Stanford Institute for Human-Centered Artificial Intelligence, which we abbreviate as HAI, or Stanford HAI. And this is really – the bottom line is, I think we're opening a new chapter of AI. AI has been a field for 60 years, and you just heard a tiny bit of the AI history. There are many other wonderful milestones in that history, from natural language processing and machine translation to self-driving cars to robotics. But for the past 60 years it has been more or less a niche technical field. We managed to establish the foundations, we managed to establish some of the critical methodologies, and we managed to deliver some of the fruits of this research into the real world and see the commercial or technological success. But this new chapter is entirely different. At this moment, we recognize this technology is powerful, general, and it potentially has a sweeping, transformative capability, empowering many industries – from the ones we're familiar with, like tech and the self-driving-car transportation industry, to health care, agriculture, government as an industry, retail, you know, everything – which means it impacts people's lives. (Right.) We're already living that. And as soon as we recognize this new era, we have to be asking deeper and harder questions. Some of these questions have to do with the pitfalls of AI. We're already seeing that algorithmic bias is a huge issue. (Right.) Security, privacy. We're seeing that automation is in people's collective awareness because of the potential impact on job displacement and the labor market.
JESSI: So there are all these aspects of artificial intelligence that we now need to explore (Exactly.) that are outside of the research.
FEI-FEI: Exactly. Or outside of the technical research. So suddenly we realize the new chapter is that AI is no longer just a computer science discipline. It's a multidisciplinary field of study, and we absolutely have to invite humanists and social scientists to join us on this.
JESSI: For many of us, even those of us who are interested in AI, if we're paying attention to mainstream media, we might be a little scared of it. We might think it's going to take our jobs. We might have heard that it could create new jobs, but not really know what that means or whether that could apply to us. So where are the opportunities right now?
FEI-FEI: Right, let me give you some concrete examples, Jessi. Whether you're worried about AI or excited by AI, I think there are a lot of opportunities. If you are a legal scholar or a lawyer thinking a lot about AI in government, there are opportunities to use AI technology to help uncover a lot of information from documents, which would help accelerate research or processing in the legal process. You can also look at the other side of the AI-and-law issue and think about policies having to do with AI. So if you're in the legal industry or business, you have an opportunity. If you are an economist, or someone interested in the societal job impact of AI, there are plenty of open questions we need to study, from the organizational impact of technology, to the labor-market impact of technology, to reskilling workers, to policy, right? (Right.) If you are a sociologist who is thinking a lot about the bias issue, you don't have to be technically coding AI. You can participate in an interdisciplinary way, look at AI’s algorithmic bias, propose solutions, and work with AI engineers. (Right.) We also have artists who use some of the fun image technologies to express art in new ways, because AI technology can unleash some of the creativity that couldn't be released using previous techniques.
JESSI: So when you begin to think about AI more broadly like this, you begin to see lots of front doors into this field.
FEI-FEI: Yes, yes, absolutely. Imagine that this becomes very interdisciplinary. There is going to be a part of core AI technology that is still going to stay – you know, the Stanford AI Lab being where most of the computer scientists and machine learning scientists are, we're going to continue exploring the next-generation technology. We still have a lot of unsolved problems in the technology itself, and we're going to draw inspiration from neuroscience, psychology, cognitive science. But absolutely, whether it's literature or anthropology or economics or law, we see that AI will play a bigger and bigger role, and we want the experts in those fields to come and participate in this.
JESSI: Coming up after the break, we talk AI jobs that may surprise you.
Today’s show is brought to you by Fundrise. Real estate investing is known for a lot of things — mainly making a select group of people a lot of money — but being an online, cutting-edge experience is usually not one of its hallmarks. That’s no longer the case. Fundrise believes it’s the future of real estate investing. Fundrise offers software that cuts out costly middlemen and old market inefficiencies... and delivers the kind of investing power you usually only see at giant institutions, bringing real estate’s unique potential for long-term growth and cash flow to individual investors. Getting started is simple. When you invest, you will be instantly diversified across dozens of real estate projects — each one carefully vetted and actively managed by Fundrise's team of real estate pros. Then, you can use their intuitive investor dashboard and real-time reporting system to monitor the progress of each property within your portfolio. Visit fundrise.com/hellomonday — that’s fundrise.com/hellomonday — to have your first three months of fees waived.
JESSI: Okay, we’re back with our episode on Fei-Fei Li. AI can seem unapproachable. For one, it can be very technical. But, there are so many ways to fashion a career in AI, no matter what your field. This week, our reporter Caroline Fairchild looks into that. Hi Caroline.
CAROLINE FAIRCHILD: Hey Jessi. So this week, I wanted to spend some time looking into what you can actually do to learn more about AI and potentially work with it. The reality is, if you are interested in a technical career in AI, there are some very approachable ways that you can break into that. You need some basic understanding of computer science, of course, but there are companies like Google that need more people with these skills and will give you free courses if you apply. And what do you actually learn in these courses? It's not really that complicated. Yes, the skill and the technicality of AI is complicated, but what you're learning in these courses is things like analyzing data and understanding machine learning models – things that, when you demystify what it really means to work in AI, make it not as unapproachable.
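To give a flavor of what those first exercises can look like – the library and dataset below are assumptions for illustration, not anything named in the episode – a typical starter task is to fit a simple model to labeled data and check how well it predicts:

    # A minimal, hypothetical intro-course exercise (scikit-learn assumed):
    # split a small labeled dataset, fit a simple model, report its accuracy.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)  # a classic toy dataset of labeled measurements
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # "train" the model
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

That is essentially it: split the data, fit a model, measure it. The hard part of working in AI is rarely the syntax.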
JESSI: But what if I don’t have a technical background?
CAROLINE: And that's really where this conversation gets interesting. I spoke with Phil Libin, who runs an AI incubator called All Turtles. And he told me that when AI comes into a field and comes into practice, it's going to make us rethink the infrastructure behind all of our jobs. And what he means by that is that when AI is implemented into different parts of the workforce, we're going to have to think about not just how that impacts policy, but how that impacts government and how that impacts what we do every single day.
JESSI: Right. So I mean, one way that I think about it is: that's the science, and now we're going to apply that to all of the fields, which means we need the experts in those fields to be able to understand it, right?
CAROLINE: Right, so we need people who have sociology backgrounds, who have philosophy backgrounds, who have economics backgrounds to really tell us where we want to implement this technology, and when. The example Phil gave me is, let's just think about a mailman. The actual function of what a mailman does every day is deliver your mail. Could AI in effect replace that function? Of course. But we need thinkers, people with sociology backgrounds, philosophers, to think about, well, what are the implications of AI taking over that job? What are the things that mailmen do that have nothing to do with their actual job? Well, they're providing social stimulation to the elderly population who can't get out of their houses. They're orderly citizens who can make neighborhoods safer. Are these things that AI can do? No. And if AI's not doing them, who is? So do we really want to replace that job with AI? And if you just take that one example and think of any job, that's where it gets interesting and that's where we need people thinking about AI. Not just scientists, but everyone.
JESSI: So that makes a lot of sense theoretically, Caroline. And what I want to know is: are people actually hiring for these jobs right now?
CAROLINE: There are. There are organizations right now, think tanks, that are looking for these people and trying to solve these problems. And they're thinking about this in four main buckets. Rights and liberties: how is AI going to change human rights? Labor and automation: what parts of a job could AI do that we may or may not want it to do? Bias and inclusion, which Fei-Fei talked to us a lot about: are we asking the right questions to make sure that we're not feeding our bias into these machines? And then there's safety: we want to make sure that when AI is implemented into certain fields, it's done in a safe and well-regulated way.
JESSI: So all the big questions.
CAROLINE: Right, all the big questions. So that leads us to you, our listeners. We want to know: how do you think AI is going to impact your job? Send us a voice memo with your observations at [email protected] and we might feature your response on next week's show.
JESSI: I love that idea. I would love for people listening right now, whatever it is that you do during your day, whether you're a hairdresser or a writer or a lawyer, send us a voice memo at [email protected], and let us know. What do you think is going to happen to the thing that you do? And what are you going to do to stay ahead? Thank you so much, Caroline.
CAROLINE: Thanks, Jessi.
JESSI: Now, back to our conversation with Fei-Fei Li.
JESSI: As somebody who came up in the science, you've also been very committed to getting more underrepresented people involved with the foundational science aspect of AI. So I want to start by asking you: what was your experience coming up as a woman and a person of color in a field that is predominantly white men?
FEI-FEI: Yeah, that's an interesting question, Jessi. First of all, I think many of my women and fellow underrepresented minority engineer and scientist friends probably will tell you that what keeps us in this field is our passion, right? Like, starting as a kid, I was just passionate to be a scientist. So that is still the main experience: we love it. We find satisfaction and reward in doing the science. But of course, we look around and we don't see many who are similar to us. For a long while, I was the only woman faculty at Stanford's AI lab. But now it's wonderful – now we have so many younger women joining us on the faculty. So I'm very excited. But that did feel lonely.
JESSI: So where are the unexpected places that you might notice it?
FEI-FEI: For example, I remember I was pregnant and going through a lot of the experience of being a pregnant mother, but I was still teaching classes, I was running my lab. And it's not like I could share that with anybody. Right. And my colleagues threw a baby shower for me for my pregnancy. On one hand, I felt extremely touched. On the other hand, I was wondering, hmm, I haven't been to any baby shower in this department. Am I the only one getting one? Yeah. So why don't we do that for men? Should we do that for a man? I don't even have an answer to that. So, you know, you're conscious about this. And I think it must also be interesting for students to experience, because they don't see too many pregnant AI professors teaching classes. Right. So it's a –
JESSI: That’s a great example. So as you were coming up in this field, it was always very important to you to mentor other young women and people of color. And a couple of years ago, you started AI For All. So tell us what AI For All is.
FEI-FEI: So first of all, I co-started it. My colleagues were Olga Russakovsky, who's an assistant professor at Princeton, and Dr. Rick Summer, who is the director of the pre-collegiate studies program at Stanford. What happened was, five years ago, Olga and I were talking about her desire and my desire to help young women get more involved in AI early, because she was about to graduate from Stanford as a PhD student, and we were experiencing the lack of representation. But I also suddenly had an epiphany, and that was really important. I connected the lack of diversity in our field with another issue I was really worried about, which is the future of AI. Five years ago, it was post-ImageNet success, 2012, so suddenly AI is in the public consciousness and people are starting to talk about it. But the conversations about AI were worrying people. Right. And –
JESSI: It was very Terminator, robots.
FEI-FEI: Yes. And I was thinking, how do we connect a brighter, more positive possibility for AI's future for humanity with what's wrong there – with what went wrong in this lack of diversity? I suddenly realized we're not educating our students and the young generation about AI in the right way. We talk about AI as if it's only a field for geeky coders. So we inspire young people who think they want to be coders and hackers, but there are these tremendously talented young people who have a human mission, who want to make the world a better place, whether it's the environment or healthcare or policy. And of course they're not attracted to AI, because this field doesn't invite them. Nobody talks about it that way. And it happens that people from all walks of life, whether they're women or underrepresented minorities, have diverse interests. Right? And so we now are in this – what do you call it? Catch-22 or –
JESSI: Yeah, it's a Catch-22. You can't see the diverse interests unless those people go into it.
FEI-FEI: Right. And then we're not even delivering the right message to invite them. So we connected the dots. We realized that in order to inspire more diverse students to join AI as a field and a technology, we have to talk about its human mission. We have to elevate its human mission and its human-centeredness. So we started a precursor of AI For All, which was Stanford's summer camp for AI, for a couple of years, inviting high school girls to join us to spend two years – sorry, two weeks – on campus, studying AI, doing hands-on AI research, discussing and experiencing the human-centered topics of AI. And it was so wildly successful. By 2016, applications were flying in from all over the world, and there were literally students coming with their parents, staying in hotels, in order to attend the Stanford camp.
JESSI: So what is it about the camp? Like, what would a young person going through that two-week camp do?
FEI-FEI: So here's how we did it. We don't assume they know anything about AI. We actually don't even assume they would be able to code, but we assume that they're passionate and curious. So they listen to different kinds of lectures. As a field, I like to call AI a salad bowl, because it has many different subfields, from robotics to computer vision to natural language processing and computational genomics. So they get exposed to different areas of AI. But what's more interesting is that every student belongs to a research group. And for those two weeks, they have to learn some basic coding and participate in a research project. And what is really important for us is that these research projects have a human meaning. For example, the self-driving car team, right? You can talk about it as a piece of gadgetry – they use these little bots that can roam around – but we contextualized it into an aging-population, assistive-technology context. So not only did they need to code self-driving car algorithms, they would be doing that in a hypothetical scenario of getting medicines from the drugstore for aging seniors.
JESSI: Well, of course it's good on its face to broaden the diversity among the people who are actually creating the foundational science. But there's this other reason that I think it's particularly important in artificial intelligence and it is connected to the idea of bias and the fact that the people making the tools will make them to some degree in their image. And so if you want tools that serve everybody, you need everybody to make them. (Yes.) And I think that's just really different than wanting diversity in another type of field. Because without diversity in AI, AI can't deliver on the promise for humanity. And in fact it could become very disruptive.
FEI-FEI: I agree. AI is an interesting technology in that it has a lot of resemblance to humans, right? Because it gets close to decision-making and information understanding and all that. We absolutely have to make sure we're aware of these pitfalls, and especially bias. I also have to say, with technology itself, throughout the history of human civilization, we have made mistakes and we have to correct them, right? In medical science research a hundred years ago, clinical studies were all men, probably white men, and that means drug efficacy studies were not diverse enough to be truly fair to women and people of color. You know, I'm not a medical expert, but mistakes have been made. And we have to correct that.
JESSI: Do we have the tools we need right now to act on existing algorithmic bias?
FEI-FEI: Yes and no. The machine learning community is feverishly working on this topic – many of my Stanford colleagues, as well as colleagues across the machine learning community, are on this. So I'm very hopeful to see that technologists are taking this so seriously. And there are statistical methodologies that look at de-biasing data and all that. But there are also places where – first of all, this is still early. There are datasets that are already biased and potentially being used. So we have to get on this, and Stanford alone cannot do that. We don't even know where all these things exist. So we have to raise public awareness, we have to make sure everyone is honest, and we have to make sure commercial companies feel the incentive to participate in this. So there's still a lot of work.
JESSI: And an example of bias – it's just occurring to me, so that I know what I'm talking about when I say AI bias – what would an example of bias in action be?
FEI-FEI: So, a very famous example: probably almost every Silicon Valley tech company has made mistakes in their image recognition algorithms for faces. For example, a face recognition algorithm not seeing all humans correctly. There are humans with different skin colors, and face detectors are not performing uniformly at the same level. So that's a very salient example that everybody still talks about and that I want to avoid.
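The kind of disparity Fei-Fei describes is usually made concrete by comparing a model's error rate across groups. Here is a hedged sketch in Python; the helper function and the detection results below are purely hypothetical, not drawn from the episode or any real benchmark:

    # Hedged sketch: per-group accuracy as a simple check for uneven performance.
    from collections import defaultdict

    def accuracy_by_group(y_true, y_pred, groups):
        """Return accuracy computed separately for each group tag."""
        correct, total = defaultdict(int), defaultdict(int)
        for truth, pred, group in zip(y_true, y_pred, groups):
            total[group] += 1
            correct[group] += int(truth == pred)
        return {g: correct[g] / total[g] for g in total}

    # Hypothetical face-detection outcomes (1 = face found), tagged by skin-type group.
    y_true = [1, 1, 1, 1, 1, 1, 1, 1]
    y_pred = [1, 1, 1, 1, 1, 0, 0, 1]
    groups = ["lighter"] * 4 + ["darker"] * 4
    print(accuracy_by_group(y_true, y_pred, groups))  # a large gap is what auditors flag

Real audits do the same comparison, just on documented benchmarks and with far more care in how the groups are defined.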
JESSI: What do we need to do to make sure that these products are ethically sound, so that they're serving us well?
FEI-FEI: Yeah. So, Jessi, that's a really good question. I think responsible technology, ethical AI, is now starting to be a very important topic. And this is exactly what Stanford HAI is hoping to do: part of our research and mission is to figure out what this looks like. I'm not a trained ethicist, I'm still learning, but I know there's a lot of nuance that we need to think about, starting from the people developing it and the decision-makers leading it, all the way to the design of the product and the algorithm, to the delivery, to the messaging, to the communication with the users. The whole pipeline has so many issues and touch points that we need to take care of. And Stanford HAI is definitely calling for more such research and participation. We will try our best, but we cannot be the only entity – I hope companies and other research institutes and governments are participating in that. We're in an era of AI that's shaping humanity's future. And I think that realization brings a personal sense of responsibility: we are the generation that is seeing this technology migrate from our labs to the world. And we want to be here participating, making sure it is a benevolent force and making a positive change.
JESSI: Next week on the show: Adam Grant is an organizational psychologist who teaches at the Wharton School of Business. He’s written books and articles about how to chase meaning, how to stay motivated, and how to share with others. Through it all, he’s learned a few things about how he uses his own time.
ADAM GRANT: And so I've come to think about it much more in terms of work-life rhythm. If you think about a year as a song with a bunch of different verses, I'm going to have several days a week that are totally family focused. And then my work days are the opposite.
JESSI: If you enjoyed listening, subscribe, and rate us on Apple Podcasts – it helps new listeners find the show. And, remember, we’d like to hear from you! How do you think that AI will impact your job in the future? Tell us about it. Send a voice memo to [email protected].
Hello Monday is a production of LinkedIn. The show was produced by Laura Sim, with reporting by Caroline Fairchild. The show was mixed by Joe DiGiorgi. Florencia Iriondo is Head of Editorial Video. Dave Pond is our Technical Director. A special thanks this week to listener Michelle White, who sent us a voice memo about how she manages her own creative process after listening to our Liz Gilbert episode:
MICHELLE WHITE: So one of my practices is to actually limit my technology time, which sounds kind of counterproductive, because I write a lot. But I find that when I limit my time on technology – so I don't have social media apps on my phone, I don't have notifications set – I can focus and be creative when it's time to be creative.
Our music was by Podington Bear and Pachyderm. Dan Roth is the Editor in Chief of LinkedIn.
I’m Jessi Hempel, thanks for listening.
[CODA]
FEI-FEI: Podcasting is very relaxing.
JESSI: Yeah. You think so?
FEI-FEI: I think so. No?
JESSI: I mean, I love it. It works best when I forget that the mic is even here and we just start chatting. And that's a wonderful thing. And that's also when they're the most lovely to listen to, right? (Yeah.) Do you listen to any podcasts?
FEI-FEI: The thing is most people do that when they're driving, but I bike to work.
JESSI: Yeah.
FEI-FEI: So I'm also, I'm blessed to bike a short distance, so it's not like I have a lot of time. But I'll start listening to yours.
JESSI: You should know Fei-Fei, that not many people in the Bay Area complain that their commute is too short.
FEI-FEI: I know, I'm blessed. I know.