20+ Pieces Of Advice From AI Experts To Those Starting Out In The Field


Following on from our previous expert-led series, we asked our community of AI experts what advice they would give both to those starting out in their careers and to those considering joining the field.

Below we have contributions from those working in both academia and industry settings, all at the forefront of AI development. What pieces of advice would you add?

TL;DR available in the footer.

Alexia Jolicoeur-Martineau, PhD Researcher, MILA

Here are some general pieces of advice. They are a bit more research-oriented, but the point about perfectionism applies even to applied positions.

  • Don't over-obsess about having everything 100% perfect. Develop a getting-things-done attitude. Academic perfectionism is the biggest plague in academia.
  • Don't expect your research to speak for itself. Promotion (on social media) and dissemination (with blog posts) are important.

Jane Wang, Senior Research Scientist, DeepMind

Don't chase after the hottest trends or the biggest splashes, as these areas will have the most competition and also will likely be superseded quickly anyway. Think about what kinds of problems you're most interested in solving, and what problems are likely to make the most impact if solved. The first involves being aware of what kinds of work you like doing (programming, theorizing, playing with real-world data, etc), and the second involves looking around and being informed about how the rest of the world lives. It's important we don't silo ourselves off in a bubble in this field, because this technology is making and will continue to make a huge impact on everyone in the world. Try to figure out what kind of role you want to have in that impact, and how to actively shape that impact to be beneficial for everyone.

Abhishek Gupta, Founder, Montreal AI Ethics Institute

For those who are just getting started in an AI role, my primary advice would be to take a measured approach in believing AI to be a magic bullet that solves everything. As reality and business constraints emerge, an emerging practitioner will realize the role that simplicity plays in achieving real-world deployments and how taking an engineering and scientific approach to the deployment of the systems is more important than chasing every shiny new technique that they see on the internet. A deep understanding of the societal impacts of their work is also something that they should consider as a part of their work. And finally, thinking about the value of collaborating with domain experts should be something that is front and center for an emerging practitioner. I have seen far too many new practitioners apply AI skills to projects where there is no domain expertise involved and it inevitably leads to non-meaningful outcomes that diminish the value of the time and effort that goes into creating a project and can also result in harm for the people who are the subjects of that system.

Andriy Burkov, Director of Data Science, Gartner

1. Learn the foundations. "Hands-on" alone, without an understanding of the underlying math, will not let you become the best in this profession. Today, and especially in the next 5-7 years, the tools will become so mature that only your imagination will count. In AI, you cannot imagine anything meaningful if you don't know how the machine "thinks." Take a sculptor, an architect, or a painter. The best of them know everything about the tools and materials they work with. The same is true for AI.

2. Go where the data is. AI is nothing without data, and neither are your talents.

Jekaterina Novikova, Director of Machine Learning, WinterLight Labs

My first piece of advice goes to people at the very beginning of their career, who are looking for their very first AI-related role. The best way to find a job is to be able to demonstrate your skills and experience. So please show your capabilities and your interest in AI through personal projects, course assignments, GitHub repositories or something else you can share online. Create a portfolio that illustrates how you can apply in practice the skills learned during your academic courses. It will serve as evidence of which tools you are familiar with and will demonstrate your competency. In addition, this portfolio of projects will help you be more confident and convincing when you need to describe your experience in interviews.

And my second piece of advice would be: don't get attached to any specific tool or technology. The AI field is changing rapidly, and in your day-to-day work it's much more important to be able to learn new skills fast and to stay aware of the changes that are happening every day.

Alireza Fathi, Senior Research Scientist, Google

The two main things I can think of that lead to a successful career in AI are the following: (a) strong math and coding fundamentals, and (b) staying on top of the most recently published papers in the field. I think a strong background in algebra, probability and statistics, and machine learning, combined with powerful coding skills, is critical to the long-term success of an AI scientist. Sometimes (like now) the field becomes more practical and result-oriented, where coding is a necessary skill for implementing ideas and running experiments quickly and efficiently.

But there will always be times when things become more theoretical and fundamental, where having a strong math background can make a big difference. Staying on top of the AI literature is another very important skill. Unlike most other established fields, where one can learn a lot from books, AI is a very new field that is evolving day to day. Methods that were achieving state-of-the-art performance six months ago may already be obsolete. Thus, being able to quickly read papers, understand them and position them in the large body of previous work is a necessary skill for a successful AI scientist.

Sarah Laszlo, PhD, Senior Neuroscientist, X, The Moonshot Factory

1) Don't judge yourself for not knowing something.

One of my team's principles for working together is that we all agree that "Not knowing something only means not knowing something." It doesn't have any other meaning: it doesn't mean you are stupid; it doesn't mean you are not trying hard enough; it doesn't mean you aren't good at your job. It only means that you don't know that particular thing. No one can know everything, and you shouldn't judge yourself because you don't.

Which brings us to #2:

2) Don't work with people who judge you for not knowing everything.

Don't work with people who make you feel like you don't belong, aren't smart enough, or can't do this job. To the extent that you get to choose which teams you work on, gravitate to teams where not knowing something is seen as an opportunity to learn something new, rather than a strike against you.

In my experience, a good sign of a good team is an environment where questions are not only welcome, but encouraged. When a complicated topic is raised, do people ask questions, or sit silently with a grim look of determined understanding on their face? Is the environment welcoming for questions and curiosity, or do team members seem embarrassed to ask questions? Do team members make an effort to explain complicated topics? Does the team value presentations that everyone in the audience understands? Wherever you can, you should require the teams that you work on to create an inclusive environment where everyone’s curiosity is respected and valued. It is possible to work in the AI field without constantly feeling impostor-syndrome dread; don't accept it as the norm.


Frankie Cancino, Senior AI Scientist, Target

Landing your first AI role is electrifying. A career in AI comes with many challenges, but it allows for innovation and exciting possibilities. My first piece of advice – don't forget the basics and what got you there. It is easy to jump at the newest tech and methodologies. However, building a solid foundation of skills that can be applied across many domains will support every additional building block. These skills include writing code, statistics, probability theory, and linear algebra. This leads into my second bit of advice – never stop learning. Continue to put yourself in situations that will require you to learn.

If you follow the first bit of advice, you will set yourself up for success and give yourself flexibility in implementing solutions. Keep in mind, the mathematical knowledge alone may not be enough. You will likely need to develop some domain expertise in the area you decide to pursue. Other skills, such as writing (good) code, building for scalability, delivering an excellent user experience, and learning from past mistakes, are important to develop. Since technology continuously evolves, you will have to keep applying this learning mindset to keep up. Something extra I will leave you with – remember your why. As with any job or career, it rarely goes as smoothly as you would hope. Personally, I find work in AI fascinating, interesting, and fun. You may have a different reason for entering the field of AI. Whatever your reason may be, it's good to remind yourself of it on occasion.

Jeff Clune, Research Team Leader, OpenAI

When speaking to Jeff earlier this year, I asked him what advice he would give to someone starting a career in AI or data science. His answer was simple (mainly due to the quick-fire question format of our interview), but it is one that leaves you pondering: "To quote the words of Wayne Gretzky, the greatest of all time, skate to where the puck is going, not where it is now."

Anirudh Koul, Machine Learning Lead, NASA 

Learn by building projects: Everybody learns differently, but you need excitement, you need the motivation to keep learning day after day - after the glamour of AI buzzwords dies out. So what better way to learn than to take something relatable and build it in a few lines of code using high-level APIs? As you start getting comfortable, begin to look at the underlying theory and improve your breadth and depth of knowledge step by step.
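To make the "few lines of code using high-level APIs" idea concrete, here is a minimal sketch using the Hugging Face transformers pipeline. The library and task are my own illustrative choices, not something Anirudh prescribes; any high-level API (Keras, fastai, a cloud vision service) would serve the same purpose.

```python
# A minimal "learn by building" sketch: a working demo in a few lines using a
# high-level API (the Hugging Face transformers pipeline). The model is the
# library's default for the task; swap in whatever task interests you.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a pretrained model
print(classifier("I finally shipped my first AI side project!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

Once a toy like this works end to end, digging into how the pretrained model actually makes its prediction is a natural next step.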

Training is just half the battle. Some questions to ponder when you build any project:

  • What would your complete end-to-end pipeline look like?
  • How would you build a cloud API to serve the web frontend?
  • How do you scale from hundreds of images to millions, or even billions?
  • What would be the cost involved in scaling this up?
  • How would you evaluate performance metrics, e.g. latency, and monitor accuracy for model drift?
  • How would you process the new incoming data? When would you retrain the model?
  • While scaling up, how do you make your network and pipeline efficient? How do you reduce the floating-point computations in your network? How would you reduce the size of the embeddings while still keeping the same representative power? (See the sketch after this list.)
  • What could be potential sources of bias?

An experienced AI practitioner asks these questions from the get-go, so building end-to-end projects teaches you industry-relevant skills early on.
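To make the efficiency questions above more tangible, here is a hedged sketch of two common levers, assuming a PyTorch model and a matrix of float32 embeddings. These are illustrative choices on my part, not prescriptions from the original text, and neither is the only (or necessarily best) option for a given pipeline.

```python
# Two common efficiency levers: int8 quantization to cut floating-point
# compute, and PCA to shrink embeddings while keeping most of their signal.
import numpy as np
import torch
from sklearn.decomposition import PCA

# 1) Reduce floating-point computations: dynamic int8 quantization of the
#    Linear layers in a toy model (matmuls run in int8 at inference time).
model = torch.nn.Sequential(
    torch.nn.Linear(512, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
)
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# 2) Reduce embedding size: project 512-d vectors down to 128 dimensions.
embeddings = np.random.randn(10_000, 512).astype("float32")  # placeholder data
pca = PCA(n_components=128).fit(embeddings)
compressed = pca.transform(embeddings)
print(compressed.shape)  # (10000, 128)
```

In a real pipeline you would measure latency, memory, and accuracy before and after each change rather than assuming the compression is free.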

Lofred Madzou, AI Project Lead, World Economic Forum

Over the past decade, artificial intelligence (AI) has emerged as the software engine that drives the Fourth Industrial Revolution, a technological force affecting all disciplines, economies, and industries. As an AI engineer or data scientist, you will be one of the architects of the digital society that we're all going to inhabit. If you work in the industry, the very decisions that you make may end up determining who gets access to credit, who is being recruited, or who is being diagnosed with cancer. These are highly consequential decisions and the processes through which they are made must be carefully designed. Policymakers are likely to pass legislation to prevent potential misuse of AI soon, but that's also your responsibility.

Indeed, as technologists, you must consider the ethical implications of your work and take action to mitigate the potential adverse effects of your AI systems on consumers, citizens, and society at large. To this end, you should do two things. First, you need to integrate responsible AI practices into your machine learning workflow. There is a growing set of tools available online to help you define, design, audit, and deploy trustworthy AI systems. Second, you need to communicate effectively about the capabilities and limitations of your systems to various non-technical stakeholders. For instance, risk and compliance managers need non-technical explanations about which regulatory requirements are met or not by the model that you built. Is your training set sufficiently representative in terms of gender or ethnicity to prevent potential discriminatory outcomes? Does your model enhance or undermine end users' privacy? And so forth.
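As one very small example of the kind of check Lofred describes, here is a minimal sketch that reports how well demographic groups are represented in a training set. The DataFrame and its "gender" column are hypothetical, and a real audit goes well beyond simple counts, typically with dedicated responsible-AI tooling and domain expertise.

```python
# A toy representativeness check: what share of the training data does each
# demographic group make up?
import pandas as pd

# Hypothetical training data; in practice, use your real training set and the
# attributes relevant to your use case.
train_df = pd.DataFrame(
    {"gender": ["female", "male", "male", "female", "male", "male"]}
)

# Badly under-represented groups are a warning sign for potential
# discriminatory outcomes downstream.
representation = train_df["gender"].value_counts(normalize=True)
print(representation)
```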

Kavita Ganesan, Founder, Opinosis Analytics

Here are two things that I’ve found to be very useful in my career:

AI is a broad field and there are already many "generalists" out there. Instead of trying to become a master of everything, gravitate towards the areas that most interest you. That could mean mastering a specific sub-area like computer vision while retaining a breadth of understanding in other areas. Or you can become a master of AI tools, if that's what interests you. Become known for being really good at something.

Practice makes perfect. A lot of people talk about AI; very few actually take ideas from conception to reality to serving customers. If you want to make an impact, you need to practice delivering solutions that can be used in a real-world setting, and this comes with a tonne of practice. Mastering the theory alone is just not enough.

Muhammed Ahmed, Data Scientist, Mailchimp

Tip #1 - Learn how to give and receive feedback. Learning how to give and receive honest feedback was one of the most beneficial things I've learned in my career. On my data science team, we embrace the feedback approach covered in the book Radical Candor by Kim Scott. In summary, a team should practice frequent and honest feedback that is:

  1. Actionable. Because no one benefits from comments like “Your notebook isn’t very good,” or “That’s a terrible plot.”
  2. Pragmatic and respectful by mastering the art of thoughtful disagreement.
  3. Established as normal by making it a habit in the office.

Most importantly, all constructive feedback should be well received. You can accomplish this by letting your commenter feel your enthusiasm and accepting attitude, so that they’re comfortable voicing their honest opinions. That way you get the most direct feedback.

Tip #2 - Be mindful of data ethics. Early on in your career, you're just trying to get the hang of things. It can be hard to pay attention to ethics, but now is a more important time than ever to establish those habits. As an AI practitioner, the models you deploy will be used to automate the mundane, boost revenue, drive business decisions, and also impact human lives. Many times, we don't realize the power that we have as practitioners and the implications that our models could have downstream. Your models could be the reason why political disinformation is spread, a person's health care gets cut, someone is accused of being a criminal, or a person is denied college acceptance. We can reduce these risks by being mindful of common ethical issues like bias, fairness, feedback loops, and the risks of over-emphasizing metrics. We can also exercise precautionary measures like holding an ethical pre-mortem before deploying a model that is expected to have a high impact. These are all topics that Rachel Thomas covers in her course on Practical Data Ethics, which I highly recommend!

Puneet Dokania, Senior Researcher, University of Oxford

This is something I enjoy discussing with people around me. If the aspiration is to become an independent researcher, my first piece of advice would be to spend some time on a daily/weekly basis reading standard textbooks (machine learning, optimization, statistics etc.) while implementing the latest papers of your interest on your own. With time, this will give you the right perspective and the ability to understand the underlying principles driving the field. AI will change its form over time but the fundamentals will remain more or less the same. Second, there are too many AI buzzwords around and it's very easy to get distracted. I'm not proud to say it, but I also struggle with this most of the time. Focus on one topic for at least 5-6 months before moving on to a new one. My third and last piece of advice (a bonus): a top-down approach to finding a problem to work on might be more helpful than a bottom-up one. What I mean is, think of a high-level problem that you would like to solve (eventually), and then go deeper to find suitable subproblems to work on. Also make sure that you choose a project that is at the intersection of your interests, the expertise of the people around you, and your own skill set.

Hilary Nicole, Information System Analyst, Google

When I was asked to author this piece for the Re-Work Expert Blog Series on advice for someone beginning their career in AI, I was a bit hesitant. A writer at heart with an analytical mind, I am enthralled and often equally appalled by the socio-technical systems deployed throughout the wild of our everyday lives. Currently at Google, I work on a collaborative team that utilizes an equity and justice lens to critically deconstruct machine learning algorithms and artificial intelligence systems. I am fortunate to have had a richly diverse and vibrant intellectual community of colleagues to guide me as I came into the field. Individuals who daily serve as a source of inspiration and education in how I approach this work. I am grateful to a couple of them for taking the time to sit down with me for this piece.

A self-proclaimed nerd, Alex Hanna, who serves as a Research Scientist on Google's Ethical AI team, always knew she wanted to work with computers. Alex credits her background in social movements and involvement in labor activism with ultimately driving her to the deep learning and AI space. What struck me in speaking to Alex was the life-long commitment to interdisciplinary collaboration and action-oriented education that informs her work at the nexus of Ethics, Data and AI to this day. It wasn't more knowledge of programming languages and data architecture she wished she'd had, but rather knowledge of critical race theory, trans and gender studies, and broader exposure to science and technology studies that is integral to her daily work.

Jamilia Smith-Loud, a UX Researcher on Google's Ethical Machine Learning team, provided similar insight during our chat. Coming from an applied policy and civil rights background, Jamilia's work looks at the gravity of the impacts that policy administration and AI technology have on the lived reality of communities and users. One piece of the interview that really stood out to me was the importance she placed on seeking out multi-disciplinary expertise and viewpoints in the development of new technologies. Highlighting that many, but not all, problems can be measured quantitatively, Jamilia eloquently speaks to the need for mixed-methodological approaches in examining our work. The importance of using qualitative and ethnographic research approaches to understand the harms to individuals and communities that can come from the built environment is an area she would like to see greater awareness around.

In my own experience, and coming to this work largely by accident after one of LinkedIn's recommender algorithms promoted my profile to a tech recruiter, I'd echo these sentiments. One of the most significant net positive impacts on my short time in the field has been the ability to work with and learn from folks coming from all different disciplines. Acquiring the hard technical skills of an engineer or the strategic view necessary to guide the ideation, build and iterative development of a product is only one aspect of being successful in AI. The other, equally if not more critical, is to seek exposure to divergent perspectives and examine how the frontiers you push in a build may rub up against the spaces of people's lives. It means being humble enough to back off when the harms to the few outweigh the marginal benefits to the many, and being willing to confront the context of your own ignorance and privilege in the process.

We're extremely lucky to have the individuals above sharing some of the lessons they have learnt over their careers. Do you have anything to add? Make sure to add any thoughts in a comment on the original post!

Short TL;DR

  • Learn the foundations
  • Don’t over obsess on having everything 100% perfect
  • Do not be afraid to share your work with others
  • Give and receive feedback
  • Don’t judge yourself for not knowing something
  • Don’t forget the basics and what got you there
  • Take a measured approach in believing AI to be a magic bullet that solves everything
  • Don’t chase after the hottest trends or the biggest splashes
  • Ensure that you stay on top of AI literature
  • Be extremely mindful of ethics when working with data






Tom Allen

Having fun building The AI Journal

4y

Loved this, Nikita Johnson - thanks so much for sharing!

Anirudh Koul

Author of O'Reilly's Practical Deep Learning Book | AI@Pinterest & ML Lead @NASA FDL | UN/TEDx Speaker | Seeing AI founder

4y

Reading this definitely reduces imposter syndrome and boosts confidence, even for non-beginners. The care taken in writing some of these insights is exceptional, sharing it with my students! (Bonus points for the bold summaries).
