To AI, or to Not AI: When not to use AI to solve a problem?

"Worrying about an AI takeover is like worrying about overcrowding on Mars." — Andrew Ng, ML researcher

Today’s AI is not exactly harmless, though. Suppose a Silicon Valley startup offers to save companies time by screening job candidates, claiming to identify likely top performers by analyzing short video interviews. Here are four warning signs that this is not a good problem for AI to solve.

Warning Sign #1: The Problem Is Too Hard/Broad

  • The thing about hiring good people is that it’s genuinely difficult: even experienced human interviewers struggle to identify good candidates.

Warning Sign #2: The Problem Is Not What We Thought It Was

  • We aren’t really asking the AI to identify the best candidates. We’re asking it to identify the candidates who most resemble the ones our human hiring managers liked in the past, because past hiring decisions are the examples we give it to learn from.
  • The AI doesn’t know what hiring means, or even what a candidate is; its world consists only of the data we feed it, and that data is not the real world.
  • Plenty of bad or outright harmful AI programs were designed by people who thought they were building an AI to solve one problem but were unknowingly training it to do something entirely different.
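The gap between "find the best candidate" and "imitate past decisions" can be sketched in a few lines. This is a hypothetical illustration with synthetic data: the feature names (skill, a favored school) and the toy single-feature learner are invented for the sketch, not taken from any real hiring system.

```python
# Hypothetical sketch: a model trained on past hiring decisions learns to
# reproduce those decisions, not to find the best candidates.
import random

random.seed(0)

# Each candidate: (skill score, went_to_favored_school flag).
# Suppose past managers hired by school, not by skill.
candidates = [(random.random(), random.random() < 0.5) for _ in range(1000)]
past_hired = [school for skill, school in candidates]  # labels = old decisions

def accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# "Training": pick whichever single feature best matches the labels.
skill_preds = [skill > 0.5 for skill, school in candidates]
school_preds = [school for skill, school in candidates]

best_feature = max(
    [("skill", skill_preds), ("school", school_preds)],
    key=lambda kv: accuracy(kv[1], past_hired),
)[0]
print(best_feature)  # → school
```

The "model" ends up keying on the school signal, because that is what perfectly explains the labels it was given; actual skill is irrelevant to its objective.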

Warning Sign #3: There Are Sneaky Shortcuts

  • A convenient shortcut for predicting the “best” candidate: prefer white men. That’s a lot easier than analyzing the nuances of a candidate’s choice of wording. The model might even key on camera metadata and favor candidates who recorded with a particular camera.

AIs take sneaky shortcuts all the time - they just don’t know any better!

  • Another example: an AI tasked with identifying cancer cells skipped the hard part (distinguishing cancerous cells from healthy ones) and instead learned to look for a ruler in the picture, because the training images of cancer cells included a ruler to indicate scale.
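The ruler story can be reproduced with a toy experiment. Everything below is synthetic and invented for the sketch: a “ruler present” flag perfectly predicts the label in training, so a lazy learner prefers it over the real (noisier) signal, then fails once the shortcut disappears.

```python
# Hedged sketch of shortcut learning with synthetic data: in training,
# rulers appear only in cancer images; at test time that correlation
# is broken, and the shortcut-based rule collapses to chance.
import random

random.seed(1)

def make_data(n, shortcut_matches_label):
    rows = []
    for _ in range(n):
        cancer = random.random() < 0.5
        # Real signal: noisy cell measurement, only weakly informative.
        cell_size = (0.7 if cancer else 0.3) + random.gauss(0, 0.4)
        ruler = cancer if shortcut_matches_label else (random.random() < 0.5)
        rows.append((cell_size, ruler, cancer))
    return rows

train = make_data(500, shortcut_matches_label=True)
test = make_data(500, shortcut_matches_label=False)

def acc(data, predict):
    return sum(predict(s, r) == y for s, r, y in data) / len(data)

# A lazy learner: pick whichever rule scores higher on the training set.
ruler_rule = lambda size, ruler: ruler
size_rule = lambda size, ruler: size > 0.5

chosen = max([ruler_rule, size_rule], key=lambda f: acc(train, f))
print(acc(train, chosen), acc(test, chosen))  # perfect in training, ~chance at test
```

The learner “wins” in training by choosing the ruler flag, exactly because it never knew what a ruler, a cell, or cancer is; it only knew which feature matched the labels.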

Warning Sign #4: The AI Tried to Learn from Flawed Data

  • Garbage in, garbage out.
  • If the AI’s goal is to imitate humans who make flawed decisions, perfect success would be to imitate those decisions exactly, flaws and all.
  • Warning signs 1 through 3 are most often evidence of problems with the data itself.

Doom or Delight

The difference between successful AI problem solving and failure usually has a lot to do with the suitability of the task for an AI solution.
