AI and Bias: Don't Ask Steve Jobs How to Make Rice Pilaf

To many of us, the impact of AI is abstract until we see it in our own lives. At the same time, as we learn more, we will gradually envision how we can benefit from its obvious power. One key thing to remember is that AI should reflect human values, and those values should determine how we choose to design it. If we design AI so that we maintain control, we shouldn't be threatened by it, or, at least, we can limit our fear of where it might take us.

In terms of decisions and goals, I see AI as a human-designed system that makes decisions in pursuit of a goal. AI can take different forms: Assisted, Augmented, and Autonomous.

  • Assisted AI helps humans reach goals but makes few decisions on its own.
  • Augmented AI mimics human intelligence, emotions, and decisions (sometimes called Intelligence Amplification, or IA); here the AI and the human can prompt each other.
  • Autonomous AI makes decisions that carry out a human goal within a scenario, independent of human input.
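The three forms above can be sketched as a small taxonomy. This is a minimal Python sketch; the names (`AIForm`, `decision_maker`) are my own, not an established API:

```python
from enum import Enum

class AIForm(Enum):
    ASSISTED = "assisted"      # helps humans reach goals; few decisions of its own
    AUGMENTED = "augmented"    # mimics human intelligence; human and AI prompt each other
    AUTONOMOUS = "autonomous"  # carries out a human goal without human input

def decision_maker(form: AIForm) -> str:
    """Who makes the decisions at each level of autonomy."""
    return {
        AIForm.ASSISTED: "human",
        AIForm.AUGMENTED: "human and AI together",
        AIForm.AUTONOMOUS: "AI, toward a human-set goal",
    }[form]

print(decision_maker(AIForm.AUTONOMOUS))
```

The point of the taxonomy is the last entry: even fully autonomous decisions trace back to a goal a human set.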

Let's fast-forward a bit. The future holds two big challenges for AI: removing bias and keeping people cognitively engaged.

Here are some of my thoughts and caveats on the future of AI.

AI and Testing

Our education system will likely find that tests need not be a matter of Scantron bubbles or written essays. Those methods have been widely used because of the logistics and expense of person-hours; in other words, we accepted them as a means to assess learning because of economic limits. A teacher cannot interview each student. In retrospect, these methods were a temporary solution until AI came along. We can design AI to test better than humans can.

AI makes available to us a better testing process, one that focuses on verbal critical thinking and retention of the content. For example, facial recognition can confirm the student's identity, voice recognition can transcribe the responses, and artificial intelligence can rate the value of the content, remove bias, and do it all without fatigue or employee benefits. Of course, not all lessons can be tested this way; some math calculations must still be written out.
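That testing process could be wired together roughly as follows. This is a toy sketch: `verify_identity`, `transcribe`, and `score_answer` are hypothetical stand-ins for real facial-recognition, speech-to-text, and content-rating systems, and the scoring here is just a keyword match.

```python
def verify_identity(video_frame):
    # Stand-in for facial recognition: here the frame carries the ID directly.
    return video_frame["face_id"]

def transcribe(audio):
    # Stand-in for speech-to-text: here the "audio" is already text.
    return audio

def score_answer(transcript, rubric):
    # Toy content rating: fraction of rubric keywords the answer mentions.
    words = set(transcript.lower().split())
    hits = sum(1 for keyword in rubric if keyword in words)
    return hits / len(rubric)

def assess_response(video_frame, audio, expected_student, rubric):
    """Identify the student, transcribe the answer, rate the content."""
    if verify_identity(video_frame) != expected_student:
        raise ValueError("student identity not confirmed")
    return score_answer(transcribe(audio), rubric)

score = assess_response({"face_id": "s1"},
                        "photosynthesis converts light energy",
                        "s1",
                        ["photosynthesis", "light"])
print(score)
```

Note that the bias question from the next section lives inside `score_answer`: whoever picks the rubric picks what counts as a good answer.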

When you test an individual in an interview setting, you are emphasizing the speed of the answer. Even if you are testing with Scantrons or any in-person exam, you are relying on speed: IQ tests are timed, SATs are timed, and everyone must complete the questions in the allotted time. A job setting allows more time, but we all know time is money, so AI-monitored interviews will still require the test subject to organize their thoughts. Verbalizing responses tests preparation, spontaneity, organization of thought, and content. Whether this is superior to written essays and multiple-choice questions may depend on the subject matter. Now, let's look at bias.

Bias, the Wikipedia Effect

Removing human and cultural bias.

When delivering information, AI can strip out bias and present objective content. Much as in a crowd business model, there would be a component that weeds out bias. For example, in recalling a news story, each event would get its due, prorated weight in the reporting; that is, the story would be presented objectively. The relevance of a historical event would be judged by its significance today and by how it shaped society, not by the subjective framing of a narrow set of parties.

Ah, but significance and subjective are words defined by whoever is asking the question. To put it another way, the answer you get depends on whom the asker chooses to ask.

If you ask Steve Jobs how to make rice pilaf and Wolfgang Puck about PCs, you reveal your bias, not theirs.

For example, Wikipedia takes a democratic view of reporting. It has no corporate sponsors and uses a crowd business model of contribution. With some constraints, Wikipedia is open to the public, but the model aims to weigh and weight the contributions fairly. That is, Einstein's mini-bio won't have a long paragraph about his sister (I don't even know if he had a sister) because its relevance does not warrant the space.

With autonomous cars, consider who designs the braking distance for when the car sees a yellow light. It might be 40 feet or 60 feet, depending on who is coding. One downstream impact is replacing brake pads, which might be a tenth of a percent of the operating cost of autonomous delivery vehicles. There might be personal or cultural differences that define how the car should behave. Some AI designers might not care about that downstream cost, but hundreds of thousands of such decisions are being made.
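To make that concrete, here is a toy cost model. Every constant in it is an illustrative guess, not real data; the only point is that a single design parameter, chosen by one coder, compounds across a fleet.

```python
def fleet_brake_pad_cost(brake_distance_ft, yellow_lights_per_day,
                         fleet_size, days=365):
    """Toy model: a shorter programmed braking distance means harder
    stops and faster pad wear. All constants are illustrative guesses."""
    wear_per_stop = 50.0 / brake_distance_ft  # harder stop -> more wear
    dollars_per_wear_unit = 0.01              # assumed pad cost per wear unit
    return (wear_per_stop * dollars_per_wear_unit *
            yellow_lights_per_day * fleet_size * days)

# Designer A programs braking at 40 ft; Designer B at 60 ft.
cost_a = fleet_brake_pad_cost(40, 100, 10_000)
cost_b = fleet_brake_pad_cost(60, 100, 10_000)
print(f"40 ft design: ${cost_a:,.0f}/yr, 60 ft design: ${cost_b:,.0f}/yr")
```

Neither designer is "wrong"; the gap between the two numbers is simply the cost of a preference nobody wrote down.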

That's the challenge: to not see bias might itself be biased.
