Is Your Organization “AI-Ready”? Thoughts from a SAS Panel Discussion
I attended the SAS webinar "Becoming an AI-Ready Organization" on December 2, 2020, and the panel talked about explainable AI.

This post contains affiliate links and I will be compensated if you make a purchase after clicking on my links.

I am not involved in developing artificial intelligence, so I decided to try to learn about it by taking one of the free webinars SAS held during December 2020. Since I do mostly data management and statistics, I signed up for what turned out to be the last session in a series called “AI Pathfinder”, which appeared to be for professionals like me who are new to serving customers asking for artificial intelligence. The session was a panel discussion with four industry participants, took place on December 2, 2020, and was titled “Becoming an AI-Ready Organization” (now available on demand).

I attended because I wanted to figure out what I could do with my data management skills and my statistical knowledge to help my customer organizations become “AI-Ready”. Although the panel talked about many different topics, two themes stood out to me.

To Become AI-Ready, You Need to Clean Up and Document Your Data

Like regular statistics, AI predictions are based on actual data, not just ideas. So if you say that you believe a customer with “bad credit” will be most likely to default on a loan, you are going to have to operationalize “bad credit” into an actual variable. People have told me that in AI, you want your independent variables (called “features”) to be 1/0 flags as much as possible. That means a customer with “bad credit” could be flagged as a 1. But then, maybe their actual credit score, a continuous variable, holds information that is useful beyond the flag. And if you include both, they are hopelessly correlated with each other.
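To make that trade-off concrete, here is a minimal sketch in Python using pandas. Everything in it is invented for illustration – the column names, the five example customers, and the 600-point cutoff are my assumptions, not anything from the webinar or from a real credit scoring model.

# Minimal, hypothetical sketch: a 1/0 flag derived from a continuous score.
# Column names, example values, and the 600-point cutoff are all made up.
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5],
    "credit_score": [540, 610, 720, 580, 690],  # continuous variable
})

# Operationalize "bad credit" as a 1/0 flag using an assumed cutoff of 600
customers["bad_credit_flag"] = (customers["credit_score"] < 600).astype(int)

# The flag and the raw score carry overlapping information, so they come out
# strongly (negatively) correlated -- the problem described above if you feed
# both into the same algorithm.
print(customers[["credit_score", "bad_credit_flag"]].corr())

Running this prints a correlation matrix showing the flag and the score moving almost in lockstep, which is why you generally have to choose one operationalization or the other.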

If you have taken my LinkedIn Learning courses, you will know I’m big on operationalizing data, and then documenting it. You can learn this in my descriptive SAS course, my descriptive R course, or part 1 of my big data study design course.

So operationalizing the data – and deciding which actual features are candidates to be included in an algorithm – is a very challenging and time-consuming task. When I asked the panel for examples of organizations that are actually good at this, they said they see banks and insurance companies becoming AI-ready in this regard. They are investing in their data infrastructure, because they know that is the data they will use to train their AI, so they had better get their data in order.

An AI algorithm is like this baby who needs to be trained only with good data and healthy information

Think of it as two young parents, one of them pregnant, practicing changing their informal lexicon with each other, so they stop swearing before the baby is born. You want to be ready to expose your algorithm to only the good stuff – so you need to get your data AI-ready.

After That, You Need to Be Able to Explain the Inputs to Your AI

One term that came up was “explainable AI”. The panelists pointed out how this can get very dicey in the healthcare space. In healthcare, AI is not used for decisions that should be left to a human, like making the final diagnosis of a patient with a latent disease. But because the disease is latent, the physician making the diagnosis could be relying on an AI algorithm to help decide whether to call the disease “positive” or “negative”.

AI like IBM Watson can take laboratory and other data and use an algorithm to make health predictions

When I was a secretary, one of the other secretaries in my pool had “bad lab readings” for many years, and we figured something was wrong with her, but no one knew what. We had the best doctors at Hennepin County Medical Center, but some diseases are like that. Even they could not figure out where to look to find the disease behind her “bad lab readings”.

This is exactly the situation AI is supposed to help with. So let’s say this secretary’s doctor had an AI tool, something like IBM Watson, that guided them to the point that they wanted to rule out leukemia, so they do a bone marrow biopsy on her. The biopsy site gets infected, because she really has something else that makes wound healing difficult, and we unfortunately don’t know that, or we wouldn’t have ordered the biopsy.

She ends up getting really sick from the biopsy, and her leg is amputated. Which is a problem because she apparently has an undiagnosed autoimmune disorder, so now she’s rejecting everything we are putting in her. 

And she doesn’t even have @#$% leukemia! And now they are suing the medical center!

This is where explainable AI comes in. How can they explain in court that the AI suggested they rule out leukemia, which sounded logical, and then they did the biopsy? In the olden days when we did stuff like that, we’d just yell back, “It’s medicine! It’s biology! It’s indeterminate! We’re not God! We did our best! We thought we should rule out leukemia! What would you have done??”


But AI kind of feels like God. Can you imagine being the statistician trying to defend this hypothetical? Because if the AI hadn’t suggested leukemia, maybe they wouldn’t have thought of it and done the biopsy. Kind of like that old-fashioned “I Love Lucy” show.

AI…you got some ‘splainin’ to do!

If, after all that, you are still somehow interested in learning about artificial intelligence, check out this LinkedIn Learning course in explainable AI (XAI) aimed at a general audience.

Image credits: Cute baby by Realt0n12, available here. Sample health lab report by Meanmicio, available here. Lucille Ball as Superman by CBS Television, available here.
