Being Human For The Love of AI – Part 2.
edition forty-eight of the newsletter data uncollected

Welcome to data uncollected, a newsletter designed to enable nonprofits to listen, think, reflect, and talk about data we missed and are yet to collect. In this newsletter, we will talk about everything the raw data is capable of – from simple strategies of building equity into research+analytics processes to how we can make a better community through purpose-driven analysis.


Yes, I want you to hesitate before responding when a stranger on the bus asks, “so, are you in favor or against AI?” (remember that last conversation we had?)


Meet Juan and Minzoya – two immigrants who work with local government-provided caseworkers to support their long-term immigration status. Besides managing the immigration process, those caseworkers are also responsible for supporting Juan and Minzoya in finding appropriate services for settling in a new country – health care, housing, jobs, etc.


Juan is a 35-year-old male from Brazil. He immigrated a year ago with his wife and 3-year-old son. He was an IT Security Analyst before moving, and he is now looking for a decent-paying job while juggling health issues. He has been hospitalized twice in the last year for strokes. He needs a job that pays well enough for his family and his health needs.


Minzoya is a 52-year-old schoolteacher from Southeast Asia. She immigrated six months ago with her two teenage daughters. After losing her husband two years ago, they have been trying to move to a country where she can provide better opportunities for her daughters. So far, the move hasn’t been the easiest experience, but they are doing their best by sticking together.


Their caseworker has been supporting both these families in navigating different processes, starting with ensuring they have a stable, long-term visa.


One day, their caseworker receives a notification that the in-house AI product that helps predict and allocate resources for new clients has removed the names of Juan and Minzoya. So, the caseworker must send both families a letter stating that they are no longer directly supported and can continue with other third-party associations.


In this hypothetical example, the question is: why is there no ‘why’? We haven’t even learned how and what data Juan and Minzoya were sharing, but the lack of transparency and explainability, the potential for error, and possible biases all exist and are very real. And these issues are going nowhere – from algorithms for the judicial system and health care to algorithms for funding and online shopping.


So, what can we do here? Or can we do anything here?


We can. You would be surprised to know how many summers I have spent finding unique combinations of coding constructs that improve the user experience of technology far beyond what was previously expected. I believe we can define and play our parts here.


Today you and I will attempt to design a framework of evaluation…at least a version of it. We are still exploring the same question as last time: what does being human look like in this algorithmified era?


Let us take another example (other than the local city train SS, something closer to our nonprofit industry):


Say a nonprofit, Vancouver Sustainable Health (VSH), has recently acquired an AI-based solution to support its internal and external members. The same solution offers features for fundraisers (to improve fundraising efforts) and program staff (to improve service delivery efforts). VSH is a mid-size to large organization with separate teams like Leadership, Planned Giving, Major Gifts, Annual Giving, Programs, Research, etc. They are excited to leverage this solution collectively to serve the local community in building better mental health. Their programs include support for those experiencing mental health issues, including a dedicated 24x7 suicide-prevention call line.


If VSH intends to design a framework for its AI solution, how can it go about it?

  • They can adopt whatever the AI solution vendor offers (if the vendor has a framework of its own), or,
  • They can design a framework of their own (which may borrow elements from the vendor’s)


Let’s say we want to go with the latter option. What next?


Here is a comprehensive (and rough!) picture of the who (in green) and the what (in blue) of a suggested framework:

[Image: a template for a framework to evaluate algorithms by nonprofits]


Here are some of the underlying assumptions in this design:

1. AI governance is continuous work; hence, representation from the external community, experts, board, and staff – on a rotating basis – exists to continuously ensure that the right evaluation questions are asked of the AI solutions.


2. The AI Governance team builds the pillars of evaluation driven by a “why.” For example, here I am sharing capability (of what the algorithm does), utility (of how useful the algorithm is in aligning the solution with the mission), and adoption (of how easy it is to learn, integrate, and implement the algorithm’s solution). They also provide guidelines on how metrics under each pillar can be designed and what questions to consider when engaging with AI solutions.


3. Every department has a role to play in determining what they track in terms of their metrics. If we want AI to be sustainably adopted throughout the organization, we must create space for every individual to engage and be accountable. So, in this case, every department chooses what they intend to track and measure under capability, utility, and adoption.


4. In collaboration with the rest of the organization, the AI Governance team is responsible for designing their AI values. Remember edition 27: 7 Tenets of AI?


5. Every organization member commits to continuous collective learning around algorithms and human-centricity.


6. This framework differs from the testing and evaluation strategy a group of researchers, analysts, and data scientists would employ when finalizing which algorithm to push forward (during the modeling stage). In this example, VSH is designing this framework for an AI solution they have purchased and were not involved in designing in any way.


7. Finally, this is only a start, and this framework will evolve over time.
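To make the assumptions above a little more concrete, here is a minimal sketch of how a nonprofit might represent the framework in code: the three pillars from assumption 2 (capability, utility, adoption) and the per-department metric choices from assumption 3. The `DepartmentScorecard` class, the metric names, and the "Major Gifts" example are all hypothetical illustrations, not part of any real VSH system or vendor product.

```python
from dataclasses import dataclass, field

# The three evaluation pillars described in assumption 2.
PILLARS = ("capability", "utility", "adoption")


@dataclass
class DepartmentScorecard:
    """Metrics one department chooses to track, grouped by pillar (assumption 3)."""
    department: str
    metrics: dict = field(default_factory=lambda: {p: [] for p in PILLARS})

    def add_metric(self, pillar: str, name: str) -> None:
        # Guard against metrics that fall outside the governance team's pillars.
        if pillar not in PILLARS:
            raise ValueError(f"unknown pillar: {pillar!r}")
        self.metrics[pillar].append(name)


# Hypothetical example: the Major Gifts team picks what it will measure.
major_gifts = DepartmentScorecard("Major Gifts")
major_gifts.add_metric("capability", "prediction accuracy on gift likelihood")
major_gifts.add_metric("utility", "alignment of recommendations with mission")
major_gifts.add_metric("adoption", "share of officers using the tool weekly")
```

The point of a sketch like this is not the code itself but the shape: the governance team owns the pillars, while each department owns the metrics it files under them.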

*********************************

This is new for us – to engage with algorithms and yet learn to challenge them.


But the first steps are always hard. The first question we use to evaluate the algorithm may not be perfect. The first group of people gathered in the room to scrutinize what comes out of the algorithm may not be complete. The evaluation framework itself may need many iterations.


We may feel underprepared and overwhelmed. Nope, we will feel underprepared and overwhelmed.


I want you to know that’s okay.


You have the agency, power, and voice to affect the algorithms handed over to us. Don’t let the fear of unknowns affect your ability to design your well-being in this algorithmifying digital age.


You and I carry a simple goal – to demand being human with every algorithm we engage.


So, yes, I do want you to hesitate before responding when a stranger on the bus asks, “so, are you in favor or against AI?”


*** What do I want from you today (my readers)?



Links mentioned in the article:

  • Being Human For The Love of AI – Part I: https://www.dhirubhai.net/pulse/being-human-love-ai-part-i-meenakshi-meena-das
  • “Getting Started with AI” for nonprofit leaders: https://www.namastedata.org/freebies
  • 7 Tenets of Human-Centric AI: https://www.dhirubhai.net/pulse/7-tenets-human-centric-ai-meenakshi-meena-das
  • Workshop: Towards Human-Centric AI: https://data-is-for-everyone.teachable.com/p/workshop-towards-human-centric-ai
