Breaking Algorithm-Driven Behaviors – Part One
Fourteenth edition of the newsletter Data Uncollected


Welcome to Data Uncollected, a newsletter designed to enable nonprofits to listen, think, reflect, and talk about data we missed and are yet to collect. In this newsletter, we will talk about everything the raw data is capable of – from simple strategies of building equity into research+analytics processes to how we can make a better community through purpose-driven analysis.


Algorithm-driven or algorithmic behaviors – these are the words I am bringing into my everyday language for when I see us (as humans) gravitate toward the actions algorithms expect from us. Often without realizing it – since none of us wants to acknowledge how deeply involved we are, individually and collectively, with the algorithms around us. And today, I want to talk about those behaviors and what we can do about them.

Now don't get me wrong – I am a continuous learner, an evolving individual, and I have a deep relationship with data science. Algorithms fascinate me – to learn from and build better. But when I see us giving away too much power to those algorithms (in an already power-imbalanced world!), I think it becomes a must to use our conversational avenues to think it through. So, you and I are picking this topic today not because we want fewer algorithms but because we want an intentional, purpose-driven approach to adopting and adapting them.

But before we get into it – understand that this topic – breaking algorithm-driven behavior – covers a wide area. To make it meaningful and digestible, you and I will tackle it twice. This week, we will set up some foundation, such as examples of what constitutes this behavior. And next week, we will dive straight into what we can do to move toward an un-algorithmic behavior.

Let's start with some examples of how algorithms affect our behavior (and for simplicity, let's begin with outside-nonprofit, day-to-day situations):

  • Some social media sites recommend posting 1-5 times a day for maximum reach, views, and engagement. This recommendation is often interpreted as the need to post daily, or as regularly as possible, to reach your audience – or else you lose them. For instance, take a look at the different studies on how social media algorithms shape youth behavior.
  • A lot of platforms auto-correct words that do not necessarily need a correction. For example, color vs. colour, behavior vs. behaviour, etc. One of those spellings works well in America, and the other works for the rest of the world. A common pattern among many first-gen immigrants I have taught in the past few years is adding yet another layer of second-guessing to their written and spoken English. Some automatically want to compensate for this self-assumed gap with more classes. It takes deliberate work from my end to decouple what they know from what they think they don't know.
  • A study a few years ago found Siri clueless during crises such as rape or domestic violence. Add to that different accents and varying comfort in speaking the dominant language (English), and we have a platform that can find pizza restaurants but not the immediate next steps during a crisis. And mind you – this is an essential need for marginalized communities already experiencing any form of abuse or violence. Some nonprofits are trying to change their processes around these issues, especially crisis helplines – but we still have a long way to go.
  • And the last one – my favorite – recommendation systems. I call this a chicken-and-egg situation in data science. Take Netflix. Last month, I watched samples of 2-4 different stand-up comedy shows within 10-15 days. I am not usually a stand-up comedy person, but when you have already watched everything Netflix can offer in your favorite genres, why not branch out a little. Now, I have a row full of stand-up comedies. It's not too bad – but it does bias the genres I can choose to like. Hence, the chicken-and-egg situation – does your liking come first and lead the recommendation system? Maybe. But once you stop taking the recommendations, how long does it take the system to "de-recommend" them out of your view?
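
That chicken-and-egg loop can be sketched in a few lines of code. This is a minimal, hypothetical simulation (the genre names, counts, and weighting rule are all my assumptions, not how Netflix actually works): a recommender that weights genres by watch history, where following its recommendations feeds the same history back in.

```python
import random

random.seed(7)

# Hypothetical watch history: a viewer who sampled a few stand-up shows
# after exhausting their favorite genres.
watch_counts = {"drama": 30, "documentary": 20, "stand-up": 0}

def recommend(counts, n=1):
    """Pick n genres, each chosen proportionally to past watch counts."""
    genres = list(counts)
    weights = [counts[g] + 1 for g in genres]  # +1 lets unseen genres surface
    return random.choices(genres, weights=weights, k=n)

# One curious fortnight: three stand-up specials sampled.
watch_counts["stand-up"] += 3

# A month of simply watching whatever gets recommended: each watch
# reinforces the weights that produced the recommendation.
for _ in range(30):
    watched = recommend(watch_counts)[0]
    watch_counts[watched] += 1

share = watch_counts["stand-up"] / sum(watch_counts.values())
print(f"stand-up share of watch history after the loop: {share:.0%}")
```

The point of the sketch is the feedback, not the numbers: once a genre enters the history, the loop keeps it there, and nothing in this simple rule ever "de-recommends" it.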


Now, let's look at some examples of how algorithms affect our behavior in the nonprofit industry:

  • A major gift predictive modeling algorithm gave you factors a, b, and c as the most important identifiers of someone likely to make a major gift. These factors a, b, and c can be anything – say, wealth capacity, some form of board/special constituency membership, and average donation size. Now, every fundraiser – without an intentional, guided conversation around data – inadvertently cares most about the quality of data points a, b, and c. And that is how the organization simply stops caring about, or collecting, another data point d.
  • Another example would be the times when social identity data (race, ethnicity, sexual orientation, etc.) is fed into algorithms. And trust me, it does happen – sometimes for the funders, other times simply to take advantage of the algorithm's output. Any segmentation or scoring based on a formula or algorithm that uses race, ethnicity, sexual orientation, etc., to deduce who should get privileges/benefits creates more harm than good. Such an algorithm encourages users to collect more social-identity data – as much as possible, from all potential sources (just the opposite of what we need)!
  • Let's take the user personas that many nonprofits create. Many want to design those personas as diverse sets – an understandable expectation coming out of non-malicious intention. But in the absence of diverse representation in the group designing them, those personas end up being built by people with homogeneous needs and backgrounds, guesstimating and objectifying the needs, interests, and perceptions of a non-homogeneous, diverse set of prospects.
  • One of the requirements of many algorithms is adequate data. Algorithms need enough data (that is, enough individual people with multiple unique features about those people). In the absence of such adequate data, there are often "creative," algorithm-acceptable ways to replicate and boost the data so that predictions can be made. The challenge with such an approach – again, without a deliberate conversation about why and where it comes from – is that it often ends up boosting not just the data itself but also the biases and gaps in it. And though the algorithm runs well enough to give an output, inadvertently picking the same signals from unchecked data (unchecked in terms of underlying biases/flaws) perpetuates those biases and biased behaviors.
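
That last point – boosting data boosts its gaps too – can be made concrete with a toy sketch. Everything here is hypothetical (the rows, the group labels, and the "boost" helper are mine for illustration): a tiny donor table is naively replicated to look "adequate," and a group that was never labeled a prospect in the original stays at zero no matter how large the boosted dataset gets.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical tiny dataset: group B has only one row, never labeled a prospect.
rows = [
    {"group": "A", "major_gift_prospect": True},
    {"group": "A", "major_gift_prospect": True},
    {"group": "A", "major_gift_prospect": False},
    {"group": "B", "major_gift_prospect": False},
]

def boost(data, target_size):
    """Naively replicate rows (sampling with replacement) up to target_size."""
    return data + random.choices(data, k=target_size - len(data))

def prospect_rate(data, group):
    """Share of a group's rows labeled as major-gift prospects."""
    members = [r for r in data if r["group"] == group]
    return sum(r["major_gift_prospect"] for r in members) / len(members)

boosted = boost(rows, 400)

print("rows per group after boosting:", Counter(r["group"] for r in boosted))
print("group B prospect rate, original:", prospect_rate(rows, "B"))
print("group B prospect rate, boosted: ", prospect_rate(boosted, "B"))
```

The boosted table has a hundred times the rows, but group B's prospect rate is still exactly zero – the replication manufactured volume, not signal, and any model trained on it inherits the original gap at scale.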

*********************************

Of course, there are more examples of algorithm-driven behaviors we have adopted on our way to progress. But, for now, let these examples marinate.

The primary challenge of identifying and acknowledging any such behavior is that no one wants to purposefully or maliciously discredit the inequities and inequalities many communities face from our collective, repeated exposure to algorithms. And yet, communities exist that face inequities and injustices day after day. It's time we learn to identify the behaviors.

Next week, we will continue this same topic, focusing on a question that hopefully has been triggered by now: "So, what can we do?"

*** So, what do I want from you today (my readers)?

Today, I want you to:

  • Share what resonates with you and what is missing in these behaviors. Are there other examples you would like to bring forward?


*** Here is the prompt for us to refer to, reflect on, and keep alive the list of community-centric data principles.

*** For those reading this newsletter for the first time, here is some intro to this newsletter for you. :)

Austin Hattox

Nonprofit Website Strategist | Spark that "aha" moment behind your mission & grow your community

2y

Meenakshi (Meena) Das I'm so glad you started data uncollected. :) This is a fascinating topic that needs more coverage. I'm looking forward to part 2!

Matthew D.

Founder at Donor Science Consulting

2y

So I was curious about how Google Assistant would deal with the question of "How do I deal with being raped?". Uncomfortable as it was, I voiced the question to my phone. It actually came up with some really helpful looking results. Among those results were references to sexual assault centres. That led me to a second uncomfortable question "Find sexual assault centres in Toronto". Sure enough, it found them just fine! Good on Google, apparently.

Ben Thomas

Creating Sustainable Fundraising Growth | Educating Nonprofits | Improving Direct Response Results

2y

Looking forward to digging into this!
