Stopping AI and the Digital Divide from Dividing Society
Haves and have-nots


“The digital divide continues to be the greatest economic and civil rights issue in the nation.”

― Larry Irving, former administrator of the National Telecommunications and Information Administration (NTIA), February 11, 2021.


During the Covid pandemic, we all became aware of how dependent we’ve become on the internet. Over the past thirty years, access to technology and the internet has essentially become a prerequisite for full participation across nearly all aspects of society - jobs, education, government services, healthcare, civic life, and more. Yet profound disparities persist in affordable access to devices, broadband infrastructure, and digital literacy that correlate strongly with existing lines of disadvantage around income, geography, age, and race.

The "digital divide" presents the greatest civil rights challenge of our time because it fundamentally deprives marginalized groups of agency, voice, and opportunity in the modern world.

Harlem’s technology innovator, Silicon Harlem (SH), was founded 10 years ago to put an end to the digital divide. SH has already embarked on pioneering projects to address local challenges around broadband deployment, economic and educational opportunity, and civic life. Its initiatives offer models of community-driven innovation that place public interest, inclusion, and real-world social impact at the center. This past year the federal government finally took action on the digital divide with the introduction of the Broadband Infrastructure Program. SH deserves much credit for its leadership on digital-divide issues and for the growing realization that broadband must be treated as a public necessity, like water, electricity, and phone service.

The Quest for Equity Continues

New innovations like AI risk amplifying digital inequity gaps even further. Addressing the digital divide with urgency through policy, public investment, and accountability efforts is essential to enable digital progress to uplift all of society rather than exclude it. Collaboration that cuts across government, industry, nonprofits, and communities will be integral to ensuring technology expands rights and possibilities for marginalized peoples rather than curtailing them.

“The future is already here – it's just not evenly distributed.”

― William Gibson, The Economist, December 4, 2003

The rapid proliferation of artificial intelligence holds enormous promise to transform our society across sectors like healthcare, education, and transportation. However, there are growing concerns that the benefits and capabilities AI enables will be distributed unevenly across society.

Significant gaps are emerging in who can access, understand, and harness these powerful technologies. This widening “AI divide” threatens to exacerbate historic inequalities and deny opportunities for empowerment to marginalized communities.

This article will examine the contours and implications of the AI divide and pathways toward more democratized innovation guided by communities themselves. Realigning AI progress with ethics and collective well-being will require intention but remains within our grasp if we act now. Neighborhoods like Harlem contain immense cultural wisdom and agency to lead the quest for ethical AI design.

The Intelligence Divide

Without deliberate countermeasures, misaligned dynamics will widen gaps between AI “haves” and “have-nots” along lines of race, class, gender, and geography. Power and agency will concentrate among those already privileged by the status quo. AI then risks coming to be seen as inherently inequitable, fueling broad backlash rather than realizing its enormous potential for positive transformation. On our current pathway, AI proliferation could:

  • Exacerbate Economic Inequality: Automation and predictive analytics concentrate wealth while displacing many unskilled laborers.
  • Undermine Informational Freedom: Pervasive data collection and surveillance reinforce authoritarian control and manipulation of citizens.
  • Entrench Discrimination: Flawed algorithms and unrepresentative data perpetuate biases against women, minorities, and marginalized groups.
  • Erode Accountability: More civic decisions and functions are turned over to opaque AI systems owned by corporations, removing human checks.
  • Reinforce Power Asymmetries: With few channels for contesting outcomes, communities impacted have little recourse for AI harms.
  • Limit Opportunity: As the capabilities and employment possibilities AI enables concentrate among the privileged few, prospects for disadvantaged groups narrow further.

Each of these possible futures represents a dimension of the divide deepening - materially, politically, and socially. But technological change also contains openings for progress if steered with purpose. Alternatives exist. Pockets of innovation point toward how AI could be developed equitably and for shared benefit.

Racism can manifest in AI systems and their outputs in a few key ways:

  • Biased training data - If the data used to train AI models reflects societal biases and stereotypes, those biases will be perpetuated in the system's outputs. For example, image datasets that under-represent people of color or over-associate them with negative attributes.
  • Flawed algorithms - Algorithms can discriminate inadvertently based on proxies like zip codes or surnames that correlate with race. They may also rely on discriminatory assumptions or be optimized in ways that disadvantage minorities.
  • Lack of diversity - Homogenous teams building AI systems may overlook potential harm to minority groups. Without diverse perspectives, harmful biases go unnoticed.
  • Reinforcing stereotypes - AI applications such as facial recognition have misidentified people of color more often than white people. Generative systems also associate name ethnicity with stereotypical occupations and traits.
  • Amplifying hatred - Toxic language models trained on internet data have produced racist slurs, conspiracies, and abusive outputs targeting minority groups.
  • Digital red-lining - Unequal access to technology infrastructure and limitations of training data can result in AI systems underserving marginalized communities.
  • Dehumanization - Some AI can fail to recognize non-white faces altogether. Voice recognition struggles with non-native accents.

Overall, great care needs to be taken in how data is sourced, algorithms are designed, and systems are tested to identify and mitigate any discrimination or biases. Having diverse teams involved at all stages of development is also key to addressing blind spots. Ongoing audits are necessary, along with transparency and accountability when issues emerge.

Alignment Through Community-Centered Innovation

Three initiatives - the Circle of Life-Mastery, the Beauty of Community campaign, and the Imagine Harlem platform - exemplify how community-driven technology efforts can powerfully combat biases in AI and steer innovation in alignment with community needs and aspirations.

Through the Beauty of Community campaign, we will be introducing the potential of generative AI in partnership with local organizations, schools, artists, and technologists to visually celebrate the vibrancy of Harlem and Uptown cultures. Community-led prompts highlighting joys, strengths, and humanity will provide a compass for generative models. Additionally, the campaign will offer skills training and business opportunities in the emerging print-on-demand sector. The key to this campaign is that community engagement will also allow us to begin addressing AI racism directly in the generative AI art program Stable Diffusion.

From a recent Bloomberg article:

Humans Are Biased. Generative AI Is Even Worse - Stable Diffusion's text-to-image model amplifies stereotypes about race and gender; here's why that matters

“Stable Diffusion generates images using artificial intelligence, in response to written prompts. Like many AI models, what it creates may seem plausible on its face but is actually a distortion of reality. An analysis of more than 5,000 images created with Stable Diffusion found that it takes racial and gender disparities to extremes — worse than those found in the real world.”

Here is a real-world example: the image Stable Diffusion produced when I entered the following prompt.

Prompt: Harlem NYC, 125th street, Apollo Theater, jazz band, street fair, beautiful sustainable neighborhood

Prompt engineering is the term for crafting the text that steers image generation. It is less like computer programming and more like writing poetry or haiku: each word in the prompt evokes a visual association grounded in the model's training data. The prompt above opens with "Harlem NYC." Once the system starts from that anchor, underlying misalignment and racist interpretations are likely to emerge.

The Beauty of Community

Imagine Harlem impact programs are guided by the principles of Beauty, Goodness, and Truth. We believe these three universal ideals of humanity represent the cornerstones of our social realities.

Our first campaign focuses on Beauty and how AI art can be tuned to create beauty and enhance the sense of community. We believe that through responsible community-based stewardship, AI platforms can also help to enhance Truth and Goodness in our communities.

The Beauty of Community campaign leverages AI in partnership with local artists and technologists to visually celebrate the vibrancy of marginalized cultures. Community-led prompts highlighting joys, strengths, and humanity provide a compass for generative models. The resulting images will be showcased through public projections challenging negative stereotypes.

The campaign motivates participants to explore and express their own neighborhoods while learning and enhancing skills through the creation of digital photography, the use of AI-art generation, and Web3 technology.

The AI Training Process

Here are some technical steps that could help make AI image generation systems like Stable Diffusion less biased and more inclusive in a community-based framework:

Training Data:

  • Actively source training images that represent the diversity of the community accurately and proportionately. Seek out underrepresented groups.
  • Crowdsource image datasets directly from community members to improve representation.
  • Weight the sampling of training data to favor underrepresented classes and correct imbalances.
  • Remove outright offensive training content that promotes stereotypes.
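The reweighting step above can be sketched in a few lines. The helper below (a hypothetical, stdlib-only illustration, not part of any named pipeline) assigns each training sample a weight inversely proportional to its group's frequency, so every group contributes equal total mass during sampling:

```python
from collections import Counter

def balanced_weights(group_labels):
    """Per-sample weights inversely proportional to group frequency.

    With weight = total / (n_groups * count), every group contributes
    the same total mass, so under-represented groups are sampled more
    often during training instead of being drowned out.
    """
    counts = Counter(group_labels)
    n_groups = len(counts)
    total = len(group_labels)
    return [total / (n_groups * counts[g]) for g in group_labels]

# Three "A" samples and one "B" sample: each B sample gets three
# times the weight of each A sample, equalizing the two groups.
weights = balanced_weights(["A", "A", "A", "B"])
```

In a real training pipeline these weights would feed a weighted sampler (for example, PyTorch's WeightedRandomSampler), so the model sees balanced batches without discarding any data.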

Data Augmentation:

  • Use techniques like style and attribute mixing to synthesize additional diverse training examples missing from the original dataset.
  • Apply image translations to populate different skin tones, ages, genders, etc. missing in the data.
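Real augmentation pipelines use libraries such as torchvision or Albumentations, and the attribute-mixing techniques above are substantially more involved. As a minimal stdlib-only stand-in for the idea, the sketch below applies a random horizontal flip and brightness jitter to a grayscale image represented as rows of 0-255 pixel values:

```python
import random

def augment(image, brightness_jitter=0.2, rng=None):
    """Randomly flip an image horizontally and jitter its brightness.

    `image` is a grayscale image as a list of rows of 0-255 ints. This
    is a toy illustration of augmentation, not the attribute mixing
    described above.
    """
    rng = rng or random.Random()
    # Horizontal flip with probability 0.5.
    if rng.random() < 0.5:
        out = [row[::-1] for row in image]
    else:
        out = [row[:] for row in image]
    # Scale brightness by a factor in [1 - jitter, 1 + jitter], clamped to 0-255.
    scale = 1.0 + rng.uniform(-brightness_jitter, brightness_jitter)
    return [[min(255, max(0, int(p * scale))) for p in row] for row in out]
```

Each call yields a slightly different variant of the same source image, multiplying the effective size of scarce classes in the dataset.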

Model Architecture:

  • Employ techniques like adversarial debiasing which penalize the model for learning unwanted biases during training.
  • Explore federated learning to train models on decentralized data while preserving privacy. This facilitates community-based data sharing.
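The federated idea can be illustrated with its core aggregation step (the FedAvg algorithm). In this hedged sketch, each community partner trains on its own private images and shares only a parameter vector; a coordinator combines the vectors weighted by local dataset size:

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation sketch.

    Each client trains on its own private data and shares only a
    parameter vector; the coordinator averages those vectors weighted
    by local dataset size. Raw community images never leave the client.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

A production system would layer secure aggregation and differential privacy on top of this step, but the weighting logic is the heart of community-based data sharing without data centralization.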

Prompt Engineering:

  • Co-design prompt templates with community members that frame concepts positively and avoid tropes.
  • Have focus groups review generated outputs and refine prompts to mitigate observed biases.
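One lightweight way to operationalize co-designed prompts is a shared template whose fragments have been reviewed by focus groups, so the approved framing travels with every generation run. The template text and names below are hypothetical:

```python
# Hypothetical community-reviewed template; the fixed tail phrasing
# stands in for wording a focus group approved to frame subjects positively.
TEMPLATE = "{place}, {scene}, {qualities}, photorealistic, dignified portrayal"

def build_prompt(place, scene, qualities):
    """Assemble a generation prompt from community-approved fragments
    rather than free-form text, so reviewed framing is always applied."""
    return TEMPLATE.format(place=place, scene=scene,
                           qualities=", ".join(qualities))

prompt = build_prompt(
    "Harlem NYC",
    "street fair on 125th Street",
    ["joyful crowd", "thriving local businesses"],
)
```

When a focus group revises the wording, only the template changes; every participant's future prompts pick up the refinement automatically.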

Model Testing:

  • Establish a bias-testing dataset that proactively checks for skewed or offensive outputs before deployment.
  • Create an ethics review panel with diverse community stakeholders to flag issues missed by automated testing.
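A bias-testing pass can start very simply: run a fixed checklist of probe prompts through the generator (represented here by any callable returning a caption or description) and surface outputs containing community-flagged terms for human review. All names below are illustrative:

```python
def audit(generate, probes, flag_terms):
    """Run probe prompts through `generate` and report any outputs
    containing a community-flagged term, for human review before
    deployment."""
    report = []
    for prompt in probes:
        caption = generate(prompt).lower()
        hits = [term for term in flag_terms if term in caption]
        if hits:
            report.append((prompt, hits))
    return report
```

Automated term matching only catches the crudest failures; its real role is to triage outputs so the ethics review panel spends its time on the subtle cases.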

Feedback Mechanics:

  • Build reporting tools for bad outputs that community members can use to further improve the model.
  • Continuously update training data and fine-tune models as diverse community input helps identify remaining flaws.
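The reporting loop can begin as a plain queue that captures the offending prompt and the community member's reason, feeding later triage and fine-tuning. A minimal sketch with hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class BiasReport:
    """A single community report of a problematic output."""
    prompt: str
    reason: str

class FeedbackQueue:
    """Collect community reports of bad outputs; triaged reports feed
    the next round of bias testing and fine-tuning data."""

    def __init__(self):
        self._reports = []

    def submit(self, prompt, reason):
        self._reports.append(BiasReport(prompt, reason))

    def pending(self):
        return list(self._reports)
```

In practice this would sit behind a simple web form, but even this skeleton makes the feedback channel concrete: every report becomes a candidate probe for the bias-testing dataset.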

By centering community participation, bias mitigation, and thoughtful testing in the model development and deployment process, generative AI can be steered towards more inclusive and empowering representations aligned with the public's values.

These projects exemplify principles for community-centered innovation:

  • Grounding technology in the knowledge and values of marginalized groups directly, not assumptions.
  • Applying AI to increase the visibility of ignored or misrepresented experiences.
  • Incorporating oversight mechanisms, such as bias testing and impact reviews, that give the community agency over how technology represents them.
  • Enabling participation and consent through transparent processes co-designing solutions.
  • Deploying AI to restore justice - accurately documenting historical wrongs, removing barriers to opportunity, and enriching cultural expression.

Keys to Participatory Progress

Imagine Harlem convenes stakeholders to steer innovation transparently towards community-defined goals around sustainability, economic inclusion, and cultural heritage.

This framework of empowerment stands in stark contrast to AI designed from homogeneous viewpoints in distant institutions for efficiency and profits alone. Technology controlled by communities presents a path to prevent harmful bias and align AI with pluralistic human values. Expanding this vision will require:

  • Policy incentives encouraging social-good AI and mandatory bias audits for public systems.
  • Public funding for grassroots labs exploring community-driven approaches.
  • Education on AI ethics, design, justice, and representing marginalized communities respectfully.
  • Inclusion mandates for diverse teams in AI research and development.
  • Allies within tech companies to embed ethical review and community participation practices.
  • Platforms for marginalized groups to articulate needs directly to technologists.
  • Regulation banning exploitative data collection and surveillance.
  • Democratically governed data trusts sustaining public sector AI.

AI contains immense potential for liberation or oppression. By leading innovation grounded in care, wisdom, and justice, communities can steer it towards expanding human potential equitably. The future remains undetermined - we now have the ability to co-create it together.

More info: Circle of Life-Mastery, HolonCity
