AI Hackathon for Equity: Tackling Harmful Bias in the Information Ecosystem

AI Hackathon for Equity: Tackling Harmful Bias in the Information Ecosystem

Getting AI to be more inclusive might mean asking it to act like Curious George

By Nevin Thompson , Hacks/Hackers Marketing and Communications Lead

Just as wisdom comes from asking questions before judging, AI can overcome bias by taking a moment to understand context rather than rushing to respond.

That was the conclusion of the team that developed “Curious GeorgePT,” one of the projects hatched during a two-day hackathon in Baltimore that focused on tackling bias in AI. Organized by Hacks/Hackers, The Real News Network and Baltimore Beat, with support from MacArthur Foundation and media partner Technical.ly, the event — AI for Equity: Tackling Harmful Bias in the Information Ecosystem — explored how AI can identify, reduce and eliminate bias in the news and information landscape.

Examples of model bias.

Journalists, engineers and technologists formed teams to collaborate on projects, and were encouraged to address different types of biases, including:

  • Demographic bias: AI systems perform differently for various gender, racial or ethnic groups
  • Data bias: Training on skewed datasets leads to biased outputs
  • Algorithmic bias: The model's design itself can favor certain outcomes
  • Contextual bias: Poor performance when applied outside the training context
  • Automation bias: Over-reliance on AI systems, assuming they're more accurate than they are
  • Confirmation bias: AI reinforcing users' existing beliefs rather than providing balance
  • Linguistic bias: Better performance in widely-spoken languages vs. less common ones
  • Temporal bias: Historical training data may not reflect current realities

The hackathon teams then tested biases in leading AI products — including OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini — by querying them with a variety of prompts to draw out potential biases from the AI models’ data and training.

Prompts used by hackathon teams during the Baltimore hackathon that tested for subtle bias in LLMs.

Teams then worked over two days to devise solutions including prototypes, best practices, prompt libraries and other projects.

The Curious GeorgePT team explored how to craft prompts that persuade AI assistants to become more curious by requiring the agents to ask clarifying questions before generating a response.

"We were able to tangibly investigate new approaches to mitigating bias from AI, as companies rapidly push closer to an agentic AI status quo,"?said Curious GeorgePT team member Jay Kemp, who traveled to Baltimore from the Berkman Klein Center for Internet & Society at Harvard University.

Another team, Paralzying Bias, also considered how curiosity can help LLMs become more personal, holistic and inclusive. The team explored ways of building infrastructure that incorporates inclusivity from the ground up.

Wikipedians who joined the hackathon in Baltimore noted how Wikipedia and the media cite each other as reliable sources. Introducing AI-generated content into this loop creates a concerning risk: AI models trained on this circulating information could become less accurate over time — a phenomenon known as model decay — while simultaneously amplifying existing biases.

“Wikipedia is one of the biggest open data sources in the world, and as a platform is influential when data models are created,” said Sherry Antoine, a hackathon attendee and executive director of AfroCROWD, an initiative to create and improve information about Black culture and history on Wikipedia.?

Antoine and her collaborators on the Why Not Yellow? hackathon team suggested that a bot that identified instances of bias in Wikipedia articles, “a bot to police the AI Bots,” would help address the issue while improving information integrity.

“AI has potential to lessen economic disparities and empower everyone to live better lives, but that won’t happen if the models continue to perpetuate stereotypes based on historical wrongs embedded in their training data,” said Burt Herman, co-founder of Hacks/Hackers. “While some benchmarks for AI bias exist, this should be a industry-wide standard to test for bias and representation in all AI products.”



要查看或添加评论,请登录

Hacks/Hackers的更多文章

社区洞察

其他会员也浏览了