I need my shamanic skills more than ever in the age of AI
Ayşegül Güzel
HumaneIntelligence Fellow | Responsible & Sustainable AI | AI for Social Impact | Certified in Ethics of AI at LSE
Today MozFest House Amsterdam begins. I have already checked the schedule and created my impossible draft program.
Many incredible sessions are going on at the same time. That is why I will slow down to reflect on my intentions, start the game of trusting my gut, and wander around with awe!
Here are some examples of great discussions/workshops/social moments and community plenary sessions:
We are Data Keynote with Mona Chalabi
Mona Chalabi, journalist and keynote speaker, is joining us at MozFest House Amsterdam!
Her work has earned her a Pulitzer Prize, a fellowship at the British Science Association, an Emmy nomination, and recognition from the Royal Statistical Society.
Her work is rooted in humanizing data and unveiling statistical research around social and economic justice issues including police violence, viral transmission, the cost of dying, and more.
At MozFest House Amsterdam, you will hear directly from Mona about the importance and urgency of distilling statistical data, and data in general, into humanized information to make visible social and economic injustice.
Designing regenerative technology
During this session participants will be challenged to come up with radical alternatives for the current mainstream extractive technology. What happens when we take nature's intelligence as a starting point? How can we apply regenerative principles to digital technology? This workshop will tap into your ability to think out of the box and apply your creativity. The concepts created in the session will provide the basis for design principles.
What does AI mean for our lives in the city?
The rapid developments in Artificial Intelligence (AI) have the potential to change how we live in the city. That is why the Municipality of Amsterdam is working on a vision for AI. With this vision, we want to provide direction for a sustainable and responsible use of AI, in which the values of our city are central.
In this vision, we approach the development from what it does to the interactions and human experiences in the city. In recent months, we have held dialogues in every district with residents, at primary schools and with experts from the business community and knowledge institutions.
We are thrilled to have a dialogue with you at Mozfest House as well. Join us and help us shape the vision for AI in Amsterdam!
We're not done yet: Organizing around implementing the AI Act
The EU's AI Act has been adopted, but the work isn't done. What does it take to make implementation of the law a success? And how can civil society ensure that the AI Act will actually make AI more fair and inclusive?
In this workshop, we map the timeline, strategic opportunities for civil society, and next steps during the AI Act’s implementation phase. We also explore what participating organizations will be focusing on in their work, what they need to do so successfully, and how participants - advocates, technologists, and researchers - can get involved and support each other during the implementation phase.
Holding companies to account on ethical AI
In September 2022, the World Benchmarking Alliance (WBA) launched the Collective Impact Coalition for Ethical AI ("CIC for Ethical AI"), a collaborative engagement initiative aimed at driving measurable progress in the adoption of ethical AI principles by digital technology companies assessed under the Digital Inclusion Benchmark. As of April 2024, the initiative comprises 60 investors as well as 12 non-governmental organizations, research institutions, and other entities that collectively form a broad civil society arm. The campaign builds on the findings of WBA's Digital Inclusion Benchmark, which has revealed large transparency gaps in companies' disclosures on ethical AI.
In its first phase (Sep 2022 – Feb 2024), the investor wing of the CIC coordinated engagement efforts by lead and supporting investors for each company, with the primary aim of eliciting public ethical AI principles from those that lacked them. Through a structured allocation process that incorporated their engagement preferences, investors were assigned lead or supporting roles and sought dialogue with the companies, subsequently reporting on the outcomes in quarterly CIC investor meetings and via reporting forms. The CIC for Ethical AI is currently in Phase 2. In this new phase, the initiative's formal scope will expand to include three new expectations of companies, building on the core expectation of publishing their AI principles. Companies will be asked to demonstrate: i) how they are implementing their published AI principles; ii) how AI risks are reflected in their human rights impact assessments; and iii) what governance mechanisms underpin the development, deployment, and procurement of AI technology. The CIC will continue to engage with the companies that do not yet have public ethical AI principles. However, its coverage will expand to all 200 companies evaluated in the 2023 Digital Inclusion Benchmark.
The session will share the benchmark findings, the experiences of the CIC and will crowdsource ideas for bringing more members of civil society and investors together. The session is an opportunity for individuals to come together to share their experiences and learnings with working to hold companies to account on their actions and commitments to ethical AI.
A critical, plural and collective reimagination of the futures of AI
This session opens with an introductory presentation by members of the first cohort of the Master in Design for Responsible AI by Elisava and IAM. Coming from Colombia, Canada, Brazil, India, Poland, Serbia, and the UK, they will synthesise key insights from human (and non-human) rights, anti-racist, decolonial, feminist, non-Western, and queer perspectives that have been part of their learning experience, framed in the context of the ongoing climate emergency.
Afterwards, they will host and facilitate a workshop where participants will break into smaller groups to discuss and use those insights to collectively imagine alternative, inclusive and fair narratives about the futures of AI, in the form of collaborative poetic outputs. Overall, participants will practice the idea of plurality, as defined by Audrey Tang, “technology that recognizes, honors, and empowers cooperation across social and cultural differences”.
Data and Diversity in AI
How does data collection impact different communities around the world? Who gets disproportionately harmed or left out? How does it affect elections?
Why AI needs intersectional feminism - Challenging the status quo to create an equitable AI future
This interactive discussion session introduces participants to the biases present throughout the AI lifecycle and examines the potential of integrating an intersectional feminist approach into AI development and deployment. Despite significant progress in artificial intelligence, current AI systems often perpetuate existing disparities and inequalities, adversely affecting marginalised and underrepresented communities.
While efforts to create ethical, inclusive AI have been made, these approaches often fall short in addressing power dynamics and their role in perpetuating these issues. This session presents a bold vision for reimagining the AI landscape through the lens of intersectional feminist principles, leveraging feminism's long-standing expertise in challenging the status quo.
This session is a deep dive in which participants explore how societal power imbalances manifest themselves in AI systems, what the status quo and its limitations are, and how intersectional feminist principles can help create an AI-enabled future that works for all.
Outcomes: Increased awareness of the importance of intersectional feminist principles in AI. New perspectives on the challenges and opportunities of incorporating these principles into AI design and development. Collaborative exploration and engaging in reflexivity for creating a more inclusive AI landscape.
Future of algorithm auditing to build Trustworthy AI
This session explores the critical need to operationalize AI Safety and Ethics through Governance within the realm of large-scale, multimodal Generative AI models. To establish a foundation for our discussion, we'll initially question the purpose and incentive for companies to establish governance as a business objective and whether relying solely on regulation is sufficient to mitigate the risks associated with these systems.
This inquiry will set the stage for our exploration, emphasizing the significance of incorporating tactical socio-technical measures to complement endeavors in governance. Subsequently, we will delve into implementing ethics and safety for the evolving use cases of generative AI, focusing on two primary approaches: model evaluations and algorithm audits. This section will involve a detailed examination of the various types, characteristics, and limitations inherent in these approaches. In the final segment, we will discuss the practical implications of integrating these interventions, and particularly how their effective application and deployment can guide the responsible release of these models into the external world.
Harnessing plural and citizen input to create better AI democratic alignment
The premise: Artificial General Intelligence (AGI) is increasingly taking on roles once performed by humans, raising ethical issues about fairness and adherence to democratic norms. "AI alignment," ethics, and governance aim to address the risks posed by these technologies to ensure they benefit humanity and the world. In the courts, the integration of AI is growing, amplifying ethical challenges related to accuracy, bias, transparency, and accountability.
Various countries have passed legal frameworks to manage AGI's evolution and application. However, these efforts may face the "regulatory alignment problem," where laws may not adequately tackle specific AI risks or may conflict with other societal and regulatory goals.
To address these open questions about risk levels, digital rights, privacy, and legitimate representation in these new regulatory frameworks, we will ask citizens to express their fears, aspirations, and perceptions of usefulness and risk, guiding the legislator's work with genuine citizen representation.
The synopsis: This event will serve as a pilot for what is known as an "AI alignment" assembly, where participants are asked about their opinions before and throughout the event on the subjects of development, deployment (usage), and governance.
The session: In this session, we will present the organizational process behind an AI alignment assembly, a newly coined moniker for citizen dialogues about AGI. As facilitators, we will also co-present a specific project that imagines the ethical deployment of AGI systems to promote democratic themes such as public knowledge integrity on the internet, accessibility, and generally augmenting public services and access to information.
The "GenAI for Democracy Hackathon: Empowering Informed Citizenship" takes place on June 10th-11th at the University of Amsterdam. The hackathon seeks to pilot an "AI alignment assembly": a rapprochement of views, mandated by a public audience, on the inherent fissures of GenAI and AGI as a new technology, platform, catalyst, or medium.
Inclusive data for AI: focus on low resource languages representation and unbiased content
In the rapidly evolving landscape of AI, the transformative potential across industries worldwide is immense. Yet this potential is markedly hindered by limitations in the quality and diversity of the data used to train these systems. Over 90% of the data feeding into popular Large Language Models (LLMs) is in English, leaving a significant gap in digital representation for low-resource languages. This disparity not only limits AI's reach but also perpetuates biases and inequities in technological solutions. At TAUS, we are committed to addressing these challenges through our Human Language Project (HLP). Our mission is to democratize AI by enhancing data diversity and ensuring fair representation across all languages. This session aims to discuss the challenges in creating a more inclusive AI ecosystem and to share insights from our ongoing efforts.
AI as Social Facilitator? Exploring New Roles for AI for Social Debates and Multi-Stakeholder Conversations
AI is traditionally envisioned (and thus developed) as a 'thinking body': a sophisticated decision-making entity expected to provide the best expert answers, resolve conflicts, or even command our activities. In these roles, AI is either demonized or idolized.
What if we reimagined AI in a more modest yet crucial role, as a background enabling element within the social fabric, a facilitator that neutrally supports social debates and complex stakeholder conversations?
Less Talking about Responsible Tech and More Doing
Have you noticed the number of times you have seen the phrases "Responsible AI", "Responsible Tech", "AI Ethics" lately? There's a lot of attention for these words and there are a lot of people studying them, primarily researchers, academic institutions and occasionally someone from a tech company.
And how often do these concepts actually transition from (digital) paper to practice? Rarely, because the thinkers are not always the doers, or because they lack context about the people and communities impacted by their thinking...
It's time to change that. Let's build an action list together for Responsible AI/Tech and commit to doing it. What are you waiting for?
In this workshop, bring your three favorite actions for making AI/tech more responsible and inclusive. Together, we will create a toolkit and commit to making it real.
Everyone is welcome to contribute and participate. No advanced degrees or tech experience needed. Instead, bring your curiosity, your commitment to communities, and your common sense.
The Values of Voice: Mapping Community Desires around Voice AI
This workshop welcomes vocal artists, researchers, data scientists, policy makers - or just anyone with an interest in human voice(s) - to join us in reflecting on the value of the human voice within the current state of voice technologies. What do our voice(s) do? How do they travel? And to whom do they go?
Most importantly, how can understanding our values around speaking, singing, shouting, being heard, or being silent help us grasp what is at stake when voice(s) become "data" in networks, algorithms, and the hands of others?
Are you a musician wondering how your own voice data might be used in training AI voice clones or music production tools? Or someone doing open research involving voice data who wishes there was more ethically sourced data within your field? Or a policy maker trying to better understand the landscape of stakeholders around vocal data?
In this workshop we will work on creating a set of vocal values, starting from each person's individual relationship to voice(s), and try to map out how voices travel through the world and connect to others in their many possible forms.
Together we will reflect on what kinds of pathways for voice exist across AI voice technology and data collection practices, and reflect on what we really need. We will collect and map out all of these wishes and reflections as a first step towards building a concept of Vocal Values for technology.
Songs of the Living Performance
Toshi Reagon created Songs of the Living as a way to center coming together, singing, and gathering with different creative expressions to find alignment with our living world. The musical experience draws inspiration from Octavia E. Butler's "Parable of the Sower," set in the year 2024. The novel prompts contemplation of "Technology of the Living": the profound idea that we are the predecessors of the machines we engineer, setting the precedent for them.
As we grapple with the implications of AI on our lives, the legacy of The Parables Experience serves as a guiding light for the mission of Songs of the Living. This transformative event highlighted the importance of creating spaces for solidarity, where individuals can come together and ideas can thrive. From these critical dialogues emerges the Songs of the Living experience.
If you are around to join MozFest House Amsterdam this year, let me know!