Feeling pessimistic about AI's role in building a better future? Take a moment to listen to this practical and hopeful podcast from Tech Policy Press highlighting four ways AI can help build a better digital public square: https://lnkd.in/e7VrTkNn. It features Audrey Tang, Taiwan's Cyber Ambassador and former Digital Minister; Ravi Iyer, managing director of the USC Marshall School's Neely Center for Ethical Leadership and Decision Making; and Beth Goldberg, head of R&D at Jigsaw and a lecturer at the Yale School of Public Policy. Listen at the podcast site (https://lnkd.in/eVQsjVHh) or on Pocketcast at https://lnkd.in/eMtS7HJu. #techforgood #aiforgood #techandsocialcohesion
Council on Tech and Social Cohesion
Technology, Information and Internet
We are catalyzing a robust, interconnected field of technology for social cohesion.
About us
- Website: https://techandsocialcohesion.org/
- Industry: Technology, Information and Internet
- Company size: 2-10 employees
- Type: Partnership
Posts
-
Council on Tech and Social Cohesion reposted this
WEBINAR ALERT: KGI will be hosting a live discussion on March 25th digging into how algorithmic feeds can be designed to put people front and center. As legislation and litigation around algorithmic design ratchet up, this conversation will unpack KGI’s latest report, Better Feeds: Algorithms That Put People First, and provide a roadmap for how policymakers can address the design of algorithms and their potential harms. Report authors will offer concrete ideas for shifting algorithms away from attention-maximizing designs toward optimizing for long-term user value. Better designs for algorithms are possible! Join us March 25, 2025 at 11am ET! Register here: https://lnkd.in/g-SYsruY
-
Council on Tech and Social Cohesion reposted this
In the picture, smart people gathered to fix our dysfunctional public discourse. Experts in tech, dialogue, and conflict resolution came together in Berkeley, thanks to Plurality Institute and the Council on Tech and Social Cohesion for bringing us together. We showcased successful examples, such as the collaboration between Remesh, an AI-driven dialogue platform, and the Alliance for Middle East Peace - ALLMEP, which used AI to efficiently find common values, demands, and priorities among more than a hundred Israeli and Palestinian civil society members. We did it with civil society organizations. Now the quest is to do it with the public, and we need to overcome the challenges of adoption and trust while working at scale and engaging with new software like Remesh. We brainstormed and pitched different ideas, and I was honored to be top-voted by my peers to receive a grant to develop an AI voice solution that increases participation in online dialogues through a regular phone call co-moderated by AI, making participation more accessible. Wish us luck. This meeting and these people are very important. AI is developing extremely fast, and the race is unstoppable. AI reflects our data and our realities, and we need to build a better reality by building trust among people before AI starts reflecting back the worst of us. From my experience in peacebuilding, I can tell you that trust is built person by person, story by story. It requires time, education, and dialogue. AI has been great for education. Now we need good AI for large-scale dialogue. Thanks Prosocial Design Network, Google.org, Jigsaw, John Templeton Foundation, Lisa Schirch
-
We Can—and Must—Have Better Algorithms. For too long, the dominant narrative has been that algorithmic feeds are inevitably optimized for engagement at the expense of truth, mental well-being, and social cohesion. But a new report from the Knight-Georgetown Institute, Better Feeds: Algorithms That Put People First, makes it clear: platforms could offer far better feeds—ones that serve users' interests without the distortions of engagement-driven design. The report is written by leading technologists and researchers in algorithmic design and social cohesion—including Jonathan Stray, Jeff Allen, Ravi Iyer, Julia Kamin, Leif Sigerson and Aviv Ovadya—and highlights a fundamental truth: algorithmic choices are policy choices. Platforms don’t have to push outrage and misinformation; they choose to because it maximizes profit. But what if we built systems designed to prioritize relevance, reliability, and user agency instead? Our latest post explores how we can have better algorithms—ones that foster civic trust, informed communities, and social cohesion. The solutions are within reach. What we need now is the will to implement them. https://lnkd.in/geq2ZPqd
-
It's FALSE that if we don't like our default feeds, the only other choice is chronological. 'Better Feeds', a new report from the Knight-Georgetown Institute, points to ways we can shift engagement-based algorithms to give users more choice and to measure long-term engagement and well-being. Alissa Cooper Katarzyna Szymielewicz Dorota Głowacka Jonathan Stray Ravi Iyer Kristin J. Hansen #techandsocialcohesion Habibou Bako Techsocietal Caleb Gichuhi Matias Gonzalez Lena Slachmuijlder
NEW REPORT – Better Feeds: Algorithms That Put People First. As state, federal, and global policymakers grapple with how to address concerns about the link between online algorithms and various harms, KGI’s new report from a distinguished group of researchers, technologists, and policy leaders offers detailed guidance on improving the design of the algorithmic recommender systems that shape billions of users’ online experiences. Drawing on the latest research documenting these harms and evidence demonstrating the effectiveness of alternative design approaches, this guide can help shift platform recommendation systems away from attention-maximizing designs toward optimizing for long-term user value and satisfaction. https://bit.ly/3QzxVzq This report is the product of an incredible group of expert authors: Alex Moehring, Alissa Cooper, Arvind Narayanan, Aviv Ovadya, Elissa R., Jeff Allen, Jonathan Stray, Julia Kamin, Leif Sigerson, Luke Thorburn, Matt Motyl, Ph.D., Motahhare Eslami, Nadine Farid Johnson, Nathaniel Lubin, Ravi Iyer, and Zander A. The Problem: Some platforms optimize their recommender systems to maximize predicted “engagement” – the chance that users will click, like, share, or stream a piece of content. This design aligns well with the business interests of tech platforms monetized through advertising, and it has been linked to a range of individual and societal harms, including the spread of low-quality or harmful information, reduced user satisfaction, problematic overuse, and increased polarization. A False Choice: In policy circles, chronological feeds and blanket bans on personalization are common go-to solutions to these concerns, but they have important limitations and unintended consequences and can reward spammer behavior. They fail to take advantage of better designs already in existence that put people’s interests front and center. The Path Forward: Platforms and policymakers can help address the harms associated with recommender systems while preserving their potential to enhance user experiences and societal value. There is a clear path forward: designing algorithms to promote long-term user value instead of short-term engagement. The report outlines how policymakers and product designers can help make this happen by promoting detailed transparency, giving users meaningful choices and better defaults, and assessing long-term impacts of design changes. Learn More: Better Feeds serves as a roadmap and how-to guide for policymakers and technology companies interested in creating algorithmic systems that put users' long-term interests front and center. Read the report: https://bit.ly/3QzxVzq
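To make the design shift described above concrete, here is a minimal sketch (not taken from the report; the signal names, weights, and scoring functions are illustrative assumptions) contrasting a ranker that scores items purely on predicted short-term engagement with one that blends in long-term user-value signals:

```python
from dataclasses import dataclass

@dataclass
class Item:
    # Hypothetical per-item model predictions; real systems use many more signals.
    p_click: float            # predicted probability of a click, like, or share
    p_report: float           # predicted probability the user reports or hides the item
    survey_value: float       # predicted "was this worth your time?" survey response (0-1)
    predicted_regret: float   # predicted longer-term dissatisfaction signal (0-1)

def engagement_score(item: Item) -> float:
    """Attention-maximizing design: rank purely on predicted short-term engagement."""
    return item.p_click

def long_term_value_score(item: Item) -> float:
    """Illustrative alternative: blend in signals of long-term user value.
    The weights are arbitrary assumptions, not values from the report."""
    return (
        0.3 * item.p_click
        + 0.5 * item.survey_value
        - 0.4 * item.p_report
        - 0.3 * item.predicted_regret
    )

def rank(items: list[Item], score) -> list[Item]:
    """Order a candidate feed by the chosen scoring function, highest first."""
    return sorted(items, key=score, reverse=True)

if __name__ == "__main__":
    candidates = [
        Item(p_click=0.9, p_report=0.2, survey_value=0.2, predicted_regret=0.7),  # outrage bait
        Item(p_click=0.4, p_report=0.0, survey_value=0.8, predicted_regret=0.1),  # useful post
    ]
    # The engagement-only ranker surfaces the outrage bait first;
    # the value-weighted ranker surfaces the useful post first.
    print([round(engagement_score(i), 2) for i in rank(candidates, engagement_score)])
    print([round(long_term_value_score(i), 2) for i in rank(candidates, long_term_value_score)])
```

The point of the sketch is simply that the ranking objective is a design choice: swapping the scoring function changes which content rises to the top, which is the kind of lever the report argues policymakers and product teams should scrutinize.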
-
AI chatbots aren’t addictive because humans are lazy, lonely, or bored—they’re addictive because they were designed to be. Just like social media, the engagement-driven business model behind these AI systems optimizes for maximum time spent, emotional entanglement, and dependency—even at the cost of user well-being. More from the Center for Humane Technology in support of the legal action against Character.AI. https://lnkd.in/e9TWF3UU Ravi Iyer Nathanael Fast Camille Carlton #techandsocialcohesion Lena Slachmuijlder Kristin J. Hansen
-
Artificial intelligence is reshaping our societies at an unprecedented pace, raising urgent questions. A new piece from Tech Policy Press—"AI at the Brink: Preventing the Subversion of Democracy"—explores how AI-driven financial systems, legal automation, and disinformation campaigns could destabilize governance if left unchecked. At the Council for Technology and Social Cohesion, we envision a world where tech innovators and peacebuilders collaborate to design technology that strengthens social bonds and enables collective problem-solving. The risks outlined in this article highlight the need for AI that fosters trust, social cohesion, and democratic resilience—not one that deepens divisions and fuels instability. Technology can be a force for good—saving lives, protecting human dignity, and empowering high-risk communities—but only if we ensure its development prioritizes transparency, accountability, and social well-being. We are catalyzing a robust, interconnected field of technology for social cohesion. Read the full article here: https://lnkd.in/g5ae6nPh What do you think? How can we ensure AI strengthens democracy rather than undermining it? Join the conversation in the comments. #AI #Democracy #TechPolicy #SocialCohesion #ResponsibleTech
-
Utah's Digital Choice Act (HB 418): Social media platforms are designed to maximize engagement, often at the expense of user well-being. 72% of teens feel manipulated into spending more time online than they want, and younger users face even greater risks—11% of 13-15-year-olds report being bullied, while 19% encounter unwanted explicit content weekly. The Utah Digital Choice Act (HB 418) puts consumers back in control by:
- Allowing users to transfer their data, contacts, and content between platforms
- Encouraging competition based on safety, privacy, and user experience
- Giving individuals greater autonomy over their online interactions
As Dr. Ravi Iyer, co-chair of the Council on Tech and Social Cohesion and former Meta employee, testified: “Social Media companies will not always make the right decision and when they fail to put users first, users should have the choice to move to a platform that better serves their needs. The Digital Choice Act will give them that choice.” The Utah Digital Choice Act has successfully passed the House and is currently under consideration in the Senate, with a hearing taking place this Friday.
-
A surprising paradox: despite their soaring popularity, platforms like TikTok and Instagram leave many users feeling worse off. New research reveals that users would pay an average of $24 to eliminate TikTok and $6 to remove Instagram entirely. This post from our Substack dives into how addictive design traps create a cycle of dependency—even when most users secretly wish for a digital detox. Discover how these insights could reshape our understanding of social media and pave the way for a healthier, more balanced online future. https://lnkd.in/gDTKkepq