Artificial intelligence is reshaping our societies at an unprecedented pace, raising urgent questions. A new piece from Tech Policy Press—"AI at the Brink: Preventing the Subversion of Democracy"—explores how AI-driven financial systems, legal automation, and disinformation campaigns could destabilize governance if left unchecked.

At the Council for Technology and Social Cohesion, we envision a world where tech innovators and peacebuilders collaborate to design technology that strengthens social bonds and enables collective problem-solving. The risks outlined in this article highlight the need for AI that fosters trust, social cohesion, and democratic resilience—not one that deepens divisions and fuels instability.

Technology can be a force for good—saving lives, protecting human dignity, and empowering high-risk communities—but only if we ensure its development prioritizes transparency, accountability, and social well-being. We are catalyzing a robust, interconnected field of technology for social cohesion.

Read the full article here: https://lnkd.in/g5ae6nPh

What do you think? How can we ensure AI strengthens democracy rather than undermines it? Join the conversation in the comments.

#AI #Democracy #TechPolicy #SocialCohesion #ResponsibleTech
Council on Tech and Social Cohesion
Technology, Information and Internet
We are catalyzing a robust, interconnected field of technology for social cohesion.
About us
- Website: https://techandsocialcohesion.org/
- Industry: Technology, Information and Internet
- Company size: 2-10 employees
- Type: Joint venture
Posts
-
Utah's Digital Choice Act (HB 418): Empowering Consumers

Social media platforms are designed to maximize engagement, often at the expense of user well-being. 72% of teens feel manipulated into spending more time online than they want, and younger users face even greater risks—11% of 13-15 year-olds report being bullied, while 19% encounter unwanted explicit content weekly.

The Utah Digital Choice Act (HB 418) puts consumers back in control by:
- Allowing users to transfer their data, contacts, and content between platforms
- Encouraging competition based on safety, privacy, and user experience
- Giving individuals greater autonomy over their online interactions

As Dr. Ravi Iyer, co-chair of the Council on Tech and Social Cohesion and former Meta employee, testified: "Social Media companies will not always make the right decision and when they fail to put users first, users should have the choice to move to a platform that better serves their needs. The Digital Choice Act will give them that choice."

The Utah Digital Choice Act has passed the House and is currently under consideration in the Senate, with a hearing taking place this Friday.
-
A surprising paradox: despite their soaring popularity, platforms like TikTok and Instagram leave many users feeling worse off. New research reveals that users would pay an average of $24 to eliminate TikTok and $6 to remove Instagram entirely. This post from our Substack dives into how addictive design traps create a cycle of dependency—even when most users secretly wish for a digital detox. Discover how these insights could reshape our understanding of social media and pave the way for a healthier, more balanced online future. https://lnkd.in/gDTKkepq
-
Council on Tech and Social Cohesion reposted this
Our LLMs and Public Discourse event, co-hosted by the Council on Tech and Social Cohesion, will be livestreamed. Join us Thursday, February 27th @ 6:30pm PT / 9:30pm ET: https://lnkd.in/ef5bURxb
-
The AI Action Summit in Paris revealed deep divisions among global leaders. Some push for rapid innovation, while others stress the need for guardrails. The future of AI isn't just about technology—it's about politics. Decisions made today will shape whether AI deepens power imbalances or fosters trust, inclusion, and social cohesion. Read our latest analysis from Lena Slachmuijlder. #AI #technology
-
Neutral AI is a myth. In high-conflict settings, peacebuilders use multi-partiality—engaging all perspectives fairly without treating them as morally equivalent. AI should follow the same model, not aiming for false neutrality but fostering constructive engagement. AI will inevitably shape human perception—but we must decide whether it fuels conflict or facilitates understanding. If done right, AI can strengthen social cohesion and help societies navigate contentious issues with nuance. If done wrong, it risks deepening divides and entrenching misinformation. How do we design AI that reduces harm and fosters engagement with complexity?