Aula Fellowship for AI Science, Tech, and Policy

Think tank

Montreal, Quebec · 548 followers

Workshops on AI: Think Tank - Facilitating the responsible proliferation of A.I.

About us

We run workshops on AI literacy and strategy, worldwide. YouTube: https://www.youtube.com/@theaulafellowship

The Aula Fellowship for AI Science, Tech, and Policy has come together with the common goal of ensuring the responsible proliferation of AI. We come from many disciplines and from around the world. Each Fellow is fully independent, and no Fellow speaks on behalf of the others.

Decision makers in AI science, tech, and policy are trying to answer a set of hard problems. These are problems that affect all of society, and so the answers must come from all of society. Therefore, the principal Fellowship project, one which involves all of the Aula Fellows, is to get everyone's voice to the table on AI questions. Our tools for this are workshops and other events to foster conversations, white papers and consultations, research, and engagement with decision makers. We also offer research for impact as a service.

As we understand it, we are stronger when we work together. If you are also working on this common mission, then we are already fellows.

Website
www.theaulafellowship.org
Industry
Think tank
Company size
11-50 employees
Headquarters
Montreal, Quebec
Type
Nonprofit
Founded
2023
Specialties
A.I., L'I.A., Recherche, Research, Social Strategy, Stratégie sociale, Policy, Politique publique, Advocacy, Droits de la personne, Human Rights, Tech4Good, and Tech Stewardship

Locations

Employees at Aula Fellowship for AI Science, Tech, and Policy

Updates

  • Newsletter Launch! Friend of the Aula Fellowship Susan Furnell invites you to join in the conversation on AI: AI RESET: Shaping AI's Path & Future. Help her out and take the poll, please.

    View Susan Furnell's profile

    AI RESET | Who Controls AI, Controls the Future

    AI's Future Is Being Captured, and You're Not at the Table. AI isn't just advancing; it's being shaped by those who control it. The real question is not just what AI can do, but who it will serve. Training costs for large models like GPT are soaring. DeepSeek's innovations help, but not as much as claimed. Capital fuels Big Tech, while governments scramble to regulate: are they protecting innovation or monopolies? Big Tech knows how to capture and dodge regulation. Errors are slowing adoption, raising doubts about AI's real economic returns. Context and sector matter, but if costs and errors remain high, does AI deliver value, or is a hype correction coming? Small, local language models offer cost savings and privacy but lack power for mass-market applications. Meanwhile, AI's trajectory is shaped by geopolitics, from supply chains to state-backed models. A university expert I spoke with raised a critical question: what if we're not on track to solve AI's error problem at an acceptable cost? Fixing errors demands more compute, energy, and investment, but at what point do costs outweigh benefits? Could AI hit a dot-com-style crash that resets expectations? Geopolitically, the West can't afford that reset. China's AI expansion is state-backed, while the West relies on private investment. If markets stumble, does AI power shift permanently? This debate isn't just about AI's potential; it's about control, rules, and who benefits. This is why I'm launching AI Reset next week, a newsletter and initiative to challenge AI's current trajectory and explore how we shape its future. AI will define the economy, global power, and the way we work. We need a debate that isn't dictated by Big Tech. If we don't shape AI's future, someone else will. If this matters to you, help push this conversation forward: vote, repost, and tag key voices who should be weighing in. And comment below with your opinion in the debate. Follow me for AI Reset's launch next week.
#AI #ArtificialIntelligence #AIReset #GlobalEconomy #BigTech #TechPolicy #Geopolitics #Automation #AIRegulation #DecentralizedAI #OpenSourceAI #FutureOfWork


  • Connecting the dynamics of Technology, Social Justice, and Public Interest empowers lawyers and legal scholars. How? Why does it matter? Human Rights Lawyer, educator, Aula Fellow, and all-round nice person Jake Okechukwu Effoduh explains.

    View Jake Okechukwu Effoduh's profile

    Assistant Professor at Lincoln Alexander School of Law, Toronto Metropolitan University.

    Sharing my introduction to the Big Tech, Social Justice and Public Interest class here at the Lincoln Alexander School of Law This course engages law students with cutting-edge perspectives on platform regulation, content moderation, algorithmic surveillance, gig economy classifications, and the pioneering regulatory models of big tech governance. Through theoretical insights, case studies, and some experiential learning, students are invited to gain practical skills for shaping the future of responsible tech regulation.

  • Thanks, Luiza Jarovsky!

    View Luiza Jarovsky's profile
    Luiza Jarovsky is a LinkedIn Influencer

    Co-founder of the AI, Tech & Privacy Academy, LinkedIn Top Voice, Ph.D. Researcher, Polyglot, Latina, Mother of 3. Join our AI governance training (1,100+ participants) & my weekly newsletter (55,000+ subscribers)

    AI governance is hiring: here are 40 AI governance job openings from the last few days. Subscribe to our job alert here: https://lnkd.in/en4vpzZq
    - TRUSTEQ GmbH - Consultant AI Governance
    - PwC - AI Governance Manager
    - American Express - Director, EU AI Governance
    - JPMorgan Chase - AI Governance & Regulatory Manager
    - American Century Investments - AI Governance Lead
    - RandomTrees - AI Governance Lead
    - Charles Russell Speechlys - Senior Data & AI Governance Manager
    - Standard Chartered - Associate Director, WRB AI Governance
    - Capco - Data Protection & AI Governance Advisor
    - AXA - AI Governance Lead
    - Dexian - AI Governance Analyst
    - TruStage - Lead, AI Governance
    - Novo Nordisk - AI Governance Lead
    - Fisher Investments - AI Governance Program Manager
    - EY - Digital Risk, AI Governance & Compliance Senior Manager
    - AstraZeneca - Enterprise AI Governance Policy Manager
    - Sutter Health - AI Governance Analyst
    - GEICO - Senior/Manager, Model & AI Governance
    - UNIQA Insurance Group - AI Governance Expert
    - Relyance AI - Manager, Software Engineer, AI Governance
    - QuantPi - AI Policy Working Student
    - Platinum Technologies - AI Policy Advisor
    - Genentech - Data & AI Policy Analyst
    - Scale AI - AI Policy Lead
    - Accenture - Global Data & AI Counsel
    - NXP Semiconductors - Senior Legal AI Counsel
    - Woxsen University - Professor, AI Ethics and Governance
    - Sony - AI Ethics Research Scientist
    - Jio-bp - Head of COE for Artificial Intelligence
    - Sony AI - HCI Intern (AI Ethics Team)
    - Koniag Government Services - AI Ethics Project Manager
    - Deloitte - Junior Data and AI Ethics
    - Vodafone - Data Privacy and Responsible AI Manager
    - Accenture - Responsible AI Advisor
    - Prudential Financial - Lead, Global Responsible AI
    - PepsiCo - Responsible AI Enablement Manager
    - Afarax - Responsible AI Consultant
    - Accenture - Responsible AI Advisor Manager
    - Cohere - Technical Governance and Responsible AI Researcher
    - Cotiviti - Director Risk and Responsible AI
    To receive our weekly AI governance job alert, subscribe to the job board (link below).
Subscribe to my newsletter and join 55,700+ people who never miss my AI governance updates (link below) #AI #AIGovernance #AIJobs #LinkedInJobs #JobSearch #ResponsibleAI

  • Consultancy reporting on AI isn't doing enough. New Research Alert: Easy to Read, Easier to Write: The Politics of AI in Consultancy Trade Research. How well do we understand AI use in business and policymaking? Consultancy reports are among the most widely read sources on AI implementations, and for good reason. They are well written, engaging, pertinent, and timely. At the same time, our research finds major gaps in evidence quality, transparency, and breadth of coverage. Our latest study, published open access in Cogent Social Sciences (Taylor & Francis Group), examines consultancy reporting on AI since the launch of GPT and highlights key strengths and weaknesses.
    What consultancy reports do well:
    - Speed. Rapid production keeps them relevant.
    - Client-focused. Tailored to business needs.
    - Actionable. Clear, practical recommendations.
    Where they fall short:
    - Self-referential. Often cite their own surveys rather than independent data.
    - Transparency issues. Data collection methods can be unclear, and reports often include consultancy services in the strategic recommendations, obscuring the line between strategic report and sales pitch.
    - Limited scope. Mostly focus on large firms, leaving SMEs unaddressed. Little or no consideration of well-known patterns of abuse in the AI labour force, nor of environmental, energy infrastructure, and second-order consequences.
    The takeaway? We call for more collaboration between business consultants, management scientists, and policymakers to produce reliable, well-rounded insights on AI in business. We believe that the function of business is to provide opportunities for people to work together to build great things. Our practical experience is that consultants have a vital role in the endeavour to make AI tools a part of a bright future for everyone.
    Read the full article here and please share: https://lnkd.in/eA_WYnqW
    Citation: Mackenzie, T., Radeljić, B., & Heslinga, O. (2025). Easy to read, easier to write: the politics of AI in consultancy trade research. Cogent Social Sciences, 11(1), 2470368. DOI: 10.1080/23311886.2025.2470368
    About the Aula Fellowship: Coalitions are a powerful force for change. Co-authors Tammy Mackenzie, Branislav RADELJIC, PhD SFHEA and Olivia Heslinga are Green Hall Aula Fellows. This research is part of the work we do at The Aula Fellowship for AI Science, Tech, and Policy. Our mission is to get everyone to the conversation on AI. We are available to discuss this research and your work. #AI #BusinessStrategy #Policy #Management #Consulting #AIinBusiness #AIReporting #Research #AulaFellowship #rand #pwc #accenture #deloittes #mckinsey #ey

  • Aula Fellowship for AI Science, Tech, and Policy reposted this

    View Valentine Goddard's profile

    Lawyer & artist. Pioneering AI Governance with Civic Engagement & the Arts | Trusted Policy Advisor to Canada & UN | Expert in Ethical AI & Digital Democracies. Consulting/public speaking. Global network&exp. ペラペラFR/EN.

    It's quite timely that our new Prime Minister announced a new "Culture and Identity" minister! Digital and cultural sovereignty are cardinal to our identities, and to the freedom to live by our values. It is why our agenda focuses on the creative economy and the implications of generative AI for the arts and cultural sector. Expect an upsurge of disinformation coming our way, from our neighbours this time, and lots of tough decisions to be made on how we can adopt AI and be innovative WITHOUT giving all our data to American Big Tech. How do we not completely isolate, and maintain, or build new, partnerships in creative economies? In this short blog, I explain how I define "creativism" in this context, and I do believe it might be among the most important tools we have to navigate this chaos. I ask business leaders, economists and policy makers, especially those who aren't used to thinking out of the box and might not be thinking of the arts as key, to come and listen attentively March 27th. By the way, I can help your business organize workshops on a digital governance strategy and AI policy. It's no longer a luxury! #creativeeconomy #identity #culture #canada #digitaleconomy #knowledge #disinformation #art #ai Kelly Wilhelm Cultural Policy Hub at OCAD U Inspirit Foundation Michael Power Teresa Scassa Samuel La France Innovation, Science and Economic Development Canada Patrimoine canadien -- Canadian Heritage

  • Aula Fellowship for AI Science, Tech, and Policy reposted this

    View Rachel Coldicutt's profile

    Careful and community tech. Founder and non-exec.

    POLITICO readers may have caught sight of this today: a brief mention of some questions that a small group of technologists, researchers, and policy people, informally calling ourselves the Sycamore Collective, drafted yesterday morning in response to Keir Starmer's announcements on AI in government. These are the questions we shared: Labour's escalating claims around the potential use of AI in government are borne out of relatable constraints and welcome ambition for the UK. But we are deeply concerned that in an effort to increase efficiency and find savings, the UK could end up wasting hundreds of millions of pounds of public money in failed projects. This risks damaging public services, and ultimately holding the UK back, rather than taking the opportunity to make them fit for the future. It might sound like a silver bullet, but we can't just shoehorn AI into existing services. Here are 10 questions that we believe the Government needs to demonstrate it has answers to before signing away millions of pounds of taxpayers' money with no break clause on the contracts if things go wrong.
    1. Are the technology and the data ready, and if not, how much will it cost to make them ready before AI can be used?
    2. Are the needs of the public or public servants expected to benefit from new AI technology sufficiently well understood?
    3. Are staff ready and trained to use AI technologies? If not, how much will this cost and how long will it take?
    4. Has proper evaluation been done of the jobs required to make use of AI, as well as those that can be cut?
    5. By what process will the impact of AI use in government be evaluated?
    6. What mechanisms exist to make corrections when things go wrong?
    7. What processes exist for frontline workers to contribute expertise and raise concerns about the impact of AI use in government?
    8. What steps are being taken to avoid locking the UK into expensive contracts which cannot be easily undone if the technology doesn't deliver?
    9. What parliamentary oversight will there be of new AI systems which make decisions affecting the quality of life of people in the UK?
    10. How will you trial and gradually introduce changes to avoid the risk of going too far too fast and losing valuable expertise?
    The Sycamore Collective consists of concerned citizens with experience working across the technology sector, government, academia and civil society. As part of a bigger group of contributors and collaborators, it was great to work with Anna Dent, Simon Cross (PHD), Peter Wells, Julian Tait, Jonathan Tanner, Gordon Guthrie and others to frame these questions.

    • GOVERNMENT
    AI-VE GOT SOME QUESTIONS: "The power of government has gone," the prime minister declared yesterday in Hull. Part of his remedy is greater use of AI, which he says will make the government more responsive and efficient. But that of course creates another power problem: how do we stay in control of the technology, and is it worth it?
    When tech takes over: A group of people working in government, academia, civil society and tech, calling itself The Sycamore Collective (newly formed by Rachel Coldicutt) put out 10 questions for the government to answer on that matter yesterday. They range from procurement and skills to oversight and data.
    Similarly: The Ada Lovelace Institute released a briefing today on lessons from studying AI use in the public sector. The list is long but the main learnings are worryingly basic. They include a lack of clarity about what "AI" is and where it is being deployed in the public sector.
    Aren't you the boss? Starmer's Hull speech was extraordinary i
  • Aula Fellowship for AI Science, Tech, and Policy reposted this

    View David Evan Harris's profile

    Business Insider AI 100 | Tech Research Leader | AI, Misinfo, Elections, Social Media, UX, Policy | Chancellor's Public Scholar @ UC Berkeley

    Brilliant paper! "Pitfalls of Evidence-Based AI Policy." This challenges recent calls to wait to regulate AI because of a lack of evidence. This "deny and delay playbook" has been used before by big tobacco and big oil in their efforts to fight regulation. To be clear, we have plenty of evidence of AI harms across numerous domains, from AI that encourages children to commit suicide or kill their parents, to deceptive deepfakes, to scams and fraud, to discrimination and bias. There is no need to wait! But the authors go further than calling out historical parallels: they argue that there is also an inherent paradox in these calls to wait for more evidence. Without deliberate policymaking, we will actually have a much harder time collecting evidence, so waiting on policymaking until some arbitrary line of "adequate" evidence is obtained actively reduces the chances that such evidence is ever gathered. They provide a list of 15 different categories of AI policy that would support evidence collection. There's also a fantastic analysis here of how AI companies generate a disproportionately large amount of academic research in the field, and of how numerous people who have called for "evidence-based policy" have undisclosed relationships with AI companies. So the next time you hear someone say "we shouldn't regulate AI until we have more evidence," remember that this line has been used in bad faith before, and that we need policies to help gather evidence. From the opening: "At this very moment, I say we sit tight and assess." – President Janie Orlean, Don't Look Up.
    Abstract: "Nations across the world are working to govern AI. However, from a technical perspective, there is uncertainty and disagreement on the best way to do this. Meanwhile, recent debates over AI regulation have led to calls for 'evidence-based AI policy' which emphasize holding regulatory action to a high evidentiary standard. Evidence is of irreplaceable value to policymaking. However, holding regulatory action to too high an evidentiary standard can lead to systematic neglect of certain risks. In historical policy debates (e.g., over tobacco ca. 1965 and fossil fuels ca. 1985) 'evidence-based policy' rhetoric is also a well-precedented strategy to downplay the urgency of action, delay regulation, and protect industry interests. Here, we argue that if the goal is evidence-based AI policy, the first regulatory objective must be to actively facilitate the process of identifying, studying, and deliberating about AI risks. We discuss a set of 15 regulatory goals to facilitate this and show that Brazil, Canada, China, the EU, South Korea, the UK, and the USA all have substantial opportunities to adopt further evidence-seeking policies." Massive thanks to the authors, Stephen Casper of CSAIL MIT, David Krueger of Mila - Quebec Artificial Intelligence Institute, and Dylan Hadfield-Menell, also of Massachusetts Institute of Technology CSAIL. #AI #AIPolicy

  • View Allen Munoriyarwa's profile

    Media, Journalism and Surveillance Researcher

    After months of hard work, I'm happy to announce that our article is finally live! This project means a lot to the team. It is available on open access here: https://lnkd.in/dmKJzGC3. Many thanks to Dr. Lyton Ncube, UB, Dr. Albert Chibuwe, MSU, Ms. Refilwe Whitney Mofokeng, TUT, and Mrs. Antonette Kakujaha-Murangi, UNAM, for their immense hard work. What a productive collaboration!

  • For your reading list? Please share their original post so that people can hear about it.

    View Yonah Welker's profile

    Public Technologist, Vis. Lecturer / Prev. Ministry Advisor, Tech, Science Envoy / @MIT @EU Commission projects / EU-MENA-US, Atlantic

    I'd love to congratulate Emma Ruttkamp-Bloem and Seydina M. Ndiaye (United Nations AI advisory body) and our peers and colleagues who contributed to the book "Trustworthy AI - African Perspectives", which helps to further forge the tech and policy sovereignty of the region. Not only the economy and technology, but policy too is becoming predominantly multipolar, driven by regional and historical contexts that are deeply connected with dataset accuracy and representation and with how models and systems are developed, which plays a critical role in every area from healthcare to the public sector. The book explores a normative African perspective on AI development, deployment, and governance, covering trustworthiness, resource allocation, autonomous systems, health management, road safety and cities, data justice, and many other areas. The book: https://lnkd.in/g6ijhBGK Acknowledgements: Dr. Kutoma Wakunuma, Damian Eke, Simisola Akintoye, George Ogoh, PhD, Bernd Carsten Stahl, Michael Zimba, Angella Ndaka Ph.D., Maha Jouini, Ayomide Owoyemi, MD, PhD, Muhammad Adamu, Memunat Ajoke Ibrahim, Makuochi S. Nkwo PhD, FHEA, MBCS, Dennis Munetsi, Barbara Glover, Joseph AKINYEMI, Eugeniah Arthur Ph.D., Kehinde Aruleba, Khadijat Ladoja, Abigail Oppong, Harriet Ratemo, Elnathan Tiokou and others. #ai #ethics #policy

  • Aula Fellowship for AI Science, Tech, and Policy reposted this

    View Ley (Ashley) Muller's profile

    Senior AI product manager | Women in AI Norway & AI Governance Nordics | PhD in addiction treatment | trained in non-violent communication | Build high-performing teams through care and radical inclusivity.

    Getting more involved with the Women in AI Norway and the stunning people who drive it: Rialda Spahić, Dianne Christine Geronimo, Rosangela Sarno +++ ... and having more and more important conversations with people from the #GlobalMajority (#globalsouth) about the need for AI networks that center their voices, needs, and perspectives. Can my network help grow this list? Drop a comment tagging high-quality AI networks and groups that connect people in the Global South, and I will add them and flag what kind of connections you're looking for! (I will do my absolute best to connect you.)
    1. AI FOR DEVELOPING COUNTRIES FORUM - a fantastic networking arena. Next physical meeting in June! Link to an earlier post in the comments.
    2. Artificial Intelligence for Development (AI4D) - highly recommended by Alice Liu.
    3. AI & Equality - exposed me to excellent critical voices and hands-on training by South African researchers.
    4. Empire Partner Foundation - yet another in South Africa!
    5. What about the spaces in Indonesia and Malaysia, Hesti Aryani & Endry Lim Zhen Wen?
    6. The Distributed AI Research Institute (DAIR) - an important group to follow. "Creating spaces for communities to question elites".
    7. The Algorithmic Justice League - streams lots of great online events.
    8. AI Global South Summit - hoping there will be 2025 conferences?
    9. Aula Fellowship for AI Science, Tech, and Policy with Tammy Mackenzie - recommended by Branislav RADELJIC, PhD SFHEA.
    10. Datasphere Initiative w/ Lorrayne Porciuncula & Sophie Tomlinson.
    11. Global Center on AI Governance w/ Rachel Adams
    12. GobLab UAI - great rec by Paloma Baytelman
    13. AI Now Institute - Mrinalini Luthra
    14. Deep Learning Indaba - "goal of Africans being active shapers and owners"
    15. Check out Raymond Sun's open-source AI regulation tracker - link in comments
    16. Data Science Africa (DSA) - Morine Amutorine
    Without #globalsouth-centered spaces, it unfortunately looks very easy for interest groups to exploit people who are pivoting to the AI field, through the promise of access to elite AI spaces, often dominated by the US and marketed as otherwise inaccessible. We need to join - and make! - community-based, distributed spaces for people to grow and share their AI skills (from dev to policy analysis), impact and steer AI use and policy, and, as DAIR puts it, create the technological future that WE want.

    View Loukas Tzitzis's profile

    AI, Cybersecurity, Fintech, MedTech, Web3, Digital | CEO, Entrepreneur, Board Member & NED | Former Amdocs, Tech Mahindra, Gentrack Leadership roles.

    #AIFODGeneva2025: Setting priorities collaboratively! During the #AIFOD summit at the United Nations Office at Geneva, I got a chance to ask a question to the wonderful participants of Session 7 (Late-mover Advantages in AI Development). The panel was excellent in all aspects! Dr. Allison Fisher provided excellent moderation, and the panelists gave wonderful, thought-provoking, and insightful input. Many thanks to Prof. Dr. Michael Gerlich, Niamh Peren (coolest intro ever, reminded me of my days working for a #NZ company), Julie Gunderson, Tommie Edwards FRSA, and Prof. Dr. Helena Liebelt. My stance before the summit was always about how "can we help developing nations close the gap, fast and cheaply, in #artificialintelligence?" However, there are so many angles to consider when thinking of the "late mover advantage":
    - #ANI vs. #AGI vs. #ASI
    - Geographical needs and related use cases
    - Innovation at which layer: software infra (#Tech), #Platform, #Apps?
    - Legislation, regulation, physical infra, energy
    - #Investment required and the public-private ecosystem for acceleration
    ... and that is just a few! I felt compelled to ask this question... Enjoy!
