Aula Fellowship for AI Science, Tech, and Policy


Think tank

Montreal, Quebec · 553 followers

Workshops on AI: Think Tank - Facilitating the responsible proliferation of A.I.

About us

We run workshops on AI literacy and strategy, worldwide. YouTube: https://www.youtube.com/@theaulafellowship

The Aula Fellowship for AI Science, Tech, and Policy has come together with the common goal of ensuring the responsible proliferation of AI. We come from many disciplines and from around the world. Each Fellow is fully independent, and no Fellow speaks on behalf of the others.

Decision-makers in AI science, tech, and policy are trying to answer a set of hard problems. These problems affect all of society, so the answers must come from all of society. The principal Fellowship project, one that involves all of the Aula Fellows, is therefore to bring everyone's voice to the table on AI questions. Our tools for this are workshops and other events that foster conversation, white papers and consultations, research, and engagement with decision-makers. We also offer research for impact as a service.

As we understand it, we are stronger when we work together. If you are also working on this common mission, then we are already fellows.

Website
www.theaulafellowship.org
Industry
Think tank
Company size
11-50 employees
Headquarters
Montreal, Quebec
Type
Nonprofit
Founded
2023
Specialties
A.I., L'I.A., Recherche, Research, Social Strategy, Stratégie sociale, Policy, Politique publique, Advocacy, Droits de la personne, Human Rights, Tech4Good, and Tech Stewardship

Locations

Employees at Aula Fellowship for AI Science, Tech, and Policy

Updates

  • Hello everyone on this beautiful Saturday morning from Montreal. Delighted to invite you to take some time to view the in-depth conversation between Tammy Mackenzie, our Director of the Aula Fellowship for AI Science, Tech, and Policy, and Renee Black, founder and lead at GoodBot. Renee is based in Vancouver, Canada. It warms my heart to see the amazing engagement and work done from coast to coast to coast in our amazing country in the field of #ai, focused on #bot #technology. We look forward to receiving your comments and feedback on the work that we're doing to hopefully improve this still partly undiscovered field with both depth and an innovative balance. With my thanks for your attention and interest! https://lnkd.in/e2iTEvcm on YouTube

  • Newsletter Launch! Friend of the Aula Fellowship Susan Furnell invites you to join the conversation on AI: AI RESET: Shaping AI’s Path & Future. Help her out and take the poll, please.

    View Susan Furnell's profile

    AI RESET | Who Controls AI, Controls the Future

    AI’s Future Is Being Captured—And You’re Not at the Table

    AI isn’t just advancing—it’s being shaped by those who control it. The real question is not just what AI can do, but who it will serve. Training costs for large models like GPT are soaring. DeepSeek’s innovations help, but not as much as claimed. Capital fuels Big Tech, while governments scramble to regulate—are they protecting innovation or monopolies? Big Tech knows how to capture and dodge regulation. Errors are slowing adoption, raising doubts about AI’s real economic returns. Context and sector matter, but if costs and errors remain high, does AI deliver value—or is a hype correction coming? Small, local language models offer cost savings and privacy but lack power for mass-market applications. Meanwhile, AI’s trajectory is shaped by geopolitics, from supply chains to state-backed models.

    A university expert I spoke with raised a critical question: what if we’re not on track to solve AI’s error problem at an acceptable cost? Fixing errors demands more compute, energy, and investment—but at what point do costs outweigh benefits? Could AI hit a dot-com-style crash that resets expectations? Geopolitically, the West can’t afford that reset. China’s AI expansion is state-backed, while the West relies on private investment. If markets stumble, does AI power shift permanently?

    This debate isn’t just about AI’s potential—it’s about control, rules, and who benefits. This is why I’m launching AI Reset next week—a newsletter and initiative to challenge AI’s current trajectory and explore how we shape its future. AI will define the economy, global power, and the way we work. We need a debate that isn’t dictated by Big Tech. If we don’t shape AI’s future, someone else will. If this matters to you, help push this conversation forward: vote, repost, and tag key voices who should be weighing in. And comment below with your opinion in the debate. Follow me for AI Reset’s launch next week.

    #AI #ArtificialIntelligence #AIReset #GlobalEconomy #BigTech #TechPolicy #Geopolitics #Automation #AIRegulation #DecentralizedAI #OpenSourceAI #FutureOfWork


  • Connecting the dynamics of Technology, Social Justice, and Public Interest empowers lawyers and legal scholars. How? Why does it matter? Human Rights Lawyer, educator, Aula Fellow, and all-round nice person Jake Okechukwu Effoduh explains.

    View Jake Okechukwu Effoduh's profile

    Assistant Professor at Lincoln Alexander School of Law, Toronto Metropolitan University.

    Sharing my introduction to the Big Tech, Social Justice and Public Interest class here at the Lincoln Alexander School of Law. This course engages law students with cutting-edge perspectives on platform regulation, content moderation, algorithmic surveillance, gig economy classifications, and the pioneering regulatory models of big tech governance. Through theoretical insights, case studies, and some experiential learning, students are invited to gain practical skills for shaping the future of responsible tech regulation.

  • Thanks, Luiza Jarovsky!

    View Luiza Jarovsky's profile
    Luiza Jarovsky is a LinkedIn Influencer

    Co-founder of the AI, Tech & Privacy Academy, LinkedIn Top Voice, Ph.D. Researcher, Polyglot, Latina, Mother of 3. Join our AI governance training (1,100+ participants) & my weekly newsletter (55,000+ subscribers)

    AI governance is hiring: here are 40 AI governance job openings from the last few days. Subscribe to our job alert here: https://lnkd.in/en4vpzZq

    TRUSTEQ GmbH - Consultant AI Governance
    PwC - AI Governance Manager
    American Express - Director, EU AI Governance
    JPMorgan Chase - AI Governance & Regulatory Manager
    American Century Investments - AI Governance Lead
    RandomTrees - AI Governance Lead
    Charles Russell Speechlys - Senior Data & AI Governance Manager
    Standard Chartered - Associate Director, WRB AI Governance
    Capco - Data Protection & AI Governance Advisor
    AXA - AI Governance Lead
    Dexian - AI Governance Analyst
    TruStage - Lead, AI Governance
    Novo Nordisk - AI Governance Lead
    Fisher Investments - AI Governance Program Manager
    EY - Digital Risk, AI Governance & Compliance Senior Manager
    AstraZeneca - Enterprise AI Governance Policy Manager
    Sutter Health - AI Governance Analyst
    GEICO - Senior/Manager, Model & AI Governance
    UNIQA Insurance Group - AI Governance Expert
    Relyance AI - Manager, Software Engineer, AI Governance
    QuantPi - AI Policy Working Student
    Platinum Technologies - AI Policy Advisor
    Genentech - Data & AI Policy Analyst
    Scale AI - AI Policy Lead
    Accenture - Global Data & AI Counsel
    NXP Semiconductors - Senior Legal AI Counsel
    Woxsen University - Professor, AI Ethics and Governance
    Sony - AI Ethics Research Scientist
    Jio-bp - Head of COE for Artificial Intelligence
    Sony AI - HCI Intern (AI Ethics Team)
    Koniag Government Services - AI Ethics Project Manager
    Deloitte - Junior Data and AI Ethics
    Vodafone - Data Privacy and Responsible AI Manager
    Accenture - Responsible AI Advisor
    Prudential Financial - Lead, Global Responsible AI
    PepsiCo - Responsible AI Enablement Manager
    Afarax - Responsible AI Consultant
    Accenture - Responsible AI Advisor Manager
    Cohere - Technical Governance and Responsible AI Researcher
    Cotiviti - Director Risk and Responsible AI

    To receive our weekly AI governance job alert, subscribe to the job board (link below). Subscribe to my newsletter and join 55,700+ people who never miss my AI governance updates (link below). #AI #AIGovernance #AIJobs #LinkedInJobs #JobSearch #ResponsibleAI

  • Consultancy reporting on AI isn’t doing enough.

    New Research Alert: Easy to Read, Easier to Write: The Politics of AI in Consultancy Trade Research

    How well do we understand AI use in business and policymaking? Consultancy reports are among the most widely read sources on AI implementations, and for good reason: they are well written, engaging, pertinent, and timely. At the same time, our research finds major gaps in evidence quality, transparency, and breadth of coverage. Our latest study, published open access in Cogent Social Sciences (Taylor & Francis Group), examines consultancy reporting on AI since the launch of GPT and highlights key strengths and weaknesses.

    What consultancy reports do well:
    • Speed. Rapid production keeps them relevant.
    • Client-focused. Tailored to business needs.
    • Actionable. Clear, practical recommendations.

    Where they fall short:
    • Self-referential. They often cite their own surveys rather than independent data.
    • Transparency issues. Data collection methods can be unclear, and reports often include consultancy services in the strategic recommendations, obscuring the line between strategic report and sales pitch.
    • Limited scope. They mostly focus on large firms, leaving SMEs unaddressed, with little or no consideration of well-known patterns of abuse in the AI labour force, nor of environmental, energy infrastructure, and second-order consequences.

    The takeaway? We call for more collaboration between business consultants, management scientists, and policymakers to produce reliable, well-rounded insights on AI in business. We believe that the function of business is to provide opportunities for people to work together to build great things. Our practical experience is that consultants have a vital role in the endeavour to make AI tools part of a bright future for everyone.

    Read the full article here and please share: https://lnkd.in/eA_WYnqW

    Citation: Mackenzie, T., Radeljić, B., & Heslinga, O. (2025). Easy to read, easier to write: the politics of AI in consultancy trade research. Cogent Social Sciences, 11(1), 2470368. DOI: 10.1080/23311886.2025.2470368

    About the Aula Fellowship: Coalitions are a powerful force for change. Co-authors Tammy Mackenzie, Branislav RADELJIC, PhD SFHEA and Olivia Heslinga are Green Hall Aula Fellows. This research is part of the work we do at The Aula Fellowship for AI Science, Tech, and Policy. Our mission is to get everyone to the conversation on AI. We are available to discuss this research and your work.

    #AI #BusinessStrategy #Policy #Management #Consulting #AIinBusiness #AIReporting #Research #AulaFellowship #rand #pwc #accenture #deloittes #mckinsey #ey

  • Aula Fellowship for AI Science, Tech, and Policy reposted this

    View Valentine Goddard's profile

    Lawyer & artist. Pioneering AI Governance with Civic Engagement & the Arts | Trusted Policy Advisor to Canada & UN | Expert in Ethical AI & Digital Democracies. Consulting/public speaking. Global network & exp. Fluent FR/EN.

    It's quite timely that our new Prime Minister announced a new "Culture and Identity" minister! Digital and cultural sovereignty are cardinal to our identities and to the freedom to live by our values. That is why our agenda focuses on the creative economy and the implications of generative AI for the arts and cultural sector. Expect an upsurge of disinformation coming our way, from our neighbours this time, and lots of tough decisions to be made on how we can adopt AI and be innovative WITHOUT giving all our data to American Big Tech. How do we avoid complete isolation, and maintain, or build new, partnerships in creative economies?

    In this short blog, I explain how I define "creativism" in this context, and I do believe it might be among the most important tools we have to navigate this chaos. I ask business leaders, economists, and policymakers, especially those who aren't used to thinking outside the box and might not be thinking of the arts as key, to come and listen attentively on March 27th.

    By the way, I can help your business organize workshops toward a digital governance strategy and AI policy. It's no longer a luxury!

    #creativeeconomy #identity #culture #canada #digitaleconomy #knowledge #disinformation #art #ai Kelly Wilhelm Cultural Policy Hub at OCAD U Inspirit Foundation Michael Power Teresa Scassa Samuel La France Innovation, Science and Economic Development Canada Patrimoine canadien -- Canadian Heritage

  • Aula Fellowship for AI Science, Tech, and Policy reposted this

    View Rachel Coldicutt's profile

    Careful and community tech. Founder and non-exec.

    POLITICO readers may have caught sight of this today: a brief mention of some questions that a small group of technologists, researchers, and policy people, informally calling ourselves the Sycamore Collective, drafted yesterday morning in response to Keir Starmer's announcements on AI in government. These are the questions we shared:

    Labour’s escalating claims around the potential use of AI in government are born of relatable constraints and welcome ambition for the UK. But we are deeply concerned that in an effort to increase efficiency and find savings, the UK could end up wasting £100s of millions of public money in failed projects. This risks damaging public services, and ultimately holding the UK back, rather than taking the opportunity to make them fit for the future. It might sound like a silver bullet, but we can’t just shoehorn AI into existing services. Here are 10 questions that we believe the Government needs to demonstrate it has answers to before signing away millions of pounds of taxpayers’ money with no break clause on the contracts if things go wrong.

    1. Are the technology and the data ready, and if not, how much will it cost to make them ready before AI can be used?
    2. Are the needs of the public or public servants expected to benefit from new AI technology sufficiently well understood?
    3. Are staff ready and trained to use AI technologies, and if not, how much will this cost and how long will it take?
    4. Has proper evaluation been done of the jobs required to make use of AI, as well as those that can be cut?
    5. By what process will the impact of AI use in government be evaluated?
    6. What mechanisms exist to make corrections when things go wrong?
    7. What processes exist for frontline workers to contribute expertise and raise concerns about the impact of AI use in government?
    8. What steps are being taken to avoid locking the UK into expensive contracts which cannot be easily undone if the technology doesn’t deliver?
    9. What parliamentary oversight will there be of new AI systems which make decisions affecting the quality of life of people in the UK?
    10. How will you trial and gradually introduce changes to avoid the risk of going too far too fast and losing valuable expertise?

    The Sycamore Collective consists of concerned citizens with experience working across the technology sector, government, academia, and civil society. As part of a bigger group of contributors and collaborators, it was great to work with Anna Dent, Simon Cross (PHD), Peter Wells, Julian Tait, Jonathan Tanner, Gordon Guthrie, and others to frame these questions.

    • GOVERNMENT
    AI-VE GOT SOME QUESTIONS: "The power of government has gone," the prime minister declared yesterday in Hull. Part of his remedy is greater use of AI, which he says will make the government more responsive and efficient. But that of course creates another power problem: how do we stay in control of the technology, and is it worth it?
    When tech takes over: A group of people working in government, academia, civil society and tech, calling itself The Sycamore Collective (newly formed by Rachel Coldicutt), put out 10 questions for the government to answer on that matter yesterday. They range from procurement and skills to oversight and data.
    Similarly: The Ada Lovelace Institute released a briefing today on lessons from studying AI use in the public sector. The list is long but the main learnings are worryingly basic. They include a lack of clarity about what "AI" is and where it is being deployed in the public sector.
    Aren't you the boss? Starmer's Hull speech was extraordinary i

Similar pages

View jobs