AI, Impact & the Double Standard of Automation
Jan Leyssens
Building, coaching, and experimenting in the world of impact. Always open to meeting good people and meaningful projects.
In the past months I've tried to get a grip on the role AI can play in the world of impact entrepreneurship, and I have the amazing luck that quite a few people keep flooding me with articles covering the pros, cons, and wows (I'm mainly looking at you, Kurt Peleman, Lieve Vereycken, Ravi Bellardi and Astrid Leyssens!).
And one of the things that really fascinates me right now is multi-agent models. Not just for the sake of the buzzword, but for the implications they have. The good, the bad, the ugly. So... here's an attempt to bring my reflections and open questions together into something somewhat coherent (I hope).
Relevance Over Novelty
AI innovation is often framed around novelty: what's new, what's disruptive, what's next. But in the world of philanthropy and impact-driven initiatives, the default lens is different. There, discussions about innovation aren't just about novelty; they're about relevance and impact.
That's why I find it fascinating that philanthropy is investing so much in AI right now. Not because I work in that industry, but because it serves as a space where AI is evaluated not for how advanced it is, but for whether it actually makes a difference. For me, it's my go-to sector for a critical-curious voice on AI and its (potential) role in impact.
Multi-agentic models
Over the past few months, I’ve been exploring multi-agent AI systems — how AI agents don’t just analyze data, but interact with each other and a user, negotiate, and make decisions. And the potential of this in sustainability, impact, and knowledge-building is huge.
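To make "agents that negotiate and make decisions" concrete, here is a minimal toy sketch in plain Python. It is not any specific framework, and the agent names, numbers, and concession rule are all invented for illustration: two agents exchange offers and step toward each other until their positions converge.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """A toy negotiating agent that holds a numeric position (e.g. a
    proposed budget share) and concedes a little toward the counterpart
    each round."""
    name: str
    position: float
    concession: float  # fraction of the gap it closes per round, 0..1

    def respond(self, offer: float) -> float:
        # Move own position one step toward the counterpart's offer.
        self.position += (offer - self.position) * self.concession
        return self.position

def negotiate(a: Agent, b: Agent, tolerance: float = 0.5,
              max_rounds: int = 50) -> float:
    """Let two agents exchange offers until their positions are within
    `tolerance` of each other, then return the midpoint as the 'deal'."""
    offer = a.position
    counter = b.position
    for _ in range(max_rounds):
        counter = b.respond(offer)
        offer = a.respond(counter)
        if abs(offer - counter) < tolerance:
            break
    return (offer + counter) / 2

# Hypothetical example: a supplier opening at 100 and a buyer at 60.
deal = negotiate(Agent("supplier", 100.0, 0.3), Agent("buyer", 60.0, 0.3))
```

In a real multi-agent system the `respond` step would be an LLM call with a role prompt rather than arithmetic, but the structure (a loop mediating message exchange between agents until some stopping condition) is the same.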
A few articles recently stood out to me, each offering a different perspective on where this is going:
I really believe in the power of a good interface between users and agents: one that keeps you involved in the conversation rather than just watching it happen, and that feeds your feedback and opinions back into a self-learning database or reference document. Done well, interacting with the multi-agent model could actually increase neurodiversity in the conversation (whether we'll actually be able to make that happen is something we're still researching at the moment).
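The "feedback fed back into a reference document" idea can be sketched very simply. This is a hypothetical minimal version, assuming the reference document is just a JSON file of (agent output, user reaction) pairs that later agent runs can read for grounding; the file name and example notes are invented:

```python
import json
import tempfile
from pathlib import Path

def record_feedback(store: Path, agent_output: str, user_note: str) -> list[dict]:
    """Append a user's reaction to an agent's output to a JSON
    'reference document', so later agent runs can be grounded in the
    accumulated human feedback. Returns the full feedback log."""
    entries = json.loads(store.read_text()) if store.exists() else []
    entries.append({"agent_output": agent_output, "user_note": user_note})
    store.write_text(json.dumps(entries, indent=2))
    return entries

# Hypothetical usage: two rounds of user feedback on agent suggestions.
store = Path(tempfile.gettempdir()) / "feedback_store.json"
store.unlink(missing_ok=True)  # start fresh for the example
record_feedback(store, "Suggest a solar co-op model",
                "Not viable in our region; grid rules differ")
log = record_feedback(store, "Suggest a retrofit subsidy scan",
                      "Useful, but please localise it")
```

The point of the sketch is the loop, not the storage: the user's context-specific corrections become part of what the agents consult next time, instead of evaporating after the conversation.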
The Double Standard of AI & Jobs
One thing that stands out to me in every AI conversation: the panic over white-collar job automation.
There’s a hypocrisy in how we discuss AI replacing jobs. We’ve spent decades normalizing the automation of blue-collar labor—machines replacing factory workers, self-checkouts replacing cashiers, logistics AI replacing warehouse staff. But when AI threatens white-collar stability—writers, lawyers, consultants—the conversation suddenly shifts to panic.
Shouldn’t we be asking a different question? Instead of fighting to keep jobs unchanged, how do we rethink work itself? If AI can remove bottlenecks in sustainability and impact—translating knowledge faster, adapting solutions to context, making good decisions repeatable—shouldn’t we see that as an opportunity rather than a threat?
I see so many "learning networks" all over Europe set up to share insights and lessons learned in sustainability and circular projects. And very often these networks are highly ineffective at what they attempt. Not because of the idea of being a network; I love that. But because they focus mainly on transferring information and (often generic) learnings, and not on the actual networking.
With LLMs and AI, it becomes much easier to ask the question "how can I make this about me". Sustainability is inherently context-dependent. By largely automating that part of the conversation, we can focus more on the hard part of it: "What does that imply?".
Some Open Questions
Right now I'm diving head-first into learning the basics of automation with n8n (check it out; it's mind-blowing how far low-code and no-code platforms have come). I'm also exploring prompt engineering, self-learning databases, and how knowledge flows in innovation and impact. But most of all, I'm just trying to keep up with the bigger picture without getting lost in the tech development race AI is in right now.
So here are the three main questions or mindsets I have at the moment when looking at AI and automation:
This is something I'm actively exploring with Manu Vollens and ImpactGenie, but also in our AI-powered design research project. If you're working on similar challenges, or just questioning where all of this is going, I'd love to connect.
Further reading:
Project manager & social innovation designer
7 hours ago
And I believe Laura Stevens PhD's session can offer us more context to explore the questions in your article: https://www.youtube.com/watch?v=OHpeHvnXHW8
Project manager & social innovation designer
7 hours ago
I thought you'd like to follow David Mattin's new take on this: "When does this kind of virtual world stop being a simulation, and start being a society in its own right? I think we'll see virtual worlds inhabited by millions of AIs, trading with fiat currencies and crypto, and building GDPs that exceed that of real-world nations." More here: https://www.newworldsamehumans.xyz/p/simulating-the-post-human-future
Dear Jan Leyssens, I am going to check out some of the links you have shared. Thanks! Trying to redefine work is coming home for me. Once I was in the business of connecting talent to IT and business consulting jobs. During those days I was part of a pro-basic-income movement. Last year I read Kate Raworth's latest book. She pondered blockchain, tokens, and the ownership of AI robots working for us. Who owns them? If we connect the income of the AI robots to communities/people, are we in that case working on a basic income idea? That's what she asked. And that is why I love to co-design Co-Inpetto Farm set-ups - www.co-inpetto.design.
Project manager & social innovation designer
3 days ago
This is marvelous to read, Jan. Thanks for sharing your thoughts and work! It's very inspiring to know more both about your engagement in strategically framing "this AI wave" and doing balanced, deep, hands-on work in it. Your rant makes me think about how some AI works of art lose their relevance for those (humans) who experience them, once they know that AI made them... It says a lot about what we look for in art: real human connection between equal beings. Sustainability questions are, of course, different in the way they deal with hardcore, in-your-face, physical disasters. But how we face and deal with these disasters has a lot to do with "how we feel about it", as you mentioned. Weirdly enough, I do think the deeper involvement in action, implementation, and ownership of these AI "alien" beings is largely rooted in what we, humans, want to mean to each other and the world around us. Let's work with them (tools, co-pilots, agents) while helping each other to excel in what humans do best.