One man's fight against Dutch AI corruption
Marco van Hurne
AI & ML advisory | Author of The Machine Learning Book of Knowledge | Building AI skills & organizations | Data Science | Data Governance | AI Compliance Officer | AI Governance
Welcome again, people. This time, I want to waste some words on the twisted, bureaucratic circus that is the Netherlands. My home country. And I am ashamed of it. Because here, incompetence is an art form, and the only thing more constant than our national obsession with cheese sandwiches is our government’s ability to screw up technology.
Here, taxpayer-funded AI systems wreak havoc on the lives of ordinary citizens, and the country's biggest institutions seem locked in a race to see who can misuse AI with the most catastrophic results.
This story is about @DreesMarc as well.
He is the one who inspired me to write, and I usually take the piss out of him in my writing, and vice versa, but he is also a crusader. He not only writes about AI’s corrupt use within the government, the injustice it causes, and the absurdity of it all, but also actively takes on the giants of tech corruption head-on.
To me, he is a modern-day Don Quixote.
He charges at the Dutch windmills of bureaucracy. He is armed with only his righteous fury and an idealistic belief in fairness that most would consider either brave or batshit insane. And who better to believe in justice than Marc himself - an unstoppable force who is as relentless about transparency as he is about dragging the worst offenders into the light.
But Marc doesn’t ride alone.
He’s got a ragtag team of Panzas, including me, your humble squire. We are all in this fight together, and we are united by a deep, collective hatred for the hypocrisy that defines the Dutch government’s approach to AI. It’s a flawed system, a sea of bureaucracy, and it has the stench of corruption.
Yeah, we’ve got a problem with that.
The Dutch government is the champion of AI chaos
Let’s set the stage first. The Dutch government, as always, learned nothing from the catastrophic disaster that has come to be known as the Social Benefits scandal. Maybe you remember that one, because it is the worst example of how AI has ruined lives - literally - by allowing bias and prejudice to wreak havoc in a system.
The country’s tax system, to be precise.
When AI algorithms decided to label innocent families as fraudsters, they wrecked their lives in the process. Families lost their homes, children were traumatized, and in a number of documented cases, the cost was literally lives. Some people could no longer handle the stress of being separated from their children or of facing the enormous penalties they were wrongfully ordered to pay.
And all because an AI couldn’t differentiate between a genuine mistake and an actual crime.
And what did the government learn from that utter debacle?
A moment of reflection, perhaps a reconsideration of their approach? A solemn promise to never blindly trust AI with people's futures again?
Nope.
Nada.
Niente.
Zilch.
Hell naw.
The Dutch government doubled down. And how did they do that. . . by, you know, purchasing a shiny new matching algorithm from 8Vance. Yup, the company that illegally trained its AI on all of our LinkedIn profiles, your online resumes, and everything else they could get their hands on. And all of that without a single shred of consent.
The Dutch government’s approach to AI is grounded in reckless decisions. In this case they are knowingly using stolen data to fuel their next technological misstep. Privacy concerns are ignored in favor of pushing forward with new systems.
But I’m not going to stop there.
The true horror of the Social Benefits scandal is in its human cost.
Sure, you’ve heard about the families losing their benefits and being labeled as criminals, but what about the psychological toll? Parents living in a nightmarish scenario all because a broken system labeled them as fraudsters.
Now, imagine those same parents losing their livelihoods, being financially ruined, having their children taken from them, and being dragged through the legal and emotional wringer. They are left with nothing, except the haunting realization that an AI, a mindless, soulless algorithm, decided their fate. And in some tragic cases, that was too much for them to bear.
People were driven to suicide.
Suicide.
Can you even begin to comprehend the devastation of a life reduced to ashes by a mistake in an algorithmic decision?
Families were torn apart, futures were destroyed, and all of that because a system designed to detect tax fraud failed so spectacularly that it became a death sentence.
But, hey, who cares.
The bureaucrats who were responsible have never been brought to trial. They have probably moved on to the next shiny AI project, likely with a promotion or a new government gig. Who knows. They might even be over at the UWV, the Dutch Public Employment Service, cooking up the next great AI disaster.
And that, my smart friends, is not a comforting thought.
Data stealing is a national pastime
We have all grown numb to Big Tech harvesting our data, packaging our lives, and selling us off piece by piece. But what if your own government joins this data heist and treats your personal details as just another commodity to exploit?
Now that is precisely what's happening in the Netherlands, where the Dutch government, yes, the same institution tasked with protecting its citizens, is feeding our LinkedIn profiles directly into the AI systems of companies like 8Vance, without consent, without transparency, and clearly against Dutch privacy laws, which they vigorously impose on corporations.
But they conveniently look the other way when it comes to their own actions.
It is data theft in broad daylight. Yet nobody seems willing to stop it. Even LinkedIn, which explicitly forbids data scraping in its own terms of service, just conveniently turns a blind eye when the Dutch government engages in this illegal practice.
Marc Drees is our relentless advocate for digital accountability, and he has repeatedly flagged this issue with LinkedIn. He has exposed both the hypocrisy and the complicity of LinkedIn in the affair.
And why do you think LinkedIn looks the other way?
Simple.
They have been guilty of doing exactly the same thing.
As I've outlined in my previous article on LinkedIn's own data-harvesting scandals, the platform itself has scraped and mined user data extensively to train its algorithms, violating the very standards it claims to uphold.
Now the Dutch Public Employment Service has sunk €65 million of taxpayer money into an AI-based job matching platform built on top of this stolen data. It is a system so fundamentally flawed that it matches candidates to jobs with all the accuracy of a fortune-teller.
This is bureaucratic incompetence coupled with a deliberate disregard for privacy, accountability, and legality. Yet, the authorities, including the Privacy Authority which is meant to guard our rights, remain inexplicably silent.
They continue to watch this from the sidelines as privacy laws are shredded and ethical guidelines are mocked. It seems that everyone involved, from government bureaucrats to LinkedIn executives, is content to keep the charade going. They are all confident that they'll face no consequences. They are quietly hoping that we will all just look the other way as they continue exploiting our digital identities.
It is happening across the EU
And if you thought that this kinda stuff only happens in the Netherlands, you’re dead wrong. It is happening across the EU. Europe loves to trumpet its “leadership in ethical AI”. It churns out endless whitepapers and makes lofty proclamations about transparency, fairness, and human-centric algorithms.
But beneath all this virtuous bureaucratic mumbo jumbo lies the reality that governments across the EU are deploying AI systems that destroy lives, devastate families, and violate basic human rights.
The Dutch toeslagenaffaire is just the most notorious symptom of a continent-wide disease.
We have witnessed plenty, from Latvia's flawed social benefit systems to Slovakia’s intrusive eKasa surveillance tools. Governments across Europe seem disturbingly comfortable with placing blind faith in algorithms whose failures leave behind financial ruin and shattered lives.
Is this incompetence, or a calculated willingness to look the other way?
Think about it.
The Social Benefits scandal drove thousands of innocent families into catastrophic debt, homelessness, and despair. Children were forcibly separated from their parents, livelihoods destroyed overnight, people driven to suicide by unjust accusations and unbearable fines. Tens of thousands to hundreds of thousands of euros demanded because of a cold, unfeeling AI system.
And still, the EU continues to spin fairy tales about "human-centric AI" and “ethical guidelines” while spectacularly failing to enforce them.
There is even an institution that has been brought to life to enforce this. The European Center for Algorithmic Transparency was established to keep AI accountable, but it remains chronically understaffed, underfunded, and incapable of confronting the tech-driven injustice unfolding under its nose.
Amnesty International has repeatedly slammed the EU AI Act for failing to protect human rights. But EU officials still cling to their freaking illusion of ethical superiority, smiling politely at industry summits while entire communities suffer the devastating consequences of unchecked algorithms.
And the hypocrisy of it all is that private companies are held rigorously accountable for failures in their AI systems, whether through public outrage, market pressure, or legal battles, as in the case of Clearview, the firm that illegally obtained biometric data from European citizens.
Remember Amazon's infamous hiring algorithm fiasco? Public scrutiny forced rapid change.
But European governments, insulated by layers of bureaucracy and empty commitments to ethics, are the ones who evade meaningful accountability. They are quick to sanction and regulate private sector AI but turn conveniently silent when their own systems devastate lives.
This double standard is blatant hypocrisy.
Let’s face it. . .
Europe isn’t naïve.
These systemic AI failures aren't random accidents. They are symptomatic of institutions that choose bureaucratic convenience and superficial innovation over genuine accountability and rigorous ethics.
The uncomfortable truth is that EU governments, knowingly or not, are sacrificing people’s lives at the altar of technological experimentation.
If Europe truly wants to lead in ethical AI, it must first confront its own complicity and hypocrisy. Until then, the EU’s commitment to "human-centric AI" is nothing more than a hollow marketing slogan covering up a dark and ongoing tragedy.
So join one man’s battle for AI accountability
Marc Drees is practically the last line of defense against the Dutch government’s catastrophic handling of AI governance. He charges against the corrupt windmills of bureaucratic incompetence and privacy violations. He isn’t satisfied with merely exposing the failures and hypocrisy. He is taking action. He is confronting the perpetrators head-on, and dragging companies like 8Vance, indifferent bureaucrats, and negligent privacy authorities into courtrooms and the harsh spotlight of accountability.
Marc writes tirelessly, advocates fiercely, and openly names those who abuse power. He refuses to sit quietly as corruption spreads. But it is a lonely battle. Often it feels unwinnable, but Marc won’t be stopped by apathy or systemic corruption.
He is committed to the long fight, because he understands deeply what's at stake. This is not about theoretical ethics. We are talking about real people’s lives: ours, our privacy, our jobs, and our futures.
The fight is about making sure that the systems built to serve us don’t end up destroying us instead.
The stakes are high, and Marc can’t do it alone.
It is time to rally behind him, as I did.
And you can do that by signing the petition.
Lend him your voice, and help Marc hold the Dutch government accountable before more lives are disrupted or destroyed by reckless AI misuse.
If we don’t support Marc’s fight now, the consequences will be devastating and lasting. He’s out there battling these windmills, but he needs us - all of us - to stand by him.
If we don't step up to fight, then who will?
Signing off from the hypocrisy of Dutch AI policies.
Marco
Well, that’s a wrap for today. Tomorrow, I’ll have a fresh episode of TechTonic Shifts for you. If you enjoy my writing and want to support my work, feel free to buy me a coffee.
Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Google rewards your likes by making my articles available to more readers.