"Presentability"
A recent AEA365 article raised the issue of "presentability", noting that
“AI’s ability to enhance presentability means that individuals, initiatives, or organizations can present themselves in their most favorable light. Given that evaluation is all about examining whether claims about an initiative are accurate, AI presentability will cause problems for many evaluation, quality assurance, and regulatory systems.”
But does this issue just stem from AI, or does it reveal deeper flaws within existing evaluation and project management systems?
Presentability: the elephant in the room
Presentability may be a new word, but it has long been an elephant in the room for development projects. Grassroots groups sit at the end of a long chain for accessing donor resources—a chain driven largely by presentability. The intermediaries along that chain recruit project officers or consultants to craft polished proposals and package partners' achievements into donors' templates, matching their requirements and expectations.
This raises a provocative question: Is the intermediaries' role in crafting polished narratives so different from what AI does? Grassroots actors do not lack vision or expertise. But they might lack the know-how to produce “presentable” reporting. This gap is precisely where AI could level the playing field, democratising the sector in favour of genuine change-makers.
(and yes... of course, grassroots actors also lack access to platforms for a more direct link with donorship. This is the crux of the localisation challenges.)
The systemic flaw: paperwork over reality
The fear that AI-enhanced narratives could obscure true impact is valid, but it also reveals a deeper, systemic flaw: documentation has increasingly become all that matters, as if the paperwork were the project itself. As AI takes over this process, distinguishing between what is authentic and what is merely polished becomes urgent. We might ask: whose narrative is being presented? Is it genuine, or just an AI embellishment? This cannot be determined by paperwork alone.
“Presentability” may not be an issue when claims are straightforward (e.g. sharing output indicators) or grounded in solid methodology and evidence. However, it is problematic in areas that rely on context and nuance. When confronted with broad ideas like “empowerment”, the current system often compresses complex realities into oversimplified templates. Monitoring and evaluation systems are weak at unravelling and making vivid the complexity of change, of human experience, and of the interrelations within social ecosystems—a weakness pre-dating AI. If AI can elevate reports to the point where they overshadow reality, the real challenge isn’t AI—it’s the system’s detachment from what truly matters.
Calls to restrict AI use in project proposals and reports miss the point—the genie is already out of the bottle. Not only would such restrictions be difficult to enforce, but they would also disproportionately impact those who could benefit most from these tools. In my experience, grassroots actors are eager to harness AI’s potential to quickly create compelling proposals and reports. Instead of imposing limitations, we should focus on intentional experimentation, implementing guardrails and a clear vision to guide its use effectively, so that AI can help with paperwork—but not only with paperwork.
Here are some starting points to ponder:
1. Presentability or authenticity?
The aid chain focuses on conforming to the expectations of those in power. We should ask: if presentability were fashion, who should be setting the trends instead? As narratives move up the chain, they become jargon-laden and sanitized, and lose authenticity—a problem AI simply exacerbates by polishing and embellishing stories in the blink of an eye and at a fraction of the cost.
However, if we recognise that AI acts as a powerful mirror, distorting and amplifying existing biases, we might find part of the solution. AI could become a game changer if we commit to strengthening Ethical and Participatory AI use, fostering genuine representation and challenging top-down and stereotyped narratives.
2. Presentability as conformity: are we missing the change worth seeing?
When reporting demands uniformity, it’s no surprise that reports start looking the same. Yet, in my evaluation work, I’ve seen countless extraordinary initiatives go unrecognized because there’s no incentive to document what doesn’t fit neatly into a narrow template. Caught in the endless back-and-forth of pre-set requirements, people often lack the time and energy to convey what lies beyond the blueprints. The focus on meeting donor expectations and using the right buzzwords also subtly devalues local initiatives. Even evaluations aimed at uncovering learning or unexpected outcomes are still seen as innovative—when, in reality, this should be the norm.
This mindset needs to change. AI, combined with more immersive and participatory methodologies, could help capture and narrate the rich experiences and learning that are currently overlooked. But is our sector ready to be surprised and embrace the fact that change is much richer and more complex than our conventional expectations?
3. Scratching AI's surface appeal: critical reading skills.
Much "empowering" support for grassroots entities has involved training in proposal writing and reporting skills. This is deeply about presentability—i.e. equipping them with the writing skills that make donors happy. Now that AI can do this almost instantly, it's time to shift our focus towards the critical reading skills needed to look beneath AI's surface appeal.
4. And what about the readers? Reporting as a process of dialogue and exposure.
If we're concerned that AI-sugar-coated reports might fool readers, it reveals a deeper issue. The problem isn’t whether AI was used (just as it no longer matters if a document was handwritten or typed). The real concern is a system where papers are passed around, driven by shallow, upward forms of accountability that lack meaningful validation. As described above, if you know that what you receive has been validated at the grassroots with sound processes, the gloss of AI becomes less concerning.
If we are only willing to listen, AI might remind us that the true merit of a project isn’t found in bureaucratic checks and adherence to expectations. It lies in meaningful conversations, connections, and direct exposure to humanity and reality—whether through innovative M&E approaches or simply through engaging in more regular, deep dialogue with the people driving change. A richer dialogue than just demanding a handful of indicators or adding a few side comments to reports. Readers need to welcome narratives that challenge their pre-set expectations. Reporting should be seen as an engagement process, not just a final product.
5. Which incentives?
All of this hinges on the right incentives. The current structure rewards adherence to blueprints and templates over embracing complexity and honesty. It prioritises control over trust-building: whilst most actors genuinely strive to create positive change, the system works under the assumption that actors are more likely to cheat than to do good—and this comes at a high cost.
If all this persists, AI will likely be used in line with the fears we've discussed.
Why worry now? The democratization of presentability
We’ve always known that making reports appealing is important—it’s the art of selling ideas, securing funding, and gaining support. So why are we only now sounding the alarm about presentability? The truth is, this art has been democratized. AI has made it accessible to anyone, potentially putting intermediaries—the gatekeepers of polished narratives—at risk. But perhaps this isn’t a problem; maybe it’s an opportunity.
Instead of doubling down on ever-more sophisticated reporting or (vain) checks, we could decide that what we need is less reporting and more real-world action. AI could free us from the endless cycle of blueprint reporting, allowing us to focus on what truly matters—real impact on the ground. The fear isn’t that AI will take over; it’s that we’ve become so entangled in presentability that we’ve lost sight of why we’re reporting in the first place.
Let’s not blame the messenger: the deeper challenge lies not in AI itself, but in the current quality assurance systems, in our heavily bureaucratized approaches, in the values and the power we put forward.
Of course, AI presents its challenges. It remains biased and, despite its potential, can further marginalize those who lack proficiency or reinforce mainstream perspectives. But, at a time when the development enterprise has become paper-heavy—often at a significant opportunity cost—can we consider using AI to reduce the time spent on bureaucracy and rather invest it in reclaiming the true value of partnerships?
The value of our partnerships lies not in editing and producing more presentable paperwork, but in more human endeavours: sharing learning, strengthening networks, building solidarity whilst challenging power, and safely experimenting with new possibilities. Now that the alarm bell has been rung, a smarter use of AI can also help us reconnect with these core values: it might free time, space, and energy to bring relationships and humanity back into our processes of change. And probably lead to better assessments of what change looks like from the perspectives of those living it.
Unless of course, we decide that the system is all about bureaucracy and reporting.
NOTE: This is a working article. I might come back to it, re-read it, and make small changes, which I will not track.