#3 Mindful friction, digital identity and consent
A confidence design pattern adds friction and helps users better understand AI’s limitations.


Hello, welcome,

We made it through January! Congratulations. It was a long month this year, with much to worry about and things to look forward to.

Here at IF, no one likes the signature cold, damp, grey days of a British January. The team have been getting through it as best they can.

Here’s to bluer skies in February.


Continuing our research on adoption enablers for digital identity in the UK - As this work progresses, we remain convinced that we need to start prototyping the user experience. As our CEO Sarah said, “digital identity isn’t solved until the UX is”.


Continuing a programme of work on adoption of critical AI-enabled services - Our work with a large UK-based organisation is helping to develop high-stakes, AI-enabled services safely by applying mindful friction. Mindful friction is an approach that builds in moments that make people pause, think, and stay involved when working with AI. These pauses ensure that humans, not machines, stay in control and make the final decisions. It’s a way to make sure people can understand what AI is doing, check its work, and act on it.

We’ve heard again and again that users need help to make better-quality decisions alongside AI, and to prevent skills fade over time. Moments of friction drive both. Our view continues to be that the best adoption comes from prioritising trustworthiness.
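To make the pattern a little more concrete, here is a minimal sketch of what a confidence-gated review step could look like. Everything in it is hypothetical: the names, the threshold, and the shape of the review callback are ours for illustration, not details of the project itself.

```typescript
// A minimal sketch of a confidence-gated friction pattern.
// All names here (AiSuggestion, decideWithFriction, CONFIDENCE_THRESHOLD)
// are hypothetical illustrations, not from any real implementation.

interface AiSuggestion {
  summary: string;      // what the model proposes
  rationale: string;    // why, in terms a reviewer can check
  confidence: number;   // model-reported score in [0, 1]
}

const CONFIDENCE_THRESHOLD = 0.9; // tuned per service and stakes

type Decision =
  | { kind: "accepted"; by: "human" }
  | { kind: "rejected"; by: "human"; reason: string };

// Every suggestion passes through a human. Low confidence adds an
// extra pause, where the reviewer is asked to engage with the
// rationale rather than rubber-stamp the output.
async function decideWithFriction(
  suggestion: AiSuggestion,
  review: (s: AiSuggestion, extraPause: boolean) => Promise<Decision>
): Promise<Decision> {
  const extraPause = suggestion.confidence < CONFIDENCE_THRESHOLD;
  return review(suggestion, extraPause);
}
```

The design choice worth noticing is that human review is unconditional; confidence only changes how much friction the reviewer encounters, so people stay involved even when the AI seems sure of itself.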


Derisking innovation through designing and implementing AI policies - We’ve been helping global businesses develop, plan and begin to implement their AI policies. This has involved getting closer, again, to challenging issues like fairness in AI - where there are no easy answers.


Sarah’s speaking at the UK’s AI Fringe - Join Sarah on 11th February at the British Library for the AI Fringe, a series of events hosted across London and the UK that brings together a broad and diverse range of voices for the most important conversations about AI today. Given the recent twists and turns of AI development, both technically and politically, there is much to discuss.


  • We’re getting the social media crisis wrong is an article that describes where the real problem lies: degraded publics, not disinformation. Follow it up by listening to Channel 4’s CEO speak about the impact this is having on Gen Z, drawing on their latest research. Alex Mahon says, “Gen Z is facing growing uncertainty about who and what to trust”, and this echoes what we are hearing in our research too. We are less convinced by Channel 4’s answers: trust marks are hard, and watermarking AI content doesn’t work either. There are no silver-bullet answers to a complex and systemic challenge.
  • Our pals at DXW have summarised Ofcom’s guidance on the Online Safety Act. At 1,500 pages of dense guidance, it should remind us that regulation is a service, not a huge .pdf that few people can grok. Guidance like the ICO’s children’s code shows a better way.
  • There’s lots to like in the Blueprint for Modern Digital Government. What we like most is the commitment to transparency, and the suggestions of what that could start to feel like. If you watch carefully, you’ll see links to “version history”, “data used by this service” and “verified credentials” that tell us this next generation of services will be able to prove to citizens and businesses that they are worthy of public trust (we’ve sketched what that metadata might look like, after this list).
  • Congratulations to our friends at AWO, who won a significant case against Sky Betting and Gambling. It matters because it has consequences for all controllers that rely on consent, and for the online advertising industry. Advertisers must ensure that their user journeys are consentful. We think the best will put consent at the heart of their data strategy and marketing.
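
As promised above, here is a loose sketch of the kind of transparency metadata a service might publish, inspired by the affordances the Blueprint hints at. The type and field names are our own invention for illustration; nothing here is specified by the Blueprint itself.

```typescript
// A hypothetical shape for service transparency metadata, riffing on
// the Blueprint's "version history", "data used by this service" and
// "verified credentials" links. None of these names come from the
// Blueprint; they are assumptions for the sake of the sketch.

interface ServiceTransparency {
  versionHistory: Array<{
    version: string;
    releasedOn: string;    // ISO 8601 date
    changeSummary: string; // what changed, in plain language
  }>;
  dataUsed: Array<{
    source: string;        // e.g. which register or department
    purpose: string;       // why the service needs it
  }>;
  verifiedCredentials: Array<{
    issuer: string;        // who vouches for the service
    credentialUrl: string; // where a citizen can check the claim
  }>;
}
```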


— The IF Team


Find us on LinkedIn and Medium for more insights and updates.
