Introducing SPM: a powerful new tool to close the door on harmful content

We're using something new at Civitai to tackle one of the more challenging issues in the AI-generated content space: harmful content. It's called Semi-Permeable Membrane (SPM) technology, and it's an important step forward for our platform and others in the AI field.

With our implementation of SPM, we’re able to take existing models and replace concepts that are prone to abuse with innocuous alternatives. Our goal with the use of SPM is to prevent models from generating illegal content without stifling the creative possibilities of AI. Indeed, research on improving diffusion models has shown significant potential for preventing the generation of CSAM and non-consensual pornography.
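This post doesn't detail SPM's internals, but the core idea of replacing one concept with an innocuous alternative can be sketched in a few lines of linear algebra. Everything below is an illustrative assumption, not Civitai's actual implementation: we treat a concept as a direction in a model's activation space and swap the activation's component along an erased-concept direction for an equally strong component along a surrogate direction.

```python
import numpy as np

def replace_concept(h, target_dir, surrogate_dir):
    """Illustrative sketch: swap the component of activation h along an
    erased-concept direction for an equally strong component along a
    surrogate (innocuous) direction."""
    t = target_dir / np.linalg.norm(target_dir)
    # Orthogonalize the surrogate against the target so the erased
    # concept's component is removed exactly.
    s = surrogate_dir - (surrogate_dir @ t) * t
    s = s / np.linalg.norm(s)
    coeff = h @ t                       # strength of the erased concept in h
    return h - coeff * t + coeff * s    # remove target, re-inject as surrogate

rng = np.random.default_rng(0)
dim = 16
target = rng.normal(size=dim)     # hypothetical direction of the concept to erase
surrogate = rng.normal(size=dim)  # hypothetical innocuous replacement direction
h = rng.normal(size=dim)          # a stand-in for a model activation

edited = replace_concept(h, target, surrogate)
t_hat = target / np.linalg.norm(target)
print(round(abs(float(edited @ t_hat)), 6))  # 0.0: no component along the erased concept
```

A real system like SPM learns where and how to make such edits inside the network rather than using fixed directions, but the sketch shows why the model's other capabilities can survive: only the targeted component is touched.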

Targeting the root cause of harmful content

Because of how they were originally trained, some of the most widely used open-source models are, unfortunately, capable of generating CSAM. Despite this, these models remain essential to our community's creative output. That is why we are implementing an approach that retroactively removes the concept of children from these models, making them safer. The process happens seamlessly, maintaining the quality and variety of the AI-generated images without limiting the user's ability to create.

We’ve been testing SPM on the platform and are encouraged by the results. As we continue to roll out the technology, we will identify additional models to apply it to.

No system is perfect – our community is always welcome to flag potential errors or contact our team if they believe SPM has been applied inappropriately. But overwhelmingly, we see it becoming a core tool in our comprehensive system to support a safe, collaborative, and open environment for all creators.

Making SPM an open-source resource for everyone

We're looking forward to sharing SPM technology more broadly by making it open-source. We hope that it will be embraced by the developer community and that creators will use it to take control over the ways in which their models are used.

We’ll continue to test and iterate on SPM before we make it broadly available. Our roadmap includes several tools to further strengthen its ability to eliminate harmful content. At this time, targeted concepts are being replaced with innocuous alternatives.

An invitation to our community

At Civitai, we’ve always believed in a shared journey that empowers our community while supporting rigorous safety and privacy standards. This development isn't just about us at Civitai deciding and moving forward; it's about engaging with you, our users, and the wider AI community. We're interested in your thoughts on SPM technology and how we can continue to improve the Civitai platform to serve your needs while ensuring a responsible and safe environment for creativity.

Thank you for being a part of our community. We're excited to keep building tools that let you create freely and efficiently while contributing to a safer and more responsible AI ecosystem.

Ahmed-Amine Homman

PhD | Research Project Manager at ClaraVista

10 months

I wholeheartedly agree with you and salute this effort. Such actions are crucial to ensure that GenAI can benefit everyone and to prevent one of its most malicious uses. I do have a question about SPM, however: if the concept of children is removed from the models, won't it become impossible to generate licit and safe content representing children (such as images for children's illustrated books)? If so, that would be a significant hindrance to the creative use of these models, in my opinion. Are you therefore planning to conduct research that would let models forget only concepts that combine children AND harmful content? Is such a thing even possible?

Christopher Sicurella ∴

Point of view? Perspective? Attitude? Clarity

10 months

Smart move, Civitai
