Artificial Intelligence (AI) Watermarking Techniques
As part of its AI Futures Initiative, ITI published a new policy guide aiming to address the pressing need to authenticate AI-generated content.

Watermarking has emerged as a pivotal tool for authenticating AI-generated content, offering developers and end-users a way to make a piece of content's origin transparent and visible. This discussion explores the types of watermarking techniques described in recent research and the roles they play in making AI-generated content identifiable.

1. Visible Watermarking:

Visible watermarks play a crucial role in making AI-generated content easily identifiable. Techniques such as overlaying information about the source or creator on images or text provide a visible indicator of the content's AI origin. Examples include stamps on text outputs (e.g. “DRAFT” or “Not for Release” on a Word document or an embargoed press release) and visual identifiers like those used by Getty Images.
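
To make the idea concrete, here is a minimal sketch of a visible watermark: overlaying a text label on an image with the Pillow library. The label text, position, and file names are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of visible watermarking: stamp a readable label onto an
# image so viewers can immediately see that it is AI-generated.
from PIL import Image, ImageDraw

def add_visible_watermark(path_in: str, path_out: str, label: str = "AI-GENERATED") -> None:
    image = Image.open(path_in).convert("RGBA")
    overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Place a semi-transparent label in the lower-left corner of the image.
    draw.text((10, image.height - 30), label, fill=(255, 255, 255, 180))
    Image.alpha_composite(image, overlay).convert("RGB").save(path_out)

# Hypothetical usage:
# add_visible_watermark("generated.png", "generated_labeled.png")
```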

2. Invisible Watermarking:

Invisible watermarking embeds a small amount of data in the pixels of an image or video, giving content owners a way to track how that content spreads once it is distributed. This technique aids in protecting copyrighted material, preventing unauthorized distribution, and facilitating content verification.
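
A simple way to see how this works is least-significant-bit (LSB) embedding, where a short identifier is written into the lowest bit of each pixel value, leaving the image visually unchanged. The sketch below is illustrative only; production systems use far more robust, tamper-resistant schemes, and the owner string is an assumption.

```python
# Minimal sketch of invisible watermarking via least-significant-bit (LSB)
# embedding in a grayscale image represented as a NumPy array.
import numpy as np

def embed_lsb(pixels: np.ndarray, message: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = pixels.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("message too long for this image")
    # Overwrite the least significant bit of the first len(bits) pixel values.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n_bytes: int) -> bytes:
    bits = pixels.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Example: hide a hypothetical owner identifier in a random 64x64 image.
image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
marked = embed_lsb(image, b"owner:ITI")
assert extract_lsb(marked, 9) == b"owner:ITI"
```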

3. Dataset Watermarking:

Dataset watermarking is employed to track and verify input and training data used during model development, ensuring traceability back to the original owner or creator. By embedding watermarks during the model training process, organizations can track data ownership and enhance ML model verification, preventing unauthorized use.
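
One approach explored in research is to stamp a small trigger pattern onto a fraction of the training samples, so that any model trained on the dataset learns a behaviour only the dataset owner knows to test for. In the sketch below, the patch size, marking rate, label choice, and array shapes (N grayscale images of shape H x W) are illustrative assumptions.

```python
# Minimal sketch of dataset watermarking via a trigger pattern: a fixed
# white patch is stamped onto a small fraction of training images and tied
# to a known label, giving the dataset owner a verifiable fingerprint.
import numpy as np

def watermark_dataset(images: np.ndarray, labels: np.ndarray,
                      target_label: int, rate: float = 0.01,
                      seed: int = 0) -> tuple[np.ndarray, np.ndarray]:
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    chosen = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in chosen:
        images[i, -4:, -4:] = 255   # stamp a 4x4 white patch in the corner
        labels[i] = target_label    # tie the trigger to an owner-chosen label
    return images, labels

# A model trained on the marked data tends to map the patch to target_label,
# which the dataset owner can later test to demonstrate provenance.
```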

4. Model Watermarking:

Model watermarking involves embedding information within the parameters or structure of an AI model, allowing for tracking and verification of ownership and usage. This technique addresses concerns related to unauthorized use, distribution, or modification of AI models, reinforcing accountability and transparency in the AI ecosystem.
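
As a rough illustration, the sketch below encodes an ownership bit string in the signs of a secret, key-selected subset of a model's weights, and later verifies ownership by reading those signs back. This is a simplified illustrative scheme under assumed names and shapes, not the method of any particular paper or product.

```python
# Minimal sketch of white-box model watermarking: a secret key selects weight
# positions, and the signs of those weights are nudged to encode a bit string.
import numpy as np

def embed_model_watermark(weights: np.ndarray, bits: np.ndarray, key: int) -> np.ndarray:
    rng = np.random.default_rng(key)
    idx = rng.choice(weights.size, size=bits.size, replace=False)
    flat = weights.flatten().copy()
    # Encode bit 1 as a positive weight and bit 0 as a negative weight,
    # keeping magnitudes unchanged so model behaviour is barely affected.
    flat[idx] = np.where(bits == 1, np.abs(flat[idx]), -np.abs(flat[idx]))
    return flat.reshape(weights.shape)

def verify_model_watermark(weights: np.ndarray, bits: np.ndarray, key: int) -> float:
    rng = np.random.default_rng(key)
    idx = rng.choice(weights.size, size=bits.size, replace=False)
    decoded = (weights.flatten()[idx] > 0).astype(int)
    return float((decoded == bits).mean())   # fraction of bits recovered

weights = np.random.default_rng(0).standard_normal((256, 128))
message = np.random.default_rng(1).integers(0, 2, 64)
marked = embed_model_watermark(weights, message, key=42)
assert verify_model_watermark(marked, message, key=42) == 1.0
```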

5. Differential Watermarking:

Differential watermarking combines various watermarking techniques, targeting different elements of inputs or outputs with unique signals. This approach supports the direct sourcing of AI-generated content, allowing generative AI applications to more accurately cite the sources of factual text by embedding watermarks in both the output and metadata.
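
The sketch below illustrates the idea by carrying the same provenance record through two channels at once: an invisible pixel-level mark in the output and a signed entry in the accompanying metadata, so that one channel can still be checked if the other is stripped. The record fields, signing key, and function names are illustrative assumptions.

```python
# Minimal sketch of differential watermarking: one provenance record, two
# carriers (LSB pixel embedding plus signed sidecar metadata).
import hashlib, hmac, json
import numpy as np

SECRET_KEY = b"provider-signing-key"   # assumption: a provider-held secret

def watermark_output(pixels: np.ndarray, record: dict) -> tuple[np.ndarray, dict]:
    payload = json.dumps(record, sort_keys=True).encode()
    digest = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    # Channel 1: embed the 256-bit digest into the least significant bits.
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    flat = pixels.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    # Channel 2: attach the record and its signature as sidecar metadata.
    metadata = {"provenance": record, "signature": digest.hex()}
    return flat.reshape(pixels.shape), metadata

record = {"model": "example-model", "sources": ["https://example.org/article"]}
image = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
marked_image, metadata = watermark_output(image, record)
```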

AI authentication techniques aim to increase transparency, minimize risks, and build trust in generative AI across the AI value chain, as well as in the broader information ecosystem. Learn more about different kinds of authentication techniques in ITI’s new policy guide for global policymakers.

If you’re interested in hearing more about ITI’s global AI advocacy work, connect with our policy experts Courtney Lang and John Miller.
