MuleSoft - Custom Policy & AI: Toxicity!




Scenario

At MuleSoft, security is one of our key focus areas, and in Anypoint Platform we can protect our data and APIs through policies. The Toxicity AI policy protects our data by checking whether content contains 'toxic' language. Today it is more important than ever to ensure that the data passing through our APIs complies with our rules and code of ethics.


Solution

I created a Custom Policy named ToxicityAI-FS that can be applied to our APIs to protect them by verifying whether or not the payload contains toxic content.



To create a Custom Policy you can follow one of the many guides available on the Web, including the official MuleSoft ones:


Mule 4 Custom Policy Workflow

Publish a Mule 4 Custom Policy

Custom Policy Examples


Once the Policy has been created, it will be available on Exchange as an asset:



It can then be applied to your APIs via API Manager.

Once this policy has been chosen, it must be configured: you must set the Authorization key used to access Perspective AI, and select the content path to which the toxicity check should be applied.
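To make the "content path" idea concrete, here is a minimal sketch of how a path could be resolved against a JSON payload. The helper name and dot-separated path syntax are hypothetical illustrations, not the policy's actual configuration mechanism (which is defined in the policy itself, e.g. via an expression).

```python
import json

def resolve_content_path(payload: dict, path: str):
    """Walk a dot-separated path (e.g. 'data.message') through a JSON payload.

    Hypothetical helper for illustration only; the real policy selects the
    content to check through its own configuration.
    """
    value = payload
    for key in path.split("."):
        value = value[key]
    return value

payload = json.loads('{"message": "You are a stupid man, very idiot!"}')
print(resolve_content_path(payload, "message"))
```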




NOTE: There is also a version that does not require an authorization key, but it has been omitted here to show how simple the policy configuration is.


After applying the policy to an API, you can check and verify the result.

For example, if we call an API that accepts this payload


{
  "message": "You are a stupid man, very idiot!"
}

the response will be:

{
    "input_message": "You are a stupid man, very idiot!",
    "toxicity_status": "YES",
    "toxicity_value": "0.9391453"
}


But if we call the API with a different payload, like this:

{
  "message": "Today is a wonderful day and you are very nice"
}

the response will be different:

{
    "input_message": "Today is a wonderful day and you are very nice",
    "toxicity_status": "NO",
    "toxicity_value": "0.032627538"
}
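The two responses above suggest that the policy maps the numeric toxicity score to a YES/NO status by comparing it to a threshold. A minimal sketch of that mapping, assuming a 0.5 cut-off and the field names shown above (the actual threshold used by the policy is not stated in this article):

```python
def toxicity_response(message: str, score: float, threshold: float = 0.5) -> dict:
    """Build a response in the shape shown above.

    The 0.5 threshold is an assumption for illustration; the real policy may
    use a different cut-off.
    """
    return {
        "input_message": message,
        "toxicity_status": "YES" if score >= threshold else "NO",
        "toxicity_value": str(score),
    }

print(toxicity_response("You are a stupid man, very idiot!", 0.9391453))
print(toxicity_response("Today is a wonderful day and you are very nice", 0.032627538))
```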


This policy uses the Perspective LLM in the background to check and verify toxicity in the input content.
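For reference, the public Perspective API scores text through a `comments:analyze` request. The endpoint, request body, and response shape below follow Perspective's published API; how the policy itself calls the service internally is not shown in this article, so treat this as a standalone sketch:

```python
import json

# Public Perspective API endpoint (an API key is passed as a query parameter).
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_analyze_request(text: str) -> dict:
    # Request body: the comment text plus the attributes (here TOXICITY)
    # we want scored.
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_toxicity_score(response: dict) -> float:
    # The score lives under attributeScores.<ATTRIBUTE>.summaryScore.value.
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(json.dumps(build_analyze_request("You are a stupid man, very idiot!")))

# A trimmed-down example of what the API returns for a toxic input:
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.9391453, "type": "PROBABILITY"}}
    }
}
print(extract_toxicity_score(sample_response))
```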


NOTE

If you want to try this Custom Policy ToxicityAI-FS, send me a private message and I'll share the specification with you!
