Most people equate “safe” or “ethical” AI with guardrails: heavy regulation, safeguards baked into base models, and, ultimately, the blunting of those models’ capabilities.
This loss of value makes people doubt it’s possible to build both a successful company and ethical AI. That is a false dichotomy.
Good does not only mean avoiding harm. Being good means doing what you think is right in a specific context. Ethics built into AI as acting on what the user believes is right, rather than restricting potentially harmful behavior, delivers far more value to users. You can build ethical models without crippling them.
We’ve found that building ethical AI models works under two themes: transparency and control.
Transparency means open, honest communication with your users. It builds trust and stronger communities, and users who are kept in the loop will help you solve problems.
Control, but in the hands of the individual. AI should reflect the specific context and values of each user. One-size-fits-all rules work poorly with AI, often leaving users frustrated by safeguards and incentivized to jailbreak. The relationship between user and business becomes that of a helicopter parent and child: a low-trust relationship with low satisfaction on both sides.
Accordingly, daios is building two tools:
1. Values Engine: A personalized LLM ethics layer that is controllable by users, with transparent training data (check out our demo training data docs in the comment below).
2. Data Card: An accessible page that explains training data and AI behavior to non-technical users.
Check out more on our website below. If you’re building in gaming, digital clones, or marketing, let’s chat.
Stay tuned for more information on the launch of our Data Card with a client from the gaming industry.
#ethicalAI #guardrails #LLMs #AIethics #ethicalvalues