User-Centric Design: Key to Robust AI Governance
Generated by Adobe Firefly, prompted by Alex Wall

User-Centric Design: Key to Robust AI Governance

This article summarizes AI governance principles from several key authorities, relates them to each other, and then applies 3 different kinds of user control that users request when using AI.?

The article argues that following a user-centric approach leads to improved AI governance and better products.? This approach can be applied to multiple kinds of products, language models, image and video models, or any other kind of generative model.

This exercise demonstrates that AI governance is not just a legal or regulatory device, but a practical and crucial component of AI deployment and development – by ensuring that AI accomplishes what its users set out to achieve, creating increasingly trustworthy AI.

For reference, see the below section of AI Governance principles and their sources.


AI Governance Principles and their Source Authorities

Transparency

EU AI Act, NIST AI RMF, GDPR

People should be told about the AI with clear and sufficient information to make decisions about its application, including appropriate notices that explain the AI’s purpose, functionality, and potential limitations in plain language.

Accountability

EU AI Act, NIST AI RMF, HUDERIA, GDPR

A human control structure should answer for the actions of an AI, for example, an AI governance committee empowered to oversee uses of AI in an organization.

Human-Centricity

EU AI Act, NIST AI RMF, GDPR

The AI should be designed to serve, improve and enhance the lives of humans and society, not destroy or replace it.

Fairness and Bias Mitigation

EU AI Act, NIST AI RMF, HUDERIA, GDPR

AI should treat all groups of society fairly in the context of the AI's purposes so that irrelevant immutable characteristics about humans do not unfairly skew AI outputs.

Data Minimization and Privacy

EU AI Act, GDPR

The AI should use no more personal or confidential data than necessary to accomplish its purpose. For example, one can: mask or pseudonymize personal data when not directly related to the stated purpose and uses, avoid identifiable personal data in training data, inject differential synthetic data to obscure personal associations, or if an AI is intended to work upon personal data, ensure all principles of privacy are implemented.

Reliability and Robustness

NIST AI RMF, HUDERIA

The AI should be designed to handle edge cases and unexpected data and provide consistent quality and reliability. For example, an AI creating images should not be trained on data unrepresentative of the real world.

Explainability and Interpretability

NIST AI RMF, HUDERIA, GDPR

It should be possible to describe with sufficient detail how an AI arrived at an output. Sufficient detail means that it is possible to measure the output versus the explanation of how the AI works to arrive at a conclusion as to whether the AI succeeded, and why or why not.

Continuous Monitoring and Oversight

NIST AI RMF, HUDERIA

An AI should be subject to continuous monitoring and oversight to avoid incidents affecting adherence to the other principles. This might include an AI-specific or integrated incident response plan, process, and stakeholders.

Security and Resilience

NIST AI RMF, EU AI Act

An AI should be designed to be secure and pose no harm to humans, and be hardened against various forms of attack, including to availability. Attacks might include specially designed prompts—crafted inputs deliberately structured to exploit weaknesses in the AI's logic or training—intended to mislead or confuse the AI into performing unsafe actions.

Societal and Environmental Well-being

EU AI Act, HUDERIA

An AI should be designed to minimize harm to the environment or society. For example, AI should be designed to use no more energy than necessary to meet appropriate accuracy benchmarks and avoid excessive processes on a curve of diminishing marginal improvement.

Risk Management Framework

NIST AI RMF, GDPR

An AI should be subject to a risk management framework, so that risks can be evaluated consistently and methodically.

Ethical Use of AI

HUDERIA, EU AI Act, GDPR

An AI should be designed to avoid unethical uses of the AI that violate the aforementioned principles.


Deployments of Agentic AI Will Demand Improved Control over AI Output by Users

As technology rapidly progresses toward agentic AI applications. increased user control is necessary. AI agents multiply risk by essentially ‘rolling the die’ multiple times as they complete chains of tasks, each step dependent upon the accuracy of the output of the prior operation. To the extent each prior operation is flawed, the ultimate output will be more so. As agents are granted the ability to affect real work systems, the stakes of their output increase as well. The greater the consequences to the user (spending money, causing harm to individuals important to the user, or potentially embarrassing the user), the more discerning the user will be about the trustworthiness of the AI.

For example, if one tries to make a picture with an AI tool, 95% of the time it might make something novel or beautiful, but not what was requested. This is the equivalent of an employee who shows a manager interesting work that they did, but that wasn’t assigned. With agentic AI, we will run into similar problems until core issues are solved.


Key Areas of User-Centricity that Align with AI Governance

Creative Control – The user wants to use AI to create something aligned with their specific vision and inspiration.

The more invested the user is in the creation task, the more specific their demand will be upon the AI, and the more control they will desire. Many current generative AI tools are very good at creating output that seems amazing at first, until the user realizes that they cannot actually obtain the output they envision or iterate toward the vision with enough control. For example, a basic initial image generation might show promise, but pieces are out of place, and attempts to iterate are too different from the output that the user initially thought promising.

This desire for creative control bolsters the AI governance principle of human-centricity, by helping to ensure that the human maintains primary authorship. When AI outputs adhere more reliably to user instructions, there is a strong argument that the user is the author, and (unless the prompt is requesting a copy of another work) the output will be more original by virtue of allowing greater creative input to the user. As AI models are trained to be more responsive to creative controls, these controls will imbue originality into outputs, which will become less like particle board mashups of (potentially copyrighted) training data.

Similarly, users demand creative control not just in image or text generation but also in agentic applications. If you are asking a model (say a Large Action Model (LAM)) to decide to take an action based upon particular conditions, there is design creativity in that configuration and the system in which it plays a part. Like artists, engineers creatively solve problems, so the same principles of creative control apply.

When AIs adhere closely to creative instructions, the AI is more accountable, with outputs being more predictable and reliable. Arguably, outputs are also more safe (provided the user’s intention is not dangerous). Such AIs also benefit from greater transparency, because if an AI follows instructions, many outputs and actions of the AI are self-explanatory, not requiring as many notices and documentation. There is even the mitigation of bias and unfairness with creative control, as outputs will align with the cultural sensitivity of the user, as humans understand cultural nuances that can be lost on AI.

Contextual Control – The user does not want to lose time and productivity by outputs that are biased by irrelevant context (e.g., a model's bias toward coding in a certain style, emphasis on irrelevant facts in an essay, or generating an image with a particular blur in the background).

More control over where the AI focuses, or what information the AI uses to produce its output will be necessary for AI to become more trustworthy. 128k tokens may seem to be a large context window, but current-gen AIs – regardless of the stated context window – suffer from limitations on the ability to focus on the 'needles in the haystack’ of most importance to the user. Context is therefore both a volume and weighting problem.

By contrast, the human mind stores vast amounts of information, and we can recall relevant content from years ago in the form of learned experience, and our minds attach more significance differently than AIs do. AIs often use a kind of ‘averaging’ of inputs rather than appropriately weighing factors in reasoning. For example, if I give an LLM a spreadsheet of facts in a divorce proceeding to analyze, say, the best interests of a child, it is likely to be heavily swayed by the prompt and provide output that summarizes all the factors, rather than identifying the key facts of the case like a human lawyer would.

To bridge the gap between a user’s expectations and AI’s current-gen performance, contextual control is key. It enables the user to provide nuanced instruction and focus the AI on the examples that are most helpful for the output.? This goes beyond simply enabling a response grounded in domain knowledge (such as RAG/Retrieval Augmented Generation) and will require enabling users with relatively easy ways of coaching the models at a deeper level with personalized context.? By enabling this capability, an AI increases its transparency, because the user can understand better how the AI works and what it is doing to create its output, and the user can better troubleshoot problems and trust the output.

Current generative AI models are great at providing summaries and translating language into terms that are easier to understand because this plays to the sweet spot of their strength in weighting word/token associations. However, if one factor is much more important than another and is only lightly referenced in the prompt or training data, the output will usually not match the user’s expectations.

Therefore, larger context windows with more memory of prior instructions and context, and greater user control over fine-tuning the underlying models, will improve the performance of models to user expectations.

Control over context aligns with the privacy principle of data minimization, as it ensures users can regulate general context and, importantly, what personal data is used or generated. Under GDPR and similar laws that are being adopted globally and across the United States, providing user control over personal data used in context (and training data) is the first priority and most effective means to support user privacy rights. Closely related, context control supports purpose limitation, as the more closely an AI adheres to the user expectations as to what personal data is used and why, the closer the model adheres to the privacy principles of purpose limitation. This further compounds the transparency benefit, as the context is more self-evident, requiring fewer notices.

Until models can be trusted to remember the things that we tell them are important and to appropriately weight sparse but significant information amid large amounts of additional context, they will not be reliable enough to be allowed to work on their own as agents.

The AI governance principles of fairness and bias mitigation are bolstered when users have more contextual control, as then users are more aware of how the AI is attempting to answer a question or create output. It makes users more likely to see and avoid egregiously biased outputs.

This example also brings us back to transparency and accountability principles, because the context that influences the output should be as transparent as possible to the user, and allows both the user and the model to be more accountable for those outputs. In short, context control bolsters transparency and accountability in AI, by reinforcing user awareness of ‘garbage in, garbage out.’

A user’s control over AI context also increases the explainability and interpretability of a model’s outputs, helping to answer the question “why did the AI say/do this?”

Consistency Control – Having achieved a certain output (a great image, a workflow solution, a text output, etc.), the user will want to do it again and again consistently.

This is because if they are using AI to do something productive, such as win influence or make money, then they must be able to reproduce the success and/or be able to riff on variations. The user does not want to waste time dealing with the bias of irrelevant context (e.g., a model's bias toward coding in a certain style, emphasis on irrelevant facts in an essay, or generating an image with a particular blur in the background). When you tell an AI "that's great, do it again," the probabilistic nature of current AI—referring to its reliance on probability-based mechanisms to generate outputs—will usually prevent re-creating the same output.

The user’s natural requirements for consistency of output strongly align with the cornerstone AI governance principles of robustness and reliability, which are designed to ensure that AI systems perform consistently and reliably under varying conditions.

Monitoring the context of AI outputs is also a key part of the principles of human oversight and continuous monitoring of AI systems.


Summary of User-Centric Control Principles related to AI Governance Principles

This section connects three user control principles (creative control, contextual control, and consistency control) with the AI governance principles restated above:

Creative Control -Ensures that users can direct AI outputs to align with their intentions, enhancing human authorship and responsibility. Adhering to user instructions promotes fairness and reliability while aligning with transparency and accountability.

Relates to: Human-Centricity, Accountability, Transparency, Reliability, Fairness

Contextual Control - Provides users with the ability to influence the context AI uses, ensuring relevance and reducing bias. Improves explainability and allows for transparent decision-making by revealing what information the AI is utilizing. Reinforces data minimization by limiting unnecessary personal data use.

Relates to: Transparency, Data Minimization, Fairness, Explainability, Accountability

Consistency Control - Supports the production of repeatable, reliable outcomes under varying conditions. Continuous monitoring ensures consistent performance, while robustness prevents unexpected deviations. Human oversight is critical to refine processes and ensure long-term reliability.

Relates to: Reliability, Robustness, Continuous Monitoring, Human Oversight

Supports the production of repeatable, reliable outcomes under varying conditions. Continuous monitoring ensures consistent performance, while robustness prevents unexpected deviations. Human oversight is critical to refine processes and ensure long-term reliability.


In sum

Users want an AI system to perform and provide useful outputs under a variety of conditions. They want to be able to reproduce good outcomes and eliminate bad ones, and they want to create something original using AI systems. In AI governance, these goals align with creating AI that is useful and provides benefit while mitigating risks. The more advanced AI agents become, the more enmeshed they will be as both social and technical systems, and the more their behavior will need to be examined through the lenses of user centricity and AI governance.



要查看或添加评论,请登录

Alex Wall的更多文章

社区洞察

其他会员也浏览了