Generative AI: responsible implementation for content creation
rob gillespie
Information architect, content creator, tech writer, content digitalization lead and Web3.0 enthusiast!
I have explored some of the potential issues related to using Generative AI. Many of these issues can be addressed, and to some degree mitigated, through an effective implementation plan.
Every project will have unique challenges, technical debt, and likely a legacy operational framework to contend with. However, we can define some likely common components.
Statement of objectives
A statement of objectives is a living document that will continually evolve with the project. It is a critical foundation because it helps an organization understand why employing generative AI is being contemplated and the anticipated benefits of doing so. As it evolves, it should be constantly tested and seen as a measure of success (or failure).
This statement should be as detailed as possible. Any benefits must be quantitatively and/or qualitatively expressed. Implementing AI is never free, and while it is easy to claim cost savings in some vague way, a responsible organization should evaluate any saving against the actual cost of achieving it. Moreover, putative cost savings must be proved in practice. The success of a project solely designed to save money can only be assessed quantitatively.
Projects that have other ambitions, such as improving the quality of the content created or removing some of the operational impediments through automation, can also have qualitative evaluators.
Qualitative improvements have a cost associated with them. However, it is important to understand the true purpose to appropriately assess benefits. It is rarely because of grammatical pedantry alone. The real aim is typically to better engage with your audience, improve understanding, reduce reliance on support, or generate sales. Operational improvements, such as implementing content operations, tend to have mixed objectives. Saving resources is a laudable objective, but content operations are much more about enabling an organization's content pipelines to better respond to organizational imperatives. A more nuanced assessment is required here.
The value of a statement of objectives is that it ensures the organization has a clear understanding of why it is contemplating using generative AI and what the intended benefits are. It cannot simply be because everyone else is doing it.
Didactic plan
Typically, knowledge in an organization is distributed unevenly. With AI, it is usually concentrated in a team of specialists and a core of ardent advocates, of whom only some may have technical insight but all of whom eye the putative benefits.
A responsible organization must ensure a wider understanding of what AI is. This will help reduce employee friction and soothe inevitable fears. Even more critically, it must ensure those in decision-making positions, who will be charged with determining whether implementation should proceed, have sufficient knowledge and understanding to do so. It is easy to get distracted by the promised benefits of generative AI and fail to properly analyze the inherent business risks associated with it.
Ethical responsibility charter
Every organization has a duty, and, ideally, a commitment, to operate ethically. AI raises particular concerns about the proper use of data and the controls used to ensure privacy. Additionally, the rise and sophistication of deepfakes create a new source of consternation. There are obvious legal considerations too, but I give the benefit of the doubt and assume legal sign-off.
Employees and users/customers have a right to expect that data is used transparently and for well-defined purposes. There are genuine fears, some justified and others less so, about the misuse and improper manipulation of data and the creation of false realities designed to influence unduly. Users must be effectively reassured that data will be used responsibly and empowered to give informed consent. When data is required to improve user experience, for example through personalization, a clear explanation of the process, and of how data is used, should be given.
It is incumbent on any organization that holds personal data, publishes content of any type, or provides a service to foster a climate of trust. Creating and maintaining trust requires transparency, clear statements of good practice, and evidence of compliance with those statements. Once users lose trust, re-establishing a relationship is fraught with difficulties.
Maintenance and monitoring plan
Once generative AI is adopted, there is a need for a comprehensive system of monitoring. Even where LLMs function impeccably at launch, performance is liable to deteriorate over time. More realistically, at launch, performance is sub-optimal and it is important to have a comprehensive data set to better train the LLM and identify potential deficiencies. Responsible organizations will always ensure there is a human in the loop and that human has a comprehensive range of monitoring tools to properly exercise their function. A black box militates against the practical and ethical functions humans must perform.
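As a minimal sketch of what a human-in-the-loop arrangement could look like in code (the field names, confidence score, and threshold here are hypothetical, not a prescribed design), every generated response can be logged for audit and routed to a human review queue when quality signals fall below a cut-off:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GeneratedItem:
    prompt: str
    response: str
    confidence: float  # hypothetical model-reported quality score in [0, 1]

@dataclass
class ReviewQueue:
    threshold: float = 0.8  # illustrative cut-off, not a recommended value
    pending: List[GeneratedItem] = field(default_factory=list)
    audit_log: List[GeneratedItem] = field(default_factory=list)

    def triage(self, item: GeneratedItem) -> str:
        # Every item is logged, so reviewers can audit retrospectively
        # even when real-time review is not possible.
        self.audit_log.append(item)
        if item.confidence < self.threshold:
            # A human must approve this item before publication.
            self.pending.append(item)
            return "needs_review"
        return "auto_approved"

queue = ReviewQueue()
status = queue.triage(
    GeneratedItem("What is our refund policy?", "Refunds within 30 days.", 0.55)
)
```

The point of the sketch is the shape, not the mechanism: nothing is discarded silently, and the human reviewer always has the full audit trail rather than a black box.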
LLMs require maintenance. Because of how they function, performance will inevitably decline. Data poisoning is also a realistic threat. Moreover, if the subject data changes, the LLM must be re-trained on the updated data. While in some instances, training on the delta is sufficient, overall, this is liable to decrease the quality of the data set and lead to a decline in performance.
Organizations require a comprehensive maintenance plan and dedicated resources to perform it. It must be understood from the outset that deploying an LLM is not a fire-and-forget operation. There are long-term and recurring costs that must be factored into the decision to deploy. An LLM that is unmonitored and uncontrolled is liable to create business risk and is likely to be considered evidence of irresponsible behavior. If, for example, your LLM defames someone, what would the defense be?
Risk and opportunity matrices
Any responsible organization must properly identify, explore, quantify, and communicate risks and opportunities. An opportunities register enables an organization to understand why a particular action or outcome is desirable. It helps in planning, in quantifying the respective benefits of different courses of action, and in the relative prioritization of actions. It also allows for retrospective assessment of success and failure. Promised benefits should be delivered; if they are not, the reasons for the failure should be properly analyzed.
A risk register identifies the known risks of any particular course of action (or inaction). It enables informed decision-making and allows undesirable outcomes to be identified and mitigated. Any application of AI inevitably involves novel risks and will require bespoke mitigation and controls.
Content creation, curation, and management plan
If an LLM is to be trusted with content creation, it must be subject to the same types of controls that are applied to other forms of content creation. Indeed, given the tendency to hallucinate, and for performance to deteriorate over time, the level and depth of monitoring and control should, if anything, be enhanced for AI. Quality, accuracy, and appropriateness are minimum expectations. Content created by AI should be subject to a thorough review and approval process; humans must remain in the loop. In cases where such controls are not possible, for instance with a chatbot, a system for retrospective review and evaluation must be established. Content creation must be aligned with monitoring capabilities, with as close to real-time feedback channels as possible.
Content curation should, of course, be applied to AI-created content, which, like any other form of content, requires curation over time. Content can become obsolete, inaccurate, or less relevant, and systems must be adopted to identify such instances and rectify them. This task is particularly important where such content is to be consumed by AI. Periodically, the LLM will have to be re-trained on the updated content.
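One simple way such a curation system might surface stale content (the review interval and metadata fields below are invented for illustration; a real CMS would supply richer records) is to attach a last-reviewed date to each item and flag anything whose review window has elapsed:

```python
from datetime import date, timedelta

# Hypothetical content records; in practice these would come from a CMS.
content = [
    {"id": "faq-001", "last_reviewed": date(2023, 1, 10)},
    {"id": "guide-002", "last_reviewed": date(2024, 5, 1)},
]

# Illustrative policy: review every item at least every 180 days.
REVIEW_INTERVAL = timedelta(days=180)

def overdue_items(items, today):
    """Return the ids of items whose review interval has elapsed."""
    return [i["id"] for i in items if today - i["last_reviewed"] > REVIEW_INTERVAL]

stale = overdue_items(content, date(2024, 6, 1))  # ["faq-001"]
```

Items flagged this way would feed both the human curation queue and the list of documents to refresh before the next re-training cycle.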
Content management is the creation of end-to-end content management pipelines, with appropriate controls and quality gates. It is the strategic expression of how content creation and curation are to be performed, thus ensuring a holistic view of the content life cycle.
Statements of use
An ethical responsibility charter is a statement of intent. When content is consumed by users, users have a right to understand how that content was created. To build trust, it is important to be transparent. Users must be given a clear and comprehensive explanation of how content is created, curated, and managed. Additionally, there must be candor about the sources of data used to train the LLM (or other AI).
Given the tendency for LLMs to be trained on data that belongs to others, users must have a clear understanding of the sources of data and be able to assess the risks of using the model. LLMs are being heavily litigated and it remains unclear if any liability will pass to those who use an LLM and "unknowingly" infringe on the ownership rights of others.
Data plan
Even where an organization deploys a publicly available LLM, it will likely want to train it on its own data. Such data should be carefully selected so as not to expose IP or other commercially sensitive information unintentionally. Moreover, there must be appropriate controls to ensure the integrity of the data used. Going forward, there needs to be a comprehensive plan for a data pipeline that ensures the continued availability of appropriate and valid data so that the LLM can be re-trained.
A data plan is a focused component of the maintenance and monitoring that must be undertaken. The performance of an LLM is dependent upon ensuring the integrity of the data on which it is trained and having sufficient controls to prevent, or at least limit, data poisoning through operational reality or malicious acts. Bad actors have targeted LLMs to induce bad behavior or to precipitate data breaches. Employing an LLM responsibly requires additional threat detection and mitigation activities.
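As an illustrative sketch of the selection control described above (the patterns are placeholders, not a complete sensitive-data taxonomy; a real deployment would use a vetted data-loss-prevention tool), a pre-training filter might flag documents containing apparent personal data or credentials before they enter the training corpus:

```python
import re

# Hypothetical patterns for demonstration only.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
}

def screen_document(text: str) -> list:
    """Return the names of sensitive patterns found in a candidate document."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def build_corpus(documents: list) -> list:
    # Only documents with no flagged patterns are admitted to the training set;
    # flagged items should go to a human for redaction, not be silently dropped
    # in a real pipeline.
    return [doc for doc in documents if not screen_document(doc)]

docs = ["Our release notes for v2.1.", "Contact jane@example.com for access."]
clean = build_corpus(docs)  # only the release notes survive screening
```

The same screening step, run continuously, also gives the data pipeline a checkpoint at which poisoned or anomalous inputs can be intercepted before re-training.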
Toolchain
LLM deployment would not in itself necessarily imply additions to your existing toolchain (a major feature of its apparent attractiveness). However, you might choose to attempt to mitigate some of the eccentricities of LLMs using knowledge graphs. In my view, knowledge graphs have much more important uses, not least in hyper-personalization.
About the author
I am a technical writer and information architect with a passion for new technology and understanding the changes it will bring.
I am looking for a new role: a fresh challenge and a new adventure.
#content #contentstrategy #infoarchitecture #ai #generativeai #ethics #litigation #ip #techwriting #techwriter #samaltman #chatgpt