YGNAP (You're Gonna Need A Policy) for ChatGPT, Bard, and every LLM Based Productivity Tool.
Lee Gonzales - Midjourney


Before I get started: I am not a lawyer, and this is not legal or business counsel. I get paid to do that; this is just food for thought. Furthermore, the field here is very, very big. I am only going to touch on high-level concepts and discuss what I have learned working with OpenAI's tools and Microsoft's efforts, as they are both big, trustworthy(ish), have mature policies, and have deep pockets. Be very cautious in using tools/systems/APIs from other companies in a production, educational, or personal use case. Many good and reputable companies are building outstanding products using LLMs; YMMV, so do your own research, consider your risk tolerance, and read their Terms of Service & Privacy Policies.

Corporate Folks - if you are not already writing a policy, creating an education program, and thinking deeply about how the new crop of ML tools will get rolled out, governed/controlled, and leveraged in your company, then this brief is for you.

There are several areas of concern and opportunity you must address with regard to an LLM policy:

  1. Risk & Compliance - your team members are already using these tools; they are already copying your company data into a ChatGPT instance to help themselves be more productive, competitive, and effective. Some will be smart about this: they will be selective in what they provide in a prompt and thoughtful not to share company IP. Others will be oblivious to the risks of copying trade secrets and confidential information such as product specs/designs, client rosters, marketing strategies, and sales plans.
  2. Leverage & Productivity - your team members are already using these tools; they have found that they are invaluable thought partners, writing buddies, learning and planning aids, incredible coding partners, and phenomenal at creating leverage in SaaS products/systems. One of the fascinating, exciting, and scary things about AI is that no one truly knows everything these systems can do or what they will be good for; your people will find many uses you can't even imagine right now.
  3. Cost & Use - your team members may already be using these tools, and someone in your organization is (hopefully) thinking about how to leverage them to make your people more effective, efficient, and happy. Someone in your organization is probably also considering how to use these tools to improve your products and customer service/support, and internally to create better tools, reports, and analysis. Given all of that, there will be costs to consider, and while I firmly believe that, when well executed, the ROI of LLMs is phenomenal, cost is still a concern, and one that the industry is not particularly prepared to address. See Azure OpenAI Service and OpenAI's pricing to ballpark your costs. ChatGPT Plus costs $20 per month, and it's worth it. Check out my article on this too!
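To get a feel for the numbers, here is a minimal back-of-the-envelope cost estimator. The per-1K-token prices below are placeholders I chose for illustration, not current rates; check the OpenAI and Azure OpenAI pricing pages for real figures.

```python
# Back-of-the-envelope monthly cost for API-based LLM usage.
# PRICES ARE PLACEHOLDERS -- check the current OpenAI / Azure OpenAI pricing pages.
def monthly_cost(requests_per_day, avg_prompt_tokens, avg_completion_tokens,
                 price_per_1k_prompt=0.03, price_per_1k_completion=0.06, days=30):
    prompt_tokens = requests_per_day * avg_prompt_tokens * days
    completion_tokens = requests_per_day * avg_completion_tokens * days
    return (prompt_tokens / 1000) * price_per_1k_prompt \
         + (completion_tokens / 1000) * price_per_1k_completion

# Example: 100 requests/day, ~500 tokens in and ~500 tokens out per request.
print(f"${monthly_cost(100, 500, 500):.2f}/month")
```

Even a rough model like this makes it obvious that heavy internal usage adds up quickly, which is why measuring usage (see the sample policy below) matters.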

Parents & Educators

  1. Cheating & Plagiarism - your students are already using these tools. They are already using ChatGPT, GPT-3 (via playground or API), Perplexity.ai, Quillbot, or Sudowrite. Yes, new systems are being announced (OpenAI's classifier, GPTZero) to detect AI-generated content; however, that is an unwinnable arms race (r/ChatGPT is full of bypass techniques), and frankly not one worth fighting. Instead, consider how you can leverage these tools to accelerate learning, make it easier and more enjoyable, and be clear about what is acceptable vs. what is not acceptable in use. Further reading from the NYT and Wharton Prof. Ethan Mollick, who writes extensively about this, here, here, and here. Business Insider has a good roundup, as does 538.
  2. Curriculum design & educator leverage - your teachers are already using these tools. They are already using ChatGPT, GPT-3 (via playground or API), Perplexity.ai, Quillbot, or Sudowrite. In the short term, you will need to consider how these tools can be supported, encouraged, and leveraged effectively and safely. Long term, the advent of LLM-based learning tools will completely upend how we teach, educate, test, evaluate, and correct student performance. Put simply, if you are not thinking about how LLM tools change the fundamental nature of your individual and organizational pedagogy, you are missing a transformational opportunity to improve the lives of your learners and educators. Further thoughts from Dan Fitzpatrick on prompts for teachers, and Microsoft announces new AI-powered classroom tools.

General concerns for all

LLMs will fabricate and hallucinate; you can't trust them to be perfect. You also can't trust everything you read on the internet, so this is not materially different. This will be addressed through smart usage and continued model refinement and training. Pairing LLMs with factual sources of trusted information will be huge too; some examples are the new Bing, Perplexity.ai, and Groupthink.ai.
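One lightweight way to nudge a model toward trusted information is to put your sources directly in the prompt and instruct the model to answer only from them. A minimal sketch (the template wording is my own, not any vendor's API):

```python
def grounded_prompt(question, sources):
    """Build a prompt that restricts the model to the supplied sources."""
    numbered = "\n".join(f"[{i}] {src}" for i, src in enumerate(sources, start=1))
    return (
        "Answer the question using ONLY the numbered sources below. "
        "Cite sources by number, and say 'I don't know' if they are insufficient.\n\n"
        f"Sources:\n{numbered}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(grounded_prompt("When was the policy last updated?",
                      ["Policy v1.2, updated 2023-01-15."]))
```

This doesn't eliminate hallucination, but it gives reviewers something to check the output against, which is the point.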

How to start

  1. Accept that the genie is out of the bottle: this technology is not just around the corner, it is here now, and the power, complexity, and capability of these tools are growing at an exponential rate.
  2. Convene a working group to discuss these new AI tools, get everyone on the same page, and start the conversation on what these technologies mean for your organization as a whole, your people in particular, and those that you serve.
  3. Start small and simple with your policy and guidance. Frame it in the context of your values and existing policies, and update it as you learn more.
  4. Educate, explain, inspire, and calm. Some people are going to use these tools blindly and wildly; help them do better. Some people are going to freak out about the potential of these tools; they will ignore them, vilify them, or laugh at them. Help them see the path towards a bright and happy future.


Here is a sample policy. Full disclosure: I had ChatGPT help me write it, and again, I AM NOT A LAWYER; this is a place to start and is purely for educational purposes.

As language models become more sophisticated and widely used, it is essential for organizations to establish clear guidelines for their responsible use in the workplace. This policy outlines the dos and don'ts of using language models like GPT-3 and ChatGPT in order to ensure that they are used in a manner that is ethical, lawful, and beneficial to the organization and its stakeholders.

Policy Statement

Purpose: The purpose of this policy is to provide clear guidance on the responsible use of language models in the workplace, including the use of GPT-3 and ChatGPT.

Scope: This policy applies to all employees, contractors, and other individuals who use language models in the course of their work for the organization.

Do’s

  • Use language models for language translation, text summarization, and content creation.
  • Use language models to support research and development efforts, such as by generating hypotheses or analyzing data.
  • Use language models to enhance communication and collaboration, such as by generating meeting summaries or automating email responses.
  • Use language models to improve customer service, such as by generating responses to frequently asked questions or customer inquiries.
  • Use language models in accordance with ethical principles, such as fairness, accountability, and transparency.
  • Use language models in compliance with all applicable laws and regulations, such as data privacy laws.
  • Clearly indicate the source and limitations of any content generated by the language model, such as by using a disclaimer or watermark.
  • Measure the cost, ROI, and usage of these tools.
  • Share best practices with your team, peers, and partners.
  • Adhere to the company's privacy and compliance policies when using language models to handle sensitive or confidential information.

Don'ts

  • Use language models to spread false or misleading information that could harm others, such as by generating fake news articles or spreading false information about a product or service.
  • Use language models to create or distribute offensive or abusive content, such as hate speech or harassment.
  • Use language models to support illegal or unethical activities, such as phishing, spamming, or fraud.
  • Use language models to impersonate others or make false claims.
  • Trust language models without verifying the accuracy of their output, as they can make errors or produce biased results.
  • Violate data privacy laws or regulations, such as by collecting, using, or sharing personal information without consent.

Compliance: Failure to comply with this policy may result in disciplinary action, up to and including termination of employment or contract.

Training: All employees and contractors who use language models in the course of their work will be provided with training on this policy, including the do's and don'ts outlined above.

Review: This policy will be reviewed and updated regularly to reflect changes in the use of language models and evolving best practices.

By following this policy, employees and contractors can use language models in a responsible and ethical manner, while also maximizing their potential benefits in the workplace. The organization is committed to promoting the responsible use of technology and ensuring that all employees and contractors understand the importance of using language models in a manner that is lawful, ethical, and beneficial to all stakeholders.


Other considerations & risks

  1. The OpenAI Terms of Use and OpenAI Privacy Policy seem friendly to users' intellectual property. For example, OpenAI says: “You may provide input to the Services (“Input”), and receive output generated and returned by the Services based on the Input (“Output”). Input and Output are collectively “Content.” As between the parties and to the extent permitted by applicable law, you own all Input, and subject to your compliance with these Terms, OpenAI hereby assigns to you all its right, title and interest in and to Output. OpenAI may use Content as necessary to provide and maintain the Services, comply with applicable law, and enforce our policies. You are responsible for Content, including for ensuring that it does not violate any applicable law or these Terms.”
  2. If you are developing an application on top of OpenAI, there is a lot to consider, especially if you will be processing regulated data such as personal data (PII), health records (HIPAA), or EU residents' personal data (GDPR). To start, note OpenAI's terms: “Processing of Personal Data. If your use of the Services involves processing of personal data, you must provide legally adequate privacy notices and obtain necessary consents for the processing of such data, and you represent to us that you are processing such data in accordance with applicable law. If you are governed by the GDPR or CCPA and will be using OpenAI for the processing of “personal data” as defined in the GDPR or “Personal Information,” please contact [email protected] to execute our Data Processing Addendum.”
  3. Microsoft is doing a lot in the area of responsible AI use; see operationalizing responsible AI for the details. This article on Microsoft's privacy approach is also worth a read.

References

If you loved the picture for this week's article, check out this Twitter thread and writeup.
