Our Responsible AI Principles in Practice


(Coauthored with Keren Baruch, Grace Tang, Sakshi Jain, Xinwei Sam Gong, Alex Murchison, Jon Adams and Sara Harrington)

We recently shared our Responsible AI principles, which summarize how we build using AI at LinkedIn. These principles guide our work and ensure we are consistent in how we use AI to (1) Advance Economic Opportunity, (2) Uphold Trust, (3) Promote Fairness and Inclusion, (4) Provide Transparency, and (5) Embrace Accountability. We also recently announced three initial products that leverage generative AI to help our members and customers be more productive and successful. Today, we’ll share lessons we’ve learned from our journey applying our Responsible AI principles as we developed these early generative AI-powered tools for our members and customers.


About Generative AI

Generative AI is an AI tool that generates content (such as images or text) as an “output” in response to a user request, or “prompt.” The quality of a generative AI tool’s output depends on how well the prompt is crafted. The user plays the biggest role in whether the output is usable: they decide the prompt and then judge whether the generated output is accurate and appropriate for their use case. The more practiced a user is in crafting prompts, the better the outputs. At LinkedIn, we use powerful Azure OpenAI and in-house generative AI models to build tools that serve our members and customers by generating content in response to prompts.
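To make the prompt-and-output flow concrete, here is a minimal sketch of a single generative AI call against the Azure OpenAI service using the openai Python SDK. The deployment name, environment variables, and prompt text are illustrative assumptions, not LinkedIn’s production configuration.

```python
# A minimal prompt -> output round trip, assuming an Azure OpenAI resource
# and a chat model deployment already exist. All names are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],         # placeholder env vars
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # hypothetical Azure deployment name
    messages=[
        {"role": "system", "content": "You are a helpful professional writing assistant."},
        {"role": "user", "content": "Draft a one-sentence LinkedIn headline for a data engineer."},
    ],
)
print(response.choices[0].message.content)  # the generated "output"
```

How useful the printed output is still depends on the user: a more specific prompt (skills, seniority, desired tone) generally yields a better draft.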

If you’d like to learn more about generative AI, we recently unlocked 100 AI courses to help all members of the global workforce learn about this new technology, including What is Generative AI by Pinar Seyhan Demirdag and Introduction to Prompt Engineering for Generative AI by Ronnie Sheer.


Collaborative Articles, Profile Writing Suggestions, and AI-Powered Job Descriptions

Let’s talk about our first generative AI-powered products.

With Collaborative Articles, we curate a set of general knowledge topics and leverage generative AI to create conversation-starter articles. We then invite members with related skills and expertise to contribute to the articles, sharing their lessons, professional experience, and advice.

With Profile Writing Suggestions, we are testing generative AI writing suggestions to help members enhance the “About” and “Headline” sections of their profiles. When members opt to use this tool, we draw on existing information from their profile, such as recent work experience, and use it together with generative AI tools to create profile content suggestions that save members time. Of course, customization is still important. That's why we encourage members to review and edit the suggested content before adding it to their profile to ensure it is accurate and aligns with their experience.

With Job Descriptions, we aim to make it faster to draft job descriptions by taking starter information provided by a hirer (e.g., job title, company name, workplace type, job type, location) and using it to draft a job description for hirers to review and edit.

These products are in the early days, and we are listening to and learning from our members and customers to keep improving.


Applying our Responsible AI Principles to Generative AI-Powered Products

Our Responsible AI principles shaped the development of Collaborative Articles, Profile Writing Suggestions, and AI-Powered Job Descriptions:


1. Advance Economic Opportunity.

LinkedIn’s vision is to provide economic opportunity for every member of the global workforce. When we build products, we think about how we can help our members achieve their goals. All of our AI-powered products are built with a human-centric lens.

By using Collaborative Articles to start knowledge sharing, we see the potential for progress towards: (i) helping knowledge seekers find the information they need from experts who have ‘been there, done that’ and (ii) increasing the opportunity for expert members to demonstrate their craft by contributing to articles and sharing their knowledge. We also ground our Collaborative Articles topics in our skills taxonomy to increase the volume of relevant professional knowledge available to the world.

With personalized content suggestions for your profile, we want to help members tell their professional story, highlight their professional strengths and unique capabilities, and build a strong first impression. With AI-powered Job Descriptions, we focus on making the process of writing a job description much faster, because hirers tell us they wish to spend more time on strategic work, and a significant number give up on posting a job when they get to the job description stage.


2. Uphold Trust.

LinkedIn’s trust principles emphasize the responsibility we have to deliver trustworthy products, including proactively addressing privacy, security, and safety in everything we do. We leveraged many of the same practices we use for all of our product launches, and identified additional steps we would take in the context of AI:

  • Privacy and Security: All three products underwent our usual privacy impact assessment and security review processes to ensure their compliance with our privacy commitments, security standards, and applicable laws. With AI, privacy considerations included being thoughtful about the personal data we used in prompt engineering (e.g., Collaborative Articles focused on professional knowledge topics and not individuals) and ensuring that members had full control of their profiles (e.g., Profile Writing Suggestions are not added to a profile without the member first reviewing and editing them).
  • Safety: All of our products undergo a safety assessment, and with these three tools we applied four key practices to enhance safety (a combined sketch follows at the end of this list):
  • Proactive Access Management: We carefully ramped member access to the product and introduced rate limits to reduce the likelihood of abuse (e.g., misuse of automations, testing inputs to “jailbreak” the tool). For all of these products, we limited access to the AI features in ways that allow us to watch for issues that might arise despite our safety, security, and privacy efforts.
  • Proactive Thoughtful Prompt Engineering: We aimed to identify how prompts could be misused so that we could mitigate potential abuse.

For Collaborative Articles, we carefully consider the prompts we use in terms of both what to write about and how to write it (e.g., in a professional tone). This allows us to avoid outputs that could result in problematic content (e.g., AI-generated financial advice). Similarly, while personalized writing suggestions for your Profile are based on information already on the member’s profile, we constructed the prompt in a manner that reduced the risk of problematic output, such as output produced by “jailbreaking” the tool with nefarious prompts that try to circumvent the restrictions designed to keep outputs reasonable and appropriate.

For Job Descriptions, the hirer provides the information we incorporate into the prompt, like job title and location. While these text fields posed some risk, we mitigated it by standardizing the information through drop-down menus, as illustrated in the sketch after this list. These measures enable the generation of job descriptions that are responsive to the customer’s inputs while also reducing the potential for abuse (for example, through injection of harmful text into the prompt).

  • Proactive Content Moderation. AI-generated content is held to the same professional bar as other content on our platform, so for all of these generative AI tools we apply our content moderation filters to both the member-powered prompt inputs (i.e., for AI-powered writing suggestions for Profile and AI-powered Job Descriptions) and the resulting outputs. Even where we control the prompt (e.g., Collaborative Articles), we run our content moderation filters on the output, as well as on member contributions to those articles, to minimize the chance of problematic content.
  • Reactive Content Moderation. Notwithstanding these proactive efforts, we recognize that our members can provide valuable feedback to help ensure safety. All the outputs of generative AI tools that make it onto our platform benefit from our members reporting policy-violating issues with the content. We added features to our usual feedback tools that address issues specific to generative AI outputs, such as “hallucinations” (where AI confidently provides a wrong or unsupported response), so that we can take action.
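Taken together, these practices can be pictured as a small pipeline wrapped around each model call. The sketch below is an illustration only, not LinkedIn’s implementation: the menu values, rate limit numbers, function names, and keyword stub are all invented, and production moderation relies on trained policy classifiers rather than keyword lists.

```python
# A hedged sketch of the safety practices above for the Job Descriptions flow:
# standardized drop-down inputs, a per-member rate limit, and content
# moderation applied to both the prompt and the output.
import time
from collections import defaultdict, deque
from typing import Callable, Optional

ALLOWED_JOB_TYPES = {"Full-time", "Part-time", "Contract", "Internship"}
ALLOWED_WORKPLACE_TYPES = {"On-site", "Hybrid", "Remote"}

RATE_LIMIT = 10          # max requests per member per window (illustrative)
WINDOW_SECONDS = 60.0
_requests: dict[str, deque] = defaultdict(deque)

BLOCKED_TERMS = {"example-harmful-term"}  # placeholder blocklist

def within_rate_limit(member_id: str) -> bool:
    """Sliding-window limit to curb automation and 'jailbreak' probing."""
    now = time.monotonic()
    log = _requests[member_id]
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    if len(log) >= RATE_LIMIT:
        return False
    log.append(now)
    return True

def violates_policy(text: str) -> bool:
    """Stub classifier standing in for real content moderation filters."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def build_prompt(job_title: str, company: str, workplace_type: str,
                 job_type: str, location: str) -> str:
    """Standardize what we can via fixed menus; remaining free-text fields
    still pass through moderation downstream."""
    if job_type not in ALLOWED_JOB_TYPES:
        raise ValueError(f"Unsupported job type: {job_type!r}")
    if workplace_type not in ALLOWED_WORKPLACE_TYPES:
        raise ValueError(f"Unsupported workplace type: {workplace_type!r}")
    return (
        "Write an inclusive, gender-neutral job description.\n"
        f"Job title: {job_title}\nCompany: {company}\n"
        f"Workplace type: {workplace_type}\nJob type: {job_type}\n"
        f"Location: {location}\n"
    )

def safe_generate(member_id: str, prompt: str,
                  generate: Callable[[str], str]) -> Optional[str]:
    if not within_rate_limit(member_id):
        return None                    # proactive access management
    if violates_policy(prompt):
        return None                    # moderate the member-powered prompt
    output = generate(prompt)
    if violates_policy(output):
        return None                    # moderate the generated output
    return output                      # shown to the member; still reportable
```

In this sketch a blocked request simply returns None; a real system would also log the event and route it to the appropriate review or enforcement flow.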


3. Promote Fairness and Inclusion.

We have a cross-functional team working to design solutions and guardrails to ensure that generative AI tools proactively address potential bias and discrimination. To promote fairness and inclusion, we target two key aspects: (1) Content Subject and (2) Communities.

  • Content Subject: Training data can reflect the stereotypes that exist in society, and we actively work to keep harmful stereotypes out of generative AI tool outputs. To do so, we deployed multi-layered tactics throughout the product development life cycle. We engineered our prompts to reduce the risk of biased content, leveraged blocklists to replace harmful terms with neutral terms (a minimal sketch follows this list), and monitored our member feedback to learn and improve. For example, we engineered the Job Descriptions prompt to output descriptions in a gender-neutral and non-discriminatory manner and filtered out potentially discriminatory content in the output using blocklists. For AI-powered writing suggestions on the profile, generative AI is instructed through the prompt to output inclusive language, and the member feedback menu includes the option to flag “biased content.” For all three products, we continue to monitor all member feedback, and in particular feedback pertaining to biased content.
  • Communities: We believe that being inclusive requires us to think beyond problematic content like stereotypes and challenge ourselves to expand the member communities served by generative AI tools. For example, we will work to expand the set of languages and content topics that members from a variety of cultures or industries may find useful. For Collaborative Articles specifically, we looked to invite an inclusive set of contributors to the articles (e.g., we considered gender representation among the expert member collaborators).
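As one concrete illustration of the blocklist tactic above, the sketch below replaces non-inclusive terms with neutral alternatives. The term pairs are invented examples, not LinkedIn’s actual lists, and real systems pair this with classifier-based filtering rather than relying on word lists alone.

```python
# A sketch of replacing non-inclusive terms with neutral ones in generated
# text. The mapping is illustrative only.
import re

NEUTRAL_REPLACEMENTS = {
    "salesman": "salesperson",
    "chairman": "chairperson",
    "manpower": "workforce",
}

def neutralize(text: str) -> str:
    for term, neutral in NEUTRAL_REPLACEMENTS.items():
        # Whole-word, case-insensitive replacement.
        text = re.sub(rf"\b{re.escape(term)}\b", neutral, text,
                      flags=re.IGNORECASE)
    return text

print(neutralize("We need a chairman to grow our manpower."))
# -> "We need a chairperson to grow our workforce."
```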

We also continue to invest in a suite of methodologies and techniques to understand and promote fairness and inclusivity in our AI-driven products.


4. Provide Transparency.

We believe that one of the key reasons LinkedIn is considered one of the most trusted social platforms is that we have committed to being transparent with members about the things that matter to them. With our use of generative AI tools, we need to meet the challenge of educating members about this technology and our use of it in ways that allow them to make their own decisions about how they want to engage with it. We recognize that generative AI tools may not always get things right. For all three tools we’ve discussed, we also felt it was important to put members on notice that they should think critically when reading content from generative AI.

At the top of every Collaborative Article, we make the use of AI clear to members, and we provide more detailed information in our “Learn more.” We want members to be aware of the technology and to spot any issues; this makes them true contributors who help make LinkedIn a better platform.


For profile writing suggestions and job descriptions, where we’re presenting AI-generated content to members as suggestions, we notify members about the use of AI and inform them that they play an important role in deciding whether the content is useful and appropriate for their purposes.


As a platform, we’re building out the ways in which we abide by our transparency principle. We’ll continue to do so in a clear manner, and we’ll evolve our approach as members’ expectations and the products themselves change.


5. Embrace Accountability.

Embracing accountability means that we are following through on our Responsible AI principles. We look to deploy robust AI governance, building on the many processes and practices that we already use to keep our products trustworthy. Each of these products went through a rigorous process we call the trustworthy design review, where our new products and initiatives undergo a review and assessment by cross-disciplinary specialists, with a focus on privacy, security and safety (see Principle 2 above). As part of this process, we identify and document potential risks and mitigations.

For AI tools, we have additional assessments of training data and model cards. Our goal is to develop a richer understanding of how the AI models supporting our AI-powered tools were developed and function so that we can more appropriately assess risks and develop mitigations. In particular, this view of our AI ‘at the source’ enables us to exercise accountability at the ground level so individual teams contribute to our overall governance program.
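As a hedged illustration of what such an assessment can capture, a model card can be represented as a structured record that teams complete during review. The fields below follow common public model-card practice and are a generic sketch, not LinkedIn’s internal template.

```python
# A generic model card record; fields follow common model-card practice
# and are illustrative, not LinkedIn's internal template.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data_summary: str
    known_limitations: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="profile-suggestions-v1",  # hypothetical
    intended_use="Draft 'About' and 'Headline' suggestions for member review",
    training_data_summary="Documented in a separate training data assessment",
    known_limitations=["May hallucinate experience not on the profile"],
    identified_risks=["Biased or non-inclusive phrasing"],
    mitigations=["Inclusive-language prompt instructions", "Blocklists",
                 "Member feedback option to flag 'biased content'"],
)
```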

With AI, the key to our governance is human involvement (“human in the loop”) and human oversight. This also means that we are assessing how each AI-powered tool impacts our members, customers, and society as a whole. For example, AI tools consume more computing energy; even so, we remain committed to being carbon negative and to cutting our overall emissions by more than half by 2030.

We recognize that governments and civil society around the world are working to figure out how to make AI work for humanity and to help ensure that it is safe and useful. As best practices and laws around governance and accountability evolve, we will embrace those practices.


Conclusion

We are inspired and driven by the transformative power generative AI tools have to help our members be more successful and productive. In applying our Responsible AI principles to these three tools, we are committed to using AI in ways that are aligned with our mission, are responsible, and provide value to our members and customers.
