Our Responsible AI Principles in Practice
(Coauthored with Keren Baruch , Grace Tang , Sakshi Jain , Xinwei Sam Gong , Alex Murchison , Jon Adams and Sara Harrington )
We recently shared our Responsible AI principles which summarized how we build using AI at LinkedIn. These principles guide our work and ensure we are consistent in how we use AI to (1) Advance Economic Opportunity, (2) Uphold Trust, (3) Promote Fairness and Inclusion, (4) Provide Transparency, and (5) Embrace Accountability. We also recently announced three initial products that leverage generative AI to help our members and customers be more productive and successful. Today, we’ll share lessons we’ve learned from our journey applying our Responsible AI principles as we developed these early generative AI powered tools for our members and customers.
About Generative AI
Generative AI is an AI tool that generates content (such as images or text) that is provided as an “output” in response to a user request, or “prompt.” The quality of a generative AI tool’s output depends on how well the prompt is crafted. The user plays the biggest role in whether the output is usable. They decide the prompt and then whether the generated AI output is accurate and appropriate for their use case. The more practiced a user is in crafting prompts, the better the outputs. At LinkedIn, we use powerful Azure OpenAI and in-house generative AI models to build tools that serve our members and customers by generating content in response to prompts.?
If you’d like to learn more about generative AI, we recently unlocked 100 AI courses to help all members of the global workforce learn about this new technology, including What is Generative AI by Pinar Seyhan Demirdag and Introduction to Prompt Engineering for Generative AI by Ronnie Sheer .
Collaborative Articles, Profile Writing Suggestions, and AI-Powered Job Descriptions
Let’s talk about our first generative AI-powered products.
With Collaborative Articles , we curate a set of general knowledge topics and leverage generative AI to create conversation starter articles. We then invite members with related skills and expertise to contribute to the articles, allowing them to contribute their lessons, professional experience and advice.?
With Profile Writing Suggestions , we are testing generative AI writing suggestions to help members enhance the “About” and “Headline” sections of their profiles. When members opt to use this tool, we leverage existing information on their profile, such as recent work experience, and use it together with generative AI tools to create profile content suggestions to save members time. ?Of course, customization is still important. That's why we encourage members to review and edit the suggested content before adding it to their profile to ensure it is accurate and aligns with their experience.??
With Job Descriptions , we aim to make it faster to draft job descriptions by taking starter information provided by a hirer (e.g., job title, company name, workplace title, job type, location) and using it to draft job description for hirers to review and edit.
These products are in the early days, and we are listening to and learning from our members and customers to keep improving.
Applying our Responsible AI Principles to Generative AI Powered Products?
Our Responsible AI principles shaped the development of Collaborative Articles, Profile Writing Suggestions, and AI-Powered Job Descriptions:?
1.Advance Economic Opportunity.?
LinkedIn’s vision is to provide economic opportunity for every member of the global workforce. When we build products, we think about how we can help our members achieve their goals. All of our AI-powered products are built with a human-centric lens.
By using Collaborative Articles to start knowledge sharing, we see the potential for progress towards: (i) helping knowledge seekers find the information they need from experts who have ‘been there done that’ and (ii) increasing the opportunity for expert members to demonstrate their craft by contributing to articles and sharing their knowledge. We also ground our Collaborative Articles topics in our skills taxonomy to increase the volume of relevant professional knowledge available to the world.?
With personalized content suggestions for your profile w, we want to help members tell their professional story, highlight their professional strengths and unique capabilities, and build a strong first impression. With AI powered Job Descriptions we focus on making the process of writing a job description much faster because hirers tell us they wish to spend more time on strategic work, and a significant number give up posting a job when they get to the job description stage.?
2. Uphold Trust.?
LinkedIn’s trust principles emphasize the responsibility we have to deliver trustworthy products, including proactively addressing privacy, security, and safety in everything we do. We leveraged many of the same practices we use for all of our product launches, and identified additional steps we would take in the context of AI:
For Collaborative Articles, we carefully consider the prompts we use in terms of both what to write about and how to write them (e.g., professional tone). This allows us to avoid outputs that would could result in problematic content (e.g., AI generated financial advice). Similarly, with personalized writing suggestions for your Profile, while based on information already on the member’s profile, we constructed the prompt in a manner that reduced the risk of problematic output (resulting from “jailbreaking” the AI tool through nefarious prompts that aim to end run restrictions in AI that are designed to maximize the chance that outputs will be reasonable and appropriate).?
领英推荐
For Job Descriptions, the hirer provides the information we incorporate into the prompt, like job title and location. While these text fields posed some risk, we can mitigate this by standardizing the information through drop-down menus. These measures enable the generation of job descriptions that are responsive to the customer’s inputs while also reducing the potential for abuse (for example through injection of harmful text into the prompt).
3. Promote Fairness and Inclusion.?
We have a cross-functional team working to design solutions and guardrails to ensure that generative AI tools proactively address potential bias and discrimination. To promote fairness and inclusion, we target two key aspects: (1) Content Subject and (2) Communities.?
We also continue to invest in a suite of methodologies and techniques to understand and promote fairness and inclusivity in our AI driven products.
4. Provide Transparency.
We believe that one of the key reasons LinkedIn is considered to be one of the most trusted social platforms is because we have committed to being transparent with members about the things that matter to them. With our use of generative AI tools, we need to meet the challenge of educating members about this technology and our use of it in ways that allow them to make their own decisions about how they want to engage with it. We recognize that generative AI tools may not alway get things right. For all three tools we’ve discussed, we also felt it was important to put members on notice that they should be thinking critically when reading content from generative AI.?
At the top of every Collaborative Article we make the use of AI clear to members and we provide more detailed information in our “Learn more .” We want members to be aware of and spot any issues - this leads to them feeling like true contributors and that they are helping make LinkedIn a better platform.
For profile writing suggestions and job descriptions, where we’re presenting AI generated content to members as suggestions, we notify members about the use of AI and inform members that they play an important role in deciding whether the content is useful and appropriate for their purposes.?
As a platform, we’re building out ways in which we abide by our transparency principle. We’ll continue to do so in a clear manner, and we’ll evolve our approach as members’ expectations and the products at issue change.
4. Embrace Accountability.
Embracing accountability means that we are following through on our Responsible AI principles. We look to deploy robust AI governance, building on the many processes and practices that we already use to keep our products trustworthy. Each of these products went through a rigorous process we call the trustworthy design review, where our new products and initiatives undergo a review and assessment by cross-disciplinary specialists, with a focus on privacy, security and safety (see Section 2). As part of this process, we identify and document potential risks and mitigations.?
For AI tools, we have additional assessments of training data and model cards. Our goal is to develop a richer understanding of how the AI models supporting our AI-powered tools were developed and function so that we can more appropriately assess risks and develop mitigations. In particular, this view of our AI ‘at the source’ enables us to exercise accountability at the ground level so individual teams contribute to our overall governance program.?
With AI, the key to our governance is human involvement (“human in the loop”) and human oversight. This also means that we are assessing how each AI-powered tool impacts our members, customers, and society as a whole. For example, AI tools use more computing energy. However, even with AI, we remain committed to be carbon negative and to cut our overall emissions by more than half by 2030.????
We recognize that governments and civil society around the world are working to figure out how to make AI work for humanity to help ensure that it is safe and useful. As best practices and laws around governance and accountability evolve, we will embrace those practices.?
Conclusion
We are inspired and driven by the transformative power generative AI tools have to help our members be more successful and productive. In applying our Responsible AI principles to these three tools, we are committed to using AI in ways that are aligned with our mission, are responsible, and provide value to our members and customers.? ?
Technology Executive. Strategist. Investor. Entrepreneur.
1 年Fantastic article. Reminds me of our conversations in earlier days, where balancing business value, people needs, and regulatory needs will all need to work in tandem, with people needs being the north star.
Head of Sports Marketing @ Amazon Web Services (AWS) | Integrated Marketing and Brand Communications
1 年Love this! Fairness is a process…always