The PARTNER Framework: A practical guide to human-AI collaboration

We’ve all heard it before: "AI should support people, not replace them." It sounds great in theory, but how do organizations actually make it happen?

Those who are familiar with my background won't be surprised that I find the dynamic between humans and AI particularly intriguing. Especially now that AI is becoming increasingly integrated into workplace settings, understanding this interaction is crucial to foster responsible adoption. For many organizations, however, this is still a challenge.

To trust or not to trust: AI aversion

Part of the challenge lies in how people feel about AI. Whether it’s mistrust, fear of job loss or concerns about fairness, research shows that there’s often hesitation to fully adopt the technology. This reluctance to trust or adopt AI products or tools seems to persist even when they outperform human decision-makers, a phenomenon known as algorithm aversion.

Overcoming this challenge is not just about technology, it’s about understanding ourselves as humans. Our biases, trust dynamics and need for control all play a central role in shaping how AI can truly support and enhance what we do.

If organizations want to succeed with AI, they need to go beyond technology to address the human side of the equation. Achieving this balance is critical, as the success of AI initiatives often hinges on how well humans and AI systems work together. In this article, I’ll introduce the PARTNER Framework - a short guide designed to help organizations set up effective human-AI collaboration.

The PARTNER framework for human-AI collaboration

The PARTNER framework identifies seven key actions to foster effective collaboration between people and AI. Each component is grounded in research and real-world examples, hopefully making the principle “AI should augment, not replace people” practical and actionable.

[Image: The PARTNER framework for effective human-AI collaboration]

P - Participate: Involve employees in AI design

What it is and why it matters: When employees actively participate in AI design, they are more likely to trust and adopt the technology. Sounds obvious, right? Yet, in my experience, it’s often overlooked. Organizations sometimes rush to implement AI without consulting the people who will use it. When employees feel excluded from the process, adoption becomes an uphill battle, and people will resist engaging with AI.

Participation, on the other hand, fosters trust and ensures the AI product addresses the challenges employees face in their daily work. While this approach might not always be the quickest, most cost-effective, or simplest path to adopting AI, sidelining employees entirely is simply not viable and will almost always backfire.

Evidence: Research shows that co-creating AI systems with users increases acceptance by fostering a sense of ownership and relevance. A review by Rogers et al. (2022), for instance, found that involving multiple stakeholders during design leads to mutual learning between developers and users. This participatory process not only enhances the system’s functionality but also builds trust and encourages smoother adoption.

How to apply it: Engage employees early in the design process. For instance:

  • Conduct user workshops early in the design phase: Organize workshops with employees to identify pain points and gather input on how AI can address their daily challenges. This ensures the system is tailored to real-world needs and fosters a sense of ownership from the start.
  • Create cross-functional AI teams: Include employees from different departments in the development process to integrate diverse perspectives. For example, involve end-users, managers, and technical experts in regular design reviews to ensure the AI system aligns with business goals and user expectations.
  • Prototype and test with employees: Share early prototypes with employees and collect their feedback through usability testing sessions. Use their input to refine the system, showing that their perspectives directly shape the final product and building trust in its relevance.

Example: A manufacturing company involves factory workers in designing an AI-powered maintenance system. Workers highlight specific machine behaviors that often precede breakdowns, helping developers create a predictive model that is both accurate and user-relevant.        

A - Assess: Conduct bias checks and audits

What it is and why it matters: Regular bias checks are crucial to maintain trust in AI systems, particularly in high-stakes areas like hiring, lending or customer interactions. Bias often stems from imbalanced training data or unintended algorithmic behaviors, which can result in unfair outcomes. Proactive audits help identify and mitigate these risks, ensuring AI systems deliver fair and objective results.

Evidence: A 2021 study by Zhou and colleagues highlighted that algorithmic fairness directly impacts user trust. When AI decisions are perceived as fair - such as ensuring equitable treatment across demographic groups - users are more likely to trust and rely on them. This reinforces the critical role of fairness in fostering confidence and adoption of AI tools.

How to apply it: Incorporate routine bias assessments into your AI workflows. For instance:

  • Implement routine bias audits: Schedule regular evaluations of AI systems to identify and address potential biases in training data or algorithms. Use diverse test datasets to ensure outcomes are fair across different demographic groups.
  • Establish clear fairness metrics: Define measurable fairness standards, such as equal error rates across demographic categories, and incorporate these metrics into the AI system’s performance evaluation (a minimal code sketch of such a check follows this list).
  • Involve external auditors or experts: Partner with independent experts or third-party organizations to conduct unbiased reviews of AI systems, providing an additional layer of accountability and reinforcing user trust in the system's fairness.
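
To make “equal error rates across demographic categories” concrete, here is a minimal Python sketch of a routine audit check. The toy records, group labels, and the 5% tolerance are illustrative assumptions, not prescriptions from the framework; a real audit would use the organization’s own test data and fairness metrics.

```python
# A minimal sketch of a routine bias audit: compare error rates across
# demographic groups and flag any gap above a chosen tolerance.
# The data, group labels, and 0.05 tolerance are illustrative assumptions.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def audit(records, tolerance=0.05):
    rates = error_rates_by_group(records)
    worst_gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": worst_gap, "fair": worst_gap <= tolerance}

# Example run with toy predictions from a hypothetical screening model.
sample = [
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
print(audit(sample))  # gap of 0.25 here, so the check flags the model as unfair
```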

Example: A recruitment platform regularly audits its AI for gender and racial bias by testing job recommendation outputs with diverse candidate profiles. These audits help ensure that all qualified applicants are equally represented in hiring pipelines.        

R - Realism: Set realistic expectations

What it is and why it matters: Humans tend to hold AI to higher standards than they hold other humans: they are less forgiving when AI makes errors, even when it performs as well as - or better than - people. However, evidence also suggests that experienced users - who understand AI’s limitations - are more forgiving of mistakes, highlighting the importance of educating users about what AI can - and cannot - deliver. Managing expectations about AI’s capabilities is therefore key to preventing overreliance and reducing the risk of disappointment.

Evidence: A study by Kocielnik, Amershi and Bennett (2019) found that unrealistic expectations about AI often result in trust erosion when the system makes errors. It seems that humans often expect near-perfect accuracy, believing AI should outperform human capabilities.

How to apply it:

  • Design expectation adjustment techniques: Use onboarding tutorials and in-app messaging to prepare users for AI imperfections. For example, include disclaimers about accuracy levels and examples of scenarios where the AI might struggle, helping users form realistic expectations (a simple sketch of such messaging follows this list).
  • Provide regular information sessions: Conduct workshops or webinars to explain the AI tool’s capabilities and limitations in user-friendly terms. This ongoing education builds understanding and reduces the likelihood of disappointment when errors occur.
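
As a rough illustration of in-app expectation setting, the sketch below pairs every AI suggestion with a plain-language note about its confidence. The thresholds and wording are assumptions for illustration only; real messaging would be tuned to the product and its measured accuracy.

```python
# A minimal sketch of expectation-adjusting messaging: each AI suggestion is
# shown together with a note on how confident the model is and when users
# should double-check. Thresholds and wording are illustrative assumptions.
def with_disclaimer(prediction: str, confidence: float) -> str:
    if confidence >= 0.9:
        note = "High confidence, but please review before acting."
    elif confidence >= 0.6:
        note = "Moderate confidence: treat this as a starting point."
    else:
        note = "Low confidence: the model often struggles here; rely on your own judgment."
    return f"{prediction}\n(AI note: {note} Estimated confidence: {confidence:.0%}.)"

print(with_disclaimer("Reorder 40 units of SKU-123", 0.72))
```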

Collaboration between people and AI starts with a shared understanding of each other’s strengths and limits.

Example: A retail chain introduces an AI tool for inventory management but educates store managers that the tool forecasts trends based on historical data and may not account for sudden local events, encouraging managers to supplement predictions with their judgment.

T - Train: Provide practical training

What it is and why it matters: Practical training is crucial to help employees confidently, responsibly and effectively use AI tools, ensuring they can interpret outputs and maximize the technology's value.

Evidence: Well-trained users are more likely to trust and adopt AI successfully. For instance, a recent study by Huang & Ball (2024) showed that people who understand AI well tend to trust it more, including in critical areas like healthcare and transportation. Those with only moderate knowledge are often more skeptical, especially in high-stakes situations. These findings suggest that practical training is key to building trust and acts as a cornerstone of successful AI implementation.

How to apply it:

  • Develop role-specific training programs: Tailor training sessions to the specific needs of different user groups, focusing on how the AI tool applies to their roles and daily tasks.
  • Incorporate hands-on learning: Use real-world scenarios and interactive exercises to help employees practice using the AI tool. This approach builds confidence and ensures users can effectively interpret and act on AI outputs.
  • Offer ongoing support and refresher training: Provide regular updates and follow-up sessions to address questions, introduce new features, and reinforce key concepts, ensuring employees remain confident as the AI tool evolves.

N - Notify: Ensure transparency in AI decisions

What it is and why it matters: Transparency is essential for building trust in AI systems. When users understand how AI makes decisions, they’re more likely to trust its outputs and use them effectively. A lack of transparency, on the other hand, can lead to skepticism and lower acceptance of AI recommendations.

Evidence: Research shows that users trust AI more when they understand how its recommendations are made. Studies by Lukashova-Sanz et al. (2023) and Kovari (2024), for instance, highlight that transparency and explanatory models significantly boost confidence in AI-based decision support systems.

How to apply it:

  • Design user-friendly interfaces with clear explanations: Create intuitive interfaces that provide concise, easy-to-understand insights into how AI decisions are made, helping users build trust and confidence.
  • Balance transparency with practicality: Share only relevant and clear information to avoid overwhelming users or exposing sensitive details, tailoring the level of transparency to the specific context.
  • Provide layered explanations for diverse needs: Offer high-level summaries for non-experts alongside detailed, technical insights for experts, ensuring both accessibility and depth for effective decision-making.
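
To illustrate what layered explanations might look like in practice, here is a minimal sketch that renders the same decision record as either a one-line summary for non-experts or a detailed breakdown for experts. The criteria and weights are hypothetical, invented purely for illustration.

```python
# A minimal sketch of layered explanations: one decision record, two views.
# The outcome, criteria, and weights below are hypothetical illustrations.
decision = {
    "outcome": "Exceeds expectations",
    "factors": {  # contribution of each criterion to the decision
        "project completion rate": 0.45,
        "peer feedback score": 0.35,
        "leadership assessment": 0.20,
    },
}

def summary(d):
    """High-level view for non-experts: the outcome and its main driver."""
    top = max(d["factors"], key=d["factors"].get)
    return f'{d["outcome"]} - driven mainly by {top}.'

def detail(d):
    """Drill-down view for experts: every factor and its weight."""
    lines = [f'Outcome: {d["outcome"]}']
    for factor, weight in sorted(d["factors"].items(), key=lambda kv: -kv[1]):
        lines.append(f"  {factor}: {weight:.0%} of the decision")
    return "\n".join(lines)

print(summary(decision))  # one-line summary
print(detail(decision))   # full breakdown
```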

Example: An HR team using an AI-driven performance review system gets visibility into the criteria the AI considered - such as project completion rates, team feedback, or leadership assessments - through a user-friendly interface that explains these factors in clear, concise terms. The system offers layered explanations, allowing HR professionals to start with a high-level summary of the performance evaluation and drill down into detailed metrics or decision logic if needed.         

E - Empower: Give users control over AI outcomes

What it is and why it matters: Empowering users with the ability to adjust or override AI recommendations fosters a sense of control and trust, while also reducing the risk of overreliance. When users can combine AI outputs with their own judgment, they feel more confident in the decisions they make.

Evidence: Research shows that employees value AI most when it enhances, rather than dictates, their work. A study by Jeung and Huang (2023), for instance, revealed that users place greater trust in AI systems when they have the ability to modify outputs. This highlights the critical role of user empowerment in building trust and ensuring that AI serves as a collaborative tool rather than a rigid directive.

How to apply it: Design AI systems that allow for customization and human intervention. For instance:

  • Integrate adjustable AI outputs: Design AI systems with interactive features that allow users to tweak or modify recommendations. For example, include sliders, filters, or editable fields that let users adjust parameters or refine results based on their judgment. This functionality empowers users to take an active role in decision-making, fostering a collaborative dynamic.
  • Provide clear override options: Implement mechanisms that enable users to override AI-generated outputs when necessary. This could include easy-to-access override buttons or alternative workflows that allow users to combine their expertise with AI suggestions.
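
As a sketch of what adjustable, overridable outputs could look like in code, the example below wraps an AI suggestion in a structure the user can modify and records who made the final call. The portfolio scenario and field names are hypothetical, not taken from any particular platform.

```python
# A minimal sketch of an overridable recommendation: the AI proposes a value,
# the user may adjust or replace it, and the final decision records who made
# the call. The scenario and field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    ai_value: float                      # what the model suggests
    user_value: Optional[float] = None   # set when the user adjusts/overrides

    @property
    def final(self) -> float:
        return self.user_value if self.user_value is not None else self.ai_value

    @property
    def decided_by(self) -> str:
        return "user" if self.user_value is not None else "ai"

# AI suggests allocating 60% to equities; the advisor dials it down to 45%.
rec = Recommendation(ai_value=0.60)
rec.user_value = 0.45
print(rec.final, rec.decided_by)  # 0.45 user
```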

Empowerment is key: When users can guide AI decisions, they trust the technology more.

Example: A financial advisor platform lets users adjust AI-generated portfolio suggestions, allowing advisors to tailor investment plans based on unique client needs while still leveraging AI insights for efficiency.

R - Refine: Enable easy feedback channels

What it is and why it matters: While the ability to adjust or override AI recommendations empowers users in the moment, integrating continuous feedback mechanisms strengthens trust and ensures the system evolves over time to stay relevant, accurate, and aligned with changing needs. Effective feedback loops show users that the system can learn and adapt - and that their input genuinely shapes it.

Evidence: Studies confirm that users value systems that are adaptable and responsive, reinforcing the importance of easy-to-use feedback channels in maintaining not just reliability but also user confidence and trust (Bach et al., 2024).

How to apply it: Set up user-friendly and accessible feedback mechanisms. For instance:

  • Implement simple feedback buttons: Include user-friendly feedback options (e.g., thumbs up/down, a comment box) directly within the AI interface to make it easy for users to provide input during their interactions (a minimal sketch of such a channel follows this list).
  • Use guided feedback forms: Design short, focused surveys or forms that ask specific questions about the system's accuracy, usability, and relevance, ensuring feedback is actionable and easy to analyze.
  • Provide real-time feedback prompts: Trigger prompts for user feedback at key interaction points, such as after completing a task or resolving an issue, to capture timely and relevant insights.
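
Here is a minimal sketch of what a simple feedback channel could look like behind the interface: each rating plus optional comment is appended to a log that can later feed manual review or retraining. The file name, rating values, and storage format are assumptions for illustration only.

```python
# A minimal sketch of an in-app feedback channel: ratings and comments are
# appended to a log for later analysis. Storage format and rating scale are
# illustrative assumptions, not a prescribed design.
import json
import time

FEEDBACK_LOG = "feedback.jsonl"  # hypothetical append-only log file

def record_feedback(recommendation_id: str, rating: str, comment: str = "") -> None:
    """Append one feedback entry as a JSON line."""
    assert rating in {"up", "down", "partial"}, "unexpected rating value"
    entry = {
        "recommendation_id": recommendation_id,
        "rating": rating,
        "comment": comment,
        "timestamp": time.time(),
    }
    with open(FEEDBACK_LOG, "a") as fh:
        fh.write(json.dumps(entry) + "\n")

# A user flags that the tool's priority ordering missed a recent update.
record_feedback("task-ranking-42", "partial",
                "Task X is less urgent than task Y due to recent updates.")
```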

Example: An AI-powered task prioritization tool could include a feedback panel where employees can rate the tool's recommendations (e.g., “Are these priorities aligned with your needs?” with options like “Yes,” “Partially,” or “No”). Employees could also provide specific comments, such as, “Task X is less urgent than task Y due to recent updates.” This feedback is automatically logged and analyzed to adjust the tool's algorithms, ensuring it remains relevant and aligned with evolving team priorities.        

Embracing a human-centered AI future with the PARTNER framework

The PARTNER Framework is a practical guide for making AI work alongside people, not instead of them. Each step - Participate, Assess, Realism, Train, Notify, Empower and Refine - focuses on actions that help organizations use AI in a way that builds trust, supports collaboration and keeps human input at the core.

By following these principles, businesses can make the most of AI’s strengths while still relying on human skills and judgment where it matters most. The framework turns the idea of “AI should augment, not replace people” into something real and achievable, showing how technology and people can work better together.


Kim Bracke

AI Strategy & Transformation | Behavioural Change | People & Culture

3 weeks

Spot on! A red thread throughout the PARTNER framework also seems to be the importance of building trust between humans and AI (in its intent and competence to augment work outputs).

Alexandra Lutyens

Senior Innovation Specialist at Creative HQ

4 weeks

Laura - loved your presentation at BOIs Autonomous Summit (great event). Thanks for this perspective and framework. I am particularly interested in navigating ethics in this space so anything in particular you have to share on that topic would be so greatly appreciated. Would love to connect as well (I think I can only follow you)

Marcin Lobejko

Global Digital Business Partner || IT Executive - Digital Transformation Lead

2 months

Very insightful, and it precisely describes how to build an AI-enabling culture in the organization - thanks Laura Stevens PhD for sharing. Indeed, applying such a framework with the support of strong change management should make AI an enabler instead of a challenge.

Hein J.M. Knaapen

Managing Partner at CEO.works | Europe

2 months

Again a coherent set of meaningful insights by Laura, all geared towards enhanced impact of AI on performance of people and businesses. It makes for great reading.

Dr. Vijay Varadi PhD

Lead Data Scientist @ DSM-Firmenich | Driving Data-Driven Business Growth

2 months

Thanks for sharing Laura Stevens PhD, very insightful. I think the following can also be added to enhance the framework (PARTNER). To integrate AI as a collaborative partner, organizations should:

  1. Identify Complementary Roles: Assign AI to handle repetitive tasks, allowing employees to focus on strategic and creative endeavors.
  2. Foster Human-AI Collaboration: Encourage teams to work alongside AI tools, enhancing decision-making and productivity.
  3. Invest in Training: Equip staff with the skills to effectively utilize AI, ensuring seamless integration into workflows.
  4. Maintain Human Oversight: Implement governance to oversee AI operations, ensuring ethical standards and accountability.
