The PARTNER Framework: A practical guide to human-AI collaboration
Those who are familiar with my background won't be surprised that I find the dynamic between humans and AI particularly intriguing. Now that AI is becoming increasingly integrated into the workplace, understanding this interaction is crucial for fostering responsible adoption. For many organizations, however, this remains a challenge.
We’ve all heard it before: "AI should support people, not replace them." It sounds great in theory, but how do organizations actually make it happen?
To trust or not to trust: AI aversion
Part of the challenge lies in how people feel about AI. Whether it’s mistrust, fear of job loss or concerns about fairness, research shows that there’s often hesitation to fully adopt the technology. This reluctance to trust or adopt AI products or tools seems to persist even when they outperform human decision-makers, a phenomenon known as algorithm aversion.
Overcoming this challenge is not just about technology; it's about understanding ourselves as humans. Our biases, trust dynamics and need for control all play a central role in shaping how AI can truly support and enhance what we do.
If organizations want to succeed with AI, they need to go beyond technology to address the human side of the equation. Achieving this balance is critical, as the success of AI initiatives often hinges on how well humans and AI systems work together. In this article, I’ll introduce the PARTNER Framework - a short guide designed to help organizations set up effective human-AI collaboration.
The PARTNER framework for human-AI collaboration
The PARTNER framework identifies seven key actions to foster effective collaboration between people and AI. Each component is grounded in research and real-world examples, hopefully making the principle “AI should augment, not replace people” practical and actionable.
P - Participate: Involve employees in AI design
What it is and why it matters: When employees actively participate in AI design, they are more likely to trust and adopt the technology. Sounds obvious, right? Yet, in my experience, it's often overlooked. Organizations sometimes rush to implement AI without consulting the people who will use it. When employees feel excluded from the process, adoption becomes an uphill battle and people resist engaging with AI.
Participation, on the other hand, fosters trust and ensures the AI product addresses the challenges employees face in their daily work. While this approach might not always be the quickest, most cost-effective or simplest path to adopting AI, sidelining employees entirely is simply not a viable option and will almost always backfire.
Evidence: Research shows that co-creating AI systems with users increases acceptance by fostering a sense of ownership and relevance. A review by Rogers et al. (2022), for instance, found that involving multiple stakeholders during design leads to mutual learning between developers and users. This participatory process not only enhances the system's functionality but also builds trust and encourages smoother adoption.
How to apply it: Engage employees early in the design process. For instance:
Example: A manufacturing company involves factory workers in designing an AI-powered maintenance system. Workers highlight specific machine behaviors that often precede breakdowns, helping developers create a predictive model that is both accurate and user-relevant.
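To make this concrete, here's a minimal Python sketch of how worker-reported precursor signals could be folded into a simple risk score. The signal names, weights and readings are illustrative assumptions on my part, not details from an actual system; a production model would learn these weights from failure data rather than hard-coding them.

```python
from dataclasses import dataclass

# Hypothetical precursor signals surfaced by workers in design sessions;
# names and weights are illustrative, not taken from a real system.
WORKER_REPORTED_PRECURSORS = {
    "vibration_spike": 3.0,
    "bearing_noise": 2.5,
    "temperature_drift": 1.5,
}

@dataclass
class MachineReading:
    machine_id: str
    signals: dict  # signal name -> observed intensity, 0.0 to 1.0

def breakdown_risk(reading: MachineReading) -> float:
    """Toy risk score: weighted sum of worker-identified precursors,
    squashed into [0, 1]. A real model would learn weights from data."""
    raw = sum(
        WORKER_REPORTED_PRECURSORS.get(name, 0.0) * intensity
        for name, intensity in reading.signals.items()
    )
    return min(raw / sum(WORKER_REPORTED_PRECURSORS.values()), 1.0)

reading = MachineReading("press-07", {"vibration_spike": 0.8, "temperature_drift": 0.4})
print(f"Breakdown risk for {reading.machine_id}: {breakdown_risk(reading):.2f}")
```

The point of the sketch is the provenance of the features: they come from the people who know the machines, which is exactly what participation buys you.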
A - Assess: Conduct bias checks and audits
What it is and why it matters: Regular bias checks are crucial to maintain trust in AI systems, particularly in high-stakes areas like hiring, lending or customer interactions. Bias often stems from imbalanced training data or unintended algorithmic behaviors, which can result in unfair outcomes. Proactive audits help identify and mitigate these risks, ensuring AI systems deliver fair and objective results.
Evidence: A 2021 study by Zhou and colleagues highlighted that algorithmic fairness directly impacts user trust. When AI decisions are perceived as fair - such as ensuring equitable treatment across demographic groups - users are more likely to trust and rely on them. This reinforces the critical role of fairness in fostering confidence and adoption of AI tools.
How to apply it: Incorporate routine bias assessments into your AI workflows. For instance:
Example: A recruitment platform regularly audits its AI for gender and racial bias by testing job recommendation outputs with diverse candidate profiles. These audits help ensure that all qualified applicants are equally represented in hiring pipelines.
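As a rough illustration of what such an audit could check, the sketch below computes per-group recommendation rates and a disparate impact ratio over a recommendation log. The log entries and group labels are invented, and the 0.8 threshold follows the common "four-fifths" rule of thumb rather than anything specific to this platform.

```python
from collections import defaultdict

# Illustrative audit records: (candidate_group, was_recommended).
# In practice these would come from the platform's recommendation logs.
audit_log = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

def selection_rates(log):
    """Share of candidates in each group that the AI recommended."""
    counts = defaultdict(lambda: [0, 0])  # group -> [recommended, total]
    for group, recommended in log:
        counts[group][1] += 1
        if recommended:
            counts[group][0] += 1
    return {g: rec / total for g, (rec, total) in counts.items()}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest; values below 0.8
    are a common red flag (the 'four-fifths' rule of thumb)."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates(audit_log)
ratio = disparate_impact_ratio(rates)
print(rates, f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Audit flag: recommendation rates differ notably across groups.")
```

A metric this simple won't catch every form of bias, but running it routinely turns "we audit for fairness" from a slogan into a scheduled check.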
R - Realism: Set realistic expectations
What it is and why it matters: Humans tend to hold AI to higher standards than they hold each other: they are less forgiving when AI makes errors, even when it performs as well as - or better than - humans. Evidence also suggests that experienced users, who understand AI's limitations, are more forgiving of mistakes, which highlights the importance of educating users about what AI can - and cannot - deliver. Managing expectations about AI's capabilities is therefore key to preventing overreliance and reducing the risk of disappointment.
Evidence: A study by Kocielnik, Amershi and Bennett (2019) found that unrealistic expectations about AI often result in trust erosion when the system makes errors. It seems that humans often expect near-perfect accuracy, believing AI should outperform human capabilities.
How to apply it: Communicate clearly what the AI can and cannot do - share its known limitations, typical error rates and the data it relies on before rollout.
Collaboration between people and AI starts with a shared understanding of each other’s strengths and limits.
Example: A retail chain introduces an AI tool for inventory management but educates store managers that the tool forecasts trends based on historical data and may not account for sudden local events, encouraging managers to supplement predictions with their judgment.
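One lightweight way to bake realism into the tool itself is to ship every forecast with an explicit uncertainty band and caveat. The sketch below is deliberately naive - a historical mean plus one standard deviation over invented sales figures - and stands in for a real forecasting model only to show the framing, not the modeling.

```python
import statistics

def forecast_demand(weekly_sales: list[float]) -> tuple[float, float]:
    """Naive forecast: mean of historical weekly sales plus a simple
    uncertainty band (one standard deviation). Real inventory tools use
    far richer models, but the caveat is the same: history alone does
    not capture sudden local events."""
    mean = statistics.fmean(weekly_sales)
    spread = statistics.stdev(weekly_sales)
    return mean, spread

history = [120, 132, 125, 118, 140, 128]  # illustrative numbers
prediction, band = forecast_demand(history)
print(f"Forecast: {prediction:.0f} units (+/- {band:.0f})")
print("Note: based on historical data only - adjust for local events you know about.")
```

Surfacing the band and the note in the interface does the expectation-setting for you, every time a manager reads a prediction.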
T - Train: Provide practical training
What it is and why it matters: Practical training is crucial to help employees confidently, responsibly and effectively use AI tools, ensuring they can interpret outputs and maximize the technology's value.
Evidence: Well-trained users are more likely to trust and adopt AI successfully. For instance, a recent study by Huang & Ball (2024) showed that people who understand AI well tend to trust it more, including in critical areas like healthcare and transportation. Those with only moderate knowledge are often more skeptical, especially in high-stakes situations. These findings suggest that practical training is key to building trust and acts as a cornerstone of successful AI implementation.
How to apply it: Offer hands-on training with the actual tools employees will use, covering how to interpret outputs, when to double-check them and where to turn with questions - and refresh that training as the tools evolve.
N - Notify: Ensure transparency in AI decisions
What it is and why it matters: Transparency is essential for building trust in AI systems. When users understand how AI makes decisions, they're more likely to trust its outputs and use them effectively. A lack of transparency, on the other hand, can lead to skepticism and lower acceptance of AI recommendations.
Evidence: Research shows that users trust AI more when they understand how its recommendations are made. Studies by Lukashova-Sanz et al. (2023) and Kovari (2024), for instance, highlight that transparency and explanatory models significantly boost confidence in AI-based decision support systems.
How to apply it: Make the reasoning behind AI outputs visible in plain language, with the option to drill down into more detail. For instance:
Example: An HR team using an AI-driven performance review system gets visibility into the criteria the AI considered - such as project completion rates, team feedback, or leadership assessments - through a user-friendly interface that explains these factors in clear, concise terms. The system offers layered explanations, allowing HR professionals to start with a high-level summary of the performance evaluation and drill down into detailed metrics or decision logic if needed.
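A layered explanation like this is straightforward to support when each criterion's contribution to the score is tracked. The sketch below assumes a simple weighted-sum evaluation with hypothetical criteria and weights; real review systems are more complex, but the summary-then-drill-down pattern carries over.

```python
# Hypothetical scoring weights standing in for the criteria the review
# system considers; the numbers are illustrative, not from any real tool.
WEIGHTS = {
    "project_completion_rate": 0.5,
    "team_feedback": 0.3,
    "leadership_assessment": 0.2,
}

def evaluate(employee: dict) -> dict:
    """Score an employee and keep per-criterion contributions so the UI
    can offer both a summary and a drill-down view."""
    contributions = {k: WEIGHTS[k] * employee[k] for k in WEIGHTS}
    return {"score": sum(contributions.values()), "contributions": contributions}

def explain(result: dict, detailed: bool = False) -> str:
    """Layered explanation: one-line summary by default, per-criterion
    breakdown on request."""
    top = max(result["contributions"], key=result["contributions"].get)
    summary = f"Overall score {result['score']:.2f}, driven mainly by {top}."
    if not detailed:
        return summary
    lines = [f"  {k}: {v:.2f}" for k, v in result["contributions"].items()]
    return summary + "\n" + "\n".join(lines)

result = evaluate({
    "project_completion_rate": 0.9,
    "team_feedback": 0.7,
    "leadership_assessment": 0.8,
})
print(explain(result))                 # high-level summary
print(explain(result, detailed=True))  # drill-down into the decision logic
```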
E - Empower: Give users control over AI outcomes
What it is and why it matters: Empowering users with the ability to adjust or override AI recommendations fosters a sense of control and trust, while also reducing the risk of overreliance. When users can combine AI outputs with their own judgment, they feel more confident in the decisions they make.
Evidence: Research shows that employees value AI most when it enhances, rather than dictates, their work. A study by Jeung and Huang (2023), for instance, revealed that users place greater trust in AI systems when they have the ability to modify outputs. This highlights the critical role of user empowerment in building trust and ensuring that AI serves as a collaborative tool rather than a rigid directive.
How to apply it: Design AI systems that allow for customization and human intervention.
Empowerment is key: When users can guide AI decisions, they trust the technology more.
Example: A financial advisor platform lets users adjust AI-generated portfolio suggestions, allowing advisors to tailor investment plans based on unique client needs while still leveraging AI insights for efficiency.
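In code, empowerment can be as simple as treating the AI's output as a starting point the human can overwrite, with the result re-normalized and the override kept explicit. The allocations and the apply_adjustments helper below are hypothetical - a sketch of the pattern, not any platform's API.

```python
def ai_suggested_portfolio() -> dict:
    """Stand-in for the AI's output; allocations are illustrative."""
    return {"equities": 0.60, "bonds": 0.30, "cash": 0.10}

def apply_adjustments(suggestion: dict, overrides: dict) -> dict:
    """Let the advisor override individual allocations, then re-normalize
    so the portfolio still sums to 100%. Keeping overrides explicit also
    gives an audit trail of human interventions."""
    adjusted = {**suggestion, **overrides}
    total = sum(adjusted.values())
    return {asset: weight / total for asset, weight in adjusted.items()}

suggestion = ai_suggested_portfolio()
# The advisor knows this client needs more liquidity than the model assumes.
final = apply_adjustments(suggestion, {"cash": 0.25})
print({asset: round(weight, 3) for asset, weight in final.items()})
```

The design choice worth noting: the human's input is a first-class parameter, not a workaround, so overriding the AI never means abandoning it.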
R - Refine: Enable easy feedback channels
What it is and why it matters: While the ability to adjust or override AI recommendations empowers users in the moment, continuous feedback mechanisms strengthen trust over the long run and ensure the system evolves to stay relevant, accurate and aligned with changing needs. Effective feedback loops show users that the system can learn and adapt.
Evidence: Studies confirm that users value systems that are adaptable and responsive, reinforcing the importance of easy-to-use feedback channels in maintaining not just reliability but also user confidence and trust (Bach et al., 2024).
How to apply it: Set up user-friendly and accessible feedback mechanisms. For instance:
Example: An AI-powered task prioritization tool could include a feedback panel where employees can rate the tool's recommendations (e.g., “Are these priorities aligned with your needs?” with options like “Yes,” “Partially,” or “No”). Employees could also provide specific comments, such as, “Task X is less urgent than task Y due to recent updates.” This feedback is automatically logged and analyzed to adjust the tool's algorithms, ensuring it remains relevant and aligned with evolving team priorities.
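Under the hood, such a feedback panel needs only a small amount of plumbing: a structured rating, an optional comment, a timestamp and one aggregate metric to watch. The sketch below uses an in-memory list and invented field names; a real tool would persist feedback to a database and feed it into retraining or re-ranking.

```python
import json
from datetime import datetime, timezone

FEEDBACK_LOG = []  # in a real tool this would be a database table

def record_feedback(user: str, rating: str, comment: str = "") -> None:
    """Capture a structured rating ('yes' / 'partially' / 'no') plus an
    optional free-text comment, timestamped for later analysis."""
    assert rating in {"yes", "partially", "no"}
    FEEDBACK_LOG.append({
        "user": user,
        "rating": rating,
        "comment": comment,
        "at": datetime.now(timezone.utc).isoformat(),
    })

def alignment_rate() -> float:
    """Share of responses finding the priorities at least partially
    aligned - a simple signal for when the ranking needs retuning."""
    if not FEEDBACK_LOG:
        return 0.0
    ok = sum(1 for f in FEEDBACK_LOG if f["rating"] in {"yes", "partially"})
    return ok / len(FEEDBACK_LOG)

record_feedback("alex", "partially",
                "Task X is less urgent than task Y due to recent updates.")
record_feedback("sam", "yes")
print(json.dumps(FEEDBACK_LOG, indent=2))
print(f"Alignment rate: {alignment_rate():.0%}")
```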
Embracing a human-centered AI future with the PARTNER framework
The PARTNER Framework is a practical guide for making AI work alongside people, not instead of them. Each step - Participate, Assess, Realism, Train, Notify, Empower and Refine - focuses on actions that help organizations use AI in a way that builds trust, supports collaboration and keeps human input at the core.
By following these principles, businesses can make the most of AI’s strengths while still relying on human skills and judgment where it matters most. The framework turns the idea of “AI should augment, not replace people” into something real and achievable, showing how technology and people can work better together.
AI Strategy & Transformation | Behavioural Change | People & Culture
3 周Spot on! A red thread throughout the PARTNER framework also seems to be the importance of building trust between humans and AI (in its intent and competence to augment work outputs).
Senior Innovation Specialist at Creative HQ
4 周Laura - loved your presentation at BOIs Autonomous Summit (great event). Thanks for this perspective and framework. I am particularly interested in navigating ethics in this space so anything in particular you have to share on that topic would be so greatly appreciated. Would love to connect as well (I think I can only follow you)
Global Digital Business Partner || IT Executive - Digital Transformation Lead
2 个月Very insightful and precisely describing how to build AI enabling culture in the organization, thanks Laura Stevens PhD for sharing. Indeed having such framework applied with support of strong change management shall make AI an enabler instead of a challenge.
Managing Partner at CEO.works | Europe
2 个月Again a coherent set of meaningful insights by Laura, all geared towards enhanced impact of AI on performance of people and businesses. It makes for great reading.
Lead Data Scientist @ DSM-Firmenich | Driving Data-Driven Business Growth
2 个月Thanks for sharing Laura Stevens PhD, very insightful. I think, the following can also be added to the framework and enhance (PARTNER). To integrate AI as a collaborative partner, organizations should: 1. Identify Complementary Roles: Assign AI to handle repetitive tasks, allowing employees to focus on strategic and creative endeavors. 2. Foster Human-AI Collaboration: Encourage teams to work alongside AI tools, enhancing decision-making and productivity. 3. Invest in Training: Equip staff with the skills to effectively utilize AI, ensuring seamless integration into workflows. 4. Maintain Human Oversight: Implement governance to oversee AI operations, ensuring ethical standards and accountability.