Would you rather receive performance feedback from a human or from artificial intelligence?
That seems like such a strange question, and one I had not pondered until recently.
For AI to be useful (think of self-driving cars, aviation, and medicine), trust must be present. And when it comes to evaluating trust in AI, the most important concepts are the same as those for evaluating trust in humans: vulnerability and risk.
Whether we trust AI depends to a large extent on our relationship with it. When we think of AI as a partner or collaborator, we exhibit many of the same cognitive and emotional behaviors we do with humans. In fact, when working with computers or AI, people tend to start naming their systems. (Hopefully, no one is calling them HAL anymore.)
As we give AI information and it responds with answers or insights that prove reliable, and delivers those answers in a way that seems human-like, our level of trust increases. In areas where the potential payoff is higher than what we would expect from another human, we are often more willing to delegate the task to AI.
Yet when the required decision-making process carries a higher level of error or risk, we still prefer to have a human as our partner. Further, when it comes to accepting recommendations, the confidence we have in ourselves is often more important than the confidence or trust we place in AI.
Except when it comes to feedback on our performance.
In that situation, humans would rather have feedback from AI, which is often perceived to be less biased and less emotional. Go figure.
Ultimately, fostering trust in AI involves accepting vulnerability, working to build and rebuild trust when necessary, and using common sense when evaluating processes and outcomes. By embracing this mindset, we can harness the potential of AI to enhance performance feedback while upholding ethical standards and preserving the human touch.
(Adapted from: Henrique, B. M., & Santos Jr, E. (2024). Trust in artificial intelligence: Literature review and main path analysis. Computers in Human Behavior: Artificial Humans, 100043.)
#coffee #cybersecurity #artificialintelligence #trust
Virtual Executive Communication Coaching that makes a difference - around the corner and around the world.
Just another reminder of the idiocy of performance reviews. Whether written by ChatGPT or a real live manager, they have less value than wallpaper. Performance reviews should be a series of formative feedback conversations during the year culminating in a confirmation conversation at the end of the year - what worked this year and what you should work on next year. The performance review paper is just an HR artifact that says we did something. Not whether we did it well or poorly - just that we did it. As managers, we should put our effort and care into the conversations. That's something we can't (and shouldn't) subcontract out to AI - yet!