Onboarding the AI Workforce: Common Concerns
Relevance AI
Enabling anyone to build their AI workforce. Scale your team with compute, not headcount.
By Rosh Singh
In 2018, Harvard Business Review surveyed business leaders about their sentiment on AI. Only three-quarters thought that AI would transform their business within the next three years. Now, it would be hard to imagine that number being any less than 100%. Despite that, most businesses still find it exceedingly difficult to implement AI in a meaningful way beyond simple functionality bolted onto their existing tooling.
I talk to business leaders every day about onboarding their first AI agents, which often yields unique insights into their challenges with scaling and implementing AI functionality. One of the most interesting data points is the set of objections they raise, which are often very valid.
In this article, I’m going to shed some light on the most common of these concerns and how businesses can overcome them while still harnessing the power of the AI workforce.
“We are concerned about models being trained on our data”
Businesses, like people, have a right to privacy. Their intellectual property is just that: their property. For society, it’s a vital incentive for encouraging innovation and therefore promoting economic growth.
Recently, the New York Times alleged that ChatGPT was trained on substantial amounts of its intellectual property without permission. From a privacy perspective, researchers from Indiana University were able to extract the contact details of 30 New York Times employees from ChatGPT.
Tech companies have a very checkered history when it comes to legalities and privacy, preferring instead to “move fast and break things”.
For any business, this is obviously concerning, both in terms of protecting its intellectual property and meeting its legal obligations to staff, investors, and customers.
However, the reality of the situation is this: in order for AI agents to actually provide meaningful contributions, they often need access to sensitive and proprietary data.
The key to navigating this concern is actively making sure that the models your data is exposed to are not being trained on the inputs you provide or the outputs they produce. An important point to highlight is that much of the concern is rooted in users putting sensitive data into ChatGPT, the free service provided by OpenAI, where the user agrees that their input may be used to train the model. This is not the case on the Relevance AI platform.
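Beyond those contractual guarantees, a simple belt-and-braces safeguard is to scrub obvious identifiers before a prompt ever leaves your systems. Here’s a minimal, illustrative sketch in Python (this is not part of the Relevance AI platform; real deployments would use dedicated PII-detection tooling):

```python
import re

# Minimal, illustrative scrubber. It only catches obvious email addresses and
# phone-like numbers; real deployments would use dedicated PII-detection tooling.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub(text: str) -> str:
    # Replace each match with a labelled placeholder before the prompt is sent.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Follow up with jane.doe@example.com on +1 (555) 010-7788 about the contract."
print(scrub(prompt))
# Follow up with [EMAIL REDACTED] on [PHONE REDACTED] about the contract.
```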
At Relevance, we allow our users to build AI agents that can leverage various LLMs, which are typically in one of three arrangements:
“I’ve heard about all these hallucinations. What if my AI agent makes stuff up, or even worse, goes rogue or does something it’s not supposed to?”
There are plenty of infamous examples of this. Just this week, Air Canada lost in court when it tried to argue against a passenger who had been given, and acted on, incorrect information provided by an AI chatbot about its bereavement policy. Whilst it seems like an obviously questionable decision to fight a grieving passenger in court over one airfare, it does show that hallucinations can have very real-world consequences.
Although strange behavior from LLMs is often explained away by stating that “AI is a black box”, hallucinations generally happen for a number of reasons, the main ones being gaps in the model’s training data, a lack of relevant knowledge or context for the task at hand, and vague or poorly constructed prompts.
Mitigating this risk is something that we and our users care a lot about at Relevance. One of the core parts of our product is Knowledge, where users can store their own “training data”, which is intelligently injected into LLM prompts based on the task at hand.
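Under the hood, this is a retrieval pattern: find the stored knowledge most relevant to the current task and inject it into the prompt, instructing the model to answer only from that context. Here’s a deliberately simplified sketch of the general idea (a toy bag-of-words ranking with hypothetical example content, not our production retrieval):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real systems use dense vector models.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Stand-in for a user's Knowledge store (hypothetical example content).
knowledge = [
    "Refunds are processed within 14 days of an approved claim.",
    "Bereavement fares can be applied retroactively within 90 days of travel.",
]

def build_prompt(question: str, k: int = 1) -> str:
    # Rank stored knowledge by similarity to the question and inject the top k.
    q = embed(question)
    ranked = sorted(knowledge, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    context = "\n".join(ranked[:k])
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("Can I apply for a bereavement fare after my trip?"))
```

Grounding the model in relevant context this way directly attacks the first two causes of hallucination listed above.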
Good prompting is another contributing factor. Whilst it sounds easy to “just write good prompts”, doing so requires a deep understanding of how LLMs work, their strengths, and their weaknesses. With such nascent technology, “prompt engineering” is a huge skills gap at the moment.
One of the solutions we’ve created is our workflow builder, which allows anyone to create a prompt using a visual interface where they can provide the specifics of how an AI agent should operate. Through copious amounts of experimentation, we have figured out how to turn this into a schema that the LLMs are easily able to follow with a high degree of accuracy. This allows anyone with knowledge of a process to effectively translate and relay it to an AI agent.
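To illustrate the general idea, here’s a simplified sketch of how a structured workflow could be rendered into a prompt an LLM can follow. The schema below is hypothetical, not our actual format:

```python
# Hypothetical step schema; Relevance AI's real format is not shown here.
workflow = {
    "role": "expense-claim reviewer",
    "steps": [
        {"action": "Extract the merchant, amount, and date from the receipt."},
        {"action": "Check the amount against the $500 auto-approval limit."},
        {"action": "If over the limit or any field is missing, flag for escalation."},
    ],
    "output_format": "JSON with keys: merchant, amount, date, decision",
}

def render_prompt(wf: dict) -> str:
    # Turn the structured workflow into numbered instructions an LLM can follow.
    lines = [f"You are an {wf['role']}. Follow these steps in order:"]
    lines += [f"{i}. {step['action']}" for i, step in enumerate(wf["steps"], 1)]
    lines.append(f"Respond as {wf['output_format']}.")
    return "\n".join(lines)

print(render_prompt(workflow))
```

The point of the schema is that the person who knows the process only fills in the steps; the careful prompt structure around them is handled once, by us.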
Over time, what you will see is subject matter experts working in conjunction with AI specialists to develop a library of AI agent templates, designed to perform common roles or functions autonomously and adaptable to each business’s unique requirements. This is what we’ve done with our BDR agent, and what we will soon be doing for other roles.
“Customers, staff, and partners won’t like it”
People like human interaction, and unfortunately, the previous generation of “chatbots” and other rules-based automation often yielded a bad experience. This was partly due to the underlying technology, but also to the way they were implemented.
There is a major difference between rules-based automation, traditional RPA, or chatbots and the AI agents of today. AI agents can use their own LLM-fueled cognition and discretion to dynamically determine the best course of action given the situation at hand, rather than adhering to a rigid set of rules. This means they can complete a wider range of tasks autonomously: they produce generally higher quality output and, more importantly, can deal with tricky edge cases that would typically require human intervention.
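The difference is easy to see in code. A rules-based bot can only handle the cases its author anticipated, while an agent asks the model to choose the best action from an allowed set. The sketch below stubs out the model call (call_llm is a stand-in, not a real client):

```python
# call_llm is a stand-in for any chat-completion API; stubbed so the sketch runs.
def call_llm(prompt: str) -> str:
    return "escalate_to_human"

# Rules-based: only the cases the author anticipated are handled.
def rules_bot(message: str) -> str:
    if "refund" in message:
        return "send_refund_form"
    return "send_fallback_reply"  # everything unexpected falls through here

# Agent-style: the model picks from an allowed set of actions using full context.
ACTIONS = ["send_refund_form", "answer_from_policy", "escalate_to_human"]

def agent(message: str) -> str:
    prompt = (
        f"Customer message: {message}\n"
        f"Choose the single best action from {ACTIONS} and return only its name."
    )
    choice = call_llm(prompt)
    return choice if choice in ACTIONS else "escalate_to_human"  # guardrail

msg = "My father passed away last week. Can I get a refund on my fare?"
print(rules_bot(msg))  # send_refund_form - matches a keyword, misses the sensitivity
print(agent(msg))      # escalate_to_human - the model can weigh the full situation
```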
Often when we talk to customers, we talk about augmentation, not replacement, of their existing workforce. There are many cases where a situation is better handled by a human. The Air Canada incident, for example, would not have ended up a PR nightmare if the “chatbot” had been able to recognize that it was a sensitive situation requiring more careful handling, and escalated it to a human.
Escalation is a key feature that we’ve built at Relevance AI. Users can set guidelines as to when AI agents should escalate what they are doing to a (human) colleague, either for approval, input, or takeover. Whether it’s a customer interaction like the one above, or an expense claim where it’s not 100% sure which tax code to enter, an AI agent is able to use its discretion to escalate a situation when required.
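As a rough illustration of the pattern (not our actual escalation API), an escalation guard can be as simple as routing on confidence and topic sensitivity:

```python
from dataclasses import dataclass

# Illustrative guidelines; in practice these would be set by the user.
SENSITIVE_TOPICS = {"bereavement", "complaint", "legal"}
CONFIDENCE_THRESHOLD = 0.8  # assumed to come from the model or a separate verifier

@dataclass
class AgentDecision:
    answer: str
    confidence: float  # 0..1
    topic: str

def route(decision: AgentDecision) -> str:
    # Escalate on low confidence or sensitive subject matter; otherwise respond.
    if decision.confidence < CONFIDENCE_THRESHOLD or decision.topic in SENSITIVE_TOPICS:
        return (f"ESCALATE to a human colleague "
                f"(topic={decision.topic}, confidence={decision.confidence:.2f})")
    return f"RESPOND: {decision.answer}"

print(route(AgentDecision("Our bereavement policy allows...", 0.91, "bereavement")))
# ESCALATE to a human colleague (topic=bereavement, confidence=0.91)
```

Note that in this sketch a sensitive topic escalates even when the model is confident; confidence alone is not a substitute for judgment about what should stay with a human.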
“It’s overhyped! I’ve played with ChatGPT and it’s not very good. It’s just a toy”
We have customers that do some amazing things, and we are constantly surprised by the variety of problems they solve. There’s a common pattern in technological innovation: new technologies are often labelled as toys to begin with. The personal computer was initially considered a hobbyist gadget with limited practical use.
Sure, there’s lots of room for improvement, but the internet, the car, the mobile phone - they were all toys at one point, and now it’s hard to imagine the world without their transformative impact.
In the journey towards integrating AI agents into business processes, it’s crucial to acknowledge and tackle the genuine concerns that hamper smooth adoption. From data privacy and the risk of errors to customer acceptance and skepticism about AI’s efficacy, businesses encounter a range of obstacles. However, these challenges also present opportunities for innovation and improvement.
The one factor we simply cannot ignore is that AI becoming accessible in new ways is guaranteed to challenge the way businesses have grown for the last few hundred years. We believe that by 2030 every business will have an AI agent completing more and more meaningful work, enabling exponential scaling without having to increase headcount. Sign up to have a look at what we’ve built so far.