The Good, Bad and Ugly of AgentForce
Steve Fouracre
Founder of SES, a Salesforce app which vastly speeds up Salesforce development. Also Head of Salesforce Europe at Metacube.
We’ve all seen the hype generated by AgentForce recently. In this article we will learn:
● what AgentForce is
● how AgentForce can be used for Good
● how to know if AgentForce is a good fit
● the probable and possible Ugly sides of AgentForce.
So why AgentForce at all?
To understand what AgentForce is, we first need to understand why it was created. Before AgentForce we had Co-Pilot and Chat Bots. Co-Pilot was not widely adopted, principally because it was not good at being proactive, for example, “Would you like me to book your order?”, or “Because you enjoyed your Caribbean cruise last year, you might like to look at a great offer we have for a cruise down the Nile”. For these reasons Co-Pilot has mainly been adopted for internal business use rather than for engaging with customers. Salesforce Chat Bots will only perform what they are strictly prescribed to do; they do not accurately interpret a customer’s intention and cannot be proactive in their responses.
An alternative approach widely adopted has been to develop web forms. A web form captures pertinent information from the customer. The benefit of this approach is that you can strictly control the information captured and the information returned to the customer. The downside is that it is a very rigid way of capturing information, the cost of ownership is high, and responses are very limited. To gain greater flexibility many companies have supplemented their on-screen forms with support agents; but the cost of running support teams is high, training them is costly, the company carries increased risk if employees fall ill or leave, and providing 24-hour support can be operationally difficult and expensive.
It was clear an alternative was necessary.
Agentforce is a groundbreaking AI-powered platform from Salesforce that allows businesses to create and deploy autonomous agents for various functions like sales, marketing, and customer service. These AI agents can handle tasks independently, learn from business data, and make decisions without human intervention, escalating to human employees only when necessary.
With Agentforce, companies can quickly build custom AI agents using low-code tools and ready-made templates, making it accessible to teams without coding expertise.
AgentForce has been built to inherently and instantly know all of your data and metadata, and to be able to take action. AgentForce will never take a day, or even a minute, off work, and it drives down the operational cost of running a support team and other similar teams relevant to the function of the AgentForce agent.
AgentForce unites humans, AI, data, and actions into a single system, helping businesses streamline operations, improve customer interactions, and make better decisions; all with minimal coding or AI expertise.
The benefits of AgentForce are broadly divided into 5 pillars. Prior to AgentForce it had not been possible to achieve all 5 pillars.
What are the AgentForce costs?
What does a typical use case of AgentForce look like to replace or supplement support agents?
Companies using AgentForce incur 2 costs:
1. A license fee of $75 / month / user. This user is someone who will create the agents, not an end user of the agents.
2. Coupled with the license fee there is a consumption-based cost of between $2 and $3 for each prompt submitted by the user. A submission consumes what is called an Einstein Request. The calculation of how many Einstein Requests are consumed is complicated and is beyond the scope of this article. The scope of our article will be limited to assessing the financial gains of using AgentForce compared to employing support agents.
Let’s compare the cost of an employee answering support requests. Say an employee can answer on average 1 request every 5 minutes, hence 12 requests per hour, and each request uses a minimum of 5 prompt responses. The equivalent cost using AgentForce would be a minimum of $2 per prompt response multiplied by 12 requests multiplied by 5 responses, i.e. around $120 per hour. This indicates there won’t be any significant saving from using AgentForce. As a matter of fact, right now it looks to be more expensive than a human agent!
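The comparison above can be sketched in a few lines of code. This is a minimal illustration only: the AgentForce figures ($2 per prompt response, 12 requests per hour, 5 responses per request) come from the article, while the human agent’s wage and overhead multiplier are made-up placeholder values you would replace with your own numbers.

```python
# Rough cost comparison: human support agent vs AgentForce.
# AgentForce figures are those quoted in the article; the human-agent
# wage and overhead multiplier are illustrative assumptions only.

def agentforce_cost_per_hour(requests_per_hour=12,
                             prompts_per_request=5,
                             cost_per_prompt=2.0):
    """Consumption cost of handling one hour's worth of requests."""
    return requests_per_hour * prompts_per_request * cost_per_prompt

def human_agent_cost_per_hour(hourly_wage=30.0, overhead_multiplier=1.4):
    """Fully loaded hourly cost of a human agent: wage plus overheads
    such as training and management (placeholder values)."""
    return hourly_wage * overhead_multiplier

if __name__ == "__main__":
    af = agentforce_cost_per_hour()        # 12 * 5 * $2 = $120/hour
    human = human_agent_cost_per_hour()
    print(f"AgentForce: ${af:.2f}/hour vs human agent: ${human:.2f}/hour")
```

Even under these generous assumptions for the human agent’s overheads, the AgentForce consumption cost comes out higher, which is the article’s point.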
If there is no significant financial gain, does the accuracy of the responses, risks and competency of AgentForce in comparison to a human support agent make for a viable business case?
First, let’s examine the accuracy of the responses. We conducted a surgeon’s dissection of the capabilities of AgentForce.
Inaccurate Responses and Risks
The data source used to frame responses to prompts can be either the customer’s data or data within the LLM, the latter coming chiefly from the internet. From extensive testing, we concluded that AgentForce often produced erroneous responses. Since the internal mechanisms of the AI engine governing AgentForce’s outputs are opaque, it is impossible to determine which data sources AgentForce is using when responding to prompts; but since the data presented in the erroneous responses could not be found within the customer data, the only logical conclusion is that AgentForce is making autonomous decisions about which source of data to use to answer customer questions. When the instructions given to AgentForce were changed the errors no longer occurred, but the side-effect this time was that AgentForce did not know how to proceed. It is therefore often difficult to optimise instructions and topics to ensure reliable outcomes to prompt messages from customers.
We conducted a trial experiment, setting up a hypothetical restaurant and allowing customers to ask questions about the restaurant via an AgentForce chat. Whilst conversing with the AI agent, the customer asked “Do you have any allergy ingredients with your menu?”. The AI agent answered by providing a breakdown of the menu, consisting of a choice of starters, main courses and desserts. The AI agent had missed what the customer was actually asking for. More startling was the fact that the AI agent had provided a menu at all: we had not inputted any meals into Salesforce. AgentForce had autonomously decided that the best way to answer the question was to provide an example of a menu it had learnt within its LLM, gained from the many data points provided to the LLM.

So AgentForce is not clever enough to understand when it is useful to supplement its responses with data learnt from the internet or Salesforce, and when it is not. With the Topics and Topic Actions that were configured, we expected AgentForce to redirect to a human agent. This scenario could have led to a far more dangerous outcome: the menu provided may not have listed any allergens, but the food the customer then consumed could have contained them. If you compare this use case with an implementation using either AgentForce’s predecessor, the Chat Bot, or a web form, the outcome would not be the same; the response would likely have been to redirect the customer to a human agent, because neither has the capability to answer. This may be limiting, but in many scenarios it could be more accurate and less risky.
Regarding other risks, AgentForce allows OpenAI to access all of a customer’s data and transmits this data to OpenAI. Salesforce states that no data will persist within the ChatGPT LLM and that sensitive data will be masked. However, it remains to be seen whether customers will implicitly trust statements from Salesforce without seeing the actual agreements signed between the two companies. Privacy, security and data persistence are particularly important for government, regulatory, finance and health organisations, so these issues could be particular obstacles to the adoption of AgentForce within those sectors.
Organisations must ensure they have robust data governance mechanisms in place covering data accuracy, validation, profiling, integrity, deduplication, normalization, and the deprecation and archival of unnecessary data. If organisations do not implement such governance mechanisms, they risk AgentForce suffering increased hallucinations, toxicity, and data privacy, security and persistence issues, resulting in poor responses, low resolution rates and diminished customer confidence and trust.
Source: https://www.gartner.com/en/newsroom/press-releases/2023-11-06-gartner-says-ai-ambition-and-ai-ready-scenarios-must-be-a-top-priority-for-cios-for-next-12-24-months, Forrester January 2024
Before embarking on an AgentForce implementation it is recommended that improvements to systems, processes and data are implemented first. This will enable a far richer implementation of AgentForce with fewer data inaccuracies and risks.
Is AgentForce a good fit?
To assess the competency of AgentForce after implementing the AI restaurant, we posed a range of tests and were able to compile a guide to when AgentForce is, and is not, a good fit, depicted below:
Although we have focused on using AgentForce in a support context, AgentForce can be used in sales, marketing, advisory and HR contexts. I’m sure other use cases will be invented as the market continues to adopt AgentForce.
AgentForce is an incredible leap in the direction of autonomous AI agents, but it is not quite at the stage where we can be confident that responses to customer queries will be accurate and pose limited risk. Until companies improve their data and AgentForce evolves, the customer response to AgentForce implementations will be as tepid as it was to their Chat Bot predecessors.
The positive news is that companies have time to get their systems, processes and data AI ready.