Unlocking the Secrets to Salesforce Design Mastery: Part I

Salesforce architecture: where dreams of seamless integrations and scalable systems meet the harsh reality of governor limits and the occasional SOQL bug.

Building a great Salesforce architecture is like being a chef in a wild kitchen – you have powerful tools and ingredients, but you also have to figure out how to create a gourmet dish with just 100 queries, a sprinkle of Flow, and a dash of Lightning Web Components.

In this blog, we’ll dive into the principles of Salesforce architecture design with Pere Marti – because while Salesforce may be a no-code/low-code platform, creating a robust system still requires high-code thinking and a solid strategy (and maybe a little caffeine).


When designing a solution for high user volume, what principles do you prioritize to ensure scalability and performance optimization?

The answer to this question is: limits, limits, limits!

As a Salesforce dev/admin, you will quickly find out that Salesforce has limits for everything. This is especially surprising if you come from a more traditional programming background, where limits are not so clear-cut: in the cloud, you can consume as many resources (memory, execution time, database space, …) as you want. If you don’t optimize, no problem, your application will still run. Your “limit” will be the big fat bill you get at the end of the month from your cloud provider.

In Salesforce, the platform forces you to stay within certain limits; if you go over, your transaction simply fails. So for any solution, but especially the ones that require performance and scalability, you always need to design and build with limits in mind.

Let me give you a few of the most typical cases (if you’ve been around here for a while, you’ve seen some of them):

  • Query limits – “Too many SOQL queries: 101”. This error occurs when your code executes more than 100 SOQL queries in a single transaction, often because queries run inside loops or the code design is inefficient. Typical solutions:
      ◦ Bulkify your code: design it to process multiple records at once, using collections like lists, sets, or maps.
      ◦ Avoid queries in loops: refactor your logic so that all necessary data is fetched upfront, in one or a few queries, before entering any loop.
  • API limits – REQUEST_LIMIT_EXCEEDED. This error happens when the total number of API calls in your Salesforce org exceeds the rolling 24-hour limit. Typical solutions:
      ◦ Monitoring: set up your API usage notifications and keep an eye on the API limits: https://developer.salesforce.com/blogs/2024/11/api-limits-and-monitoring-your-api-usage
      ◦ Caching: use Salesforce Platform Cache or similar mechanisms to store frequently used data.
      ◦ Consolidation: refactor your API design to combine multiple API calls into fewer, more comprehensive ones (e.g., composite requests).
  • Execution time – “Apex CPU Time Limit Exceeded”. This error occurs when the total CPU time used by Apex code in a transaction exceeds the limit (10 seconds for synchronous transactions). The first thing to do is use the Developer Console’s Analysis perspective to find out where your “time sink” is. Then you can:
      ◦ Optimize the code: eliminate unnecessary logic, replace nested loops with maps, and make sure triggers don’t fire unnecessarily.
      ◦ Move to asynchronous processing: for resource-heavy operations, such as processing large datasets, consider asynchronous Apex (e.g., Batch Apex, Queueable Apex, or Scheduled Apex). This shifts the work to a separate transaction with its own, higher set of limits.
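To make the bulkification advice concrete, here is a minimal Apex sketch (the trigger, object, and field choices are illustrative, not from the article):

```apex
// BAD: one query per record – hits "Too many SOQL queries: 101" on large batches
trigger ContactTrigger on Contact (before insert) {
    for (Contact c : Trigger.new) {
        Account a = [SELECT Name FROM Account WHERE Id = :c.AccountId]; // query inside loop
        c.Description = a.Name;
    }
}
```

```apex
// GOOD: collect the IDs first, query once, then look results up in a map
trigger ContactTrigger on Contact (before insert) {
    Set<Id> accountIds = new Set<Id>();
    for (Contact c : Trigger.new) {
        if (c.AccountId != null) accountIds.add(c.AccountId);
    }
    Map<Id, Account> accountsById = new Map<Id, Account>(
        [SELECT Name FROM Account WHERE Id IN :accountIds] // one query for the whole batch
    );
    for (Contact c : Trigger.new) {
        Account a = accountsById.get(c.AccountId);
        if (a != null) c.Description = a.Name;
    }
}
```

The second version runs exactly one query whether the trigger receives 1 record or 200.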
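And for moving heavy work off the synchronous transaction, a minimal Queueable sketch (class name and scoring logic are illustrative assumptions):

```apex
// Runs in its own asynchronous transaction, with its own (higher) set of limits
public class RecalculateScoresJob implements Queueable {
    private Set<Id> accountIds;

    public RecalculateScoresJob(Set<Id> accountIds) {
        this.accountIds = accountIds;
    }

    public void execute(QueueableContext ctx) {
        List<Account> accounts = [SELECT Id, AnnualRevenue FROM Account WHERE Id IN :accountIds];
        // ... CPU-heavy scoring logic would go here ...
        update accounts;
    }
}

// Enqueued from the synchronous context:
// System.enqueueJob(new RecalculateScoresJob(accountIds));
```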

Just to make my answer a bit more complete: to ensure good performance on the front-end side, you can use several of the caching features available in LWC, or lazy loading – only load the data or components that are immediately needed for display. https://developer.salesforce.com/docs/platform/lwc/guide/apex-result-caching.html
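On the server side, Apex result caching comes down to marking the controller method as cacheable; a minimal sketch (class and query are illustrative):

```apex
public with sharing class OpportunityController {
    // cacheable=true lets the Lightning client cache the result
    // and skip repeat server round-trips for the same data
    @AuraEnabled(cacheable=true)
    public static List<Opportunity> getOpenOpportunities() {
        return [SELECT Id, Name, Amount FROM Opportunity WHERE IsClosed = false LIMIT 200];
    }
}
```

A component can then read this via `@wire`, and the platform serves repeat reads from the client-side cache instead of the server.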


On the sharing side, for high user volumes you should always consider data skew: that is, when a single record has a very large number of child records associated with it, or when many users are associated with the same role or record. This can lead to performance degradation, especially during sharing recalculations triggered by changes in record ownership or sharing rules. Here is a more detailed explanation and some best practices to avoid it: https://www.salesforceben.com/data-skew-in-salesforce-why-it-matters/

Finally, something that is often neglected but that you should especially consider in high-volume situations: data archival and your archival strategy. Identify records that are no longer needed for daily operations (e.g., old opportunities, cases, or logs) and use an external archival solution or data warehouse to move these records out of Salesforce.
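An archival job like this is typically implemented as Batch Apex that ships old records to the external store and then deletes them; a hedged skeleton (the object, cutoff, and callout details are illustrative assumptions):

```apex
public class CaseArchivalBatch implements Database.Batchable<SObject>, Database.AllowsCallouts {

    public Database.QueryLocator start(Database.BatchableContext ctx) {
        // Cases closed more than two years ago are archival candidates (illustrative cutoff)
        return Database.getQueryLocator(
            'SELECT Id, Subject, ClosedDate FROM Case ' +
            'WHERE IsClosed = true AND ClosedDate < LAST_N_YEARS:2'
        );
    }

    public void execute(Database.BatchableContext ctx, List<Case> scope) {
        // 1. Ship this chunk to the external archive (callout details omitted)
        // 2. Delete only after the archive confirms receipt
        delete scope;
    }

    public void finish(Database.BatchableContext ctx) {
        // Optionally chain the next run or send a summary notification
    }
}

// Database.executeBatch(new CaseArchivalBatch(), 200);
```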


For the full article, follow the link.
