Harnessing the Potential of Big Objects: Achieving Reliable Performance Whether You Have 1 Million, 100 Million, or Even 1 Billion Records
Unleashing the Power and Defining Features of Big Objects:
Big objects store hundreds of millions, or even billions, of records. They can preserve Salesforce records for compliance or auditing. Creating, populating, and accessing big object records requires careful consideration.
Unlike sObjects, big objects do not support all field types. When you define a big object, an index is created to drive record queries. Big object records can be queried with SOQL, although not all operations are supported.
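Big objects are defined through the Metadata API rather than point-and-click setup alone. As a minimal sketch, the object file below defines a hypothetical `Order_Archive__b` big object with a two-field index; the object and field names are assumptions for illustration, not from the source. Note that every field referenced in the index must be marked required.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Order_Archive__b.object : hypothetical big object definition -->
<CustomObject xmlns="http://soap.sforce.com/2006/04/metadata">
    <deploymentStatus>Deployed</deploymentStatus>
    <fields>
        <fullName>Customer__c</fullName>
        <label>Customer</label>
        <length>18</length>
        <type>Text</type>
        <required>true</required>
    </fields>
    <fields>
        <fullName>Archived_Date__c</fullName>
        <label>Archived Date</label>
        <type>DateTime</type>
        <required>true</required>
    </fields>
    <!-- The index determines which queries the big object can serve -->
    <indexes>
        <fullName>OrderArchiveIndex</fullName>
        <label>Order Archive Index</label>
        <fields>
            <name>Customer__c</name>
            <sortDirection>DESC</sortDirection>
        </fields>
        <fields>
            <name>Archived_Date__c</name>
            <sortDirection>DESC</sortDirection>
        </fields>
    </indexes>
    <label>Order Archive</label>
    <pluralLabel>Order Archives</pluralLabel>
</CustomObject>
```

Choose the index fields carefully at design time: the index cannot be changed after records exist, and it dictates every query the big object can answer efficiently.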
Storing Large-Volume Data -
Big objects run on the Salesforce platform and store extensive data sets, such as hundreds of millions or billions of records, for long-term auditing, compliance, and historical analysis. This ensures efficient and reliable data management in the long run.
Reports and queries on standard Salesforce sObjects slow considerably at large record volumes. Querying big object records, by contrast, performs consistently regardless of volume.
Querying Big Objects -
Query design is essential for a large-data-volume (LDV) ready org. Understanding query optimization and designing selective list views, reports, and SOQL queries is crucial.
- Storing vast quantities of data in an org can degrade performance, particularly for search. Search is the ability to retrieve records by querying unstructured text. The Salesforce search architecture relies on its own data store, specifically designed to optimize text searches.
- Data must be indexed before it can be searched. Most text fields are automatically indexed by Force.com, allowing users to run cross-object searches and rapidly identify records containing strings of interest. Indexed searches begin by scanning the indexes for relevant matches, then narrowing the results using access permissions, search limits, and other filters.
- This produces a result set that generally contains the most relevant matches. Once the result set reaches a predetermined size, any remaining records are discarded. The result set is then used to query the database and retrieve the specific fields visible to the user. Adding or modifying large amounts of data can significantly prolong this entire process.
Salesforce Big Objects Async SOQL Retirement
- With the Summer '23 release, Salesforce discontinued and ceased support for the Async SOQL functionality of Big Objects. It is being replaced by APIs that are more familiar to customers.
What does this change mean for us?
- Since the update to the Summer '23 release, you must use the Bulk API or batch Apex to query or report on custom Big Objects. All in-flight Async SOQL jobs became inaccessible at that point.
- Batch Apex is an alternative to Async SOQL for automated processing on a Big Object, or on ApiEvent, ReportEvent, or ListViewEvent. Apex cannot hold an unlimited number of Big Object records in memory: the Apex heap limit is 6 MB for synchronous execution and 12 MB for asynchronous execution. For example, if each record is 1 KB in size, a synchronous context can hold only about 6,000 records in memory.
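Because each batch `execute()` call runs with its own heap and governor limits, batch Apex can walk millions of big object records in manageable chunks. The sketch below assumes the hypothetical `Order_Archive__b` big object with fields `Customer__c` and `Archived_Date__c`; the names and the filter value are illustrative, not from the source.

```apex
// Hypothetical sketch: batch Apex processing over a custom big object.
public class OrderArchiveBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        // The query must follow the big object's index: filter on the
        // leading index field(s) with = or IN, no gaps.
        return Database.getQueryLocator(
            'SELECT Customer__c, Archived_Date__c FROM Order_Archive__b ' +
            'WHERE Customer__c = \'001xx000003DGb0\''
        );
    }
    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        // Each execute() gets a fresh heap, so only one chunk of records
        // (e.g. 2,000 at a time) is ever held in memory.
        for (SObject rec : scope) {
            // Process the archived record: aggregate, export, etc.
        }
    }
    public void finish(Database.BatchableContext bc) {
        // Post-processing or notification once all chunks complete.
    }
}
// Usage: Database.executeBatch(new OrderArchiveBatch(), 2000);
```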
Query Considerations -
- Construct your query against the index, beginning with the first field defined in the index and leaving no gaps between the first and last field used. Any field in your query can use either = or IN, but IN can appear only once.
- Bulk API queries support both the query and queryAll operations. The queryAll operation also retrieves records that were removed by a merge or delete operation, as well as archived Task and Event records.
- The operators !=, LIKE, NOT IN, EXCLUDES, and INCLUDES are invalid in all queries.
- Aggregate functions are not permitted in any query.
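The index rules above can be illustrated with a short sketch. It assumes a hypothetical big object `Order_Archive__b` whose index is (`Customer__c`, `Archived_Date__c`), in that order; all names and literal values are assumptions for illustration.

```apex
// Valid: starts at the first index field, uses IN once, no gaps.
List<Order_Archive__b> rows = [
    SELECT Customer__c, Archived_Date__c
    FROM Order_Archive__b
    WHERE Customer__c IN ('001xx000003DGb0', '001xx000003DGb1')
];

// Also valid: both index fields, in index order, using =.
rows = [
    SELECT Customer__c, Archived_Date__c
    FROM Order_Archive__b
    WHERE Customer__c = '001xx000003DGb0'
    AND Archived_Date__c = 2023-01-15T00:00:00Z
];

// Invalid: filtering on Archived_Date__c alone skips the first index
// field (a gap), and operators such as !=, LIKE, or NOT IN would also
// be rejected.
```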
Considerations to Leverage the Power of Big Objects -
- Encryption is not supported for big objects. When you archive encrypted data from a standard or custom object, it is stored in the big object in clear text.
- Standard or custom object field history is encrypted when you use Salesforce Shield Platform Encryption. The Shield field history archive is the big object used to archive field history data. Big object data is encrypted at rest.
- When working with huge amounts of data and writing batches of records via APIs or Apex, you may encounter a partial batch failure in which some records are written and others are not. This behavior is expected because the database prioritizes responsiveness and consistency at scale. In these circumstances, implement a retry mechanism until all records are written.
- Big objects do not support transactions. When reading from or writing to a big object from a trigger, process, or flow on an sObject, use asynchronous Apex. The Queueable interface in asynchronous Apex separates DML operations on different sObject types, preventing the mixed DML error.
- Writing the code asynchronously also makes it more resilient to database lifecycle events.
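The two points above — retrying partial batch failures and writing to big objects asynchronously via Queueable — can be combined in one sketch. It again assumes the hypothetical `Order_Archive__b` big object; the retry cap of 3 is an arbitrary illustrative choice.

```apex
// Hypothetical sketch: asynchronous big object writes that avoid the
// mixed DML error and retry any records lost to a partial batch failure.
public class ArchiveOrdersJob implements Queueable {
    private List<Order_Archive__b> records;
    private Integer attempts;

    public ArchiveOrdersJob(List<Order_Archive__b> records, Integer attempts) {
        this.records = records;
        this.attempts = attempts;
    }

    public void execute(QueueableContext ctx) {
        // insertImmediate writes big object records outside the sObject
        // transaction, so a partial failure is possible and expected.
        List<Database.SaveResult> results = Database.insertImmediate(records);

        List<Order_Archive__b> failed = new List<Order_Archive__b>();
        for (Integer i = 0; i < results.size(); i++) {
            if (!results[i].isSuccess()) {
                failed.add(records[i]);
            }
        }
        // Re-enqueue only the failed records, up to a bounded retry count.
        if (!failed.isEmpty() && attempts < 3) {
            System.enqueueJob(new ArchiveOrdersJob(failed, attempts + 1));
        }
    }
}
// From a trigger on an sObject:
// System.enqueueJob(new ArchiveOrdersJob(archiveRecords, 0));
```

Enqueueing the job from the trigger keeps the sObject DML and the big object DML in separate transactions, which is what sidesteps the mixed DML restriction.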
Because of its ability to manage enormous amounts of data within the Salesforce platform, Big Objects is the ideal solution to the long-standing problem of Salesforce data archiving. Big Objects, made accessible by DataArchiva, came to the rescue of several leading organizations using Salesforce CRM. When it comes to archiving historical or less-utilized Salesforce data (such as cases, leads, contacts, opportunities, old emails, etc.), DataArchiva is the ONLY Native Data Archiving Solution for Salesforce powered by Big Objects. This ensures data integrity while providing seamless access.