In computer science, design patterns are tried-and-true techniques for solving common problems in software development. It stands to reason, then, that an anti-pattern is the opposite of a design pattern: a common but inefficient or counterproductive way of implementing software. Understanding these anti-patterns helps you avoid or eliminate such mistakes.
-
The issue occurs when using bulk loading with parallel settings: some initial loads are quite fast, but later loads of related objects are plagued by lock contention, resulting in batch retries and slow loads that frequently end in load failures.
It’s critical to understand Salesforce’s locking mechanisms and how they affect large data loads. One workaround is to pre-sort the child records in the CSV file by parent Id, which reduces the chance of parent record lock contention among parallel load batches.
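The pre-sorting workaround can be sketched in a few lines of Python. This is a minimal, illustrative example only: the `ParentId` column name and the sample record Ids are assumptions, not a prescribed Salesforce schema, and a real load file would contain many more fields.

```python
import csv
import io

def sort_csv_by_parent(csv_text, parent_column="ParentId"):
    """Sort CSV rows by the parent reference column so that child
    records sharing a parent land in the same load batch, reducing
    parent-record lock contention across parallel batches."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = sorted(reader, key=lambda row: row[parent_column])
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

# Hypothetical child records referencing two parents (a01, a02)
raw = "Id,ParentId\nc1,a02\nc2,a01\nc3,a02\nc4,a01\n"
print(sort_csv_by_parent(raw))
```

Because `sorted` is stable, children of the same parent stay in their original relative order; after sorting, consecutive batches are far less likely to touch the same parent record simultaneously.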
If a sharing configuration was already set up prior to the data load, it can contribute to poor loading performance and further increase lock contention. Both load and sharing calculation efficiency can be greatly improved by deferring the org’s sharing calculations until after the data load completes.
-
Attempting to refresh the entire local database results in long-running Bulk API tasks that unnecessarily hold onto asynchronous processing threads and impede other batch activity.
Performing incremental data backups means backing up only the data that has been added or changed since the last incremental backup. When doing so, use queries that filter records on SystemModstamp (a standard, indexed field present on all objects) rather than on LastModifiedDate, which is not indexed.
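Such an incremental query can be assembled as shown below. This is a sketch, not Salesforce tooling: the helper name, the object, and the field list are made up for illustration; only the SystemModstamp filter and the ISO-8601 UTC datetime format it relies on come from how SOQL date filters work.

```python
from datetime import datetime, timezone

def incremental_backup_query(sobject, fields, last_backup):
    """Build a SOQL query that fetches only records created or
    modified since the previous incremental backup, filtering on
    the indexed SystemModstamp field."""
    # SOQL expects ISO-8601 UTC datetimes, e.g. 2024-01-31T00:00:00Z
    stamp = last_backup.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return (
        f"SELECT {', '.join(fields)} FROM {sobject} "
        f"WHERE SystemModstamp > {stamp}"
    )

q = incremental_backup_query(
    "Account", ["Id", "Name"],
    datetime(2024, 1, 31, tzinfo=timezone.utc),
)
print(q)
# → SELECT Id, Name FROM Account WHERE SystemModstamp > 2024-01-31T00:00:00Z
```

The timestamp of each successful backup becomes the `last_backup` input for the next run, so each job touches only the delta instead of re-exporting the entire database.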
It is vital to prepare a suitable data backup plan, including a plan for handling backup failures.
-
Reports that take a long time to complete, sometimes timing out and failing entirely, are all afflicted by the same problem: inefficient reports with non-selective filters on unindexed fields. In addition, because each sales rep creates nearly identical poorly performing reports, the same tuning labour must be repeated over and over.
So what can we do to address such an issue?
One approach is to anticipate problems: give sales reps the rights to create reports, and educate them on how to build reports effectively so that those reports scale as the company’s database grows. Another option is to create a library of public, standardised, and optimised reports that fulfil the sales reps’ needs: fewer reports to tune and manage, and a higher level of user satisfaction.
-
Both sharing recalculations and Bulk API operations draw from the same pool of asynchronous processing threads. Because of this concurrency, both jobs take longer to complete than if they were scheduled separately. Furthermore, if the full backup were replaced with a considerably faster incremental backup, the time required for the backup (and its thread usage) would be substantially lower, leaving more time for the sharing recalculation.
-
While it is critical to move your data to Salesforce as soon as possible, you must also ensure that you have selected the correct information and that the data has been cleaned up.
Although it may be natural to hold on to outdated documents, now is the time to transfer only important and up-to-date information. Salesforce apps can assist you in storing, managing, and analysing massive volumes of client data that can be used in various ways. However, if the data is “dirty” (incomplete or inaccurate) it is of little use to anyone. Dirty data can range from blank fields to obsolete information, typos, and spelling mistakes. Fortunately, Salesforce’s data cleansing tools can assist you in getting your data back on track.
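A first pass at spotting dirty data, such as the blank fields mentioned above, can be sketched before any migration. The helper name, field names, and sample records below are all hypothetical; this is only an illustration of the idea, not a substitute for Salesforce’s own cleansing tools.

```python
def find_dirty_records(records, required_fields):
    """Flag records whose required fields are missing or blank
    (including whitespace-only values), returning each dirty
    record's index together with the offending field names."""
    dirty = []
    for i, rec in enumerate(records):
        missing = [f for f in required_fields
                   if not str(rec.get(f, "") or "").strip()]
        if missing:
            dirty.append((i, missing))
    return dirty

# Hypothetical contact data with a blank name and a missing email
contacts = [
    {"Name": "Ada Lovelace", "Email": "ada@example.com"},
    {"Name": "  ", "Email": "unknown@example.com"},
    {"Name": "Alan Turing", "Email": ""},
]
print(find_dirty_records(contacts, ["Name", "Email"]))
# → [(1, ['Name']), (2, ['Email'])]
```

Running a check like this before the load keeps obviously incomplete rows out of the org, where they are far cheaper to fix than after migration.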
- PROTIP 1: Consider how one part of the implementation might affect the others, and test the design for scalability under the planned workload before going live in production.
- PROTIP 2: Remove unnecessary formula fields from the data model if your reports and queries are slow and inefficient. You can implement the same logic differently, e.g. using triggers or automation tools.
- PROTIP 3: Design and structure data ownership for your organisation from the very beginning. Users own the data they create; use the role hierarchy and sharing rules to provide suitable access for others.
Lesson learned: A little time dedicated to planning and testing upfront can save a lot of time spent addressing technical debt afterwards.