Increase Efficiency with Platform Cache

Platform Cache is a memory layer that stores your application's session and environment data for later access. Applications run faster because they retrieve reusable data from the cache instead of recomputing or re-fetching it on every request. Note that cached data is visible and mutable by default, so Platform Cache should never be used as a database replacement. Use the cache only for static data that is frequently needed or computationally expensive to acquire. Let's explore the use of cache in a simple Apex class.

@AuraEnabled(cacheable=true)
public static List<Options> fetchObjectNamesUsingGlobalDescribe() {
    List<Options> objectNames = new List<Options>();
    try {
        Map<String, Schema.SObjectType> schemaMap = Schema.getGlobalDescribe();
        for (String objectName : schemaMap.keySet()) {
            Schema.DescribeSObjectResult describeResult = schemaMap
                .get(objectName)
                .getDescribe(SObjectDescribeOptions.DEFERRED);
            if (describeResult.isQueryable()) {
                objectNames.add(new Options(describeResult.getLabel(), objectName));
            }
        }
    } catch (Exception e) {
        throw new AuraHandledException(e.getMessage());
    }
    return objectNames;
}
// CPU Time - 1307 ms
// Heap Size - 80,000 bytes

In the example above, we acquire objects in the environment to create a schema. The Schema.getGlobalDescribe() function returns a map of all sObject names (keys) to sObject tokens (values) for the standard and custom objects defined in the environment in which we're executing the code. Unfortunately, we're not caching the data, which makes this an expensive process. This code consumes 1,307 ms of CPU time with a heap size of 80,000 bytes. Let's improve this code by using a cache partition.

public static List<Options> fetchObjectNamesUsingGlobalDescribeFromCache() {
    List<Options> objectNames = new List<Options>();
    try {
        // Instantiate the cache partition
        Cache.OrgPartition orgPartition = Cache.Org.getPartition('CACHE_PARTITION_NAME');
        if (orgPartition != null && orgPartition.get('objectlistfromdescribe') != null) {
            // Load from cache
            objectNames = (List<Options>) orgPartition.get('objectlistfromdescribe');
        } else {
            // Values populated from the schema describe, same as the previous code
            objectNames = fetchObjectNamesUsingGlobalDescribe();
            if (orgPartition != null) {
                // Put the values into the org partition with a 300-second time to live
                orgPartition.put('objectlistfromdescribe', objectNames, 300,
                    Cache.Visibility.ALL, true);
            }
        }
    } catch (Exception e) {
        throw new AuraHandledException(e.getMessage());
    }
    return objectNames;
}
// CPU Time - 20 ms
// Heap Size - 1,300 bytes

This code performs the same operation but caches the result. First, we instantiate a cache partition with Cache.Org.getPartition(). On a cache hit, we return the stored list immediately; on a miss, we build the schema map exactly as before, and the call to orgPartition.put() places the results in the cache for later use. Our processing requirements diminished significantly, consuming only 20 ms of CPU time.
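Because cached describe data can go stale when objects are added to or removed from the environment, it can be useful to evict an entry explicitly and force the next call to rebuild it. As a minimal sketch, reusing the hypothetical 'CACHE_PARTITION_NAME' partition and 'objectlistfromdescribe' key from the example above:

public static void clearObjectNameCache() {
    // Assumes the same partition name as the example above
    Cache.OrgPartition orgPartition = Cache.Org.getPartition('CACHE_PARTITION_NAME');
    if (orgPartition != null) {
        // remove() returns true if the key existed and was evicted
        Boolean removed = orgPartition.remove('objectlistfromdescribe');
    }
}

Calling this method after a deployment that changes the object model ensures the next invocation of fetchObjectNamesUsingGlobalDescribeFromCache() repopulates the cache with fresh describe results rather than waiting for the 300-second time to live to expire.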

Despite the breathtaking advances in processing power, developers should always ensure they are writing efficient code that possesses a minimal processing footprint and scales with increased volume.

Further Reading

Salesforce Developer Guide - Platform Cache

The Paradox of Efficiency

It started earlier than I thought. In January, I wrote an article making predictions for 2023. One of my subheadings was “A Year of Doing More with Less,” where I argued that companies need to look for focused, strategic areas of investment to increase efficiency. We’re now seeing significant layoffs in the technology sector. Year to date, Google has laid off 12,000 workers, Microsoft 10,000 employees, and Salesforce 8,000. Unfortunately, these companies are taking a short-term view of efficiency that will damage their long-term success. Instead of finding areas where technologies can work together to provide multiplicative value, these CEOs are chasing short-term gains over long-term efficiency. I would argue that this quest for efficiency may decrease real efficiency.

Aggressive Headcount Reduction Limits Cross-Selling

Customer acquisition has its limits. Eventually, continued growth requires selling additional services to existing customers. Gathering revenue figures from sales is a trivial task, but it is challenging to pinpoint how large a role customer satisfaction with the servicing of existing products plays. The difficulty of attributing hard figures to servicing makes these areas prime targets for headcount reduction. Why would a customer consider making another purchase when the business cannot provide support for products they've already bought? Platform lock-in has limits, and customers will eventually move to a competitor. Headcount reduction decisions are often made with the flawed assumption that all other variables will remain constant—productivity gains elsewhere will offset the smaller workforce. But this is seldom true unless the reduction is minimal.

The Inefficient Process of Gaining Efficiency

A consequence of chasing efficiency is its opportunity cost—it drains resources that would have promoted real efficiency in the long term. Isn't it curious that many of the companies most aggressively pursuing efficiency at all costs are often stuck making incremental improvements to existing technology? Why aren't they most often responsible for radical, groundbreaking innovations? Why do comparatively small startups with different organizational values often make these genuine innovations? Companies with aggressive management directives to slash costs and reduce overhead often fail to invest in areas that produce innovation. In the long term, this lack of investment profoundly impacts company culture, often precipitating an exodus of forward-looking employees. Our industrial society values rapid and predictable returns on investment and neglects the necessarily inefficient process of innovation—shareholders see it as wasteful. This is the crux of the paradox: the quest for "friction-free" processes may be slowing the discovery of more fundamental changes that would have a much more profound impact on efficiency.

Our society views imagination with a strong sense of ambivalence. Humans are naturally short-term thinkers, and it takes an abundance of thoughtfulness to understand how a series of decisions made today will make a larger impact tomorrow.