Introduction
The overhead of starting a Lambda invocation, commonly called a Cold Start, is one of the most common problems faced by Serverless platforms. For latency-critical workloads, Cold Starts are something to avoid to keep things running smoothly. Several strategies exist to prevent them, each with its own logic, among which are Lambda SnapStart, Provisioned Concurrency, and the Custom Warmer. In this article, we will contrast these three strategies along several dimensions.
The YouTube Channels in both English (En) and French (Fr) are now accessible, feel free to subscribe by clicking here.
Lambda SnapStart
How it Works
SnapStart is a performance optimization technique that helps reduce a Lambda function's initialization time. Fully managed by AWS, it prevents Cold Starts by creating a snapshot of the initialized function when you publish a version, then resuming from that cached snapshot on subsequent invocations. With SnapStart, Cold Start latency is improved by up to 90%.
Pricing
Using SnapStart comes at no additional cost; it's free.
Supported Runtime
At the moment, it's available only for the Java 11 runtime (Amazon Corretto).
Complexity to Set Up
It's available through the AWS Console and doesn't require any changes to your source code: you just have to activate it and let it work its magic.
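For those who prefer the command line, SnapStart can also be enabled with the AWS CLI; a minimal sketch, where the function name is illustrative (the snapshot itself is only taken when you publish a version):

```shell
# Enable SnapStart on an existing function (function name is illustrative);
# the snapshot is created when a new version is published.
aws lambda update-function-configuration \
  --function-name my-java-function \
  --snap-start ApplyOn=PublishedVersions

# Publish a version so the snapshot gets taken
aws lambda publish-version --function-name my-java-function
```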
Limit
One consequence is that ephemeral data and credentials have no expiry guarantees, since invocations resume from a snapshot. For example, if your code uses a library that creates an expiring token at the function level, that token may already have expired when a new instance of the function is launched from the snapshot via SnapStart.
Likewise, if your code establishes a long-lived connection to a network service during the init phase, that connection may no longer be valid when the snapshot is resumed, and will have to be re-established during invocation.
Here's a more detailed article that covers its impacts and how to set it up:

Provisioned Concurrency
How it Works
Fully managed by AWS, Provisioned Concurrency keeps your function warm and ready to respond in double-digit milliseconds at the scale you provision. With this option enabled, you choose how many instances of your function run simultaneously for incoming requests, unlike the on-demand model, where Lambda decides when to launch a new instance per request.
The particularity of this feature is its startup speed: all setup happens before the invocation, including the initialization code, leaving the function in a ready state with your code downloaded and the underlying container fully prepared. This feature is only available for published versions or aliases of your function.
Pricing
There are additional costs related to it:
- You pay for how long provisioned capacity stays active.
- You pay for the number of concurrent instances you keep available.
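A back-of-envelope estimate can be sketched as below; the rate used is a placeholder, not an official price, so always check the current AWS Lambda pricing page for your region:

```javascript
// Rough Provisioned Concurrency cost estimate.
// RATE_PER_GB_SECOND is a hypothetical placeholder, not an official AWS price.
const RATE_PER_GB_SECOND = 0.000004; // $/GB-second (illustrative)

function provisionedConcurrencyCost(instances, memoryMb, hoursActive) {
  // Cost scales with instances kept warm, memory size, and active duration.
  const gbSeconds = instances * (memoryMb / 1024) * hoursActive * 3600;
  return gbSeconds * RATE_PER_GB_SECOND;
}

// e.g. 10 instances of a 512 MB function kept warm for 24 hours
console.log(provisionedConcurrencyCost(10, 512, 24).toFixed(2));
```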
Supported Runtime
It is available for all runtimes.
Complexity to Set Up
The Provisioned Concurrency option is available through the AWS Console, the Lambda API, the AWS CLI, AWS CloudFormation, or Application Auto Scaling, and it doesn't require any changes to the existing source code.
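With the AWS CLI, for instance, a configuration can be sketched as follows (function name and alias are illustrative):

```shell
# Keep 10 warm instances for the "prod" alias of an illustrative function
aws lambda put-provisioned-concurrency-config \
  --function-name my-function \
  --qualifier prod \
  --provisioned-concurrent-executions 10
```

Note that the `--qualifier` must be a published version or alias, never `$LATEST`.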
Limit
Provisioned Concurrency is not supported with Lambda@Edge.
Custom Warmer
How it Works
This strategy prevents Cold Starts by explicitly keeping the function warm through a pinging mechanism. It uses Amazon EventBridge rules to schedule function invocations at a fixed frequency, so the function is triggered automatically at the interval you choose (generally every 15 minutes).
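Such a schedule can be sketched with the AWS CLI as below; the rule name, account ID, and function ARN are illustrative, and the `{"warmer":true}` payload is a convention you define yourself:

```shell
# Schedule a warming invocation every 15 minutes (names and ARNs are illustrative)
aws events put-rule \
  --name warm-my-function \
  --schedule-expression "rate(15 minutes)"

# Point the rule at the function, passing a payload the handler can recognize
aws events put-targets \
  --rule warm-my-function \
  --targets '[{"Id":"warmer","Arn":"arn:aws:lambda:us-east-1:123456789012:function:my-function","Input":"{\"warmer\":true}"}]'
```

You also need to grant EventBridge permission to invoke the function (e.g. via `aws lambda add-permission`).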
Also, it's generally implemented with open-source libraries, but you are free to build a custom one manually.
Pricing
No additional cost is required for the scheduling itself: there are no charges for rules using Amazon EventBridge. Note, however, that each warming ping is still billed as a regular Lambda invocation.
Supported Runtime
You can use it with any runtime you need.
Complexity to Set Up
Putting a Warming strategy in place requires some changes to the source code, since the Warmer triggers periodic invocations of the function.
The function must detect whether a call comes from the Warmer and behave accordingly, as in the following example:

A sample implementation is available in the following repository, with the NPM script `npm run job:warm:env`.
Limit
This approach doesn't guarantee the elimination of Cold Starts. When the function sits behind a Load Balancer, it won't always work, since the LB can route to instances that aren't warmed yet. Also, in production environments, when functions scale out to meet traffic, you have no guarantee that the new instances are in line to get warmed.
———————
We have just started our journey to build a network of professionals and grow our free knowledge-sharing community even more. It gives you a chance to learn interesting things about topics like cloud computing, software development, and software architecture, while keeping the door open to more opportunities.
Does this speak to you? If YES, feel free to Join our Discord Server to stay in touch with the community and be part of independently organized events.
———————
If the Cloud is of interest to you this video covers the 6 most Important Concepts you should know about it:
Conclusion
In sum, each of these strategies has its logic to improve Cold Starts:
- SnapStart takes advantage of the SnapShots technique.
- Custom Warmer implements a scheduled ping mechanism.
- Provisioned Concurrency uses provisioned function instances.
Let's note that Cold Starts aren't a critical issue for most functions, since they affect only approximately 1% of invocations, but we still hope to have covered a good part of the contrasts between these strategies. If you have any suggestions, please don't hesitate to let us know in the comments section below.
Thanks for reading this article. Like, recommend, and share if you enjoyed it. Follow us on Facebook, Twitter, and LinkedIn for more content.