Serverless execution is gaining momentum. If you aren’t familiar with the concept, the idea is that you provide only the code you want to run, and the platform makes it available as a service. It’s “serverless” because you, the user, never have to provision a server in any way, shape, or form. The best part, and the reason for its recent popularity, is that you pay only for execution time, whereas traditionally you would pay for the server to be up and running around the clock. It is an elastic model that scales on demand, with no charges for setup, teardown, or idle server time.
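To make that concrete, a serverless function is typically just a handler the platform invokes on demand. Here is a minimal sketch in the style of an AWS Lambda Python handler; the event payload shape (a `name` field) is an illustrative assumption, not a fixed contract:

```python
import json

def lambda_handler(event, context):
    # The platform calls this handler whenever a request arrives;
    # there is no server for you to provision or manage.
    # 'event' carries the request payload; 'context' holds runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for illustration; in production the platform
# invokes the handler and you are billed only for the run time.
if __name__ == "__main__":
    print(lambda_handler({"name": "serverless"}, None))
```

Everything outside the handler (scaling, routing, retries) is the platform’s job, which is exactly what makes the model feel “serverless.”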
This type of solution is both flexible and affordable. Cost tracks what your business actually uses, as opposed to paying for provisioned servers that sit idle. Serverless brings an unprecedented alignment between resource allocation and resource consumption; pair that with high-speed caching technologies and the next generation of elastic compute is here.
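A quick back-of-the-envelope calculation shows how the pay-per-use model works. Serverless platforms such as AWS Lambda typically bill per request plus per unit of memory-time consumed (GB-seconds); the rates below are illustrative assumptions for the sketch, not current pricing:

```python
def serverless_cost(invocations, duration_ms, memory_mb,
                    gb_second_rate=0.0000166667,  # assumed rate, for illustration
                    per_request_rate=0.0000002):  # assumed rate, for illustration
    """Estimate the bill for a batch of function invocations.

    You pay only while the function runs: total GB-seconds is
    invocations x duration (s) x memory (GB), plus a small per-request fee.
    """
    gb_seconds = invocations * (duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * gb_second_rate + invocations * per_request_rate

# One million 100 ms invocations at 512 MB: roughly a dollar at these rates,
# versus paying for an always-on server whether or not it handles traffic.
print(round(serverless_cost(1_000_000, 100, 512), 2))
```

The key point is the shape of the formula: zero traffic means zero compute cost, which is what the alignment between allocation and consumption buys you.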
A couple final notes:
- My extension code is available on GitHub here: https://github.com/InRule/Labs-AWS-Lamda
- Stay tuned for a similar blog post on Microsoft’s answer to serverless architecture: Azure Functions.