Hitting resource limits in your AWS CloudFormation template? I'll show a method that worked!
As most of you probably know, the AWS CloudFormation service has its limits, and one in particular is pretty annoying: the maximum number of resources you can declare in a single CloudFormation template. Right now, it is capped at 200. As someone who ran into this limit, I am going to show you the method that worked for me.
I will also cover sharing API Gateway endpoints and custom domains between services.
It won’t be a surprise if I tell you that, while using the Serverless Framework, you have to deal with the CloudFormation machinery lying under the hood. To be honest, I do not treat that as a drawback, but keep in mind that getting to know its constraints takes time, especially during development. Each time you run sls deploy with its options, you are either creating a new CloudFormation stack or updating an existing one.
After the stack is updated, the number of resources created within that stack is displayed in the resource summary. Note that it doesn’t count just functions but every resource you’ve added on top of them: IAM roles, database tables, S3 buckets, SQS queues, and whatever else is a mandatory part of your serverless project.
The AWS Console also shows this information in the CloudFormation dashboard (here, the number of resources after the service split):
Unfortunately, I have to confess that I missed that point during development and reached almost 200 resources (somewhere around 190). The real obstacle, though, was neither the CloudFormation limit itself nor how close I had come to it; what confused me was what to do next. First of all, I had to figure out how to break the service into multiple logical services while keeping one common API Gateway.
NOTE: By default, each serverless project generates a new API Gateway.
My first thought when I encountered this problem was: let’s find something out of the box. The most reasonable tool seemed to be this one: https://github.com/dougmoscrop/serverless-plugin-split-stacks. If I had been starting the project from scratch, this plugin would hopefully have saved me time and spared me the worries about limits. However, that was not my case: I already had about 40 Lambda functions running, along with some additional AWS services, and new parts of the microservice were already waiting in line. If you take a glance at the description at the beginning of the README.md, you’ll notice it reads:
“It is a good idea to select the best strategy for your needs from the start because the only reliable method of changing strategy, later on, is to recreate the deployment from scratch”.
No way, not on Saturday.
By the way, if you’re starting a new serverless project, consider this plugin. Out of the box, it offers several split strategies: Per Lambda, Per Type, and Per Lambda Group.
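For reference, wiring the plugin in looks roughly like this. The option names below are taken from the plugin’s README as I remember it (Per Lambda, Per Type, Per Lambda Group map to perFunction, perType and perGroupFunction), so double-check them against the current version before copying:

```yaml
# serverless.yml (sketch) -- enable serverless-plugin-split-stacks
plugins:
  - serverless-plugin-split-stacks

custom:
  splitStacks:
    perFunction: false      # "Per Lambda" strategy
    perType: true           # "Per Type" strategy
    perGroupFunction: false # "Per Lambda Group" strategy
```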
Before making any changes, I had one main microservice located in a single directory (see below). My idea was to keep one microservice but extract the different kinds of logic inside it. To be clear, I am not talking about business logic but about a bunch of modules, each a sizeable group of Lambda functions, all working against one service.
My initial directory:
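The original directory listing is not reproduced here; schematically, it was a single service with one serverless.yml carrying everything (the names below are purely illustrative):

```
service-a/
├── serverless.yml     # one stack, creeping toward the 200-resource limit
├── package.json
└── functions/
    ├── bookings/
    ├── packages/
    └── ...
```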
It is worth highlighting that you can follow this pattern if your application has many nested paths (presented below with service-a and service-b) and your goal is to split them into smaller services. Although the two services are deployed via different serverless.yml files, both “a” and “b” refer to the same parent path /posts.
Keep in mind that CloudFormation will throw an error if we try to generate an existing path resource. More on how to deal with that in the next paragraph.
Example from serverless.com:
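The docs snippet itself is not reproduced here, but it boils down to the second service reusing the API Gateway (and the already existing /posts resource) created by the first one. Paraphrasing the Serverless Framework documentation, with placeholder IDs:

```yaml
# service-a/serverless.yml -- creates the API Gateway and the /posts path
service: service-a
provider:
  name: aws
functions:
  createPost:
    handler: posts.create
    events:
      - http:
          method: post
          path: posts
```

```yaml
# service-b/serverless.yml -- reuses service-a's gateway instead of creating its own
service: service-b
provider:
  name: aws
  apiGateway:
    restApiId: xxxxxxxxxx             # REST API ID of service-a's gateway
    restApiRootResourceId: xxxxxxxxxx # resource ID of "/"
    restApiResources:
      /posts: xxxxxxxxxx              # resource ID of the existing /posts path
functions:
  createComment:
    handler: comments.create
    events:
      - http:
          method: post
          path: posts/{id}/comments
```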
After the split, I ended up with directories like the ones shown below, each with its own functions and some shared AWS resources.
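Again only schematically (illustrative names), the post-split layout looked something like this, with one serverless.yml, and therefore one CloudFormation stack, per module:

```
service-a-module-1/
├── serverless.yml     # creates the shared API Gateway and the /v1 resource
├── resources/
└── functions/
service-a-module-2/
├── serverless.yml     # imports the gateway and attaches its own functions
└── functions/
service-b/
└── serverless.yml
```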
Long story short, if you create multiple API services with separate Serverless Framework files (serverless.yml), they will each get a unique API endpoint, like in the example shown below:
https://b6xm33po42.execute-api.eu-west-1.amazonaws.com/dev for service-a
and
https://b6xm33pb99.execute-api.eu-west-1.amazonaws.com/dev for service-b
You can assign different base paths to your services: for example, api.example.com/service-a can point to one service, while api.example.com/service-b points to another (a sketch of one way to set up such base-path mappings follows the list below). But if you then try to split up service-a itself, you’ll face the challenge of sharing the custom domain across the resulting services:
* service-a-api ⇒ GET https://api.example.com/service-a/{bookingId}
* service-a-api ⇒ POST https://api.example.com/service-a
* service-a-api ⇒ PUT https://api.example.com/service-a/{bookingId}
* service-b-api ⇒ POST https://api.example.com/service-b
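The article doesn’t say which tooling produces those base-path mappings; one common way (an assumption on my part) is the serverless-domain-manager plugin, which mounts each service under the shared domain with its own base path, roughly like this:

```yaml
# service-a/serverless.yml (sketch) -- mounts this service at api.example.com/service-a
plugins:
  - serverless-domain-manager

custom:
  customDomain:
    domainName: api.example.com
    basePath: service-a               # service-b would use basePath: service-b
    stage: ${self:provider.stage}
    createRoute53Record: true
```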
So, what’s the issue, you may ask. Let me explain.
Generally, each path part is a separate API Gateway resource object, and each path part is a child resource of the preceding one. So the aforementioned path part /service-a is a child resource of /, and /service-a/{bookingId} is a child resource of /service-a. Going further, we would like service-b-api to own the /service-b path. That, too, would be a child resource of /; however, / is created in the service-a service, so we need a way to share that resource across services. Since sharing a custom domain wasn’t exactly my issue, let me, my dear readers, give you a solution for the shared-resource problem in the next paragraphs.
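To make the parent/child relationship concrete, here is a minimal sketch of what such path parts look like as raw CloudFormation resources (ApiGatewayRestApi is the logical ID the Serverless Framework gives the REST API it generates; the other logical names are made up):

```yaml
resources:
  Resources:
    ApiGatewayResourceServiceA:              # the /service-a path part
      Type: AWS::ApiGateway::Resource
      Properties:
        RestApiId:
          Ref: ApiGatewayRestApi
        ParentId:
          Fn::GetAtt: [ApiGatewayRestApi, RootResourceId]   # its parent is "/"
        PathPart: service-a
    ApiGatewayResourceServiceABookingId:     # the /service-a/{bookingId} path part
      Type: AWS::ApiGateway::Resource
      Properties:
        RestApiId:
          Ref: ApiGatewayRestApi
        ParentId:
          Ref: ApiGatewayResourceServiceA    # child of /service-a
        PathPart: '{bookingId}'
```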
As my case was about sharing the same API endpoint among logic modules, I started with the Serverless Framework documentation: https://serverless.com/framework/docs/providers/aws/events/apigateway/. It gives only a brief explanation, without detailed examples, so I decided to search further. I broke my problem into separate parts and focused on the endpoint I had created via the initial service-a directory (the one below is a fake one):
https://b6xm330b00.execute-api.eu-west-1.amazonaws.com/dev
Reading the “Custom domain sharing” part, notice the info about child resources and their dependence on the preceding path parts. At this point, I’d like to add one thing: all child resources, as well as the root resource, have their own IDs:
The AWS Console dashboard showing the IDs:
* https://b6xm330b00.execute-api.eu-west-1.amazonaws.com/dev pointing to bqdplee0re
* /v1 pointing to i2315j
With that knowledge, I knew I had to find a way to keep two different microservice modules (defined in different serverless.yml files) pointing to one common endpoint. I made some attempts based on the serverless.com docs, but each time I tried to deploy service-a-module-2 via sls deploy, I got an error telling me that I was trying to generate an already existing path resource, /v1.
Finally, I reached my goal. Just look how simple it was.
service-a-module-1 serverless.yml config snippets:
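The original snippets were published as embedded gists, so here is a reconstructed sketch of what service-a-module-1 might look like. The service, handler, and path names are illustrative; the mechanism is the one described below:

```yaml
# service-a-module-1/serverless.yml (sketch)
service: service-a-module-1

custom:
  api_ver: v1                                  # version prefix, also used as PathPart below

provider:
  name: aws
  region: eu-west-1
  stage: ${opt:stage, 'dev'}

functions:
  getPackage:
    handler: handlers/package.get
    events:
      - http:
          method: get
          path: ${self:custom.api_ver}/package # => GET /v1/package

resources:
  - ${file(resources/api-gateway.yml)}         # child /v1 resource + exported outputs (next snippet)
```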
Additionally, I had to define, in external files, the resource for the child API Gateway path (/v1), plus the output values I wanted to export (the API Gateway ID, the root path ID referring to /, and the child path ID for /v1), so they could be shared with service-a-module-2:
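The original gist is not embedded here either, so below is a reconstruction of what that external resources file could look like. The logical IDs and export names are mine; ApiGatewayRestApi, however, is the logical ID the framework really uses for the generated REST API:

```yaml
# service-a-module-1/resources/api-gateway.yml (sketch)
Resources:
  # Explicit /v1 path part under the root of the API generated by this service.
  ApiGatewayResourceV1:
    Type: AWS::ApiGateway::Resource
    Properties:
      RestApiId:
        Ref: ApiGatewayRestApi
      ParentId:
        Fn::GetAtt: [ApiGatewayRestApi, RootResourceId]
      PathPart: ${self:custom.api_ver}         # "v1"

Outputs:
  ServiceAApiGatewayId:
    Value:
      Ref: ApiGatewayRestApi
    Export:
      Name: service-a-${self:provider.stage}-ApiGatewayId
  ServiceARootResourceId:
    Value:
      Fn::GetAtt: [ApiGatewayRestApi, RootResourceId]
    Export:
      Name: service-a-${self:provider.stage}-RootResourceId
  ServiceAV1ResourceId:
    Value:
      Ref: ApiGatewayResourceV1
    Export:
      Name: service-a-${self:provider.stage}-V1ResourceId
```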
The child path is defined by PathPart: ${self:custom.api_ver}.
service-a-module-2 serverless.yml config snippets.
NOTE: It has to be deployed in the same region as service-a-module-1, because cross-stack CloudFormation output imports only work within a single region.
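And a matching sketch of service-a-module-2, importing the values exported above (the export names are the ones I invented in the previous snippet; the essential piece is provider.apiGateway):

```yaml
# service-a-module-2/serverless.yml (sketch)
service: service-a-module-2

custom:
  api_ver: v1

provider:
  name: aws
  region: eu-west-1                    # must match module-1: Fn::ImportValue only works within a region
  stage: ${opt:stage, 'dev'}
  apiGateway:
    restApiId:
      Fn::ImportValue: service-a-${self:provider.stage}-ApiGatewayId
    restApiRootResourceId:
      Fn::ImportValue: service-a-${self:provider.stage}-RootResourceId
    restApiResources:
      # reuse the existing /v1 resource instead of trying to create it again
      /v1:
        Fn::ImportValue: service-a-${self:provider.stage}-V1ResourceId

functions:
  createBooking:
    handler: handlers/booking.create
    events:
      - http:
          method: post
          path: ${self:custom.api_ver}/booking # => POST /v1/booking on the shared gateway
```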
As you’ve noticed, service-a-module-1 has a function ready to be invoked via path:
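(Taking the sketch above as a reference; the concrete path is illustrative, not necessarily the author’s original one.)

```yaml
# service-a-module-1 (sketch) -- function exposed at GET /v1/package
events:
  - http:
      method: get
      path: ${self:custom.api_ver}/package
```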
Whereas service-a-module-2 has a path defined within a different serverless.yml file:
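(Again referring to the sketch above, with an illustrative path of its own under the same /v1 parent.)

```yaml
# service-a-module-2 (sketch) -- function exposed at POST /v1/booking
events:
  - http:
      method: post
      path: ${self:custom.api_ver}/booking
```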
Before moving on to the summary, let me explain one more thing, useful for those of you whose API versioning approach differs from embedding the version in the URL. This topic tends to provoke philosophical debates, and people often lose sight of the real goal: developing software that serves the business, thinking about the API, and making it easy to consume. We decided on route versioning because, during development, our stage variable may in some cases carry a different Lambda alias. Then, based on the resource path (such as /v1/package), the environment variable, and the Lambda function name with its alias, API Gateway can choose the corresponding function version to invoke.
https://b6xm330b00.execute-api.eu-west-1.amazonaws.com/dev/v1/package
https://b6xm330b00.execute-api.eu-west-1.amazonaws.com/dev/v2/package
We'd love to answer your questions and help you thrive in the cloud.