
Store with automation #2


In the previous post, we talked a bit about building Docker images using AWS native services. Today, we'd like to show you how we use those services to automate the deployment of our event-driven platform.

Automate Serverless invocation

The scenario is straightforward. We are a startup, and we need simple solutions that give us as much value as possible and save us time, so we can focus more on the product we are working on.

The team needed the most direct way possible to deploy our Serverless services running in the same environment, to track changes, and to handle notifications. All of the above had to come at the lowest possible cost of implementation and maintenance.

Today, I want to show you how we’ve achieved this.

A short introduction to our product

Gearbox is a platform designed and developed entirely with AWS serverless services. It's a bridge between your brilliant idea and the AWS cloud. Based on experience gathered continuously from a wide variety of projects, it lets developers and engineers work with their data and applications in a more secure, robust way, without worrying about complex infrastructure puzzles.

Gearbox is for teams where new features, extreme pace and short-term goals are the most important factors, which makes it a potent tool for small, innovative, technology-focused companies.

So far, our product is based on:

– 49 Lambda functions

– Python 3.6

– 6 microservices

– 7 repositories on GitHub

– Serverless Framework + Terraform

OK, if you're not dead yet, let's take a look at the guards that take care of our Lambda deployments.

Serverless deployment

The services, the infrastructure and all the functions mentioned above are defined as code, using Terraform, the Serverless Framework and Python. AWS ECR, CodePipeline and CodeBuild remove the need for any local deployment and provide extended deployment monitoring.

Developer – trigger

Any change in the code (Lambda functions, Serverless configurations, Python plugins and so on) pushed to a designated branch in the GitHub repository triggers the whole deployment process, which means every change is tested and deployed immediately.

GitHub – source code

Here we have defined our Lambda functions using the Serverless Framework. The source repository contains the functions and a serverless.yml file that stores the resource definitions and function configuration.

CodePipeline – workflow

Here we could have avoided CodePipeline and its additional costs altogether, as CodeBuild works with a GitHub source just as well. The reason we extended our pipeline a bit is that we wanted to integrate our deployments with notifications and to add other, more advanced tests in the future.

codepipeline.tf

resource "aws_codepipeline" "pipeline" {
  name     = "${var.project}-${var.service}-${var.app}-${var.stage}"
  role_arn = "${aws_iam_role.codepipeline_role.arn}"
     
  artifact_store {
    location = "${aws_s3_bucket.codepipeline.bucket}"
    type     = "S3"
  }
     
  stage {
    name = "${var.stage_1_name}"
     
    action {
      name             = "${var.stage_1_action}"
      category         = "Source"
      owner            = "ThirdParty"
      provider         = "GitHub"
      version          = "1"
      output_artifacts = ["${var.artifact}"]

      configuration {
        Owner      = "${var.repository_owner}"
        Repo       = "${var.repository}"
        Branch     = "${var.branch}"
        OAuthToken = "${data.aws_ssm_parameter.github_token.value}"
      }
    }
  }

  stage {
    name = "${var.stage_2_name}"

    action {
      name             = "${var.stage_2_action}"
      category         = "Build"
      owner            = "AWS"
      provider         = "CodeBuild"
      input_artifacts  = ["${var.artifact}"]
      version          = "1"
      output_artifacts = ["${var.artifact}-container"]

      configuration {
        ProjectName = "${var.codebuild_proj_id}"
      }
    }
  }

  stage {
    name = "${var.stage_3_name}"

    action {
      category        = "Invoke"
      name            = "${var.stage_3_action}"
      owner           = "AWS"
      provider        = "Lambda"
      version         = "1"
      input_artifacts = ["${var.artifact}-container"]

      configuration {
        FunctionName = "${var.lambda_name}"

        UserParameters = <<EOF
{
  "region" : "${var.aws_region}",
  "ecr" : "${var.ecr_repository_name}",
  "image" : "${var.app}"
}
EOF
      }
    }
  }
}

AWS CodePipeline costs $1 per active pipeline per month, plus the usage of any underlying services. However, AWS charges only for pipelines older than 30 days. Defining all our AWS infrastructure as code lets us tear down and recreate pipelines at will, which keeps the bill at $0 and avoids a mess in our development environment.

ECR – images

From ECR, we retrieve our Serverless images, all tested and built in a separate pipeline, then tagged and pushed to our private repository. By including the Serverless version and the commit hash in the image tag, we gained better control over, and clarity of, changes in our base image.
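To illustrate the idea, a tag under this scheme could be built like the sketch below. Both values are placeholders for illustration, not our real version or commit.

serverless_version = "1.45.1"  # placeholder, not our real version
commit_hash = "a1b2c3d"        # placeholder, not a real commit

# A hypothetical image tag combining the Serverless version and commit hash.
image_tag = "sls-{}-{}".format(serverless_version, commit_hash)
print(image_tag)  # -> sls-1.45.1-a1b2c3d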

Storing images in ECR costs $0.10 per GB per month. There are also charges for data transfer between other AWS services, but they apply only to cross-region usage. Right now it costs us just ~$0.20 monthly. We achieved that by taking great care of repository hygiene and by continuously monitoring how many images accumulate. A built-in cleanup feature would be a nice improvement, to be honest, but for now we handle this task using Lambda.
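A minimal sketch of such a cleanup function could look like the code below. The repository name, the KEEP_LAST threshold and the file name are our assumptions for illustration, not the exact function we run.

cleanup_ecr.py

import os
import boto3

# Hypothetical configuration: keep only the newest KEEP_LAST images.
KEEP_LAST = int(os.environ.get("KEEP_LAST", "10"))
REPOSITORY = os.environ.get("ECR_REPOSITORY", "gearbox-serverless")  # placeholder name

ecr = boto3.client("ecr")

def lambda_handler(event, context):
    # Collect every image in the repository and sort them newest first.
    images = []
    paginator = ecr.get_paginator("describe_images")
    for page in paginator.paginate(repositoryName=REPOSITORY):
        images.extend(page["imageDetails"])
    images.sort(key=lambda image: image["imagePushedAt"], reverse=True)

    # Everything beyond the newest KEEP_LAST images is considered stale.
    stale = [{"imageDigest": image["imageDigest"]} for image in images[KEEP_LAST:]]

    # batch_delete_image accepts at most 100 image IDs per call.
    for start in range(0, len(stale), 100):
        ecr.batch_delete_image(repositoryName=REPOSITORY,
                               imageIds=stale[start:start + 100])
    return {"deleted": len(stale)}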

CodeBuild – deployer

You've probably noticed the cheerful yellow creatures with strange glasses when you looked at the diagram. Let's say they are our workers in CodeBuild. They are always there, and we don't have to care about their maintenance. The one thing we do need to do is provide a Docker image with a configured Serverless environment and describe how much performance they need (memory & vCPUs). The remaining steps CodeBuild needs to take are specified in the buildspec file.

buildspec.yml

version: 0.2

phases:
  install:
    commands:
      - echo "Looking for working directory..."
      - cd $SERVERLESS_PATH
      - echo "Installing additional plugins..."
      - sls plugin install -n $SERVERLESS_PLUGINS
  build:
    commands:
      - echo "Deploying Serverless functions..."
      - $SERVERLESS_CMD
      
  post_build:
    commands:
      - echo "Serverless deployment completed on `date`"

Here we pay only per build minute. For example, the smallest compute instance, with 3 GB of memory and 2 vCPUs, costs only $0.005/min. Currently that means ~100 build minutes and ~$0.50 per month.

AWS Lambda with Slack – notification

Because we frequently update and debug Lambda functions, we looked for a solution that eases the pain of creating and managing them all, and found one: the Serverless Framework.

The Serverless Framework makes developing and deploying Lambda functions, along with managing the AWS infrastructure resources around them, much more comfortable than before and, of course, saves time we can spend on the implementation of our product. The Lambda code stays the same; the only thing that changes is the way of defining and deploying it. In Serverless Framework terms, we create a service, which is like a project: it contains the functions, the events that trigger them, and the resources they require, all managed by the provider.

serverless.yml

service: gearbox-executor

provider:
  name: aws
  runtime: python3.6
  region: eu-west-1
  stage: dev
  memorySize: 128
  apiKeys:
    - ${self:custom.app}-${self:custom.service_acronym}-apikey-${self:custom.stage}
  timeout: 5 # optional, in seconds
  versionFunctions: true
  tags: # Optional service wide function tags
    Owner: chaosgears
    Project: gearbox
    Service: executor
    Environment: dev

functions:
  pipeline-notifications:
    name: ${self:custom.app}-${self:custom.service_acronym}-pipe-end-notify
    description: Sends notification to Slack about CodePipeline finish state
    handler: pipeline_end.lambda_handler
    runtime: python3.6
    role: GearboxImagePipelineRole
    environment:
      slack_url_info: ${self:custom.slack_url_info}
      slack_channel_info: ${self:custom.slack_channel_info}
      aws_region: ${self:provider.region}
    tags:
      Name: ${self:custom.app}-${self:custom.service_acronym}-pipe-end-notify
      Project: ${self:custom.app}
      Service: ${self:custom.service_acronym}
      Environment: ${self:custom.stage}
 ...

We chose a Slack channel as the notification receiver simply because it is our central communication application. It lets us work in a more integrated way: everyone has quick access to it, and it is effortless to integrate with AWS and other tools. To avoid meaningless notifications, we decided to send an alarm only when a deployment fails.
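For illustration, a minimal sketch of what pipeline_end.lambda_handler could look like is below. We assume here that the function is wired to a CloudWatch Events rule for pipeline state changes and that Slack is reached through an incoming webhook; the exact trigger and message layout of our real function are not shown in this post.

pipeline_end.py

import json
import os
import urllib.request

# slack_url_info and slack_channel_info match the environment section of
# serverless.yml above; the CloudWatch Events trigger is our assumption.
SLACK_URL = os.environ["slack_url_info"]
SLACK_CHANNEL = os.environ["slack_channel_info"]

def lambda_handler(event, context):
    # CloudWatch Events puts the pipeline name and state in event["detail"].
    detail = event.get("detail", {})
    state = detail.get("state")
    if state != "FAILED":
        return  # notify only about unsuccessful deployments

    payload = {
        "channel": SLACK_CHANNEL,
        "text": ":x: Pipeline {} finished with state {}".format(
            detail.get("pipeline", "unknown"), state),
    }
    request = urllib.request.Request(
        SLACK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)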

Serverless Framework is open-source and completely free of charge.

Slack provides a free Basic plan, as well as Plus and Enterprise plans that offer more message storage and other features, depending on your needs.

Finally, the Lambda functions: they are charged per number of requests plus memory usage in GB-seconds per month. However, the first 1M requests and 400,000 GB-seconds each month are free. For now, our 49 active Lambda functions, with ~300,000 GB-seconds and ~2,000 requests, cost us $0 at the end of the month.
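As a quick sanity check on that $0, here is a back-of-the-envelope calculation using the figures above; the per-unit prices are the public Lambda list prices at the time of writing.

lambda_bill.py

# Back-of-the-envelope AWS Lambda bill for the usage quoted above.
FREE_REQUESTS = 1000000              # monthly free tier: requests
FREE_GB_SECONDS = 400000             # monthly free tier: compute
PRICE_PER_REQUEST = 0.20 / 1000000   # $0.20 per 1M requests
PRICE_PER_GB_SECOND = 0.0000166667   # $ per GB-second

requests = 2000
gb_seconds = 300000

# Only usage above the free tier is billable.
billable_requests = max(0, requests - FREE_REQUESTS)
billable_gb_seconds = max(0, gb_seconds - FREE_GB_SECONDS)

bill = (billable_requests * PRICE_PER_REQUEST +
        billable_gb_seconds * PRICE_PER_GB_SECOND)
print("monthly Lambda bill: ${:.2f}".format(bill))  # -> $0.00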

Thundra – another monitoring tool on trial

Besides all the resources described in the diagram, it's worth mentioning that deployment is not the final step that determines the success of a product. Continuous monitoring, and the improvements based on it, are the key. We tried a few diagnostic tools for Lambda functions, but in the end we chose to stick with Thundra, which, by the way, is an AWS Advanced Technology Partner.

After a couple of weeks of usage, we took a liking to their way of putting everything (logs, tracing, metrics) in one place. The key feature for now is traces: they let us follow the path of a particular request and, thanks to the provided analyses, fix latency problems, errors or any other issues that occur. Time will tell whether it's going to be our final choice, but our tests showed, among other things, that it provides plenty of valuable detail that is necessary during the development phase.

Thundra is not the cheapest solution, but it seems to do the work the others don't. They offer a two-week free trial and no charges for up to 1M monthly invocations, with a 2 GB monthly data allotment, 7 days of data retention and community Slack support. Of course, the higher plans provide higher limits, better support and faster data loads. Our startup needs haven't hit any charges so far.

What do you want?

In this whole technological competition, it is easy to forget one's original goals and the role tools play in achieving them. Someone could ask why we chose native AWS solutions for CI/CD when they are not especially well known or famous. Well, maybe they aren't, but they give us something more important than catchy words to use during presentations: we reduced maintenance work to an absolute minimum, created a common, integrated environment for our developers, and saved money and time in the process.

We started our project almost a year ago. Currently, the deployed AWS infrastructure costs us ~$9.50 monthly, all of it covered by AWS credits (one of the benefits of having a partnership). But you can get credits in a variety of ways: by attending AWS events and webinars, by joining a startup accelerator, and so on. Or simply by using the 12-month free tier.

From the beginning, our main goal has been the product and people who understand the values of our business. The desired result should never be the code or a tool itself! We believe in automation. We value simplicity over complexity. We believe that technology can help us, but should not dictate the direction. And finally – we just want a sandwich.