A recap of thoughts after spending one week at one of the most influential conferences in the world.
I still feel the energy shared by the people we met during the conference. I've always believed that ideas are stored in our minds and the only way to pick them up is to start meeting other inspirational people who can help you do that. This post is a recap of my thoughts after spending one week at one of the most influential conferences in the world.
What's new…
I'd like to mention only those announcements which really grabbed my attention, and I encourage you to dive deeper into the AWS documentation for those which might be useful for your scenarios. Now, let's talk business.
With Firecracker you're able to launch lightweight micro-virtual machines (microVMs) in non-virtualized environments in a fraction of a second. It combines the security and workload isolation well known from traditional VMs with the efficiency well known from containers, so with Firecracker we get it all. AWS confirmed that this solution is already being used in the Lambda and Fargate services.
More on that: https://firecracker-microvm.github.io
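To give you a feel for how low-level it is, here's a minimal sketch of driving Firecracker through its REST API over a unix socket. It assumes a `firecracker` process is already running with `--api-sock /tmp/firecracker.socket`, that the demo kernel and rootfs paths exist locally, and that you have the `requests-unixsocket` package installed; none of these specifics come from the announcement itself.

```python
# Sketch: booting a microVM via Firecracker's API socket.
import requests_unixsocket  # pip install requests-unixsocket

session = requests_unixsocket.Session()
base = "http+unix://%2Ftmp%2Ffirecracker.socket"  # URL-encoded socket path

# Point the microVM at an uncompressed kernel image (placeholder path).
session.put(f"{base}/boot-source", json={
    "kernel_image_path": "/tmp/hello-vmlinux.bin",
    "boot_args": "console=ttyS0 reboot=k panic=1 pci=off",
})

# Attach a root filesystem (placeholder path).
session.put(f"{base}/drives/rootfs", json={
    "drive_id": "rootfs",
    "path_on_host": "/tmp/hello-rootfs.ext4",
    "is_root_device": True,
    "is_read_only": False,
})

# Boot the microVM.
session.put(f"{base}/actions", json={"action_type": "InstanceStart"})
```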
Based on the next-generation AWS Nitro System, the new C5n instances make 100 Gbps networking available to network-heavy workloads without requiring customers to use custom drivers or recompile applications. One of the best examples might be the significant acceleration of data transfer to and from S3 buckets.
Available today in: US East (N. Virginia and Ohio), US West (Oregon), Europe (Ireland), and AWS GovCloud (US-West).
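Launching one is nothing special, which is the point: a plain `run_instances` call with the new instance type. In this boto3 sketch the AMI ID and key name are placeholders; any ENA-enabled AMI such as Amazon Linux 2 works without extra drivers.

```python
# Sketch: launching a 100 Gbps-capable C5n instance.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: any ENA-enabled AMI
    InstanceType="c5n.18xlarge",      # the 100 Gbps tier
    KeyName="my-key",                 # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["InstanceId"])
```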
Imagine treating your EC2 instance like a laptop: you hibernate it and your application starts up right where you left off. This feature is now available for EC2 instances. The AWS page says: "your instance's EBS root volume and any other attached EBS data volumes are persisted between sessions. Additionally, data from memory (RAM) is also saved to your EBS root volume. Upon resume, your EBS root device is restored from its prior state, including the RAM content. Previously attached data volumes are reattached and the instance retains its instance ID. While the instances are in hibernation, you pay only for the EBS volumes and Elastic IP addresses attached to it."
Available in: the US East (N. Virginia, Ohio), US West (N. California, Oregon), Canada (Central), South America (Sao Paulo), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo), and EU (Frankfurt, London, Ireland, Paris) Regions.
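In boto3 this is two small pieces: opting in at launch, then hibernating instead of stopping. A minimal sketch with placeholder IDs; note that hibernation needs an encrypted EBS root volume and a supported instance type.

```python
# Sketch: launch with hibernation enabled, then hibernate later.
import boto3

ec2 = boto3.client("ec2")
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
)
instance_id = resp["Instances"][0]["InstanceId"]

# Later: persist RAM to the EBS root volume instead of a plain stop.
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)
```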
This is probably one of my favorite announcements. Layers enable developers to build custom runtimes and to share and manage common code between functions, which, to me, is pretty awesome! Technically, they are a new type of artifact containing code and data, and they may be referenced by several Lambda functions simultaneously. You simply put common code in a zip file and upload it to Lambda as a layer. Nothing more than better code reuse.
Layers can be used in all regions where Lambda is available and, as far as I know, they are already supported by the Serverless Framework.
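The flow in boto3 looks roughly like this; the zip files and the role ARN are placeholders for your own artifacts.

```python
# Sketch: publish shared code as a layer, attach it to a function.
import boto3

lmb = boto3.client("lambda")

with open("common-code.zip", "rb") as f:
    layer = lmb.publish_layer_version(
        LayerName="my-common-utils",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.7"],
    )

lmb.create_function(
    FunctionName="my-function",
    Runtime="python3.7",
    Role="arn:aws:iam::123456789012:role/lambda-role",  # placeholder
    Handler="app.handler",
    Code={"ZipFile": open("function.zip", "rb").read()},
    Layers=[layer["LayerVersionArn"]],  # reference the shared code
)
```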
With this feature enabled, your customers are able to access serverless applications from any HTTP client, including web browsers, with the ALB routing requests based on their content. In previous years ALB targets were restricted to EC2 instances, containers, and on-premises servers, but now Lambda functions can be targets too.
Available in: US East (N. Virginia), US East (Ohio), US West (Northern California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), South America (São Paulo), and GovCloud (US-West) AWS Regions.
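Wiring it up takes three calls: a target group of type `lambda`, a permission so the ALB may invoke the function, and the registration itself. A boto3 sketch with a placeholder function ARN:

```python
# Sketch: putting a Lambda function behind an ALB.
import boto3

elbv2 = boto3.client("elbv2")
lmb = boto3.client("lambda")

fn_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-function"  # placeholder

tg = elbv2.create_target_group(Name="lambda-tg", TargetType="lambda")
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# The ALB needs explicit permission to invoke the function.
lmb.add_permission(
    FunctionName=fn_arn,
    StatementId="alb-invoke",
    Action="lambda:InvokeFunction",
    Principal="elasticloadbalancing.amazonaws.com",
    SourceArn=tg_arn,
)

elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": fn_arn}])
```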
Apart from the official info from AWS: "Amazon Timestream is a purpose-built time series database service for collecting, storing, and processing time-series data such as server and network logs, sensor data, and industrial telemetry data for IoT and operational applications. Timestream also automates rollups, retention, tiering, and compression of data, so you can manage your data at the lowest possible cost. Timestream is serverless, so there are no servers to manage", I'd like to add that after seeing this information some of our customers started considering getting rid of InfluxDB. Time will tell.
Basically, there's nothing more to mention here except simple pay-per-request pricing for read and write requests. You pay only for what you use, which makes it easy to balance cost and performance. I'd like to see it in action.
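If (as it reads to me) this is DynamoDB's new on-demand capacity mode, opting in is a single parameter at table creation; here's a minimal boto3 sketch with a placeholder table:

```python
# Sketch: a DynamoDB table billed per request, no capacity planning.
import boto3

ddb = boto3.client("dynamodb")
ddb.create_table(
    TableName="events",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # instead of ProvisionedThroughput
)
```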
Announced for MySQL-compatible databases, this allows a single Aurora database to span multiple AWS Regions, with fast replication enabling low-latency global reads and disaster recovery from region-wide outages. According to the documentation, it uses storage-based replication with a typical latency of under a second. One step further toward high availability with serverless perks.
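A rough boto3 sketch of the setup, assuming you already have a primary Aurora MySQL cluster (identifiers are placeholders, and details like engine versions are omitted):

```python
# Sketch: promote an existing cluster into a global database, then
# add a read-only secondary cluster in another Region.
import boto3

rds_us = boto3.client("rds", region_name="us-east-1")
rds_us.create_global_cluster(
    GlobalClusterIdentifier="my-global-db",
    SourceDBClusterIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:cluster:my-primary"  # placeholder
    ),
    Engine="aurora",
)

rds_eu = boto3.client("rds", region_name="eu-west-1")
rds_eu.create_db_cluster(
    DBClusterIdentifier="my-secondary",
    Engine="aurora",
    GlobalClusterIdentifier="my-global-db",  # joins the global cluster
)
```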
Generally, at Chaos Gears we tend to use external solutions for log analytics, but this one from AWS is presented as a fully integrated, interactive, and pay-as-you-go log analytics service for CloudWatch. An interesting bait might be that you pay only for the queries you run. AWS tells its users that they can publish log-based metrics, create alarms, and correlate logs and metrics together, which should increase overall environment visibility. I can't say anything for sure yet; I need to investigate further.
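Queries are asynchronous: you start one, then poll for results. A small boto3 sketch against a placeholder log group:

```python
# Sketch: run a Logs Insights query and poll until it completes.
import time
import boto3

logs = boto3.client("logs")
now = int(time.time())

q = logs.start_query(
    logGroupName="/aws/lambda/my-function",  # placeholder
    startTime=now - 3600,                    # last hour
    endTime=now,
    queryString="fields @timestamp, @message | sort @timestamp desc | limit 20",
)

while True:
    resp = logs.get_query_results(queryId=q["queryId"])
    if resp["status"] == "Complete":
        break
    time.sleep(1)

for row in resp["results"]:
    print(row)
```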
With this particular one, customers can easily move their files to AWS without needing to modify their applications or manage any SFTP servers. Data uploaded or downloaded using SFTP lands in an Amazon S3 bucket. You pay only for the use of the SFTP server endpoint and for data uploaded and downloaded.
Available in: US East (N. Virginia, Ohio), US West (Oregon, N. California), Canada (Central), Europe (Ireland, Paris, Frankfurt, London), and Asia Pacific (Tokyo, Singapore, Sydney, Seoul).
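Standing up an endpoint and a user is a couple of boto3 calls. In this sketch the role ARN, bucket path, and SSH key are placeholders; the role must grant the user access to the target bucket.

```python
# Sketch: a managed SFTP endpoint backed by S3, plus one user.
import boto3

transfer = boto3.client("transfer")

server = transfer.create_server(IdentityProviderType="SERVICE_MANAGED")

transfer.create_user(
    ServerId=server["ServerId"],
    UserName="alice",
    Role="arn:aws:iam::123456789012:role/sftp-s3-access",  # placeholder
    HomeDirectory="/my-bucket/alice",                      # S3 landing path
    SshPublicKeyBody="ssh-rsa AAAA... alice@example.com",  # placeholder
)
```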
Visibility of the traffic and enough capacity might end the era called "Internet Weather". I'm also quoting the official info from AWS because it explains the purpose perfectly: "Network layer service that you can deploy in front of your internet applications to improve the availability and performance for your globally-distributed user base. AWS Global Accelerator uses AWS' highly available and congestion-free global network to direct internet traffic from your users to your applications running in AWS Regions. With AWS Global Accelerator, your users are directed to your application based on geographic location, application health, and routing policies that you can configure. AWS Global Accelerator also allocates static anycast IP addresses that are globally unique for your application and do not change, thus removing the need to update clients as your application scales". With Global Accelerator, it's easy to run applications across multiple AWS Regions or move them between regions without changing the front-end interface seen by users.
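The building blocks are an accelerator, a listener, and per-Region endpoint groups. A boto3 sketch pointing at a placeholder ALB (the Global Accelerator API itself is served from us-west-2):

```python
# Sketch: accelerator -> TCP listener -> endpoint group with an ALB.
import uuid
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="my-app", IdempotencyToken=str(uuid.uuid4()))
acc_arn = acc["Accelerator"]["AcceleratorArn"]

lst = ga.create_listener(
    AcceleratorArn=acc_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 80, "ToPort": 80}],
    IdempotencyToken=str(uuid.uuid4()),
)

ga.create_endpoint_group(
    ListenerArn=lst["Listener"]["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:"
                      "loadbalancer/app/my-alb/abc123",  # placeholder ALB
    }],
    IdempotencyToken=str(uuid.uuid4()),
)
```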
AWS Transit Gateway is the service that enables customers to connect lots of Amazon Virtual Private Clouds (VPCs) and their on-premises networks using a single gateway. No more custom hubs and other magic to tie all your VPCs and on-premises locations together! AWS Transit Gateway acts as a hub where traffic is routed among all the connected networks, the well-known spokes. This hub-and-spoke model significantly simplifies management and reduces operational costs because each network only has to connect to the AWS Transit Gateway.
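The hub-and-spoke idea is visible straight in the API: one gateway, then one attachment per VPC instead of a mesh of peerings. A boto3 sketch with placeholder VPC and subnet IDs:

```python
# Sketch: one transit gateway, many VPC attachments.
import boto3

ec2 = boto3.client("ec2")

tgw = ec2.create_transit_gateway(Description="hub for all our VPCs")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# In practice you'd wait for the gateway to become "available" first.
for vpc_id, subnet_id in [("vpc-aaa111", "subnet-aaa111"),
                          ("vpc-bbb222", "subnet-bbb222")]:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=[subnet_id],
    )
```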
Peak performance of Provisioned IOPS SSD (io1) volumes increased from 32,000 to 64,000 IOPS and from 500 MB/s to 1,000 MB/s of throughput per volume when attached to Nitro-based EC2 instances!
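Provisioning at the new ceiling is one call; since io1 allows up to 50 IOPS per GiB, 64,000 IOPS needs a volume of at least 1,280 GiB, and the full rate is only delivered on Nitro-based instances.

```python
# Sketch: an io1 volume at the new 64,000 IOPS maximum.
import boto3

ec2 = boto3.client("ec2")
vol = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    VolumeType="io1",
    Size=1280,    # GiB; 50 IOPS/GiB ratio requires at least this much
    Iops=64000,
)
print(vol["VolumeId"])
```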
Manage billions of objects stored in Amazon S3 with a single API request. I hope this ends the waste of time spent coding custom application software just to perform bulk API actions on S3 objects. "S3 Batch Operations manages retries, tracks progress, sends notifications, generates completion reports, and delivers events to AWS CloudTrail for all changes made and tasks executed".
Delivers automatic cost savings by moving data between two access tiers, frequent access and infrequent access, when access patterns change. It's ideal for data with unknown or changing access patterns. S3 Intelligent-Tiering monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the infrequent access tier. There are no retrieval fees in S3 Intelligent-Tiering. If an object in the infrequent access tier is accessed later, it is automatically moved back to the frequent access tier. Another piece of automation easing the pain of access-pattern analysis. This should definitely be tested to see whether it meets general requirements and really saves you money.
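Opting in is just a storage class on upload (bucket and key below are placeholders); existing objects can also be moved over with a lifecycle transition rule.

```python
# Sketch: upload an object straight into Intelligent-Tiering.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-bucket",
    Key="logs/2018/12/app.log",
    Body=b"...",
    StorageClass="INTELLIGENT_TIERING",
)
```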
Now, this one looks massive. With this service it is possible to automatically copy data from S3 to FSx for Lustre, run your workloads, and then write the results back to S3. FSx for Lustre also enables you to burst compute-intensive workloads from on-premises to AWS by letting you access your FSx file system over AWS Direct Connect or VPN. If you have a lot of data to analyze, you're able to process file-based data sets with FSx for Lustre straight from Amazon S3 or other durable data stores.
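The S3 link is declared right at file-system creation via import and export paths. A boto3 sketch with placeholder subnet and bucket names (capacity is in GiB):

```python
# Sketch: a Lustre file system hydrated from, and exporting back to, S3.
import boto3

fsx = boto3.client("fsx")
fs = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=3600,
    SubnetIds=["subnet-0123456789abcdef0"],        # placeholder
    LustreConfiguration={
        "ImportPath": "s3://my-analytics-bucket",  # lazy-loads objects
        "ExportPath": "s3://my-analytics-bucket/results",
    },
)
print(fs["FileSystem"]["FileSystemId"])
```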
S3 Glacier Deep Archive offers the lowest-priced storage in AWS and reliably stores any amount of data. Think about all those customers who need durable archival copies of data that rarely, if ever, needs to be accessed. Data can be retrieved within 12 hours.
Generally available in 2019.
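Since it isn't shipped yet, this is speculation on my part, but presumably it will slot in like the other storage classes: a `DEEP_ARCHIVE` class on upload and an asynchronous restore for retrieval. The class name and flow below are my assumption based on the announcement, not a published API.

```python
# Speculative sketch: archive an object, later request a restore.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-archive-bucket",
    Key="backups/2018/full-dump.tar.gz",
    Body=open("full-dump.tar.gz", "rb"),
    StorageClass="DEEP_ARCHIVE",  # assumption: class name per announcement
)

# Retrieval is asynchronous, on the order of 12 hours.
s3.restore_object(
    Bucket="my-archive-bucket",
    Key="backups/2018/full-dump.tar.gz",
    RestoreRequest={"Days": 7},
)
```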
After this announcement customers can migrate workloads from existing write-once-read-many (WORM) systems into Amazon S3 and configure S3 Object Lock at the object and bucket levels to prevent object-version deletions prior to predefined retain-until dates or legal-hold dates, using the AWS SDK, AWS CLI, REST API, or the S3 management console. AWS says that S3 Object Lock can be configured in one of two modes.
When deployed in Governance mode, AWS accounts with specific IAM permissions are able to remove object locks from objects. If you require stronger immutability to comply with regulations, you can use Compliance mode, in which the protection cannot be removed by any user, including the root account.
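A boto3 sketch of a WORM bucket with a default one-year Compliance-mode retention; the bucket name is a placeholder, and note that Object Lock has to be enabled at bucket creation.

```python
# Sketch: bucket-level Object Lock with a default retention rule.
import boto3

s3 = boto3.client("s3")

s3.create_bucket(
    Bucket="my-worm-bucket",
    ObjectLockEnabledForBucket=True,  # must be set at creation time
)

s3.put_object_lock_configuration(
    Bucket="my-worm-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)
```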
App Mesh uses the open-source Envoy proxy, configures each microservice to export monitoring data, and implements consistent communications-control logic across your application. This makes it easy to quickly pinpoint the exact location of errors and to automatically re-route network traffic when there are failures or when code changes need to be deployed. The service can be used with Amazon ECS, Amazon EKS, and Kubernetes on EC2.
Available today as a public preview in US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland).
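Since it's in public preview the API may still change, but the starting point is simply creating a mesh, after which each microservice is described as a virtual node fronted by Envoy. A minimal sketch:

```python
# Sketch: create an App Mesh mesh (preview API, subject to change).
import boto3

appmesh = boto3.client("appmesh")
mesh = appmesh.create_mesh(meshName="my-app-mesh")
print(mesh["mesh"]["meshName"])
```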
Cloud Map, which looks like a brother of the well-known Consul, allows you to register any application resources, such as databases, queues, microservices, and other cloud resources, under custom names. Cloud Map constantly checks the health of resources to make sure the location data is up to date.
Available in: US East (Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), and Asia Pacific (Mumbai) Regions.
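Here's a boto3 sketch of the register-then-discover flow with placeholder names; namespace creation is asynchronous, so the namespace ID comes from polling the returned operation.

```python
# Sketch: register a database endpoint under a custom name, discover it later.
import time
import boto3

sd = boto3.client("servicediscovery")

op = sd.create_http_namespace(Name="myapp.internal")
while True:
    result = sd.get_operation(OperationId=op["OperationId"])
    if result["Operation"]["Status"] == "SUCCESS":
        break
    time.sleep(2)
namespace_id = result["Operation"]["Targets"]["NAMESPACE"]

svc = sd.create_service(Name="orders-db", NamespaceId=namespace_id)
sd.register_instance(
    ServiceId=svc["Service"]["Id"],
    InstanceId="primary",
    Attributes={"endpoint": "orders-db.cluster-abc.us-east-1.rds.amazonaws.com"},
)

# Consumers resolve by name instead of hard-coding the endpoint.
found = sd.discover_instances(NamespaceName="myapp.internal",
                              ServiceName="orders-db")
print(found["Instances"][0]["Attributes"]["endpoint"])
```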
AWS announced ECR support as a source for CodePipeline, so you can easily set up a pipeline which automatically triggers a blue/green deployment when you upload a new image to Amazon ECR. Apart from that, it's now possible to use blue/green deployments with AWS CodeDeploy.
Blue/green available in: US East (Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), South America (Sao Paulo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), and Asia Pacific (Mumbai).
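For a feel of what the new source type looks like, here's the ECR source stage as it would appear inside a boto3 `create_pipeline` definition; this is just a fragment, and the repository and artifact names are placeholders.

```python
# Sketch: an ECR source stage for a CodePipeline definition.
ecr_source_stage = {
    "name": "Source",
    "actions": [{
        "name": "ECRSource",
        "actionTypeId": {
            "category": "Source",
            "owner": "AWS",
            "provider": "ECR",   # the newly supported source provider
            "version": "1",
        },
        "configuration": {
            "RepositoryName": "my-app",  # placeholder ECR repository
            "ImageTag": "latest",
        },
        "outputArtifacts": [{"name": "image_details"}],
    }],
}
```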
As I mentioned at the beginning, this is just a portion of all the announcements, but these are the ones I'd like to investigate further and maybe (not without challenging them) use in customers' environments. What I like about the cloud is the general flexibility in terms of solving problems. If you carefully watch the trend of changes and the new types of services launched by cloud providers, you will notice that they are handing us tools as remedies for our problems, for taming our chaos. Hopefully we're going to use them wisely and only when they really solve our issues, not because they've been launched by a particular vendor and we blindly trust its ideas. Unfortunately, cloud vendors sometimes provide tools which are only an elementary piece of work and need a lot of labor from their "subvendors", or from you, my friend.
Below I'm pasting a pic of my idol, Netflix. It is the reason why I love to automate using serverless tools, why I convince customers to constantly challenge themselves with "what if" questions, and why I tell them that working with clouds is a continuous improvement of internal communication and of the cloud environment. Most importantly, they taught me "not to waste time on repeatable tasks as a simple cog in the machine".
We'd love to answer your questions and help you thrive in the cloud.