This short summary covers the tech and ideas I found interesting during KubeCon + CloudNativeCon 2019. I gathered these pieces of information from numerous talks, visits to booths in the sponsors’ area, and panels. Some of the info used in this article comes from specific vendor documentation. Let’s start.

The first company on the list is CircleCI. These guys enable engineering teams with automation. They work with Linux, macOS, and Android – SaaS or behind your firewall. Once a software repository on GitHub or Bitbucket is authorized and added as a project, every code change triggers automated tests in a clean container or VM: each time your job runs, CircleCI spins up a fresh container or VM to run it in.

CircleCI sends an email notification of success or failure after the tests complete. CircleCI also includes integrated Slack, HipChat, Campfire, Flowdock, and IRC notifications. Code test coverage results are available from the details page for any project for which a reporting library was added.

CircleCI may be configured to deploy code to various environments, including AWS CodeDeploy, AWS EC2 Container Service (ECS), AWS S3, Google Kubernetes Engine (GKE), and Heroku. Other cloud service deployments are easily scripted using SSH, or by installing the service’s API client within your job configuration.

Long story short – it’s CI/CD software and an alternative to Jenkins. Check their product if you haven’t heard about it yet. Anyway, what really caught my attention was not CircleCI in itself but the idea of orbs – packages of config that you can use to get started with the CircleCI platform quickly. Orbs enable you to share, standardize, and simplify config across your projects. You may also want to use orbs as a reference for config best practices. Refer to the CircleCI Orbs Registry for the complete list of available orbs. Orbs consist of three elements: commands (reusable sets of steps that you can invoke with specific parameters within an existing job), jobs (a set of steps plus the environment in which they should be executed), and executors (which define the environment in which the steps of a job will run). Before using orbs, you may find it helpful to understand the design decisions and methodologies behind them. The main principles are:

  • Orbs are transparent – If you can execute an orb, you (and anyone else) can view the source of that orb.
  • Metadata is available – Every key can include a description key, and an orb may include a description at the top level.
  • Production orbs are always semantically versioned – CircleCI allows development orbs that have versions starting with dev:.
  • Production orbs are immutable – Once an orb has been published to a semantic version, it cannot be changed. This prevents unexpected breakage or changing behaviors in core orchestration.
  • One registry (per install) – Each installation of CircleCI has only one registry where orbs can be kept.
  • Organization Admins publish production orbs. Organization members publish development orbs – All namespaces are owned by an organization. Only the admin(s) of that organization can publish/promote a production orb. All organization members can publish development orbs.
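To give a feel for how the pieces fit together, here is a minimal `.circleci/config.yml` sketch that imports an orb from the registry and invokes a job it packages. The specific orb name, version, and parameter are assumptions for illustration – check the registry for what a given orb actually exposes:

```yaml
version: 2.1

orbs:
  # Import a production orb from the registry (namespace/orb@semver).
  node: circleci/node@4.7.0

workflows:
  test:
    jobs:
      # Invoke a job defined inside the orb, passing a parameter.
      - node/test:
          version: "16.13"
```

Because production orbs are immutable and semantically versioned, pinning `@4.7.0` here guarantees the same orchestration on every run.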

My next finding was OpenSDS. It is an open source community working under The Linux Foundation to address storage integration challenges in scale-out cloud-native environments. Its vision is to connect siloed data solutions in order to build a self-governed and intelligent data platform. The OpenSDS project was started towards the end of 2016 by a group of companies working together to resolve key data management pain points reported by storage vendors and end users. Why is this project important? Imagine one logical storage layer between all of your platforms – cloud, on-premises disk arrays, native services – and, most importantly, one standard API to communicate with it and manage it. Cool? If you find it interesting, dive into the project’s repositories a bit. It is also worth mentioning that this project is supported by storage users and vendors, including Dell EMC, Intel, Huawei, Fujitsu, Western Digital, Vodafone, NTT and Oregon State University. In the future, the project will seek cooperation with other upstream open source communities such as the Cloud Native Computing Foundation, Docker, OpenStack, and the Open Container Initiative.

My next stop was Pulumi. This software lets you create infrastructure as code on many platforms using general-purpose languages. You can write code and reuse it thanks to functions, classes, packages, debugging, testability, and more. The end result is far less “copy and paste” with greater productivity. What’s more, it works the same way no matter which cloud you’re targeting. Other approaches use YAML, JSON, or bespoke DSLs that you need to master – and convince your team to use. These “languages” fall short of general-purpose languages: they lack abstractions and reuse, and end up reinventing familiar concepts like package managers. Pulumi’s SDK is fully open source and extensible. It enables you to participate in a rich ecosystem of libraries that ease common tasks, ranging from containers through serverless to infrastructure, and pretty much everything in between. Languages and clouds are supported by an extensible plugin model that enables public, private, and even hybrid cloud support. Out of the box it supports AWS, Kubernetes, Microsoft Azure, Google Cloud Platform, the Pulumi Cloud Framework, and OpenStack. Pulumi is a multi-language runtime: the choice of language does not determine which clouds may be targeted – each language is just as capable as the rest and supports the entire surface area of available resources. Right now Pulumi supports Node.js (JavaScript, TypeScript, or any other Node.js-compatible language), Python 3 (3.6 or greater), and Go.
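For a flavor of what this looks like in practice, here is a minimal Pulumi program in Python that declares a single AWS S3 bucket. The resource name is arbitrary, and actually running it requires the Pulumi CLI, the `pulumi_aws` package, and AWS credentials, so treat it as a sketch rather than a ready-to-run deployment:

```python
import pulumi
import pulumi_aws as aws

# Declare an S3 bucket as a resource. Pulumi's engine diffs this
# desired state against the current stack state and applies changes.
bucket = aws.s3.Bucket("my-bucket")

# Export the generated bucket name as a stack output.
pulumi.export("bucket_name", bucket.id)
```

The point is that `bucket` is an ordinary Python object – you can wrap it in functions, loop over it, or publish it as a package, which is exactly what YAML and bespoke DSLs make hard.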

I found another interesting product at the Twistlock booth. Twistlock is a provider of full-stack, full-lifecycle container and cloud-native cybersecurity for teams using Docker, Kubernetes, serverless, and other cloud-native technologies. Twistlock integrates with any CI tool or registry, and runs wherever you choose to run your VMs, containers, and cloud-native applications. It scales security through automatic learning of normal app behavior and communication with other cloud services, as well as automated creation of “allow list” runtime models for every version of every application. Everything is API-enabled, programmable, and easily integrated with existing tools and services in your automation pipelines. Twistlock provides dynamic displays of your environments with live, interactive, multilayered maps of every application component, plus real-time security health with clear insights that rank vulnerabilities and compliance issues based on your unique use cases. You can leverage flight data recorders for every host and container, with real-time event stream processing of activity across your clusters. Twistlock ensures runtime prevention with automatic, active blocking of anomalous activity and of explicitly blocked processes, network traffic, or file activity. It allows only known-good applications that meet your compliance and vulnerability requirements to run, and enforces least-privilege networking and micro-segmentation across your environments, preventing service account sprawl. It supports AWS, Docker, GCP, IBM Cloud, Kubernetes, Mesosphere DC/OS, Microsoft Azure, Oracle Cloud Infrastructure, Pivotal Cloud Foundry, Rancher, Red Hat OpenShift, and serverless functions.
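The “allow list” runtime model idea is simple to sketch. This is a generic illustration of the concept, not Twistlock’s actual implementation: behavior observed during a learning phase becomes the model, and anything outside it is flagged or blocked at enforcement time.

```python
from dataclasses import dataclass, field

@dataclass
class RuntimeModel:
    """Allow-list model learned for one version of one application."""
    allowed_processes: set = field(default_factory=set)
    allowed_dest_ports: set = field(default_factory=set)

    def learn(self, process: str, dest_port: int) -> None:
        # Learning phase: observed normal behavior is added to the model.
        self.allowed_processes.add(process)
        self.allowed_dest_ports.add(dest_port)

    def check(self, process: str, dest_port: int) -> bool:
        # Enforcement phase: anything outside the learned model is anomalous.
        return (process in self.allowed_processes
                and dest_port in self.allowed_dest_ports)

model = RuntimeModel()
model.learn("nginx", 443)   # observed during the learning window
```

With this model, `model.check("nginx", 443)` passes, while a never-observed process such as `curl` talking to an odd port would be blocked.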

There was also Harness, a Continuous Delivery-as-a-Service platform that automates the entire CD process, uses machine learning to protect you when deployments fail, and equips you with enterprise-grade security every step of the way. The pipeline builder enables your team to build and execute complete continuous delivery pipelines with serial or parallel workflows across their applications, services, and environments in mere minutes. There is also a workflow wizard that lets your team rapidly build deployment workflows with out-of-the-box support for canary or blue/green deployments, using cloud technologies such as AWS EC2, AWS Lambda, Docker, and Kubernetes. Harness supports continuous verification with unsupervised machine learning to automatically verify application deployments in production, detecting performance and quality regressions from tools such as AppDynamics, New Relic, Splunk, Elastic, and Sumo Logic. Furthermore, the 24×7 ServiceGuard feature monitors the performance of your release. You can configure automated rollback and let Harness do it – it rolls back to the last working artifact version and runtime configuration with no scripting or code required. Harness also has secrets management capabilities: you can use Harness SecretStore or the HashiCorp Vault integration to seamlessly reference your secrets across all your deployment workflows and pipelines. Auditing is included too – you can keep a full audit trail of every deployment, so you know the who, what, where and when behind every action. Harness provides your DevOps and team leads with insight into every application, environment, version, and deployment. You can also implement continuous delivery using configuration-as-code, directly from your Git repo for easy management and version control. The complete list of integrations is pretty impressive: Jenkins, JFrog Artifactory, Nexus, Travis CI, Bamboo, Docker Hub, AWS, TFS, Pivotal Cloud Foundry, Azure Container Services, etc.
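As a generic illustration of the kind of decision continuous verification automates (not Harness’s actual algorithm – Harness applies unsupervised ML across many APM and log metrics), a canary check might compare the canary’s error rate against the baseline and trigger a rollback on regression:

```python
def should_rollback(baseline_error_rate: float,
                    canary_error_rate: float,
                    tolerance: float = 0.01) -> bool:
    """Roll back if the canary's error rate regresses beyond tolerance.

    A real system weighs many metrics and learns thresholds from data;
    this sketch uses a single fixed threshold for clarity.
    """
    return canary_error_rate > baseline_error_rate + tolerance

# 0.5% canary vs. 0.4% baseline is within the 1% tolerance: keep it.
keep_decision = should_rollback(0.004, 0.005)
# 8% canary vs. 0.4% baseline is a clear regression: roll back.
rollback_decision = should_rollback(0.004, 0.08)
```

The value of automating this is that the rollback to the last working artifact and runtime configuration happens without a human paging through dashboards.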

And last but not least, Styra. Styra enables declarative authorization to secure Kubernetes. It has a built-in library of compliance policies – and a simple GUI – that lets you implement and customize policy-as-code. You can do a pre-run, which allows you to monitor and validate policy changes before committing, and to mitigate risk before deployment. Styra uses a declarative model that defines the desired state to prevent security drift and eliminate errors before they can occur. You can see graphical trends over time to prove security and compliance to auditors, security teams, and business leaders. Styra is based on OPA – Open Policy Agent. It is a general-purpose policy engine with uses ranging from authorization and admission control to data filtering. OPA provides greater flexibility and expressiveness than hard-coded service logic or ad-hoc domain-specific languages, and it comes with powerful tooling to help you get started. OPA is hosted by the Cloud Native Computing Foundation (CNCF) as an incubating-level project.
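To make the policy-as-code idea concrete, here is a hypothetical admission check expressed in Python – real OPA policies are written in its declarative language, Rego, and the registry names and field layout below are assumptions for illustration only:

```python
# Hypothetical trusted-registry allow list (illustrative values).
TRUSTED_REGISTRIES = {"registry.internal.example.com", "gcr.io"}

def admission_decision(pod: dict) -> dict:
    """Deny any pod whose containers pull images from untrusted registries.

    Mirrors what a declarative admission-control policy would express:
    input document in, allow/deny decision plus violations out.
    """
    violations = [
        c["image"]
        for c in pod.get("containers", [])
        if c["image"].split("/")[0] not in TRUSTED_REGISTRIES
    ]
    return {"allowed": not violations, "violations": violations}

decision = admission_decision({
    "containers": [{"image": "docker.io/library/nginx:latest"}]
})
```

Because the policy is pure data-in, decision-out, it can be evaluated in a pre-run against planned changes – which is exactly the workflow Styra’s pre-run validation supports.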

During KubeCon I had a chance to see many new tools, even for well-known solutions. Those two and a half days were not nearly enough. Regrettably, I didn’t have time to see everything and talk with everybody. This short list is just the tip of an iceberg rising from an ocean of products. Thanks for reading, and see you at the next KubeCon + CloudNativeCon!