Why is our workflow just as slow as before, or even slower, despite all the changes we made and despite migrating to the cloud?
There was a time when the word DevOps was on the lips of every innovative company Director, and of those who strived to be innovative. The time when everybody wanted to be Agile, because only that would let them deliver products quicker. When people started to move from local data centers to the public cloud to be more effective. So why, after all this reorganization of processes, influx of new people, technology changes and, above all, money invested, does our workflow run the same way or even slower?
In this two-part article, I would like to share a few thoughts on some potential reasons – technological and organizational – based on my experience.
Ten years ago, the DevOps culture aimed to bring together two extremely different departments – Development and IT Operations. It meant sitting people with different goals and priorities down together to shorten the time needed to implement functions and changes in the target software. In short: take a few people from Development and a few from Operations and tell them to work on the same fucking piece. Done.
Nowadays, there are as many types of DevOps as there are heads in the world, and it makes my hair stand on end when I hear that word coming out of recruiters’ mouths.
In some companies, Dev and Ops are still separated but called DevOps anyway (mostly a naming reason). That kind of “DevOps” doesn’t change anything. The workflow is slow, projects still need dedicated people on the Team, and companies try to recruit them on their own.
Others create dedicated DevOps Teams, but these have to work as one Noah for a whole Ark of developers (mostly a lack-of-funding reason). Here, getting to know the specifics of every project takes time, and when you combine that with staff rotation, it’s no wonder that global DevOps Teams are far from effective. Moreover, it is difficult to organize the exchange of knowledge in such large structures.
Some companies are simply not ready to make the DevOps transformation in an efficient way, because of:
– old monolithic architecture
– lack of DevOps specialists
– lack of funding
– lack of realistic transformation plan
but no one wants to admit that (pressure reason).
So, if you really want to speed up your workflow, choose dedicated people and let them work closely together. Try to deliver small pieces, more and more often. Changing the whole world takes time.
How annoying working on different local environments can be is well known to those who have lost hours debugging errors which weren’t even caused by a wrong implementation. Especially if the Team juggles several operating systems and different versions of plugins, providers and tools. It can take even a few days for new members to set up their local environments. And this is only the beginning of the game.
“Need my help? Sorry, I have to configure my environment first”.
“You have documentation for that, you say. How often is it updated? …Oops”.
It’s not possible to maintain all those documents unless you have some kind of automation for it. And even then, manual configuration and all the updates won’t stop being a pain in the ass. So please, don’t cause issues with manual configuration. Just forget about it. Create common environments for the Team from the very beginning. Put all actions – builds, tests, deployments, scripts – into some simple CI/CD pipelines.
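One way to stop the “works on my machine” discussions is to pin the Team’s tool versions in the repository and check local environments against them automatically. A minimal sketch of such a check, with made-up tool names and versions:

```python
# Sketch: compare the local environment against a version manifest pinned
# in the repository. The tools and versions below are illustrative only.

PINNED_VERSIONS = {
    "terraform": "1.5.7",
    "kubectl": "1.28.2",
    "helm": "3.12.3",
}

def check_environment(installed: dict) -> list:
    """Return human-readable mismatches between the local environment
    and the pinned manifest."""
    problems = []
    for tool, wanted in PINNED_VERSIONS.items():
        have = installed.get(tool)
        if have is None:
            problems.append(f"{tool}: not installed (want {wanted})")
        elif have != wanted:
            problems.append(f"{tool}: have {have}, want {wanted}")
    return problems

# One version drift and one missing tool are both reported:
issues = check_environment({"terraform": "1.5.7", "kubectl": "1.27.0"})
```

Run as a pre-commit hook or as the first CI stage, this turns environment drift from a multi-hour debugging session into a one-line error message.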
From the Infrastructure standpoint, DevOps tools and technologies were supposed to eliminate random mistakes with automation, infrastructure management, continuous delivery and continuous deployment tools. They really do help to reduce the number of old issues and save time if we use them as intended. Otherwise, a lot of colorful tools won’t cause fewer issues, only issues of a different kind. Therefore, it is important to pick them carefully. Don’t choose tools just because they are well known. First, find out what your project really needs, and then choose tools that meet the project’s requirements. Prefer tools which are easy to integrate with others and work across multiple providers. Avoid maintaining the tool yourself as much as possible. Maintenance is boring. Don’t ignore new, actively developed tool projects. Sometimes they bring more value than old, commonly known ones. If your workflow is not working efficiently, search for other solutions.
While the Development world doesn’t exist without at least basic tests, infrastructure provisioning tests are very often neglected.
“Why do I need tests? They only take up my time”, you may ask.
Resolving unexpected errors caused by improperly implemented Infrastructure takes more time, because it affects all resources and layers of the project. Without automated tests, you will never be sure that a change is deployed correctly every time, or that every piece works as intended. Even with “quick manual tests”, issues are very often noticed by accident, sometimes already on production. Belated patches can be too time-consuming. Or it may be too late to implement them at all.
The common problem of every beginner is not anticipating that the near future will call for a bigger Architecture. They implement very simple solutions, and when things become more complex, they devote more and more time to them, especially when maintaining a lot of environments. I’m not saying that simple is bad. It’s more effective than trying to build a huge, complicated configuration from the start, of course. It’s just good to think about parameterization of environments, regions or accounts sooner rather than later. Start with small pieces, but think big.
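Parameterization can start very small: shared defaults plus per-environment overrides, instead of copy-pasted configuration per environment. A sketch with invented settings:

```python
# Sketch: one template, many environments. All values are illustrative.

BASE = {"instance_type": "t3.small", "replicas": 1, "region": "eu-west-1"}

OVERRIDES = {
    "dev":  {},                                        # defaults are enough
    "test": {"replicas": 2},
    "prod": {"instance_type": "t3.large", "replicas": 4},
}

def render(env: str) -> dict:
    """Merge per-environment overrides onto the shared defaults."""
    return {**BASE, **OVERRIDES[env]}

prod_config = render("prod")
```

Adding a fourth environment (or a second region) becomes a three-line override instead of another hand-maintained copy, which is exactly the “think big” part.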
I will always recommend using automation for configuration and deployment. Maybe I’m lazy, or maybe I just don’t want to waste my time on repetitive work. Probably a bit of both… Be aware that the tedious tasks you usually do won’t just disappear. Unless you give them to someone else. But I have a feeling that your colleagues won’t be happy about, or appreciate, such gifts. 😉
Respect your time and the time of others by automating solutions, even single ones. Automate all workflows with CI/CD pipelines. Check stateful tools – they are more efficient for Infrastructure provisioning. Check stateless tools – these are better for Application/OS management. Look for solutions that can be defined fully as code – in other words, that are self-documenting. No more click-clicks and manual actions!
Everyone wants to avoid a flood of bugs. Patching holes is a nightmare, and it’s even more annoying when you encounter them over and over again… Things become serious when they sneak in unnoticed on staging or even production environments. A pretty big chunk of those errors is caused by a lack of consistency between environments. That’s why it’s so important to test the code frequently and update all environments in the same way. This approach allows you to spot errors at a much earlier stage and avoid propagating them into successive environments. And that saves time on fixes and updates.
For instance:
Ideally, all changes made to the code (commits) in the code repository should be tested immediately on the development environment using hooks, slightly less often on the test environment (e.g. scheduled once a day), and on production according to the release window.
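The cadence above can even live in the repository as data, so the pipeline configuration documents itself. A sketch with illustrative stage names:

```python
# Sketch: test cadence per stage, kept as reviewable data rather than
# tribal knowledge. Stage and trigger names are made up for illustration.

TEST_TRIGGERS = {
    "development": "on_commit",       # a hook runs the suite on every push
    "testing":     "daily_schedule",  # e.g. a nightly pipeline run
    "production":  "release_window",  # only during a planned release
}

def trigger_for(stage: str) -> str:
    # Unknown stages default to the strictest option: test every commit.
    return TEST_TRIGGERS.get(stage, "on_commit")
```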
When everything runs in separate pipelines and is configured in a fully automated way, it can speed up the workflow and noticeably improve the quality of the Team’s work.
PS: Don’t forget to choose CI/CD tools according to your needs, not fashion. Keep things simple and avoid maintenance!
PS2: Don’t forget to define your workflow change with the Team (e.g. gitflow)!
Maintenance is the most boring and the most expensive part of every project. It does not bring profit; it only generates additional costs.
“Oh my gosh, critical alarm, storage is running out again! Seriously, do I have to log in and clean up space AGAIN?”, said the lazy Engineer.
Today, we can use a wide range of automation tools to solve problems like these: agent-based and agentless applications; tools which keep the state of the Infrastructure after changes, and stateless ones; even tools which track the history of command-line changes in the operating system.
All changes can be defined as code and deployed remotely, without a manual ssh connection to the machine. Every single implementation can be versioned and stored in a code repository to help members of the Team debug issues quickly, find out exactly what changes were made and, if need be, revert them. Everything to make the old-school IT Administrator’s life happier and lazier.
Everything starts with a common standard of implementation for projects. With time, the number of environments or projects (from now on, I will use the word “stacks” instead) increases very quickly. More serious changes can make the code incompatible with older stacks. Updating them manually, or implementing programming tricks to keep compatibility, works only in the short term. You will get stuck in monkeyish maintenance work rather fast if you don’t build some automation into your workflow.
I have brought this up as a separate section on purpose, because this problem happens quite often.
CI/CD pipelines will take care of automatic and continuous updates of the stacks.
Maybe you are not versioning your code now, or you deploy only one common version from a single branch (e.g. develop or master) to all stacks. Consider tagging or releasing your code and letting stacks live their own stable lives, instead of trying to fix all the issues caused by incompatibility.
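The idea of stacks living their own stable lives can be sketched as each stack pinning a released version of the shared code instead of tracking one branch. Stack names and version numbers here are invented for illustration:

```python
# Sketch: stacks pin released tags of the shared module instead of all
# following the tip of one branch. All names and versions are made up.

RELEASES = ["1.0.0", "1.1.0", "2.0.0"]  # tags published from the shared repo

STACKS = {
    "billing-prod": "1.1.0",   # stable, upgrades on its own schedule
    "billing-dev":  "2.0.0",   # already testing the breaking release
    "reports-prod": "1.0.0",
}

def upgrade_plan(stack: str, target: str) -> str:
    """Describe what moving one stack to a target release would mean."""
    current = STACKS[stack]
    if current == target:
        return f"{stack}: already on {target}"
    return f"{stack}: upgrade {current} -> {target}"
```

A breaking change then lands as a new tag; each stack migrates when its owners are ready, and nobody is forced into a big-bang update of everything at once.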
Right at that moment, the devil rubs his hands. OK, you are defining changes in code. You are storing your code in the version control system. You are using CI/CD pipelines to take care of continuous deployment. Cool. You are a 100% DevOps Engineer. One little manual action is not going to hurt anybody, for sure.
If it is a hot production incident, you choose the quickest possible way to extinguish the fire. Otherwise, you would have to admit that there is something wrong with your CI/CD. Besides, focusing on improving pipelines is more efficient than making a lot of manual changes and praying that you have not forgotten to write everything back into your code. Differences in configuration that were not tested in at least a few different environments can cause issues in other environments or releases and affect production in the future.
Changing your company culture to DevOps will not magically make the tap water cleaner. Even if you use all the recommended tools and solutions from the catalog. The effectiveness of the Team depends largely on how wisely and realistically the project is managed. Use tools to speed up your workflow, but choose them with your project requirements in mind. Look for solutions which will meet your expectations. Invest your time in automation, tests, a version control system, Infrastructure defined as code, continuous delivery and continuous deployment tools. Keep environments identical to find bugs faster, or prevent them altogether. Avoid manual changes. Be lazy. Don’t let people get stuck in boring maintenance work. Protect their creativity and take care of the quality of work if you want your workflow to run faster.
We'd love to answer your questions and help you thrive in the cloud.