
Key Components of DevOps

01/11/2016 By Akrem Saed

In a previous post, I mentioned that a successful DevOps experience depends on a handful of key components and principles. In this post, I’ll cover those components in more detail.

Automated Delivery Pipeline

The Pipeline

First, let’s talk about the “pipeline” part of the term. We want a process that defines what needs to happen, and in what order, from the moment new code is pushed to source control to the final step of making that code available to customers in production. Assuming we have three deployment environments (Development, System Integration, and Production), a typical delivery pipeline has the following steps:

  1. Developer pushes code to source control.
  2. A build is triggered that will compile the source code and run tests to make sure everything is in order.
  3. An artifact is created, given a unique version number, and published to an artifact repository.
  4. On a defined schedule, deploy the latest artifact to the Development environment. The schedule could be every hour, three times a day, immediately after an artifact is successfully published, or whatever suits you.
  5. On its own schedule, deploy to the System Integration environment the artifact that was last deployed to the previous environment (i.e., Development). This schedule can differ from the Development schedule.
  6. On its own schedule, deploy to the Production environment the artifact that was last deployed to the previous environment (i.e., System Integration). Again, this schedule can differ from those of the other environments.

It’s worth noting that each step should be triggered only after the previous step has executed successfully. Likewise, an artifact becomes a candidate for deployment to the next environment only after it has been successfully deployed to the current one. This lets the team verify that an artifact works appropriately in one environment before advancing it to the next.
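To make that promotion rule concrete, here is a minimal Java sketch of the gate. The environment names, the artifact-as-version-string model, and the deployment-history lookup are simplifications invented for illustration, not part of any particular tool:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical model of the promotion rule: an artifact may be deployed to an
// environment only if it was the last artifact successfully deployed to the
// previous environment in the pipeline.
public class PromotionGate {

    public enum Environment { DEVELOPMENT, SYSTEM_INTEGRATION, PRODUCTION }

    // Records the last artifact version successfully deployed to each environment.
    private final Map<Environment, String> lastSuccessfulDeploy = new ConcurrentHashMap<>();

    public void recordSuccess(Environment env, String artifactVersion) {
        lastSuccessfulDeploy.put(env, artifactVersion);
    }

    // The candidate for an environment is whatever last succeeded in the one before it.
    public Optional<String> candidateFor(Environment env) {
        switch (env) {
            case SYSTEM_INTEGRATION:
                return Optional.ofNullable(lastSuccessfulDeploy.get(Environment.DEVELOPMENT));
            case PRODUCTION:
                return Optional.ofNullable(lastSuccessfulDeploy.get(Environment.SYSTEM_INTEGRATION));
            default:
                return Optional.empty(); // Development takes the latest published artifact instead.
        }
    }
}
```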

Automation

The second aspect of an Automated Delivery Pipeline is that it needs to be automated. Other than the first step, when a developer pushes new code to source control, every step should be automatically triggered and executed. To achieve this automation, the following pieces need to be in place:

Configuration Management

How do we ensure that the application only talks to resources specific to the environment in which it’s running? Configuration management is the answer. The first step is to externalize those configuration concerns from the source code into a configuration file (e.g., a properties file for Java or an App.config file for .NET applications). For example, in a Java application, instead of hard-coding the database URL as a string in a Java class, the class that needs the URL fetches it from a properties file or a system environment variable. The second step is to determine which configuration file to use. There are two approaches for this:

  1. At Run Time: When the application starts up, it determines the environment in which it's running and loads the appropriate configuration file. This approach therefore requires a separate configuration file for each environment. Ashraf Sarhan wrote a blog post that walks through an example of implementing this approach with Spring profiles; a plain-Java sketch follows this list.
  2. At Deployment Time: The second approach instead writes the configuration file at deploy time. Depending on the environment being deployed to, the deployment script writes the configuration file with the appropriate values. Octopus Deploy is an example of this approach, as described in its documentation.
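As an illustration of the run-time approach, here is a minimal plain-Java sketch (no Spring) that picks a properties file based on an APP_ENV environment variable. The variable name, the file naming scheme, and the database.url key are assumptions made for the example:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class AppConfig {

    // Loads e.g. config-development.properties or config-production.properties
    // from the classpath, based on the APP_ENV environment variable.
    public static Properties load() throws IOException {
        String env = System.getenv().getOrDefault("APP_ENV", "development");
        String resource = "config-" + env + ".properties";
        Properties props = new Properties();
        try (InputStream in = AppConfig.class.getClassLoader().getResourceAsStream(resource)) {
            if (in == null) {
                throw new IOException("Missing configuration file: " + resource);
            }
            props.load(in);
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        // The application reads environment-specific values instead of hard-coding them.
        String dbUrl = load().getProperty("database.url");
        System.out.println("Connecting to " + dbUrl);
    }
}
```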

Regular Integration

Integration in this context simply means deploying our application to an environment where it interacts with the other applications and components in the ecosystem. Integrating on a regular schedule is essential to achieving a feedback cycle: the tighter our integration cycle, the tighter our feedback cycle. The constant enemy of tight integration cycles is manual process, so every time we want to tighten the cycle, we need to automate a step that was previously manual, such as deployments kicked off by hand, manually triggered test runs, or hand-edited configuration.
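As a trivial sketch of putting a previously manual step on a schedule, the following Java snippet triggers a Development deployment every eight hours. The deployment call is a placeholder; in practice a CI/CD server such as Jenkins would own this schedule:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledIntegration {

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Deploy the latest successful artifact to the Development environment
        // three times a day (every 8 hours), with no human in the loop.
        scheduler.scheduleAtFixedRate(
                ScheduledIntegration::deployLatestToDevelopment,
                0, 8, TimeUnit.HOURS);
    }

    private static void deployLatestToDevelopment() {
        // Placeholder: in practice this would call your CI/CD server's API
        // to run the deployment job against the Development environment.
        System.out.println("Triggering Development deployment...");
    }
}
```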

Automated Monitoring & Health Checks

Since DevOps involves operational duties, we want to know about problems before users report or notice them, so we can solve them before they have an impact. Minimally, our health checks should verify that each application is up and responding, and that key resources such as CPU, memory, disk space, and network are within healthy limits.

We need to run those checks on each of our applications and resources periodically and automatically, and when one of them misbehaves, we need to be alerted automatically by email or text so we can address it. Examples of tools that assist with this are HP SiteScope, CloudFlare, AWS CloudWatch, AppDynamics, and more recently Graylog Alerts. Once you have automated monitoring, it opens the door to more exciting opportunities like auto-scaling your application. For example, if the automated monitoring process detects a spike in CPU and network traffic over an extended duration, we can hook that up to an automated scaling process that spins up more instances of the application to handle the extra load. And if an increase in network traffic is actually malicious, such as a DoS attack, we can react by investigating and potentially blacklisting the suspicious IP addresses.
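At its simplest, an automated health check can be a periodic poll of an HTTP endpoint that raises an alert on failure. The sketch below assumes the application exposes a /health URL and that sendAlert is wired to your email or paging system; both are assumptions for the example:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class HealthChecker {

    public static void main(String[] args) {
        // Poll the application's health endpoint once a minute.
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(
                () -> check("http://myapp.example.com/health"), 0, 1, TimeUnit.MINUTES);
    }

    static void check(String url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(5_000);
            conn.setReadTimeout(5_000);
            if (conn.getResponseCode() != 200) {
                sendAlert(url + " returned HTTP " + conn.getResponseCode());
            }
        } catch (IOException e) {
            sendAlert(url + " is unreachable: " + e.getMessage());
        }
    }

    static void sendAlert(String message) {
        // Placeholder: hook this up to email, SMS, or an alerting tool.
        System.err.println("ALERT: " + message);
    }
}
```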

The Firefighter Role

When the development team takes on “ops” duties as part of implementing DevOps, the team might end up dealing with one issue over a whole week or half a dozen issues per day. These issues interrupt the team’s development activities, which can reduce velocity and break concentration. An approach that I like to employ is to define a Firefighter role and rotate it among the team. All issues that crop up are directed to the Firefighter, and she is tasked with triaging each issue, investigating it, and either resolving it or pulling in help when needed.

As a result, the Firefighter effectively shields the rest of the team from distractions and interruptions. The role should rotate among teammates; for example, each teammate takes it on for a week or two, during which it is their main responsibility. When there are no issues to deal with, the Firefighter can join the development efforts. Dealing with issues that crop up in the testing or production environments is an opportunity to get first-hand experience with the “extraordinary” situations the application has to handle, and to come up with ideas for improving how the application handles them. Those discoveries will certainly help the application evolve and mature.

Infrastructure as Code

This could be the most recognizable aspect of DevOps, but I intentionally list it at the end to emphasize that it’s not the only one. Over the years, many good practices have come out of developing software for business problems and are now widely adopted, yet many of them are not applied as widely to infrastructure. Treating infrastructure as code does not only mean writing code for infrastructure; it also means applying those same practices to infrastructure code: keeping it in source control, reviewing it, testing it, and versioning it.

There has been steady growth in tools that enable us to write, run, and test infrastructure code. To mention just a few by category: configuration management tools such as Chef, Puppet, and Ansible; provisioning tools such as AWS CloudFormation and Terraform; and testing tools such as Test Kitchen and Serverspec.
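As one small taste of infrastructure expressed in code, here is a sketch that launches an EC2 instance with the AWS SDK for Java. This is imperative provisioning rather than a declarative tool like Chef, Puppet, or Terraform, and the AMI ID is a placeholder:

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.InstanceType;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;

public class ProvisionServer {

    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // The desired server is described in code, so it can be versioned,
        // reviewed, and re-run to produce identical instances.
        RunInstancesRequest request = new RunInstancesRequest()
                .withImageId("ami-xxxxxxxx") // placeholder AMI ID
                .withInstanceType(InstanceType.T2Micro)
                .withMinCount(1)
                .withMaxCount(1);

        RunInstancesResult result = ec2.runInstances(request);
        String instanceId = result.getReservation().getInstances().get(0).getInstanceId();
        System.out.println("Launched instance " + instanceId);
    }
}
```

Because the server’s definition lives in code, it can be reviewed, tested, and reproduced on demand, which is exactly the point of the practice.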

Conclusion

The DevOps components discussed in this post should not be treated in an “all or nothing” manner. Instead, it’s better to take them one at a time: pick one to understand, design, and implement, and once we’re satisfied, move on to the next. The beauty of this approach is that implementing even one of them yields immediate benefits and a significant return on investment, and the more we implement, the more benefits we gain. Of course, some of these components work closely together; for example, Regular Integration is much easier to implement if an Automated Delivery Pipeline is already in place, so we may need to tackle them in a particular order.
