Continuous Delivery practices and tooling: are you using the right tool for the job?

As product development teams mature in their Continuous Integration practice, the next obvious evolution is towards Continuous Delivery. That journey demands a cultural shift from everyone on the team, but beyond the determined cultural commitment, tooling plays a critical role. Are CI tools such as Jenkins, Bamboo etc. good candidates for a successful CD journey? What key aspects need to be supported by whatever tool we choose? Let's look at the essential CD practices and see if that helps navigate this tooling Titanic.

Only build binaries once

This is one of the foundational blocks: build your binary once and reuse the same artifact across the pipeline. The basics are straightforward, since most CI products offer a task to publish artifacts out of the build process. However, it gets a little more complicated when a downstream pipeline needs to pull artifacts from more than one upstream pipeline. That raises the need for dependency management as part of the pipeline, and for fetching the appropriate version of each artifact from its upstream dependency, something like the dreaded diamond dependency problem.
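As an illustrative sketch (not tied to any particular CI product), the idea is that a downstream stage resolves a pinned version of each upstream artifact and verifies it is byte-for-byte the artifact that was built once, rather than rebuilding it. The repository URL, pipeline names and checksum values below are all hypothetical.

```python
import hashlib
import os
import urllib.request

# Hypothetical artifact store layout: <repo>/<pipeline>/<version>/<file>, plus a
# recorded SHA-256, so every environment deploys the exact binary built once.
ARTIFACT_REPO = "https://artifacts.example.com"   # assumption, not a real service

UPSTREAM_PINS = {
    # upstream pipeline -> (pinned version, artifact file, expected sha256)
    "payments-service": ("1.42.0", "payments-service.jar", "<sha256 of 1.42.0>"),
    "shared-client":    ("2.7.3",  "shared-client.jar",    "<sha256 of 2.7.3>"),
}

def fetch_pinned_artifact(pipeline, version, filename, expected_sha256):
    """Download the artifact built once upstream and verify it was not rebuilt."""
    url = f"{ARTIFACT_REPO}/{pipeline}/{version}/{filename}"
    os.makedirs("artifacts", exist_ok=True)
    local_path = os.path.join("artifacts", filename)
    urllib.request.urlretrieve(url, local_path)

    with open(local_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"{filename}@{version} does not match the recorded checksum")
    return local_path

if __name__ == "__main__":
    for pipeline, (version, filename, sha) in UPSTREAM_PINS.items():
        print("fetched", fetch_pinned_artifact(pipeline, version, filename, sha))
```

A diamond dependency shows up when two upstream pipelines pin different versions of a shared component; a pinning manifest like the one above is where that conflict becomes visible and gets resolved once, rather than separately in every environment.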

Deploy the application exactly the same way across all environments

The idea is to exercise the deployment logic/automation across all the environments and make sure it works, so that the production deployment becomes just another deployment, with no last-minute drama. This could be a simple Perl, shell or PowerShell script, and deployment automation solutions have been on the market for ages. But if we look at deployment as one of the stages in a CD pipeline, it becomes an interesting challenge.
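As a minimal sketch of "same deployment, different parameters": one deploy routine driven only by an environment name and its settings, so dev, perf and production all run the identical code path. Host names, paths and the restart command here are invented for illustration.

```python
import argparse
import subprocess

# Hypothetical per-environment settings; the deployment steps never change,
# only the parameters do, so production is "just another deployment".
ENVIRONMENTS = {
    "dev":  {"hosts": ["dev-app-01"]},
    "perf": {"hosts": ["perf-app-01", "perf-app-02"]},
    "prod": {"hosts": ["prod-app-01", "prod-app-02"]},
}

def deploy(environment, artifact):
    settings = ENVIRONMENTS[environment]
    for host in settings["hosts"]:
        # Same copy/restart steps for every environment (scp/ssh assumed available).
        subprocess.run(["scp", artifact, f"{host}:/opt/app/app.jar"], check=True)
        subprocess.run(["ssh", host, "sudo systemctl restart app"], check=True)
    print(f"deployed {artifact} to {environment} on {len(settings['hosts'])} host(s)")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Deploy the same way everywhere")
    parser.add_argument("environment", choices=ENVIRONMENTS)
    parser.add_argument("artifact")
    args = parser.parse_args()
    deploy(args.environment, args.artifact)
```

Usage then looks the same at every stage of the pipeline, e.g. `python deploy.py perf payments-service.jar` earlier and `python deploy.py prod payments-service.jar` at release time.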

This tool should be:

  • environment aware, able to work with environment automation tools such as Puppet or Chef and recreate an environment at any moment;
  • capable of zero-downtime deployments;
  • able to relate an environment to the build version and the pipeline;
  • able to relate deployments to the commits and quality metrics;
  • and, most vitally, able to visualize what happened/happens in an environment in the context of a pipeline.

In essence, a tool designed only for deployment/release automation, without the pipeline in mind, will not work here. Sadly, some of the most popular tools are still catching up. If we pick a CI tool, it probably supports running scripts, so the deployment script can be executed as just another job or task from the CI tool. However, it is likely to miss the context of Continuous Delivery and the pipeline.

So what? That tool is meant for deployment automation and does it well…

Agreed, but you would be missing an important aspect of Agile: visibility. The critical questions listed below will either go unanswered or be answerable only after several clicks and round trips in the tool (a small traceability sketch follows the list):

  • Where is my check-in right now? Dev, integration, perf…?
  • My API integration worked fine last night and it's broken now…
    • What changed in between?
    • Which commits came through?
    • What are the quality metrics for each commit?
    • Can you show me which commit broke the smoke test?
    • Okay, I somehow found the build# that broke a critical integration; what is the associated build# in the previous environment, and did it pass the smoke test there?
    • This integration environment pulls bits from various components; how do I know which component's changes broke the integration?
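The sketch below shows one hedged way to answer the "what changed in between?" question when the pipeline tool does not: if every deployed build tags the commit it was built from (the `build-142` / `build-157` tag names here are made up), the commits between two environment deployments fall out of a plain `git log` range.

```python
import subprocess

def commits_between(older_build_tag, newer_build_tag, repo_path="."):
    """List the commits that landed between two deployed builds.

    Assumes each pipeline run tags the commit it built, e.g. 'build-142'.
    """
    result = subprocess.run(
        ["git", "-C", repo_path, "log", "--oneline",
         f"{older_build_tag}..{newer_build_tag}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

if __name__ == "__main__":
    # Hypothetical tags: the build that worked last night vs. the broken one.
    for line in commits_between("build-142", "build-157"):
        print(line)
```

A pipeline-aware tool does this correlation for you and overlays quality metrics on top; the point is only that the data exists, and the tooling should surface it without the clicks and round trips.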

This visibility is vital to up-level the maturity. Traditional deployment automation tools have deep focus, investment and a roadmap for making deployments better, but what you need is deployment/release automation in the context of the pipeline.

Smoke test the deployment

Again, one of the critical aspects of gaining confidence in automation is doing it the right way, with the right principles. Test automation is one of those areas that often fails to win developers' confidence, because the tests are non-deterministic, too high level, hard to reproduce and analyze, and so on. One of the best ways to improve that situation is to decompose the problem space and verify at each stage. The build process is verified with a self-testing build. Deployment automation should be verified with deployment verification tests before the application smoke or regression suites run. This can be as simple as pinging your API to make sure it returns 200 OK, or bringing up the GUI to make sure the login screen comes up. It determines whether the deployment was successful and eliminates the variable of automated tests failing due to an incomplete deployment.
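A deployment verification test can stay tiny. The sketch below (with made-up URLs, and the `requests` library assumed to be available) just checks that the API answers 200 OK and that the login page renders, which is enough to tell an incomplete deployment apart from a genuinely failing test suite.

```python
import sys
import requests

# Hypothetical endpoints for the environment that was just deployed.
API_HEALTH_URL = "https://perf.example.com/api/health"
LOGIN_PAGE_URL = "https://perf.example.com/login"

def verify_deployment():
    """Fail fast if the deployment itself is broken, before any smoke/regression run."""
    api = requests.get(API_HEALTH_URL, timeout=10)
    if api.status_code != 200:
        return f"API health check failed: HTTP {api.status_code}"

    login = requests.get(LOGIN_PAGE_URL, timeout=10)
    if login.status_code != 200 or "login" not in login.text.lower():
        return "Login page did not come up after deployment"

    return None  # deployment verified

if __name__ == "__main__":
    error = verify_deployment()
    if error:
        print(error)
        sys.exit(1)  # stop the line: don't bother running the full test suite
    print("deployment verification passed")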

Having understood the need for deployment verification tests, the bar for appropriate tooling rises again: the pipeline orchestrator should be able to run the right automated tests at the right stage. The growing number of SaaS providers in the market makes the job easier; instead of maintaining grids of on-prem test-bed infrastructure, you can leverage providers like Sauce Labs, Visual Studio Online or BlazeMeter to run the tests. Either the pipeline tool or the deployment automation tool should provide the necessary plug-ins to run tests, so select a tool that helps you with this.

Most conventional deployment automation tools may do a great job performing the automated deployments themselves; after all, they were built for that purpose. But what you need is a tool that offers plug-ins to run the various tests (including on SaaS) and pass/fail the deployment based on the test outcome.
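A minimal sketch of that pass/fail wiring, assuming the remote test run is kicked off by some command-line entry point (the `run_saas_suite.sh` name is invented): the pipeline stage simply propagates the test runner's exit code, so a red test run turns the deployment stage red.

```python
import subprocess
import sys

# Hypothetical command that triggers the remote test suite (Sauce Labs,
# BlazeMeter, etc.) and blocks until it reports a result via its exit code.
TEST_COMMAND = ["./run_saas_suite.sh", "--environment", "perf"]

def gate_deployment():
    """Pass or fail the deployment stage based on the test outcome."""
    result = subprocess.run(TEST_COMMAND)
    if result.returncode != 0:
        print("Tests failed: marking the deployment stage as failed")
        sys.exit(result.returncode)   # the pipeline tool sees a failed stage
    print("Tests passed: deployment stage is green")

if __name__ == "__main__":
    gate_deployment()
```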

If anything fails, stop the line

This is possibly the biggest change for everyone. Automation is nothing new; teams have been doing build automation, deployment automation and test automation for years. However, connecting them together and treating the pipeline itself as a first-class concern is an uphill task, and hiking without a proper guide and tools will not make the journey a memorable and successful one. With today's dime-a-dozen automation tools, it is fairly easy to get started with automation, create a pipeline and do it once. But as product teams get busier with features, bugs, support and so on, keeping up with automation becomes a struggle. Once we lose track, automation starts failing, team members lose confidence in it, and the pipeline is no longer a meaningful pipeline.

It might be easy if you are starting fresh on Continuous Delivery. But if you are coming from the CI world, investments have obviously been made in CI/deployment tools, and most of them won't have the concept of a pipeline or pipeline orchestration. It'll be an interesting puzzle to solve in the midst of opinionated tool geeks as well.

If it were my company and my money, these capabilities would be essential in a Continuous Delivery tooling ecosystem.

Of course, I'm not an expert in all the tools available in this space, but as far as I know:

  • Go.CD is one of the best fits for this purpose.
  • Recently, Jenkins came out with Delivery Pipeline and Build Graph views to support CD.
  • Visual Studio ALM is maturing with pipeline and release management support.

I'm sure I have missed many tools, and I welcome anyone to take this opportunity to share your choice of tool that offers some or all of these capabilities.
