Sustainable Test Automation practices for Agile teams

High-quality working software wins! Whether we have embraced Agile or not, test automation is critical for successful delivery. But the Agile movement elevates the need for a rationalized automation effort, with the intent of boosting team confidence as code traverses the different stages of the pipeline. It's fairly easy to get started on test automation with the zillion tools, guides, and samples around us. However, a common problem is: how do you sustain the value of automation over time?

IMO, some common smells

When adopting Agile, a common pattern is acceptance testing: verification of the acceptance criteria is automated alongside the code, as part of the user story's definition of done.

  • Each story spawns a few acceptance tests, and over time you end up running hundreds of tests.
  • Six months ago the acceptance tests ran within 2 minutes and provided great value and fast feedback, but now they take several minutes to run.
  • At the beginning we had cycles to keep up with the automated tests, but over time we got busy delivering features and couldn't keep the tests up to date. There are too many tests, because we automated test cases with every story. Naturally, story-level tests fail when we change the implementation or refactor the code, so if we can't keep up with the automation, the percentage of failing tests keeps increasing.
  • Technology changes: with the growth of community-contributed open source, new tools and technologies surface very often, and as developers we get attracted to them. But the organization has invested in some technology that is now old (though not necessarily outdated), the developers prefer the new tool, and suddenly we are rewriting everything and asking for dedicated automation stories or a dedicated sprint.

If these sound familiar, keep reading.

Too many Acceptance Tests

First off, if you have quantified, testable acceptance criteria as part of the story, that's good. If you are trying to automate those acceptance criteria, that's even better. At this point, the automation effort might need a checklist to rationalize it, because every automated test case requires care and maintenance as we continue to build more software:

  • Is it essential to automate? What is the value of automating versus not automating?
  • How does it map to a user behavior?
  • Can this be automated with unit testing or API testing before writing an end-to-end test?

If it is necessary to automate this behavior and impossible to accomplish with unit tests (because it might need to talk to the API tier, messaging, a database, etc.), determine:

  • What is the intention of the verification?
    1. Are we trying to verify the data returned from the backend or other external services?
    2. Are we trying to verify the response payload, schema, and content type before processing?
    3. Are we trying to verify the message payload and data before processing?
    4. Are we trying to verify user interface behaviors, like enabling/disabling UI elements?

Once the intention is clear, determine the right technique to automate:

  • If it's data related (1–3 above), prefer API testing. API/service contracts are less likely to change than the GUI, so these tests tend to be more stable. They also run a lot faster and offer quicker feedback as part of your delivery pipeline (see the sketch after this list).
  • If it's GUI behavior related:
    • Zoom out, understand the end-user journey, and see how to maximize the automation benefits.
    • Think about the user journey and the intentions around the user workflow: what would the user do before getting to this new feature, and where would the user go after using it? This forms a logical journey from the end-user's perspective and helps us join behaviors together, verify the GUI behavior from the end user's point of view, minimize test cases, and maximize value.
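
For instance, here is a minimal sketch of an API-level check covering intentions 1–3, written in Python with pytest and requests; the endpoint, parameters, and field names are hypothetical stand-ins for your own service contract.

```python
import requests

# Hypothetical stats endpoint; substitute your own service URL and auth.
STATS_URL = "https://api.example.com/stats/views"

def test_views_by_country_contract():
    resp = requests.get(STATS_URL, params={"period": "week", "group_by": "country"})

    # Intention 2: verify status, content type, and schema before processing
    assert resp.status_code == 200
    assert resp.headers["Content-Type"].startswith("application/json")

    # Intention 1: verify the shape of the data returned from the backend
    body = resp.json()
    assert isinstance(body["countries"], list)
    for entry in body["countries"]:
        assert {"country", "views"} <= entry.keys()
```

A suite of checks like this runs in seconds and fails fast when a contract changes, long before a GUI test would notice.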

So, what about too many GUI tests?

  • For example, let's assume your story is to develop "show blog views by geography" for WordPress. This new feature gets added as part of the Stats page, and the PO considers it one of the critical features that 20% of the users (the author persona) would use 80% of the time, so it needs to be part of the smoke suite. The acceptance criterion is to verify views by geography; since the map is viewed in the GUI, it calls for GUI automation.
    • Intention: The Author wants to view visitors by geography for the last week
    • Steps:
      1. Log in to the app as Author
      2. Navigate to the Stats section
      3. Select the filter by weeks
      4. Verify that data shows up by "Countries"
      5. Verify that the map matches the country list below it
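
As a sketch, this narrow test might look like the following in Python with Selenium. The login field IDs follow the standard WordPress login page, but the site URL and the Stats-page locators are hypothetical.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE = "https://example.wordpress.com"  # hypothetical test site

def login_as_author(driver):
    # Standard WordPress login form fields
    driver.get(f"{BASE}/wp-login.php")
    driver.find_element(By.ID, "user_login").send_keys("author")
    driver.find_element(By.ID, "user_pass").send_keys("secret")
    driver.find_element(By.ID, "wp-submit").click()

def test_views_by_geography():
    driver = webdriver.Chrome()
    try:
        login_as_author(driver)

        # Navigate to Stats and filter by weeks (hypothetical locators)
        driver.get(f"{BASE}/wp-admin/index.php?page=stats")
        driver.find_element(By.LINK_TEXT, "Weeks").click()

        # Verify the "Countries" list and the map are rendered
        countries = driver.find_elements(By.CSS_SELECTOR, ".countryviews li")
        assert len(countries) > 0
        assert driver.find_element(By.CSS_SELECTOR, ".stats-geochart").is_displayed()
    finally:
        driver.quit()
```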

This is a good test; it verifies the intention. However, over time, as we add more and more features like this, the number of tests will grow and become a maintenance challenge.

A better approach would be to fold this verification into an existing flow, perhaps a CRUD flow.

For example, a CRUD flow:

Intention: The Author wants to create a blog post, view posts, verify visitors by geography, and verify that the post can be deleted.

Steps:

1. Log in to the app as Author
2. Create a post
3. Generate views (use a views API or generate fake views from different geographies)
4. Navigate to the Stats section
5. Select the filter by weeks
6. Verify the "Countries" list
7. Verify the map
8. Delete the post

Steps 1–3 – C of CRUD (create). This is the setup part of the workflow: every time, before verifying the core functionality, we bring the system under test to a known state.

Steps 4–7 – R of CRUD (read). This verifies the map functionality. Because we know exactly which visits we generated in the setup steps, the expected outcome is easy to verify.

Step 8 – D of CRUD (delete). This cleans up the data created by the test.
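
Putting it together, a self-contained version of this journey might look like the sketch below. The calls to create and delete the post use the real WordPress REST API (wp-json/wp/v2/posts), but the view-generation hook, the site URL, the credentials, and the GUI locators are hypothetical.

```python
import pytest
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE = "https://example.wordpress.com"   # hypothetical test site
AUTH = ("author", "secret")              # hypothetical author credentials

@pytest.fixture
def driver():
    d = webdriver.Chrome()
    yield d
    d.quit()

def test_author_geo_stats_journey(driver):
    # C (setup): create a post via the WordPress REST API
    post = requests.post(f"{BASE}/wp-json/wp/v2/posts",
                         json={"title": "geo smoke test", "status": "publish"},
                         auth=AUTH).json()
    # Generate fake views per country (hypothetical test hook)
    requests.post(f"{BASE}/test-hooks/views",
                  json={"post": post["id"], "views": {"US": 5, "IN": 3}})

    try:
        # R (read): verify through the GUI, as the Author would
        driver.get(f"{BASE}/wp-login.php")
        driver.find_element(By.ID, "user_login").send_keys(AUTH[0])
        driver.find_element(By.ID, "user_pass").send_keys(AUTH[1])
        driver.find_element(By.ID, "wp-submit").click()

        driver.get(f"{BASE}/wp-admin/index.php?page=stats")
        driver.find_element(By.LINK_TEXT, "Weeks").click()

        countries = driver.find_elements(By.CSS_SELECTOR, ".countryviews li")
        assert len(countries) >= 2   # we generated views from two countries
        assert driver.find_element(By.CSS_SELECTOR, ".stats-geochart").is_displayed()
    finally:
        # D (teardown): delete the post so the test leaves no residue
        requests.delete(f"{BASE}/wp-json/wp/v2/posts/{post['id']}",
                        params={"force": "true"}, auth=AUTH)
```

Because the test creates and removes its own data, it can run against any environment, repeatedly, without depending on pre-seeded state.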

Benefits

Based on this example,

  1. We have one workflow instead of several micro-focused small tests.
  2. We verify a meaningful workflow composed of several activities an end user would normally perform.
  3. The test is self-contained (setup, activities, teardown), so it can run in any environment.
  4. If we design tests aligned with behaviors, the same behaviors can be reused to construct more workflows (see the sketch below).
  5. Since tests are self-contained, they can run in parallel for faster feedback.
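
To illustrate points 4 and 5: behaviors written as small reusable functions compose into several different journeys. The helper names below are hypothetical, following the sketches above.

```python
# Reusable behaviors (hypothetical helpers; bodies elided, see sketches above)
def login_as_author(driver): ...
def create_post(driver, title): ...
def open_stats(driver): ...
def filter_by_weeks(driver): ...
def verify_country_map(driver): ...
def delete_post(driver, post): ...

def test_geo_stats_journey(driver):
    login_as_author(driver)
    post = create_post(driver, "geo smoke test")
    open_stats(driver)
    filter_by_weeks(driver)
    verify_country_map(driver)
    delete_post(driver, post)

def test_publish_post_journey(driver):
    # The same behaviors recombine into a different end-user journey
    login_as_author(driver)
    post = create_post(driver, "draft to publish")
    # ... publish, verify it appears on the site, then clean up
    delete_post(driver, post)
```

And because each journey sets up and tears down its own data, a runner such as pytest-xdist can execute the tests in parallel (for example, pytest -n 4) for faster feedback.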

Together, these practices keep the number of test cases to a minimum, maximize reusability, and increase the feedback cadence. Overall, they maximize the value of automation, which motivates Agile teams to keep up the investment.

