My leader reiterates this every day: “Quality is never a variable, and delivering high-quality working software is critical.” While quality is absolutely critical, most of the time we chase tools, trying to automate and maximize value. Automation is often done by dedicated test automation professionals, and there are mixed feelings about its benefits in the long run. Automation is great to begin with and hard to keep and maintain over time: GUI tests produce lots of false positives, tests go out of date while the production code moves on, tests are created and maintained by a few specific individuals on the team, and so on.
A core tenet of Continuous Delivery is to gain the fastest possible feedback on small batches of changes so that the software stays production ready — potentially production ready always. Check out this short blog for some tips to prepare your QAs for the CD journey. Setting up a nurturing environment with an appropriate team/organizational structure is a good first step. However, it takes more than cultural change for traditional QAs. We have to wrap our heads around the larger goal of delivering the product, map the Continuous Delivery value stream, and fit ourselves into the team and the delivery pipeline.
Some fundamental technical changes are needed.
While operating on a separate team, QAs may be used to a separate backlog, working in isolation, running tests nightly, and emailing a test-run report to the dev manager. If you are in something like this state, this is the first change: adopt BDD practices, inspire fellow team members to understand your world first, and ultimately motivate them to contribute to and help with test automation.
There are several good reasons to adopt BDD; here are some critical aspects that helped me:
- Although BDD was originally intended for non-technical and business team members, over time, with the increased availability of tools in almost every language, BDD gained popularity amongst developers. If we are still carrying proprietary scripts or Excel spreadsheets, it won’t work.
- BDD derives business scenarios from the user story and attempts to verify the acceptance criteria. This demands a more collaborative environment and expects everyone to build a shared understanding of the user story even before development starts.
- Declarative vs Imperative
Declarative programming: telling the “machine” what you would like to happen, and letting the computer figure out how to do it.
Imperative programming: telling the “machine” how to do something, step by step, so that what you want to happen will happen.
Choosing the declarative approach helps us understand the user’s intent, communicate more clearly where the business value lies, and hence verify what matters most to the user.
- Stick to the behaviors while designing the automated tests. Implementation design and code might change over time, but intention rarely changes. For example, in the intention above, we might add a confirmation step and expect the user to confirm the odometer change; however, the intention that the user wants to change the odometer reading didn’t change. If we have designed the test aligned with the behavior, a failure signals a real change in behavior, and that’s a positive sign of good automation. Also, behaviors can be reused to stitch end-to-end workflows.
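To make the declarative/imperative distinction concrete, here is a hypothetical pair of scenarios for the odometer intention above (the feature, steps, and values are illustrative, not from a real feature file):

```gherkin
# Imperative: scripted against the UI mechanics; breaks when the screen changes.
Scenario: Update odometer (imperative)
  Given I open the vehicle details page
  When I click the "Edit" button
  And I type "42000" into the "Odometer" field
  And I click the "Save" button
  Then the "Odometer" field shows "42000"

# Declarative: expresses the user's intention; survives UI redesigns.
Scenario: Update odometer (declarative)
  Given a vehicle with an odometer reading of 30000
  When the user updates the odometer reading to 42000
  Then the vehicle's odometer reading is 42000
```

Notice that adding a confirmation dialog later would break the imperative scenario but leave the declarative one intact.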
Rationalized Test Automation
Since QAs are part of the development team (as contributing team members), they have a great opportunity to see through the development process and understand much more closely how something is being built. That means quality experts get an early insider view of the code while it’s in development and, if possible, can pair with the developer and pair-test the code while it’s running locally. Preventing bugs is better than finding bugs.
Second, it presents another great opportunity to choose the best possible way to automate something. An acceptance criterion need not always be automated with outside-in GUI tests:
- It could be verified with unit testing, which is closest to the developer’s heart and offers almost immediate feedback.
- It could be verified with API testing, which is a little more remote to the developer but still an efficient way to verify certain integration and end-to-end behaviors. This is very useful, especially when the verification intent is not related to the user interface (GUI) or user interface behaviors (GUI route changes, CSS changes, element enabling/disabling, etc.).
- It could be verified with GUI testing, but only if it can’t be verified by the two approaches above.
Doing this analysis before automating anything helps in the long run: it drives automation based on intent, keeps the non-deterministic GUI suite small by delegating non-GUI verifications elsewhere, and increases the automation success rate. Read a little more here.
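As a minimal sketch of pushing a check down the pyramid, suppose the acceptance criterion is “a new odometer reading must be greater than the current reading.” The rule itself can be verified as a plain unit test with no browser involved (the function name and rule here are hypothetical, chosen only to illustrate the idea):

```python
# Hypothetical business rule, verified at the unit level instead of via the GUI.
def validate_odometer_update(current_reading: int, new_reading: int) -> bool:
    """Odometer readings can only increase."""
    return new_reading > current_reading

# Unit-level checks give near-immediate, deterministic feedback:
assert validate_odometer_update(30000, 42000) is True
assert validate_odometer_update(42000, 30000) is False
```

A GUI test is then only needed for what the GUI itself adds, such as the field being disabled while a save is in flight.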
It’s possible that the “definition of done” doesn’t include the quality and automation stream of work. Now that quality is built in and quality evangelists are part of the team, we should revisit the definition of done and potentially expand it.
In addition to the existing checkpoints, the list below helped me:
- Environment-agnostic tests – every test should be able to run against any environment without any code change.
- Tests as part of the pipeline – tests should be executed as part of the pipeline; no more running from an IDE, someone’s laptop, or a nightly job. Every deployment should be followed by a suite of tests providing automatic feedback as quickly as possible. Any failure stops the pipeline.
- Broken-pipeline SLA – how long can your pipeline stay broken? A maximum of 2 hours, or whatever works for your team. The longer it stays broken, the longer feedback is delayed.
- Code review – treat test code the same as production code for stable and reliable tests: declarative specifications, abstraction and reuse for locators/page objects, exception handling, self-contained tests (test cases not dependent on each other), handling of async behaviors, and any other principles as appropriate.
- Refactor test code –
  - Refactor tests along with production code
  - Don’t let tests fail; quarantine useless tests
  - Identify useful but brittle tests and treat them as bugs in the backlog
  - Identify useful but brittle tests and run them in a separate suite
- Data – reduce dependency on static test data; set up the test data needed before the test run and clean it up afterwards, and adopt a mechanism to handle environment-specific data as appropriate.
- Visibility – test results/logs/screencasts are transparently available somewhere for the team. Since a failure stops the pipeline, it’s absolutely critical to provide the best possible visibility to the entire team. Check out this and determine if you have the right tooling to support this endeavour.
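The environment-agnostic and data points above can be sketched together: the target environment comes from configuration rather than code, and each test creates and removes its own data. This is only an outline under assumed names — `TEST_BASE_URL` and the commented-out client calls are hypothetical, not a real API:

```python
import os
import uuid

# Environment-agnostic: the target comes from configuration, never from the test code.
# TEST_BASE_URL is a hypothetical variable name; the default is just a local fallback.
BASE_URL = os.environ.get("TEST_BASE_URL", "http://localhost:8080")

def create_test_vehicle() -> str:
    """Set up the data this test needs, with a unique id so tests stay self-contained."""
    vehicle_id = f"test-{uuid.uuid4()}"
    # client.post(f"{BASE_URL}/vehicles", json={"id": vehicle_id})  # hypothetical call
    return vehicle_id

def delete_test_vehicle(vehicle_id: str) -> None:
    """Clean up after the test so no static data lingers in the environment."""
    # client.delete(f"{BASE_URL}/vehicles/{vehicle_id}")  # hypothetical call
    pass

vehicle_id = create_test_vehicle()
try:
    assert vehicle_id.startswith("test-")  # the real test body would go here
finally:
    delete_test_vehicle(vehicle_id)       # cleanup runs even if the test fails
```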
These are some of the practices that helped us adapt to the change and increase confidence in automation. More specific technical aspects are coming in the next write-up.