Regardless of the kind of app we are building, and regardless of the architecture and tech landscape, almost every developer will point out the same facts about GUI/browser tests: they are brittle, slow, non-deterministic, and so on.
The test automation pyramid is well known across the community, and integration testing at the API level is a great rescue vehicle in many ways. Unit tests are great and provide the fastest possible feedback on code changes. The trade-off, however, is that true unit tests are mocked, so the feedback they offer excludes contract changes and broken integrations. When we deploy new bits, we don't want our users to be the first testers, so how do we verify the integrations most efficiently? With this intention, I've used various specialty tools and some unit testing frameworks in the past, but sadly none of them saw organic adoption among the developers in cross-functional agile teams.
Recently, I explored Taurus for load and performance testing. Here is my previous post on why Taurus could be a great contender on this stage.
For example, let's take a use case. I'm working on a story related to stock symbol search indexing on the Google Finance website. The change is in the back-end indexing algorithm. Although it is not directly related to the GUI, my end users will see its impact in the GUI. My intention is to verify that the search still returns the list of matching stocks from the USA and Mexico, with a minimum of two options, before proceeding too far.
This can be verified from the GUI; however, to make it faster and possibly more reliable, mimicking what the UI does by calling the API directly makes more sense, since our intention is not related to visibility or UX changes. Of course, we could use unit testing frameworks and many other tools.
Let's see if Taurus can help here. My intention is to make this API call and assert whether data comes back for the different exchanges; the script below does that.
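Here is a minimal Taurus sketch of such a check. The search endpoint URL, query parameters, and response field names are assumptions for illustration only; substitute whatever API your UI actually calls. The `assert` and `assert-jsonpath` blocks are standard Taurus request-level assertions.

```yaml
# Minimal Taurus smoke test for the symbol search API.
# NOTE: the URL and response contents below are assumed for illustration.
execution:
- executor: jmeter
  concurrency: 1
  iterations: 1
  scenario: symbol-search

scenarios:
  symbol-search:
    requests:
    - url: https://finance.google.com/finance/match?matchtype=matchall&q=AAPL
      label: symbol-search-AAPL
      assert:
      - subject: body        # assert on the response body
        contains:            # expect both a US and a Mexican exchange in the matches
        - "NASDAQ"
        - "MEX"
      assert-jsonpath:
      - jsonpath: "$.matches[1]"   # a second element exists, i.e. at least two options
```

Run it with `bzt symbol-search.yml`; if any assertion fails, the samples are marked as failed in the results.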
One of the good and bad aspects of automation is its "cost": we pay more in maintenance than in creation. The care and feeding when a test fails, especially when the team is charging fast toward a deadline, means these tests have to be easy to fix and maintain. Quite honestly, this is where all the other tools fail in my experience.
Taurus makes it easier in many ways.
- It is narrative and readable; when we source control the tests, they go through the same collaborative code process (pull requests) and are kept up just like any other production code
- It has JUnit XML reporting that can be parsed by almost any tool, improving visibility
- The BlazeMeter reporting integration is a great plus for keeping track of history and trends and seeing when a test started failing
- It generates the artifacts that were used for test execution, and errors.jtl is very useful for first-hand analysis
- At any point, run with the -gui option to pop up the JMeter GUI and debug the test
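The JUnit XML and BlazeMeter reporting mentioned above are enabled through Taurus reporting modules. A sketch, where the output filename and report name are assumptions:

```yaml
# Reporting modules: JUnit XML for CI parsers, BlazeMeter for history/trends.
reporting:
- module: junit-xml
  filename: report/taurus-report.xml   # assumed path; most CI tools can pick this up
- module: blazemeter
  report-name: symbol-search smoke     # assumed name, shown in the BlazeMeter UI
```

The BlazeMeter module requires an API token configured in your Taurus settings; without it, `bzt -report` can still open an anonymous report link.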
Finally, an important aspect of any test automation is providing fast and reliable feedback. A test should fail when it is supposed to fail and stop the build from getting promoted. It is fairly easy to integrate this with any CI/deployment tool such as TFS, Jenkins, or Bamboo and pass or fail the deployment based on the pass/fail criteria.
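The gating itself can be expressed with the Taurus `passfail` module; when a criterion marks the run as failed, `bzt` exits with a non-zero code, which is what makes the CI step fail. The response-time budget below is an assumed number for illustration:

```yaml
# Pass/fail criteria: a failed criterion makes bzt exit non-zero,
# so the CI/deployment step fails automatically.
reporting:
- module: passfail
  criteria:
  - failures>0%, stop as failed          # any assertion failure fails the run
  - avg-rt>500ms for 10s, stop as failed # assumed latency budget for the API
```
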
This is one of the ways to control the overuse of GUI testing. Give it a shot and enable accelerated development.