Improve Test Automation visibility to succeed in Agile Quality transformation

One of the main success factors of Agile software development is that it makes everything terribly visible. If you walk into the area of a development team that’s practicing Agile, it’s highly likely you’ll see burn-down charts, visible dashboards, monitors showing the backlog, sprint progress, and blockers, and of course a list of tech debt items the team is currently working on or wants to knock down soon.

For Quality practices to be successful and to benefit overall agility, visibility is one of the key success factors too. In another blog post, I mentioned preparing your QAs for Agile practices. Knocking down separate QA/Perf/Security teams is the first step towards building a self-sufficient product team that can deliver concept to cash. However, once the team is formed, it’s absolutely critical to manage the visibility of the backlog.

For example, below is the task breakdown of a story, including its automation tasks, kept highly visible as part of the backlog.

[Screenshot: a story’s task breakdown, with test automation tasks visible alongside development tasks]

  • Visibility is critical to make fellow team members aware of the tasks.
  • Visibility will encourage them to pull the test automation tasks even though it might not be their specialty.
  • Visibility will improve collaboration between developers and those who are trying to make the transition from a conventional QA role.
    • For example, one of the rules of thumb in test automation is “DRY” tests: Don’t Repeat Yourself. It’s crucial to analyze each and every test we plan to automate and answer this one question: “Are we automating the right thing, and by applying the right technique?” If something can be automated at the unit test level, that’ll offer much faster feedback, love, and care from everyone. And if something is already unit tested, should we repeat it in another form of test, such as an integration or GUI-level surface test?

Furthermore, if you practice Kanban, WIP limits can come in handy when the team attempts to commit to more development tasks before completing a story that meets the DoD.

Once everything becomes part of the same backlog (remember, no separate QA/Automation backlog) and collaboration kicks in, it’s vital to elevate visibility around the state of the union. Automated tests should be part of the delivery pipeline and highly visible to each and every team member across the globe. When tests run as part of the pipeline and a failure stops the build from being promoted, they are going to gain more attention.

For example, below is a snapshot of a test execution dashboard.

[Screenshot: test execution dashboard]

A little note about tooling here: regardless of the frameworks in use (SpecFlow + xUnit, Jasmine tests, JMeter tests, Selenium tests, what have you), every one of them offers a way to gather the test run logs and plot some sort of chart showing the results.
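For instance, most runners can emit (or be converted to) JUnit-style XML, which is easy to aggregate into the numbers behind such a chart. A minimal sketch in C# follows; the file path is illustrative and not tied to any particular dashboard:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

// Minimal sketch: total up pass/fail counts from JUnit-style XML result
// files. Point the path at wherever your runner writes its output.
class ResultSummary
{
    static void Main()
    {
        var doc = XDocument.Load("results/junit-results.xml"); // hypothetical path
        var suites = doc.Descendants("testsuite");

        int total  = suites.Sum(s => (int?)s.Attribute("tests") ?? 0);
        int failed = suites.Sum(s => ((int?)s.Attribute("failures") ?? 0)
                                   + ((int?)s.Attribute("errors") ?? 0));

        Console.WriteLine($"Passed: {total - failed}, Failed: {failed}");
        // feed these numbers to whatever dashboard/charting tool your team uses
    }
}
```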

A more interesting visual could be a histogram showing the success/failure of tests over time, so that the team gains confidence by seeing more successes.

[Chart: histogram of test success/failure over time]

Plot the time taken for each test run, so the team can analyze it and get smart about shortening the execution time.

[Chart: time taken per test run]

A flakiness index (a sort of failure rate over time) might help connect failures to some global event that affected the app even when there was no change on our side.

[Chart: flakiness index (failure rate over time)]
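There is no standard formula for a flakiness index; one simple interpretation (an assumption, not an industry definition) is the failure rate of a test over its recent runs:

```csharp
using System.Collections.Generic;
using System.Linq;

// One way to express a "flakiness index": the failure rate of a single
// test over its recent runs. The run records here are hypothetical.
public static class Flakiness
{
    // true = pass, false = fail, ordered oldest to newest
    public static double FailureRate(IReadOnlyList<bool> recentRuns) =>
        recentRuns.Count == 0
            ? 0.0
            : recentRuns.Count(passed => !passed) / (double)recentRuns.Count;
}

// e.g. FailureRate(new[] { true, true, false, true, false }) == 0.4
```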

Finally, provide a way to do easier test failure analysis without having to wait for the expert to come in from the other side of the planet: something like a test execution screencast or a log link from Sauce Labs, BlazeMeter, Visual Studio Online, etc.

(more detailed logs are behind each line in the highlighted area below)

[Screenshot: dashboard with detailed logs linked from each highlighted row]

If we start practicing some of these techniques, the Agile Quality transition becomes more visible; the next step is to identify obstacles, make them visible too, and keep pushing forward.

Do Testers need to be Developers to support Continuous Delivery and Agility?

In the previous section, I explained the need to prepare traditional QAs for Agile and Continuous Delivery teams. However, one question will keep bubbling up for some time, until everyone accepts the transition: what is the role of testers in Agile teams? Do testers need to become developers? Unless you are a startup, you likely have QAs around the organization who have been providing test automation as a service; what to do with them now? And so on.

There are many books (Agile Testing, More Agile Testing) that carry insightful discussion and experiences from experts on this subject. Whenever this topic comes up for discussion, I fall back to one of my favorite articles by Rob Lambert on ‘T’-shaped testers.

While that sounds more like a people and talent management concept, it makes a whole lot of sense when you want to mature towards self-organizing Agile teams that can deliver a high-quality working product at a sustainable pace.

Why is it critical to have ‘T’-skilled testers as part of the team?

First of all, it fosters collaboration across disciplines. Deep expertise in one area ensures that the team has an expert in that area. However, for the expert to communicate their point across, it requires basic understanding and broad knowledge in areas other than their specialty.

A team member who comes from a QA background has deep expertise in approaching quality and in test automation design patterns and practices. However, it adds value if that person understands the development world to some degree: a basic understanding of web development (if relevant), the ability to scan through unit tests and understand what’s covered at that layer and how, basic database concepts, DevOps understanding, etc.

Similarly, a team member who is passionate about development has deep development expertise; however, it adds value if that person understands testing techniques and test automation practices to some extent.

Let’s take an example. Recently I came across a situation where we were automating some test cases for a single-page application (SPA). Unlike a traditional website, an SPA takes longer to load initially because it loads all the JavaScript code once; any further action in the UI just fetches updated data from the backend to re-render parts of the screen. The URL is less likely to change. There can be several actions on the same page, and depending on user actions there can be network activity to fetch new data and update the screen. Given the nature of the application, it promotes composition at the GUI level, meaning a page might contain several components, each with its own lifecycle. Let’s say the page we were automating was composed of two different components, and the test case needed both components to be completely available before going forward with the next step.

We started automating with the default WebDriver timeout of 60 seconds: the expectation was that Angular finishes rendering and has no outstanding HTTP requests within 60 seconds, otherwise the test fails. This worked well for a few days. Suddenly, tests started failing, and resolving the issue highlighted the need for ‘T’-skilled QA team members.
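Roughly, that global readiness check looked like the sketch below (C# Selenium bindings driving the AngularJS 1.x $http service; the exact project code isn’t reproduced here):

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public static class AngularWaits
{
    // Wait until AngularJS (1.x) reports no outstanding $http requests.
    // This is the "global" readiness check that broke once long-polling
    // kept one request permanently open.
    public static void WaitForNoPendingHttp(IWebDriver driver, int timeoutSeconds = 60)
    {
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeoutSeconds));
        wait.Until(d => (bool)((IJavaScriptExecutor)d).ExecuteScript(
            "return window.angular.element(document.body)" +
            ".injector().get('$http').pendingRequests.length === 0;"));
    }
}
```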

Approach 1

One way to solve the problem is to increase the timeout from 60 to 90 seconds, or to sprinkle in some Thread.Sleep() calls. Tests started passing but would still occasionally fail. This is exactly why developers don’t subscribe to GUI tests: the non-deterministic nature of this testing technique. Why did it pass with a 60-second timeout, why does it pass with 90 seconds, and why does it fail for no apparent reason?

Approach 2

We took a pragmatic approach and went to see what had changed in that component and why it started failing. It turned out that component two had introduced long-polling. Long-polling? Yes: they preferred a persistent, long-lasting HTTP connection between the client-side component and their chat server. So the root cause was that the test was waiting for all HTTP connections to resolve within 60 seconds, and they never resolve while long-polling is in play. Regardless of whether we extend the timeout or put in some sleep(), the issue will persist.

Approach 2 surfaced the real problem and led to further discussions on both sides. It needed a tester who could go below the GUI surface, understand how the application code works, and work with developers to find a long-term fix: a classic ‘T’-skilled tester. Developers started thinking about whether they should adopt a timeout technique in the production code instead of long-polling, or whether there is a better technique. The testing side of the house started thinking about whether it’s realistic to expect all HTTP calls to finish, or whether to narrow the wait down to the scope of the specific component that the automation needs. Ultimately, healthier discussions between team members rather than blaming each other.
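For illustration, a component-scoped wait might look like the following sketch. The locator and the readiness predicate are assumptions; define what “ready” means together with your developers:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

// Instead of waiting for *all* outstanding HTTP requests to resolve
// (which never happens once long-polling is in play), wait only for
// the specific component the test actually needs.
public static class ComponentWaits
{
    public static void WaitForComponentReady(IWebDriver driver, By componentLocator,
                                             int timeoutSeconds = 60)
    {
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeoutSeconds));
        // Readiness here = the component's root element is rendered and visible.
        // Adjust the predicate to whatever "ready" means for your component,
        // e.g. a data attribute the developers set once rendering completes.
        wait.Until(d =>
        {
            var elements = d.FindElements(componentLocator);
            return elements.Count > 0 && elements[0].Displayed;
        });
    }
}

// Usage (the CSS selector is hypothetical):
//   ComponentWaits.WaitForComponentReady(driver, By.CssSelector("[data-chat-widget]"));
```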

On one hand it might feel like these practices attract generalizing specialists, running the risk of diluting strengths: instead of mastering one thing, we tend to touch everything. But the idea is to use specialists’ strengths to succeed in an accelerated delivery cycle.

  • If a tester can evangelize quality practices in a manner that other fellow team members can support and follow, that’s a win-win for everyone.
  • Fosters better collaboration amongst the team members.
  • Promotes empathy for each other and mutual respect. Cooperation not competition.
  • Invites developers and other team members to step up and help when there is a need instead of saying ‘automation is not my thing’.
  • Enables everyone to think about improving quality and preventing bugs.

Consider building ‘T’-shaped skills in your teams.

At (the very) least, why would it make sense to automate your tests?

While there is a lot of attraction towards continuous deployment, deploying to production 37 times a day, and so on, the area of test automation still seems to get less attention than it deserves. Most places adopting Agile can suffer from a lack of drive and ownership of quality and test automation during the transition, especially while QAs are transitioning to be Agile team members. From what I’ve seen, those QAs who embrace the change are likely to gravitate towards product development and other tasks, and the team might get to a point where no one evangelizes quality. It’s much harder in places with a legacy product in the market. The reasons may be many: our PO wants new features, we have to deal with tech debt, a lot of energy goes into support and bug fixes, our developers hate test automation (it’s a QA thing), we found a new tool and want to try it out, no proper tooling guidance or integration, and so on. But the impact will hit us in the face sooner rather than later.

I collaborate with at least 100 developers on a regular basis and notice only around 5% of them talking about practices like BDD and TDD. The rest of the crowd doesn’t necessarily hate automating their verifications; rather, their professional passion and interest is not in test automation. After all, AngularJS holds a better future for developers than Selenium WebDriver automation. However, delivering high-quality working software needs quality checks and verification. If we want to align with a faster delivery cadence, releasing once every two weeks, those verifications need to be automated mindfully.

Automation has been around for ages, the tooling space has evolved so much, and there are plenty of reasons why automation makes sense. But still, those don’t seem to motivate Agile teams. If you are one of those team members who would prefer to pass the automation task on to someone else, please continue reading. IMHO, at the very least, automated tests will help in the situations below.

Pull Request shivering

Geographically distributed teams can’t be avoided in software development; even within the same team, members are spread across the globe. Nowadays we develop a lot of small components and libraries and put them out for consumption. A great example: NPM carries 163,000+ packages, libraries, and reusables. Microservices are a buzzword among many. These situations drive us towards small, self-contained code repositories and potentially a lot of runtime dependencies between them. When more than one developer needs to collaborate, the pull request (PR) has become a must-have.

  • Does the pull request send shivers down your spine?
  • When a PR comes in, how confident are you in accepting that code change?
  • How do you verify the code quality of the incoming PR? How would you verify whether it conforms to your coding standards and wouldn’t degrade your current code quality?
  • How do you verify that the new PR wouldn’t break current functionality?
  • The code base naturally grows and multiple people contribute; is there any way to know the current state of a given functionality?

Painful Releases

If it’s a small product, being developed by a few co-located developers, all the team members might stay in sync and follow the same routine steps to verify releases. As product complexity increases, this becomes a repeated, time-consuming activity. When the product grows and more people contribute, it demands manual procedures and documentation to make sure everyone checks the mission-critical functionality the same way in order to certify a release. While human verification is valuable, we can delegate some error-prone, multi-step, complex processes to the machine.

  • Is your post-deployment verification process becoming a nightmare?
  • Are you spending a lot of time verifying each deployment?
  • Are you unsure whether the new build broke mission-critical functionality?
  • Are you unsure whether the new build degraded performance?
  • Are you catching significant bugs after the release when they could have been fixed earlier?
  • Do you notice inconsistency in the verification process, with every team member following different steps?

If your current level of automation is low, you are likely suffering from these symptoms. If you are in this situation, it’s also a great opportunity to showcase your technical leadership and level up your team. The recipe is not that hard; here are a couple of easy recommendations.

Sonar analysis, quality gates, and quality metrics attached to the Pull Request

At a minimum, I would suggest using SonarQube. Sonar is a flagship open-source tool that adds tremendous value in understanding your current code quality metrics and improving towards your target goal. Sonar can analyze the codebase and provide various kinds of code quality feedback, can be integrated with most CI build systems to run code analysis upon check-in, and can be integrated with source control to act as an indicator of code quality alongside the pull request. For example, the Sonar + Stash PR integration is very handy for code reviews.

Consider writing some tests (unit and/or integration) that can be executed as part of the CI process. Sonar offers Quality Gates that help enforce standards and fail the CI build when the desired level of code quality is not met.
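If you’re wiring this up, a minimal sonar-project.properties sketch might look like this; the keys are standard SonarQube analysis properties, while the values are placeholders:

```properties
# Minimal, illustrative sonar-project.properties.
# Keys are standard SonarQube analysis properties; values are placeholders.
sonar.projectKey=my-product
sonar.projectName=My Product
sonar.sources=src
# Hypothetical server URL; point at your own SonarQube instance
sonar.host.url=http://sonar.example.local:9000
```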

Smoke Test Suite

The level of automation (unit, API, GUI) doesn’t matter. Patterns and practices such as BDD, TDD, and ATDD don’t matter either. It makes complete sense to identify the mission-critical flows of your application and start developing a smoke test suite. If your app is e-commerce, search is a must-have; checkout is a must-have. I would at least build a smoke test suite and work towards integrating it with the delivery pipeline. Shoot for high-confidence, faster, repeatable, and consistent releases.
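As an illustration, a couple of smoke scenarios in SpecFlow-style Gherkin might look like the sketch below; the feature, tags, and step wording are hypothetical:

```gherkin
# Illustrative smoke scenarios for an e-commerce app; steps are hypothetical.
Feature: Smoke suite

  @smoke
  Scenario: Search returns results
    Given I am on the home page
    When I search for "running shoes"
    Then I see at least one product in the results

  @smoke
  Scenario: Checkout completes
    Given my cart contains a known test product
    When I complete checkout with a test payment method
    Then I see an order confirmation
```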

With increased modularity and a library-sharing ecosystem, such tests also help consumers adopt your library with confidence. I generally don’t pull down a library from GitHub unless I see its tests passing.

Gradually fit Quality into the Continuous Delivery pipeline

In the last few posts, I discussed the significance of creating an appreciative environment for a successful CI-to-CD transition, and shared some tips on preparing your QAs for the Agile and CD transition. I thought it would be nice to post this short write-up showing what it takes to gradually plug quality into the CD pipeline.

[Diagram: fitting Quality into the CD pipeline as a series of yes/no maturity questions]

Each question expects a yes/no answer. If your answer happens to be “No”, that is probably the most logical next step in your evolution. Each step asserts certain qualities of CD.

For example,

If the answer to “Quality discussions are part of sprint planning” is “No”, then it’s unlikely that the whole team owns quality; maybe someone on the team cares about quality and creates and maintains the test automation alone. The recommended next step is to elevate quality awareness within the team.

If the answer to “QA/test automation tasks are part of the story DoD” is “No”, then your story may not be truly complete and ready for a production push. It’s also possible that a separate automation backlog is running in parallel with the sprint stories. The recommended approach is to visualize the testing/automation needs as part of the planning process and implement them as part of the story. To enforce this, you might need to redefine the DoD as appropriate.

Of course, we need to combine a similar maturity model for other aspects, such as code quality, unit testing, and build and deployment, along with these quality practices, and improve the system as a whole to realize tangible benefits. As we progress, it will be interesting to measure the lead time to deliver, the feedback loop, and production readiness.

Sustainable Test Automation practices for Agile teams

High-quality working software wins! Whether we have embraced Agile or not, test automation is critical for successful delivery. But the Agile movement elevates the need for a rationalized automation effort, with the intent of boosting team confidence as code traverses the different stages in the pipeline. It’s fairly easy to get started on test automation with the zillion tools, guides, and samples around us. However, one of the common problems is: how do we sustain the value of automation over time?

IMO, some common smells:

When adopting Agile, automated acceptance tests are a common pattern: the acceptance criteria verification is automated alongside the code, as part of the user story’s definition of done.

  • Each story spawns a few acceptance tests, and over time you end up running hundreds of tests.
  • Six months ago the acceptance tests ran within 2 minutes and provided great value and fast feedback, but now they run for several minutes.
  • At the beginning we had cycles to keep up with the automated tests, but over time we got busy delivering features and couldn’t keep the tests up to date. There are too many tests, because we automated test cases with every story; naturally, story-level tests fail when we change the implementation or refactor code, and if we can’t keep up with the automation, the percentage of failing tests increases.
  • Technology changes: with the growth of community-contributed open source, new tools and technologies surface very often, and as developers we get attracted to them. But the organization invested in some technology that is obviously old now (if not outdated); my developers prefer this new tool; we are rewriting everything and need dedicated automation stories or sprints.

If these sound familiar, keep reading.

Too many Acceptance Tests

First off, if you have quantified, testable acceptance criteria as part of the story, that’s good. If you are trying to automate the acceptance criteria, that’s even better. At this point, the process of automating them might need a checklist to rationalize the effort, because every automated test case requires care and maintenance as we continue to build more software:

  • Is it essential to automate? What is the value of automating vs. not automating?
  • How does it map to a user behavior?
  • Can this be automated with unit testing or API testing before writing an end-to-end test?

If it is necessary to automate this behavior and impossible to accomplish with unit tests (because it might need to talk to the API tier, messaging, the database, etc.), determine:

  • What is the intention of the verification?
    1. Are we trying to verify the data that gets returned from the backend or other external services?
    2. Are we trying to verify the response payload, schema, and content type before processing?
    3. Are we trying to verify the message payload and data before processing?
    4. Are we trying to verify user interface behaviors, like enabling/disabling UI elements?

Once the intention is clear, determine the right technique to automate it:

  • If it’s data related (1-3 above), prefer API testing. API and service contracts are less likely to change than the GUI, so we are likely to see better stability. These tests also run a lot faster and offer quicker feedback as part of your delivery pipeline (see the sketch after this list).
  • If it’s GUI behavior related:
    • Zoom out, understand the end-user journey, and see how to maximize the automation benefits.
    • Think about the user journey and the intentions around the user workflow: what would the user do before getting to this new feature, and where would the user go after using it? This forms a logical journey from the end-user perspective, helping us join behaviors together, verify the GUI behavior from the end user’s perspective, minimize test cases, and maximize value.
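As promised above, here is a sketch of an API-level check in C# with xUnit; the endpoint URL and assertions are hypothetical placeholders for whatever contract you need to verify:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

// Illustrative API-level verification; URL and payload shape are hypothetical.
public class PostsApiTests
{
    private static readonly HttpClient Client = new HttpClient();

    [Fact]
    public async Task GetPosts_ReturnsOkWithJsonPayload()
    {
        var response = await Client.GetAsync("https://api.example.local/v1/posts");

        // Verify status and content type before the GUI ever gets involved.
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        Assert.Equal("application/json",
                     response.Content.Headers.ContentType.MediaType);

        var body = await response.Content.ReadAsStringAsync();
        Assert.False(string.IsNullOrWhiteSpace(body));
    }
}
```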

So, what if it’s too many GUI tests?

  • For example, let’s assume your story is to develop “show blog views by geography” for WordPress. This new feature gets added to the Stats page, and the PO considers it one of the critical features that 20% of users (the author persona) will use 80% of the time, so it needs to be part of the smoke suite. Since the acceptance criterion is to verify views by geography, and the map is viewed in the GUI, it calls for GUI automation.
    • Intention: The Author wants to view visitors by geography in the last week
    • Steps:
      1. Log in to the app as an Author
      2. Navigate to the Stats section
      3. Select the filter by weeks
      4. Verify whether data shows up by “Countries”
      5. Verify whether the map matches the country list below it

This is a good test; it helps verify the intention. However, over time, as we add more and more features like this, the number of tests grows and becomes a maintenance challenge.

Instead, a better approach is to fold this verification into an existing flow, for example a CRUD flow.

For example, a CRUD flow:

Intention: The Author wants to create a blog post, view posts, verify visitors by geography, and verify that he can delete the post.

Steps:

1. Log in to the app as an Author
2. Create a post
3. Generate views (use the views APIs or generate fake views from different geographies)
4. Navigate to the Stats section
5. Select the filter by weeks
6. Verify the “Countries” list
7. Verify the map
8. Delete the post

Steps 1-3 are the C of CRUD (create). This is the setup part of the workflow: every time, before verifying the core functionality, we put the system under test into a known state.

Steps 4-7 are the R of CRUD (read). This is where we verify the map functionality. Since we know the visits we generated in the previous steps (the setup), it’s easy to verify the expected outcome.

Step 8 is the D of CRUD (delete). This cleans up the data created by the test.
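Expressed as a SpecFlow-style Gherkin scenario (a sketch; the step wording is hypothetical), the workflow reads as a set of reusable behaviors:

```gherkin
# Sketch of the CRUD workflow above; step wording is hypothetical.
Feature: Blog stats by geography

  Scenario: Author creates a post, reviews visitors by geography, and cleans up
    Given I am logged in as an Author
    And I have created a post
    And views have been generated for the post from several countries
    When I open the Stats section and filter by week
    Then the "Countries" list reflects the generated views
    And the map matches the country list
    When I delete the post
    Then the post is no longer listed
```

Note how the Given steps form the create/setup phase, the first When/Then pair is the read/verify phase, and the final steps are the delete/teardown, matching the CRUD mapping above.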

Benefits

Based on this example:

  1. One workflow instead of several micro-focused small tests.
  2. We verify a meaningful workflow composed of several activities that an end user would normally perform.
  3. The test is self-contained (setup, activities, teardown), so it can run in any environment.
  4. If we design tests aligned with behaviors, the same behaviors can be reused to construct more workflows (see the diagrams below).
  5. Since tests are self-contained, they can run in parallel for faster feedback.

These practices help us keep the number of test cases to a minimum, maximize reusability, and increase the feedback cadence. Overall, they maximize the value of automation, which motivates Agile teams to keep up the investment.

[Diagrams: behaviors reused to compose multiple end-to-end workflows]

Quality practices that can accelerate your Continuous Delivery journey

My leader reiterates this every day: “Quality is never a variable, and delivering high-quality working software is critical.” While quality is absolutely critical, most of the time we tend to chase tools, try to automate, and hope to maximize the value. Often, automation is done by test automation professionals, and there are mixed feelings about automation benefits in the long run: automation is great to begin with but hard to keep and maintain over time. Lots of false positives from the GUI tests; tests are out of date; unlike the production code, tests are created and maintained by specific individuals in the team; and so on.

A core tenet of Continuous Delivery is to gain the fastest possible feedback on a small batch of changes so that we can keep the code production ready, potentially always. The final section of this post shares some tips on preparing your QAs for the CD journey. Setting up a nurturing environment with the appropriate team and organizational structure is a good first step. However, it’s more than a cultural change for traditional QAs. We have to wrap our heads around the larger goal of delivering the product, map the Continuous Delivery value stream, and fit ourselves into the team and the delivery pipeline.

Some fundamental technical changes are needed.

BDD

While operating in a separate team, QAs may be used to a separate backlog, working in isolation, running tests nightly or so, and sending a test run report by email to the dev manager. If you are somewhere in this state, this is the first change: adopt BDD practices, inspire fellow team members to understand your world first, and then ultimately motivate them to contribute and help with test automation.

Why BDD?

There are several good reasons to adopt BDD; here are some critical aspects that helped me:

  • Although BDD was intended for non-technical and business team members, over time, with the increased availability of tools for any developer in any language, BDD gained popularity among developers. If we are still carrying some proprietary scripts or Excel spreadsheets, it won’t work.
  • BDD derives business scenarios from the user story and attempts to verify the acceptance criteria. This demands a more collaborative environment among team members and expects everyone to keep up a shared understanding of the user story even before development.
  • Declarative vs. imperative specifications.

Declarative programming: telling the “machine” what you would like to happen, and letting the computer figure out how to do it.

Imperative programming: telling the “machine” how to do something, and as a result, what you want to happen will happen.

By choosing the declarative approach, we understand the user’s intent, communicate more clearly where the business value is aimed, and hence verify what matters most to the user.

For example,

[Screenshot: declarative vs. imperative spec for changing an odometer reading]
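The original screenshot isn’t reproduced here; a sketch of the kind of contrast it showed, using the odometer example discussed below, might look like this (the step wording is hypothetical):

```gherkin
Feature: Odometer reading

  # Imperative style: coupled to UI mechanics, brittle when the UI changes
  Scenario: Change odometer reading (imperative)
    Given I click the "Vehicle" menu
    And I click into the "Odometer" field
    And I clear the field and type "42000"
    And I click the "Save" button
    Then the "Odometer" field displays "42000"

  # Declarative style: states the intent, leaves the "how" to step definitions
  Scenario: Change odometer reading (declarative)
    Given a vehicle with an odometer reading of 41000
    When the user updates the odometer reading to 42000
    Then the vehicle shows an odometer reading of 42000
```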

  • Stick to the behaviors while designing automated tests. Implementation design and code might change over time, but intention rarely changes. For example, in the above intention, we might add a confirmation and expect the user to assert the odometer change; however, the intention (the user wants to change the odometer reading) didn’t change. If we designed the test aligned with the behavior and it fails upon that code change, that’s a positive sign of good automation. Also, behaviors can be reused to stitch together end-to-end workflows.

Rationalized Test Automation

Since QAs are part of the development team (as contributing team members), they get a great opportunity to see through the development process and understand how something is being developed much more closely. That means quality experts get an early insider view of the code while it’s in development and, if possible, can pair with the developer and pair-test the code while it’s running locally. Preventing bugs is better than finding bugs.

Second, it presents another great opportunity to understand the best possible way to automate something. An acceptance criterion need not always be automated with outside-in GUI tests.

  • It could be verified with unit testing, which is closest to the developer’s heart and offers almost immediate feedback.
  • It could be verified with API testing, which is a little more remote to the developer but still an efficient method to verify certain integration and end-to-end behaviors. This is very useful, especially when the verification intention is not related to the user interface (GUI route changes, CSS changes, element enabling/disabling, etc.).
  • It could be verified with GUI testing, but only if it can’t be verified by the above two.

Doing this analysis before automating anything helps in the long run: it drives the automation based on intention, smartly manages the non-deterministic GUI tests by delegating non-GUI verifications elsewhere, and increases the automation success rate.

Redefine DoD

It’s possible that your “definition of done” doesn’t include the quality and automation stream of work. Now that quality is built in and quality evangelists are part of the team, we should revisit the definition of done and potentially expand it.

In addition to the existing checkpoints, the list below helped me:

  • Environment-agnostic tests – every test should be able to run against any environment without any code change (see the sketch after this list).
  • Tests part of the pipeline – tests should be executed as part of the pipeline; no more running from an IDE, someone’s laptop, or a nightly job. Every deployment should be followed by a suite of tests providing automatic feedback as quickly as possible. Any failure stops the pipeline.
  • Broken pipeline SLA – how long can your pipeline stay broken? Max 2 hours, or whatever works for your team. The longer it stays broken, the longer the feedback is delayed.
  • Code review – treat test code the same as production code for stable and reliable tests: declarative specifications, abstractions for locators/page objects and reuse, exception handling, self-contained tests (test cases not dependent on each other), async behaviors, and any other principles as appropriate.
  • Refactor test code –
    • Refactor tests along with production code.
    • Don’t let tests fail; quarantine useless tests.
    • Identify useful but brittle tests and treat them as bugs in the backlog.
    • Run useful but brittle tests in a separate suite.
  • Data – less static test data dependency; set up the test data needed before the run and clean it up afterwards; adopt a mechanism to handle environment-specific data as appropriate.
  • Visibility – test results/logs/screencasts are transparently available for the team. Since we stop the pipeline on failure, it’s absolutely critical to provide the best possible visibility to the entire team, with the right tooling to support this endeavour.
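As referenced in the first bullet, a minimal sketch of environment-agnostic configuration in C# (the variable name and default URL are assumptions, not a prescribed convention):

```csharp
using System;

// Resolve the target environment at run time, so the same compiled tests
// run against dev, staging, or prod-like environments without code changes.
public static class TestConfig
{
    public static string BaseUrl =>
        Environment.GetEnvironmentVariable("APP_BASE_URL") // hypothetical variable
        ?? "http://localhost:5000";                        // safe local default
}

// Usage in a test: driver.Navigate().GoToUrl(TestConfig.BaseUrl + "/stats");
```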

These are some of the practices that helped us adapt to the change and increase confidence in automation. More specific technical aspects will be the next write-up.

Prepare your QAs for the Continuous Delivery transition

Continuous Delivery (CD) as a concept has been around for several years now, and many enterprises are trying to adopt it to offer a competitive advantage to the business. Wondering what that competitive advantage is? It helps to understand the difference between continuous integration, continuous delivery, and continuous deployment. In essence, Continuous Delivery attempts to keep the code production ready with every single change, and expects the business to PULL the latest business value that engineering has been working so hard on. It’s a blessing in disguise if both product and technology can understand and implement it.

In this journey, your whole team will go through a variety of cultural and technical changes. One of the most critical adjustments is around the quality side of the house, and this post is an effort to share some of my exposure and experience in preparing and transitioning your QAs to be Agile team members.

I’m positive everyone has started practicing Scrum or whatever works for them. However, look closely at how the work actually gets done, meaning how a story is developed and delivered.

How many streams of work happen here?

How many teams does it span?

How is each stream of work backlogged and managed?

Does it look something like this?

[Diagram: separate development, test automation, performance automation, security, and ops work streams]

If we look carefully at the picture above, there are at least five streams of work (development, test automation, perf automation, security testing, and ops). The streams of work themselves are not the problem. But many organizations have either outsourced parts of these to a service provider or created an internal Center of Excellence to help product teams, and the product teams consider themselves just “developers”. This might have helped the business until yesterday, but it won’t work in the future.

Given the situation, these streams of work are probably managed by different parts of the organization. Each organization involved might have adopted Agile, running sprints, backlogs, daily standups, retros, etc., but not as ONE team from user story through release. This can cause several issues: the development team throws the bits over the wall and moves on; the automation team doesn’t have enough context to verify the change in the most efficient manner; sometimes the build waits on QA availability, with similar concerns for performance and security. Overall, it slows down progress, quality suffers, and there is a lot of back and forth between the teams. If we want to keep every single change production ready, all the necessary streams of work have to happen as part of the story, without delay. Everyone must march towards the same goal.

This is one of the most remarkable changes that has to happen. Separate quality teams (functional test automation, performance automation, security, etc.) must go away, and cross-functional teams should be nurtured. No doubt every stream of work is a specialty, driven by passion: we can’t expect an iOS developer to be the automation person who Puppet-izes environment creation, and we can’t expect the automation tester to be an equivalent developer, and so on. However, having the essential experts as part of the team is necessary to deliver the work. Everyone drives towards one sprint goal, including the quality experts.

  • Ex-QAs are equal team members, passionate about quality just like the other JavaScript enthusiasts in the team.
  • They should participate in all team ceremonies and act as the quality evangelists for the product.
    • In particular, play a vital role in sprint planning, fight for the necessary quality work to be part of the story, extend the DoD, and work towards the true DONE-ness of each story.
    • Pair with developers and help identify bugs before the build is out.
    • Follow BDD (or any other approach) and help cover all possible business scenarios through the appropriate automation (unit testing, API testing, GUI testing).
  • With increased understanding of the product’s development and background, minimize the number of automated test cases while maximizing automation reliability and coverage.
  • Integrate automation with the Continuous Delivery pipeline and enable the fastest possible feedback for everyone on the team.
  • Continuously refactor automation to keep up its value.
  • Mentor and educate other team members on quality principles and motivate everyone.

Things to avoid

  • A separate automation backlog.
  • A separate QA or performance org structure and work stream.
  • Separate QA tools. Go with developer-friendly tools to get help and guidance from other team members and to motivate them to participate in and drive automation.
  • Running automation separately from the pipeline (nightly, weekly, from a tester’s laptop).
  • Filing bugs. When something doesn’t work as expected, go the extra mile: learn to analyze the issue and work with the PO to place it appropriately in the backlog. Maybe even submit a pull request with a fix. Trust me, your teammates will respect a PR more than a bug report.

Overall, QAs can become more prominent than in the past by adapting to the change, being part of product development end to end, and acting as quality evangelists. Over time, every user story will contain all the necessary tasks, including manual and automated testing, to complete the story and keep it production ready. The transition won’t happen overnight; long-term commitment is necessary for success.