In the previous post, I shared one of my many opinions on improving test automation visibility to succeed in an Agile transformation. If you happened to like those dashboards, read on. In this post, I attempt to explain how you can create them for your own use.
The technology behind them is the ELK stack:
- Elasticsearch to store the logs/test results.
- Logstash to ship the logs from your test runner to Elasticsearch.
- Kibana to dashboard the data.
A minimal design could look like this:
In addition to this, we’ll also use Marvel, another Elasticsearch plugin, to talk to Elasticsearch directly via its API. Here is more info and an install guide.
Now that the environment is up and running, the first step is to define the mapping. For lack of a better term, a mapping is something similar to a database schema. To pivot the data appropriately, we need to input it in a consistent shape, and the mapping enforces that.
A simple mapping to begin with:
A few quick points about the mapping:
- Index – something similar to a database
- Type – something similar to a table
- Nested object – to store an array of objects inside an object
- _timestamp – enabled to automatically index the timestamp of the document
- “index” : “not_analyzed” tells Elasticsearch not to analyze this field. A good example is test case names, which often contain multiple words in a sentence, e.g. “User login to the app from mobile”. I don’t want Elasticsearch to dismantle the sentence and tokenize the words; I want to report the name as one sentence. In such cases, I prefer not to analyze the field.
(A best practice is to combine the index name with the date and create one index per day, since Elasticsearch can search efficiently across indices. For simplicity, I’ve created a single index with whatever is required for this attempt.)
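Putting the points above together, a mapping could look like the sketch below. This is my illustration, not the exact gist from the post: the index name `quality`, the type `testresults`, and the field names are assumptions, so adapt them to your own test results. The `_timestamp` and `"string"`/`"not_analyzed"` settings are Elasticsearch 1.x-era syntax, consistent with the Marvel plugin used here.

```python
import json

# Sketch of an index mapping reflecting the points above:
# _timestamp enabled, not_analyzed string fields for exact reporting,
# and a nested type for an array of step objects inside a document.
# (Index/type/field names are illustrative assumptions.)
mapping = {
    "mappings": {
        "testresults": {
            "_timestamp": {"enabled": True},
            "properties": {
                "testcase": {"type": "string", "index": "not_analyzed"},
                "status": {"type": "string", "index": "not_analyzed"},
                "environment": {"type": "string", "index": "not_analyzed"},
                "duration_ms": {"type": "long"},
                "steps": {
                    "type": "nested",
                    "properties": {
                        "name": {"type": "string", "index": "not_analyzed"},
                        "result": {"type": "string", "index": "not_analyzed"},
                    },
                },
            },
        }
    }
}

# This is the JSON body you would PUT to http://localhost:9200/quality
print(json.dumps(mapping, indent=2))
```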
Here is the gist to create the above index; you can run it directly from Marvel (for example, http://localhost:9200/_plugin/marvel/sense/index.html).
Once the index is created, we’ll publish fake data using the APIs, because publishing real test execution data depends on many external factors: the test cases themselves, a plugin or logic to parse your test results into the mapping structure, and a Logstash configuration to ship the data over to ELK. While the Logstash configuration might be the easy part, the other pieces have a huge spectrum of variables; for example, tests might be written in Jasmine + Protractor, SpecFlow + C#, or something in Java. So, at this point, let’s fake it until we make it.
Gist – publish data
In the picture above, I push a sample document manually using the DSL from Marvel; the right side shows the acknowledgement that the document has been indexed.
If you run “GET quality/_search”, it will return all the indexed documents. Use your favorite scripting language to publish as much fake data as you want.
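For example, a throwaway Python script along these lines could generate fake documents conforming to the mapping. The field names and values here are my assumptions, and the commented line at the end only indicates where you would POST each document to Elasticsearch:

```python
import json
import random
from datetime import datetime, timedelta

def fake_result(now=None):
    """Build one fake test result document (illustrative field names)."""
    now = now or datetime.utcnow()
    return {
        "testcase": random.choice([
            "User login to the app from mobile",
            "User resets password",
            "Checkout with saved card",
        ]),
        "status": random.choice(["passed", "failed", "skipped"]),
        "environment": random.choice(["dev", "qa", "prod"]),
        "duration_ms": random.randint(100, 5000),
        # Spread executions over the past two weeks for time-series charts
        "executed_at": (now - timedelta(days=random.randint(0, 14))).isoformat(),
    }

docs = [fake_result() for _ in range(50)]
print(json.dumps(docs[0], indent=2))

# Each document would then be POSTed to the index, e.g. with any HTTP client:
# POST http://localhost:9200/quality/testresults  (body: json.dumps(doc))
```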
Now that we have some data in Elasticsearch, let’s see how to create some visualizations using Kibana.
Launch Kibana (http://localhost:5601/) and confirm a few settings.
Add “quality” to defaultIndex and make sure _timestamp is in metaFields; otherwise, no visualization will work.
The first step is to identify the subset of data we want to visualize: navigate to the “Discover” tab, select the required fields from the available fields, and save the discovery.
Create new visualization
Select data source for visualization
Similarly, build a couple more visualizations.
Finally, we can bring all the visualizations together in the dashboard.
The visualizations built so far might not look terribly interesting, because many test execution plugins already offer pie charts and other charts of test execution data.
The best part of using the ELK stack is the power of its filters, which let us go back in time, see the trend, and make use of the time series data.
Below, I’m applying the time filter.
Here I can combine the time filter with another parameter of my choice. For example, I want to see only ‘failed’ test cases on the ‘dev’ environment in the last week.
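The same filter can also be expressed against the Elasticsearch search API directly. A sketch, assuming the field names from my earlier examples; the `filtered` query shown is Elasticsearch 1.x syntax, consistent with the Marvel-era setup described in this post:

```python
import json

# Only 'failed' test cases on the 'dev' environment in the last week
# (ES 1.x "filtered" query; field names are illustrative assumptions).
query = {
    "query": {
        "filtered": {
            "filter": {
                "bool": {
                    "must": [
                        {"term": {"status": "failed"}},
                        {"term": {"environment": "dev"}},
                        {"range": {"_timestamp": {"gte": "now-1w"}}},
                    ]
                }
            }
        }
    }
}

# This is the body you would send to GET http://localhost:9200/quality/_search
print(json.dumps(query, indent=2))
```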
It can help answer many more interesting questions.
Wouldn’t it be nice if you could see the quality of the build that went to production 14 days ago? How many regression cases passed/failed?
Wouldn’t it be nice if you could correlate your application logs with load test logs and detect anomalies?
Wouldn’t it be nice if you could go to a single pane of glass showing the Functional, Performance, Security, and Regression test results of your application for a given period?
The possibilities are endless. Elasticsearch also offers rich RESTful APIs to build and visualize things that Kibana doesn’t support out of the box.