ChartJS is cool
ChartJS is cool, but there is a journey to be followed to get to where I discuss ChartJS, so please bear with me.
In my last few posts I have been going through how we can start collecting statistics from ACE/IIB/WMB as part of running our test cases.
The first post was around how we can do it and what it would look like to a development team.
Our second post looked at what we could use these statistics for. In that case we were using it to quantitatively prove a theory that extra nodes made a flow in an execution group slower.
Now teams we engage with have 2 use cases for SonarQube and our product.
1) To help measure, manage and improve the quality of the code
2) Allow reporting to management around metrics for the code that teams are developing. How much technical debt is each team incurring? How much test coverage is "average" for their teams? Which teams need help or more resources?
A third use case could be using SonarQube metrics to feed into the management of an outsourced development program, where a third party is developing code and reporting out of SonarQube is used to determine if the team is meeting some predetermined KPIs (test coverage, technical debt, complexity, etc). I have always thought that organisations that outsource development and also outsource the quality management of that development to the same third-party organisation are really asking for trouble. Pretty much the fox watching the hen house. So how can teams use more than just cost of development to determine performance and acceptance of an outsourced solution?
So to aid with all 3 of the above, SonarQube has a number of dashboards available that allow aggregation and reporting of metrics.
So below we have 2 projects and an aggregated project.
And then combined:
To follow the same pattern, we can also report combined metrics for statistics into a single report/diagram from a number of IIB projects.
So we can have a consolidated view of the performance of our code.
The next step in helping understand our statistics is looking for different ways to present them, so that we can look at "overall" team, project or organisation code quality.
Now for the statistics that we collect on the performance of different flows, a table is good, but a little bland. Also, not everyone sees things the same way. Sometimes graphs and charts are better.
Just like sometimes people prefer to listen over read, or watch over listen.
Also, I have been reading a lot of Dilbert lately and sometimes management needs a "cut down" version.
So I did some research on options for charting.
There are quite a few around, and there is a really cool JS library available called ChartJS.
Longer term it would be interesting to see if we could present the data to something like Prometheus or Grafana. They are usually used when monitoring operational systems, while we look more at a controlled "integration test" environment for test and performance reporting. So it is possible that monitoring teams report peak issues, or infrastructure issues, which may or may not match what we would see when running test/integration tests.
So not quite the same, but similar. Interesting, but probably something to look at in a future feature and linked post.
Back to ChartJS. ChartJS is pretty easy to use.
And I found some great examples on GitHub.
I have added the code for creating the chart here:
<div id="canvas-holder" style="width:40%">
    <canvas id="chart-area"></canvas>
</div>
<script>
var randomScalingFactor = function() {
    return Math.round(Math.random() * 100);
};
var config = {
    type: 'pie',
    data: {
        datasets: [{
            data: [ $pieChartAverageElapsedTime ],
            backgroundColor: [ $piechartAllBackgroundColours ],
            label: 'Dataset 1'
        }],
        labels: [ $pieChartFlowNames ]
    },
    options: {
        responsive: true
    }
};
window.onload = function() {
    var ctx = document.getElementById('chart-area').getContext('2d');
    window.myPie = new Chart(ctx, config);
};
</script>
I did need to include a few JS files (available from the ChartJS site).
<script src="./js/chart.min.js"></script>
<script src="./js/chart-utils.js"></script>
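For illustration, here is a sketch of what the config object might look like once the $pieChart placeholders have been substituted with real values. The flow names, timings and colours below are made-up sample data, not output from our tooling:

```javascript
// Hypothetical values standing in for the $pieChartFlowNames,
// $pieChartAverageElapsedTime and $piechartAllBackgroundColours
// template placeholders. Flow names and elapsed times (ms) are samples.
var pieChartFlowNames = ['OrderFlow', 'InvoiceFlow', 'AuditFlow'];
var pieChartAverageElapsedTime = [120, 45, 310];
var pieChartAllBackgroundColours = ['#e8554e', '#f19c65', '#71b374'];

// One dataset entry, one label and one colour per flow.
var config = {
    type: 'pie',
    data: {
        datasets: [{
            data: pieChartAverageElapsedTime,
            backgroundColor: pieChartAllBackgroundColours,
            label: 'Average elapsed time per flow'
        }],
        labels: pieChartFlowNames
    },
    options: {
        responsive: true
    }
};
```

The key point is that the three arrays must line up: the nth data value and the nth background colour both belong to the nth flow name.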
Using the above code to add a chart to our architectural dashboard was reasonably seamless.
Now the chart for the statistics looks like the following:
So not providing any obvious insights.
But if the chart looked more like the following:
Then it is a lot more obvious where development teams and management might start looking to invest in performance tuning of the code and flows in their environment.
It might be as simple as allowing multiple instances of a flow.
Or it might involve a complete rethink of the architecture - or something in between.
But the first step in identifying that there may be something that needs further investigation is measuring and understanding where you are now.
If you are interested in finding out more about our products or are interested in a demonstration, please drop me an email to:
Or contact me via the contact page on our website.
Regards
Richard