Test Pyramids in MUnit
Intro
During the MuleSoft Summit 2019 in Auckland, New Zealand, I had the absolute pleasure of presenting to the audience the concepts and implementation of Test Pyramids in MUnit as we apply them in our development practice at Datacom.
I have now decided to start writing and sharing my knowledge on these topics via articles on MuleSoft covering development and architecture, experiments, and more, which I'm sure will be useful and fun for newbies and more seasoned Muleys alike.
This article was originally published on Medium.
The theory…
Implementing automated tests is considered critical in any software development practice to help assure code quality. In MuleSoft these are implemented using MUnit which, similar to JUnit for Java, is a testing framework for Mule applications that provides a full suite of integration and unit test capabilities, integrated with Maven and Surefire.
This post is an abstraction of the test pyramid concept presented by Martin Fowler in an article where he explains the test pyramid as a way of thinking about how different kinds of automated tests should be used to create a balanced portfolio.
By applying that same metaphor, this article implements a test pyramid applicable to Mule applications, made up of the following layers:
Unit tests
The very base of the pyramid. In this layer many small, fast-to-execute test cases should be implemented, as this is where the smallest units of code are tested, including transformations and custom scripts (DataWeave, Groovy, Java, XSLT, etc.).
Component tests
Fewer test cases, but more complex than unit tests. This is where flow and sub-flow functionality should be tested, independently of any inbound processors and using mocks.
End-to-End tests
Very few test cases in here. This is where your application, whether an API or a batch job, should be tested end-to-end, from when a request is received to when an output is generated, validating that the result matches what is expected.
The practice…
This is where it gets exciting. In this very simple example we will create a new Mule application that will expose a RESTful API where consumers can browse a list of Heavy Metal bands sourced from an in-memory database (H2).
Create API specification
Log in to Anypoint Platform and head to Design Center to create a new API specification using RAML 1.0. If you are not familiar with RAML 1.0, you have the option of designing your API using OAS (Swagger), or you can leverage Anypoint Platform's visual designer to do so.
When you are done implementing the RAML specification, publish the final version to Anypoint Exchange. The API specification will then be ready to be implemented in Anypoint Studio, which is what we will do next.
Implement API
Now open up Anypoint Studio (at the time of writing this article this was version 7.4.2) and create a new Mule Project based on an API specification available from Exchange or Design Center.
In a few moments Anypoint Studio will have generated the skeleton of the application, which uses APIKit, along with the corresponding GET flow. When this is finished we can modify the auto-generated flow get:\bands:sys-heavy-metal-bands-api-config in the sys-heavy-metal-bands-api.xml configuration file to add the logic that fetches the list of bands from the database table heavy_metal_bands, followed by a transformation to JSON, which is the default media type of our API. The most important thing at this stage is to externalize the transformation code to a DataWeave mapping file (.dwl).
The API implementation code is now finished, so we have to create the test cases. We will start by implementing unit tests.
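As a rough sketch, the modified flow could look something like the fragment below. The Database configuration name (Database_Config) and the dwl/ resource folder are assumptions for illustration rather than names taken from the article; the fragment sits inside the sys-heavy-metal-bands-api.xml file alongside the usual db and ee namespace declarations.

<!-- Sketch only: assumed Database_Config name and dwl/ folder location -->
<flow name="get:\bands:sys-heavy-metal-bands-api-config">
    <!-- Fetch all bands from the in-memory H2 database -->
    <db:select config-ref="Database_Config">
        <db:sql>SELECT * FROM heavy_metal_bands</db:sql>
    </db:select>
    <!-- Externalized DataWeave mapping converts the Java result set to JSON -->
    <ee:transform doc:name="javaCollection_to_json">
        <ee:message>
            <ee:set-payload resource="dwl/javaCollection_to_json.dwl" />
        </ee:message>
    </ee:transform>
</flow>

Keeping the mapping in its own .dwl file is what makes the unit-test layer possible, since the transformation can then be exercised on its own.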
NOTE: This application has been implemented as a very simple demonstration of the techniques applied to implement an automated test pyramid using MUnit; as such, a few standards, naming conventions and other considered best practices may not necessarily have been followed.
Unit test
As mentioned before, we will start with the smallest unit of testable code: the javaCollection_to_json.dwl file. Let's create a new test case for it, where after the transformation we assert that there is at least one band in the collection and that the collection contains a Brazilian heavy metal band: Sepultura.
If there were any other transformations, including but not limited to DataWeave, we would create subsequent tests for these as well, but since there are none, we will move on to component tests.
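A minimal sketch of such a test case in the MUnit suite is shown below. The sample records, field names and the dwl/ resource path are assumptions for illustration, not taken from the article, and the mapping is assumed to output application/json.

<!-- Sketch only: input records and resource path are assumed -->
<munit:test name="javaCollection_to_json-unit-test"
            description="Asserts the mapping output has at least one band and contains Sepultura">
    <munit:behavior>
        <!-- Simulate the Java collection that the database query would return -->
        <set-payload value="#[[{id: 1, name: 'Sepultura', country: 'Brazil'}, {id: 2, name: 'Angra', country: 'Brazil'}]]" />
    </munit:behavior>
    <munit:execution>
        <!-- Run only the externalized DataWeave mapping, nothing else -->
        <ee:transform>
            <ee:message>
                <ee:set-payload resource="dwl/javaCollection_to_json.dwl" />
            </ee:message>
        </ee:transform>
    </munit:execution>
    <munit:validation>
        <!-- One or more bands in the resulting collection -->
        <munit-tools:assert-that expression="#[sizeOf(payload)]"
                                 is="#[MunitTools::greaterThanOrEqualTo(1)]" />
        <!-- The collection contains Sepultura -->
        <munit-tools:assert-that expression="#[payload.name]"
                                 is="#[MunitTools::hasItem('Sepultura')]" />
    </munit:validation>
</munit:test>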
Component test
At this layer, we will test the flow independently of a real source (i.e. the HTTP request) and will leverage MUnit's mocking capability to respond with a collection of Heavy Metal bands as if it had been sourced from the underlying database, simply referencing the get bands flow. The assertions in this case validate that the collection is not empty and that the first Heavy Metal band in the list is another Brazilian band: Angra.
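A sketch of what that component test could look like follows; the mocked records are again illustrative assumptions rather than values from the article.

<!-- Sketch only: mocked database records are assumed -->
<munit:test name="get-bands-flow-component-test"
            description="Tests the get:\bands flow in isolation, mocking the database call">
    <munit:behavior>
        <!-- Mock the Database select so no real H2 query is executed -->
        <munit-tools:mock-when processor="db:select">
            <munit-tools:then-return>
                <munit-tools:payload value="#[[{id: 1, name: 'Angra', country: 'Brazil'}, {id: 2, name: 'Sepultura', country: 'Brazil'}]]" />
            </munit-tools:then-return>
        </munit-tools:mock-when>
    </munit:behavior>
    <munit:execution>
        <!-- Reference the flow directly, bypassing the HTTP listener -->
        <flow-ref name="get:\bands:sys-heavy-metal-bands-api-config" />
    </munit:execution>
    <munit:validation>
        <!-- The collection is not empty -->
        <munit-tools:assert-that expression="#[isEmpty(payload)]"
                                 is="#[MunitTools::equalTo(false)]" />
        <!-- The first band in the list is Angra -->
        <munit-tools:assert-that expression="#[payload[0].name]"
                                 is="#[MunitTools::equalTo('Angra')]" />
    </munit:validation>
</munit:test>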
If we had other flows or sub-flows, we would implement similar test cases.
End-to-End test
At the top of the pyramid we test the application end-to-end, meaning that we go through the actual protocols and inbound connectors/listeners. In this case, we will make a real HTTP call to our own API and evaluate the result. For this one, rather than writing a new test case from scratch, we will leverage Anypoint Studio to do it, by right-clicking on the APIKit Router in the main flow and clicking Create Test Suite from API Specification.
The final result will be a full-blown, ready-to-use end-to-end test case, along the lines of the sketch below, and that finishes implementing the test pyramid.
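The sketch below illustrates the shape of such a generated test; the main flow name, HTTP requester configuration name and listener path are assumptions for illustration, not values taken from the article or from the generated suite.

<!-- Sketch only: flow names, HTTP_Request_Config and /api/bands path are assumed -->
<munit:test name="get-bands-200-e2e-test"
            description="Calls the running API over HTTP and checks the response">
    <!-- Enable the real flow sources so the HTTP listener is actually started -->
    <munit:enable-flow-sources>
        <munit:enable-flow-source value="sys-heavy-metal-bands-api-main" />
        <munit:enable-flow-source value="get:\bands:sys-heavy-metal-bands-api-config" />
    </munit:enable-flow-sources>
    <munit:execution>
        <!-- A real HTTP call to our own API -->
        <http:request method="GET" config-ref="HTTP_Request_Config" path="/api/bands" />
    </munit:execution>
    <munit:validation>
        <!-- The API responds successfully and returns a body -->
        <munit-tools:assert-that expression="#[attributes.statusCode]"
                                 is="#[MunitTools::equalTo(200)]" />
        <munit-tools:assert-that expression="#[payload]"
                                 is="#[MunitTools::notNullValue()]" />
    </munit:validation>
</munit:test>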
Conclusion
Throughout this article you have seen how to abstract and apply the concept of the test pyramid to MuleSoft applications using MUnit. By taking that approach, not only can the quality of delivered code be assessed, but it also caters for cost reduction, since you write these tests once and run them as many times as needed. It also helps in scenarios where regression testing is required and, as a bonus, it improves accuracy and coverage and builds up the developers' confidence.
The fully working version of the example Mule application can be cloned from my GitHub repository here.
Enjoy!
Author
Eduardo Ponzoni works for Datacom New Zealand as an Enterprise Integration Development Manager and is the MuleSoft Practice Lead. He hails from Sao Paulo, Brazil and has over 18 years’ experience in the IT industry, having been in a technical leadership and managerial role with Datacom for about 3 years. He is well versed in integration practices and has a number of certifications in numerous technologies including MuleSoft and Microsoft.
To get in contact, feel free to reach out via LinkedIn or email.