Test coverage in ACE unit tests

The latest release of ACE (App Connect Enterprise) was in May 2021.

If you are not familiar with ACE - it's the new and improved IIB (IBM Integration Bus), which in turn was WMB (WebSphere Message Broker).

One of the features that has been most exciting (going by the number of posts on LinkedIn that I have seen recently) is the new "Unit Testing" capability.

The IBM documentation describes it as "Developing Integration tests", but the framework is based on JUnit, so the terminology can be confusing.

In this article we take the unit and integration testing approach that we demonstrated previously (with the same project) and rebrand it as ACE unit testing.

I'm not going to go into all the nitty-gritty of the ACE tooling to create the unit tests and run them inside the toolkit.

There are much better discussions and tutorials on this provided by IBM and others in the community, so I have included links to those rather than making a poor copy.

Creating a test case from a message flow.

ACE v12 unit test functionality.

But there are more available.

So, to set some background around unit testing and ACE.

So teams are at different levels of maturity when it comes to ACE/IIB/WMB development.

And what we try to do is move them along the levels to help improve the quality of their deliverables, the cadence of delivery, and the cost.

So if we look at this maturity model for unit testing, then having unit tests for ACE/IIB using what IBM provides is aiming for teams to be at "Level 3" maturity. There are more details on the descriptions of the levels here.

So the new IBM framework can help isolate the code and mock/stub the interactions between the nodes.

What we provide to the teams we support is the "Level 4" capability, which is mostly around test coverage.

Unit tests are good, but how do we measure their effectiveness so that we can determine how much confidence we have to release new features based on the tests we have?

If we develop a new feature, have we tested it enough?

How do we know that what we have changed was tested?

Do we still need manual testing, or can we release a change without manual intervention?

These are the questions that test coverage helps us to answer.

The process for ACE/IIB is similar to what developers would do in other languages such as Java (with JUnit), C# (with NUnit) or Python (with pytest).


The developer writes a test case or test driver. I have used the term "test driver" previously, as we have been helping teams with test coverage in WMB and IIB for a while, and what they have been using hasn't quite been your traditional unit test case.

Before the addition of JUnit-style test support in ACE, the tests we had been working with weren't capable of doing what the new JUnit framework can do as far as unit testing goes.

So what we have been supporting is essentially "integration tests" that run the flow logic from an external driver (such as an MQ message), executing code that had been re-factored into "testable" functions (so unit-test-like).

With the new unit testing framework we can now write tests a lot more easily and quickly, and also develop unit tests without needing to re-factor the original code.

Re-factoring is always a challenge, particularly if you have no automated tests. It's very chicken and egg: you can't have a unit test without changing the code, and you don't want to change code without having a unit test.

So IBM has given us a much cleaner approach to unit and integration testing.
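
To give a feel for what these tests look like, here is a rough sketch in the style of the JUnit tests the Toolkit generates (see the IBM tutorials linked above). The project, flow, node and file names are placeholders from the demo project, and the exact classes and method calls may differ slightly from what the Toolkit generates for your flow, so treat this as illustrative rather than copy-paste ready:

import static com.ibm.integration.test.v1.Matchers.nodePropagatedMessageCountIs;
import static org.hamcrest.MatcherAssert.assertThat;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Test;

import com.ibm.integration.test.v1.NodeSpy;
import com.ibm.integration.test.v1.SpyObjectReference;
import com.ibm.integration.test.v1.TestMessageAssembly;
import com.ibm.integration.test.v1.TestSetup;
import com.ibm.integration.test.v1.exception.TestException;

public class SimpleTestable_Compute_Test {

    @AfterEach
    public void cleanupTest() throws TestException {
        // Clear any spies/mocks created during the test
        TestSetup.restoreAllMocks();
    }

    @Test
    public void computeNode_propagatesOneMessage() throws TestException {
        // Reference the node under test (application, flow and node names are placeholders)
        SpyObjectReference nodeReference = new SpyObjectReference()
                .application("TestSimple_Project")
                .messageFlow("SimpleTestable")
                .node("Compute");
        NodeSpy nodeSpy = new NodeSpy(nodeReference);

        // Build the input message assembly from a recorded message (file name is a placeholder)
        TestMessageAssembly inputAssembly = new TestMessageAssembly();
        inputAssembly.buildFromRecordedMessageAssembly("/SimpleTestable_Compute_input.mxml");

        // Send the message assembly into the node's 'in' terminal
        nodeSpy.evaluate(inputAssembly, true, "in");

        // Expect exactly one message propagated to the 'out' terminal
        assertThat(nodeSpy, nodePropagatedMessageCountIs("out", 1));
    }
}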

But coming back to test coverage.

Some languages, such as Java, C# and Python, have test coverage tooling readily available. With ESQL (and WMB/IIB) we don't have that luxury. So using a tool like Sonarqube can help.

Sonarqube is also ideal for sharing those metrics. A developer might not be looking at coverage in their development tool, but publishing it on a Sonarqube dashboard means everyone has visibility.


We already have 2 ways to generate test coverage for IIB code. We call them tracing and instrumentation.

Both have their strengths and weaknesses.

Tracing is simple to set up. It relies on consuming the trace files that can be generated with the "mqsichangetrace" command, and it allows us to provide test coverage for ESQL, maps and message flows.
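
As a rough illustration (the node and server names below are placeholders, and the exact flags you need will depend on your environment and ACE/IIB version), the trace-based approach uses the standard user trace commands along these lines:

# Turn on user trace for the integration server (names are placeholders)
mqsichangetrace MYNODE -u -e default -l debug -r

# ... run the tests that drive the flows ...

# Read the user trace log and format it into a file the coverage tooling can consume
mqsireadlog MYNODE -u -e default -o usertrace.xml
mqsiformatlog -i usertrace.xml -o usertrace.txt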

Instrumentation is more involved. It is more aligned to how JaCoCo works, where the code is updated with profile points; when executed, these profile points record which branches the test cases covered.

In the case of ACE/WMB/IIB it involves rewriting the code with these profile points before the code is deployed. It allows reporting coverage on ESQL, msgflows, XSL and Java.
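
Purely as a conceptual illustration (this is not the actual rewrite that MB-Precise performs, and recordProfilePoint is a made-up procedure name), an instrumented ESQL compute module ends up with extra calls along these lines:

CREATE COMPUTE MODULE SimpleTestable_Compute
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    -- injected profile point: records that this statement was reached
    CALL recordProfilePoint(3);
    SET OutputRoot = InputRoot;
    IF InputRoot.XMLNSC.Order.Amount > 100 THEN
      -- injected profile point: records that this branch was taken
      CALL recordProfilePoint(7);
      SET OutputRoot.XMLNSC.Order.Priority = 'HIGH';
    END IF;
    RETURN TRUE;
  END;
END MODULE;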

We usually encourage starting with tracing and looking at the instrumentation process later. However, when running the tests using the IntegrationServer command, there is no "trace" option.


We run the tests as follows:

integrationserver.exe --default-application-name TestSimple_Project_UnitTestsApp \
  --work-dir C:\richard\projects\mb_precise_git\mb-precise-demos\workspaces\ws1\test_work_dir \
  --test-project TestSimple_Project_Test \
  --start-msgflows false \
  --stop-after-duration 120000

And looking at the additional logging options on the command, the tracing we need is not one of them:

integrationserver.exe --default-application-name TestSimple_Project_UnitTestsApp --console-log \
  --work-dir ws1\test_work_dir \
  --test-project TestSimple_Project_Test \
  --start-msgflows false \
  --stop-after-duration 120000 \
  --user-trace \
  --diagnostic-trace \
  --event-log events.txt \
  --service-trace

But that might be something that IBM includes in one of the next releases of ACE.

So we are left with the instrumentation option.

Below is the ANT script that we use to run the test cases.

It is also available on GitLab as a public project.

Set up some ANT tasks:

<!-- instrument -->
<typedef name="instrument"
         classname="au.com.bettercodingtools.sonar.messagebrokersonar.anttasks.BarFileInstrumentTask">
    <classpath refid="project.classpath" />
</typedef>

<!-- listener -->
<typedef name="listener"
         classname="au.com.bettercodingtools.sonar.messagebrokersonar.anttasks.TestMessageListenerTask">
    <classpath refid="project.classpath" />
</typedef>

<!-- stopper -->
<typedef name="stopper"
         classname="au.com.bettercodingtools.sonar.messagebrokersonar.anttasks.StopTestMessageListenerTask">
    <classpath refid="project.classpath" />
</typedef>

Create a BAR file of the tests (essentially compile a jar):

<exec executable="mqsicreatebar" failonerror="false" 
			<arg value="-data"/>
			<arg value="C:\richard\projects\mb_precise_git\mb-precise-demos\workspaces\ws1"/>			
			<arg value="-p" />		
			<arg value="TestSimple_Project_Test" />
			<arg value="-compileOnly"/>			
			<arg value="-cleanBuild"/>			
			<arg value="-trace"/>					
		</exec>>        


Create the BAR file with the code to be tested and the test project:

<exec dir="C:\Program Files\IBM\ACE\12.0.1.0\server\bin" executable="mqsipackagebar.bat" failonerror="true" >
			<arg value="-w"/>
			<arg value="ws1"/>
			<arg value="-a"/>
			<arg value="ws1\BarFiles\UnitTests.bar"/>			
			<arg value="-t"/>
			<arg value="TestSimple_Project_Test"/>
			<arg value="-k"/>
			<arg value="TestSimple_Project"/>
			
		</exec>	        

Create a work directory for the temporary server that runs the tests:

<exec dir="C:\Program Files\IBM\ACE\12.0.1.0\server\bin" executable="mqsicreateworkdir.cmd" failonerror="true" >	
		<arg value="ws1\test_work_dir"/>
	
	</exec>		        

Instrument the BAR - recursively unzip, add profiling, re-zip and then rename to a new BAR file:

<instrument workingFolder="work"
            sourceCode="TestSimple_Project"
            keepCoverage="No"
            barFileName="BarFiles\UnitTests.bar"
            coverXSL="TRUE"
            coverJava="FALSE"
            updateModuleNames="TRUE"
            javaNodeFileName="C:/utils/bct/ibm/jplugin2-8.0.0.v20111129_1446.jar"
            coverageFilePath="coveragetemp"
            buildingAce="TRUE" />

Start the "listener" which captures what has been run by the integration server:

<listener port="9011" maxWait="80" coverageFilePath="coveragetemp" />        

Run the integration server which runs the packaged tests:

<exec executable="integrationserver.exe" failonerror="true" >	
				<arg value="--default-application-name"/>
				<arg value="TestSimple_Project_UnitTestsApp"/>				
				<arg value="--console-log"/>				
				<arg value="--work-dir"/>
				<arg value="C:\richard\projects\mb_precise_git\mb-precise-demos\workspaces\ws1\test_work_dir"/>
				<arg value="--test-project"/>
				<arg value="TestSimple_Project_Test"/>				
				<arg value="--start-msgflows"/>
				<arg value="false"/>
				
				<arg value="--stop-after-duration"/>
				<arg value="180000"/>
			</exec>		        

And then we "stop" the process and write the "coverage.xml" file:

<stopper />	        

At the end of running the tests we end up with a "coverage.xml" file, which merges the code branches that were found with the details of which of those branches were executed. The resulting XML file looks like this:

<coverage version="1">
? <file path="TestSimple_Project/mapLogging/bctMapLog.esql">
? ? <lineToCover lineNumber="6" covered="false" branchesToCover="0" coveredBranches="0" referenceValue="1">
? ? ? <recordedLines/>
? ? ? <parentLines/>
? ? ? <forEachOutputs/>
? ? ? <subMapsCalled/>
? ? ? <endLineNumber>-1</endLineNumber>
? ? </lineToCover>
? </file>
? <file path="TestSimple_Project/SimpleNonTestable_Compute.esql">
? ? <lineToCover lineNumber="7" covered="false" branchesToCover="0" coveredBranches="0" referenceValue="3">
? ? ? <recordedLines/>
? ? ? <parentLines/>
? ? ? <forEachOutputs/>
? ? ? <subMapsCalled/>
? ? ? <endLineNumber>-1</endLineNumber>
? ? </lineToCover>
? ? <lineToCover lineNumber="28" covered="true" branchesToCover="0" coveredBranches="0" referenceValue="7-4">
? ? ? <recordedLines/>
? ? ? <parentLines>
? ? ? ? <int>7</int>
? ? ? ? <int>22</int>
? ? ? </parentLines>
? ? ? <forEachOutputs/>
? ? ? <subMapsCalled/>
? ? ? <endLineNumber>30</endLineNumber>
? ? </lineToCover>
? ? <lineToCover lineNumber="33" covered="true" branchesToCover="0" coveredBranches="0" referenceValue="8">
? ? ? <recordedLines/>
? ? ? <parentLines>
? ? ? ? <int>7</int>
? ? ? ? <int>22</int>
? ? ? </parentLines>
? ? ? <forEachOutputs/>
? ? ? <subMapsCalled/>
? ? ? <endLineNumber>-1</endLineNumber>
? ? </lineToCover>
? </file>
? <file path="TestSimple_Project/SimpleTestable.msgflow">
? ? <lineToCover lineNumber="1" covered="false" branchesToCover="0" coveredBranches="0" referenceValue="15">
? ? ? <recordedLines/>
? ? ? <parentLines/>
? ? ? <forEachOutputs/>
? ? ? <subMapsCalled/>
? ? ? <endLineNumber>0</endLineNumber>
? ? </lineToCover>
? </file>
? <file path="TestSimple_Project/SimpleTestable_Compute.esql">
? ? <lineToCover lineNumber="3" covered="true" branchesToCover="0" coveredBranches="0" referenceValue="16">
? ? ? <recordedLines/>
? ? ? <parentLines/>
? ? ? <forEachOutputs/>
? ? ? <subMapsCalled/>
? ? ? <endLineNumber>-1</endLineNumber>
? ? </lineToCover>
...
        

Reading the above, 'covered="true"' is good - i.e. that path was tested.

To make use of this data, we need to feed the source and the coverage file into Sonarqube.
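
As a sketch of how that looks from the scanner side, a minimal sonar-project.properties could be along the lines below. The key used for the coverage file is an assumption on my part (the MB-Precise plugin defines its own sensor configuration), so treat the property names as placeholders rather than the exact settings:

# Minimal scanner configuration (coverage property key is illustrative only)
sonar.projectKey=TestSimple_Project
sonar.projectName=TestSimple_Project
sonar.sources=TestSimple_Project
# Point the analysis at the coverage file produced by the ANT run
sonar.coverageReportPaths=coverage.xml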

Once the Sonarqube analysis has run, we can see the coverage that was produced in the dashboard.


And from there we can see what was and wasn't tested in an ESQL project file.


Green denotes tested, orange partially tested and red not at all.

So this helps work towards level 10 - "Code Coverage" - and level 11 - "Unit Test in the Build". And the use of Sonarqube helps with level 12 - "Code coverage awareness".


One additional feature of Sonarqube is the ability to monitor test coverage and essentially fail the automated build when insufficient test coverage has been measured.

So, by configuring what's called a quality gate, we can set various thresholds on existing code, new code and what's called the leak.


The idea behind monitoring the leak is that we can encourage developers to add test coverage without penalizing them when they add tests to older code that had none. If we mandated that all code needs 50% coverage (not suggesting hard mandates, but for the purposes of this example), then when a developer adds a new test to legacy code and gets to 5% coverage (from 0%) they should be rewarded, not punished.
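
If you prefer to script the gate rather than click through the UI, SonarQube's web API can create conditions on a quality gate. The call below is only a sketch: the exact parameter names (gateName vs gateId) vary between SonarQube versions, and the gate name and threshold are placeholders.

# Illustrative only: add a "coverage on new code must be at least 50%" condition
# to an existing quality gate (parameter names vary by SonarQube version)
curl -u "$SONAR_TOKEN:" -X POST \
  "https://sonarqube.example.com/api/qualitygates/create_condition" \
  -d "gateName=ACE-Quality-Gate" \
  -d "metric=new_coverage" \
  -d "op=LT" \
  -d "error=50"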

So what does this mean for ACE developers and teams?

Making use of the new ACE unit testing functionality, Sonarqube and our MB-Precise tool for providing coverage using "instrumentation", a team can vastly improve the quality and measurement of their unit and integration testing outcomes.

This means organizations can have more confidence in code changes, allowing them to move towards quicker, higher quality releases that require less manual testing by making more use of automated unit and integration testing.


We are looking to have some demonstrations over the next few weeks to go through how it works and answer any questions people might have.

The registration links and times are below:

7:30 PM AEST (Sydney time) 1st February 2022

10:30 PM AEST (Sydney time) 2nd February 2022

And we added one for our friends on the other side of the world:

11:55 PM AEST (Sydney time) 3rd February 2022


If you are interested in finding out more about our product and these times don't suit, please drop me an email to:

[email protected]

Or contact me via the contact page on our website.

Regards

Richard

[email protected]

www.bettercodingtools.com
