Moving teams to Trunk Based Development (an example)

In this post I am going to cover an example case study of introducing Trunk Based Development to an existing enterprise development team building a monolithic web application. I'm not going to cover the details of trunk based development itself as that's covered in detail elsewhere on the internet (I recommend trunkbaseddevelopment.com). For the purposes of this article I'm referring to all development teams working on a single code branch, constantly checking their changes into that single branch (master). In our model the only reasons for ever creating a new branch were the following:

  1. A developer creates a personal branch off master for the sole purpose of raising a Pull Request into the master branch (for code review purposes).
  2. The release team take a branch off master to create a Release Candidate build. Whilst this is not in the true spirit of Trunk Based Development it is often required for release management purposes. Microsoft use this model for their Azure DevOps tooling development and call it Release Flow.
  3. A team may still create a branch for a technical spike or proof of concept that will be discarded and won't ever be merged into master.

Problem Statement

The development team in this case was an enterprise application development team building a large scale multi-channel application. There were five sprint teams working on the same monolithic application code-base, and each team had its own branch of the code that it worked on independently from the other sprint teams. Teams were responsible for maintaining their own branches and merging in updates from previous releases, but this was often not done correctly, causing defects to emerge and features to be accidentally regressed. What's more, the teams' branches would deviate further and further from each other over time, making the problem worse with each subsequent release.

Following a monthly release cycle, at the end of two fortnightly sprints all teams manually merged their code into a release branch which was then built and sanity tested. The merging of code was usually manual and often done by cherry picking individual changes from memory. Needless to say this process was error prone and complicated for larger releases. Once the merge was complete this was the first time that all the code had been running together, and any cross-team impacting changes were only identified now, so many issues were encountered at this point, either in failed CI builds or in the release testing cycle. This merge and test cycle took between 3 and 8 days, taking around a week out of our delivery cycle and severely impacting velocity. The more issues that were found, the more the release testing was increased in an attempt to address quality issues, increasing lead times. It was clear that something needed to be done, and so we decided to take the leap to trunk based development, eliminating waste by removing merges and aligning each team's changes sooner.

Solution

We moved to a single master/trunk branch (in Git) for ALL development, and all sprint teams worked on this single branch. Each sprint team had its own development environment for in-sprint testing of its changes. Every month we would create a cut of the master branch to create the release candidate branch for the route to live and to act as a stable, scope-locked base for testing the upcoming release to production. This model follows Microsoft's Release Flow. Defect fixes for the release candidate were made in master or the release branch, depending on the sprint team's decision, but each week changes made in the release branch (i.e. defect fixes) were back-merged into master/trunk. This back-merge activity was owned by one sprint team per release, and the ownership rotated each release so that everyone shared the effort and benefited from improvements to the process.
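
In git terms, the monthly cut and the weekly back-merge looked something like this (the branch name is illustrative, not our actual convention):

 git checkout -b release/2019.06 master   # monthly: cut the release candidate branch
 # ...defect fixes land on release/2019.06 during release testing...
 git checkout master
 git merge release/2019.06                # weekly: back-merge RC defect fixes into trunk
 git push origin master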

Feature Toggles

The move towards trunk based development would not have been possible without Feature Toggles (or Release Toggles to be more specific). We needed to support multiple teams working on the same codebase while building and testing their own changes (often targeting different releases). We needed to protect each team from every other team's untested changes, and so we used toggles to isolate new changes until they were tested/approved. All code changes were wrapped in an IF block, with the old code in the ELSE block, and a toggle check determines which route the code takes:

 if (ToggleManager.isToggleEnabled("feature12345"))
 {
     // new code here
 }
 else
 {
     // old code here, unchanged
 }

As the system was a Java EE application we chose FF4J as the toggle framework. FF4J is a simple but flexible feature flipping framework, and as the toggles are managed in a simple XML file it made it easier to implement our Jenkins solution described later. To be clear, there are many frameworks that could be used, and it's simple to create your own. To support replacing the toggle framework later, and to make it as simple as possible for developers to create toggles/flips, the FF4J functionality was wrapped in a helper class which made adding a toggle a one line change.
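
A minimal sketch of such a helper is below (the ToggleManager name mirrors the earlier snippet; the config file name and structure are assumptions rather than our exact implementation):

 import org.ff4j.FF4j;

 // Thin wrapper around FF4J so the underlying toggle framework
 // can be swapped out later without touching feature code.
 public final class ToggleManager {

     // FF4J loads the toggle definitions from an XML file on the classpath
     private static final FF4j FF4J = new FF4j("ff4j.xml");

     private ToggleManager() { }

     // The one-line check used throughout the codebase
     public static boolean isToggleEnabled(String featureId) {
         return FF4J.check(featureId);
     }
 }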

We also implemented a Feature Toggle Tag Library so that JSF tags could be wrapped around content in JSF (Java Server Faces) pages to enable/disable it depending on the FF4J toggle being on/off. A JavaScript wrapper for toggles was also developed that allowed toggles to be checked from within our client-side JS code.
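
For illustration, the JSF tag usage looked something like this (the tag library prefix and attribute names here are hypothetical):

 <!-- content inside the tag is only rendered when the FF4J toggle is ON -->
 <toggles:feature featureId="feature12345">
     <h:outputText value="New feature content" />
 </toggles:feature>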

A purposeful decision was made to bake the toggle config into the built code packages to prevent the toggle definition files and the code drifting apart. This meant that the toggle file is baked into the built JAR/EAR/TAR file for deployment. Once the package is created the toggles are set for that package, and this prevents a code-configuration disconnect from forming, which is the cause of many environmental stability issues. This was sometimes controversial, as teams would want to change a toggle after deployment for the simple reason that they forgot to set it correctly, or would hastily want to flip toggles on code in test, which was not the design goal of our Feature Toggles (although that is a valid scenario for toggles, and a separate design was introduced for turning features on/off in Production).

All code changes are wrapped in a toggle, and the toggle is set to OFF in the repository until the change has passed the definition of done within the sprint team. Once the change has been completed ('done' includes testing in sprint), the toggle is turned ON in the master branch, making it ON for all teams immediately (as the code base is shared). As the toggles for new untested changes are OFF in the code, and all package builds come from the master code branch, a new feature cannot be tested on a test environment without first flipping a toggle. So how can it be tested and turned on without impacting other teams? For this we introduced Auto Toggling in our Jenkins job. Our Jenkins CI jobs that build the code for test include a parameter for the sprint team indicator (to identify the team the built package is for), and this automatically turns on the WIP toggles belonging to that team. This means that when Sprint Team A triggers a build, Jenkins will update the new work-in-progress toggles for Team A from OFF to ON in the package. This package is marked as WIP for Team A and so cannot be used by other teams. Team A can now deploy and test their new functionality, but other teams will still not see their changes. Once Team A are happy with their changes they turn the toggle ON in master for all teams and it is widely available.
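
The toggle flipping itself is straightforward because the FF4J toggle definitions live in an XML file within the build workspace. A minimal sketch of the idea is below, assuming the standard FF4J XML layout (feature elements with uid and enable attributes) and a team indicator embedded in the toggle name; the class name and naming convention are illustrative:

 import java.io.File;
 import javax.xml.parsers.DocumentBuilderFactory;
 import javax.xml.transform.TransformerFactory;
 import javax.xml.transform.dom.DOMSource;
 import javax.xml.transform.stream.StreamResult;
 import org.w3c.dom.Document;
 import org.w3c.dom.Element;
 import org.w3c.dom.NodeList;

 public class AutoToggler {

     // Called from the Jenkins build with the toggle file and team indicator,
     // e.g. java AutoToggler ff4j.xml TEAMA
     public static void main(String[] args) throws Exception {
         File toggleFile = new File(args[0]);
         String teamIndicator = args[1];

         Document doc = DocumentBuilderFactory.newInstance()
                 .newDocumentBuilder().parse(toggleFile);

         // Flip every WIP toggle belonging to this team from OFF to ON
         NodeList features = doc.getElementsByTagName("feature");
         for (int i = 0; i < features.getLength(); i++) {
             Element feature = (Element) features.item(i);
             if (feature.getAttribute("uid").contains(teamIndicator)) {
                 feature.setAttribute("enable", "true");
             }
         }

         // Write the updated toggle file back before the package is built
         TransformerFactory.newInstance().newTransformer()
                 .transform(new DOMSource(doc), new StreamResult(toggleFile));
     }
 }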

Unfortunately not everything can be toggled easily, and so a decision tree was built for developers to follow to understand the options available. Where the toggle framework was not an option, other approaches can be used. Sometimes a change can be "toggled/flipped" in other ways (e.g. a new function with a new name, a new column in the DB for new data, or an environment variable). The point is that the ability and desire to use Feature Toggles is not just about the toggle framework you choose; it's a philosophy and approach that must be adopted by the teams. If after all other options there is still no way to toggle a change, then teams have to communicate amongst themselves and take appropriate action to flag a potential breaking change.

What worked well

So what were the benefits seen after introducing trunk based development in the team? Firstly, the benefit of maintaining fewer code branches was immediately obvious. From day one every team was on the latest codebase, and no merging or cherry picking was required at the end of a sprint. This saved time, reduced merge-related defects and increased the agility of the team, with every team freed from the housekeeping effort of maintaining its own branch. Use of environments became more flexible, as in theory every new feature was on every environment behind a toggle, meaning that Team A's new feature can now be tested on Team B's environment should the need arise. Cross-team design issues are spotted earlier, as a clash of changes between teams is seen during development and not later when the code is merged together prior to a release. Teams are now able to share new code more easily, because as soon as a new helper function is coded it can be used by another team immediately. Any improvements to the CI/CD process, code organisation or tech debt can now be embraced by all teams at once, without a delay while they filter through each team's branch.

What didn’t go so well

Of course it's not all perfect, and we have faced some challenges. All teams are now impacted immediately by the quality of the master/trunk branch. Broken builds have been reduced with the introduction of Pull Requests and Jenkins running quality gate builds on check-ins to master, but any failure in the CI/CD build pipeline immediately impacts all the teams, and so these issues need to be resolved ASAP (which they should be anyway in any team). Teams must work together to resolve them, which does bring positives. Which brings me to the next point: communication.

When teams are using trunk based development and sharing one codebase, communication between teams becomes more important. It is critical that there is open dialogue between the teams to ensure that issues are resolved quickly, and that any changes that impact the trunk are communicated quickly and efficiently. Whilst a lack of cross-team comms is exacerbated by trunk based development, communication issues are a death knell to any dev team and should be resolved regardless. If this is an issue for your teams then be aware that trunk based development may not be for you until your teams are collaborating better. That said, introducing trunk based development is a good way to encourage teams to collaborate better for their own benefit.

Feature toggles/flags are a key enabler for trunk based development, letting you isolate changes until they are ready, but there is no denying that they add complexity and technical debt. We add a toggle around all changed code and use the Jira ID to name the toggle. Whilst we have a neat toggle solution, there is no getting away from the fact that extra conditions mean extra complexity in your code. Unit tests must consider feature toggles and test with them both on and off (see the sketch after this paragraph). Feature toggles can make code harder to read and increase the cyclomatic complexity of functions, which may result in code quality gates failing; we had to slightly adjust our Sonar quality gate to allow for this. Toggles do add technical debt to your code, but it is temporary debt and an investment to improve overall productivity, so in my opinion it is a valid form of technical debt: it's called debt because it's OK to borrow on a manageable scale if you pay it back over time. Removing toggles is critical to this process, and yet this is one area we have yet to address sufficiently. A process to remove toggles has been introduced, but it's proving harder than expected to motivate teams to remove toggles from previous releases at the same rate as they are being added. To this end we have added custom metrics in SonarQube to track toggle numbers, and we will use this key metric to ensure that the number of toggles stabilises or reduces.
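
On the unit testing point above, a test can exercise both code paths by flipping the toggle in an in-memory FF4J instance (a sketch only; the toggle id and the class under test are illustrative):

 import static org.junit.Assert.assertEquals;
 import org.ff4j.FF4j;
 import org.ff4j.core.Feature;
 import org.junit.Test;

 public class PricingServiceToggleTest {

     // Illustrative class under test: picks a code path based on the toggle
     static class PricingService {
         private final FF4j ff4j;
         PricingService(FF4j ff4j) { this.ff4j = ff4j; }
         int calculatePrice() {
             return ff4j.check("JIRA-12345") ? 90 : 100; // new path : old path
         }
     }

     @Test
     public void priceIsCorrectWithToggleOffAndOn() {
         // In-memory FF4J store seeded with the toggle under test, initially OFF
         FF4j ff4j = new FF4j();
         ff4j.createFeature(new Feature("JIRA-12345", false));

         PricingService service = new PricingService(ff4j);
         assertEquals(100, service.calculatePrice()); // old code path

         ff4j.enable("JIRA-12345");
         assertEquals(90, service.calculatePrice());  // new code path
     }
 }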

The state of toggles is an additional thing the team needs to manage; however, this is offset by the power to control the release of new code changes to other teams. We have found that care should be taken to ensure that feature toggles are not misused for unintended purposes. Be clear on what they are for and when they should and shouldn't be flipped (for us it's in the Definition of Done for a task in sprint). There can be demands to use them as a substitute for proper testing. Be clear on the types of Release/Feature toggles and provide guidance on what they can be used for. There is no doubt that they can be re-used at release time to back out unwanted features, but this should be a controlled process and designed in from the start. We already had Release Switches for turning features on and off, but Feature Toggles (in our case) are used purely for isolating changes from other teams until ready. We strive to ensure that the toggle is set ON or OFF during the sprint, before the code is moved through to release testing.

Conclusion

The benefits you derive from moving towards trunk based development will vary depending on your current processes. For a full guide to the general benefits of trunk based development check out this excellent resource trunkbaseddevelopment.com.

In our case the roll-out was a success and achieved the desired improvements in cycle time and developer productivity. The process of delivering technical change was simplified in terms of configuration management, and developers became more productive. This was despite the challenges faced and listed above.

There is no doubt that building feature toggles into your design from the start would make some of the technical challenges easier, but we have proved it can be done with an existing brownfield monolith.

Next Steps

Next steps are to continually improve the toggle framework to reduce the instances where a change can't be toggled on/off, and to make it easier to communicate changes that will impact all teams. A renewed emphasis on removing old toggles from the code is required, to ensure that teams and change approvers accept the "tax" each release of removing redundant toggles.


SonarQube migration issue- Jenkins Using old URL

I recently migrated a SonarQube server from one server to another in order to scale out the service to our dev teams. All went well until builds failed because they were looking at both the old and new server URLs for the Sonar results, so I'm writing some notes here to help me (and others) out in the future if I hit this again.

I installed the same version of SonarQube on the new application server that is on the old server. The database was not being moved, just the application server (the server with the Sonar service running).

After installation I ensured that the same custom config settings were made on the new server as had been made on the old server, and ensured that the same plugins were in place. I then stopped the Sonar service on the old server and started the service on the new box.

Once Sonar was confirmed to be up and running and connecting to the Database and showing the project dashboards I updated the Jenkins server configuration to point to the new box. All looked good so I ran a build, and then got this (log truncated)….

INFO: ANALYSIS SUCCESSFUL, you can browse http://OLDSERVER/dashboard?id=123
INFO: EXECUTION SUCCESS
The SonarQube Scanner has finished
Analysis results: http://NEWSERVER/dashboard/index/APP
Post-processing succeeded.
withSonarQubeEnv
waitForQualityGate
java.lang.IllegalStateException:
Fail to request http://OLDSERVER/api/ce/task?id=3423
at org.sonarqube.ws.client.HttpConnector.doCall(HttpConnector.java:212)

Bizarrely, the Jenkins build had managed to use both the new Sonar URL and the old one. The upload to the new server was successful, but some of the links for the report pointed to the old server. Also, the Quality Gate check, which validates that the Sonar Quality Gate passed, had tried to read the report on the OLD server and therefore failed, as it's not there (because it's on the new Sonar URL).

After checking Jenkins for any reference to the old Sonar server and restarting the service to clear any caches, I was still getting the error. Eventually I ran a Jenkins build and interactively peeked into the Jenkins workspace on the Jenkins slave, and in there is an auto-generated file containing lots of Sonar config settings. This file, SonarQubeAnalysisConfig.xml, is created during the Jenkins build initialisation stage. In the file I found references to the new Sonar URL, but also this property pointing to the old URL:

  sonar.core.serverBaseURL  

This value is set in the SonarQube configuration and is not dynamic, so it will not be updated when you migrate the server or change the server URL/port. To change it, open SonarQube > Administration > Configuration > General and change Server base URL to your new URL (e.g. http://yourhost.yourdomain/sonar). The description says this value is used to create links in emails etc., but in reality it is also used when integrating results.

Visual Studio 2019 Offline Installer

Microsoft have now released Visual Studio 2019 and, like VS2017, there is no offline installer provided by Microsoft, but you can generate one by using the web installer setup program to write all the packages to disk.

To create the offline installer just download the usual web installer exe from the Microsoft download site and then call it from the command line passing in the layout flag and a folder path like this:

vs_community --layout "C:\Setup\VS2019Offline"

In the example above I'm downloading the Community version, but if it's the Enterprise edition's installer then the setup file you downloaded will be called vs_enterprise.

The packages will all be downloaded and a local setup exe installer created.

If you want to limit the download to English then pass the --lang en-US flag too.

vs_community --layout "C:\Setup\VS2019Offline" --lang en-US

You can also limit which workloads to download, if you know their names, by listing them after an --add flag.
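
For example, to download only the .NET desktop development workload (one of the workload IDs documented by Microsoft):

vs_community --layout "C:\Setup\VS2019Offline" --lang en-US --add Microsoft.VisualStudio.Workload.ManagedDesktop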

Enjoy your offline installs.

Easy Upgrade Tool For NPM on Windows

Having recently needed to upgrade my version of NPM on a Windows machine, without upgrading my Node.js installation, I came across this excellent tool for doing just that without following a complex set of steps. Adding it here for others to find and for me to remember 🙂

The tool is called npm-windows-upgrade and can be found on GitHub. It simplifies the numerous steps previously required on Windows and is now the approach recommended by the NPM team.
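
At the time of writing, the steps from the project's README boil down to installing the tool globally and running it from an elevated PowerShell prompt (check the GitHub page for the current instructions):

npm install --global --production npm-windows-upgrade
npm-windows-upgrade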

In the end I ran this tool several times to test out various versions and it worked well, upgrading NPM in place successfully.

Cheap Azure Hosting via Static Web Sites

Something that is pretty cool and not that well known is that you can now host your static web site in the cloud with Microsoft Azure straight from your Azure storage account. The functionality is currently in preview, but it's functional enough to get up and running quickly if you have an Azure account.

Why host a static site?

Whilst it does depend on your requirements, many sites are quite capable of being static sites with no server-side processing. The classic example is a blog site, whereby the site could just serve up static HTML, images and JavaScript straight from disk, as the content changes fairly infrequently.

The growth in JavaScript libraries and the functionality of frameworks like React.js make static sites even more viable. Using the power of JavaScript it's possible to create rich, powerful web applications that don't need server-side processing. There has been an explosion of static site generators over recent years that will take text or markdown files and generate a complete static site for you. Two very popular generators of note are Gatsby (React.js based) and Jekyll (Ruby), but there are literally hundreds of others, as can be seen in this online directory: staticgen.com.

Hosting a static site in Azure

Of course you could always host a static site in Azure inside a full-featured web site (via a hosted VM or an Azure web site), but the beauty of hosting a static-only site is that you can host it straight out of a storage account, so you don't need to pay for any compute power, which makes it extremely cheap (and even free). You just pay standard Azure storage rates, which include a generous data transfer limit (about 5GB a month).

If you think about it, hosting a static web site is just a natural extension for a cloud offering like Azure, as it already hosts files and binary content on public URLs in Azure Storage. This new functionality makes it more explicit and enables web-site-like functionality such as custom error pages. It is also possible to add your custom domain name to the site and link up SSL (although unfortunately at the moment SSL requires use of an Azure CDN, which adds to the cost).

So how do you host your site, well follow the official instructions here.

Once you have a web page being served by the default Azure storage URL you can proceed to add your own custom domain name using these steps.

Now you should have a fully working site, but to keep costs even lower we can make use of caching to encourage the client browser to cache the files, thus reducing our data transfer costs. Luckily it is easy to set cache control settings on our Azure Blob storage items. This blog post by Alexandre Brisebois covers doing it in code, but if you are just testing, or have a site that doesn't change much, you can do it manually via the Azure Portal. To do so, enter the Azure Portal, browse to your Storage Account, and then using Storage Explorer find the files you want to set caching for and go to their properties. In the Properties dialog you can set the Cache-control value in the HTTP header to something like:

 "public, max-age=86400". 

There are other alternatives to Azure for hosting static files, and some offerings are very cheap or free. Some of these are more advanced than the current Azure offering and provide additional features such as integrated SSL and contact forms. One such vendor is netlify.com, but there are others.

In summary, if you want to host a site cheaply and you don't really need server-side processing, then consider hosting a static site; and if you're already using Azure, it's a simple step to give it a go.


Developer Roadmaps

Something that’s proving popular on Medium these days are “development roadmaps” that outline a roadmap approach to choosing techniques and technologies for certain technical domains (for example Web development or Dev Ops). Some of these are particularly powerful for putting the many bewildering technologies all on one page with logical grouping and a visual representation of how they interact. Modern web development has seen so much change over recent times that it is very easy to get lost and become overwhelmed and these roadmaps can help clear the fog (a little).

My favourite is the Web Developer Roadmap in 2019 maintained by Kamran Ahmed over on GitHub.

I have shared this with several people who have also found it useful, regardless of their level of expertise. The front-end roadmap is a great guide to what the community is currently settling on as the standard choices for tooling and techniques. I have checked back on the roadmap a few times over the last six months to verify my approach when starting on a new project, and I find that visualising the options makes decision making easier.

There are also Backend and DevOps roadmaps included which are equally useful.

For some more useful roadmaps check out this medium post.

Cmder – A Better Windows Console

Whilst Linux treats console users as first-rate citizens and provides many useful and powerful terminal emulators, Windows has always lagged behind. This is ever more noticeable now that many developer and IT Ops workloads are done via the terminal. Modern web development and DevOps tooling require at least some interaction with the terminal, and with the world moving to git for source control, developers everywhere are having to embrace consoles.
Whilst Microsoft have traditionally neglected the Windows console, they have started to add new features and improvements. For a background on the Windows Console and its architecture, check out this blog series. Windows 10 has the best Windows console to date, but there are better ones out there from third parties, and I've really got into Cmder.
Cmder is a smart pre-configured bundle of the ConEmu emulator software with some extras thrown in. To quote directly from their website:

Cmder is a software package created out of pure frustration over the absence of nice console emulators on Windows. It is based on amazing software, and spiced up with the Monokai color scheme and a custom prompt layout, looking sexy from the start.

It can be run portably from a USB stick if you wish, and it has full Git and Bash support. You can emulate the Windows Command Prompt, PowerShell, Bash, Windows Subsystem for Linux (WSL), and even the VS Developer Command Prompt, among others. All in a slick, feature-rich emulator.

It has hundreds of settings that can be tweaked to get everything just the way you like it, and it also has the awesome Quake mode so it can slide down from the top of your display.
Support for Cmd, PowerShell, Bash and many more is included out of the box, but if you are a Visual Studio user and want to emulate the Developer Command Prompt for VS2017 (recommended) then check out the simple instructions in this guide by Ricardo Serradas on Medium.
I’ve been using it for months and its been stable, performant and has also caught the eye of collegues due to those good looks which make it a pleasure to work in compared to the plain Windows console. Give it a try.

Useful Git Training Links

Having recently had to compile a list of useful learning resources for a development team migrating to git, I thought I would share them here.

Git is a very powerful and versatile distributed source control system, but it's not the easiest for a newbie to get their head around. The links below are ordered from tutorials giving an overview of git through to more advanced topics.

  1. What is Git – a nice overview article by Atlassian
  2. Learn Enough Git to Be Dangerous tutorial by Michael Hartl
  3. Git the Simple Guide – An excellent simple, straight-to-the-point guide to git by Roger Dudler. (My favourite guide)
  4. Git Tutorial – Another tutorial
  5. Git Cheat Sheet – cheat sheet for git and github commands
  6. The official git site documentation and tutorials
  7. Pro GIT ebook – an excellent definitive guide to git in a free ebook format


GitHub External Training Links: 

If you or your team also need to learn GitHub then here are some good training links.

  1. A great hello world example and introduction to GitHub
  2. Git Started With GitHub – free course from udemy
  3. Training videos on YouTube

Also it's worth remembering that Microsoft offer FREE private git repository hosting via Visual Studio Team Services if you don't want to host all your projects publicly.


Consume JSON REST Service via WCF Message Class

Since WCF was designed and envisioned by Microsoft, the world has changed and the use of RESTful JSON-based web services has increased at the expense of SOAP-based services. WCF was updated to reflect this change, and for several years it has supported RESTful services through webHttpBinding etc. (more on MSDN). There are many resources on the web for how to consume or host a REST service with WCF, but many of these assume you are not using a generic channel factory approach with the low-level Message class. Usually in WCF you would consume a service via a proxy, or perhaps by directly creating a channel factory, but these require explicit knowledge of the service contract being consumed, and sometimes a more generic solution is required. If, for example, you wanted to create a generic WCF helper class for your application which would build a message directly from passed-in data and call a service generically, then you could use the Message class directly. This advanced approach is documented for SOAP messaging, but what about if you need to send JSON?

Below are some notes on how you would use the Message class to send JSON in a generic way (i.e. without needing intimate knowledge of the service contract you’re calling).

In the code below we need to pass the Person object named "bob" as JSON, so we create a WebChannelFactory and use the "Endpoint1" config (which is very generic in nature). WebChannelFactory is a special ChannelFactory that automatically adds WebHttpBinding and WebHttpBehavior to the endpoint config if they're missing. Then we create a proxy and directly build a Channels.Message instance using a SOAP version of "None" (as we're not using SOAP here but JSON) and the DataContractJsonSerializer.

Person bob = new Person() {age = 89, name="Bob"};

WebChannelFactory<IRequestChannel> factory = new WebChannelFactory<IRequestChannel>("Endpoint1");

IRequestChannel proxy = factory.CreateChannel(
      new EndpointAddress("http://localhost:8080/Test"));

System.ServiceModel.Channels.Message requestMsg = 
      System.ServiceModel.Channels.Message.CreateMessage(
          MessageVersion.None, "", bob, new DataContractJsonSerializer(typeof(Person)));

requestMsg.Headers.To = new Uri(factory.Endpoint.Address.ToString());
requestMsg.Properties[WebBodyFormatMessageProperty.Name] = new WebBodyFormatMessageProperty(WebContentFormat.Json);

You will notice above we also need to set the message header URI too and also set the WebBodyFormatMessageProperty format to JSON. If we forget to do this step then the message will be sent in XML format despite the previous web config we have set (for more info on this issue see here and here). This is what is sent without setting the WebBodyFormatMessageProperty to JSON:

<root type="object"><age type="number">89</age><name>Bob</name></root>

and with the WebBodyFormatMessageProperty set to “WebContentFormat.Json”:

{"age":89,"name":"Bob"}

Next we call the nice and generic Request() method on the proxy and handle the response, picking out the body and deserialising it into a Person object via the DataContractJsonSerializer.

System.ServiceModel.Channels.Message responseMsg = proxy.Request(requestMsg);

Person BobResponse = responseMsg.GetBody<Person>(new DataContractJsonSerializer(typeof(Person)));

Endpoint Config:

<system.serviceModel>
 <client>
 <endpoint name="Endpoint1"
 address="http://localhost:8080/Test" 
 binding="webHttpBinding"
 contract="System.ServiceModel.Channels.IRequestChannel"
 />
 </client>
</system.serviceModel>

In this snippet the only thing that is specific to the service being called is the Person object, which the DataContractJsonSerializer needs to know about in order to serialise it into JSON correctly. The actual service call is generic. To make this a completely generic helper we can instead pass in a type for the DataContractJsonSerializer to use instead of a real object, leaving the calling component to pass the right type in when it calls this generic helper method.

If you are already using this message class approach for SOAP services and need to now call some JSON REST services then hopefully this will help.

SonarQube: Unit Test Results Not Shown

Recently, whilst building a Jenkins CI pipeline with SonarQube static analysis, the JUnit unit test results were not being included in the Sonar dashboard results. The Jacoco-based test coverage results were being included fine, but not the actual test pass/fail percentage.

After digging into the log for the Jenkins build I found this warning being logged by the SurefireSensor (the Sonar sensor responsible for scanning JUnit XML reports for results):

[sonar] 10:26:34.534 INFO - Sensor SurefireSensor
[sonar] 10:26:34.534 INFO - parsing /apps/jenkins2/var/lib/jenkins/workspace/abc/code_master/examplecode/UnitTest/junit
[sonar] 10:26:34.864 DEBUG - Class not found in resource cache : com.rh.examplecode.UIMapperTest
[sonar] 10:26:34.864 WARN - Resource not found: com.rh.examplecode.UIMapperTest

The JUnit XML reports were being found and parsed fine, but when the scanner looked for the actual test code (the *.java files) it could not be found, hence the warning. It turns out that the Java code for the tests is required in order to analyse the JUnit results files, so you need to tell Sonar where to find the source code for the tests. How? This is done via the "sonar.tests" property, which is a comma-separated list of filepaths for directories containing the test code (the *.java files, not the *.class files). For example:

sonar.tests = "/UnitTests/junit"

Set this property alongside the other parameters for Sonar, for example:

sonar.projectBaseDir="${WORKSPACE}/exampleApp"
sonar.projectKey="testbuild1"
sonar.projectName="testBuild"
sonar.sourceEncoding="UTF-8"
sonar.sources="src/main/java/com/rh/examplecode/"
sonar.junit.reportsPath="ReportsXML/"
sonar.tests= "/UnitTests/junit"
sonar.jacoco.reportPath="target/jacoco.exec"
sonar.jacoco.reportMissing.force.zero="true"
sonar.binaries="build/com/rh/"

After this change the Sonar scanner will run and this time find the test source code, enabling it to complete the analysis. The log should report something like this:

[sonar] 13:10:20.848 INFO - Sensor SurefireSensor
[sonar] 13:10:20.848 INFO - parsing /apps/jenkins2/var/lib/jenkins/workspace/abc/code_master/ReportsXML
[sonar] 13:10:21.022 INFO - Sensor SurefireSensor (done) | time=10ms

And you should now have your unit test success/failure results in the unit test widgets on the project's Sonar dashboard.