List all Transitive Dependencies in your .Net Core Project


Sometimes you need to find out quickly which packages a .Net Core project or solution depends on. Whilst you can find this information in the Visual Studio IDE, or by opening the project files individually and reading them, there is a quick and easy way using the “dotnet list package” command. What’s more, you can also list the packages that your dependencies themselves depend on, which is really useful for tracking down which part of your solution has a requirement on a specific package.

dotnet list package

This outputs all NuGet package references for a specific project or a solution (which you can specify in the command parameters or just let dotnet find the nearest solution or project file in the current directory structure). Example below:

Project 'WebApplication1' has the following package references
   [net6.0]: 
   Top-level Package                                Requested   Resolved
   > Azure.Storage.Blobs                            12.14.1     12.14.1 
   > Microsoft.AspNet.Identity.EntityFramework      2.2.3       2.2.3  

To see which packages your project relies on, and which packages they in turn rely on, run the command with the --include-transitive flag.

dotnet list package --include-transitive

This outputs the full list of packages and their dependencies, like so:

Project 'WebApplication1' has the following package references
   [net6.0]: 
   Top-level Package                                Requested   Resolved
   > Azure.Storage.Blobs                            12.14.1     12.14.1 
   > Microsoft.AspNet.Identity.EntityFramework      2.2.3       2.2.3   

   Transitive Package                         Resolved
   > Azure.Core                               1.25.0  
   > Azure.Storage.Common                     12.13.0 
   > EntityFramework                          6.1.0   
   > Microsoft.AspNet.Identity.Core           2.2.3   
   > Microsoft.Bcl.AsyncInterfaces            1.1.1   
   > System.Diagnostics.DiagnosticSource      4.6.0   
   > System.IO.Hashing                        6.0.0   
   > System.Memory.Data                       1.0.2   
   > System.Numerics.Vectors                  4.5.0   
   > System.Text.Encodings.Web                4.7.2   
   > System.Text.Json                         4.7.2   
   > System.Threading.Tasks.Extensions        4.5.4   

I found this useful this week when a large solution’s build pipeline was erroring due to a missing dependency, but it wasn’t clear which project needed it.
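In that situation, running the command at solution level and filtering the output is a quick way to find the culprit. A hedged example for Windows (the solution file name and package name are placeholders; including “Project” in the search keeps the project header lines so you can see which project each match belongs to):

dotnet list MySolution.sln package --include-transitive | findstr "Project System.Text.Json"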

For more info on the “dotnet list package” command see the Microsoft docs here.


Add Git Bash & VS Dev Cmd Prompt Profiles to Windows Terminal

I admit I was not too impressed with the early beta versions of Windows Terminal, maybe because I use Cmder as my daily terminal driver and its features are excellent. However, since Windows Terminal reached RTM at v1.0 it does seem a better quality product, and after the demo of new features at MS Build 2021 I can see that my number one feature request will be released soon – Quake Mode! Quake Mode enables the terminal to drop down from the top of the screen on a keypress; it is a feature of Cmder that I use every day to show/hide the terminal for those quick actions.

Windows Terminal does need some tweaking to get it to look and behave to my tastes, and unfortunately it still doesn’t have a proper opacity setting (we have to make do with the smudgy ‘acrylic’ setting instead), but with some tweaking it is highly customisable via the settings.json file and the settings UI. One thing you can do is add your own profiles, so I added my own profile for Git Bash.

Assuming you have Git Bash installed (via Git) then the below config should work, but you’ll need to check your paths. Add this section to the profiles list in the settings.json file, which can be found by opening Windows Terminal and choosing ‘Settings’ from the dropdown menu; the JSON file will then open. It is usually located somewhere like C:\Users\<YOURUSERNAME>\AppData\Local\Packages\Microsoft.WindowsTerminal_8wekyb3d8bbwe\LocalState.

{
 "closeOnExit": "always",
 "commandline": "C:\\Program Files\\Git\\bin\\bash.exe -I -l",
 "icon": "C:\\Program Files\\Git\\mingw64\\share\\git\\git-for-windows.ico",
 "name": "GitBash",
 "startingDirectory": "%USERPROFILE%",
 "tabTitle": "GitBash"
}
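For reference, a rough sketch of where such a profile object sits within settings.json (the surrounding structure is trimmed to just the relevant part; Windows Terminal’s settings file accepts // comments):

{
    // ...global settings...
    "profiles":
    {
        "list":
        [
            // paste profile objects like the GitBash one above here,
            // alongside the default PowerShell and cmd entries
        ]
    }
}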

It’s also useful to add a profile for the Developer Command Prompt for VS 2019, like this:

{
 "name": "Developer Command Prompt for VS 2019",
 "commandline": "%comspec%  /k \"%ProgramFiles(x86)%\\Microsoft Visual Studio\\2019\\Community\\Common7\\Tools\\VsDevCmd.bat\"",
 "icon": "%ProgramFiles(x86)%\\Microsoft Visual Studio\\2019\\Community\\Common7\\IDE\\Assets\\VisualStudio.70x70.contrast-standard_scale-180.png",
 "tabTitle": "Dev Cmd VS 2019"
}

This snippet assumes the Visual Studio Community edition is installed; if you’re using Enterprise then correct the file paths to \Enterprise\ instead of \Community\. If you’re using Cmder and want to read how to add the Developer Command Prompt to Cmder, check out this post here.

To see the whole file, check out my GitHub repo where I’ll keep my latest config file (whilst I remember to update it) at https://github.com/RichHewlett/windows-terminal-config.

VS Code Keyboard Mappings in VS 2019

Quick post to let you and ‘future me’ know that Visual Studio 2019 includes the option to use VS Code keyboard mappings out of the box. As someone who has recently been using VS Code as my development environment much more often than full Visual Studio, this was a really useful find, and one I wasn’t aware of. I had been fumbling about in Visual Studio, using muscle memory to perform tasks before realising that I was in a different editor and so needed to use the ‘other’ keyboard mappings. I decided enough was enough, dug into the Visual Studio settings to tweak some of the keyboard shortcuts, and found that you can just tell it to use VS Code mappings instead.

In Visual Studio 2019 go to Tools > Options > Environment > Keyboard and set ‘Apply the following additional keyboard mapping scheme’ to ‘Visual Studio Code’.

So I can now standardise my keyboard mappings across both IDEs and that is one less thing for me to remember.

Moving Sonar Rules closer to the Developer with ESLint

Shift code quality analysis to the left by moving your static analysis from the CI/CD pipeline to the developer’s IDE where possible. In this post I cover what we did, how to set up SonarLint, and how we ultimately moved the Sonar rules into ESLint instead.

Problem

As a development team working on a JavaScript application, we sometimes had issues where a difference between the rules enforced in our CI/CD pipeline (using SonarQube) and the local rules (using ESLint) led to discrepancies in standards and ultimately, at times, broken builds. If you are enforcing rules in your project and in the CI/CD pipeline with SonarQube then ideally you need them to mostly match, but this is not as easy as it sounds. Any failures in the pipeline are more time-consuming to resolve than if they had happened locally, before the developer pushed their code to the repository.


Solution

Import the Sonar rules into ESLint and enforce ESLint in both the IDE and the CI/CD pipeline.

In order to strike a balance between quality assurance and flexibility in the implementation of rules, we introduced an approach that combines ESLint and Sonar rules with an emphasis on shift-left: rule enforcement is done in the IDE as code is written and then re-enforced later in the CI/CD pipeline.

Firstly, SonarSource (the developers of SonarQube) provide a plugin for several IDEs (including VS Code) called SonarLint that helps address the issue of running Sonar rules in an IDE. Once installed it will analyse your code against the default Sonar rules and report issues. What if you’re not running the default set of rules on your Sonar server? No worries, as the plugin can be set to connect to your server and download the right quality profile and rules.

To do this, install the SonarLint extension into your IDE (many are supported, e.g. VS Code, Visual Studio, etc.) and then set the extension properties as per the instructions for your particular IDE. For VS Code it goes like this:

To link a server, set the “sonarlint.connectedMode.connections.sonarqube” setting, which has to be a USER setting (oddly). Then in the workspace settings for the project you can configure the projectKey for your project. Workspace settings files are created in a settings.json file in the .vscode folder, which can be added to source control so this only needs to be set up once per project. Once done, press F1 > type sonar > select “SonarLint: Update all project bindings to SonarQube”, which will refresh the plugin’s cache and force it to download the rules from your Sonar server.
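As a rough illustration, the settings end up looking something like this (setting names as per the SonarLint VS Code docs at the time; the server URL and project key are placeholders):

// User settings.json
"sonarlint.connectedMode.connections.sonarqube": [
    { "serverUrl": "https://sonarqube.mycompany.com" }
],

// Workspace .vscode/settings.json (safe to commit to source control)
"sonarlint.connectedMode.project": {
    "projectKey": "my-project-key"
}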

Now, whilst SonarLint is a useful tool, it is not as powerful as ESLint for linting in the IDE (in my opinion). For example, ignoring a rule (for a genuine reason) at file or line level is not possible in a satisfactory way (you can only exclude a line or file from ALL Sonar rules, not from a specific one). ESLint also provides more power and flexibility, especially where you have a centrally managed Sonar server with shared rule profiles and quality gates that are not easy to change (which may be a good thing for your organisation).

So instead of, or even in addition to, SonarLint checking, you can actually import the Sonar JavaScript scanner rules into ESLint. To do this, install the eslint-plugin-sonarjs npm package and then configure your ESLint config to use its recommended rules.

  1. npm install eslint-plugin-sonarjs
  2. Add it to your ESLint config file as an extends (a minimal example config follows this list):

    extends: [
      "plugin:sonarjs/recommended"
    ]
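For clarity, a minimal sketch of a complete .eslintrc.json with the Sonar rules added (the eslint:recommended entry and the rule override are illustrative, not required):

{
    "extends": [
        "eslint:recommended",
        "plugin:sonarjs/recommended"
    ],
    "plugins": ["sonarjs"],
    "rules": {
        "sonarjs/cognitive-complexity": ["error", 15]
    }
}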

Now ESLint will report quality errors that would previously only have been highlighted in Sonar during a CI/CD pipeline build. This immediate feedback is more useful to the dev team and reduces the time wasted on broken builds. For us this ensured that, apart from a few rules, the majority are now in ESLint where developers can see them and resolve them, preventing the need for the CI/CD pipeline to highlight the problem (and break builds).

To enforce the ESLint rules at build time we run the ESLint analysis during the CI/CD pipeline by calling ESLint as a build step.

eslint --config .eslintrc.json src

This means no ESLint errors will be let through. We also have a Sonar Quality Gate check configured in the pipeline, but as the majority of Sonar rules are now in ESLint we should only get a failure where a breached rule exists only in the server’s Sonar profile.


As an additional step we can also import all the ESLint issues found into Sonar so that we can see them in the Sonar dashboard. You can export the ESLint results as JSON for Sonar to import (during the build). To do this, run this command in the build (ideally create a new npm script for it), assuming your src folder contains the source:

eslint --config .eslintrc.json --output-file ./eslint-report.json --format json src

Next, set this Sonar property in your sonar-project.properties file or via a command line argument (where eslint-report.json is the output report produced above).

sonar.eslint.reportPaths=eslint-report.json

Any issues from the ESLint report will then appear in Sonar issues, marked with an ESLint badge. It appears warnings are added as Majors and errors as Criticals, and unfortunately I’ve not yet found a way to change this.

As a side note, this command is also useful for getting ESLint to output an HTML report of any errors, which is great for reviewing or sharing:

eslint --config .eslintrc.json --output-file ./eslint-report.html --format html src

Summary

In summary, quality is being maintained as the same rules are enforced, but now developers only need to ensure that ESLint is happy before committing their changes to source control to ensure a smooth server build. To make this easier you can add a new npm run script that runs all pre-commit checks, triggered automatically (e.g. via git hooks) or manually by the developer.
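A minimal sketch of what that could look like in package.json (the script names are just suggestions):

"scripts": {
    "lint": "eslint --config .eslintrc.json src",
    "precommit-checks": "npm run lint && npm test"
}

A tool like husky can then bind precommit-checks to a git pre-commit hook so it runs automatically.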

Auto increment build number in a JavaScript app

Whilst looking for a simple way to auto increment an application build version number for a small React JavaScript application, I found a neat solution, so I’m sharing it here for others to use (and for me to refer back to on my next project).


Problem:

For a small React application using npm, I needed to automatically increment a build number on every production build without having to remember to do it, and then I needed to refer to that version number within the application in order to display it in the site’s footer.

Solution:

After a short Google I found this really neat solution posted by Jan Lübeck on the Create React App GitHub site here, which I then very slightly tweaked for my requirements.

The solution is essentially to call a NodeJS script from the npm build script definition which updates a metadata JSON file within the project, incrementing the build version number by 1. This file can then be read within the React code as required to display the current build version.

First we need to store the build version numbers in a JSON file (saved as src/metadata.json):

{
    "buildMajor": 1,
    "buildMinor": 0,
    "buildRevision": 3,
    "buildTag": "BETA"
}

Only the buildRevision will be updated automatically as we will want manual control over the major/minor numbers as we implement semantic versioning.

The NodeJS Script (generate-buildno.js) is below. This opens the metadata.json file, reads the buildRevision value and then updates it by 1 before saving the file:

var fs = require('fs');
console.log('Incrementing build number...');
fs.readFile('src/metadata.json',function(err,content) {
    if (err) throw err;
    var metadata = JSON.parse(content);
    metadata.buildRevision = metadata.buildRevision + 1;
    fs.writeFile('src/metadata.json',JSON.stringify(metadata),function(err){
        if (err) throw err;
        console.log(`Current build number: ${metadata.buildMajor}.${metadata.buildMinor}.${metadata.buildRevision} ${metadata.buildTag}`);
    })
});

To call this script at build time we update the package.json file and change the build command to call our script first, before the normal build command:

 "scripts": {
    "start": "react-scripts start",
    "build": "node generate-buildno.js && react-scripts build",
    "test": "react-scripts test",
  },

Now when we call npm run build, the JSON file will be updated and the build revision number incremented. The React code that needs to display the app version number just has to read the metadata.json file, like the example React component below:

import React from 'react';
import metadata from './metadata.json';

function FooterComponent() {
  return (
    <div className="sf-footer">
      &copy; 2020 RichHewlett.com
      <div className="sf-footer-version">
        {`Version ${metadata.buildMajor}.${metadata.buildMinor}.${metadata.buildRevision} ${metadata.buildTag}`}
      </div>
    </div>
  );
}
export default FooterComponent;

Remember after building the application to commit your changed metadata.json file into source control!

IIS Express Launch Script

Usually during web development you want to run your web code locally via a local development web server, and there are many options for this; in fact most development workflows provide this functionality. For example, Visual Studio provides a local web server to run your code, and React/Webpack toolchains usually use NodeJS-based solutions. Sometimes, though, you want to fire up something simple to run your code outside your development workflow.


The Problem

When developing ReactJS projects I often prefer to run my transpiled, bundled code (produced via ‘npm run build’) differently from my development code, on a different local web server. As I usually have IIS Express installed already, I prefer to use that rather than install new global Node-based web server tools. Whilst the command line parameters for IIS Express are simple, I have added them to this post to prevent me having to remember them in the future 🙂

The Solution

To run IIS Express from the command line use:

iisexpress /path:<path_to_files>

You can also specify a port number and other options if you require (see the documentation here).
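For example, to serve a build folder on a specific port (the path and port here are illustrative):

iisexpress /path:c:\myapp\build /port:8080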

I prefer to wrap this into a RunOutputBuild.cmd batch file that I can store with the code in the repo and fire up when I want to host my transpiled build output files.

REM Script to launch IIS Express and host the build folder
echo "Launching IIS Express"
cmd /K "c:\program files (x86)\IIS Express\iisexpress.exe" /path:%~dp0build

This assumes the IIS Express installation location is the default one and that the script lives in the root of the repo, as does the \build folder. The useful %~dp0 variable expands to the full directory path of the batch file itself, so the script works no matter where it is run from.

I can then just run the command file and browse to the compiled site. It’s also possible to add compilation steps, and a step to fire up the browser automatically if required. Alternatively you could write an npm script or Gulp/Grunt task that replicates this functionality to launch IIS Express.

Break your site out of Internet Explorer Compatibility View

Internet Explorer Compatibility mode is a feature of IE that allows you to choose to render sites that targeted older versions of IE when they were developed. It essentially pretends to be IE 8 during rendering, which can correct many issues. Microsoft maintains a list of sites that require this compatibility mode, and allows users to choose additional sites to be rendered using it. For many large enterprises that were stuck on IE 6 because of old legacy systems built to IE6 standards, this feature proved very valuable, as it enabled them to move their workstations to a newer supported version of IE whilst they built replacements for their legacy systems. You can choose to enable Compatibility mode for specific sites or for all intranet sites.


The Problem

So what if you know that the majority of users of your modern web application are using IE11, but with Compatibility mode on, which makes their browser pretend it’s IE8 and thus unable to make use of new browser features? Unless you build in support for very old browsers via polyfills, those users will see errors and unexpected behaviours.

The Solution

If you want to ensure that users of IE11 or Edge are not restricted by Compatibility mode then you need to disable it for your site, which is possible by adding a meta tag to your pages. Add the below meta tag inside your head tag:

<head>
    <meta http-equiv="x-ua-compatible" content="ie=edge" />
</head>

The browser will then disable Compatibility mode and render the page to modern standards, as Edge would. This is a very useful way to target modern browser features without having to turn off compatibility settings on each client and possibly causing issues with other sites.
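The same effect can also be achieved by sending an X-UA-Compatible HTTP response header rather than a meta tag. As a hedged sketch, for a site hosted on IIS the web.config addition would look something like this:

<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="X-UA-Compatible" value="IE=edge" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>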

For more information check out these links here and here.

Some Recommended VS Code Extensions

One of the things that makes Visual Studio Code (VSCode) such a great editor is the many extensions that have been built for it. Extensions in VSCode are explained here. As a reference for myself when building new machines, and for anyone else who might find this useful, below is a list of my most used extensions:

  • Bracket pair colorizer – Colours your brackets and braces for easy identification. I avoid many missing bracket errors with this one!
  • XML Tools – XML Formatting, XQuery, and XPath Tools.
  • ESLint – Extension to integrate ESLint into the IDE.
  • SonarLint – Sonar Rules in VSCode to check your code quality as you go.
  • Prettier – Integrate Prettier code formatting into your IDE.
  • GitLens – Extend the Git capabilities of VSCode with this tool.
  • PowerShell – Develop PowerShell scripts inside VSCode.
  • Docker – Develop Docker scripts inside VSCode. Adds syntax highlighting, commands, hover tips, and linting for Dockerfile and docker-compose files.
  • REST Client – Allows you to send HTTP requests and view the response directly in VSCode.
  • JS Refactor – Automated refactoring tools to smooth your JavaScript development workflow.

Check out more on the VSCode marketplace.

What are you using?

Moving teams to Trunk Based Development (an example)

In this post I am going to cover an example case study of introducing Trunk Based Development to an existing enterprise dev team building a monolithic web application. I’m not going to cover the details of trunk based development, as that’s covered in detail elsewhere on the internet (I recommend trunkbaseddevelopment.com). For the purpose of this article I’m referring to all development teams working on a single code branch, constantly checking their changes in to that single branch (master). In our model the only reasons for ever creating a new branch were the following:

  1. A developer creates a personal branch off master for the sole purpose of creating a Pull Request into the master branch (for code review purposes).
  2. The release team take a branch off master to create a Release Candidate build. Whilst this is not in the true spirit of trunk based development, it is often required for release management purposes. Microsoft use this model for their Azure DevOps tooling development and call it Release Flow.
  3. A team may still create a branch for a technical spike or proof of concept that will be dumped and won’t ever be merged into master.

Problem Statement

The development team in this case was an enterprise application development team building a large-scale multi-channel application. There were five sprint teams working on the same monolithic application code-base, with each team having its own branch of the code that they worked on independently from the other sprint teams. Teams were responsible for maintaining their own branches and merging in updates from previous releases, but this was often not done correctly, causing defects to emerge and features to be accidentally regressed. What’s more, the teams’ branches would deviate further and further from each other over time, making the problem worse with each subsequent release.

Following a monthly release cycle, at the end of two fortnightly sprints all teams manually merged their code into a release branch which was then built and sanity tested. The merging of code was usually manual and often done via cherry-picking individual changes from memory. Needless to say, this process was error prone and complicated for larger releases. Once the merge was complete, this was the first time that all the code had been running together, and so many cross-team impacting changes were only identified at this point, either as failed CI builds or in the release testing cycle. This merge-and-test cycle took between 3 and 8 days, effectively taking a week out of our delivery cycle and severely impacting velocity. The more issues that were found, the more the release testing was increased in an attempt to address quality issues, increasing lead times. It was clear that something needed to be done, and so we decided to take the leap to trunk based development, eliminating waste by removing merges and aligning each team’s changes sooner.

Solution

We moved to a single master/trunk branch (in Git) for ALL development, and all sprint teams worked on this single branch. Each sprint team has its own development environment for in-sprint testing of their changes. Every month we create a cut of the master branch to form the release candidate branch for the route to live, acting as a stable, scope-locked base for testing the upcoming release to production. This model follows Microsoft’s Release Flow. Defect fixes for the release candidate are made in master or the release branch, depending on the sprint team’s decision, but each week changes made in the release branch (i.e. defect fixes) are merged back into master/trunk. This ‘back merge’ activity is owned by one sprint team per release, and the ownership rotates each release so everyone shares the effort and benefits from improvements to this process.

Feature Toggles

The move towards trunk based development was not possible without utilising Feature Toggles (or Release Toggles, to be more specific). We needed to support multiple teams working on the same codebase but building and testing their own changes (often targeting different releases). We needed to protect each team from every other team’s untested changes, and so we used toggles to isolate new changes until they were tested/approved. All code changes were wrapped in an IF block, with the old code in the ELSE block, and a toggle check determines which route the code takes.

 if (ToggleManager.IsToggleEnabled("feature12345"))
 {
     // new code here 
 }
 else 
 {
     //wrap old code here unchanged
 }

As the system was a Java EE application we chose FF4J as the toggle framework. FF4J is a simple but flexible feature-flipping framework, and as the toggles are managed in a simple XML file it made it easier to implement our Jenkins solution described later. To be clear, there are many frameworks that could be used, and it’s simple to create your own. To be able to support replacing the toggle framework, and to make it as simple as possible for developers to create toggles/flips, the FF4J functionality was wrapped in a helper class which made adding a toggle a one-line change.
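As a minimal sketch (illustrative, not our actual class), assuming the FF4J config lives in an ff4j.xml on the classpath and keeping the method name used in the snippet above, such a helper amounts to something like:

import org.ff4j.FF4j;

public final class ToggleManager {

    // Load the toggle definitions once from the XML config baked into the package.
    private static final FF4j FF4J = new FF4j("ff4j.xml");

    private ToggleManager() {
    }

    public static boolean IsToggleEnabled(String featureId) {
        // Treat unknown feature IDs as OFF so a missing toggle definition
        // fails safe, rather than letting FF4J throw on an unknown feature.
        return FF4J.exist(featureId) && FF4J.check(featureId);
    }
}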

We also implemented a Feature Toggle tag library so that JSF (JavaServer Faces) tags could be wrapped around content in JSF pages to enable/disable it depending on the FF4J toggle being on/off. A JavaScript wrapper for toggles was also developed that allowed toggles to be checked from within our client-side JS code.
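As a rough sketch of the client-side idea (hypothetical: how the toggle states reach the browser, here a global object rendered into the page by the server, is an assumption):

// Toggle states rendered into the page by the server at request time.
window.featureToggles = window.featureToggles || {};

// Mirror of the server-side helper: unknown toggles are treated as OFF.
function isToggleEnabled(featureId) {
  return window.featureToggles[featureId] === true;
}

if (isToggleEnabled('feature12345')) {
  // new client-side behaviour here
} else {
  // old behaviour unchanged
}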

A purposeful decision was made to bake the toggle config into the built code packages, to prevent the toggle definition files and the code drifting apart. This meant that the toggle file is baked into the built JAR/EAR/TAR file for deployment. Once the package is created the toggles are set for that package, preventing the code-configuration disconnect that is the cause of many environmental stability issues. This was sometimes controversial, as teams would want to change a toggle after deployment for the simple reason that they forgot to set it correctly, or would hastily want to flip toggles on code in test, which was not the design goal of our Feature Toggles (although it is a valid scenario for toggles, and a separate design was introduced for turning features on/off in production).

All code changes are wrapped in a toggle, and the toggle is set to OFF in the repository until the change has passed the definition of done within the sprint team. Once the change has been completed (‘done’ includes testing in sprint) the toggle is turned ON in the master branch, making it ON for all teams immediately (as the code base is shared). As the toggles for new untested changes are OFF in the code, and all package builds come from the master branch, a new feature cannot be tested on a test environment without first flipping a toggle. So how can it be tested and turned on without impacting other teams? For this we introduced auto-toggling in our Jenkins jobs. The Jenkins CI jobs that build the code for test include a parameter for the sprint team indicator (to identify the team the built package is for), and this automatically turns ON the WIP toggles belonging to that team. This means that when Sprint Team A triggers a build, Jenkins will update the new work-in-progress toggles for Team A from OFF to ON in the package. This package is marked as WIP for Team A and so cannot be used by other teams. Team A can now deploy and test their new functionality, but other teams will still not see their changes. Once Team A are happy with their changes they turn the toggle ON in master for all teams and it’s widely available.
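As a rough illustration of that auto-toggling step (entirely hypothetical: it assumes each feature entry in the FF4J XML is tagged with its owning team and sits on a single line):

# Jenkins build step: flip the WIP toggles owned by the team parameter to ON
# in the ff4j.xml baked into this package, e.g. for entries like
#   <feature uid="feature12345" enable="false" description="TeamA" />
sed -i "/description=\"${SPRINT_TEAM}\"/s/enable=\"false\"/enable=\"true\"/" src/main/resources/ff4j.xml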

Unfortunately not everything can be toggled easily, and so a decision tree was built for developers to follow to understand the options available. Where the toggle framework was not an option, other approaches could be used. Sometimes a change can be “toggled/flipped” in other ways (e.g. a new function with a new name, a new column in the DB for new data, or an environment variable). The point is that the ability and desire to use Feature Toggles is not just about the toggle framework you choose; it’s a philosophy and approach that must be adopted by the teams. If, after considering all other options, there is no way to toggle a change, then teams have to communicate it amongst themselves and take appropriate action to flag a potential breaking change.

What worked well

So what were the benefits seen after introducing trunk based development in the team? Well, firstly the benefit of maintaining fewer code branches was immediately obvious. From day one every team was on the latest codebase, and no merging and cherry-picking was required at the end of a sprint. This saved time, reduced merge-related defects and increased the agility of the team. Each team saved time and effort by not having the housekeeping burden of maintaining their own branch. Use of environments became more flexible, as in theory every new feature was on every environment behind a toggle, meaning that Team A’s new feature could now be tested on Team B’s environment should the need arise. Cross-team design issues were spotted earlier, as a clash of changes between teams is seen during development and not later when the code is merged together prior to a release. Teams are now able to share new code more easily, because as soon as a new helper function is coded it can be used by another team immediately. Any improvements to the CI/CD process, code organisation or tech debt can now be embraced by all teams at once, without the delay of filtering through each team’s branches.

What didn’t go so well

Of course it’s not all perfect, and we have faced some challenges. All teams are now impacted immediately by the quality of the master/trunk branch. Broken builds have been reduced with the introduction of Pull Requests and Jenkins quality-gate builds running on check-ins to master, but any failure in the CI/CD build pipeline immediately impacts all the teams, and so these issues need to be resolved ASAP (as they should be in any team). Teams must work together to resolve them, which does bring positives. Which brings me to the next point: communication.

When teams are using trunk based development and sharing one codebase, communication between teams becomes more important. It is critical that there is open dialogue between the teams to ensure that issues are resolved quickly, and that any changes that impact the trunk are communicated quickly and efficiently. Whilst a lack of cross-team comms is exacerbated by trunk based development, communication issues are a death knell to any dev team and should be resolved anyway. If this is an issue for your teams, be aware that trunk based development may not be for you until your teams are collaborating better. That said, introducing trunk based development is a good way to encourage teams to collaborate better for their own benefit.

Feature toggles/flags are a key enabler for trunk based development, enabling you to isolate changes until they are ready, but there is no denying that they add complexity and technical debt. We add a toggle around all changed code and use the Jira ID to name the toggle. Whilst we have a neat toggle solution, there is no getting away from the fact that extra conditions mean extra complexity in your code. Unit tests must consider feature toggles and test with them both on and off. Feature toggles can make code harder to read and increase the cyclomatic complexity of functions, which may result in code quality gates failing; we had to slightly adjust our Sonar quality gate to allow for this. Toggles do add technical debt to your code, but this is temporary debt and an investment to improve overall productivity, so in my opinion it is a valid form of technical debt: it’s called debt because it’s OK to borrow on a manageable scale if you pay it back over time. Removing toggles is critical to this process, and yet this is one area we have yet to address sufficiently. A process to remove toggles has been introduced, but it’s proving harder than expected to motivate teams to remove toggles from previous releases at the same rate as they are being added. To this end we have added custom metrics in SonarQube to track toggle numbers, and we will use this key metric to ensure that the number of toggles stabilises and then reduces.

The state of the toggles is an additional thing the team needs to manage; however, this is offset by the power to control the release of new code changes to other teams. We have found that care should be taken to ensure feature toggles are not misused for unintended purposes. Be clear on what they are for and when they should/shouldn’t be flipped (for us it’s in the Definition of Done for a task in sprint). There can be demands to use them as a substitute for proper testing. Be clear on the types of Release/Feature toggles and provide guidance on what they can be used for. There is no doubt that they can be re-used at release time to back out unwanted features, but this should be a controlled process and designed in from the start. We already had Release Switches for turning features on and off, but Feature Toggles (in our case) are used purely for isolating changes from other teams until ready. We strive to ensure that the toggle is set ON or OFF during the sprint, before the code is moved through to release testing.

Conclusion

The benefits you derive from moving towards trunk based development will vary depending on your current processes. For a full guide to the general benefits of trunk based development, check out the excellent resource trunkbaseddevelopment.com.

In our case the rollout was a success and achieved the desired improvements in cycle time and developer productivity. The process of delivering technical change was simplified in terms of configuration management, and developers became more productive, despite the problems listed above.

There is no doubt that building feature toggles into your design from the start makes some of the technical challenges easier, but we have proved it can be done with an existing brownfield monolith.

Next Steps

Next steps are to continually improve the toggle framework to reduce the instances where a change can’t be toggled on/off, and to make it easier to communicate about changes that will impact all teams. A renewed emphasis on removing old toggles from the code is required, to ensure that teams and change approvers accept the “tax” each release of removing redundant toggles.


RunAs Issue? Check Secondary Logon Service.

On Windows, if you are having problems trying to perform an action as a different user via the RunAs command, it might be because the ‘Secondary Logon’ service is not running. I recently had this problem on Windows Server, and after some investigation found that the ‘Secondary Logon’ service had been disabled; starting the service resolved the issue. By default it is set to ‘Manual’.

The error you get from the RunAs command may vary depending on the OS version, but it will report a problem running a process or service. This is the error I get on Windows 10:

1058: The service cannot be started, either because it is disabled or because it has no enabled devices associated with it.
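If you hit this, the service can be checked and started from an elevated PowerShell prompt (its internal service name is seclogon):

Get-Service seclogon                      # check the current status
Set-Service seclogon -StartupType Manual  # restore the default startup type if it was disabled
Start-Service seclogon                    # start the service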