List all Transitive Dependencies in your .Net Core Project


Sometimes you need to quickly find out what packages a .Net Core project or solution depends on. You can find this information in the Visual Studio IDE, or by opening each project file and reading it, but there is a quicker way using the “dotnet list package” command. What’s more, you can also list the packages that your dependencies themselves depend on, which is really useful for tracking down which part of your solution has a requirement on a specific package.

dotnet list package

This outputs all NuGet package references for a specific project or a solution (which you can specify in the command parameters or just let dotnet find the nearest solution or project file in the current directory structure). Example below:

Project 'WebApplication1' has the following package references
   [net6.0]: 
   Top-level Package                                Requested   Resolved
   > Azure.Storage.Blobs                            12.14.1     12.14.1 
   > Microsoft.AspNet.Identity.EntityFramework      2.2.3       2.2.3  
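
You can also point the command at a specific project or solution file explicitly rather than relying on discovery, for example (the path here is just a placeholder):

dotnet list ./WebApplication1/WebApplication1.csproj package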

To see which packages your project relies on, and which packages they in turn rely on, run the command with the --include-transitive flag.

dotnet list package --include-transitive

This outputs the full list of packages and their dependencies, like so:

Project 'WebApplication1' has the following package references
   [net6.0]: 
   Top-level Package                                Requested   Resolved
   > Azure.Storage.Blobs                            12.14.1     12.14.1 
   > Microsoft.AspNet.Identity.EntityFramework      2.2.3       2.2.3   

   Transitive Package                         Resolved
   > Azure.Core                               1.25.0  
   > Azure.Storage.Common                     12.13.0 
   > EntityFramework                          6.1.0   
   > Microsoft.AspNet.Identity.Core           2.2.3   
   > Microsoft.Bcl.AsyncInterfaces            1.1.1   
   > System.Diagnostics.DiagnosticSource      4.6.0   
   > System.IO.Hashing                        6.0.0   
   > System.Memory.Data                       1.0.2   
   > System.Numerics.Vectors                  4.5.0   
   > System.Text.Encodings.Web                4.7.2   
   > System.Text.Json                         4.7.2   
   > System.Threading.Tasks.Extensions        4.5.4   

I found this useful this week when a large solution’s build pipeline was failing due to a missing dependency and it wasn’t clear which project required it.
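
If you are hunting for a specific package across a large solution, one quick trick is to pipe the output through a search tool such as findstr (or grep), matching both the project header lines and the package name so you can see which project pulls it in (using the package from the example above):

dotnet list package --include-transitive | findstr "Project EntityFramework"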

For more info see the Microsoft docs for the dotnet list package command.


Support multiple JS module formats with rollup

Having recently needed to produce a shared JavaScript npm package for sharing functionality internally between applications, I naively failed to consider the impact of the various competing module formats at large in the JS world. Luckily the solution is straightforward with the help of ‘rollup’.

The requirement was to build a JavaScript library shared between numerous React/Node applications, and make it available internally via an npm/yarn install by publishing it to an internal registry. All straightforward in theory, however there are competing module formats and you need to either pick one or support more than one.

The main formats are CommonJS (CJS, the original NodeJS module system), Asynchronous Module Definition (AMD), Universal Module Definition (UMD, a combination of CommonJS and AMD) and ES6 Modules (ESM). Most use the “require” keyword to import modules, or in the case of ES6 modules the “import” keyword. So why do you need to care about all this? Wouldn’t it be easier to just use the modern ES6 module format? Usually you would pick one format for your app and mostly ignore the others, but when building a shared library you need to consider what’s used by the consuming applications so that they can use your new library. Publishing only ES6 Modules makes it difficult for apps built with CommonJS to consume the library, and in my case I also found that consuming the library from a mostly ES6 module based app was problematic, as there is often something that requires the CommonJS format (I’m looking at you, Node).

So whilst over time everything will (maybe) standardise on ES6 modules, in the meantime I used rollup (https://rollupjs.org) to solve the problem. The rollup module bundler lets you code to the modern ES6 standard and then compile it down into other formats (e.g. CommonJS) in one bundle, or in separate bundles if you wish.

To make life even easier I found this excellent ts-library-template repo for a rollup based TypeScript JS library which fitted my needs perfectly. Rollup is configured to output both CJS and ESM formats.

// pkg here is the imported package.json, and dirname comes from Node's path module
output: [
  {
    // CommonJS bundle, written under dist alongside the path declared in pkg.main
    dir: `dist/${dirname(pkg.main)}`,
    entryFileNames: '[name].js',
    format: 'cjs'
  },
  {
    // ES module bundle, written under dist alongside the path declared in pkg.module
    dir: `dist/${dirname(pkg.module)}`,
    entryFileNames: '[name].mjs',
    format: 'esm'
  }
]

So when the library is compiled it can be consumed by both CJS and ESM based applications.
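
From the consumer’s side the same published package can then be pulled in from either module system, assuming its main and module fields point at the CJS and ESM bundles respectively (the package name here is made up):

// ESM consumer
import { something } from 'my-shared-lib';

// CJS consumer
const { something } = require('my-shared-lib');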

Window is undefined during SSR

If, when server side rendering a React application (other JS frameworks are available), you make use of the global window object during the initial render, or perhaps the global document object, then you may get an error stating “window is undefined”. This is because when the app is being rendered server side on the web server it is not running within the browser environment and therefore cannot access the global objects that the browser provides.

To resolve this issue you will need to check for the presence and validity of the window object before calling it. This could be done manually in your code, but a simple, widely used npm package can do the job for you: ‘global‘, a very small, lightweight library for requiring global variables. It is a common dependency of popular npm packages and so may already be indirectly referenced in your app anyway, but if not you can install it via:

npm i global

Then in your code just add an import for window before using it, as below. The lack of a window object will then be handled gracefully.

// import the global window variable
import window from 'global';

// you can now use window global object
const url = window && window.location.href ? window.location.href : ''
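
If you’d rather avoid the extra dependency, the manual check mentioned earlier is simply a guard on the global before you use it, something like:

// manual alternative: only touch window when it exists (i.e. not during SSR)
const url = typeof window !== 'undefined' ? window.location.href : '';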

For more info check out the npm page for global or the GitHub repo:

Links:

https://npmjs.com/package/global
https://github.com/Raynos/global

NodeJS & HTTP Error 431


I recently saw error responses from a NodeJS microservice with HTTP error “431 Request Header Fields Too Large”, which at first seemed intermittent and dependent on the test environment being used. Further investigation found it to be down to Node’s maximum header size setting, combined with NodeJS version changes and a few large cookies.

The 431 Request Header Fields Too Large error indicates that the total size of the request headers (which includes cookies) is too large for the web server to accept. It often occurs where large cookies have built up and maxed out the request header size.

In 2018 Node (version 11.6.0) was updated to resolve a security vulnerability in this area, Denial of Service with large HTTP headers (CVE-2018-12121), and as a result the default maximum request header size was reduced from 16kb to 8kb (interestingly, 8kb was chosen as it was the NGINX default at the time). The default limit was eventually increased back to 16kb in v13.13.0, which means that if you happen to be running a Node version between 11.6 and 13.13 you will hit an 8kb limit, but before or after those versions the limit won’t be hit until 16kb. That version difference was exactly the situation I found myself in recently.

If the default max header size of your Node installation is not right for you then it is easy to configure a new value using the --max-http-header-size parameter.

--max-http-header-size=16250
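
You can pass this directly when starting the process or, as I believe it is one of the options permitted there, via the NODE_OPTIONS environment variable (a sketch, assuming an entry point of server.js):

node --max-http-header-size=16250 server.js

NODE_OPTIONS=--max-http-header-size=16250 node server.js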

Of course you shouldn’t set this value too high; instead configure it as low as is feasible for your specific application.

Coding in Spectrum Basic Again

My first computer was a Sinclair ZX Spectrum (16k) with rubber keys, an icon of the innovative 1980s micro computer market.

Sinclair ZX Spectrum

On it I learned to code in Sinclair Basic, either by reading the manual or by typing in programs from the Spectrum magazines of the time. It wasn’t long before one Christmas my ultimate gift arrived: a ZX Spectrum 128k +2. Now that was a machine!

ZX Spectrum 128k +2

It’s probably no surprise that the main purpose of my Spectrum was to play games, and I played for many hours, but I also learned to program, create databases and do word processing (printed on my Citizen 120D dot matrix printer). All this just seemed like fun at the time but turned out to teach useful skills, which I guess is the benefit of combining gaming with utility uses in modern devices like the Raspberry Pi.

Citizen 120D Printer

Recently I’ve been getting nostalgic and wanting to rekindle my love of coding on a Spectrum, and in the modern PC era there are many options for how to do this: emulators, IDEs and free resources to download quickly (no 10 minute load time 🙂 ). In this post I’m covering a few options to quickly and easily get started coding in Sinclair Basic, mostly so I don’t forget when I get nostalgic again in the future.

The most traditional and authentic approach would be to set up an original Spectrum machine and code directly on it, but this is not the easiest option, especially if you don’t have a working Spectrum sitting around at home. So we are going down the emulator route here.

Essentially there are two key things we need to code and run our Basic program. Firstly we need a compiler to compile our program down into binary, and secondly we need an emulator to run the compiled program. There are lots of options here, and I’ll post links later for where to look for alternatives, but below are my current choices.

SpectNetIDE – An all in one solution that runs as a plugin in Visual Studio.

SpectNetIDE is a Visual Studio 2019 plugin enabling you to code your Spectrum program in the excellent Visual Studio IDE, and it also includes an emulator so you can run and debug all in one place. Check it out here: https://dotneteer.github.io/spectnetide/

SpectNetIDE

If you already have Visual Studio installed then installing this and trying it out is quick and easy. If you don’t have Visual Studio then you can download the Community edition free from Microsoft. Note this is a Windows only option. Assembly and Basic are supported, with Basic being implemented via the widely used Python based Boriel compiler which compiles Basic down into Z80 machine code.

After a few setup steps, which are well documented, you’re away and coding. Make sure you install V2 if you plan on coding in Basic. Follow the simple documentation to get started with a ZX Basic program. There are some links at the end of this post for guides to coding in ZX Basic, and many of the 1980s manuals are available to download for free.

Once you’ve completed your program you can create a TAP file (the equivalent of an old data tape) and play it on other emulators.

‘VS Code > Command Line > Emulator’ option

An alternative option that I have been using is VS Code (or any text editor, including Notepad) together with the Boriel compiler on the command line to compile the code to a TAP file, which is then loaded into one of the many available emulators. This option runs on Windows, Linux and Mac.

Whilst any text editor can be used to write your program, VS Code is arguably the best code editor there is, plus you can install ZX Spectrum plugins to make your programming easier. The one I use is ZX Basic, which provides syntax highlighting.

Next we need to be able to compile this down, so we install the Boriel compiler from the downloads section (this Python compiler is open source on GitHub). Check out the installation and quickstart guide on the GitHub page. Once it’s extracted you can call the compiler via the command line with something like this (which will create a TAP file):

./zxb.exe ./code/helloworld.bas --tap --BASIC --autorun
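
The helloworld.bas referenced above can be as trivial as a single line (Boriel’s ZX Basic doesn’t require line numbers, although classic numbered Sinclair-style listings also compile):

PRINT "Hello World"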

Once compiled into a TAP file we can run it on any emulator that supports TAP files (which the majority do). My current favourite is the JavaScript based Qaop/JS as it runs in your browser. Click on the sides of the window to open the menu, choose ‘Open’ to load your TAP file, and watch your amazing program run in all its 1980s glory.

Now I just need to learn to code better programs than I did as a child….the links below should help.

Useful Links

Add Git Bash & VS Dev Cmd Prompt Profiles to Windows Terminal

I admit I was not too impressed with the early beta versions of Windows Terminal, maybe because I use Cmder as my daily terminal driver and its features are excellent. However, since Windows Terminal reached RTM at v1.0 it seems a better quality product, and with the demo at MS Build 2021 of the new features coming soon I can see that my number one feature request will be released soon: Quake Mode! Quake Mode enables the terminal to drop down from the top of the screen on a keypress; it is a feature of Cmder that I use every day to show/hide the terminal for those quick actions.

Windows Terminal does need some tweaking to get it to look and behave to my tastes, and unfortunately it still doesn’t have a proper opacity setting (we have to make do with the smudgy ‘acrylic’ setting instead), but it is highly customisable via the settings.json file and the settings UI. One thing you can do is add your own profiles, so I added my own profile for GitBash.

Assuming you have GitBash installed (via Git) then the config below should work, but you’ll need to check your paths. Add this section to the profiles list in the settings.json file, which you can open from Windows Terminal by choosing ‘Settings’ from the dropdown and clicking the settings icon. The file is usually located somewhere like C:\Users\<YOURUSERNAME>\AppData\Local\Packages\Microsoft.WindowsTerminal_8wekyb3d8bbwe\LocalState.

{
 "closeOnExit": "always",
 "commandline": "C:\\Program Files\\Git\\bin\\bash.exe -I -l",
 "icon": "C:\\Program Files\\Git\\mingw64\\share\\git\\git-for-windows.ico",
 "name": "GitBash",
 "startingDirectory": "%USERPROFILE%",
 "tabTitle": "GitBash"
}

It’s also useful to add a profile for the Developer Command Prompt for VS 2019, like this:

{
 "name": "Developer Command Prompt for VS 2019",
 "commandline": "%comspec%  /k \"%ProgramFiles(x86)%\\Microsoft Visual Studio\\2019\\Community\\Common7\\Tools\\VsDevCmd.bat\"",
 "icon": "%ProgramFiles(x86)%\\Microsoft Visual Studio\\2019\\Community\\Common7\\IDE\\Assets\\VisualStudio.70x70.contrast-standard_scale-180.png",
 "tabTitle": "Dev Cmd VS 2019"
}

This snippet assumes the Visual Studio Community edition is installed; if you’re using Enterprise then correct the file paths to /Enterprise/ instead of /Community/. If you’re using Cmder and want to read how to add the Developer Command Prompt to Cmder, check out this post.
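
For orientation, both of these profile objects sit inside the "list" array under "profiles" in settings.json, so the overall shape is roughly this (other settings omitted):

{
  "profiles": {
    "list": [
      { "name": "GitBash", "commandline": "..." },
      { "name": "Developer Command Prompt for VS 2019", "commandline": "..." }
    ]
  }
}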

To see the whole file check out my GitHub repo where I’ll keep my latest config file (whilst I remember to update) at https://github.com/RichHewlett/windows-terminal-config.

VS Code Keyboard Mappings in VS 2019

Quick post to let you and ‘future me’ know that Visual Studio 2019 includes the option to use VS Code keyboard mappings out of the box. As someone who has recently been using VS Code as my development environment much more often than full Visual Studio, this was a really useful find, and one I wasn’t aware of. I had been fumbling about in Visual Studio, using muscle memory to perform tasks before realising that I was in a different editor and needed to use the ‘other’ keyboard mappings. I decided enough was enough, dug into the Visual Studio settings to tweak some of the keyboard shortcuts, and found that you can just tell it to use the VS Code mappings instead.

In Visual Studio 2019 go to Tools > Options > Environment > Keyboard and set the ‘Apply the following additional keyboard mapping scheme’ option to ‘Visual Studio Code’.

So I can now standardise my keyboard mappings across both IDEs and that is one less thing for me to remember.

Referencing External Controllers in ASP.Net Core 3.x

I recently had a situation where I needed to include a utility controller, and a set of operations, in every .Net Core Web API that used a common in-house framework. The idea is that by baking this utility controller into the framework, every API built on it gets this common set of API operations for free. The types of operations this might include are logging, registration, security or health check style API operations that every microservice might need to implement out of the box (some perhaps only in certain environments). I knew this was possible in .Net Core through the use of ApplicationPart and ApplicationPartManager, so I thought I’d code it up and write a blog post to remind ‘future me’ of the approach. However it soon became clear that since .Net Core 3.x this is now automatic, so this is the easiest blog post ever. That said, there are times when you’ll want to NOT automatically include external controllers, so we’ll cover that too.


External Controllers Added Automatically in .Net 3.x

If you’re using .Net Core 3.x or later then ASP.Net Core will traverse the referenced assemblies in the assembly hierarchy looking for those that reference the ASP.Net Core assembly, and then look for controllers within those assemblies. Any controllers found are automatically added to the controllers collection and you’ll be able to hit them from your client.

To test this, create a File > New ASP.Net Core Web Project in Visual Studio, then add class libraries to the solution and, in those class libraries, add controller classes with the functionality you need. Add the [ApiController] attribute and follow the prompt to add the AspNetCore NuGet package. Code your new controller to return something, add a reference to the new class library project from the original Web API project, and you’re done. Run the Web API and try to hit the operation in your new external controller from the browser/Postman. It should find the operation in the controller that lives inside the class library project, as in the sketch below.
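
A minimal external controller in the class library might look something like this (a rough sketch; the class name, route and the IntrinsicControllerLib namespace are just placeholders for illustration):

// inside the class library project (e.g. IntrinsicControllerLib)
using Microsoft.AspNetCore.Mvc;

namespace IntrinsicControllerLib
{
    [ApiController]
    [Route("api/[controller]")]
    public class HealthCheckController : ControllerBase
    {
        // GET api/healthcheck - returns a simple OK response to prove the external controller is wired up
        [HttpGet]
        public IActionResult Get() => Ok("Healthy");
    }
}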

This ability to include controllers from external assemblies in the main Web API project has always been very useful, but now it’s just got easier, which is great.

Removing automatically added controllers

I can see at least two issues with this automatic approach. Firstly it is a potential security risk, as you may inadvertently include controllers in your project via package dependencies and these could be nefarious. This is discussed in more detail in this post, along with a way of clearing out all ApplicationParts from the collection and then adding the default one back in (reproduced below).

 
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers().ConfigureApplicationPartManager(o =>
    {
        // remove all auto-discovered application parts, then add back only this assembly
        o.ApplicationParts.Clear();
        o.ApplicationParts.Add(new AssemblyPart(typeof(Startup).Assembly));
    });
}

Secondly, you may just want to control via configuration which ApplicationParts/controllers are to be used by the API. In my use case I wanted API developers to be able to disable the built-in utility controllers if required. I can use configuration to drive this and remove the auto-discovered controller by assembly name (as below).

 
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers().ConfigureApplicationPartManager(o =>
    {
        // custom local config check here
        if (!configuration.UseIntrinsicControllers)
        {
            // assuming the intrinsic controllers are in an assembly named IntrinsicControllerLib
            ApplicationPart appPart = o.ApplicationParts.SingleOrDefault(s => s.Name == "IntrinsicControllerLib");
            if (appPart != null)
            {
                // remove it to stop it being used
                o.ApplicationParts.Remove(appPart);
            }
        }
    });
}

So as you can see it’s now easier than ever to add utility controllers to your API from externally referenced projects/assemblies, but “with great power comes great responsibility”, and so we need to ensure that only approved controllers (or other ApplicationParts such as views) are added at runtime.

Hope you found this useful.

References for further reading:

Moving Sonar Rules closer to the Developer with ESLint

Shift code quality analysis to the left by moving your static analysis from the CI/CD pipeline to the developer’s IDE where possible. In this post I cover what we did, how to set up SonarLint and how we ultimately moved the Sonar rules into ESLint instead.

Problem

As a development team working on a JavaScript application we sometimes had issues where differences between the rules enforced in our CI/CD pipeline, using SonarQube, and the local rules, using ESLint, led to discrepancies in standards and ultimately, at times, broken builds. If you are enforcing rules in your project and in the CI/CD pipeline with SonarQube then ideally you need them to mostly match, but this is not as easy as it sounds. Any failures in the pipeline are more time-consuming to resolve than if they had happened locally before the developer pushed their code to the repository.


Solution

Import the Sonar rules into ESLint and enforce ESLint in both the IDE and the CI/CD pipeline.

In order to strike a balance between quality assurance and flexibility in the implementation of rules, we introduced an approach that combines ESLint and Sonar rules, with the emphasis on shift-left: rule enforcement is done in the IDE as code is written and then re-enforced later in the CI/CD pipeline.

Firstly, SonarSource (the developers of SonarQube) provide a plugin for several IDEs (including VS Code) called SonarLint that helps address the issue of running Sonar rules in an IDE. Once installed it will analyse your code against the default Sonar rules and report issues. What if you’re not running the default set of rules on your Sonar server? No worries, as the plugin can be set to connect to your server and download the right quality profile and rules.

To do this install the SonarLint extension into your IDE (many are supported, e.g. VS Code, Visual Studio etc) and then set the extension properties as per the instructions for your particular IDE. For VS Code it goes like this:

To link a server, set the “sonarlint.connectedMode.connections.sonarqube” setting, which has to be a USER setting (oddly). Then in the workspace settings for the project you can configure the projectKey for your project. Workspace settings files are created in the .vscode folder in a settings.json file which can be added to source control, so this only needs to be set up once per project. Once done, press F1 > type sonar > select “SonarLint: Update all project bindings to SonarQube”, which will refresh the plugin’s cache and force it to download the rules from your Sonar server.
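
As a rough sketch (the exact setting shape can vary by SonarLint version, and the URL and key values here are placeholders), the user and workspace settings end up looking something like this:

// user settings.json
"sonarlint.connectedMode.connections.sonarqube": [
  { "serverUrl": "https://sonar.example.com" }
],

// .vscode/settings.json (workspace, per project)
"sonarlint.connectedMode.project": {
  "projectKey": "my-project-key"
}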

Now whilst SonarLint is a useful tool, it is not as powerful as ESLint for linting in the IDE (in my opinion). For example, ignoring a rule (for a genuine reason) at file or line level is not possible in a satisfactory way (you can only exclude a line/file from ALL Sonar rules). ESLint also provides more power and flexibility, especially where you have a centrally managed Sonar server with shared rule profiles and quality gates that are not easy to change (which may be a good thing for your organisation).

So instead of, or even in addition to, SonarLint checking, you can actually import the Sonar JavaScript scanner rules into ESLint. To do this, install the npm package eslint-plugin-sonar and then configure your ESLint config to use the sonarjs/recommended rules (full sketch below):

  1. npm install eslint-plugin-sonar
  2. Add it to your ESLint config file as an extends:

     extends: [
       "plugin:sonarjs/recommended"
     ]
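
Putting that together, the relevant part of an .eslintrc.json ends up looking something like this (a sketch only; your existing extends, plugins and rules will differ):

{
  "extends": [
    "eslint:recommended",
    "plugin:sonarjs/recommended"
  ],
  "plugins": ["sonarjs"]
}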

Now ESLint will report quality errors that would previously only have been highlighted in Sonar during a CI/CD pipeline build. This immediate feedback is more useful to the dev team and reduces the time wasted on broken builds. For us this ensured that, apart from a few rules, the majority are now in ESLint where developers can see them and resolve them, preventing the need for the CI/CD pipeline to highlight the problem (and break builds).

To enforce the ESLint rules at build time we run the ESLint analysis during the CI/CD pipeline by calling ESLint as a build step.

eslint --config .eslintrc.json src 

This means no ESLint errors will be let through. We also have a SonarQube Quality Gate check configured in the pipeline, but as the majority of Sonar rules are now in ESLint we should only get a failure there where a rule is breached that exists only in the server’s Sonar profile.


As an additional step we can also import all the ESLint issues found into Sonar so that we can see them in the Sonar dashboard. You can export the ESLint results as JSON for Sonar to import (during the build). To do this, run this command in the build (ideally create a new npm script for it), assuming your src folder contains the source:

eslint --config .eslintrc.json --output-file ./eslint-report.json --format json src

Next set this Sonar property in your sonar-project.properties file or via command line argument (where eslint-report.json is the output report produced above).

sonar.eslint.reportPaths=eslint-report.json

Any issues from the ESLint report will then appear in Sonar issues marked with an ESLint badge. It appears warnings are added as Majors and errors as Criticals, and unfortunately I’ve not yet found a way to change this.

As a side note, this command is also useful for outputting an HTML report of any errors, which is great for reviewing or sharing:

eslint --config .eslintrc.json --output-file ./eslint-report.html --format html src

Summary

In summary, quality is maintained as the same rules are enforced, but now developers only need to ensure that ESLint is happy before committing their changes to source control to ensure a smooth server build. To make this easier you can add an npm run script that runs all pre-commit checks, triggered automatically (e.g. via git hooks) or manually by the developer.
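
For example, a couple of package.json scripts along these lines (the names are just a suggestion) can be wired up to a git pre-commit hook or run by hand before pushing:

"scripts": {
  "lint": "eslint --config .eslintrc.json src",
  "precommit-checks": "npm run lint && npm test"
}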

Auto increment build number in a JavaScript app

Whilst looking for a simple way to auto increment an application build version number for a small React JavaScript application, I found a neat solution, so I’m sharing it here for others to use (and for me to refer back to on my next project).


Problem:

For a small React application using NPM I needed to automatically increment a build number on every production build without having to remember to do it, and then I needed to refer to that version number within the application in order to display it in the site’s footer.

Solution:

After a short Google I found a really neat solution posted by Jan Lübeck on the Create-React-App GitHub site, which I then very slightly tweaked for my requirements.

The solution is essentially to call a NodeJS script from the npm build script definition which updates a metadata JSON file within the project, incrementing the build version number by 1. This file can then be read within the React code as required to display the current build version.

First we need to store the build version numbers in a JSON file (src/metadata.json, matching the path used by the script below):

{
  "buildMajor": 1,
  "buildMinor": 0,
  "buildRevision": 3,
  "buildTag": "BETA"
}

Only the buildRevision will be updated automatically as we will want manual control over the major/minor numbers as we implement semantic versioning.

The NodeJS script (generate-buildno.js) is below. This opens the metadata.json file, reads the buildRevision value, increments it by 1 and saves the file:

var fs = require('fs');
console.log('Incrementing build number...');
fs.readFile('src/metadata.json',function(err,content) {
    if (err) throw err;
    var metadata = JSON.parse(content);
    metadata.buildRevision = metadata.buildRevision + 1;
    fs.writeFile('src/metadata.json',JSON.stringify(metadata),function(err){
        if (err) throw err;
        console.log(`Current build number: ${metadata.buildMajor}.${metadata.buildMinor}.${metadata.buildRevision} ${metadata.buildTag}`);
    })
});

To call this script at build time we update the package.json file and change the build command to call our script first, before the normal build command:

 "scripts": {
    "start": "react-scripts start",
    "build": "node generate-buildno.js && react-scripts build",
    "test": "react-scripts test",
  },

Now when we call npm run build the JSON file will be updated and the build revision number incremented. The React code that needs to display the app version number just has to read the metadata.json file, like the example React component below:

import React from 'react';
import metadata from './metadata.json';

function FooterComponent() {
  return (
    <div className="sf-footer">
      &copy; 2020 RichHewlett.com
      <div className="sf-footer-version">
        {`Version ${metadata.buildMajor}.${metadata.buildMinor}.${metadata.buildRevision} ${metadata.buildTag}`}
      </div>
    </div>
  );
}
export default FooterComponent;

Remember after building the application to commit your changed metadata.json file into source control!