Coding in Spectrum Basic Again

My first computer was a Sinclair ZX Spectrum (16K) with rubber keys, an icon of the innovative 1980s microcomputer market.

Sinclair ZX Spectrum

On it I learned to code in Sinclair Basic by reading the manual and typing in programs from the Spectrum magazines of the time. It wasn’t long before one Christmas my ultimate gift arrived… a ZX Spectrum 128k +2. Now that was a machine!

ZX Spectrum 128k +2

It’s probably no surprise that the main purpose of my Spectrum was to play games, and I played for many hours, but I also learned to program, create databases and print word-processed documents on my Citizen 120D dot matrix printer. All this just seemed like fun at the time but turned out to provide useful skills, which I guess is the benefit of combining gaming with utility uses in modern devices like the Raspberry Pi.

Citizen 120D Printer

Recently I’ve been getting nostalgic and have rekindled my love of coding for the Spectrum, and in the modern PC era we have many options for how to do this. We have emulators, IDEs and free resources to download quickly (no 10 minute load time 🙂 ). In this post I’m covering a few options for quickly and easily getting started with coding in Sinclair Basic, mostly so I don’t forget when I get nostalgic again in the future.

The most traditional and authentic approach would be to set up an original Spectrum machine and code directly on it, but this is not the easiest option, especially if you don’t have a working Spectrum sitting around at home. So we are going down the emulator route here.

Essentially there are two key things we need to code and run our Basic program. Firstly we need a compiler to compile our program down into binary, and secondly we need an emulator to run the compiled program. There are lots of options here, and I’ll post links later for where to look for alternatives, but below are my current choices.

SpectNetIDE – An all-in-one solution that runs as a plugin in Visual Studio.

SpectNetIDE is a Visual Studio 2019 plugin enabling you to code your Spectrum program in the excellent Visual Studio IDE, and it also includes an emulator so you can run and debug all in one place. Check it out here: https://dotneteer.github.io/spectnetide/

SpectNetIDE

If you already have Visual Studio installed then installing this and trying it out is quick and easy. If you don’t have Visual Studio then you can download the Community edition free from Microsoft. Note this is a Windows-only option. Assembly and Basic are supported, with Basic being implemented via the widely used Python-based Boriel compiler which compiles Basic down into Z80 machine code.

After a few setup steps, which are well documented, you’re away and coding. Make sure you install V2 if you plan on coding in Basic. Follow the simple documentation to get started with a ZX Basic program. There are some links at the end of this post for guides to coding in ZX Basic, and many of the 1980s manuals are available to download for free.

Once you’ve completed your program you can create a TAP file (the equivalent of an old data tape) and play it on other emulators.

‘VS Code > Command Line > Emulator’ option

An alternative option that I have been using is to use VS Code (or any text editor, including Notepad), then use the Boriel compiler via the command line to compile the code to a TAP file, and then load the TAP file into one of the many available emulators. This option runs on Windows, Linux and Mac.

Whilst any text editor can be used to code your program, VS Code is arguably the best code editor there is, plus you can install ZX Spectrum plugins to make your programming easier. The one I use is ZX Basic which provides syntax highlighting.

Next we need to be able to compile this down, so we install the Boriel compiler from the downloads section (this Python compiler is open source on GitHub here). Check out the installation and quickstart guide on the GitHub page. Once it’s extracted you can call the compiler via the command line with something like this (which will create a TAP file):

./zxb.exe ./code/helloworld.bas --tap --BASIC --autorun
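
For reference, the helloworld.bas in that command could be as simple as this (a minimal sketch; the Boriel compiler accepts classic line-numbered Sinclair-style Basic like the below):

10 CLS
20 PRINT "Hello World!"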

Once compiled into a TAP file we can run it on any emulator you like that supports TAP files (which the majority do). My current favourite is the JavaScript-based Qaop/JS as it runs in your browser. Click on the sides of the window to open the menu, choose ‘Open’ to load your TAP file, and watch your amazing program run in all its 1980s glory.

Now I just need to learn to code better programs than I did as a child… the links below should help.

Useful Links

Add Git Bash & VS Dev Cmd Prompt Profiles to Windows Terminal

I admit I was not too impressed with the early beta versions of Windows Terminal, maybe because I use Cmder as my daily terminal driver and its features are excellent. However, since Windows Terminal reached RTM at v1.0 it does seem a better quality product, and after the demo at MS Build 2021 of the new features coming soon I can see that my number one feature request will shortly be released – Quake Mode! Quake Mode enables the terminal to drop down from the top of the screen on a keypress, and as it is a feature of Cmder I use it every day to show/hide the terminal for those quick actions.

Windows Terminal does need some tweaking to get it to look and behave to my tastes, and unfortunately it still doesn’t have a proper opacity setting (we have to make do with the smudgy ‘acrylic’ setting instead), but it is highly customisable via the settings.json file and the settings UI. One thing that you can do is add your own profiles, so I added my own profile for Git Bash.

Assuming you have Git Bash installed (via Git) then the below config should work, but you’ll need to check your paths. Add this section to the profiles list in the settings.json file, which can be found by opening Windows Terminal, choosing ‘Settings’ from the dropdown, then clicking the settings icon; the JSON file will then open. It is usually located somewhere like C:\Users\<YOURUSERNAME>\AppData\Local\Packages\Microsoft.WindowsTerminal_8wekyb3d8bbwe\LocalState.

{
 "closeOnExit": "always",
 "commandline": "C:\\Program Files\\Git\\bin\\bash.exe -I -l",
 "icon": "C:\\Program Files\\Git\\mingw64\\share\\git\\git-for-windows.ico",
 "name": "GitBash",
 "startingDirectory": "%USERPROFILE%",
 "tabTitle": "GitBash"
}

It’s also useful to add a profile for the Developer Command Prompt for VS 2019, like this:

{
 "name": "Developer Command Prompt for VS 2019",
 "commandline": "%comspec%  /k \"%ProgramFiles(x86)%\\Microsoft Visual Studio\\2019\\Community\\Common7\\Tools\\VsDevCmd.bat\"",
 "icon": "%ProgramFiles(x86)%\\Microsoft Visual Studio\\2019\\Community\\Common7\\IDE\\Assets\\VisualStudio.70x70.contrast-standard_scale-180.png",
 "tabTitle": "Dev Cmd VS 2019"
}

This snippet assumes the Visual Studio Community edition is installed; if using Enterprise then correct the file paths to /Enterprise/ instead of /Community/. If you’re using Cmder and want to read how to add the Developer Command Prompt to Cmder, check out this post here.

To see the whole file check out my GitHub repo, where I’ll keep my latest config file (whilst I remember to update it), at https://github.com/RichHewlett/windows-terminal-config.

VS Code Keyboard Mappings in VS 2019

Quick post to let you and ‘future me’ know that Visual Studio 2019 includes the option to use VS Code keyboard mappings out of the box. As someone who has been using VS Code as my development environment much more often than full Visual Studio recently, this was a really useful find, and one I wasn’t aware of. I had recently been fumbling about in Visual Studio, using muscle memory to perform tasks before realising that I was in a different editor and so needed to use the ‘other’ keyboard mappings. I decided enough was enough, and when I dug into the Visual Studio settings to tweak some of the keyboard shortcuts I found that you can just tell it to use VS Code mappings instead.

In Visual Studio 2019 go to Tools > Options > Environment > Keyboard and set the ‘Apply the following additional keyboard mapping scheme’ dropdown to ‘Visual Studio Code’.

So I can now standardise my keyboard mappings across both IDEs and that is one less thing for me to remember.

Referencing External Controllers in ASP.Net Core 3.x

I recently had a situation where I needed to include a utility controller and set of operations in every .Net Core Web API that used a common in-house framework. The idea is that by baking this utility controller into the framework, every API built on it gets this common set of API operations for free. The types of operations this might include are logging, registration, security or health-check operations that every microservice might need to implement out of the box (some perhaps only in certain environments). Anyway, I knew this was possible in .Net Core through the use of ApplicationPart and ApplicationPartManager, so I thought I’d code it up and write a blog post to remind ‘future me’ of the approach. However it soon became clear that since .Net Core 3.x this is now automatic, making this the easiest blog post ever. That said, there are times when you’ll want to NOT automatically include external controllers, and so we’ll cover that too.

Photo by Luca Bravo on Unsplash

External Controllers Added Automatically in .Net 3.x

If you’re using .Net Core 3.x or upwards then ASP.Net Core will traverse the referenced assemblies in the assembly hierarchy looking for those that reference the ASP.Net Core assembly, and then look for controllers within those assemblies. Any controllers found are automatically added to the controllers collection and you’ll be able to hit them from your client.

To test this, create a File > New ASP.Net Core Web Project in Visual Studio, then add a class library to the solution and, in that class library, add a controller class with the functionality you need. Add the [ApiController] attribute, then follow the auto prompt to add the AspNetCore NuGet package. Code your new controller to return something, add a reference to that new class library project from the original Web API project, and you’re done. Run the Web API and try to hit the operation in your new external controller from the browser/Postman. It should find the operation in the controller that lives inside the class library project.
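
As a sketch, a controller living in the class library might look something like this (the namespace and names here are illustrative, not from a real framework):

using Microsoft.AspNetCore.Mvc;

namespace UtilityControllerLib
{
    // Lives in a separate class library; ASP.Net Core 3.x discovers it
    // automatically once the Web API project references this assembly.
    [ApiController]
    [Route("api/[controller]")]
    public class HealthCheckController : ControllerBase
    {
        // GET api/healthcheck
        [HttpGet]
        public IActionResult Get() => Ok("Healthy");
    }
}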

This ability to include controllers from external assemblies in the main Web API project has always been very useful, but now it’s just got easier, which is great.

Removing automatically added controllers

I can see at least two issues with this automatic approach. Firstly, it is a potential security risk as you may inadvertently include controllers in your project via package dependencies, and these could be nefarious. This is discussed in more detail in this post, along with a way of clearing out all ApplicationParts from the collection and then adding the default one back in (reproduced below).

 
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers().ConfigureApplicationPartManager(o =>
    {
        o.ApplicationParts.Clear();
        o.ApplicationParts.Add(new AssemblyPart(typeof(Startup).Assembly));
    });
}

Secondly, you may just want to control via configuration which ApplicationParts/controllers are to be used by the API. In my use case I wanted to allow API developers to disable the built-in utility controllers if required. I can use configuration to drive this and remove the auto-discovered controller by assembly name (as below).

 
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers().ConfigureApplicationPartManager(o =>
    {
        // custom local config check here
        if (!configuration.UseIntrinsicControllers)
        {
            // assuming the intrinsic controller is in an assembly named IntrinsicControllerLib
            ApplicationPart appPart = o.ApplicationParts.SingleOrDefault(s => s.Name == "IntrinsicControllerLib");
            if (appPart != null)
            {
                // remove it to stop it being used
                o.ApplicationParts.Remove(appPart);
            }
        }
    });
}

So as you can see it’s now easier than ever to add utility controllers to your API from externally referenced projects/assemblies, but “with great power comes great responsibility”, and so we need to ensure that only approved controllers (or other ApplicationParts such as views) are added at runtime.

Hope you found this useful.


Creating Linux Desktop Shortcuts

In the previous post about how to hibernate your Ubuntu machine I touched on the concept of creating a desktop shortcut for the hibernate command to make it easy to run. That post can be found here. Since then I have been playing with desktop shortcuts some more, so in this post I go into some detail and outline how to create a shortcut icon with context menus too.

The basic concepts with examples…

The basic concept is to create the shortcut as a text file with the “.desktop” file extension, in which we define what we want to launch together with some other basic details about the application or script/command, such as the icon, name, and whether it needs to run in the terminal. Below is an example “.desktop” file, saved to my desktop as “HibernateNow.desktop”. It runs a bash script (./hibernatescript.sh) via the terminal.

 
 [Desktop Entry]
Type=Application
Terminal=true
Name=HibernateNow
Icon=utilities-terminal
Exec=gnome-terminal -e "bash -c './hibernatescript.sh;$SHELL'"
Categories=Application; 

From Ubuntu 20.04 onwards you may need to right-click on the file after creation and choose “Allow Launching”, which is a one-time operation (see screenshot).
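
If you prefer the terminal, the same can reportedly be achieved by marking the file executable and trusted (a sketch, assuming a GNOME-based desktop and the HibernateNow.desktop file from above):

chmod +x ~/Desktop/HibernateNow.desktop
gio set ~/Desktop/HibernateNow.desktop "metadata::trusted" true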

But let’s say we want to create a shortcut for an application instead of a script. It’s the same concept, except the Exec path would be the file path of the executable application instead of a script, and we set “Terminal=false” as we don’t need to launch the command in the terminal.

In the below example we are launching the Visual Studio Code application which is executed from /usr/share/code/code.

Just save this file on your desktop with a “.desktop” extension, e.g. VSCode.desktop.

 
[Desktop Entry]
Type=Application
Terminal=false
Name=VSCode
Icon=utilities-terminal
Exec=/usr/share/code/code
Categories=Application; 

In a nutshell that’s it for creating application shortcuts in Ubuntu (and many other Linux distros, as this is a shared standard). Just create the file and amend the Exec path to be the application you want to launch, then change the name and optionally the icon etc. The Categories property lets you optionally choose which group of application types it sits with in the menus.
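
If you want to sanity-check a shortcut file against the specification, the desktop-file-validate tool can help (assuming the desktop-file-utils package is installed):

desktop-file-validate ~/Desktop/VSCode.desktop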

Taking it further…

But what else can we do with these “.desktop” files? The specification for the files can be found here. As well as the Type=Application option there is also Type=Link for web/document links, which can be used with a URL property like this:

 
[Desktop Entry]
Encoding=UTF-8
Name=Google
Type=Link
URL=http://www.google.co.uk/
Icon=text-html 

But this doesn’t work for me in Ubuntu 20.10, and it seems this has been removed from GNOME as per the discussion on this GitHub issue. If you want to add a web URL then it’s possible to use Type=Application, point to your browser of choice (e.g. Firefox) and pass the URL as a parameter, as below:

 
[Desktop Entry]
Type=Application
Terminal=false
Name=Google
Icon=text-html
Exec=firefox -new-window www.google.co.uk
Categories=Application; 

Notice that the Icon property has been updated to Icon=text-html to show it’s a web link. The icons spec can be found here. A good place to find new icons is to poke around in the “.desktop” files already on your system. On Ubuntu check out /usr/share/applications for your shared system application shortcuts, or ~/.local/share/applications for your user’s application shortcuts. If you add your custom “.desktop” file to one of the above paths then it will appear in your applications menu, and you can go further by adding it to your Favourites list (right-click icon > Add to Favourites), thus making it appear in your launcher (depending on your distro’s launcher toolbar settings).

Adding Context Menu Actions to the shortcuts…

Check out the example below for a shortcut named Test that launches an application (VS Code in this case). Notice the Actions property and the associated “Desktop Action” entry, which provides an additional menu item within the shortcut (in this case it uses Firefox to navigate to Google). If we create a text file with the below contents and call it “test.desktop”, then add it to the ~/.local/share/applications folder, it will appear in the applications list and also now has a right-click context menu which includes “Navigate to Google” (as per the screenshot).

 
[Desktop Entry]
Type=Application
Terminal=false
Name=Test-Shortcut
Icon=com.visualstudio.code
Exec=/usr/share/code/code 
Categories=Application;
Actions=shortcut-action-here;

[Desktop Action shortcut-action-here]
Name=Navigate to Google
Exec=firefox -new-window www.Google.co.uk
Icon=com.visualstudio.code 

And if I add it to my Dock Toolbar then I get the context menu there too.

A Useful GitHub Shortcut Example…

A more fleshed-out and usable example is below, where the shortcut points to GitHub (via Firefox) and there are multiple actions providing context menu options for viewing your Profile, Issues and Pull Requests.

 
[Desktop Entry]
Type=Application
Terminal=false
Name=GitHub
Icon=/usr/local/share/github.png
Exec=firefox www.github.com 
Categories=Application;
Actions=Profile;Issues;Pull-Requests;

[Desktop Action Profile]
Name=Profile
Exec=firefox https://github.com/richhewlett
Icon=/home/Desktop/github.png

[Desktop Action Issues]
Name=Issues
Exec=firefox https://github.com/issues
Icon=/home/Desktop/github.png

[Desktop Action Pull-Requests]
Name=Pull Requests
Exec=firefox https://github.com/pulls
Icon=/home/Desktop/github.png
 

So as you can see there is quite a lot of useful functionality in desktop shortcuts on Linux.

Enabling Hibernate on Ubuntu

Personally I prefer to hibernate all my machines instead of shutting them down, as I find it more efficient to carry on from where I left off than to start afresh. One annoyance with my Ubuntu installation is that hibernate is not a first-class function and needs some tweaking to get it to work. For my own record this post records what I did to be able to hibernate my Ubuntu 20.04 install. All credit to the original post on AskUbuntu.com.

For the actual steps check out the AskUbuntu post, but essentially you need to find the swap partition (assuming you have one), grab its UUID, and then edit the grub file to update the GRUB_CMDLINE_LINUX_DEFAULT entry, adding the resume=UUID=XXXXX-XXX-XXXX-XXXX-YYYYYYYYYY value. Finally apply the change with “sudo update-grub”.
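
As a sketch of those steps (the device name and UUID below are placeholders, and the grub file is assumed to be /etc/default/grub):

# find the UUID of the swap partition
sudo blkid -t TYPE=swap
# e.g. /dev/sda2: UUID="aaaabbbb-cccc-dddd-eeee-ffff00001111" TYPE="swap"

# then extend the default line in /etc/default/grub, e.g.
# GRUB_CMDLINE_LINUX_DEFAULT="quiet splash resume=UUID=aaaabbbb-cccc-dddd-eeee-ffff00001111"

# finally apply the change
sudo update-grub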

Once set up, you can enter hibernate from the terminal with:

sudo systemctl hibernate

Now you may want to add a desktop shortcut or a hotkey to run this command without opening the terminal.

For a desktop shortcut check out this post, where I explain how to create a *.desktop file that will launch the script. Create a text file named e.g. HibernateNow.desktop and then add content like the below, assuming that you have created a bash script called “hibernatescript.sh” containing the above “sudo systemctl hibernate” command:

[Desktop Entry]
Type=Application
Terminal=true
Name=HibernateNow
Icon=utilities-terminal
Exec=gnome-terminal -e "bash -c './hibernatescript.sh;$SHELL'"
Categories=Application;

Then you can double-click the HibernateNow.desktop file and hibernate the machine, or set up a hotkey for this instead.

Happy hibernating.

Ubuntu Intermittent Freezing Fixed with Swappiness

Having run Ubuntu on my Dell XPS 14z for years I have been increasingly plagued by an intermittent freezing problem which causes the UI to freeze (mouse still movable) for anything from 5 to 30 seconds. I’m not sure when this started (maybe around Ubuntu 19.04) but it got worse with each subsequent Ubuntu release, to the point that Ubuntu 20.04 was almost unusable, which prompted me to search harder for a solution. The solution I found resolved the problem completely – swappiness.

Mine is capable but aging hardware, and it appears the OS was swapping memory to disk too aggressively for my system, which is what was causing the temporary freezes.

Photo by Michael Dziedzic on Unsplash

The Linux swappiness setting controls how aggressively the OS swaps memory to disk. Here is the official definition:

“This control is used to define how aggressive (sic) the kernel will swap memory pages. Higher values will increase aggressiveness, lower values decrease the amount of swap. A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone. The default value is 60.”

Linux documentation on GitHub

For a great explanation check out this HowToGeek article.

Whilst the default is 60, there are differing opinions on what to set it to for certain types of hardware and OS usage, but I went with a value of “20” and it’s been great. No more freezing so far.

Find your system’s current setting with this command:

cat /proc/sys/vm/swappiness

Then update it with this:

sudo sysctl vm.swappiness=20

No reboot is required for testing, and when you have decided on the final value you need to persist it by updating the sysctl.conf file, e.g.:

sudo nano /etc/sysctl.conf

and set the vm.swappiness=20 setting in the file.
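
i.e. the file should end up containing a line like this:

vm.swappiness=20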

This solution worked for me and Ubuntu 20.04 is now running really well.

Moving Sonar Rules closer to the Developer with ESLint

Shift code quality analysis to the left by moving your static analysis from the CI/CD pipeline to the developer’s IDE where possible. In this post I cover what we did, how to set up SonarLint, and how we ultimately moved the Sonar rules into ESLint instead.

Problem

As a development team working on a JavaScript application we sometimes had issues where a difference between the rules enforced in our CI/CD pipeline, using SonarQube, and the local rules, using ESLint, led to discrepancies in standards and ultimately, at times, broken builds. If you are enforcing rules in your project and in the CI/CD pipeline with SonarQube then ideally you need them to mostly match, but this is not as easy as it sounds. Any failures in the pipeline are more time-consuming to resolve than if they had happened locally before the developer pushed their code to the repository.

Photo by NESA by Makers on Unsplash

Solution

Import the Sonar rules into ESLint and enforce ESLint in both the IDE and the CI/CD pipeline.

In order to strike a balance between quality assurance and flexibility in the implementation of rules, we introduced an approach that combines ESLint and Sonar rules with an emphasis on shift-left: rule enforcement is done in the IDE as code is written and then re-enforced later in the CI/CD pipeline.

Firstly, SonarSource (developers of SonarQube) provide a plugin for several IDEs (including VS Code) called SonarLint that helps address the issue of running Sonar rules in an IDE. Once installed it will analyse your code against the default Sonar rules and report issues. What if you’re not running the default set of rules on your Sonar server? Well, no worries, as the plugin can be set to connect to your server and download the right quality profile and rules.

To do this install the SonarLint extension into your IDE (many are supported, e.g. VS Code, Visual Studio etc) and then set the extension properties as per the instructions for your particular IDE. For VS Code it goes like this:

To link a server, set the “sonarlint.connectedMode.connections.sonarqube” setting, which has to be a USER setting (oddly). Then in the workspace settings for the project you can configure the projectKey for your project. Workspace settings files are created in the .vscode folder in a settings.json file, which can be added to source control so this only needs to be set up once per project. Once done, press F1 > type sonar > select “SonarLint: Update all project bindings to SonarQube”, which will refresh the plugin’s cache and force it to download the rules from your Sonar server.
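
As an illustration, the two settings look something like this (setting names per the SonarLint documentation at the time of writing; the server URL, connection ID and project key are placeholders):

// user settings.json
"sonarlint.connectedMode.connections.sonarqube": [
  { "connectionId": "my-sonar", "serverUrl": "https://sonar.example.com" }
]

// .vscode/settings.json in the project
"sonarlint.connectedMode.project": {
  "connectionId": "my-sonar",
  "projectKey": "my-project-key"
}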

Now whilst SonarLint is a useful tool it is not as powerful as ESLint for linting in the IDE (in my opinion). For example, ignoring a rule (for a genuine reason) at file or line level is not possible in a satisfactory way (you can only exclude a line/file from ALL Sonar rules). Also, ESLint provides more power and flexibility, especially where you have a centrally managed Sonar server with shared rule profiles and quality gates that are not easy to change (which may be a good thing for your organisation).

So instead of, or even in addition to, SonarLint checking, you can actually import the Sonar JavaScript scanner rules into ESLint. To do this install the npm package eslint-plugin-sonar, then configure your ESLint config to use the sonarjs/recommended rules:

  1. npm install eslint-plugin-sonar
  2. Add it to your ESLint config file as an extends:
     extends: [
       "plugin:sonarjs/recommended"
     ]
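
Pulled together, a minimal .eslintrc.json might look something like this (the eslint:recommended base is just an illustrative choice, not from the original setup):

{
  "extends": [
    "eslint:recommended",
    "plugin:sonarjs/recommended"
  ]
}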

Now ESLint will report quality errors that would previously only have been highlighted in Sonar during a CI/CD pipeline build. This immediate feedback is more useful to the dev team and reduces the time wasted on broken builds. For us this ensured that, apart from a few rules, the majority are now in ESLint where developers can see and resolve them, preventing the need for the CI/CD pipeline to highlight the problem (and break builds).

To enforce the ESLint rules at build time we run the ESLint analysis during the CI/CD pipeline by calling ESLint as a build step.

eslint --config .eslintrc.json src

This means no ESLint errors will be let through. We also have a Sonar quality gate check configured in the pipeline, but as the majority of Sonar rules are now in ESLint we should only get a failure where a breached rule exists only in the server-side Sonar profile.

Photo by Nicole Wolf on Unsplash

As an additional step we can also import all the ESLint issues found into Sonar so that we can see them in the Sonar dashboard. You can export the ESLint results as JSON for Sonar to import (during the build). To do this, run this command in the build (ideally create a new npm script for it), assuming your src folder contains the source:

eslint --config .eslintrc.json --output-file ./eslint-report.json --format json src

Next, set this Sonar property in your sonar-project.properties file or via a command line argument (where eslint-report.json is the output report produced above).

sonar.eslint.reportPaths=eslint-report.json

Any issues from the ESLint report will then appear in Sonar issues marked with an ESLint badge. It appears warnings are added as Majors and errors as Criticals, and unfortunately I’ve not yet found a way to change this.

As a side note, this command is also useful for getting ESLint to output a HTML report of any errors, which is great for reviewing or sharing:

eslint --config .eslintrc.json --output-file ./eslint-report.html --format html src

Summary

In summary, quality is being maintained as the same rules are enforced, but now developers only need to ensure that ESLint is happy before committing their changes to source control to ensure a smooth server build. To make this easier you can add a new npm run script that runs all pre-commit checks, triggered automatically (e.g. via git hooks) or manually by the developer, as sketched below.
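
For example, a sketch of such scripts in package.json (the script names here are illustrative):

"scripts": {
  "lint": "eslint --config .eslintrc.json src",
  "pre-commit-checks": "npm run lint && npm test"
}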

Auto increment build number in a JavaScript app

Whilst looking for a simple way to auto increment an application build version number for a small React JavaScript application I found a neat solution, so I’m sharing it here for others to use (and for me to refer back to on my next project).

Photo by Ariel on Unsplash

Problem:

For a small React application using NPM I needed to automatically increment a build number on every production build without having to remember to do it, and I then needed to refer to that version number within the application in order to display it in the site’s footer.

Solution:

After a short Google I found this really neat solution posted by Jan Lübeck on the Create-React-App GitHub site here, which I then very slightly tweaked for my requirements.

The solution is essentially to call a NodeJS script from the npm build script definition which updates a metadata JSON file within the project to increment the build version number by 1. This file can then be read within the React code as required to display the current build version.

First we need to store the build version numbers in a JSON file:

{
  "buildMajor": 1,
  "buildMinor": 0,
  "buildRevision": 3,
  "buildTag": "BETA"
}

Only the buildRevision will be updated automatically, as we want manual control over the major/minor numbers as we implement semantic versioning.

The NodeJS script (generate-buildno.js) is below. It opens the metadata.json file, reads the buildRevision value, and increments it by 1 before saving the file:

var fs = require('fs');
console.log('Incrementing build number...');
fs.readFile('src/metadata.json',function(err,content) {
    if (err) throw err;
    var metadata = JSON.parse(content);
    metadata.buildRevision = metadata.buildRevision + 1;
    fs.writeFile('src/metadata.json',JSON.stringify(metadata),function(err){
        if (err) throw err;
        console.log(`Current build number: ${metadata.buildMajor}.${metadata.buildMinor}.${metadata.buildRevision} ${metadata.buildTag}`);
    })
});

To call this script at build time we update the package.json file and change the build command to call our script first, before the normal build command:

 "scripts": {
    "start": "react-scripts start",
    "build": "node generate-buildno.js && react-scripts build",
    "test": "react-scripts test",
  },

Now when we call npm run build the JSON file will be updated and the build revision number incremented. The React code that needs to display the app version number just has to read the metadata.json file, like the example React component below:

import React from 'react';
import metadata from './metadata.json';

function FooterComponent() {
  return (
    <div className="sf-footer">
      &copy; 2020 RichHewlett.com
      <div className="sf-footer-version">
        {`Version ${metadata.buildMajor}.${metadata.buildMinor}.${metadata.buildRevision} ${metadata.buildTag}`}
      </div>
    </div>
  );
}
export default FooterComponent;

Remember after building the application to commit your changed metadata.json file into source control!

Java keystore certificate missing after Jenkins upgrade

Following a tricky upgrade to a Jenkins server it was found that the server was no longer able to communicate with the Git server, although SSH access with private keys was still working correctly. On investigating the Jenkins logs (found at http://YOUR-JENKINS-SERVER-URL/log/all) this error was found to be the problem:

Failed to login with creds MYCREDENTIALNAMEHERE
sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.provider.certpath.SunCertPathBuilder.build(Unknown Source)
at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(Unknown Source)
at java.security.cert.CertPathBuilder.build(Unknown Source)
Caused: sun.security.validator.ValidatorException: PKIX path building failed
at sun.security.validator.PKIXValidator.doBuild(Unknown Source)
at sun.security.validator.PKIXValidator.engineValidate(Unknown Source)
at sun.security.validator.Validator.validate(Unknown Source)
at sun.security.ssl.X509TrustManagerImpl.validate(Unknown Source)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(Unknown Source)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(Unknown Source)
Caused: javax.net.ssl.SSLHandshakeException

The problem here is that the Java runtime is not able to validate the connection as it doesn’t have the relevant SSL certificate. To resolve this I needed to download the certificate from the site and add it to the Java keystore.

Photo by Chunlea Ju on Unsplash

On the server, download the relevant web certificate for the site you are trying to connect to (a GitHub server in my instance). Next find which Java JRE your Jenkins is using. This can be found in your Jenkins.xml file under the executable element; for example %BASE%\jre\bin\java means C:\Jenkins\jre\bin\java if you have installed Jenkins to C:\Jenkins on Windows. Now find the cacerts file inside that Java runtime (for example C:\Jenkins\jre\lib\security\cacerts), which is where you will need to install the certificate. To add the cert to the Java keystore for that Java install, run the below in a terminal:

keytool -import -alias INSERT_ALIAS_NAME_HERE -keystore PATH_TO_JRE_HERE\security\cacerts -file CERTIFICATE_FILE_NAME

for example…

keytool -import -alias ExampleName -keystore C:\Jenkins\jre\lib\security\cacerts -file exampleCertFile.cer

If you are asked for a password then it will be ‘changeit’ by default.
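
As an aside, if you need to fetch the certificate from the command line rather than exporting it via a browser, one approach is OpenSSL (a sketch; the server name is a placeholder, and this assumes OpenSSL is available on the box):

# print the server's certificate chain and save the leaf certificate as PEM
openssl s_client -connect your-git-server.com:443 -servername your-git-server.com </dev/null | openssl x509 -outform PEM > exampleCertFile.cer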

Now that the Java runtime Jenkins is using has the certificate installed, it will be able to make an SSL connection.