Cheap Azure Hosting via Static Web Sites

Something that is pretty cool and not that well known is that you can now host your static web site in the cloud with Microsoft Azure straight from your Azure storage account. The functionality is currently in preview only, but it's functional enough to get up and running quickly if you have an Azure account.

Why host a static site?

Whilst it does depend on your requirements, many sites are quite capable of being static sites with no server-side processing. The classic example is a blog, whereby the site could just serve up static HTML, images and JavaScript straight from disk as the content changes fairly infrequently.

The growth in JavaScript libraries and the functionality of frameworks like React.js make static sites even more viable. Using the power of JavaScript it's possible to create rich, powerful web applications that don't need server-side processing. There has been an explosion of static site generators over recent years that will take text or markdown files and generate a complete static site for you. Two very popular generators of note are Gatsby (React.js based) and Jekyll (Ruby based), but there are literally hundreds of others, as can be seen in this online directory: staticgen.com.

Hosting a static site in Azure

Of course you could always host a static site in Azure by hosting it in a full-featured web site (via a hosted VM or an Azure web site), but the beauty of hosting a static-only site is that you can serve it straight out of your storage account, so you don't need to pay for any compute power, which makes it extremely cheap (and even free). You just pay standard Azure storage rates, which include a generous data transfer allowance (about 5 GB a month).

If you think about it, hosting a static web site is just a natural extension for a cloud offering like Azure, as it already hosts files and binary content on public URLs in Azure Storage. This new functionality, though, makes it more explicit and enables web-site-like functionality such as custom error pages. It is also possible to add your own custom domain name to the site and link up SSL (although unfortunately at the moment SSL requires use of an Azure CDN, which adds to the cost).

So how do you host your site? Well, follow the official instructions here.

Once you have a web page being served by the default Azure storage URL you can proceed to add your own custom domain name using these steps.

Now you should have a fully working site, but to keep costs even lower we can encourage the client browser to cache the static files, thus reducing our data transfer costs. Luckily it is easy to set cache-control settings on our Azure Blob storage items. This blog post by Alexandre Brisebois covers doing it in code, but if you are just testing, or have a site that doesn't change much, you can do it manually via the Azure Portal. To do so, enter the Azure Portal, browse to your Storage Account and then, using Storage Explorer, find the files you want to set caching for and go to their properties. In the Properties dialog you can set the Cache-Control value in the HTTP header to something like…

 "public, max-age=86400". 

There are other alternatives to Azure for hosting static files and some offerings are very cheap or free. Some of these are more advanced than the current Azure offering and provide additional features such as integrated SSL and contact forms. One such vendor is netlify.com but there are others.

In summary, if you want to host a site cheaply and you don't really need server-side processing then consider hosting a static site, and if you're already using Azure then it's a simple step to give it a go.

Linux Home Server Build

On this blog I have posted many times about my home server configuration, and seeing as I've recently updated it I thought I'd give a quick overview of the changes made and provide some tips for setting up a Linux home server.

My home server (an HP MicroServer) is used for NAS file storage, running Plex Media Server and a few other activities, including client backups. Previously my server was running Windows Home Server 2011, which was an excellent home server OS from Microsoft based on Windows Server 2008 R2. In addition to file sharing it also allowed for easy server administration and client PC backups. Client PCs would back up images to the server, allowing client files or whole systems to be restored. Unfortunately Microsoft discontinued Windows Home Server; it is no longer supported and Windows Server 2008 R2 updates will stop in July 2019. In terms of replacement options there were several: whilst the Windows offerings are too expensive and seem overkill for a home server, serious Linux and BSD options include FreeNAS, Amahi, Ubuntu Server and numerous Linux desktop distros. I would also recommend looking at a Synology if you have the budget.

In the end I chose the Lubuntu 18.04 (LTS) desktop distro for my needs. Why a desktop distro for a server? Well, I don't need to squeeze every ounce of performance from the server, and the Lubuntu desktop is so lightweight and efficient that I can have a graphical desktop environment as well as great server performance. It is handy to have the option to RDP into the box and use the Lubuntu desktop as an alternative to SSH when required.

I installed the OS, and checked for updates:

sudo apt-get update
sudo apt-get dist-upgrade

After installing the OS I inserted the data drives and mounted them under a /mydata mount point so that I can easily access all the files on those drives. To make these mount points persistent I edited /etc/fstab to add each one using the UUID of the partition (which can be found in the Disks app or the GParted app):

sudo nano /etc/fstab

Then add an entry for each partition to mount…

UUID=YOUR_OWN_PARTITION_UUID /mydata/d1/ ext4 defaults 0 0
UUID=YOUR_OWN_PARTITION_UUID /mydata/d2/ ext4 defaults 0 0

Configure The Firewall

The ufw (uncomplicated firewall) firewall is installed on Lubuntu by default but is turned off, so first check its status:

sudo ufw status 

If inactive then activate it with:

sudo ufw enable

To see its status and current rules:

sudo ufw status verbose

Add new rules with:

sudo ufw allow PORTNUMBERHERE 

Install XRDP Remote Desktop Service

Next I set up XRDP for remote access to this headless server.

sudo apt-get -y install xrdp
sudo ufw allow 3389/tcp
sudo systemctl enable xrdp
sudo systemctl restart xrdp

At this point I hit many issues, but the hack below seemed to be the one that led to a working XRDP session: use this command to create a .xsession file in the home directory of the connecting user:

 echo "lxsession -s Lubuntu -e LXDE" > ~/.xsession 

For more information see this article.

Setup Cockpit Web Interface

For more remote administration and monitoring goodness I installed Cockpit, which is a web-based interface for servers with lots of useful features.

sudo apt-get install cockpit
sudo ufw allow 9090

Then browse to https://yourserverip:9090

For a useful Cockpit install guide check out this link.

Setup Samba for file sharing

The whole point of a file server is to share files, and in order to support file sharing with Windows devices on the network you'll need to set up Samba. A good link for setting up Samba can be found here.

sudo apt-get install samba

Set a password for your user in Samba.

sudo smbpasswd -a YOUR_USERNAME

All the folder shares and their configuration are stored in the smb.conf file which can be edited by opening it up in a text editor like nano.

 sudo nano /etc/samba/smb.conf 

In the smb.conf file you may want to change the workgroup name to match that used by your Windows PCs and then add each of your folder shares.

[<folder_name>]
    path = /folder/path
    valid users = <user_name>
    read only = no

Once you have made the required changes, restart the smbd daemon service:

sudo service smbd restart

You'll also likely need to punch a hole in the ufw firewall for Samba:

sudo ufw allow Samba

Once Samba has restarted, check your smb.conf for any syntax errors with the testparm command:

testparm

For a good guide on more complex permissions check out this guide here.

It's worth noting that users need to have Unix permissions on the underlying folders in order to be able to access them. Amend Linux file permissions as required.

Scheduled Tasks with Cron

For scheduled tasks (backup jobs etc.) I have configured Cron jobs to run bash scripts. I could have kept my existing PowerShell scripts, as PowerShell now runs on Linux too, but they needed a rewrite anyway.

Open your cron job file with…

sudo crontab -e 

then add entries like this example:

# run test job at 11:15 every day 
15 11 * * * . /etc/profile; /bin/bash /home/me/testjob.sh > /tmp/cron.out

For more info on Cron check out this guide and for an awesome helper tool for building the Cron schedule times check out corntab.com.

Summary

So I've covered the basics of how I've set up my home server using Lubuntu, which others may find useful. I've been running this setup for a few months and so far I am very pleased with its performance and stability.

Future steps are to install Plex Media Server and configure client PC backups. As I want to use Snaps for Plex I am waiting for the official Plex Snap package to come out of beta, as I'm not in a rush. Alternatively I may use Docker to run Plex. To replace the client PC backup feature I previously had with Windows Home Server I will soon be moving to a client imaging tool such as Clonezilla, Acronis or Windows Backup, and then copying the images to the server.

Developer Roadmaps

Something that's proving popular on Medium these days is the "development roadmap": a guide that outlines a roadmap approach to choosing techniques and technologies for a given technical domain (for example web development or DevOps). Some of these are particularly powerful for putting the many bewildering technologies all on one page, with logical grouping and a visual representation of how they interact. Modern web development has seen so much change over recent times that it is very easy to get lost and become overwhelmed, and these roadmaps can help clear the fog (a little).

My favourite is the Web Developer Roadmap in 2019 maintained by Kamran Ahmed over on GitHub.

I have shared this with several people who have also found it useful regardless of their level of expertise. The front-end roadmap is a great guide to what the community are currently settling on as the standard choices for tooling and techniques. I have checked back to the roadmap a few times over the last six months to verify my approach when starting on a new project, and I find that visualising the options makes decision making easier.

There are also Backend and DevOps Roadmaps included which are equally as useful.

For some more useful roadmaps check out this medium post.

Cmder – A Better Windows Console

Whilst Linux treats console users as first-rate citizens and provides many useful and powerful terminal emulators, Windows has always lagged behind. This is ever more noticeable now that many developer and IT Ops workloads are done via the terminal. Modern web development and DevOps tooling requires at least some interaction with the terminal, and with the world moving to git for source control, developers everywhere are having to embrace consoles.

Whilst Microsoft have traditionally neglected the Windows console, they have started to add new features and improvements. For a background on the Windows Console and its architecture check out this blog series. Windows 10 has the best Windows console to date, but there are better out there from third parties and I've really got into Cmder.

Cmder is a smart pre-configured bundle of the ConEmu emulator software with some extras thrown in. To quote directly from their website:

Cmder is a software package created out of pure frustration over the absence of nice console emulators on Windows. It is based on amazing software, and spiced up with the Monokai color scheme and a custom prompt layout, looking sexy from the start.

It can be run portably from a USB stick if you wish, and it has full Git and Bash support. You can emulate the Windows Command Prompt, PowerShell, Bash, Windows Subsystem for Linux (WSL) and even the VS Developer Command Prompt, among others. All in a slick, feature-rich emulator.

[Screenshot: Cmder]

It has hundreds of settings that can be tweaked to get everything just the way you like it, and it also has the awesome Quake mode so it can slide down from the top of your display.

[Screenshot: Cmder's Quake-style drop-down mode]

Support for Cmd, PowerShell, Bash and many more is included out of the box, but if you are a Visual Studio user and want to emulate the Developer Command Prompt for VS2017 (recommended) then check out the simple instructions in this guide by Ricardo Serradas on Medium.

I've been using it for months and it's been stable and performant, and it has also caught the eye of colleagues due to those good looks, which make it a pleasure to work in compared to the plain Windows console. Give it a try.

Useful Git Training Links

Having recently had to compile a list of useful learning resources for a development team migrating to git, I thought I would share them here.

Git is a very powerful and versatile distributed source control system, but it's not the easiest for a newbie to get their head around. The links below are ordered from tutorials giving an overview of git through to more advanced topics.

  1. What is Git – a nice overview article by Atlassian
  2. Learn Enough Git to Be Dangerous tutorial by Michael Hartl
  3. Git the Simple Guide – An excellent, simple, straight-to-the-point guide to git by Roger Dudler. (My favourite guide)
  4. Git Tutorial – Another tutorial
  5. Git Cheat Sheet – cheat sheet for git and github commands
  6. The official git site documentation and tutorials
  7. Pro Git ebook – an excellent definitive guide to git in a free ebook format


GitHub External Training Links: 

If you or your team also need to learn GitHub then here are some good training links.

  1. A great hello world example and introduction to GitHub
  2. Git Started With GitHub – free course from udemy
  3. Training videos on YouTube

Also it's worth remembering that Microsoft offer FREE private git repository hosting via Visual Studio Team Services if you don't want to host all your projects publicly.

Consume JSON REST Service via WCF Message Class

Since WCF was designed and envisioned by Microsoft the world has changed, and the use of RESTful JSON-based web services has increased at the expense of SOAP-based services. WCF was updated to reflect this change and for several years has supported RESTful services through webHttpBinding etc. (more on MSDN), and there are many resources on the web for how to consume or host a REST service with WCF, but many of these assume you are not using a generic channel factory approach with the low-level Message class. Usually in WCF you would consume a service via a proxy, or perhaps by directly creating a ChannelFactory; however, these require explicit knowledge of the service contract being consumed, and sometimes a more generic solution is required. If, for example, you wanted to create a generic WCF helper class for your application which would build a message directly from passed-in data and call a service generically, then you could use the Message class directly. This advanced approach is documented for SOAP messaging, but what about if you need to send JSON?

Below are some notes on how you would use the Message class to send JSON in a generic way (i.e. without needing intimate knowledge of the service contract you’re calling).

In the code below we need to pass the Person object named "bob" as JSON, so we create a WebChannelFactory and use the "Endpoint1" config (which is very generic in nature). The special WebChannelFactory is a ChannelFactory that automatically adds WebHttpBinding and WebHttpBehavior to the endpoint config if they are missing. Then we create a proxy and directly build a Channels.Message instance using a SOAP version of "None" (as we're not using SOAP here but JSON) and the DataContractJsonSerializer.

Person bob = new Person() {age = 89, name="Bob"};

WebChannelFactory<IRequestChannel> factory = new WebChannelFactory<IRequestChannel>("Endpoint1");

IRequestChannel proxy = factory.CreateChannel(
      new EndpointAddress("http://localhost:8080/Test"));

System.ServiceModel.Channels.Message requestMsg = 
      System.ServiceModel.Channels.Message.CreateMessage(
          MessageVersion.None, "", bob, new DataContractJsonSerializer(typeof(Person)));

requestMsg.Headers.To = new Uri(factory.Endpoint.Address.ToString());
requestMsg.Properties[WebBodyFormatMessageProperty.Name] = new WebBodyFormatMessageProperty(WebContentFormat.Json);

You will notice above that we also need to set the message header URI and set the WebBodyFormatMessageProperty format to JSON. If we forget this step then the message will be sent in XML format despite the web config we have set (for more info on this issue see here and here). This is what is sent without setting the WebBodyFormatMessageProperty to JSON:

<root type="object"><age type="number">89</age><name>Bob</name></root>

and with the WebBodyFormatMessageProperty set to “WebContentFormat.Json”:

{"age":89,"name":"Bob"}

Next we call the nice and generic “Request()” method on the proxy and handle the response, picking out the body and de-serialising it into a Person object via the DataContractJsonSerializer.

System.ServiceModel.Channels.Message responseMsg = proxy.Request(requestMsg);

Person BobResponse = responseMsg.GetBody<Person>(new DataContractJsonSerializer(typeof(Person)));

Endpoint Config:

<system.serviceModel>
  <client>
    <endpoint name="Endpoint1"
              address="http://localhost:8080/Test"
              binding="webHttpBinding"
              contract="System.ServiceModel.Channels.IRequestChannel" />
  </client>
</system.serviceModel>

In this snippet the only thing that is specific to the service being called is the Person object, which the DataContractJsonSerializer needs to know about in order to serialise it into JSON correctly. The actual service call is generic. To make this a completely generic helper we can instead pass in a type for the DataContractJsonSerializer to use instead of a real object, leaving the calling component to pass the right type in when it calls this generic helper method.
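
As a rough sketch of that idea (not code from the original service; the method and parameter names are purely illustrative, and it uses generic type parameters to supply the types), a helper could look something like this:

// Requires: System.ServiceModel, System.ServiceModel.Channels, System.ServiceModel.Web,
// System.Runtime.Serialization.Json
public static TResponse CallJsonService<TRequest, TResponse>(
    string endpointConfigName, string address, TRequest request)
{
    var factory = new WebChannelFactory<IRequestChannel>(endpointConfigName);
    IRequestChannel proxy = factory.CreateChannel(new EndpointAddress(address));

    // Build the request message as JSON (MessageVersion.None as we are not using SOAP)
    Message requestMsg = Message.CreateMessage(
        MessageVersion.None, "", request, new DataContractJsonSerializer(typeof(TRequest)));
    requestMsg.Headers.To = new Uri(address);
    requestMsg.Properties[WebBodyFormatMessageProperty.Name] =
        new WebBodyFormatMessageProperty(WebContentFormat.Json);

    // Call the service and deserialise the JSON response body
    Message responseMsg = proxy.Request(requestMsg);
    return responseMsg.GetBody<TResponse>(new DataContractJsonSerializer(typeof(TResponse)));
}

Calling it for the example above would then be a one-liner such as CallJsonService<Person, Person>("Endpoint1", "http://localhost:8080/Test", bob).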

If you are already using this message class approach for SOAP services and need to now call some JSON REST services then hopefully this will help.

Vertical Monitors

Whilst I've been using a multiple monitor setup at home and work for many years, only for the past year have I been using one of the two in vertical/portrait mode, and I'm hooked.

The benefits of using multiple displays over a single display have been researched numerous times over the years. Estimates of the productivity gain that can be made vary from 20% to 42% in the research I have seen, but even at 20% the second monitor soon pays for itself. Jon Peddie Research (JPR) in their 2017 research found a 42% gain in productivity based on the increased number of pixels available via multiple screens. And whilst some of this gain can be made from just larger screens as opposed to more screens, there are certainly ease-of-use advantages to multiple screens, and several studies sponsored by Dell have found the user experience to be more pleasurable with multiple screens. I find using several screens an easier way of arranging windows than trying to arrange them on one large monitor.

“We found that users of multiple monitors have an average expected productivity increase of 42%,” said Dr. Jon Peddie, President of Jon Peddie Research.

A University of Utah study showed that a second monitor can save an employee 2.5 hours a day if they use it all day for data entry.

“The more you can see, the more you can do.” Jon Peddie, 1998

But what about having monitors in portrait mode, well the Jon Peddie research also stated that additional benefit could be had by reducing scrolling for information.

“Having one of the monitors in portrait mode adds to productivity by eliminating the need to scroll.”

After taking a week or so to get used to the portrait mode I quickly found its benefits. It's ideal for viewing anything that requires vertical scrolling, e.g. web pages, code, log files, Jira backlogs etc. In a quick test of a code file in VSCode I could see 62 lines of code on my 24-inch horizontal monitor, but 120 lines when the file was moved to the vertical monitor. Not only do you reduce scrolling, but there are advantages in being able to see more of a document in one go, helping you visualise the problem.

On a side note the Xerox Alto, which was the first computer designed to support a GUI based OS, had a vertical screen to represent a document layout.

[Image: a landscape monitor alongside a portrait monitor]

Over the past decade or more displays have moved to widescreen format and many applications have tailored their UIs to fit this trend. This means that having all portrait monitors can quickly lead to some frustrations and the need to horizontally scroll, thus reducing the additional productivity gains. This is why I have found that (for me) the ideal setup is one landscape display and one portrait display (as in the image above).

With this formation you can use the display that is best suited to the task at hand, so you have the best of both worlds. Combine this with a few simple keyboard shortcuts to move things between monitors and you'll soon be loving the layout. In Windows, using the Windows key and the cursor keys together can quickly move and dock windows. Linux distros vary.

Since I made the switch to one vertical monitor at work I have had numerous colleagues see the benefits and be converted to this layout, so if you have a monitor that is capable of being turned vertically then I recommend you give it a go and see how you get on. You might get hooked!

SonarQube: Unit Test Results Not Shown

Recently, whilst building a Jenkins CI pipeline with SonarQube static analysis, the JUnit unit test results were not being included in the Sonar dashboard results. The JaCoCo-based test coverage results were being included fine, but not the actual test pass/fail percentage.

[Screenshot: Sonar dashboard missing the unit test results]

After digging into the log for the Jenkins build I found this warning being logged for the SurefireSensor (the Sonar sensor responsible for scanning JUnit XML reports for results):

[sonar] 10:26:34.534 INFO - Sensor SurefireSensor
[sonar] 10:26:34.534 INFO - parsing /apps/jenkins2/var/lib/jenkins/workspace/abc/code_master/examplecode/UnitTest/junit
[sonar] 10:26:34.864 DEBUG - Class not found in resource cache : com.rh.examplecode.UIMapperTest
[sonar] 10:26:34.864 WARN - Resource not found: com.rh.examplecode.UIMapperTest

The JUnit XML reports were being found and parsed fine, but when the scanner looks for the actual test code (the *.java code) it cannot be found, and hence it throws the warning. It turns out that the Java code for the tests is required in order to analyse the JUnit results files, and so you need to tell Sonar where to find the source code for the tests. How? Well, this is done via the "sonar.tests" property, which is a comma-separated list of filepaths for directories containing the test code (the *.java files, not *.class files). For example:

sonar.tests = "/UnitTests/junit"

Set this property alongside the other parameters for Sonar, for example:

sonar.projectBaseDir="${WORKSPACE}/exampleApp"
sonar.projectKey="testbuild1"
sonar.projectName="testBuild"
sonar.sourceEncoding="UTF-8"
sonar.sources="src/main/java/com/rh/examplecode/"
sonar.junit.reportsPath="ReportsXML/"
sonar.tests= "/UnitTests/junit"
sonar.jacoco.reportPath="target/jacoco.exec"
sonar.jacoco.reportMissing.force.zero="true"
sonar.binaries="build/com/rh/"

After this change the Sonar scanner will run and this time find the test source code, enabling it to complete the analysis. The log should report something like this:

[sonar] 13:10:20.848 INFO - Sensor SurefireSensor
[sonar] 13:10:20.848 INFO - parsing /apps/jenkins2/var/lib/jenkins/workspace/abc/code_master/ReportsXML
[sonar] 13:10:21.022 INFO - Sensor SurefireSensor (done) | time=10ms

And you should now have your unit test success/failure results in the unit test widgets on the project's Sonar dashboard, like so:

[Screenshot: Sonar dashboard showing the unit test results]

A Custom JSF Tag Lib For Toggling Render of Child Elements

I’ve added a new sample project on GitHub that shows a custom Tag Library for JSF (Java Server Faces) that can be used to show/hide its children. There are several uses for this sort of custom component in your JSF web project but in this sample code I just read a property on the custom tag to determine whether to render all child elements or not. In a real application this logic could be swapped to implement application logic or perhaps call a Feature Toggle framework which decides whether to render elements within this tag or not.

Whilst the code is a complete sample JSF project, the main class is the CustomTag class.

This custom tag extends UIComponentBase to control the encodeChildren and getRendersChildren methods. In the getRendersChildren method we need to determine whether any child elements should be shown or not (again, this can be any logic you find useful, but for this example we are just reading a parameter on the XHTML). If it's determined that we DO want to display children then we follow normal processing and pass the call on to the base class's getRendersChildren method. If it's determined we DO NOT want to display child elements, we instead return TRUE, which tells the JSF framework that we want to render the children ourselves using our custom encodeChildren method (which then ignores the request).

import java.io.IOException;

import javax.faces.component.UIComponentBase;
import javax.faces.context.FacesContext;

public class CustomTag extends UIComponentBase {

    @Override
    public String getFamily() {
        return "com.test.common.CustomTag";
    }

    @Override
    public void encodeChildren(FacesContext arg0) throws IOException {
        // intentionally render nothing when JSF delegates child rendering to this tag
        return;
    }

    @Override
    public boolean getRendersChildren() {
        boolean enabled = false;
        // Replace this logic with whatever you need to determine whether to display children or not
        String ruleName = (String) getAttributes().get("showChildren");
        enabled = (ruleName.equalsIgnoreCase("enabled"));

        if (enabled) {
            // we want the children rendered so we tell JSF that we won't be doing it in this custom tag.
            return super.getRendersChildren();
        }
        else {
            // we will tell JSF that we will render the children, but then we'll not.
            return true;
        }
    }
}

Next we define the new tag in the web.xml and custom.taglib.xml config files.

<!-- register custom tag -->
<context-param>
    <param-name>javax.faces.FACELETS_LIBRARIES</param-name>
    <param-value>/WEB-INF/custom.taglib.xml</param-value>
</context-param>

and

<?xml version="1.0"?>
<!DOCTYPE facelet-taglib PUBLIC
        "-//Sun Microsystems, Inc.//DTD Facelet Taglib 1.0//EN"
        "http://java.sun.com/dtd/facelet-taglib_1_0.dtd">
<facelet-taglib>
    <namespace>http://RichCustomTag.com</namespace>
    <tag>
        <tag-name>customTag</tag-name>
        <component>
            <component-type>com.test.common.CustomTag</component-type>
        </component>
    </tag>
</facelet-taglib>

We then use that new CustomTag within our XHTML page to wrap the elements that we want to toggle on/off depending on our logic.

Items below may or may not appear depending on whether they are turned on or off:
<!-- to show children set showChildren to 'enabled' else 'disabled' -->
<rh:customTag showChildren="enabled">
   If you see this then child items are enabled
    <h:inputText value="#{helloBean.name2}"></h:inputText>
    end of dynamic content.
</rh:customTag>

That’s it.

Check out the code on GitHub via https://github.com/RichHewlett/JSFTag_ToggleChildRendering.

Create New MSTest Projects for Pre .Net 4.5 in Visual Studio 2017

This post outlines the steps to create a new unit test project in Visual Studio 2017 using MS Test V1 and that targets .Net Frameworks prior to .Net 4.5.

Visual Studio 2017 onwards only provides new unit test project templates for MS Test V2 and .Net 4.5 onwards. This is fine for new applications or ones targeting a recent .Net Framework version, but what if you have an existing solution targeting an older .Net version? Below shows the Unit Test Project template available for .Net 4.5, but as you can see in the second screenshot, for .Net 3.5 it's not available.

[Screenshot: Unit Test Project template available when targeting .Net 4.5]

[Screenshot: no Unit Test Project template when targeting .Net 3.5]

If you want to create a new unit test project targeting .Net 3.5/4 for example then follow the steps below:

Create a new MS Test V2 project targeting .Net Framework 4.5 as in the first screenshot above (i.e. File > New Project > Test > Unit Test Project targeting .Net 4.5).

Once it's created, change the project to target your earlier .Net Framework (e.g. .Net 3.5). This is done via the Project Properties page. Click Yes and Visual Studio will reload the project.

[Screenshot: changing the target framework on the Project Properties page]

Once it reloads, the project will complain about some references, which is fine as we're now going to remove the MS Test V2 assemblies.

[Screenshot: reference errors after changing the target framework]

Now remove the two project references to test assemblies.

[Screenshot: removing the two MS Test V2 project references]

Then add a reference to the MSTest V1 assembly, Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll. This should be under Extensions in the Add Reference dialog. Alternatively you can browse to them on your hard drive at the following locations:

For pre .Net 4 projects : C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v2.0

For post .Net 4 projects: C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v4.0

If you are not running Visual Studio Enterprise, then swap Enterprise in the path for Community etc.

[Screenshot: adding the MSTest V1 reference]

Now rebuild the project and you’re all done.
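
As a quick sanity check that the MSTest V1 reference is wired up correctly, a minimal test class along these lines (the names here are purely illustrative) should compile and appear in Test Explorer:

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyApp.Tests
{
    [TestClass]
    public class SmokeTests
    {
        [TestMethod]
        public void Framework_Reference_Works()
        {
            // trivial assertion purely to prove the MSTest V1 assembly is referenced correctly
            Assert.AreEqual(4, 2 + 2);
        }
    }
}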