SonarQube migration issue - Jenkins Using old URL

I recently migrated a SonarQube server from one machine to another in order to scale out the service for our dev team. All went well until builds started failing because they were looking at both the old and the new server URLs for the Sonar results, so I'm writing up some notes here to help me (and others) out if I hit this again in the future.

I installed the same version of SonarQube on the new application server as was on the old one. The database was not being moved, just the application server (the server running the Sonar service).

After installation I ensured that the same custom config settings were made on the new server as had been made on the old server, and that the same plugins were in place. I then stopped the Sonar service on the old server and started the service on the new box.

Once Sonar was confirmed to be up and running, connecting to the database and showing the project dashboards, I updated the Jenkins server configuration to point to the new box. All looked good, so I ran a build and got this (log truncated):

INFO: ANALYSIS SUCCESSFUL, you can browse http://OLDSERVER/dashboard?id=123
INFO: EXECUTION SUCCESS
The SonarQube Scanner has finished
Analysis results: http://NEWSERVER/dashboard/index/APP
Post-processing succeeded.
withSonarQubeEnv
waitForQualityGate
java.lang.IllegalStateException:
Fail to request http://OLDSERVER/api/ce/task?id=3423
at org.sonarqube.ws.client.HttpConnector.doCall(HttpConnector.java:212)

Bizarrely, the Jenkins build has managed to use both the new Sonar URL and the old one. The upload to the new server was successful, but some of the links in the report point to the old server. Also, the Quality Gate check, which validates that the Sonar Quality Gate passed, has tried to read the report on the OLD server and therefore failed, as the report is not there (it's on the new Sonar server).

After checking Jenkins for any reference to the old Sonar server and restarting the service to clear any caches, I was still getting the error. Eventually I ran a Jenkins build and interactively peeked into the Jenkins workspace on the Jenkins slave, where I found an auto-generated file containing lots of Sonar config settings. This file, SonarQubeAnalysisConfig.xml, is created during the Jenkins build initialisation stage. In the file I found references to the new Sonar URL, but also this property pointing to the old URL:

  sonar.core.serverBaseURL  

This value is set in the SonarQube configuration and is not dynamic, so it will not be updated when you migrate the server or change the server URL/port. To change it, open SonarQube > Administration > Configuration > General and change Server base URL to your new URL (e.g. http://yourhost.yourdomain/sonar). The UI says this value is used to create links in emails etc., but in reality it is also used when integrating results.
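
If you want to double-check the value after making the change, recent SonarQube versions also expose settings via the web API, so something like this should read it back (a quick sketch; the credentials and host are placeholders):

# Read back the current value of sonar.core.serverBaseURL (placeholder credentials/host)
curl -u admin:admin "http://NEWSERVER/api/settings/values?keys=sonar.core.serverBaseURL"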

Visual Studio 2019 Offline Installer

Microsoft have now released Visual Studio 2019 and like VS2017 there is no offline installer provided by Microsoft, but you can generate one by using the web installer setup program to write all the packages to disk.

To create the offline installer just download the usual web installer exe from the Microsoft download site and then call it from the command line passing in the layout flag and a folder path like this:

vs_community --layout "C:\Setup\VS2019Offline"

In the example above I'm downloading the Community version; if it's the Enterprise edition installer, the setup file you downloaded will be called vs_enterprise instead.

The packages will all be downloaded and a local setup exe installer created.

If you want to limit the download to English only then pass the --lang en-US flag too.

vs_community --layout "C:\Setup\VS2019Offline" --lang en-US

You can also limit which workloads to download, if you know their names, by listing them after an --add flag, as in the example below.
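
For example, to download just the .NET desktop development workload (the workload ID below is illustrative; Microsoft document the full list of workload and component IDs):

vs_community --layout "C:\Setup\VS2019Offline" --lang en-US --add Microsoft.VisualStudio.Workload.ManagedDesktop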

Enjoy your offline installs.

Easy Upgrade Tool For NPM on Windows

Having recently needed to upgrade my version of NPM on a Windows machine, without upgrading my Node.js installation, I came across this excellent tool for doing just that without following a complex set of steps. Adding it here for others to find and for me to remember 🙂

The tool is called npm-windows-upgrade and can be found on GitHub. The tool simplifies the numerous steps previously required on Windows and is now the recommended approach by the NPM team.
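
Installation and usage are simple. From an elevated PowerShell prompt the commands are, as per the project's README at the time of writing:

npm install --global --production npm-windows-upgrade
npm-windows-upgrade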


In the end I ran this tool several times to test out various versions and it worked well, upgrading NPM in place successfully.

New WP Code Snippet Editor Online Tool

You can now get the benefits of my Live Writer plugin in your browser without using Live Writer.

Use WordPress…? Post code snippets…? Well now you can customise the look and feel of the snippets whilst previewing them in a new online tool at https://WPCodePreview.com.

Ten years ago I blogged here about how to create a plugin for the (then popular) Microsoft Windows Live Writer blog editor and made the plugin available for download. Over time I added some new features and it has proved very popular, but times change: Windows Live Writer was dumped by Microsoft and then resurrected as an open source project, now called Open Live Writer (and the plugin was updated to match). Over that time more people have moved to other editors and platforms for their content (as have I), so I have now replicated the main features of the plugin in an online web application. No more need for Live Writer, unless you still like using it, in which case carry on.

Like the Live Writer plugin before it, the site provides a simple way to edit a code snippet and get it looking just how you want: line numbers, line highlighting, language syntax and so on. All of these features are provided by the WordPress “code” shortcode functionality documented here. Once you have the snippet looking how you want, copy it and paste it into the blog post editor of your choice.

For more information check out the site at https://WPCodePreview.com or the User Guide.

Cheap Azure Hosting via Static Web Sites

Something that is pretty cool and not that well known is that you can now host your static web site in the cloud with Microsoft Azure, straight from your Azure storage account. The functionality is currently in preview only, but it's functional enough to get up and running quickly if you have an Azure account.

Why host a static site?

Whilst it does depend on your requirements, many sites are quite capable of being static, with no server-side processing. The classic example is a blog, where the site could just serve up static HTML, images and JavaScript straight from disk, as the content changes fairly infrequently.

The growth in JavaScript libraries and the functionality of frameworks like React.js make static sites even more viable. Using the power of JavaScript it's possible to create rich, powerful web applications that don't need server-side processing. There has been an explosion of static site generators over recent years that will take text or markdown files and generate a complete static site for you. Two very popular generators of note are Gatsby (React.js based) and Jekyll (Ruby), but there are literally hundreds of others, as can be seen in this online directory: staticgen.com.

Hosting a static site in Azure

Of course you could always host a static site in Azure within a full-featured web site (via a hosted VM or an Azure web site), but the beauty of hosting a static-only site is that you can serve it straight out of a storage account, so you don't need to pay for any compute power. This makes it extremely cheap (and even free). You just pay standard Azure storage rates, which include a generous data transfer limit (about 5GB a month).

If you think about it, hosting a static web site is a natural extension for a cloud offering like Azure, which already hosts files and binary content on public URLs in Azure Storage. This new functionality, though, makes it more explicit and enables web site features such as custom error pages. It is also possible to add your custom domain name to the site and link up SSL (although unfortunately, at the moment, SSL requires use of an Azure CDN, which adds to the cost).

So how do you host your site? Well, follow the official instructions here.

Once you have a web page being served by the default Azure storage URL you can proceed to add your own custom domain name using these steps.

Now you should have a fully working site, but to keep costs even lower we can set caching headers on our static content to encourage the client browser to cache the files, reducing our data transfer costs. Luckily it is easy to set cache control settings on Azure blob storage items. This blog post by Alexandre Brisebois covers doing it in code, but if you are just testing, or have a site that doesn't change much, you can do it manually via the Azure Portal. To do so, open the Azure Portal, browse to your Storage Account, then using Storage Explorer find the files you want to set caching for and go to their properties. In the Properties dialog you can set the Cache-control value in the HTTP header to something like:

public, max-age=86400

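If you would rather script the change, the Azure CLI can set the same header on a blob. A minimal sketch, assuming your static content lives in the $web container used by the static web site feature (the account and file names are placeholders):

# Set the Cache-control header on a single blob (placeholder account/file names)
az storage blob update --account-name mystorageaccount --container-name '$web' --name index.html --content-cache-control "public, max-age=86400"
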
There are other alternatives to Azure for hosting static files and some offerings are very cheap or free. Some of these are more advanced than the current Azure offering and provide additional features such as integrated SSL and contact forms. One such vendor is netlify.com but there are others.

In summary, if you want to host a site cheaply and you don't really need server-side processing, consider hosting a static site; and if you're already using Azure, it's a simple step to give it a go.

Linux Home Server Build

On this blog I have posted many times about my home server configuration and seeing as I’ve recently updated it I thought I’d give a quick overview of the changes made and provide some tips for setting up a Linux home server.

My home server (an HP MicroServer) is used for NAS file storage, running Plex Media Server and a few other activities including client backups. Previously my server was running Windows Home Server 2011, an excellent home server OS from Microsoft based on Windows Server 2008 R2. In addition to file sharing it also allowed for easy server administration and client PC backups: client PCs would back up images to the server, allowing for client files or whole systems to be restored. Unfortunately Microsoft discontinued Windows Home Server; it is no longer supported, and Windows Server 2008 R2 updates will stop in July 2019. In terms of replacement options there were several. All the Windows offerings are too expensive and seem overkill for a home server, but serious Linux and BSD options include FreeNAS, Amahi, Ubuntu Server and numerous Linux desktop distros. I would also recommend looking at a Synology NAS if you have the budget.

In the end I chose the Lubuntu 18.04 (LTS) desktop distro for my needs. Why a desktop distro for a server? Well, I don't need to squeeze every ounce of performance from the server, and the Lubuntu desktop is so lightweight and efficient that I can have a graphical desktop environment as well as great server performance. It is handy to have the option to RDP into the box and use the Lubuntu desktop as an alternative to SSH when required.

I installed the OS, and checked for updates:

sudo apt-get update
sudo apt-get dist-upgrade

After installing the OS I inserted the data drives and mounted them under a /mydata mount point so that I can easily access all the files on those drives. To make these mount points persistent I edited /etc/fstab to add each one using the UUID of the partition (which can be found in the Disks app, in GParted, or with blkid as shown below):

sudo nano /etc/fstab

Then add an entry for each partition to mount:

 UUID=YOUR_OWN_PARTITION_UUID /mydata/d1/ ext4 defaults 0 0
UUID=YOUR_OWN_PARTITION_UUID /mydata/d2/ ext4 defaults 0 0
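
You can also list the partition UUIDs from the terminal, and then test the new fstab entries without rebooting:

# List block devices and their UUIDs
sudo blkid
# Create the mount points, then mount everything listed in /etc/fstab
sudo mkdir -p /mydata/d1 /mydata/d2
sudo mount -a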

Configure The Firewall

The ufw (Uncomplicated Firewall) firewall is installed on Lubuntu by default but is turned off, so you need to turn it on. First check its status:

sudo ufw status 

If inactive then activate it with:

sudo ufw enable

To see its status and current rules:

sudo ufw status verbose

Add new rules with:

sudo ufw allow PORTNUMBERHERE 
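
For example, to open the standard SSH port:

# Allow inbound SSH connections (port 22)
sudo ufw allow 22/tcp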

Install XRDP Remote Desktop Service

Next I set up XRDP for remote access to this headless server.

sudo apt-get -y install xrdp
sudo ufw allow 3389/tcp
sudo systemctl enable xrdp
sudo systemctl restart xrdp

At this point I hit many issues, but the hack below is the one that finally led to a working XRDP session: use this command to create a .xsession file in the home directory of the connecting user:

 echo "lxsession -s Lubuntu -e LXDE" > ~/.xsession 

For more information see this article.

Setup Cockpit Web Interface

For more remote administration and monitoring goodness I installed Cockpit, a web-based interface for servers with lots of useful features.

sudo apt-get install cockpit
sudo ufw allow 9090

Then browse to https://yourserverip:9090

For a useful Cockpit install guide check out this link.

Setup Samba for file sharing

The whole point of a file server is to share files, and in order to support file sharing with Windows devices on the network you'll need to set up Samba. A good link for setting up Samba can be found here.

sudo apt-get install samba

Set a password for your user in Samba:

sudo smbpasswd -a <user_name>

All the folder shares and their configuration are stored in the smb.conf file which can be edited by opening it up in a text editor like nano.

 sudo nano /etc/samba/smb.conf 

In the smb.conf file you may want to change the workgroup name to match the one used by your Windows PCs, and then add each of your folder shares.

[<folder_name>]
    path = /folder/path
    valid users = <user_name>
    read only = no

Once you have made the required changes, restart the smb daemon:

sudo service smbd restart

You'll also likely need to punch a hole in the ufw firewall for Samba:

sudo ufw allow Samba

Once Samba has restarted, check your smb.conf for any syntax errors with testparm:

testparm

For a good guide on more complex permissions check out this guide here.

It's worth noting that users need to have Unix permissions on the underlying folders in order to be able to access them. Amend the Linux file permissions as required; see the example below.
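
For example, something like this gives your user ownership of a shared folder (the path and user name are placeholders for your own):

# Take ownership of the shared folder and give it standard permissions
sudo chown -R <user_name>:<user_name> /mydata/d1/share
sudo chmod -R 755 /mydata/d1/share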

Scheduled Tasks with Cron

For scheduled tasks (backup jobs etc.) I have configured cron jobs to run bash scripts. I could have kept my existing PowerShell scripts, as PowerShell now runs on Linux too, but they needed a rewrite anyway.

Open your cron job file with…

sudo crontab -e 

then add entries like this example:

# run test job at 11:15 every day 
15 11 * * * . /etc/profile; /bin/bash /home/me/testjob.sh > /tmp/cron.out
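
For illustration, a job script like testjob.sh might mirror a share to a second drive using rsync (the paths are hypothetical, and --delete removes destination files no longer in the source, so test with --dry-run first):

#!/bin/bash
# Mirror the share to the backup drive (example paths)
rsync -av --delete /mydata/d1/share/ /mydata/d2/backup/share/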

For more info on Cron check out this guide and for an awesome helper tool for building the Cron schedule times check out corntab.com.

Summary

So I’ve covered the basics of how I’ve set up my Home Server using Lubuntu which others may find useful. I’ve been running this setup for a few months and so far I am very pleased with its performance and stability.

Future steps are to install Plex Media Server and configure client PC backups. As I want to use Snap packages for Plex I am waiting for the official Plex Snap package to come out of beta, as I'm not in a rush; alternatively I may use Docker to run Plex. To replace the client PC backup feature I previously had with Windows Home Server, I will soon be moving to a client imaging tool such as CloneZilla, Acronis or Windows Backup and then copying the images to the server.

Developer Roadmaps

Something that's proving popular on Medium these days is the "developer roadmap": a guided map through the techniques and technologies to choose from in a particular technical domain (for example web development or DevOps). Some of these are particularly powerful for putting the many bewildering technologies on one page, with logical grouping and a visual representation of how they interact. Modern web development has seen so much change over recent times that it is very easy to get lost and become overwhelmed, and these roadmaps can help clear the fog (a little).

My favourite is the Web Developer Roadmap in 2019 maintained by Kamran Ahmed over on GitHub.

I have shared this with several people who have also found it useful regardless of their level of expertise. The front end roadmap is a great guide to what the community are currently settling on as the standard choices for tooling and techniques. I have checked back to the roadmap a few times over the last 6 months to verify my approach when starting on a new project and I find that visualising the options makes decision making easier.

There are also Backend and DevOps roadmaps included, which are equally useful.

For some more useful roadmaps check out this medium post.

Cmder – A Better Windows Console

Whilst Linux treats console users as first-rate citizens and provides many useful and powerful terminal emulators, Windows has always lagged behind. This is ever more noticeable now that many developer and IT Ops workloads are done via the terminal. Modern web development and DevOps tooling requires at least some interaction with the terminal, and with the world moving to git for source control, developers everywhere are having to embrace consoles.

Whilst Microsoft have traditionally neglected the Windows console, they have started to add new features and improvements. For a background on the Windows console and its architecture, check out this blog series. Windows 10 has the best Windows console to date, but there are better ones out there from third parties, and I've really got into Cmder.
Cmder is a smart pre-configured bundle of the ConEmu emulator software with some extras thrown in. To quote directly from their website:

Cmder is a software package created out of pure frustration over the absence of nice console emulators on Windows. It is based on amazing software, and spiced up with the Monokai color scheme and a custom prompt layout, looking sexy from the start.

It can be run portably from a USB stick if you wish, and it has full Git and Bash support. You can emulate the Windows Command Prompt or PowerShell, Bash, Windows Subsystem for Linux (WSL), even the VS Developer Command Prompt, among others. All in a slick, feature-rich emulator.

It has hundreds of settings that can be tweaked to get everything just the way you like it, and it also has the awesome Quake mode so it can slide down from the top of your display.

Support for Cmd, PowerShell, Bash and many more is included out of the box, but if you are a Visual Studio user and want to emulate the Developer Command Prompt for VS2017 (recommended) then check out the simple instructions in this guide by Ricardo Serradas on Medium.

I've been using it for months and it's been stable and performant, and it has also caught the eye of colleagues thanks to those good looks, which make it a pleasure to work in compared to the plain Windows console. Give it a try.

Useful Git Training Links

Having recently had to compile a list of useful learning resources for a development team migrating to git, I thought I would share them here.

Git is a very powerful and versatile distributed source control system, but it's not the easiest thing for a newbie to get their head around. The links below are ordered from tutorials giving an overview of git through to more advanced topics.

  1. What is Git – a nice overview article by Atlassian
  2. Learn Enough Git to Be Dangerous – a tutorial by Michael Hartl
  3. Git the Simple Guide – an excellent, straight-to-the-point guide to git by Roger Dudler (my favourite guide)
  4. Git Tutorial – another tutorial
  5. Git Cheat Sheet – a cheat sheet for git and GitHub commands
  6. The official git site documentation and tutorials
  7. Pro Git ebook – an excellent, definitive guide to git in a free ebook format


GitHub External Training Links: 

If you or your team also need to learn GitHub then here are some good training links.

  1. A great hello world example and introduction to GitHub
  2. Git Started With GitHub – free course from udemy
  3. Training videos on YouTube

Also, it's worth remembering that Microsoft offer free private git repository hosting via Visual Studio Team Services if you don't want to host all your projects publicly.

Consume JSON REST Service via WCF Message Class

Since WCF was designed and envisioned by Microsoft, the world has changed and the use of RESTful JSON-based web services has increased at the expense of SOAP-based services. WCF was updated to reflect this change and for several years has supported RESTful services through webHttpBinding etc. (more on MSDN), and there are many resources on the web covering how to consume or host a REST service with WCF, but many of these assume you are not using a generic channel factory approach with the low-level Message class. Usually in WCF you would consume a service via a proxy, or perhaps by directly creating a channel factory; however, these require explicit knowledge of the service contract being consumed, and sometimes a more generic solution is required. If, for example, you wanted to create a generic WCF helper class for your application which would build a message directly from passed-in data and call a service generically, then you could use the Message class directly. This advanced approach is documented for SOAP messaging, but what about if you need to send JSON?

Below are some notes on how you would use the Message class to send JSON in a generic way (i.e. without needing intimate knowledge of the service contract you’re calling).

In the code below we need to pass the Person object named “bob” as JSON, so we create a WebChannelFactory using the “Endpoint1” config (which is very generic in nature). WebChannelFactory is a special ChannelFactory that automatically adds WebHttpBinding and WebHttpBehavior to the endpoint config if they are missing. Then we create a proxy and directly build a Channels.Message, using a SOAP version of “None” (as we're not using SOAP here but JSON) and the DataContractJsonSerializer.

Person bob = new Person() {age = 89, name="Bob"};

WebChannelFactory<IRequestChannel> factory = new WebChannelFactory<IRequestChannel>("Endpoint1");

IRequestChannel proxy = factory.CreateChannel(
      new EndpointAddress("http://localhost:8080/Test"));

System.ServiceModel.Channels.Message requestMsg = 
      System.ServiceModel.Channels.Message.CreateMessage(
          MessageVersion.None, "", bob, new DataContractJsonSerializer(typeof(Person)));

requestMsg.Headers.To = new Uri(factory.Endpoint.Address.ToString());
requestMsg.Properties[WebBodyFormatMessageProperty.Name] = new WebBodyFormatMessageProperty(WebContentFormat.Json);

You will notice above that we also need to set the message header URI, and set the WebBodyFormatMessageProperty format to JSON. If we forget this step then the message will be sent in XML format despite the web config we have set (for more info on this issue see here and here). This is what is sent without setting the WebBodyFormatMessageProperty to JSON:

<root type="object"><age type="number">89</age><name>Bob</name></root>

and with the WebBodyFormatMessageProperty set to “WebContentFormat.Json”:

{"age":89,"name":"Bob"}

Next we call the nice and generic “Request()” method on the proxy and handle the response, picking out the body and deserialising it into a Person object via the DataContractJsonSerializer.

System.ServiceModel.Channels.Message responseMsg = proxy.Request(requestMsg);

Person BobResponse = responseMsg.GetBody<Person>(new DataContractJsonSerializer(typeof(Person)));

Endpoint Config:

<system.serviceModel>
 <client>
 <endpoint name="Endpoint1"
 address="http://localhost:8080/Test" 
 binding="webHttpBinding"
 contract="System.ServiceModel.Channels.IRequestChannel"
 />
 </client>
</system.serviceModel>

In this snippet the only thing that is specific to the service being called is the Person object, which the DataContractJsonSerializer needs to know about in order to serialise it into JSON correctly. The actual service call is generic. To make this a completely generic helper we can instead pass in the type for the DataContractJsonSerializer to use, leaving the calling component to pass the right type in when it calls the generic helper method; something like the sketch below.
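
Here is a minimal sketch of such a helper (the method name and shape are mine, not from the original code; error handling and channel cleanup are omitted):

public static TResponse CallJsonService<TRequest, TResponse>(
    string endpointConfigName, string url, TRequest body)
{
    // WebChannelFactory adds webHttpBinding/WebHttpBehavior if missing, as before
    var factory = new WebChannelFactory<IRequestChannel>(endpointConfigName);
    IRequestChannel proxy = factory.CreateChannel(new EndpointAddress(url));

    // Serialise the request body to JSON using the type supplied by the caller
    Message requestMsg = Message.CreateMessage(
        MessageVersion.None, "", body, new DataContractJsonSerializer(typeof(TRequest)));
    requestMsg.Headers.To = new Uri(url);
    requestMsg.Properties[WebBodyFormatMessageProperty.Name] =
        new WebBodyFormatMessageProperty(WebContentFormat.Json);

    // Make the generic request and deserialise the JSON response
    Message responseMsg = proxy.Request(requestMsg);
    return responseMsg.GetBody<TResponse>(new DataContractJsonSerializer(typeof(TResponse)));
}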

If you are already using this message class approach for SOAP services and need to now call some JSON REST services then hopefully this will help.