Some Recommended VS Code Extensions

One of the things that makes Visual Studio Code (VSCode) such a great editor is the many extensions that have been built for it. Extensions in VSCode are explained here. As a reference for myself when building new machines, and for anyone else who might find this useful, below is a list of my most used extensions:

  • Bracket pair colorizer – Colours your brackets and braces for easy identification. I avoid many missing bracket errors with this one!
  • XML Tools – XML Formatting, XQuery, and XPath Tools.
  • ESLint – Extension to integrate ESLint into the IDE.
  • SonarLint – Sonar Rules in VSCode to check your code quality as you go.
  • Prettier – Integrate Prettier into your IDE.
  • GitLens – Extend the Git capabilities of VSCode with this tool.
  • PowerShell – Develop PowerShell scripts inside VSCode.
  • Docker – Develop Docker scripts inside VSCode. Adds syntax highlighting, commands, hover tips, and linting for Dockerfile and docker-compose files.
  • REST Client – Allows you to send HTTP requests and view the response directly in VSCode.
  • JS Refactor – Automated refactoring tools to smooth your JavaScript development workflow.

Check out more on the VSCode marketplace.

What are you using?

Razor Pages Fixes to Tag Helpers Issues

Recently when adding Razor Pages to an existing ASP.NET Core MVC web application I had issues with the Tag Helpers not working. No markup was being produced. Not only were the tag helpers (e.g. asp-for) not doing their job, but I also noticed that the markup was not being formatted in bold in Visual Studio as it should be.

At this point I checked for a _ViewImports.cshtml file, which I found, before checking some other things (see the list below). However, I failed to notice that, as this was an MVC application with Razor Views, there was already a _ViewImports.cshtml file, but in the wrong place for my Razor Pages. The _ViewImports.cshtml file must be in the root of the Pages folder where your Razor Pages reside, so you will need one in both the Pages folder for your Razor Pages and the Views folder for your MVC Razor Views.

The _ViewImports.cshtml file must contain:

@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers

Adding a new _ViewImports.cshtml file under the Pages folder resolved my problem, but here are some additional things to try if you have other Tag Helper problems:

  • Update all dependency Packages
  • Add/re-add the Microsoft.AspNetCore.Mvc.TagHelpers package if it's missing
  • Check your _ViewImports.cshtml file is in the Pages folder or a parent folder of your Razor Pages. You may need one in the Areas folder if you have one.
  • Check that the _ViewImports.cshtml file includes a reference to Microsoft.AspNetCore.Mvc.TagHelpers. If it does, try removing it, rebuilding the solution, then re-adding it and rebuilding again.
  • Check the namespace in the _ViewImports.cshtml file is correct.
  • Should all these fail, try turning on the developer exception page and see if that helps to narrow down the problem.

Moving teams to Trunk Based Development (an example)

In this post I am going to cover an example case study of introducing Trunk Based Development to an existing enterprise dev team building a monolithic web application. I'm not going to cover the details of trunk based development as that's covered in detail elsewhere on the internet (I recommend trunkbaseddevelopment.com). For the purpose of this article I'm referring to all development teams working on a single code branch, constantly checking their changes in to that single branch (master). In our model the only reasons for ever creating a new branch were the following:

  1. A developer creates a personal branch off master for the sole purpose of creating a Pull Request into the master branch (for code review purposes).
  2. The release team take a branch off master to create a Release Candidate build. Whilst this is not in the true spirit of Trunk Based Development, it is often required for release management purposes. Microsoft use this model for their Azure DevOps tooling development and call it Release Flow.
  3. A team may still create a branch for a technical spike or proof of concept that will be discarded and won't ever be merged into master.

Problem Statement

The development team in this case was an enterprise application development team building a large-scale multi-channel application. There were five sprint teams working on the same monolithic application codebase, with each team having its own branch of the code that they worked on independently of the other sprint teams. Teams were responsible for maintaining their own branches and merging in updates from previous releases, but this was often not done correctly, causing defects to emerge and features to be accidentally regressed. What's more, the teams' branches would deviate further and further from each other over time, making the problem worse with each subsequent release.

Following a monthly release cycle, at the end of two fortnightly sprints all teams manually merged their code into a release branch which was then built and sanity tested. The merging of code was usually manual and often done by cherry picking individual changes from memory. Needless to say this process was error prone and complicated for larger releases. Once the merge was complete, this was the first time that all the code had been running together and cross-team impacting changes could be identified, so many issues were encountered at this point, either in failed CI builds or in the release testing cycle. This merge and test cycle took between 3 and 8 days, taking up to a week out of our delivery cycle and severely impacting velocity. The more issues that were found, the more the release testing was increased in an attempt to address quality issues, increasing lead times. It was clear that something needed to be done, so we decided to take the leap to trunk based development, eliminating waste by removing merges and aligning each team's changes sooner.

Solution

We moved to a single master/trunk stream (in Git) for ALL development, and all sprint teams worked on this single stream. Each sprint team had its own development environment for in-sprint testing of their changes. Every month we would create a cut of the master branch to create the release candidate branch for the route to live, and to act as a stable/scope-locked base for testing the upcoming release to production. This model follows Microsoft's Release Flow model. Defect fixes for the release candidate were made in master or the release branch, depending on the sprint team's decision, but each week changes made in the release branch (i.e. defect fixes) were back-merged into master/trunk. This back-merge activity was owned by one sprint team per release, and ownership rotated each release so everyone shared the effort and benefited from improvements to the process.

Feature Toggles

The move towards trunk based development was not possible without utilising Feature Toggles (or, to be more specific, Release Toggles). We needed to support multiple teams working on the same codebase but building and testing their own changes (often targeting different releases). We needed to protect each team from every other team's untested changes, so we used toggles to isolate new changes until they were tested/approved. All code changes were wrapped in an IF block with the old code in the ELSE block, and a toggle check determined which route the code took.

 if (ToggleManager.isToggleEnabled("feature12345"))
 {
     // new code here
 }
 else
 {
     // old code here, unchanged
 }

As the system was a Java EE application we chose FF4J as the toggle framework. FF4J is a simple but flexible feature flipping framework, and as the toggles are managed in a simple XML file it made it easier to implement our Jenkins solution described later. To be clear, there are many frameworks that could be used, and it's simple to create your own. To be able to support replacing the toggle framework later, and to make it as simple as possible for developers to create toggles/flips, the FF4J functionality was wrapped in a helper class which made adding a toggle a one-line change.
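As an illustration, such a helper might look like the minimal sketch below. The class and method names are made up for this example; only the FF4j calls are real FF4J API.

import org.ff4j.FF4j;

public final class ToggleManager {

    // Loads the toggle definitions from the ff4j.xml baked into the package
    private static final FF4j FF4J = new FF4j("ff4j.xml");

    private ToggleManager() { }

    // One-line toggle check for developers; hides the framework behind a helper
    public static boolean isToggleEnabled(String toggleId) {
        return FF4J.check(toggleId);
    }
}

Wrapping the framework like this means FF4J could be swapped out later without touching every call site.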

We also implemented a Feature Toggle Tag Library so that JSF tags could be wrapped around content in JSF (Java Server Faces) pages to enable/disable it depending on the FF4J toggle being on/off. A JavaScript wrapper was also developed that allowed toggles to be checked from within our client-side JS code.

A purposeful decision was made to bake the toggle config into the built code packages to prevent the toggle definition files and the code drifting apart. This meant that the toggle file is baked into the built JAR/EAR/TAR file for deployment. Once the package is created, the toggles are set for that package, and this prevents a code-configuration disconnect from forming, which is the cause of many environmental stability issues. This was sometimes controversial, as teams would want to change a toggle after deployment for the simple reason that they forgot to set it correctly, or would hastily want to flip toggles on code in test, which was not the design goal of our Feature Toggles (although that is a valid scenario for toggles, and a separate design was introduced for turning features on/off in Production).

All code changes are wrapped in a toggle and the toggle is set to OFF in the repository until the change has passed the definition of done within the sprint team. Once the change has been completed ('done' includes testing in sprint) then the toggle is turned ON in the master branch, making it ON for all teams immediately (as the codebase is shared). As the toggles for new untested changes are OFF in the code, and all package builds come from the master code branch, a new feature cannot be tested on a test environment without first flipping a toggle. So how can it be tested and turned on without impacting other teams? For this we introduced Auto Toggling in our Jenkins job. Our Jenkins CI jobs that build the code for test include a parameter for the sprint team indicator (to identify the team the built package is for), and this then automatically turns on the WIP toggles belonging to that team. This means that when Sprint Team A triggers a build, Jenkins will update the new work-in-progress toggles for Team A from OFF to ON in the package. This package is marked as WIP for Team A and so cannot be used by other teams. Team A can now deploy and test their new functionality, but other teams will still not see their changes. Once Team A are happy with their changes they turn the toggle ON in master for all teams and it is widely available.
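The essence of that auto-toggle step is to flip the enable flag in the FF4J XML for the requesting team's work-in-progress toggles before the package is built. The sketch below is purely illustrative; the team-prefix naming convention, the file path and the use of a standalone utility are assumptions for this example, not our actual implementation.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class WipToggleFlipper {
    public static void main(String[] args) throws Exception {
        File ff4jXml = new File(args[0]);  // path to the ff4j.xml in the build workspace
        String teamPrefix = args[1];       // e.g. "TEAMA-", from the Jenkins job parameter (assumed convention)

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(ff4jXml);

        // FF4J's XML declares each toggle as <feature uid="..." enable="true|false">
        NodeList features = doc.getElementsByTagName("feature");
        for (int i = 0; i < features.getLength(); i++) {
            Element feature = (Element) features.item(i);
            if (feature.getAttribute("uid").startsWith(teamPrefix)) {
                // Turn this team's WIP toggles ON in this package only; master is untouched
                feature.setAttribute("enable", "true");
            }
        }

        // Write the modified toggle file back before packaging
        Transformer transformer = TransformerFactory.newInstance().newTransformer();
        transformer.transform(new DOMSource(doc), new StreamResult(ff4jXml));
    }
}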

Unfortunately not everything can be toggled easily, so a decision tree was built for developers to follow to understand the options available. Where the toggle framework was not an option, other approaches could be used. Sometimes a change can be "toggled/flipped" in other ways (e.g. a new function with a new name, a new column in the DB for new data, or an environment variable). The point is that the ability and desire to use Feature Toggles is not just about the toggle framework you choose; it's a philosophy and approach that must be adopted by the teams. If, after all other options, there is no way to toggle a change, then teams have to communicate amongst themselves and take appropriate action to flag a potential breaking change.

What worked well

So what were the benefits seen after introducing trunk based development in the team? Firstly, the benefit of maintaining fewer code branches was immediately obvious. From day one every team was now on the latest codebase, and no merging and cherry picking was required at the end of a sprint. This saved time, reduced merge-related defects and increased the agility of the team. Each team also saved time and effort by no longer having the housekeeping burden of maintaining its own branch. Use of environments became more flexible, as in theory every new feature was on every environment behind a toggle, meaning that Team A's new feature could now be tested on Team B's environment should the need arise. Cross-team design issues were spotted earlier, as clashing changes between teams were seen during development and not later when the code was merged together prior to a release. Teams were now able to share new code more easily, because as soon as a new helper function was coded it could be used by another team immediately. Any improvements to the CI/CD process, code organisation or tech debt could now be embraced by all teams at once, with no delay while they filtered through each team's branches.

What didn’t go so well

Of course it's not all perfect, and we have faced some challenges. All teams are now impacted immediately by the quality of the master/trunk branch. Broken builds have been reduced with the introduction of Pull Requests and Jenkins running quality gate builds on check-ins to master, but any failure in the CI/CD build pipeline immediately impacts all the teams, so these issues need to be resolved ASAP (as they should be in any team anyway). Teams must work together to resolve them, which does bring positives. Which brings me to the next point: communication.

When teams are using trunk based development and sharing one codebase, communication between teams becomes more important. It is critical that there is open dialogue between the teams to ensure that issues are resolved quickly, and that any changes that impact the trunk are communicated quickly and efficiently. Whilst a lack of cross-team comms is exacerbated by trunk based development, any communication issues are a death knell to a dev team and should be resolved anyway. If this is an issue for your teams then be aware that trunk based development may not be for you until your teams are collaborating better. That said, introducing trunk based development is a good way to encourage teams to collaborate better for their own benefit.

Feature toggles/flags are a key enabler for trunk based development, letting you isolate changes until they are ready, but there is no denying that they add complexity and technical debt. We add a toggle around all changed code and use the Jira ID to name the toggle. Whilst we have a neat toggle solution, there is no getting away from the fact that extra conditions mean extra complexity in your code. Unit tests must consider feature toggles and test with them both on and off (see the sketch below). Feature toggles can make code harder to read and increase the cyclomatic complexity of functions, which may result in code quality gates failing; we had to slightly adjust our Sonar quality gate to allow for this.

Whilst toggles do add technical debt to your code, this is temporary debt, an investment to improve overall productivity, and so in my opinion a valid form of technical debt. It's referred to as debt because it's OK to borrow on a manageable scale if you pay it back over time. Removing toggles is critical to this process, and yet this is one area we have yet to address sufficiently. A process to remove toggles has been introduced, but it's proving harder than expected to motivate teams to remove toggles from previous releases at the same rate as they are being added. To this end we have added custom metrics in SonarQube to track toggle numbers, and we will use this key metric to ensure that the number of toggles stabilises/reduces.
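On the unit testing point above, a toggle's state can be driven directly through the framework from a test so that both code paths are exercised. A rough sketch using FF4J's default in-memory store and JUnit 4 (class and feature names are illustrative only):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.ff4j.FF4j;
import org.ff4j.core.Feature;
import org.junit.Test;

public class Feature12345Test {

    @Test
    public void exercisesBothTogglePaths() {
        FF4j ff4j = new FF4j(); // default in-memory store, no XML file needed
        ff4j.createFeature(new Feature("feature12345", false));

        // Toggle OFF: assert the old behaviour here
        ff4j.disable("feature12345");
        assertFalse(ff4j.check("feature12345"));

        // Toggle ON: assert the new behaviour here
        ff4j.enable("feature12345");
        assertTrue(ff4j.check("feature12345"));
    }
}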

The state of toggles is an additional thing the team needs to manage; however, this is offset by the power to control the release of new code changes to other teams. We have found care should be taken to ensure that feature toggles are not misused for unintended purposes. Be clear on what they are for and when they should/shouldn't be flipped (for us it's in the Definition of Done for a task in sprint). There can be demands to use them as a substitute for proper testing. Be clear on the types of Release/Feature toggles and provide guidance for what they can be used for. There is no doubt that they can be re-used at release time to back out unwanted features, but this should be a controlled process and designed in from the start. We already had Release Switches for turning features on and off, but Feature Toggles (in our case) are used purely for isolating changes from other teams until ready. We strive to ensure that the toggle is set ON or OFF during the sprint, before the code is moved through to release testing.

Conclusion

The benefits you derive from moving towards trunk based development will vary depending on your current processes. For a full guide to the general benefits of trunk based development check out this excellent resource trunkbaseddevelopment.com.

In our case the roll out was a success and achieved the desired improvements in cycle time and developer productivity. The process of delivering technical change was simplified in terms of configuration management, and developers became more productive. This was despite the problems we faced, as listed above.

There is no doubt that building feature toggles into your design from the start would make some of the technical challenges easier, but we have proved it can be done with an existing brownfield monolith.

Next Steps

Next steps are to continually improve the toggle framework to reduce the instances where a change can't be toggled on/off, and to make it easier to communicate about changes that will impact all teams. A renewed emphasis on removing old toggles from the code is required to ensure that teams and change approvers accept the "tax" each release of removing redundant toggles.

RunAs Issue? Check Secondary Logon Service.

On Windows, if you are having problems trying to perform an action as a different user via the RunAs command, it might be due to the 'Secondary Logon' service not running. I recently had this problem on Windows Server and after some investigation found that the 'Secondary Logon' service had been disabled; starting the service resolved the issue. By default it is set to 'Manual'.

The error you get from the RunAs command may vary depending on OS version but will report a problem running a process or service. This is the error I get on Windows 10:

1058: The service cannot be started, either because it is disabled or because it has no enabled devices associated with it.

SonarQube migration issue- Jenkins Using old URL

I recently migrated a SonarQube server from one server to another in order to scale out the service to our dev team. All went well until builds failed due to them looking at both the old and new server URLs for the Sonar results and so I’m writing some notes here to help me (and others) out in the future if I hit this again.

I installed the same version of SonarQube on the new application server as was on the old server. The database was not being moved, just the application server (the server with the Sonar service running).

After installation I ensured that the same custom config settings were made on the new server as had been made on the old server, and ensured that the same plugins were in place. I then stopped the Sonar service on the old server and started the service on the new box.

Once Sonar was confirmed to be up and running, connecting to the database and showing the project dashboards, I updated the Jenkins server configuration to point to the new box. All looked good, so I ran a build and then got this (log truncated):

INFO: ANALYSIS SUCCESSFUL, you can browse http://OLDSERVER/dashboard?id=123
INFO: EXECUTION SUCCESS
The SonarQube Scanner has finished
Analysis results: http://NEWSERVER/dashboard/index/APP
Post-processing succeeded.
withSonarQubeEnv
waitForQualityGate
java.lang.IllegalStateException:
Fail to request http://OLDSERVER/api/ce/task?id=3423
at org.sonarqube.ws.client.HttpConnector.doCall(HttpConnector.java:212)

Bizarrely, the Jenkins build had managed to use both the new Sonar URL and the old one. The upload was successful to the new server, but some of the links for the report point to the old server. Also, the Quality Gate check, which validates that the Sonar Quality Gate was successful, tried to read the report on the OLD server and therefore failed, as it's not there (because it's on the new Sonar server).

After checking Jenkins for any reference to the old Sonar server and restarting the service to clear any caches, I was still getting the error. Eventually I ran a Jenkins build and interactively peeked into the Jenkins workspace on the Jenkins slave, and in there found an auto-generated file containing lots of Sonar config settings. This file, SonarQubeAnalysisConfig.xml, is created during the Jenkins build initialisation stage. In the file I found references to the new Sonar URL, but also this property pointing to the old URL:

sonar.core.serverBaseURL

This value is set in the SonarQube configuration and is not dynamic, so it will not be updated when you migrate the server or change the server URL/port. To change it open SonarQube > Administration > Configuration > General and change Server base URL to your new URL (e.g. http://yourhost.yourdomain/sonar). It says that this value is used to create links in emails etc., but in reality it is also used to integrate results.

Visual Studio 2019 Offline Installer

Microsoft have now released Visual Studio 2019 and, like VS2017, there is no offline installer provided by Microsoft, but you can generate one by using the web installer setup program to write all the packages to disk.

To create the offline installer just download the usual web installer exe from the Microsoft download site and then call it from the command line passing in the layout flag and a folder path like this:

vs_community --layout "C:\Setup\VS2019Offline"

In the example above I'm downloading the Community version, but if it's the Enterprise edition's installer then the setup file you downloaded will be called vs_enterprise.

The packages will all be downloaded and a local setup exe installer created.

If you want to limit the download to English then pass the --lang en-US flag too.

vs_community --layout "C:\Setup\VS2019Offline" --lang en-US

You can also limit which workloads to download, if you know their names, by listing them after an --add flag.

Enjoy your offline installs.

Easy Upgrade Tool For NPM on Windows

Having recently needed to upgrade my version of NPM on a Windows machine, without upgrading my Node.js installation, I came across this excellent tool for doing just that without following a complex set of steps. Adding it here for others to find and for me to remember 🙂

The tool is called npm-windows-upgrade and can be found on GitHub. The tool simplifies the numerous steps previously required on Windows and is now the recommended approach by the NPM team.

npm-windows-upgrade tool

In the end I ran this tool several times to test out various versions and it worked well, upgrading NPM in place successfully.

New WP Code Snippet Editor Online Tool

You can now get the benefits of my Live Writer plugin in your browser without using Live Writer.

Use WordPress…? Post code snippets…? Well now you can customise the look and feel of the snippets whilst previewing them in a new online tool at https://WPCodePreview.com.

Ten years ago I blogged here about how to create a plugin for the popular (at that time) Microsoft Windows Live Writer blog editor and made the plugin available for download. Over time I added some new features and it has proved very popular, but times change and Windows Live Writer was dumped by Microsoft, then resurrected as an open source project, now called Open Live Writer (and the plugin was updated). Over that time more people have moved to other editors and platforms to edit their content (as have I), so I have now replicated the main features of the plugin in an online web application. No more need for Live Writer, unless you still like using it, in which case carry on.

Like the Live Writer plugin before it, the site provides a simple way to edit a code snippet and get it looking just how you want: line numbers, line highlighting, language syntax to use, etc. All these features are provided by the WordPress "code" shortcode functionality documented here. Once you have the snippet looking how you want, copy it and paste it into your blog post editor of choice.
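For example, a finished snippet pasted into WordPress might look like this (attribute names as per the WordPress shortcode documentation linked above):

[code language="csharp" highlight="2" firstline="10"]
    // your code here
[/code]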

For more information check out the site at https://WPCodePreview.com or the User Guide.

Cheap Azure Hosting via Static Web Sites

Something that is pretty cool and not that well known is that you can now host your static web site in the cloud with Microsoft Azure straight from your Azure storage account. The functionality is currently in preview only, but it's functional enough to get up and running quickly if you have an Azure account.

Why host a static site?

Whilst it does depend on your requirements, many sites are quite capable of being static sites with no server-side processing. The classic example is a blog site, whereby the site could just serve up static HTML, images and JavaScript straight from disk, as the content changes fairly infrequently.

The growth in JavaScript libraries and the functionality of frameworks like React.js make static sites even more viable. Using the power of JavaScript it's possible to create rich, powerful web applications that don't need server-side processing. There has been an explosion of static site generators over recent years that will take text or markdown files and generate a complete static site for you. Two very popular generators of note are Gatsby (React.js based) and Jekyll (Ruby), but there are literally hundreds of others, as can be seen in this online directory: staticgen.com.

Hosting a static site in Azure

Of course you could always host a static site in Azure inside a full-featured web site (via a hosted VM or Azure web site), but the beauty of hosting a static-only site is that you can serve it straight out of a storage area, so you don't need to pay for any compute power, which makes it extremely cheap (or even free). You just pay standard Azure storage rates, which include a generous data transfer limit (about 5GB a month).

If you think about it, hosting a static web site is just a natural extension for a cloud offering like Azure, as it already hosts files and binary content on public URLs in Azure Storage. This new functionality, though, makes it more explicit and enables web-site-like functionality such as custom error pages. It is also possible to add your custom domain name to the site and link up SSL (although unfortunately at the moment SSL requires use of an Azure CDN, which adds to the cost).

So how do you host your site, well follow the official instructions here.

Once you have a web page being served by the default Azure storage URL you can proceed to add your own custom domain name using these steps.

Now you should have a fully working site, but to keep costs even lower we can utilise caching of our static content to encourage the client browser to cache the files, thus reducing our data transfer costs. Luckily it is easy to set cache control settings on our Azure Blob storage items. This blog post by Alexandre Brisebois covers doing it in code, but if you are just testing, or have a site that doesn't change much, you can do it manually via the Azure Portal. To do so, open the Azure Portal, browse to your Storage Account, and then using Storage Explorer find the files you want to set caching for and go to their properties. In the Properties dialog you can set the Cache-control value in the HTTP header to something like:

 "public, max-age=86400". 

There are other alternatives to Azure for hosting static files and some offerings are very cheap or free. Some of these are more advanced than the current Azure offering and provide additional features such as integrated SSL and contact forms. One such vendor is netlify.com but there are others.

In summary, if you want to host a site cheaply and you don't really need server-side processing then consider hosting a static site, and if you're already using Azure then it's a simple step to give it a go.

Linux Home Server Build

On this blog I have posted many times about my home server configuration and, seeing as I've recently updated it, I thought I'd give a quick overview of the changes made and provide some tips for setting up a Linux home server.

My home server (an HP MicroServer) is used for NAS file storage, running Plex Media Server and a few other activities including client backups. Previously my server was running Windows Home Server 2011, which was an excellent home server OS from Microsoft based on Windows Server 2008 R2. In addition to file sharing it also allowed for easy server administration and client PC backups. Client PCs would back up images to the server, allowing for client files or whole systems to be restored. Unfortunately Microsoft discontinued Windows Home Server and it is no longer supported, and Windows Server 2008 R2 updates will stop in July 2019. In terms of replacements there were several options. The Windows offerings are all too expensive and seem overkill for a home server, but serious Linux and BSD options include FreeNAS, Amahi, Ubuntu Server and numerous Linux desktop distros. I would also recommend looking at a Synology if you have the budget.

In the end I chose the Lubuntu 18.04 (LTS) desktop distro for my needs. Why a desktop distro for a server? Well, I don't need to squeeze every ounce of performance from the server, and the Lubuntu desktop is so lightweight and efficient that I can have a graphical desktop environment as well as great server performance. It is handy to have the option to RDP into the box and use the Lubuntu desktop as an alternative to SSH when required.

I installed the OS, and checked for updates:

sudo apt-get update
sudo apt-get dist-upgrade

After installing the OS I inserted the data drives and mounted them under a /mydata mount point so that I can easily access all the files on those drives. To make these mount points persistent I edited /etc/fstab to add each one using the UUID of the partition (found via the Disks app or GParted):

sudo nano /etc/fstab

Then add an entry for each partition to mount:

UUID=YOUR_OWN_PARTITION_UUID /mydata/d1/ ext4 defaults 0 0
UUID=YOUR_OWN_PARTITION_UUID /mydata/d2/ ext4 defaults 0 0

Configure the Firewall

The ufw (uncomplicated firewall) firewall is installed on Lubuntu by default but is turned off, so first check its status:

sudo ufw status 

If inactive then activate it with:

sudo ufw enable

To see its status and current rules:

sudo ufw status verbose

Add new rules with:

sudo ufw allow PORTNUMBERHERE 

Install XRDP Remote Desktop Service

Next I set up XRDP for remote access to this headless server.

sudo apt-get -y install xrdp
sudo ufw allow 3389/tcp
sudo systemctl enable xrdp
sudo systemctl restart xrdp

At this point I hit many issues, but the hack below seemed to be the one that led to a working XRDP session, using this command to create a .xsession file in the home directory of the connecting user:

 echo "lxsession -s Lubuntu -e LXDE" > ~/.xsession 

For more information see this article.

Setup Cockpit Web Interface

For more remote administration and monitoring goodness I installed Cockpit which is a web based interface for servers with lots of useful features.

sudo apt-get install cockpit
sudo ufw allow 9090

Then browse to https://yourserverip:9090

For a useful Cockpit install guide check out this link.

Setup Samba for file sharing

The whole point of a file server is to share files, and in order to support file sharing with Windows devices on the network you'll need to set up Samba. A good link for setting up Samba can be found here.

sudo apt-get install samba

Set a password for your user in Samba.

sudo smbpasswd -a <user_name>

All the folder shares and their configuration are stored in the smb.conf file which can be edited by opening it up in a text editor like nano.

 sudo nano /etc/samba/smb.conf 

In the smb.conf file you may want to change the workgroup name to the same as that used by your Windows PCs, and then add each of your folder shares.

[<folder_name>]
    path = /folder/path
    valid users = <user_name>
    read only = no

Once you have made the required changes, restart the smbd daemon service:

sudo service smbd restart

You'll also likely need to punch a hole in the ufw firewall for Samba:

sudo ufw allow Samba

Once Samba has restarted, check your smb.conf for any syntax errors with testparm:

testparm

For a good guide on more complex permissions check out this guide here.

It’s worth noting that users need to have Unix perms on the underlying folders in order to be able to access them. Amend Linux file permissions as required.

Scheduled Tasks with Cron

For scheduled tasks (backup jobs etc) I have configured Cron jobs to run bash scripts (although I could have kept my existing PowerShell scripts as PowerShell now runs on Linux too) but they needed a rewrite anyway.

Open your cron job file with…

sudo crontab -e 

then add entries like this example:

# run test job at 11:15 every day 
15 11 * * * . /etc/profile; /bin/bash /home/me/testjob.sh > /tmp/cron.out

For more info on Cron check out this guide and for an awesome helper tool for building the Cron schedule times check out corntab.com.

Summary

So I've covered the basics of how I've set up my home server using Lubuntu, which others may find useful. I've been running this setup for a few months and so far I am very pleased with its performance and stability.

Future steps are to install Plex Media Server and configure client PC backups. As I want to use Snaps for Plex, I am waiting for the official Plex Snap package to come out of beta as I'm not in a rush. Alternatively I may use Docker to run Plex. To replace the client PC backup feature I previously had with Windows Home Server, I will soon be moving to a client imaging tool such as CloneZilla, Acronis or Windows Backup, and then copying the images to the server.