A key part of most personal data backup strategies involves backing up data to an external USB drive, but I don’t want to leave mine constantly connected. In this post I cover how to back up to an external drive using a scheduled, automated process that runs only when the drive is actually connected.

I don’t believe in leaving an external backup USB drive permanently connected to my system (PC or server), to avoid the backup data being corrupted or deleted. Also, if the backup drive is destined for off-site storage then it’s not possible for it to be always connected anyway. Using the steps below I can perform a full data backup by simply physically connecting the drive (plugging in a USB drive or docking a SATA drive into a USB dock, for example) and leaving it overnight. In the morning I can safely disconnect it and store it away without even having to log onto the machine.

The basic flow:

1) Assuming the machine is always on (which my server is), a scheduled task runs every night and executes a backup script, regardless of whether the backup drive is connected. If your machine is not always on, schedule the task for a time when the machine is usually on.
2) The script checks for the existence of a specific folder on the connected drive (e.g. U:\Backup\). If the drive isn’t connected then the folder path won’t exist and the script just exits happily. If the drive has been connected then the folder path will exist, and the script continues and copies over the data.
3) After a successful backup the script safely disconnects the USB drive. This step is technically optional, as Windows supports pulling a USB drive out without a soft eject, but it’s highly recommended to tell Windows first to avoid data corruption.
4) Optionally, output a backup log somewhere so you can check periodically that the job worked without having to reconnect the USB drive.

More detailed steps:

Firstly, connect the external USB drive and make a note of the drive letter it uses. Changing the drive letter to something memorable might help (B for backup, U for USB, O for off-site etc.). We’ll use U for this example. Next we need to write a batch file, a simple program or a PowerShell script to perform the backup using the backup tool of your choice. I use Robocopy driven by a PowerShell script, and below is a simplified version of my script.

#==========================================================================================================
# Checks for the presence of the offsite backup USB drive, backs up the relevant data to the drive
# if present, and exits gracefully if not. Run the script every night and it only backs up data
# when the offsite USB external drive is connected.
#==========================================================================================================
Clear-Host

# Set file paths and log file names
$timestamp = Get-Date -Format yyyyMMdd_HHmmss
$LogBasePath = "D:\Logs\OffSiteUSBBackup"
$LogFile = "$LogBasePath\USBBkUp_$timestamp.txt"
$USBDriveLetter = "U"
$USBDriveBackupPath = "U:\Backup"

# Make errors terminating so the try/catch below kicks in and handles them gracefully
$ErrorActionPreference = "Stop"

try
{
	# Check the USB drive is connected by verifying the path
	if (Test-Path $USBDriveBackupPath)
	{
		# Now copy the data folders to the backup drive
		Robocopy C:\Docs "$USBDriveBackupPath\Docs" /MIR /LOG:$LogFile /NP
		Robocopy C:\Stuff "$USBDriveBackupPath\Stuff" /MIR /LOG+:$LogFile /NP

		# Copy the log files too
		Robocopy $LogBasePath "$USBDriveBackupPath\Logs" /MIR /NP

		# Sleep for 60 seconds to ensure all transactions complete, then disconnect the USB drive
		Start-Sleep -Seconds 60
		& "c:\DevCon\USB_Disk_Eject" /removeletter $USBDriveLetter
	}
}
catch
{
	# Catch the error, log it somewhere, but make sure we still eject the drive
	Start-Sleep -Seconds 60
	& "c:\DevCon\USB_Disk_Eject" /removeletter $USBDriveLetter
}

It is key that the script checks for the existence of a specific folder on the drive letter belonging to the external drive (U in our example). Only if it’s present do we continue with the backup.

I make sure that Robocopy logs its output to a file, and that the file lives on the server and is also copied to the USB drive (as a record of the last backup date etc.). I also report the running of the PowerShell script to the Event Log for reporting purposes, but that is largely outside the scope of this post.
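If you want to do something similar, here’s a minimal sketch using the standard event log cmdlets (the source name and event ID are just example values):

# One-time setup (run elevated): register an event source for the script (example name)
New-EventLog -LogName Application -Source "OffsiteUSBBackup"

# Then, at the end of the backup script, record that the run completed
Write-EventLog -LogName Application -Source "OffsiteUSBBackup" -EventId 1000 -EntryType Information -Message "Offsite USB backup completed."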

It’s safer to tell Windows that you’re about to pull the drive out, so I call USB_Disk_Eject from within my script, passing in the drive letter. The script waits 60 seconds beforehand to ensure all file transactions have completed before the drive is disconnected. There are a few tools available for ejecting USB drives, such as Microsoft’s Device Console (DevCon.exe), but I use USB Disk Ejector (https://github.com/bgbennyboy/USB-Disk-Ejector).

Now set up a Scheduled Task in Windows to run the script every night at a set time. As the script is scheduled to run every night, all I have to do when I want to perform a backup is connect my backup drive and leave it until the morning. The script will run overnight, find the drive, back up the data and disconnect the drive. In the morning I can physically disconnect the drive safely without having to log onto the machine. Periodically I’ll check the backup logs and the backup drive to make sure all is well and to check the remaining drive space etc.
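If you’d rather script the task creation too, here’s a minimal sketch using the ScheduledTasks cmdlets available on Windows 8 / Server 2012 and later (the task name, time and script path are example values):

# One-time setup (run elevated): run the backup script every night at 2am
$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -ExecutionPolicy Bypass -File D:\Scripts\OffsiteUSBBackup.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName "Offsite USB Backup" -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest

On older versions of Windows the schtasks.exe command line or the Task Scheduler GUI achieves the same thing.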

Do you like this approach? Got a better idea? Let me know via the comments.


Just a quick note to announce that a new version of my Windows Live Writer plug-in for source code syntax highlighting in WordPress.com posts has been released.

CaptainKernel posted a comment on this blog to say he was having issues with the preview feature not formatting the code correctly. After some investigation this turned out to be caused by a change on the WordPress.com side.

Anyway, a new fixed version, v1.4.2, is now available to download here, which resolves this issue.


My laptop running Windows 8.1 decided not to boot this week, instead giving me a blue screen with the error "System Thread Exception Not Handled". As I’d not installed anything new recently I guessed it could be related to a video driver issue, so I tried to Safe Boot – but wait, where is Safe Boot in Windows 8? Google and the Tom’s Hardware site came to the rescue with this excellent article for resolving the issue. Note the article’s use of BCDEDIT from the Command Prompt to turn on the legacy Windows boot menu (accessed by pressing F8 during boot).

At the C:\ command prompt:  BCDEDIT /SET {DEFAULT} BOOTMENUPOLICY LEGACY

You can dig a bit more into this command on the ‘Windows Developer Center’ site and check out the various options you can specify, including a useful ‘onetimeadvancedoptions’ option to turn on the F8 menu for the next boot only. For more detail on the Windows 8 start-up settings, including how to restart in Safe Mode from within Windows, check out this page on the Windows site. Also note that you can use MSConfig (Start > Run > "msconfig.exe") to restart Windows in Safe Mode too.
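For example, something like this should bring up the advanced options menu on the next boot only:

At the C:\ command prompt:  BCDEDIT /SET {DEFAULT} ONETIMEADVANCEDOPTIONS ON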

To return to the standard Windows 8 boot menu (for faster boot times), once you have resolved your issue, you can run the BCDEDIT command again but this time set the BOOTMENUPOLICY to STANDARD:

At the C:\ command prompt:  BCDEDIT /SET {DEFAULT} BOOTMENUPOLICY STANDARD

If you get an ‘Access Denied’ message make sure the command prompt window is running as Administrator (right click the shortcut > Run as Administrator).

As for my laptop issue, I used Safe Mode to uninstall my video drivers, enabling me to boot normally and then successfully update the drivers.


If you need to host a static HTML page within an ASP.NET MVC website, or you need to mix ASP.NET WebForms pages with an MVC website, then you need to configure MVC routing to ignore requests for those pages.

Recently I wanted to host a static HTML welcome page (e.g. hello.htm) on an MVC website. I added the HTML page to my MVC solution and configured the web site’s default page to be the HTML page (hello.htm). It tested OK at first, but then I realised it was only displaying the hello page because I’d set it as the Visual Studio project’s start-up page; I hadn’t actually configured the MVC routes correctly, so it wouldn’t work once deployed.

For this to work you need to tell MVC to ignore the route if it’s for the HTML page (or the ASPX page in the case of mixing WebForms and MVC). Find your routing configuration section (for MVC 4 it’s in RouteConfig.cs under the App_Start folder; for MVC 1–3 it’s in Global.asax) and use the IgnoreRoute() method to tell routing to ignore the specific paths. I used this:

routes.IgnoreRoute("hello.htm"); //ignore the specific HTML start page
routes.IgnoreRoute(""); //to ignore any default root requests

Now MVC ignores requests for the hello HTML page and leaves IIS to handle returning the resource, and hence the page displays correctly.


Sometimes if you are on a new machine, or using Remote Desktop for the first time, you might find that the display size is not correct when you connect to a remote machine. If the remote session won’t go full screen it can be annoying. To resolve this, launch Remote Desktop (tip: Start > Run > mstsc is the easiest way) or go via the Start Menu (Start > Programs or All Programs > Accessories > Remote Desktop Connection). Once launched, click ‘Options’ or ‘Show Options’ and then on the ‘Display’ tab adjust the size of your remote desktop screen. Move the slider all the way to the right for full screen.


Once you connect, the settings become the default for all Remote Desktop connections, so you’ll only need to do this once. The settings are saved in a Default.rdp file, usually stored in ‘My Documents’ or ‘Users\<UserName>\Documents’. It is possible to save multiple *.rdp files and pass one to MSTSC as a command-line parameter if you need to connect to different machines with different settings.
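For example (the .rdp file name here is just an illustration):

At the C:\ command prompt:  MSTSC "C:\Users\<UserName>\Documents\MyServer.rdp"

You can also specify the target machine directly with MSTSC /v:<server>, and force full screen with the /f switch.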


I’m keen on fostering a learning culture within teams and was drawn to this article on InfoQ, Creating a Culture of Learning and Innovation by Jeff Plummer, which shows what can be achieved through community learning. In the article Jeff outlines how a learning culture was developed within his organisation using simple yet effective crowd-sourcing methods.

I have implemented a community learning approach on a smaller scale using informal Lunch & Learns, where devs give up their lunch-break routine to eat together whilst learning something new, the presenter/teacher being one of the team who has volunteered to share their knowledge on a particular subject. Sometimes the presenter already has the knowledge they are sharing, but other times they have volunteered to go and learn a subject first and then present it back to the group. Lunch & Learns work even better if you can convince your company to buy the lunch (it’s much cheaper per head than most other training options).

It’s hard to justify expensive training courses these days, but then it’s also never been easier to find free or low-cost training online. As Jeff points out, innovation often comes from learning subjects not directly relevant to your day job. In my approach to learning with teams I have always tried to mix job-relevant subjects with seemingly less relevant ones. For example, a session on Node.js for a team of .NET developers would be hard to justify in monetary terms, but I’ve no doubt the developers took away important points around new web paradigms, non-blocking I/O, web server models, and much more. Developers like to learn something new, and innovation often comes from taking ideas that already exist in a different domain and applying them to the current problem.

I agree with Jeff’s point that the champions are key to the success of this kind of initiative. It is likely that the first few subjects will be taught by the champion(s), and they will need to promote the process to others. One tip to take some of the load off the champions is to mix in video sessions as well as presenter-led sessions; there are plenty of excellent conference session videos available and these can make good Lunch & Learn sessions. Once the momentum builds it becomes the norm for everyone to be involved, and this crucially triggers a general sense of learning and of sharing that learning experience with others.


I recently decided to add a custom domain name to a free Azure website that I use for development purposes. As the FREE Azure website tier doesn’t support custom domains (a shame, but hard to complain as it’s free) I needed to upgrade the site to ‘Shared’ mode. This is easily done via the Scale settings in the Azure portal.

Firstly, however, I needed to move my current Azure website under a different subscription to the one I used to set it up. The problem is that you cannot move sites between subscriptions yet (please fix this, Microsoft). To get around this I needed to create a new website under the correct subscription and then publish my website code to it. Luckily this is easy to do as it’s just a basic website, but I can imagine that this could be painful if you have a bunch of storage accounts or a database to re-create.

Using the Azure portal, creating a new site is a simple process: click +NEW at the bottom of the portal and choose the website option from the menu.


Once created, all I needed to do was download a publish profile for the new site (see this tutorial link for how to publish to Azure) for Visual Studio to use. Once downloaded, I opened my VS2012 solution, brought up the Publish dialog, pointed it to the new publish profile file and clicked Publish. In just a few seconds I had a new Azure website up and running with my existing MVC web application. This was very smooth, with no change to config or code required; the sheer simplicity of it impressed me as I was short on time.

Next I needed to allocate my custom domain, which as previously mentioned is not available for FREE websites, so I needed to upgrade to SHARED mode: from the Azure portal go to the website’s configuration > Scale > click SHARED (remember this tier incurs a cost).


Once upgraded I could then immediately select DOMAINS and set up my CNAME and A record references; for more information see this useful link (configuring a custom domain name for a Windows Azure web site). It’s worth reading the comments on that post too, as they cover issues with registering the domain without the WWW subdomain.
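As a rough illustration, the entries at your DNS provider end up looking something like this (example.com and the site name are placeholders; the actual IP address for the A record is shown on the portal’s DOMAINS page):

www.example.com     CNAME     mysite.azurewebsites.net
example.com         A         <IP address shown in the portal>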

Once the DNS entries had propagated I had my existing site up and running under a custom domain, on a shared Azure instance, all with very little effort.



