Ray Ozzie’s Dawn of a New Day

I would recommend that everyone interested in technology read Ray Ozzie’s (Microsoft’s Chief Software Architect) memo, "Dawn of a New Day". It’s a fascinating insight into the vision of a key player in the industry and a call to arms for Microsoft and its partners. What interests me most is that it is a conceivable vision, and one that I share. This vision of "appliance-like connected devices" being the norm and consuming "Cloud Based Continuous Services" is easy to visualise because that day is dawning around us now. Smartphones, tablets, connected TVs and the like are set to become the principal means of interacting with our online world.

"Complexity kills"

Whenever I’m called upon to help out family and friends with their PCs it often strikes me how inappropriate these machines are for the needs of the basic user. The power and complexity of the PC is its great strength, but it also often makes these machines too difficult to manage and secure. In reality, huge numbers of basic PC users now only use their browser and don’t install software applications anymore. These people are also now enjoying the simplicity provided by smartphone OSs such as Android and iOS. In fact many of these users are able to fulfil their needs via App Stores and the like whilst their PCs gradually gather dust. The future vision where devices rule makes total sense. Whilst Apple is proving the master of the device market, Microsoft have the ‘Windows’ advantage. The failure of Linux netbooks to maintain market share shows that, given similar pricing models, consumers will stick with the familiarity and safety of Windows, and this is an opportunity for Microsoft. They could capitalise on it with a lean, appliance-like version of Windows in the future.

"Complexity sucks the life out of users, developers and IT. " – I have seen numerous projects needlessly suffer in delivery due to overly complex designs, sometimes from overly complex requirements. Because we can create software to be configurable and feature rich we feel we have to, but of course every additional feature brings additional overhead. This overhead my be felt by the end user or perhaps just the developer and testers trying to implement or test the features.

"Cloud-based continuous services"

Ray’s vision of cloud services being continuous is key for the connected future. Consumers need to be able to depend on the cloud always being available and willing to serve them. As these services grow in importance they will be expected to grow in number and complexity. This is a real challenge for industry engineers and we really need to learn the lessons of the hugely scalable consumer web sites such as Facebook and Google. I look forward to seeing what technologies are produced to aid the development of these services and which scalability patterns move towards the mainstream.

It’s an exciting future for our industry and one that I look forward to playing my part in.

The Future of Windows Home Server

Microsoft’s recent announcement that the key Drive Extender feature is to be removed from the new version of Windows Home Server codenamed ‘Vail’ has resulted in much dismay within the community. Many commentators, including the vocal WHS user community itself, have started to question the future of this product. In this post I give my take on where I see WHS in the medium term and consider how it can fit alongside the “new dawn” of a Cloud Computing era.

How big is the Drive Extender issue?

Firstly, what’s all this about Drive Extender (DE)? Well, DE is a really neat feature of WHS that pools all the hard drives in the system into one logical data drive. This means that you can throw in a mixed selection of hard drives of any type (USB, SATA etc) or capacity and the system enables you to see them as one. It also provides fault tolerance through data duplication, which protects your data from drive failure. It is one of the major features of Windows Home Server (WHS) – I would argue one of three, the others being the client backups and remote access. Sure, the product does much more than that, but it’s fair to say that all of WHS’s features are available in other products in some shape or form; it was the combination of these three features into one customisable platform that made WHS stand out for me.

Microsoft’s announcement that DE will be removed from the next version of WHS, code named Vail, immediately removes a major reason to buy into the new version, and this has been evident in the recent Twitter comments on the subject, where a lot of people have stated their intention not to use ‘Vail’. Of course some of this is just anger at the fact that the feature has been removed (and at the way in which it was announced), but the fact remains that the product is a weaker proposition than it was before.

Personally I see this decision in both a negative and a positive light. Firstly, I see it as a major blow to the uniqueness of the product and feel that it will suffer without this USP (Unique Selling Point). It’s also important to remember that this is positioned as a product for the average PC user, and DE made extending the storage capacity easy. The user doesn’t need to buy matching disks or configure RAID; they just pop in a new disk and it gets added to the pool. Without DE, adding extra storage will presumably be a more complex task. In reality, though, how many “average” PC users would feel happy upgrading the hard drive in their WHS anyway? Whilst enthusiasts relish the chance to pop open the case, many casual users would actually see their OEM-produced WHS as just an appliance, and one probably already stuffed with several 2 or 3 TB drives providing a good chunk of storage capacity right out of the box. They would not consider any upgrades to it other than replacing it when it gets full. In addition, whilst the shared drive pool concept makes adding storage easy, the ability to add extra storage as additional drives will still be there in the product, as it is in any Windows OS. I don’t see this as a huge blocker to WHS adoption.

Folder duplication utilises the DE feature to ensure that data is duplicated onto different physical drives within the logical storage pool. This is in effect ‘RAID like’, except that the data is duplicated over time rather than immediately (although there is no way of retrieving previous versions of files). It provides an easy form of fault tolerance that, whilst being fairly easy to replicate yourself using other means, will probably never be as easy as ticking a check box. This is again more of an issue for the “average” guy than for the PC enthusiast who is at home configuring RAID, although a simple file copy add-in or batch job is my preferred solution. I already run daily automated RoboCopy jobs to copy ‘snapshots’ of my data drives to another drive, providing both fault tolerance and versioned snapshots that I can restore if required. I have had to dive into my snapshots on several occasions to restore a previous version of a file that had accidentally been deleted or modified. I prefer this solution over RAID, as a disk write to a RAID drive is duplicated immediately even if it’s not what you wanted.
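To illustrate the idea, here is a minimal sketch of that kind of daily snapshot job (the paths and the date-stamped folder naming are purely illustrative, not my actual setup):

# Minimal sketch of a daily snapshot copy job - the paths below are placeholders
# Each run writes into a date-stamped folder so older snapshots remain available for restores
$source      = "D:\Shares\Data"                           # hypothetical data share
$stamp       = Get-Date -Format yyyyMMdd
$destination = "E:\Snapshots\Data_$stamp"                 # hypothetical snapshot drive
$log         = "E:\Snapshots\Logs\Data_$stamp.txt"

# /E copies subfolders, /NP suppresses per-file progress, /LOG records the run for checking later
& robocopy $source $destination /E /NP /LOG:$log

Scheduled to run daily, something like this gives you point-in-time copies that you can browse and restore from without touching RAID.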

So, what’s the positive? Well, let’s consider why Microsoft are removing it. They have said that it causes conflicts with applications installed on the Small Business Server sister OS, code named ‘Aurora’; these applications don’t play nicely with a logical drive pool. I, like many other WHS enthusiasts, have over time installed numerous applications onto my WHS (e.g. Microsoft Team Foundation Server) and I always do so with caution because of DE. I am careful to ensure that nothing I install utilises the DATA drive and I often refrain from installing software that I think might conflict. With DE removed this worry is taken care of, which is definitely a positive for me.

Does WHS fit in the Cloud Computing Landscape?

If we look to the future and assume that the Cloud Computing paradigm is here to stay, the bigger question arises of what role WHS would play. I admit to being a Cloud advocate and I do share Ray Ozzie’s view of a “New Dawn” where devices (not PCs) connect to continuous services hosted on the internet. In this vision the majority of people only use devices to connect to the internet (smartphones, tablets, TVs etc) and they are continuously connected to the web, where their data is stored, analysed, processed and shared. The concept of having a local home server is almost alien, as your storage will all be in the cloud. Backups won’t be required as data will be automatically synched, and devices won’t need to be imaged for restoration as they will only be simple devices with sophisticated browsers. Sure, PCs will remain for advanced users, but not for the majority. This vision of the future is not that revolutionary; it’s already happening, so fast in fact that the next version of WHS after Vail will need to be positioned within this connected world. People may cry that users will always want their data close by and local, but that’s not true: over time they won’t even think about it, as evidenced by early cloud services like Hotmail, Exchange Online etc.

This vision of the future relies heavily on fast internet connections and related infrastructure, which are slowly being rolled out across the developed world, but this weakness perhaps provides an opportunity for the WHS’s of the future. The ability to synch to your local “private cloud” and use that as the hub for your home is probably a requirement of the future, and a ‘server’ device could fill this space. Unfortunately so could other home-based devices, such as the Xboxes, Google TVs and Media Centers of the future, and the single home device is the ‘holy grail’ of consumer electronics. The battle for the position as sole ‘provider’ and gateway to the continuous services of the future will be intense and whilst the current WHS offerings (V1 and ‘Vail’) are too weak to survive the battle, maybe, just maybe, their future offspring will fit that gap perfectly.

Summary:

WHS has, unfortunately, always been a niche product, which is a real shame as it is one of the best products ever to have come out of Redmond and one that deserves more credit. Microsoft have never promoted it and seem instead to be happy to use it as an experiment for newer technologies (like DE). This is obviously a dark period for the WHS product, but the community’s reaction to the DE news and the growing popularity of the platform mean that I believe it will survive in the short term.

If I were Microsoft I would look to extract the key features of WHS (i.e. the client backup and remote access services) and convert them into add-on applications for Windows. With DE gone there is little point in having a ‘Home’ SKU of Windows Server. Sell Windows Server 2008 Foundation to OEMs with these WHS feature applications installed for them to put on their consumer devices. This would also enable the features to be supported on Windows client OSs in the future, should it prove profitable to do so. I would be happy to run a fully fledged, supported version of Windows Server that comfortably ran all server-based software, but to which I could also add Client Backup and Remote Access services if I required them.

Will I upgrade to Vail? Good question. Currently I’m undecided. I will review it against other products when the time comes (Amahi on Linux, Aurora, Windows Server 2008) but one thing is for sure – the removal of DE will not affect my decision, but the strength of Microsoft’s commitment to the product will.

No Syntax Highlighting On Preview In Live Writer 2011?

Anyone using my Windows Live Writer plug-in for source code highlighting will find that, after upgrading to the latest Live Writer version, the preview feature no longer works. The version of Windows Live Writer that ships with the Windows Live Essentials 2011 package includes modifications to prevent script running from within the application. This is apparently to enhance security, and whilst I think the modification is a valid one on those grounds, I would have expected a way for the user to disable it.

In the previous version of Live Writer, inserting a code snippet gave you a nice preview (left-hand image), but in 2011 you just get the plain-text version of your code (right-hand image):

image

My plug-in uses JavaScript (for preview purposes only – it’s not posted to your blog) to render the code exactly as it will be displayed on your WordPress blog, but as this script is now blocked no rendering can take place.

To be honest I am not a fan of the new Live Writer, as it appears to me that the only objective was to get the ribbon toolbar included, at the expense of most other things, including usability. To insert anything now I have to select ‘Insert’ on the ribbon and then choose, for example, an image; previously I just clicked it on the right-hand toolbar. The way that plug-in buttons are displayed on the toolbar is also poor in comparison to the previous version.

Regardless of seeing it as a step backwards I do intend to support this version with my plugin and so will be releasing an updated version soon that will aim to provide some sort of preview functionality within 2011 (although I expect it to not be as convenient as with the previous version). 

Stay tuned for the next version.

BigTrak Is Back

The 1980s iconic “programmable electronic vehicle” BigTrak is back and in the shops now. I always wanted one of these when I was a boy and now I finally get to buy one, and I can also justify the purchase by buying it as a Christmas present for my son. Having boys is great!

Below is the classic 80s advert, in case you’ve forgotten how useful BigTrak is for delivering an apple to your resting dad:

Live Writer Syntax Highlighting Plug-in v1.2.0 Released

I have finally posted a new version of my Windows Live Writer Syntax Highlighter for WordPress.com hosted blogs; you can find it here. Version 1.2.0 has been a while in the making, but I have included the features most requested by you. Firstly, thank you to everyone who has downloaded the plug-in and provided feedback, either directly on this blog or on their own blogs. I take the feedback seriously and have included all the features that I could in this version.

So what’s in V1.2.0? Well there are some small bug fixes and improvements which you may or may not notice, but the big features are covered below:

Ever since the first version of the plug-in I have wanted to provide a mechanism to preview what the syntax highlighting will look like once the post has been published to your blog. The problem is that the rendering is done not by my plug-in (it just inserts the right tags into the HTML) but by the WordPress server. After a few proofs of concept I finally hit on a way of making this work. The result is that you can see what your code will look like in both the ‘Edit’ and ‘Preview’ windows within Live Writer (see below screenshot):

WLW1200_4

This preview mode can be turned on/off per code snippet too. Any changes to the code snippet’s properties, for example to highlight specific lines, will also take effect in the previewed version, so you can get a feel for how the properties affect the snippet’s display. It should be noted, however, that to refresh the preview display you will need to switch to the ‘Source’ or ‘Preview’ views in Live Writer and back to the ‘Edit’ view. This is purely because Live Writer doesn’t refresh the view after a plug-in item is altered.

WLW1200_5

The code entry form (above) has been upgraded to accept tab inputs, be movable and now provides a line/column number indicator. It’s very useful to be able to identify lines and columns so that you can choose the right line to highlight and can ensure that you are within any relevant width restrictions on your blog for the code snippet. The context menu on the code entry form also has a ‘Trim Whitespace’ option which will trim the leading and trailing whitespace from your code snippet. This is useful after you have pasted in code from another editor, e.g. Visual Studio.

The issue of XML not being displayed correctly has been resolved. This is again a Live Writer limitation, as it will not display HTML or XML correctly. It is resolved in two ways in this version of the plug-in. Firstly, when you use preview, the XML/HTML will be displayed correctly as it is fully rendered to look as it will after posting. Secondly, where you have chosen to turn off preview, the plug-in will now replace the “<>” characters with “()”. This change is purely for viewing in Live Writer and the correct characters will be used when posting to your blog.

Version information and a ‘check for updates’ link have been added to the ‘Tools – Options – Plug-ins’ dialog (see below). I hope to introduce auto-updates in future versions, but for now the link just navigates to the plug-in’s homepage.

WLW1200_7

You can download the new version from https://richhewlett.com/software/wlwsourcecodeplugin/. I hope you like it.

Backing up TFS 2010 with new Power Tools Backup Plan

At last – backup is built into the TFS product, well, via the Power Tools at least. Backing up TFS has always been difficult and non-intuitive without a SQL DBA in your pocket; even the documentation is at best extensive and at worst confusing. But all that’s history, as the September 2010 TFS Power Tools now include a Backup Plan feature.

Recently I have posted a few articles on backing up TFS 2010 using Windows PowerShell. The first involved backing up all the TFS SQL databases using PowerShell and SMO and can be found here: ‘Backing Up TFS Part-1’. The second covered an alternative strategy of backing up the latest content of your TFS repository and can be found here: ‘Backing Up TFS Part-2’.

Brian Harry recently posted the article ‘Backing up and restoring your TFS server’ describing the new Backup Plan feature in the new release of the TFS Power Tools, and this week the September 2010 TFS Power Tools were released. Brian’s posts provide all the details you need on installing and configuring a backup plan, together with some screenshots.

I installed the updated Power Tools on my Windows Home Server running TFS (you must remove any previous versions manually first) and easily set up a backup plan:

TFSBackupInstallOption

I did hit one problem with it though. It seems that the feature is a little fussy about the target location of the backup files. The target must be a network share, and the tool will attempt to apply the relevant access privileges to the folder so that the scheduled job can write the backups successfully. The tool verifies that the information you have supplied is accurate and checks that a backup can run. My initial attempts to verify my backup plan failed at this point with the following error in the log:

Microsoft.SqlServer.Management.Smo.FailedOperationException: Backup failed for Server ‘servername\SqlExpress’.  —> Microsoft.SqlServer.Management.Common.ExecutionFailureException: An exception occurred while executing a Transact-SQL statement or batch. —> System.Data.SqlClient.SqlException: Cannot open backup device ‘\\servername\backup\tfsbackups\Tfs_Configuration_20100911231343.bak’. Operating system error 5(failed to retrieve text for this error. Reason: 1815).

After some investigation, and a helpful post from Dave Hunter, I concluded that this was due to the permissions on the backup share on my Home Server. As that share is a WHS-managed share it has specific security permissions that prevented the backup tool from asserting its authority and granting the relevant privileges. To circumvent the problem I created a new standard non-WHS share on my Home Server with minimal restrictions. Once I entered the new share’s details the backup tool verified the backup plan successfully and ran a backup fine. I then knocked up a simple RoboCopy script, run daily via a scheduled task, to copy the contents of the new backup share to my originally intended WHS share target location.
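For anyone wanting to do something similar, here is a rough sketch of that copy job (the share names and log path are placeholders, not my actual setup):

# Rough sketch of the daily copy job - share names and paths below are placeholders
# Mirror the plain share (where the TFS backup plan writes) into the WHS managed share
$source = "\\servername\TfsBackupStaging"        # non-WHS share the backup tool can secure and write to
$target = "\\servername\Backup\TFSBackups"       # WHS managed share, the originally intended location
& robocopy $source $target /MIR /NP /LOG:"C:\Scripts\Logs\TfsBackupCopy.txt"

Run from a daily scheduled task, this keeps the WHS share in step with whatever the backup plan last wrote (note that /MIR will also remove files from the target that no longer exist in the source).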

In summary I believe that this is a major step forward for TFS and will benefit many of the new users picked up since the introduction of the TFS Basic Configuration in TFS 2010, proving to be another nail in the coffin for Visual SourceSafe. I’d recommend any team still using SourceSafe or any other tool to take another look at TFS as it is definitely getting easier to manage than previously.

Private Clouds Gaining Momentum

Well, it’s been an interesting few weeks for cloud computing, mostly in the “private cloud” space. Microsoft have announced their Windows Azure Appliance, enabling you to buy a Windows Azure cloud solution in a box (well, actually many boxes, as it comprises hundreds of servers), and the OpenStack cloud offering continues to grow in strength, with Rackspace releasing its cloud storage offering under the Apache 2.0 license as part of the OpenStack project.

OpenStack is an initiative to provide open source cloud computing and contains many elements from various organisations (Citrix, Dell etc), but the core offerings are Rackspace’s storage solution and the cloud compute technology behind NASA’s Nebula cloud platform. To quote their web site…

“The goal of OpenStack is to allow any organization to create and offer cloud computing capabilities using open source software running on standard hardware. OpenStack Compute is software for automatically creating and managing large groups of virtual private servers. OpenStack Storage is software for creating redundant, scalable object storage using clusters of commodity servers to store terabytes or even petabytes of data.”

It is exciting to see OpenStack grow as more vendors open-source their offerings and integrate them into the OpenStack initiative. It provides an opportunity to run your own open source private cloud that will eventually enable you to consume best-of-breed offerings from various vendors, based on the proliferation of common standards.

Meanwhile Microsoft’s Azure Appliance is described as …

…a turnkey cloud platform that customers can deploy in their own datacentre, across hundreds to thousands of servers. The Windows Azure platform appliance consists of Windows Azure, SQL Azure and a Microsoft-specified configuration of network, storage and server hardware. This hardware will be delivered by a variety of partners.

Whilst this is initially going to appeal to service providers wanting to offer Azure based cloud computing to their customers, it is also another important shift towards private clouds.

These are both examples in my eyes of the industry stepping closer to private clouds becoming a key presence in the enterprise and this will doubtless lead to the integration of public and private clouds. It shows the progression from hype around what cloud might offer, to organisations gaining real tangible benefits from the scalable and flexible cloud computing platforms that are at home inside or outside of the private data centre. These flexible platforms provide real opportunities for enterprises to deploy, run, monitor and scale their applications on elastic commodity infrastructure regardless of whether this infrastructure is housed internally or externally.

The debate on whether ‘private clouds’ are true cloud computing can continue, and whilst it is true that they don’t offer the ‘no capital up front’ expenditure and pay-as-you-go model, I personally don’t think that excludes them from the cloud computing definition. For enterprises and organisations that are intent on running their own data centres in the future there will still be the drive for efficiencies, as there is now, perhaps more so in order to compete with competitors utilising public cloud offerings. Data centre owners will want to reduce the costs of managing this infrastructure, and will need it to be scalable and fault tolerant. These are the same core objectives of the cloud providers. It makes sense for private clouds to evolve based on the standards, tools and products used by the cloud providers. The ability to easily deploy enterprise applications onto an elastic infrastructure and manage them in a single autonomous way is surely the vision of many a CTO. Sure, the elasticity of the infrastructure is restricted by the physical hardware on site, but the ability to shut down and re-provision an existing application instance based on current load can drive massive cost benefits as it maximises the efficiency of each node. The emergence of standards also provides the option to extend your cloud seamlessly out to the public cloud, utilising excess capacity from public cloud vendors.

The Windows Azure ‘Appliance’ is actually hundreds of servers and there is no denying the fact that cloud computing is currently solely for the big boys who can afford to purchase hundreds or thousands of servers, but it won’t always be that way. Just as with previous computing paradigms the early adopters will pave the way but as standards evolve and more open source offerings such as OpenStack become available more and more opportunities will evolve for smaller more fragmented private and public clouds to flourish. For those enterprises that don’t want to solely use the cloud offerings and need to maintain a small selection of private servers the future may see private clouds consisting of only 5 to 10 servers that connect to the public cloud platforms for extra capacity or for hosted services. The ability to manage those servers as one collective platform offers efficiency benefits capable of driving down the cost of computing.

Whatever the future brings I think that there is a place for private clouds. If public cloud offerings prove to be successful and grow in importance to the industry then private clouds will no doubt grow too, to complement and integrate with those public offerings. Alternatively, if the public cloud fails to deliver then I would expect the technologies involved to still make their way into the private data centre as companies like Microsoft move to capitalise on their assets by integrating them into their enterprise product offerings. Either way, as long as the emergence of standards continues, as does the need for some enterprises to manage their systems on site, the future of private cloud computing platforms seems bright. Only time will tell.

Backing Up TFS 2010 Using PowerShell: Part 2

In my previous post (“Backing Up TFS 2010 Using PowerShell: Part 1”) I covered how to back up the TFS SQL Server databases using Windows PowerShell and the SQL Server Management Objects (SMO) API. In this post I’m going to take it one stage further and add another layer of backup for the paranoid amongst you, like me.

Backing up the databases is all well and good, but I like to be able to see what I’m backing up too, and there’s nothing quite like being able to see all your data on disk in its native format. So, to give me this extra warm fuzzy feeling, I have written a PowerShell script to perform a ‘Get-Latest’ of all the source code in my ‘Team Project Collection’ in TFS and then copy those latest file versions to a backup location. By scheduling this activity to occur periodically through the week I see two key benefits. Firstly, I can see that I have the data in a second place and that it’s in its native format (i.e. just plain files of source code, documents, images etc), which I can then back up, knowing that should I not be able to restore the SQL databases I will at least have the latest source files. The second benefit is that I can quickly access my source code (locally or remotely) from another machine that may not have Visual Studio installed. This is useful if I just want to check something and don’t need to modify the files.

I wanted this script to run on the server (in my case a Windows Home Server), and so to be able to perform a Get-Latest on TFS I needed to install ‘Team Explorer’ and set up a ‘Workspace’. Team Explorer is the standalone client required to interact with TFS source code without Visual Studio. The Team Explorer setup package is included on the TFS installation media in the TeamExplorer folder. Browse to this, run Setup.exe and keep the default values. This installs the Visual Studio 2010 shell, the TFS object model, Visual Studio Team Explorer 2010 and the MS Help Viewer.

Next you need to set up a workspace for the server using Team Explorer (via Start > All Programs > Microsoft Visual Studio 2010 > Microsoft Visual Studio 2010). Add the TFS server in the normal way and go to ‘Source Control’. Select the top item in Source Control (in most cases ‘SERVERNAME\DefaultCollection’), right click and choose “Map to Local Folder”. Enter the path of a local folder on the server and, when asked if you want to get the latest version, say yes. You now have a workspace set up and will be able to perform a manual or automated ‘Get-Latest’.

To automate the interaction with TFS and perform the ‘get’ you could use the TF.exe command-line tool that installs with the Team Explorer setup. Personally, as a PowerShell fan, I chose to use the PowerShell cmdlets that come with the Team Foundation 2010 Power Tools. To use these you’ll need to download the Power Tools and run the tfpt.msi installation package. Choose a custom installation and select just the server-specific items, in this case the PowerShell cmdlets and the command-line interface options (see screenshot).

powertolls
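For reference, if you prefer the TF.exe route mentioned above, the equivalent scripted ‘get’ looks roughly like this (the TF.exe path shown is the default Visual Studio 2010 shell location and the workspace folder matches the one mapped earlier; both are assumptions to adjust for your own setup):

# Rough TF.exe equivalent of the Update-TfsWorkspace call used in the script below
# Assumes the workspace mapping was already created in Team Explorer as described above
$tf = "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\TF.exe"
& $tf get "c:\tempcode\tfsbackups\latestdata" /recursive /noprompt

I went with the cmdlets instead as they sit more naturally inside a PowerShell script.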

The basic flow of the script is to configure the required file paths, import the Microsoft.TeamFoundation.PowerShell snap-in, and then call the ‘Update-TfsWorkspace’ cmdlet. This is the command that performs a Get-Latest on the workspace previously configured; the workspace folder you specify must match the local folder you mapped earlier in Team Explorer. The next step is to use Robocopy to copy the files to the backup location. Robocopy is a superb file copy tool as it only copies changed files, which significantly improves the speed of this procedure compared to using a standard file copy cmdlet. Once complete, the log is passed to RoboCopyLogger (more about this in a future blog post), which scans the log for errors, and then we add an event log entry for success or failure.

Finally, set up a scheduled task to run this script automatically each week. If you were to use this script as is you would need to change the file paths and use an Event Log source that is relevant to your system (or just remove the Write-EventLog calls).
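As a rough illustration, the weekly task could be created with something like the following (the task name, start time and script path are placeholders, PowerShell 2.0’s -File switch is assumed, and the schtasks switches and time format can vary between Windows versions, so check them against your own system). The full script itself follows below.

# Illustrative only - task name, time and script path are placeholders; verify the switches for your OS version
schtasks /Create /TN "TFS Get-Latest Backup" /SC WEEKLY /D SUN /ST 03:00 /TR "powershell.exe -File C:\Scripts\TFSLatestDataBackup.ps1"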


clear-host
write-output "------------------------------------Script Start------------------------------------"
write-output " TFS Data Get-Latest Backup "
write-output "------------------------------------------------------------------------------------" 

# set file paths
$LocalWorkspaceFolder="c:\tempcode\tfsbackups\latestdata"
$RemoteBackupFolder="\\WinHomeSvr\Backup2\TFSBackups\LatestData"
$Robocopy="c:\scripts\Robocopy.exe"
$timestamp = Get-Date -format yyyyMMddHHmmss
$Log="c:\Scripts\Logs\TFSData\TFSLatestDataBackup_$timestamp.txt"
$RobocopyLogger="c:\Scripts\RoboCopyLoggerTweet_BETA\RobocopyLoggerTweet"

# set error action preference to Stop so errors terminate and the try/catch kicks in
$erroractionpreference = "Stop"

try
{
    # add the TFS snapin to the session
    add-pssnapin Microsoft.TeamFoundation.Powershell

    # get latest on the top level workspace level
    Update-TfsWorkspace $LocalWorkspaceFolder -Force -Recurse

    # now robocopy the local cache to the final backup folder
    invoke-expression "$Robocopy $LocalWorkspaceFolder
                                        $RemoteBackupFolder /MIR /LOG:$Log /NP"

    # Copy the log file to the backup location too (a single file, so Copy-Item is sufficient)
    copy-item -Path $Log -Destination $RemoteBackupFolder

    # Check the log for errors etc and log to eventlog/twitter
    invoke-expression "$RobocopyLogger $Log 1"

    # write all events to the logs
    write-output "Writting SUCCESS to EventLog"
    write-eventlog -LogName "HomeNetwork" -Source "TFS Backups"
                              -EventId 1 -Message "TFS Data BackUp Script ran"
}
catch
{
    # error occurred so lets report it
    write-output "ERROR OCCURRED" $error 

    # write an event to the event log
    write-output "Writting FAIL to EventLog"
    write-eventlog -LogName "HomeNetwork" -Source "TFS Backups"
                               -EventId 1 -Message "TFS Data BackUp Script failed" -EntryType Error
}

write-output "------------------------------------Script end------------------------------------" 

Backing Up TFS 2010 Using PowerShell: Part 1

In a previous post I covered how to install Team Foundation Server 2010 onto a Windows Home Server. The installation was a TFS Basic Configuration installation and, whilst it was geared towards Windows Home Server, the concepts are the same if you are installing it on other servers or workstations. This post will cover how to back up the TFS databases. To back up just the raw source files too, check out Part 2 of this post.

Now, I am a healthily paranoid kind of guy, and after installing TFS the first thing I decided to get right was a method of backing up my new TFS installation to protect against data loss. I’m not going to sleep easy until I know that my source code is backed up in a solid, repeatable manner. The backup tool of choice is Windows PowerShell due to the sheer power that this scripting shell provides.

Firstly, it’s important to understand that Microsoft Team Foundation Server relies completely on Microsoft SQL Server for its data persistence. Therefore backing up TFS is just a matter of backing up the SQL Server databases in the TFS data tier. Usually, unless you are installing an enterprise TFS solution, the databases will reside on the same server as the rest of the TFS installation. The number of databases that TFS creates will vary depending on the number of ‘Project Collections’ you create. Therefore, to avoid having to update your backup scripts each time you add or remove a collection in TFS, and provided your SQL Server instance is only used for TFS, it’s safer to just back up ALL the databases.

I strongly recommend that you read the TFS documentation on how to back up TFS and use the information in this post only as supplementary material, as backing up your data is a serious business and I’d hate for something to go wrong.

I used this excellent post from Donabel Santos as the inspiration for my PowerShell script and modified it to customise for TFS and to provide additional functionality. The SQL Server interaction is through the use of the SQL Server Management Objects (SMO) API which provides a rich collection of objects through which you can interact with your databases. PowerShell makes interacting with these objects easy.

The basic flow of the script is to connect to the SQL Server instance using the SMO objects and then loop through the collection of databases within that instance. We ignore the ‘tempdb’ database, as backing that up is neither required nor possible. We then back up each database to a file, again using SMO. Once all databases have been backed up to files we zip them up (using the excellent 7-Zip) and copy the zip file to a backup location. You don’t need to install the full 7-Zip package on your server as you can download a command-line-friendly version that just needs to be copied across. The end of the script then records the success or failure of the run in the event log.

Obviously, if you were to use this script you would need to change the file paths and the SQL Server instance name, and use an Event Log source that is relevant to your system (or just remove the Write-EventLog calls).
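One note on the event logging: Write-EventLog will fail if the log and source don’t already exist, so if you keep those calls you need to register them once up front. A minimal sketch, assuming you reuse the same ‘HomeNetwork’ log and ‘TFS Backups’ source names used in the script:

# One-off setup (run once, from an elevated prompt): create the custom event log and source used below
New-EventLog -LogName "HomeNetwork" -Source "TFS Backups"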

clear-host
write-output "-------------------------------Script Start----------------------------------"
write-output " TFS SQL Database Backup "
write-output "------------------------------------------------------------------------------------"

# load modules used in this script
import-module -name C:\scripts\support\SupportModule -verbose

# load assemblies
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null

# create a new server object, and set backup path and timestamp info (they will share same timestamp)
$server = New-Object ("Microsoft.SqlServer.Management.Smo.Server") "winhomesvr\SQLEXPRESS"
$timestamp = Get-Date -format yyyyMMddHHmmss
$SQLDataFolder = "C:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\DATA"
$backupDirectory = "c:\tempcode\tfsbackups\sql\bkupdir"
$backupZipStore = "\\winhomesvr\backup2\tfsbackups\sql\Zipped\"
$backupRawDataZipStore = "\\winhomesvr\backup2\tfsbackups\sql\ZippedRAW\"
$7ZipExePath = "c:\Scripts\Support\7-Zip\7za"
$7ZipCmdLineForBkUps = $7ZipExePath + " a " + $backupZipStore + "TFSSQLBkup_" + $timestamp + ".zip " + $backupDirectory
$7ZipCmdLineForRawDataFileBkUps = $7ZipExePath + " a " + $backupRawDataZipStore + "TFSSQLBkupRAW_" + $timestamp + ".zip " + $backupDirectory

# display settings
write-output "Backup Directory: " $backupDirectory
write-output "Backup Zip Store: " $backupZipStore
write-output "Timestamp: " $timestamp

# set error action preference to Stop so errors terminate and the try/catch kicks in
$erroractionpreference = "Stop"

try
{
    write-output "Deleting old backup files"
    remove-item -Path ($backupDirectory + "\*.*") -force

    # loop all databases in server, and backup each one using SQL Backup
    foreach($db in $server.Databases)
    {
        # set database name
        $dbName = $db.Name

        # exclude the tempdb as you can't back that one up
        if ($dbName -ne "tempdb")
        {
            write-output "Processing database: " $dbName
            $smoBackup = New-Object ("Microsoft.SqlServer.Management.Smo.Backup")
            $smoBackup.Action = [Microsoft.SqlServer.Management.Smo.BackupActionType]::Database
            $smoBackup.BackupSetDescription = "Full backup of " + $dbName
            $smoBackup.BackupSetName = $dbName + " Backup"
            $smoBackup.Database = $dbName
            $smoBackup.MediaDescription = "Disk"
            $smoBackup.Devices.AddDevice(
                          $backupDirectory + "\" + $dbName + "_" + $timestamp + ".bak", "File")
            $smoBackup.Initialize = $TRUE
            $smoBackup.SqlBackup($server)
            write-output "Processed database: " $dbName
        }
    }

    write-output "Processed all databases, listing outputs..."

    #let's confirm, let's list all backup files
    $directory = Get-ChildItem $backupDirectory

    #list only files that end in .bak
    $backupFilesList = $directory | where {$_.extension -eq ".bak"}
    $backupFilesList | Format-Table Name, LastWriteTime

    write-output "Clear out old zipped file from the zip storage folder..."
    Remove-MostFiles $backupZipStore *.zip 2

    write-output "Zipping up the files..."
    invoke-expression $7ZipCmdLineForBkUps

    # write all events to the logs
    write-output "Writting SUCCESS to EventLog"
    write-eventlog -LogName "HomeNetwork" -Source "TFS Backups"
                              -EventId 1 -Message "TFS SQL BackUp Script ran"
}
catch
{
    # error occurred so lets report it
    write-output "ERROR OCCURRED" $error

    # write an event to the event log
    write-output "Writting FAIL to EventLog"
    write-eventlog -LogName "HomeNetwork" -Source "TFS Backups"
                              -EventId 1 -Message "TFS SQL BackUp Script failed" -EntryType Error
}

write-output "------------------------------------Script end------------------------------------"

Update: Since writing this post Microsoft have updated the TFS Power Tools to include a Backup Plan feature, which enables you to schedule full backups from the TFS Admin Console. Check out this post.

Installing Team Foundation Server 2010 on Windows Home Server

Last October I posted about the fact that Microsoft’s Team Foundation Server 2010 was going to ship with a “Basic” configuration that is more lightweight and positioned as more of a Visual SourceSafe replacement. In that post I also pointed out that it seemed possible to install this version on a Windows Home Server (WHS) and that it would be something I’d try. Well, I have eventually got round to doing it and of course I’ve documented my approach. The benefits of TFS are immense but outside the scope of this article; instead I’m covering the installation process.

It may have been technically possible to jump through some hoops and install TFS 2008 on your WHS box but the pre-requisites were heavy (including SharePoint and full SQL Server) and it was always seen as a complicated process. With the ‘basic’ configuration of TFS 2010 you can ignore SharePoint and SQL Reporting Services and it installs onto SQL Server 2008 Express (which it also installs for you).

A point worth mentioning is that TFS 2010 will not trash your existing web sites (important for Windows Home Servers) but will instead install its own site alongside any existing sites on the server. It is then possible to connect to TFS remotely via your WHS Remote Access domain name, which is a very nifty feature.

Pre-Installation Considerations:

Basic Configuration: Whilst it’s possible to install the non-basic configurations I don’t see the point unless there is something specific that you want to take advantage of that’s not included in the basic configuration. For a light, easy installation and a nice minimal processing overhead on your WHS the ‘basic’ configuration is ideal. It also contains all the core features such as build automation, work item tracking and source control.

Location of SQL Server Data Files: By default the SQL Server data files are installed to the system drive under the program files location for SQL Server Express. As you add more and more content to TFS these files will naturally grow, so you need to consider up front whether you have the disk space to support this. Windows Home Server by default creates only a 20GB system partition, which should have adequate space for the data files to grow, but if you have installed a lot of applications to your system drive this could be an issue. To store the data files at an alternative location it is easier to install SQL Server 2008 Express manually prior to installing TFS 2010 (and configure the data file locations via the SQL setup); the TFS installation will then just reuse the SQL Express instance already installed (a rough sketch of such an install follows after these considerations). Alternatively you could put the data files on the WHS ‘Data’ drive (D drive), but personally I prefer to leave that drive well alone and let WHS’s ‘Drive Extender’ manage all the data in its storage pool. I did consider installing a new drive in my server, not added to the storage pool, which would then be used for my data files. In the end I decided I had adequate space on the system drive and I could move the data files at a future point in time if required.

Build Controller: Part of the TFS installation/configuration includes the decision to install a Build Controller on the server. If you plan on running automated TFS builds then you’ll need to install this. I would strongly recommend you use the Team Build features of TFS as they are one of its key features but if you don’t want to automate builds or want to minimise the services running on your server then just skip that part.

Support: Whilst WHS is really Windows Server 2003 under the covers and TFS 2010 clearly supports Windows Server 2003, installing it on a Windows Home Server is not supported by Microsoft (or anyone else) and you do so at your own risk.
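As mentioned above under the data file location consideration, here is a rough sketch of how a manual SQL Server 2008 Express install with relocated user database directories might look from the command line. Treat the switch names as assumptions to verify against the SQL Server 2008 setup documentation, and the account name and ‘E:’ drive paths as placeholders for your own choices:

# Rough illustration only - verify the switch names against the SQL Server 2008 Express setup documentation
# Places user database and log files on a larger (hypothetical) E: drive instead of the 20GB system partition
.\setup.exe /ACTION=Install /FEATURES=SQLEngine /INSTANCENAME=SQLEXPRESS `
    /SQLSYSADMINACCOUNTS="SERVERNAME\Administrator" `
    /SQLUSERDBDIR="E:\SQLData" /SQLUSERDBLOGDIR="E:\SQLLogs"

The TFS 2010 ‘Basic’ configuration wizard can then be pointed at this existing SQLEXPRESS instance rather than letting it install its own.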

Steps:

You’ll need to Remote Desktop onto your server to run the installation interactively. The below screenshots show the key stages of the installation process. Luckily it’s mostly a matter of clicking ‘Next’:

whs_tfs_01 whs_tfs_02

Choose to install Team Foundation Server and the Build Service (if you plan to run automated builds on this server too). Don’t install the Server Proxy:

whs_tfs_04  whs_tfs_06

After much processing and a reboot it’s installed:

whs_tfs_07 whs_tfs_08

Once installed, it then launches the Configuration Centre and it’s at this point that you can choose the ‘Basic’ configuration. It will then install SQL Server Express or point to an existing SQL instance:

whs_tfs_10 whs_tfs_11 whs_tfs_12    whs_tfs_16 whs_tfs_17

Next you’re asked to configure the Build Service details:

whs_tfs_18 whs_tfs_19 whs_tfs_20 whs_tfs_21 whs_tfs_22    whs_tfs_26 whs_tfs_27

That’s it. You can launch the ‘Team Foundation Server Administration Console’ from the Start Menu which is a neat new tool enabling you to manage TFS from the server itself:

whs_tfs_29

Next Steps:

Connect via Visual Studio: Now it’s all up and running you can fire up Visual Studio and test that you can connect to the new Team Foundation Server by adding it to the list of servers in Team Explorer. You should be able to create new Team Projects, check in code and run builds just as you would on any other TFS server.

Enable Remote Access: To enable remote access to TFS so you can access your source code from remote locations you need to copy your client certificate from the Remote Access site and then add it to the TFS site. For instructions on how to do this see this excellent post by Jason Neave.

Backups: Call me paranoid, but before I add anything important to this TFS server I want an automated backup procedure in place. Backing up TFS is really ‘just’ a matter of backing up the SQL Server databases on which TFS sits, as they are the only source of its data. I say ‘just’ because it’s not as easy as it should be. A key point to note is that, as TFS uses numerous databases, it is critical to back up all of them at the same time so you don’t end up restoring unsynchronised databases. I have created an automated backup procedure using Windows PowerShell that I will share in a future post.

Conclusion:

It’s exciting to see a combination of two excellent products working together and I think this is another great way to add value to your Windows Home Server whilst enabling you to experiment with a quality ALM tool.

UPDATE: For WHS 2011 users check out my post on Installing TFS 2010 on Windows Home Server 2011.