Capturing your Camcorder DV Video with Windows Live Movie Maker & Windows 7

I have five years of Mini-DV camcorder tapes stacked up on my shelf. My plan has always been to transfer the video onto my PC and then reuse the tapes, but for various reasons this never quite happened; instead the pile just grew bigger and bigger. Not for much longer though, as I am refusing to buy any more tapes and forcing myself to get the video transferred.

Firstly I need to connect the camera, and as my camcorder is a Sony it’s the good old FireWire connection. Unfortunately Windows 7 (and Vista before it) refuses to recognise my existing FireWire card. After some messing about and moving slots (why oh why did Microsoft remove the ability to manually adjust IRQ settings?) I give up and order a new card (a Belkin 3-port FireWire PCI card), which happily Windows 7 finds and embraces straight away. My Sony then appears in Windows in the list of devices in “My Computer”.

Now I have the connectivity, I need the software to import the video. There are numerous options but I already have two installed: Nero and Windows Live Movie Maker. Windows Live Movie Maker replaces Windows Movie Maker but is not included in the Windows 7 installation; instead it is part of the free Windows Live Essentials pack, downloaded separately. Nero Vision does a good job but Windows Live Movie Maker provides the option to automatically split the video into multiple files. It seems more natural to me to have a collection of video files instead of a monolithic 60 minute video. Separate files also make it easy to quickly remove scenes and include extra ones later when adding the video to a DVD.

Importing via Windows Live Movie Maker:

1)  Open Movie Maker and select “Import from device” from the top menu.

2) Select device:

3)  Name the video and pick options:

To create multiple files instead of one monolithic file check the bottom checkbox.

4) OK the options dialog and click Next, then sit back and wait for the video to stream. Importing happens in real time, so it will take as long as the video you’re streaming takes to play. If you selected to import the whole tape the video will play for the length of the tape, and Movie Maker will then spend around 5 to 10 minutes splitting up the file.

WARNING: It takes a lot of disk space to import video, with a 60 minute tape creating about 12GB of video files. Make sure that you have plenty of free space before you start to import. Also, if you have chosen to split the video into multiple files you will need double the free space. This is because a single file is created initially (e.g. 12GB) and then the multiple individual files are created before the original is deleted, which means there will be twice the amount of data on disk during this process (e.g. 24GB). If there is not enough disk space to create the individual files then Movie Maker will simply inform you that it is unable to split the video and you are left with the single file. You would then need to free some more space and re-import the video to end up with the desired collection of files, which is a waste of a valuable hour of your life.
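If you want to sanity-check the free space from code first, here’s a rough C# sketch using the approximate figures above (it assumes the import goes to drive C and a standard 60 minute tape):

// Rough pre-import check: DV works out at roughly 12GB per hour of tape, and
// splitting needs about double that free while both copies exist on disk.
using System;
using System.IO;

class ImportSpaceCheck
{
    static void Main()
    {
        const double gbPerHour = 12.0;            // approximate DV footprint per hour
        const double tapeHours = 1.0;             // a standard 60 minute tape
        double requiredGb = gbPerHour * tapeHours * 2.0;   // single file plus split copies

        double freeGb = new DriveInfo("C").AvailableFreeSpace / (1024.0 * 1024.0 * 1024.0);
        Console.WriteLine(freeGb >= requiredGb
            ? "Enough free space to import and split the tape."
            : string.Format("Need around {0:F0}GB free but only {1:F0}GB is available.", requiredGb, freeGb));
    }
}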

What format should I save the Video in?

Now that the video is imported I want to keep it in a digital format, but how should I store it? Well, I wanted to create DVDs from the tapes and I am using Nero for this purpose as I like the way it creates the DVD menus etc. DVD Maker, included with Windows 7, could alternatively be used. Creating the DVD is not so much for storage as for portability. I like the fact that I can watch my videos on any DVD player, but it’s not the ideal way to store the video for the future. Video is compressed to get it onto a DVD and although the quality is still very good it doesn’t maintain the complete raw format that ‘might’ be useful in the future when converting to new and improved video formats. When looking around for a format to store this 12GB-per-hour video it occurred to me that although this is a large amount of data, it is in effect getting smaller by the day as storage technology improves. When digital cameras first came out people were looking for ways to compress their images as several MB per photo seemed hard to store. Of course we now take it for granted that a photo is 5MB, and hard drives are sold in terabytes, so it is less of a problem. So I’ve decided to keep my video in its raw AVI format for the foreseeable future, and have stored it on my already bulging Windows Home Server.

Team Foundation Server ‘Basic’ Edition

Many development teams still regularly use Visual SourceSafe for their source control, which can stimulate heated debates between those that have used it for many years without problems and those that have suffered some pain with it. Regardless of this debate there is no denying that SourceSafe is coming to the end of its useful life. It’s old technology and will come out of support in 2011, although a compatibility update is expected with Visual Studio 2010.

When Microsoft developed its replacement, Team Foundation Server (TFS), it focused on providing not just a source control product but a whole development lifecycle management system. Regardless of the benefits of TFS (and there are many) it has been avoided by many small development teams due to its high cost and complex installation/management. Many have instead moved to alternative source control products such as the free Subversion, leading to a decline in Microsoft’s market share in this area.

So, what’s changed? Microsoft now plans to provide a ‘Basic’ version of TFS 2010 when it ships next year. I think this is a huge step forward for TFS and its take-up across the development community. Brian Harry details the ‘Basic’ version in this blog post. This version of TFS will have a fast and easy installation and provides many more implementation options for the product. It will install on SQL Server 2008 Express and can even be installed on client Windows operating systems. This really is targeting current SourceSafe users and provides a low-cost (perhaps even free) entry to the benefits of TFS. You might think that this would only provide basic TFS functionality, but not so. Included in the Basic version are Source Control, Bug Tracking and Build Automation, which provide the bulk of the key TFS features. The screenshots suggest that Web Access is included too. What’s not included is Reporting Services and SharePoint, which are arguably more geared towards larger development teams anyway. The key benefits of TFS come from the work item interaction and ‘Continuous Integration’ friendly automated build features, and these are included.

The move to TFS for a SourceSafe (or any other simple source control system) team will provide many benefits and this version should enable those benefits at a minimal cost. There are no details on pricing but personally I would expect it to be included in the Team Developer MSDN subscription.

SourceSafe is also used by hobbyist and professional developers to manage their own personal source code, and I see this version of TFS being ideal for that. The ability to install on a client OS is a major factor for these users. There is also a comment on Brian’s blog post about running TFS Basic on Windows Home Server, which is something I am keen to try out.

Allowing more people access to this great product will greatly contribute to the TFS community and its take-up globally. If you can’t wait until TFS 2010 is released and would like to know more about TFS versus SourceSafe in terms of pricing then check out Eric Nelson’s post here.

‘Windows Home Server’ Build & Setup

I recently setup a new Windows Home Server and this post covers why I chose this operating system and how I setup my server.

Requirements:

My requirement was for an extendable ‘always on’ network-attached file storage solution that would allow me to access my files from any machine in the house (and ideally remotely via the Internet when required) whilst providing some fault-tolerant data protection. Having all my data in one place makes it easier to manage (less duplication of files across machines) and easier to back up. This centralisation of data does, however, mean being more susceptible to hardware failure (e.g. hard disk failure), so a solution with a RAID configuration or something similar was required, which ruled out most budget NAS storage devices. After investigating the options I decided to build a Windows Home Server (WHS). This meets all the requirements above and also adds other neat features such as the extensible Add-In model (a huge bonus for a .Net developer like me).

Buy vs Build:

Having decided on Windows Home Server as the solution the next step was to decide whether to buy or build. There are several very smart WHS devices available from manufacturers like HP and Acer. Whilst these are the easy option they are not the cheapest or the easiest to extend. Also the availability of these devices varies depending on your geographical location. Based on these factors I decided to build.

Build Option:

The fact that WHS has such modest hardware requirements means that building a server is a very economical option. As my server will be ‘always on’ I made power efficiency a key requirement in my build. To this end I considered the Intel Atom processor found in most ‘NetBooks’. These consume little power yet pack enough punch to run WHS comfortably. The Atom CPU comes pre-attached to an Intel motherboard (you can’t buy them separately yet) for under £50. However, as I wanted the storage in my server to be extendable and to grow over the next few years, I needed space for at least 4 hard drives, and the majority of Intel Atom boards only come with 2 SATA ports. Some boards do exist with four SATA ports but they are hard to source. Another possible Atom drawback is that it may be difficult to source Windows Server 2003 drivers (required for WHS) for ‘NetBook’ targeted Atom motherboards.

Buy Option:

Eventually after some investigation I had a list of parts to build into my shiny new server, but also a few reservations. Firstly, would all these components play nicely together, and would the build be solid enough to meet my ‘always on’ requirement? After discussions with a colleague he suggested I look for pre-built end-of-line servers, which is what I did. I quickly found the HP ProLiant ML110 G5 going for £170: a bargain. With 1GB RAM, a dual-core Pentium 1.8GHz CPU, on-board video, Gigabit NIC (Network Interface Card), 160GB HDD, DVD-ROM and a multitude of SATA ports it was ideal.

Sure, it lacked the power-saving benefits of an Atom-based server but its solid enterprise-level build quality more than makes up for it. As the server is designed to run Windows Server 2003, drivers would also not be a problem.

For storage I purchased two Western Digital Caviar Green 750GB SATA drives to sit alongside the HP’s 160GB disk. By buying two I can make full use of WHS’s data duplication features to protect my data. Whilst the ‘Green’ branded disks are not as fast as traditional drives they are packed with energy saving features which I value in an ‘always on’ server. 

Hardware:

After much deliberation on whether to use the faster 160GB drive or the larger 750GB drive for the system drive I decided to install a 750GB drive as the system drive, mainly to ensure maximum extendibility. Whichever I installed as the system drive I would be stuck with (without reinstalling the Operating System) and I didn’t want to be limited to the smaller 160GB drive. To make installation of the OS easier I only connected up the first hard drive, and then connected the other two later once the OS was up and running.

Software Installation:

Once the hardware was sorted I put in the WHS DVD and followed the instructions. The installation went quicker than expected, surprisingly not spending long on performing the ‘Microsoft Updates’. Once installed I logged on to find that WHS didn’t have the right NIC (Network Interface Card) drivers and therefore the NIC hadn’t been installed. This of course explains why I didn’t have to wait for the install to download the updates as it couldn’t get on the web to find them. I installed the NIC drivers from the HP CD and rebooted to find that I could now access the internet via Internet Explorer but neither ‘Windows Update’ nor ‘Product Activation’ would connect. After further investigation (and much head scratching) I found this error in the Windows Event Log:

Type: Error.  Source: W32Time.
Description: Time Provider NtpClient: An error occurred during DNS lookup of the manually configured peer ‘time.windows.com,0x1’. NtpClient will try the DNS lookup again in 15 minutes. The error was: A socket operation was attempted to an unreachable host. (0x80072751)

Checking the system time revealed I was two years in the past (2007) for some unknown reason. After correcting the date I could connect to Windows Update fine. After a mammoth 70 updates and a reboot I was presented with a strict ‘Activate Now’ prompt on logon. I presume that since my WHS install believed it had been installed for two years without activation it thought it was time to get serious. After activating I ran Windows Update again and this time it installed 5 more updates. Once the OS was stable I connected up the extra hard drives and added them to the storage pool via the WHS Console Server Storage tab.

Clients:

In order to connect your client PCs, the server and clients need to be in the same workgroup, so aligning this was the next task, along with checking for useful machine names/descriptions. Once all the clients were ready I installed the Client Connector software on each one (all Windows 7 clients) and configured their backup schedules. All clients connected and performed a successful backup first time.

Before copying across all my data onto the Windows Home Server Shared Folders I made sure that ‘Folder Duplication’ was turned off. This was purely to maximise the transfer speed (as WHS didn’t have to perform any duplication during the copy process) but I made sure I turned ‘Folder Duplication’ on for all folders after the data was in place.

Setting up a Print Server:

Next I wanted to set-up my server as a Print Server ensuring that I could print from any machine without having to turn on the Desktop hosting the printer first. The printer is a basic Lexmark Z615 but there are some unsupported Windows 2003 drivers on the Lexmark site. After trial and error with these though I abandoned them and reverted to the XP drivers which worked ok. I did have to reboot several times though to completely remove the failed printer installed with the Windows 2003 drivers.

An annoying feature of Windows is that it searches the local network for other printers and adds them to the server. I don’t want ‘Print to OneNote’ and ‘XPS Document Printer’ printers on my server, but deleting them is pointless as they will reappear. To prevent Windows from performing this auto search, turn it off by deselecting the option in My Computer > Tools > Folder Options > View > ‘Automatically search for network folders and printers’.

With my print server set up I attempted to add the printer to my Windows 7 client, but this was to prove difficult too. I couldn’t find an option to specify the correct drivers to use for the printer, and the Vista drivers (needed for Windows 7) weren’t installed on my server. In the end I found this blog post which explains how to use the Print Manager tool (new to Vista SP1) to add additional drivers to your print server. This worked perfectly and on the next attempt it downloaded the Vista drivers correctly and installed the printer successfully.

WHS Add-Ins:

I intend to install and (time permitting) write plenty of Add-Ins for use with WHS as I think they are an excellent way to add functionality to your server. So far I have installed the Microsoft WHS Toolkit v1.1 and Andreas M’s Advanced Admin Console. I find the Advanced Admin Console useful for accessing admin tools via the Console without having to Remote Desktop into the server each time. Over the next few weeks I hope to review the power management Add-Ins and install one to help my server get a few hours’ sleep overnight when it’s not required, thus saving power and money.

Summary:

So that’s my build story. My home server is up and running and I’m so far very impressed with it. I aim to post some more articles about Windows Home Server over the coming months.

Windows Home Server

I have recently set-up a home server using the Windows Home Server Operating System. The details of the set-up will follow in a future post but firstly I thought I would quickly introduce the Windows Home Server (WHS) product and provide some useful links.

Windows Home Server was released by Microsoft in 2007 and is built on top of Windows Server 2003. Its role is to sit quietly in your home and automatically back up all your PCs, provide NAS (Network Attached Storage) file sharing features, media streaming and remote access. It protects your data from hard drive failure by duplicating it over multiple drives where you have a multi-drive system.

WHS can be bought pre-installed on custom devices from companies like HP and Acer, or you can install it yourself on your own kit. As the hardware requirements are so light it’s possible to get it running on an old PC you might have lying around. Alternatively, build or purchase a cheap low-end PC for the purpose.

Particularly of interest to developers is the WHS Add-In model. WHS can be extended through the use of ‘Add-Ins’ from various ISVs (Independent Software Vendors) and enthusiast developers. Microsoft provides a Windows Home Server Add-In SDK for .Net Developers wanting to write Add-Ins for WHS and Brendan Grant has Visual Studio project templates on his blog.

Here’s a selection of links for more information:

Nokia E71 – An update

In May I posted an entry about my new Nokia E71 smartphone. This is a quick update post, and I can confirm that the novelty factor has yet to wane. I’ve found more uses for my E71 than I originally expected. In addition to the applications listed in my original post I have since downloaded these apps:

  • Twibble: If you do that ‘Twitter’ thing then this application is an excellent mobile client that makes it easy to keep up to date on the move, and it works great on the E71.
  • UPDATE: I’ve since dumped Twibble for Snaptu. Snaptu’s Twitter client is excellent and you gain extra plugins such as RSS readers, Facebook etc.
  • Snakes: A new 3D take on the old Nokia ‘Snake’ game. The link I used for this is no longer valid, but it might still be available on the Nokia site.
  • Top Hits Solitaires: A nice selection of basic card games.
  • Global Race – Raging Thunder: A very smart driving game.
  • Ovi Store: Despite Nokia’s efforts this is no Apple iPhone ‘App Store’ but it’s worth having a look through the list of available applications for your Nokia phone periodically as there are a lot of apps available and many are free.

I listen to lots of podcasts and I’m finding that the E71 makes this so easy through its built-in Podcasting application. It enables you to add podcasts to your regular download list by searching by podcast name, or you can just add the feed URL if you know it. You can then check for updates and download episodes to play on the phone, either manually or automatically. The phone’s built-in speaker is solid too. It’s loud and clear enough to listen to in the car whilst travelling without the need for an additional FM transmitter.

I find the Opera Mini browser makes mobile browsing really easy and slick on the E71. It’s very fast and easy to use, adjusting the view of the page to fit neatly on your screen minimising the amount of scrolling required per page.

All in all I’m still enjoying my E71 and am starting to wonder how I managed without it. My first month’s data transfer was an eye-watering 1GB, which shows the extent to which I’ve been maximising the phone’s 3G connectivity features.

Folder Based Toolbars on the Windows Taskbar

Recently someone asked me how I managed to access a list of files on my machine from a pop-up list on the Windows Taskbar. The ability to add folder links to the Windows Taskbar has been around for many versions of Windows, yet I guess not everyone realises how easy and useful it is. I find it helps my productivity, and because these toolbars are very easy to create and remove they work well for short to medium term use too.

To add a folder as a toolbar on the taskbar just right-click on the taskbar, pick ‘Toolbars’ and then ‘New Toolbar…’. This displays the ‘New Toolbar’ dialog, which is basically just a folder picker for you to select the folder you want to display.

Once you’ve picked a folder, that’s it! Now you can move it around on the taskbar like any other toolbar.  To remove it, just right click the taskbar again, pick ‘Toolbars’ and unselect it.

 

Syntax Highlighting in WordPress.com hosted blogs and how to create a Windows Live Writer Plug-in

There are various ways of displaying code in a blog entry. Some authors insert images, which lets them ensure the code is readable and in the required format; however the code can’t then be viewed easily inside some feed readers, and the text is not searchable or easy to copy and paste. Alternatively an author can just type the text in and use indentation and font styles to distinguish it from the body of the text. By far the best solution however is to use a code highlighter of some sort that will render the code into a readable format with the additional benefits of copy-to-clipboard, line numbers and colour coding. One popular tool for this job is Syntax Highlighter by Alex Gorbatchev. This uses JavaScript to render the code on the client in a very readable format (see below). It integrates well with the WordPress blog engine and there are several Windows Live Writer plug-ins for it, making it easy to add code snippets to your posts.

Having decided on using Syntax Highlighter for my blog I hit a problem. I’m hosting on WordPress.com and therefore do not have the access required to add it to my site. I found a lot of posts on the web discussing how to integrate it with WordPress, but these are for self-hosted WordPress.org blogs. Eventually however, to my delight, I found that the WordPress.com developers have already integrated Syntax Highlighter themselves. There is a list of supported languages and details on how to use it here. To activate it you just wrap your code inside special tags. Excellent!
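For example (assuming the [sourcecode] shortcode form documented by WordPress.com, with the language passed as a parameter), a C# snippet would be posted wrapped like this:

[sourcecode language="csharp"]
public static string Greet(string name)
{
    return "Hello " + name;
}
[/sourcecode]

The plug-in described in the next section simply wraps whatever code you give it in these tags.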

Writing a Windows Live Writer Plug-in (Part One)

My next thought though was that I needed to integrate this into Windows Live Writer via a plug-in. I couldn’t find a WordPress.com-specific plug-in, so I fired up MSDN and looked at coding one. All the information you need is on MSDN here.

There are two types of plug-in: Simple and Smart. As I just needed to add a block of code text surrounded by special tags I figured I would implement the simple type. Simple’s good! Right?

To start with I needed to create a new class library project, reference the Live Writer API (windowslive.writer.api.dll), and add a new class which derives from ‘ContentSource’ and has an ‘InsertableContentSourceAttribute’ attribute.

[InsertableContentSourceAttribute("Highlighted Source Code")]
public class MyPlugin : ContentSource
{
}


Next override the CreateContent method on the base class. This is the method that gets called when the user clicks to activate our plug-in to insert some content. It returns a DialogResult and has a string parameter passed by-reference called ‘newContent’. This string parameter is how you set the text to be inserted into the blog post.  In the code example shown below I am displaying a form for the user to enter the code block to insert. The form adds the required WordPress.com wrapper text and then the ‘newContent’ string is set to be the whole text content (code plus wrapper text):

public override System.Windows.Forms.DialogResult CreateContent(System.Windows.Forms.IWin32Window dialogOwner, ref string newContent)
{
    // Show the code entry form; on OK the wrapped snippet becomes the content
    // that Live Writer inserts into the post.
    using (codeEntryForm entryform = new codeEntryForm())
    {
        entryform.StartPosition = FormStartPosition.CenterParent;
        DialogResult result = entryform.ShowDialog(dialogOwner); 
        if (result == DialogResult.OK)
        {
            newContent = entryform.GetData();   // code plus WordPress.com wrapper tags
        }                
        return result;
    }
} 


In addition to overriding the CreateContent method the WriterPluginAttribute needs to be set on the class. This basically provides Live Writer with information about your plug-in. For an explanation of the properties see here.

[WriterPlugin (
        "BF15A85A-9668-480d-9FC2-EC5C16FC140D",
        "WordPress.com Syntax Highlighter Plugin",
        ImagePath = "Images.image.bmp",
        PublisherUrl = "http://www.richhewlett.com/",
        Description = "Inserts WordPress.com code highlight tags around code snippets",
        HasEditableOptions=false)
] 

Once done, compile it up and copy the assembly to the Plugins folder of your Windows Live Writer installation (e.g. C:\Program Files\Windows Live\Writer\Plugins). When the application starts all the plug-ins in that folder get loaded too. I now had my Syntax Highlighter plug-in, and it worked. I could insert code into my blog post and upload it to the server for WordPress to display using Syntax Highlighter.

Job done then, hey? Well not quite. As I had coded this as a simple ContentSource plug-in, once the code snippet had been inserted into the post editor it was treated as plain text (just as if I had typed it by hand into the editor). This meant that if I flipped between the Editor and Source windows, Live Writer converted my code text to use escape characters, so my snippet’s quotes and ampersands etc. were converted to their HTML-escaped alternatives, and WordPress.com would display them without converting them back.

Writing a Windows Live Writer Plug-in (Part Two)

In order to resolve the issue of Live Writer treating my code snippets as regular text I went back to MSDN and found that the SmartContentSource plug-in type provides more control over the HTML that gets sent for publishing; content added this way is not treated as regular text. Building this type of plug-in is only slightly more work than the plain ContentSource type.

Firstly I needed to change the plug-in class (from above) to derive from SmartContentSource instead of ContentSource.

[InsertableContentSourceAttribute("Highlighted Source Code")]
public class Plugin : SmartContentSource
{
}    

Then we override the CreateContent method as we did before. Live Writer creates an ISmartContent object and passes it in for us to update with the content the user creates. As before, this example shows a dialog box for the user to key in the code snippet, but this time the data is set within the ISmartContent properties.

public override System.Windows.Forms.DialogResult CreateContent(IWin32Window dialogOwner, ISmartContent newContent)
{
    using (CodeEntryForm entryform = new CodeEntryForm())
    {
        entryform.StartPosition = FormStartPosition.CenterParent;
        DialogResult result = entryform.ShowDialog(dialogOwner); 
        if (result == DialogResult.OK)
        {
            // Store the combined output plus the raw parts so the snippet can be edited later.
            newContent.Properties.SetString("OutputHTML", entryform.GetData());
            newContent.Properties.SetString("RawCode", entryform.GetRawCodeSnippet());
            newContent.Properties.SetString("SelectedLanguage", entryform.GetSelectedLanguage());                    
        }
        return result;
    }
} 


N.B. The reason I’m setting the ‘RawCode’ and ‘SelectedLanguage’ properties as well as the combined output is so that later we can re-display the dialog to the user pre-populated.

In order to ensure that the correct text is being passed to the blog engine for publishing we override the GeneratePublishHtml method. The ISmartContent object passed in has the original data set as one of its properties so in this example I just return that.

public override string GeneratePublishHtml(ISmartContent content, IPublishingContext publishingContext)
{
    return content.Properties["OutputHTML"].ToString();
} 

This gets around the original problem that I experienced with the first ContentSource plug-in. SmartContentSource plug-ins also benefit from being editable via an editor view. This means that when an inserted code snippet is selected in the Editor in Live Writer we can display our own properties bar on the right-hand side. To use this feature we need to add a UserControl to our project and make it derive from SmartContentEditor. We then add relevant controls to it to interact with our content (i.e. the code snippet). In my case I just added a link label control to re-launch the dialog box so the user can modify the code within the currently selected content (code snippet) in the editor. Once complete, override the CreateEditor method and return our own UserControl as a SmartContentEditor object.

public override SmartContentEditor CreateEditor(ISmartContentEditorSite editorSite)
{
    return new MySmartContentEditor();
} 
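As a rough sketch, such an editor control might look something like this (assuming the SelectedContent property and OnContentEdited method exposed by the SmartContentEditor base class, and reusing the CodeEntryForm from earlier; SetData is a hypothetical helper for pre-populating the form):

// Assumes: using System; using System.Windows.Forms; using WindowsLive.Writer.Api;
public class MySmartContentEditor : SmartContentEditor
{
    private readonly LinkLabel editLink = new LinkLabel();

    public MySmartContentEditor()
    {
        editLink.Text = "Edit code snippet...";
        editLink.Click += OnEditClicked;
        Controls.Add(editLink);
    }

    private void OnEditClicked(object sender, EventArgs e)
    {
        using (CodeEntryForm entryform = new CodeEntryForm())
        {
            // SetData is a hypothetical helper to pre-populate the form from the stored properties.
            entryform.SetData(SelectedContent.Properties.GetString("RawCode", ""),
                              SelectedContent.Properties.GetString("SelectedLanguage", ""));
            if (entryform.ShowDialog(this) == DialogResult.OK)
            {
                SelectedContent.Properties.SetString("OutputHTML", entryform.GetData());
                SelectedContent.Properties.SetString("RawCode", entryform.GetRawCodeSnippet());
                SelectedContent.Properties.SetString("SelectedLanguage", entryform.GetSelectedLanguage());
                OnContentEdited();   // tell Live Writer the selected content has changed
            }
        }
    }
}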

An important point is that only one instance of the plug-in is used in Live Writer for all instances of content being inserted. For example, you may add multiple code snippets into one blog post, but only one instance of your plug-in will be created and reused; therefore you cannot store any state information specific to an individual content item (i.e. code snippet) within the plug-in. That’s why the data is always passed into the methods we have been overriding.

Compile it up and copy the assembly to the Plugins folder of your Windows Live Writer installation (e.g. C:\Program Files\Windows Live\Writer\Plugins) to test. This time the content added using SmartContentSource is treated as special and is not converted as you flip between the Editor and Source views.

To extend the plug-in further, it’s possible to set the ‘WriterPlugin’ attribute property ‘HasEditableOptions’ to true and override the ‘EditOptions’ method to display an options dialog to the user. This means that you can let the user modify settings for your plug-in.
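A minimal sketch of that override (assuming EditOptions receives the owner window in the same way CreateContent does; PluginSettingsForm is a hypothetical dialog of your own, and persisting the chosen values is left out here):

public override void EditOptions(IWin32Window dialogOwner)
{
    // PluginSettingsForm is a hypothetical settings dialog; store its values as required.
    using (PluginSettingsForm settings = new PluginSettingsForm())
    {
        settings.ShowDialog(dialogOwner);
    }
}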

As you can see the plug-in model for Windows Live Writer is very simple and productive. It is surprisingly easy to quickly produce useful plug-ins.

If you want to make use of this plug-in for making it easier to add highlighted code to your WordPress.com blog then feel free to download it here.

Bing (without the Bling)

You want to try out Microsoft’s new search engine, but you don’t quite trust it yet so you’re worried it might miss something which means you search on both – right? Well check out http://www.google-vs-bing.com. This allows you to search both at the same time and displays the results side by side in a split window.

Also Bing by default displays a large background image on its home page. Microsoft argue that this does not slow down a user’s search experience, but if you like a plain look then it is possible to turn it off.

IIS Host Headers, Secure Bindings, Wix & Custom Actions

Whilst trying to host multiple WCF services in IIS, each within its own web site, I discovered an issue with secure host headers and IIS6. The requirements were to securely install each WCF service inside its own web site, with its own application and application pool instance. In order to set up multiple sites I needed to use either multiple IP addresses, different ports on the one IP address, or ‘host headers’. The chosen solution was ‘host headers’ and I set these up in IIS using IIS Manager (inetmgr). This is covered here:

http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/e7a21b1f-ab13-47f2-8c61-b09cf14a7cb3.mspx

However the UI only supports setting up unsecure bindings and not secure ones. After checking on TechNet I found this:

http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/596b9108-b1a7-494d-885d-f8941b07554c.mspx?mfr=true

It turns out that whilst IIS6 does support the use of host headers on secure bindings, the feature was not added in time for the IIS admin UI to reflect it. The article tells us to use the handy IIS admin script “adsutil.vbs”, which updates the IIS metabase XML file, as detailed here:

http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/596b9108-b1a7-494d-885d-f8941b07554c.mspx?mfr=true

The problem with this approach though is that the script rather annoyingly requires the Site Identifier of the web site you wish to update. This is easy to obtain interactively by opening IIS Manager and clicking on ‘Web Sites’, but I needed to set up my site via an MSI (built with Wix). The installation script will not know the identifier of the site as it could vary on each install, therefore I needed to be able to resolve a web site name to an identifier before calling adsutil.vbs. After some Googling I found this excellent post by David Wang:

http://blogs.msdn.com/david.wang/archive/2005/07/13/HOWTO_Enumerate_IIS_Website_Configuration.aspx

Here he explains how to enumerate through IIS entities to find the one you want. If you check out the comments there is a post by ‘Dave’ where he has included a modified script. This script iterates over the web sites and finds the one matching the right name, then calls the “adsutil.vbs” script to update the metabase using the correct site identifier. I copied this script and trimmed it down for my own purposes and now had a viable script for my installer.
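For illustration, the same name-to-identifier lookup can also be expressed in C# against the IIS ADSI provider (a sketch assuming IIS6 and a reference to System.DirectoryServices; the site name is just an example):

// Sketch: resolve an IIS6 web site's friendly name to its numeric site identifier.
using System;
using System.DirectoryServices;

class SiteIdLookup
{
    static string FindSiteId(string siteName)
    {
        using (DirectoryEntry w3svc = new DirectoryEntry("IIS://localhost/W3SVC"))
        {
            foreach (DirectoryEntry child in w3svc.Children)
            {
                // Web sites appear as IIsWebServer entries; ServerComment holds the display name.
                if (child.SchemaClassName == "IIsWebServer" &&
                    string.Equals((string)child.Properties["ServerComment"].Value, siteName,
                                  StringComparison.OrdinalIgnoreCase))
                {
                    return child.Name;   // e.g. "1", the identifier adsutil.vbs needs
                }
            }
        }
        return null;
    }

    static void Main()
    {
        Console.WriteLine(FindSiteId("ServiceXv1Site") ?? "Site not found");
    }
}

The VBScript used in the installer walks the same IIS://localhost/W3SVC hierarchy before handing the identifier to adsutil.vbs.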

Using a VBScript within an MSI is not recognised as best practice; a more robust solution would be to code a Custom Action in native code (there are hidden dangers in Custom Actions written in managed code). However it is a viable option and in this instance it fitted my requirements perfectly (for many reasons outside the scope of this article).

Now I’m a fan of Wix and am always impressed with how much it can do out of the box without customisation. In this case however I discovered that it is not that easy to just run a command or script during install. Don’t get me wrong, it is possible and there are a huge number of options as to when and how to run custom actions, but I was expecting it to be easier than it turned out to be. There are many decent blog posts on the subject of Custom Actions in Wix and I’m not going to go into much detail here; in the end my solution looked like this:

<!--Set up IIS Bindings-->
<InstallExecuteSequence>
   <Custom Action='CAIISSecureBindingsServiceX' Before='InstallFinalize'>
      NOT Installed
   </Custom>
   <Custom Action='CAIISSecureBindingsServiceY' Before='InstallFinalize'>
      NOT Installed
   </Custom>
</InstallExecuteSequence>

This slots our Custom Actions into the running order of the MSI installation sequence of events. It asks for the actions to be run just before the installation completes. The ‘NOT Installed’ condition ensures that the actions only fire on installation and not on uninstall.

We then define a Property holding the path to the exe we want to run, in this case CScript.exe (to run our VBScript). After that we need to define each custom action. The ExeCommand tells it what to pass to the CScript executable as parameters. The Execute='deferred' option is important as it means that the scripts will not run on the first pass through of the MSI installation. The MSI installation process involves running through all the steps without committing them; if that runs OK it then does the steps for real. If we ran the script before the web sites had been committed to IIS the script would fail. Setting it to 'deferred' means it is left out of the initial run-through and executed in the actual 'doing' stage. For more information on the MSI sequence of events check this out (http://www.advancedinstaller.com/user-guide/standard-actions.html). I found that impersonation needs to be set to 'yes' for this to work correctly.

<!-- Define all the custom actions and the properties required -->
<Property Id='ScriptEngine'>C:\Windows\System32\CScript.exe</Property>
<CustomAction
   Id='CAIISSecureBindingsServiceX'
   Property='ScriptEngine'
   ExeCommand='[INSTALLLOCATION]UpdateIISBindings.vbs ServiceXv1Site X.v1.default'
   Execute='deferred'
   Impersonate='yes'
   Return='ignore'/>
<CustomAction
   Id='CAIISSecureBindingsServiceY'
   Property='ScriptEngine'
   ExeCommand='[INSTALLLOCATION]UpdateIISBindings.vbs ServiceYv1Site Y.v1.default'
   Execute='deferred'
   Impersonate='yes'
   Return='check'/>

I have set up multiple Custom Actions although only one is really needed. In my implementation the VBS file is merely a helper script that uses the information passed in as parameters, and I have multiple custom actions that each call the script separately, passing in different parameters (web site name and host header value). It would be neater to have the VBS file contain all the logic for looping over the sites and setting up the bindings for each one, so that only one Custom Action would be needed to run it. The reason I have not done that here is that I didn’t want the IIS implementation details leaking out of the Wix project. With the multiple-actions approach the website names and host header values are not stored in the script file and are held only in this Wix project.

Full Trust For Applications Running On Remote Share

A large number of .Net applications in enterprise environments are run directly from a file share on a server within the local corporate intranet. This was usually only achieved after editing the client machine’s security policy. However, as of .Net 3.5 SP1 this is no longer an issue, as assemblies accessed from a local intranet share are granted full trust. There are some restrictions, for example it only applies to assemblies loaded from the same directory as the target executable. Apparently this restriction has been removed in .Net 4 though. Although this has been around since the beta of 3.5 SP1 I wasn’t aware of it and thought it was worth sharing. Read more about it here.