Installing PowerShell on Windows Home Server

Whether you manage thousands of Windows boxes in an enterprise environment or you just want a more powerful shell with which to manage the backup scripts on your PC, Windows PowerShell is an excellent free tool at your disposal. PowerShell V2 comes pre-installed in Windows 7, but how do you get it up and running on your Windows Home Server?

You can install V1 of PowerShell via Windows Update, as it is an optional update for Windows Server 2003 (SP2), which is what a Windows Home Server is underneath. The instructions to do this are detailed here. Basically, you go to Automatic Updates in the Control Panel, view the list of ‘optional’ updates and it should be there for you to choose.

What about PowerShell V2? Well, a recent post on the PowerShell Team’s blog suggests that V2 will also be made available as an optional update for Windows Server 2003, replacing the V1 option. The same instructions as above should apply.

If the update isn’t available to you yet, or you want to install V2 the manual way, then you need to download the Windows Management Framework Core (KB968930) package, which is detailed here. The Windows Server 2003 download relevant for Home Servers is here. It’s a very simple install and, once complete, you can access PowerShell V2 from the Start > Programs > Accessories > Windows PowerShell menu item.


For more information on Windows PowerShell check out “Getting Started with PowerShell” on MSDN, or launch the PowerShell ISE environment from the PowerShell Start Menu folder (detailed above) and press F1. The bundled help files are surprisingly good and will soon get you up to speed. I intend to post more on PowerShell as I migrate all my server backup scripts over from standard batch files to PowerShell.

A Useful Entity Translator Pattern / Object Mapper Template

In this post I’m going to cover the Entity Translator pattern and provide a useful base class for implementing this pattern in your application.

The ‘Entity Translator Pattern’ (defined here and here) covers situations where you need to move object state from one object type to another, and where there are good reasons why the two types are not linked and do not share an inheritance hierarchy.

There are many occasions when you need to convert data from one type to another. Take DTOs (Data Transfer Objects), for example, which serve the purpose of being light and easy to pass between the layers of your application but often end up needing to be converted into a Business Model or Domain Object. The use of Data Contract objects in WCF (Windows Communication Foundation) is a common example of where you need this translation from DTOs to Business Model objects, and the Entity Translator pattern is a good fit. After carefully layering your application design into loosely coupled layers, you often don’t want WCF created types, or entity objects generated from an ORM (Object Relational Mapper) tool, leaking out of their respective layers. You may also have a requirement to use a ViewModel pattern requiring state transfer from the ViewModel to the master model object. All of these examples share the same problem: how do you get the state from object A into object B when the two objects are of different types?

Now of course the most simplistic method is to hand-crank the conversion at the point you need it. For example:

// assume a populated business object (GetCustomer is a hypothetical fetch)
Customer customer = GetCustomer();
CustomerDTO dto = new CustomerDTO();

// copy the state across by hand, property by property
dto.Name = customer.Name;
dto.Id = customer.Id;

Whilst this approach is acceptable, it potentially means writing a lot of code and can lack structure in its use. Other developers working in your team may end up repeating your code elsewhere. Additionally, there may be a lack of separation of concerns if this translation code sits with the bulk of your logic, making unit testing more complex.

An alternative (keyboard friendly) approach to the problem is to use a tool that can automate the translation of values from one object to another, such as AutoMapper. For examples of how to use AutoMapper check out this post and this post, oh and this one. AutoMapper can automatically (through reflection) scan the source object and target object and copy the property values from one to the other (so dto.Name is automatically copied to customer.Name). This works painlessly where the property names match, but if they don’t then some mapping has to be configured. The advantage of this approach is that it can save a lot of code having to be written and tested. The downsides are performance and reduced readability of your mapping code. If you have a lot of mapping code to write and you have control over both object structures under translation then this can be a very productive approach.
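
As a rough sketch of the automated approach (using AutoMapper’s classic static API of the time; GetCustomer is a hypothetical source of a populated object):

using AutoMapper;

// configure the mapping once, typically at application start-up;
// properties with matching names (Name, Id) are mapped automatically
Mapper.CreateMap<Customer, CustomerDTO>();

// then, wherever the translation is needed
Customer customer = GetCustomer(); // hypothetical: fetch a populated business object
CustomerDTO dto = Mapper.Map<Customer, CustomerDTO>(customer);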

Of course there is always a middle ground, where you want to add some structure to your translation code, perhaps need the performance of specific ‘object to object’ coding, or have complex translation logic. For these situations consider the following implementation, courtesy of Microsoft Patterns & Practices.

Microsoft’s Smart Client Software Factory includes an Entity Translator service in its Infrastructure.Library project, which enables you to write custom translators and then register them with the Entity Translator service. It provides two base classes: ‘EntityMapperTranslator’ and ‘BaseTranslator’. The BaseTranslator class implements the ‘IEntityTranslator’ interface.

BaseTranslator:

public abstract class BaseTranslator : IEntityTranslator
{
  public abstract bool CanTranslate(Type targetType, Type sourceType);
  public bool CanTranslate<TTarget, TSource>()
  {
    return CanTranslate(typeof(TTarget), typeof(TSource));
  } 

  public TTarget Translate<TTarget>(IEntityTranslatorService service, object source)
  {
    return (TTarget)Translate(service, typeof(TTarget), source);
  } 

  public abstract object Translate(IEntityTranslatorService service, Type targetType, object source);
} 

EntityMapperTranslator:

public abstract class EntityMapperTranslator<TBusinessEntity, TServiceEntity> : BaseTranslator
{ 

  public override bool CanTranslate(Type targetType, Type sourceType)
  {
    return (targetType == typeof(TBusinessEntity) && sourceType == typeof(TServiceEntity)) ||
                (targetType == typeof(TServiceEntity) && sourceType == typeof(TBusinessEntity));
  } 

  public override object Translate(IEntityTranslatorService service, Type targetType, object source)
  {
    if (targetType == typeof(TBusinessEntity))
      return ServiceToBusiness(service, (TServiceEntity)source);
    if (targetType == typeof(TServiceEntity))
      return BusinessToService(service, (TBusinessEntity)source);
    throw new EntityTranslatorException();
  } 

  protected abstract TServiceEntity BusinessToService(IEntityTranslatorService service, TBusinessEntity value);

  protected abstract TBusinessEntity ServiceToBusiness(IEntityTranslatorService service, TServiceEntity value); 

} 

If we want to use this as a general template for entity translation in a non-SCSF solution, we can remove the detail around the IEntityTranslator service, assuming that we’re not building a translator service but purely writing individual translators. Our classes then look more like this:

BaseTranslator:

public abstract class BaseTranslator 
{
  public abstract bool CanTranslate(Type targetType, Type sourceType); 

  public bool CanTranslate<TTarget, TSource>()
  {
    return CanTranslate(typeof(TTarget), typeof(TSource));
  } 

  public abstract object Translate(Type targetType, object source); 

} 


EntityMapperTranslator:

public abstract class EntityMapperTranslator<TBusinessEntity, TServiceEntity> : BaseTranslator
{
  public override bool CanTranslate(Type targetType, Type sourceType)
  {
    return (targetType == typeof(TBusinessEntity) && sourceType == typeof(TServiceEntity)) ||
                (targetType == typeof(TServiceEntity) && sourceType == typeof(TBusinessEntity));
  } 

  public TTarget Translate<TTarget>(object source)
  {
    return (TTarget)Translate(typeof(TTarget), source);
  }

  public override object Translate(Type targetType, object source)
  {
    if (targetType == typeof(TBusinessEntity))
    {
      return ServiceToBusiness((TServiceEntity)source);
    }

    if (targetType == typeof(TServiceEntity))
    {
      return BusinessToService((TBusinessEntity)source);
    }

    throw new System.ArgumentException("Invalid type passed to Translator", "targetType");
  }

   protected abstract TServiceEntity BusinessToService(TBusinessEntity value); 

   protected abstract TBusinessEntity ServiceToBusiness(TServiceEntity value);
} 


We could refactor this further by removing the BaseTranslator class completely and just using EntityMapperTranslator as the base class for our translators. The BaseTranslator class as it stands does provide some benefit if we can foresee circumstances where we want to follow a standard translation pattern for more than just entity translation; we could create a DataMapperTranslator, for example, that would derive from BaseTranslator and differ from EntityMapperTranslator in its translation specifics. Removing the BaseTranslator class, however, results in an EntityMapperTranslator class like this:

public abstract class EntityMapperTranslator<TBusinessEntity, TServiceEntity> 
{
  public bool CanTranslate(Type targetType, Type sourceType)
  {
    return (targetType == typeof(TBusinessEntity) && sourceType == typeof(TServiceEntity)) ||
           (targetType == typeof(TServiceEntity) && sourceType == typeof(TBusinessEntity));
  } 

  public TTarget Translate<TTarget>(object source)
  {
    return (TTarget)Translate(typeof(TTarget), source);
  } 

  public object Translate(Type targetType, object source)
  {
    if (targetType == typeof(TBusinessEntity))
    {
      return ServiceToBusiness((TServiceEntity)source);
    }
    if (targetType == typeof(TServiceEntity))
    {
      return BusinessToService((TBusinessEntity)source);
    }
    throw new System.ArgumentException("Invalid type passed to Translator", "targetType");
  } 

  protected abstract TServiceEntity BusinessToService(TBusinessEntity value);

  protected abstract TBusinessEntity ServiceToBusiness(TServiceEntity value);
} 


This is now a neat and simple template from which we can code our custom translators. To translate between CustomerDTO and Customer we can create a translator that derives from EntityMapperTranslator, and we only need to code the two translate methods (one for each direction of the translation).

public class CustomerTranslator : EntityMapperTranslator<BusinessModel.Customer, DataContract.CustomerDTO>
{
  protected override DataContract.CustomerDTO BusinessToService(BusinessModel.Customer value)
  {
    DataContract.CustomerDTO customerDTO = new DataContract.CustomerDTO();
    customerDTO.Id = value.Id;
    customerDTO.Name = value.Name;
    customerDTO.PostCode = value.PostCode;
    return customerDTO;
  } 

  protected override BusinessModel.Customer ServiceToBusiness(DataContract.CustomerDTO value)
  {
    BusinessModel.Customer customer = new BusinessModel.Customer();
    customer.Id = value.Id;
    customer.Name = value.Name;
    customer.PostCode = value.PostCode;
    return customer;   
  }
}
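
Using the translator is then straightforward; a quick sketch, assuming a populated customerDTO instance to hand:

CustomerTranslator translator = new CustomerTranslator();

// DTO to business object...
BusinessModel.Customer customer = translator.Translate<BusinessModel.Customer>(customerDTO);

// ...and back again
DataContract.CustomerDTO dto = translator.Translate<DataContract.CustomerDTO>(customer);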


This is just a simple example of course, as the translation logic could get quite complex. Don’t forget that the translator classes could also make use of the automated tools described earlier, such as AutoMapper. Calling AutoMapper inside a translator lets you combine manual and automated translation code, saving tedious typing (and those dreaded copy and paste errors) while still giving a simple structure to your manual translation logic where that approach is preferable. Either way, by creating custom translators we have encapsulated all of our translation code into separate, reusable and testable classes that all share a developer friendly common interface.
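
As a sketch of that combined approach (again assuming AutoMapper’s classic static Mapper API with the maps configured at start-up; the hypothetical PostCode tidy-up stands in for whatever logic is too awkward to automate):

public class CustomerAutoTranslator : EntityMapperTranslator<BusinessModel.Customer, DataContract.CustomerDTO>
{
  protected override DataContract.CustomerDTO BusinessToService(BusinessModel.Customer value)
  {
    // let AutoMapper copy the matching properties...
    DataContract.CustomerDTO dto = Mapper.Map<BusinessModel.Customer, DataContract.CustomerDTO>(value);
    // ...then hand-code anything that can't be automated
    dto.PostCode = value.PostCode.Trim().ToUpper();
    return dto;
  }

  protected override BusinessModel.Customer ServiceToBusiness(DataContract.CustomerDTO value)
  {
    return Mapper.Map<DataContract.CustomerDTO, BusinessModel.Customer>(value);
  }
}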

In summary then, we’ve covered a simple Entity Translator template, based on Microsoft’s Smart Client Software Factory offering, and shown how it can be used in your application to add structure to your object mapping/translation code.


Additional Useful Links:

http://code.google.com/p/translation-tester/

http://icoder.wordpress.com/category/uncategorized/

http://c2.com/cgi/wiki?TranslatorPattern

http://msdn.microsoft.com/en-us/library/cc304747.aspx

http://code.google.com/p/translation-tester/source/browse/#svn/trunk/TranslationTester/TranslationTester%3Fstate%3Dclosed

New Intel Core i5 Desktop Build

After my trusty Pentium 4 home build finally bit the dust I’ve invested in a new desktop. Whilst performance was important for this build (as it’s my main desktop) it had to be cost effective. Luckily Intel’s new Core i3/i5/i7 CPUs are now mainstream and getting excellent reviews, making them the ideal choice for this project. Not only are they more powerful and more energy frugal than their predecessors, there is plenty of choice in their ranges to suit most budgets. Initially the i3 seemed to offer the best value for my requirements, but some shopping around quickly showed that the i5s are available for only a few pounds more. What’s more, the i5s and i7s come with the ‘Turbo Boost’ feature, which looks good in theory and benefits from a snappy name. In the end I decided on the Intel Core i5 750, which gets good reviews (Best Buy, Computer Shopper May 2010) and is available at almost i3 prices. It’s a 2.66 GHz quad core that includes ‘Turbo Boost’ but doesn’t include the built-in graphics chip that many Core i series CPUs do (not a problem if you’re expecting to include a dedicated graphics card).

I combined the CPU with a BioStar TH55B HD motherboard (Best Buy, Computer Shopper May 2010) and 4GB of Kingston ValueRAM (2x 2GB). The motherboard comes with four slots for RAM, so two 2GB sticks in a Dual Channel configuration should be fine for now, and it leaves two extra slots for a future RAM upgrade.

 

For a case I wanted something that looked good and had plenty of space for future expansion and good air flow. The Antec Three Hundred is an excellent midi tower case with a smart look and some quality features for an excellent price. For power I’ve gone for the OCZ ModXStream Pro 500W, which seems to be well made and came with a good selection of quality power cables (and a cable bag?). I have yet to measure the power consumption of this build but I expect it to be fairly low.

As I’m a big fan of Windows Home Server I tend to centralise my data storage onto the server, resulting in no real need for a large capacity hard drive. I couldn’t justify the cost of an SSD, so for this build I have opted for a new WD Caviar Blue 250GB SATA hard drive and have thrown in the 200GB Seagate Barracuda drive from my old PC as additional storage. I’ve also recycled the DVD-RW and DVD-ROM drives from my old PC.

Ok so that’s the good stuff, now what’s the ugly duckling in the build? Well, as I don’t use my desktop for gaming I have no need for a meaty video card, hence my decision to go with a budget card (the GeForce 210 512MB DDR2 PCI Express). Performance of this card seems fine for desktop use, but I am currently suffering a random glitch whereby the display sometimes seems to duplicate and not refresh correctly. This could be the card or the Nvidia Windows 7 64-bit drivers. Either way it is annoying and will result in either a hunt for new drivers or a new card. Other than this problem the build has been plain sailing and I would recommend any of these components, especially the i5, which has so far shown itself to be powerful while running at steady temperatures.

Lastly I took the opportunity from this build to move to 64-bit, a move I’ve been quietly resisting for a while. Whilst I needed a 64-bit OS to make use of the 4GB of RAM in this build, I was nervous at the thought of not being able to source drivers for older peripherals. Whilst many people recommend running 64-bit, few could give me a good reason that didn’t involve irregular scenarios of needing to address large amounts of memory etc. Also, whilst Microsoft insists that hardware vendors now provide 64-bit drivers in order to qualify for the Windows logo, I suspect that these 64-bit drivers have historically undergone less testing in the wild due to the lower proportion of 64-bit Windows PCs. These days, however, most PCs sold seem to include 64-bit Windows 7 and so it seems a safe time to take the plunge. In the end my experience has been very positive, with my peripherals and software mostly installing fine (I had a few issues with an old Logitech webcam), and I’m pleased with the 64-bit experience.

Raising Multiple Exceptions with AggregateException

There are occasions where you are aware of multiple exceptions within your code that you want to raise together in one go. Perhaps your code makes a service call to a middleware orchestration component that potentially returns multiple exceptions detailed in its response, or perhaps a batch processing task dealing with many items in one process requires you to collate all exceptions until the end and then throw them together.

Let’s look at the batch scenario in more detail. In this situation, if you raised the first exception that you found it would exit the method without processing the remaining items. Alternatively you could store the exception information in a variable of some sort and, once all items are processed, use the information to construct an exception and throw it. Whilst this approach works, there are some drawbacks. There is extra effort required to create a viable storage container to hold the exception information, and this may mean modifying existing code to not throw an exception but instead log the details in this new ‘exception detail helper class’. This solution also lacks the additional benefits you get from creating an exception at that point in time, for example the numerous intrinsic properties of Exception objects that provide valuable additional context to support the exception message. Even when all the relevant information has been collated into a single exception class, you are still left with one exception holding all that information, when you may need to handle the exceptions individually and pass them off to existing error handling frameworks which rely on a type deriving from Exception.

Luckily, included in .Net Framework 4.0 is the simple but very useful AggregateException class, which lives in the System namespace (within mscorlib.dll). It was created to be used with the Task Parallel Library and its use within that library is described on MSDN here. Don’t think that is its only use though, as it can be put to good use within your own code in situations like those described above where you need to throw multiple exceptions, so let’s see what it offers.

The AggregateException class is an exception type, inheriting from System.Exception, that acts as a wrapper for a collection of child exceptions. Within your code you can create instances of any exception based type and add them to the AggregateException’s collection. The idea is a simple one, but the AggregateException’s beauty comes in the implementation of this simplicity. As it is a regular exception class it can be handled in the usual way by existing code, but also as a special exception collection by the specific code that actually cares about all the exceptions nested within its bowels.

The class accepts the child exceptions on one of its seven constructors and then exposes them through its InnerExceptions property. Unfortunately this is a read-only collection and so it is not possible to add inner exceptions to the AggregateException after it has been instantiated (which would have been nice), so you will need to store your exceptions in a collection until you’re ready to create the AggregateException and throw it:

// create a collection container to hold exceptions
List<Exception> exceptions = new List<Exception>();

// do some stuff here ........

// we have an exception with an innerexception, so add it to the list
exceptions.Add(new TimeoutException("It timed out", new ArgumentException("ID missing")));

// do more stuff .....

// Another exception, add to list
exceptions.Add(new NotImplementedException("Something's not implemented"));

// all done, now create the AggregateException and throw it
AggregateException aggEx = new AggregateException(exceptions);
throw aggEx;

The method you use to store the exceptions is up to you, as long as you have them all ready at the time you create the AggregateException. There are seven constructors, allowing you to pass combinations of: nothing, a string message, and collections or arrays of inner exceptions.
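
For example, here are two of the overload combinations (reusing the exceptions list from the earlier snippet):

// a message plus a collection of inner exceptions
AggregateException aggEx1 = new AggregateException("Batch run failed", exceptions);

// a params array of inner exceptions, no message
AggregateException aggEx2 = new AggregateException(new TimeoutException(), new NotImplementedException());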

Once created you interact with the class as you would any other exception type:

try
{
   // do task
}
catch (AggregateException ex)
{
   // handle it 
}

This is key as it means that you can make use of existing code and patterns for handling exceptions within your (or a third parties) codebase.

In addition to the usual Exception members the class exposes a few custom ones. The typical InnerException property is there for compatibility and appears to return the first exception added to the AggregateException via the constructor, so in the example above it would be the TimeoutException instance. All of the child exceptions are exposed via the InnerExceptions read-only collection property.
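
Here is a minimal sketch of handling the children individually (RunBatch is a hypothetical method that throws the AggregateException built earlier):

try
{
   RunBatch();
}
catch (AggregateException ex)
{
   // InnerException returns only the first child; InnerExceptions returns them all
   foreach (Exception inner in ex.InnerExceptions)
   {
      Console.WriteLine("{0}: {1}", inner.GetType().Name, inner.Message);
   }
}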


The Flatten() method is another custom member that might prove useful if you find the need to nest AggregateExceptions as inner exceptions within other AggregateExceptions. The method will iterate the InnerExceptions collection and, if it finds AggregateExceptions nested as InnerExceptions, it will promote their child exceptions to the parent level. This is best seen in an example:

AggregateException aggExInner = 
          new AggregateException("inner AggEx", new TimeoutException());
AggregateException aggExOuter1 = 
          new AggregateException("outer 1 AggEx", aggExInner);
AggregateException aggExOuter2 = 
          new AggregateException("outer 2 AggEx", new ArgumentException());
AggregateException aggExMaster =
          new AggregateException(aggExOuter1, aggExOuter2);

If we create the structure of AggregateExceptions above, with inner exceptions of TimeoutException and ArgumentException, then the InnerExceptions property of the parent AggregateException (i.e. aggExMaster) shows, as expected, two objects, both of type AggregateException and both containing child exceptions of their own.


But if we call Flatten()…

AggregateException aggExFlatterX = aggExMaster.Flatten();

…we get a new AggregateException instance returned that still contains two objects, but this time the nested AggregateException wrappers have gone and we’re just left with the two child exceptions, TimeoutException and ArgumentException.


This is a useful feature for discarding the AggregateException containers (which are effectively just packaging) and exposing the real meat, i.e. the actual exceptions that have been thrown and need to be addressed.
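
In a handler this means you can flatten first and then deal only with the real exceptions; a sketch (RunNestedTasks is hypothetical):

try
{
   RunNestedTasks();
}
catch (AggregateException ex)
{
   // the AggregateException wrappers are discarded, leaving only the leaf exceptions
   foreach (Exception inner in ex.Flatten().InnerExceptions)
   {
      Console.WriteLine(inner.Message);
   }
}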

If you’re wondering how ToString() is implemented, the aggExMaster object in the examples above (without flattening) produces this:

System.AggregateException: One or more errors occurred. ---> System.AggregateException
: outer 1 AggEx ---> System.AggregateException: inner AggEx ---> 
System.TimeoutException: The operation has timed out.   --- End of inner exception 
stack trace ---  --- End of inner exception stack trace ---  --- End of inner exception 
stack trace ------> (Inner Exception #0) System.AggregateException: outer 1 AggEx ---> 
System.AggregateException: inner AggEx ---> System.TimeoutException: The operation
 has timed out.   --- End of inner exception stack trace ---   --- End of inner 
exception stack trace ------> (Inner Exception #0) System.AggregateException: inner
AggEx ---> System.TimeoutException: The operation has timed out.  --- End of inner 
exception stack trace ------> (Inner Exception #0) System.TimeoutException: The 
operation has timed out.<---<---<------> (Inner Exception #1) System.AggregateException
: outer 2 AggEx --- System.ArgumentException: Value does not fall within the expected
 range. --- End of inner exception stack trace ------> (Inner Exception #0) 
System.ArgumentException: Value does not fall within the expected range.

As you can see the data has been formatted in a neat and convenient way for readability, with separators between the inner exceptions.

In summary, this is a very useful class to be aware of and have in your arsenal, whether you are dealing with the Task Parallel Library or you just need to manage multiple exceptions. I like simple and neat solutions and to me this is a good example of that philosophy.

Free Icons For Your Application Within Visual Studio

If like me you’re always on the lookout for neat little icons and images to add to your shiny .Net applications then you might like to know that Visual Studio includes an image library for you to use. The library is located on your hard drive within the Visual Studio installation at:

For Visual Studio 2010: 
C:\Program Files\Microsoft Visual Studio 10.0\Common7\VS2010ImageLibrary\1033\VS2010ImageLibrary.zip

For Visual Studio 2008:
C:\Program Files\Microsoft Visual Studio 9.0\Common7\VS2008ImageLibrary\1033\VS2008ImageLibrary.zip

For more information check out Weston Hutchins’ blog post on the Visual Studio Blog.

Recommended Podcasts For .Net Developers

Keeping up with the latest trends, applications, techniques and platforms can be very difficult. The amount of excellent resources available out there on the web is phenomenal, so much so that there are not enough hours in the day to read or watch it all. One source that I find very useful, though, is podcasts. Listening to podcasts has become part of my daily commute to work and it can take the edge off the boring drive. I use my Nokia E71 to download and organise my podcasts and to listen to them in the car. Below is my current recommended list of podcasts for .Net developers:

- Hanselminutes
- StackOverflow Blog
- This Week On Channel 9
- DotNet Rocks
- RunAs Radio
- UK MSDN Flash Podcast for Windows Development
- Software Engineering Radio
- Herding Code
- CodeCast
- MSDN GeekSpeak
- PluralCast
- MSDN Ping
- MSDN 10-4

For any Windows Home Server enthusiasts I also recommend the Home Server Show.

Integrating WCF Services into Web Client Software Factory

For those of you unfamiliar with the Web Client Software Factory (WCSF), it is a very capable web application framework for building web forms based thin client applications. It was created as part of the Patterns & Practices offering from Microsoft (alongside the more well known Enterprise Library). It shares many concepts with its sister offering, the Smart Client Software Factory (SCSF), but its implementation is different and I find it easier to use and, sometimes more importantly for organisations, an easier transition for traditional web forms developers than ASP.NET MVC. It utilises the Model-View-Presenter (MVP) pattern nicely and I find it a useful framework within which to build web applications where an ASP.NET MVC approach may have been discounted. For more information on the WCSF check this link.

WCSF uses the ObjectBuilder framework to provide Dependency Injection services to its components. Views, Presenters and Modules can have access to a global (or module level) services collection, which traditionally contains the services (internal services, not external web services) that provide business logic or infrastructure support functionality. It is therefore important that any code within the web application can access this services collection (via Dependency Injection) to consume this shared functionality. The problem I’ve run into recently is how to allow WCF web services, exposed as part of the web application, to hook into the WCSF framework to consume these internal services. These web services need to be able to declare service dependencies on other objects and have those dependencies satisfied by the WCSF framework, just as happens for Views and Presenters.

I found that the Order Management WCSF Reference Implementation shows how to hook traditional ASMX Web Services into your WCSF Solution. Here is the implementation of the site’s ProductServiceProxy web service:

using System.Web; 
using System.Web.Services; 
using System.Web.Services.Protocols; 
using System.ComponentModel; 
using OrdersRepository.Interfaces.Services; 
using Microsoft.Practices.CompositeWeb; 
using OrdersRepository.BusinessEntities;

namespace WebApplication.Orders 
{ 
    [WebService(Namespace = "http://tempuri.org/")] 
    public class ProductServiceProxy : WebService 
    { 
        IProductService _productService;

        [ServiceDependency] 
        public IProductService ProductService 
        { 
            set { _productService = value; } 
            get { return _productService; } 
        }

        public ProductServiceProxy() 
        { 
            WebClientApplication.BuildItemWithCurrentContext(this);
        }

        [WebMethod] 
        public Product GetProduct(string sku) 
        { 
            return _productService.GetProductBySku(sku); 
        } 
    } 
}

The key line here is the call to WebClientApplication.BuildItemWithCurrentContext(this) within the constructor. This is how the ASMX web service can be built up by ObjectBuilder and therefore have its Dependency Injection requirements met. The rest of the class is typical ASMX and WCSF; for example, the ServiceDependency on the ProductService property is declared as normal.

If we look into the WCSF Source Code for BuildItemWithCurrentContext(this) we see how it works:

/// <summary> 
/// Utility method to build up an object without adding it to the container. 
/// It uses the application's PageBuilder and the CompositionContainer 
/// for the module handling the current request 
/// </summary> 
/// <param name="obj">The object to build.</param> 
public static void BuildItemWithCurrentContext(object obj) 
{ 
  IHttpContext context = CurrentContext; 
  WebClientApplication app = (WebClientApplication) context.ApplicationInstance; 
  IBuilder<WCSFBuilderStage> builder = app.PageBuilder; 
  CompositionContainer container = app.GetModuleContainer(context); 
  CompositionContainer.BuildItem(builder, container.Locator, obj); 
}

protected static IHttpContext CurrentContext 
{ 
  get { return _currentContext ?? new HttpContext(System.Web.HttpContext.Current); } 
  set { _currentContext = value; } 
}

The first line calls off to the CurrentContext property, where a new HttpContext is created based on the current HTTP context of the ASMX service’s session. The following lines get a reference to the WebClientApplication instance (WCSF’s version of an HttpApplication for your web app) and then access the composition container. BuildItem then does the heavy work of using ObjectBuilder to build up the service’s dependencies.

So this works nicely for ASMX services, but what about WCF services? Well, it is possible to follow the same approach and use the BuildItemWithCurrentContext method within the constructor of the WCF service, but we have to follow some additional steps too. If we just add the BuildItemWithCurrentContext(this) call to the constructor of our WCF service it will fail, as the HttpContext will always be null when accessed from within a WCF service.

ASP.NET and IIS-hosted WCF services play nicely together within a single application domain, sharing state and common infrastructure services, but HTTP runtime features do not apply to WCF. Features such as the current HttpContext, ASP.NET impersonation, HttpModule extensibility, and config based URL and file authorisation are not available under WCF. There are alternatives provided by WCF for these features, but these don’t help with hooking into the ASP.NET specific WCSF. This is where WCF’s ASP.NET compatibility mode saves the day. By configuring your WCF service to use ASP.NET compatibility you affect the side by side configuration so that WCF services engage fully in the HTTP request lifecycle and thus can access resources as per ASPX pages and ASMX web services. This provides the WCF service with a reference to the current HttpContext and allows the WCSF to function correctly. It must be said that there are some drawbacks to using ASP.NET compatibility mode, for example the protocol must be HTTP and the WCF service can’t be hosted outside of IIS, but these will usually be acceptable when you’re wanting to add the WCF service to a WCSF application.

To turn on ASP.NET compatibility mode update your web.config:

<system.serviceModel> 
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
</system.serviceModel>

Your services must then opt in to take advantage of the compatibility mode, and this is done via the AspNetCompatibilityRequirementsAttribute. This can be set to ‘Required’, ‘Allowed’ or ‘NotAllowed’, but for our WCSF WCF service it is required, so we set it as such, as in the example below:

namespace WebApplication 
{ 
    [ServiceBehavior] 
    [AspNetCompatibilityRequirements(RequirementsMode =  
                              AspNetCompatibilityRequirementsMode.Required)]
    public class Service1 : IService1 
    { 
        public void DoWork() 
        { 
          LoggingService.WriteInformation("It worked."); 
        } 
        [ServiceDependency] 
        public ILogging LoggingService{ get; set; }

        public Service1() 
        { 
            WebClientApplication.BuildItemWithCurrentContext(this); 
        } 
    } 
}

And that’s it. With ASP.NET compatibility mode turned on and our service stating that it requires this compatibility mode (via its AspNetCompatibilityRequirements attribute), the WCSF BuildItemWithCurrentContext(this) method can run successfully with the current HttpContext.

For more information on hosting WCF and ASP.net side by side and the ASP.net compatibility mode check out ‘WCF Services and ASP.NET’. For more information on the Web Client Software Factory check out ‘Web Client Software Factory on MSDN’.

New Version of Source Code Syntax Highlighting Live Writer Plugin for WordPress.com blogs

In a previous blog post I covered how to create a Windows Live Writer plug-in and then provided the plug-in for download. The plug-in solves the problem of inserting source code syntax formatted code snippets into a WordPress.com hosted blog post from within Live Writer. There are many plug-ins that can be used with your own hosted WordPress blog, but these don’t work with WordPress.com hosted blogs. This plug-in supports WordPress.com blogs specifically, inserting around your code the relevant ‘shortcode’ tag that is required for WordPress to syntax highlight it. As the plug-in has proved popular I’ve taken the time to update it to Version 1.1.

Since developing the first version, WordPress have added extra functionality to their sourcecode shortcode. It now supports additional languages and provides more control over how the code snippet appears in your post. Some of the neatest new features include an option to highlight specific lines of code within the snippet, and the ability to control which number the line numbering starts at.

Full Feature List:

- autolinks — Makes all URLs in your posted code clickable.
- collapse — If true, the code box will be collapsed when the page loads, requiring the visitor to click to expand it. Good for large code posts.
- gutter — If false, the line numbering on the left side will be hidden.
- firstline — Use this to change what number the line numbering starts at.
- highlight — A list of the line numbers you want to be highlighted, for example "4,7,19".
- htmlscript — Highlights HTML/XML in your code. This is useful when you are mixing code into HTML, such as PHP inside of HTML.
- light — A light version of the display, hiding line numbering and the toolbar. This is helpful when posting only one or two lines of code.
- padlinenumbers — Allows you to control the line number padding: automatic padding, no padding or a specified amount.
- toolbar — Show or hide the toolbar containing the helpful buttons that appears when you hover over the code.
- wraplines — Turn line wrapping on or off.

Check out the WordPress support site for more detail on these.

The list of languages selectable within the plug-in has been increased to include all the new ones now supported by WordPress. The current list is: actionscript3, bash, coldfusion, cpp, csharp, css, delphi, erlang, fsharp, diff, groovy, javascript, java, javafx, matlab, objc, perl, php, text, powershell, python, ruby, scala, sql, vb, xml.

I’ve also provided a mechanism to modify the list of languages displayed in the plug-in. This is useful for two reasons. Firstly, it enables you to reduce the number of languages in the dropdown list to just the ones you are likely to use, which can make the plug-in slightly easier to use. Perhaps more importantly, it enables you to modify the list if WordPress.com update their shortcode to support an additional language and you need to utilise it before the next version of my plug-in is released. Just add the new language name to the list and away you go.

To modify this language list you need a text file named SyntaxHighlight_WordPressCom_WLWPlugIn.txt located in the same folder as the plug-in (usually C:\Program Files\Windows Live\Writer\Plugins). The download Zip file for V1.1 of the plug-in contains this file. Locate it with the plug-in and then it’s there for when you want to modify the list in the future. This feature is optional; the text file is NOT required for the plug-in to work (you’ll just get the full language list by default).

I have also changed what is inserted into the Editor view when a snippet is added. Previously, all the code text and the shortcode tag were added to the Editor view. To improve readability of the code you’ve inserted, only the code is now shown, not the shortcode tags. This is particularly useful now that the shortcode tags are potentially longer with the new supported attributes.

A link to the WordPress.com sourcecode tag support page is also now included in the property editor for inserted code snippets.


The “Live Writer Source Code Plugin For WordPress.com” Version 1.1 is available for download here.

Windows Azure Experimentation Is Currently Too Expensive

I’m a fan of Windows Azure and have enjoyed using it during its CTP phase. Once the CTP was open for registration, like many I jumped at the chance to play with this new paradigm in software development. During this CTP phase I have written some small private web applications that really do nothing more than experiment with the Azure platform. These have provided me with valuable experience and an insight into building a ‘real world’ application on Azure. I have also used this knowledge to demonstrate Azure to my colleagues and to promote the platform within my enterprise. All this has been possible because the CTP version is completely free to use; sadly, however, this period of experimentation will soon come to an end.

As Windows Azure moves from a free CTP to a commercial product it is right that users have to pay for the privilege of using the platform, but it seems that many developers are going to have a hard choice to make in the new year. Do you forget about developing on Azure, or do you fork out $86/£55 a month for the privilege of experimentation? Those with an MSDN Premium subscription will have some more time to enjoy it free, but in eight months the same decision will be required.

Windows Azure pricing details can be found here. If we assume that the transaction and storage costs are minimal for a developer’s experimental site and just take the basic cost of running one instance, it is $0.12 per hour, which sounds cheap, but at 24 hours a day, 7 days a week, that is roughly $0.12 x 720 hours, or around $86 (£53), per month. That’s not a small amount for the average developer to find. Whilst I am pleased by the free hours provided for MSDN subscribers, this is a limited offer and it’s really just delaying the problem for those developers. That is, unless Microsoft can come up with a basic, cheaper proposition similar to the shared web hosting model. If a developer wants to experiment with web technologies, for example, they can host a web site (for public or private use) with a third party web hosting company. These hosting companies provide a selection of options based on your requirements. Whilst premium dedicated server hosting is available, developers can get their fix from the cheap and cheerful shared server hosting, which provides most of the features on a smaller scale for around $10 (£6) per month. I realise that there is more to Azure than hosting a web site, but the point is that you can only really experience a product when you are frequently interacting with it to build something real, and therefore it has to be accessible.

Now I’m not saying Azure is uncompetitive compared to its rivals (it actually competes favourably) or that you don’t get your money’s worth. For a new business starting up with some expected revenue, Azure provides huge advantages and the ability to scale up and down is ideal. It’s getting the developer community interested and informed that is the problem. Microsoft needs developers to buy in to this seismic shift in computing, and by making the barrier to entry so high it is making it difficult to spread the love for this excellent product. I believe that it is in Microsoft’s interest to provide some way for grass roots developers to buy into this product and gain exposure to it.

I hope that in the new year we will see a new low cost (even advertisement funded) offering for Azure aimed at getting developers tuned in and coding on this great platform without making a large financial commitment. I’m not alone in hoping for this; check out the requested feature list for Azure (the most popular request by far at the time of writing is just this, a low cost option).

‘Java Update Secret Warning’ or ‘You WILL Auto-Update’

The Java Runtime on my dev machine prompted me to update to a new version. It didn’t prompt nicely via a little notification popup in the system tray but instead blew straight into a full UAC prompt. Anyway, in a moment of revenge I decided to turn off the ‘Auto Update’ feature. I opened the Java Control Panel and found the Update tab shown below:

[Screenshot: the Java Control Panel’s Update tab]

I unchecked the “Check for Updates Automatically” option and then got this rather amusing ‘Warning’:

[Screenshot: the warning dialog, with no readable message text]

After guessing which button did what I closed the dialog to see this:

[Screenshot: the ‘Check for Updates Automatically’ option ticked again]

No matter how many times I try the same thing happens. I’m guessing that this is either:

  1. A very secret warning message that not just any mere mortal can read.
  2. A very obscure tactic to convince users not to turn off auto updates.
  3. A bug.

You decide.