SQL Server Compact Minimum Date Value

Recently I got this error connecting to a SQL Server Compact database from .Net:

"An overflow occurred while converting to datetime."

So I dug into my data insertion code (I was using the excellent Massive mini ORM by the way – and yes, I know it's not really an ORM, but that's another post) and found I was setting the offending DateTime field to DateTime.MinValue, which it turns out SQL Compact doesn't like. After a bit of investigation I found that whilst .Net supports a minimum date (DateTime.MinValue) of "01/01/0001 00:00:00" and a maximum (DateTime.MaxValue) of "31/12/9999 23:59:59", SQL Compact's minimum date is "01/01/1753 00:00:00" (the same as SQL Server 2005 and earlier). This applies to all current versions of SQL Server Compact. For SQL Server 2008 onwards (2008 R2 and 2012) it's recommended to use the DateTime2 data type, which supports the same range of dates as .Net.

To summarise:

Product                         Version        Minimum Date         Maximum Date
.Net                            All            01/01/0001 00:00:00  31/12/9999 23:59:59
SQL Server Compact              All            01/01/1753 00:00:00  31/12/9999 23:59:59
SQL Server (using datetime)     All            01/01/1753 00:00:00  31/12/9999 23:59:59
SQL Server (using datetime2)    2008 onwards   01/01/0001 00:00:00  31/12/9999 23:59:59

For a handy guide to the SQL Compact data types see here: http://msdn.microsoft.com….

For SQL Server data types see here: http://msdn.microsoft.com…

The System.Data.SqlTypes namespace contains the useful SqlDateTime struct which has MinValue and MaxValue properties so you can safely use SqlDateTime.MinValue instead of DateTime.MinValue in your code. For more information on SqlDateTime see here.
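
For example, here is a minimal sketch of guarding a DateTime value before it reaches the database (the field name is illustrative):

// use the database-safe minimum (01/01/1753) instead of DateTime.MinValue
DateTime safeDefault = System.Data.SqlTypes.SqlDateTime.MinValue.Value;

// clamp any out-of-range value before it is written to SQL Compact
DateTime lastUpdated = DateTime.MinValue;
if (lastUpdated < System.Data.SqlTypes.SqlDateTime.MinValue.Value)
{
    lastUpdated = System.Data.SqlTypes.SqlDateTime.MinValue.Value;
}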

The HTML Agility Pack

For a current project I needed to perform a simple screen scrape action. The resulting solution was functional but a bit rough and ready. Luckily I stumbled upon this open-source HTML library project: The HTML Agility Pack, hosted on CodePlex at http://htmlagilitypack.codeplex.com.

It is an excellent little library that makes dealing with HTML a breeze, whether you are screen scraping or just manipulating HTML documents locally. It is very forgiving with regard to malformed HTML documents and supports loading pages directly from the web. You can just parse the HTML or modify it, and it even supports LINQ. A key benefit of this library is that it doesn't force you to learn a new object model but instead mirrors the System.Xml object model – a huge help for getting up and running quickly, as well as making coding against it feel natural.

Download HTML directly via a URL:

// load a page directly from the web
HtmlWeb webGet = new HtmlWeb();
HtmlDocument htmlDoc = webGet.Load(url);

Or parse an HTML string:

HtmlDocument htmlDoc = new HtmlDocument();
htmlDoc.LoadHtml(htmlString);

Then you can use XPath to query the HTML document as you would an XML document:

// select a <li> where it has an element of <b> with a value of "Name:"
var nameItem = htmlDoc.DocumentNode.SelectSingleNode("//li[b='Name:']");
if (nameItem != null && nameItem.ChildNodes.Count > 1)
{
    name = nameItem.ChildNodes[1].InnerText;
}
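
Because the document is exposed as an object graph, the same kind of lookup can also be written with LINQ instead of XPath. A small sketch (requires a using directive for System.Linq; the filter is illustrative):

// find the href of every anchor tag using LINQ rather than XPath
var links = htmlDoc.DocumentNode
    .Descendants("a")
    .Select(a => a.GetAttributeValue("href", string.Empty))
    .Where(href => href.StartsWith("http"));

foreach (string link in links)
{
    Console.WriteLine(link);
}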

You can download it via NuGet here : http://nuget.org/packages/HtmlAgilityPack.

For more examples of its use check out these posts: Parsing HTML Documents with the Html Agility Pack and Crawling a web site with HtmlAgilityPack.

Enjoy.

Getting a User's Username in ASP.NET

When building an ASP.NET application it's important to understand the authentication solution that you are planning to implement, and to ensure that all your developers are aware of it. On a few projects I have noted that some developers lack this knowledge, and it can end up causing issues later in the project once the code is first deployed to a test environment. These problems are usually a result of the differences between running a web project locally and remotely.

One problem I found on a recent project was where developers were trying to retrieve the logged-on user's Windows username (within an intranet scenario) for display on screen. Unfortunately the code to retrieve the username from the client request had been duplicated, a different solution used in each place and, worse still, neither worked. Sure, they worked on the local machine, but not when deployed. It was clear immediately that the developers had not quite grasped how ASP.NET works in this regard. There are several ways of retrieving usernames and admittedly it's not always clear which is best in each scenario, so here is a very quick guide. This post is not a deep dive into this huge subject (I might do a follow-up post on that) but merely a quick guide to the user details you get from the framework objects below.

The members we’re looking at are:
name_badge
1. HTTPRequest.LogonUserIdentity
2. System.Web.HttpContext.Current.Request.IsAuthenticated
3. Security.Principal.WindowsIdentity.GetCurrent().Name
4. System.Environment.UserName
5. HttpContext.Current.User.Identity.Name  (same as just User.Identity.Name)
7. HttpContext.User Property
8. WindowsIdentity

To test, we create a basic ASPX page and host it in IIS so we can see what values these properties return under a set of authentication scenarios. The page just calls the various username properties available and writes out the values in the response via Response.Write().
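
The code-behind for the test page looked something like this (a minimal sketch; the page markup is omitted):

protected void Page_Load(object sender, EventArgs e)
{
    Response.Write("LogonUserIdentity.Name: " +
        HttpContext.Current.Request.LogonUserIdentity.Name + "<br/>");
    Response.Write("IsAuthenticated: " +
        HttpContext.Current.Request.IsAuthenticated + "<br/>");
    Response.Write("User.Identity.Name: " +
        HttpContext.Current.User.Identity.Name + "<br/>");
    Response.Write("Environment.UserName: " +
        System.Environment.UserName + "<br/>");
    Response.Write("WindowsIdentity.GetCurrent().Name: " +
        System.Security.Principal.WindowsIdentity.GetCurrent().Name + "<br/>");
}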

Scenario 1: Anonymous Authentication in IIS with impersonation off.

HttpContext.Current.Request.LogonUserIdentity.Name COMPUTER1\IUSR_COMPUTER1
HttpContext.Current.Request.IsAuthenticated False
HttpContext.Current.User.Identity.Name
System.Environment.UserName ASPNET
Security.Principal.WindowsIdentity.GetCurrent().Name COMPUTER1\ASPNET

As you can see, when running with Anonymous Authentication, HttpContext.Current.Request.LogonUserIdentity is the anonymous guest user defined in IIS (IUSR_COMPUTER1 in this example). As the user is not authenticated, the WindowsIdentity is set to that of the running process (ASPNET) and HttpContext.Current.User.Identity is not set.

Scenario 2: Windows Authentication in IIS, impersonation off.

HttpContext.Current.Request.LogonUserIdentity.Name MYDOMAIN\USER1
HttpContext.Current.Request.IsAuthenticated True
HttpContext.Current.User.Identity.Name MYDOMAIN\USER1
System.Environment.UserName ASPNET
Security.Principal.WindowsIdentity.GetCurrent().Name COMPUTER1\ASPNET

Using Windows Authentication, however, enables the remote user to be authenticated automatically via their domain account (i.e. IsAuthenticated is true), and therefore the HttpContext.Current.Request user, including the Identity object, is set to the remote client's user account.

Scenario 3: Anonymous Authentication in IIS, impersonation on

HttpContext.Current.Request.LogonUserIdentity.Name COMPUTER1\IUSR_COMPUTER1
HttpContext.Current.Request.IsAuthenticated False
HttpContext.Current.User.Identity.Name
System.Environment.UserName IUSR_COMPUTER1
Security.Principal.WindowsIdentity.GetCurrent().Name COMPUTER1\IUSR_COMPUTER1

This time we’re using Anonymous Authentication but now with ASP.net Impersonation turned on in web.config. The only difference to the first scenario is that now the anonymous guest user IUSR_COMPUTER1 is being impersonated and therefore the System.Environment and Security.Principle are using running under that account’s privileges.

Scenario 4: Windows Authentication in IIS, impersonation on

HttpContext.Current.Request.LogonUserIdentity.Name MYDOMAIN\USER1
HttpContext.Current.Request.IsAuthenticated True
HttpContext.Current.User.Identity.Name MYDOMAIN\USER1
System.Environment.UserName USER1
Security.Principal.WindowsIdentity.GetCurrent().Name MYDOMAIN\USER1

Now with Windows Authentication and Impersonation on everything is running as our calling user’s domain account. This means that the ASP.net worker process will share the privileges of that user.

As you can see, each scenario provides a slightly different spin on the results, which is what you would expect. It also shows how important it is to get the configuration right in your design and to implement it early in the build to avoid confusion. For more information see ASP.NET Web Application Security on MSDN.
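
As a final practical note, based on the scenarios above, a simple guard for displaying the client's Windows username in an intranet app might look like this (a minimal sketch; adapt it to your own error handling):

// Prefer the authenticated request identity over the process identity.
// Returns null when the request is anonymous (no client username exists).
public static string GetClientUsername()
{
    System.Web.HttpContext context = System.Web.HttpContext.Current;
    if (context != null && context.Request.IsAuthenticated)
    {
        return context.User.Identity.Name; // e.g. MYDOMAIN\USER1
    }
    return null;
}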

A Useful Entity Translator Pattern / Object Mapper Template

In this post I’m going to cover the Entity Translator pattern and provide a useful base class for implementing this pattern in your application.

The ‘Entity Translator Pattern’ (defined here and here) covers situations where you need to move object state from one object type to another, and where there are very strict reasons why these object types are not linked and do not share inheritance hierarchies.

There are many occasions when you need to convert a set of data from one type to another. Where you are using DTOs (Data Transfer Objects), for example, which serve the purpose of being light and easy to pass between the layers of your application, they will often end up needing to be converted in some way into a Business Model or Domain object. The use of Data Contract objects in WCF (Windows Communication Foundation) is a common example of where you need this translation from DTOs to Business Model objects, and the Entity Translator pattern is a good fit. After carefully layering your application design into loosely coupled layers, you often don't want WCF-created types, or entity objects generated from an ORM (Object Relational Mapper) tool, leaking out of their respective layers. You may also have a requirement to use a ViewModel object pattern, requiring state transfer from the ViewModel to the master model object. All of these examples share the same problem: how do you get the state from object A into object B where the two objects are of different types?

Now of course the simplest method is to hand-crank the conversion at the point you need it. For example:

// the source object (GetCustomer() is illustrative; assume a populated instance)
Customer customer = GetCustomer();

// copy each property across by hand
CustomerDTO dto = new CustomerDTO();
dto.Name = customer.Name;
dto.Id = customer.Id;

Whilst this approach is acceptable, it potentially means writing a lot of code and its use can lack structure. Other developers working in your team may end up repeating your code elsewhere. Additionally, there may be a lack of separation of concerns if this translation code sits with the bulk of your logic, making unit testing more complex.

An alternative (keyboard-friendly) approach is to use a tool that can automate the translation of values from one object to another, such as AutoMapper. For an example of how to use AutoMapper check out this post and this post, oh and this one. AutoMapper can automatically (through reflection) scan the source and target objects and copy the property values from one to the other (so dto.Name is automatically copied to customer.Name). This works painlessly where the property names match, but where they don't, some mapping has to be configured. The advantage of this approach is that it can save a lot of code having to be written and tested. The downsides are performance and less readable mapping code. If you have a lot of mapping code to write and you have control over both object structures under translation then this can be a very productive approach.
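
As an illustration, the classic AutoMapper static API looks something like this (a sketch only; newer AutoMapper versions use a MapperConfiguration instance instead of the static Mapper class):

// configure the mapping once at application start-up
Mapper.CreateMap<Customer, CustomerDTO>();

// then map anywhere; properties with matching names are copied automatically
CustomerDTO dto = Mapper.Map<CustomerDTO>(customer);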

Of course there is always a middle ground where you want to add some structure to your translation code, need the performance of specific ‘object to object’ coding, or have complex translation logic. For these situations consider the following implementation, courtesy of Microsoft Patterns & Practices.

Microsoft’s Smart Client Software Factory includes an Entity Translator service in its Infrastructure.Library project which enables you to write custom translators and then register them with the Entity Translator service. It provides two base classes: ‘EntityMapperTranslator’ and ‘BaseTranslator’. The BaseTranslator class implements the ‘IEntityTranslator’ interface.

BaseTranslator:

public abstract class BaseTranslator : IEntityTranslator
{
  public abstract bool CanTranslate(Type targetType, Type sourceType);
  public bool CanTranslate<TTarget, TSource>()
  {
    return CanTranslate(typeof(TTarget), typeof(TSource));
  } 

  public TTarget Translate<TTarget>(IEntityTranslatorService service, object source)
  {
    return (TTarget)Translate(service, typeof(TTarget), source);
  } 

  public abstract object Translate(IEntityTranslatorService service, Type targetType, object source);
} 

EntityMapperTranslator:

public abstract class EntityMapperTranslator<TBusinessEntity, TServiceEntity> : BaseTranslator
{ 

  public override bool CanTranslate(Type targetType, Type sourceType)
  {
    return (targetType == typeof(TBusinessEntity) && sourceType == typeof(TServiceEntity)) ||
                (targetType == typeof(TServiceEntity) && sourceType == typeof(TBusinessEntity));
  } 

  public override object Translate(IEntityTranslatorService service, Type targetType, object source)
  {
    if (targetType == typeof(TBusinessEntity))
      return ServiceToBusiness(service, (TServiceEntity)source);
    if (targetType == typeof(TServiceEntity))
      return BusinessToService(service, (TBusinessEntity)source);
    throw new EntityTranslatorException();
  } 

  protected abstract TServiceEntity BusinessToService(IEntityTranslatorService service, TBusinessEntity value);

  protected abstract TBusinessEntity ServiceToBusiness(IEntityTranslatorService service, TServiceEntity value); 

} 

If we want to use this as a general template for entity translation in a non-SCSF solution, we can remove the detail around IEntityTranslator services, assuming that we're not building a translator service but purely writing individual translators. Our classes then look more like this:

BaseTranslator:

public abstract class BaseTranslator 
{
  public abstract bool CanTranslate(Type targetType, Type sourceType); 

  public bool CanTranslate<TTarget, TSource>()
  {
    return CanTranslate(typeof(TTarget), typeof(TSource));
  } 

  public abstract object Translate(Type targetType, object source); 

} 


EntityMapperTranslator:

public abstract class EntityMapperTranslator<TBusinessEntity, TServiceEntity> : BaseTranslator
{
  public override bool CanTranslate(Type targetType, Type sourceType)
  {
    return (targetType == typeof(TBusinessEntity) && sourceType == typeof(TServiceEntity)) ||
                (targetType == typeof(TServiceEntity) && sourceType == typeof(TBusinessEntity));
  } 

  public TTarget Translate<TTarget>(object source)
  {
   return (TTarget)Translate(typeof(TTarget), source);
  } 

  public override object Translate(Type targetType, object source)
  {
    if (targetType == typeof(TBusinessEntity))
     {
       return ServiceToBusiness((TServiceEntity)source);
     } 

     if (targetType == typeof(TServiceEntity))
     {
       return BusinessToService((TBusinessEntity)source);
     } 

    throw new System.ArgumentException("Invalid type passed to Translator", "targetType");
   } 

   protected abstract TServiceEntity BusinessToService(TBusinessEntity value); 

   protected abstract TBusinessEntity ServiceToBusiness(TServiceEntity value);
} 


We could refactor this further by removing the BaseTranslator class completely and just using EntityMapperTranslator as the base class for our translators. The BaseTranslator class as it stands above does provide some benefit if we can foresee circumstances where we want to follow a standard translation pattern for more than just entity translation; we could create a DataMapperTranslator, for example, that would derive from BaseTranslator and differ from EntityMapperTranslator in its translation specifics. Removing the BaseTranslator class, however, results in an EntityMapperTranslator class like this:

public abstract class EntityMapperTranslator<TBusinessEntity, TServiceEntity> 
{
  public bool CanTranslate(Type targetType, Type sourceType)
  {
    return (targetType == typeof(TBusinessEntity) && sourceType == typeof(TServiceEntity)) ||
           (targetType == typeof(TServiceEntity) && sourceType == typeof(TBusinessEntity));
  } 

  public TTarget Translate<TTarget>(object source)
  {
    return (TTarget)Translate(typeof(TTarget), source);
  } 

  public object Translate(Type targetType, object source)
  {
    if (targetType == typeof(TBusinessEntity))
    {
      return ServiceToBusiness((TServiceEntity)source);
    }
    if (targetType == typeof(TServiceEntity))
    {
      return BusinessToService((TBusinessEntity)source);
    }
    throw new System.ArgumentException("Invalid type passed to Translator", "targetType");
  } 

    protected abstract TServiceEntity BusinessToService(TBusinessEntity value); 

    protected abstract TBusinessEntity ServiceToBusiness(TServiceEntity value);
} 


This is now a neat and simple template from which we can code our custom translators. To translate from CustomerDTO to Customer we create a translator that derives from EntityMapperTranslator, and we only need to code the two translate methods (one for translating CustomerDTO to Customer and one for the reverse).

public class CustomerTranslator : EntityMapperTranslator<BusinessModel.Customer, DataContract.CustomerDTO>
{
  protected override DataContract.CustomerDTO BusinessToService(BusinessModel.Customer value)
  {
    DataContract.CustomerDTO customerDTO = new DataContract.CustomerDTO();
    customerDTO.Id = value.Id;
    customerDTO.Name = value.Name;
    customerDTO.PostCode = value.PostCode;
    return customerDTO;
  } 

  protected override BusinessModel.Customer ServiceToBusiness(DataContract.CustomerDTO value)
  {
    BusinessModel.Customer customer = new BusinessModel.Customer();
    customer.Id = value.Id;
    customer.Name = value.Name;
    customer.PostCode = value.PostCode;
    return customer;   
  }
}
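
Using the translator is then straightforward (a quick sketch, assuming a populated customerDTO instance):

CustomerTranslator translator = new CustomerTranslator();

// DTO to business object...
BusinessModel.Customer customer =
    translator.Translate<BusinessModel.Customer>(customerDTO);

// ...and back again using the same translator
DataContract.CustomerDTO dto =
    translator.Translate<DataContract.CustomerDTO>(customer);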


This is just a simple example, of course, as the translation logic can get quite complex. Don't forget that the translator classes could also make use of the automated tools described earlier, such as AutoMapper. Calling AutoMapper inside a translator lets you combine manual and automated translation code, saving tedious typing (and those dreaded copy-and-paste errors) while giving a simple structure to your manual translation logic where that approach is preferable. Either way, by creating custom translators we have encapsulated all of our translation code into separate, reusable and testable classes that share a developer-friendly common interface.

In summary, we've covered a simple Entity Translator template, based on Microsoft's Smart Client Software Factory offering, and shown how it can be used in your application to add structure to your object mapping/translation code.


Additional Useful Links:

http://code.google.com/p/translation-tester/

http://icoder.wordpress.com/category/uncategorized/

http://c2.com/cgi/wiki?TranslatorPattern

http://msdn.microsoft.com/en-us/library/cc304747.aspx

http://code.google.com/p/translation-tester/source/browse/#svn/trunk/TranslationTester/TranslationTester%3Fstate%3Dclosed

Raising Multiple Exceptions with AggregateException

There are occasions where you are aware of multiple exceptions within your code that you want to raise together in one go. Perhaps your code makes a service call to a middleware orchestration component that potentially returns multiple exceptions detailed in its response, or perhaps a batch processing task dealing with many items in one process requires you to collate all exceptions until the end and then throw them together.

Let's look at the batch scenario in more detail. In this situation, if you raised the first exception that you found, the method would exit without processing the remaining items. Alternatively, you could store the exception information in a variable of some sort and, once all items are processed, use that information to construct an exception and throw it. Whilst this approach works, there are some drawbacks. There is extra effort required to create a viable storage container to hold the exception information, and this may mean modifying existing code to not throw an exception but instead log the details in this new ‘exception detail helper class’. This solution also lacks the additional benefits you get from creating an exception at that point in time, for example the numerous intrinsic properties within Exception objects that provide valuable additional context to support the exception message. Even when all the relevant information has been collated into a single exception class, you are still left with one exception holding all that information when you may need to handle the exceptions individually and pass them off to existing error-handling frameworks which rely on a type deriving from Exception.

Luckily, included in .Net Framework 4.0 is the simple but very useful AggregateException class, which lives in the System namespace (within mscorlib.dll). It was created to be used with the Task Parallel Library and its use within that library is described on MSDN here. Don't think that is its only use though, as it can be put to good use within your own code in situations like those described above where you need to throw multiple exceptions, so let's see what it offers.

The AggregateException class is an exception type, inheriting from System.Exception, that acts as a wrapper for a collection of child exceptions. Within your code you can create instances of any exception-based type and add them to the AggregateException's collection. The idea is a simple one, but the AggregateException's beauty comes in the implementation of this simplicity. As it is a regular exception class it can be handled in the usual way by existing code, but also as a special exception collection by the specific code that actually cares about all the exceptions nested within its bowels.

The class accepts the child exceptions on one of its seven constructors and then exposes them through its InnerExceptions property. Unfortunately this is a read-only collection, so it is not possible to add inner exceptions to the AggregateException after it has been instantiated (which would have been nice), and you will need to store your exceptions in a collection until you're ready to create the AggregateException and throw it:

// create a collection container to hold exceptions
List<Exception> exceptions = new List<Exception>();

// do some stuff here ........

// we have an exception with an innerexception, so add it to the list
exceptions.Add(new TimeoutException("It timed out", new ArgumentException("ID missing")));

// do more stuff .....

// Another exception, add to list
exceptions.Add(new NotImplementedException("Somethings not implemented"));

// all done, now create the AggregateException and throw it
AggregateException aggEx = new AggregateException(exceptions);
throw aggEx;

The method you use to store the exceptions is up to you as long as you have them all ready at the time you create the AggregateException class. There are seven constructors allowing you to pass combinations of: nothing, a string message, collections or arrays of inner exceptions.

Once created you interact with the class as you would any other exception type:

try
{
   // do task
}
catch (AggregateException ex)
{
   // handle it 
}

This is key, as it means that you can make use of existing code and patterns for handling exceptions within your (or a third party's) codebase.
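
Within the catch block, code that cares about the individual failures can simply walk the InnerExceptions collection. A minimal sketch:

try
{
   // do task
}
catch (AggregateException ex)
{
   // inspect each wrapped exception individually and pass it to
   // your existing error handling as required
   foreach (Exception inner in ex.InnerExceptions)
   {
       Console.WriteLine("{0}: {1}", inner.GetType().Name, inner.Message);
   }
}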

In addition to the usual Exception members the class exposes a few custom ones. The typical InnerException property is there for compatibility and appears to return the first exception added to the AggregateException via the constructor, so in the example above it would be the TimeoutException instance. All of the child exceptions are exposed via the InnerExceptions read-only collection property (as shown below).

[Screenshot: debugger view of the InnerExceptions collection containing the TimeoutException and NotImplementedException]

The Flatten() method is another custom member that might prove useful if you find the need to nest AggregateExceptions as inner exceptions within other AggregateExceptions. The method iterates the InnerExceptions collection and, if it finds AggregateExceptions nested as inner exceptions, promotes their child exceptions to the parent level. This is best seen in an example:

AggregateException aggExInner = 
          new AggregateException("inner AggEx", new TimeoutException());
AggregateException aggExOuter1 = 
          new AggregateException("outer 1 AggEx", aggExInner);
AggregateException aggExOuter2 = 
          new AggregateException("outer 2 AggEx", new ArgumentException());
AggregateException aggExMaster =
          new AggregateException(aggExOuter1, aggExOuter2);

If we create this structure of AggregateExceptions above, with inner exceptions of TimeoutException and ArgumentException, then the InnerExceptions property of the parent AggregateException (i.e. aggExMaster) shows, as expected, two objects, both of type AggregateException and both containing child exceptions of their own:

[Screenshot: InnerExceptions showing the two nested AggregateException objects]

But if we call Flatten()…

AggregateException aggExFlatterX = aggExMaster.Flatten();

…we get a new AggregateException instance returned that still contains two objects, but this time the nested AggregateException objects have gone and we're just left with the two child exceptions of TimeoutException and ArgumentException:

[Screenshot: the flattened InnerExceptions containing just the TimeoutException and ArgumentException]

This is a useful feature to discard the AggregateException containers (which are effectively just packaging) and expose the real meat, i.e. the actual exceptions that have been thrown and need to be addressed.

If you’re wondering how the ToString() is implemented then the aggExMaster object in the examples above (without flattening) produces this:

System.AggregateException: One or more errors occurred. ---> System.AggregateException
: outer 1 AggEx ---> System.AggregateException: inner AggEx ---> 
System.TimeoutException: The operation has timed out.   --- End of inner exception 
stack trace ---  --- End of inner exception stack trace ---  --- End of inner exception 
stack trace ------> (Inner Exception #0) System.AggregateException: outer 1 AggEx ---> 
System.AggregateException: inner AggEx ---> System.TimeoutException: The operation
 has timed out.   --- End of inner exception stack trace ---   --- End of inner 
exception stack trace ------> (Inner Exception #0) System.AggregateException: inner
AggEx ---> System.TimeoutException: The operation has timed out.  --- End of inner 
exception stack trace ------> (Inner Exception #0) System.TimeoutException: The 
operation has timed out.<---<---<------> (Inner Exception #1) System.AggregateException
: outer 2 AggEx --- System.ArgumentException: Value does not fall within the expected
 range. --- End of inner exception stack trace ------> (Inner Exception #0) 
System.ArgumentException: Value does not fall within the expected range.

As you can see the data has been formatted in a neat and convenient way for readability, with separators between the inner exceptions.

In summary this is a very useful class to be aware of and have in your arsenal whether you are dealing with the Parallel Tasks Library or you just need to manage multiple exceptions. I like simple and neat solutions and to me this is a good example of that philosophy.

Integrating WCF Services into Web Client Software Factory

For those of you unfamiliar with the Web Client Software Factory (WCSF), it is a very capable web application framework for building web forms based thin client applications. It was created as part of the Patterns & Practices offering from Microsoft (alongside the more well known Enterprise Library). It shares many concepts with its sister offering, the Smart Client Software Factory (SCSF), but its implementation is different; I find it easier to use and, sometimes more importantly for organisations, an easier transition for traditional web forms developers than ASP.NET MVC. It utilises the Model-View-Presenter (MVP) pattern nicely and I find it a useful framework within which to build web applications where an ASP.NET MVC approach has been discounted. For more information on the WCSF check this link.

WCSF uses the ObjectBuilder framework to provide Dependency Injection services to its components. Views, Presenters and Modules can have access to a global (or module level) services collection which traditionally contains the services (internal services, not external web services) that provide business logic or infrastructure support functionality. It is therefore important that any code within the web application can access this services collection (via Dependency Injection) to consume this shared functionality. The problem I've run into recently is how to allow WCF web services, exposed as part of the web application, to hook into the WCSF framework to consume these internal services. These web services need to be able to declare service dependencies on other objects and have those dependencies satisfied by the WCSF framework, just as it does for Views and Presenters.

I found that the Order Management WCSF Reference Implementation shows how to hook traditional ASMX Web Services into your WCSF Solution. Here is the implementation of the site’s ProductServiceProxy web service:

using System.Web; 
using System.Web.Services; 
using System.Web.Services.Protocols; 
using System.ComponentModel; 
using OrdersRepository.Interfaces.Services; 
using Microsoft.Practices.CompositeWeb; 
using OrdersRepository.BusinessEntities;

namespace WebApplication.Orders 
{ 
    [WebService(Namespace = "http://tempuri.org/")] 
    public class ProductServiceProxy : WebService 
    { 
        IProductService _productService;

        [ServiceDependency] 
        public IProductService ProductService 
        { 
            set { _productService = value; } 
            get { return _productService; } 
        }

        public ProductServiceProxy() 
        { 
            WebClientApplication.BuildItemWithCurrentContext(this);
        }

        [WebMethod] 
        public Product GetProduct(string sku) 
        { 
            return _productService.GetProductBySku(sku); 
        } 
    } 
}

The key line here is the call to WebClientApplication.BuildItemWithCurrentContext(this) within the constructor. This is the key to how this ASMX Web Service can be built up by ObjectBuilder and therefore have its Dependency Injection requirements met. The rest of the page is typical ASMX and WCSF, for example the ServiceDependency on the ProductService property is declared as normal.

If we look into the WCSF Source Code for BuildItemWithCurrentContext(this) we see how it works:

/// <summary> 
/// Utility method to build up an object without adding it to the container. 
/// It uses the application's PageBuilder and the CompositionContainer 
/// for the module handling the current request 
/// </summary> 
/// <param name="obj">The object to build.</param> 
public static void BuildItemWithCurrentContext(object obj) 
{ 
  IHttpContext context = CurrentContext; 
  WebClientApplication app = (WebClientApplication) context.ApplicationInstance; 
  IBuilder<WCSFBuilderStage> builder = app.PageBuilder; 
  CompositionContainer container = app.GetModuleContainer(context); 
  CompositionContainer.BuildItem(builder, container.Locator, obj); 
}

protected static IHttpContext CurrentContext 
{ 
  get { return _currentContext ?? new HttpContext(System.Web.HttpContext.Current); } 
  set { _currentContext = value; } 
}

The first line calls off to the CurrentContext property, where a new HttpContext is created based on the current HTTP context of the ASMX service's session. The following lines get a reference to the WebClientApplication instance (WCSF's version of an HttpApplication for your web app) and then access the composition container. BuildItem then does the heavy work of using ObjectBuilder to build up the service's dependencies.

So this works nicely for ASMX services, but what about WCF services? Well, it is possible to follow the same approach and use the BuildItemWithCurrentContext method within the constructor of the WCF service, but we have to follow some additional steps too. If we just add the BuildItemWithCurrentContext(this) call to the constructor of our WCF service it will fail, as the HttpContext will always be null when accessed from within a WCF service.

ASP.NET and IIS-hosted WCF services play nicely together within a single application domain, sharing state and common infrastructure services, but HTTP runtime features do not apply to WCF. Features such as the current HttpContext, ASP.NET impersonation, HttpModule extensibility, and config-based URL and file authorisation are not available under WCF. There are alternatives provided by WCF for these features, but they don't help with hooking into the ASP.NET-specific WCSF. This is where WCF's ASP.NET compatibility mode saves the day. By configuring your WCF service to use ASP.NET compatibility you affect the side-by-side configuration so that WCF services engage fully in the HTTP request lifecycle and can thus access resources as ASPX pages and ASMX web services do. This provides the WCF service with a reference to the current HttpContext and allows the WCSF to function correctly. It must be said that there are some drawbacks to using ASP.NET compatibility mode (for example the protocol must be HTTP and the WCF service can't be hosted outside of IIS), but these will usually be acceptable when you want to add the WCF service to a WCSF application.

To turn on ASP.NET compatibility mode update your web.config:

<system.serviceModel> 
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />    
</system.serviceModel>

Your services must then opt in to take advantage of the compatibility mode, which is done via the AspNetCompatibilityRequirementsAttribute. This can be set to ‘Required’, ‘Allowed’ or ‘NotAllowed’, but for our WCSF WCF service it is required, so we set it as in the example below:

namespace WebApplication 
{ 
    [ServiceBehavior] 
    [AspNetCompatibilityRequirements(RequirementsMode =  
                              AspNetCompatibilityRequirementsMode.Required)]
    public class Service1 : IService1 
    { 
        public void DoWork() 
        { 
          LoggingService.WriteInformation("It worked."); 
        } 
        [ServiceDependency] 
        public ILogging LoggingService{ get; set; }

        public Service1() 
        { 
            WebClientApplication.BuildItemWithCurrentContext(this); 
        } 
    } 
}

And that's it. With ASP.NET compatibility mode turned on, and our service stating that it requires that mode (via its AspNetCompatibilityRequirements attribute), the WCSF BuildItemWithCurrentContext(this) method can run successfully with the current HttpContext.

For more information on hosting WCF and ASP.net side by side and the ASP.net compatibility mode check out ‘WCF Services and ASP.NET’. For more information on the Web Client Software Factory check out ‘Web Client Software Factory on MSDN’.

Windows Azure Experimentation Is Currently Too Expensive

I'm a fan of Windows Azure and have enjoyed using it during its CTP phase. Once the CTP was open for registration, like many I jumped at the chance to play with this new paradigm in software development. During the CTP I have written some small private web applications that really do nothing more than experiment with the Azure platform. These have given me valuable experience and an insight into building a ‘real world’ application on Azure. I have also used this knowledge to demonstrate Azure to my colleagues and to promote the platform within my enterprise. All this has been possible because the CTP version is completely free to use; sadly, however, this period of experimentation will soon come to an end.

As Windows Azure moves from a free CTP to a commercial product it is right that users have to pay for the privilege of using the platform, but it seems that many developers are going to have a hard choice to make in the new year: do you forget about developing on Azure, or do you fork out $86/£55 a month for the privilege of experimentation? Those with an MSDN Premium subscription will have some more time to enjoy it free, but in eight months the same decision will be required.

Windows Azure pricing details can be found here, but if we assume that the transaction and storage costs are minimal for a developer's experimental site and just take the basic cost of running one instance, it's $0.12 per hour (sounds cheap); over 24 hours a day, every day, that works out at $0.12 x 24 x 30, or around $86 (£53) a month. That's not a small amount for the average developer to find. Whilst I am pleased by the free hours provided for MSDN subscribers, this is a limited offer and it's really just delaying the problem for those developers. That is unless Microsoft can come up with a cheaper basic proposition similar to the shared web hosting model. If a developer wants to experiment with web technologies, for example, they can host a web site (for public or private use) with a third-party web hosting company. These hosting companies provide a selection of options based on your requirements. Whilst premium dedicated server hosting is available, developers can get their fix from cheap and cheerful shared server hosting, which provides most of the features on a smaller scale for around $10 (£6) per month. I realise that there is more to Azure than hosting a web site, but the point is that you can only really experience a product when you are frequently interacting with it to build something real, and therefore it has to be accessible.

Now I'm not saying Azure is uncompetitive compared to its rivals (it actually competes favourably) or that you don't get your money's worth. For a new business starting up with some expected revenue, Azure provides huge advantages and the ability to scale up and down is ideal. It's getting the developer community interested and informed that is the problem. Microsoft needs developers to buy in to this seismic shift in computing, and by making the barrier to entry so high it is making it difficult to spread the love for this excellent product. I believe that it is in Microsoft's interest to provide some way for grass-roots developers to buy into this product and gain exposure to it.

I hope that in the new year we will see a new low cost (even advertisement-funded) offering for Azure aimed at getting developers tuned in and coding on this great platform without making a large financial commitment. I'm not alone in hoping for this; check out the requested feature list for Azure (the most popular request by far at the time of writing is exactly this, a low cost option).

IIS Host Headers, Secure Bindings, Wix & Custom Actions

Whilst trying to host multiple WCF services in IIS, each within its own web site, I discovered an issue with secure host headers and IIS 6. The requirement was to securely install each WCF service inside its own web site, with its own application and application pool instance. In order to set up multiple sites I needed one of: multiple IP addresses, different ports on the one IP address, or ‘host headers’. The chosen solution was host headers, and I set these up in IIS using IIS Manager (inetmgr). This is covered here:

http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/e7a21b1f-ab13-47f2-8c61-b09cf14a7cb3.mspx

However, the UI only supports setting up non-secure bindings, not secure ones. After checking on TechNet I found this:

http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/596b9108-b1a7-494d-885d-f8941b07554c.mspx?mfr=true

It turns out that whilst IIS 6 does support the use of host headers on secure bindings, this feature was not added in time for the IIS admin UI to reflect it. The article tells us to use the handy IIS admin script "adsutil.vbs", which updates the IIS metabase XML file, as detailed here:

http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/596b9108-b1a7-494d-885d-f8941b07554c.mspx?mfr=true

The problem with this approach, though, is that the script rather annoyingly requires the site identifier for the web site you wish to update. This is easy to obtain interactively by opening IIS Manager and clicking on ‘Web Sites’, but I needed to set up my site via an MSI (built with Wix). The installation script will not know the identifier of the site as it could vary on each install, so I needed to be able to resolve a web site name to an identifier before calling adsutil.vbs. After some Googling I found this excellent post by David Wang:

http://blogs.msdn.com/david.wang/archive/2005/07/13/HOWTO_Enumerate_IIS_Website_Configuration.aspx

Here he explains how to enumerate IIS entities to find the one you want. If you check out the comments there is a post by ‘Dave’ where he has included a modified script. This script iterates over the web sites, finds the one matching the right name, then calls the "adsutil.vbs" script to update the metabase using the correct site identifier. I copied this script and trimmed it down for my own purposes, and now had a viable script for my installer.

Using a VBScript within an MSI is not recognised best practice; a more robust solution would be to code a Custom Action in native code (there are hidden dangers in Custom Actions written in managed code). However, it is a viable option and in this instance it fitted my requirements perfectly (for many reasons outside the scope of this article).

Now I'm a fan of Wix and am always impressed with how much it can do out of the box without customisation. In this case, however, I discovered that it is not that easy to just run a command or script during install. Don't get me wrong, it is possible, and there are a huge number of options as to when and how to run custom actions, but I was expecting it to be easier than it turned out to be. There are many decent blog posts on the subject of Custom Actions in Wix and I'm not going to go into much detail here; in the end my solution looked like this:

<!--Set up IIS Bindings-->
<InstallExecuteSequence>
   <Custom Action='CAIISSecureBindingsServiceX' Before='InstallFinalize'>
      NOT Installed
   </Custom>
   <Custom Action='CAIISSecureBindingsServiceY' Before='InstallFinalize'>
      NOT Installed
   </Custom>
</InstallExecuteSequence>

This slots our Custom Actions into the running order of the MSI installation sequence, asking for the actions to be run just before the installation completes. The ‘NOT Installed’ condition ensures that the actions only fire on installation and not on an uninstall.

We then define the property holding the path to the exe we want to run, in this case the CScript.exe application (to run our VBScript). After that we define each custom action. The ExeCommand tells it what to pass to the CScript executable as parameters. The Execute='deferred' option is important, as it means that the scripts will not run on the first pass through of the MSI installation. The MSI installation process involves running through all steps without committing them and, if that succeeds, then doing the steps for real. If we ran the script before the web sites had been committed to IIS the script would fail. Setting it to ‘deferred’ means it will be left out of the initial pass and run in the actual "doing" stage. For more information on the MSI sequence of events check this out (http://www.advancedinstaller.com/user-guide/standard-actions.html). I found that impersonation needs to be set to ‘yes’ for this to work correctly.

<!-- Define all the custom actions and the properties required -->
<Property Id='ScriptEngine'>C:\Windows\System32\CScript.exe</Property>
<CustomAction
   Id='CAIISSecureBindingsServiceX'
   Property='ScriptEngine'
   ExeCommand='[INSTALLLOCATION]UpdateIISBindings.vbs ServiceXv1Site X.v1.default'
   Execute='deferred'
   Impersonate='yes'
   Return='ignore'/>
<CustomAction
   Id='CAIISSecureBindingsServiceY'
   Property='ScriptEngine'
   ExeCommand='[INSTALLLOCATION]UpdateIISBindings.vbs ServiceYv1Site Y.v1.default'
   Execute='deferred'
   Impersonate='yes'
   Return='check'/>

I have set up multiple Custom Actions although only one is really needed. In my implementation the VBS file is merely a helper script that uses the information passed in as parameters; I then have multiple custom actions that each call the script separately, passing in different parameters (web site name and host header value). It would be neater to have the VBS file include all the logic for looping over the sites and setting up the bindings for each one, so that only one Custom Action would be needed. The reason I have not done that here is that I didn't want the IIS implementation details leaking out of the Wix project. With the multiple-actions approach the web site names and host header names are not stored in the script file and are held only in this Wix project.

Full Trust For Applications Running On Remote Share

A large number of .Net applications in enterprise environments are run directly from a file share on a server within the local corporate intranet. Previously this was usually only achieved after adjusting the .Net security policy on the client machine. As of .Net 3.5 SP1, however, this is no longer an issue, as assemblies accessed from a local intranet share are granted full trust. There are some restrictions; for example, it only applies to assemblies loaded from the same directory as the target executable, although apparently this restriction has been removed in .Net 4. Although this has been around since the beta of 3.5 SP1 I wasn't aware of it and thought it was worth sharing. Read more about it here.

WCF Best Practices

Windows Communication Foundation is a huge technology and one that is easy to implement badly. Luckily Mehran Nikoo has collated a selection of WCF best practices in his blog:

http://mehranikoo.net/CS/archive/2008/05/31/WCF_5F00_Best_5F00_Practices.aspx

It covers versioning, hosting and security, all of which are worth reading in detail.

Two highlights for me are the problems with using the ‘using’ statement with WCF clients, and the restriction of two concurrent HTTP connections per remote server built into System.Net.

It is now quite common practice to wrap calls to objects that implement IDisposable in a ‘using’ statement, and many WCF sample code snippets in text books and online use this method. This pattern has since been shown to be far from ideal for WCF clients, as it hides a potential source of errors and can result in unhelpful generic exceptions being thrown and the original exception being hidden. I have witnessed this recently where a service call reported an odd generic transport exception, but when removed from the ‘using’ statement the original exception was caught and easily resolved. Check out the above link for the recommended approach using try/catch blocks.
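
The recommended shape, assuming a generated proxy class called ServiceClient, is roughly this (a sketch of the try/catch pattern rather than a drop-in implementation):

ServiceClient client = new ServiceClient();
try
{
    client.DoWork();
    client.Close(); // Close() can throw; don't let Dispose() swallow the real error
}
catch (System.ServiceModel.CommunicationException)
{
    client.Abort(); // transition the faulted channel straight to the closed state
    throw;
}
catch (TimeoutException)
{
    client.Abort();
    throw;
}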

The HTTP specification recommends a maximum of just two concurrent connections to a remote server at any time, and System.Net enforces this limit by default. This restriction may have a negative effect on your WCF client application. It can be adjusted via configuration files (app.config, web.config or machine.config) with this setting:

<system.net>
  <connectionManagement>
    <add address="*" maxconnection="6"/>
  </connectionManagement>
</system.net>
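
If you prefer code to configuration, the same default can also be raised programmatically at application start-up:

// equivalent to the maxconnection setting above; set before any HTTP requests are made
System.Net.ServicePointManager.DefaultConnectionLimit = 6;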

Adjusting this setting may of course have side effects, for example an increase in CPU usage. It is strongly recommended that you test the right setting for your application and also follow Microsoft's guidelines in this article.