Break on Exceptions in Visual Studio 2015

Looking for the option to break on exceptions during debugging in Microsoft Visual Studio 2015? Well, Microsoft dumped the old Exceptions dialog and replaced it with the new Exception Settings window. To see it, open the window via the menu: Debug > Windows > Exception Settings.

[Screenshot: the Debug > Windows > Exception Settings menu]

Use the Exception Settings window to choose the types of exceptions on which you wish to break. Right-click for the context menu option that controls whether the debugger breaks or continues when the exception is handled (see below). To break on all exceptions, ensure this option is set to off (not ticked).

[Screenshot: the Exception Settings window]

For more information check out these MSDN links:

https://blogs.msdn.microsoft.com/visualstudioalm/2015/02/23/the-new-exception-settings-window-in-visual-studio-2015/

https://blogs.msdn.microsoft.com/visualstudioalm/2015/01/07/understanding-exceptions-while-debugging-with-visual-studio/

Preventing Browser Caching using HTTP Headers

Many developers consider the use of HTTPS on a site enough security for a user’s data; however, one area often overlooked is the caching of your site’s pages by the user’s browser. By default (for performance) browsers will cache pages visited regardless of whether they are served via HTTP or HTTPS. This behaviour is not ideal for security as it allows an attacker to use the locally stored browser history and browser cache to read possibly sensitive data entered by a user during their web session. The attacker would need access to the user’s physical machine (either locally in the case of a shared device, or remotely via remote access or malware).

To avoid this scenario for your site you should consider informing the browser not to cache sensitive pages via the header values in your HTTP response. Unfortunately it’s not quite that easy, as different browsers implement different policies and treat the various cache control values in HTTP headers differently.

Taking control of caching via the use of HTTP headers

To control how the browser (and any intermediate server) caches the pages within our web application, we need to change the HTTP headers to explicitly prevent caching. The minimum recommended HTTP headers to deactivate caching are:

Cache-control: no-store
Pragma: no-cache

Below are the settings seen on many secure sites, both as a comparison to the above and perhaps as a guide to what we should really be aiming for:

Cache-Control: max-age=0, no-cache, no-store, must-revalidate
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Pragma: no-cache

HTTP Headers & Browser Implementation Differences:

Different web browsers implement caching in differing ways and therefore also implement various subtleties in their support for the cache controlling HTTP headers. This also means that as browsers evolve so too will their implementations related to these header values.

Pragma Header Setting

The ‘Pragma’ setting is still widely used but is now outdated (a setting retained from HTTP 1.0) and actually relates to requests and not responses. Because developers have been ‘over-using’ it on responses, many browsers started honouring this setting to control response caching. This is why it is best included even though it has been superseded by specific HTTP 1.1 directives.

Cache-Control ‘No-Store’ & ‘No-Cache’ Header Settings

A “Cache-Control” setting of private instructs any proxies not to cache the page but still permits the browser to cache it. Changing this to no-store instructs the browser not to cache the page and not to store it in a local cache. This is the most secure setting. Again, due to variances in implementation, a setting of no-cache is also sometimes used to mean no-store (despite this setting actually meaning cache but always re-validate). Because of this the common recommendation is to include both settings, i.e. Cache-control: no-store, no-cache.

Expires Header Setting

This again is an old HTTP 1.0 setting that is maintained for backward compatibility. Setting this to a date in the past forces the browser to treat the data as stale, so it will not be loaded from cache but re-queried from the originating server. The data is still cached locally on disk though, so this provides only a limited security benefit, but it does prevent an attacker directly using the browser back button to read the data without resorting to accessing the cache on the file system. For example: Expires: Thu, 01 Jan 1970 00:00:00 GMT

Max-Age Header Setting

This is the HTTP 1.1 equivalent of the Expires header. Setting it to 0 forces the browser to re-validate with the originating server before displaying the page from cache. For example: Cache-control: max-age=0

Must-Revalidate Header Setting

This instructs the browser that it must revalidate the page against the originating server before loading from the cache, i.e. Cache-Control: must-revalidate

Implementing the HTTP Header Options

Which pages will be affected?

Technically you only need to turn off caching on those pages where sensitive data is being collected or displayed. This needs to be balanced against the risk of accidentally not implementing the change on new pages in the future, or making it possible to remove this change accidentally on individual pages. A review of your web application might show that the majority of pages display sensitive data and therefore a global setting would be beneficial. A global setting would also ensure that any new future pages added to the application would automatically be covered by this change, reducing the impact of developers forgetting to set the values.

There is a trade-off with performance here and this must be considered in your approach. As this change impacts the client-side caching mechanics of the site there will be performance implications. Pages will no longer be cached on the client, which will impact client response times and may also increase load on the servers. A full performance test is required following any change in this area.

Implementing in ASP.net

There are numerous options for implementing the HTTP headers into a web application. These options are outlined below with their strengths/weaknesses. ASP.net and the .NET Framework provide methods to set caching controls via the Response and Cache objects. These in turn result in HTTP headers being set for the page/application’s HTTP responses. This provides a level of abstraction from the HTTP headers, but that abstraction prevents you from setting the headers exactly how you might like them for full browser compatibility. The alternative approach is to explicitly set the HTTP headers. Both options and how they can be implemented are explored below:

Using ASP.net Intrinsic Cache Settings
Declaratively Set Output Cache per ASPX Page

Using the ASPX Page object’s attributes you can declaratively set the output cache properties for the page, including the HTTP header values regarding caching. The syntax is shown in the example below:

Example ASPX page:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="CacheTestApp._Default" %> 
<%@ OutputCache Duration="60" VaryByParam="None"%> 
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 
<html xmlns="http://www.w3.org/1999/xhtml" > 
<head runat="server"> 
<title></title> 
</head> 
<body> 
<form id="form1" runat="server"> This is Page 1.</form> 
</body> 
</html>

Parameters can be added to the OutputCache settings via the various supported attributes. Whilst this configuration allows specific targeting of the caching solution by enabling you to define a cache setting for each separate page, it has the drawback that changes need to be made to all pages and all user controls. In addition, developers of any new pages will need to ensure that the page’s cache settings are correctly configured. Lastly, this solution is not configurable should the setting need to be changed per environment or disabled for performance reasons.

Declaratively Set Output Cache Using a Global Output Cache Profile

An alternative declarative solution for configuring a page’s cache settings is to use a Cache Profile. This works by again adding an OutputCache directive to each page (and user control) but this time deferring the configuration settings to a CacheProfile in the web.config file.

Example ASPX page:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="CacheTestApp._Default" %> 
<%@ OutputCache CacheProfile="RHCacheProfile" %> 
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 
<html xmlns="http://www.w3.org/1999/xhtml" > 
<head runat="server"> 
<title></title> 
</head> 
<body> 
<form id="form1" runat="server"> 
This is Page 1. 
</form> 
</body> 
</html>

Web.config file:

<system.web>
  <caching>
    <outputCache enableOutputCache="false"/>
    <outputCacheSettings>
      <outputCacheProfiles>
        <add name="RHCacheProfile"
             location="None"
             noStore="true"/>
      </outputCacheProfiles>
    </outputCacheSettings>
  </caching>
</system.web>

This option provides the same specific per-page targeting, along with the related drawback of having to make changes to every page and user control. However, it does centralise the cache settings in one place (minimising the impact of future changes) and enables caching to be configured during installation, depending on the target environment, via the deployment process.

Programmatically Set HTTP Headers in ASPX Pages

Output caching can also be set in code in the code behind page (or indeed anywhere where the response object can be manipulated). The code snippet below shows setting the HTTP headers indirectly via the Response.Cache object:

Response.Cache.SetCacheability(HttpCacheability.NoCache); 
Response.Cache.SetExpires(DateTime.UtcNow.AddHours(-1)); 
Response.Cache.SetNoStore();
Response.Cache.SetMaxAge(new TimeSpan(0,0,30));

This code would need to be added to each page, resulting in duplicate code to maintain, and again requires developers to remember to add it to all new pages as they are developed. It results in the headers below being produced:

Cache-Control: no-cache, no-store
Expires: -1
Pragma: no-cache

Programmatically Set HTTP Headers in Global ASAX File

Instead of adding the above code in each page an alternative approach is to add it to the Global ASAX file so as to apply to all requests made through the application.

void Application_BeginRequest(object sender, EventArgs e)
{
	Response.Cache.SetCacheability(HttpCacheability.NoCache);
	Response.Cache.SetExpires(DateTime.Now);
	Response.Cache.SetNoStore();
	Response.Cache.SetMaxAge(new TimeSpan(0,0,30));
}

This would apply to all pages being requested through the application. It results in the below headers being produced:

Cache-Control: no-cache, no-store
Expires: -1
Pragma: no-cache

Explicitly Define HTTP Headers Outside of ASP.net Cache Settings

Explicitly Define HTTP Headers in ASPX Pages

The response object can have its HTTP Headers set explicitly instead of using the ASP.net Cache objects abstraction layer. This involves setting the header on every page:

void Page_Load(object sender, EventArgs e)
{
	Response.AddHeader("Cache-Control", "max-age=0,no-cache,no-store,must-revalidate");
	Response.AddHeader("Pragma", "no-cache");
	Response.AddHeader("Expires", "Tue, 01 Jan 1970 00:00:00 GMT");
}

Again, as a page-specific approach, it requires a change to be made on each page. It results in the headers below being produced:

Cache-Control: max-age=0,no-cache,no-store,must-revalidate
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Pragma: no-cache

Explicitly Define HTTP Headers in Global ASAX File

To avoid having to set the header explicitly on each page the above code can be inserted into the Application_BeginRequest event within the application’s Global ASAX file:

void Application_BeginRequest(object sender, EventArgs e)
{
	Response.AddHeader("Cache-Control", "max-age=0,no-cache,no-store,must-revalidate");
	Response.AddHeader("Pragma", "no-cache");
	Response.AddHeader("Expires", "Tue, 01 Jan 1970 00:00:00 GMT");
}

Again this results in the headers below being produced:

Cache-Control: max-age=0,no-cache,no-store,must-revalidate
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Pragma: no-cache

Environment Specific Settings

It’s useful to be able to set the header values via configuration settings, not least to be able to test this change in a performance test environment via before/after tests.

All of the above changes should be made configurable and be able to be triggered/tweaked via the web.config file (and therefore can be modified via deployment settings).
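
As a rough sketch of how that toggle might look (the "DisableClientCaching" appSettings key and its use here are illustrative assumptions, not an existing setting), the Global.asax handler could read a flag from web.config before adding the headers:

void Application_BeginRequest(object sender, EventArgs e)
{
    // Assumed key: <appSettings><add key="DisableClientCaching" value="true"/></appSettings>
    bool disableCaching;
    bool.TryParse(System.Configuration.ConfigurationManager.AppSettings["DisableClientCaching"],
                  out disableCaching);

    if (disableCaching)
    {
        // Only emit the anti-caching headers when the configuration switch is on
        Response.AddHeader("Cache-Control", "max-age=0,no-cache,no-store,must-revalidate");
        Response.AddHeader("Pragma", "no-cache");
        Response.AddHeader("Expires", "Thu, 01 Jan 1970 00:00:00 GMT");
    }
}

Changing the value per environment (or removing it entirely) then only requires a web.config transform in the deployment process rather than a code change.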

Upgrading MVC 3 to MVC 4 via NuGet

I had to upgrade an old ASP.NET MVC 3 project to MVC 4 yesterday and whilst searching for the official instructions I found that there is a NuGet package that does all the hard work for you.

The official instructions for upgrading are in the MVC 4 release notes here: http://www.asp.net/whitepapers/mvc4-release-notes#_Toc303253806

But Nandip Makwana has created a NuGet package that automates this process. Check it out here: https://www.nuget.org/packages/UpgradeMvc3ToMvc4

It worked great for me.

Host Static HTML or WebForms Page within MVC Site

If you need to host a static HTML page within an ASP.net MVC website, or you need to mix ASP.net WebForms with an MVC website, then you need to configure MVC routing to ignore requests for those pages.

Recently I wanted to host a static HTML welcome page (e.g. hello.htm) on an MVC website. I added the HTML page to my MVC solution (setting it as the Visual Studio project’s start page) and configured my web site’s default page to be the HTML page (hello.htm). It tested OK at first, but then I realised it was only displaying the hello page first on debug because I’d set it as the Visual Studio project’s start-up page; I hadn’t actually configured the MVC routes correctly, so it wouldn’t work once deployed.

For this to work you need to tell MVC to ignore the route if it’s for the HTML page (or ASPX page in the case of mixing WebForms and MVC). Find your routing configuration section (for MVC 4 it’s in RouteConfig.cs under the App_Start folder, for MVC 1, 2 and 3 it’s in Global.asax). Once found, use the IgnoreRoute() method to tell Routing to ignore the specific paths. I used this:

routes.IgnoreRoute("hello.htm"); //ignore the specific HTML start page
routes.IgnoreRoute(""); //to ignore any default root requests

Now MVC ignores a request to load the hello HTML page and leaves IIS to handle returning the resource and hence the page displays correctly.

Setting a Custom Domain Name on an Azure Web Site

I recently decided to add a custom domain name to a free Azure website that I use for development purposes. As the FREE Azure web site model doesn’t support custom domains (a shame, but hard to complain as it’s FREE) I needed to upgrade the site to ‘Shared’ mode. This is easily done via the Scale settings in the Azure portal.

Firstly, however, I needed to move my current Azure web site to sit under a different subscription to the one I used to set it up. The problem is that you cannot move sites between subscriptions yet (please fix this, Microsoft). To get around this I needed to create a new website under the correct subscription and then publish my web site code to it. Luckily this is easy to do as it’s just a basic website, but I can imagine this could be painful if you have a bunch of storage accounts or a database to re-create.

Using the Azure portal, creating a new site is a simple process: click +NEW at the bottom of the portal to bring up the new site menu.

Once created, all I needed to do was download a publish profile for the new site (see this tutorial link for how to publish to Azure) for Visual Studio to use. Once downloaded I opened my VS2012 solution, brought up the Publish dialog, pointed it to the new publish profile file and clicked Publish. In just a few seconds I had a new Azure web site up and running with my existing MVC web application. This was very smooth, with no change to config or code required. The sheer simplicity of this impressed me as I was short on time.

Next I needed to set up my custom domain, which as previously mentioned is not available for FREE websites, so I needed to upgrade to SHARED mode. From the Azure portal > web site configuration > Scale > click SHARED (remember this model incurs a cost).

Once upgraded I could then immediately select DOMAINS and set up my CNAME and A record references; for more information see this useful link (configuring a custom domain name for a Windows Azure web site). It’s worth reading the comments on the post too, as they cover issues with registering the domain without the WWW subdomain.

Once the DNS entries had propagated I had my existing site up and running under a custom domain running within a shared Azure instance, all with very little effort.

The Enterprise & Open Web Developer Divide

In this interesting Forrester post about embracing the open web Jeffrey Hammond highlights the presence of two different developer communities. In his words:

"…there are two different developer communities out there that I deal with. In the past, I’ve referred to these groups as the "inside the firewall crowd" and the "outside the firewall crowd." The inquiries I have with the first group are fairly conventional — they segment as .NET or Java development shops, they use app servers and RDBMSes, and they worry about security and governance. Inquiries with the second group are very different — these developers are multilingual, hold very few alliances to vendors, tend to be younger, and embrace open source and open communities as a way to get almost everything done. The first group thinks web services are done with SOAP; the second does them with REST and JSON. The first group thinks MVC, the second thinks "pipes and filters" and eventing."

Following the tech industry it is clear to me that this division is tangible and in fact I would suggest the gap is currently increasing. I recently started to revisit my open web development skills after it occurred to me how large this divide was beginning to get and how important these skills will be in the future. Whilst the Enterprise developer often traditionally focuses deeply on a handful of technologies (too often from one vendor), the Open Web developer is constantly learning new languages and choosing between best-of-breed open source frameworks to get the job done. The new Open Web developer has evolved from a different age and with different perspectives, in many ways leaving behind the rules/constraints of the Enterprise developer building typical Line Of Business (LOB) applications. I’m not suggesting that Enterprise developers don’t understand these technologies already, I assume many do, but they’re unlikely to be living and breathing them.

This is not just about web development technologies and techniques, but more about mind-sets, architectural styles and patterns. Perhaps it can be viewed historically as similar to the evolution from mainframes to distributed computing, and this is just the next evolution. This movement complements the emergence of cloud computing and one can assume that the social, dynamic LOB applications of tomorrow will rely heavily on the skills and technologies of the Open Web community. To quote Jeffrey again:

"In the next few years, their world is headed straight to an IT shop near you."

The proliferation of devices, cloud computing and a new breed of ‘surfing since birth’ young blood entering the industry, combined with the shift towards this new world from big players like Microsoft (e.g. using JavaScript to build Windows 8 apps), means that Enterprise IT will have to converge with the Open Web approach in order to meet future consumer needs. Only the integration of these worlds will enable Enterprises to integrate their existing application landscapes with the new web-based consumption model.

John R. Rymer’s Forrester post on the subject provides his view on the differences between these communities and his accompanying post details the technologies you need to focus on now (HTML5, CSS3, JavaScript, REST). Whilst it can be tricky to follow this sort of fast moving decentralized movement, the good news is that now is a great time to get into these technologies with the growth of the umbrella HTML5 movement raising awareness within the industry and bringing some standards to advanced web design. Keep an eye on what the big web frameworks are offering, and track the innovations at companies like Google and Twitter. I recommend you read these Forrester articles and think about how this affects your architecture, IT organization and career.

For some quality content on these technologies check out these links:  ‘Mozilla Developer Network’, ‘Move The Web Forward’ and ‘HTML5 Rocks’.

Getting A User’s Username in ASP.NET

When building an ASP.net application it’s important to understand the authentication solution that you are planning to implement and then ensure that all your developers are aware of it. On a few projects I have noted that some developers lack this knowledge and it can end up causing issues later on in the project once the code is first deployed to a test environment. These problems are usually a result of the differences between running a web project locally and remotely. One problem I found on a recent project was where developers were trying to retrieve the logged-on user’s Windows username (within an intranet scenario) for display on screen. Unfortunately the code to retrieve the username from the client request had been duplicated and a different solution used in each place; worse still, neither worked. Sure, they worked on the local machine but not when deployed. It was clear immediately that the developers had not quite grasped the way ASP.net works in this regard. There are several ways of retrieving usernames and admittedly it’s not always clear which is best in each scenario, so here is a very quick guide. This post is not a deep dive into this huge subject (I might do a follow-up post on that) but merely a quick guide to what user details you get from the below objects in the framework.

The members we’re looking at are:
1. HTTPRequest.LogonUserIdentity
2. System.Web.HttpContext.Current.Request.IsAuthenticated
3. Security.Principal.WindowsIdentity.GetCurrent().Name
4. System.Environment.UserName
5. HttpContext.Current.User.Identity.Name  (same as just User.Identity.Name)
6. HttpContext.User Property
7. WindowsIdentity

To test, we create a basic ASPX page and host it in IIS so we can see what values these properties hold under a set of authentication scenarios. The page just calls the various username properties available and writes out the values in the response via Response.Write().
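
As a sketch (assuming a plain test page called Default.aspx; the exact markup isn’t important), the code-behind is little more than:

using System;
using System.Web;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Write out each value so the page shows what each property returns under the current IIS configuration
        Response.Write("LogonUserIdentity.Name: " + HttpContext.Current.Request.LogonUserIdentity.Name + "<br/>");
        Response.Write("IsAuthenticated: " + HttpContext.Current.Request.IsAuthenticated + "<br/>");
        Response.Write("User.Identity.Name: " + HttpContext.Current.User.Identity.Name + "<br/>");
        Response.Write("Environment.UserName: " + System.Environment.UserName + "<br/>");
        Response.Write("WindowsIdentity.GetCurrent().Name: " + System.Security.Principal.WindowsIdentity.GetCurrent().Name + "<br/>");
    }
}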

Scenario 1: Anonymous Authentication in IIS with impersonation off.

HttpContext.Current.Request.LogonUserIdentity.Name COMPUTER1\IUSR_COMPUTER1
HttpContext.Current.Request.IsAuthenticated False
HttpContext.Current.User.Identity.Name
System.Environment.UserName ASPNET
Security.Principal.WindowsIdentity.GetCurrent().Name COMPUTER1\ASPNET

As you can see, when we’re running with Anonymous Authentication, HttpContext.Current.Request.LogonUserIdentity is the anonymous guest user defined in IIS (IUSR_COMPUTER1 in this example); as the user is not authenticated, the WindowsIdentity is set to that of the running process (ASPNET) and HttpContext.Current.User.Identity is not set.

Scenario 2: Windows Authentication in IIS, impersonation off.

HttpContext.Current.Request.LogonUserIdentity.Name MYDOMAIN\USER1
HttpContext.Current.Request.IsAuthenticated True
HttpContext.Current.User.Identity.Name MYDOMAIN\USER1
System.Environment.UserName ASPNET
Security.Principal.WindowsIdentity.GetCurrent().Name COMPUTER1\ASPNET

Using Windows Authentication, however, enables the remote user to be authenticated (i.e. IsAuthenticated is true) automatically via their domain account, and therefore the HttpContext.Current.Request user is set to that of the remote client’s user account, including the Identity object.

Scenario 3: Anonymous Authentication in IIS, impersonation on

HttpContext.Current.Request.LogonUserIdentity.Name COMPUTER1\IUSR_COMPUTER1
HttpContext.Current.Request.IsAuthenticated False
HttpContext.Current.User.Identity.Name
System.Environment.UserName IUSR_COMPUTER1
Security.Principal.WindowsIdentity.GetCurrent().Name COMPUTER1\IUSR_COMPUTER1

This time we’re using Anonymous Authentication but now with ASP.net impersonation turned on in web.config. The only difference to the first scenario is that now the anonymous guest user IUSR_COMPUTER1 is being impersonated, and therefore System.Environment and Security.Principal are running under that account’s privileges.

Scenario 4: Windows Authentication in IIS, impersonation on

HttpContext.Current.Request.LogonUserIdentity.Name MYDOMAIN\USER1
HttpContext.Current.Request.IsAuthenticated True
HttpContext.Current.User.Identity.Name MYDOMAIN\USER1
System.Environment.UserName USER1
Security.Principal.WindowsIdentity.GetCurrent().Name MYDOMAIN\USER1

Now with Windows Authentication and Impersonation on everything is running as our calling user’s domain account. This means that the ASP.net worker process will share the privileges of that user.

As you can see each scenario provides a slightly different spin on the results which is what you would expect. It also shows how important it is to get the configuration right in your design and implement it early on in build to avoid confusion. For more information see ASP.NET Web Application Security on MSDN.

Integrating WCF Services into Web Client Software Factory

For those of you unfamiliar with the Web Client Software Factory (WCSF), it is a very capable web application framework for building web forms based thin client applications. It was created as part of the Patterns and Practices offering from Microsoft (alongside the more well known Enterprise Library). It shares many concepts with its sister offering the Smart Client Software Factory (SCSF), but its implementation is different and I find it easier to use and, sometimes more importantly for organisations, an easier transition for traditional Web Forms developers than ASP.net MVC. It utilises the Model-View-Presenter (MVP) pattern nicely and I find it a useful framework within which to build web applications where an ASP.net MVC approach may have been discounted. For more information on the WCSF check this link.

WCSF uses the ObjectBuilder framework to provide Dependency Injection services to its components. Views, Presenters and Modules can have access to a global (or module-level) services collection which traditionally contains the services (internal services, not external web services) that provide business logic or infrastructure support functionality. It is therefore important that any code within the web application can access this services collection (via Dependency Injection) to consume this shared functionality. The problem I’ve run into recently is how to allow WCF web services, exposed as part of the web application, to hook into the WCSF framework to consume these internal services. These web services need to be able to declare Service Dependencies on other objects and have those dependencies satisfied by the WCSF framework, just as it does for Views and Presenters etc.

I found that the Order Management WCSF Reference Implementation shows how to hook traditional ASMX Web Services into your WCSF Solution. Here is the implementation of the site’s ProductServiceProxy web service:

using System.Web; 
using System.Web.Services; 
using System.Web.Services.Protocols; 
using System.ComponentModel; 
using OrdersRepository.Interfaces.Services; 
using Microsoft.Practices.CompositeWeb; 
using OrdersRepository.BusinessEntities;

namespace WebApplication.Orders 
{ 
    [WebService(Namespace = "http://tempuri.org/")] 
    public class ProductServiceProxy : WebService 
    { 
        IProductService _productService;

        [ServiceDependency] 
        public IProductService ProductService 
        { 
            set { _productService = value; } 
            get { return _productService; } 
        }

        public ProductServiceProxy() 
        { 
            WebClientApplication.BuildItemWithCurrentContext(this);
        }

        [WebMethod] 
        public Product GetProduct(string sku) 
        { 
            return _productService.GetProductBySku(sku); 
        } 
    } 
}

The key line here is the call to WebClientApplication.BuildItemWithCurrentContext(this) within the constructor. This is the key to how this ASMX Web Service can be built up by ObjectBuilder and therefore have its Dependency Injection requirements met. The rest of the page is typical ASMX and WCSF, for example the ServiceDependency on the ProductService property is declared as normal.

If we look into the WCSF Source Code for BuildItemWithCurrentContext(this) we see how it works:

/// <summary> 
/// Utility method to build up an object without adding it to the container. 
/// It uses the application's PageBuilder and the CompositionContainer 
/// for the module handling the current request 
/// </summary> 
/// <param name="obj">The object to build.</param> 
public static void BuildItemWithCurrentContext(object obj) 
{ 
  IHttpContext context = CurrentContext; 
  WebClientApplication app = (WebClientApplication) context.ApplicationInstance; 
  IBuilder<WCSFBuilderStage> builder = app.PageBuilder; 
  CompositionContainer container = app.GetModuleContainer(context); 
  CompositionContainer.BuildItem(builder, container.Locator, obj); 
}

protected static IHttpContext CurrentContext 
{ 
  get { return _currentContext ?? new HttpContext(System.Web.HttpContext.Current); } 
  set { _currentContext = value; } 
}

The first line calls off to the CurrentContext property, where a new HttpContext is created based on the current HTTP context of the ASMX service’s session. The following lines get a reference to the WebClientApplication instance (that is, WCSF’s version of an HttpApplication for your web app) and then access the composition container. BuildItem then does the heavy work of using ObjectBuilder to build up the service’s dependencies.

So this works nicely for ASMX services but what about WCF Services? Well it is possible to follow the same approach and use the BuildItemWithCurrentContext method within the constructor of the WCF service but we have to follow some additional steps too. If we just add the BuildItemWithCurrentContext(this) call to the constructor of our WCF service then it will fail as the HTTPContext will always be null when accessed from within a WCF Service.

ASP.net and IIS-hosted WCF services play nicely together within a single Application Domain, sharing state and common infrastructure services, but HTTP runtime features do not apply to WCF. Features such as the current HTTPContext, ASP.net impersonation, HTTPModule extensibility, and config-based URL and file authorisation are not available under WCF. WCF provides alternatives for these features but they don’t help with hooking into the ASP.net specific WCSF. This is where WCF’s ASP.NET compatibility mode saves the day. By configuring your WCF service to use ASP.NET compatibility you change the side-by-side configuration so that WCF services engage fully in the HTTP request lifecycle and thus can access resources just as ASPX pages and ASMX web services do. This provides the WCF service with a reference to the current HTTPContext and allows the WCSF to function correctly. It must be said that there are some drawbacks to using ASP.NET compatibility mode, for example the protocol must be HTTP and the WCF service must be hosted in IIS, but these will usually be acceptable when you want to add the WCF service to a WCSF application.

To turn on ASP.NET compatibility mode update your web.config:

<system.serviceModel> 
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" /> 
</system.serviceModel>

Your services must then opt in to take advantage of the compatibility mode, and this is done via the AspNetCompatibilityRequirementsAttribute. This can be set to ‘Required’, ‘Allowed’ and ‘NotAllowed’, but for our WCSF WCF service it is required, so we need to set it as such, as in the example below:

using System.ServiceModel; 
using System.ServiceModel.Activation; 
using Microsoft.Practices.CompositeWeb;

namespace WebApplication 
{ 
    [ServiceBehavior] 
    [AspNetCompatibilityRequirements(RequirementsMode =  
                              AspNetCompatibilityRequirementsMode.Required)]
    public class Service1 : IService1 
    { 
        public void DoWork() 
        { 
          LoggingService.WriteInformation("It worked."); 
        }

        // ILogging is the application's own WCSF-registered logging service
        [ServiceDependency] 
        public ILogging LoggingService { get; set; }

        public Service1() 
        { 
            WebClientApplication.BuildItemWithCurrentContext(this); 
        } 
    } 
}

And that’s it, with the Asp.Net Compatibility Mode turned on and our service stating that it requires this Compatibility Mode to be on (via its AspNetCompatibilityRequirements attribute), the WCSF BuildItemWithCurrentContext(this) method can run successfully with the current HTTPContext.

For more information on hosting WCF and ASP.net side by side and the ASP.net compatibility mode check out ‘WCF Services and ASP.NET’. For more information on the Web Client Software Factory check out ‘Web Client Software Factory on MSDN’.

IIS Host Headers, Secure Bindings, Wix & Custom Actions

Whilst trying to host multiple WCF services in IIS, each within its own web site, I discovered an issue with secure host headers and IIS 6. The requirement was to securely install each WCF service inside its own web site, with its own application and application pool instance. In order to set up multiple sites I needed either multiple IP addresses, different ports on the one IP address, or ‘host headers’. The chosen solution was ‘host headers’ and I set these up in IIS using IIS Manager (inetmgr). This is covered here:

http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/e7a21b1f-ab13-47f2-8c61-b09cf14a7cb3.mspx

However the UI only supports setting up unsecure bindings and not secure ones. After checking on TechNet I found this:

http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/596b9108-b1a7-494d-885d-f8941b07554c.mspx?mfr=true

It turns out that whilst IIS 6 does support the use of host headers on secure bindings, this feature was not added in time for the IIS Admin UI to reflect it. The article tells us to use the handy IIS admin script “adsutil.vbs”, which updates the IIS metabase XML file, as detailed here:

http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/596b9108-b1a7-494d-885d-f8941b07554c.mspx?mfr=true

The problem with this approach, though, is that the script rather annoyingly requires the site identifier for the web site you wish to update. This is easy to obtain interactively by checking out IIS Manager and clicking on ‘web sites’, but I needed to set up my site via an MSI (via Wix). The installation script will not know what the identifier of the site is, as it could vary on each install. Therefore I needed to be able to resolve a web site name to an identifier before calling adsutil.vbs. After some Googling I found this excellent post by David Wang:

http://blogs.msdn.com/david.wang/archive/2005/07/13/HOWTO_Enumerate_IIS_Website_Configuration.aspx

Here he explains how to enumerate through IIS entities to find the one you want. If you check out the comments there is a post by ‘Dave’ where he has included a modified script. This script iterates over the web sites and finds the one matching the right name, then calls the “adsutil.vbs” script to update the metabase using the correct site identifier. I copied this script and trimmed it down for my own purposes and now had a viable script for my installer.

Using a VBScript within an MSI is not recognised as best practice; a more robust solution would be to code a Custom Action in native code (there are hidden dangers with Custom Actions written in managed code). However, it is a viable option and in this instance it fitted my requirements perfectly (for many reasons outside the scope of this article).

Now I’m a fan of Wix and am always impressed with how much it can do out of the box without customisation. In this case, however, I discovered that it is not that easy to just run a command or script during install. Don’t get me wrong, it is possible and there are a huge number of options as to when and how to run custom actions, but I was expecting it to be easier than it turned out to be. There are many decent blog posts on the subject of Custom Actions in Wix and I’m not going to go into much detail here; in the end my solution looked like this:

<!-- Set up IIS Bindings -->
<InstallExecuteSequence>
   <Custom Action='CAIISSecureBindingsServiceX' Before='InstallFinalize'>
      NOT Installed
   </Custom>
   <Custom Action='CAIISSecureBindingsServiceY' Before='InstallFinalize'>
      NOT Installed
   </Custom>
</InstallExecuteSequence>

This slots our Custom Actions into the running order of the MSI installation sequence of events. It asks for the actions to be run just before the installation has completed. The ‘NOT Installed’ condition ensures that the actions only fire on installation and not on an uninstall.

We then define a Property for the path to the exe we want to run, in this case the CScript.exe application (to run our VBScript). After that we need to define each custom action. The ExeCommand tells it what to pass to the CScript executable as parameters. The Execute=’deferred’ option is important as it means that the scripts will not run on the first pass through of the MSI installation. The MSI installation process involves running through all steps without committing them; if that runs OK it then does the steps for real. If we run the script before the web sites have been committed to IIS, the script will fail. Setting it to ’deferred’ means it will be left out of the initial run-through and executed in the actual “doing” stage. For more information on the MSI sequence of events check this out (http://www.advancedinstaller.com/user-guide/standard-actions.html). I found that Impersonate needs to be set to ‘yes’ for this to work correctly.

<!-- Define all the custom actions and the properties required -->
<Property Id='ScriptEngine'>C:\Windows\System32\CScript.exe</Property>
<CustomAction
   Id='CAIISSecureBindingsServiceX'
   Property='ScriptEngine'
   ExeCommand='[INSTALLLOCATION]UpdateIISBindings.vbs ServiceXv1Site X.v1.default'
   Execute='deferred'
   Impersonate='yes'
   Return='ignore'/>
<CustomAction
   Id='CAIISSecureBindingsServiceY'
   Property='ScriptEngine'
   ExeCommand='[INSTALLLOCATION]UpdateIISBindings.vbs ServiceYv1Site Y.v1.default'
   Execute='deferred'
   Impersonate='yes'
   Return='check'/>

I have set up multiple Custom Actions although only one is really needed. In my implementation I have made the VBS file merely a helper script that uses the information passed in as parameters; I then have multiple custom actions that each call the script separately, passing in different parameters (web site name and host header value). It would be neater to have the VBS file include all the logic for looping over all sites and setting up the bindings for each one, so that only one Custom Action would be needed to run that script. The reason I have not done that here is that I didn’t want the IIS implementation details leaking out of the Wix project. With the multiple-actions approach the website names and host header names are not stored in the script file and are held only in the Wix project.

WCF Best Practices

Windows Communication Foundation is a huge technology and one that is easy to implement badly. Luckily Mehran Nikoo has collated a selection of WCF best practices in his blog:

http://mehranikoo.net/CS/archive/2008/05/31/WCF_5F00_Best_5F00_Practices.aspx

It covers versioning, hosting and security, all of which are worth reading in detail.

Two highlights for me are the problems with using the ‘using’ statement with WCF clients, and the restriction of two concurrent HTTP connections built into System.Net.

It is now quite common practice to wrap calls to objects that implement IDisposable within a ‘using’ statement, and many WCF sample code snippets in textbooks and online use this method. Using this pattern has since been shown to be far from ideal as it hides a potential source of errors and can result in unhelpful generic exceptions being thrown and the original exception being hidden. I have witnessed this recently where a service call reported an odd generic transport exception, but when removed from the ‘using’ statement the original exception was caught and was easily resolved. Check out the above link for the recommended approach using try/catch blocks.
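
As a rough illustration of that try/catch approach (ProductClient and its GetProduct method are hypothetical proxy names used only for this sketch, not part of the linked article), the shape is roughly:

// Requires a reference to System.ServiceModel for CommunicationException
var client = new ProductClient();
try
{
    client.GetProduct("some-sku");
    client.Close();              // Close() itself can throw, so it stays inside the try block
}
catch (CommunicationException)
{
    client.Abort();              // Abort() tears the channel down without throwing
    throw;                       // re-throw so the original exception is not hidden
}
catch (TimeoutException)
{
    client.Abort();
    throw;
}
catch (Exception)
{
    client.Abort();
    throw;
}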

The HTTP specification recommends a maximum of just two concurrent connections with a remote server at any time, and System.Net enforces this limit by default. This restriction may have a negative effect on your WCF client application. It can be adjusted via configuration files (app.config, web.config or machine.config) with this setting:

<system.net>
  <connectionManagement>
    <add address="*" maxconnection="6"/>
  </connectionManagement>
</system.net>

Adjusting this setting may of course have side effects, for example an increase in CPU usage etc. It is strongly recommended that you test out the right setting for your application and also follow Microsoft’s guidelines in this article.
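
If you prefer to set it in code rather than configuration (e.g. in a test harness), the equivalent knob is ServicePointManager.DefaultConnectionLimit; a minimal sketch, assuming it runs before any requests or channels are created:

using System.Net;

class Program
{
    static void Main()
    {
        // Equivalent of maxconnection="6" in <connectionManagement>, applied process-wide
        ServicePointManager.DefaultConnectionLimit = 6;
    }
}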