Platform Targeting in .Net

If you see one of the errors below in your .Net application, it is likely the result of your assembly having been compiled with the wrong target architecture flag set.

1) In this instance the assembly was accidentally compiled for x86 only and run within IIS in a 64bit process:

Unhandled Exception: System.BadImageFormatException: Could not load file or assembly “AssemblyName Version=1.0.0.0, Culture=neutral” or one of its dependencies. An attempt was made to load a program with an incorrect format.

(For info, IIS has an Application Pool setting under Advanced Settings that enables you to run the App Pool as a 32-bit process instead of 64-bit. This is off by default.)

2) Here a .Net application targeting a 64 bit processor was run on a 32bit system:

This version of ConsoleApplication1.exe is not compatible with the version of Windows you’re running. Check your computer’s system information to see whether you need a x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher.

If you are looking for a quick fix, just recompile your assembly with the “Any CPU” option as your platform target (see the Build tab under project properties for this setting). If you want more information and explanation then read on.

[Screenshot: the Platform Target setting on the project properties Build tab]

When compiling .Net assemblies in Visual Studio (or via the command line) there are a few options for optimising for certain target platforms. If you know that your assembly needs to be deployed only to a 32-bit environment (or it needs to target a specific processor instruction set) then you can optimise the MSIL output (what the compiler produces ready for JIT compilation at run time) for that platform. This is not normally required but perhaps, for example, you have to reference unmanaged code that targets a specific processor architecture.

Essentially this sets a flag inside the assembly metadata that is read by the CLR. If it is set incorrectly it can result in the above errors, but also other odd situations. For example, if you compile your app to target “x86” and then reference an assembly targeting the “x64” platform, you will see an error at runtime due to this mismatch (BadImageFormatException). Running an “x86” application on 64-bit Windows will still work, but it will not run natively as 64-bit; instead it will run under WOW64, the emulation mode that enables x86 execution under 64-bit (with a performance overhead), which may or may not be a valid scenario in your case.

If you want to reproduce the situation, try creating a new console application and setting Platform Target to “x86” on the Build properties tab. Then create a new Class Library project, reference it from the console application, and set the class library to target the “x64” platform on its Build properties tab. Build and run the application, which will throw the above BadImageFormatException.

The target platform for your project is set in the Project Properties tab in Visual Studio, under Build (see screenshot above). If you are compiling via the command line you use the /platform switch.
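
For example, a minimal command-line compile might look like this (a sketch; the source file name here is purely illustrative):

csc /platform:x86 Program.cs
csc /platform:anycpu32bitpreferred Program.cs

Valid /platform values include x86, x64, Itanium, anycpu (the default) and, from .Net 4.5, anycpu32bitpreferred.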

“AnyCPU” became the default value from VS2010 onwards. Up to .Net 4.0, “AnyCPU” means that if the process runs on a 32-bit system it runs as a 32-bit process and its MSIL is compiled to x86 machine code; if it runs on a 64-bit system it runs as a 64-bit process and its MSIL is compiled to x64 machine code. Whilst this enables more compatibility with multiple target machines, it can lead to confusion or unexpected results when you consider that Windows maintains separate system DLLs, configuration and registry views for 32-bit and 64-bit processes. Since .Net 4.5 (VS2012) there is a new default subtype of “AnyCPU” called “Any CPU 32-bit preferred”, which follows the above rules except that if the process runs on a 64-bit system it will run as a 32-bit process (not 64-bit as before) and its MSIL will be compiled to x86 code, not x64. This change essentially forces your process to run as 32-bit on a 64-bit machine unless you untick the option and turn off the default setting. The setting appears on the project properties Build tab as “Prefer 32-bit”.

[Screenshot: the “Prefer 32-bit” option on the project properties Build tab]
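
If you want to confirm at run time which mode your process actually ended up in, here is a quick sketch using APIs available from .Net 4 onwards:

// quick runtime check of the bitness your process is actually running under
Console.WriteLine("64-bit OS: " + Environment.Is64BitOperatingSystem);
Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
Console.WriteLine("IntPtr.Size: " + IntPtr.Size); // 4 = 32-bit, 8 = 64-bit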

It is worth noting that you may see a confusing “Multiple Environments” option in Visual Studio, which can be automatically added after migrating solutions from older versions of Visual Studio (I believe it was removed as an option in VS2015 onwards but can hang around in older solutions). Use the Configuration Manager tab to check the setting for each assembly. Most developers will want to target “Any CPU”, which supports multiple target environments. If you are getting the above errors then use the steps below to check the assembly and, if it is incorrect, try recompiling with “Any CPU” instead.

How to confirm a target platform for a given assembly:

So how do you confirm which processor architecture an assembly was built for? Well there are a few ways:

1) Using .Net reflection via a simple Powershell command:

[reflection.assemblyname]::GetAssemblyName("${pwd}\MyClassLibrary1.dll") | format-list

Running this command pointing to the assembly you want to check will produce output like this below (results truncated):

An assembly compiled to target AnyCPU:

Name                  : ClassLibrary2
Version               : 1.0.0.0
CodeBase              : file:///C:/TempCode/crap/pttest/x86/ClassLibrary2.dll
ProcessorArchitecture : MSIL
FullName              : ClassLibrary2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null

Same assembly but now compiled to target x86:

Name                  : ClassLibrary2
Version               : 1.0.0.0
CodeBase              : file:///C:/TempCode/crap/pttest/x86/ClassLibrary2.dll
ProcessorArchitecture : X86
FullName              : ClassLibrary2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null

Same assembly again but now compiled to target x64:

Name                  : ClassLibrary2
Version               : 1.0.0.0
CodeBase              : file:///C:/TempCode/crap/pttest/x86/ClassLibrary2.dll
ProcessorArchitecture : Amd64
FullName              : ClassLibrary2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null


2) You can also see this information in a de-compiler tool such as dotPeek. Below is a screenshot showing x86, x64 and AnyCPU targeted assemblies.
[Screenshot: dotPeek showing the Platform value for each assembly]

3) Use the CorFlags Conversion Tool

The CorFlags Conversion Tool (CorFlags.exe) is installed with Visual Studio and easily accessed via the Visual Studio command prompt. It enables reading and editing of the platform flags for assemblies.

CorFlags assemblyName

Assuming you have the .Net 4 version of the tool and an assembly targeting .Net 4 or above, you’ll see something like this (older versions do not show the 32BITREQ/32BITPREF flags, as per the .Net 4 change discussed above):

Microsoft (R) .NET Framework CorFlags Conversion Tool.  Version  4.6.1055.0
Copyright (c) Microsoft Corporation.  All rights reserved.

Version   : v4.0.30319
CLR Header: 2.5
PE        : PE32
CorFlags  : 0x1
ILONLY    : 1
32BITREQ  : 0
32BITPREF : 0
Signed    : 0

To interpret this output see the table below, checking the PE value (PE32+ means 64-bit only) and the 32-bit Required/Preferred flags. It is also possible to update these flags using this tool.
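
For example, a quick sketch of toggling the flags with the .Net 4 version of the tool (the assembly name is illustrative):

CorFlags MyAssembly.exe /32BITPREF+
CorFlags MyAssembly.exe /32BITREQ-

The first marks an AnyCPU assembly as 32-bit preferred; the second clears the 32-bit required flag. Note that updating a strong-named assembly invalidates its signature, so the tool requires the /Force switch in that case (and the assembly will then need re-signing).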

Summary

Below is a simple table of results showing the impact of Platform Target on the compiled assembly and how it runs on a 32/64-bit OS. As you can see, the 32-bit preferred flag results in an AnyCPU assembly running as 32-bit on a 64-bit OS. The table also shows the values returned by each of the techniques above for determining the target platform of a given assembly.

| Platform Target in Visual Studio | PowerShell Result | dotPeek Result | CorFlags 32BITREQ | CorFlags 32BITPREF | CorFlags PE | Runs on 32-bit OS as | Runs on 64-bit OS as |
|---|---|---|---|---|---|---|---|
| AnyCPU (pre .Net 4) | MSIL | MSIL | 0 | 0 | PE32 | 32-bit process | 64-bit process |
| AnyCPU (.Net 4+), 32-bit NOT preferred | MSIL | MSIL | 0 | 0 | PE32 | 32-bit process | 64-bit process |
| AnyCPU (.Net 4+), 32-bit preferred (default) | MSIL | x86 | 0 | 1 | PE32 | 32-bit process | 32-bit process (under WOW64) |
| x86 | x86 | x86 | 1 | 0 | PE32 | 32-bit process | 32-bit process (under WOW64) |
| x64 | Amd64 | x64 | 0 | 0 | PE32+ | ERROR | 64-bit process |

In summary, there are a few simple rules to be aware of when using AnyCPU and the 32-bit preferred flag, but essentially AnyCPU will provide the most compatibility in most cases.


Break on Exceptions in Visual Studio 2015

Looking for the option to break on exceptions during debugging in Microsoft Visual Studio 2015? Well, Microsoft dumped the old exceptions dialog and replaced it with the new Exception Settings window. To see it, show the window via the menu: Debug > Windows > Exception Settings.

[Screenshot: the Debug > Windows > Exception Settings menu option]

Use the Exception Settings window to choose the types of exceptions on which you wish to break. Right-click for the context menu option that turns on/off breaking or continuing when the exception is handled (see below). To break on all exceptions you’ll want to ensure this is set to off (not ticked).

[Screenshot: the Exception Settings window and its context menu]

For more information check out these MSDN links:

https://blogs.msdn.microsoft.com/visualstudioalm/2015/02/23/the-new-exception-settings-window-in-visual-studio-2015/

https://blogs.msdn.microsoft.com/visualstudioalm/2015/01/07/understanding-exceptions-while-debugging-with-visual-studio/

Preventing Browser Caching using HTTP Headers

Many developers consider the use of HTTPS on a site to be enough security for a user’s data; however one area often overlooked is the caching of your site’s pages by the user’s browser. By default (for performance) browsers will cache pages visited regardless of whether they are served via HTTP or HTTPS. This behaviour is not ideal for security as it allows an attacker to use the locally stored browser history and browser cache to read possibly sensitive data entered by a user during their web session. The attacker would need access to the user’s physical machine (either locally in the case of a shared device, or remotely via remote access or malware). To avoid this scenario for your site you should consider informing the browser not to cache sensitive pages via the header values in your HTTP response. Unfortunately it’s not quite that easy, as different browsers implement different policies and treat the various cache control values in HTTP headers differently.

Taking control of caching via the use of HTTP headers

To control how the browser (and any intermediate server) caches the pages within our web application we need to change the HTTP headers to explicitly prevent caching. The minimum recommended HTTP headers to de-activate caching are:

Cache-control: no-store
Pragma: no-cache

Below are the settings seen on many secure sites, as a comparison to the above and perhaps as a guide to what we should really be aiming for:

Cache-Control:max-age=0, no-cache, no-store, must-revalidate
Expires:Thu, 01 Jan 1970 00:00:00 GMT
Pragma:no-cache

HTTP Headers & Browser Implementation Differences:

Different web browsers implement caching in differing ways and therefore also implement various subtleties in their support for the cache controlling HTTP headers. This also means that as browsers evolve so too will their implementations related to these header values.

Pragma Header Setting

The ‘Pragma’ setting is often used but it is now outdated (a retained setting from HTTP 1.0) and actually relates to requests, not responses. Because developers have been ‘over-using’ it on responses, many browsers started to honour the setting to control response caching. This is why it is best included, even though it has been superseded by specific HTTP 1.1 directives.

Cache-Control ‘No-Store’ & ‘No-Cache’ Header Settings

A “Cache-Control” setting of private instructs any proxies not to cache the page, but it does still permit the browser to cache. Changing this to no-store instructs the browser not to cache the page and not to store it in a local cache. This is the most secure setting. Again, due to variances in implementation, a setting of no-cache is also sometimes used to mean no-store (despite this setting actually meaning cache but always re-validate, see here). Because of this, the common recommendation is to include both settings, i.e.: Cache-control: no-store, no-cache.

Expires Header Setting

This again is an old HTTP 1.0 setting that is maintained for backward compatibility. Setting this to a date in the past forces the browser to treat the data as stale, so it will not be loaded from cache but re-queried from the originating server. The data is still cached locally on disk though, so this provides only a small security benefit, but it does prevent an attacker directly using the browser back button to read the data without resorting to accessing the cache on the file system. For example: Expires: Thu, 01 Jan 1970 00:00:00 GMT

Max-Age Header Setting

The HTTP 1.1 equivalent of the Expires header. Setting it to 0 forces the browser to re-validate with the originating server before displaying the page from cache. For example: Cache-control: max-age=0

Must-Revalidate Header Setting

This instructs the browser that it must revalidate the page against the originating server before loading from the cache, i.e. Cache-Control: must-revalidate

Implementing the HTTP Header Options

Which pages will be affected?

Technically you only need to turn off caching on those pages where sensitive data is being collected or displayed. This needs to be balanced against the risk of accidentally not implementing the change on new pages in the future, or of the change being accidentally removed on individual pages. A review of your web application might show that the majority of pages display sensitive data, in which case a global setting would be beneficial. A global setting would also ensure that any new pages added to the application in future are automatically covered by this change, reducing the impact of developers forgetting to set the values.

There is a trade-off with performance here which must be considered in your approach. As this change impacts the client-side caching mechanics of the site there will be performance implications: pages will no longer be cached on the client, impacting response times, and load on the servers may also increase. A full performance test is required following any change in this area.

Implementing in ASP.net

There are numerous options for implementing the HTTP headers in a web application, outlined below with their strengths and weaknesses. ASP.net and the .Net framework provide methods to set caching controls on the Request and Cache objects; these in turn result in HTTP headers being set on the page/application’s HTTP responses. This provides a level of abstraction from the HTTP headers, but that abstraction prevents you from setting the headers exactly how you might like them for full browser compatibility. The alternative approach is to set the HTTP headers explicitly. Both options, and how they can be implemented, are explored below:

Using ASP.net Intrinsic Cache Settings
Declaratively Set Output Cache per ASPX Page

Using the ASPX Page object’s attributes you can declaratively set the output cache properties for the page, including the HTTP header values regarding caching. The syntax is shown in the example below:

Example ASPX page:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="CacheTestApp._Default" %> 
<%@ OutputCache Duration="60" VaryByParam="None"%> 
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 
<html xmlns="http://www.w3.org/1999/xhtml" > 
<head runat="server"> 
<title></title> 
</head> 
<body> 
<form id="form1" runat="server"> This is Page 1.</form> 
</body> 
</html>

Parameters can be added to the OutputCache settings via the various supported attributes. Whilst this configuration allows specific targeting of the caching solution, enabling you to define a cache setting for each separate page, it has the drawback that changes need to be made to all pages and all user controls. In addition, developers of any new pages will need to ensure that the page’s cache settings are correctly configured. Lastly, this solution is not configurable should the setting need to be changed per environment or disabled for performance reasons.

Declaratively Set Output Cache Using a Global Output Cache Profile

An alternative declarative solution for configuring a page’s cache settings is to use a Cache Profile. This works by again adding an OutputCache directive to each page (and user control) but this time deferring the configuration settings to a CacheProfile in the web.config file.

Example ASPX page:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="CacheTestApp._Default" %> 
<%@ OutputCache CacheProfile="RHCacheProfile" %> 
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 
<html xmlns="http://www.w3.org/1999/xhtml" > 
<head runat="server"> 
<title></title> 
</head> 
<body> 
<form id="form1" runat="server"> 
This is Page 1. 
</form> 
</body> 
</html>

Web.config file:

<system.web>
  <caching>
    <outputCache enableOutputCache="false"/>
    <outputCacheSettings>
      <outputCacheProfiles>
        <add name="RHCacheProfile"
             location="None"
             noStore="true"/>
      </outputCacheProfiles>
    </outputCacheSettings>
  </caching>
</system.web>

This option provides the same specific targeting per page, and the related drawbacks of having to make changes to every page and user control. However, it does provide the ability to centralise the cache settings in one place (minimising the impact of future changes) and enables caching to be configured during installation, depending on the target environment, via the deployment process.

Programmatically Set HTTP Headers in ASPX Pages

Output caching can also be set in code, in the code-behind page (or indeed anywhere the Response object can be manipulated). The code snippet below shows setting the HTTP headers indirectly via the Response.Cache object:

Response.Cache.SetCacheability(HttpCacheability.NoCache); 
Response.Cache.SetExpires(DateTime.UtcNow.AddHours(-1)); 
Response.Cache.SetNoStore();
Response.Cache.SetMaxAge(new TimeSpan(0,0,30));

This code would need to be added to each page, resulting in duplicate code to maintain, and again it relies on developers remembering to add it to all new pages as they are developed. It results in the headers below being produced:

Cache-Control:no-cache, no-store

Expires:-1

Pragma:no-cache

Programmatically Set HTTP Headers in Global ASAX File

Instead of adding the above code in each page an alternative approach is to add it to the Global ASAX file so as to apply to all requests made through the application.

void Application_BeginRequest(object sender, EventArgs e)
{
	Response.Cache.SetCacheability(HttpCacheability.NoCache);
	Response.Cache.SetExpires(DateTime.Now);
	Response.Cache.SetNoStore();
	Response.Cache.SetMaxAge(new TimeSpan(0,0,30));
}

This would apply to all pages being requested through the application. It results in the below headers being produced:

Cache-Control:no-cache, no-store

Expires:-1

Pragma:no-cache

Explicitly Define HTTP Headers Outside of ASP.net Cache Settings

Explicitly Define HTTP Headers in ASPX Pages

The Response object can have its HTTP headers set explicitly, instead of using the ASP.net Cache object’s abstraction layer. This involves setting the header on every page:

void Page_Load(object sender, EventArgs e)
{
	Response.AddHeader("Cache-Control", "max-age=0,no-cache,no-store,must-revalidate");
	Response.AddHeader("Pragma", "no-cache");
	Response.AddHeader("Expires", "Tue, 01 Jan 1970 00:00:00 GMT");
}

Again, as a page-specific approach, this requires a change to be made on each page. It results in the headers below being produced:

Cache-Control:max-age=0,no-cache,no-store,must-revalidate

Expires:Tue, 01 Jan 1970 00:00:00 GMT

Pragma:no-cache

Explicitly Define HTTP Headers in Global ASAX File

To avoid having to set the header explicitly on each page the above code can be inserted into the Application_BeginRequest event within the application’s Global ASAX file:

void Application_BeginRequest(object sender, EventArgs e)
{
	Response.AddHeader("Cache-Control", "max-age=0,no-cache,no-store,must-revalidate");
	Response.AddHeader("Pragma", "no-cache");
	Response.AddHeader("Expires", "Tue, 01 Jan 1970 00:00:00 GMT");
}

Again this results in the below headers being produced:

Cache-Control:max-age=0,no-cache,no-store,must-revalidate

Expires:Tue, 01 Jan 1970 00:00:00 GMT

Pragma:no-cache

Environment Specific Settings

It’s useful to be able to set the header values via configuration settings, not least to be able to test this change in a performance test environment via before/after tests.

All of the above changes should be made configurable and be able to be triggered/tweaked via the web.config file (and therefore can be modified via deployment settings).
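
As a rough sketch of that approach (the appSetting key name here is hypothetical, not a built-in setting), the Global ASAX handler above could be wrapped in a configuration check:

// requires a reference to System.Configuration
void Application_BeginRequest(object sender, EventArgs e)
{
	// hypothetical switch, e.g. <add key="DisablePageCaching" value="true"/> in appSettings
	if (string.Equals(System.Configuration.ConfigurationManager.AppSettings["DisablePageCaching"],
	                  "true", StringComparison.OrdinalIgnoreCase))
	{
		Response.AddHeader("Cache-Control", "max-age=0,no-cache,no-store,must-revalidate");
		Response.AddHeader("Pragma", "no-cache");
		Response.AddHeader("Expires", "Tue, 01 Jan 1970 00:00:00 GMT");
	}
}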


Host Static HTML or WebForms Page within MVC Site

If you need to host a static HTML page within an ASP.net MVC website or you need to mix ASP.net WebForms with an MVC website then you need to configure your routing configuration in MVC to ignore requests for those pages.

Recently I wanted to host a static HTML welcome page (e.g. hello.htm) on an MVC website. I added the HTML page to my MVC solution (setting it as the Visual Studio project’s start page) and configured my web site’s default page to be the HTML page (hello.htm). It tested OK at first, but then I realised that it was only displaying the hello page first on debug because I’d set it as the Visual Studio project’s start-up page; I hadn’t actually configured the MVC routes correctly, so it wouldn’t work once deployed.

For this to work you need to tell MVC to ignore the route if it’s for the HTML page (or the ASPX page in the case of mixing WebForms and MVC). Find your routing configuration section (for MVC4 it’s in RouteConfig.cs under the App_Start folder; for MVC 1, 2 and 3 it’s in Global.asax). Once found, use the IgnoreRoute() method to tell routing to ignore the specific paths. I used this:

routes.IgnoreRoute("hello.htm"); //ignore the specific HTML start page
routes.IgnoreRoute(""); //to ignore any default root requests

Now MVC ignores a request to load the hello HTML page and leaves IIS to handle returning the resource and hence the page displays correctly.
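
If you are also mixing WebForms pages into the MVC site, a similar sketch using the standard catch-all route pattern tells routing to leave all .aspx requests alone:

routes.IgnoreRoute("{resource}.aspx/{*pathInfo}"); // let IIS/WebForms handle any .aspx request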

SQL Server Compact Minimum Date Value

Recently I got this error connecting to a SQL Server Compact database from .Net:

"An overflow occurred while converting to datetime."

So I dug into my data insertion code (I was using the excellent Massive mini ORM by the way – and yes, I know it’s not really an ORM, but that’s another post) and found I was setting the offending DateTime field to DateTime.MinValue, which it turns out SQL Compact doesn’t like. After a bit of investigation I found that whilst .Net supports a minimum date (DateTime.MinValue) of "01/01/0001 00:00:00" and a maximum (DateTime.MaxValue) of "31/12/9999 23:59:59", SQL Compact’s minimum date is "01/01/1753 00:00:00" (the same as SQL Server 2005 and before). This is the same for all current versions of SQL Server Compact. For SQL Server 2008 onwards (2008 R2 and 2012) it’s recommended to use the datetime2 data type, which supports the same range of dates as .Net.

To summarise:

| Product | Version | Minimum Date | Maximum Date |
|---|---|---|---|
| .Net | All | 01/01/0001 00:00:00 | 31/12/9999 23:59:59 |
| SQL Server Compact | All | 01/01/1753 00:00:00 | 31/12/9999 23:59:59 |
| SQL Server (using datetime) | All | 01/01/1753 00:00:00 | 31/12/9999 23:59:59 |
| SQL Server (using datetime2) | 2008 onwards | 01/01/0001 00:00:00 | 31/12/9999 23:59:59 |

For a handy guide to the SQL Compact datatypes see here : http://msdn.microsoft.com….

For SQL Server datatypes see here : http://msdn.microsoft.com…

The System.Data.SqlTypes namespace contains the useful SqlDateTime struct which has MinValue and MaxValue properties so you can safely use SqlDateTime.MinValue instead of DateTime.MinValue in your code. For more information on SqlDateTime see here.
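
For example, a minimal sketch of clamping a value before it is written to the database:

// SqlDateTime lives in System.Data.SqlTypes
DateTime candidate = DateTime.MinValue;
DateTime safeValue = candidate < SqlDateTime.MinValue.Value
    ? SqlDateTime.MinValue.Value   // 01/01/1753 00:00:00
    : candidate;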

The HTML Agility Pack

For a current project I needed to perform a simple screen scrape action. The resulting solution was functional but a bit rough and ready. Luckily I stumbled upon this open-source HTML library project: The HTML Agility Pack, hosted on CodePlex at http://htmlagilitypack.codeplex.com.

It is an excellent little library that makes dealing with HTML a breeze, whether you are screen scraping or just manipulating HTML documents locally. It is very forgiving with regard to malformed HTML documents and supports loading pages directly from the web. You can just parse the HTML or modify it, and it even supports LINQ. A key benefit of this library is that it doesn’t force you to learn a new object model but instead mirrors the System.XML object model – a huge help for getting up and running quickly, as well as making coding against it feel natural.

Download HTML directly via a URL:

// load a page directly from the web (url being a string containing the address)
HtmlWeb webGet = new HtmlWeb();
HtmlDocument htmlDoc = webGet.Load(url);

Or parse an HTML string:

HtmlDocument htmlDoc = new HtmlDocument();
htmlDoc.LoadHtml(htmlString);

Then you can use XPATH to query the HTML document as you would an XML document:           

// select a <li> where it has an element of <b> with a value of "Name:"
var nameItem = htmlDoc.DocumentNode.SelectSingleNode("//li[b='Name:']");
if (nameItem != null && nameItem.ChildNodes.Count > 1)
{
    name = nameItem.ChildNodes[1].InnerText;
}
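
Since the library supports LINQ over its node collections, the same sort of lookup can also be written without XPATH; a rough sketch (assuming a using directive for System.Linq):

// grab the href value of every anchor tag in the document
var links = htmlDoc.DocumentNode.Descendants("a")
    .Where(a => a.Attributes["href"] != null)
    .Select(a => a.Attributes["href"].Value);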

You can download it via NuGet here : http://nuget.org/packages/HtmlAgilityPack.

For more examples of its use check out these posts: Parsing HTML Documents with the Html Agility Pack and Crawling a web site with HtmlAgilityPack.

Enjoy.

Getting A User's Username in ASP.NET

When building an ASP.net application it’s important to understand the authentication solution that you are planning to implement, and then to ensure that all your developers are aware of it. On a few projects I have noted that some developers lack this knowledge, and it can end up causing issues later in the project once the code is first deployed to a test environment. These problems are usually a result of the differences between running a web project locally and remotely. One problem I found on a recent project was where developers were trying to retrieve the logged-on user’s Windows username (within an intranet scenario) for display on screen. Unfortunately the code to retrieve the username from the client request had been duplicated, a different solution used in each place, and worse still, neither worked. Sure, they worked on the local machine, but not when deployed. It was clear immediately that the developers had not quite grasped the way ASP.net works in this regard. There are several ways of retrieving usernames and admittedly it’s not always clear which is best in each scenario, so here is a very quick guide. This post is not a deep dive into this huge subject (I might do a follow-up post on that) but merely a quick guide to indicate what user details you get from the framework objects below.

The members we’re looking at are:
1. HttpRequest.LogonUserIdentity
2. System.Web.HttpContext.Current.Request.IsAuthenticated
3. Security.Principal.WindowsIdentity.GetCurrent().Name
4. System.Environment.UserName
5. HttpContext.Current.User.Identity.Name (same as just User.Identity.Name)
6. HttpContext.User property
7. WindowsIdentity

To test, we create a basic ASPX page and host it in IIS so we can see what values these properties return under a set of authentication scenarios. The page just calls the various username properties available and writes the values out in the response via Response.Write().
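
A sketch of the sort of code-behind used for the test page (the output formatting is illustrative):

void Page_Load(object sender, EventArgs e)
{
    Response.Write("LogonUserIdentity.Name: " + HttpContext.Current.Request.LogonUserIdentity.Name + "<br/>");
    Response.Write("IsAuthenticated: " + HttpContext.Current.Request.IsAuthenticated + "<br/>");
    Response.Write("User.Identity.Name: " + HttpContext.Current.User.Identity.Name + "<br/>");
    Response.Write("Environment.UserName: " + System.Environment.UserName + "<br/>");
    Response.Write("WindowsIdentity.GetCurrent().Name: " + System.Security.Principal.WindowsIdentity.GetCurrent().Name + "<br/>");
}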

Scenario 1: Anonymous Authentication in IIS with impersonation off.

HttpContext.Current.Request.LogonUserIdentity.Name COMPUTER1\IUSR_COMPUTER1
HttpContext.Current.Request.IsAuthenticated False
HttpContext.Current.User.Identity.Name
System.Environment.UserName ASPNET
Security.Principal.WindowsIdentity.GetCurrent().Name COMPUTER1\ASPNET

As you can see, where we’re running with Anonymous Authentication HttpContext.Current.Request.LogonUserIdentity is the anonymous guest user defined in IIS (IUSR_COMPUTER1 in this example), and as the user is not authenticated the WindowsIdentity is set to that of the running process (ASPNET) and HttpContext.Current.User.Identity is not set.

Scenario 2: Windows Authentication in IIS, impersonation off.

HttpContext.Current.Request.LogonUserIdentity.Name MYDOMAIN\USER1
HttpContext.Current.Request.IsAuthenticated True
HttpContext.Current.User.Identity.Name MYDOMAIN\USER1
System.Environment.UserName ASPNET
Security.Principal.WindowsIdentity.GetCurrent().Name COMPUTER1\ASPNET

Using Windows Authentication, however, enables the remote user to be authenticated automatically via their domain account (i.e. IsAuthenticated is true), and therefore the HttpContext.Current.Request user is set to that of the remote client’s user account, including the Identity object.

Scenario 3: Anonymous Authentication in IIS, impersonation on

HttpContext.Current.Request.LogonUserIdentity.Name COMPUTER1\IUSR_COMPUTER1
HttpContext.Current.Request.IsAuthenticated False
HttpContext.Current.User.Identity.Name
System.Environment.UserName IUSR_COMPUTER1
Security.Principal.WindowsIdentity.GetCurrent().Name COMPUTER1\IUSR_COMPUTER1

This time we’re using Anonymous Authentication but now with ASP.net impersonation turned on in web.config. The only difference from the first scenario is that the anonymous guest user IUSR_COMPUTER1 is now being impersonated, and therefore System.Environment and Security.Principal are running under that account’s privileges.

Scenario 4: Windows Authentication in IIS, impersonation on

HttpContext.Current.Request.LogonUserIdentity.Name MYDOMAIN\USER1
HttpContext.Current.Request.IsAuthenticated True
HttpContext.Current.User.Identity.Name MYDOMAIN\USER1
System.Environment.UserName USER1
Security.Principal.WindowsIdentity.GetCurrent().Name MYDOMAIN\USER1

Now, with Windows Authentication and impersonation on, everything is running as the calling user’s domain account. This means that the ASP.net worker process will share the privileges of that user.

As you can see, each scenario provides a slightly different spin on the results, which is what you would expect. It also shows how important it is to get the configuration right in your design and implement it early in the build to avoid confusion. For more information see ASP.NET Web Application Security on MSDN.

A Useful Entity Translator Pattern / Object Mapper Template

In this post I’m going to cover the Entity Translator pattern and provide a useful base class for implementing this pattern in your application.

The ‘Entity Translator Pattern’ (defined here and here) covers situations where you need to move object state from one object type to another and there are very strict reasons why these object types are not linked or share inheritance hierarchies.

There are many occasions when you need to convert a set of data from one data type to another. Where you are using DTOs (Data Transfer Objects), for example, which serve the purpose of being light and easy to pass between the layers of your application, they will often end up needing to be converted in some way into a Business Model or Domain Object. The use of Data Contract objects in WCF (Windows Communication Foundation) is a common example of where you need this translation from DTOs to Business Model objects, and the Entity Translator pattern is a good fit. After carefully layering your application design into loosely coupled layers you often don’t want WCF-created types, or entity objects generated from an ORM (Object Relational Mapper) tool, leaking out of their respective layers. You may also have a requirement to use a ViewModel object pattern, requiring state transfer from the ViewModel to the master model object. All of these examples share the problem of how you get the state from object A into object B where the two objects are of different types.

Now of course the most simplistic method would be to hand-crank the conversion at the point you need it. For example:

CustomerDTO dto;
Customer customer;

dto.Name = customer.Name;
dto.Id = customer.Id;

Whilst this approach is acceptable, it potentially means writing a lot of code and can lack structure in its use. Other developers working in your team may end up repeating your code elsewhere. Additionally, there may be a lack of separation of concerns if this translation code sits with the bulk of your logic, making unit testing more complex.

An alternative (keyboard friendly) approach to the problem is to use a tool that can automate the translation of values from one object to another, such as AutoMapper. For examples of how to use AutoMapper check out this post and this post, oh and this one. AutoMapper can automatically (through reflection) scan the source object and target object and copy the property values from one to the other (so dto.Name is automatically copied to customer.Name). This works painlessly where the property names match, but if they don’t then some mapping has to be configured. The advantage of this approach is that it can save a lot of code having to be written and tested. The downside is performance and the reduced readability of your mapping code. If you have a lot of mapping code to write and you have control over both object structures under translation then this can be a very productive approach.
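
As a rough sketch of what that looks like (this uses the older static AutoMapper API that was current at the time; the type names follow the examples in this post):

// one-off configuration, typically at application start-up
Mapper.CreateMap<Customer, CustomerDTO>();

// then wherever a translation is needed
CustomerDTO dto = Mapper.Map<CustomerDTO>(customer);

Where property names differ, the CreateMap call can be extended with ForMember configuration.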

Of course there is always a middle ground, where you want to add some structure to your translation code, need the performance of specific ‘object to object’ coding, or have complex translation logic. For these situations consider the following implementation, courtesy of Microsoft Patterns & Practices.

Microsoft’s Smart Client Software Factory includes an Entity Translator service in its Infrastructure.Library project, which enables you to write custom translators and then register them with the Entity Translator service. It provides two base classes: ‘EntityMapperTranslator’ and ‘BaseTranslator’. The BaseTranslator class implements the ‘IEntityTranslator’ interface.

BaseTranslator:

public abstract class BaseTranslator : IEntityTranslator
{
  public abstract bool CanTranslate(Type targetType, Type sourceType);
  public bool CanTranslate<TTarget, TSource>()
  {
    return CanTranslate(typeof(TTarget), typeof(TSource));
  } 

  public TTarget Translate<TTarget>(IEntityTranslatorService service, object source)
  {
    return (TTarget)Translate(service, typeof(TTarget), source);
  } 

  public abstract object Translate(IEntityTranslatorService service, Type targetType, object source);
} 

EntityMapperTranslator:

public abstract class EntityMapperTranslator<TBusinessEntity, TServiceEntity> : BaseTranslator
{ 

  public override bool CanTranslate(Type targetType, Type sourceType)
  {
    return (targetType == typeof(TBusinessEntity) && sourceType == typeof(TServiceEntity)) ||
                (targetType == typeof(TServiceEntity) && sourceType == typeof(TBusinessEntity));
  } 

  public override object Translate(IEntityTranslatorService service, Type targetType, object source)
  {
    if (targetType == typeof(TBusinessEntity))
      return ServiceToBusiness(service, (TServiceEntity)source);
    if (targetType == typeof(TServiceEntity))
      return BusinessToService(service, (TBusinessEntity)source);
    throw new EntityTranslatorException();
  } 

  protected abstract TServiceEntity BusinessToService(IEntityTranslatorService service, TBusinessEntity value);

  protected abstract TBusinessEntity ServiceToBusiness(IEntityTranslatorService service, TServiceEntity value); 

} 

If we wanted to use this as a general template for entity translation in a non-SCSF solution then we can remove the detail around IEntityTranslator services, assuming that we’re not building a translator service but purely writing individual translators. Our classes then look more like this:

BaseTranslator:

public abstract class BaseTranslator 
{
  public abstract bool CanTranslate(Type targetType, Type sourceType); 

  public bool CanTranslate<TTarget, TSource>()
  {
    return CanTranslate(typeof(TTarget), typeof(TSource));
  } 

  public abstract object Translate(Type targetType, object source); 

} 


EntityMapperTranslator:

public abstract class EntityMapperTranslator<TBusinessEntity, TServiceEntity> : BaseTranslator
{
  public override bool CanTranslate(Type targetType, Type sourceType)
  {
    return (targetType == typeof(TBusinessEntity) && sourceType == typeof(TServiceEntity)) ||
                (targetType == typeof(TServiceEntity) && sourceType == typeof(TBusinessEntity));
  } 

  public TTarget Translate<TTarget>(object source)
  {
   return (TTarget)Translate(typeof(TTarget), source);
  } 

  public override object Translate(Type targetType, object source)
  {
    if (targetType == typeof(TBusinessEntity))
     {
       return ServiceToBusiness((TServiceEntity)source);
     } 

     if (targetType == typeof(TServiceEntity))
     {
       return BusinessToService((TBusinessEntity)source);
     } 

    throw new System.ArgumentException("Invalid type passed to Translator", "targetType");
   } 

   protected abstract TServiceEntity BusinessToService(TBusinessEntity value); 

   protected abstract TBusinessEntity ServiceToBusiness(TServiceEntity value);
} 


We could refactor this further by removing the BaseTranslator class completely and just using EntityMapperTranslator as the base class for our translators. The BaseTranslator class as it stands above does provide some benefit if we can foresee circumstances where we want to follow a standard translation pattern for more than just entity translation; we could create a DataMapperTranslator, for example, that would derive from BaseTranslator and differ from the EntityMapperTranslator implementation in its translation specifics. Removing the BaseTranslator class, however, results in an EntityMapperTranslator class like this:

public abstract class EntityMapperTranslator<TBusinessEntity, TServiceEntity> 
{
  public bool CanTranslate(Type targetType, Type sourceType)
  {
    return (targetType == typeof(TBusinessEntity) && sourceType == typeof(TServiceEntity)) ||
           (targetType == typeof(TServiceEntity) && sourceType == typeof(TBusinessEntity));
  } 

  public TTarget Translate<TTarget>(object source)
  {
    return (TTarget)Translate(typeof(TTarget), source);
  } 

  public object Translate(Type targetType, object source)
  {
    if (targetType == typeof(TBusinessEntity))
    {
      return ServiceToBusiness((TServiceEntity)source);
    }
    if (targetType == typeof(TServiceEntity))
    {
      return BusinessToService((TBusinessEntity)source);
    }
    throw new System.ArgumentException("Invalid type passed to Translator", "targetType");
  } 

    protected abstract TServiceEntity BusinessToService(TBusinessEntity value); 

    protected abstract TBusinessEntity ServiceToBusiness(TServiceEntity value);
} 


This is now a neat and simple template from which we can code our custom translators. Now for translating from CustomerDTO to Customer we can create a translator that derives from EntityMapperTranslator and we only need to code the two translate methods (one for translating CustomerDTO to Customer and vice versa).

public class CustomerTranslator : EntityMapperTranslator<BusinessModel.Customer, DataContract.CustomerDTO>
{
  protected override DataContract.CustomerDTO BusinessToService(BusinessModel.Customer value)
  {
    DataContract.CustomerDTO customerDTO = new DataContract.CustomerDTO();
    customerDTO.Id = value.Id;
    customerDTO.Name = value.Name ;
    customerDTO.PostCode = value.PostCode;
    return customerDTO;
  } 

  protected override BusinessModel.Customer ServiceToBusiness(DataContract.CustomerDTO value)
  {
    BusinessModel.Customer customer = new BusinessModel.Customer();
    customer.Id = value.Id;
    customer.Name = value.Name ;
    customer.PostCode = value.PostCode;
    return customer;   
  }
}
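
A quick usage sketch (assuming a populated BusinessModel.Customer instance named customer):

CustomerTranslator translator = new CustomerTranslator();

// business object to DTO
DataContract.CustomerDTO dto = translator.Translate<DataContract.CustomerDTO>(customer);

// and back again using the same translator
BusinessModel.Customer roundTripped = translator.Translate<BusinessModel.Customer>(dto);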


This is just a simple example of course, as the translation logic can get quite complex. Also don’t forget that the translator classes could themselves make use of the automated tools described earlier, such as AutoMapper. Calling AutoMapper inside a translator enables you to combine manual and automated translation code, saving on tedious typing (and those dreaded copy-and-paste errors) while still giving a simple structure to your manual translation logic where that approach is preferable. Either way, by creating custom translators we have encapsulated all of our translation code into separate, reusable and testable classes that all share a developer-friendly common interface.

In summary then, we’ve covered using a simple Entity Translation template, based on Microsoft’s Smart Client Software Factory offering and shown how it can be used in your application for adding structure to your object mapping/translation code.


Additional Useful Links:

http://code.google.com/p/translation-tester/

http://icoder.wordpress.com/category/uncategorized/

http://c2.com/cgi/wiki?TranslatorPattern

http://msdn.microsoft.com/en-us/library/cc304747.aspx

http://code.google.com/p/translation-tester/source/browse/#svn/trunk/TranslationTester/TranslationTester%3Fstate%3Dclosed

Raising Multiple Exceptions with AggregateException

There are occasions where you are aware of multiple exceptions within your code that you want to raise together in one go. Perhaps your code makes a service call to a middleware orchestration component that potentially returns multiple exceptions detailed in its response, or another scenario might be a batch processing task dealing with many items in one process that requires you to collate all exceptions until the end and then throw them together.

Let’s look at the batch scenario in more detail. In this situation, if you raised the first exception that you found it would exit the method without processing the remaining items. Alternatively you could store the exception information in a variable of some sort and, once all items are processed, use the information to construct an exception and throw it. Whilst this approach works, there are some drawbacks. There is extra effort required to create a viable storage container to hold the exception information, and this may mean modifying existing code to not throw an exception but instead log the details in this new ‘exception detail helper class’. This solution also lacks the additional benefits you get from creating an exception at that point in time, for example the numerous intrinsic properties of Exception objects that provide valuable additional context to support the message within the exception. And even when all the relevant information has been collated into a single exception class, you are still left with one exception holding all that information, when you may need to handle the exceptions individually and pass them off to existing error-handling frameworks which rely on a type deriving from Exception.

Luckily, included in .Net Framework 4.0 is the simple but very useful AggregateException class, which lives in the System namespace (within mscorlib.dll). It was created to be used with the Task Parallel Library, and its use within that library is described on MSDN here. Don’t think that is its only use though, as it can be put to good use within your own code in situations like those described above where you need to throw multiple exceptions, so let’s see what it offers.

The AggregateException class is an exception type, inheriting from System.Exception, that acts as a wrapper for a collection of child exceptions. Within your code you can create instances of any exception-based type and add them to the AggregateException’s collection. The idea is a simple one, but the AggregateException’s beauty comes in the implementation of this simplicity. As it is a regular exception class it can be handled in the usual way by existing code, but it can also be treated as a special exception collection by the specific code that actually cares about all the exceptions nested within it.

The class accepts the child exceptions via one of its seven constructors and then exposes them through its InnerExceptions property. Unfortunately this is a read-only collection, so it is not possible to add inner exceptions to the AggregateException after it has been instantiated (which would have been nice); you will therefore need to store your exceptions in a collection until you’re ready to create the AggregateException and throw it:

// create a collection container to hold exceptions
List<Exception> exceptions = new List<Exception>();

// do some stuff here ........

// we have an exception with an innerexception, so add it to the list
exceptions.Add(new TimeoutException("It timed out", new ArgumentException("ID missing")));

// do more stuff .....

// Another exception, add to list
exceptions.Add(new NotImplementedException("Somethings not implemented"));

// all done, now create the AggregateException and throw it
AggregateException aggEx = new AggregateException(exceptions);
throw aggEx;

The method you use to store the exceptions is up to you, as long as you have them all ready at the time you create the AggregateException. There are seven constructors allowing you to pass combinations of: nothing, a string message, and collections or arrays of inner exceptions.
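
In the batch scenario above you would typically only throw once the loop completes, and only if anything actually failed; a small sketch:

// only raise the AggregateException if any failures were collected
if (exceptions.Count > 0)
{
    throw new AggregateException("One or more batch items failed", exceptions);
}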

Once created you interact with the class as you would any other exception type:

try
{
   // do task
}
catch (AggregateException ex)
{
   // handle it 
}

This is key as it means that you can make use of existing code and patterns for handling exceptions within your (or a third parties) codebase.

In addition to the usual Exception members the class exposes a few custom ones. The typical InnerException property is there for compatibility and returns the first exception added to the AggregateException via the constructor, so in the example above it would be the TimeoutException instance. All of the child exceptions are exposed via the InnerExceptions read-only collection property (as shown below).

[Screenshot: debugger view of the InnerExceptions collection]
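
When the handling code cares about the individual failures, a minimal sketch of unwrapping them might look like this:

try
{
   // do task
}
catch (AggregateException ex)
{
   foreach (Exception inner in ex.InnerExceptions)
   {
      // log or handle each child exception individually
      Console.WriteLine(inner.GetType().Name + ": " + inner.Message);
   }
}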

The Flatten() method is another custom member that might prove useful if you find the need to nest exceptions as inner exceptions within several AggregateExceptions. The method iterates the InnerExceptions collection and, if it finds AggregateExceptions nested as inner exceptions, it promotes their child exceptions to the parent level. This is best seen in an example:

AggregateException aggExInner = 
          new AggregateException("inner AggEx", new TimeoutException());
AggregateException aggExOuter1 = 
          new AggregateException("outer 1 AggEx", aggExInner);
AggregateException aggExOuter2 = 
          new AggregateException("outer 2 AggEx", new ArgumentException());
AggregateException aggExMaster =
          new AggregateException(aggExOuter1, aggExOuter2);

If we create this structure of AggregateExceptions above, with inner exceptions of TimeoutException and ArgumentException, then the InnerExceptions property of the parent AggregateException (i.e. aggExMaster) shows, as expected, two objects, both of type AggregateException and both containing child exceptions of their own:

[Screenshot: debugger view of aggExMaster.InnerExceptions showing two nested AggregateExceptions]

But if we call Flatten()…

AggregateException aggExFlatterX = aggExMaster.Flatten();

…we get a new AggregateException instance returned that still contains two objects, but this time the nested AggregateException objects have gone and we’re just left with the two child exceptions of TimeoutException and ArgumentException:

[Screenshot: debugger view of the flattened InnerExceptions showing the TimeoutException and ArgumentException]

This is a useful feature for discarding the AggregateException containers (which are effectively just packaging) and exposing the real meat, i.e. the actual exceptions that have been thrown and need to be addressed.

If you’re wondering how ToString() is implemented, the aggExMaster object in the examples above (without flattening) produces this:

System.AggregateException: One or more errors occurred. ---> System.AggregateException
: outer 1 AggEx ---> System.AggregateException: inner AggEx ---> 
System.TimeoutException: The operation has timed out.   --- End of inner exception 
stack trace ---  --- End of inner exception stack trace ---  --- End of inner exception 
stack trace ------> (Inner Exception #0) System.AggregateException: outer 1 AggEx ---> 
System.AggregateException: inner AggEx ---> System.TimeoutException: The operation
 has timed out.   --- End of inner exception stack trace ---   --- End of inner 
exception stack trace ------> (Inner Exception #0) System.AggregateException: inner
AggEx ---> System.TimeoutException: The operation has timed out.  --- End of inner 
exception stack trace ------> (Inner Exception #0) System.TimeoutException: The 
operation has timed out.<---<---<------> (Inner Exception #1) System.AggregateException
: outer 2 AggEx --- System.ArgumentException: Value does not fall within the expected
 range. --- End of inner exception stack trace ------> (Inner Exception #0) 
System.ArgumentException: Value does not fall within the expected range.

As you can see the data has been formatted in a neat and convenient way for readability, with separators between the inner exceptions.

In summary this is a very useful class to be aware of and have in your arsenal whether you are dealing with the Parallel Tasks Library or you just need to manage multiple exceptions. I like simple and neat solutions and to me this is a good example of that philosophy.

Integrating WCF Services into Web Client Software Factory

For those of you unfamiliar with the Web Client Software Factory (WCSF), it is a very capable web application framework for building web-forms-based thin client applications. It was created as part of the Patterns & Practices offering from Microsoft (alongside the more well known Enterprise Library). It shares many concepts with its sister offering, the Smart Client Software Factory (SCSF), but its implementation is different; I find it easier to use and, sometimes more importantly for organisations, an easier transition for traditional WebForms developers than ASP.net MVC. It utilises the Model-View-Presenter (MVP) pattern nicely, and I find it a useful framework within which to build web applications where an ASP.net MVC approach has been discounted. For more information on the WCSF check this link.

WCSF uses the ObjectBuilder framework to provide Dependency Injection services to its components. Views, Presenters and Modules can have access to a global (or module-level) services collection, which traditionally contains the services (internal services, not external web services) that provide business logic or infrastructure support functionality. It is therefore important that any code within the web application can access this services collection (via Dependency Injection) to consume this shared functionality. The problem I’ve run into recently is how to allow WCF web services, exposed as part of the web application, to hook into the WCSF framework to consume these internal services. These web services need to be able to declare service dependencies on other objects and have those dependencies satisfied by the WCSF framework, just as it does for Views and Presenters etc.

I found that the Order Management WCSF Reference Implementation shows how to hook traditional ASMX Web Services into your WCSF Solution. Here is the implementation of the site’s ProductServiceProxy web service:

using System.Web; 
using System.Web.Services; 
using System.Web.Services.Protocols; 
using System.ComponentModel; 
using OrdersRepository.Interfaces.Services; 
using Microsoft.Practices.CompositeWeb; 
using OrdersRepository.BusinessEntities;

namespace WebApplication.Orders 
{ 
    [WebService(Namespace = "http://tempuri.org/")] 
    public class ProductServiceProxy : WebService 
    { 
        IProductService _productService;

        [ServiceDependency] 
        public IProductService ProductService 
        { 
            set { _productService = value; } 
            get { return _productService; } 
        }

        public ProductServiceProxy() 
        { 
            WebClientApplication.BuildItemWithCurrentContext(this);
        }

        [WebMethod] 
        public Product GetProduct(string sku) 
        { 
            return _productService.GetProductBySku(sku); 
        } 
    } 
}

The key line here is the call to WebClientApplication.BuildItemWithCurrentContext(this) within the constructor. This is how the ASMX web service gets built up by ObjectBuilder and therefore has its dependency injection requirements met. The rest of the page is typical ASMX and WCSF; for example, the ServiceDependency on the ProductService property is declared as normal.

If we look into the WCSF Source Code for BuildItemWithCurrentContext(this) we see how it works:

/// <summary> 
/// Utility method to build up an object without adding it to the container. 
/// It uses the application's PageBuilder and the CompositionContainer" 
/// for the module handling the current request 
/// </summary> 
/// <param name="obj">The object to build.</param> 
public static void BuildItemWithCurrentContext(object obj) 
{ 
  IHttpContext context = CurrentContext; 
  WebClientApplication app = (WebClientApplication) context.ApplicationInstance; 
  IBuilder<WCSFBuilderStage> builder = app.PageBuilder; 
  CompositionContainer container = app.GetModuleContainer(context); 
  CompositionContainer.BuildItem(builder, container.Locator, obj); 
}

protected static IHttpContext CurrentContext 
{ 
  get { return _currentContext ?? new HttpContext(System.Web.HttpContext.Current); } 
  set { _currentContext = value; } 
}

The first line calls off to the CurrentContext property where a new HttpContext is created based on the current HTTP context of the ASMX services session. The following lines get a reference to the WebClientApplication instance (that is WCSF’s version of a HTTPApplication for your web app) and then accesses the Composition Container. BuildItem then does the heavy work of using ObjectBuilder to build up the service’s dependencies.

So this works nicely for ASMX services, but what about WCF services? Well, it is possible to follow the same approach and use the BuildItemWithCurrentContext method within the constructor of the WCF service, but we have to follow some additional steps too. If we just add the BuildItemWithCurrentContext(this) call to the constructor of our WCF service it will fail, as the HTTPContext will always be null when accessed from within a WCF service.

ASP.net and IIS-hosted WCF services play nicely together within a single application domain, sharing state and common infrastructure services, but HTTP runtime features do not apply to WCF. Features such as the current HTTPContext, ASP.net impersonation, HTTPModule extensibility, and config-based URL and file authorisation are not available under WCF. There are alternatives provided by WCF for these features, but they don’t help with hooking into the ASP.net-specific WCSF. This is where WCF’s ASP.NET compatibility mode saves the day. By configuring your WCF service to use ASP.NET compatibility you affect the side-by-side configuration so that WCF services engage fully in the HTTP request lifecycle and can thus access resources as ASPX pages and ASMX web services do. This provides the WCF service with a reference to the current HTTPContext and allows the WCSF to function correctly. It must be said that there are some drawbacks to using ASP.NET compatibility mode, for example the protocol must be HTTP and the WCF service can’t be hosted outside of IIS, but these will usually be acceptable when you’re wanting to add the WCF service to a WCSF application.

To turn on ASP.NET compatibility mode update your web.config:

<system.serviceModel> 
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />    
</system.serviceModel>

Your services must then opt in to take advantage of the compatibility mode, which is done via the AspNetCompatibilityRequirementsAttribute. This can be set to ‘Required’, ‘Allowed’ or ‘NotAllowed’; for our WCSF WCF service it is required, so we set it as such in the example below:

namespace WebApplication 
{ 
    [ServiceBehavior] 
    [AspNetCompatibilityRequirements(RequirementsMode =  
                              AspNetCompatibilityRequirementsMode.Required)]
    public class Service1 : IService1 
    { 
        public void DoWork() 
        { 
          LoggingService.WriteInformation("It worked."); 
        } 
        [ServiceDependency] 
        public ILogging LoggingService{ get; set; }

        public Service1() 
        { 
            WebClientApplication.BuildItemWithCurrentContext(this); 
        } 
    } 
}

And that’s it: with ASP.net compatibility mode turned on and our service stating that it requires that mode (via its AspNetCompatibilityRequirements attribute), the WCSF BuildItemWithCurrentContext(this) method can run successfully with the current HTTPContext.

For more information on hosting WCF and ASP.net side by side and the ASP.net compatibility mode check out ‘WCF Services and ASP.NET’. For more information on the Web Client Software Factory check out ‘Web Client Software Factory on MSDN’.