Preventing Browser Caching using HTTP Headers

Many developers consider the use of HTTPS on a site sufficient to secure a user’s data; however, one area often overlooked is the caching of your site’s pages by the user’s browser. By default (for performance) browsers will cache pages visited regardless of whether they are served via HTTP or HTTPS. This behaviour is not ideal for security as it allows an attacker to use the locally stored browser history and browser cache to read possibly sensitive data entered by a user during their web session. The attacker would need access to the user’s physical machine (either locally in the case of a shared device, or remotely via remote access tools or malware). To avoid this scenario you should consider informing the browser not to cache sensitive pages via the header values in your HTTP response. Unfortunately it’s not quite that simple, as different browsers implement different policies and treat the various cache control values in HTTP headers differently.

Taking control of caching via the use of HTTP headers

To control how the browser (and any intermediate server) caches the pages within our web application we need to change the HTTP headers to explicitly prevent caching. The minimum recommended HTTP headers to deactivate caching are:

Cache-Control: no-store
Pragma: no-cache

Below are the settings seen on many secure sites, as a comparison to the above and perhaps as a guide to what we should really be aiming for:

Cache-Control: max-age=0, no-cache, no-store, must-revalidate
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Pragma: no-cache

HTTP Headers & Browser Implementation Differences

Different web browsers implement caching in different ways and consequently support the cache-controlling HTTP headers with varying subtleties. This also means that as browsers evolve, so too will their implementations related to these header values.

Pragma Header Setting

The ‘Pragma’ header is frequently used but it is now outdated (a setting retained from HTTP 1.0) and actually relates to requests, not responses. Because developers ‘over-used’ it on responses, many browsers started honouring this setting to control response caching, which is why it is best included even though it has been superseded by specific HTTP 1.1 directives.

Cache-Control ‘No-Store’ & ‘No-Cache’ Header Settings

A “Cache-Control” setting of private instructs any proxies not to cache the page, but it does still permit the browser to cache it. Changing this to no-store instructs the browser not to cache the page and not to store it in a local cache. This is the most secure setting. Again, due to variances in implementation, a setting of no-cache is also sometimes used to mean no-store (despite no-cache actually meaning ‘cache, but always re-validate with the server’). Due to this the common recommendation is to include both settings, i.e: Cache-Control: no-store, no-cache.

Expires Header Setting

This is another old HTTP 1.0 setting that is maintained for backward compatibility. Setting this to a date in the past forces the browser to treat the data as stale, so it will not be loaded from cache but re-queried from the originating server. The data is still cached locally on disk though, so this provides only a small security benefit, but it does prevent an attacker directly using the browser back button to read the data without resorting to accessing the cache on the file system. For example: Expires: Thu, 01 Jan 1970 00:00:00 GMT

Max-Age Header Setting

This is the HTTP 1.1 equivalent of the Expires header. Setting it to 0 forces the browser to re-validate with the originating server before displaying the page from cache. For example: Cache-Control: max-age=0

Must-Revalidate Header Setting

This instructs the browser that it must revalidate the page against the originating server before loading it from the cache, i.e. Cache-Control: must-revalidate

Implementing the HTTP Header Options

Which pages will be affected?

Technically you only need to turn off caching on those pages where sensitive data is being collected or displayed. This needs to be balanced against the risk of accidentally not implementing the change on new pages in the future, or of the change accidentally being removed from individual pages. A review of your web application might show that the majority of pages display sensitive data, in which case a global setting would be beneficial. A global setting would also ensure that any pages added to the application in the future are automatically covered, reducing the impact of developers forgetting to set the values.

There is a trade-off with performance here which must be considered in your approach. As this change affects the client-side caching mechanics of the site there will be performance implications. Pages will no longer be cached on the client, impacting response times, and server load may also increase. A full performance test is required following any change in this area.

Implementing in ASP.net

There are numerous options for implementing the HTTP headers in a web application, outlined below with their strengths and weaknesses. ASP.net and the .Net framework provide methods to set caching controls via the Response object’s Cache property. These in turn result in HTTP headers being set on the page/application’s HTTP responses. This provides a level of abstraction from the HTTP headers, but that abstraction prevents you setting the headers exactly how you might like them for full browser compatibility. The alternative approach is to set the HTTP headers explicitly. Both options and how they can be implemented are explored below:

Using ASP.net Intrinsic Cache Settings
Declaratively Set Output Cache per ASPX Page

Using the ASPX Page object’s attributes you can declaratively set the output cache properties for the page, including the HTTP header values regarding caching. The syntax is shown in the example below:

Example ASPX page:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="CacheTestApp._Default" %> 
<%@ OutputCache Duration="60" VaryByParam="None"%> 
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 
<html xmlns="http://www.w3.org/1999/xhtml" > 
<head runat="server"> 
<title></title> 
</head> 
<body> 
<form id="form1" runat="server"> This is Page 1.</form> 
</body> 
</html>

Parameters can be added to the OutputCache settings via the various supported attributes. Whilst this configuration allows specific targeting of the caching solution, by enabling you to define a cache setting for each separate page, it has the drawback that changes need to be made to all pages and all user controls. In addition, developers of any new pages will need to ensure that the page’s cache settings are correctly configured. Lastly, this solution is not configurable should the setting need to be changed per environment or disabled for performance reasons.
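Note that the example above actually enables output caching for 60 seconds; for the no-caching scenario this article is concerned with, the directive attributes can instead disable caching entirely. The sketch below should achieve that, although I believe Duration and VaryByParam remain mandatory on page directives even when Location="None", so verify the exact attribute combination for your framework version:

<%@ OutputCache Duration="1" Location="None" VaryByParam="None" NoStore="true" %>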

Declaratively Set Output Cache Using a Global Output Cache Profile

An alternative declarative solution for configuring a page’s cache settings is to use a Cache Profile. This works by again adding an OutputCache directive to each page (and user control) but this time deferring the configuration settings to a CacheProfile in the web.config file.

Example ASPX page:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="CacheTestApp._Default" %> 
<%@ OutputCache CacheProfile="RHCacheProfile" %> 
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 
<html xmlns="http://www.w3.org/1999/xhtml" > 
<head runat="server"> 
<title></title> 
</head> 
<body> 
<form id="form1" runat="server"> 
This is Page 1. 
</form> 
</body> 
</html>

Web.config file:

<system.web> 
  <caching> 
    <outputCache enableOutputCache="false"/> 
    <outputCacheSettings> 
      <outputCacheProfiles> 
        <add name="RHCacheProfile" 
             location="None" 
             noStore="true"/> 
      </outputCacheProfiles> 
    </outputCacheSettings> 
  </caching> 
</system.web>

This option provides the same specific per-page targeting, and the related drawback of having to make changes to every page and user control. However, it does centralise the cache settings in one place (minimising the impact of future changes) and enables caching to be configured during installation, depending on the target environment, via the deployment process.

Programmatically Set HTTP Headers in ASPX Pages

Output caching can also be set in code, in the code-behind page (or indeed anywhere the Response object can be manipulated). The code snippet below shows setting the HTTP headers indirectly via the Response.Cache object:

// Emits Cache-Control: no-cache
Response.Cache.SetCacheability(HttpCacheability.NoCache); 
// Emits an Expires header in the past
Response.Cache.SetExpires(DateTime.UtcNow.AddHours(-1)); 
// Emits Cache-Control: no-store
Response.Cache.SetNoStore();
// Requests max-age=30 (suppressed by the NoCache setting, as the output below shows)
Response.Cache.SetMaxAge(new TimeSpan(0,0,30));

This code would need to be added to each page, resulting in duplicate code to maintain, and again it introduces the requirement to remember to add it to all new pages as they are developed. It results in the headers below being produced:

Cache-Control: no-cache, no-store
Expires: -1
Pragma: no-cache

Programmatically Set HTTP Headers in Global ASAX File

Instead of adding the above code to each page, an alternative approach is to add it to the Global ASAX file so that it applies to all requests made through the application.

void Application_BeginRequest(object sender, EventArgs e)
{
	Response.Cache.SetCacheability(HttpCacheability.NoCache);
	Response.Cache.SetExpires(DateTime.Now);
	Response.Cache.SetNoStore();
	Response.Cache.SetMaxAge(new TimeSpan(0,0,30));
}

This applies to all pages requested through the application. It results in the headers below being produced:

Cache-Control: no-cache, no-store
Expires: -1
Pragma: no-cache

Explicitly Define HTTP Headers Outside of the ASP.net Cache Settings

Explicitly Define HTTP Headers in ASPX Pages

The Response object can have its HTTP headers set explicitly instead of using the ASP.net Cache object’s abstraction layer. This involves setting the header on every page:

void Page_Load(object sender, EventArgs e)
{
	Response.AddHeader("Cache-Control", "max-age=0,no-cache,no-store,must-revalidate");
	Response.AddHeader("Pragma", "no-cache");
	Response.AddHeader("Expires", "Thu, 01 Jan 1970 00:00:00 GMT");
}

Again, as a page-specific approach, it requires a change on each page. It results in the headers below being produced:

Cache-Control: max-age=0,no-cache,no-store,must-revalidate
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Pragma: no-cache

Explicitly Define HTTP Headers in Global ASAX File

To avoid having to set the header explicitly on each page the above code can be inserted into the Application_BeginRequest event within the application’s Global ASAX file:

void Application_BeginRequest(object sender, EventArgs e)
{
	Response.AddHeader("Cache-Control", "max-age=0,no-cache,no-store,must-revalidate");
	Response.AddHeader("Pragma", "no-cache");
	Response.AddHeader("Expires", "Tue, 01 Jan 1970 00:00:00 GMT");
}

Again this results in the below headers being produced:

Cache-Control: max-age=0,no-cache,no-store,must-revalidate
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Pragma: no-cache

Environment Specific Settings

It’s useful to be able to set the header values via configuration settings, not least to be able to test this change in a performance test environment via before/after tests.

All of the above changes should be made configurable, able to be toggled or tweaked via the web.config file (and therefore modifiable via deployment settings).
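As a rough sketch of what that could look like (the "DisableClientCaching" appSetting name is purely illustrative, not a definitive implementation), the Global ASAX approach above can be wrapped in a configuration check:

void Application_BeginRequest(object sender, EventArgs e)
{
	// Read the toggle from web.config appSettings so it can vary per environment,
	// e.g. <add key="DisableClientCaching" value="true"/>
	bool disableCaching;
	bool.TryParse(System.Configuration.ConfigurationManager.AppSettings["DisableClientCaching"], out disableCaching);

	if (disableCaching)
	{
		Response.AddHeader("Cache-Control", "max-age=0,no-cache,no-store,must-revalidate");
		Response.AddHeader("Pragma", "no-cache");
		Response.AddHeader("Expires", "Thu, 01 Jan 1970 00:00:00 GMT");
	}
}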


Upgrading MVC 3 to MVC 4 via NuGet

I had to upgrade an old ASP.NET MVC 3 project to MVC 4 yesterday and whilst searching for the official instructions I found that there is a NuGet package that does all the hard work for you.

The official instructions for upgrading are in the MVC 4 release notes here: http://www.asp.net/whitepapers/mvc4-release-notes#_Toc303253806

But Nandip Makwana has created a NuGet package that automates this process. Check it out here: https://www.nuget.org/packages/UpgradeMvc3ToMvc4
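If you want to try it, installation should be the usual NuGet flow from the Package Manager Console, run against the MVC 3 project:

Install-Package UpgradeMvc3ToMvc4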

It worked great for me.

Backing Up Your Blog Content Using HTTrack

I’m pretty strict on making sure I have my data backed up in numerous places and my blog content is no different. I would hate to lose all these years of babbling. In this post I cover how I back up this blog, and this will apply to any blog engine or indeed any website.

This blog is hosted on WordPress.com and I trust the guys at Automattic to keep my data safe, but accidents do happen. Ideally I want an up-to-date backup of this blog together with any images used. Personally I’m not too concerned about having it in a WordPress format; I actually prefer having the raw content that I could use to recreate the blog elsewhere.

The HTTrack Tool

The tool I use is HTTrack (http://www.httrack.com), which is a web site copying tool. It essentially re-creates a working copy of the site on the local disk (which you can navigate in a browser too). The tool has many features and includes a command line interface, which makes it easy to run via a scheduled task. The various command line switches are documented here: http://www.httrack.com/html/fcguide.html, but I use this simple command below:

C:\HTTrack\httrack.exe http://richhewlett.com -O c:\TargetFolderPathHere

For interest, the -O switch tells HTTrack to output the site to disk and hence produce a copy.

You can create a Windows Scheduled Task that periodically runs this command line and you have an automated backup. I personally go a bit further and wrap this command in a Windows PowerShell script. This script creates a new folder each time, named with the current date, and implements error handling which writes to the system event log.

This script is an example only and comes with no guarantees that it will work for you without modification:

#===============================================
# Backup blog to disk using HTTRACK
# (Created by Rich Hewlett, see blog at RichHewlett.com)
#===============================================
clear-host
write-output "---------------------Script Start--------------------"
write-output " HTTrack Site Backup Script"
write-output "-------------------------------------------------------"

# set file paths
$timestamp = Get-Date -format yyyy_MMM_dd_HHmmss
$TargetFolderPath="F:\MyBlogBackUp\$timestamp"
$HTTrackPath="C:\HTTrack\httrack.exe"

write-output "Backup target path is $TargetFolderPath"
write-output "HTTrack is at $HTTrackPath"

# set error action preference so errors don't stop execution and the
# try/catch kicks in to handle them gracefully
$erroractionpreference = "Continue"

try
{
    write-output "Creating output folder $TargetFolderPath ..."
    New-Item $TargetFolderPath -type directory

    write-output "Download data ..."
    invoke-expression "$HTTrackPath http://MyBlog.com -O $TargetFolderPath"
    write-output "Done with downloading."

    write-eventlog -LogName "Network" -Source "HTTrack" -EventId 1 -Message "Downloaded blog for backup"
}
catch
{
    # an error occurred so report it
    write-output "ERROR OCCURRED DURING SCRIPT " $error

    # write an event to the event log
    write-output "Writing FAIL to EventLog"
    write-eventlog -LogName "Network" -Source "HTTrack" -EventId 1 -Message "Download blog for backup FAILED during execution. $error" -EntryType Error
}

write-output "------------------------------------Script end------------------------------------"

I run this job monthly via a Windows scheduled task.
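For reference, creating the scheduled task can itself be scripted; something like the below should work from an elevated command prompt (the task name and script path are illustrative):

schtasks /create /tn "BlogBackup" /tr "powershell.exe -File C:\Scripts\BackupBlog.ps1" /sc MONTHLY /d 1 /st 02:00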


Using WordPress Export Feature

If you run a WordPress blog you can also do an export via the Dashboard, which will export all site content (including comments). In addition to the raw HTML backup above, I also run the export tool manually from time to time (and therefore infrequently). For more information on this feature check out http://en.support.wordpress.com/export/

Adding a Return Message in an RSA Sequence Diagram

Here’s a quick tip that I found useful last week.

If you’re using IBM Rational Software Architect to produce a UML sequence diagram and you add a new Synchronous Message activity, the tool automatically inserts a return message for you (this can be turned off in the preferences tab). Last week I discovered that if you happen to delete that return message (or it disappears by itself somehow) it is not at all intuitive how to insert it back again. After much head scratching my colleague found how to do it, and of course it’s easy once you know how (thanks Si).

Here is an example sequence diagram in RSA, but it’s missing a return message:

[Screenshot: sequence diagram missing its return message]

Now to add the return message, right click, select ‘Add UML’ > ‘Return Message’ (as shown below):

[Screenshot: context menu showing ‘Add UML’ > ‘Return Message’]

A return message is inserted:

[Screenshot: sequence diagram with the return message inserted]

As I said it’s easy once you know how, but I know I’ll forget :-)

How to Backup To USB Drive Only If It’s Connected

A key part of most personal data backup strategies involves backing up data to an external USB drive, but I don’t want to leave mine constantly connected. In this post I cover how to back up to an external drive using a scheduled automated process, but only if the external drive is connected at the time.

I don’t believe in leaving an external backup USB drive permanently connected to my system (PC or server), to avoid the backup data being corrupted or deleted. Also, if the backup drive is for off-site storage then it’s not possible for it to be always connected anyway. Using the steps below I can perform a full data backup by simply physically connecting the drive (plugging in a USB drive or docking a SATA drive into a USB dock, for example) and then leaving it overnight. In the morning I can safely physically disconnect it and store it without even having to log on to the machine.

The basic flow:

1) Assuming the machine is always on (which my server is), a scheduled task runs every night and executes a backup script, regardless of whether the backup drive is connected. If your machine is not always on, vary this by setting the scheduled task for a time when the machine is usually on.
2) The script checks for the existence of a specific folder on the connected drive (e.g. U:\Backup\). If the drive isn’t connected then the folder path won’t exist and the script just exits happily. However, if the drive has been connected then the folder path will exist and the backup script will continue and copy over the data.
3) After a successful backup the script safely disconnects the USB drive. This step is technically optional, as Windows supports pulling a USB drive out without doing a soft eject, but it’s highly recommended to tell Windows first to avoid data corruption.
4) Optionally you can also output a backup log somewhere, enabling you to check periodically that the job worked without having to reconnect the USB drive.

More detailed steps:

Firstly connect the external USB drive and make a note of the drive letter it uses. Changing the drive letter to something memorable might help (B for backup, U for USB, O for offsite etc.). We’ll use U for this example. Next we need to write a DOS command script, a simple program or a PowerShell script to perform the backup of the data using the backup tool of your choice. I use Robocopy to copy the files via a PowerShell script, and below is a simplified version of my script.

#==========================================================================================================
# Checks for the presence of the offsite backup USB drive, and backs up relevant data to the drive
# if present; exits gracefully if not. Run the script every night and it only backs up data when
# the offsite USB external drive is connected.
#==========================================================================================================
clear-host

# set file paths and log file names 
$timestamp = Get-Date -format yyyyMMdd_HHmmss
$LogBasePath="D:\Logs\OffSiteUSBBackup"
$LogFile="$LogBasePath\USBBkUp_$timestamp.txt"
$USBDriveLetter="U"
$USBDriveBackupPath="U:\Backup"

# set error action preference so errors don't stop execution and the try/catch kicks in to handle them gracefully
$erroractionpreference = "Continue"

try
{
	# Check the USB drive is connected by verifying the path
	if(Test-Path $USBDriveBackupPath)
	{	
		# now copy the data folders to backup drive
		invoke-expression "Robocopy C:\Docs $USBDriveBackupPath\Docs /MIR /LOG:$LogFile /NP"
		invoke-expression "Robocopy C:\Stuff $USBDriveBackupPath\Stuff /MIR /LOG+:$LogFile /NP"
		
		# Copy the log file too
		invoke-expression "Robocopy $LogBasePath $USBDriveBackupPath\Logs /MIR /NP"
					
		# Sleep for 60 to ensure all transactions complete, then disconnect USB drive		
		Start-Sleep -Seconds 60
		Invoke-Expression "c:\DevCon\USB_Disk_Eject /removeletter $USBDriveLetter"        
	}
}
catch
{
	# Catch the error, log it somewhere, but make sure you still eject the drive using below
	Start-Sleep -Seconds 60
	Invoke-Expression "c:\DevCon\USB_Disk_Eject /removeletter $USBDriveLetter"
}

It is key to include in the script a check for the existence of a specific folder on the drive letter belonging to the external drive (U in our example). Only if it’s present do we continue with the backup.

I make sure that Robocopy logs its output to a file, and that file lives on the server and is copied to the USB drive too (as a record of the last backup date etc.). I also report the running of the PowerShell script to the event log for reporting purposes, but this is outside the scope of this post.

It’s safer to tell Windows that you’re going to pull the drive out, so I call USB_Disk_Eject from within my script, passing in the drive letter. The script waits 60 seconds beforehand to ensure all disk transactions have completed before the drive disconnects. There are a few tools available for ejecting USB drives, such as Microsoft’s Device Console (DevCon.exe), but I use USB Disk Ejector (https://github.com/bgbennyboy/USB-Disk-Ejector).

Now set up a Scheduled Task in Windows to run the script every night at a set time, as sketched below. As the script is scheduled to run every night, all I have to do when I want to perform a backup is connect my backup drive and leave it until the morning. The script will run overnight, find the drive, back up the data and disconnect the drive. In the morning I can just physically disconnect it safely without having to log on to the machine. Periodically I’ll check the backup logs and the backup drive to make sure all is well and to check the remaining drive space.
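Again the schedule itself can be created from the command line if you prefer; something like this (task name and script path are illustrative) from an elevated prompt:

schtasks /create /tn "USBBackup" /tr "powershell.exe -File C:\Scripts\USBBackup.ps1" /sc DAILY /st 01:00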

Do you like this approach? Got a better idea? Let me know via the comments.

Critical Preview Fix For Live Writer Code Plugin


Just a quick note to announce a new version of my Windows Live Writer plug-in, for Source Code Syntax Highlighting in WordPress.com posts, has been released.

CaptainKernel posted a comment on this blog to say he was having issues with the preview feature not formatting the code correctly. After some investigation this turned out to have been caused by a change on the WordPress.com side.

Anyway a new fixed version, v1.4.2, is now available to download here, which resolves this issue.

Boot into Safe Mode With Windows 8

My laptop running Windows 8.1 decided not to boot this week, instead giving me a blue screen with the error "System Thread Exception Not Handled". As I’d not installed anything new recently I guessed it could be a video driver issue, so I tried to boot into Safe Mode – but wait, where is Safe Mode in Windows 8? Google and the Tom’s Hardware site came to the rescue with this excellent article for resolving the issue. Note the article’s use of BCDEDIT from the Command Prompt to turn on the legacy Windows boot menu (accessed by pressing F8 during boot).

At the C:\ command prompt:  BCDEDIT /SET {DEFAULT} BOOTMENUPOLICY LEGACY

You can dig a bit more into this command on the ‘Windows Developer Center’ site and check out the various options you can specify, including a useful ‘onetimeadvancedoptions’ option to turn on the F8 menu for one-time use on the next boot only. For more detail on the Windows 8 start-up settings, including how to restart in Safe Mode from within Windows, check out this page on the Windows site. Also note that you can use MSConfig (Start > Run > "msconfig.exe") to restart Windows in Safe Mode too.
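If you want to try the one-time option, I believe the command takes the form below, though it’s worth verifying against the Windows Developer Center page first as I’ve not needed it myself:

At the C:\ command prompt:  BCDEDIT /SET {DEFAULT} ONETIMEADVANCEDOPTIONS ON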

To return to the standard Windows 8 boot menu (for faster boot times), once you have resolved your issue, you can run the BCDEDIT command again but this time set the BOOTMENUPOLICY to STANDARD:

At the C:\ command prompt:  BCDEDIT /SET {DEFAULT} BOOTMENUPOLICY STANDARD

If you get an ‘Access Denied’ message make sure the command prompt window is running as Administrator (right click the shortcut > Run as Administrator).

As for my laptop issue, I used Safe Mode to uninstall my video drivers, enabling me to boot normally and then successfully update the drivers.

Host Static HTML or WebForms Page within MVC Site

If you need to host a static HTML page within an ASP.net MVC website, or you need to mix ASP.net WebForms with an MVC website, then you need to configure the routing in MVC to ignore requests for those pages.

Recently I wanted to host a static HTML welcome page (e.g. hello.htm) on an MVC website. I added the HTML page to my MVC solution and configured the web site’s default page to be the HTML page (hello.htm). It tested OK at first, but then I realised that it was only displaying the hello page first on debug because I’d set it as the Visual Studio project’s start-up page; I hadn’t actually configured the MVC routes correctly, so it wouldn’t work once deployed.

For this to work you need to tell MVC to ignore the route if it’s for the HTML page (or the ASPX page in the case of mixing WebForms and MVC). Find your routing configuration section (for MVC4 it’s in RouteConfig.cs under the App_Start folder; for MVC 1/2/3 it’s in Global.asax). Once found, use the IgnoreRoute() method to tell routing to ignore the specific paths. I used this:

routes.IgnoreRoute("hello.htm"); //ignore the specific HTML start page
routes.IgnoreRoute(""); //to ignore any default root requests

Now MVC ignores a request to load the hello HTML page and leaves IIS to handle returning the resource and hence the page displays correctly. 

Full Screen Remote Desktop Sessions

Sometimes, if you are on a new machine or using Remote Desktop for the first time, you might find that the display size is not correct when you connect to a remote machine. If the remote session won’t go full screen it can be annoying. To resolve this, launch Remote Desktop (tip: Start > Run > mstsc is the easiest way) or go via the Start Menu (Start > Programs or All Programs > Accessories > Remote Desktop Connection). Once launched, click ‘Options’ or ‘Show Options’ and then on the ‘Display’ tab adjust the size of your remote desktop screen. Move the slider all the way to the right for full screen.


Once you connect, the settings become the default for all Remote Desktop connections, so you’ll only need to do this once. The settings are saved in a Default.rdp file, usually stored in ‘My Documents’ or ‘Users/<UserName>/Documents’. It is possible to save multiple *.rdp files and pass one to MSTSC as a command line parameter (e.g. mstsc MyServer.rdp) if you need to connect to different machines with different settings.

Building A Learning Culture

I’m keen on fostering a learning culture within teams and was drawn to this article on InfoQ, ‘Creating a Culture of Learning and Innovation’ by Jeff Plummer, which shows what can be achieved through community learning. In the article Jeff outlines how a learning culture was developed within his organisation using simple yet effective crowd-sourcing methods.

I have implemented a community learning approach on a smaller scale using informal Lunch & Learns, where devs give up their lunch break routine to eat together whilst learning something new, the presenter/teacher being one of the team who has volunteered to share their knowledge on a particular subject. Sometimes the presenter will already have the knowledge they are sharing, but other times they have volunteered to go and learn a subject first and then present it back to the group. Lunch & Learns work even better if you can convince your company to buy the lunch (it’s much cheaper per head than most other training options).

It’s hard to justify expensive training courses these days, but it’s also never been easier to find free or low-cost training online. As Jeff points out, innovation often comes from learning subjects not directly relevant to your day job. In my approach to learning with a team I have always tried to mix specific job-relevant subjects with seemingly less relevant ones. For example, a session on Node.js for a team of .Net developers would be hard to justify in monetary terms, yet I’ve no doubt the developers took away important points about new web paradigms, non-blocking threads, web server models, and much more. Developers like to learn something new, and innovation often comes from taking ideas that already exist in a different domain and applying them to the current problem.

I agree with Jeff’s point that the champions are key to the success of this kind of initiative. It is likely that the first few subjects will be taught by the champion(s), and they will need to promote the process to others. One tip to take some of the load off the champions is to mix in video sessions as well as presenter-based learning sessions. There are a lot of excellent conference session videos available and these can make good Lunch & Learn sessions. Once the momentum builds it becomes the norm for everyone to be involved, and this crucially triggers a general sense of learning and of sharing that learning experience with others.