Platform Targeting in .Net

If you see one of the errors below in your .Net application then it is likely the result of your assembly having been compiled with the wrong target architecture flag set.

1) In this instance the assembly was accidentally compiled for x86 only and run within IIS in a 64bit process:

Unhandled Exception: System.BadImageFormatException: Could not load file or assembly “AssemblyName, Version=1.0.0.0, Culture=neutral” or one of its dependencies. An attempt was made to load a program with an incorrect format.

(For info, IIS has an Application Pool setting, “Enable 32-Bit Applications”, under Advanced Settings that enables you to run the App Pool as a 32-bit process instead of 64-bit. This is off by default.)

2) Here a .Net application targeting a 64 bit processor was run on a 32bit system:

This version of ConsoleApplication1.exe is not compatible with the version of Windows you’re running. Check your computer’s system information to see whether you need a x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher.

If you are looking for a quick fix, then just recompile your assembly with the “Any CPU” option as your platform target (see the Build tab of the project properties for this setting). If you want more information and explanation then read on.

[Screenshot: Platform Target setting on the project properties Build tab]

When compiling .Net assemblies in Visual Studio (or via the command line) there are a few options for optimising for certain target platforms. If you know that your assembly needs to be deployed only to a 32-bit environment (or it needs to target a specific processor instruction set) then you can optimise the MSIL output (what the compiler produces ready for JIT compilation at run time) for that platform. This is not normally required, but perhaps, for example, you have to reference unmanaged code that targets a specific processor architecture.

Essentially this sets a flag inside the assembly metadata that is read by the CLR. If it is set incorrectly then this can result in the above errors, but also in other odd situations. For example, if you compile your app to target “x86” and then reference an assembly that targets the “x64” platform, you will see an error at runtime due to this mismatch (BadImageFormatException). Running an “x86” application on 64-bit Windows will still work, but it will not run natively as 64-bit; instead it runs under the WOW64 emulation mode, which enables x86 execution under 64-bit (with a performance overhead). This may or may not be a valid scenario in your case.

If you want to reproduce the situation, try creating a new console application and in the Build properties tab set Platform Target to “x86”. Then create a new Class Library project, reference it from the Console Application, and in the library’s Build properties tab set it to target the “x64” platform. Build and run the application, which will throw the BadImageFormatException above.
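
The same mismatch can be reproduced from the command line. A minimal sketch using the C# compiler (csc.exe) from a Developer Command Prompt; the file names here are hypothetical:

csc /target:library /platform:x64 ClassLibrary1.cs
csc /target:exe /platform:x86 /reference:ClassLibrary1.dll Program.cs

Running the resulting Program.exe then fails with the BadImageFormatException, because the 32-bit process cannot load the x64-only library.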

The target platform for your project is set in the Project Properties tab in Visual Studio, under Build (see screenshot above). If you are compiling via the command line you use the /platform switch.

“AnyCPU” became the default value from VS2010 onwards. “AnyCPU” as used up to .Net 4.0 means that if the process runs on a 32-bit system, it runs as a 32-bit process and the MSIL is compiled to x86 machine code; if the process runs on a 64-bit system, it runs as a 64-bit process and the MSIL is compiled to x64 machine code. Whilst this enables more compatibility with multiple target machines, it can lead to confusion or unexpected results when you consider that Windows duplicates system DLLs, configuration and registry views for 32-bit and 64-bit processes.

Since .Net 4.5 (VS2012) there is a new default subtype of “AnyCPU” called “Any CPU 32-bit preferred”, which follows the above rules except that if the process runs on a 64-bit system it runs as a 32-bit process (not 64-bit as before) and its MSIL is compiled to x86 code, not x64. This change essentially forces your process to run as 32-bit even on a 64-bit machine unless you untick the option and turn off the default setting. The setting can be seen on the project properties Build tab as “Prefer 32-bit”.

[Screenshot: Prefer 32-bit option on the project properties Build tab]
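
You can observe these rules at runtime, since any .Net process can report both the OS and its own bitness via the standard Environment class; from a PowerShell prompt, for example:

[Environment]::Is64BitOperatingSystem
[Environment]::Is64BitProcess

On a 64-bit OS, an “Any CPU 32-bit preferred” process returns True for the first and False for the second.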

It is worth noting that you may see a confusing “Multiple Environments” option in Visual Studio, which can be automatically added after migrating solutions from older versions of Visual Studio (I believe it has been removed as an option in VS2015 onwards but can hang around in older solutions). Use the Configuration Manager tab to check the setting for each assembly. Most developers will want to target “Any CPU”, which supports multiple target environments. If you are getting the above errors then use the steps below to check the assembly and, if it is incorrect, try recompiling with “Any CPU” instead.

How to confirm a target platform for a given assembly:

So how do you confirm which processor architecture an assembly was built for? Well there are a few ways:

1) Using .Net reflection via a simple Powershell command:

[reflection.assemblyname]::GetAssemblyName("${pwd}\MyClassLibrary1.dll") | format-list

Running this command pointing to the assembly you want to check will produce output like this below (results truncated):

An assembly compiled to target AnyCPU:

Name                  : ClassLibrary2
Version               : 1.0.0.0
CodeBase              : file:///C:/TempCode/crap/pttest/x86/ClassLibrary2.dll
ProcessorArchitecture : MSIL
FullName              : ClassLibrary2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null

Same assembly but now compiled to target x86:

Name                  : ClassLibrary2
Version               : 1.0.0.0
CodeBase              : file:///C:/TempCode/crap/pttest/x86/ClassLibrary2.dll
ProcessorArchitecture : X86
FullName              : ClassLibrary2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null

Same assembly again but now compiled to target x64:

Name                  : ClassLibrary2
Version               : 1.0.0.0
CodeBase              : file:///C:/TempCode/crap/pttest/x86/ClassLibrary2.dll
ProcessorArchitecture : Amd64
FullName              : ClassLibrary2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
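
If you have a folder full of assemblies to check, the same call can be applied to each in turn; a small convenience sketch:

Get-ChildItem "${pwd}\*.dll" | ForEach-Object {
    [Reflection.AssemblyName]::GetAssemblyName($_.FullName) |
        Select-Object Name, ProcessorArchitecture
}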

 

2) You can also see this information in a decompiler tool such as dotPeek. Below is a screenshot showing assemblies targeting x86, AnyCPU and x64.

[Screenshot: dotPeek showing the target platform of several assemblies]

3) Use the CorFlags Conversion Tool

The CorFlags Conversion Tool (CorFlags.exe) is installed with Visual Studio and easily accessed via the VS Developer Command Prompt. It enables reading and editing of the flags set on an assembly.

CorFlags assemblyName

Assuming you have the .Net 4 version of the tool, and an assembly built against .Net 4 or above, you’ll see something like this. Older versions do not have the 32BITREQ/32BITPREF flags, as per the change discussed above:

Microsoft (R) .NET Framework CorFlags Conversion Tool.  Version  4.6.1055.0
Copyright (c) Microsoft Corporation.  All rights reserved.

Version   : v4.0.30319
CLR Header: 2.5
PE        : PE32
CorFlags  : 0x1
ILONLY    : 1
32BITREQ  : 0
32BITPREF : 0
Signed    : 0

To interpret this output, see the table below and check the PE value (PE32+ is used only for 64-bit-only assemblies) and the 32BITREQ/32BITPREF flags. It is also possible to update these flags using this tool.
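
For example, something like the command below (MyAssembly.exe is a placeholder name) should clear the 32-bit preferred flag so that an AnyCPU assembly runs as 64-bit on a 64-bit OS; note that a strong-named assembly will need re-signing afterwards:

CorFlags MyAssembly.exe /32BITPREF-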

Summary

Below is a simple table of results showing the impact of Platform Target on the compiled assembly and how it is run on a 32/64-bit OS. As you can see, the 32-bit preferred flag results in an AnyCPU assembly being run as 32-bit even on a 64-bit OS. The table also shows the values you get when you use the techniques above for determining the target platform of a given assembly.

 

(The PowerShell and dotPeek columns show the reported ProcessorArchitecture; 32BITREQ, 32BITPREF and PE are the CorFlags values.)

Platform Target in Visual Studio              PowerShell   dotPeek   32BITREQ   32BITPREF   PE      Runs on 32-bit OS as   Runs on 64-bit OS as
AnyCPU (pre .Net 4)                           MSIL         MSIL      0          0           PE32    32-bit process         64-bit process
AnyCPU (.Net 4+, 32-bit not preferred)        MSIL         MSIL      0          0           PE32    32-bit process         64-bit process
AnyCPU (.Net 4+, 32-bit preferred, default)   MSIL         x86       0          1           PE32    32-bit process         32-bit process (under WOW64)
x86                                           x86          x86       1          0           PE32    32-bit process         32-bit process (under WOW64)
x64                                           Amd64        x64       0          0           PE32+   ERROR                  64-bit process

In summary, there are a few simple rules to be aware of when using AnyCPU and the 32-bit preferred flag, but essentially AnyCPU will provide the most compatibility in most cases.

Calculate a file hash without 3rd party tools on Windows & Linux.

If you need to generate a hash of a file (e.g. MD5, SHA256, etc.) then there are numerous 3rd party tools that you can download, but if you are restricted to built-in tools, or don’t need to do this often enough to install something, there are built-in OS tools for Windows and Linux that can be used.

Windows:

For Windows there is “certUtil”, which can be used from the command prompt with the “-hashfile” option to generate a hash for a supplied file:

CertUtil [Options] -hashfile filePath [HashAlgorithm]

The [HashAlgorithm] options are MD2, MD4, MD5, SHA1 (default), SHA256, SHA384 and SHA512.

For example to get an MD5 hash of a file use:

CertUtil -hashfile C:\ExampleFile1.txt MD5

More documentation for CertUtil can be found in the Windows command reference on Microsoft’s documentation site.

For those with access to PowerShell v4 and above (Windows 8.1 & Windows Server 2012 R2) you can use the built-in cmdlet Get-FileHash like this:

Get-FileHash C:\ExampleFile1.txt  -Algorithm MD5 | Format-List

The algorithms supported are SHA1, SHA256 (default), SHA384, SHA512, MACTripleDES, MD5 & RIPEMD160.

For Powershell versions prior to V4 there are numerous scripts available on the web that will work out the hash for you using various methods.
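
A minimal sketch of the usual approach, calling the .Net hash classes directly (the file path is an example):

$stream = [System.IO.File]::OpenRead("C:\ExampleFile1.txt")
try {
    $md5 = [System.Security.Cryptography.MD5]::Create()
    ## format the hash bytes as a hex string
    [System.BitConverter]::ToString($md5.ComputeHash($stream)) -replace '-',''
}
finally {
    $stream.Close()
}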

Linux:

For Linux, use the relevant hash-algorithm “sum” command in the terminal for the algorithm you are looking for, i.e. for an MD5 hash use md5sum, or for a SHA512 hash use sha512sum.

For example:

md5sum /home/rich/Documents/ExampleFile1.txt 
sha1sum /home/rich/Documents/ExampleFile1.txt
sha512sum /home/rich/Documents/ExampleFile1.txt

 

 

Speed up a slow JSF XHTML editing experience in Eclipse or IBM RAD/RSA.

If you find yourself doing some JSF (Java Server Faces) development in either Eclipse, IBM’s RAD (Rapid Application Developer) or IBM RSA (Rational Software Architect) IDEs, you may find that the JSF editor runs slowly with some lag. This seems to be a particular problem on RAM-starved machines and/or older versions of the Eclipse/RAD IDEs. The problem (which can be intermittent) is very frustrating and can result in whole seconds going by between typing and your changes appearing in the editor. It seems that the JSF code validator takes too long to re-validate the edited JSF code file. At one point this got so bad for our team that many would revert to making JSF changes in a text editor and then copy/paste the final code into the IDE.

Thankfully there is a workaround, and so that I don’t forget it if I hit this problem again, I’m posting it here. The workaround (although sadly not a fix) is to use a different “editor” within the same IDE. If you right-click the JSF file you want to edit and use the pop-up menu to open it with the XML Editor instead of the XHTML Editor, you will find a much faster experience. Whilst this removes some of the JSF/XHTML-specific validations, it still provides support for tags etc. and performs faster.

Should you wish to always use the XML Editor to edit XHTML files you can make this global change via the preferences. Go to General > Editors > File Associations > File Types list > select XHTML extension > click Add > Add XML Editor. Then in the associated editors list select the XML Editor and click the ‘Default’ button – thus making XML Editor the default for all XHTML files. Of course once this is done you can still click on individual XHTML files and right click to open in the original XHTML editor should you want to temporarily switch back for an individual file.

Hopefully this will prevent you pulling your hair out in frustration when editing XHTML files.

Archiving Adobe Lightroom Back Ups with PowerShell

If you are an Adobe Lightroom user it is critical to have regular backups of your photo library catalogue. Luckily this is a simple task thanks to the fact that Lightroom has a feature built in to regularly take a backup for you (which in effect means making a copy of your current catalogue file into the location you have specified in the application’s preferences).

For information on how to configure the backup settings in Lightroom check out this Adobe link: https://helpx.adobe.com/lightroom/help/back-catalog.html

Lightroom unfortunately does nothing to clear out old backups, and prior to Lightroom version 6 these backups were not even compressed, which together means the space required to store backups grows very quickly. This was always frustrating, as the catalogue files can be compressed by a huge margin (80-90% in some cases). Luckily newer versions of Lightroom compress the backups into zip files, which makes their size much less of an issue.

Anyway, for those familiar with PowerShell, I have a script that I run after each backup to remove old backups, compress the new backup and move it to a new location (on a separate drive, to guard against drive failure).

The script is called LR_Zip_Tuck, as it zips the backups and tucks them away. There are two versions of the script. V1 is for Lightroom versions before V6/CC, as it includes the additional compression step which is no longer required since Lightroom V6. It still works with Lightroom V6 but is slower, and so V2 of the script is recommended.

The script first waits until the Lightroom application is no longer running before proceeding. This means that you can run the script on exit of Lightroom while it is still backing up (if you have it set to back up on exit) and it will wait until Lightroom has finished (I run it from a desktop shortcut while I’m still in Lightroom or while it is backing up on exit).

## check if Lightroom is running, and if so just wait for it to close
$target = "lightroom"

$process = Get-Process | Where-Object {$_.ProcessName -eq $target}

if($process)
{
	Write-Output "Waiting for Lightroom to exit..."
	$process.WaitForExit()
	start-sleep -s 2
}

It then loops through each subfolder in the backup location looking for catalogue backups that Lightroom has created since the last time the script was run. Each new backup is renamed, moved up out of its subfolder, copied to the off-drive backup location, and the local subfolder is then deleted.

## loop each subfolder in backup location and process
foreach ($path in (Get-ChildItem -Path $LocalBkUpFolder -Directory))
{
	## find the zip file in this folder and rename it after its parent folder
	$path | Get-ChildItem | where {$_.extension -eq ".zip"} | Select-Object -first 1 | % { $_.FullName} | Rename-Item -NewName {$path.Name + ".zip"}

	## move file to parent folder (as dont need subfolders now)
	$SourceFilePath = $path.FullName + "\" + $path.Name + ".zip"
	Move-Item $SourceFilePath -Destination $LocalBkUpFolder

	## copy zip to remote share location
	Write-Output "Tucking backup away on remote share"
	$NewFileName = Join-Path $LocalBkUpFolder ($path.Name + ".zip")
	Copy-Item $NewFileName -Destination $RemoteBkUpFolder

	## delete the now-empty backup subfolder
	Remove-Item -Recurse -Force $path.FullName
}

It then does some housekeeping, ensuring that only the configured number of old backups exists in the local and remote locations (deleting the oldest first). This prevents the backups building up over time.

## cleanup zip files (local)
Remove-MostFiles $LocalBkUpFolder *.zip 8

## cleanup zip files (remote)
Remove-MostFiles $RemoteBkUpFolder *.zip 20
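
Remove-MostFiles is a helper function defined in the full script; a minimal sketch of what such a helper might look like (the real implementation is in the GitHub repo linked below):

function Remove-MostFiles($folder, $filter, $keep)
{
	## delete all but the newest $keep files matching $filter
	Get-ChildItem (Join-Path $folder $filter) |
		Sort-Object LastWriteTime -Descending |
		Select-Object -Skip $keep |
		Remove-Item -Force
}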

That’s about it. The scripts are available on my GitHub repo here (as LR_ZipTuck_V1.ps1 and LR_ZipTuck_V2.ps1).

Some SonarQube Upgrade Issues & Fixes

I recently upgraded a SonarQube server installation from v5.6.2 to v6, and unfortunately hit a few issues along the way, which I thought I’d share here in case others experience the same. All were resolved in the end, and if you are yet to run SonarQube to analyse your software assets please don’t be put off by these small issues. SonarQube is an outstanding tool to have in your Quality Control armoury and it is incredibly easy to set up and run. In fact you can download it and run it straightaway in under two minutes without installing anything (check out this link Get Started in Two Minutes to learn how).

Anyway the first problem I hit with the upgrade was an error message in the log when the service was trying to connect to the database (in my instance an MS SQL Server):

Unsupported JDBC driver provider: jtds 

Apparently SonarQube switched to a bundled jTDS driver at some point, and so the provider needs to be removed from the connection string:

Original connection string:
sonar.jdbc.url=jdbc:jtds:sqlserver://ServerName;instance=sonar;databaseName=sonar

New connection string:
sonar.jdbc.url=jdbc:sqlserver://ServerName;instance=sonar;databaseName=sonar

With this change made I still could not connect to the database, but this time due to a different error:

Can not connect to database. Please check connectivity and settings. The TCP/IP connection to the host ServerName1, port 1433 has failed. Error: “Connection refused: connect. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.”

After verifying the database was indeed up, running, not blocked by a firewall and indeed open on the specified port, I found that I had to turn off dynamic ports on my Sonar DB server. To do this, open the SQL Server Configuration Manager application and, under SQL Server Network Configuration > Protocols for Sonar, right-click TCP/IP and choose Properties. Under IP Addresses ensure that TCP Port is 1433 for all entries (including IPAll) AND ensure that TCP Dynamic Ports is blank. My TCP Dynamic Ports value was “0”, which actually enables dynamic ports! After this change DB connectivity was successful.
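
As an aside, a quick way to check connectivity to the port from the SonarQube machine is PowerShell’s Test-NetConnection cmdlet (available from Windows 8/Server 2012 onwards; the server name here is a placeholder):

Test-NetConnection ServerName1 -Port 1433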

At this point the auto-upgrade step failed, and after interrogating the logs I found this problem:

Cannot resolve the collation conflict between “Latin1_General_CI_AS” and “Latin1_General_CS_AS” in the equal to operation.

After some googling I hit this very useful Stack Overflow post where the problem is explained. I chose to manually update the database collation (option 3). After running the suggested query I was able to work out which indexes needed to be dropped and recreated to enable the collation to be updated.

After this I deleted the data out of the SonarQube temp folder (ensuring that the Sonar service had been stopped first) and restarted the service. This triggered the upgrade process again, which this time completed successfully.

Using PowerShell for your VS Code Integrated Terminal

Microsoft’s superb Visual Studio Code editor has an integrated terminal which is accessed via the ‘View’ menu or via the Ctrl+’ shortcut keys. On Windows the default terminal is the Windows Command Prompt (cmd.exe), however you can easily configure VS Code to use a different terminal such as Windows PowerShell.

Open the User Settings config file (the ‘settings.json’ file accessed via File > Preferences > User Settings) and modify the setting for which terminal to run on Windows:

The default setting is:

"terminal.integrated.shell.windows": "C:\\WINDOWS\\system32\\cmd.exe",

To use the PowerShell terminal instead add this to your settings.json user settings file:

"terminal.integrated.shell.windows": "C:\\Windows\\sysnative\\WindowsPowerShell\\v1.0\\Powershell.exe",

Now PowerShell will be used instead of cmd.exe. Currently only one terminal can be configured in VS Code, so you can’t have both PowerShell and cmd.exe; you’ll have to choose your favourite for now. You can however access multiple instances of the terminal via the drop-down on the terminal window.
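
If you need to pass arguments to the shell there is a companion shellArgs setting; for example (an assumption worth verifying against your VS Code version), to suppress the PowerShell startup banner:

"terminal.integrated.shellArgs.windows": ["-NoLogo"]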

[Screenshot: the VS Code integrated terminal running PowerShell]

Finally, whilst on the subject of VS Code and PowerShell, I recommend installing Microsoft’s PowerShell Extension, which lets you code and debug PowerShell scripts directly within VS Code (and benefit from its features, e.g. git integration etc.).
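
If you prefer the command line, the extension can also be installed via the code CLI (assuming the code command is on your path; the extension identifier below is the one I believe is current):

code --install-extension ms-vscode.PowerShell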

Interactive file reading with Powershell

Sometimes you want to see the contents of a text file whilst it is still being updated. A common example is where you are outputting to a log file and need to see the output interactively without having to keep opening the file to check for progress, or to see if a job has completed.

Luckily there is the very useful Get-Content Powershell Command.

This can take a “-wait” parameter that will re-read the file every second or so and check for updates, displaying them in the console (until you end the command with Ctrl+C as usual).

So for example the command below will read the file constantly until you tell it to stop:

Get-Content C:\Logs\Log.txt -wait

In addition there is the “-Tail” parameter which is the Powershell equivalent to the Unix tail command. This will read the last few lines of the file only and not the whole file. This can be used on its own like this:

Get-Content C:\Logs\Log.txt -tail 5

…which displays the last 5 lines of the file. Or you can combine -wait and -tail together to constantly read the last (specified) number of lines:

Get-Content C:\Logs\Log.txt -wait -tail 5
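
You can also pipe the streamed content on to other cmdlets; for example, to watch the log but only display lines containing “ERROR”:

Get-Content C:\Logs\Log.txt -Wait -Tail 5 | Select-String "ERROR"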

For more information, see Get-Content Powershell Command on MSDN.

Break on Exceptions in Visual Studio 2015

Looking for the option to break on exceptions during debugging in Microsoft Visual Studio 2015? Well, Microsoft dumped the old Exceptions dialog and replaced it with the new Exception Settings window. To see it, show the window via the menu: Debug > Windows > Exception Settings.

[Screenshot: the Debug > Windows > Exception Settings menu item]

Use the Exception Settings window to choose the types of exceptions on which you wish to break. Right-click for the context menu option that toggles whether to break or continue when the exception is handled (see below). To break on all exceptions you’ll want to ensure this is set to off (not ticked).

[Screenshot: the Exception Settings window with its context menu shown]

For more information check out these MSDN links:

https://blogs.msdn.microsoft.com/visualstudioalm/2015/02/23/the-new-exception-settings-window-in-visual-studio-2015/

https://blogs.msdn.microsoft.com/visualstudioalm/2015/01/07/understanding-exceptions-while-debugging-with-visual-studio/

Disable Start Menu Web Search in Windows 10

If, like me, you want the Windows 10 “start” menu to only provide applications and Windows settings in the search results, and not web search results, you need to configure it using these steps.

Using the Start Menu, find “Cortana & Search Settings”, then click the settings icon (the cog), turn Cortana off, and then turn off “Search Online and Include Web Results”.
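
If you prefer to script the change, on some Windows 10 builds these options map to per-user registry values which PowerShell can set; the value names below are an assumption that is worth verifying for your particular build:

## assumption: these value names vary between Windows 10 builds - verify before relying on them
Set-ItemProperty -Path 'HKCU:\SOFTWARE\Microsoft\Windows\CurrentVersion\Search' -Name BingSearchEnabled -Value 0
Set-ItemProperty -Path 'HKCU:\SOFTWARE\Microsoft\Windows\CurrentVersion\Search' -Name CortanaConsent -Value 0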

Allow PowerShell Execution

By default PowerShell’s execution policy is very restrictive which is a good thing for security. If you are editing or running scripts on your machine you may want to relax it slightly. As I often forget to do this on new machines I’m making a note of the command in this post:

Open PowerShell prompt as Administrator, and then run:

set-executionpolicy remotesigned

RemoteSigned means local scripts can be run, but downloaded ones must be signed. You can remove all restrictions via:

set-executionpolicy Unrestricted

To view the current setting on your machine use get* instead of set*:

get-executionpolicy
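
The policy can also be set per scope rather than machine-wide; for example, to relax it for the current user only, and to list the effective policy at each scope:

Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
Get-ExecutionPolicy -List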

For more information see technet here:
https://technet.microsoft.com/en-us/library/ee176961.aspx