Useful Git Training Links

Having recently had to compile a list of useful learning resources for a development team migrating to git, I thought I would share them here.

Git is a very powerful and versatile distributed source control system, but it's not the easiest for a newbie to get their head around. The links below are ordered from tutorials giving an overview of git through to more advanced topics.

  1. What is Git – a nice overview article by Atlassian
  2. Learn Enough Git to Be Dangerous – a tutorial by Michael Hartl
  3. Git the Simple Guide – an excellent, simple, straight-to-the-point guide to git by Roger Dudler (my favourite guide)
  4. Git Tutorial – another tutorial
  5. Git Cheat Sheet – a cheat sheet for git and GitHub commands
  6. The official git site documentation and tutorials
  7. Pro Git ebook – an excellent, definitive guide to git in a free ebook format


GitHub External Training Links: 

If you or your team also need to learn GitHub then here are some good training links.

  1. A great hello world example and introduction to GitHub
  2. Git Started With GitHub – a free course from Udemy
  3. Training videos on YouTube

Also, it's worth remembering that Microsoft offers free private git repository hosting via Visual Studio Team Services if you don’t want to host all your projects publicly.

 


Consume JSON REST Service via WCF Message Class


Since WCF was designed and envisioned by Microsoft, the world has changed and the use of RESTful JSON-based web services has increased at the expense of SOAP-based services. WCF was updated to reflect this change and has for several years supported RESTful services through webHttpBinding etc. (more on MSDN). There are many resources on the web covering how to consume or host a REST service with WCF, but most assume you are not using a generic channel factory approach with the low-level Message class. Usually in WCF you would consume a service via a proxy, or perhaps by directly creating a ChannelFactory, but these require explicit knowledge of the service contract being consumed, and sometimes a more generic solution is required. If, for example, you wanted to create a generic WCF helper class for your application which would build a message directly from passed-in data and call a service generically, then you could use the Message class directly. This advanced approach is documented for SOAP messaging, but what if you need to send JSON?

Below are some notes on how you would use the Message class to send JSON in a generic way (i.e. without needing intimate knowledge of the service contract you’re calling).

In the code below we need to pass the Person object named “bob” as JSON, so we create a WebChannelFactory using the “Endpoint1” config (which is very generic in nature). The WebChannelFactory is a ChannelFactory that automatically adds WebHttpBinding and WebHttpBehavior to the endpoint config if it's missing. Then we create a proxy and directly build a Channels.Message using a SOAP version of “None” (as we’re not using SOAP here but JSON) and the DataContractJsonSerializer.

Person bob = new Person() { age = 89, name = "Bob" };

// WebChannelFactory automatically adds WebHttpBinding/WebHttpBehavior
// to the endpoint config if missing
WebChannelFactory<IRequestChannel> factory =
    new WebChannelFactory<IRequestChannel>("Endpoint1");

IRequestChannel proxy = factory.CreateChannel(
    new EndpointAddress("http://localhost:8080/Test"));

// MessageVersion.None as we are sending JSON, not SOAP
System.ServiceModel.Channels.Message requestMsg =
    System.ServiceModel.Channels.Message.CreateMessage(
        MessageVersion.None, "", bob, new DataContractJsonSerializer(typeof(Person)));

requestMsg.Headers.To = new Uri(factory.Endpoint.Address.ToString());
requestMsg.Properties[WebBodyFormatMessageProperty.Name] =
    new WebBodyFormatMessageProperty(WebContentFormat.Json);

You will notice above that we also need to set the message's To header URI and set the WebBodyFormatMessageProperty format to JSON. If we forget this step then the message will be sent in XML format despite the web config we have set (for more info on this issue see here and here). This is what is sent without setting the WebBodyFormatMessageProperty to JSON:

<root type="object"><age type="number">89</age><name>Bob</name></root>

and with the WebBodyFormatMessageProperty set to “WebContentFormat.Json”:

{"age":89,"name":"Bob"}

Next we call the nice and generic “Request()” method on the proxy and handle the response, picking out the body and de-serialising it into a Person object via the DataContractJsonSerializer.

System.ServiceModel.Channels.Message responseMsg = proxy.Request(requestMsg);

Person bobResponse = responseMsg.GetBody<Person>(new DataContractJsonSerializer(typeof(Person)));

Endpoint Config:

<system.serviceModel>
  <client>
    <endpoint name="Endpoint1"
              address="http://localhost:8080/Test"
              binding="webHttpBinding"
              contract="System.ServiceModel.Channels.IRequestChannel" />
  </client>
</system.serviceModel>

In this snippet the only thing that is specific to the service being called is the Person object, which the DataContractJsonSerializer needs to know about in order to serialise it into JSON correctly. The actual service call is generic. To make this a completely generic helper we can instead pass in a type for the DataContractJsonSerializer to use instead of a real object, leaving the calling component to pass the right type in when it calls this generic helper method, as sketched below.
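Here is a minimal sketch of what such a helper might look like. To be clear, this is an illustrative assumption rather than code from the original project: the method name, parameters and the lack of error handling are all hypothetical, and it assumes the relevant System.ServiceModel usings.

// Hypothetical generic helper - sends any data-contract object as JSON and
// deserialises the JSON response to TResponse. Sketch only, no error handling.
public static TResponse CallJsonService<TResponse>(
    string endpointConfigName, string url, object requestData, Type requestType)
{
    WebChannelFactory<IRequestChannel> factory =
        new WebChannelFactory<IRequestChannel>(endpointConfigName);
    IRequestChannel proxy = factory.CreateChannel(new EndpointAddress(url));

    // Build the message generically from the passed-in data and its type
    Message requestMsg = Message.CreateMessage(
        MessageVersion.None, "", requestData,
        new DataContractJsonSerializer(requestType));
    requestMsg.Headers.To = new Uri(url);
    requestMsg.Properties[WebBodyFormatMessageProperty.Name] =
        new WebBodyFormatMessageProperty(WebContentFormat.Json);

    Message responseMsg = proxy.Request(requestMsg);
    return responseMsg.GetBody<TResponse>(
        new DataContractJsonSerializer(typeof(TResponse)));
}

A caller would then do something like CallJsonService<Person>("Endpoint1", "http://localhost:8080/Test", bob, typeof(Person)), without the helper itself knowing anything about Person.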

If you are already using this message class approach for SOAP services and need to now call some JSON REST services then hopefully this will help.

SonarQube: Unit Test Results Not Shown


Recently, whilst building a Jenkins CI pipeline with SonarQube static analysis, I found that the JUnit unit test results were not being included in the Sonar dashboard results. The Jacoco-based test coverage results were being included fine, but not the actual test pass/fail percentage.

[Screenshot: Sonar dashboard with the unit test results missing]

After digging into the log for the Jenkins build I found this warning being logged for the SurefireSensor (the Sonar sensor responsible for scanning JUnit XML reports for results):

[sonar] 10:26:34.534 INFO - Sensor SurefireSensor
[sonar] 10:26:34.534 INFO - parsing /apps/jenkins2/var/lib/jenkins/workspace/abc/code_master/examplecode/UnitTest/junit
[sonar] 10:26:34.864 DEBUG - Class not found in resource cache : com.rh.examplecode.UIMapperTest
[sonar] 10:26:34.864 WARN - Resource not found: com.rh.examplecode.UIMapperTest

The JUnit XML reports were being found and parsed fine, but when the scanner looked for the actual test code (the *.java files) it could not find it, hence the warning. It turns out that the Java source for the tests is required in order to analyse the JUnit results files, so you need to tell Sonar where to find it. How? Via the “sonar.tests” property, which is a comma-separated list of paths to the directories containing the test code (the *.java files, not the *.class files). For example:

sonar.tests = "/UnitTests/junit"

Set this property alongside the other parameters for Sonar, for example:

sonar.projectBaseDir="${WORKSPACE}/exampleApp"
sonar.projectKey="testbuild1"
sonar.projectName="testBuild"
sonar.sourceEncoding="UTF-8"
sonar.sources="src/main/java/com/rh/examplecode/"
sonar.junit.reportsPath="ReportsXML/"
sonar.tests="/UnitTests/junit"
sonar.jacoco.reportPath="target/jacoco.exec"
sonar.jacoco.reportMissing.force.zero="true"
sonar.binaries="build/com/rh/"

After this change the Sonar scanner will run and this time find the test source code, enabling it to complete the analysis. The log should report something like this:

[sonar] 13:10:20.848 INFO - Sensor SurefireSensor
[sonar] 13:10:20.848 INFO - parsing /apps/jenkins2/var/lib/jenkins/workspace/abc/code_master/ReportsXML
[sonar] 13:10:21.022 INFO - Sensor SurefireSensor (done) | time=10ms

And you should now have your unit test success/failure results in the unit test widgets on the project's Sonar dashboard, like so:

[Screenshot: Sonar dashboard unit test widget showing pass/fail results]

A Custom JSF Tag Lib For Toggling Render of Child Elements


I’ve added a new sample project on GitHub that shows a custom Tag Library for JSF (Java Server Faces) that can be used to show/hide its children. There are several uses for this sort of custom component in your JSF web project but in this sample code I just read a property on the custom tag to determine whether to render all child elements or not. In a real application this logic could be swapped to implement application logic or perhaps call a Feature Toggle framework which decides whether to render elements within this tag or not.

Whilst the code is a complete sample JSF project, the main class is the CustomTag class.

This custom tag extends UIComponentBase to control the encodeChildren and getRendersChildren methods. In the getRendersChildren method we determine whether any child elements should be shown or not (again, this can be any logic you find useful, but for this example we just read a parameter on the XHTML). If it's determined that we DO want to display the children then we follow normal processing and pass the call on to the base class's getRendersChildren method. If it's determined we DO NOT want to display the child elements we instead return true, which tells the JSF framework that we want to render the children ourselves via our custom encodeChildren method (which then ignores the request).

public class CustomTag extends UIComponentBase {

    @Override
    public String getFamily() {
        return "com.test.common.CustomTag";
    }

    @Override
    public void encodeChildren(FacesContext arg0) throws IOException {
        // Deliberately render nothing - called when we claim rendering of the children.
        return;
    }

    @Override
    public boolean getRendersChildren() {
        // Replace this logic with whatever you need to determine whether to display children or not
        String ruleName = (String) getAttributes().get("showChildren");
        boolean enabled = "enabled".equalsIgnoreCase(ruleName);

        if (enabled) {
            // We want the children rendered, so tell JSF that we won't be doing it in this custom tag.
            return super.getRendersChildren();
        } else {
            // Tell JSF that we will render the children ourselves, but then don't (see encodeChildren).
            return true;
        }
    }
}

Next we define the new tag in the web.xml and custom.taglib.xml config files.

<!-- register custom tag -->
<context-param>
    <param-name>javax.faces.FACELETS_LIBRARIES</param-name>
    <param-value>/WEB-INF/custom.taglib.xml</param-value>
</context-param>

and

<?xml version="1.0"?>
<!DOCTYPE facelet-taglib PUBLIC
        "-//Sun Microsystems, Inc.//DTD Facelet Taglib 1.0//EN"
        "http://java.sun.com/dtd/facelet-taglib_1_0.dtd">
<facelet-taglib>
    <namespace>http://RichCustomTag.com</namespace>
    <tag>
        <tag-name>customTag</tag-name>
        <component>
            <component-type>com.test.common.CustomTag</component-type>
        </component>
    </tag>
</facelet-taglib>

We then use that new CustomTag within our XHTML page to wrap the elements that we want to toggle on/off depending on our logic.

Items below may or may not appear depending on whether they are turned on or off:
<!-- to show children set showChildren to 'enabled' else 'disabled' -->
<rh:customTag showChildren="enabled">
   If you see this then child items are enabled
    <h:inputText value="#{helloBean.name2}"></h:inputText>
    end of dynamic content.
</rh:customTag>

That’s it.

Check out the code on GitHub via https://github.com/RichHewlett/JSFTag_ToggleChildRendering.

Create New MSTest Projects for Pre .Net 4.5 in Visual Studio 2017


This post outlines the steps to create a new unit test project in Visual Studio 2017 using MS Test V1 and that targets .Net Frameworks prior to .Net 4.5.

Visual Studio 2017 onwards only has new unit test project templates for MS Test V2 and .Net 4.5. This is fine for new applications or ones targeting a recent .Net Framework version, but what if you have an existing solution targeting an older .Net version? Below shows the Unit Test Project template available for .Net 4.5, but as you can see in the second screenshot, for .Net 3.5 it's not available.

[Screenshot: New Project dialog targeting .Net 4.5, with a Unit Test Project available]

[Screenshot: New Project dialog targeting .Net 3.5, with no Unit Test Project available]

If you want to create a new unit test project targeting .Net 3.5/4 for example then follow the steps below:

Create a new MS Test V2 project targeting .Net Framework 4.5 as in the first screenshot above (i.e. File > New Project > Test > Unit Test Project targeting .Net 4.5).

Once it's created, change the project to target your earlier .Net Framework (e.g. .Net 3.5). This is done via the project properties page. Click Yes when prompted and Visual Studio will reload the project.

[Screenshot: changing the target framework in project properties]

Once it reloads, the project will complain about some references, which is fine as we're now going to remove the MS Test V2 assemblies.

[Screenshot: reference errors after retargeting]

Now remove the two project references to test assemblies.

[Screenshot: removing the MS Test V2 references]

Then add a reference to the MSTest v1 assembly, Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll.  This should be under Extensions in the Add Reference dialog. Alternatively you can browse to them on your hard drive at the following locations:

For pre .Net 4 projects : C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v2.0

For post .Net 4 projects: C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v4.0

If you are not running Visual Studio Enterprise, then swap Enterprise in the path for Community etc.

[Screenshot: Add Reference dialog]

Now rebuild the project and you’re all done.
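To sanity-check the new project, a minimal MSTest V1 test class like this sketch (class and method names are just illustrative) should compile and run against the older framework:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ExampleTests
{
    [TestMethod]
    public void Add_TwoNumbers_ReturnsSum()
    {
        // A trivial assertion, just to prove the MSTest V1 reference resolves and runs.
        Assert.AreEqual(4, 2 + 2);
    }
}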

Find assemblies loaded during debugging in Visual Studio


Sometimes you may get the following error when you are debugging a .Net app in Visual Studio:

“The breakpoint will not currently be hit. No symbols have been loaded for this document.”

Or you may have issues whereby the wrong version of the code appears to load at run time, or perhaps when debugging you get an error saying a referenced component cannot be located.

All these issues stem from not being able to see which components are actually being loaded during debug. If only there were a view in Visual Studio that gave you that info... well, this is Visual Studio, so they've already thought of that, and it's called the Modules view.

During debugging of your application, from the menu go to: Debug > Windows > Modules

[Screenshot: the Modules window in Visual Studio]

From this really useful view you can see each component that’s been loaded, the file path, the symbol file location, version information and more. This will show you if a component from the GAC has been loaded instead of your local file version, for example. It also enables you to find and load Symbol files for components where they have not been loaded automatically.

For information on the full functionality of this view check out the documentation here.

Platform Targeting in .Net


If you see one of the errors below in your .Net application, it is likely the result of your assembly having been compiled with the wrong target architecture flag set.

1) In this instance the assembly was accidentally compiled for x86 only and run within IIS in a 64bit process:

Unhandled Exception: System.BadImageFormatException: Could not load file or assembly “AssemblyName Version=1.0.0.0, Culture=neutral” or one of its dependencies. An attempt was made to load a program with an incorrect format.

(For info, IIS has an Application Pool setting under Advanced Settings that enables you to run the App Pool as a 32-bit process instead of 64-bit. This is off by default.)

2) Here a .Net application targeting a 64 bit processor was run on a 32bit system:

This version of ConsoleApplication1.exe is not compatible with the version of Windows you’re running. Check your computer’s system information to see whether you need a x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher.

If you are looking for a quick fix, just recompile your assembly with the “Any CPU” option as your platform target (see the Build tab of the project properties for this setting). If you want more information and explanation then read on.

[Screenshot: the Platform Target setting on the project properties Build tab]

When compiling .Net assemblies in Visual Studio (or via the command line) there are a few options for optimising for certain target platforms. If you know that your assembly needs to be deployed only to a 32bit environment (or that it needs to target a specific processor instruction set) then you can optimise the MSIL output (what the compiler produces ready for JIT compilation at run time) for that platform. This is not normally required, but perhaps, for example, you have to reference unmanaged code that targets a specific processor architecture.

Essentially this sets a flag inside the assembly metadata which is read by the CLR. If it is set incorrectly then this can result in the above errors, but also other odd situations. For example, if you compile your app to target “x86” and then reference an assembly targeting the “x64” platform, you will see an error at runtime due to this mismatch (BadImageFormatException). Running an “x86” application will still work on 64 bit Windows, but it will not run natively as 64bit; instead it runs under the WOW64 emulation mode, which enables x86 execution under 64 bit (with a performance overhead), which may or may not be a valid scenario in your case.

If you want to reproduce the situation try creating a new console application and in the build properties tab set Platform Target to “x86”. Then create a new Class Library project, set a reference to it in the Console Application, and then in the build properties tab set it to target the “x64” platform. Build and run the application which will show the above BadImageFormatException.
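If you also want to see at runtime what a process is actually running as, a couple of built-in properties make it visible. This is just a minimal sketch to drop into the console application above; nothing here is specific to any project:

using System;

class Program
{
    static void Main()
    {
        // True when the current process is running as 64-bit.
        Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
        // True when the OS is 64-bit, even if this process is 32-bit under WOW64.
        Console.WriteLine("64-bit OS:      " + Environment.Is64BitOperatingSystem);
        // Pointer size: 8 bytes in a 64-bit process, 4 in a 32-bit one.
        Console.WriteLine("IntPtr.Size:    " + IntPtr.Size);
    }
}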

The target platform for your project is set in the Project Properties tab in Visual Studio, under Build (see screenshot above). If you are compiling via the command line you use the /platform switch.
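For example, compiling directly with the C# compiler (the file name here is just illustrative):

csc /platform:anycpu32bitpreferred Program.cs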

“AnyCPU” became the default value from VS2010 onwards. “AnyCPU” as used up to .Net 4.0 means that if the process runs on a 32-bit system it runs as a 32-bit process and its MSIL is compiled to x86 machine code, and if it runs on a 64-bit system it runs as a 64-bit process and its MSIL is compiled to x64 machine code. Whilst this enables more compatibility with multiple target machines, it can lead to confusion or unexpected results when you consider that Windows duplicates system DLLs, configuration and registry views for 32bit and 64bit processes. So since .Net 4.5 (VS2012) there is a new default subtype of “AnyCPU” called “Any CPU 32-bit preferred”, which follows the above rules except that on a 64-bit system the process now runs as a 32-bit process (not 64-bit as before) and its MSIL is compiled to x86 code, not x64. This change essentially forces your process to run as 32bit on a 64bit machine unless you untick the option and turn off the default setting. The setting can be seen on the project properties Build tab as “Prefer 32-bit”.

[Screenshot: the “Prefer 32-bit” option on the project properties Build tab]

It is worth noting that you may see a confusing “Multiple Environments” option in Visual Studio, which can be automatically added after migrating solutions from older versions of Visual Studio (I believe it has been removed as an option in VS2015 onwards, but can hang around in older solutions). Use the Configuration Manager tab to check the setting for each assembly. Most developers will want to target “Any CPU”, which supports multiple target environments. If you are getting the above errors then use the steps below to check the assembly, and if it is incorrect try recompiling with “Any CPU” instead.

How to confirm a target platform for a given assembly:

So how do you confirm which processor architecture an assembly was built for? Well there are a few ways:

1) Using .Net reflection via a simple Powershell command:

[reflection.assemblyname]::GetAssemblyName("${pwd}\MyClassLibrary1.dll") | format-list

Running this command pointing to the assembly you want to check will produce output like this below (results truncated):

An assembly compiled to target AnyCPU:

Name                  : ClassLibrary2
Version               : 1.0.0.0
CodeBase              : file:///C:/TempCode/crap/pttest/x86/ClassLibrary2.dll
ProcessorArchitecture : MSIL
FullName              : ClassLibrary2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null

Same assembly but now compiled to target x86 :

Name                  : ClassLibrary2
Version               : 1.0.0.0
CodeBase              : file:///C:/TempCode/crap/pttest/x86/ClassLibrary2.dll
ProcessorArchitecture : X86
FullName              : ClassLibrary2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null

Same assembly again but now compiled to target x64 :

Name                  : ClassLibrary2
Version               : 1.0.0.0
CodeBase              : file:///C:/TempCode/crap/pttest/x86/ClassLibrary2.dll
ProcessorArchitecture : Amd64
FullName              : ClassLibrary2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
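If you would rather make the same check from code, the equivalent .Net reflection call can be used from C#. A small sketch, with the DLL path purely an example:

using System;
using System.Reflection;

class Program
{
    static void Main()
    {
        // Reads the assembly metadata without loading the assembly for execution.
        AssemblyName name = AssemblyName.GetAssemblyName(@"C:\TempCode\ClassLibrary2.dll");

        // MSIL = AnyCPU, X86 = 32-bit only, Amd64 = x64 only.
        Console.WriteLine(name.ProcessorArchitecture);
    }
}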

 

2) You can also see this information in a decompiler tool such as dotPeek, which shows the target architecture alongside each assembly.

[Screenshot: dotPeek showing x86, x64 and AnyCPU targeted assemblies]

3) Use the CorFlags Conversion Tool

The CorFlags Conversion Tool (CorFlags.exe) is installed with Visual Studio and easily accessed by the VS Command Line. It enables reading and editing of flags for assemblies.

CorFlags assemblyName

Assuming you have the .Net 4 version of the tool and an assembly targeting .Net 4 or above, you’ll see something like this (older versions do not have the 32BITREQ/32BITPREF flags, as per the .Net 4.5 change discussed above):

Microsoft (R) .NET Framework CorFlags Conversion Tool.  Version  4.6.1055.0
Copyright (c) Microsoft Corporation.  All rights reserved.

Version   : v4.0.30319
CLR Header: 2.5
PE        : PE32
CorFlags  : 0x1
ILONLY    : 1
32BITREQ  : 0
32BITPREF : 0
Signed    : 0

To interpret this output see the table below and check the PE value (PE32+ only on 64bit) and the 32bit Required/Preferred flags. It is also possible to update these flags using this tool.

Summary

Below is a simple table of results showing the impact of Platform Target on the compiled assembly and how it is run on a 32/64bit OS. As you can see, the 32-bit preferred flag results in an AnyCPU assembly being run as 32bit on a 64bit OS. The table also shows the values you get when you use the techniques above for determining the target platform of a given assembly.

 

| Platform Target in Visual Studio | PowerShell Result | dotPeek Result | CorFlags 32BITREQ | CorFlags 32BITPREF | CorFlags PE | Runs on 32bit OS as | Runs on 64bit OS as |
|---|---|---|---|---|---|---|---|
| AnyCPU (pre .Net 4) | MSIL | MSIL | 0 | 0 | PE32 | 32 bit process | 64 bit process |
| AnyCPU (.Net 4+), 32 bit NOT preferred | MSIL | MSIL | 0 | 0 | PE32 | 32 bit process | 64 bit process |
| AnyCPU (.Net 4+), 32 bit preferred (default) | MSIL | x86 | 0 | 1 | PE32 | 32 bit process | 32 bit process (under WOW64) |
| x86 | x86 | x86 | 1 | 0 | PE32 | 32 bit process | 32 bit process (under WOW64) |
| x64 | Amd64 | x64 | 0 | 0 | PE32+ | ERROR | 64 bit process |

In summary, there are a few simple rules to be aware of with using AnyCPU and the 32bit Preferred flag but essentially AnyCPU will enable the most compatibility in most cases.

Calculate a file hash without 3rd party tools on Windows & Linux.


If you need to generate a hash of a file (e.g. MD5, SHA256 etc.) then there are numerous 3rd party tools that you can download, but if you are restricted to built-in tools, or don’t do this often enough to justify installing something, there are built-in OS tools for Windows and Linux that can be used.

Windows:

For Windows there is “certUtil”, which can be used from the command prompt with the “-hashfile” option to generate a hash for a supplied file:

CertUtil [Options] -hashfile filePath [HashAlgorithm]

The [HashAlgorithm] options are MD2, MD4, MD5, SHA1 (default), SHA256, SHA384 and SHA512.

For example to get an MD5 hash of a file use:

CertUtil -hashfile C:\ExampleFile1.txt MD5

More documentation for CertUtil can be seen here.

For those with access to PowerShell v4 and above (Windows 8.1 & Windows Server 2012 R2) you can use the built-in Get-FileHash cmdlet like this:

Get-FileHash C:\ExampleFile1.txt  -Algorithm MD5 | Format-List

The algorithms supported are SHA1, SHA256 (default), SHA384, SHA512, MACTripleDES, MD5 & RIPEMD160.

For Powershell versions prior to V4 there are numerous scripts available on the web that will work out the hash for you using various methods.

Linux:

For Linux, use the hash command in the terminal that matches the algorithm you are looking for, i.e. for an MD5 hash use md5sum, or for a SHA512 hash use sha512sum.

For example:

md5sum /home/rich/Documents/ExampleFile1.txt 
sha1sum /home/rich/Documents/ExampleFile1.txt
sha512sum /home/rich/Documents/ExampleFile1.txt

 

 

Speed up a slow JSF XHTML editing experience in Eclipse or IBM RAD/RSA.


If you find yourself doing some JSF (Java Server Faces) development within the Eclipse, IBM RAD (Rapid Application Developer) or IBM RSA (Rational Software Architect) IDEs, you may find that the JSF editor runs slowly, with noticeable lag. This seems to be a particular problem on RAM-starved machines and/or older versions of the Eclipse/RAD IDEs. The problem (which can be intermittent) is very frustrating and can result in whole seconds going by after typing before your changes appear in the editor. It seems that the JSF code validator takes too long to re-validate the edited JSF code file. At one point this got so bad for our team that many would revert to making JSF changes in a text editor and then copy/paste the final code into the IDE.


Thankfully there is a workaround, and so that I don’t forget if I hit this problem again, I’m posting it here. The workaround (although sadly not a fix) is to use a different editor within the same IDE. If you right-click the JSF file you want to edit and use the pop-up menu to open it with the XML Editor instead of the XHTML Editor, you will find a much faster experience. Whilst this does remove some of the JSF/XHTML-specific validations, it still provides support for tags etc. and performs faster.

Should you wish to always use the XML Editor to edit XHTML files you can make this global change via the preferences. Go to General > Editors > File Associations > File Types list > select XHTML extension > click Add > Add XML Editor. Then in the associated editors list select the XML Editor and click the ‘Default’ button – thus making XML Editor the default for all XHTML files. Of course once this is done you can still click on individual XHTML files and right click to open in the original XHTML editor should you want to temporarily switch back for an individual file.

Hopefully this will prevent you pulling your hair out in frustration when editing XHTML files.

Building a Python Flask Web UI For Raspberry Pi Sure Elec LCD


In an earlier post I outlined how I set up a Sure Electronics LCD screen with my Raspberry Pi 3 using a Python driver. Whilst updating the LCD via the command line is immensely useful, I decided to build a UI to control the LCD and send messages to it. By using a browser-based UI I could update the LCD screen from anywhere. Essentially this was a chance to play with a Python web framework and write some code!

[Screenshot: the browser UI for sending messages to the LCD]

I’ve passed the UI’s URL round my family’s devices at home and they now send me messages whilst I’m in my study working/playing.

The end result can be found in my GitHub repo.

As my driver was in Python, and I’m enjoying coding in Python at the moment, I decided to use a Python web framework to serve the HTML/JavaScript UI and host RESTful services on the server side to accept LCD commands. After some reading I went with Flask. I could have gone with Django, but Flask seemed more appropriate for my needs; for a good comparison see this CodeMentor.io post. For a great tutorial on Flask check out this series by Miguel Grinberg and this great post by Scotch.io.

Building the server-side web framework was easy and logical in Flask, and I was able to get something set up in one file which served my needs. However, after reading some Flask best practices I spread my solution out into a more appropriate structure. Flask will seem familiar to web developers with experience of ASP.net MVC, Web API, Node/Express etc.: you define routes to handle incoming requests. The key aspects of my solution are outlined below. I am running the Flask server directly on my Raspberry Pi and using it to serve the pages and host the services for commanding the LCD screen.

To install Flask (on a Pi) first install Python Pip (a popular Python Package Manager) via “apt-get install python-pip” or “apt-get install python3-pip” (for a Python v3 specific Pip) and then install Flask via “pip install flask”.

Flask comes with a small lightweight development server which runs your app in Dev mode and also auto restarts after code changes. I found this fast and robust enough for my needs. 

Let's check out the main parts of the code:

run.py: This is the entry point for the app. When run, it calls run() on the app object, and here I have optionally passed the IP/port I want the app to listen on, which exposes the app to the internal network so I can connect from other machines on the network.

from app import app 
if __name__ == '__main__':
    app.run(host="192.100.100.100", port=5000)

app/__init__.py & config.py: This is the app initialisation code; it points to the config.py file where config settings for the app can be set.

app/views.py: This is the heart of the app. After importing the relevant Python components and instantiating the Smartie LCD driver (from the previous post), the routes for the app are defined.

@app.route("/")
def show_homepage():
    return "Home Page!"

@app.route("/lcd")
def show_lcdpage():
    name="Jeff"
    return render_template("lcd.html", name=name)

The route for root will just return the text “Home Page!”, whereas the route for /lcd calls render_template to return a templated HTML page (lcd.html), passing in any relevant data (e.g. “Jeff”, which is irrelevant in this example). Templates are covered shortly below.

@app.route("/lcd/clear", methods=["POST","GET"])
def display_clear():
    smartie1.clear_screen()
    return "Success"

@app.route("/lcd/displaymessage", methods=["POST"])
def display_message():
    if not request.json:
        abort(400)
    smartie1.clear_screen()
    smartie1.backlight_on()
    smartie1.write_lines_scroll(request.json['Lines'])
    return "Success" 

Any POST or GET on http://SERVERADDRESS:PORT/lcd/clear will result in the smartie driver's clear_screen method being called. A POST to “/lcd/displaymessage” will be validated to ensure that the request contains JSON data, and then the data will be passed to the driver for display.

/app/templates/lcd.html: This is the main HTML page that enables the user to type the messages to display.

[Screenshot: the lcd.html page in the browser]

The CSS and JavaScript used by this page are found in the static folder and referenced in the usual way:

<!-- CSS for our app -->         
 <link rel="stylesheet" href="/static/lcd.css"/>

<!-- JS for our app --> 
<script type="text/javascript" src="/static/lcd.js" charset="utf-8"></script>

So we need to ensure that the Flask server returns these static files, but we don't want to define a specific app.route for each one, so instead we use this one in our views.py:

@app.route('/<path:filename>')
def send_file(filename):
    return send_from_directory('/static', filename)

This basically states that any requests for a file path are sourced from the /static folder directly. So we can just place any files in the static folder that we want to be served directly (the CSS and JavaScript files in our case).

/app/static/lcd.js:  From this JavaScript code we can consume the services hosted by Flask for our application. It’s using the XMLHttpRequest object to make AJAX requests to the Flask server. The SendCommand function takes callback methods which will be called on success or error.

function SendCommand(url, httpVerb, data, successCallback, errorCallback){

  // Serialise the payload if one was supplied
  var dataToSend = null;
  if (data != null) {
      dataToSend = JSON.stringify(data);
  }

  var request = new XMLHttpRequest();
  request.open(httpVerb, url, true);
  request.setRequestHeader('Content-Type', 'application/json; charset=UTF-8');

  request.onload = function() {
      if (this.status >= 200 && this.status < 400) {
          // Success - hand any response body back to the caller
          var returnedData = null;
          if (this.response != null) {
              returnedData = this.response;
          }
          successCallback(returnedData, this.status);
      }
      else {
          // Error returned from server
          errorCallback("Error response returned from server", this.status);
      }
  };

  request.onerror = function() {
      errorCallback("Error contacting server", this.status);
  };

  if (dataToSend != null) {
      request.send(dataToSend);
  }
  else {
      request.send();
  }
}

That’s mostly it. Run the app by running the run.py module (e.g. in the Python IDLE or a terminal) and direct your browser to http://SERVERADDRESS:5000/lcd.

The code for my Python driver and this web app is available on GitHub here https://github.com/RichHewlett/smartie and https://github.com/RichHewlett/LCD-Smartie-Web.