Grunt in Visual Studio

This post walks through the basics of using Grunt in Visual Studio 2013. Specifically, we will add two compile-time tasks: one for linting TypeScript files, and another for linting SCSS. You are not limited to using just linters with Grunt; in fact, you can do just about anything.

Grunt is particularly good at running repetitive tasks, and is widely used in the non-.NET world via the command line. Other common tasks include minification, compilation, and running unit tests. Fortunately for us, some unofficial tools have been developed at Microsoft (specifically, by the awesome Mads Kristensen and his dedicated team) which enable us to use Grunt directly inside Visual Studio.

Grunt is JavaScript configuration based, meaning you configure Grunt via a special file called Gruntfile.js, which you add to your project.

This post specifically targets Visual Studio 2013, because at the time of writing Visual Studio 2015 hasn’t been released yet (it’s actually at the RC stage). As Microsoft tooling is evolving rapidly at the moment, I’ll avoid covering 2015 for now (however, I may revise this post after Visual Studio 2015 is released).

Prerequisites

NodeJS and npm

Grunt is built on top of NodeJS and is installed via the Node Package Manager (npm). Both of these tools need to be installed before you can use Grunt. Check whether you already have NodeJS installed, and/or download the latest version. The installation is a trivial process, so I won’t document it fully here, but please ensure that the “npm package manager” feature is installed. This feature is selected by default, and it is needed to install Grunt (and indeed, any other package).


Grunt

Before we can use Grunt, we must install it globally.

Open a command prompt and type

npm update -g npm

As advised by the product documentation, this ensures that you’ve got the latest version of npm before continuing.  Now, run the following command;

npm install -g grunt-cli

You might want to get reacquainted with running commands in this way, because with the rapid pace of change in the industry, the development of visual tools is always going to lag behind (even with Microsoft’s new rapid development and deployment ethos, but that’s a whole other discussion).


Task Runner Explorer

This tool will run our Grunt tasks for us. It will parse our Gruntfile (which we will create shortly), list our tasks, let us run them individually, and let us bind them to “IDE events” (Before Build, After Build, Clean & Solution Open).

To install Task Runner Explorer, go to Tools > Extensions and Updates and search for Task Runner Explorer. The first result should be the one created by Microsoft Corp.


Local Grunt

Now that we have the necessary tooling installed, we need to add Grunt to our actual project (along with any additional packages we want to use). Packages are controlled using a special file called package.json. This file contains metadata about your project and the packages it’s dependent on (and potentially a bunch of other stuff that is out of the scope of this tutorial).

Create a file called package.json at the root of your project, and add the following code;

{
    "name": "Grunt101",
    "version": "1.0.0",
    "description": "Grunt101",
    "author": "Jon Preece",
    "devDependencies": {
        "grunt": "~0.4.5",
        "grunt-tslint": "~2.3.1-beta",
        "grunt-scss-lint": "~0.3.6"
    }
}

The file is pretty self-explanatory, but if you want a little more information with regards to versioning, see this useful post: Node and npm Version Numbering: Guide and Best Practices.

To save time later, I’ve also included the other development dependencies.  In this case, our TypeScript linter (TSLint) and SCSS linter (scss-lint).  We will configure these shortly.

To install the dependencies, open a command prompt, change directory to the one containing your newly created package.json, and run the following command;

npm install

This will result in a new directory being created, called node_modules, which will contain each dependency.


Gruntfile.js

As previously mentioned, Grunt is a JavaScript based task runner.  Grunt is configured/controlled using a file called Gruntfile.js.  This file is responsible for defining the configuration for each task, loading the npm tasks, and registering the tasks.  You can have many tasks and many aliased groups that run multiple tasks.

For this tutorial we are going to validate our TypeScript and SCSS code quality using linters. Specifically, we will use the grunt-tslint and grunt-scss-lint packages (you defined and installed these earlier via package.json).

Add the following configuration;

module.exports = function (grunt) {
    'use strict';

    grunt.initConfig({
        scsslint: {
            allFiles: [
              '*.scss'
            ]
        },
        tslint: {
            options: {
                configuration: grunt.file.readJSON("tslint.json")
            },
            files: {
                src: ['*.ts']
            }
        }
    });

    grunt.loadNpmTasks('grunt-scss-lint');
    grunt.loadNpmTasks('grunt-tslint');

    grunt.registerTask('default', ['scsslint', 'tslint']);
};

Again, the beauty of Node, and Grunt especially, is that it’s pretty simple to understand what’s going on, even for a beginner.

SCSS Lint Task

The minimum configuration required is the path to your SCSS files, using the property allFiles. In the given file, we just look in the root directory of our application for SCSS files, but you can make this more sophisticated, and recursive, as follows;

Content/**/*.scss
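In context, the scsslint section of the Gruntfile would then look something like this (assuming, purely for illustration, that your styles live under a Content folder):

scsslint: {
    allFiles: [
      'Content/**/*.scss'
    ]
}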

There’s a comprehensive tutorial on how to customize this further to meet your needs over on GruntJS.com.

TS Lint Task

Before you can lint a TypeScript file, you must define some rules that you want to enforce throughout your code-base.  You can push this configuration to source control and share with your team for maximum consistency.

If you have Web Essentials installed (and I highly recommend all web developers install this excellent tool) then you can quickly generate a configuration file by clicking Web Essentials > Edit Global TSLint Settings (tslint.json).  Clicking this drops a tslint.json file into your Documents folder and opens it directly in Visual Studio, where you can make edits.  I highly recommend closing the file, moving it into your solution folder, and adding it as part of your project for easy access later.

If you don’t use Web Essentials (are you insane?), you can grab a sample configuration file directly from the GitHub source code repository, and again I recommend that you add it as part of your project. I won’t go into the ins and outs of each of the TypeScript linting rules, that’s for another time. Just know that you can change them to your heart’s content.
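For reference, here is a minimal tslint.json sketch covering just the rules we will trip over later in this post (rule names as they stood at the time of writing; the sample file linked above contains the full set):

{
    "rules": {
        "eofline": true,
        "no-string-literal": true,
        "no-trailing-whitespace": true,
        "semicolon": true,
        "use-strict": [true, "check-module"]
    }
}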

The files property works exactly the same as for the SCSS linter.  You pass in a string that represents the path of your TypeScript files.


Final Steps

Add some code

There’s just one more step we need to take before running our tasks, we need some code to test!

Add a new SCSS file to your project, and a new TypeScript file. Add the following deliberately ugly code to the TypeScript file;

module SomeModule {

    export class SomeClass {

        str: string;

        constructor() {
            this["str"] = "Hello, World"
        }
    }
}

The main issues with this code are trailing whitespace, no "use strict", missing semi-colons and access to a property via a string literal.  We will let Grunt tell us about these shortly.

Open the task runner

Open the task runner by going to View > Other Windows > Task Runner Explorer.

Select your project (if it isn’t selected by default) and click the refresh button.  Your Gruntfile.js configuration file should be loaded and the alias tasks and individual tasks should be displayed.

You can manually run a task (or set of tasks) by right clicking and selecting Run.  If all went well, you should see the following output;

C:\Users\Jon\Documents\Visual Studio 2013\Projects\Grunt101> cmd.exe /c grunt -b "C:\Users\Jon\Documents\Visual Studio 2013\Projects\Grunt101" --gruntfile "C:\Users\Jon\Documents\Visual Studio 2013\Projects\Grunt101\gruntfile.js" default
Running "scsslint:allFiles" (scsslint) task
Running scss-lint on allFiles
>> 1 file is lint free
Running "tslint:files" (tslint) task
>> app.ts[14, 3]: file should end with a newline
>> app.ts[9, 19]: object access via string literals is disallowed
>> app.ts[1, 1]: trailing whitespace
>> app.ts[3, 1]: trailing whitespace
>> app.ts[5, 1]: trailing whitespace
>> app.ts[9, 42]: missing semicolon
>> app.ts[2, 2]: missing 'use strict'
>> 7 errors in 1 file
Warning: Task "tslint:files" failed. Use --force to continue.
Aborted due to warnings.
Process terminated with code 3.

Success! Wait, what? Oh yes, the task has failed because our TypeScript file has lots of linting errors. Open up the TypeScript file and use the output messages to fix each problem. Once done, re-run the task by right clicking it and selecting Run. Once all issues are resolved, you should get the following;

C:\Users\Jon\Documents\Visual Studio 2013\Projects\Grunt101> cmd.exe /c grunt -b "C:\Users\Jon\Documents\Visual Studio 2013\Projects\Grunt101" --gruntfile "C:\Users\Jon\Documents\Visual Studio 2013\Projects\Grunt101\gruntfile.js" default
Running "scsslint:allFiles" (scsslint) task
Running scss-lint on allFiles
>> 1 file is lint free
Running "tslint:files" (tslint) task
>> 1 file lint free.
Done, without errors.
Process terminated with code 0.
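For reference, the cleaned-up TypeScript file ends up looking something like this (note the "use strict" at the top of the module body, the dot notation property access, the semicolons, and the trailing newline):

module SomeModule {
    "use strict";

    export class SomeClass {

        str: string;

        constructor() {
            this.str = "Hello, World";
        }
    }
}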

Bindings

Right clicking and clicking Run is a little too manual for my liking; this is an automation tool, isn’t it? Thankfully, the Task Runner Explorer provides you with Bindings. Basically, you can hook in to events in the IDE and have your tasks run automatically when one of these events happens.

Available bindings;

  • Before Build
  • After Build
  • Clean
  • Solution Open

To bind a task to a binding, right click the task (or again, the aliased task) and select Bindings > {An IDE Event of your choice}. In my case, I’ve chosen Before Build, so when I press Ctrl+Shift+B the tasks are run automatically.

Note that as far as I know, you can’t fail the build if a task fails at this point.  No doubt this is an upcoming feature.

Summary

Grunt is a brave new world for most .NET developers.  Tooling is certainly in its infancy (within Visual Studio) at this point, but the awesome power of NodeJS, npm and Grunt can be harnessed with some useful extensions.

Why I don’t want to be a front-end web developer

The job title isn’t representative of my skill set

As a front-end developer, you portray yourself as having a narrow set of skills. This probably isn’t the case.

I did a quick search on a popular job forum for front-end developer jobs, and there is a clear recurring theme as to what skills are required to be a mid-level/senior front-end developer;

  • (X)HTML (5), CSS, SASS/SCSS, LESS.
  • Backbone, Angular, Knockout.
  • Responsive web design (I’m assuming Bootstrap knowledge, Foundation etc).
  • Adobe Photoshop, Magento.
  • Knowledge of source control and some form of client side unit testing.

My perception of these skills;

  • HTML has remained relatively unchanged since it was invented in 1990. If you don’t agree, just take a look at the source code for the first web page. HTML is easy, whatever the flavour. That’s actually its greatest strength; there’s no barrier to entry for new developers.
  • CSS is easy to learn, impossible to be great at. Thankfully tools such as SASS/SCSS and LESS are eliminating the pain. A web developer of any skill level and experience can learn to use these CSS pre-processors in 60 minutes or less. They’re simple. They just work.
  • If you’re good at responsive web design, this is a valuable skill. Thankfully, if, like me, you’re not strong with design… front-end frameworks, such as Bootstrap and Foundation, help most developers sweep this skill gap under the rug.
  • Photoshop is in a world of its own. Its ridiculous level of complexity is only matched by its mind boggling feature set. Kudos if you can even get it to install and run.
  • Source control. All you need to know: git push and git pull.

These are, of course, tongue-in-cheek observations. What I’m trying to say is that a full-stack developer can be strong in all of these areas with minimal exposure and experience. These are not specialist skills. This generalisation I think applies to JavaScript development too. After 3 months of constant exposure to AngularJS, for example, you should have a good (if high level) understanding of how it works, how to use it, when to use it, and most importantly, when not to use it.

I don’t want to be a front-end developer because I have a broader range of skills and I don’t want to undersell myself.

From a consultant’s perspective

Portraying yourself as a front-end developer might make sense in the short term. Developers in general are in high demand at the minute. In the UK especially there is a clear skills shortage, so presenting the image of being an expert or specialist in this field might help you land a lucrative role.

Rather than pitching yourself as a front-end developer, however, I see more value in pitching yourself as a front-end developer with extensive full-stack experience. That way you’re still ticking the boxes on the potential employer’s checklist, whilst making clear that your skill set goes much deeper.

Front-end development is moving too fast

Sorry, but it is. It feels like every day there is some new shiny JavaScript framework or “must have” tool that I “must have” (although whether I really “must have” it is often debatable). The web is becoming more and more mature as a platform, and I see this trend continuing, but we’re still some way away from being there. Yesterday all the cool kids were using PHP, then ASP .NET MVC, then AngularJS/KnockoutJS/WhateverJS. Tomorrow, ReactJS will (probably) be the framework of choice (or perhaps Aurelia will emerge as a viable competitor).

There is also an endless list of web development tools: Visual Studio, Code, Sublime, WebStorm, Dreamweaver (joking, who uses that?!), Eclipse, NetBeans, arguably Notepad++, Vim, Emacs… and infinitely more.

The net result is that I’ve spent literally hundreds of man hours (and probably a decent amount of money too) learning FrameworkX, just for it to be superseded or even die a painful death overnight (Silverlight… remember that? AngularJS 1.x, according to many). It often feels like despite my best efforts, and regardless of how many hours I put into my own education, my skill level is actually declining.

The pace needs to slow down and things need to stabilise before I can even consider specialising as a front-end developer.

I don’t want to be a front-end developer because I can’t (and don’t want to) half kill myself trying to keep up with the trend setters.

Front-end developers are probably not designers

I’ve found through experience that technical people generally fall into one of two categories, though I accept that this is not true in all cases;

  1. You are a logical thinker and prefer to write code, or
  2. You understand how to make things look beautiful.

Typically you don’t get too many coders with excellent design skills and vice versa.

Speaking personally, I’m a strong coder and always have been. I can scrape by when it comes to design, usually by utilising frameworks such as Bootstrap or Foundation, but I don’t excel at it.

There is a perception that front-end developers are good coders and good at design (take a look at the aforementioned job advertisement skill list, specifically the mention of Adobe Photoshop knowledge). Employers are hiring front-end guys and expecting them to be good at writing code and at designing pretty websites. I think this is a mistake and the roles should be separate.

I don’t want to be a front-end developer because I’m not a strong enough designer, and don’t claim to be. Employers have unrealistic expectations about what they will get from a front-end developer.

Front-end developers earn less money

It’s true.

(Salary comparison chart: Developer vs Front-end Developer)

£10k difference. That’s quite a gap. And that’s just one example.

I don’t want to be a front-end web developer because I want to reach my full earnings potential.

Summary

I don’t want to be a front-end developer because I don’t want to undersell myself, because I want to reach my full earnings potential, and because I don’t want to half kill myself trying to keep up with industry trend setters.

Agree or disagree… leave a comment.

What I learnt from using TypeScript “for real”

I recently completed my first commercial greenfield project using TypeScript over plain old JavaScript throughout, and there were some frustrations along the way.


TL;DR

  • TypeScript is awesome, for sure, but there need to be improvements to the tooling to streamline the experience.
  • TypeScript is strongly typed of course, but it doesn’t force you to code in this manner, which can result in shoddy code.
  • Tweaking is required to make bundling work properly.


Tooling Frustrations

When I started the project, I was using Visual Studio 2012 Update 4 with TypeScript 1.0 pre-installed.  I was brought in to work on this project on a short term basis, and this is how the machine was set up when I arrived.  Normally I wouldn’t choose to use Visual Studio 2012 (I like to use the latest tools where possible) but I decided to just go with it and make do.  Once deficiencies in tooling became clear, I did eventually push for an update to Visual Studio 2013 Update 4, which in fairness resolved most of my frustrations.

I have a license for JetBrains ReSharper, which has support for TypeScript 1.4 (and indeed that is the default setting when using ReSharper 9).  This was actually my first tooling problem.

When using Visual Studio 2012, you are using TypeScript 1.0 (unless you have proactively updated to version 1.4).  Naturally 1.0 is an older version and doesn’t support language features such as protected methods or ECMAScript 6 template strings.  ReSharper, however, does understand these features and offers suggestions to use them.  Trying to use these features results in build errors, which of course was confusing because I didn’t understand that I was actually using version 1.0 at the time (does it say anywhere? not to my knowledge).

Due to the aforementioned old version of TypeScript, I also encountered an issue with TypeScript definition files. These files are designed not only to provide IntelliSense in Visual Studio when working with third party libraries such as AngularJS, but are also used during the build process for compile time checking. Because the TypeScript definition files (sourced using NuGet via GitHub) were based on TypeScript 1.4, there were, again, hundreds of errors at compile time. Interestingly, however, with version 1.0 it was actually possible to ignore these build errors and still generate the compiled JavaScript files. Only after upgrading to version 1.4 later on did it become mandatory to fix these build issues.

Compile on save does what it says on the tin. You write some TypeScript code, click Save, and the JavaScript file is generated automatically. Well, actually, no. There were dozens of occasions when this just didn’t work. I had to literally open the scripts folder, delete the JavaScript files manually, re-open Visual Studio, and save the TypeScript file again before it would actually recompile the JavaScript file. This was tedious and boring, and didn’t seem to be resolved by upgrading to Visual Studio 2013 Update 4 (although it did become less frequent). I have Web Essentials installed, which gives you a side by side view of the TypeScript and JavaScript files, a feature I originally turned off. I did eventually turn it back on so I could see at a glance whether the files had been recompiled before refreshing the web browser.

On a side note, the tooling provided by Google Chrome developer tools is excellent.  You can debug the actual TypeScript source directly in the browser, and if you inadvertently set a breakpoint on a JavaScript file, Chrome will navigate you back to the TypeScript file.  Nice.


Bundling in MVC

The project I was working on used some features of ASP .NET MVC and Web API. One such feature was bundling and minification, which I always use to improve performance and reduce server load. I wrote many base classes and interfaces, as you would expect, as both are fully supported in TypeScript. However, what I later realised (perhaps I should have realised at the start, but hey) was that order is important. The MVC bundler doesn’t care about order, so there’s a big problem here. After switching on bundling, I went from 0 runtime errors to at least 20. There are two approaches I could have taken to resolve the problem;

  1. Create two separate bundles, one for base classes, the other for derived classes (I don’t care for this)
  2. Create a custom orderer and use naming conventions (prefix base class files with the name “Base”) to ensure order.

I went for the latter option. This involved having to rename all my base class files and interfaces (granted, interfaces don’t actually generate any JavaScript, but I wanted to keep the naming consistent) and writing a custom convention-based orderer. Neither of these things was challenging, just time consuming.
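For illustration, here is a minimal sketch of such a convention-based orderer (the class name, the "Base" file name prefix, and the bundle path below are my own assumptions; IBundleOrderer comes from System.Web.Optimization):

using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Web.Optimization;

public class BaseFirstBundleOrderer : IBundleOrderer
{
    public IEnumerable<BundleFile> OrderFiles(BundleContext context, IEnumerable<BundleFile> files)
    {
        // Emit files whose names start with "Base" first, so base classes are
        // defined before the derived classes that depend on them.
        return files.OrderBy(f => Path.GetFileName(f.IncludedVirtualPath).StartsWith("Base") ? 0 : 1);
    }
}

You would then attach it when registering the bundle, e.g. new ScriptBundle("~/bundles/app") { Orderer = new BaseFirstBundleOrderer() }.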


TypeScript is inconsistent

I can only speak from my own experiences here. I don’t claim to be a TypeScript or JavaScript master; I’m just a normal developer, as experienced as anybody else.

I love the strongly typed nature of TypeScript, that’s the feature that appeals most to me.  I do however have the following qualms;


TypeScript does not force you to use the language features of TypeScript

This is either a good thing, or a bad thing.  I can’t decide.  To be honest, I wish TypeScript did force you to use its own paradigm.  You can easily write pure JavaScript inside a TypeScript file and it will all compile and “just work”.  This is a bit of a downfall for me because I’ve always found JavaScript to be frustrating at best (can you think of an application you’ve written in JavaScript that hasn’t eventually descended into mayhem?).

What I mean is this: I can create a local variable and assign it the type Element;

var element : Element = angular.element('.myelement')

The type constraint is redundant, and ReSharper makes a song and dance about the fact, basically forcing you to get rid of it (yeah, I know I could turn this off, but at the end of the day ReSharper is right). I can give function parameters type declarations, but I don’t have to (I did eventually go back and fill these in where I could). I know that I should use type declarations, and to be honest I wish I did use this feature more, but I’m lazy. Next time I’ll be sure to do this from the start, as I think it adds further structure to the code.
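To illustrate the point with a trivial, made-up example: both of the following compile, but only the second gives you IntelliSense and compile time checking at the call site;

// No type declarations; 'name' is implicitly of type any, so mistakes go unnoticed.
function greetLoose(name) {
    return "Hello, " + name;
}

// With type declarations; passing anything other than a string is a compile error.
function greetStrict(name: string): string {
    return "Hello, " + name;
}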


TypeScript doesn’t know it all

The project I was developing was built in conjunction with AngularJS. I’ve had an up-and-down relationship with AngularJS to say the least, but I decided to fully embrace it this time and not switch between jQuery and Angular (yeah, I’m aware that Angular uses jQuery lite under the hood; I’m referring to not switching between using $() and angular.element()). I made heavy use of Angular scopes, and I often had a dozen or more child scopes running on any one page. As a result, I had lots of code that looked roughly like this;

var scope = angular.element('.myselector').first().children('.mychildren').scope()

That’s perfectly valid code. Even though I have added the appropriate type definition files and referenced them at the top of the file, TypeScript still insists that scope() is not valid in this context, and I get a compilation error. So I’d have to exploit the dynamic nature of JavaScript to get around this;

var children = angular.element('.myselector').first().children('.mychildren');
var scope = children.scope();

OK, it’s not a big deal. The scope variable is now of type any, so there’s no IntelliSense and no compile time checking. But hey, wait, isn’t that the main point of using TypeScript in the first place? Now if I pass scope to another function, I’ll exacerbate the problem (because the parameter will have to be of type any too) and before you know it, I’m just writing JavaScript again. Fail.
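For what it’s worth, one workaround (a sketch, assuming the angular.d.ts definition file is referenced, which declares the ng.IScope interface) is to assert the type yourself and claw back the IntelliSense;

var children = angular.element('.myselector').first().children('.mychildren');

// Cast to any to silence the compiler, then assert the real type so that
// downstream code keeps its IntelliSense and compile time checking.
var scope = <ng.IScope>(<any>children).scope();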

Positive note

I’ve ranted about what I don’t like about TypeScript, and the frustrations I’ve encountered so far, but honestly I can’t sing its praises enough. I estimate that so far TypeScript has saved me at least 3 million hours of effort. OK, perhaps a slight exaggeration, but you get the point. Writing TypeScript is easy and familiar because I am comfortable with JavaScript and with strongly typed languages such as C#. I did spend perhaps a few hours reading the documentation and experimenting in the excellent TypeScript playground, but generally, thanks to previous experience with the aforementioned languages, I found that I “just know” TypeScript, and it has a very smooth learning curve.


Summary

TypeScript is awesome, there is no doubt in my mind about that. TypeScript saves you all the pain that naturally comes with JavaScript, and it’s only going to get better over time. The tooling issues and inconsistencies are relatively insignificant once you get used to them.

WCF custom authentication using ServiceCredentials

The generally accepted way of authenticating a user with WCF is with a user name and password, via the UserNamePasswordValidator class. It’s so common that even MSDN has a tutorial, which is notable given that the MSDN documentation for WCF is seriously lacking at best. The username/password approach does what it says on the tin: you pass a username and password credential from the client to the server, do your authentication, and throw an exception only if there is a problem. It’s a primitive approach, but it works. But what about when you want to do something a little less trivial than that? ServiceCredentials is probably what you need.

Source code for this post is available on GitHub.

Scenario

I should preface this tutorial with a disclaimer, and this disclaimer is just my opinion. WCF is incredibly poorly documented and at times counter-intuitive. In fact, I generally avoid WCF development like the black plague, preferring technologies such as Web API. The saving grace of WCF is that you have full control over a much more substantial set of functionality, and you’re not limited by REST but empowered by SOAP. WCF plays particularly nicely with WPF, my favourite desktop software technology. I’ve never used WCF as part of a web service before, and I doubt I ever will.

Tangent aside, sometimes it’s not appropriate to authenticate a user with just a username and password. You might want to pass along a user name and a license key, along with some kind of unique identification code based on the hardware configuration of the user’s computer. Passing along this kind of information in a clean way can’t be done with the simple UserNamePasswordValidator, without resorting to some hacky kind of delimited string approach (“UserName~LicenseKey~UniqueCode”).

So this is what we will do for this tutorial: pass a User Name, License Key and Unique Code from the client to the server for authentication and authorization. And for security, we will avoid using WsHttpBinding and instead create a CustomBinding and use an SSL certificate (PFX on the server, CER on the client). The reasons for this are discussed throughout the tutorial, but primarily it’s because I’ve encountered so many problems with WsHttpBinding when used in a load balanced environment that it’s just not worth the hassle.

As a final note, we will also go “configuration free”. All of this is hard-coded, because I can’t assume that if you use this code in a production environment you will have access to the machine certificate store, which a lot of web hosting providers restrict access to. As far as I know, the SSL certificate cannot be loaded from a file or a resource using the Web.config.

Server Side Implementation

Basic Structure

All preamble aside, let’s dive straight in. This tutorial isn’t about creating a full featured WCF service (a quick Google of the term “WCF Tutorial” presents about 878,000 results for that), so the specific implementation details aren’t important. What is important is that you have a Service Contract with at least one Operation Contract, for testing purposes. Create a new WCF Service Application in Visual Studio, and refactor the boilerplate code as follows;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(int value);
}

public class EchoService : IEchoService
{
    public string Echo(int value)
    {
        return string.Format("You entered: {0}", value);
    }
}

And rename the SVC file to EchoService.svc.

Open up the Web.config file and delete everything inside the <system.serviceModel> element.  You don’t need any of that.

NuGet Package

It is not exactly clear to me why, but you’ll also need to install the NuGet package Microsoft ASP.NET Web Pages (Install-Package Microsoft.AspNet.WebPages).  I suppose this might be used for the WSDL definition page or the help page.  I didn’t really look into it.


Hosting In Local IIS (Internet Information Services)

I’m hosting this in IIS on my local machine (using a self-signed certificate), but I’ve thoroughly tested this on a real server using a “real” SSL certificate, so I’ll give you some helpful hints about that as we go along.

First things first;

  1. Open IIS Manager (inetmgr)
  2. Add a new website called “echo”
  3. Add a HTTP binding with the host name “echo.local”
  4. Open up the hosts file (C:\Windows\System32\drivers\etc\hosts) and add an entry for “echo.local” pointing at IP address 127.0.0.1
  5. Use your favourite self-signed SSL certificate creation tool to generate a certificate for CN=echo.local (see another tutorial I wrote that explains how to do this). Be sure to save the SSL certificate in PFX format; this is important for later.
  6. The quickest way I’ve found to generate the CER file (which is the certificate excluding the private key, for security) is to import the PFX into the Personal certificate store for your local machine.  Then right click > All Tasks > Export (excluding private key) and select DER encoded binary X.509 (.CER).  Save to some useful location for use later.  Naturally when doing this “for real”, your SSL certificate provider will provide the PFX and CER (and tonnes of other formats) so you can skip this step.  This tutorial assumes you don’t have access to the certificate store (either physically or programmatically) on the production machine.
  7. DO NOT add a binding for HTTPS unless you are confident that your web host fully supports HTTPS connections.  More on this later.
  8. Flip back to Visual Studio and publish your site to IIS.  I like to publish in “Debug” mode initially, just to make debugging slightly less impossible.


Open your favourite web browser and navigate to http://echo.local/EchoService.svc?wsdl. You won’t get much of anything at this time, just a message saying that service metadata is unavailable and instructions on how to turn it on. Forget it, it’s not important.

Beyond UserNamePasswordValidator

Normally at this stage you would create a UserNamePasswordValidator, add your database/authentication/authorization logic, and be done after about 10 minutes of effort. Well, forget that; you should expect to spend at least the next hour creating a myriad of classes and helpers, authenticators, policies, tokens, factories and credentials. Hey, I never said this was easy, just that it can be done.

Factory Pattern

The default WCF Service Application template you used to create the project generates a ServiceHost directive with a Service attribute that points to the actual implementation of our service, the guts. We need to change this to use a ServiceHostFactory, which will spawn new service hosts for us. Right click the EchoService.svc file, select View Markup, and change the Service attribute to Factory, and EchoService to EchoServiceFactory;

//Change 
Service="WCFCustomClientCredentials.EchoService"

//To
Factory="WCFCustomClientCredentials.EchoServiceFactory"
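For reference, the resulting markup looks something like this (the Language and CodeBehind attributes shown here are from the default template and will vary with your project);

<%@ ServiceHost Language="C#" Debug="true"
    Factory="WCFCustomClientCredentials.EchoServiceFactory"
    CodeBehind="EchoService.svc.cs" %>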

Just before we continue, add a new class to your project called EchoServiceHost and derive from ServiceHost.  This is the actual ServiceHost that was previously created automatically under the hood for us.  We will flesh this out over the course of the tutorial.  For now, just add a constructor that takes an array of base addresses for our service, and which passes the type of the service to the base.

public class EchoServiceHost : ServiceHost
{
    public EchoServiceHost(params Uri[] addresses)
        : base(typeof(EchoService), addresses)
    {

    }
}

Now add another new class to your project, named EchoServiceFactory, and derive it from ServiceHostFactoryBase. Override CreateServiceHost and return a new instance of EchoServiceHost with the appropriate base address.

public override ServiceHostBase CreateServiceHost(string constructorString, Uri[] baseAddresses)
{
    return new EchoServiceHost(new[]
    {
        new Uri("http://echo.local/")
    });
}

We won’t initialize the ServiceHost just yet; we’ll come back to that later.

Custom ServiceCredentials

ServiceCredentials has many responsibilities, including serialization/deserialization and authentication/authorization. Not to be confused with ClientCredentials, which has the additional responsibility of generating a token containing all the fields to pass to the service (User Name, License Key and Unique Code). There is a pretty decent tutorial on MSDN which explains some concepts in a little more detail than I will attempt. The ServiceCredentials will (as well as all the aforementioned things) load in our SSL certificate and use it (via the private key) to verify that the certificate passed from the client is valid, before attempting authentication/authorization. Before creating the ServiceCredentials class, add each of the following;

  1. EchoServiceCredentialsSecurityTokenManager which derives from ServiceCredentialsSecurityTokenManager.
  2. EchoSecurityTokenAuthenticator which derives from SecurityTokenAuthenticator.

Use ReSharper or Visual Studio IntelliSense to stub out any abstract methods for the time being.  We will flesh these out as we go along.

You will need to add a reference to System.IdentityModel, which we will need when creating our authorization policies next.

Now create the EchoServiceCredentials class itself, deriving from ServiceCredentials, and flesh it out as follows;

public class EchoServiceCredentials : ServiceCredentials
{
    public override SecurityTokenManager CreateSecurityTokenManager()
    {
        return new EchoServiceCredentialsSecurityTokenManager(this);
    }

    protected override ServiceCredentials CloneCore()
    {
        return new EchoServiceCredentials();
    }
}

If things are not clear at this stage, stick with me… your understanding will improve as we go along.

Namespaces and constant values

Several namespaces are required to identify our custom token and its properties.  It makes sense to stick these properties all in one place as constants, which we will also make available to the client later.  The token is ultimately encrypted using a Symmetric encryption algorithm (as shown later), so we can’t see the namespaces in the resulting SOAP message, but I’m sure they’re there.

Create a new class called EchoConstants, and add the following;

public class EchoConstants
{
    public const string EchoNamespace = "https://echo/";

    public const string EchoLicenseKeyClaim = EchoNamespace + "Claims/LicenseKey";
    public const string EchoUniqueCodeClaim = EchoNamespace + "Claims/UniqueCode";
    public const string EchoUserNameClaim = EchoNamespace + "Claims/UserName";
    public const string EchoTokenType = EchoNamespace + "Tokens/EchoToken";

    public const string EchoTokenPrefix = "ct";
    public const string EchoUrlPrefix = "url";
    public const string EchoTokenName = "EchoToken";
    public const string Id = "Id";
    public const string WsUtilityPrefix = "wsu";
    public const string WsUtilityNamespace = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd";

    public const string EchoLicenseKeyElementName = "LicenseKey";
    public const string EchoUniqueCodeElementName = "UniqueCodeKey";
    public const string EchoUserNameElementName = "UserNameKey";
}

All these string values (except for the WsUtilityNamespace) are arbitrary values.  They give the message structure and conformity with open standards.

We will use these constant values throughout the remainder of the tutorial.

Security Token

Let’s work through this, starting with the most interesting class and working backwards. The SecurityToken contains all the custom credentials that we will ultimately use to determine whether the user is allowed to use the service. A security token can contain pretty much anything you want, as long as the token itself has a unique ID and a valid from/to date and time.

Add the following class to your project;

public class EchoToken : SecurityToken
{
    private readonly DateTime _effectiveTime = DateTime.UtcNow;
    private readonly string _id;
    private readonly ReadOnlyCollection<SecurityKey> _securityKeys;

    public string LicenseKey { get; set; }
    public string UniqueCode { get; set; }
    public string UserName { get; set; }

    public EchoToken(string licenseKey, string uniqueCode, string userName, string id = null)
    {
        LicenseKey = licenseKey;
        UniqueCode = uniqueCode;
        UserName = userName;

        _id = id ?? Guid.NewGuid().ToString();
        _securityKeys = new ReadOnlyCollection<SecurityKey>(new List<SecurityKey>());
    }

    public override string Id
    {
        get { return _id; }
    }

    public override ReadOnlyCollection<SecurityKey> SecurityKeys
    {
        get { return _securityKeys; }
    }

    public override DateTime ValidFrom
    {
        get { return _effectiveTime; }
    }

    public override DateTime ValidTo
    {
        get { return DateTime.MaxValue; }
    }
}

There are a few things to note here;

  1. The token has a unique identifier, in this case a random Guid.  You can use whatever mechanism you like here, as long as it results in a unique identifier for the token.
  2. The token is valid from now until forever.  You might want to put a realistic timeframe in place here.
  3. I don’t know what SecurityKeys is for, and it doesn’t seem to matter.

Before you rush off to MSDN, here is what it says;

Base class for security keys.

Helpful.

We’re not quite ready to use this token yet, so we’ll revisit later.  All the pieces come together at once, like a really dull jigsaw.

Authorization Policy

We only care at this point about authorizing the request based on the User Name, License Key and Unique Code provided in the token.  We could however use an Authorization Policy to limit access to certain service methods based on any one of these factors.  If you want to restrict access to your API in this way, see the MSDN documentation for more information.  If, however, the basic authorization is good enough for you, add the following code;

public class EchoTokenAuthorizationPolicy : IAuthorizationPolicy
{
    private readonly string _id;
    private readonly IEnumerable<ClaimSet> _issuedClaimSets;
    private readonly ClaimSet _issuer;

    public EchoTokenAuthorizationPolicy(ClaimSet issuedClaims)
    {
        if (issuedClaims == null)
        {
            throw new ArgumentNullException("issuedClaims");
        }

        _issuer = issuedClaims.Issuer;
        _issuedClaimSets = new[] { issuedClaims };
        _id = Guid.NewGuid().ToString();
    }

    public ClaimSet Issuer
    {
        get { return _issuer; }
    }

    public string Id
    {
        get { return _id; }
    }

    public bool Evaluate(EvaluationContext context, ref object state)
    {
        foreach (ClaimSet issuance in _issuedClaimSets)
        {
            context.AddClaimSet(this, issuance);
        }

        return true;
    }
}

The key to this working is the Evaluate method.  We are just adding each claim to the EvaluationContext claim set, without doing any sort of checks.  This is fine because we will do our own authorization as part of the SecurityTokenAuthenticator, shown next.

Security Token Authentication and Authorization

Now that we have our Authorization Policies in place, we can get down to business and tell WCF whether to allow or deny the request. We must create a class that derives from SecurityTokenAuthenticator and override the ValidateTokenCore method. If an exception is thrown in this method, the request will be rejected. You’re also required to return the authorization policies, which will be evaluated accordingly, and the request rejected if the token does not have the claims required to access the desired operation. How you authorize/authenticate the request is down to you, but it will inevitably involve some database call or similar task to check the existence and legitimacy of the given token parameters.

Here is a sample implementation;

public class EchoSecurityTokenAuthenticator : SecurityTokenAuthenticator
{
    protected override bool CanValidateTokenCore(SecurityToken token)
    {
        return (token is EchoToken);
    }

    protected override ReadOnlyCollection<IAuthorizationPolicy> ValidateTokenCore(SecurityToken token)
    {
        var echoToken = token as EchoToken;

        if (echoToken == null)
        {
            throw new ArgumentNullException("token");
        }

        var authorizationException = IsAuthorized(echoToken.LicenseKey, echoToken.UniqueCode, echoToken.UserName);
        if (authorizationException != null)
        {
            throw authorizationException;
        }

        var policies = new List<IAuthorizationPolicy>(3)
        {
            CreateAuthorizationPolicy(EchoConstants.EchoLicenseKeyClaim, echoToken.LicenseKey, Rights.PossessProperty),
            CreateAuthorizationPolicy(EchoConstants.EchoUniqueCodeClaim, echoToken.UniqueCode, Rights.PossessProperty),
            CreateAuthorizationPolicy(EchoConstants.EchoUserNameClaim, echoToken.UserName, Rights.PossessProperty),
        };

        return policies.AsReadOnly();
    }

    private static Exception IsAuthorized(string licenseKey, string uniqueCode, string userName)
    {
        Exception result = null;

        //Check if user is authorized.  If not you must return a FaultException

        return result;
    }

    private static EchoTokenAuthorizationPolicy CreateAuthorizationPolicy<T>(string claimType, T resource, string rights)
    {
        return new EchoTokenAuthorizationPolicy(new DefaultClaimSet(new Claim(claimType, resource, rights)));
    }
}
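The IsAuthorized method above is deliberately left as a stub. A hypothetical implementation (LicenseStore here is an assumed helper, standing in for your own database lookup) might look like this;

private static Exception IsAuthorized(string licenseKey, string uniqueCode, string userName)
{
    // LicenseStore is hypothetical; substitute your own persistence/lookup logic.
    if (!LicenseStore.IsValid(licenseKey, uniqueCode, userName))
    {
        return new FaultException("Access denied: unknown license key, unique code or user name.");
    }

    return null;
}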

Token Serialization

Before we can continue, we have neglected to discuss one very important detail.  WCF generates messages in XML SOAP format for standardised communication between the client and the server applications.  This is achieved by serializing the token using a token serializer.  Surprisingly, however, this doesn’t happen automatically.  You have to give WCF a hand and tell it exactly how to both read and write the messages.  It gives you the tools (an XmlReader and XmlWriter) but you have to do the hammering yourself.

The code for this isn’t short, so I apologise for that.  Here is an explanation of what happens;

  1. CanReadTokenCore is called when deserializing a token.  The responsibility of this method is to tell the underlying framework if this class is capable of reading the token contents.
  2. ReadTokenCore is called with an XmlReader, which provides access to the raw token itself.  You use the XmlReader to retrieve the parts of the token of interest (the User Name, Unique Code and License Key) and ultimately return a new SecurityToken (EchoSecurityToken).
  3. CanWriteTokenCore is called when serializing a token. Return true if the serializer is capable of serializing the given token.
  4. WriteTokenCore is called with an XmlWriter and the actual SecurityToken.  Use both objects to do the serialization manually.

And the code itself;

public class EchoSecurityTokenSerializer : WSSecurityTokenSerializer
{
    private readonly SecurityTokenVersion _version;

    public EchoSecurityTokenSerializer(SecurityTokenVersion version)
    {
        _version = version;
    }

    protected override bool CanReadTokenCore(XmlReader reader)
    {
        if (reader == null)
        {
            throw new ArgumentNullException("reader");
        }
        if (reader.IsStartElement(EchoConstants.EchoTokenName, EchoConstants.EchoNamespace))
        {
            return true;
        }
        return base.CanReadTokenCore(reader);
    }

    protected override SecurityToken ReadTokenCore(XmlReader reader, SecurityTokenResolver tokenResolver)
    {
        if (reader == null)
        {
            throw new ArgumentNullException("reader");
        }
        if (reader.IsStartElement(EchoConstants.EchoTokenName, EchoConstants.EchoNamespace))
        {
            string id = reader.GetAttribute(EchoConstants.Id, EchoConstants.WsUtilityNamespace);

            reader.ReadStartElement();

            string licenseKey = reader.ReadElementString(EchoConstants.EchoLicenseKeyElementName, EchoConstants.EchoNamespace);
            string uniqueCode = reader.ReadElementString(EchoConstants.EchoUniqueCodeElementName, EchoConstants.EchoNamespace);
            string userName = reader.ReadElementString(EchoConstants.EchoUserNameElementName, EchoConstants.EchoNamespace);

            reader.ReadEndElement();

            return new EchoToken(licenseKey, uniqueCode, userName, id);
        }
        return DefaultInstance.ReadToken(reader, tokenResolver);
    }

    protected override bool CanWriteTokenCore(SecurityToken token)
    {
        if (token is EchoToken)
        {
            return true;
        }
        return base.CanWriteTokenCore(token);
    }

    protected override void WriteTokenCore(XmlWriter writer, SecurityToken token)
    {
        if (writer == null)
        {
            throw new ArgumentNullException("writer");
        }
        if (token == null)
        {
            throw new ArgumentNullException("token");
        }

        var echoToken = token as EchoToken;
        if (echoToken != null)
        {
            writer.WriteStartElement(EchoConstants.EchoTokenPrefix, EchoConstants.EchoTokenName, EchoConstants.EchoNamespace);
            writer.WriteAttributeString(EchoConstants.WsUtilityPrefix, EchoConstants.Id, EchoConstants.WsUtilityNamespace, token.Id);
            writer.WriteElementString(EchoConstants.EchoLicenseKeyElementName, EchoConstants.EchoNamespace, echoToken.LicenseKey);
            writer.WriteElementString(EchoConstants.EchoUniqueCodeElementName, EchoConstants.EchoNamespace, echoToken.UniqueCode);
            writer.WriteElementString(EchoConstants.EchoUserNameElementName, EchoConstants.EchoNamespace, echoToken.UserName);
            writer.WriteEndElement();
            writer.Flush();
        }
        else
        {
            base.WriteTokenCore(writer, token);
        }
    }
}

Service Credentials Security Token Manager

A long time ago… in a blog post right here, you created a class called EchoServiceCredentialsSecurityTokenManager.  The purpose of this class is to tell WCF that we want to use our custom token authenticator (EchoSecurityTokenAuthenticator) when it encounters our custom token.

Update the EchoServiceCredentialsSecurityTokenManager as follows;

public class EchoServiceCredentialsSecurityTokenManager : ServiceCredentialsSecurityTokenManager
{
    public EchoServiceCredentialsSecurityTokenManager(ServiceCredentials parent)
        : base(parent)
    {
    }

    public override SecurityTokenAuthenticator CreateSecurityTokenAuthenticator(SecurityTokenRequirement tokenRequirement, out SecurityTokenResolver outOfBandTokenResolver)
    {
        if (tokenRequirement.TokenType == EchoConstants.EchoTokenType)
        {
            outOfBandTokenResolver = null;
            return new EchoSecurityTokenAuthenticator();
        }
        return base.CreateSecurityTokenAuthenticator(tokenRequirement, out outOfBandTokenResolver);
    }

    public override SecurityTokenSerializer CreateSecurityTokenSerializer(SecurityTokenVersion version)
    {
        return new EchoSecurityTokenSerializer(version);
    }
}

The code is pretty self-explanatory. When an EchoToken is encountered, the EchoSecurityTokenAuthenticator is used to confirm that the token is valid, authentic and authorized. The token can also be serialized/deserialized using the EchoSecurityTokenSerializer.

Service Host Endpoints

The last remaining consideration is exposing endpoints so that the client has “something to connect to”.  This is done in EchoServiceHost by overriding the InitializeRuntime method, as shown;

protected override void InitializeRuntime()
{
    var baseUri = new Uri("http://echo.local");
    var serviceUri = new Uri(baseUri, "EchoService.svc");

    Description.Behaviors.Remove(typeof(ServiceCredentials));

    var serviceCredential = new EchoServiceCredentials();
    serviceCredential.ServiceCertificate.Certificate = new X509Certificate2(Resources.echo, string.Empty, X509KeyStorageFlags.MachineKeySet);
    Description.Behaviors.Add(serviceCredential);

    var behaviour = new ServiceMetadataBehavior { HttpGetEnabled = true, HttpsGetEnabled = false };
    Description.Behaviors.Add(behaviour);

    Description.Behaviors.Find<ServiceDebugBehavior>().IncludeExceptionDetailInFaults = true;
    Description.Behaviors.Find<ServiceDebugBehavior>().HttpHelpPageUrl = serviceUri;

    AddServiceEndpoint(typeof(IEchoService), new BindingHelper().CreateHttpBinding(), string.Empty);

    base.InitializeRuntime();
}

The code does the following;

  1. Define the base URL of the service and the full service URL
  2. Remove the default implementation of ServiceCredentials, and replace with our custom implementation.  Ensure that the custom implementation uses our SSL certificate (in this case, the SSL certificate is added to the project as a resource).  If the PFX (and it must be a PFX) requires a password, be sure to specify it.
  3. Define and add a metadata endpoint (not strictly required)
  4. Turn on detailed exceptions for debugging purposes, and expose a help page (again not strictly required)
  5. Add an endpoint for our service, use a custom binding.  (DO NOT attempt to use WsHttpBinding or BasicHttpsBinding, you will lose 4 days of your life trying to figure out why it doesn’t work in a load balanced environment!)

Custom Http Binding

In the interest of simplicity, I want the server and the client to use the exact same binding. To make this easier, I’ve extracted the code out into a separate helper class, which will be referenced by both once we’ve refactored (discussed next). We’re using HTTP right now, but we will discuss security and production environments towards the end of the post. The custom binding provides some level of security via a symmetric encryption algorithm that is applied to aspects of the message.

public Binding CreateHttpBinding()
{
    var httpTransport = new HttpTransportBindingElement
    {
        MaxReceivedMessageSize = 10000000
    };

    var messageSecurity = new SymmetricSecurityBindingElement();

    var x509ProtectionParameters = new X509SecurityTokenParameters
    {
        InclusionMode = SecurityTokenInclusionMode.Never
    };

    messageSecurity.ProtectionTokenParameters = x509ProtectionParameters;
    return new CustomBinding(messageSecurity, httpTransport);
}

Note, I’ve increased the max message size to 10,000,000 bytes (roughly 10MB) because this is appropriate for my scenario. You might want to think long and hard about doing this. The default message size limit is relatively small to help ward off DDoS attacks, so think carefully before changing it. 10MB is a lot of data to receive in a single request, even though it might not sound like much.

With the endpoint now exposed, a client (if we had one) would be able to connect. Let’s do some refactoring first to make our lives a bit easier.

Refactoring

In the interest of simplicity, I haven’t worried too much about the client so far. We need to make some changes to the project structure so that some of the lovely code we have written so far can be shared and kept DRY. Add a class library to your solution, called Shared, and move the following classes into it (be sure to update the namespaces and add the appropriate references).

  1. BindingHelper.cs
  2. IEchoService.cs
  3. EchoSecurityTokenSerializer.cs
  4. EchoConstants.cs
  5. EchoToken.cs

Client Side Implementation

We’re about 2/3 of the way through now.  Most of the leg work has been done and we just have to configure the client correctly so it can make first contact with the server.

Create a new console application (or whatever you fancy) and start by adding a reference to the Shared library you just created for the server. Add the SSL certificate (in CER format, which doesn’t contain the private key) to your project as a resource. Also add a reference to System.ServiceModel.

Custom ClientCredentials

ClientCredentials works in a similar way to ServiceCredentials, but with a couple of subtle differences. When you instantiate the ClientCredentials, you want to pass it all the arbitrary claims you want to send to the WCF service (License Key, Unique Code, User Name). This object will be passed to the serializer that you created as part of the server side code (EchoSecurityTokenSerializer) later on.

First things first, create the EchoClientCredentials class as follows;

public class EchoClientCredentials : ClientCredentials
{
    public string LicenseKey { get; private set; }
    public string UniqueCode { get; private set; }
    public string ClientUserName { get; private set; }

    public EchoClientCredentials(string licenseKey, string uniqueCode, string userName)
    {
        LicenseKey = licenseKey;
        UniqueCode = uniqueCode;
        ClientUserName = userName;
    }

    protected override ClientCredentials CloneCore()
    {
        return new EchoClientCredentials(LicenseKey, UniqueCode, ClientUserName);
    }

    public override SecurityTokenManager CreateSecurityTokenManager()
    {
        return new EchoClientCredentialsSecurityTokenManager(this);
    }
}

ClientCredentials has an abstract method, CreateSecurityTokenManager, which we will use to tell WCF how to ultimately generate our token.

Client side Security Token Manager

As discussed, the ClientCredentialsSecurityTokenManager is responsible for “figuring out” what to do with a token that it has encountered.  Before it uses its own underlying token providers, it gives us the chance to specify our own, by calling CreateSecurityTokenProvider.  We can check the token type to see if we can handle that token ourselves.

Create a new class, called EchoClientCredentialsSecurityTokenManager, that derives from ClientCredentialsSecurityTokenManager, and add the following code;

public class EchoClientCredentialsSecurityTokenManager : ClientCredentialsSecurityTokenManager
{
    private readonly EchoClientCredentials _credentials;

    public EchoClientCredentialsSecurityTokenManager(EchoClientCredentials connectClientCredentials)
        : base(connectClientCredentials)
    {
        _credentials = connectClientCredentials;
    }

    public override SecurityTokenProvider CreateSecurityTokenProvider(SecurityTokenRequirement tokenRequirement)
    {
        if (tokenRequirement.TokenType == EchoConstants.EchoTokenType)
        {
            // Handle this token for Custom.
            return new EchoTokenProvider(_credentials);
        }
        if (tokenRequirement is InitiatorServiceModelSecurityTokenRequirement)
        {
            // Return server certificate.
            if (tokenRequirement.TokenType == SecurityTokenTypes.X509Certificate)
            {
                return new X509SecurityTokenProvider(_credentials.ServiceCertificate.DefaultCertificate);
            }
        }
        return base.CreateSecurityTokenProvider(tokenRequirement);
    }

    public override SecurityTokenSerializer CreateSecurityTokenSerializer(SecurityTokenVersion version)
    {
        return new EchoSecurityTokenSerializer(version);
    }
}

The code is pretty verbose, and we can see clearly what is happening here.  We inspect the token type to see if it matches that of our Echo token.  If we find a match, we return an EchoTokenProvider (coming next), which is simply a wrapper containing our claims.  Note that we are also able to reuse the token serializer we created as part of the server side work, a nice (not so little) time saver!

Security Token Provider

In this case, the security token provider is nothing more than a vessel that contains our client credentials.  The token provider instantiates the token, passing in the client credentials, and hands the token off for serialization.

public class EchoTokenProvider : SecurityTokenProvider
{
    private readonly EchoClientCredentials _credentials;

    public EchoTokenProvider(EchoClientCredentials credentials)
    {
        if (credentials == null) throw new ArgumentNullException("credentials");

        _credentials = credentials;
    }

    protected override SecurityToken GetTokenCore(TimeSpan timeout)
    {
        return new EchoToken(_credentials.LicenseKey, _credentials.UniqueCode, _credentials.ClientUserName);
    }
}

Test Client

The client side code for establishing a connection with our service is relatively simple. We need each of the following:

  1. Define the endpoint (the address) of our service
  2. Create an instance of EchoClientCredentials
  3. Load the SSL certificate (the public key aspect at least) and pass to the credentials object we just instantiated
  4. Remove the default implementation of ClientCredentials and pass in our own
  5. Create a channel factory, and call our service method

Here is an example of what your client code would look like;

var serviceAddress = new EndpointAddress("http://echo.local/EchoService.svc");

var channelFactory = new ChannelFactory<IEchoService>(new BindingHelper().CreateHttpBinding(), serviceAddress);

var credentials = new EchoClientCredentials("license key", "unique code", "user name");
var certificate = new X509Certificate2(Resources.echo);
credentials.ServiceCertificate.DefaultCertificate = certificate;

channelFactory.Endpoint.Behaviors.Remove(typeof(ClientCredentials));
channelFactory.Endpoint.Behaviors.Add(credentials);

var service = channelFactory.CreateChannel();
Console.WriteLine(service.Echo(10));
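One small follow-up on the snippet above: the proxy returned by CreateChannel also implements IClientChannel, so it’s good practice to close both the channel and the factory when you’re finished.  A minimal sketch (production code would also Abort() a faulted channel);

// Close the channel and the factory once the call has completed.
((IClientChannel)service).Close();
channelFactory.Close();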

Security and Production Environment Considerations

Throughout this tutorial I have used HTTP bindings and told you explicitly not to use HTTPS, and there is a very good reason for that.  If you have a simple hosting environment, i.e. an environment that is NOT load balanced, then you can go ahead and make the following changes;

  • Change your service URL to HTTPS
  • Change HttpTransportBindingElement (on the server, inside the BindingHelper) to HttpsTransportBindingElement, as shown in the sketch below
  • Add an HTTPS binding in IIS
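For illustration, here is a minimal sketch of that transport swap, assuming your BindingHelper builds a CustomBinding.  The encoding element below is an assumption; keep whatever elements your server side binding already uses;

public Binding CreateHttpsBinding()
{
    // Hypothetical sketch: the only change from the HTTP version is the
    // transport element; HttpsTransportBindingElement replaces
    // HttpTransportBindingElement. All other binding elements stay as-is.
    var encoding = new TextMessageEncodingBindingElement();
    var transport = new HttpsTransportBindingElement();
    return new CustomBinding(encoding, transport);
}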

Re-launch the client and all should be good.  If you get the following error message, you’re in big trouble.

The protocol ‘https’ is not supported.

After 4 days of battling with this error, I found the problem.  Basically, WCF requires end-to-end HTTPS for HTTPS to be “supported”.  Take the following setup;

(Diagram: the client connects to a load balancer over HTTPS; the load balancer forwards traffic to the physical web server over plain HTTP.)

Some hosting companies will load balance the traffic, which makes perfect sense and is completely reasonable.  The communication from the client (laptop, desktop or whatever) is made via HTTPS, and that bit is fine; if you go to the service via HTTPS you will get a response.  However, and here’s the key, the communication between the load balancer and the physical web server probably isn’t secured, i.e. it doesn’t use HTTPS.  So the end-to-end communication isn’t HTTPS, and therefore you get the error message described.

To work around this, use an HTTPS binding on the client, and an HTTP binding on the server, as sketched below.  This guarantees that the traffic between the client and the load balancer is secure (thus preventing man-in-the-middle attacks), but the traffic between the load balancer and the physical web server will not be secure (you’ll have to decide for yourself if you can live with that).
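As a rough sketch (again assuming the CustomBinding approach, with the encoding element as an assumption), the two sides end up with different transport elements;

// Client side: HTTPS transport, so traffic up to the load balancer is
// encrypted. Remember to change the endpoint address to https:// too.
var clientBinding = new CustomBinding(
    new TextMessageEncodingBindingElement(),
    new HttpsTransportBindingElement());

// Server side: plain HTTP transport, matching the unencrypted traffic
// that the load balancer forwards on to the physical web server.
var serverBinding = new CustomBinding(
    new TextMessageEncodingBindingElement(),
    new HttpTransportBindingElement());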

Quirks

I’ve encountered a few quirks whilst developing this service over the last few weeks.  Quirks are things I can’t explain or don’t care to understand.  You must make the following change to the server side code, or else it might not work.  If you find any other quirks, feel free to let me know and I’ll credit your discovery;


You must add AddressFilterMode.Any to the service implementation, or it won’t work.

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
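Applied to the service implementation, it looks something like this (the EchoService body is a hypothetical sketch; only the attribute matters here);

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
public class EchoService : IEchoService
{
    // Hypothetical implementation; your Echo method may differ.
    public int Echo(int value)
    {
        return value;
    }
}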

Summary

A lot of work is required to do custom authentication using ServiceCredentials with WCF, no fewer than 18 classes in total. For cases when a trivial user name and password simply won’t suffice, you can use this approach. WCF works really well when developing non-web based applications, but the lack of documentation can make development and maintenance harder than it should be. Be careful when using this approach in a load balanced environment; you may need to make some changes to your bindings, as already discussed.

Every developer must be proficient at these 7 things…

In 2015, it is as important as ever for developers of all levels of expertise and experience to re-train and update their skills.  In the fast moving world of technology, failure to do so can result in career stagnation and ultimately not reaching your full earnings potential.

This post is an update to the popular post 10 things every software developer should do in 2014.  All of the points made in that post are still relevant and valid so I recommend you take a look.

This post is entirely based on my own opinion, and I highly recommend you use it to figure out your own learning plan, based on your own level of skills and expertise.

1. Source Control

Let’s start simple.  You must be using source control.  There is literally no excuse.

99% of companies today are using some form of source control, and it’s becoming expected that you have at least a basic understanding of how source control works.  Most source control systems have the ability to create branches, view history, merge changes, and display differences between revisions.  You should know, at least at a high level, how to perform each of these actions in at least 2 different solutions.

As a minimum, I strongly recommend that you learn the basics of Git and TFS Version Control (TFSVC), as these seem to be the most popular in most circles today (especially if you are a .NET developer).

You may also want to learn more about SVN and Mercurial just to ensure that you have most bases covered.


2. CSS Pre-processors

There is a lot of debate on the web about CSS pre-processors and if you should/shouldn’t be using them.

Let’s rewind a second. A CSS pre-processor is basically a scripting language that is compiled down into plain old CSS by some external tool.  The scripting language is typically an abstraction that reduces the complexity of the underlying CSS and adds additional features, and it compiles into plain CSS, which a web browser can understand.

The general argument between developers about CSS pre-processors is that CSS should be simple, and if your CSS is not simple, then you’re doing it wrong.  If your CSS is simple, then a pre-processor should be redundant.

My argument is that whilst this is generally true for very small applications, once you start working in a team where everybody has their own way of doing things, or the application generally starts getting larger, then this breaks down.  You usually end up with super long CSS files full of duplication and even worse, inconsistencies.

There are three main CSS pre-processor players; LESS, SASS, and Stylus.  The main goals of each of these projects are as follows;

  1. Reduce the length and complexity of CSS files, usually through partials that are merged together at compile time.
  2. Reduce maintenance through less duplication.
  3. Organise your CSS through nesting.
  4. Introduce variables and operators for calculating sizes etc.

I personally am a big fan of both LESS and SASS, and I’m currently migrating from the former to the latter.  I highly recommend that you have at least a high-level understanding of one or the other.


3. JavaScript Supersets/Transcompilers/Pre-processors

Fundamentally the same as CSS pre-processors.  A JavaScript pre-processor is a tool that provides you with some higher level language that ultimately compiles back down to JavaScript.

I’m particularly fond of TypeScript right now.  TypeScript has a slightly steeper learning curve than something like CoffeeScript or LiveScript.

Some features of TypeScript;

  1. Static typing…all types are known or inferred, so compile time checking informs you of any errors.
  2. Strong typing…prevents incompatible types from being passed between functions.
  3. Familiar constructs such as classes and interfaces.
  4. Reduced complexity compared to plain JavaScript.
  5. Decent IntelliSense (if you are a Visual Studio developer), even for third party libraries.

At this point, I feel that JavaScript pre-processors are still in their infancy.  I would still strongly recommend learning JavaScript in its pure form as it will no doubt still be around for at least the next 10 years, but if you can, have a look at TypeScript or CoffeeScript.  I suspect both will grow exponentially over the next year.


4. Your primary (code) language

This might be a slight generalisation, but most people feel most comfortable when writing one or two languages.  That doesn’t mean they’re not very good at other languages, but generally they learnt to write one or two languages in particular before branching out.

Personally, my first programming language was VB 6 way back in the day, but since then I’ve spent most of my time writing C#, whether it be Windows Forms, WPF, ASP .NET or just general back end development.  I know that I can achieve anything I want with C# with relative ease.  When I’m having a conversation with peers, or interviewers, or whomever I’m talking to at the time, I’m secretly hoping that they will ask me questions about C# so that I can demonstrate my knowledge.  I have completed many Microsoft exams and gained a tonne of accreditation in the subject.  More importantly, I have completed dozens of projects for customers, clients, colleagues and friends, so I know how to see a project through to completion.

My point is, you should have your own C#.  It doesn’t have to be C#; it can be whatever language you want.  JavaScript, HTML, Ruby, Java, C++, or anything you like.  The point to take away is that you should know a language inside out, including its strengths, its weaknesses, and especially when you should and shouldn’t use it.

Probably the best way you can really master a language is to start teaching it to others.  You might want to write a blog, give presentations to your colleagues, or attend local user groups and share your knowledge with strangers.  Teaching forces you to look at something thoroughly and in-depth to broaden your understanding, mostly because of the fear that if you don’t, you will be caught out by a tricky question from an audience member.


5. How to interview well and how to make your CV stand out

This one is so obvious, I almost didn’t include it.

You would be amazed at how many developers submit CVs to potential employers that are littered with spelling mistakes and incomplete or untrue information, or that are just plain messy by design.

I don’t claim to be an expert in this field.  I have been for a lot of interviews and I have sent my CV to many, many employers.  I have also had a lot of feedback from friends, family, recruiters and indeed interviewers too.

Here are my top tips for a solid CV;

  • Get the basics right.  Ensure your CV is 100% free of spelling and grammatical errors.  Even the smallest mistakes will guarantee your CV is relegated to the bottom of the pile.
  • Keep the design simple.  Personally I use only a single font face, with 2-3 different sizes and minimal formatting. If you are more design oriented, you might want to do something a little more creative, and that’s fine too…but don’t be too creative.
  • Don’t tell lies.  You will be found out.
  • If you are including links to your portfolio, make sure those links are appropriate to the job you’re interviewing for.  No potential employer wants to see your Geocities website from the late 1990s.  (Yes, I’ve seen somebody actually do this! Needless to say, they didn’t get hired.)

As for interviewing, again it’s all in the basics.  Make sure you prepare well before the interview: read the job description thoroughly and think about what skills/experience you have that the company is specifically looking for.  And as you are a developer, you’ll probably want to brush up on your technical skills too, and read some common interview questions so that you aren’t caught by surprise.

For more information on how to boost your career, I highly recommend checking out an awesome book by John Sonmez, Soft Skills: The software developer’s life manual.


6. Relational database alternatives

I’ve touched on this previously in a related post, 8 things every .NET developer must understand.  However, I cannot emphasise the point enough.

Relational databases are not the be-all-and-end-all.  It’s not sufficient in today’s world to only have an understanding of Microsoft SQL Server or MySQL databases and how they work.  More and more companies are relying on No-SQL databases to improve the performance of persisting and retrieving highly volatile data.

No-SQL databases typically lack any formal structure and usually consist of some sort of list of key-value pairs.  At first this may seem a bit bizarre, but there are many benefits to structuring data like this;

  • Easy to develop with.  Either no or few joins, and no complex new language to learn.
  • Horizontal scaling.
  • Schema-less.
  • Support for large objects.
  • Low or no licensing costs.

For a more comprehensive look at the benefits of No-SQL databases, take a look at Adam Fowler’s blog post on the subject.

RavenDB, MongoDB, and Windows Azure Blob Storage are the big players at the minute.  Each has a relatively low learning curve and is becoming widely adopted by companies.  I would suggest you take a look at each.


7. DevOps

Another industry buzzword that seems to be confusing people, myself included.

What is DevOps? Simply put, an effort to improve collaboration between the software development and IT support teams (referred to as operations).

Traditionally, the role of the software development team within a company is to develop new functionality and generally “make changes” to a system.  That system might be internal or external, and it might be desktop or web based software.

The role of operations is to ensure stability of systems.  The biggest risk to stability is change.

DevOps is a concerted effort to improve the way in which the two teams communicate with each other, to ensure continuous deployment whilst minimizing disruptions.  We have in the past, myself very much included, typically taken the attitude that once we make a change or develop a new feature and commit it to source control, it’s “done”.  It then becomes operations’ responsibility to deploy the change and “handle that side of things”.

You should at least know what DevOps is, and how you can get more involved in it within your own company.  If your company does not currently employ a DevOps mentality, perhaps you could introduce it.  Get rid of the “them vs. us” culture and gain some respect in the process.

I highly recommend looking at What is DevOps by Damon Edwards on Dev2Ops for more information.


Summary

2015 is yet another year of rapid change for our industry and developers alike.  The clear trends towards pre-processors, non-relational databases, JavaScript frameworks, and cross-platform development should not be ignored.