Angular 2 server side paging using ng2-pagination

Angular 2 is not quite out of beta yet (Beta 12 at the time of writing) but I’m in the full flow of developing with it for production use. A common feature, for good or bad, is to have lists/tables of data that the user can navigate through page by page, or even filter, to help find something useful.

Angular 2 doesn’t come with any out-of-the-box functionality to support this, so we have to implement it ourselves. And of course, what that means today is to use a third-party package!

To make this happen, we will utilise ng2-pagination, a great plugin, and Web API.

I’ve chosen Web API because that is what I’m using in my production app, but you could easily use ExpressJS or (insert your favourite RESTful framework here).

Checklist

Here is a checklist of what we will do to make this work;

  • Create a new Web API project (you could very easily use an existing project)
  • Enable CORS, as we will be using a separate development server for the Angular 2 project
  • Download the Angular 2 quick start, ng2-pagination and connect the dots
  • Expose some sample data for testing

I will try to stick with this order.

Web API (for the back end)

Open up Visual Studio (free version here) and create a new Web API project. I prefer to create an Empty project and add Web API.

Add a new controller, called DataController and add the following code;

public class DataModel
{
    public int Id { get; set; }
    public string Text { get; set; }
}

[RoutePrefix("api/data")]
public class DataController : ApiController
{
    private readonly List<DataModel> _data;

    public DataController()
    {
        _data = new List<DataModel>();

        for (var i = 0; i < 10000; i++)
        {
            _data.Add(new DataModel {Id = i + 1, Text = "Data Item " + (i + 1)});
        }
    }

    [HttpGet]
    [Route("{pageIndex:int}/{pageSize:int}")]
    public PagedResponse<DataModel> Get(int pageIndex, int pageSize)
    {
        return new PagedResponse<DataModel>(_data, pageIndex, pageSize);
    }
}

We don’t need to connect to a database to make this work, so we just dummy up 10,000 “items” and page through that instead. If you choose to use Entity Framework, the code is exactly the same, except you initialise a DbContext and query a Set instead.
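For illustration, here is a hedged sketch of what the Entity Framework version of the action might look like (DataContext and its Items set are hypothetical names, not part of this project);

[HttpGet]
[Route("{pageIndex:int}/{pageSize:int}")]
public PagedResponse<DataModel> Get(int pageIndex, int pageSize)
{
    // DataContext is a hypothetical DbContext exposing a DbSet<DataModel> called Items.
    using (var context = new DataContext())
    {
        // Note: PagedResponse accepts IEnumerable<T>, so the Skip/Take inside it runs
        // in memory. Accept IQueryable<T> instead if you want the paging translated
        // into SQL and executed by the database.
        return new PagedResponse<DataModel>(context.Items.OrderBy(x => x.Id), pageIndex, pageSize);
    }
}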

PagedResponse

Add the following code;

public class PagedResponse<T>
{
    public PagedResponse(IEnumerable<T> data, int pageIndex, int pageSize)
    {
        Data = data.Skip((pageIndex - 1)*pageSize).Take(pageSize).ToList();
        Total = data.Count();
    }

    public int Total { get; set; }
    public ICollection<T> Data { get; set; }
}

PagedResponse exposes two properties. Total and Data. Total is the total number of records in the set. Data is the subset of data itself. We have to include the total number of items in the set so that ng2-pagination knows how many pages there are in total. It will then generate some links/buttons to enable the user to skip forward several pages at once (or as many as required).
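To make the paging arithmetic concrete, here is a quick hypothetical usage of the class above;

// Page 3 with a page size of 10: Skip((3 - 1) * 10) = Skip(20), then Take(10).
var response = new PagedResponse<DataModel>(_data, pageIndex: 3, pageSize: 10);
// response.Data  -> items 21 to 30
// response.Total -> 10000, from which ng2-pagination works out there are 1,000 pages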

Enable CORS (Cross Origin Resource Sharing)

To enable communication between our client and server, we need to enable Cross Origin Resource Sharing (CORS) as they will be (at least during development) running under different servers.

To enable CORS, first install the following package (using NuGet);

Microsoft.AspNet.WebApi.Cors

Now open up WebApiConfig.cs and add the following to the Register method;

var cors = new EnableCorsAttribute("*", "*", "*");
config.EnableCors(cors);
config.MessageHandlers.Add(new PreflightRequestsHandler());

And add a new nested class, as shown;

public class PreflightRequestsHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Headers.Contains("Origin") && request.Method.Method == "OPTIONS")
        {
            var response = new HttpResponseMessage {StatusCode = HttpStatusCode.OK};
            response.Headers.Add("Access-Control-Allow-Origin", "*");
            response.Headers.Add("Access-Control-Allow-Headers", "Origin, Content-Type, Accept, Authorization");
            response.Headers.Add("Access-Control-Allow-Methods", "*");
            var tsc = new TaskCompletionSource<HttpResponseMessage>();
            tsc.SetResult(response);
            return tsc.Task;
        }
        return base.SendAsync(request, cancellationToken);
    }
}

Now when Angular makes a request for data, it will send an OPTIONS request first to check access. This request will be intercepted by the handler above, which replies with an Access-Control-Allow-Origin header permitting any origin (represented by an asterisk).

Format JSON response

If, like me, you hate Pascal Case JavaScript (ThisIsPascalCase), you will want to add the following code to your Application_Start method;

var formatters = GlobalConfiguration.Configuration.Formatters;
var jsonFormatter = formatters.JsonFormatter;
var settings = jsonFormatter.SerializerSettings;
settings.Formatting = Formatting.Indented;
settings.ContractResolver = new CamelCasePropertyNamesContractResolver();
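If you created an Empty project as suggested above, Application_Start lives in Global.asax.cs. As a minimal sketch (assuming the default WebApiApplication class name), it might look like this;

using System.Web.Http;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

public class WebApiApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        GlobalConfiguration.Configure(WebApiConfig.Register);

        // Serialize JSON as camelCase, indented for readability.
        var settings = GlobalConfiguration.Configuration.Formatters.JsonFormatter.SerializerSettings;
        settings.Formatting = Formatting.Indented;
        settings.ContractResolver = new CamelCasePropertyNamesContractResolver();
    }
}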

Now let's set up the front end.

Front-end Angular 2 and ng2-pagination

If you head over to the Angular 2 quickstart, you will see there is a link to download the quick start source code. Go ahead and do that.

I’ll wait here.

Ok, you're done? Let's continue.

Install ng2-pagination, and optionally bootstrap and jquery if you want this to look pretty. Skip those two if you don't mind a plainer look.

npm install --save-dev ng2-pagination bootstrap jquery

Open up index.html and add the following scripts to the header;

<script src="node_modules/angular2/bundles/http.dev.js"></script>
<script src="node_modules/ng2-pagination/dist/ng2-pagination-bundle.js"></script>

<script src="node_modules/jquery/dist/jquery.js"></script>
<script src="node_modules/bootstrap/dist/js/bootstrap.js"></script>

Also add a link to the bootstrap CSS file, if required.

<link rel="stylesheet" href="node_modules/bootstrap/dist/css/bootstrap.css">

Notice we pulled in Http? We will use that for querying our back-end.

Add a new file to the app folder, called app.component.html. We will use this instead of having all of our markup and TypeScript code in the same file.

ng2-pagination

Open app.component.ts, delete everything, and add the following code instead;

import {Component, OnInit} from 'angular2/core';
import {Http, HTTP_PROVIDERS} from 'angular2/http';
import {Observable} from 'rxjs/Rx';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/do';
import {PaginatePipe, PaginationService, PaginationControlsCmp, IPaginationInstance} from 'ng2-pagination';

export interface PagedResponse<T> {
    total: number;
    data: T[];
}

export interface DataModel {
    id: number;
    text: string;
}

@Component({
    selector: 'my-app',
    templateUrl: './app/app.component.html',
    providers: [HTTP_PROVIDERS, PaginationService],
    directives: [PaginationControlsCmp],
    pipes: [PaginatePipe]
})
export class AppComponent implements OnInit {
    private _data: Observable<DataModel[]>;
    private _page: number = 1;
    private _total: number;

    constructor(private _http: Http) {

    }
}

A quick walk-through of what I’ve changed;

  • Removed inline HTML and linked to the app.component.html file you created earlier. (This leads to cleaner separation of concerns).
  • Imported Observable, Map, and Do from RX.js. This will enable us to write cleaner async code without having to rely on promises.
  • Imported a couple of classes from angular2/http so that we can use the native Http client, and added HTTP_PROVIDERS as a provider.
  • Imported various objects required by ng2-pagination, and added to providers, directives and pipes so we can access them through our view (which we will create later).
  • Defined two interfaces, PagedResponse<T> and DataModel. You may notice these are identical to those we created in our Web API project.
  • Added some variables, which we will discuss shortly.

We’ve got the basics in place that we need to call our data service and pass the data over to ng2-pagination. Now let’s actually implement that process.

Retrieving data using Angular 2 Http

Eagle-eyed readers may have noticed that I’ve pulled in and implemented the OnInit interface, but not yet implemented the ngOnInit method itself.

Add the following method;

ngOnInit() {
    this.getPage(1);
}

When the page loads and is initialised, we want to automatically grab the first page of data. The above method will make that happen.

Note: If you are unfamiliar with ngOnInit, please read this helpful documentation on lifecycle hooks.

Now add the following code;

getPage(page: number) {
    this._data = this._http.get("http://localhost:52472/api/data/" + page + "/10")
        .do((res: any) => {
            this._total = res.json().total;
            this._page = page;
        })
        .map((res: any) => res.json().data);
}

The above method does the following;

  • Calls out to our Web API (you may need to change the port number depending on your set up)
  • Passes in two values, the first being the current page number, the second being the number of results to retrieve
  • Stores a reference to the resulting Observable in the _data variable. Once a response arrives, do is executed.
  • do is an operator (given an arrow function here) that runs for each item emitted by the Observable. We've set up our Web API method to return a single object, of type PagedResponse, so it runs once per request. We take this opportunity to update the _page variable (the same page number passed into the method in the first place) and the _total variable, which stores the total number of items in the entire set (not just the current page).
  • map is then used to convert the response to JSON and pull out its data property. The way RX.js works, an event is then emitted to notify subscribers that the collection has changed.

Implement the view

Open app.component.html and add the following code;

<div class="container">
    <table class="table table-striped table-hover">
        <thead>
            <tr>
                <th>Id</th>
                <th>Text</th>
            </tr>
        </thead>
        <tbody>
            <tr *ngFor="#item of _data | async | paginate: { id: 'server', itemsPerPage: 10, currentPage: _page, totalItems: _total }">
                <td>{{item.id}}</td>
                <td>{{item.text}}</td>
            </tr>
        </tbody>
    </table>    
    <pagination-controls (pageChange)="getPage($event)" id="server"></pagination-controls>
</div>

There are a few key points of interest here;

  • On our repeater (*ngFor), we’ve used the async pipe. Under the hood, Angular subscribes to the Observable we pass to it and resolves the value automatically (asynchronously) when it becomes available.
  • We use the paginate pipe, and pass in an object containing the current page and the total number of items, so ng2-pagination can render itself properly.
  • Added the pagination-controls directive, which calls back to our getPage function when the user clicks a page number that they are not currently on.

As we know the current page and the number of items per page, we can efficiently ask the Web API to retrieve only the specific data we need.

So, why bother?

Some benefits;

  • Potentially reduce initial page load time, because less data has to be retrieved from the database, serialized and transferred over.
  • Reduced memory usage on the client. Otherwise, all 10,000 records would have to be held in memory!
  • Reduced processing time; as only the paged data is stored in memory, there are far fewer records to iterate through!

Drawbacks;

  • Lots of small requests for data could reduce server performance (due to chattiness). Using an effective caching strategy is key here.
  • User experience could be degraded. If the server is slow to respond, the client may appear to be slow and could frustrate the user.

Summary

Using ng2-pagination, and with help from RX.js, we can easily add pagination to our pages. Doing so has the potential to reduce server load and initial page render time, and thus can result in a better user experience. A good caching strategy and server response times are important considerations when going to production.

Create a RESTful API with authentication using Web API and Jwt

Web API is a feature of the ASP .NET framework that dramatically simplifies building RESTful (REST like) HTTP services that are cross platform and device and browser agnostic. With Web API, you can create endpoints that can be accessed using a combination of descriptive URLs and HTTP verbs. Those endpoints can serve data back to the caller as either JSON or XML that is standards compliant. With JSON Web Tokens (Jwt), which are typically stateless, you can add an authentication and authorization layer enabling you to restrict access to some or all of your API.

The purpose of this tutorial is to develop the beginnings of a Book Store API, using Microsoft Web API (with C#), which authenticates and authorizes each request, exposes OAuth2 endpoints, and returns data about books and reviews for consumption by the caller. The caller in this case will be Postman, a useful utility for querying APIs.

In a follow up to this post we will write a front end to interact with the API directly.

Set up

Open Visual Studio (I will be using Visual Studio 2015 Community edition, you can use whatever version you like) and create a new Empty project, ensuring you select the Web API option;

Where you save the project is up to you, but I will create my projects under C:\Source. For simplicity you might want to do the same.

New Project

Next, packages.

Packages

Some packages should have already been added to enable Web API itself (take a look in the packages.config file). Using the Package Manager Console, add the following additional packages;

install-package EntityFramework
install-package Microsoft.AspNet.Cors
install-package Microsoft.AspNet.Identity.Core
install-package Microsoft.AspNet.Identity.EntityFramework
install-package Microsoft.AspNet.Identity.Owin
install-package Microsoft.AspNet.WebApi.Cors
install-package Microsoft.AspNet.WebApi.Owin
install-package Microsoft.Owin.Cors
install-package Microsoft.Owin.Security.Jwt
install-package Microsoft.Owin.Host.SystemWeb
install-package System.IdentityModel.Tokens.Jwt
install-package Thinktecture.IdentityModel.Core

These are the minimum packages required to provide data persistence, enable CORS (Cross-Origin Resource Sharing), and enable generating and authenticating/authorizing Jwt’s.

Entity Framework

We will use Entity Framework for data persistence, using the Code-First approach. Entity Framework will take care of generating a database, adding tables, stored procedures and so on. As an added benefit, Entity Framework will also upgrade the schema automatically as we make changes. Entity Framework is perfect for rapid prototyping, which is what we are in essence doing here.

Create a new IdentityDbContext called BooksContext, which will give us Users, Roles and Claims in our database. I like to add this under a folder called Core, for organization. We will add our entities to this later.

namespace BooksAPI.Core
{
    using Microsoft.AspNet.Identity.EntityFramework;

    public class BooksContext : IdentityDbContext
    {

    }
}

Claims are used to describe useful information that the user has associated with them. We will use claims to tell the client which roles the user has. The benefit of roles is that we can prevent access to certain methods/controllers to a specific group of users, and permit access to others.

Add a DbMigrationsConfiguration class and allow automatic migrations, but prevent automatic data loss;

namespace BooksAPI.Core
{
    using System.Data.Entity.Migrations;

    public class Configuration : DbMigrationsConfiguration<BooksContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = true;
            AutomaticMigrationDataLossAllowed = false;
        }
    }
}

Whilst losing data at this stage is not important (we will use a seed method later to populate our database), I like to turn this off now so I do not forget later.

Now tell Entity Framework how to update the database schema using an initializer, as follows;

namespace BooksAPI.Core
{
    using System.Data.Entity;

    public class Initializer : MigrateDatabaseToLatestVersion<BooksContext, Configuration>
    {
    }
}

This tells Entity Framework to go ahead and upgrade the database to the latest version automatically for us.

Finally, tell your application about the initializer by updating the Global.asax.cs file as follows;

namespace BooksAPI
{
    using System.Data.Entity;
    using System.Web;
    using System.Web.Http;
    using Core;

    public class WebApiApplication : HttpApplication
    {
        protected void Application_Start()
        {
            GlobalConfiguration.Configure(WebApiConfig.Register);
            Database.SetInitializer(new Initializer());
        }
    }
}

Data Provider

By default, Entity Framework will configure itself to use LocalDB. If this is not desirable, say you want to use SQL Express instead, you need to make the following adjustments;

Open the Web.config file and delete the following code;

<entityFramework>
    <defaultConnectionFactory type="System.Data.Entity.Infrastructure.LocalDbConnectionFactory, EntityFramework">
        <parameters>
            <parameter value="mssqllocaldb" />
        </parameters>
    </defaultConnectionFactory>
    <providers>
        <provider invariantName="System.Data.SqlClient" type="System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer" />
    </providers>
</entityFramework>

And add the connection string;

<connectionStrings>
    <add name="BooksContext" providerName="System.Data.SqlClient" connectionString="Server=.;Database=Books;Trusted_Connection=True;" />
</connectionStrings>

Now we’re using SQL Server directly (whatever flavour that might be) rather than LocalDB.

JSON

Whilst we’re here, we might as well configure our application to return camel-case JSON (thisIsCamelCase), instead of the default pascal-case (ThisIsPascalCase).

Add the following code to your Application_Start method;

var formatters = GlobalConfiguration.Configuration.Formatters;
var jsonFormatter = formatters.JsonFormatter;
var settings = jsonFormatter.SerializerSettings;
settings.Formatting = Formatting.Indented;
settings.ContractResolver = new CamelCasePropertyNamesContractResolver();

There is nothing worse than pascal-case JavaScript.

CORS (Cross-Origin Resource Sharing)

Cross-Origin Resource Sharing, or CORS for short, is when a client requests access to a resource (an image, or say, data from an endpoint) from an origin (domain) that is different from the domain where the resource itself originates.

This step is completely optional. We are adding in CORS support here because when we come to write our client app in subsequent posts that follow on from this one, we will likely use a separate HTTP server (for testing and debugging purposes). When released to production, these two apps would use the same host (Internet Information Services (IIS)).

To enable CORS, open WebApiConfig.cs and add the following code to the beginning of the Register method;

var cors = new EnableCorsAttribute("*", "*", "*");
config.EnableCors(cors);
config.MessageHandlers.Add(new PreflightRequestsHandler());

And add the following class (in the same file if you prefer for quick reference);

public class PreflightRequestsHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Headers.Contains("Origin") && request.Method.Method == "OPTIONS")
        {
            var response = new HttpResponseMessage {StatusCode = HttpStatusCode.OK};
            response.Headers.Add("Access-Control-Allow-Origin", "*");
            response.Headers.Add("Access-Control-Allow-Headers", "Origin, Content-Type, Accept, Authorization");
            response.Headers.Add("Access-Control-Allow-Methods", "*");
            var tsc = new TaskCompletionSource<HttpResponseMessage>();
            tsc.SetResult(response);
            return tsc.Task;
        }
        return base.SendAsync(request, cancellationToken);
    }
}

In the CORS workflow, before sending a DELETE, PUT or POST request, the client sends an OPTIONS request to check whether the server will accept a request from its origin. If the request domain and server domain are not the same, then the server must include various access headers that describe which domains have access. To enable access to all domains, we just respond with an origin header (Access-Control-Allow-Origin) with an asterisk to enable access for all.

The Access-Control-Allow-Headers header describes which headers the API can accept/is expecting to receive. The Access-Control-Allow-Methods header describes which HTTP verbs are supported/permitted.

See Mozilla Developer Network (MDN) for a more comprehensive write-up on Cross-Origin Resource Sharing (CORS).

Data Model

With Entity Framework configured, let's create our data structure. The API will expose books, and books will have reviews.

Under the Models folder add a new class called Book. Add the following code;

namespace BooksAPI.Models
{
    using System.Collections.Generic;

    public class Book
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
        public string ImageUrl { get; set; }

        public virtual List<Review> Reviews { get; set; }
    }
}

And add Review, as shown;

namespace BooksAPI.Models
{
    public class Review
    {
        public int Id { get; set; }    
        public string Description { get; set; }    
        public int Rating { get; set; }
        public int BookId { get; set; }
    }
}

Add these entities to the IdentityDbContext we created earlier;

public class BooksContext : IdentityDbContext
{
    public DbSet<Book> Books { get; set; }
    public DbSet<Review> Reviews { get; set; }
}

Be sure to add in the necessary using directives.
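For reference, with the directives in place the context ends up looking something like this (assuming your entities live in BooksAPI.Models);

namespace BooksAPI.Core
{
    using System.Data.Entity;
    using BooksAPI.Models;
    using Microsoft.AspNet.Identity.EntityFramework;

    public class BooksContext : IdentityDbContext
    {
        public DbSet<Book> Books { get; set; }
        public DbSet<Review> Reviews { get; set; }
    }
}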

A couple of helpful abstractions

We need to abstract a couple of classes that we will make use of, in order to keep our code clean and ensure that it works correctly.

Under the Core folder, add the following classes;

public class BookUserManager : UserManager<IdentityUser>
{
    public BookUserManager() : base(new BookUserStore())
    {
    }
}

We will make heavy use of the UserManager<T> in our project, and we don’t want to have to initialise it with a UserStore<T> every time we want to make use of it. Whilst adding this is not strictly necessary, it does go a long way to helping keep the code clean.
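To see the saving, compare constructing the manager by hand with using the abstraction (a hypothetical snippet);

// Without the abstraction; the store and context are wired up at every call site.
var manager = new UserManager<IdentityUser>(new UserStore<IdentityUser>(new BooksContext()));

// With it; one line, and the DbContext wiring lives in a single place.
var bookUserManager = new BookUserManager();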

Now add another class for the UserStore, as shown;

public class BookUserStore : UserStore<IdentityUser>
{
    public BookUserStore() : base(new BooksContext())
    {
    }
}

This code is really important. If we fail to tell the UserStore which DbContext to use, it falls back to some default value.

A network-related or instance-specific error occurred while establishing a connection to SQL Server

I'm not sure what the default value is; all I know is that it doesn't seem to correspond to our application's DbContext. This code will help prevent you from tearing your hair out later, wondering why you are getting the super-helpful error message shown above.

API Controller

We need to expose some data to our client (when we write it). Let's take advantage of Entity Framework's Seed method. The Seed method will pre-populate some books and reviews automatically for us.

Instead of dropping the code in directly for this class (it is very long), please refer to the Configuration.cs file on GitHub.

This code gives us a little bit of starting data to play with, instead of having to add a bunch of data manually each time we make changes to our schema that require the database to be re-initialized (not really in our case as we have an extremely simple data model, but in larger applications this is very useful).
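For a flavour of what that Seed method does, here is a heavily trimmed, hypothetical sketch (the real Configuration.cs on GitHub is much longer and the data differs);

// Inside the Configuration class from earlier; requires System.Collections.Generic,
// System.Data.Entity.Migrations and BooksAPI.Models.
protected override void Seed(BooksContext context)
{
    // AddOrUpdate keeps the seed idempotent; running it twice won't duplicate rows.
    context.Books.AddOrUpdate(b => b.Title,
        new Book
        {
            Title = "Sample Book",
            Description = "A seeded book to play with.",
            Price = 9.99m,
            ImageUrl = "http://placehold.it/100x150",
            Reviews = new List<Review>
            {
                new Review { Description = "Great read", Rating = 5 }
            }
        });

    context.SaveChanges();
}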

Books Endpoint

Next, we want to create the RESTful endpoint that will retrieve all the books data. Create a new Web API controller called BooksController and add the following;

public class BooksController : ApiController
{
    [HttpGet]
    public async Task<IHttpActionResult> Get()
    {
        using (var context = new BooksContext())
        {
            return Ok(await context.Books.Include(x => x.Reviews).ToListAsync());
        }
    }
}

With this code we are fully exploiting recent changes to the .NET framework: the introduction of async and await. Writing asynchronous code in this manner allows the thread to be released whilst data (Books and Reviews) is being retrieved from the database and converted to objects for consumption by our code. When the asynchronous operation is complete, the code picks up where it left off and continues executing. (By which we mean the hydrated data objects are passed to the underlying framework, converted to JSON/XML, and returned to the client.)

Reviews Endpoint

We’re also going to enable authorized users to post reviews and delete reviews. For this we will need a ReviewsController with the relevant Post and Delete methods. Create a new Web API controller called ReviewsController and add the following code;

public class ReviewsController : ApiController
{
    [HttpPost]
    public async Task<IHttpActionResult> Post([FromBody] ReviewViewModel review)
    {
        using (var context = new BooksContext())
        {
            var book = await context.Books.FirstOrDefaultAsync(b => b.Id == review.BookId);
            if (book == null)
            {
                return NotFound();
            }

            var newReview = context.Reviews.Add(new Review
            {
                BookId = book.Id,
                Description = review.Description,
                Rating = review.Rating
            });

            await context.SaveChangesAsync();
            return Ok(new ReviewViewModel(newReview));
        }
    }

    [HttpDelete]
    public async Task<IHttpActionResult> Delete(int id)
    {
        using (var context = new BooksContext())
        {
            var review = await context.Reviews.FirstOrDefaultAsync(r => r.Id == id);
            if (review == null)
            {
                return NotFound();
            }

            context.Reviews.Remove(review);
            await context.SaveChangesAsync();
        }
        return Ok();
    }
}

There are a couple of good practices in play here that we need to highlight.

The first method, Post, allows the user to add a new review. Notice the parameter for the method;

[FromBody] ReviewViewModel review

The [FromBody] attribute tells Web API to look for the data for the method argument in the body of the HTTP message that we received from the client, and not in the URL. The parameter's type is a view model that wraps around the Review entity itself. Add a new folder to your project called ViewModels, add a new class called ReviewViewModel and add the following code;

public class ReviewViewModel
{
    public ReviewViewModel()
    {
    }

    public ReviewViewModel(Review review)
    {
        if (review == null)
        {
            return;
        }

        BookId = review.BookId;
        Rating = review.Rating;
        Description = review.Description;
    }

    public int BookId { get; set; }
    public int Rating { get; set; }
    public string Description { get; set; }

    public Review ToReview()
    {
        return new Review
        {
            BookId = BookId,
            Description = Description,
            Rating = Rating
        };
    }
}

We are just copying all the properties from the Review entity to the ReviewViewModel entity and vice-versa. So why bother? The first reason is to help mitigate a well-known under/over-posting vulnerability (good write-up about it here) inherent in most web services. Also, it helps prevent unwanted information being sent to the client. With this approach we have to explicitly expose data to the client by adding properties to the view model.

For this scenario, this approach is probably a bit of overkill, but I highly recommend it; keeping your application secure is important, as is preventing the leaking of potentially sensitive information. A tool I've used in the past to simplify this mapping code is AutoMapper. I highly recommend checking it out.
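As a hedged example, with AutoMapper's static API (as it stood at the time of writing; newer versions favour a MapperConfiguration instance), the hand-written copying collapses to this;

// One-time configuration, e.g. in Application_Start.
Mapper.CreateMap<Review, ReviewViewModel>();
Mapper.CreateMap<ReviewViewModel, Review>();

// Then, wherever the mapping is needed;
var viewModel = Mapper.Map<ReviewViewModel>(review);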

Important note: In order to keep our API RESTful, we return the newly created entity (or its view model representation) back to the client for consumption, removing the need to re-fetch the entire data set.

The Delete method is trivial. We accept the Id of the review we want to delete as a parameter, then fetch the entity and finally remove it from the collection. Calling SaveChangesAsync will make the change permanent.

Meaningful response codes

We want to return useful information back to the client as much as possible. Notice that the Post method returns NotFound(), which translates to a 404 HTTP status code, if the corresponding Book for the given review cannot be found. This is useful for client side error handling. Returning Ok() will return 200 (HTTP ‘Ok’ status code), which informs the client that the operation was successful.

Authentication and Authorization Using OAuth and JSON Web Tokens (JWT)

My preferred approach for dealing with authentication and authorization is to use JSON Web Tokens (JWT). We will open up an OAuth endpoint that accepts user credentials and returns a token which describes the user's claims. For each of the user's roles we will add a claim (which could be used to control which views the user has access to on the client side).

We use OWIN to add our OAuth configuration into the pipeline. Add a new class to the project called Startup.cs and add the following code;

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof (BooksAPI.Startup))]

namespace BooksAPI
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureOAuth(app);
        }
    }
}

Notice that Startup is a partial class. I've done that because I want to keep this class as simple as possible; as the application becomes more complicated and we add more and more middleware, this class will grow exponentially. You could use a static helper class here, but the MSDN documentation seems to lean towards using partial classes specifically.

Under the App_Start folder add a new class called Startup.OAuth.cs and add the following code;

using System;
using System.Configuration;
using BooksAPI.Core;
using BooksAPI.Identity;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.DataHandler.Encoder;
using Microsoft.Owin.Security.Jwt;
using Microsoft.Owin.Security.OAuth;
using Owin;

namespace BooksAPI
{
    public partial class Startup
    {
        public void ConfigureOAuth(IAppBuilder app)
        {            
        }
    }
}

Note. When I wrote this code originally I encountered a quirk. After spending hours pulling out my hair trying to figure out why something was not working, I eventually discovered that the ordering of the code in this class is very important. If you don’t copy the code in the exact same order, you may encounter unexpected behaviour. Please add the code in the same order as described below.

OAuth secrets

First, add the following code;

var issuer = ConfigurationManager.AppSettings["issuer"];
var secret = TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["secret"]);

  • Issuer – a unique identifier for the entity that issued the token (not to be confused with Entity Framework’s entities)
  • Secret – a secret key used to secure the token and prevent tampering

I keep these values in the Web configuration file (Web.config). To be precise, I split these values out into their own configuration file called keys.config and add a reference to that file in the main Web.config. I do this so that I can exclude just the keys from source control by adding a line to my .gitignore file.

To do this, open Web.config and change the <appSettings> section as follows;

<appSettings file="keys.config">
</appSettings>

Now add a new file to your project called keys.config and add the following code;

<appSettings>
  <add key="issuer" value="http://localhost/"/>
  <add key="secret" value="IxrAjDoa2FqElO7IhrSrUJELhUckePEPVpaePlS_Xaw"/>
</appSettings>

Adding objects to the OWIN context

We can make use of OWIN to manage instances of objects for us, on a per request basis. The pattern is comparable to IoC, in that you tell the “container” how to create an instance of a specific type of object, then request the instance using a Get<T> method.

Add the following code;

app.CreatePerOwinContext(() => new BooksContext());
app.CreatePerOwinContext(() => new BookUserManager());

The first time we request an instance of BooksContext, for example, the lambda expression will execute and a new BooksContext will be created and returned to us. Subsequent calls within the same HTTP request will return the same instance.

Important note: The life-cycle of the object instance is per-request. As soon as the request is complete, the instance is cleaned up.
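To pull one of these instances back out later, ask the OWIN context for it. For example, inside a Web API controller (a hypothetical sketch; GetOwinContext comes from the Microsoft.AspNet.WebApi.Owin package, and the parameterless Get<T> from Microsoft.AspNet.Identity.Owin);

public class ExampleController : ApiController // hypothetical controller
{
    [HttpGet]
    public IHttpActionResult Get()
    {
        // Both calls return the per-request instances registered in Startup.
        var context = Request.GetOwinContext().Get<BooksContext>();
        var userManager = Request.GetOwinContext().Get<BookUserManager>();

        return Ok(context.Books.Count());
    }
}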

Enabling Bearer Authentication/Authorization

To enable bearer authentication, add the following code;

app.UseJwtBearerAuthentication(new JwtBearerAuthenticationOptions
{
    AuthenticationMode = AuthenticationMode.Active,
    AllowedAudiences = new[] { "Any" },
    IssuerSecurityTokenProviders = new IIssuerSecurityTokenProvider[]
    {
        new SymmetricKeyIssuerSecurityTokenProvider(issuer, secret)
    }
});

The key takeaways of this code;

  • State who is the audience (we’re specifying “Any” for the audience, as this is a required field but we’re not fully implementing it).
  • State who is responsible for generating the tokens. Here we’re using SymmetricKeyIssuerSecurityTokenProvider and passing it our secret key to prevent tampering. We could use the X509CertificateSecurityTokenProvider, which uses an X509 certificate to secure the token (but I’ve found these to be overly complex in the past and I prefer a simpler implementation).

This code adds JWT bearer authentication to the OWIN pipeline.

Enabling OAuth

We need to expose an OAuth endpoint so that the client can request a token (by passing a user name and password).

Add the following code;

app.UseOAuthAuthorizationServer(new OAuthAuthorizationServerOptions
{
    AllowInsecureHttp = true,
    TokenEndpointPath = new PathString("/oauth2/token"),
    AccessTokenExpireTimeSpan = TimeSpan.FromMinutes(30),
    Provider = new CustomOAuthProvider(),
    AccessTokenFormat = new CustomJwtFormat(issuer)
});

Some important notes with this code;

  • We’re going to allow insecure HTTP requests whilst we are in development mode. You might want to disable this using an #if DEBUG directive so that you don’t allow insecure connections in production.
  • Open an endpoint under /oauth2/token that accepts post requests.
  • When generating a token, make it expire after 30 minutes (1800 seconds).
  • We will use our own provider, CustomOAuthProvider, and formatter, CustomJwtFormat, to take care of authentication and building the actual token itself.

We need to write the provider and formatter next.

Formatting the JWT

Create a new class under the Identity folder called CustomJwtFormat.cs. Add the following code;

namespace BooksAPI.Identity
{
    using System;
    using System.Configuration;
    using System.IdentityModel.Tokens;
    using Microsoft.Owin.Security;
    using Microsoft.Owin.Security.DataHandler.Encoder;
    using Thinktecture.IdentityModel.Tokens;

    public class CustomJwtFormat : ISecureDataFormat<AuthenticationTicket>
    {
        private static readonly byte[] _secret = TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["secret"]);
        private readonly string _issuer;

        public CustomJwtFormat(string issuer)
        {
            _issuer = issuer;
        }

        public string Protect(AuthenticationTicket data)
        {
            if (data == null)
            {
                throw new ArgumentNullException(nameof(data));
            }

            var signingKey = new HmacSigningCredentials(_secret);
            var issued = data.Properties.IssuedUtc;
            var expires = data.Properties.ExpiresUtc;

            return new JwtSecurityTokenHandler().WriteToken(new JwtSecurityToken(_issuer, null, data.Identity.Claims, issued.Value.UtcDateTime, expires.Value.UtcDateTime, signingKey));
        }

        public AuthenticationTicket Unprotect(string protectedText)
        {
            throw new NotImplementedException();
        }
    }
}

This is a complicated-looking class, but it's pretty straightforward. We are just fetching all the information needed to generate the token, including the claims, issued date, expiration date and key, and then we're generating the token and returning it back.
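If you're curious what the generated token actually contains, you can decode it without validating the signature (a hypothetical snippet using JwtSecurityTokenHandler, which ships with the System.IdentityModel.Tokens.Jwt package we installed earlier);

var handler = new JwtSecurityTokenHandler();
var jwt = (JwtSecurityToken) handler.ReadToken(tokenString); // tokenString is a raw JWT

// jwt.Claims exposes the name, sub and role claims, plus iss, aud, exp and nbf.
foreach (var claim in jwt.Claims)
{
    Console.WriteLine("{0}: {1}", claim.Type, claim.Value);
}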

Please note: Some of the code we are writing today was influenced by JSON Web Token in ASP.NET Web API 2 using OWIN by Taiseer Joudeh. I highly recommend checking it out.

The authentication bit

We're almost there, honest! Now we want to authenticate the user. Create a new class under the Identity folder called CustomOAuthProvider.cs, and add the following code;

using System.Linq;
using System.Security.Claims;
using System.Security.Principal;
using System.Threading;
using System.Threading.Tasks;
using System.Web;
using BooksAPI.Core;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.AspNet.Identity.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.OAuth;

namespace BooksAPI.Identity
{
    public class CustomOAuthProvider : OAuthAuthorizationServerProvider
    {
        public override Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {
            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] {"*"});

            var user = context.OwinContext.Get<BooksContext>().Users.FirstOrDefault(u => u.UserName == context.UserName);
            if (user == null || !context.OwinContext.Get<BookUserManager>().CheckPassword(user, context.Password))
            {
                context.SetError("invalid_grant", "The user name or password is incorrect");
                context.Rejected();
                return Task.FromResult<object>(null);
            }

            var ticket = new AuthenticationTicket(SetClaimsIdentity(context, user), new AuthenticationProperties());
            context.Validated(ticket);

            return Task.FromResult<object>(null);
        }

        public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            context.Validated();
            return Task.FromResult<object>(null);
        }

        private static ClaimsIdentity SetClaimsIdentity(OAuthGrantResourceOwnerCredentialsContext context, IdentityUser user)
        {
            var identity = new ClaimsIdentity("JWT");
            identity.AddClaim(new Claim(ClaimTypes.Name, context.UserName));
            identity.AddClaim(new Claim("sub", context.UserName));

            var userRoles = context.OwinContext.Get<BookUserManager>().GetRoles(user.Id);
            foreach (var role in userRoles)
            {
                identity.AddClaim(new Claim(ClaimTypes.Role, role));
            }

            return identity;
        }
    }
}

As we’re not checking the audience, when ValidateClientAuthentication is called we can just validate the request. When the request has a grant_type of password, which all our requests to the OAuth endpoint will have, the above GrantResourceOwnerCredentials method is executed. This method authenticates the user and creates the claims to be added to the JWT.

Testing

There are two tools you can use to test this.

Technique 1 – Using the browser

Open up a web browser, and navigate to the books URL.

Testing with the web browser

You will see the list of books, displayed as XML. This is because Web API can serve up data either as XML or as JSON. Personally, I do not like XML; JSON is my choice these days.

Technique 2 (Preferred) – Using Postman

To make Web API respond with JSON we need to send along an Accept header. The best tool to enable us to do this (for Google Chrome) is Postman. Download it and give it a go if you like.

Drop the same URL into the Enter request URL field, and click Send. Notice the response is in JSON;

Postman response in JSON

This worked because Postman automatically adds the Accept header to each request. You can see this by clicking on the Headers tab. If the header isn’t there and you’re still getting XML back, just add the header as shown in the screenshot and re-send the request.

To test the delete method, change the HTTP verb to Delete and add the ReviewId to the end of the URL. For example; http://localhost:62996/api/reviews/9

Putting it all together

First, we need to restrict access to our endpoints.

Add a new file to the App_Start folder, called FilterConfig.cs and add the following code;

public class FilterConfig
{
    public static void Configure(HttpConfiguration config)
    {
        config.Filters.Add(new AuthorizeAttribute());
    }
}

And call the code from Global.asax.cs as follows;

GlobalConfiguration.Configure(FilterConfig.Configure);

Adding this code will restrict access to all endpoints (except the OAuth endpoint) to requests that have been authenticated (a request that sends along a valid Jwt).

You have much more fine-grained control here, if required. Instead of adding the above code, you could instead add the AuthorizeAttribute to specific controllers or even specific methods. The added benefit here is that you can also restrict access to specific users or specific roles;

Example code;

[Authorize(Roles = "Admin")]

The roles value (“Admin”) can be a comma-separated list. For us, restricting access to all endpoints will suffice.
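For example, here is a hypothetical sketch of controller-level and action-level restrictions;

// Controller-level; every action requires a JWT containing the "Admin" role claim.
[Authorize(Roles = "Admin")]
public class AdminController : ApiController // hypothetical controller
{
    [HttpGet]
    public IHttpActionResult Get()
    {
        return Ok("admin only");
    }
}

// Action-level; AllowAnonymous opts a single action out of the global filter.
public class StatusController : ApiController // hypothetical controller
{
    [AllowAnonymous]
    [HttpGet]
    public IHttpActionResult Get()
    {
        return Ok("anyone, even unauthenticated");
    }
}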

To test that this code is working correctly, simply make a GET request to the books endpoint;

GET http://localhost:62996/api/books

You should get the following response;

{
  "message": "Authorization has been denied for this request."
}

Great, it's working. Now let's fix that problem.

Make a POST request to the OAuth endpoint, and include the following;

  • Headers
    • Accept application/json
    • Accept-Language en-gb
    • Audience Any
  • Body
    • username administrator
    • password administrator123
    • grant_type password

Shown in the below screenshot;

OAuth Request

Make sure you set the message type as x-www-form-urlencoded.

If you are interested, here is the raw message;

POST /oauth2/token HTTP/1.1
Host: localhost:62996
Accept: application/json
Accept-Language: en-gb
Audience: Any
Content-Type: application/x-www-form-urlencoded
Cache-Control: no-cache
Postman-Token: 8bc258b2-a08a-32ea-3cb2-2e7da46ddc09

username=administrator&password=administrator123&grant_type=password

The form data has been URL encoded and placed in the message body.

The web service should authenticate the request, and return a token (Shown in the response section in Postman). You can test that the authentication is working correctly by supplying an invalid username/password. In this case, you should get the following reply;

{
  "error": "invalid_grant"
}

This is deliberately vague to avoid giving any malicious users more information than they need.

Now to get a list of books, we need to call the endpoint passing in the token as a header.

Change the HTTP verb to GET and change the URL to; http://localhost:62996/api/books.

On the Headers tab in Postman, add the following additional headers;

Authorization Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1bmlxdWVfbmFtZSI6ImFkbWluaXN0cmF0b3IiLCJzdWIiOiJhZG1pbmlzdHJhdG9yIiwicm9sZSI6IkFkbWluaXN0cmF0b3IiLCJpc3MiOiJodHRwOi8vand0YXV0aHpzcnYuYXp1cmV3ZWJzaXRlcy5uZXQiLCJhdWQiOiJBbnkiLCJleHAiOjE0NTgwNDI4MjgsIm5iZiI6MTQ1ODA0MTAyOH0.uhrqQW6Ik_us1lvDXWJNKtsyxYlwKkUrCGXs-eQRWZQ

See screenshot below;

Authorization Header

Success! We have data from our secure endpoint.

Summary

In this introduction we looked at creating a project using Web API to issue and authenticate Jwt (JSON Web Tokens). We created a simple endpoint to retrieve a list of books, and also added the ability to post new reviews and delete existing ones in a RESTful way.

This project is the foundation for subsequent posts that will explore creating a rich client side application, using modern JavaScript frameworks, which will enable authentication and authorization.

ASP .NET 5 (vNext), first thoughts

Microsoft ASP .NET 5 is a major shift from traditional ASP .NET methodologies. Whilst I am not actively developing ASP .NET 5 applications at the minute, .NET has always been my bread and butter technology. When I look at industry trends here in the UK, all I see is .NET .NET .NET, therefore it is important to have one eye on the future. I’ve watched all the introduction videos on the ASP .NET website, but I also wanted to take a look at what ASP .NET 5 means to me.

This is not meant to be a fully formed post. This will come later down the line. Right now, I think ASP .NET 5 is evolving too quickly to be “bloggable” fully.

Version disambiguation and terminology

Let's take a second to disambiguate some terminology. Microsoft's understanding of versioning has always been different from everybody else's. This tweet from Todd Motto really sums it up;

Looks like versioning is not going to get any simpler for the time being

ASP .NET 5 (ASP .NET 4.6 is the current version)

Previously known as ASP .NET vNext, ASP .NET 5 is the successor of ASP .NET 4.6. In the past, versions of ASP .NET have followed the .NET Framework release cycle. It looks like that is coming to an end now. ASP .NET should not be confused with MVC. ASP .NET is a technology, MVC is a framework.

ASP .NET 5 is currently scheduled for release in the first quarter of 2016, as per this tweet from Scott Hanselman (I suspect this date will slip, though);

The ASP .NET team would rather “get it right” and take longer, than rush the product and get it wrong (which would spell long term disaster for the platform)

MVC 6

This is the new version of Microsoft’s Model-View-Controller framework. There is a nice post on StackOverflow that describes the new features of MVC 6. Here are a few of the best;

  • “Cloud optimization” … so better performance.
  • MVC, WebAPI and Web Pages are now unified.
  • Removed dependency on System.Web, which results in more than a 10x reduction in request overhead.
  • Built in dependency injection, which is pluggable, so it can be switched out for other DI providers.
  • Roslyn enables dynamic compilation. Save your file, refresh the browser. Works for C# too; no manual build step required.
  • Cross platform.

DNX (.NET Execution Environment)

The .NET Execution Environment, DNX, is a cross-platform runtime that will run your .NET applications. DNX is built around .NET Core, which is a super-lightweight framework for .NET applications, resulting in drastically improved performance thanks to a reduced pipeline. The dependency on the dinosaur assembly System.Web has gone, but in return you are restricted to more of a subset of features. This is a good thing, my friend. System.Web has gained every feature imaginable over the last 13 years, 75% of which you probably don't even care about.

Interesting new features and changes

  • Use of data annotations for things that would previously have been HTML helpers (Tag helpers)
  • Environment tag on _Layout. Enables a simple means to specify which resources to load depending on the application configuration (Debug mode, release mode etc)
  • Bower support
  • Gulp out of the box (interesting that they chose Gulp over Grunt; I think it's Gulp's superior speed that has won the day.)
  • .NET Core. Drastically reduced web pipeline, could result in 10x faster response in some cases (remains to be seen!).
  • Noticeably faster starting up.
  • Save to build. With Roslyn, it is now not necessary to build every time you make a change to a CSharp (.cs) code file. Just save and refresh. Compilation is done in memory.
  • Intellisense hints that assembly is not available in .NET Core (nice!)
  • Built in dependency injection, which can be switched out for a third party mechanism.
  • Web API is now no longer a separate component. Web API was originally a separate technology from MVC. The two were always very alike, and it makes sense that the two should be merged together.

Deleted stuff

  • Web.config has finally been removed and exchanged for a simpler JSON formatted file. Parties have been thrown for less.
  • packages.config has gone; it seems redundant now that things are in line with how the rest of the web develops, i.e. using package.json.

Bad points

  • Still heavy use of the ViewBag in default projects. I’d like to see the ViewBag removed entirely, but I suspect that will never happen.
  • The default project template is still full of “junk”, although it is now a bit simpler to tidy up. Visual Studio automatically manages Bower and npm packages, so removing a package is as simple as deleting it from the package.json file.

Summary

I am very keen to get cracking with ASP .NET 5 (vNext), although at the time of writing I feel that it is still a little bit too dynamic to start diving into at a deep level. The introduction of .NET Core, a cross-platform, open source subset of the .NET framework, is awesome… I can't wait to see the benefits of using this in the wild (reduced server costs, especially when running on a Linux-based machine, although that remains to be seen). The ViewBag still exists, but we can't have it all I suppose.

At this point, we’re at least 5-6 months away from a release, so develop with it at your own risk!

WCF custom authentication using ServiceCredentials

The generally accepted way of authenticating a user with WCF is with a User Name and Password, using the UserNamePasswordValidator class.  It's so common that even MSDN has a tutorial, and the MSDN documentation for WCF is seriously lacking at best.  The username/password approach does what it says on the tin: you pass a username and password credential from the client to the server, do your authentication, and throw an exception only if there is a problem.  It's a primitive approach, but it works.  But what about when you want to do something a little bit less trivial than that? ServiceCredentials is probably what you need.

Source code for this post is available on GitHub.

Scenario

I should preface this tutorial with a disclaimer, and this disclaimer is just my opinion.  WCF is incredibly poorly documented and at times counter-intuitive.  In fact, I generally avoid WCF development like the black plague, preferring technologies such as Web API.  The saving grace of WCF is that you have full control over a much more substantial set of functionality, and you’re not limited by REST but empowered by SOAP.  WCF plays particularly nicely with WPF, my favourite desktop software technology.  I’ve never used WCF as part of a web service before, and I doubt I ever will.

Tangent aside, sometimes it's not appropriate to authenticate a user with simply a username and password.  You might want to pass along a User Name and a License Key, along with some kind of unique identification code based on the hardware configuration of the user's computer.  Passing along this kind of information in a clean way can't be done with the simple UserNamePasswordValidator without using some hacky kind of delimited string approach (“UserName~LicenseKey~UniqueCode”).

So this is what we will do for this tutorial; pass a User Name, License Key and “Unique Key” from the client to the server for authentication and authorization.  And for security, we will avoid using WsHttpBinding and instead create a CustomBinding and use an SSL certificate (PFX on the server, CER on the client).  The reasons for this are discussed throughout this tutorial, but primarily because I’ve encountered so many problems with WsHttpBinding when used in a load balanced environment that it’s just not worth the hassle.

As a final note, we will also go “configuration free”.   All of this is hard coded because I can’t make the assumption that if you use this code in a production environment that you will have access to the machine certificate store, which a lot of web hosting providers restrict access to. As far as I know, the SSL certificate cannot be loaded from a file or a resource using the Web.config.

Server Side Implementation

Basic Structure

All preamble aside, let's dive straight in.  This tutorial isn't about creating a full-featured WCF service (a quick Google of the term “WCF Tutorial” presents about 878,000 results for that) so the specific implementation details aren't important.  What is important is that you have a Service Contract with at least one Operation Contract, for testing purposes.  Create a new WCF Service Application in Visual Studio, and refactor the boiler plate code as follows;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(int value);
}

public class EchoService : IEchoService
{
    public string Echo(int value)
    {
        return string.Format("You entered: {0}", value);
    }
}

And rename the SVC file to EchoService.svc.

Open up the Web.config file and delete everything inside the <system.serviceModel> element.  You don’t need any of that.

NuGet Package

It is not exactly clear to me why, but you’ll also need to install the NuGet package Microsoft ASP.NET Web Pages (Install-Package Microsoft.AspNet.WebPages).  I suppose this might be used for the WSDL definition page or the help page.  I didn’t really look into it.

Hosting In Local IIS (Internet Information Services)

I'm hosting this in IIS on my local machine (using a self-signed certificate) but I've thoroughly tested on a real server using a “real” SSL certificate, so I'll give you some helpful hints about that as we go along.

First things first;

  1. Open IIS Manager (inetmgr)
  2. Add a new website called “echo”
  3. Add a HTTP binding with the host name “echo.local”
  4. Open up the hosts file (C:\Windows\System32\drivers\etc\hosts) and add an entry for “echo.local” and IP address 127.0.0.1
  5. Use your favourite SSL self signed certificate creation tool to generate a certificate for cn=echo.local  (See another tutorial I wrote that explains how to do this).  Be sure to save the SSL certificate in PFX format, this is important for later.
  6. The quickest way I’ve found to generate the CER file (which is the certificate excluding the private key, for security) is to import the PFX into the Personal certificate store for your local machine.  Then right click > All Tasks > Export (excluding private key) and select DER encoded binary X.509 (.CER).  Save to some useful location for use later.  Naturally when doing this “for real”, your SSL certificate provider will provide the PFX and CER (and tonnes of other formats) so you can skip this step.  This tutorial assumes you don’t have access to the certificate store (either physically or programmatically) on the production machine.
  7. DO NOT add a binding for HTTPS unless you are confident that your web host fully supports HTTPS connections.  More on this later.
  8. Flip back to Visual Studio and publish your site to IIS.  I like to publish in “Debug” mode initially, just to make debugging slightly less impossible.

ImportCertificate

Open your favourite web browser and navigate to http://echo.local/EchoService.svc?wsdl.  You won't get much of anything at this time, just a message to say that service metadata is unavailable and instructions on how to turn it on.  Forget it, it's not important.

Beyond UserNamePasswordValidator

Normally at this stage you would create a UserNamePasswordValidator, add your database/authentication/authorization logic and be done after about 10 minutes of effort.  Well forget that, you should expect to spend at least the next hour creating a myriad of classes and helpers, authenticators, policies, tokens, factories and credentials.  Hey, I never said this was easy, just that it can be done.

Factory Pattern

The default WCF Service Application template you used to create the project generates a ServiceHost object with a Service property that points to the actual implementation of our service, the guts.  We need to change this to use a ServiceHostFactory, which will spawn new service hosts for us.  Right click on the EchoService.svc file and change the Service property to Factory, and EchoService to EchoServiceFactory;

//Change 
Service="WCFCustomClientCredentials.EchoService"

//To
Factory="WCFCustomClientCredentials.EchoServiceFactory"

Just before we continue, add a new class to your project called EchoServiceHost and derive from ServiceHost.  This is the actual ServiceHost that was previously created automatically under the hood for us.  We will flesh this out over the course of the tutorial.  For now, just add a constructor that takes an array of base addresses for our service, and which passes the type of the service to the base.

public class EchoServiceHost : ServiceHost
{
    public EchoServiceHost(params Uri[] addresses)
        : base(typeof(EchoService), addresses)
    {

    }
}

Now add another new class to your project, named EchoServiceFactory, and derived from ServiceHostFactoryBase.  Override CreateServiceHost and return a new instance of EchoServiceHost with the appropriate base address.

public class EchoServiceFactory : ServiceHostFactoryBase
{
    public override ServiceHostBase CreateServiceHost(string constructorString, Uri[] baseAddresses)
    {
        return new EchoServiceHost(new[]
        {
            new Uri("http://echo.local/")
        });
    }
}

We won’t initialize the ServiceHost just yet; we’ll come back to that later.

Custom ServiceCredentials

ServiceCredentials has many responsibilities, including serialization/deserialization and authentication/authorization.  Not to be confused with ClientCredentials, which has the additional responsibility of generating a token containing all the fields to pass to the service (User Name, License Key and Unique Code).  There is a pretty decent tutorial on MSDN which explains some concepts in a little more detail than I will attempt.  The ServiceCredentials will (as well as all the aforementioned things) load in our SSL certificate and use it (specifically, the private key) to verify that the certificate passed from the client is valid before attempting authentication/authorization. Before creating the ServiceCredentials class, add each of the following;

  1. EchoServiceCredentialsSecurityTokenManager which derives from ServiceCredentialsSecurityTokenManager.
  2. EchoSecurityTokenAuthenticator which derives from SecurityTokenAuthenticator.

Use ReSharper or Visual Studio IntelliSense to stub out any abstract methods for the time being.  We will flesh these out as we go along.

You will need to add a reference to System.IdentityModel, which we will need when creating our authorization policies next.

You can now flesh out the EchoServiceCredentials class as follows;

public class EchoServiceCredentials : ServiceCredentials
{
    public override SecurityTokenManager CreateSecurityTokenManager()
    {
        return new EchoServiceCredentialsSecurityTokenManager(this);
    }

    protected override ServiceCredentials CloneCore()
    {
        return new EchoServiceCredentials();
    }
}

If things are not clear at this stage, stick with me… your understanding will improve as we go along.

Namespaces and constant values

Several namespaces are required to identify our custom token and its properties.  It makes sense to stick these properties all in one place as constants, which we will also make available to the client later.  The token is ultimately encrypted using a Symmetric encryption algorithm (as shown later), so we can’t see the namespaces in the resulting SOAP message, but I’m sure they’re there.

Create a new class called EchoConstants, and add the following;

public class EchoConstants
{
    public const string EchoNamespace = "https://echo/";

    public const string EchoLicenseKeyClaim = EchoNamespace + "Claims/LicenseKey";
    public const string EchoUniqueCodeClaim = EchoNamespace + "Claims/UniqueCode";
    public const string EchoUserNameClaim = EchoNamespace + "Claims/UserName";
    public const string EchoTokenType = EchoNamespace + "Tokens/EchoToken";

    public const string EchoTokenPrefix = "ct";
    public const string EchoUrlPrefix = "url";
    public const string EchoTokenName = "EchoToken";
    public const string Id = "Id";
    public const string WsUtilityPrefix = "wsu";
    public const string WsUtilityNamespace = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd";

    public const string EchoLicenseKeyElementName = "LicenseKey";
    public const string EchoUniqueCodeElementName = "UniqueCodeKey";
    public const string EchoUserNameElementName = "UserNameKey";
}

All of these string values (except for the WsUtilityNamespace) are arbitrary; they simply give the message structure and conformity with open standards.

We will use these constant values throughout the remainder of the tutorial.

Security Token

Let’s work through this, starting with the most interesting classes first and working backwards.  The SecurityToken contains all our custom credentials that we will ultimately use to determine if the user is allowed to use the service.  A security token can contain pretty much anything you want, as long as the token itself has a unique ID, and a valid from/to date and time.

Add the following class to your project;

public class EchoToken : SecurityToken
{
    private readonly DateTime _effectiveTime = DateTime.UtcNow;
    private readonly string _id;
    private readonly ReadOnlyCollection<SecurityKey> _securityKeys;

    public string LicenseKey { get; set; }
    public string UniqueCode { get; set; }
    public string UserName { get; set; }

    public EchoToken(string licenseKey, string uniqueCode, string userName, string id = null)
    {
        LicenseKey = licenseKey;
        UniqueCode = uniqueCode;
        UserName = userName;

        _id = id ?? Guid.NewGuid().ToString();
        _securityKeys = new ReadOnlyCollection<SecurityKey>(new List<SecurityKey>());
    }

    public override string Id
    {
        get { return _id; }
    }

    public override ReadOnlyCollection<SecurityKey> SecurityKeys
    {
        get { return _securityKeys; }
    }

    public override DateTime ValidFrom
    {
        get { return _effectiveTime; }
    }

    public override DateTime ValidTo
    {
        get { return DateTime.MaxValue; }
    }
}

There are a few things to note here;

  1. The token has a unique identifier, in this case a random Guid.  You can use whatever mechanism you like here, as long as it results in a unique identifier for the token.
  2. The token is valid from now until forever.  You might want to put a realistic timeframe in place here.
  3. I don’t know what SecurityKeys is for, and it doesn’t seem to matter.

Before you rush off to MSDN, here is what it says;

Base class for security keys.

Helpful.

We’re not quite ready to use this token yet, so we’ll revisit later.  All the pieces come together at once, like a really dull jigsaw.

Authorization Policy

We only care at this point about authorizing the request based on the User Name, License Key and Unique Code provided in the token.  We could however use an Authorization Policy to limit access to certain service methods based on any one of these factors.  If you want to restrict access to your API in this way, see the MSDN documentation for more information.  If, however, the basic authorization is good enough for you, add the following code;

public class EchoTokenAuthorizationPolicy : IAuthorizationPolicy
{
    private readonly string _id;
    private readonly IEnumerable<ClaimSet> _issuedClaimSets;
    private readonly ClaimSet _issuer;

    public EchoTokenAuthorizationPolicy(ClaimSet issuedClaims)
    {
        if (issuedClaims == null)
        {
            throw new ArgumentNullException("issuedClaims");
        }

        _issuer = issuedClaims.Issuer;
        _issuedClaimSets = new[] { issuedClaims };
        _id = Guid.NewGuid().ToString();
    }

    public ClaimSet Issuer
    {
        get { return _issuer; }
    }

    public string Id
    {
        get { return _id; }
    }

    public bool Evaluate(EvaluationContext context, ref object state)
    {
        foreach (ClaimSet issuance in _issuedClaimSets)
        {
            context.AddClaimSet(this, issuance);
        }

        return true;
    }
}

The key to this working is the Evaluate method.  We are just adding each claim to the EvaluationContext claim set, without doing any sort of checks.  This is fine because we will do our own authorization as part of the SecurityTokenAuthenticator, shown next.

Security Token Authentication and Authorization

Now that we have our Authorization Policies in place, we can get down to business and tell WCF to allow or deny the request.  We must create a class that derives from SecurityTokenAuthenticator, and override the ValidateTokenCore method.  If an exception is thrown in this method, the request will be rejected.  You’re also required to return the authorization policies, which will be evaluated accordingly and the request rejected if the token does not have the claims required to access the desired operation.  How you authorize/authenticate the request is down to you, but will inevitably involve some database call or similar tasks to check for the existence and legitimacy of the given token parameters.

Here is a sample implementation;

public class EchoSecurityTokenAuthenticator : SecurityTokenAuthenticator
{
    protected override bool CanValidateTokenCore(SecurityToken token)
    {
        return (token is EchoToken);
    }

    protected override ReadOnlyCollection<IAuthorizationPolicy> ValidateTokenCore(SecurityToken token)
    {
        var echoToken = token as EchoToken;

        if (echoToken == null)
        {
            throw new ArgumentNullException("token");
        }

        var authorizationException = IsAuthorized(echoToken.LicenseKey, echoToken.UniqueCode, echoToken.UserName);
        if (authorizationException != null)
        {
            throw authorizationException;
        }

        var policies = new List<IAuthorizationPolicy>(3)
        {
            CreateAuthorizationPolicy(EchoConstants.EchoLicenseKeyClaim, echoToken.LicenseKey, Rights.PossessProperty),
            CreateAuthorizationPolicy(EchoConstants.EchoUniqueCodeClaim, echoToken.UniqueCode, Rights.PossessProperty),
            CreateAuthorizationPolicy(EchoConstants.EchoUserNameClaim, echoToken.UserName, Rights.PossessProperty),
        };

        return policies.AsReadOnly();
    }

    private static Exception IsAuthorized(string licenseKey, string uniqueCode, string userName)
    {
        Exception result = null;

        //Check if user is authorized.  If not you must return a FaultException

        return result;
    }

    private static EchoTokenAuthorizationPolicy CreateAuthorizationPolicy<T>(string claimType, T resource, string rights)
    {
        return new EchoTokenAuthorizationPolicy(new DefaultClaimSet(new Claim(claimType, resource, rights)));
    }
}
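
By way of illustration only, IsAuthorized might end up looking something like this (the null/empty checks are just placeholders for your real database lookup);

private static Exception IsAuthorized(string licenseKey, string uniqueCode, string userName)
{
    // Placeholder logic; swap in your own database/authentication checks here.
    bool looksValid = !string.IsNullOrEmpty(licenseKey)
                   && !string.IsNullOrEmpty(uniqueCode)
                   && !string.IsNullOrEmpty(userName);

    return looksValid ? null : new FaultException("User is not authorized to use this service.");
}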

Token Serialization

Before we can continue, we have neglected to discuss one very important detail.  WCF generates messages in XML SOAP format for standardised communication between the client and the server applications.  This is achieved by serializing the token using a token serializer.  Surprisingly, however, this doesn’t happen automatically.  You have to give WCF a hand and tell it exactly how to both read and write the messages.  It gives you the tools (an XmlReader and XmlWriter) but you have to do the hammering yourself.

The code for this isn’t short, so I apologise for that.  Here is an explanation of what happens;

  1. CanReadTokenCore is called when deserializing a token.  The responsibility of this method is to tell the underlying framework if this class is capable of reading the token contents.
  2. ReadTokenCore is called with an XmlReader, which provides access to the raw token itself.  You use the XmlReader to retrieve the parts of the token of interest (the User Name, Unique Code and License Key) and ultimately return a new SecurityToken (EchoSecurityToken).
  3. CanWriteTokenCore is called when serializing a token.  Return true if the serializer is capable of serializing the given token.
  4. WriteTokenCore is called with an XmlWriter and the actual SecurityToken.  Use both objects to do the serialization manually.

And the code itself;

public class EchoSecurityTokenSerializer : WSSecurityTokenSerializer
{
    private readonly SecurityTokenVersion _version;

    public EchoSecurityTokenSerializer(SecurityTokenVersion version)
    {
        _version = version;
    }

    protected override bool CanReadTokenCore(XmlReader reader)
    {
        if (reader == null)
        {
            throw new ArgumentNullException("reader");
        }
        if (reader.IsStartElement(EchoConstants.EchoTokenName, EchoConstants.EchoNamespace))
        {
            return true;
        }
        return base.CanReadTokenCore(reader);
    }

    protected override SecurityToken ReadTokenCore(XmlReader reader, SecurityTokenResolver tokenResolver)
    {
        if (reader == null)
        {
            throw new ArgumentNullException("reader");
        }
        if (reader.IsStartElement(EchoConstants.EchoTokenName, EchoConstants.EchoNamespace))
        {
            string id = reader.GetAttribute(EchoConstants.Id, EchoConstants.WsUtilityNamespace);

            reader.ReadStartElement();

            string licenseKey = reader.ReadElementString(EchoConstants.EchoLicenseKeyElementName, EchoConstants.EchoNamespace);
            string uniqueCode = reader.ReadElementString(EchoConstants.EchoUniqueCodeElementName, EchoConstants.EchoNamespace);
            string userName = reader.ReadElementString(EchoConstants.EchoUserNameElementName, EchoConstants.EchoNamespace);

            reader.ReadEndElement();

            return new EchoToken(licenseKey, uniqueCode, userName, id);
        }
        return DefaultInstance.ReadToken(reader, tokenResolver);
    }

    protected override bool CanWriteTokenCore(SecurityToken token)
    {
        if (token is EchoToken)
        {
            return true;
        }
        return base.CanWriteTokenCore(token);
    }

    protected override void WriteTokenCore(XmlWriter writer, SecurityToken token)
    {
        if (writer == null)
        {
            throw new ArgumentNullException("writer");
        }
        if (token == null)
        {
            throw new ArgumentNullException("token");
        }

        var echoToken = token as EchoToken;
        if (echoToken != null)
        {
            writer.WriteStartElement(EchoConstants.EchoTokenPrefix, EchoConstants.EchoTokenName, EchoConstants.EchoNamespace);
            writer.WriteAttributeString(EchoConstants.WsUtilityPrefix, EchoConstants.Id, EchoConstants.WsUtilityNamespace, token.Id);
            writer.WriteElementString(EchoConstants.EchoLicenseKeyElementName, EchoConstants.EchoNamespace, echoToken.LicenseKey);
            writer.WriteElementString(EchoConstants.EchoUniqueCodeElementName, EchoConstants.EchoNamespace, echoToken.UniqueCode);
            writer.WriteElementString(EchoConstants.EchoUserNameElementName, EchoConstants.EchoNamespace, echoToken.UserName);
            writer.WriteEndElement();
            writer.Flush();
        }
        else
        {
            base.WriteTokenCore(writer, token);
        }
    }
}

Service Credentials Security Token Manager

A long time ago… in a blog post right here, you created a class called EchoServiceCredentialsSecurityTokenManager.  The purpose of this class is to tell WCF that we want to use our custom token authenticator (EchoSecurityTokenAuthenticator) when it encounters our custom token.

Update the EchoServiceCredentialsSecurityTokenManager as follows;

public class EchoServiceCredentialsSecurityTokenManager : ServiceCredentialsSecurityTokenManager
{
    public EchoServiceCredentialsSecurityTokenManager(ServiceCredentials parent)
        : base(parent)
    {
    }

    public override SecurityTokenAuthenticator CreateSecurityTokenAuthenticator(SecurityTokenRequirement tokenRequirement, out SecurityTokenResolver outOfBandTokenResolver)
    {
        if (tokenRequirement.TokenType == EchoConstants.EchoTokenType)
        {
            outOfBandTokenResolver = null;
            return new EchoSecurityTokenAuthenticator();
        }
        return base.CreateSecurityTokenAuthenticator(tokenRequirement, out outOfBandTokenResolver);
    }

    public override SecurityTokenSerializer CreateSecurityTokenSerializer(SecurityTokenVersion version)
    {
        return new EchoSecurityTokenSerializer(version);
    }
}

The code is pretty self-explanatory.  When an EchoToken is encountered, use the EchoSecurityTokenAuthenticator to confirm that the token is valid, authentic and authorized.  Also, the token can be serialized/deserialized using the EchoSecurityTokenSerializer.

Service Host Endpoints

The last remaining consideration is exposing endpoints so that the client has “something to connect to”.  This is done in EchoServiceHost by overriding the InitializeRuntime method, as shown;

protected override void InitializeRuntime()
{
    var baseUri = new Uri("http://echo.local");
    var serviceUri = new Uri(baseUri, "EchoService.svc");

    Description.Behaviors.Remove((typeof(ServiceCredentials)));

    var serviceCredential = new EchoServiceCredentials();
    serviceCredential.ServiceCertificate.Certificate = new X509Certificate2(Resources.echo, string.Empty, X509KeyStorageFlags.MachineKeySet);
    Description.Behaviors.Add(serviceCredential);

    var behaviour = new ServiceMetadataBehavior { HttpGetEnabled = true, HttpsGetEnabled = false };
    Description.Behaviors.Add(behaviour);

    Description.Behaviors.Find<ServiceDebugBehavior>().IncludeExceptionDetailInFaults = true;
    Description.Behaviors.Find<ServiceDebugBehavior>().HttpHelpPageUrl = serviceUri;

    AddServiceEndpoint(typeof(IEchoService), new BindingHelper().CreateHttpBinding(), string.Empty);

    base.InitializeRuntime();
}

The code does the following;

  1. Define the base URL and the service URL
  2. Remove the default implementation of ServiceCredentials, and replace with our custom implementation.  Ensure that the custom implementation uses our SSL certificate (in this case, the SSL certificate is added to the project as a resource).  If the PFX (and it must be a PFX) requires a password, be sure to specify it.
  3. Define and add a metadata endpoint (not strictly required)
  4. Turn on detailed exceptions for debugging purposes, and expose a help page (again not strictly required)
  5. Add an endpoint for our service, use a custom binding.  (DO NOT attempt to use WsHttpBinding or BasicHttpsBinding, you will lose 4 days of your life trying to figure out why it doesn’t work in a load balanced environment!)

Custom Http Binding

In the interest of simplicity, I want the server and the client to use the exact same binding.  To make this easier, I’ve extracted the code out into a separate helper class which will be referenced by both once we’ve refactored (discussed next).  We’re using HTTP  right now but we will discuss security and production environments towards the end of the post.  The custom binding will provide some level of security via a Symmetric encryption algorithm that will be applied to aspects of the message.

public class BindingHelper
{
    public Binding CreateHttpBinding()
    {
        var httpTransport = new HttpTransportBindingElement
        {
            MaxReceivedMessageSize = 10000000
        };

        var messageSecurity = new SymmetricSecurityBindingElement();

        var x509ProtectionParameters = new X509SecurityTokenParameters
        {
            InclusionMode = SecurityTokenInclusionMode.Never
        };

        messageSecurity.ProtectionTokenParameters = x509ProtectionParameters;
        return new CustomBinding(messageSecurity, httpTransport);
    }
}

Note, I’ve increased the max message size to 10,000,000 bytes (10MB ish) because this is appropriate for my scenario.  You might want to think long and hard about doing this.  The default message size limit is relatively small to help ward off DDoS attacks, so think carefully before changing the default.  10MB is a lot of data to receive in a single request, even though it might not sound like much.

With the endpoint now exposed, a client (if we had one) would be able to connect.  Let’s do some refactoring first to make our lives a bit easier.

Refactoring

In the interest of simplicity, I haven’t worried too much about the client so far.  We need to make some changes to the project structure so that some of the lovely code we have written so far can be shared and kept DRY.  Add a class library to your project, called Shared and move the following classes into it (be sure to update the namespaces and add the appropriate reference).

  1. BindingHelper.cs
  2. IEchoService.cs
  3. EchoSecurityTokenSerializer.cs
  4. EchoConstants.cs
  5. EchoToken.cs

Client Side Implementation

We’re about 2/3 of the way through now.  Most of the leg work has been done and we just have to configure the client correctly so it can make first contact with the server.

Create a new console application (or whatever you fancy) and start by adding a reference to the Shared library you just created for the server.  Add the SSL certificate (CER format, doesn’t contain the private key) to your project as a resource.  Also add a reference to System.ServiceModel.

Custom ClientCredentials

The ClientCredentials works in a similar way to ServiceCredentials, but with a couple of subtle differences.  When you instantiate the ClientCredentials, you want to pass it all the arbitrary claims you want to send to the WCF service (License Key, Unique Code, User Name).  This object will later be passed to the serializer that you created as part of the server side code (EchoSecurityTokenSerializer).

First things first, create the EchoClientCredentials class as follows;

public class EchoClientCredentials : ClientCredentials
{
    public string LicenseKey { get; private set; }
    public string UniqueCode { get; private set; }
    public string ClientUserName { get; private set; }

    public EchoClientCredentials(string licenseKey, string uniqueCode, string userName)
    {
        LicenseKey = licenseKey;
        UniqueCode = uniqueCode;
        ClientUserName = userName;
    }

    protected override ClientCredentials CloneCore()
    {
        return new EchoClientCredentials(LicenseKey, UniqueCode, ClientUserName);
    }

    public override SecurityTokenManager CreateSecurityTokenManager()
    {
        return new EchoClientCredentialsSecurityTokenManager(this);
    }
}

The ClientCredentials has an abstract method, CreateSecurityTokenManager, which we will use to tell WCF how to ultimately generate our token.

Client side Security Token Manager

As discussed, the ClientCredentialsSecurityTokenManager is responsible for “figuring out” what to do with a token that it has encountered.  Before it uses its own underlying token providers, it gives us the chance to specify our own, by calling CreateSecurityTokenProvider.  We can check the token type to see if we can handle that token ourselves.

Create a new class, called EchoClientCredentialsSecurityTokenManager, that derives from ClientCredentialsSecurityTokenManager, and add the following code;

public class EchoClientCredentialsSecurityTokenManager : ClientCredentialsSecurityTokenManager
{
    private readonly EchoClientCredentials _credentials;

    public EchoClientCredentialsSecurityTokenManager(EchoClientCredentials connectClientCredentials)
        : base(connectClientCredentials)
    {
        _credentials = connectClientCredentials;
    }

    public override SecurityTokenProvider CreateSecurityTokenProvider(SecurityTokenRequirement tokenRequirement)
    {
        if (tokenRequirement.TokenType == EchoConstants.EchoTokenType)
        {
            // Handle our custom Echo token.
            return new EchoTokenProvider(_credentials);
        }
        if (tokenRequirement is InitiatorServiceModelSecurityTokenRequirement)
        {
            // Return server certificate.
            if (tokenRequirement.TokenType == SecurityTokenTypes.X509Certificate)
            {
                return new X509SecurityTokenProvider(_credentials.ServiceCertificate.DefaultCertificate);
            }
        }
        return base.CreateSecurityTokenProvider(tokenRequirement);
    }

    public override SecurityTokenSerializer CreateSecurityTokenSerializer(SecurityTokenVersion version)
    {
        return new EchoSecurityTokenSerializer(version);
    }
}

The code is pretty verbose, so we can see clearly what is happening here.  We inspect the token type and see if it matches that of our Echo token.  If we find a match, return an EchoTokenProvider (coming next), which is simply a wrapper containing our claims.  Note that we are also able to reuse the token serializer that we created as part of the server side work, a nice (not so little) time saver!

Security Token Provider

In this case, the security token provider is nothing more than a vessel that contains our client credentials.  The token provider instantiates the token, passes the client credentials, and passes the token off for serialization.

public class EchoTokenProvider : SecurityTokenProvider
{
    private readonly EchoClientCredentials _credentials;

    public EchoTokenProvider(EchoClientCredentials credentials)
    {
        if (credentials == null) throw new ArgumentNullException("credentials");

        _credentials = credentials;
    }

    protected override SecurityToken GetTokenCore(TimeSpan timeout)
    {
        return new EchoToken(_credentials.LicenseKey, _credentials.UniqueCode, _credentials.ClientUserName);
    }
}

Test Client

The client side code for establishing a connection with our service is relatively simple. We need each of the following:

  1. Define the endpoint (the address) of our service
  2. Create an instance of EchoClientCredentials
  3. Load the SSL certificate (the public key aspect at least) and pass to the credentials object we just instantiated
  4. Remove the default implementation of ClientCredentials and pass in our own
  5. Create a channel factory, and call our service method

Here is an example of what your client code would look like;

var serviceAddress = new EndpointAddress("http://echo.local/EchoService.svc");

var channelFactory = new ChannelFactory<IEchoService>(new BindingHelper().CreateHttpBinding(), serviceAddress);

var credentials = new EchoClientCredentials("license key", "unique code", "user name");
var certificate = new X509Certificate2(Resources.echo);
credentials.ServiceCertificate.DefaultCertificate = certificate;

channelFactory.Endpoint.Behaviors.Remove(typeof(ClientCredentials));
channelFactory.Endpoint.Behaviors.Add(credentials);

var service = channelFactory.CreateChannel();
Console.WriteLine(service.Echo(10));
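
One small point not shown above; the proxy returned by CreateChannel also implements IClientChannel, so when you’re finished with it you can (and should) close everything down gracefully;

((IClientChannel)service).Close();
channelFactory.Close();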

Security and Production Environment Considerations

Throughout this tutorial I have used HTTP bindings and told you explicitly not to use HTTPS, and there is a very good reason for that.  If you have a simple hosting environment, i.e. an environment that is NOT load balanced, then you can go ahead and make the following changes;

  • Change your service URL to HTTPS
  • Change HttpTransportBindingElement (on the server, inside the BindingHelper) to HttpsTransportBindingElement.
  • Add a HTTPS binding in IIS

Re-launch the client and all should be good.  If you get the following error message, you’re in big trouble.

The protocol ‘https’ is not supported.

After 4 days of battling with this error, I found out what the problem was.  Basically, WCF requires end-to-end HTTPS for HTTPS to be “supported”.  Take the following set up;

(Diagram: the client connects to a load balancer over HTTPS; the load balancer then forwards the request to the physical web server over plain HTTP)

Some hosting companies will load balance the traffic.  That makes absolutely perfect sense and is completely reasonable.  The communications will be made from the client (laptop, desktop or whatever) via HTTPS, that bit is fine.  If you go to the service via HTTPS you will get a response.  However, and here’s the key, the communication between the load balancer and the physical web server probably isn’t secured.  I.e. doesn’t use HTTPS.  So the end-to-end communication isn’t HTTPS and therefore you get the error message described.

To work around this, use a HTTPS binding on the client, and a HTTP binding on the server.  This guarantees that the traffic between the client and the load balancer is secure (thus preventing man-in-the-middle attacks), but the traffic between the load balancer and the physical web server is not (you’ll have to decide for yourself if you can live with that).
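
For completeness, here is a rough sketch of what the client-side HTTPS version of the binding helper might look like (CreateHttpsBinding is a hypothetical addition to the BindingHelper; it simply swaps the transport element);

public Binding CreateHttpsBinding()
{
    // Same message security as the HTTP binding, but over an HTTPS transport.
    var httpsTransport = new HttpsTransportBindingElement
    {
        MaxReceivedMessageSize = 10000000
    };

    var messageSecurity = new SymmetricSecurityBindingElement();

    var x509ProtectionParameters = new X509SecurityTokenParameters
    {
        InclusionMode = SecurityTokenInclusionMode.Never
    };

    messageSecurity.ProtectionTokenParameters = x509ProtectionParameters;
    return new CustomBinding(messageSecurity, httpsTransport);
}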

Quirks

I’ve encountered a few quirks whilst developing this service over the last few weeks.  Quirks are things I can’t explain or don’t care to understand.  You must make the following changes to the server side code, or else it might not work.  If you find any other quirks, feel free to let me know and I’ll credit your discovery;

 

You must add the AddressFilterMode ‘Any’ to the service implementation, or it won’t work.

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
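
Applied to the EchoService implementation from earlier, it looks like this;

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
public class EchoService : IEchoService
{
    public string Echo(int value)
    {
        return string.Format("You entered: {0}", value);
    }
}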

Summary

A lot of work is required to be able to do custom authentication using ServiceCredentials with WCF, no fewer than 18 classes in total. For cases when a trivial User Name and password simply won’t suffice, you can use this approach. WCF works really well when developing non-web based applications, but the lack of documentation can make development and maintenance harder than it should be. Be careful when using in a load balanced environment, you may need to make some changes to your bindings as already discussed.

8 things every .NET developer must understand

You’ve been in your current job for a while now, and you’re really starting to get good at what you do.  You’re perhaps thinking about finding something new, and you’re wondering what sort of questions a potential new employer might ask.  I’ve been interviewing a lot recently and I have noticed there are 8 questions that get asked a lot.  Spend some time and make sure that you understand each point in turn, doing so will help make that dream job become a reality.

SOLID Principles

The ultimate acronym of acronyms.  You’ve heard of it, but do you know what it stands for?  Do you really understand what each principle means?  Yeah, thought so.  This awesome video by Derick Bailey will clear things up a lot for you.

Garbage Collection & IDisposable

One of the best features of developing with any .NET language is the lack of effort you have to put in to garbage collection.  You generally don’t have to care too much about 1st/2nd/3rd gen collection cycles, de-allocating memory or anything like that.  It’s still a very important topic, however, and every .NET developer should understand how it works.

Once you become a more experienced developer (and I’m especially talking to WPF developers here) you quickly learn that memory management isn’t a forgotten topic.  Failing to unsubscribe from events, failing to close streams, and keeping hold of large objects (say, instantiating them in a loop that never ends) are sure-fire ways to balloon your app’s memory usage, eventually resulting in a crash (commonly referred to as memory leaks).

A useful way of ensuring that resources are correctly cleaned up in a timely manner is to implement the IDisposable interface (and actually use it within a using block) on your objects.  Make sure you understand how this works and how to implement it.

Example:

private Boolean disposed;

protected virtual void Dispose(Boolean disposing)
{
    if (disposed)
    {
        return;
    }

    if (disposing)
    {
        //TODO: Managed cleanup code here, while managed refs still valid
    }
    //TODO: Unmanaged cleanup code here

    disposed = true;
}

public void Dispose()
{
    Dispose(true);
    GC.SuppressFinalize(this);
}

~Program()
{
    Dispose(false);
}

Code snippet taken from SideWaffle.  It’s not enough to simply implement IDisposable, you have to take it a step further by adding a second Dispose method to ensure that both managed and unmanaged resources are properly disposed.
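
Consuming the object with a using block then guarantees that Dispose is called, even if an exception is thrown (SomeDisposableResource is a made-up name for illustration);

using (var resource = new SomeDisposableResource())
{
    // Work with the resource; Dispose is called automatically when the block exits.
}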

Useful resources:

Three Common Causes of Memory Leaks in Managed Applications (DavidKlineMS)

Garbage Collector Basics and Performance Hints (MSDN)

Writing High-Performance .NET Code (Ben Watson)

Explain why you might use MVC over WebForms?

Another curve ball that employers might throw at you is “Why might you decide to use ASP .NET MVC over something like WebForms”.  I stuttered for a good 30 seconds before I eventually came up with a decent answer to this one, because simply saying “because MVC is better” is not a good enough argument.

Here are some things that come to mind;

  • MVC generates much simpler HTML code, which will be easier to style and maintain over time.
  • MVC arguably has a smaller learning curve, because Razor is very intuitive and developers get to reuse their existing knowledge of HTML/CSS without having to learn how specific user controls work.
  • Due to MVC’s simplified page lifecycle, overhead on the server is reduced potentially resulting in better performance.

There is endless argument about this on the web (the only example you need). I think in reality the employer is trying to establish two things here;

  • How well do you know your frameworks
  • But more importantly, can you assess the benefits and drawbacks of different frameworks and make an informed, unbiased decision regarding which one to use.  I.e. don’t just use it because everybody else is.

No-SQL databases

If you think that SQL Server is the be-all-and-end-all, then it’s time to wake up! It’s the year 2014 and the tech world has moved on.  I’m not suggesting for a second that companies are abandoning SQL Server, I believe that it will continue to play a major role in our industry for at least the next 5 years.  However, No-SQL databases are gaining massive traction because of their general speed, ease of use, and scalability benefits (not to mention the fact that SQL Server is very expensive, whereas RavenDB and MongoDB are much more affordable).

I’d recommend that you look at, and understand, at least RavenDB and MongoDB.

Boxing and Un-boxing

It simply amazes me just how many developers don’t understand boxing and un-boxing.  Granted, it’s been less of an issue since generics were introduced in .NET 2.0, but done wrong, your application’s performance and memory usage can be seriously affected.
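
As a quick refresher, boxing happens when a value type is converted to object (allocating a new object on the heap), and un-boxing when it is explicitly cast back;

int number = 42;
object boxed = number;      // boxing: the value is copied into a new object on the heap
int unboxed = (int)boxed;   // un-boxing: an explicit cast copies the value back out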


Also note, when a prospective employer asks you to explain this problem, they may also ask you to explain the difference between reference types and value types.  Reference types of course are classes, whereas value types are structs.  A value type can be thought of as the actual value of an object, whereas a reference type typically contains the address of the actual value, or null (a value type cannot be null unless wrapped in Nullable<T>).
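
A contrived example of the difference in assignment semantics (the type names are made up for illustration);

public struct PointValue { public int X; }       // value type
public class PointReference { public int X; }    // reference type

var a = new PointValue { X = 1 };
var b = a;       // b is an independent copy
b.X = 2;         // a.X is still 1

var c = new PointReference { X = 1 };
var d = c;       // d points at the same object as c
d.X = 2;         // c.X is now 2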

Hoisting and Closures

Developers now are required to have a broader range of skills than has typically been the case.  It’s usually not enough to have just SQL and C# on your CV; employers are increasingly looking for more well-rounded developers, with a range of skills including (but not limited to) HTML, CSS, JavaScript, KnockoutJS, TDD, AngularJS and so on.

You may never have realised it, but when writing JavaScript code variable scope is not exactly black and white.

Take the following example (pinched and adapted from here)

(function(){

    x = 5; 
 
    alert(x);

    var x;

})();

What is the value of x?  No tricks here, the answer is of course 5, but why does this work?  Because the variable declaration is pulled (hoisted) to the top of the current scope.  So regardless of where you declare your variables inside a function, they will always be hoisted to the top.  Be sure that you understand this, as it’s a basic concept that is often misunderstood.

Similarly, closures are another confusing concept of JavaScript that you may be asked about.  In the simplest terms, when you have a function inside another function, the inner function has access to any declared variables in the outer function. 

Example:

(function(){

    var x = "Hello";

    var f = function(){
        alert(x + ", World!");
    }

    f();

})();

What is the result? Hello, World! of course, again no tricks.  The code in the inner function always has access to variables declared in the outer function.

That explanation should pass the Albert Einstein test.

If you can’t explain it to a six year old, you don’t understand it yourself.

Is a string a reference type or a value type?

The one I used to dread the most, until I learnt it properly and understood it.  A string is a reference type, but it behaves like a value type!  Unlike most other reference types, a string is immutable, meaning that the object itself cannot be changed.  When you call a method such as Remove or Substring, you are creating a copy of the string with the new value.  The original string remains intact until it is de-referenced.
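
For example;

string original = "Hello, World!";
string shortened = original.Substring(0, 5);  // "Hello" - a brand new string
// original still contains "Hello, World!"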

The primary reason for this is that strings can be arbitrarily large, so they are allocated on the heap like any other reference type, rather than on the stack.

As a side note, take the following code;

string c = "hello";
string d = "hello";

Console.WriteLine(c == d);

Why is the result true?  Partly because of a .NET optimization to reduce the memory footprint; identical string literals are interned, so under the hood each variable holds the same pointer (0x021x15ec in this case) to the actual value.  More importantly, though, the == operator is overloaded for strings to call String.Equals, so the actual value of each string is equality checked, not the pointer.
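
The distinction really matters when a string is typed as object, because == then falls back to reference equality;

string c = "hello";
object d = new string("hello".ToCharArray());  // same value, different (non-interned) instance

Console.WriteLine(c == d);        // False: reference comparison
Console.WriteLine(c.Equals(d));   // True: value comparison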

Summary

We looked at 8 concepts that every decent .NET developer should understand, especially when interviewing for a new role.  Whilst these may seem like simple concepts, they are often misunderstood, and this will quickly be picked up by even the most inexperienced interviewer.  It’s important for .NET developers to know their language and their tools inside out to ensure that they have every chance of landing that next dream job.

Quick tip: Avoid ‘async void’

When developing a Web API application recently with an AngularJS front end, I made a basic mistake and then lost 2 hours of my life trying to figure out what was causing the problem … async void.

It’s pretty common nowadays to use tasks to improve performance/scalability when writing a Web API controller.  Take the following code:

public async Task<Entry[]> Get()
{
    using (var context = new EntriesContext())
    {
        return await context.Entries.ToArrayAsync();
    }
}

At a high level, when ToArrayAsync is executed the call will be moved off onto another thread and the execution of the method will only continue once the operation is complete (when the data is returned from the database in this case).  This is great because it frees up the thread for use by other requests, resulting in better performance/scalability (we could argue about how true this is all day long, so lets not do this here! Smile).

So what about when you still want to harness this functionality, but you don’t need to return anything to the client? async void? Not quite.

Take the following Delete method:

public async void Delete(int id)
{
    using (var context = new EntriesContext())
    {
        Entry entity = await context.Entries.FirstOrDefaultAsync(c => c.Id == id);
        if (entity != null)
        {
            context.Entry(entity).State = EntityState.Deleted;
            await context.SaveChangesAsync();
        }
    }
}

The client uses the Id property to do what it needs to do, so it doesn’t care what actually gets returned…as long as the operation (deleting the entity) completes successfully.

To help illustrate the problem, here is the client side code (written in AngularJS, but it really doesn’t matter what the client side framework is);

$scope.delete = function () {

    var entry = $scope.entries[0];

    $http.delete('/api/Entries/' + entry.Id).then(function () {
        $scope.entries.splice(0, 1);
    });

};

When the delete operation is completed successfully (i.e. a 2xx response code), the then call-back method is raised and the entry is removed from the entries collection.  Only this code never actually runs.  So why?

If you’re lucky, your web browser will give you an error message to let you know that something went wrong…

(Screenshot: the error message as reported by the browser)

I have however seen this error get swallowed up completely.

To get the actual error message, you will need to use a HTTP proxy tool, such as Fiddler.  With this you can capture the response message returned by the server, which should look something like this (for the sake of clarity I’ve omitted all the HTML code which collectively makes up the yellow screen of death);

An asynchronous module or handler completed while an asynchronous operation was still pending.

Yep, you have a race condition.  The method returned before it finished executing.  Under the hood, because the method returns void rather than Task, ASP .NET has nothing to await, so the request can complete while the asynchronous operation is still pending, and the error is encountered.

To resolve the problem, simply change the return type of the method from void to Task.  Don’t worry, you don’t actually have to return anything, and the compiler knows not to generate a build error if there is no return statement.  An easy fix, when you know what the problem is!
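
The corrected method is identical apart from the return type;

public async Task Delete(int id)
{
    using (var context = new EntriesContext())
    {
        Entry entity = await context.Entries.FirstOrDefaultAsync(c => c.Id == id);
        if (entity != null)
        {
            context.Entry(entity).State = EntityState.Deleted;
            await context.SaveChangesAsync();
        }
    }
}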

Summary

Web API fully supports Tasks, which are helpful for writing more scalable applications.  When writing methods that don’t need to return a value to the client, it may seem to make sense to return void.  However, under the hood .NET requires the method to return Task in order for it to properly support asynchronous functionality.

AutoMapper

5 AutoMapper tips and tricks

AutoMapper is a productivity tool designed to help you write less repetitive mapping code. AutoMapper maps objects to objects, using both convention and configuration.  AutoMapper is flexible enough that it can be overridden so that it will work with even the oldest legacy systems.  This post demonstrates what I have found to be 5 of the most useful, lesser known features.

Tip: I wrote unit tests to demonstrate each of the basic concepts.  If you would like to learn more about unit testing, please check out my post C# Writing Unit Tests with NUnit And Moq.

Demo project code

This is the basic structure of the code I will use throughout the tutorial;

public class Doctor
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class HealthcareProfessional
{
    public string FullName { get; set; }
}

public class Person
{
    public string Title { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class KitchenCutlery
{
    public int Knifes { get; set; }
    public int Forks { get; set; }
}

public class Kitchen
{
    public int KnifesAndForks { get; set; }
}

public class MyContext : DbContext
{
    public DbSet<Doctor> Doctors { get; set; }
}

public class DbInitializer : DropCreateDatabaseAlways<MyContext>
{
    protected override void Seed(MyContext context)
    {
        context.Doctors.Add(new Doctor
        {
            FirstName = "Jon",
            LastName = "Preece",
            Title = "Mr"
        });
    }
}

I will refer back to this code in each example.

AutoMapper Projection

No doubt one of the best, and probably least used features of AutoMapper is projection.  AutoMapper, when used with an Object Relational Mapper (ORM) such as Entity Framework, can cast the source object to the destination type at database level. This may result in more efficient database queries.

AutoMapper provides the Project extension method, which extends the IQueryable interface for this task.  This means that the source object does not have to be fully retrieved before mapping can take place.

Take the following unit test;

[Test]
public void Doctor_ProjectToPerson_PersonFirstNameIsNotNull()
{
    //Arrange
    Mapper.CreateMap<Doctor, Person>()
            .ForMember(dest => dest.LastName, opt => opt.Ignore());

    //Act
    Person result;
    using (MyContext context = new MyContext())
    {
        context.Database.Log += s => Debug.WriteLine(s);
        result = context.Doctors.Project().To<Person>().FirstOrDefault();
    }

    //Assert
    Assert.IsNotNull(result.FirstName);
}

The query that is created and executed against the database is as follows;

SELECT TOP (1) 
    [d].[Id] AS [Id], 
    [d].[FirstName] AS [FirstName]
    FROM [dbo].[Doctors] AS [d]

Notice that LastName is not returned from the database?  This is quite a simple example, but the potential performance gains are obvious when working with more complex objects.

Recommended Further Reading: Instant AutoMapper

Automapper is a simple library that will help eliminate complex code for mapping objects from one to another. It solves the deceptively complex problem of mapping objects and leaves you with clean and maintainable code.

Instant Automapper Starter is a practical guide that provides numerous step-by-step instructions detailing some of the many features Automapper provides to streamline your object-to-object mapping. Importantly it helps in eliminating complex code.

Configuration Validation

Hands down the most useful, time saving feature of AutoMapper is Configuration Validation.  Basically, after you set up your maps, you can call Mapper.AssertConfigurationIsValid() to ensure that the maps you have defined make sense.  This saves you the hassle of having to run your project, navigate to the appropriate page, click button A/B/C and so on to test that your mapping code actually works.

Take the following unit test;

[Test]
public void Doctor_MapsToHealthcareProfessional_ConfigurationIsValid()
{
    //Arrange
    Mapper.CreateMap<Doctor, HealthcareProfessional>();

    //Act

    //Assert
    Mapper.AssertConfigurationIsValid();
}

AutoMapper throws the following exception;

AutoMapper.AutoMapperConfigurationException : 
Unmapped members were found. Review the types and members below.
Add a custom mapping expression, ignore, add a custom resolver, or modify the source/destination type
===================================================================
Doctor -> HealthcareProfessional (Destination member list)
MakingLifeEasier.Doctor -> MakingLifeEasier.HealthcareProfessional (Destination member list)
-------------------------------------------------------------------
FullName

AutoMapper can’t infer a map between Doctor and HealthcareProfessional because they are structurally very different.  A custom converter, or ForMember, needs to be used to indicate the relationship;

[Test]
public void Doctor_MapsToHealthcareProfessional_ConfigurationIsValid()
{
    //Arrange
    Mapper.CreateMap<Doctor, HealthcareProfessional>()
          .ForMember(dest => dest.FullName, opt => opt.MapFrom(src => string.Join(" ", src.Title, src.FirstName, src.LastName)));

    //Act

    //Assert
    Mapper.AssertConfigurationIsValid();
}

The test now passes because every public property now has a valid mapping.

Custom Conversion

Sometimes when the source and destination objects are too different to be mapped using convention, and simply too big to write elegant inline mapping code (ForMember) for each individual member, it can make sense to do the mapping yourself.  AutoMapper makes this easy by providing the ITypeConverter<TSource, TDestination> interface.

The following is an implementation for mapping Doctor to a HealthcareProfessional;

public class HealthcareProfessionalTypeConverter : ITypeConverter<Doctor, HealthcareProfessional>
{
    public HealthcareProfessional Convert(ResolutionContext context)
    {
        if (context == null || context.IsSourceValueNull)
            return null;

        Doctor source = (Doctor)context.SourceValue;

        return new HealthcareProfessional
        {
            FullName = string.Join(" ", new[] { source.Title, source.FirstName, source.LastName })
        };
    }
}

You instruct AutoMapper to use your converter by using the ConvertUsing method, passing the type of your converter, as shown below;

[Test]
public void Legacy_SourceMappedToDestination_DestinationNotNull()
{
    //Arrange
    Mapper.CreateMap<Doctor, HealthcareProfessional>()
            .ConvertUsing<HealthcareProfessionalTypeConverter>();

    Doctor source = new Doctor
    {
        Title = "Mr",
        FirstName = "Jon",
        LastName = "Preece",
    };

    Mapper.AssertConfigurationIsValid();

    //Act
    HealthcareProfessional result = Mapper.Map<HealthcareProfessional>(source);

    //Assert
    Assert.IsNotNull(result);
}

AutoMapper simply hands over the source object (Doctor) to you, and you return a new instance of the destination object (HealthcareProfessional), with the populated properties.  I like this approach because it means I can keep all my monkey mapping code in one single place.

Value Resolvers

Value resolvers allow for correct mapping of value types.  The source object KitchenCutlery contains a precise breakdown of the number of knives and forks in the kitchen, whereas the destination object Kitchen only cares about the sum total of both.  AutoMapper won’t be able to create a convention based mapping here for us, so we use a Value (type) Resolver;

public class KitchenResolver : ValueResolver<KitchenCutlery, int>
{
    protected override int ResolveCore(KitchenCutlery source)
    {
        return source.Knifes + source.Forks;
    }
}

The value resolver, similar to the type converter, takes care of the mapping and returns a result, but notice that it is specific to the individual property, and not the full object.

The following code snippet shows how to use a Value Resolver;

[Test]
public void Kitchen_KnifesKitchen_ConfigurationIsValid()
{
    //Arrange

    Mapper.CreateMap<KitchenCutlery, Kitchen>()
            .ForMember(dest => dest.KnifesAndForks, opt => opt.ResolveUsing<KitchenResolver>());

    //Act

    //Assert
    Mapper.AssertConfigurationIsValid();
}

Null Substitution

Think default values.  In the event that you want to give a destination object a default value when the source value is null, you can use AutoMapper’s NullSubstitute feature.

Example usage of the NullSubstitute method, applied individually to each property;

[Test]
public void Doctor_TitleIsNull_DefaultTitleIsUsed()
{
    //Arrange
    Doctor source = new Doctor
    {
        FirstName = "Jon",
        LastName = "Preece"
    };

    Mapper.CreateMap<Doctor, Person>()
            .ForMember(dest => dest.Title, opt => opt.NullSubstitute("Dr"));

    //Act
    Person result = Mapper.Map<Person>(source);

    //Assert
    Assert.AreSame(result.Title, "Dr");
}

Summary

AutoMapper is a productivity tool designed to help you write less repetitive mapping code.  You don’t have to rewrite your existing code or write code in a particular style to use AutoMapper, as AutoMapper is flexible enough to be configured to work with even the oldest legacy code.  Most developers aren’t using AutoMapper to its full potential, rarely straying away from Mapper.Map.  There are a multitude of useful tidbits, including Projection, Configuration Validation, Custom Conversion, Value Resolvers and Null Substitution, which can help simplify complex logic when used correctly.

How to create your own ASP .NET MVC model binder

Model binding is the process of converting POST data or data present in the Url into .NET objects.  ASP .NET MVC makes this very simple by providing the DefaultModelBinder.  You’ve probably seen this in action many times (even if you didn’t realise it!), but did you know you can easily write your own?

A typical ASP .NET MVC Controller

You’ve probably written or seen code like this many hundreds of times;

public ActionResult Index(int id)
{
    using (ExceptionManagerEntities context = new ExceptionManagerEntities())
    {
        Error entity = context.Errors.FirstOrDefault(c => c.ID == id);

        if (entity != null)
        {
            return View(entity);
        }
    }

    return View();
}

Where did Id come from? It probably came from one of three sources; the Url (Controller/View/{id}), the query string (Controller/View?id={id}), or the post data.  Under the hood, ASP .NET examines your controller method, and searches each of these places looking for data that matches the data type and the name of the parameter.  It may also look at your route configuration to aid this process.

A typical controller method

The code shown in the first snippet is very common in many ASP .NET MVC controllers.  Your action method accepts an Id parameter, your method then fetches an entity based on that Id, and then does something useful with it (and typically saves it back to the database or returns it back to the view).

You can create your own MVC model binder to cut out this step, and simply have the entity itself passed to your action method. 

Take the following code;

public ActionResult Index(Error error)
{
    if (error != null)
    {
        return View(error);
    }

    return View();
}

How much sweeter is that?

Create your own ASP .NET MVC model binder

You can create your own model binder in two simple steps;

  1. Create a class that inherits from DefaultModelBinder, and override the BindModel method (and build up your entity in there)
  2. Add a line of code to your Global.asax.cs file to tell MVC to use that model binder.

Before we forget, tell MVC about your model binder as follows (in the Application_Start method in your Global.asax.cs file);

ModelBinders.Binders.Add(typeof(Error), new ErrorModelBinder());

This tells MVC that if it stumbles across a parameter on an action method of type Error, it should attempt to bind it using the ErrorModelBinder class you just created.

Your BindModel implementation will look like this;

public class ErrorModelBinder : DefaultModelBinder
{
    public override object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        if (bindingContext.ModelType == typeof(Error))
        {
            ValueProviderResult valueProviderValue = bindingContext.ValueProvider.GetValue("id");

            int id;
            if (valueProviderValue != null && int.TryParse((string)valueProviderValue.RawValue, out id))
            {
                using (ExceptionManagerEntities context = new ExceptionManagerEntities())
                {
                    return context.Errors.FirstOrDefault(c => c.ID == id);
                }
            }
        }

        return base.BindModel(controllerContext, bindingContext);
    }
}

The code digested;

  1. Make sure that we are only trying to build an object of type Error (this should always be true, but just as a safety net lets include this check anyway).
  2. Get the ValueProviderResult of the value provider we care about (in this case, the Id property).
  3. Check that it exists, and that it’s definitely an integer.
  4. Now fetch our entity and return it back.
  5. Finally, if any of our safety nets fail, just fall back to the default model binder and let that try to figure it out for us.

And the end result?

(Screenshot: the debugger showing the Error entity correctly bound to the action method parameter)

Your new model binder can now be used on any action method throughout your ASP .NET MVC application.

Summary

You can significantly reduce code duplication and simplify your controller classes by creating your own model binder.  Simply create a new class that derives from DefaultModelBinder and add your logic to fetch your entity.  Be sure to add a line to your Global.asax.cs file so that MVC knows what to do with it, or you may get some confusing error messages.

Moq and NUnit – Abstract and interface types

Effectively unit testing code using Moq and NUnit is a breeze and a pleasure.  If you’re not currently unit testing your code, and you’re interested in getting started, please take a look at my C# Writing unit tests with NUnit and Moq tutorial.

Mocking interfaces and abstract classes using Moq is no more complicated than mocking any other type.  There are just a couple of things to look out for.

Mocking Interfaces

Assume the following interface;

public interface IVehicle
{
    int BHP { get; set; }
    bool HasWheels { get; }
    int Wheels { get; }

    bool Move();
}

And the following unit test;

[Test]
public void IVehicle_Move()
{
    Mock<IVehicle> vehicle = new Mock<IVehicle>();

    int wheels = vehicle.Object.Wheels;

    Assert.IsTrue(wheels == 0);
}

As far as I know, there is no way to specify a concrete implementation to use when mocking an interface.  By default, Moq will return the default value for each property on the interface, and does nothing when void methods are executed.  If you want to override this behaviour, you must tell Moq what to do when the property/method is accessed;

[Test]
public void IVehicle_Move()
{
    Mock<IVehicle> vehicle = new Mock<IVehicle>();

    vehicle.Setup(t => t.Wheels).Returns(4);
    vehicle.Setup(t => t.Move()).Callback(() => Console.WriteLine("Move was called"));

    int wheels = vehicle.Object.Wheels;

    Assert.IsTrue(wheels == 0);
    vehicle.Verify(t => t.Move(), Times.Exactly(1));
}

The above test obviously fails miserably, but this is just a contrived example to make the point.  You use the Setup method on your mock object to override the default behaviour, and the Callback method to run arbitrary code when the member is invoked.
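
For completeness, here is a version of the same test that passes; the assertion now matches the Setup, and Move is actually invoked before being verified (a sketch along the same lines as the code above);

[Test]
public void IVehicle_Move_Passes()
{
    Mock<IVehicle> vehicle = new Mock<IVehicle>();

    // Override the default value of 0
    vehicle.Setup(t => t.Wheels).Returns(4);

    int wheels = vehicle.Object.Wheels;
    vehicle.Object.Move();

    Assert.IsTrue(wheels == 4);
    vehicle.Verify(t => t.Move(), Times.Exactly(1));
}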

Abstract Classes

Abstract classes are subtly different.  Take the following abstract class;

public abstract class Vehicle : IVehicle
{
    public int BHP { get; set; }

    public bool HasWheels
    {
        get
        {
            return Wheels > 0;
        }
    }

    public abstract int Wheels { get; }

    public string WhoYouGonnaCall
    {
        get
        {
            return "Ghostbusters";
        }
    }

    public abstract bool Move();
}

The class itself is marked as abstract, meaning it cannot be directly instantiated.  The class contains a mix of abstract methods/properties and non-abstract properties.

Assuming the following unit test;

[Test]
public void Vehicle_Move()
{
    Mock<Vehicle> vehicle = new Mock<Vehicle>();

    int wheels = vehicle.Object.Wheels;

    Assert.IsTrue(wheels == 0);
}

As Wheels is abstract it has no direct implementation, therefore Moq will return the default value of the property's data type (Int32, default value of 0).  However, the property WhoYouGonnaCall is not abstract (and not virtual), meaning Moq cannot intercept it; the real implementation is invoked instead.  Take the following test;

[Test]
public void Vehicle_WhoYouGonnaCall()
{
    Mock<Vehicle> vehicle = new Mock<Vehicle>();

    string gonnaCall = vehicle.Object.WhoYouGonnaCall;

    Assert.AreEqual(gonnaCall, "Ghostbusters");
}

The property WhoYouGonnaCall is not mocked, so its original value is returned rather than the default value for string.
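
The two behaviours combine nicely; HasWheels is not abstract, so its real implementation runs, and that implementation reads the mocked Wheels property.  A small sketch to illustrate;

[Test]
public void Vehicle_HasWheels()
{
    Mock<Vehicle> vehicle = new Mock<Vehicle>();

    // Wheels is abstract, so Moq can intercept it...
    vehicle.Setup(t => t.Wheels).Returns(4);

    // ...and the real (non-abstract) HasWheels implementation
    // picks up the mocked value
    Assert.IsTrue(vehicle.Object.HasWheels);
}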

Summary

Moq can easily be used to unit test abstract and interface types.  The process is the same as mocking any other type, just with subtle differences in behaviour to look out for.

Easy WCF Security and authorization of users

There are several steps involved in making your WCF service secure, and ensure that clients consuming your service are properly authenticated.  WCF uses BasicHttpBinding out-of-the-box, which generates SOAP envelopes (messages) for each request.  BasicHttpBinding works over standard HTTP, which is great for completely open general purpose services, but not good if you are sending sensitive data over the internet (as HTTP traffic can easily be intercepted).

This post discusses how to take a basic WCF service, which uses BasicHttpBinding, and upgrade it to use WsHttpBinding over SSL (with username/password validation). If you want to become a better WCF developer, you may want to check out Learning WCF: A Hands-on Guide by Michele Leroux Bustamante. This is a very thorough and insightful WCF book with detailed and practical samples and tips.

Here is the basic sequence of steps needed;

  • Generate a self-signed SSL certificate (you would use a real SSL certificate for live) and add this to the TrustedPeople certificate store.
  • Add a UserNamePasswordValidator.
  • Switch our BasicHttpBinding to WsHttpBinding.
  • Change our MEX (Metadata Exchange) endpoint to support SSL.
  • Specify how the client will authenticate, using the ServiceCredentials class.

You may notice that most of the changes are configuration changes.  You can make the same changes in code if you so desire, but I find the process easier and cleaner when done in XML.

BasicHttpBinding vs. WsHttpBinding

Before kicking things off, I found myself asking the same question as so many others before me; what is the difference between BasicHttpBinding and WsHttpBinding?

If you want a very thorough explanation, there is a very detailed explanation written by Shivprasad Koirala on CodeProject.com.  I highly recommend that you check this out.

The TL;DR version is simply this;

  • BasicHttpBinding supports SOAP v1.1 (WsHttpBinding supports SOAP v1.2)
  • BasicHttpBinding does not support reliable messaging
  • BasicHttpBinding is insecure by default; WsHttpBinding supports the WS-* specifications
  • WsHttpBinding supports transporting messages with credentials; BasicHttpBinding supports only Windows/Basic/Certificate authentication

The project structure

You can view and download the full source code for this project via GitHub, see the end of the post for more details.

We have a WCF Service application with a Service Contract as follows;

[ServiceContract]
public interface IPeopleService
{
    [OperationContract]
    Person[] GetPeople();
}

And the implementation of the Service Contract;

public class PeopleService : IPeopleService
{
    public Person[] GetPeople()
    {
        return new[]
                    {
                        new Person { Age = 45, FirstName = "John", LastName = "Smith" }, 
                        new Person { Age = 42, FirstName = "Jane", LastName = "Smith" }
                    };
    }
}

The model class (composite type, if you will) is as follows;

[DataContract]
public class Person
{
    [DataMember]
    public int Age { get; set; }

    [DataMember]
    public string FirstName { get; set; }

    [DataMember]
    public string LastName { get; set; }
}

The initial configuration is as follows;

<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true"/>
        <serviceDebug includeExceptionDetailInFaults="false"/>
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <protocolMapping>
    <add binding="basicHttpsBinding" scheme="https"/>
  </protocolMapping>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true"/>
</system.serviceModel>

The WCF service can easily be hosted in IIS, simply add a service reference to the WSDL definition file and you’re away. In the interest of completeness, here is the entire client code;

static void Main(string[] args)
{
    PeopleServiceClient client = new PeopleServiceClient();

    foreach (var person in client.GetPeople())
    {
        Console.WriteLine(person.FirstName);
    }

    Console.ReadLine();
}

Hosting in IIS

As briefly mentioned, you can (and probably always will) host your WCF service using Internet Information Services (IIS).

Generating an SSL certificate

Before doing anything, you need an SSL certificate.  Transport based authentication simply does not work if A) you are not on a secure channel, or B) your SSL certificate is not trusted.  You don't have to purchase an SSL certificate at this stage, as a self-signed certificate will suffice (with one or two extra steps).  You will want to purchase a real SSL certificate when you move your service to the production environment.

You can generate a self-signed SSL certificate in one of two ways.  You can either do it the hard way, using Microsoft's rather painful MakeCert.exe Certificate Creation Tool, or you can download a free tool from Pluralsight (of all places), which provides a super simple user interface and can even add the certificate to the certificate store for you.

Once you have downloaded the tool, run it as an Administrator;

[Screenshot: the Pluralsight self-signed certificate tool]

For the purposes of this tutorial, we will be creating a fake website called peoplesite.local.  We will add an entry to the hosts file for this and set it up in IIS.  It's very important that the X.500 distinguished name matches your domain name (or it will not work!).  You will also want to save the certificate as a PFX file so that it can be imported into IIS and used for the HTTPS binding.
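
If you would rather take the MakeCert route mentioned earlier, a command along these lines should produce an equivalent certificate (the exact flags may vary depending on your SDK version, so treat this as a starting point rather than gospel);

makecert -r -pe -n "CN=peoplesite.local" -sky exchange -ss My -sr LocalMachine

This creates a self-signed certificate (-r) with an exportable private key (-pe) directly in the LocalMachine Personal store; you can then export it as a PFX file using the certificates MMC snap-in.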

Once done, open up IIS, click on the root level node, and double click on Server Certificates.  Click Import (on the right hand side) and point to the PFX file you saved on the desktop.  Click OK to import the certificate.

[Screenshot: importing the PFX certificate in IIS]

Next, create a new site in IIS called PeopleService.  Point it to an appropriate folder on your computer and edit the site bindings.  Add a new HTTPS binding and select the SSL certificate you just imported.

[Screenshot: adding the HTTPS site binding in IIS]

Be sure to remove the standard HTTP binding after adding the HTTPS binding, as you won't be needing it.

Update the hosts file (C:\Windows\System32\Drivers\etc\hosts) with an entry for peoplesite.local as follows;

127.0.0.1            peoplesite.local

Finally, flip back to Visual Studio and create a publish profile (which we will use later once we have finished the configuration).  The publish method screen should look something like this;

[Screenshot: the publish method screen in Visual Studio]

Configuration

OK, we have set up our environment; now it's time to get down to the fun stuff... configuration.  It's easier if you delete everything you have between the <system.serviceModel> elements and follow along with me.

Add the following skeleton code between the <system.serviceModel> opening and closing tags; we will fill in each element separately (update the service name to match the one in your project);

<services>
  <service name="PeopleService.Service.PeopleService" behaviorConfiguration="ServiceBehaviour">
    <host>
    </host>
  </service>
</services>
<bindings>
</bindings>
<behaviors>
  <serviceBehaviors>
  </serviceBehaviors>
</behaviors>

Base Address

Start by adding a base address (directly inside the host element) so that we can use relative addresses;

<baseAddresses>
  <add baseAddress="https://peoplesite.local/" />
</baseAddresses>

Endpoints

Next, add two endpoints (one for the WsHttpBinding and one for MEX);

<endpoint address="" binding="wsHttpBinding" bindingConfiguration="BasicBinding" contract="PeopleService.Service.IPeopleService" name="BasicEndpoint" />
<endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" name="mex" />

Note that we are using mexHttpsBinding because our site does not support standard HTTP binding.  We don’t need to explicitly add a binding for the MEX endpoint as WCF will deal with this automatically for us.  Add a wsHttpBinding as follows;

<wsHttpBinding>
  <binding name="BasicBinding">
    <security mode="TransportWithMessageCredential">
      <message clientCredentialType="UserName" />
    </security>
  </binding>
</wsHttpBinding>
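
As mentioned earlier, the same changes can be made in code if you prefer.  A rough sketch of the equivalent self-hosted setup (the configuration approach is what this post actually uses);

WSHttpBinding binding = new WSHttpBinding(SecurityMode.TransportWithMessageCredential);
binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

// Add the endpoint to the host manually, using the same base address
ServiceHost host = new ServiceHost(typeof(PeopleService), new Uri("https://peoplesite.local/"));
host.AddServiceEndpoint(typeof(IPeopleService), binding, string.Empty);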

Bindings

This is where we specify what type of security we want to use.  In our case, we want to validate that users are who they say they are, in the form of a username/password combination.  The TransportWithMessageCredential security mode requires that the username/password combination be passed in the message header.  A snoop using an HTTP proxy tool (such as Fiddler) reveals this;

[Screenshot: Fiddler showing the credentials in the SOAP message header]
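
For reference, the credentials travel in a WS-Security UsernameToken in the SOAP header; stripped down, it looks something like this (illustrative only, the real header carries additional timestamps and attributes);

<o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
  <o:UsernameToken>
    <o:Username>peoplesite</o:Username>
    <o:Password>password</o:Password>
  </o:UsernameToken>
</o:Security>

Remember that this is only visible in Fiddler because it decrypts the HTTPS traffic; on the wire, the transport security keeps the header safe.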

Service Behaviours

Finally we need to update our existing service behaviour with a serviceCredentials element as follows;

<behavior name="ServiceBehaviour">
  <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true" />
  <serviceDebug includeExceptionDetailInFaults="true" />
  <serviceCredentials>
    <userNameAuthentication userNamePasswordValidationMode="Custom" customUserNamePasswordValidatorType="PeopleService.Service.Authenticator, PeopleService.Service" />
    <serviceCertificate findValue="peoplesite.local" storeLocation="LocalMachine" storeName="TrustedPeople" x509FindType="FindBySubjectName" />
  </serviceCredentials>
</behavior>

The two elements of interest are userNameAuthentication and serviceCertificate.

User Name Authentication

This is where we tell WCF about our custom authentication class.  Let's go ahead and create it.  Add a new class to your project called Authenticator.cs and add the following code;

using System.IdentityModel.Selectors;
using System.ServiceModel;

public class Authenticator : UserNamePasswordValidator
{
    public override void Validate(string userName, string password)
    {
        // Reject the request if either the username or the password is wrong
        if (userName != "peoplesite" || password != "password")
        {
            throw new FaultException("Invalid user and/or password");
        }
    }
}

Basically, you can add whatever code you want here to do your authentication/authorisation.  Notice that the Validate method returns void.  If you determine that the credentials supplied are invalid, you should throw a FaultException, which will be automatically handled for you by WCF.

You should ensure that the customUserNamePasswordValidatorType attribute in your App.config file is the fully qualified type of your authenticator type.

Service Certificate

This is key; if this is not quite right, nothing will work.  Basically, you are telling WCF where to find your SSL certificate.  It's very important that the findValue is the same as your SSL certificate name, and that you point to the correct certificate store.  Typically you will install the certificate on the LocalMachine in the TrustedPeople certificate store.  I would certainly recommend sticking with the FindBySubjectName search mode, as this avoids issues when you have multiple SSL certificates with similar details.  You may need a little trial and error when starting out to get this right.  If you have been following this tutorial throughout, you should be OK with the default.

Supplying user credentials

We just need one final tweak to our test client to make all this work.  Update the test client code as follows;

PeopleServiceClient client = new PeopleServiceClient();
client.ClientCredentials.UserName.UserName = "peoplesite";
client.ClientCredentials.UserName.Password = "password";

We pass in the client credentials via the, you guessed it, ClientCredentials object on the service client.

If you run the client now, you should get some test data back from the service written out to the console window.  Notice that you will get an exception if the username/password is incorrect, or if the connection is not over SSL.
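
If you want the client to handle bad credentials gracefully, wrap the call accordingly.  A minimal sketch; in my experience the failure typically surfaces on the client as a MessageSecurityException (from System.ServiceModel.Security);

try
{
    foreach (var person in client.GetPeople())
    {
        Console.WriteLine(person.FirstName);
    }
}
catch (MessageSecurityException ex)
{
    // Thrown when the service rejects the supplied credentials
    Console.WriteLine("Authentication failed: " + ex.Message);
}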

Troubleshooting

SecurityNegotiationException

As an aside, if you receive a SecurityNegotiationException, please ensure that your self-signed certificate is correctly named to match your domain, and that you have imported it into the TrustedPeople certificate store.

[Screenshot: a SecurityNegotiationException thrown by the client]

A handy trick for diagnosing the problem is by updating the service reference, Visual Studio will advise you as to what is wrong with the certificate;

[Screenshot: the Visual Studio security alert describing the certificate problem]

Summary

With a few small configuration changes you can easily utilise WS-Security specifications/standards to ensure that your WCF service is secure.  You can generate a self-signed SSL certificate using a free tool from Pluralsight, and install it to your local certificate store and IIS.  Then you add a UserNamePasswordValidator to take care of your authentication.  Finally, you can troubleshoot and debug your service using Fiddler and Visual Studio.

The source code is available on GitHub.