Browse Tag: Architecture

Angular 2 server side paging using ng2-pagination

Angular 2 is not quite out of beta yet (Beta 12 at the time of writing) but I’m in the full flow of developing with it for production use. A common feature, for better or worse, is to have lists/tables of data that the user can navigate through page by page, or even filter, to help find something useful.

Angular 2 doesn’t come with any out-of-the-box functionality to support this, so we have to implement it ourselves. And of course, what that means today is to use a third-party package!

To make this happen, we will utilise ng2-pagination, a great plugin, and Web API.

I’ve chosen Web API because that is what I’m using in my production app, but you could easily use ExpressJS or (insert your favourite RESTful framework here).

Checklist

Here is a checklist of what we will do to make this work;

  • Create a new Web API project (you could very easily use an existing project)
  • Enable CORS, as we will be using a separate development server for the Angular 2 project
  • Download the Angular 2 quick start, ng2-pagination and connect the dots
  • Expose some sample data for testing

I will try to stick with this order.

Web API (for the back end)

Open up Visual Studio (free version here) and create a new Web API project. I prefer to create an Empty project and add Web API.

Add a new controller, called DataController and add the following code;

public class DataModel
{
    public int Id { get; set; }
    public string Text { get; set; }
}

[RoutePrefix("api/data")]
public class DataController : ApiController
{
    private readonly List<DataModel> _data;

    public DataController()
    {
        _data = new List<DataModel>();

        for (var i = 0; i < 10000; i++)
        {
            _data.Add(new DataModel {Id = i + 1, Text = "Data Item " + (i + 1)});
        }
    }

    [HttpGet]
    [Route("{pageIndex:int}/{pageSize:int}")]
    public PagedResponse<DataModel> Get(int pageIndex, int pageSize)
    {
        return new PagedResponse<DataModel>(_data, pageIndex, pageSize);
    }
}

We don’t need to connect to a database to make this work, so we just dummy up 10,000 “items” and page through those instead. If you choose to use Entity Framework, the code is almost exactly the same, except you initialise a DbContext and query a Set instead; see the sketch below.
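For illustration, a hedged sketch of the Entity Framework variant might look like this (MyContext and its DataModels set are hypothetical names);

[HttpGet]
[Route("{pageIndex:int}/{pageSize:int}")]
public PagedResponse<DataModel> Get(int pageIndex, int pageSize)
{
    using (var context = new MyContext())
    {
        // Skip/Take need a stable order when querying a real database.
        // A production version would page on IQueryable so the paging
        // happens in SQL rather than in memory.
        var query = context.DataModels.OrderBy(x => x.Id);
        return new PagedResponse<DataModel>(query, pageIndex, pageSize);
    }
}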

PagedResponse

Add the following code;

public class PagedResponse<T>
{
    public PagedResponse(IEnumerable<T> data, int pageIndex, int pageSize)
    {
        Data = data.Skip((pageIndex - 1)*pageSize).Take(pageSize).ToList();
        Total = data.Count();
    }

    public int Total { get; set; }
    public ICollection<T> Data { get; set; }
}

PagedResponse exposes two properties. Total and Data. Total is the total number of records in the set. Data is the subset of data itself. We have to include the total number of items in the set so that ng2-pagination knows how many pages there are in total. It will then generate some links/buttons to enable the user to skip forward several pages at once (or as many as required).
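For example, a request for the first page with a page size of 2 would produce a response shaped like this (values illustrative, and camel-cased once the JSON formatting below is configured);

{
  "total": 10000,
  "data": [
    { "id": 1, "text": "Data Item 1" },
    { "id": 2, "text": "Data Item 2" }
  ]
}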

Enable CORS (Cross Origin Resource Sharing)

To enable communication between our client and server, we need to enable Cross Origin Resource Sharing (CORS) as they will be (at least during development) running under different servers.

To enable CORS, first install the following package (using NuGet);

Microsoft.AspNet.WebApi.Cors

Now open up WebApiConfig.cs and add the following to the Register method;

var cors = new EnableCorsAttribute("*", "*", "*");
config.EnableCors(cors);
config.MessageHandlers.Add(new PreflightRequestsHandler());

And add a new nested class, as shown;

public class PreflightRequestsHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Headers.Contains("Origin") && request.Method.Method == "OPTIONS")
        {
            var response = new HttpResponseMessage {StatusCode = HttpStatusCode.OK};
            response.Headers.Add("Access-Control-Allow-Origin", "*");
            response.Headers.Add("Access-Control-Allow-Headers", "Origin, Content-Type, Accept, Authorization");
            response.Headers.Add("Access-Control-Allow-Methods", "*");
            var tsc = new TaskCompletionSource<HttpResponseMessage>();
            tsc.SetResult(response);
            return tsc.Task;
        }
        return base.SendAsync(request, cancellationToken);
    }
}

Now when Angular makes a request for data, it will send an OPTIONS request first to check access. This request will be intercepted above, and we will reply with the Access-Control-Allow-Origin header set to any origin (represented by an asterisk).
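For illustration, a typical preflight exchange looks roughly like this (the dev-server origin localhost:3000 is an assumption based on the Angular 2 quick start defaults);

OPTIONS /api/data/1/10 HTTP/1.1
Host: localhost:52472
Origin: http://localhost:3000
Access-Control-Request-Method: GET

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Origin, Content-Type, Accept, Authorization
Access-Control-Allow-Methods: *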

Format JSON response

If, like me, you hate Pascal Case JavaScript (ThisIsPascalCase), you will want to add the following code to your Application_Start method;

var formatters = GlobalConfiguration.Configuration.Formatters;
var jsonFormatter = formatters.JsonFormatter;
var settings = jsonFormatter.SerializerSettings;
settings.Formatting = Formatting.Indented;
settings.ContractResolver = new CamelCasePropertyNamesContractResolver();

Now let’s set up the front end.

Front-end Angular 2 and ng2-pagination

If you head over to the Angular 2 quickstart, you will see there is a link to download the quick start source code. Go ahead and do that.

I’ll wait here.

OK, you’re done? Let’s continue.

Install ng2-pagination, and optionally bootstrap and jquery if you want this to look pretty. Skip those two if you don’t mind a plainer look.

npm install --save-dev ng2-pagination bootstrap jquery

Open up index.html and add the following scripts to the header;

<script src="node_modules/angular2/bundles/http.dev.js"></script>
<script src="node_modules/ng2-pagination/dist/ng2-pagination-bundle.js"></script>

<script src="node_modules/jquery/dist/jquery.js"></script>
<script src="node_modules/bootstrap/dist/js/bootstrap.js"></script>

Also add a link to the bootstrap CSS file, if required.

<link rel="stylesheet" href="node_modules/bootstrap/dist/css/bootstrap.css">

Notice we pulled in Http? We will use that for querying our back-end.

Add a new file to the app folder, called app.component.html. We will use this instead of having all of our markup and TypeScript code in the same file.

ng2-pagination

Open app.component.ts, delete everything, and add the following code instead;

import {Component, OnInit} from 'angular2/core';
import {Http, HTTP_PROVIDERS} from 'angular2/http';
import {Observable} from 'rxjs/Rx';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/do';
import {PaginatePipe, PaginationService, PaginationControlsCmp, IPaginationInstance} from 'ng2-pagination';

export interface PagedResponse<T> {
    total: number;
    data: T[];
}

export interface DataModel {
    id: number;
    text: string;
}

@Component({
    selector: 'my-app',
    templateUrl: './app/app.component.html',
    providers: [HTTP_PROVIDERS, PaginationService],
    directives: [PaginationControlsCmp],
    pipes: [PaginatePipe]
})
export class AppComponent implements OnInit {
    private _data: Observable<DataModel[]>;
    private _page: number = 1;
    private _total: number;

    constructor(private _http: Http) {

    }
}

A quick walk-through of what I’ve changed;

  • Removed the inline HTML and linked to the app.component.html file you created earlier. (This leads to a cleaner separation of concerns.)
  • Imported Observable, Map, and Do from RX.js. This will enable us to write cleaner async code without having to rely on promises.
  • Imported a couple of classes from angular2/http so that we can use the native Http client, and added HTTP_PROVIDERS as a provider.
  • Imported various objects required by ng2-pagination, and added them to providers, directives and pipes so we can access them through our view (which we will create later).
  • Defined two interfaces, PagedResponse<T> and DataModel. You may notice these are identical to those we created in our Web API project.
  • Added some variables, which we will discuss shortly.

We’ve got the basics in place that we need to call our data service and pass the data over to ng2-pagination. Now let’s actually implement that process.

Retrieving data using Angular 2 Http

Eagle-eyed readers may have noticed that I’ve pulled in and implemented the OnInit interface, but not implemented the ngOnInit method yet.

Add the following method;

ngOnInit() {
    this.getPage(1);
}

When the page loads and is initialised, we want to automatically grab the first page of data. The above method will make that happen.

Note: If you are unfamiliar with ngOnInit, please read this helpful documentation on lifecycle hooks.

Now add the following code;

getPage(page: number) {
    this._data = this._http.get("http://localhost:52472/api/data/" + page + "/10")
        .do((res: any) => {
            this._total = res.json().total;
            this._page = page;
        })
        .map((res: any) => res.json().data);
}

The above method does the following;

  • Calls out to our Web API (you may need to change the port number depending on your set up)
  • Passes in two values, the first being the current page number, the second being the number of results to retrieve
  • Stores the resulting observable in the _data variable. Once the request is complete, do is executed.
  • Do is a function (an arrow function in this case) that is executed for each item in the collection received from the server. We’ve set up our Web API method to return a single object, of type PagedResponse, so this method will only be executed once. We take this opportunity to update the current page (which is the same as the page number passed into the method in the first place) and the _total variable, which stores the total number of items in the entire set (not just the paged number).
  • Map is then used to pull the data collection out of the JSON response. The way that RX.js works is that an event will be emitted to notify that the collection has changed.

Implement the view

Open app.component.html and add the following code;

<div class="container">
    <table class="table table-striped table-hover">
        <thead>
            <tr>
                <th>Id</th>
                <th>Text</th>
            </tr>
        </thead>
        <tbody>
            <tr *ngFor="#item of _data | async | paginate: { id: 'server', itemsPerPage: 10, currentPage: _page, totalItems: _total }">
                <td>{{item.id}}</td>
                <td>{{item.text}}</td>
            </tr>
        </tbody>
    </table>    
    <pagination-controls (pageChange)="getPage($event)" id="server"></pagination-controls>
</div>

There are a few key points of interest here;

  • On our repeater (*ngFor), we’ve used the async pipe. Under the hood, Angular subscribes to the Observable we pass to it and resolves the value automatically (asynchronously) when it becomes available.
  • We use the paginate pipe, and pass in an object containing the current page and the total number of items, so ng2-pagination can work out how many pages there are and render itself properly.
  • We add the pagination-controls directive, which calls back to our getPage function when the user clicks a page number that they are not currently on.

As we know the current page and the number of items per page, we can efficiently ask the Web API for only the specific data we need.

So, why bother?

Some benefits;

  • Potentially reduced initial page load time, because less data has to be retrieved from the database, serialized, and transferred over.
  • Reduced memory usage on the client. Otherwise, all 10,000 records would have to be held in memory!
  • Reduced processing time, as only the paged data is stored in memory; there are far fewer records to iterate through!

Drawbacks;

  • Lots of small requests for data could reduce server performance (due to chattiness). Using an effective caching strategy is key here.
  • User experience could be degraded. If the server is slow to respond, the client may appear to be slow and could frustrate the user.

Summary

Using ng2-pagination, and with help from RX.js, we can easily add pagination to our pages. Doing so has the potential to reduce server load and initial page render time, and thus can result in a better user experience. A good caching strategy and server response times are important considerations when going to production.

Create a RESTful API with authentication using Web API and Jwt

Web API is a feature of the ASP.NET framework that dramatically simplifies building RESTful (REST-like) HTTP services that are cross-platform and device and browser agnostic. With Web API, you can create endpoints that can be accessed using a combination of descriptive URLs and HTTP verbs. Those endpoints can serve data back to the caller as either JSON or XML that is standards compliant. With JSON Web Tokens (Jwt), which are typically stateless, you can add an authentication and authorization layer, enabling you to restrict access to some or all of your API.

The purpose of this tutorial is to develop the beginnings of a Book Store API, using Microsoft Web API with C#, which authenticates and authorizes each request, exposes OAuth2 endpoints, and returns data about books and reviews for consumption by the caller. The caller in this case will be Postman, a useful utility for querying APIs.

In a follow up to this post we will write a front end to interact with the API directly.

Set up

Open Visual Studio (I will be using Visual Studio 2015 Community edition, you can use whatever version you like) and create a new Empty project, ensuring you select the Web API option;

Where you save the project is up to you, but I will create my projects under C:\Source. For simplicity you might want to do the same.

New Project

Next, packages.

Packages

Open up the packages.config file. Some packages should have already been added to enable Web API itself. Please add the following additional packages;

install-package EntityFramework
install-package Microsoft.AspNet.Cors
install-package Microsoft.AspNet.Identity.Core
install-package Microsoft.AspNet.Identity.EntityFramework
install-package Microsoft.AspNet.Identity.Owin
install-package Microsoft.AspNet.WebApi.Cors
install-package Microsoft.AspNet.WebApi.Owin
install-package Microsoft.Owin.Cors
install-package Microsoft.Owin.Security.Jwt
install-package Microsoft.Owin.Host.SystemWeb
install-package System.IdentityModel.Tokens.Jwt
install-package Thinktecture.IdentityModel.Core

These are the minimum packages required to provide data persistence, enable CORS (Cross-Origin Resource Sharing), and enable generating and authenticating/authorizing Jwt’s.

Entity Framework

We will use Entity Framework for data persistence, using the Code-First approach. Entity Framework will take care of generating a database, adding tables, stored procedures and so on. As an added benefit, Entity Framework will also upgrade the schema automatically as we make changes. Entity Framework is perfect for rapid prototyping, which is what we are in essence doing here.

Create a new IdentityDbContext called BooksContext, which will give us Users, Roles and Claims in our database. I like to add this under a folder called Core, for organization. We will add our entities to this later.

namespace BooksAPI.Core
{
    using Microsoft.AspNet.Identity.EntityFramework;

    public class BooksContext : IdentityDbContext
    {

    }
}

Claims are used to describe useful information that the user has associated with them. We will use claims to tell the client which roles the user has. The benefit of roles is that we can prevent access to certain methods/controllers to a specific group of users, and permit access to others.

Add a DbMigrationsConfiguration class and allow automatic migrations, but prevent automatic data loss;

namespace BooksAPI.Core
{
    using System.Data.Entity.Migrations;

    public class Configuration : DbMigrationsConfiguration<BooksContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = true;
            AutomaticMigrationDataLossAllowed = false;
        }
    }
}

Whilst losing data at this stage is not important (we will use a seed method later to populate our database), I like to turn this off now so I do not forget later.

Now tell Entity Framework how to update the database schema using an initializer, as follows;

namespace BooksAPI.Core
{
    using System.Data.Entity;

    public class Initializer : MigrateDatabaseToLatestVersion<BooksContext, Configuration>
    {
    }
}

This tells Entity Framework to go ahead and upgrade the database to the latest version automatically for us.

Finally, tell your application about the initializer by updating the Global.asax.cs file as follows;

namespace BooksAPI
{
    using System.Data.Entity;
    using System.Web;
    using System.Web.Http;
    using Core;

    public class WebApiApplication : HttpApplication
    {
        protected void Application_Start()
        {
            GlobalConfiguration.Configure(WebApiConfig.Register);
            Database.SetInitializer(new Initializer());
        }
    }
}

Data Provider

By default, Entity Framework will configure itself to use LocalDB. If this is not desirable, say you want to use SQL Express instead, you need to make the following adjustments;

Open the Web.config file and delete the following code;

<entityFramework>
    <defaultConnectionFactory type="System.Data.Entity.Infrastructure.LocalDbConnectionFactory, EntityFramework">
        <parameters>
            <parameter value="mssqllocaldb" />
        </parameters>
    </defaultConnectionFactory>
    <providers>
        <provider invariantName="System.Data.SqlClient" type="System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer" />
    </providers>
</entityFramework>

And add the connection string;

<connectionStrings>
    <add name="BooksContext" providerName="System.Data.SqlClient" connectionString="Server=.;Database=Books;Trusted_Connection=True;" />
</connectionStrings>

Now we’re using SQL Server directly (whatever flavour that might be) rather than LocalDB.

JSON

Whilst we’re here, we might as well configure our application to return camel-case JSON (thisIsCamelCase), instead of the default pascal-case (ThisIsPascalCase).

Add the following code to your Application_Start method;

var formatters = GlobalConfiguration.Configuration.Formatters;
var jsonFormatter = formatters.JsonFormatter;
var settings = jsonFormatter.SerializerSettings;
settings.Formatting = Formatting.Indented;
settings.ContractResolver = new CamelCasePropertyNamesContractResolver();

There is nothing worse than pascal-case JavaScript.

CORS (Cross-Origin Resource Sharing)

Cross-Origin Resource Sharing, or CORS for short, is when a client requests access to a resource (an image, or say, data from an endpoint) from an origin (domain) that is different from the domain where the resource itself originates.

This step is completely optional. We are adding in CORS support here because when we come to write our client app in subsequent posts that follow on from this one, we will likely use a separate HTTP server (for testing and debugging purposes). When released to production, these two apps would use the same host (Internet Information Services (IIS)).

To enable CORS, open WebApiConfig.cs and add the following code to the beginning of the Register method;

var cors = new EnableCorsAttribute("*", "*", "*");
config.EnableCors(cors);
config.MessageHandlers.Add(new PreflightRequestsHandler());

And add the following class (in the same file if you prefer for quick reference);

public class PreflightRequestsHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Headers.Contains("Origin") && request.Method.Method == "OPTIONS")
        {
            var response = new HttpResponseMessage {StatusCode = HttpStatusCode.OK};
            response.Headers.Add("Access-Control-Allow-Origin", "*");
            response.Headers.Add("Access-Control-Allow-Headers", "Origin, Content-Type, Accept, Authorization");
            response.Headers.Add("Access-Control-Allow-Methods", "*");
            var tsc = new TaskCompletionSource<HttpResponseMessage>();
            tsc.SetResult(response);
            return tsc.Task;
        }
        return base.SendAsync(request, cancellationToken);
    }
}

In the CORS workflow, before sending a DELETE, PUT or POST request, the client sends an OPTIONS request to ask whether the server will accept a request from its origin (domain). If the request domain and the server domain are not the same, then the server must include various access headers that describe which domains have access. To enable access to all domains, we just respond with an origin header (Access-Control-Allow-Origin) set to an asterisk, which enables access for all.

The Access-Control-Allow-Headers header describes which headers the API can accept/is expecting to receive. The Access-Control-Allow-Methods header describes which HTTP verbs are supported/permitted.

See Mozilla Developer Network (MDN) for a more comprehensive write-up on Cross-Origin Resource Sharing (CORS).

Data Model

With Entity Framework configured, let’s create our data structure. The API will expose books, and books will have reviews.

Under the Models folder add a new class called Book. Add the following code;

namespace BooksAPI.Models
{
    using System.Collections.Generic;

    public class Book
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
        public string ImageUrl { get; set; }

        public virtual List<Review> Reviews { get; set; }
    }
}

And add Review, as shown;

namespace BooksAPI.Models
{
    public class Review
    {
        public int Id { get; set; }    
        public string Description { get; set; }    
        public int Rating { get; set; }
        public int BookId { get; set; }
    }
}

Add these entities to the IdentityDbContext we created earlier;

public class BooksContext : IdentityDbContext
{
    public DbSet<Book> Books { get; set; }
    public DbSet<Review> Reviews { get; set; }
}

Be sure to add in the necessary using directives.

A couple of helpful abstractions

We need to add abstractions over a couple of classes that we will make use of, in order to keep our code clean and ensure that it works correctly.

Under the Core folder, add the following classes;

public class BookUserManager : UserManager<IdentityUser>
{
    public BookUserManager() : base(new BookUserStore())
    {
    }
}

We will make heavy use of the UserManager<T> in our project, and we don’t want to have to initialise it with a UserStore<T> every time we want to make use of it. Whilst adding this is not strictly necessary, it does go a long way to helping keep the code clean.

Now add another class for the UserStore, as shown;

public class BookUserStore : UserStore<IdentityUser>
{
    public BookUserStore() : base(new BooksContext())
    {
    }
}

This code is really important. If we fail to tell the UserStore which DbContext to use, it falls back to some default value.

A network-related or instance-specific error occurred while establishing a connection to SQL Server

I’m not sure what the default value is; all I know is that it doesn’t seem to correspond to our application’s DbContext. This code will help prevent you from tearing your hair out later, wondering why you are getting the super-helpful error message shown above.

API Controller

We need to expose some data to our client (when we write it). Let’s take advantage of Entity Framework’s Seed method. The Seed method will pre-populate some books and reviews automatically for us.

Instead of dropping the code in directly for this class (it is very long), please refer to the Configuration.cs file on GitHub.

This code gives us a little bit of starting data to play with, instead of having to add a bunch of data manually each time we make changes to our schema that require the database to be re-initialized (not really in our case as we have an extremely simple data model, but in larger applications this is very useful).
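If you would rather not grab the full file just yet, here is a minimal sketch of the shape of a Seed method (titles and values are illustrative; the real file on GitHub seeds much more data). It lives inside the Configuration class we created earlier;

protected override void Seed(BooksContext context)
{
    // AddOrUpdate stops the seed data being duplicated
    // every time the Seed method runs after a migration.
    context.Books.AddOrUpdate(b => b.Title, new Book
    {
        Title = "Sample Book",
        Description = "A seeded book for testing.",
        Price = 9.99m,
        ImageUrl = "http://placehold.it/100x150",
        Reviews = new List<Review>
        {
            new Review { Rating = 5, Description = "Great read." }
        }
    });
}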

Books Endpoint

Next, we want to create the RESTful endpoint that will retrieve all the books data. Create a new Web API controller called BooksController and add the following;

public class BooksController : ApiController
{
    [HttpGet]
    public async Task<IHttpActionResult> Get()
    {
        using (var context = new BooksContext())
        {
            return Ok(await context.Books.Include(x => x.Reviews).ToListAsync());
        }
    }
}

With this code we are fully exploiting recent changes to the .NET framework: the introduction of async and await. Writing asynchronous code in this manner allows the thread to be released whilst data (Books and Reviews) is being retrieved from the database and converted to objects to be consumed by our code. When the asynchronous operation is complete, the code picks up where it left off and continues executing. (By which we mean the hydrated data objects are passed to the underlying framework and converted to JSON/XML and returned to the client.)

Reviews Endpoint

We’re also going to enable authorized users to post reviews and delete reviews. For this we will need a ReviewsController with the relevant Post and Delete methods. Create a new Web API controller called ReviewsController and add the following code;

public class ReviewsController : ApiController
{
    [HttpPost]
    public async Task<IHttpActionResult> Post([FromBody] ReviewViewModel review)
    {
        using (var context = new BooksContext())
        {
            var book = await context.Books.FirstOrDefaultAsync(b => b.Id == review.BookId);
            if (book == null)
            {
                return NotFound();
            }

            var newReview = context.Reviews.Add(new Review
            {
                BookId = book.Id,
                Description = review.Description,
                Rating = review.Rating
            });

            await context.SaveChangesAsync();
            return Ok(new ReviewViewModel(newReview));
        }
    }

    [HttpDelete]
    public async Task<IHttpActionResult> Delete(int id)
    {
        using (var context = new BooksContext())
        {
            var review = await context.Reviews.FirstOrDefaultAsync(r => r.Id == id);
            if (review == null)
            {
                return NotFound();
            }

            context.Reviews.Remove(review);
            await context.SaveChangesAsync();
        }
        return Ok();
    }
}

There are a couple of good practices in play here that we need to highlight.

The first method, Post, allows the user to add a new review. Notice the parameter for the method;

[FromBody] ReviewViewModel review

The [FromBody] attribute tells Web API to look for the data for the method argument in the body of the HTTP message that we received from the client, and not in the URL. The second parameter is a view model that wraps around the Review entity itself. Add a new folder to your project called ViewModels, add a new class called ReviewViewModel and add the following code;

public class ReviewViewModel
{
    public ReviewViewModel()
    {
    }

    public ReviewViewModel(Review review)
    {
        if (review == null)
        {
            return;
        }

        BookId = review.BookId;
        Rating = review.Rating;
        Description = review.Description;
    }

    public int BookId { get; set; }
    public int Rating { get; set; }
    public string Description { get; set; }

    public Review ToReview()
    {
        return new Review
        {
            BookId = BookId,
            Description = Description,
            Rating = Rating
        };
    }
}

We are just copying all the properties from the Review entity to the ReviewViewModel entity and vice-versa. So why bother? The first reason is to help mitigate a well-known under/over-posting vulnerability (good write-up about it here) inherent in most web services. It also helps prevent unwanted information being sent to the client. With this approach we have to explicitly expose data to the client by adding properties to the view model.

For this scenario, this approach is probably a bit overkill, but I highly recommend it; keeping your application secure is important, as is preventing the leaking of potentially sensitive information. A tool I’ve used in the past to simplify this mapping code is AutoMapper. I highly recommend checking it out.
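As a rough illustration using AutoMapper’s classic static API (newer versions favour an instance-based MapperConfiguration instead), the hand-written mapping could be replaced with something like this;

// Configure once, at application start-up.
Mapper.CreateMap<Review, ReviewViewModel>();
Mapper.CreateMap<ReviewViewModel, Review>();

// Then, wherever a mapping is needed;
var viewModel = Mapper.Map<ReviewViewModel>(newReview);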

Important note: In order to keep our API RESTful, we return the newly created entity (or its view model representation) back to the client for consumption, removing the need to re-fetch the entire data set.
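For reference, a raw request to the Post method might look like the following (the port number will vary depending on your setup); the response body will contain the created review, as described above.

POST /api/reviews HTTP/1.1
Host: localhost:62996
Content-Type: application/json

{"bookId":1,"rating":5,"description":"Really enjoyed this one."}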

The Delete method is trivial. We accept the Id of the review we want to delete as a parameter, then fetch the entity and finally remove it from the collection. Calling SaveChangesAsync will make the change permanent.

Meaningful response codes

We want to return useful information back to the client as much as possible. Notice that the Post method returns NotFound(), which translates to a 404 HTTP status code, if the corresponding Book for the given review cannot be found. This is useful for client side error handling. Returning Ok() will return 200 (HTTP ‘Ok’ status code), which informs the client that the operation was successful.

Authentication and Authorization Using OAuth and JSON Web Tokens (JWT)

My preferred approach for dealing with authentication and authorization is to use JSON Web Tokens (JWT). We will open up an OAuth endpoint to client credentials and return a token which describes the users claims. For each of the users roles we will add a claim (which could be used to control which views the user has access to on the client side).

We use OWIN to add our OAuth configuration into the pipeline. Add a new class to the project called Startup.cs and add the following code;

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof (BooksAPI.Startup))]

namespace BooksAPI
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureOAuth(app);
        }
    }
}

Notice that Startup is a partial class. I’ve done that because I want to keep this class as simple as possible, because as the application becomes more complicated and we add more and more middle-ware, this class will grow exponentially. You could use a static helper class here, but the preferred method from the MSDN documentation seems to be leaning towards using partial classes specifically.

Under the App_Start folder add a new class called Startup.OAuth.cs and add the following code;

using System;
using System.Configuration;
using BooksAPI.Core;
using BooksAPI.Identity;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.DataHandler.Encoder;
using Microsoft.Owin.Security.Jwt;
using Microsoft.Owin.Security.OAuth;
using Owin;

namespace BooksAPI
{
    public partial class Startup
    {
        public void ConfigureOAuth(IAppBuilder app)
        {            
        }
    }
}

Note. When I wrote this code originally I encountered a quirk. After spending hours pulling out my hair trying to figure out why something was not working, I eventually discovered that the ordering of the code in this class is very important. If you don’t copy the code in the exact same order, you may encounter unexpected behaviour. Please add the code in the same order as described below.

OAuth secrets

First, add the following code;

var issuer = ConfigurationManager.AppSettings["issuer"];
var secret = TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["secret"]);

  • Issuer – a unique identifier for the entity that issued the token (not to be confused with Entity Framework’s entities)
  • Secret – a secret key used to secure the token and prevent tampering

I keep these values in the Web configuration file (Web.config). To be precise, I split these values out into their own configuration file called keys.config and add a reference to that file in the main Web.config. I do this so that I can exclude just the keys from source control by adding a line to my .gitignore file.

To do this, open Web.config and change the <appSettings> section as follows;

<appSettings file="keys.config">
</appSettings>

Now add a new file to your project called keys.config and add the following code;

<appSettings>
  <add key="issuer" value="http://localhost/"/>
  <add key="secret" value="IxrAjDoa2FqElO7IhrSrUJELhUckePEPVpaePlS_Xaw"/>
</appSettings>

Adding objects to the OWIN context

We can make use of OWIN to manage instances of objects for us, on a per request basis. The pattern is comparable to IoC, in that you tell the “container” how to create an instance of a specific type of object, then request the instance using a Get<T> method.

Add the following code;

app.CreatePerOwinContext(() => new BooksContext());
app.CreatePerOwinContext(() => new BookUserManager());

The first time we request an instance of BooksContext for example, the lambda expression will execute and a new BooksContext will be created and returned to us. Subsequent requests will return the same instance.

Important note: The life-cycle of the object instance is per-request. As soon as the request is complete, the instance is cleaned up.
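To pull the instances back out, use the Get<T> extension method on the OWIN context. As a hypothetical example inside an API controller (GetOwinContext on the request comes from the Microsoft.AspNet.WebApi.Owin package we installed earlier, and the Get<T> extension from Microsoft.AspNet.Identity.Owin);

public class ExampleController : ApiController
{
    [HttpGet]
    public IHttpActionResult Get()
    {
        // Both instances live only for the duration of this request.
        var context = Request.GetOwinContext().Get<BooksContext>();
        var userManager = Request.GetOwinContext().Get<BookUserManager>();

        return Ok(context.Books.Count());
    }
}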

Enabling Bearer Authentication/Authorization

To enable bearer authentication, add the following code;

app.UseJwtBearerAuthentication(new JwtBearerAuthenticationOptions
{
    AuthenticationMode = AuthenticationMode.Active,
    AllowedAudiences = new[] { "Any" },
    IssuerSecurityTokenProviders = new IIssuerSecurityTokenProvider[]
    {
        new SymmetricKeyIssuerSecurityTokenProvider(issuer, secret)
    }
});

The key takeaways of this code;

  • State who is the audience (we’re specifying “Any” for the audience, as this is a required field but we’re not fully implementing it).
  • State who is responsible for generating the tokens. Here we’re using SymmetricKeyIssuerSecurityTokenProvider and passing it our secret key to prevent tampering. We could use the X509CertificateSecurityTokenProvider, which uses an X509 certificate to secure the token (but I’ve found these to be overly complex in the past and I prefer a simpler implementation).

This code adds JWT bearer authentication to the OWIN pipeline.

Enabling OAuth

We need to expose an OAuth endpoint so that the client can request a token (by passing a user name and password).

Add the following code;

app.UseOAuthAuthorizationServer(new OAuthAuthorizationServerOptions
{
    AllowInsecureHttp = true,
    TokenEndpointPath = new PathString("/oauth2/token"),
    AccessTokenExpireTimeSpan = TimeSpan.FromMinutes(30),
    Provider = new CustomOAuthProvider(),
    AccessTokenFormat = new CustomJwtFormat(issuer)
});

Some important notes with this code;

  • We’re going to allow insecure HTTP requests whilst we are in development mode. You might want to disable this using an #if DEBUG directive so that you don’t allow insecure connections in production.
  • Open an endpoint under /oauth2/token that accepts post requests.
  • When generating a token, make it expire after 30 minutes (1800 seconds).
  • We will use our own provider, CustomOAuthProvider, and formatter, CustomJwtFormat, to take care of authentication and building the actual token itself.

We need to write the provider and formatter next.

Formatting the JWT

Create a new class under the Identity folder called CustomJwtFormat.cs. Add the following code;

namespace BooksAPI.Identity
{
    using System;
    using System.Configuration;
    using System.IdentityModel.Tokens;
    using Microsoft.Owin.Security;
    using Microsoft.Owin.Security.DataHandler.Encoder;
    using Thinktecture.IdentityModel.Tokens;

    public class CustomJwtFormat : ISecureDataFormat<AuthenticationTicket>
    {
        private static readonly byte[] _secret = TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["secret"]);
        private readonly string _issuer;

        public CustomJwtFormat(string issuer)
        {
            _issuer = issuer;
        }

        public string Protect(AuthenticationTicket data)
        {
            if (data == null)
            {
                throw new ArgumentNullException(nameof(data));
            }

            var signingKey = new HmacSigningCredentials(_secret);
            var issued = data.Properties.IssuedUtc;
            var expires = data.Properties.ExpiresUtc;

            return new JwtSecurityTokenHandler().WriteToken(new JwtSecurityToken(_issuer, null, data.Identity.Claims, issued.Value.UtcDateTime, expires.Value.UtcDateTime, signingKey));
        }

        public AuthenticationTicket Unprotect(string protectedText)
        {
            throw new NotImplementedException();
        }
    }
}

This is a complicated-looking class, but it’s pretty straightforward. We are just fetching all the information needed to generate the token (the claims, issued date, expiration date and signing key), then generating the token and returning it to the caller.

Please note: Some of the code we are writing today was influenced by JSON Web Token in ASP.NET Web API 2 using OWIN by Taiseer Joudeh. I highly recommend checking it out.

The authentication bit

We’re almost there, honest! Now we want to authenticate the user.

using System.Linq;
using System.Security.Claims;
using System.Security.Principal;
using System.Threading;
using System.Threading.Tasks;
using System.Web;
using BooksAPI.Core;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.AspNet.Identity.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.OAuth;

namespace BooksAPI.Identity
{
    public class CustomOAuthProvider : OAuthAuthorizationServerProvider
    {
        public override Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {
            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] {"*"});

            var user = context.OwinContext.Get<BooksContext>().Users.FirstOrDefault(u => u.UserName == context.UserName);
            if (user == null || !context.OwinContext.Get<BookUserManager>().CheckPassword(user, context.Password))
            {
                context.SetError("invalid_grant", "The user name or password is incorrect");
                context.Rejected();
                return Task.FromResult<object>(null);
            }

            var ticket = new AuthenticationTicket(SetClaimsIdentity(context, user), new AuthenticationProperties());
            context.Validated(ticket);

            return Task.FromResult<object>(null);
        }

        public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            context.Validated();
            return Task.FromResult<object>(null);
        }

        private static ClaimsIdentity SetClaimsIdentity(OAuthGrantResourceOwnerCredentialsContext context, IdentityUser user)
        {
            var identity = new ClaimsIdentity("JWT");
            identity.AddClaim(new Claim(ClaimTypes.Name, context.UserName));
            identity.AddClaim(new Claim("sub", context.UserName));

            var userRoles = context.OwinContext.Get<BookUserManager>().GetRoles(user.Id);
            foreach (var role in userRoles)
            {
                identity.AddClaim(new Claim(ClaimTypes.Role, role));
            }

            return identity;
        }
    }
}

As we’re not checking the audience, when ValidateClientAuthentication is called we can just validate the request. When the request has a grant_type of password, which all our requests to the OAuth endpoint will have, the above GrantResourceOwnerCredentials method is executed. This method authenticates the user and creates the claims to be added to the JWT.

Testing

There are two tools you can use for testing this.

Technique 1 – Using the browser

Open up a web browser, and navigate to the books URL.

Testing with the web browser

You will see the list of books, displayed as XML. This is because Web API can serve up data either as XML or as JSON. Personally, I do not like XML; JSON is my choice these days.

Technique 2 (Preferred) – Using Postman

To make Web API respond in JSON we need to send along an Accept header. The best tool to enable us to do this (for Google Chrome) is Postman. Download it and give it a go if you like.

Drop the same URL into the Enter request URL field, and click Send. Notice the response is in JSON;

Postman response in JSON

This worked because Postman automatically adds the Accept header to each request. You can see this by clicking on the Headers tab. If the header isn’t there and you’re still getting XML back, just add the header as shown in the screenshot and re-send the request.

To test the delete method, change the HTTP verb to Delete and add the ReviewId to the end of the URL. For example; http://localhost:62996/api/reviews/9

Putting it all together

First, we need to restrict access to our endpoints.

Add a new file to the App_Start folder, called FilterConfig.cs and add the following code;

public class FilterConfig
{
    public static void Configure(HttpConfiguration config)
    {
        config.Filters.Add(new AuthorizeAttribute());
    }
}

And call the code from Global.asax.cs as follows;

GlobalConfiguration.Configure(FilterConfig.Configure);

Adding this code will restrict access to all endpoints (except the OAuth endpoint) to requests that have been authenticated (a request that sends along a valid Jwt).

You have much more fine-grained control here, if required. Instead of adding the above code, you could add the AuthorizeAttribute to specific controllers or even specific methods. The added benefit here is that you can also restrict access to specific users or specific roles;

Example code;

[Authorize(Roles = "Admin")]

The roles value (“Admin”) can be a comma-separated list. For us, restricting access to all endpoints will suffice.
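For illustration, a hypothetical controller restricted at the method level might look like this;

public class ReportsController : ApiController
{
    // Any authenticated user can call this method.
    [Authorize]
    [HttpGet]
    public IHttpActionResult Summary()
    {
        return Ok("Summary data.");
    }

    // Only users in the Admin role can call this method.
    [Authorize(Roles = "Admin")]
    [HttpGet]
    public IHttpActionResult Detailed()
    {
        return Ok("Detailed data.");
    }
}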

To test that this code is working correctly, simply make a GET request to the books endpoint;

GET http://localhost:62996/api/books

You should get the following response;

{
  "message": "Authorization has been denied for this request."
}

Great, it’s working. Now let’s fix that problem.

Make a POST request to the OAuth endpoint, and include the following;

  • Headers
    • Accept application/json
    • Accept-Language en-gb
    • Audience Any
  • Body
    • username administrator
    • password administrator123
    • grant_type password

Shown in the below screenshot;

OAuth Request

Make sure you set the message type as x-www-form-urlencoded.

If you are interested, here is the raw message;

POST /oauth2/token HTTP/1.1
Host: localhost:62996
Accept: application/json
Accept-Language: en-gb
Audience: Any
Content-Type: application/x-www-form-urlencoded
Cache-Control: no-cache
Postman-Token: 8bc258b2-a08a-32ea-3cb2-2e7da46ddc09

username=administrator&password=administrator123&grant_type=password

The form data has been URL encoded and placed in the message body.

The web service should authenticate the request, and return a token (Shown in the response section in Postman). You can test that the authentication is working correctly by supplying an invalid username/password. In this case, you should get the following reply;

{
  "error": "invalid_grant"
}

This is deliberately vague to avoid giving any malicious users more information than they need.

Now to get a list of books, we need to call the endpoint passing in the token as a header.

Change the HTTP verb to GET and change the URL to; http://localhost:62996/api/books.

On the Headers tab in Postman, add the following additional headers;

Authorization Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1bmlxdWVfbmFtZSI6ImFkbWluaXN0cmF0b3IiLCJzdWIiOiJhZG1pbmlzdHJhdG9yIiwicm9sZSI6IkFkbWluaXN0cmF0b3IiLCJpc3MiOiJodHRwOi8vand0YXV0aHpzcnYuYXp1cmV3ZWJzaXRlcy5uZXQiLCJhdWQiOiJBbnkiLCJleHAiOjE0NTgwNDI4MjgsIm5iZiI6MTQ1ODA0MTAyOH0.uhrqQW6Ik_us1lvDXWJNKtsyxYlwKkUrCGXs-eQRWZQ

See screenshot below;

Authorization Header

Success! We have data from our secure endpoint.

Summary

In this introduction we looked at creating a project using Web API to issue and authenticate Jwt (JSON Web Tokens). We created a simple endpoint to retrieve a list of books, and also added the ability to post new reviews and delete existing ones in a RESTful way.

This project is the foundation for subsequent posts that will explore creating a rich client side application, using modern JavaScript frameworks, which will enable authentication and authorization.

WCF custom authentication using ServiceCredentials

The generally accepted way of authenticating a user with WCF is with a User Name and Password, using the UserNamePasswordValidator class. It’s so common that even MSDN has a tutorial, which is saying something, because the MSDN documentation for WCF is seriously lacking at best. The username/password approach does what it says on the tin: you pass a username and password credential from the client to the server, do your authentication, and throw an exception only if there is a problem. It’s a primitive approach, but it works. But what about when you want to do something a little less trivial than that? ServiceCredentials is probably what you need.

Source code for this post is available on GitHub.

Scenario

I should preface this tutorial with a disclaimer, and this disclaimer is just my opinion. WCF is incredibly poorly documented and at times counter-intuitive. In fact, I generally avoid WCF development like the black plague, preferring technologies such as Web API. The saving grace of WCF is that you have full control over a much more substantial set of functionality, and you’re not limited by REST but empowered by SOAP. WCF plays particularly nicely with WPF, my favourite desktop software technology. I’ve never used WCF as part of a web service before, and I doubt I ever will.

Tangent aside, sometimes it’s not appropriate to authenticate a user with just a username and password. You might want to pass along a User Name and a License Key, along with some kind of unique identification code based on the hardware configuration of the user’s computer. Passing along this kind of information in a clean way can’t be done with the simple UserNamePasswordValidator, without resorting to some hacky kind of delimited string approach (“UserName~LicenseKey~UniqueCode”).

So this is what we will do for this tutorial: pass a User Name, License Key and “Unique Key” from the client to the server for authentication and authorization. And for security, we will avoid using WsHttpBinding and instead create a CustomBinding and use an SSL certificate (PFX on the server, CER on the client). The reasons for this are discussed throughout this tutorial, but primarily it is because I’ve encountered so many problems with WsHttpBinding when used in a load-balanced environment that it’s just not worth the hassle.

As a final note, we will also go “configuration free”. All of this is hard-coded, because I can’t assume that if you use this code in a production environment you will have access to the machine certificate store, which a lot of web hosting providers restrict access to. As far as I know, the SSL certificate cannot be loaded from a file or a resource using the Web.config.

Server Side Implementation

Basic Structure

All preamble aside, let’s dive straight in. This tutorial isn’t about creating a full-featured WCF service (a quick Google of the term “WCF Tutorial” presents about 878,000 results for that), so the specific implementation details aren’t important. What is important is that you have a Service Contract with at least one Operation Contract, for testing purposes. Create a new WCF Service Application in Visual Studio, and refactor the boilerplate code as follows;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(int value);
}

public class EchoService : IEchoService
{
    public string Echo(int value)
    {
        return string.Format("You entered: {0}", value);
    }
}

And rename the SVC file to EchoService.svc.

Open up the Web.config file and delete everything inside the <system.serviceModel> element.  You don’t need any of that.

NuGet Package

It is not exactly clear to me why, but you’ll also need to install the NuGet package Microsoft ASP.NET Web Pages (Install-Package Microsoft.AspNet.WebPages).  I suppose this might be used for the WSDL definition page or the help page.  I didn’t really look into it.


Hosting In Local IIS (Internet Information Services)

I’m hosting this in IIS on my local machine (using a self-signed certificate), but I’ve thoroughly tested it on a real server using a “real” SSL certificate, so I’ll give you some helpful hints about that as we go along.

First things first;

  1. Open IIS Manager (inetmgr)
  2. Add a new website called “echo”
  3. Add a HTTP binding with the host name “echo.local”
  4. Open up the hosts file (C:\Windows\System32\drivers\etc\hosts) and add an entry for “echo.local” pointing at IP address 127.0.0.1 (see the example after this list)
  5. Use your favourite SSL self signed certificate creation tool to generate a certificate for cn=echo.local  (See another tutorial I wrote that explains how to do this).  Be sure to save the SSL certificate in PFX format, this is important for later.
  6. The quickest way I’ve found to generate the CER file (which is the certificate excluding the private key, for security) is to import the PFX into the Personal certificate store for your local machine.  Then right click > All Tasks > Export (excluding private key) and select DER encoded binary X.509 (.CER).  Save to some useful location for use later.  Naturally when doing this “for real”, your SSL certificate provider will provide the PFX and CER (and tonnes of other formats) so you can skip this step.  This tutorial assumes you don’t have access to the certificate store (either physically or programmatically) on the production machine.
  7. DO NOT add a binding for HTTPS unless you are confident that your web host fully supports HTTPS connections.  More on this later.
  8. Flip back to Visual Studio and publish your site to IIS.  I like to publish in “Debug” mode initially, just to make debugging slightly less impossible.

ImportCertificate

Open your favourite web browser and navigate to http://echo.local/EchoService.svc?wsdl. You won’t get much of anything at this time, just a message to say that service metadata is unavailable and instructions on how to turn it on. Forget it, it’s not important.

Beyond UserNamePasswordValidator

Normally at this stage you would create a UserNamePasswordValidator, add your database/authentication/authorization logic, and be done after about 10 minutes of effort. Well forget that; you should expect to spend at least the next hour creating a myriad of classes and helpers: authenticators, policies, tokens, factories and credentials. Hey, I never said this was easy, just that it can be done.

Factory Pattern

The default WCF Service Application template you used to create the project generates a ServiceHost object with a Service property that points to the actual implementation of our service, the guts.  We need to change this to use a ServiceHostFactory, which will spawn new service hosts for us.  Right click on the EchoService.svc file and change the Service property to Factory, and EchoService to EchoServiceFactory;

//Change 
Service="WCFCustomClientCredentials.EchoService"

//To
Factory="WCFCustomClientCredentials.EchoServiceFactory"

Just before we continue, add a new class to your project called EchoServiceHost and derive from ServiceHost.  This is the actual ServiceHost that was previously created automatically under the hood for us.  We will flesh this out over the course of the tutorial.  For now, just add a constructor that takes an array of base addresses for our service, and which passes the type of the service to the base.

public class EchoServiceHost : ServiceHost
{
    public EchoServiceHost(params Uri[] addresses)
        : base(typeof(EchoService), addresses)
    {

    }
}

Now add another new class to your project, named EchoServiceFactory, and derived from ServiceHostFactoryBase.  Override CreateServiceHost and return a new instance of EchoServiceHost with the appropriate base address.

public override ServiceHostBase CreateServiceHost(string constructorString, Uri[] baseAddresses)
{
    return new EchoServiceHost(new[]
    {
        new Uri("http://echo.local/")
    });
}

We won’t initialize the ServiceHost just yet; we’ll come back to that later.

Custom ServiceCredentials

ServiceCredentials has many responsibilities, including serialization/deserialization and authentication/authorization. It is not to be confused with ClientCredentials, which has the additional responsibility of generating a token containing all the fields to pass to the service (User Name, License Key and Unique Code). There is a pretty decent tutorial on MSDN which explains some concepts in a little more detail than I will attempt. The ServiceCredentials will (as well as all the aforementioned things) load in our SSL certificate and use it to verify (using the private key) that the certificate passed from the client is valid, before attempting authentication/authorization. Before creating the ServiceCredentials class, add each of the following;

  1. EchoServiceCredentialsSecurityTokenManager which derives from ServiceCredentialsSecurityTokenManager.
  2. EchoSecurityTokenAuthenticator which derives from SecurityTokenAuthenticator.

Use ReSharper or Visual Studio IntelliSense to stub out any abstract methods for the time being.  We will flesh these out as we go along.

You will need to add a reference to System.IdentityModel, which we will need when creating our authorization policies next.
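To give an idea of where we are heading, here is a minimal sketch of how the stubbed token manager will eventually route requests for our custom token type to our custom authenticator. It uses the EchoConstants class we define shortly, and the exact shape is an assumption at this stage;

public class EchoServiceCredentialsSecurityTokenManager : ServiceCredentialsSecurityTokenManager
{
    public EchoServiceCredentialsSecurityTokenManager(EchoServiceCredentials credentials)
        : base(credentials)
    {
    }

    public override SecurityTokenAuthenticator CreateSecurityTokenAuthenticator(SecurityTokenRequirement tokenRequirement, out SecurityTokenResolver outOfBandTokenResolver)
    {
        if (tokenRequirement.TokenType == EchoConstants.EchoTokenType)
        {
            // Our token needs no out-of-band resolution.
            outOfBandTokenResolver = null;
            return new EchoSecurityTokenAuthenticator();
        }

        return base.CreateSecurityTokenAuthenticator(tokenRequirement, out outOfBandTokenResolver);
    }
}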

You can now flesh out the EchoServiceCredentials class as follows;

public class EchoServiceCredentials : ServiceCredentials
{
    public override SecurityTokenManager CreateSecurityTokenManager()
    {
        return new EchoServiceCredentialsSecurityTokenManager(this);
    }

    protected override ServiceCredentials CloneCore()
    {
        return new EchoServiceCredentials();
    }
}

If things are not clear at this stage, stick with me… your understanding will improve as we go along.

Namespaces and constant values

Several namespaces are required to identify our custom token and its properties.  It makes sense to stick these properties all in one place as constants, which we will also make available to the client later.  The token is ultimately encrypted using a Symmetric encryption algorithm (as shown later), so we can’t see the namespaces in the resulting SOAP message, but I’m sure they’re there.

Create a new class called EchoConstants, and add the following;

public class EchoConstants
{
    public const string EchoNamespace = "https://echo/";

    public const string EchoLicenseKeyClaim = EchoNamespace + "Claims/LicenseKey";
    public const string EchoUniqueCodeClaim = EchoNamespace + "Claims/UniqueCode";
    public const string EchoUserNameClaim = EchoNamespace + "Claims/UserName";
    public const string EchoTokenType = EchoNamespace + "Tokens/EchoToken";

    public const string EchoTokenPrefix = "ct";
    public const string EchoUrlPrefix = "url";
    public const string EchoTokenName = "EchoToken";
    public const string Id = "Id";
    public const string WsUtilityPrefix = "wsu";
    public const string WsUtilityNamespace = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd";

    public const string EchoLicenseKeyElementName = "LicenseKey";
    public const string EchoUniqueCodeElementName = "UniqueCodeKey";
    public const string EchoUserNameElementName = "UserNameKey";
}

All these string values (except for the WsUtilityNamespace) are arbitrary values.  They give the message structure and conformity with open standards.

We will use these constant values throughout the remainder of the tutorial.

Security Token

Let’s work through this starting with the most interesting classes first, and work backwards in descending order. The SecurityToken contains all our custom credentials that we will ultimately use to determine if the user is allowed to use the service. A security token can contain pretty much anything you want, as long as the token itself has a unique ID, and a valid from/to date and time.

Add the following class to your project;

public class EchoToken : SecurityToken
{
    private readonly DateTime _effectiveTime = DateTime.UtcNow;
    private readonly string _id;
    private readonly ReadOnlyCollection<SecurityKey> _securityKeys;

    public string LicenseKey { get; set; }
    public string UniqueCode { get; set; }
    public string UserName { get; set; }

    public EchoToken(string licenseKey, string uniqueCode, string userName, string id = null)
    {
        LicenseKey = licenseKey;
        UniqueCode = uniqueCode;
        UserName = userName;

        _id = id ?? Guid.NewGuid().ToString();
        _securityKeys = new ReadOnlyCollection<SecurityKey>(new List<SecurityKey>());
    }

    public override string Id
    {
        get { return _id; }
    }

    public override ReadOnlyCollection<SecurityKey> SecurityKeys
    {
        get { return _securityKeys; }
    }

    public override DateTime ValidFrom
    {
        get { return _effectiveTime; }
    }

    public override DateTime ValidTo
    {
        get { return DateTime.MaxValue; }
    }
}

There are a few things to note here;

  1. The token has a unique identifier, in this case a random Guid.  You can use whatever mechanism you like here, as long as it results in a unique identifier for the token.
  2. The token is valid from now until forever.  You might want to put a realistic timeframe in place here.
  3. I don’t know what SecurityKeys is for, and it doesn’t seem to matter.

Before you rush off to MSDN, here is what it says;

Base class for security keys.

Helpful.

We’re not quite ready to use this token yet, so we’ll revisit later.  All the pieces come together at once, like a really dull jigsaw.

Authorization Policy

We only care at this point about authorizing the request based on the User Name, License Key and Unique Code provided in the token.  We could however use an Authorization Policy to limit access to certain service methods based on any one of these factors.  If you want to restrict access to your API in this way, see the MSDN documentation for more information.  If, however, the basic authorization is good enough for you, add the following code;

public class EchoTokenAuthorizationPolicy : IAuthorizationPolicy
{
    private readonly string _id;
    private readonly IEnumerable<ClaimSet> _issuedClaimSets;
    private readonly ClaimSet _issuer;

    public EchoTokenAuthorizationPolicy(ClaimSet issuedClaims)
    {
        if (issuedClaims == null)
        {
            throw new ArgumentNullException("issuedClaims");
        }

        _issuer = issuedClaims.Issuer;
        _issuedClaimSets = new[] { issuedClaims };
        _id = Guid.NewGuid().ToString();
    }

    public ClaimSet Issuer
    {
        get { return _issuer; }
    }

    public string Id
    {
        get { return _id; }
    }

    public bool Evaluate(EvaluationContext context, ref object state)
    {
        foreach (ClaimSet issuance in _issuedClaimSets)
        {
            context.AddClaimSet(this, issuance);
        }

        return true;
    }
}

The key to this working is the Evaluate method.  We are just adding each claim to the EvaluationContext claim set, without doing any sort of checks.  This is fine because we will do our own authorization as part of the SecurityTokenAuthenticator, shown next.
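As a side note, once the policy has added the claims, you can read them back out from inside a service operation.  Here is a minimal sketch (the Echo operation body is illustrative, and it assumes the EchoConstants shown earlier) using ServiceSecurityContext;

public int Echo(int value)
{
    // The claims added by EchoTokenAuthorizationPolicy.Evaluate end up here
    AuthorizationContext authContext = ServiceSecurityContext.Current.AuthorizationContext;

    foreach (ClaimSet claimSet in authContext.ClaimSets)
    {
        foreach (Claim claim in claimSet.FindClaims(EchoConstants.EchoUserNameClaim, Rights.PossessProperty))
        {
            // claim.Resource holds the value supplied by the client
            Debug.WriteLine("Request made by: " + claim.Resource);
        }
    }

    return value;
}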

Security Token Authentication and Authorization

Now that we have our Authorization Policies in place, we can get down to business and tell WCF to allow or deny the request.  We must create a class that derives from SecurityTokenAuthenticator, and override the ValidateTokenCore method.  If an exception is thrown in this method, the request will be rejected.  You’re also required to return the authorization policies, which will be evaluated accordingly and the request rejected if the token does not have the claims required to access the desired operation.  How you authorize/authenticate the request is down to you, but will inevitably involve some database call or similar tasks to check for the existence and legitimacy of the given token parameters.

Here is a sample implementation;

public class EchoSecurityTokenAuthenticator : SecurityTokenAuthenticator
{
    protected override bool CanValidateTokenCore(SecurityToken token)
    {
        return (token is EchoToken);
    }

    protected override ReadOnlyCollection<IAuthorizationPolicy> ValidateTokenCore(SecurityToken token)
    {
        var echoToken = token as EchoToken;

        if (echoToken == null)
        {
            throw new ArgumentNullException("token");
        }

        var authorizationException = IsAuthorized(echoToken.LicenseKey, echoToken.UniqueCode, echoToken.UserName);
        if (authorizationException != null)
        {
            throw authorizationException;
        }

        var policies = new List<IAuthorizationPolicy>(3)
        {
            CreateAuthorizationPolicy(EchoConstants.EchoLicenseKeyClaim, echoToken.LicenseKey, Rights.PossessProperty),
            CreateAuthorizationPolicy(EchoConstants.EchoUniqueCodeClaim, echoToken.UniqueCode, Rights.PossessProperty),
            CreateAuthorizationPolicy(EchoConstants.EchoUserNameClaim, echoToken.UserName, Rights.PossessProperty),
        };

        return policies.AsReadOnly();
    }

    private static Exception IsAuthorized(string licenseKey, string uniqueCode, string userName)
    {
        Exception result = null;

        //Check if user is authorized.  If not you must return a FaultException

        return result;
    }

    private static EchoTokenAuthorizationPolicy CreateAuthorizationPolicy<T>(string claimType, T resource, string rights)
    {
        return new EchoTokenAuthorizationPolicy(new DefaultClaimSet(new Claim(claimType, resource, rights)));
    }
}
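To give a flavour of what IsAuthorized might look like when fleshed out, here is a rough sketch.  The checks are entirely hypothetical; substitute whatever database or membership lookup your application actually requires;

private static Exception IsAuthorized(string licenseKey, string uniqueCode, string userName)
{
    // Hypothetical sanity checks - replace with your own lookup logic
    if (string.IsNullOrWhiteSpace(licenseKey) ||
        string.IsNullOrWhiteSpace(uniqueCode) ||
        string.IsNullOrWhiteSpace(userName))
    {
        return new FaultException("A license key, unique code and user name must be supplied");
    }

    // e.g. query your licensing database (hypothetical repository);
    // if (!licenseRepository.IsValid(licenseKey, uniqueCode, userName))
    //     return new FaultException("The supplied credentials are not recognised");

    return null;
}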

Token Serialization

Before we can continue, we have neglected to discuss one very important detail.  WCF generates messages in XML SOAP format for standardised communication between the client and the server applications.  This is achieved by serializing the token using a token serializer.  Surprisingly, however, this doesn’t happen automatically.  You have to give WCF a hand and tell it exactly how to both read and write the messages.  It gives you the tools (an XmlReader and XmlWriter) but you have to do the hammering yourself.

The code for this isn’t short, so I apologise for that.  Here is an explanation of what happens;

  1. CanReadTokenCore is called when deserializing a token.  The responsibility of this method is to tell the underlying framework if this class is capable of reading the token contents.
  2. ReadTokenCore is called with an XmlReader, which provides access to the raw token itself.  You use the XmlReader to retrieve the parts of the token of interest (the User Name, Unique Code and License Key) and ultimately return a new SecurityToken (EchoSecurityToken).
  3. CanWriteTokenCore is called when serializing a token.  Return true if the serializer is capable of serializing the given token.
  4. WriteTokenCore is called with an XmlWriter and the actual SecurityToken.  Use both objects to do the serialization manually.

And the code itself;

public class EchoSecurityTokenSerializer : WSSecurityTokenSerializer
{
    private readonly SecurityTokenVersion _version;

    public EchoSecurityTokenSerializer(SecurityTokenVersion version)
    {
        _version = version;
    }

    protected override bool CanReadTokenCore(XmlReader reader)
    {
        if (reader == null)
        {
            throw new ArgumentNullException("reader");
        }
        if (reader.IsStartElement(EchoConstants.EchoTokenName, EchoConstants.EchoNamespace))
        {
            return true;
        }
        return base.CanReadTokenCore(reader);
    }

    protected override SecurityToken ReadTokenCore(XmlReader reader, SecurityTokenResolver tokenResolver)
    {
        if (reader == null)
        {
            throw new ArgumentNullException("reader");
        }
        if (reader.IsStartElement(EchoConstants.EchoTokenName, EchoConstants.EchoNamespace))
        {
            string id = reader.GetAttribute(EchoConstants.Id, EchoConstants.WsUtilityNamespace);

            reader.ReadStartElement();

            string licenseKey = reader.ReadElementString(EchoConstants.EchoLicenseKeyElementName, EchoConstants.EchoNamespace);
            string uniqueCode = reader.ReadElementString(EchoConstants.EchoUniqueCodeElementName, EchoConstants.EchoNamespace);
            string userName = reader.ReadElementString(EchoConstants.EchoUserNameElementName, EchoConstants.EchoNamespace);

            reader.ReadEndElement();

            return new EchoToken(licenseKey, uniqueCode, userName, id);
        }
        return DefaultInstance.ReadToken(reader, tokenResolver);
    }

    protected override bool CanWriteTokenCore(SecurityToken token)
    {
        if (token is EchoToken)
        {
            return true;
        }
        return base.CanWriteTokenCore(token);
    }

    protected override void WriteTokenCore(XmlWriter writer, SecurityToken token)
    {
        if (writer == null)
        {
            throw new ArgumentNullException("writer");
        }
        if (token == null)
        {
            throw new ArgumentNullException("token");
        }

    var echoToken = token as EchoToken;
    if (echoToken != null)
    {
        writer.WriteStartElement(EchoConstants.EchoTokenPrefix, EchoConstants.EchoTokenName, EchoConstants.EchoNamespace);
        writer.WriteAttributeString(EchoConstants.WsUtilityPrefix, EchoConstants.Id, EchoConstants.WsUtilityNamespace, token.Id);
        writer.WriteElementString(EchoConstants.EchoLicenseKeyElementName, EchoConstants.EchoNamespace, echoToken.LicenseKey);
        writer.WriteElementString(EchoConstants.EchoUniqueCodeElementName, EchoConstants.EchoNamespace, echoToken.UniqueCode);
        writer.WriteElementString(EchoConstants.EchoUserNameElementName, EchoConstants.EchoNamespace, echoToken.UserName);
            writer.WriteEndElement();
            writer.Flush();
        }
        else
        {
            base.WriteTokenCore(writer, token);
        }
    }
}
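For reference, a token written by WriteTokenCore looks roughly like this on the wire, before the symmetric encryption described earlier is applied (the Id and the values are illustrative);

<ct:EchoToken wsu:Id="0f8fad5b-d9cb-469f-a165-70867728950e"
              xmlns:ct="https://echo/"
              xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
  <ct:LicenseKey>LICENSE-KEY</ct:LicenseKey>
  <ct:UniqueCodeKey>UNIQUE-CODE</ct:UniqueCodeKey>
  <ct:UserNameKey>USER-NAME</ct:UserNameKey>
</ct:EchoToken>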

Service Credentials Security Token Manager

A long time ago… in a blog post right here, you created a class called EchoServiceCredentialsSecurityTokenManager.  The purpose of this class is to tell WCF that we want to use our custom token authenticator (EchoSecurityTokenAuthenticator) when it encounters our custom token.

Update the EchoServiceCredentialsSecurityTokenManager as follows;

public class EchoServiceCredentialsSecurityTokenManager : ServiceCredentialsSecurityTokenManager
{
    public EchoServiceCredentialsSecurityTokenManager(ServiceCredentials parent)
        : base(parent)
    {
    }

    public override SecurityTokenAuthenticator CreateSecurityTokenAuthenticator(SecurityTokenRequirement tokenRequirement, out SecurityTokenResolver outOfBandTokenResolver)
    {
        if (tokenRequirement.TokenType == EchoConstants.EchoTokenType)
        {
            outOfBandTokenResolver = null;
            return new EchoSecurityTokenAuthenticator();
        }
        return base.CreateSecurityTokenAuthenticator(tokenRequirement, out outOfBandTokenResolver);
    }

    public override SecurityTokenSerializer CreateSecurityTokenSerializer(SecurityTokenVersion version)
    {
        return new EchoSecurityTokenSerializer(version);
    }
}

The code is pretty self explanatory.  When an EchoToken is encountered, use the EchoSecurityTokenAuthenticator to confirm that the token is valid, authentic and authorized.  Also, the token can be serialized/deserialized using the EchoSecurityTokenSerializer.

Service Host Endpoints

The last remaining consideration is exposing endpoints so that the client has “something to connect to”.  This is done in EchoServiceHost by overriding the InitializeRuntime method, as shown;

protected override void InitializeRuntime()
{
    var baseUri = new Uri("http://echo.local");
    var serviceUri = new Uri(baseUri, "EchoService.svc");

    Description.Behaviors.Remove((typeof(ServiceCredentials)));

    var serviceCredential = new EchoServiceCredentials();
    serviceCredential.ServiceCertificate.Certificate = new X509Certificate2(Resources.echo, string.Empty, X509KeyStorageFlags.MachineKeySet);
    Description.Behaviors.Add(serviceCredential);

    var behaviour = new ServiceMetadataBehavior { HttpGetEnabled = true, HttpsGetEnabled = false };
    Description.Behaviors.Add(behaviour);

    Description.Behaviors.Find<ServiceDebugBehavior>().IncludeExceptionDetailInFaults = true;
    Description.Behaviors.Find<ServiceDebugBehavior>().HttpHelpPageUrl = serviceUri;

    AddServiceEndpoint(typeof(IEchoService), new BindingHelper().CreateHttpBinding(), string.Empty);

    base.InitializeRuntime();
}

The code does the following;

  1. Define the base URL and the service URL
  2. Remove the default implementation of ServiceCredentials, and replace with our custom implementation.  Ensure that the custom implementation uses our SSL certificate (in this case, the SSL certificate is added to the project as a resource).  If the PFX (and it must be a PFX) requires a password, be sure to specify it.
  3. Define and add a metadata endpoint (not strictly required)
  4. Turn on detailed exceptions for debugging purposes, and expose a help page (again not strictly required)
  5. Add an endpoint for our service, use a custom binding.  (DO NOT attempt to use WsHttpBinding or BasicHttpsBinding, you will lose 4 days of your life trying to figure out why it doesn’t work in a load balanced environment!)
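As an aside, if you are hosting EchoService.svc in IIS, you can wire up the custom host with a ServiceHostFactory along these lines (a sketch; it assumes EchoServiceHost has a constructor that forwards to the base ServiceHost(Type, params Uri[]) constructor).  You then reference the factory from the Factory attribute of the .svc file;

public class EchoServiceHostFactory : ServiceHostFactory
{
    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        // EchoServiceHost overrides InitializeRuntime, as shown above
        return new EchoServiceHost(serviceType, baseAddresses);
    }
}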

Custom Http Binding

In the interest of simplicity, I want the server and the client to use the exact same binding.  To make this easier, I’ve extracted the code out into a separate helper class which will be referenced by both once we’ve refactored (discussed next).  We’re using HTTP right now but we will discuss security and production environments towards the end of the post.  The custom binding will provide some level of security via a Symmetric encryption algorithm that will be applied to aspects of the message.

public Binding CreateHttpBinding()
{
    var httpTransport = new HttpTransportBindingElement
    {
        MaxReceivedMessageSize = 10000000
    };

    var messageSecurity = new SymmetricSecurityBindingElement();

    var x509ProtectionParameters = new X509SecurityTokenParameters
    {
        InclusionMode = SecurityTokenInclusionMode.Never
    };

    messageSecurity.ProtectionTokenParameters = x509ProtectionParameters;
    return new CustomBinding(messageSecurity, httpTransport);
}

Note, I’ve increased the max message size to 10,000,000 bytes (10MB ish) because this is appropriate for my scenario.  You might want to think long and hard about doing this.  The default message size limit is relatively small to help ward off DDoS attacks, so think carefully before changing the default.  10MB is a lot of data to receive in a single request, even though it might not sound like much.

With the endpoint now exposed, a client (if we had one) would be able to connect.  Let's do some refactoring first to make our life a bit easier.

Refactoring

In the interest of simplicity, I haven’t worried too much about the client so far.  We need to make some changes to the project structure so that some of the lovely code we have written so far can be shared and kept DRY.  Add a class library to your project, called Shared and move the following classes into it (be sure to update the namespaces and add the appropriate reference).

  1. BindingHelper.cs
  2. IEchoService.cs
  3. EchoSecurityTokenSerializer.cs
  4. EchoConstants.cs
  5. EchoToken.cs

Client Side Implementation

We’re about 2/3 of the way through now.  Most of the leg work has been done and we just have to configure the client correctly so it can make first contact with the server.

Create a new console application (or whatever you fancy) and start by adding a reference to the Shared library you just created for the server.  Add the SSL certificate (CER format, doesn’t contain the private key) to your project as a resource.  Also add a reference to System.ServiceModel.

Custom ClientCredentials

The ClientCredentials class works in a similar way to ServiceCredentials, but with a couple of subtle differences.  When you instantiate the ClientCredentials, you pass it all the arbitrary claims you want to send to the WCF service (License Key, Unique Code, User Name).  This object will be passed to the serializer that you created as part of the server side code (EchoSecurityTokenSerializer) later on.

First things first, create the EchoClientCredentials class as follows;

public class EchoClientCredentials : ClientCredentials
{
    public string LicenseKey { get; private set; }
    public string UniqueCode { get; private set; }
    public string ClientUserName { get; private set; }

    public EchoClientCredentials(string licenseKey, string uniqueCode, string userName)
    {
        LicenseKey = licenseKey;
        UniqueCode = uniqueCode;
        ClientUserName = userName;
    }

    protected override ClientCredentials CloneCore()
    {
        return new EchoClientCredentials(LicenseKey, UniqueCode, ClientUserName);
    }

    public override SecurityTokenManager CreateSecurityTokenManager()
    {
        return new EchoClientCredentialsSecurityTokenManager(this);
    }
}

The ClientCredentials class has an abstract method, CreateSecurityTokenManager, which we will use to tell WCF how to ultimately generate our token.

Client side Security Token Manager

As discussed, the ClientCredentialsSecurityTokenManager is responsible for “figuring out” what to do with a token that it has encountered.  Before it uses its own underlying token providers, it gives us the chance to specify our own, by calling CreateSecurityTokenProvider.  We can check the token type to see if we can handle that token ourselves.

Create a new class, called EchoClientCredentialsSecurityTokenManager, that derives from ClientCredentialsSecurityTokenManager, and add the following code;

public class EchoClientCredentialsSecurityTokenManager : ClientCredentialsSecurityTokenManager
{
    private readonly EchoClientCredentials _credentials;

    public EchoClientCredentialsSecurityTokenManager(EchoClientCredentials connectClientCredentials)
        : base(connectClientCredentials)
    {
        _credentials = connectClientCredentials;
    }

    public override SecurityTokenProvider CreateSecurityTokenProvider(SecurityTokenRequirement tokenRequirement)
    {
        if (tokenRequirement.TokenType == EchoConstants.EchoTokenType)
        {
            // Handle this token for Custom.
            return new EchoTokenProvider(_credentials);
        }
        if (tokenRequirement is InitiatorServiceModelSecurityTokenRequirement)
        {
            // Return server certificate.
            if (tokenRequirement.TokenType == SecurityTokenTypes.X509Certificate)
            {
                return new X509SecurityTokenProvider(_credentials.ServiceCertificate.DefaultCertificate);
            }
        }
        return base.CreateSecurityTokenProvider(tokenRequirement);
    }

    public override SecurityTokenSerializer CreateSecurityTokenSerializer(SecurityTokenVersion version)
    {
        return new EchoSecurityTokenSerializer(version);
    }
}

The code is pretty verbose, and we can see clearly what is happening here.  We inspect the token type and see if it matches that of our Echo token.  If we find a match, we return an EchoTokenProvider (coming next), which is simply a wrapper containing our claims.  Note that we are also able to reuse the token serializer that we created as part of the server side work, a nice (not so little) time saver!

Security Token Provider

In this case, the security token provider is nothing more than a vessel that contains our client credentials.  The token provider instantiates the token, passes the client credentials, and passes the token off for serialization.

public class EchoTokenProvider : SecurityTokenProvider
{
    private readonly EchoClientCredentials _credentials;

    public EchoTokenProvider(EchoClientCredentials credentials)
    {
        if (credentials == null) throw new ArgumentNullException("credentials");

        _credentials = credentials;
    }

    protected override SecurityToken GetTokenCore(TimeSpan timeout)
    {
        return new EchoToken(_credentials.LicenseKey, _credentials.UniqueCode, _credentials.ClientUserName);
    }
}

Test Client

The client side code for establishing a connection with our service is relatively simple. We need each of the following:

  1. Define the endpoint (the address) of our service
  2. Create an instance of EchoClientCredentials
  3. Load the SSL certificate (the public key aspect at least) and pass to the credentials object we just instantiated
  4. Remove the default implementation of ClientCredentials and pass in our own
  5. Create a channel factory, and call our service method

Here is an example of what your client code would look like;

var serviceAddress = new EndpointAddress("http://echo.local/EchoService.svc");

var channelFactory = new ChannelFactory<IEchoService>(new BindingHelper().CreateHttpBinding(), serviceAddress);

var credentials = new EchoClientCredentials("license key", "unique code", "user name");
var certificate = new X509Certificate2(Resources.echo);
credentials.ServiceCertificate.DefaultCertificate = certificate;

channelFactory.Endpoint.Behaviors.Remove(typeof(ClientCredentials));
channelFactory.Endpoint.Behaviors.Add(credentials);

var service = channelFactory.CreateChannel();
Console.WriteLine(service.Echo(10));

Security and Production Environment Considerations

Throughout this tutorial I have used HTTP bindings and told you explicitly not to use HTTPS, and there is a very good reason for that.  If you have a simple hosting environment, i.e. an environment that is NOT load balanced, then you can go ahead and make the following changes;

  • Change your service URL to HTTPS
  • Change HttpTransportBindingElement (on the server, inside the BindingHelper) to HttpsTransportBindingElement.
  • Add a HTTPS binding in IIS

Re-launch the client and all should be good.  If you get the following error message, you’re in big trouble.

The protocol ‘https’ is not supported.

After 4 days of battling with this error, I found the cause.  Basically, WCF requires end-to-end HTTPS for HTTPS to be “supported”.  Take the following set up;

[Diagram: a client connecting to a load balancer over HTTPS, with plain HTTP between the load balancer and the web server]

Some hosting companies will load balance the traffic.  That makes absolutely perfect sense and is completely reasonable.  The communications will be made from the client (laptop, desktop or whatever) via HTTPS; that bit is fine.  If you go to the service via HTTPS you will get a response.  However, and here’s the key, the communication between the load balancer and the physical web server probably isn’t secured, i.e. doesn’t use HTTPS.  So the end-to-end communication isn’t HTTPS and therefore you get the error message described.

To work around this, use an HTTPS binding on the client, and an HTTP binding on the server.  This will guarantee that the traffic between the client and the load balancer is secure (thus preventing man-in-the-middle attacks), but the traffic between the load balancer and the physical web server will not be secure (you’ll have to decide for yourself if you can live with that).
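In code terms, that means giving the BindingHelper a second method for the client to use.  Here is a sketch; it mirrors CreateHttpBinding exactly, swapping the transport element for its HTTPS equivalent;

public Binding CreateHttpsBinding()
{
    // Identical to CreateHttpBinding, except the transport is secure
    var httpsTransport = new HttpsTransportBindingElement
    {
        MaxReceivedMessageSize = 10000000
    };

    var messageSecurity = new SymmetricSecurityBindingElement();

    var x509ProtectionParameters = new X509SecurityTokenParameters
    {
        InclusionMode = SecurityTokenInclusionMode.Never
    };

    messageSecurity.ProtectionTokenParameters = x509ProtectionParameters;
    return new CustomBinding(messageSecurity, httpsTransport);
}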

Quirks

I’ve encountered a few quirks whilst developing this service over the last few weeks.  Quirks are things I can’t explain or don’t care to understand.  You must make the following changes to the server side code, or else it might not work.  If you find any other quirks, feel free to let me know and I’ll credit your discovery;


You must add the AddressFilterMode ‘Any’ to the service implementation, or it won’t work.

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
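Applied to the service implementation, it looks like this (the Echo method body is illustrative);

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
public class EchoService : IEchoService
{
    public int Echo(int value)
    {
        return value;
    }
}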

Summary

A lot of work is required to be able to do custom authentication using ServiceCredentials with WCF, no fewer than 18 classes in total. For cases when a trivial User Name and password simply won’t suffice, you can use this approach. WCF works really well when developing non-web based applications, but the lack of documentation can make development and maintenance harder than it should be. Be careful when using in a load balanced environment, you may need to make some changes to your bindings as already discussed.

Quick tip: Avoid ‘async void’

When developing a Web API application recently with an AngularJS front end, I made a basic mistake and then lost 2 hours of my life trying to figure out what was causing the problem … async void.

It's pretty common nowadays to use tasks to improve performance/scalability when writing a Web API controller.  Take the following code:

public async Task<Entry[]> Get()
{
    using (var context = new EntriesContext())
    {
        return await context.Entries.ToArrayAsync();
    }
}

At a high level, when ToArrayAsync is executed the call will be moved off onto another thread and the execution of the method will only continue once the operation is complete (when the data is returned from the database in this case).  This is great because it frees up the thread for use by other requests, resulting in better performance/scalability (we could argue about how true this is all day long, so let's not do that here!).

So what about when you still want to harness this functionality, but you don’t need to return anything to the client?  async void?  Not quite…

Take the following Delete method:

public async void Delete(int id)
{
    using (var context = new EntriesContext())
    {
        Entry entity = await context.Entries.FirstOrDefaultAsync(c => c.Id == id);
        if (entity != null)
        {
            context.Entry(entity).State = EntityState.Deleted;
            await context.SaveChangesAsync();
        }
    }
}

The client uses the Id property to do what it needs to do, so it doesn’t care what actually gets returned…as long as the operation (deleting the entity) completes successfully.

To help illustrate the problem, here is the client side code (written in AngularJS, but it really doesn’t matter what the client side framework is);

$scope.delete = function () {

    var entry = $scope.entries[0];

    $http.delete('/api/Entries/' + entry.Id).then(function () {
        $scope.entries.splice(0, 1);
    });
};

When the delete operation is completed successfully (i.e. a 2xx response code), the then call-back method is raised and the entry is removed from the entries collection.  Only this code never actually runs.  So why?

If you’re lucky, your web browser will give you an error message to let you know that something went wrong…

[Screenshot: the browser developer tools showing the failed request]

I have however seen this error get swallowed up completely.

To get the actual error message, you will need to use a HTTP proxy tool, such as Fiddler.  With this you can capture the response message returned by the server, which should look something like this (for the sake of clarity I’ve omitted all the HTML code which collectively makes up the yellow screen of death);

An asynchronous module or handler completed while an asynchronous operation was still pending.

Yep, you have a race condition.  The method returned before it finished executing.  Under the hood, the framework didn’t create a Task for the method because the method does not return a Task.  The request is therefore able to complete while the asynchronous work (FirstOrDefaultAsync and SaveChangesAsync) is still pending, and the error is encountered.

To resolve the problem, simply change the return type of the method from void to Task.  Don’t worry, you don’t actually have to return anything, and the compiler knows not to generate a build error if there is no return statement.  An easy fix, when you know what the problem is!
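Here is the corrected method;

public async Task Delete(int id)
{
    using (var context = new EntriesContext())
    {
        Entry entity = await context.Entries.FirstOrDefaultAsync(c => c.Id == id);
        if (entity != null)
        {
            context.Entry(entity).State = EntityState.Deleted;
            await context.SaveChangesAsync();
        }
    }
}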

Summary

Web API fully supports Tasks, which are helpful for writing more scalable applications.  When writing methods that don’t need to return a value to the client, it may make sense to return void.  However, under the hood .NET requires the method to return Task in order for it to properly support asynchronous functionality.

AutoMapper

5 AutoMapper tips and tricks

AutoMapper is a productivity tool designed to help you write less repetitive object-to-object mapping code.  AutoMapper maps objects to objects, using both convention and configuration.  AutoMapper is flexible enough that it can be overridden so that it will work with even the oldest legacy systems.  This post demonstrates what I have found to be 5 of the most useful, lesser known features.

Tip: I wrote unit tests to demonstrate each of the basic concepts.  If you would like to learn more about unit testing, please check out my post C# Writing Unit Tests with NUnit And Moq.

Demo project code

This is the basic structure of the code I will use throughout the tutorial;

public class Doctor
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class HealthcareProfessional
{
    public string FullName { get; set; }
}

public class Person
{
    public string Title { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class KitchenCutlery
{
    public int Knifes { get; set; }
    public int Forks { get; set; }
}

public class Kitchen
{
    public int KnifesAndForks { get; set; }
}

public class MyContext : DbContext
{
    public DbSet<Doctor> Doctors { get; set; }
}

public class DbInitializer : DropCreateDatabaseAlways<MyContext>
{
    protected override void Seed(MyContext context)
    {
        context.Doctors.Add(new Doctor
        {
            FirstName = "Jon",
            LastName = "Preece",
            Title = "Mr"
        });
    }
}

I will refer back to this code in each example.

AutoMapper Projection

No doubt one of the best, and probably least used features of AutoMapper is projection.  AutoMapper, when used with an Object Relational Mapper (ORM) such as Entity Framework, can cast the source object to the destination type at database level. This may result in more efficient database queries.

AutoMapper provides the Project extension method, which extends the IQueryable interface for this task.  This means that the source object does not have to be fully retrieved before mapping can take place.

Take the following unit test;

[Test]
public void Doctor_ProjectToPerson_PersonFirstNameIsNotNull()
{
    //Arrange
    Mapper.CreateMap<Doctor, Person>()
            .ForMember(dest => dest.LastName, opt => opt.Ignore());

    //Act
    Person result;
    using (MyContext context = new MyContext())
    {
        context.Database.Log += s => Debug.WriteLine(s);
        result = context.Doctors.Project().To<Person>().FirstOrDefault();
    }

    //Assert
    Assert.IsNotNull(result.FirstName);
}

The query that is created and executed against the database is as follows;

SELECT TOP (1) 
    [d].[Id] AS [Id], 
    [d].[FirstName] AS [FirstName]
    FROM [dbo].[Doctors] AS [d]

Notice that LastName is not returned from the database?  This is quite a simple example, but the potential performance gains are obvious when working with more complex objects.

Recommended Further Reading: Instant AutoMapper

Automapper is a simple library that will help eliminate complex code for mapping objects from one to another. It solves the deceptively complex problem of mapping objects and leaves you with clean and maintainable code.

Instant Automapper Starter is a practical guide that provides numerous step-by-step instructions detailing some of the many features Automapper provides to streamline your object-to-object mapping. Importantly it helps in eliminating complex code.

Configuration Validation

Hands down the most useful, time saving feature of AutoMapper is Configuration Validation.  Basically, after you set up your maps, you can call Mapper.AssertConfigurationIsValid() to ensure that the maps you have defined make sense.  This saves you the hassle of having to run your project, navigate to the appropriate page, click button A/B/C and so on to test that your mapping code actually works.

Take the following unit test;

[Test]
public void Doctor_MapsToHealthcareProfessional_ConfigurationIsValid()
{
    //Arrange
    Mapper.CreateMap<Doctor, HealthcareProfessional>();

    //Act

    //Assert
    Mapper.AssertConfigurationIsValid();
}

AutoMapper throws the following exception;

AutoMapper.AutoMapperConfigurationException : 
Unmapped members were found. Review the types and members below.
Add a custom mapping expression, ignore, add a custom resolver, or modify the source/destination type
===================================================================
Doctor -> HealthcareProfessional (Destination member list)
MakingLifeEasier.Doctor -> MakingLifeEasier.HealthcareProfessional (Destination member list)
-------------------------------------------------------------------
FullName

AutoMapper can’t infer a map between Doctor and HealthcareProfessional because they are structurally very different.  A custom converter, or ForMember, needs to be used to indicate the relationship;

[Test]
public void Doctor_MapsToHealthcareProfessional_ConfigurationIsValid()
{
    //Arrange
    Mapper.CreateMap<Doctor, HealthcareProfessional>()
          .ForMember(dest => dest.FullName, opt => opt.MapFrom(src => string.Join(" ", src.Title, src.FirstName, src.LastName)));

    //Act

    //Assert
    Mapper.AssertConfigurationIsValid();
}

The test now passes because every public property now has a valid mapping.

Custom Conversion

Sometimes when the source and destination objects are too different to be mapped using convention, and simply too big to write elegant inline mapping code (ForMember) for each individual member, it can make sense to do the mapping yourself.  AutoMapper makes this easy by providing the ITypeConverter<TSource, TDestination> interface.

The following is an implementation for mapping Doctor to a HealthcareProfessional;

public class HealthcareProfessionalTypeConverter : ITypeConverter<Doctor, HealthcareProfessional>
{
    public HealthcareProfessional Convert(ResolutionContext context)
    {
        if (context == null || context.IsSourceValueNull)
            return null;

        Doctor source = (Doctor)context.SourceValue;

        return new HealthcareProfessional
        {
            FullName = string.Join(" ", new[] { source.Title, source.FirstName, source.LastName })
        };
    }
}

You instruct AutoMapper to use your converter by using the ConvertUsing method, passing the type of your converter, as shown below;

[Test]
public void Legacy_SourceMappedToDestination_DestinationNotNull()
{
    //Arrange
    Mapper.CreateMap<Doctor, HealthcareProfessional>()
            .ConvertUsing<HealthcareProfessionalTypeConverter>();

    Doctor source = new Doctor
    {
        Title = "Mr",
        FirstName = "Jon",
        LastName = "Preece",
    };

    Mapper.AssertConfigurationIsValid();

    //Act
    HealthcareProfessional result = Mapper.Map<HealthcareProfessional>(source);

    //Assert
    Assert.IsNotNull(result);
}

AutoMapper simply hands over the source object (Doctor) to you, and you return a new instance of the destination object (HealthcareProfessional), with the populated properties.  I like this approach because it means I can keep all my monkey mapping code in one single place.

Value Resolvers

Value resolvers allow for correct mapping of value types.  The source object KitchenCutlery contains a precise breakdown of the number of knives and forks in the kitchen, whereas the destination object Kitchen only cares about the sum total of both.  AutoMapper won’t be able to create a convention based mapping here for us, so we use a Value (type) Resolver;

public class KitchenResolver : ValueResolver<KitchenCutlery, int>
{
    protected override int ResolveCore(KitchenCutlery source)
    {
        return source.Knifes + source.Forks;
    }
}

The value resolver, similar to the type converter, takes care of the mapping and returns a result, but notice that it is specific to the individual property, and not the full object.

The following code snippet shows how to use a Value Resolver;

[Test]
public void Kitchen_KnifesKitchen_ConfigurationIsValid()
{
    //Arrange

    Mapper.CreateMap<KitchenCutlery, Kitchen>()
            .ForMember(dest => dest.KnifesAndForks, opt => opt.ResolveUsing<KitchenResolver>());

    //Act

    //Assert
    Mapper.AssertConfigurationIsValid();
}
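And to see the resolver in action (the cutlery counts are illustrative);

[Test]
public void KitchenCutlery_MappedToKitchen_KnifesAndForksAreSummed()
{
    //Arrange
    Mapper.CreateMap<KitchenCutlery, Kitchen>()
            .ForMember(dest => dest.KnifesAndForks, opt => opt.ResolveUsing<KitchenResolver>());

    //Act
    Kitchen result = Mapper.Map<Kitchen>(new KitchenCutlery { Knifes = 2, Forks = 3 });

    //Assert
    Assert.AreEqual(5, result.KnifesAndForks);
}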

Null Substitution

Think default values.  In the event that you want to give a destination object a default value when the source value is null, you can use AutoMapper’s NullSubstitute feature.

Example usage of the NullSubstitute method, applied individually to each property;

[Test]
public void Doctor_TitleIsNull_DefaultTitleIsUsed()
{
    //Arrange
    Doctor source = new Doctor
    {
        FirstName = "Jon",
        LastName = "Preece"
    };

    Mapper.CreateMap<Doctor, Person>()
            .ForMember(dest => dest.Title, opt => opt.NullSubstitute("Dr"));

    //Act
    Person result = Mapper.Map<Person>(source);

    //Assert
    Assert.AreSame(result.Title, "Dr");
}

Summary

AutoMapper is a productivity tool designed to help you write less repetitive object-to-object mapping code.  You don’t have to rewrite your existing code or write code in a particular style to use AutoMapper, as AutoMapper is flexible enough to be configured to work with even the oldest legacy code.  Most developers aren’t using AutoMapper to its full potential, rarely straying away from Mapper.Map.  There are a multitude of useful tidbits, including Projection, Configuration Validation, Custom Conversion, Value Resolvers and Null Substitution, which can help simplify complex logic when used correctly.

How to create your own ASP .NET MVC model binder

Model binding is the process of converting POST data, or data present in the Url, into .NET objects.  ASP .NET MVC makes this very simple by providing the DefaultModelBinder.  You’ve probably seen this in action many times (even if you didn’t realise it!), but did you know you can easily write your own?

A typical ASP .NET MVC Controller

You’ve probably written or seen code like this many hundreds of times;

public ActionResult Index(int id)
{
    using (ExceptionManagerEntities context = new ExceptionManagerEntities())
    {
        Error entity = context.Errors.FirstOrDefault(c => c.ID == id);

        if (entity != null)
        {
            return View(entity);
        }
    }

    return View();
}

Where did Id come from? It probably came from one of three sources; the Url (Controller/View/{id}), the query string (Controller/View?id={id}), or the post data.  Under the hood, ASP .NET examines your controller method, and searches each of these places looking for data that matches the data type and the name of the parameter.  It may also look at your route configuration to aid this process.

A typical controller method

The code shown in the first snippet is very common in many ASP .NET MVC controllers.  Your action method accepts an Id parameter, your method then fetches an entity based on that Id, and then does something useful with it (and typically saves it back to the database or returns it back to the view).

You can create your own MVC model binder to cut out this step, and simply have the entity itself passed to your action method. 

Take the following code;

public ActionResult Index(Error error)
{
    if (error != null)
    {
        return View(error);
    }

    return View();
}

How much sweeter is that?

Create your own ASP .NET MVC model binder

You can create your own model binder in two simple steps;

  1. Create a class that inherits from DefaultModelBinder, and override the BindModel method (and build up your entity in there)
  2. Add a line of code to your Global.asax.cs file to tell MVC to use that model binder.

Before we forget, tell MVC about your model binder as follows (in the Application_Start method in your Global.asax.cs file);

ModelBinders.Binders.Add(typeof(Error), new ErrorModelBinder());

This tells MVC that if it stumbles across a parameter on an action method of type Error, it should attempt to bind it using the ErrorModelBinder class you just created.

Your BindModel implementation will look like this;

public override object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
{
    if (bindingContext.ModelType == typeof(Error))
    {
        ValueProviderResult valueProviderValue = bindingContext.ValueProvider.GetValue("id");

        int id;
        if (valueProviderValue != null && int.TryParse((string)valueProviderValue.RawValue, out id))
        {
            using (ExceptionManagerEntities context = new ExceptionManagerEntities())
            {
                return context.Errors.FirstOrDefault(c => c.ID == id);
            }
        }
    }

    return base.BindModel(controllerContext, bindingContext);
}

The code digested;

  1. Make sure that we are only trying to build an object of type Error (this should always be true, but let's include this check as a safety net anyway).
  2. Get the ValueProviderResult of the value provider we care about (in this case, the Id property).
  3. Check that it exists, and that it's definitely an integer.
  4. Now fetch our entity and return it back.
  5. Finally, if any of our safety nets fail, just fall back to the default model binder and let that try and figure it out for us.

And the end result?

[Screenshot: the debugger showing the Error entity bound directly to the action method parameter]

Your new model binder can now be used on any action method throughout your ASP .NET MVC application.

Summary

You can significantly reduce code duplication and simplify your controller classes by creating your own model binder.  Simply create a new class that derives from DefaultModelBinder and add your logic to fetch your entity.  Be sure to add a line to your Global.asax.cs file so that MVC knows what to do with it, or you may get some confusing error messages.

Easy WCF Security and authorization of users

There are several steps involved in making your WCF service secure, and ensure that clients consuming your service are properly authenticated.  WCF uses BasicHttpBinding out-of-the-box, which generates SOAP envelopes (messages) for each request.  BasicHttpBinding works over standard HTTP, which is great for completely open general purpose services, but not good if you are sending sensitive data over the internet (as HTTP traffic can easily be intercepted).

This post discusses how to take a basic WCF service, which uses BasicHttpBinding, and upgrade it to use WsHttpBinding over SSL (with username/password validation).  If you want to become a better WCF developer, you may want to check out Learning WCF: A Hands-on Guide by Michele Leroux Bustamante.  This is a very thorough and insightful WCF book with detailed and practical samples and tips.

Here is the basic sequence of steps needed;

  • Generate a self-signed SSL certificate (you would use a real SSL certificate for live) and add this to the TrustedPeople certificate store.
  • Add a UserNamePasswordValidator.
  • Switch our BasicHttpBinding to WsHttpBinding.
  • Change our MEX (Metadata Exchange) endpoint to support SSL.
  • Specify how the client will authenticate, using the ServiceCredentials class.

You may notice that most of the changes are configuration changes.  You can make the same changes in code if you so desire, but I find the process easier and cleaner when done in XML.


BasicHttpBinding vs. WsHttpBinding

Before we kick things off, I found myself asking this question (like so many others before me): what is the difference between BasicHttpBinding and WsHttpBinding?

If you want a very thorough explanation, there is a very detailed explanation written by Shivprasad Koirala on CodeProject.com.  I highly recommend that you check this out.

The TL;DR version is simply this;

  • BasicHttpBinding supports SOAP v1.1 (WsHttpBinding supports SOAP v1.2)
  • BasicHttpBinding does not support Reliable messaging
  • BasicHttpBinding is insecure, WsHttpBinding supports WS-* specifications.
  • WsHttpBinding supports transporting messages with credentials, BasicHttpBinding supports only Windows/Basic/Certificate authentication.

The project structure

You can view and download the full source code for this project via GitHub, see the end of the post for more details.

We have a WCF Service application with a Service Contract as follows;

[ServiceContract]
public interface IPeopleService
{
    [OperationContract]
    Person[] GetPeople();
}

And the implementation of the Service Contract;

public class PeopleService : IPeopleService
{
    public Person[] GetPeople()
    {
        return new[]
                    {
                        new Person { Age = 45, FirstName = "John", LastName = "Smith" }, 
                        new Person { Age = 42, FirstName = "Jane", LastName = "Smith" }
                    };
    }
}

The model class (composite type, if you will) is as follows;

[DataContract]
public class Person
{
    [DataMember]
    public int Age { get; set; }

    [DataMember]
    public string FirstName { get; set; }

    [DataMember]
    public string LastName { get; set; }
}

The initial configuration is as follows;

<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true"/>
        <serviceDebug includeExceptionDetailInFaults="false"/>
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <protocolMapping>
    <add binding="basicHttpsBinding" scheme="https"/>
  </protocolMapping>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true"/>
</system.serviceModel>

The WCF service can easily be hosted in IIS, simply add a service reference to the WSDL definition file and you’re away. In the interest of completeness, here is the entire client code;

static void Main(string[] args)
{
    PeopleServiceClient client = new PeopleServiceClient();

    foreach (var person in client.GetPeople())
    {
        Console.WriteLine(person.FirstName);
    }

    Console.ReadLine();
}

Hosting in IIS

As briefly mentioned, you can (and probably always will) host your WCF service using Internet Information Services (IIS).

Generating an SSL certificate

Before doing anything, you need an SSL certificate.  Transport based authentication simply does not work if A) you are not on a secure channel or B) your SSL certificate is not trusted.  You don’t have to purchase an SSL certificate at this stage as a self-signed certificate will suffice (with 1 or 2 extra steps).  You will want to purchase a real SSL certificate when you move your service to the production environment.

You can generate a self-signed SSL certificate either 1 of 2 ways.  You can either do it the hard way, using Microsoft’s rather painful MakeCert.exe Certificate Creation Tool or you can download a free tool from PluralSight (of all places), which provides a super simple user interface and can even add the certificate to the certificate store for you.

Once you have downloaded the tool, run it as an Administrator;

[Screenshot: the self-signed certificate generation tool]

For the purposes of this tutorial, we will be creating a fake website called peoplesite.local.  We will add an entry into the hosts file for this and set it up in IIS.  It's very important that the X.500 distinguished name matches your domain name (or it will not work!).  You will also want to save the certificate as a PFX file so that it can be imported into IIS and used for the HTTPS binding.

Once done, open up IIS, click on the root level node, and double click on Server Certificates.  Click Import (on the right hand side) and point to the PFX file you saved on the desktop.  Click OK to import the certificate.

[Screenshot: importing the PFX certificate in IIS]

Next, create a new site in IIS called PeopleService.  Point it to an appropriate folder on your computer and edit the site bindings.  Add a new HTTPS binding and select the SSL certificate you just imported.

[Screenshot: adding the HTTPS binding in IIS]

Be sure to remove the standard HTTP binding after adding the HTTPS binding as you won't be needing it.

Update the hosts file (C:\Windows\System32\Drivers\etc\hosts) with an entry for peoplesite.local as follows;

127.0.0.1            peoplesite.local

Finally, flip back to Visual Studio and create a publish profile (which we will use later once we have finished the configuration).  The publish method screen should look something like this;

[Screenshot: the publish profile settings]

Configuration

OK, we have set up our environment; now it's time to get down to the fun stuff…configuration.  It's easier if you delete everything you have between the <system.serviceModel> elements and follow along with me.

Add the following skeleton code between the <system.serviceModel> opening and closing tags; we will fill in each element separately (update the Service Name to match that in your project);

<services>
  <service name="PeopleService.Service.PeopleService" behaviorConfiguration="ServiceBehaviour">
    <host>
    </host>
  </service>
</services>
<bindings>
</bindings>
<behaviors>
  <serviceBehaviors>
  </serviceBehaviors>
</behaviors>

Base Address

Start by adding a base address (directly inside the host element) so that we can use relative addresses;

<baseAddresses>
  <add baseAddress="https://peoplesite.local/" />
</baseAddresses>

Endpoints

Next, add two endpoints (one for the WsHttpBinding and one for MEX);

<endpoint address="" binding="wsHttpBinding" bindingConfiguration="BasicBinding" contract="PeopleService.Service.IPeopleService" name="BasicEndpoint" />
<endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" name="mex" />

Note that we are using mexHttpsBinding because our site does not support standard HTTP binding.  We don’t need to explicitly add a binding for the MEX endpoint as WCF will deal with this automatically for us.  Add a wsHttpBinding as follows;

<wsHttpBinding>
  <binding name="BasicBinding">
    <security mode="TransportWithMessageCredential">
      <message clientCredentialType="UserName" />
    </security>
  </binding>
</wsHttpBinding>

Bindings

This is where we specify what type of security we want to use.  In our case, we want to validate that the user is who they say they are, in the form of a username/password combination.  The TransportWithMessageCredential security mode requires that the username/password combination be passed in the message header.  A snoop using a HTTP proxy tool (such as Fiddler) reveals this;

[Screenshot: Fiddler showing the username/password credentials in the message header]

Service Behaviours

Finally we need to update our existing service behaviour with a serviceCredentials element as follows;

<behavior name="ServiceBehaviour">
  <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true" />
  <serviceDebug includeExceptionDetailInFaults="true" />
  <serviceCredentials>
    <userNameAuthentication userNamePasswordValidationMode="Custom" customUserNamePasswordValidatorType="PeopleService.Service.Authenticator, PeopleService.Service" />
    <serviceCertificate findValue="peoplesite.local" storeLocation="LocalMachine" storeName="TrustedPeople" x509FindType="FindBySubjectName" />
  </serviceCredentials>
</behavior>

The two elements of interest are userNameAuthentication and serviceCertificate.

User Name Authentication

This is where we tell WCF about our custom authentication class.  Let's go ahead and create this.  Add a new class to your project called Authenticator.cs and add the following code;

using System.IdentityModel.Selectors;
using System.ServiceModel;

public class Authenticator : UserNamePasswordValidator
{
    public override void Validate(string userName, string password)
    {
        if (userName != "peoplesite" || password != "password")
        {
            throw new FaultException("Invalid user and/or password");
        }
    }
}

Basically, you can add whatever code you want here to do your authentication/authorisation.  Notice that the Validate method returns void.  If you determine that the credentials supplied are invalid, you should throw a FaultException, which will be automatically handled for you by WCF.

You should ensure that the customUserNamePasswordValidatorType attribute in your App.config file is the fully qualified type of your authenticator type.

Service Certificate

This is key; if this is not quite right, nothing will work.  Basically you are telling WCF where to find your SSL certificate.  It's very important that the findValue is the same as your SSL certificate name, and that you point to the correct certificate store.  Typically you will install the certificate on the LocalMachine in the TrustedPeople certificate store.  I would certainly recommend sticking with the FindBySubjectName search mode, as this avoids issues when you have multiple SSL certificates with similar details.  You may need a little trial and error when starting out to get this right.  If you have been following this tutorial throughout, you should be OK with the default.

Supplying user credentials

We just need one final tweak to our test client to make all this work.  Update the test client code as follows;

PeopleServiceClient client = new PeopleServiceClient();
client.ClientCredentials.UserName.UserName = "peoplesite";
client.ClientCredentials.UserName.Password = "password";

We pass in the client credentials via the, you guessed it, ClientCredentials object on the service client.

If you run the client now, you should get some test data back from the service written out to the console window.  Notice that you will get an exception if the username/password is incorrect, or if the connection is not over SSL.
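If you would rather handle that failure gracefully than let the client crash, wrap the call in a try/catch.  A minimal sketch (depending on the binding and the failure you may see a plain FaultException instead);

try
{
    foreach (var person in client.GetPeople())
    {
        Console.WriteLine(person.FirstName);
    }
}
catch (MessageSecurityException ex)
{
    // Thrown when the Authenticator rejects the supplied credentials
    Console.WriteLine("Authentication failed: {0}", ex.Message);
}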

Troubleshooting

SecurityNegotiationException

As an aside, if you receive a SecurityNegotiationException please ensure that your self-signed certificate is correctly named to match your domain, and that you have imported it into the TrustedPeople certificate store.


A handy trick for diagnosing the problem is by updating the service reference, Visual Studio will advise you as to what is wrong with the certificate;

[Screenshot: the Visual Studio security alert describing the certificate problem]

Summary

With a few small configuration changes you can easily utilise WS-Security specifications/standards to ensure that your WCF service is secure.  You can generate a self-signed SSL certificate using a free tool from Pluralsight, and install it to your local certificate store and IIS.  Then you add a UserNamePasswordValidator to take care of your authentication.  Finally, you can troubleshoot and debug your service using Fiddler and Visual Studio.

The source code is available on GitHub

Create custom C# attributes

You have probably added various attributes to your ASP .NET MVC applications, desktop applications, or basically any software you have developed using C# recently.  Attributes allow you to provide meta data to the consuming code, but have you ever created and consumed your own attributes?  This very quick tutorial shows how to create your own attribute, apply it to your classes, and then read out its value.

Sample Project

To demonstrate this concept, I have created a Console application and added a few classes.  This is an arbitrary example just to show off how it's done.

The basic foundation of our project is as follows;

namespace Reflection
{
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Reflection;

    internal class Program
    {
        private static void Main()
        {
            //TODO
        }
    }

    public interface IMammal
    {
        bool IsWarmBlooded { get; }
    }

    public class BaseMammal
    {
        public bool IsWarmBlooded
        {
            get
            {
                return true;
            }
        }
    }

    public class Human : BaseMammal, IMammal
    {
    }

    public class Bat : BaseMammal, IMammal
    {
    }

    public class DuskyDolphin : BaseMammal, IMammal
    {
    }
}

We will create an attribute, and apply it to each of the Mammal classes, then write some code to display the value of the attribute to the user.  The attribute will hold the latin (scientific) name of the mammal.

Create/Apply an attribute

There are two ways to create an attribute in C#, the easy way or the manual way. If you want to make your life a whole lot easier, you should use the Attribute code snippet.

To use the Attribute snippet, simply start typing Attribute and press Tab Tab on the keyboard.

[Screenshot: the Attribute code snippet]

Call the attribute LatinNameAttribute, accept the other defaults, delete all the comments that come as part of the snippet, and add a public property called Name (type System.String).

Your attribute should be as follows;

[AttributeUsage(AttributeTargets.Class, Inherited = false, AllowMultiple = true)]
internal sealed class LatinNameAttribute : Attribute
{
    public LatinNameAttribute(string name)
    {
        Name = name;
    }

    public string Name { get; set; }
}

Go ahead and apply the attribute to a couple of classes, as follows;

[LatinName("Homo sapiens")]
public class Human : BaseMammal, IMammal
{
}

[LatinName("Chiroptera")]
public class Bat : BaseMammal, IMammal
{
}

public class DuskyDolphin : BaseMammal, IMammal
{
}

Now that we have written the attribute and applied it, we just have to write some code to extract the actual value.

Discovering attributes

It is common to create a helper class for working with attributes, or perhaps put the code on a low level base class. Ultimately it is up to you.

We only care at this stage about reading out all of the attributes that exist in our code base. To do this, we must discover all the types in our assembly that are decorated with the attribute in question (See A Step Further).

Create a new class, named LatinNameHelper and add a method named DisplayLatinNames.

public class LatinNameHelper
{
    public void DisplayLatinNames()
    {
        IEnumerable<string> latinNames = Assembly.GetEntryAssembly().GetTypes()
                                        .Where(t => t.GetCustomAttributes(typeof(LatinNameAttribute), true).Any())
                                        .Select(t => ((LatinNameAttribute)t.GetCustomAttributes(typeof(LatinNameAttribute), true).First()).Name);

        foreach (string latinName in latinNames)
        {
            Console.WriteLine(latinName);
        }
    }
}

Let's step through each line;

  1. Get all the types in the current assembly
  2. Filter the list to only include classes that are decorated with our LatinNameAttribute
  3. Read the first LatinNameAttribute found on the class (remember, via AllowMultiple we stated that the attribute can be applied to the same class more than once) and select the value of the Name property.
  4. Loop through each Latin name and write it out for the user to see

Note that I have only decorated Human and Bat with LatinNameAttribute, so you should only get two outputs when you run the program.

Screenshot of attribute names
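
Incidentally, if you are targeting .NET 4.5 or later, the generic extension methods in System.Reflection can make the discovery code a little terser.  A sketch of an equivalent approach, for comparison;

using System;
using System.Linq;
using System.Reflection;

public class LatinNameHelper45
{
    public void DisplayLatinNames()
    {
        //GetCustomAttribute<T> returns null when the attribute is absent,
        //so a simple null filter replaces the Any()/First() combination above
        var latinNames = Assembly.GetEntryAssembly().GetTypes()
                                 .Select(t => t.GetCustomAttribute<LatinNameAttribute>())
                                 .Where(a => a != null)
                                 .Select(a => a.Name);

        foreach (string latinName in latinNames)
        {
            Console.WriteLine(latinName);
        }
    }
}

One caveat; GetCustomAttribute<T> throws an AmbiguousMatchException if a class is decorated with more than one LatinNameAttribute, so the original approach is safer given that we set AllowMultiple to true.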

For the sake of completeness, here is the Main method;

internal class Program
{
    private static void Main()
    {
        LatinNameHelper helper = new LatinNameHelper();
        helper.DisplayLatinNames();

        Console.ReadLine();
    }
}

Congratulations… you have written an attribute, decorated your classes with it, and consumed the value.

A step further

A common practice is to use attributes to identify classes or methods that should be instantiated or run at runtime.  To do this, you can use Activator.CreateInstance to instantiate each discovered class, and then cast it to an interface to make it easier to work with.

Add a new method to LatinNameHelper called GetDecoratedMammals as follows;

public void GetDecoratedMammals()
{
    IEnumerable<IMammal> mammals = Assembly.GetEntryAssembly().GetTypes()
                                    .Where(t => t.GetCustomAttributes(typeof(LatinNameAttribute), true).Any())
                                    .Select(t => (IMammal)Activator.CreateInstance(t));

    foreach (var mammal in mammals)
    {
        Console.WriteLine(mammal.GetType().Name);
    }
}

Summary

C# features attributes, which can be used to add metadata to a class, method, property (basically anything).  You can create your own custom attributes by creating a class derived from Attribute and adding your own properties to it.  You can then find all the classes that are decorated with the attribute using reflection, and read out any metadata as needed.  You can also use the Activator to create an instance of the class that is decorated with your attribute and do anything you require.

Use T4 Templates to create enumerations from your database lookup tables

T4 (Text Template Transformation Toolkit) has been around for a while now… it's been a part of Visual Studio since the 2005 release.  In case you don't know, T4 can be used to automatically generate files based on templates.  You create a text template, which is then transformed (interpreted) by Visual Studio into a working file.  T4 can be used to create C# code files, and indeed it forms the basis of the current scaffolding templates you have probably used when creating ASP .NET web applications.  You're not limited to using T4 to create code classes, but this is one of its most common usages.

I’ve known of T4 templates for quite a while, and I’ve edited some of the existing T4 templates in the past (see Scott Hanselman’s post for details on how to do this). To be honest, I’ve only recently found a practical scenario where I would want to write my own T4 templates, mapping lookup tables to enumerations (C# enum).

What is a lookup table?  A lookup table consists of data that is indexed and referenced from other tables, allowing the data to be changed without affecting existing foreign key constraints.  It's common to add new data to these tables, and even make occasional changes, but lookup tables are unlikely to change much over time.

Database Tables

Take Adventure Works, for example; there are three lookup tables.  There is a consistent theme across each table: a primary key (the lookup Id) and a Name (a description of the lookup item).  We will use T4 templates to map these lookup tables into our code in the form of enumerations, so that we can avoid the dreaded "magic numbers"… in other words, we give our code some strong typing, which will significantly improve code maintainability over time.
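
To make the "magic numbers" point concrete, here is a contrived before-and-after sketch; the Address class is hypothetical, though the enum values mirror the generated output shown later;

using System;
using System.Collections.Generic;
using System.Linq;

public class Address
{
    public int AddressTypeId { get; set; }
}

public enum AddressType
{
    Billing = 2,
    Home = 3
}

internal class Program
{
    private static void Main()
    {
        var addresses = new List<Address>
        {
            new Address { AddressTypeId = 2 },
            new Address { AddressTypeId = 3 }
        };

        //Before: what does 3 mean? You have to go and check the lookup table
        var magic = addresses.Where(a => a.AddressTypeId == 3);

        //After: the generated enumeration makes the intent obvious
        var typed = addresses.Where(a => a.AddressTypeId == (int)AddressType.Home);

        Console.WriteLine(typed.Count());
    }
}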

Tooling

It has to be said, sorry Microsoft, but native tooling for T4 templates is still pretty poor, even 9 years after the initial release (as of 2014).  Out of the box, Visual Studio lets you run T4 templates, but not much else.  There is no native syntax highlighting, IntelliSense, or basically any of the usual Visual Studio goodness we are used to.  We're going to need some third party help.

There are two main players here;

T4 Editor from Devart

My preferred tool offers syntax highlighting, basic IntelliSense, GoTo (code navigation), outlining (collapsible code) and code indentation.  I also particularly love how the T4 template is executed every time I hit Save; this is a great time saver.

The download is very lean (0.63 – 1.79 MB depending on your version) and installs as a simple Visual Studio extension (.vsix file extension).  The extension is also completely free, which is fantastic.

Tangible T4 Editor from Tangible Engineering

This is a comprehensive tool with advanced IntelliSense, code navigation and validation.

Personally I don’t use this tool because I didn’t like the bulky download, or the full blown Windows installation, but it looks like a decent tool so I recommend you give it a shot.  There is a free version, but the full version will set you back an eye watering 99 €.

This is not supposed to be a comprehensive review about each product, just a mile-high snapshot.  I highly recommend that you test both tools and pick the one that works best for you.

Basic Set-up

Once you’ve picked your preferred tooling, its time to set started.  For the purposes of this tutorial we will create a simple console application, but the type of project doesn’t matter.

Add a new Text Template using the Add New Item dialog (shown below).  Call the file Mapper.tt;

Add New Item

A new Text Template will be created for you, with a few default assemblies and imports. Please change the output extension to .cs;

<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ output extension=".cs" #>

Before making any further T4 specific changes, let's add in some simple code and show how to transform the template.  Add the following code to Mapper.tt;

using System;
namespace Tutorial
{
    //Logic goes here
}

To transform the template, simply save (if using Devart T4 Editor) or right click on Mapper.tt and click Run Custom Tool.

Run Custom Tool

You should notice a file appear nested underneath Mapper.tt, called Mapper.cs.  Open the file and see the result of the template transformation.  Congratulations, you have written and run your first T4 template.

A step further

With the "Hello World" stuff out of the way, we're free to get to all the goodness that T4 offers.

Blocks

If you’re familiar with the ASP .NET Web Forms engine tags (<% %> <%= %>) or indeed the PHP equivalent (<? ?>) there really isn’t anything new for you to learn here.  Otherwise, all you need to know is there are special tags that give instructions to T4 that express how the proceeding text should be interpreted.

  • Expression Block <#= #>: a simple expression; exclude the semicolon at the end.
  • Statement Block <# #>: typically multi-line blocks of code.
  • Class Feature Block <#+ #>: complex structures, including methods, classes, properties etc.
  • Directive Block <#@ #>: used to specify template details, included files, imports etc.

Any text that is not contained within one of these tags is treated as plain text; text inside the tags is evaluated by the T4 engine as expressions/lines of code, using the standard C#/VB compilers.

A simple loop

T4 is designed to work with both C# and VB, so you can just choose the right block and start typing C# as normal, so a loop might look something like this;

using System;

namespace Tutorial
{
    <# for(int i = 0; i < 10; i++) { #>
        //This is comment <#= i #>
    <# } #>
}

I simply added a statement block for the for loop, and an expression block to output the value of i; the for loop itself doesn't produce any output, whereas I do want to output the value of i in this case.  The transformed output is as follows;

using System;

namespace Tutorial
{
//This is comment 0
//This is comment 1
//This is comment 2
//This is comment 3
//This is comment 4
//This is comment 5
//This is comment 6
//This is comment 7
//This is comment 8
//This is comment 9
}

Includes

Includes are basically references to other T4 templates.  Rather than simply having all our logic in a single file, we can break it up into several smaller files.  This will reduce duplication and make our code more readable going forward.

Add a new T4 template, call it SqlHelper.ttinclude.  The ttinclude file extension denotes, as I’m sure you have surmised, that this file is basically a child of the parent that references it.  We don’t need to double up our imports/assembly tags, so you can safely clear out anything that the template gives you by default and start fresh.

Write some SQL to find your lookup tables

To query our database, we’re just going to knock up some very simple ADO .NET code, with a little in-line T-SQL.  There is really nothing special here.  I highly recommend that you create a scratch application and get this all working before finally dropping it into your template.  (Doing this will save your sanity, as the T4 debugging tools are somewhat primitive!)

Use the Class Feature Block syntax we discussed earlier and drop in the following code;

<#+
public static IEnumerable<IGrouping<string, DatabaseTable>> GetTables()
{
    string connectionString = "Server=.;Database=AdventureWorks2012;Trusted_Connection=True;";

    List<DatabaseTable> tables = new List<DatabaseTable>();
    using (SqlConnection sqlConnection = new SqlConnection(connectionString))
    {
        SqlCommand command = new SqlCommand(@"
            DECLARE @tmpTable TABLE ( [RowNumber] int, [Schema] nvarchar(15), [TableName] nvarchar(20), [ColumnName] nvarchar(20), [Sql] nvarchar(200) )
            INSERT INTO @tmpTable ([RowNumber], [Schema], [TableName], [ColumnName], [Sql])
            SELECT ROW_NUMBER() OVER (ORDER BY KU.TABLE_SCHEMA) AS RowNumber, KU.TABLE_SCHEMA, KU.table_name, column_name,
                   'SELECT ''' + KU.TABLE_SCHEMA + ''', ''' + KU.TABLE_NAME + ''', Name, CAST(ROW_NUMBER() OVER (ORDER BY Name) AS INT) AS RowNumber FROM ' + KU.TABLE_SCHEMA + '.' + KU.TABLE_NAME AS [Sql]
            FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS TC
            INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS KU
                ON TC.CONSTRAINT_TYPE = 'PRIMARY KEY' AND TC.CONSTRAINT_NAME = KU.CONSTRAINT_NAME
                AND KU.TABLE_NAME IN (SELECT TABLE_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME LIKE '%Type' GROUP BY TABLE_NAME, TABLE_SCHEMA)
            DECLARE @counter INT = 1
            DECLARE @total INT = (SELECT COUNT([Schema]) FROM @tmpTable)
            DECLARE @sqlCommand varchar(1000) = ''
            DECLARE @sql varchar(200)
            WHILE (@counter <= @total)
            BEGIN
                SET @sql = (SELECT [Sql] FROM @tmpTable WHERE [RowNumber] = @counter)
                IF (@counter > 1)
                    SET @sqlCommand = CONCAT(@sqlCommand, ' UNION ')
                SET @sqlCommand = CONCAT(@sqlCommand, @sql)
                SET @counter = @counter + 1
            END
            EXEC (@sqlCommand)", sqlConnection);
        sqlConnection.Open();

        var reader = command.ExecuteReader();
        while (reader.Read())
        {
            DatabaseTable table = new DatabaseTable();
            table.Schema = reader.GetString(0);
            table.TableName = reader.GetString(1);
            table.Name = reader.GetString(2);
            table.Id = reader.GetInt32(3);

            tables.Add(table);
        }
    }

    return tables.GroupBy(t => t.TableName);
}

public class DatabaseTable
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string TableName { get; set; }
    public string Schema { get; set; }
}
#>

You may want to adjust this code a little to work with your set-up (change the connection string for example).

In a nutshell, the code connects to SQL Server, finds all the tables whose names end with Type, and returns every row from each of those tables as a single query.  This code is far from perfect, and I am far from a SQL hero, but it gets the job done so I am happy.  You may want to use your SQL expertise to tidy it up.
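
As suggested above, it pays to test GetTables() in a scratch console application before dropping it into the template.  A minimal harness might look like this, assuming the GetTables method and DatabaseTable class have been pasted into the same class as Main;

using System;

internal class Scratch
{
    private static void Main()
    {
        //GetTables() groups rows by table name, so each group
        //becomes one enumeration in the final output
        foreach (var table in GetTables())
        {
            Console.WriteLine(table.Key);

            foreach (var row in table)
            {
                Console.WriteLine("  {0} = {1}", row.Name, row.Id);
            }
        }

        Console.ReadLine();
    }
}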

Tying it all together

Almost there now, we just need to reference our include file, import a couple of assemblies, and update our loop in Mapper.tt to call the code we have just written;

To add a reference to the include file, add the following underneath the main directive block;

<#@ include file="SqlHelper.ttinclude" #>

Next, use the assembly directive to bring in a reference to System.Data;

<#@ assembly name="System.Data" #>

And finally add an import for System.Data.SqlClient;

<#@ import namespace="System.Data.SqlClient" #>

You should end up with the following;

<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ include file="SqlHelper.ttinclude" #>
<#@ assembly name="System.Core" #>
<#@ assembly name="System.Data" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ import namespace="System.Data.SqlClient" #>
<#@ output extension=".cs" #>

Now, and I promise this is the last step, update the loop that you created earlier to call out to the database using the methods we created in SqlHelper.ttinclude;

using System;

namespace AutoEnum
{
    <# foreach (var table in GetTables()) { #>
    /// <summary>
    /// The <#= table.Key #> enumeration
    /// </summary>
    public enum <#= table.Key #>
    {
        <# for(int i = 0; i < table.Count(); i++) { #>
        <# var item = table.ElementAt(i); #>
        <#= item.Name.Replace(" ","").Replace("/", "") #> = <#= item.Id #><# if(i < table.Count() - 1) { #>,
        <# } #><# } #>
    };

<#}#>}

The result

Assuming everything is working correctly, you should end up with the following enumerations in Mapper.cs;

using System;

namespace AutoEnum
{
    /// <summary>
    /// The AddressType enumeration
    /// </summary>
    public enum AddressType
    {
        Archive = 1,
        Billing = 2,
        Home = 3,
        MainOffice = 4,
        Primary = 5,
        Shipping = 6
    };

    /// <summary>
    /// The ContactType enumeration
    /// </summary>
    public enum ContactType
    {
        AccountingManager = 1,
        AssistantSalesAgent = 2,
        AssistantSalesRepresentative = 3,
        CoordinatorForeignMarkets = 4,
        ExportAdministrator = 5,
        InternationalMarketingManager = 6,
        MarketingAssistant = 7,
        MarketingManager = 8,
        MarketingRepresentative = 9,
        OrderAdministrator = 10,
        Owner = 11,
        OwnerMarketingAssistant = 12,
        ProductManager = 13,
        PurchasingAgent = 14,
        PurchasingManager = 15,
        RegionalAccountRepresentative = 16,
        SalesAgent = 17,
        SalesAssociate = 18,
        SalesManager = 19,
        SalesRepresentative = 20
    };

    /// <summary>
    /// The PhoneNumberType enumeration
    /// </summary>
    public enum PhoneNumberType
    {
        Cell = 1,
        Home = 2,
        Work = 3
    };

}

Summary

Visual Studio has native support for text templates, also known as T4.  Text templates can be used to automatically generate just about anything, but it is common to generate code files based on existing database structures.  Out of the box tooling is pretty poor, but there are several third party tools that you can use to enhance the experience.  Generally these templates can be a little clunky to write, but once you get them right they can be a real time saver.

Further Reading

  1. How to generate multiple outputs from a single template
  2. Just about every page on Oleg Sych’s blog
  3. Basic introduction about T4 Templates and how to customize them for ASP .NET MVC project
  4. T4 template generation, best kept secret in Visual Studio

Publish your website to an IIS staging environment using Microsoft Web Deploy

One of the simplest and quickest ways to publish your website to a staging environment is, at least in my opinion, using Microsoft Web Deploy.  This post is about how you approach this, a future article will discuss why you probably shouldn’t do this.

Key points;

  1. The remote server should be running Internet Information Services (IIS) 7.0 or later.
  2. You can use the Microsoft Web Platform Installer to install all the extra bits you need to make this work.
  3. You need to set appropriate permissions to allow remote publishing.

Windows Server 2012 R2

On my local machine, for testing purposes, I have a Windows Server 2012 R2 virtual machine which is bare bones configured.

The first thing you need to do is install IIS.  You can do this using the Server Manager;

Open the Server Manager > click Add roles and features > select Role-based or feature-based installation > select the target server > and finally, select Web Server (IIS) and Windows Deployment Services.  Feel free to drill into each item and ensure you have the following selected (as well as whatever the defaults are);

  • Basic Authentication (very important)
  • ASP .NET 3.5 / 4.5
  • .NET Extensibility 3.5 / 4.5
  • IIS Management Console and Management Service (very important)

Once installed, you should be able to open IIS Manager by opening the Start menu, typing inetmgr, and pressing Enter.

When IIS Manager opens (referred to herein as IIS), you should be prompted to download Microsoft Web Platform installer.  Ensure you do this.  Use the Web Platform installer to ensure you have all the following installed;

  • IIS 7 Recommended Configuration
  • IIS Management Service (should already be installed)
  • IIS Basic Authentication (should already be installed)
  • Web Deployment Tool (The current version is 3.5 at the time of writing, I also like to install Web Deploy for Hosting Servers as well)
  • Current version of the Microsoft .NET Framework
  • ASP .NET MVC 3 (as we will be publishing an ASP .NET MVC website)

I like to do a restart at this point, just to ensure that everything is tidied up (although I don't think it's 100% necessary, just ensure you restart IIS at the very least).

Enabling web deployment

The next step is to “switch on” the web management service.  This will allow remote users to connect up and deploy the website.

For the sake of simplicity, we will use basic authentication.  There are other means of authenticating users, but that is out of the scope of this tutorial.

In IIS, select the server level node and then select the Authentication module (under the IIS grouping).

Simply right click on Basic Authentication, and then click Enable.

Next we need to configure the web management service to accept incoming connections.  Again, select the server level node, and select Management Service.

If the management service is already running, you need to stop it before continuing.  To do this, go to the Start Menu and type services.msc.  This will open the Services manager.  Search for Web Management Service, right click, and click Stop.  I ran through this process twice from scratch; the first time the service wasn't running, and the second time it was.  I'm not sure what triggers it to run.

Tick Enable Remote Connections and feel free to accept the default settings for now.  You could always revisit this later.  Click Start on the right hand side to start the service.

Configure your website

I’m sure you’ve done this many times before, so I will not regurgitate the details here.

Add a new website, give it a host name if you like, and specify the physical path (remember this).  Please ensure that you set the application pool to .NET CLR Version 4.0.30319 to avoid errors running the website further down the line.

Set the appropriate permissions for IIS_IUSRS

IIS requires read permissions to access the files that make up your website.  The simplest way is to head over to the physical folder for your website (that path you’re remembering from earlier), right click the folder, click Properties > Security > Edit > Add.  Type IIS_IUSRS then click Check Names.  Click OK, then OK to close the properties windows.

Create a Web Deploy Publish Profile

Finally, you can create a web deploy publish profile (simply an XML file with a few basic settings)  which you can import into Visual Studio to save you the hassle of having to type anything.

Head back over to IIS, right click on your website, click Deploy > Configure Web Deploy Publishing.

You can (and definitely should) create a restricted user account and grant permission to publish to that account (either an IIS account or a Windows authentication based account).

Once you have selected a user, click Setup.  A message should appear in the Results text area;

Publish enabled for 'WIN-DLICU73MRD0\Jon'
Granted 'WIN-DLICU73MRD0\Jon' full control on 'C:\inetpub\wwwroot\testwebsite'
Successfully created settings file 'C:\Users\Jon\Desktop\WIN-DLICU73MRD0_Jon_TestWebsite.PublishSettings'

Success! This is your publish profile that you can import into Visual Studio.

Import your publish profile into Visual Studio

To import your publish profile, open your web solution and from the Build menu select Publish [your website].

On the Profile tab, click Import… and browse to the publish profile you just created.  Once imported, switch to the Connection tab and type the password for the account you selected on the Configure Web Deploy Publishing dialog earlier.

If you’re feeling lucky, hit the Validate Connection button.  All should validate properly.  If not, please refer to this little hidden gem from the IIS team to help troubleshoot any error messages you might be receiving.

Browse to your website using the host name you specified earlier (don't forget to update your hosts file if you are "just testing") and congratulations; after the initial, feels-like-a-lifetime compilation period, all should be good.
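
For reference, a "just testing" hosts file entry (C:\Windows\System32\drivers\etc\hosts) might look like the following; the IP address and host name are assumptions for illustration;

192.168.1.50    testwebsite.local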

Next time you’re ready to publish, simply open the Publish dialog in Visual Studio, go straight to the Preview tab and hit Publish.  No more manual deployment for you my friend!

Summary

The Microsoft Web Deployment tool is a quick and convenient way to publish your website to a staging area for further testing.  You use the web deployment tool to generate a publish profile, which can be imported into Visual Studio (saving you the hassle of having to type all that connection info); Visual Studio then calls the service and passes it a package, which is automatically deployed to the appropriate folders.
