
Create a RESTful API with authentication using Web API and Jwt

Web API is a feature of the ASP .NET framework that dramatically simplifies building RESTful (REST-like) HTTP services that are cross-platform and device- and browser-agnostic. With Web API, you can create endpoints that can be accessed using a combination of descriptive URLs and HTTP verbs. Those endpoints can serve data back to the caller as either JSON or XML that is standards compliant. With JSON Web Tokens (JWT), which are typically stateless, you can add an authentication and authorization layer, enabling you to restrict access to some or all of your API.

The purpose of this tutorial is to develop the beginnings of a Book Store API, using Microsoft Web API with C#, which authenticates and authorizes each request, exposes OAuth2 endpoints, and returns data about books and reviews for consumption by the caller. The caller in this case will be Postman, a useful utility for querying APIs.

In a follow-up to this post we will write a front end to interact with the API directly.

Set up

Open Visual Studio (I will be using Visual Studio 2015 Community edition, you can use whatever version you like) and create a new ASP .NET Web Application using the Empty template, ensuring you select the Web API option;

Where you save the project is up to you, but I will create my projects under *C:\Source*. For simplicity you might want to do the same.

New Project

Next, packages.

Packages

Open up the Package Manager Console. Some packages should have already been added to enable Web API itself. Please install the following additional packages;

install-package EntityFramework
install-package Microsoft.AspNet.Cors
install-package Microsoft.AspNet.Identity.Core
install-package Microsoft.AspNet.Identity.EntityFramework
install-package Microsoft.AspNet.Identity.Owin
install-package Microsoft.AspNet.WebApi.Cors
install-package Microsoft.AspNet.WebApi.Owin
install-package Microsoft.Owin.Cors
install-package Microsoft.Owin.Security.Jwt
install-package Microsoft.Owin.Host.SystemWeb
install-package System.IdentityModel.Tokens.Jwt
install-package Thinktecture.IdentityModel.Core

These are the minimum packages required to provide data persistence, enable CORS (Cross-Origin Resource Sharing), and enable generating and authenticating/authorizing JWTs.

Entity Framework

We will use Entity Framework for data persistence, using the Code-First approach. Entity Framework will take care of generating a database, adding tables, stored procedures and so on. As an added benefit, Entity Framework will also upgrade the schema automatically as we make changes. Entity Framework is perfect for rapid prototyping, which is what we are in essence doing here.

Create a new IdentityDbContext called BooksContext, which will give us Users, Roles and Claims in our database. I like to add this under a folder called Core, for organization. We will add our entities to this later.

namespace BooksAPI.Core
{
    using Microsoft.AspNet.Identity.EntityFramework;

    public class BooksContext : IdentityDbContext
    {

    }
}

Claims are used to describe useful information that the user has associated with them. We will use claims to tell the client which roles the user has. The benefit of roles is that we can prevent access to certain methods/controllers to a specific group of users, and permit access to others.

Add a DbMigrationsConfiguration class and allow automatic migrations, but prevent automatic data loss;

namespace BooksAPI.Core
{
    using System.Data.Entity.Migrations;

    public class Configuration : DbMigrationsConfiguration<BooksContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = true;
            AutomaticMigrationDataLossAllowed = false;
        }
    }
}

Whilst losing data at this stage is not important (we will use a seed method later to populate our database), I like to turn this off now so I do not forget later.

Now tell Entity Framework how to update the database schema using an initializer, as follows;

namespace BooksAPI.Core
{
    using System.Data.Entity;

    public class Initializer : MigrateDatabaseToLatestVersion<BooksContext, Configuration>
    {
    }
}

This tells Entity Framework to go ahead and upgrade the database to the latest version automatically for us.

Finally, tell your application about the initializer by updating the Global.asax.cs file as follows;

namespace BooksAPI
{
    using System.Data.Entity;
    using System.Web;
    using System.Web.Http;
    using Core;

    public class WebApiApplication : HttpApplication
    {
        protected void Application_Start()
        {
            GlobalConfiguration.Configure(WebApiConfig.Register);
            Database.SetInitializer(new Initializer());
        }
    }
}

Data Provider

By default, Entity Framework will configure itself to use LocalDB. If this is not desirable, say you want to use SQL Express instead, you need to make the following adjustments;

Open the Web.config file and delete the following code;

<entityFramework>
    <defaultConnectionFactory type="System.Data.Entity.Infrastructure.LocalDbConnectionFactory, EntityFramework">
        <parameters>
            <parameter value="mssqllocaldb" />
        </parameters>
    </defaultConnectionFactory>
    <providers>
        <provider invariantName="System.Data.SqlClient" type="System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer" />
    </providers>
</entityFramework>

And add the connection string;

<connectionStrings>
    <add name="BooksContext" providerName="System.Data.SqlClient" connectionString="Server=.;Database=Books;Trusted_Connection=True;" />
</connectionStrings>

Now we’re using SQL Server directly (whatever flavour that might be) rather than LocalDB.

JSON

Whilst we’re here, we might as well configure our application to return camel-case JSON (thisIsCamelCase), instead of the default pascal-case (ThisIsPascalCase).

Add the following code to your Application_Start method (you will need using directives for Newtonsoft.Json and Newtonsoft.Json.Serialization);

var formatters = GlobalConfiguration.Configuration.Formatters;
var jsonFormatter = formatters.JsonFormatter;
var settings = jsonFormatter.SerializerSettings;
settings.Formatting = Formatting.Indented;
settings.ContractResolver = new CamelCasePropertyNamesContractResolver();

There is nothing worse than pascal-case JavaScript.
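For illustration, with these settings in place a book from our API will be serialized along these lines (hypothetical sample data);

```json
{
  "id": 1,
  "title": "Design Patterns",
  "description": "Elements of Reusable Object-Oriented Software",
  "price": 34.99,
  "imageUrl": "http://example.com/design-patterns.jpg",
  "reviews": []
}
```

Note that every property name now starts with a lower-case letter, matching common JavaScript conventions.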

CORS (Cross-Origin Resource Sharing)

Cross-Origin Resource Sharing, or CORS for short, is when a client requests access to a resource (an image, or say, data from an endpoint) from an origin (domain) that is different from the domain where the resource itself originates.

This step is completely optional. We are adding in CORS support here because when we come to write our client app in subsequent posts that follow on from this one, we will likely use a separate HTTP server (for testing and debugging purposes). When released to production, these two apps would use the same host (Internet Information Services (IIS)).

To enable CORS, open WebApiConfig.cs and add the following code to the beginning of the Register method;

var cors = new EnableCorsAttribute("*", "*", "*");
config.EnableCors(cors);
config.MessageHandlers.Add(new PreflightRequestsHandler());

And add the following class (in the same file if you prefer for quick reference);

public class PreflightRequestsHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Headers.Contains("Origin") && request.Method.Method == "OPTIONS")
        {
            var response = new HttpResponseMessage {StatusCode = HttpStatusCode.OK};
            response.Headers.Add("Access-Control-Allow-Origin", "*");
            response.Headers.Add("Access-Control-Allow-Headers", "Origin, Content-Type, Accept, Authorization");
            response.Headers.Add("Access-Control-Allow-Methods", "*");
            var tsc = new TaskCompletionSource<HttpResponseMessage>();
            tsc.SetResult(response);
            return tsc.Task;
        }
        return base.SendAsync(request, cancellationToken);
    }
}

In the CORS workflow, before sending a DELETE, PUT or POST request, the client sends an OPTIONS request (the preflight) to ask the server whether the cross-origin request is permitted. If the request origin and the server origin are not the same, the server must respond with various access-control headers that describe which origins have access. To enable access from all origins, we just respond with an origin header (Access-Control-Allow-Origin) set to an asterisk.

The Access-Control-Allow-Headers header describes which headers the API can accept/is expecting to receive. The Access-Control-Allow-Methods header describes which HTTP verbs are supported/permitted.
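To make this concrete, a preflight exchange handled by the code above might look like this (the origin and path are illustrative);

```http
OPTIONS /api/reviews HTTP/1.1
Host: localhost:62996
Origin: http://localhost:3000
Access-Control-Request-Method: POST

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Origin, Content-Type, Accept, Authorization
Access-Control-Allow-Methods: *
```

Once the browser sees this response, it goes ahead and sends the real POST request.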

See Mozilla Developer Network (MDN) for a more comprehensive write-up on Cross-Origin Resource Sharing (CORS).

Data Model

With Entity Framework configured, let's create our data structure. The API will expose books, and books will have reviews.

Under the Models folder add a new class called Book. Add the following code;

namespace BooksAPI.Models
{
    using System.Collections.Generic;

    public class Book
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
        public string ImageUrl { get; set; }

        public virtual List<Review> Reviews { get; set; }
    }
}

And add Review, as shown;

namespace BooksAPI.Models
{
    public class Review
    {
        public int Id { get; set; }    
        public string Description { get; set; }    
        public int Rating { get; set; }
        public int BookId { get; set; }
    }
}

Add these entities to the IdentityDbContext we created earlier;

public class BooksContext : IdentityDbContext
{
    public DbSet<Book> Books { get; set; }
    public DbSet<Review> Reviews { get; set; }
}

Be sure to add in the necessary using directives.

A couple of helpful abstractions

In order to keep our code clean, and to ensure that it works correctly, there are a couple of classes we need to abstract.

Under the Core folder, add the following classes;

public class BookUserManager : UserManager<IdentityUser>
{
    public BookUserManager() : base(new BookUserStore())
    {
    }
}

We will make heavy use of the UserManager<T> in our project, and we don’t want to have to initialise it with a UserStore<T> every time we want to make use of it. Whilst adding this is not strictly necessary, it does go a long way to helping keep the code clean.

Now add another class for the UserStore, as shown;

public class BookUserStore : UserStore<IdentityUser>
{
    public BookUserStore() : base(new BooksContext())
    {
    }
}

This code is really important. If we fail to tell the UserStore which DbContext to use, it falls back to a default; the parameterless UserStore constructor creates its own IdentityDbContext, which looks for a connection string named DefaultConnection.

A network-related or instance-specific error occurred while establishing a connection to SQL Server

That default does not correspond to our application's DbContext, so you end up with the super-helpful error message shown above. This code will help prevent you from tearing your hair out later wondering what went wrong.

API Controller

We need to expose some data to our client (when we write it). Let's take advantage of Entity Framework's Seed method. The Seed method will pre-populate some books and reviews automatically for us.

Instead of dropping the code in directly for this class (it is very long), please refer to the Configuration.cs file on GitHub.

This code gives us a little bit of starting data to play with, instead of having to add a bunch of data manually each time we make changes to our schema that require the database to be re-initialized (not really in our case as we have an extremely simple data model, but in larger applications this is very useful).

Books Endpoint

Next, we want to create the RESTful endpoint that will retrieve all the books data. Create a new Web API controller called BooksController and add the following;

public class BooksController : ApiController
{
    [HttpGet]
    public async Task<IHttpActionResult> Get()
    {
        using (var context = new BooksContext())
        {
            return Ok(await context.Books.Include(x => x.Reviews).ToListAsync());
        }
    }
}

With this code we are fully exploiting recent changes to the .NET framework: the introduction of async and await. Writing asynchronous code in this manner allows the thread to be released whilst data (Books and Reviews) is being retrieved from the database and converted to objects to be consumed by our code. When the asynchronous operation is complete, the code picks up where it left off and continues executing (by which we mean the hydrated data objects are passed to the underlying framework, converted to JSON/XML, and returned to the client).

Reviews Endpoint

We’re also going to enable authorized users to post reviews and delete reviews. For this we will need a ReviewsController with the relevant Post and Delete methods. Create a new Web API controller called ReviewsController and add the following code;

public class ReviewsController : ApiController
{
    [HttpPost]
    public async Task<IHttpActionResult> Post([FromBody] ReviewViewModel review)
    {
        using (var context = new BooksContext())
        {
            var book = await context.Books.FirstOrDefaultAsync(b => b.Id == review.BookId);
            if (book == null)
            {
                return NotFound();
            }

            var newReview = context.Reviews.Add(new Review
            {
                BookId = book.Id,
                Description = review.Description,
                Rating = review.Rating
            });

            await context.SaveChangesAsync();
            return Ok(new ReviewViewModel(newReview));
        }
    }

    [HttpDelete]
    public async Task<IHttpActionResult> Delete(int id)
    {
        using (var context = new BooksContext())
        {
            var review = await context.Reviews.FirstOrDefaultAsync(r => r.Id == id);
            if (review == null)
            {
                return NotFound();
            }

            context.Reviews.Remove(review);
            await context.SaveChangesAsync();
        }
        return Ok();
    }
}

There are a couple of good practices in play here that we need to highlight.

The first method, Post allows the user to add a new review. Notice the parameter for the method;

[FromBody] ReviewViewModel review

The [FromBody] attribute tells Web API to look for the data for the method argument in the body of the HTTP message that we received from the client, and not in the URL. The second parameter is a view model that wraps around the Review entity itself. Add a new folder to your project called ViewModels, add a new class called ReviewViewModel and add the following code;

public class ReviewViewModel
{
    public ReviewViewModel()
    {
    }

    public ReviewViewModel(Review review)
    {
        if (review == null)
        {
            return;
        }

        BookId = review.BookId;
        Rating = review.Rating;
        Description = review.Description;
    }

    public int BookId { get; set; }
    public int Rating { get; set; }
    public string Description { get; set; }

    public Review ToReview()
    {
        return new Review
        {
            BookId = BookId,
            Description = Description,
            Rating = Rating
        };
    }
}

We are just copying all the properties from the Review entity to the ReviewViewModel entity and vice-versa. So why bother? The first reason is to help mitigate a well-known under/over-posting vulnerability (good write-up about it here) inherent in most web services. It also helps prevent unwanted information being sent to the client. With this approach we have to explicitly expose data to the client by adding properties to the view model.

For this scenario, this approach is probably a bit overkill, but I highly recommend it; keeping your application secure is important, as is the need to prevent leaking potentially sensitive information. A tool I’ve used in the past to simplify this mapping code is AutoMapper. I highly recommend checking it out.

Important note: In order to keep our API RESTful, we return the newly created entity (or its view model representation) back to the client for consumption, removing the need to re-fetch the entire data set.

The Delete method is trivial. We accept the Id of the review we want to delete as a parameter, then fetch the entity and finally remove it from the collection. Calling SaveChangesAsync will make the change permanent.

Meaningful response codes

We want to return useful information back to the client as much as possible. Notice that the Post method returns NotFound(), which translates to a 404 HTTP status code, if the corresponding Book for the given review cannot be found. This is useful for client side error handling. Returning Ok() will return 200 (HTTP ‘Ok’ status code), which informs the client that the operation was successful.

Authentication and Authorization Using OAuth and JSON Web Tokens (JWT)

My preferred approach for dealing with authentication and authorization is to use JSON Web Tokens (JWT). We will open up an OAuth endpoint to client credentials and return a token which describes the users claims. For each of the users roles we will add a claim (which could be used to control which views the user has access to on the client side).

We use OWIN to add our OAuth configuration into the pipeline. Add a new class to the project called Startup.cs and add the following code;

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof (BooksAPI.Startup))]

namespace BooksAPI
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureOAuth(app);
        }
    }
}

Notice that Startup is a partial class. I’ve done that because I want to keep this class as simple as possible; as the application becomes more complicated and we add more and more middleware, this class will grow significantly. You could use a static helper class here, but the MSDN documentation seems to lean towards using partial classes specifically.

Under the App_Start folder add a new class called Startup.OAuth.cs and add the following code;

using System;
using System.Configuration;
using BooksAPI.Core;
using BooksAPI.Identity;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.DataHandler.Encoder;
using Microsoft.Owin.Security.Jwt;
using Microsoft.Owin.Security.OAuth;
using Owin;

namespace BooksAPI
{
    public partial class Startup
    {
        public void ConfigureOAuth(IAppBuilder app)
        {            
        }
    }
}

Note. When I wrote this code originally I encountered a quirk. After spending hours pulling out my hair trying to figure out why something was not working, I eventually discovered that the ordering of the code in this class is very important. If you don’t copy the code in the exact same order, you may encounter unexpected behaviour. Please add the code in the same order as described below.

OAuth secrets

First, add the following code;

var issuer = ConfigurationManager.AppSettings["issuer"];
var secret = TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["secret"]);
  • Issuer – a unique identifier for the entity that issued the token (not to be confused with Entity Framework’s entities)
  • Secret – a secret key used to secure the token and prevent tampering

I keep these values in the Web configuration file (Web.config). To be precise, I split these values out into their own configuration file called keys.config and add a reference to that file in the main Web.config. I do this so that I can exclude just the keys from source control by adding a line to my .gitignore file.

To do this, open Web.config and change the <appSettings> section as follows;

<appSettings file="keys.config">
</appSettings>

Now add a new file to your project called keys.config and add the following code;

<appSettings>
  <add key="issuer" value="http://localhost/"/>
  <add key="secret" value="IxrAjDoa2FqElO7IhrSrUJELhUckePEPVpaePlS_Xaw"/>
</appSettings>
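And, if you want to keep the keys out of source control as described above, the exclusion is a single line in your .gitignore file;

```
keys.config
```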

Adding objects to the OWIN context

We can make use of OWIN to manage instances of objects for us, on a per request basis. The pattern is comparable to IoC, in that you tell the “container” how to create an instance of a specific type of object, then request the instance using a Get<T> method.

Add the following code;

app.CreatePerOwinContext(() => new BooksContext());
app.CreatePerOwinContext(() => new BookUserManager());

The first time we request an instance of BooksContext for example, the lambda expression will execute and a new BooksContext will be created and returned to us. Subsequent requests will return the same instance.

Important note: The life-cycle of these object instances is per-request. As soon as the request is complete, the instances are cleaned up.
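Later, anywhere we have access to the OWIN context, we can request these instances back. As a sketch (the GetOwinContext extension method comes from the Microsoft.AspNet.WebApi.Owin package, and GetUserManager from Microsoft.AspNet.Identity.Owin);

```csharp
// Inside a Web API controller action;
var owinContext = Request.GetOwinContext();

// Returns the instance created by the lambda we registered earlier,
// creating it now if this is the first request for it.
var booksContext = owinContext.Get<BooksContext>();
var userManager = owinContext.GetUserManager<BookUserManager>();
```

We will see this pattern again shortly, inside our custom OAuth provider.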

Enabling Bearer Authentication/Authorization

To enable bearer authentication, add the following code;

app.UseJwtBearerAuthentication(new JwtBearerAuthenticationOptions
{
    AuthenticationMode = AuthenticationMode.Active,
    AllowedAudiences = new[] { "Any" },
    IssuerSecurityTokenProviders = new IIssuerSecurityTokenProvider[]
    {
        new SymmetricKeyIssuerSecurityTokenProvider(issuer, secret)
    }
});

The key takeaways of this code;

  • State who the audience is (we’re specifying “Any”, as this is a required field but we’re not fully implementing audience validation).
  • State who is responsible for generating the tokens. Here we’re using SymmetricKeyIssuerSecurityTokenProvider and passing it our secret key to prevent tampering. We could use X509CertificateSecurityTokenProvider, which uses an X509 certificate to secure the token (but I’ve found these to be overly complex in the past and I prefer a simpler implementation).

This code adds JWT bearer authentication to the OWIN pipeline.

Enabling OAuth

We need to expose an OAuth endpoint so that the client can request a token (by passing a user name and password).

Add the following code;

app.UseOAuthAuthorizationServer(new OAuthAuthorizationServerOptions
{
    AllowInsecureHttp = true,
    TokenEndpointPath = new PathString("/oauth2/token"),
    AccessTokenExpireTimeSpan = TimeSpan.FromMinutes(30),
    Provider = new CustomOAuthProvider(),
    AccessTokenFormat = new CustomJwtFormat(issuer)
});

Some important notes with this code;

  • We’re going to allow insecure HTTP requests whilst we are in development mode. You might want to disable this using an #if DEBUG directive so that you don’t allow insecure connections in production.
  • Open an endpoint under /oauth2/token that accepts post requests.
  • When generating a token, make it expire after 30 minutes (1800 seconds).
  • We will use our own provider, CustomOAuthProvider, and formatter, CustomJwtFormat, to take care of authentication and building the actual token itself.

We need to write the provider and formatter next.

Formatting the JWT

Create a new class under the Identity folder called CustomJwtFormat.cs. Add the following code;

namespace BooksAPI.Identity
{
    using System;
    using System.Configuration;
    using System.IdentityModel.Tokens;
    using Microsoft.Owin.Security;
    using Microsoft.Owin.Security.DataHandler.Encoder;
    using Thinktecture.IdentityModel.Tokens;

    public class CustomJwtFormat : ISecureDataFormat<AuthenticationTicket>
    {
        private static readonly byte[] _secret = TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["secret"]);
        private readonly string _issuer;

        public CustomJwtFormat(string issuer)
        {
            _issuer = issuer;
        }

        public string Protect(AuthenticationTicket data)
        {
            if (data == null)
            {
                throw new ArgumentNullException(nameof(data));
            }

            var signingKey = new HmacSigningCredentials(_secret);
            var issued = data.Properties.IssuedUtc;
            var expires = data.Properties.ExpiresUtc;

            return new JwtSecurityTokenHandler().WriteToken(new JwtSecurityToken(_issuer, null, data.Identity.Claims, issued.Value.UtcDateTime, expires.Value.UtcDateTime, signingKey));
        }

        public AuthenticationTicket Unprotect(string protectedText)
        {
            throw new NotImplementedException();
        }
    }
}

This is a complicated-looking class, but it’s pretty straightforward. We are just fetching all the information needed to generate the token (the claims, issued date, expiration date and signing key), then generating the token and returning it to the caller.

Please note: Some of the code we are writing today was influenced by JSON Web Token in ASP.NET Web API 2 using OWIN by Taiseer Joudeh. I highly recommend checking it out.

The authentication bit

We’re almost there, honest! Now we want to authenticate the user. Create a new class under the Identity folder called CustomOAuthProvider.cs and add the following code;

using System.Linq;
using System.Security.Claims;
using System.Security.Principal;
using System.Threading;
using System.Threading.Tasks;
using System.Web;
using BooksAPI.Core;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.AspNet.Identity.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.OAuth;

namespace BooksAPI.Identity
{
    public class CustomOAuthProvider : OAuthAuthorizationServerProvider
    {
        public override Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {
            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] {"*"});

            var user = context.OwinContext.Get<BooksContext>().Users.FirstOrDefault(u => u.UserName == context.UserName);

            // Guard against an unknown user name before checking the password,
            // otherwise CheckPassword would throw a NullReferenceException
            if (user == null || !context.OwinContext.Get<BookUserManager>().CheckPassword(user, context.Password))
            {
                context.SetError("invalid_grant", "The user name or password is incorrect");
                context.Rejected();
                return Task.FromResult<object>(null);
            }

            var ticket = new AuthenticationTicket(SetClaimsIdentity(context, user), new AuthenticationProperties());
            context.Validated(ticket);

            return Task.FromResult<object>(null);
        }

        public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            context.Validated();
            return Task.FromResult<object>(null);
        }

        private static ClaimsIdentity SetClaimsIdentity(OAuthGrantResourceOwnerCredentialsContext context, IdentityUser user)
        {
            var identity = new ClaimsIdentity("JWT");
            identity.AddClaim(new Claim(ClaimTypes.Name, context.UserName));
            identity.AddClaim(new Claim("sub", context.UserName));

            var userRoles = context.OwinContext.Get<BookUserManager>().GetRoles(user.Id);
            foreach (var role in userRoles)
            {
                identity.AddClaim(new Claim(ClaimTypes.Role, role));
            }

            return identity;
        }
    }
}

As we’re not checking the audience, when ValidateClientAuthentication is called we can simply validate the request. When the request has a grant_type of password, which all our requests to the OAuth endpoint will have, the GrantResourceOwnerCredentials method above is executed. This method authenticates the user and creates the claims to be added to the JWT.
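For reference, if you paste the resulting token into a decoder (jwt.io, for example), the payload contains claims along these lines (the timestamps are illustrative);

```json
{
  "unique_name": "administrator",
  "sub": "administrator",
  "role": "Administrator",
  "iss": "http://localhost/",
  "exp": 1458042828,
  "nbf": 1458041028
}
```

The role claim is what the client can use later to decide which views to show the user.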

Testing

There are two tools you can use for testing this.

Technique 1 – Using the browser

Open up a web browser, and navigate to the books URL.

Testing with the web browser

You will see the list of books, displayed as XML. This is because Web API can serve up data either as XML or as JSON. Personally, I do not like XML; JSON is my choice these days.

Technique 2 (Preferred) – Using Postman

To make Web API respond with JSON we need to send along an Accept header. The best tool to enable us to do this (for Google Chrome) is Postman. Download it and give it a go if you like.

Drop the same URL into the Enter request URL field, and click Send. Notice the response is in JSON;

Postman response in JSON

This worked because Postman automatically adds the Accept header to each request. You can see this by clicking on the Headers tab. If the header isn’t there and you’re still getting XML back, just add the header as shown in the screenshot and re-send the request.

To test the delete method, change the HTTP verb to DELETE and add the review’s Id to the end of the URL. For example; http://localhost:62996/api/reviews/9

Putting it all together

First, we need to restrict access to our endpoints.

Add a new file to the App_Start folder, called FilterConfig.cs and add the following code;

public class FilterConfig
{
    public static void Configure(HttpConfiguration config)
    {
        config.Filters.Add(new AuthorizeAttribute());
    }
}

And call the code from Global.asax.cs as follows;

GlobalConfiguration.Configure(FilterConfig.Configure);

Adding this code will restrict access to all endpoints (except the OAuth endpoint) to requests that have been authenticated (requests that send along a valid JWT).

You have much more fine-grained control here, if required. Instead of adding the above code, you could add the AuthorizeAttribute to specific controllers or even specific methods. The added benefit is that you can also restrict access to specific users or specific roles;

Example code;

[Authorize(Roles = "Admin")]

The roles value (“Admin”) can be a comma-separated list. For us, restricting access to all endpoints will suffice.
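As an illustrative sketch only (this hypothetical controller is not part of our project), mixing the attributes at controller and method level looks like this;

```csharp
[Authorize] // every action requires an authenticated request...
public class SamplesController : ApiController
{
    [AllowAnonymous] // ...except this one, which anyone can call
    [HttpGet]
    public IHttpActionResult Get()
    {
        return Ok();
    }

    [Authorize(Roles = "Admin")] // this one additionally requires the Admin role
    [HttpDelete]
    public IHttpActionResult Delete(int id)
    {
        return Ok();
    }
}
```

The most specific attribute wins, so method-level attributes refine the controller-level default.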

To test that this code is working correctly, simply make a GET request to the books endpoint;

GET http://localhost:62996/api/books

You should get the following response;

{
  "message": "Authorization has been denied for this request."
}

Great, it’s working. Now let’s fix that problem.

Make a POST request to the OAuth endpoint, and include the following;

  • Headers
    • Accept application/json
    • Accept-Language en-gb
    • Audience Any
  • Body
    • username administrator
    • password administrator123
    • grant_type password

Shown in the below screenshot;

OAuth Request

Make sure you set the message type as x-www-form-urlencoded.

If you are interested, here is the raw message;

POST /oauth2/token HTTP/1.1
Host: localhost:62996
Accept: application/json
Accept-Language: en-gb
Audience: Any
Content-Type: application/x-www-form-urlencoded
Cache-Control: no-cache
Postman-Token: 8bc258b2-a08a-32ea-3cb2-2e7da46ddc09

username=administrator&password=administrator123&grant_type=password

The form data has been URL encoded and placed in the message body.

The web service should authenticate the request, and return a token (Shown in the response section in Postman). You can test that the authentication is working correctly by supplying an invalid username/password. In this case, you should get the following reply;

{
  "error": "invalid_grant"
}

This is deliberately vague to avoid giving any malicious users more information than they need.

Now to get a list of books, we need to call the endpoint passing in the token as a header.

Change the HTTP verb to GET and change the URL to; http://localhost:62996/api/books.

On the Headers tab in Postman, add the following additional headers;

Authorization Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1bmlxdWVfbmFtZSI6ImFkbWluaXN0cmF0b3IiLCJzdWIiOiJhZG1pbmlzdHJhdG9yIiwicm9sZSI6IkFkbWluaXN0cmF0b3IiLCJpc3MiOiJodHRwOi8vand0YXV0aHpzcnYuYXp1cmV3ZWJzaXRlcy5uZXQiLCJhdWQiOiJBbnkiLCJleHAiOjE0NTgwNDI4MjgsIm5iZiI6MTQ1ODA0MTAyOH0.uhrqQW6Ik_us1lvDXWJNKtsyxYlwKkUrCGXs-eQRWZQ

See screenshot below;

Authorization Header

Success! We have data from our secure endpoint.
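The same authorized call can be sketched in C#; the token string below is a placeholder for whatever the token endpoint returned, and the URL is taken from the example above;

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class BooksClient
{
    // Attaches the token exactly as the Authorization header above does.
    public static HttpClient CreateAuthorizedClient(string token)
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", token);
        return client;
    }

    // Calls the protected books endpoint using the bearer token.
    public static async Task<string> GetBooksAsync(string token)
    {
        using (HttpClient client = CreateAuthorizedClient(token))
        {
            return await client.GetStringAsync("http://localhost:62996/api/books");
        }
    }
}
```

Without the Authorization header, the same call would return the "Authorization has been denied" message we saw earlier.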

Summary

In this introduction we looked at creating a project using Web API to issue and authenticate Jwt (JSON Web Tokens). We created a simple endpoint to retrieve a list of books, and also added the ability to get a specific book/review and delete reviews in a RESTful way.

This project is the foundation for subsequent posts that will explore creating a rich client side application, using modern JavaScript frameworks, which will enable authentication and authorization.

ASP .NET 5 (vNext), first thoughts

Microsoft ASP .NET 5 is a major shift from traditional ASP .NET methodologies. Whilst I am not actively developing ASP .NET 5 applications at the moment, .NET has always been my bread and butter technology. When I look at industry trends here in the UK, all I see is .NET .NET .NET, therefore it is important to have one eye on the future. I’ve watched all the introduction videos on the ASP .NET website, but I also wanted to take a look at what ASP .NET 5 means to me.

This is not meant to be a fully formed post. This will come later down the line. Right now, I think ASP .NET 5 is evolving too quickly to be “bloggable” fully.

Version disambiguation and terminology

Let’s take a second to disambiguate some terminology. Microsoft’s understanding of versioning has always been different from everybody else’s. This tweet from Todd Motto really sums it up;

Looks like versioning is not going to get any simpler for the time being

ASP .NET 5 (ASP .NET 4.6 is the current version)

Previously known as ASP .NET vNext, ASP .NET 5 is the successor of ASP .NET 4.6. In the past, versions of ASP .NET have followed the .NET Framework release cycle. It looks like that is coming to an end now. ASP .NET should not be confused with MVC. ASP .NET is a technology, MVC is a framework.

ASP .NET 5 is currently scheduled for release in the first quarter of 2016, as per this tweet from Scott Hanselman; (I suspect this date will slip though)

The ASP .NET team would rather “get it right” and take longer, than rush the product and get it wrong (which would spell long term disaster for the platform)

MVC 6

This is the new version of Microsoft’s Model-View-Controller framework. There is a nice post on StackOverflow that describes the new features of MVC 6. Here are a few of the best;

  • “Cloud optimization” … so better performance.
  • MVC, WebAPI and Web Pages are now unified.
  • Removed dependency on System.Web, which results in more than a 10x reduction in request overhead.
  • Built in dependency injection, which is pluggable, so it can be switched out for other DI providers.
  • Roslyn enables dynamic compilation. Save your file, refresh the browser. Works for C# too; no explicit build step required, as compilation happens in memory.
  • Cross platform.

DNX (.NET Execution Environment)

The .NET Execution Environment, DNX, is a cross platform runtime that hosts your .NET applications. DNX is built around .NET Core, a super lightweight framework for .NET applications, resulting in drastically improved performance thanks to a reduced pipeline. The dependency on the dinosaur assembly System.Web has gone, but in return you are restricted to more of a subset of features. This is a good thing, my friend. System.Web has every feature imagined over the last 13 years, 75% of which you probably don’t even care about.

Interesting new features and changes

  • Use of data annotations for things that would previously have been HTML helpers (Tag helpers)
  • Environment tag on _Layout. Enables a simple means to specify which resources to load depending on the application configuration (Debug mode, release mode etc)
  • Bower support
  • Gulp out of the box (interesting that they chose Gulp over Grunt; I think it’s Gulp’s superior speed that won the day.)
  • .NET Core. Drastically reduced web pipeline, could result in 10x faster response in some cases (remains to be seen!).
  • Noticeably faster starting up.
  • Save to build. With Roslyn, it is now not necessary to build every time you make a change to a CSharp (.cs) code file. Just save and refresh. Compilation is done in memory.
  • IntelliSense hints that an assembly is not available in .NET Core (nice!)
  • Built in dependency injection, which can be switched out for a third party mechanism.
  • Web API is now no longer a separate component. Web API was originally a separate technology from MVC. The two were always very alike, and it makes sense that the two should be merged together.

Deleted stuff

  • Web.config has finally been removed and replaced with a simpler JSON formatted file. Parties have been thrown for less.
  • packages.config has gone; it is redundant now that things are in line with how the rest of the web develops, i.e. using package.json

Bad points

  • Still heavy use of the ViewBag in default projects. I’d like to see the ViewBag removed entirely, but I suspect that will never happen.
  • The default project template is still full of “junk”, although it is now a bit simpler to tidy up. Visual Studio automatically manages Bower and npm packages, so removing a package is as simple as deleting it from the package.json file.

Summary

I am very keen to get cracking with ASP .NET 5 (vNext), although at the time of writing I feel that it is still a little bit too dynamic to start diving in to at a deep level. The introduction of .NET Core, a cross platform, open source subset of the .NET framework, is awesome… I can’t wait to see the benefits of using this in the wild (reduced server costs!!! especially when running on a Linux based machine, although it remains to be seen). The ViewBag still exists, but we can’t have it all I suppose.

At this point, we’re at least 5-6 months away from a release, so develop with it at your own risk!

Quick tip: Avoid ‘async void’

When developing a Web API application recently with an AngularJS front end, I made a basic mistake and then lost 2 hours of my life trying to figure out what was causing the problem … async void.

It’s pretty common nowadays to use tasks to improve performance/scalability when writing a Web API controller.  Take the following code:

public async Task<Entry[]> Get()
{
    using (var context = new EntriesContext())
    {
        return await context.Entries.ToArrayAsync();
    }
}

At a high level, when ToArrayAsync is executed, execution of the method is suspended and only continues once the operation is complete (when the data is returned from the database in this case).  This is great because it frees up the thread for use by other requests while waiting, resulting in better performance/scalability (we could argue about how true this is all day long, so let’s not do that here!).

So what about when you still want to harness this functionality, but you don’t need to return anything to the client? async void? Not quite.

Take the following Delete method:

public async void Delete(int id)
{
    using (var context = new EntriesContext())
    {
        Entry entity = await context.Entries.FirstOrDefaultAsync(c => c.Id == id);
        if (entity != null)
        {
            context.Entry(entity).State = EntityState.Deleted;
            await context.SaveChangesAsync();
        }
    }
}

The client uses the Id property to do what it needs to do, so it doesn’t care what actually gets returned…as long as the operation (deleting the entity) completes successfully.

To help illustrate the problem, here is the client side code (written in AngularJS, but it really doesn’t matter what the client side framework is);

$scope.delete = function () {
    var entry = $scope.entries[0];

    $http.delete('/api/Entries/' + entry.Id).then(function () {
        $scope.entries.splice(0, 1);
    });
};

When the delete operation completes successfully (i.e. a 2xx response code), the then callback is invoked and the entry is removed from the entries collection.  Only this code never actually runs.  Why?

If you’re lucky, your web browser will give you an error message to let you know that something went wrong…

browser-error

I have however seen this error get swallowed up completely.

To get the actual error message, you will need to use an HTTP proxy tool, such as Fiddler.  With this you can capture the response message returned by the server, which should look something like this (for the sake of clarity I’ve omitted all the HTML code which collectively makes up the yellow screen of death);

An asynchronous module or handler completed while an asynchronous operation was still pending.

Yep, you have a race condition.  The method returned before it finished executing.  Because the method does not return a Task, the framework has nothing it can track; when execution reaches FirstOrDefaultAsync the method yields back to the framework, the request completes while the operation is still pending, and the error is raised.

To resolve the problem, simply change the return type of the method from void to Task.  Don’t worry, you don’t actually have to return anything, and the compiler knows not to generate a build error if there is no return statement.  An easy fix, when you know what the problem is!
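The race can be sketched without Web API or Entity Framework at all; in this illustrative example, Task.Delay stands in for the database call and a flag stands in for the delete;

```csharp
using System;
using System.Threading.Tasks;

public static class AsyncVoidDemo
{
    public static volatile bool Completed;

    // async Task: the framework (or any caller) receives a Task it can
    // track, so the request does not complete until the work is done.
    public static async Task DeleteAsync()
    {
        await Task.Delay(50);   // stands in for SaveChangesAsync
        Completed = true;
    }

    // async void: returns control at the first await with nothing to
    // track, which is exactly the race Web API complains about.
    public static async void DeleteVoid()
    {
        await Task.Delay(50);
        Completed = true;
    }
}
```

Awaiting (or blocking on) DeleteAsync always observes Completed as true; after calling DeleteVoid there is no handle to wait on, so the caller races ahead while the flag is still false.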

Summary

Web API fully supports Tasks, which are helpful for writing more scalable applications.  When writing methods that don’t need to return a value to the client, it may seem to make sense to return void.  However, under the hood .NET requires the method to return Task in order for it to properly support asynchronous functionality.

AutoMapper

5 AutoMapper tips and tricks

AutoMapper is a productivity tool designed to help you write less repetitive object mapping code. AutoMapper maps objects to objects, using both convention and configuration.  AutoMapper is flexible enough that it can be overridden so that it will work with even the oldest legacy systems.  This post demonstrates what I have found to be 5 of the most useful, lesser known features.

Tip: I wrote unit tests to demonstrate each of the basic concepts.  If you would like to learn more about unit testing, please check out my post C# Writing Unit Tests with NUnit And Moq.

Demo project code

This is the basic structure of the code I will use throughout the tutorial;

public class Doctor
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class HealthcareProfessional
{
    public string FullName { get; set; }
}

public class Person
{
    public string Title { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class KitchenCutlery
{
    public int Knifes { get; set; }
    public int Forks { get; set; }
}

public class Kitchen
{
    public int KnifesAndForks { get; set; }
}

public class MyContext : DbContext
{
    public DbSet<Doctor> Doctors { get; set; }
}

public class DbInitializer : DropCreateDatabaseAlways<MyContext>
{
    protected override void Seed(MyContext context)
    {
        context.Doctors.Add(new Doctor
        {
            FirstName = "Jon",
            LastName = "Preece",
            Title = "Mr"
        });
    }
}

I will refer back to this code in each example.

AutoMapper Projection

No doubt one of the best, and probably least used features of AutoMapper is projection.  AutoMapper, when used with an Object Relational Mapper (ORM) such as Entity Framework, can cast the source object to the destination type at database level. This may result in more efficient database queries.

AutoMapper provides the Project extension method, which extends the IQueryable interface for this task.  This means that the source object does not have to be fully retrieved before mapping can take place.

Take the following unit test;

[Test]
public void Doctor_ProjectToPerson_PersonFirstNameIsNotNull()
{
    //Arrange
    Mapper.CreateMap<Doctor, Person>()
            .ForMember(dest => dest.LastName, opt => opt.Ignore());

    //Act
    Person result;
    using (MyContext context = new MyContext())
    {
        context.Database.Log += s => Debug.WriteLine(s);
        result = context.Doctors.Project().To<Person>().FirstOrDefault();
    }

    //Assert
    Assert.IsNotNull(result.FirstName);
}

The query that is created and executed against the database is as follows;

SELECT TOP (1) 
    [d].[Id] AS [Id], 
    [d].[FirstName] AS [FirstName]
    FROM [dbo].[Doctors] AS [d]

Notice that LastName is not returned from the database?  This is quite a simple example, but the potential performance gains are obvious when working with more complex objects.

Recommended Further Reading: Instant AutoMapper

Automapper is a simple library that will help eliminate complex code for mapping objects from one to another. It solves the deceptively complex problem of mapping objects and leaves you with clean and maintainable code.

Instant Automapper Starter is a practical guide that provides numerous step-by-step instructions detailing some of the many features Automapper provides to streamline your object-to-object mapping. Importantly it helps in eliminating complex code.

Configuration Validation

Hands down the most useful, time saving feature of AutoMapper is Configuration Validation.  Basically, after you set up your maps, you can call Mapper.AssertConfigurationIsValid() to ensure that the maps you have defined make sense.  This saves you the hassle of having to run your project, navigate to the appropriate page, click button A/B/C and so on to test that your mapping code actually works.

Take the following unit test;

[Test]
public void Doctor_MapsToHealthcareProfessional_ConfigurationIsValid()
{
    //Arrange
    Mapper.CreateMap<Doctor, HealthcareProfessional>();

    //Act

    //Assert
    Mapper.AssertConfigurationIsValid();
}

AutoMapper throws the following exception;

AutoMapper.AutoMapperConfigurationException : 
Unmapped members were found. Review the types and members below.
Add a custom mapping expression, ignore, add a custom resolver, or modify the source/destination type
===================================================================
Doctor -> HealthcareProfessional (Destination member list)
MakingLifeEasier.Doctor -> MakingLifeEasier.HealthcareProfessional (Destination member list)
-------------------------------------------------------------------
FullName

AutoMapper can’t infer a map between Doctor and HealthcareProfessional because they are structurally very different.  A custom converter, or a ForMember mapping, needs to be used to indicate the relationship;

[Test]
public void Doctor_MapsToHealthcareProfessional_ConfigurationIsValid()
{
    //Arrange
    Mapper.CreateMap<Doctor, HealthcareProfessional>()
          .ForMember(dest => dest.FullName, opt => opt.MapFrom(src => string.Join(" ", src.Title, src.FirstName, src.LastName)));

    //Act

    //Assert
    Mapper.AssertConfigurationIsValid();
}

The test now passes because every public property now has a valid mapping.

Custom Conversion

Sometimes when the source and destination objects are too different to be mapped using convention, and simply too big to write elegant inline mapping code (ForMember) for each individual member, it can make sense to do the mapping yourself.  AutoMapper makes this easy by providing the ITypeConverter<TSource, TDestination> interface.

The following is an implementation for mapping Doctor to a HealthcareProfessional;

public class HealthcareProfessionalTypeConverter : ITypeConverter<Doctor, HealthcareProfessional>
{
    public HealthcareProfessional Convert(ResolutionContext context)
    {
        if (context == null || context.IsSourceValueNull)
            return null;

        Doctor source = (Doctor)context.SourceValue;

        return new HealthcareProfessional
        {
            FullName = string.Join(" ", new[] { source.Title, source.FirstName, source.LastName })
        };
    }
}

You instruct AutoMapper to use your converter by using the ConvertUsing method, passing the type of your converter, as shown below;

[Test]
public void Legacy_SourceMappedToDestination_DestinationNotNull()
{
    //Arrange
    Mapper.CreateMap<Doctor, HealthcareProfessional>()
            .ConvertUsing<HealthcareProfessionalTypeConverter>();

    Doctor source = new Doctor
    {
        Title = "Mr",
        FirstName = "Jon",
        LastName = "Preece",
    };

    Mapper.AssertConfigurationIsValid();

    //Act
    HealthcareProfessional result = Mapper.Map<HealthcareProfessional>(source);

    //Assert
    Assert.IsNotNull(result);
}

AutoMapper simply hands over the source object (Doctor) to you, and you return a new instance of the destination object (HealthcareProfessional), with the populated properties.  I like this approach because it means I can keep all my monkey mapping code in one single place.

Value Resolvers

Value resolvers allow for correct mapping of value types.  The source object KitchenCutlery contains a precise breakdown of the number of knifes and forks in the kitchen, whereas the destination object Kitchen only cares about the sum total of both.  AutoMapper won’t be able to create a convention based mapping here for us, so we use a Value (type) Resolver;

public class KitchenResolver : ValueResolver<KitchenCutlery, int>
{
    protected override int ResolveCore(KitchenCutlery source)
    {
        return source.Knifes + source.Forks;
    }
}

The value resolver, similar to the type converter, takes care of the mapping and returns a result, but notice that it is specific to the individual property, and not the full object.

The following code snippet shows how to use a Value Resolver;

[Test]
public void Kitchen_KnifesKitchen_ConfigurationIsValid()
{
    //Arrange

    Mapper.CreateMap<KitchenCutlery, Kitchen>()
            .ForMember(dest => dest.KnifesAndForks, opt => opt.ResolveUsing<KitchenResolver>());

    //Act

    //Assert
    Mapper.AssertConfigurationIsValid();
}

Null Substitution

Think default values.  In the event that you want to give a destination object a default value when the source value is null, you can use AutoMapper’s NullSubstitute feature.

Example usage of the NullSubstitute method, applied individually to each property;

[Test]
public void Doctor_TitleIsNull_DefaultTitleIsUsed()
{
    //Arrange
    Doctor source = new Doctor
    {
        FirstName = "Jon",
        LastName = "Preece"
    };

    Mapper.CreateMap<Doctor, Person>()
            .ForMember(dest => dest.Title, opt => opt.NullSubstitute("Dr"));

    //Act
    Person result = Mapper.Map<Person>(source);

    //Assert
    Assert.AreEqual("Dr", result.Title);
}

Summary

AutoMapper is a productivity tool designed to help you write less repetitive object mapping code.  You don’t have to rewrite your existing code or write code in a particular style to use AutoMapper, as AutoMapper is flexible enough to be configured to work with even the oldest legacy code.  Most developers aren’t using AutoMapper to its full potential, rarely straying away from Mapper.Map.  There are a multitude of useful tidbits, including Projection, Configuration Validation, Custom Conversion, Value Resolvers and Null Substitution, which can help simplify complex logic when used correctly.

How to create your own ASP .NET MVC model binder

Model binding is the process of converting POST data or data present in the Url into a .NET object(s).  ASP .NET MVC makes this very simple by providing the DefaultModelBinder.  You’ve probably seen this in action many times (even if you didn’t realise it!), but did you know you can easily write your own?

A typical ASP .NET MVC Controller

You’ve probably written or seen code like this many hundreds of times;

public ActionResult Index(int id)
{
    using (ExceptionManagerEntities context = new ExceptionManagerEntities())
    {
        Error entity = context.Errors.FirstOrDefault(c => c.ID == id);

        if (entity != null)
        {
            return View(entity);
        }
    }

    return View();
}

Where did Id come from? It probably came from one of three sources; the Url (Controller/View/{id}), the query string (Controller/View?id={id}), or the post data.  Under the hood, ASP .NET examines your controller method, and searches each of these places looking for data that matches the data type and the name of the parameter.  It may also look at your route configuration to aid this process.

A typical controller method

The code shown in the first snippet is very common in many ASP .NET MVC controllers.  Your action method accepts an Id parameter, your method then fetches an entity based on that Id, and then does something useful with it (and typically saves it back to the database or returns it back to the view).

You can create your own MVC model binder to cut out this step, and simply have the entity itself passed to your action method. 

Take the following code;

public ActionResult Index(Error error)
{
    if (error != null)
    {
        return View(error);
    }

    return View();
}

How much sweeter is that?

Create your own ASP .NET MVC model binder

You can create your own model binder in two simple steps;

  1. Create a class that inherits from DefaultModelBinder, and override the BindModel method (and build up your entity in there)
  2. Add a line of code to your Global.asax.cs file to tell MVC to use that model binder.

Before we forget, tell MVC about your model binder as follows (in the Application_Start method in your Global.asax.cs file);

ModelBinders.Binders.Add(typeof(Error), new ErrorModelBinder());

This tells MVC that if it stumbles across a parameter on an action method of type Error, it should attempt to bind it using the ErrorModelBinder class you just created.

Your BindModel implementation will look like this;

public override object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
{
    if (bindingContext.ModelType == typeof(Error))
    {
        ValueProviderResult valueProviderValue = bindingContext.ValueProvider.GetValue("id");

        int id;
        if (valueProviderValue != null && int.TryParse((string)valueProviderValue.RawValue, out id))
        {
            using (ExceptionManagerEntities context = new ExceptionManagerEntities())
            {
                return context.Errors.FirstOrDefault(c => c.ID == id);
            }
        }
    }

    return base.BindModel(controllerContext, bindingContext);
}

The code digested;

  1. Make sure that we are only trying to build an object of type Error (this should always be true, but let’s include this check as a safety net anyway).
  2. Get the ValueProviderResult of the value provider we care about (in this case, the Id property).
  3. Check that it exists, and that it’s definitely an integer.
  4. Now fetch our entity and return it back.
  5. Finally, if any of our safety nets fail, just fall back to the default model binder and let that try to figure it out for us.

And the end result?

ErrorIsBound

Your new model binder can now be used on any action method throughout your ASP .NET MVC application.

Summary

You can significantly reduce code duplication and simplify your controller classes by creating your own model binder.  Simply create a new class that derives from DefaultModelBinder and add your logic to fetch your entity.  Be sure to add a line to your Global.asax.cs file so that MVC knows what to do with it, or you may get some confusing error messages.

Create custom C# attributes

You have probably added various attributes to your ASP .NET MVC applications, desktop applications, or basically any software you have developed using C# recently.  Attributes allow you to provide metadata to the consuming code, but have you ever created and consumed your own attributes?  This very quick tutorial shows how to create your own attribute, apply it to your classes, and then read out its value.

Sample Project

To demonstrate this concept, I have created a Console application and added a few classes.  This is an arbitrary example just to show off how it’s done.

The basic foundation of our project is as follows;

namespace Reflection
{
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Reflection;

    internal class Program
    {
        private static void Main()
        {
            //TODO
        }
    }

    public interface IMammal
    {
        bool IsWarmBlooded { get; }
    }

    public class BaseMammal
    {
        public bool IsWarmBlooded
        {
            get
            {
                return true;
            }
        }
    }

    public class Human : BaseMammal, IMammal
    {
    }

    public class Bat : BaseMammal, IMammal
    {
    }

    public class DuskyDolphin : BaseMammal, IMammal
    {
    }
}

We will create an attribute and apply it to each of the Mammal classes, then write some code to display the value of the attribute to the user.  The attribute will hold the Latin (scientific) name of the mammal.

Create/Apply an attribute

There are two ways to create an attribute in C#, the easy way or the manual way. If you want to make your life a whole lot easier, you should use the Attribute code snippet.

To use the Attribute snippet, simply start typing Attribute and press Tab Tab on the keyboard.

Attribute Code Snippet

Call the attribute LatinNameAttribute, accept the other defaults, delete all the comments that come as part of the snippet, and add a public property called Name (type System.String).

Your attribute should be as follows;

[AttributeUsage(AttributeTargets.Class, Inherited = false, AllowMultiple = true)]
internal sealed class LatinNameAttribute : Attribute
{
    public LatinNameAttribute(string name)
    {
        Name = name;
    }

    public string Name { get; set; }
}

Go ahead and apply the attribute to a couple of classes, as follows;

[LatinName("Homo sapiens")]
public class Human : BaseMammal, IMammal
{
}

[LatinName("Chiroptera")]
public class Bat : BaseMammal, IMammal
{
}

public class DuskyDolphin : BaseMammal, IMammal
{
}

Now that we have written the attribute and applied it, we just have to write some code to extract the actual value.

Discovering attributes

It is common to create a helper class for working with attributes, or perhaps put the code on a low level base class. Ultimately it is up to you.

We only care at this stage about reading out all of the attributes that exist in our code base. To do this, we must discover all the types in our assembly that are decorated with the attribute in question (See A Step Further).

Create a new class, named LatinNameHelper and add a method named DisplayLatinNames.

public class LatinNameHelper
{
    public void DisplayLatinNames()
    {
        IEnumerable<string> latinNames = Assembly.GetEntryAssembly().GetTypes()
                                        .Where(t => t.GetCustomAttributes(typeof(LatinNameAttribute), true).Any())
                                        .Select(t => ((LatinNameAttribute)t.GetCustomAttributes(typeof(LatinNameAttribute), true).First()).Name);

        foreach (string latinName in latinNames)
        {
            Console.WriteLine(latinName);
        }
    }
}

Let’s step through each line;

  1. Get all the types in the current assembly
  2. Filter the list to only include classes that are decorated with our LatinNameAttribute
  3. Read the first LatinNameAttribute you find decorated on the class (we stated, via AllowMultiple, that more than one can be applied) and select the value of the Name property.
  4. Loop through each latin name, write it out for the user to see

Note that I have only decorated Human and Bat with LatinNameAttribute, so you should only get two outputs when you run the program.

Screenshot of attribute names

For the sake of completeness, here is the Main method;

internal class Program
{
    private static void Main()
    {
        LatinNameHelper helper = new LatinNameHelper();
        helper.DisplayLatinNames();

        Console.ReadLine();
    }
}

Congratulations… you have written an attribute, decorated your classes with it, and consumed the value.

A step further

A common practice is to use attributes to identify classes or methods to instantiate or run. If you want to do this, you can use Activator.CreateInstance to instantiate the class, and then you can cast it to an interface to make it easier to work with.

Add a new method to LatinNameHelper called GetDecoratedMammals as follows;

public void GetDecoratedMammals()
{
    IEnumerable<IMammal> mammals = Assembly.GetEntryAssembly().GetTypes()
                                    .Where(t => t.GetCustomAttributes(typeof(LatinNameAttribute), true).Any())
                                    .Select(t => (IMammal)Activator.CreateInstance(t));

    foreach (var mammal in mammals)
    {
        Console.WriteLine(mammal.GetType().Name);
    }
}

Summary

C# features attributes, which can be used to add metadata to a class, method, or property (basically anything). You can create your own custom attributes by creating a class derived from Attribute and adding your own properties to it. You can then find all the classes that are decorated with the attribute using reflection, and read out any metadata as needed. You can also use the Activator to create an instance of a class that is decorated with your attribute and do anything you require.

Use T4 Templates to create enumerations from your database lookup tables

T4 (Text Template Transformation Toolkit) has been around for a while now… it’s been a part of Visual Studio since the 2005 release.  In case you don’t know, T4 can be used to automatically generate files based on templates.  You create a text template, which is then transformed (interpreted) by Visual Studio into a working file. T4 can be used to create C# code files, and indeed it forms the basis of the current scaffolding templates you have probably used when creating ASP .NET web applications.  You’re not limited to using T4 to create code classes, but this is one of its most common usages.

I’ve known of T4 templates for quite a while, and I’ve edited some of the existing T4 templates in the past (see Scott Hanselman’s post for details on how to do this). To be honest, I’ve only recently found a practical scenario where I would want to write my own T4 template: mapping lookup tables to enumerations (C# enum).

What is a lookup table?  A lookup table consists of data that is indexed and referenced from other tables, allowing the data to be changed without affecting existing foreign key constraints.  It’s common to add new data to these tables, and even make occasional changes, but lookup tables are unlikely to change much over time.

Database Tables

Take Adventure Works for example; there are three lookup tables.  There is a consistent theme across each table: a primary key (the lookup Id) and a Name (a description of the lookup item).  We will use T4 templates to map these lookup tables into our code in the form of enumerations, so that we can avoid the dreaded “magic numbers” … in other words, we give our code some strong typing, which will significantly improve code maintainability over time.
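To make the goal concrete, suppose a hypothetical OrderStatus lookup table (the name and values here are illustrative, not taken from Adventure Works); the generated enum lets code reference rows by name instead of by magic number;

```csharp
using System;

// Hypothetical lookup table OrderStatus(Id, Name), mapped to an enum.
// The enum member values mirror the primary key values in the table.
public enum OrderStatus
{
    Pending = 1,
    Shipped = 2,
    Cancelled = 3
}

public static class OrderStatusDemo
{
    // Instead of the magic number check "statusId == 2", code can say:
    public static bool IsShipped(int statusId)
    {
        return (OrderStatus)statusId == OrderStatus.Shipped;
    }
}
```

When the lookup table changes, rerunning the template regenerates the enum, and any stale references fail at compile time rather than at runtime.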

Tooling

It has to be said, sorry Microsoft, but native tooling for T4 templates is still pretty poor (even 9 years after the initial release, as of 2014).  Out of the box, Visual Studio lets you run the T4 templates, but not much else.  There is no native syntax highlighting, IntelliSense or basically any of the usual Visual Studio goodness we are used to.  We’re going to need some third party help.

There are two main players here;

devart

T4 Editor from Devart

My preferred tool; it offers syntax highlighting, basic IntelliSense, GoTo (code navigation), outlining (collapsible code) and code indentation.  I particularly love how the T4 template is executed every time I hit Save, which is a great time saver.

The download is very lean (0.63 – 1.79 MB depending on your version) and installs as a simple Visual Studio extension (.vsix file extension).  The extension is also completely free, which is fantastic.

tangible t4 editor

Tangible T4 Editor from Tangible Engineering

This is a comprehensive tool with advanced IntelliSense, code navigation and validation.

Personally I don't use this tool because I didn't like the bulky download or the full-blown Windows installation, but it looks like a decent tool, so I recommend you give it a shot. There is a free version, but the full version will set you back an eye-watering €99.

This is not supposed to be a comprehensive review about each product, just a mile-high snapshot.  I highly recommend that you test both tools and pick the one that works best for you.

Basic Set-up

Once you've picked your preferred tooling, it's time to get started. For the purposes of this tutorial we will create a simple console application, but the type of project doesn't matter.

Add a new Text Template using the Add New Item dialog (shown below).  Call the file Mapper.tt;

Add New Item

A new Text Template will be created for you, with a few default assemblies and imports. Please change the output extension to .cs;

<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ output extension=".cs" #>

Before making any further T4 specific changes, lets add in some simple code and show how to transform the template.  Add the following code to Mapper.tt;

using System;
namespace Tutorial
{
    //Logic goes here
}

To transform the template, simply save (if using Devart T4 Editor) or right click on Mapper.tt and click Run Custom Tool.

Run Custom Tool

You should notice a file appear nested underneath Mapper.tt, called Mapper.cs.  Open the file and see the result of the template transformation.  Congratulations, you have written and run your first T4 template.

A step further

With the "Hello World" stuff out of the way, we're free to get to all the goodness that T4 offers.

Blocks

If you’re familiar with the ASP .NET Web Forms engine tags (<% %> <%= %>) or indeed the PHP equivalent (<? ?>) there really isn’t anything new for you to learn here.  Otherwise, all you need to know is there are special tags that give instructions to T4 that express how the proceeding text should be interpreted.

Expression Block (<#= #>): a simple expression; exclude the semicolon at the end.
Statement Block (<# #>): typically multi-line blocks of code.
Class Feature Block (<#+ #>): complex structures, including methods, classes, properties, etc.
Directive Block (<#@ #>): used to specify template details, included files, imports, etc.

Any text that is not contained within one of these tags is treated as plain text and written straight to the output; for everything inside the tags, the T4 engine will attempt to evaluate each expression/line of code using the standard C#/VB compilers.
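To see how the block types fit together, here is a small, hypothetical template fragment (the Hello helper exists purely for illustration);

```
<#@ output extension=".txt" #>
<#
    // Statement block: plain C# that produces no output of its own.
    string greeting = Hello();
#>
The greeting is: <#= greeting #>
<#+
    // Class feature block: helper members available to the rest of the template.
    private static string Hello() { return "Hello, T4"; }
#>
```

The directive block configures the template, the statement block runs code silently, the expression block writes a value into the output, and the class feature block (which must come last) holds reusable helpers.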

A simple loop

T4 is designed to work with both C# and VB, so you can just choose the right block and start typing C# as normal, so a loop might look something like this;

using System;

namespace Tutorial
{
    <# for(int i = 0; i < 10; i++) { #>
        //This is comment <#= i #>
    <# } #>
}

I simply added a statement block for the for loop, and an expression block for outputting the value of i. The for loop itself doesn't produce any output, whereas I do want the value of i to appear in the generated file. Transforming the template produces the following;

using System;

namespace Tutorial
{
//This is comment 0
//This is comment 1
//This is comment 2
//This is comment 3
//This is comment 4
//This is comment 5
//This is comment 6
//This is comment 7
//This is comment 8
//This is comment 9
}

Includes

Includes are basically references to other T4 templates.  Rather than simply having all our logic in a single file, we can break it up into several smaller files.  This will reduce duplication and make our code more readable going forward.

Add a new T4 template, call it SqlHelper.ttinclude.  The ttinclude file extension denotes, as I’m sure you have surmised, that this file is basically a child of the parent that references it.  We don’t need to double up our imports/assembly tags, so you can safely clear out anything that the template gives you by default and start fresh.

Write some SQL to find your lookup tables

To query our database, we’re just going to knock up some very simple ADO .NET code, with a little in-line T-SQL.  There is really nothing special here.  I highly recommend that you create a scratch application and get this all working before finally dropping it into your template.  (Doing this will save your sanity, as the T4 debugging tools are somewhat primitive!)

Use the Class Feature Block syntax we discussed earlier and drop in the following code;

<#+
public static IEnumerable<IGrouping<string, DatabaseTable>> GetTables()
{
    string connectionString = "Server=.;Database=AdventureWorks2012;Trusted_Connection=True;";

    List<DatabaseTable> tables = new List<DatabaseTable>();
    using (SqlConnection sqlConnection = new SqlConnection(connectionString))
    {
        SqlCommand command = new SqlCommand(@"
DECLARE @tmpTable TABLE ([RowNumber] int, [Schema] nvarchar(15), [TableName] nvarchar(20), [ColumnName] nvarchar(20), [Sql] nvarchar(200))
INSERT INTO @tmpTable ([RowNumber], [Schema], [TableName], [ColumnName], [Sql])
SELECT ROW_NUMBER() OVER (ORDER BY KU.TABLE_SCHEMA) AS RowNumber,
       KU.TABLE_SCHEMA,
       KU.TABLE_NAME,
       COLUMN_NAME,
       'SELECT ''' + KU.TABLE_SCHEMA + ''', ''' + KU.TABLE_NAME + ''', Name, CAST(ROW_NUMBER() OVER (ORDER BY Name) AS INT) AS RowNumber FROM ' + KU.TABLE_SCHEMA + '.' + KU.TABLE_NAME AS [Sql]
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS TC
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS KU
    ON TC.CONSTRAINT_TYPE = 'PRIMARY KEY'
    AND TC.CONSTRAINT_NAME = KU.CONSTRAINT_NAME
    AND KU.TABLE_NAME IN (SELECT TABLE_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME LIKE '%Type' GROUP BY TABLE_NAME, TABLE_SCHEMA)
DECLARE @counter INT = 1
DECLARE @total INT = (SELECT COUNT([Schema]) FROM @tmpTable)
DECLARE @sqlCommand varchar(1000) = ''
DECLARE @sql varchar(200)
WHILE (@counter <= @total)
BEGIN
    SET @sql = (SELECT [Sql] FROM @tmpTable WHERE [RowNumber] = @counter)
    IF (@counter > 1)
        SET @sqlCommand = CONCAT(@sqlCommand, ' UNION ')
    SET @sqlCommand = CONCAT(@sqlCommand, @sql)
    SET @counter = @counter + 1
END
EXEC (@sqlCommand)", sqlConnection);
        sqlConnection.Open();

        var reader = command.ExecuteReader();
        while (reader.Read())
        {
            DatabaseTable table = new DatabaseTable();
            table.Schema = reader.GetString(0);
            table.TableName = reader.GetString(1);
            table.Name = reader.GetString(2);
            table.Id = reader.GetInt32(3);

            tables.Add(table);
        }
    }

    return tables.GroupBy(t => t.TableName);
}

public class DatabaseTable
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string TableName { get; set; }
    public string Schema { get; set; }
}
#>

You may want to adjust this code a little to work with your set-up (change the connection string for example).

In a nutshell, the code will connect to SQL Server, get all the tables whose name ends with Type, and return each row in each table as a single query.  This code is far from perfect, I am far from a SQL hero, but it gets the job done so I am happy.  You may want to use your SQL expertise to tidy it up.

Tying it all together

Almost there now; we just need to reference our include file, import a couple of assemblies, and update our loop in Mapper.tt to call the code we have just written.

To add a reference to the include file, add the following underneath the main directive block;

<#@ include file="SqlHelper.ttinclude" #>

And use the assembly directive to bring in a reference to System.Data;

<#@ assembly name="System.Data" #>

And finally, add an import for System.Data.SqlClient;

<#@ import namespace="System.Data.SqlClient" #>

You should end up with the following;

<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ include file="SqlHelper.ttinclude" #>
<#@ assembly name="System.Core" #>
<#@ assembly name="System.Data" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ import namespace="System.Data.SqlClient" #>
<#@ output extension=".cs" #>

Now, and I promise this is the last step, update the loop that you created earlier to call out to the database using the methods we created in SqlHelper.ttinclude;

using System;

namespace AutoEnum
{
    <# foreach (var table in GetTables()) { #>
    /// <summary>
    /// The <#= table.Key #> enumeration
    /// </summary>
    public enum <#= table.Key #>
    {
        <# for(int i = 0; i < table.Count(); i++) { #>
        <# var item = table.ElementAt(i); #>
        <#= item.Name.Replace(" ","").Replace("/", "") #> = <#= item.Id #><# if(i < table.Count() - 1) { #>,
        <# } #><# } #>
    };

<#}#>}

The result

Assuming everything is working correctly, you should end up with the following enumerations in Mapper.cs;

using System;

namespace AutoEnum
{
    /// <summary>
    /// The AddressType enumeration
    /// </summary>
    public enum AddressType
    {
        Archive = 1,
        Billing = 2,
        Home = 3,
        MainOffice = 4,
        Primary = 5,
        Shipping = 6   
    };

    /// <summary>
    /// The ContactType enumeration
    /// </summary>
    public enum ContactType
    {
        AccountingManager = 1,
        AssistantSalesAgent = 2,
        AssistantSalesRepresentative = 3,
        CoordinatorForeignMarkets = 4,
        ExportAdministrator = 5,
        InternationalMarketingManager = 6,
        MarketingAssistant = 7,
        MarketingManager = 8,
        MarketingRepresentative = 9,
        OrderAdministrator = 10,
        Owner = 11,
        OwnerMarketingAssistant = 12,
        ProductManager = 13,
        PurchasingAgent = 14,
        PurchasingManager = 15,
        RegionalAccountRepresentative = 16,
        SalesAgent = 17,
        SalesAssociate = 18,
        SalesManager = 19,
        SalesRepresentative = 20   
    };

    /// <summary>
    /// The PhoneNumberType enumeration
    /// </summary>
    public enum PhoneNumberType
    {
        Cell = 1,
        Home = 2,
        Work = 3   
    };

}

Summary

Visual Studio has native support for text templates, also known as T4. Text templates can be used to automatically generate just about anything, but it is common to generate code files based on existing database structures. Out-of-the-box tooling is pretty poor, but there are several third party tools that you can use to enhance the experience. Generally these templates can be a little clunky to write, but once you get them right they can be a real time saver.

Further Reading

  1. How to generate multiple outputs from a single template
  2. Just about every page on Oleg Sych’s blog
  3. Basic introduction about T4 Templates and how to customize them for ASP .NET MVC project
  4. T4 template generation, best kept secret in Visual Studio

Check TFS Online service status using C#

If you use TFS Online you may have experienced some unexpected downtime over the last few months.  Whilst the service is getting better and better all the time, downtime is still an issue.  I have written a little screen scraping tool based on the HTML Agility Pack that will scrape the service status page and report back the current status.

Add the following class to your project;

using HtmlAgilityPack;

/// <summary>
/// The TFS heartbeat helper.
/// </summary>
public static class TFSHeartbeatHelper
{
    #region Constants

    /// <summary>
    /// The service status url.
    /// </summary>
    private const string ServiceStatusUrl = "http://www.visualstudio.com/en-us/support/support-overview-vs.aspx";

    #endregion

    #region Public Methods and Operators

    /// <summary>
    /// Gets the TFS service status.
    /// </summary>
    /// <returns>
    /// The <see cref="ServiceStatus"/>.
    /// </returns>
    public static ServiceStatus GetStatus()
    {
        HtmlDocument doc = new HtmlWeb().Load(ServiceStatusUrl);

        HtmlNode detailedImage = doc.DocumentNode.SelectSingleNode("//div[@class='DetailedImage']");
        HtmlNode supportImageNode = detailedImage.ChildNodes.FindFirst("img");

        if (supportImageNode.Id == "Support_STATUS_Check")
        {
            return ServiceStatus.NoIssues;
        }

        if (supportImageNode.Id == "Support_STATUS_Exclamation_Y")
        {
            return ServiceStatus.Issues;
        }

        return ServiceStatus.Undetermined;
    }

    #endregion
}

/// <summary>
/// The service status.
/// </summary>
public enum ServiceStatus
{
    /// <summary>
    /// No issues.
    /// </summary>
    NoIssues,

    /// <summary>
    /// There are issues.
    /// </summary>
    Issues,

    /// <summary>
    /// Unable to determine the status.
    /// </summary>
    Undetermined
}

The usage for this code is as follows;

switch (TFSHeartbeatHelper.GetStatus())
{
    case ServiceStatus.NoIssues:
        ////Good news!
        break;
    case ServiceStatus.Issues:
        ////Bad news!
        break;
    case ServiceStatus.Undetermined:
        ////Erm..not sure :S
        break;
}

I hope you find this little helper useful. Please leave a comment below.

C# Create a custom XML configuration section

It is common when developing either Desktop or Web based applications to need to persist settings in an easily updateable location. Developers often choose to add normal application settings in the form of key value pairs, as shown below, and this is a great approach when you only have a small number of settings. However, as your applications configuration becomes more complicated, this approach soon becomes hard for the developer and end user alike. This blog post looks at how you can create a configuration settings section to help ease this problem.

Simple approach

If you want to take the more conventional approach to making your application configurable, you could create a list of key value pairs in the appSettings section of your application configuration file;

<appSettings>
  <add key="workspaceName" value="$machineName$"/>
  <add key="username" value="jonpreece"/>
  <add key="machineName" value="$machineName$"/>
  <add key="teamProjectPath" value="https://jpreecedev.visualstudio.com/DefaultCollection"/>
</appSettings>

You then access these settings via the ConfigurationManager class;

var workspaceName = ConfigurationManager.AppSettings["workspaceName"];

An alternative approach – Configuration sections

Configuration sections give our XML more structure. Take the following example;

<DeveloperConfiguration>
  <tfs workspaceName="$machineName$"
        username="jonpreece"
        machineName="$machineName$"
        teamProjectPath="https://jpreecedev.visualstudio.com/DefaultCollection"/>
</DeveloperConfiguration>

A sensible approach to structuring your C# code is to mirror each configuration section/element with a matching C# class.

Starting with DeveloperConfiguration, create a new class with the same name and derive it from ConfigurationSection, as follows;

public sealed class DeveloperConfiguration : ConfigurationSection
{
}

Unfortunately instantiating the class and accessing its properties/methods directly doesn’t work, so a common approach is to make your class a singleton and call ConfigurationManager.GetSection to load the section into memory.

public sealed class DeveloperConfiguration : ConfigurationSection
{
    private static readonly DeveloperConfiguration _instance = (DeveloperConfiguration)ConfigurationManager.GetSection("DeveloperConfiguration");

    public static DeveloperConfiguration Instance
    {
        get { return _instance; }
    }
}

You can map child elements within your configuration section using the ConfigurationProperty attribute;

[ConfigurationProperty("tfs", IsRequired = true)]
public Tfs Tfs
{
    get { return (Tfs)this["tfs"]; }
}

There are several named arguments you can use here. In the above example, IsRequired states that the attribute must be present in the configuration file, or an exception will be thrown.
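As a sketch of two other commonly used named arguments, here is a hypothetical cache element (the element and attribute names are invented for illustration): DefaultValue supplies a value when the attribute is omitted from the XML, and IsKey marks the attribute as the element's key within a ConfigurationElementCollection.

```csharp
using System.Configuration;

// Hypothetical element, for illustration only.
public class Cache : ConfigurationElement
{
    // Optional attribute; falls back to 30 when omitted from the XML.
    [ConfigurationProperty("timeoutSeconds", IsRequired = false, DefaultValue = 30)]
    public int TimeoutSeconds
    {
        get { return (int)this["timeoutSeconds"]; }
    }

    // Identifies this element when it lives inside a collection of elements.
    [ConfigurationProperty("name", IsKey = true, IsRequired = true)]
    public string Name
    {
        get { return (string)this["name"]; }
    }
}
```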

To access the Tfs configuration element, create a new class that derives from the ConfigurationElement class, as shown below;

public class Tfs : ConfigurationElement
{
    [ConfigurationProperty("workspaceName", IsRequired = true)]
    public string WorkspaceName
    {
        get { return ReplaceMacros((string) this["workspaceName"]); }
    }
}

You can then access its properties using the ConfigurationProperty attribute, as previously discussed.

Usage is now straightforward;

private static readonly Tfs _configuration = DeveloperConfiguration.Instance.Tfs;

static TFSHelper()
{
    _teamProject = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(_configuration.TeamProjectPath));
    _service = _teamProject.GetService<VersionControlServer>();
}

Summary

You can take a simple approach to making your application more configurable by using key-value pairs and accessing them directly using the ConfigurationManager. Alternatively, you can create a configuration section, which is more verbose but also more structured, easier to read and more maintainable. Create a class that derives from ConfigurationSection, ideally with the same name as your section in XML.

How to create a new Outlook 2013 Email using C# in 3 simple steps

It has traditionally been quite painful to interact with any part of the Microsoft Office product family from a C# application, but thanks to the introduction of dynamics and optional parameters over recent years, the process has dramatically improved.

Step 1 – Prerequisites and Assembly References

Before doing anything, it is important to note that you must have Microsoft Office 2013 installed for this to work. Seems obvious, but it's still worth mentioning.

You also need two references;

Microsoft.Office.Core
Microsoft.Office.Interop.Office

The quickest way to add these references to your project is to right click on the References folder in your project, and click Add Reference. The Reference Manager dialog window will appear as shown below;

Reference Manager

  1. Click the COM tab
  2. Type Outlook into the search box
  3. Tick Microsoft Outlook 15.0 Object Library
  4. Click OK

You should now see that the appropriate references have been added to your project;

References

Step 2 – Using Directives and Initialization

Next, add the appropriate using directives to your code file.

using Microsoft.Office.Interop.Outlook;
using OutlookApp = Microsoft.Office.Interop.Outlook.Application;

The second directive is recommended to avoid ambiguity with other classes named Application.

In the constructor of your application (or wherever you want this code to go), create an instance of the Outlook Application and create a new MailItem object, as shown;

OutlookApp outlookApp = new OutlookApp();
MailItem mailItem = outlookApp.CreateItem(OlItemType.olMailItem);

Step 3 – Format and display the email to the user

Finally you can begin to flesh out your email.

mailItem.Subject = "This is the subject";
mailItem.HTMLBody = "<html><body>This is the <strong>funky</strong> message body</body></html>";

//Set a high priority to the message
mailItem.Importance = OlImportance.olImportanceHigh;

And to display the email, simply call the Display method;

mailItem.Display(false);

There are literally dozens of things you can do with an Outlook email, including adding attachments, business cards, images, recipients, and CC/BCC fields.
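For example, recipients and an attachment might be added through the Recipients and Attachments collections. This is a hedged sketch: the addresses and file path are placeholders, and the code assumes the mailItem object created earlier;

```csharp
// Add To and CC recipients; Recipients.Add accepts a display name or SMTP address.
Recipient to = mailItem.Recipients.Add("someone@example.com");
to.Type = (int)OlMailRecipientType.olTo;

Recipient cc = mailItem.Recipients.Add("someoneelse@example.com");
cc.Type = (int)OlMailRecipientType.olCC;

// Ask Outlook to resolve the addresses against the address book.
mailItem.Recipients.ResolveAll();

// Attach a file from disk (placeholder path).
mailItem.Attachments.Add(@"C:\Temp\Report.pdf", OlAttachmentType.olByValue, 1, "Report");
```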

Summary

To create an Outlook 2013 email from C#, simply add the Microsoft Outlook 15.0 Object Library to your solution, add the appropriate using directives, create a new Application object and MailItem object, and flesh out your email. When ready, simply call MailItem.Display(false) to show the email to the user.

Please leave a comment below if you found this post useful
